title (string) · question_body (string) · answer_body (string) · tags (string) · accepted (int64)
Expected Length of Walk on Truncated Icosahedron
Consider a truncated icosahedron with 12 pentagons and 20 hexagons. Starting from a hexagonal face, we go to any neighboring polygon randomly with equal probability. What is the expected number of steps it takes for us to visit the starting hexagon a second time? I know that this is easily solvable with a Markov Chain, but the question specifically requires little computation, stating that the problem can be finished with just simple mental math. I know the answer to be 30, but I cannot find an elegant way to assert so. One argument is that since 3/2 is the expected number of steps from one hexagon to another, and there are 20 hexagons, that the answer is simply 3/2 times 20, which is 30. However, this clearly lacks rigor.
The symmetry of the Markov chain associated with the random walk enables you to calculate the expected return time fairly easily. The chain has $\ 32\ $ states $\ h_1,h_2,\dots,h_{20},p_1,p_2,\dots,p_{12}\ ,$ where $\ h_i\ $ are the hexagonal faces and $\ p_i\ $ the pentagonal faces. If $\ \pi\ $ is the stationary distribution of the chain, then by symmetry we must have $$ \pi_{h_i}=\pi_{h_j}=\mathfrak{h}\ , $$ say, for all $\ i,j\in\{1,2,\dots,20\}\ ,$ and $$ \pi_{p_i}=\pi_{p_j}=\mathfrak{p}\ , $$ say, for all $\ i,j\in\{1,2,\dots,12\}\ ,$ with $$ 20\mathfrak{h}+12\mathfrak{p}=1 $$ and $$ \mathfrak{h}=\frac{\mathfrak{h}}{2}+\frac{3\mathfrak{p}}{5}\ , $$ because the entry to a hexagon from each of its three adjacent hexagons occurs with probability $\ \frac{1}{6}\ $ , and the entry to it from each of its three adjacent pentagons occurs with probability $\ \frac{1}{5}\ $ . The solution to these equations is $\ \mathfrak{p}=\frac{1}{36}, \mathfrak{h}=\frac{1}{30}\ .$ It now follows from the standard theory of irreducible Markov chains that the expected return time to the starting hexagon is $\ \frac{1}{\mathfrak{h}}=30\ .$
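The two linear equations above can be checked mechanically with exact rational arithmetic (a quick sketch, not part of the original answer):

```python
from fractions import Fraction

# Stationary weights for a single hexagon (h) and a single pentagon (p),
# taken from the balance and normalization equations in the answer.
h = Fraction(1, 30)
p = Fraction(1, 36)

assert 20 * h + 12 * p == 1              # normalization over all 32 faces
assert h == h / 2 + Fraction(3, 5) * p   # balance equation at a hexagon

print(1 / h)  # expected return time to the starting hexagon: 30
```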
|graph-theory|expected-value|random-walk|polyhedra|mental-arithmetic|
0
Approximate solution of the following integral
I have the following integral. $$ G(q) = \int_{0}^{\infty} y^\alpha \exp(-y) \exp(-j b q\exp(-c y)) dy $$ with $$ \alpha \in \mathbb{R}^+, \quad q, b, c \in \mathbb{R}, \quad j = \sqrt{-1}$$ I have asked this before in different forms, but I have settled on the following approximation using Gauss–Laguerre quadrature: $$ G(q) = \sum_{i = 1}^n w_i f(y_i), $$ where $$ w_i = \frac{\Gamma(\alpha + n + 1)y_i}{n! (n+1)^2 [L^{\alpha}_{n+1}(y_i)]^2} $$ $$ f(y) = \exp(-j b q\exp(-c y)) $$ and $y_i$ are the zeros of the Laguerre polynomial $L_n^{\alpha}(x)$ . To compute this faster (as I need to compute it for several $q$ , $\alpha$ , and $c$ combinations), I have stored the zeros of $L_n^{\alpha}(x)$ in a matrix form for several $\alpha$ values. The questions I am asking myself: Can this still be reduced to a nicer form, if not a closed form? Can the original integral be computed in the complex plane? I want to create an array of $G(q)$ with $q$ . And I have seen the following: $$ G(-q) = \overline{G(q)}\ . $$
We can express the integral as an infinite series: $$\int_0^{\infty } y^{\alpha } \exp (-y) \exp (-i b q \exp (-c y)) \, dy=\sum _{m=0}^{\infty } \frac{(1+c m)^{-1-\alpha } (-i b q)^m \Gamma (1+\alpha )}{m!}$$ (expand $\exp(-ibq\exp(-cy))$ as a power series and integrate term by term). With 10 terms we have 10 correct digits. If $\alpha \in \mathbb{Z}_{>\, 0} $ then:
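As a sanity check, the truncated series can be compared against direct numerical quadrature of the original integral; the parameter values below are arbitrary choices for illustration:

```python
import math, cmath

def G_series(q, alpha=1.5, b=1.0, c=0.5, terms=20):
    """Partial sum of Gamma(1+alpha) * sum_m (1+cm)^(-1-alpha) (-ibq)^m / m!"""
    s = 0j
    for m in range(terms):
        s += (1 + c * m) ** (-1 - alpha) * (-1j * b * q) ** m / math.factorial(m)
    return math.gamma(1 + alpha) * s

def G_trapezoid(q, alpha=1.5, b=1.0, c=0.5, upper=60.0, n=20000):
    """Composite trapezoid rule for the integral, truncated at y = upper."""
    h = upper / n
    total = 0j
    for k in range(n + 1):
        y = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * y**alpha * math.exp(-y) * cmath.exp(-1j * b * q * math.exp(-c * y))
    return h * total

print(abs(G_series(0.7) - G_trapezoid(0.7)))  # small: the two methods agree
```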
|definite-integrals|complex-numbers|approximation|laguerre-polynomials|
0
On the bijection between homogenous prime ideals of $A_f$ and prime ideals of $(A_f)_0$
Let $A$ be a $\mathbb{Z}^{\geq0}$ graded ring, and $f$ an element of positive degree. It is well known that the homogeneous prime ideals of $A_f$ are in bijection with prime ideals of $(A_f)_0$ , that is, the prime ideals of the subring of degree zero elements in $A_f$ . The way I know how to show this bijection is outlined in Vakil's notes, and makes sense to me. However, in this question , one of the answers states that the way Vakil goes about it is perhaps not the best way to think about it. Indeed, the answer states the better way to do this is to take $\mathfrak{p}_0\subset (A_f)_0$ and map it to $\sqrt{\mathfrak{p}_0 A_f}$ . Now I interpret this notation to mean that we take $\mathfrak{p}_0$ as a subset of $A_f$ , define a new ideal $I$ generated by $\mathfrak{p}_0$ over $A_f$ , and take its radical. Now, I think it should be clear that if $\iota:(A_f)_0\rightarrow A_f$ is the inclusion map, then $\iota^{-1}(\sqrt{\mathfrak{p}_0A_f})=\mathfrak{p}_0$ , as clearly $\mathfrak{p}_0\subseteq\iota^{-1}(\sqrt{\mathfrak{p}_0A_f})$ .
To save some typing, I will write $I={\mathfrak p}_0 A_f$ and $J=\sqrt{{\mathfrak p}_0 A_f}$ . As you said, both are homogeneous ideals of $A_f$ . We want to show that $J$ is prime if ${\mathfrak p}_0\subset (A_f)_0$ is prime. Since $J$ is homogeneous, we only need to show that if $a\in (A_f)_k$ and $b\in (A_f)_l$ are homogeneous, then $ab\in J$ implies that $a\in J$ or $b\in J$ . (See for example Showing that a homogenous ideal is prime. ) By the definition of radical, $\exists r\geq 1$ such that $(ab)^r\in I$ . Let $j=\deg f\geq 1$ . Then $$ (ab)^{jr}/f^{(k+l)r}\in I\cap (A_f)_0={\mathfrak p}_0, $$ since its degree is 0, and $(ab)^r$ divides it. Also, clearly $({\mathfrak p}_0A_f)\cap (A_f)_0={\mathfrak p}_0$ by degree consideration and using that ${\mathfrak p}_0\subset (A_f)_0$ is an ideal. Now $$ (ab)^{jr}/f^{(k+l)r} = a^{jr}/f^{kr} \cdot b^{jr}/f^{lr} \in {\mathfrak p}_0, $$ and both $a^{jr}/f^{kr}$ and $b^{jr}/f^{lr}$ belong to $(A_f)_0$ , so since ${\mathfrak p}_0$ is a prime ideal in $(A_f)_0$ , either $a^{jr}/f^{kr}\in{\mathfrak p}_0$ or $b^{jr}/f^{lr}\in{\mathfrak p}_0$ , and hence $a\in J$ or $b\in J$ .
|algebraic-geometry|commutative-algebra|ideals|graded-rings|projective-schemes|
1
Number of integers in $[10000]$ whose digit sum equals a specific value
I am working on calculating the number of integers between 1 and 10000 with a specific sum of digits, using combinatorics. In the first problem, I have to find it for digit sum 9, and I did. I formulated the problem as a set containing 4-tuples of digits, with the condition that the elements of each tuple sum up to 9. Then I used $\binom{9+4-1}{9}$ to solve it, and the result is correct. But for 19, the same method does not work, because it also counts the cases where a component is not a single digit. I don't know how to continue and would appreciate any help.
This is a problem of balls in bins with limited capacity . You have $k=19$ balls to distribute over $m=4$ bins, all with capacity $R=9$ . Thus, according to the last formula derived in the linked article, the number of distributions that satisfy the capacity constraints is $$ \sum_{t=0}^4(-1)^t\binom4t\binom{4+19-10t-1}{4-1}=\binom{22}3-\binom41\binom{12}3=660\;, $$ where, contrary to the usual definition, binomial coefficients with negative upper index are taken to be zero. In the present case, you don’t really need the full inclusion–exclusion machinery, as not more than one digit at a time can exceed its capacity, so to subtract out the inadmissible distributions from the initial count of $\binom{19+4-1}{4-1}=\binom{22}3$ you just have to choose one digit, $\binom41$ , and count the distributions in which that digit exceeds its capacity. Since you need to use $10$ balls for that digit, you have $19-10=9$ balls left to distribute over the $4$ digits, which yields a count of $\binom{9+4-1}{4-1}=\binom{12}3=220$ for each of the four choices.
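The inclusion–exclusion count can be verified by brute force over all digit 4-tuples (a quick check, not part of the original answer):

```python
from itertools import product
from math import comb

# brute force: 4-tuples of digits 0..9 with digit sum 19
brute = sum(1 for t in product(range(10), repeat=4) if sum(t) == 19)

# inclusion-exclusion: C(22,3) - C(4,1) C(12,3) + C(4,2) C(2,3); later terms vanish
ie = sum((-1) ** t * comb(4, t) * comb(22 - 10 * t, 3) for t in range(3))

print(brute, ie)  # 660 660
```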
|combinatorics|permutations|
0
Upper bound on expectation given probability
Let $X$ be a positive random variable. I know that $\mathbb{P}[X \leq a] \geq q$ . Any hints about how to find an upper bound on $E[X]$ in terms of $q$ and $a$ ? Using Markov's inequality, I can obtain $E[X] \geq a(1-q)$ . But I need an upper bound, not a lower one. Any hints? Thanks!
There does not exist any such upper bound. Take $\mathbf{P}(X=0)=\frac{1}{2}$ and $\mathbf{P}(X=n)=\frac{1}{2}$ for an arbitrarily large integer $n$ (here $a=0$ and $q=\frac{1}{2}$ ). Then $$ \mathbf{E}(X)=\frac{n}{2}\ , $$ so we can make $\mathbf{E}(X)$ as large as we want. However, if we have $$ \forall a\geq 0: \quad \mathbf{P}(X\geq a) \leq Q(a), $$ then you may use the integral identity $$ \mathbf{E}(X)=\int_{0}^\infty \mathbf{P}(X\geq a) \,\mathrm{d}a $$ to get an upper bound.
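The tail-integral identity at the end can be illustrated numerically, e.g. for $X\sim\operatorname{Exp}(1)$ , where $\mathbf P(X\geq a)=e^{-a}$ and $\mathbf E(X)=1$ (a rough Riemann-sum sketch, not part of the original answer):

```python
import math

# E[X] = ∫_0^∞ P(X ≥ a) da, approximated by a left Riemann sum truncated at a = 20
step = 0.001
E = sum(math.exp(-k * step) * step for k in range(20000))
print(E)  # close to 1, the mean of Exp(1)
```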
|probability|inequality|expected-value|upper-lower-bounds|
0
Cayley Table Sudoku
Given an $n\times n$ grid partially filled with the numbers $0,\ldots, n-1$ , we can play a Sudoku-like game by trying to fill in the rest of the grid so that the end result is the Cayley table for a group (edit: where position $i,j$ in the table would represent the product of $i$ and $j$ in the group structure). I am curious about the following value: let $g(n)$ represent the smallest natural number such that if $g(n)$ squares in an $n\times n$ grid are filled, then the configuration can always be extended to a solution in at most $1$ way. Another way to think of this question is as follows: How similar can two distinct $n\times n$ Cayley tables be? $g(n)$ is the smallest natural number such that if two $n\times n$ Cayley tables agree for $g(n)$ entries, then they are identical. What does $g(n)$ look like? I'm most interested in the asymptotic behaviour (as I imagine $g(n)$ may fluctuate wildly due to the fluctuation in the number of groups of order $n$ as $n$ increases).
As a partial answer to your second question: in 1992 Aleš Drápal proved that if two finite groups agree on 89% of their multiplication tables, the groups must be isomorphic! He conjectured that the same holds true if the tables agree on 75% of their entries. I do not think the conjecture has been proved yet. See also Groups St. Andrews 2001 at Oxford , featuring the paper in Vol. 1, p. 143, of Drápal, On the distance of 2-groups and 3-groups . See also A. Drápal, How far apart can the group multiplication tables be? , European J. Combin. 13 (1992), no. 5, 335–343.
|abstract-algebra|group-theory|information-theory|cayley-table|
0
Is the tensor product of two vector spaces over $\mathbb{R}$ isomorphic to $\mathbb{R}^{d^2}$?
Let $V$ , $W$ be two finite-dimensional vector spaces over the field of real numbers $\mathbb{R}$ . Assume the dimension of both spaces is $d$ . Is there a unique isomorphism between $V \otimes W$ and $\mathbb{R}^{d^2}$ ? I.e., can we interpret elements of $V \otimes W$ as being vectors in a $\mathbb{R}^{d^2}$ space? So in the easiest-case scenario ( $d=1$ ), is an element of $V \otimes W$ just a real number?
Suppose $V$ and $W$ are finite-dimensional vector spaces over the field $\Bbb{R}$ of the real numbers, say $\text{dim}(V) = n$ and $\text{dim}(W) = m$ with $n,m \in \Bbb{N}$ . Their tensor product has dimension $\text{dim}(V \otimes W) = n \cdot m$ , as stated in S. Lang: Algebra, Revised third edition, p. 609, Cor. 2.4. Because two finite-dimensional vector spaces are isomorphic iff they have equal dimension, there is an isomorphism, but this isomorphism need not be unique. So the vectors of the tensor product can be regarded as $n\cdot m$ -tuples of real numbers, but they are not real numbers on their own. The vector space structure is that of a tensor product with its universal property.
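Concretely, after choosing bases, one such (basis-dependent) isomorphism sends $e_i\otimes f_j$ to the $(i\cdot m+j)$-th standard basis vector; on coordinates this is the Kronecker (outer) product, sketched below:

```python
# Basis-dependent identification of V ⊗ W with R^(n·m):
# the coordinates of v ⊗ w are all products v_i * w_j, flattened row-major.
def tensor(v, w):
    return [vi * wj for vi in v for wj in w]

v = [1.0, 2.0]         # an element of V ≅ R^2
w = [3.0, 4.0, 5.0]    # an element of W ≅ R^3
print(tensor(v, w))    # a vector in R^6: [3.0, 4.0, 5.0, 6.0, 8.0, 10.0]
```

A different choice of bases gives a different (equally valid) identification, which is exactly why no single isomorphism is canonical.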
|tensor-products|tensors|vector-space-isomorphism|
0
Proving the Intermediate Value Theorem from the Bolzano-Weierstrass Theorem
The least upper bound property of the real numbers (lub) can be used to prove the intermediate value theorem (IVT) as well as the Bolzano-Weierstrass theorem (BWT) . I've been trying to prove the IVT without the lub property, nor the Archimedean property , using the BWT as an axiom. The sketch of my proof goes as follows: Lemma : If $u$ is a bounded sequence of reals such that $u_{n+1}-u_n \xrightarrow[n \to +\infty]{} 0$ then the set of accumulation points of $u$ is an interval. I do not give a proof of this fact as it can be found elsewhere and in more general forms, e.g. in Show that set of all accumulation points is the closed interval $[\liminf_{n\rightarrow\infty}(x_n),\limsup_{n\rightarrow\infty}(x_n)]$ Set of cluster points of a bounded sequence . If a sequence satisfies $\lim\limits_{n\to\infty}|a_{n+1} - a_n|=0$ then the set of its limit points is connected Proof attempt : Given a function $f$ that is continuous on an interval I, take $(a,b)\in I^2$ s.t. $a<b$ and $f(a)\leqslant f(b)$
This seems OK. You are relying only on compactness (Bolzano--Weierstrass), which by the way is the property you routinely use in analysis, more than the least upper bound. I think your method is creative, but overly complicated. More down-to-earth, you could use the bisection method to prove the intermediate value theorem from Bolzano--Weierstrass. Your function satisfies $f(0)<0<f(1)$ . What is the value of $f(1/2)$ ? Zero? END. Positive? Consider $f(1/4)$ and restart. Negative? Consider $f(3/4)$ and restart. Either you find a zero in finite time or you construct an infinite sequence in $[0, 1]$ , which by compactness has a limit point, and that limit point is a zero of $f$ . EDIT In comments, I have been asked how I can prove the very last statement of point 3., i.e. "that limit point is a zero of $f$ ". Let me answer that. We have actually constructed two sequences of points in $[0, 1]$ ; one is $(x_n^-)$ , at which $f(x_n^-)<0$ , the other is $(x_n^+)$ , at which $f(x_n^+)>0$ . Both converge to the same limit point $x^\ast$ , and by continuity $f(x^\ast)\leqslant 0$ and $f(x^\ast)\geqslant 0$ , hence $f(x^\ast)=0$ .
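The bisection argument translates directly into code (a minimal sketch with an arbitrary test function, assuming $f(\text{lo})<0<f(\text{hi})$):

```python
def bisect_root(f, lo=0.0, hi=1.0, tol=1e-12):
    # Invariant: f(lo) < 0 < f(hi), so a sign change stays inside [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) == 0:
            return mid          # found a zero in finite time
        if f(mid) > 0:
            hi = mid            # zero lies in [lo, mid]
        else:
            lo = mid            # zero lies in [mid, hi]
    return (lo + hi) / 2        # limit point of the nested intervals

root = bisect_root(lambda x: x * x - 0.5)
print(root)  # ≈ sqrt(1/2)
```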
|real-analysis|limits|solution-verification|compactness|alternative-proof|
1
Intertemporal optimization: When to use the Hamiltonian vs Lagrangian
Assume a producer wishes to maximize the net present value of profit, choosing optimal quantities of $K$ and $L$ . All variables are time-dependent: $y$ is production, $p$ is the price of $y$ , $K$ is capital, $r$ is the price of capital, $L$ is labor, $w$ is the wage. $$ \max_{K(t), L(t)} \pi = \int_{0}^T e^{-\rho t}\, \bigl(p(t)\, y(t) - [r(t)\, K(t) + w(t)\, L(t)]\bigr)\, dt $$ subject to the constraint $$ y(t) = K(t)^a\, L(t)^b\ . $$ I'm trying to find the quantities of $y$ , $K$ and $L$ that maximize the net present value of profit, but I don't know what to use to solve this optimization problem, the Lagrangian or the Hamiltonian? I don't know what to use because all the variables are expressed at time $t$ , not $t-1$ or $t+1$ .
Discretizing $t$ we have $$ \max_{K(t), L(t)} \pi= \int_{0}^T e^{-\rho t} \bigl(p(t) y(t) - [r(t)K(t) + w(t)L(t)]\bigr)\,dt\approx \sum_{j=0}^{n}e^{-\rho \delta j}\left(p(\delta j)K^a(\delta j)L^b(\delta j)-r(\delta j)K(\delta j)-w(\delta j)L(\delta j)\right) $$ then calling $$ O(K_j,L_j) = \sum_{j=0}^{n}c_j\left(p_jK^a_jL^b_j-r_jK_j-w_jL_j\right) $$ with $c_j=e^{-\rho\delta j}$ , the stationary points are at the solutions for $$ \cases{ O_{K_j}= a p_j K^{a-1}_jL^b_j-r_j=0\\ O_{L_j}= b p_j K^a_j L^{b-1}_j-w_j=0 } $$ now assuming $K_j>0, L_j>0$ we have $$ \cases{ (a-1)\ln K_j + b \ln L_j = \ln\left(\frac {r_j}{a p_j}\right)\\ a \ln K_j+(b-1)\ln L_j = \ln\left(\frac {w_j}{b p_j}\right) } $$ and solving this linear system (which requires $a+b\ne 1$ ) gives the optimal values $$ \cases{ K_j^* = \left(\frac{r_j}{a p_j}\right)^{\frac{1-b}{a+b-1}}\left(\frac{w_j}{b p_j}\right)^{\frac{b}{a+b-1}}\\ L_j^* = \left(\frac{r_j}{a p_j}\right)^{\frac{a}{a+b-1}}\left(\frac{w_j}{b p_j}\right)^{\frac{1-a}{a+b-1}} } $$
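The log-linear system above can be solved numerically (e.g. by Cramer's rule) and the result checked against the first-order conditions; the parameter values below are made up for illustration, and $a+b\neq 1$ is assumed:

```python
import math

a, b = 0.3, 0.5          # Cobb-Douglas exponents (hypothetical)
p, r, w = 2.0, 1.0, 1.5  # price, rental rate, wage at one grid point (hypothetical)

# Solve (a-1) lnK + b lnL = u,  a lnK + (b-1) lnL = v  by Cramer's rule
u, v = math.log(r / (a * p)), math.log(w / (b * p))
det = (a - 1) * (b - 1) - a * b          # = 1 - a - b, nonzero here
K = math.exp(((b - 1) * u - b * v) / det)
L = math.exp(((a - 1) * v - a * u) / det)

foc_K = a * p * K ** (a - 1) * L ** b - r   # ∂O/∂K_j, should vanish
foc_L = b * p * K ** a * L ** (b - 1) - w   # ∂O/∂L_j, should vanish
print(foc_K, foc_L)  # both ≈ 0
```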
|optimization|lagrange-multiplier|dynamic-programming|
0
Big O Multiplication by Constant
I have been working through CSE 373 Lectures by Skiena and I cannot understand his explanation of why multiplication by a constant does not change the asymptotics: $O(c*f(n)) \to O(f(n))$ In the lectures, Skiena gives the example with $c = 100$ and $f(n) = n^3$ . Next, he proceeds with showing that there is a constant $C$ such that $100n^3 \leq C\cdot n^3$ for all sufficiently large $n$ . Doesn't it just show that $g(n) = 100n^3$ is bounded by $C\cdot n^3$ ? How does it tell us anything about the $O(c*f(n))$ ?
Doesn't it just show that $g(n)=100n^3$ is bounded by $C∗n^3$ ? Well that is what it means to be $O(n^3)$ by definition. Recall that $g$ is $O(f)$ if there exists a positive constant $C$ and some $x_0$ such that $|g(x)|\leq C\cdot f(x)$ for all $x\geq x_0$ . In other words, $g$ is $O(f)$ if it is eventually dominated by $C\cdot f$ . Hence a function is $O(n^3)$ if it is eventually dominated by $C\cdot n^3$ , where $C$ is some fixed positive constant. In the example given $100n^3$ is in $O(n^3)$ because it is eventually dominated by e.g. $101\cdot n^3$ . In general, if $g\in O(c\cdot f)$ , then for some fixed constant $C$ we eventually have $|g(x)|\leq C \cdot (c\cdot f(x)) = (C\cdot c)\cdot f(x)$ . Now with $D=C\cdot c$ as the constant we eventually have $|g(x)|\leq D\cdot f(x)$ , therefore $g$ is $O(f)$ .
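The definition in action, for the lecture's example (a trivial check):

```python
# 100 n^3 is eventually dominated by C·n^3: any C >= 100 works, e.g. C = 101.
g = lambda n: 100 * n ** 3
f = lambda n: n ** 3
C = 101
ok = all(g(n) <= C * f(n) for n in range(1, 1000))
print(ok)  # True
```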
|asymptotics|
1
Integrating a function's gradient
Given the gradient of a function, how do you find the function itself? For a scalar-valued function $f : \mathbb{R}^n \to \mathbb{R}$, the gradient of $f$, denoted $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$, is defined as $\nabla f(x)_i := \frac{\partial f(x)}{\partial x_i}, \quad i=1,\ldots,n.$ For example, for $A \in \mathbb{R}^{n \times n}$ and $b,x \in \mathbb{R}^n$, the gradient of the function $f(x) := \frac{1}{2} (Ax-b)^T(Ax-b)$ is $\nabla f(x) = (Ax-b)^T A.$ Now suppose we are told that the gradient of a scalar-valued function $g$ is $\nabla g(x) = (Ax-b)^T D$ for some diagonal matrix $D \in \mathbb{R}^{n \times n}$. Is there a nice closed-form expression for $g(x)$? These two threads ( here , here ) seem to use guess-and-check. We tried looking at functions of the form $g(x) := \frac{1}{2} (Ax-b)^T W (Ax-b)$ for various weighting matrices $W$, but no dice.
This is not an answer... Just to give a simple example why it is not always solvable. Imagine a function $\mathbb R^2 \to \mathbb R$ . Let us prescribe any non-closed (important!) contour of gradient orthogonal to the direction of the contour with $\nabla g = \|\nabla g\|\hat n$ . Now let every other point in the plane have $\nabla g = 0$ . What happens at the end points of this contour? We can walk (integrate) around it along lines consisting of points of gradient 0. Along these lines the function value must not change because both partial derivatives are 0. But yet if we walk (integrate) across the contour the function value is prescribed to change. It becomes an impossible situation. Example : Consider the function $$x,y \mapsto f = \cases{1, & x^2+y^2 < 1\\ 0, & \text{otherwise}}$$ and its gradient with color coded direction (figure omitted). Example: set the gradient on the negative half-plane ( $y<0$ ) to $0$ and try to solve with a numerical solver, optimizing $$\min_f\|(\nabla f) - g_0\|_2^2\ ,$$ where $g_0$ is the truncated gradient field preserved on the rest of the plane.
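The obstruction can also be detected algebraically: on $\mathbb R^n$ a smooth vector field is a gradient iff its Jacobian is symmetric (curl-free). For the field $F(x)=D(Ax-b)$ from the question the Jacobian is $DA$, which need not be symmetric; the $2\times 2$ numbers below are toy assumptions for illustration:

```python
# Jacobian of F(x) = D(Ax - b) is D·A; if asymmetric, no g with ∇g = F exists.
D = [[2.0, 0.0], [0.0, 3.0]]   # diagonal weighting matrix (toy values)
A = [[1.0, 4.0], [4.0, 1.0]]   # symmetric A (toy values)

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

J = matmul(D, A)
print(J)  # [[2.0, 8.0], [12.0, 3.0]] — J[0][1] != J[1][0], not a gradient field
```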
|linear-algebra|multivariable-calculus|vector-analysis|
0
Dot Product Intuition
I'm trying to develop intuition (rather than memorization) for relating the two forms of a dot product (by the angle theta between the vectors, and by the components of the vectors). For example, suppose I have vector $\mathbf{a} = (a_1,a_2)$ and vector $\mathbf{b}=(b_1,b_2)$. What's the physical or geometrical meaning of $$a_1b_1 + a_2b_2 = |\mathbf{a}||\mathbf{b}|\cos(\theta)\;?$$ Why is multiplying $|\mathbf{b}|$ times the component of $\mathbf{a}$ in the direction of $\mathbf{b}$ the same as multiplying the first and second components of $\mathbf{a}$ and $\mathbf{b}$ and summing? I know this relationship comes out when we use the law of cosines to prove it, but even then I can't get an intuition for this relationship. This image clarifies my doubt: Thanks
Let $\hat{\mathbf a}$ be the unit vector corresponding to the vector $\mathbf a$ . We have that $$\mathbf a \cdot \mathbf b = ||\mathbf a || \hat{ \mathbf a }\cdot ||\mathbf b || \hat{\mathbf b} = ||\mathbf a || \ ||\mathbf b || \ \hat{\mathbf a} \cdot \hat{\mathbf b} $$ Now let $A(\mathbf a,\mathbf b) = \hat{\mathbf a} \cdot \hat{\mathbf b}$ so that $\mathbf a \cdot \mathbf b = ||\mathbf a || \ ||\mathbf b || A(\mathbf a,\mathbf b)$ . What we want to show is that $A$ is in fact the cosine of the angle between $\mathbf a$ and $\mathbf b$ . Let's go with $\hat{\mathbf x},\hat{\mathbf y}$ as the usual x and y unit vectors. It's crucial here that we use the property of the dot product that $\hat{\mathbf x} \cdot \hat{\mathbf y} = 0$ and that $\hat{\mathbf x} \cdot \hat{\mathbf x} = \hat{\mathbf y} \cdot \hat{\mathbf y}= 1$ . Now take a vector $\mathbf a = \alpha \hat{\mathbf x} + \beta \hat{\mathbf y}$ . We know that $\mathbf a \cdot \hat{\mathbf x} = \alpha$ by the above facts. We also know that $\mathbf a \cdot \hat{\mathbf y} = \beta$ .
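Both forms can be compared numerically for a concrete pair of vectors (a quick illustration):

```python
import math

a = (3.0, 4.0)
b = (5.0, 0.0)

component_form = a[0] * b[0] + a[1] * b[1]               # a1*b1 + a2*b2
theta = math.atan2(a[1], a[0]) - math.atan2(b[1], b[0])  # angle between a and b
angle_form = math.hypot(*a) * math.hypot(*b) * math.cos(theta)

print(component_form, angle_form)  # both equal 15, up to rounding
```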
|geometry|vector-spaces|inner-products|intuition|
0
Proving the Proposition: Log-Convexity of a Function and the Monotonicity of Ratios
While reading a research paper on log convexity, I encountered a proposition (which is my question). I tried to prove it, but I'm not getting any idea how to proceed. The statement is as follows: Suppose $f: [a, b] \rightarrow \mathbb{R}^{+}$ is continuous and has a constant sign. Then $f$ is log-convex (strictly log-convex) $ \iff x \mapsto \frac{f(x+h)}{f(x)}$ is non-decreasing (increasing) on $[a, b-h]$ for each fixed $h > 0$ .
The answer was in front of my eyes all along. This answer is completely due to Martin's comments. I'm writing this answer as it requires the concept of Wright convexity. A function $f:[a,b] \to \mathbb{R}$ is said to be Wright convex if, for every $h >0$ and $x \leq y$ with $x$ and $y+h \in [a,b]$ , we have $f(x+h) - f(x) \leq f(y+h) - f(y)$ . $\log f(x)$ is convex $\iff$ $\log f(x)$ is Wright convex, as $f$ is continuous (due to Jensen). I.e., for $ x \leq y,$ $\log f(x+h) - \log f(x) \leq \log f(y+h) - \log f(y) \iff \frac{f(x+h)}{f(x)} \leq \frac{f(y+h)}{f(y)}.$ The hypothesis of constant sign says $\frac{f(x+h)}{f(x)}$ is positive, which allows us to write $\log f(x+h) - \log f(x) = \log \frac{f(x+h)}{f(x)}.$ For the result of Jensen, see Convex Functions and Their Applications: A Contemporary Approach (by Constantin P. Niculescu, Lars-Erik Persson), Theorem 1.1.4, or J. L. W. V. Jensen, Sur les fonctions convexes et les inégalités entre les valeurs moyennes, Acta Math., 30 (1906), 175–193.
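The equivalence is easy to observe numerically for a strictly log-convex example such as $f(x)=e^{x^2}$ , where $f(x+h)/f(x)=e^{2hx+h^2}$ is increasing in $x$ (a quick sketch):

```python
import math

f = lambda x: math.exp(x * x)     # log-convex: log f(x) = x^2 is convex
h = 0.3
xs = [i * 0.1 for i in range(20)]
ratios = [f(x + h) / f(x) for x in xs]

increasing = all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:]))
print(increasing)  # True: the ratio x ↦ f(x+h)/f(x) is increasing
```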
|real-analysis|inequality|convex-analysis|convexity-inequality|
0
Why does the number 3 come up so often in chaos theory and undecidability as a boundary? Is it just a coincidence?
What I mean is that the number 3 comes up a lot in these fields as sort of a boundary between decidability and undecidability, or chaos and order. Examples: Quadratic Diophantine equations are always decidable, but cubic ones are not. Time-independent second-order ODEs cannot admit chaotic solutions, but third-order ones can. If a discrete dynamical system has a point with period 3, it implies the system is chaotic and has points of every possible period. Knot equivalency in 2d is decidable (and is even in NP), but in 3d it is undecidable. 3SAT is NP-complete, 2SAT is in P. Hell, our universe has 3 spatial dimensions even, and you would have to change a lot of physics in order to have life in 2 or 4 spatial dimensions to my knowledge. So what's going on here? Is there anything deeper going on here? Or is this just an interesting numerological coincidence? My best guess, based on heuristics, is that if you have a system, any system, with one or two elements, it is hard to construct a system
Three-Body Problem in Physics: The three-body problem in celestial mechanics is a famous example where the motion of three celestial bodies interacting through gravity is considered. While two-body problems can be solved precisely with Kepler's laws, introducing a third body results in a system that is chaotic and unpredictable, with no general solution. This is a clear transition where complexity and unpredictability increase dramatically with the addition of the third body.
|diophantine-equations|chaos-theory|decidability|
0
An 8th Grade Geometry Problem
$△ABC$ is an isosceles right-angled triangle with $∠A = 90^\circ$ . $DE\parallel BC$ . Square $DFGH$ is constructed, with $F$ lying on $AC$ and $G$ on $BC$ . Prove that $∠EDF=∠EGF$ . I attempted to prove it using some equivalent methods, such as demonstrating the concyclicity of points DEFG or the perpendicularity of EG to BC, etc. However, I couldn't find a method to prove it, or I kept falling into the fallacy of circular reasoning. Appreciate any hints or assistance in solving the problem.
Here is the solution based on the helpful tips from everyone, and I consider it as my practice using MathJax. For greater clarity, I have re-labeled the names of the points in the image. Problem: $△ABC$ is an isosceles right-angled triangle with $∠A = 90^\circ$ . $GH\parallel BC$ . The three points of square $GDEF$ lie on the three sides of $△ABC$ . Prove that $∠8=∠5$ . Solution: The objective is to demonstrate that the quadrilateral $GHDE$ is concyclic, thereby proving that $∠8=∠5$ . If it can be proven that the sum of both pairs of opposite angles in the quadrilateral $GHDE$ is $180^\circ$ each, then the four points lie on a circle. Furthermore, since $∠4 = 90^\circ$ , it can be deduced that $GE$ is the diameter of this circle, and $∠1 = 90^\circ$ ; it follows that $HE$ is perpendicular to both $GH$ and $BC$ . Sum of opposite angles in the blue group: $(∠1+∠2) + (∠5+∠6) = 135^\circ + 45^\circ = 180^\circ$ . Sum of opposite angles in the orange group: $(∠3+∠4) + (∠7+∠8) = (∠3+90^\circ
|geometry|
0
Term for a product of groups that is neither direct, nor semi-direct, but generalises semi-direct ones
Suppose $H$ and $K$ are subgroups of a group $G$ such that every element $g\in G$ can be uniquely written as $g = hk$ with $h\in H$ and $k\in K$ . (It follows then that every element $g\in G$ can also be uniquely written as $g = kh$ with $h\in H$ and $k\in K$ .) What is the standard term for this kind of a product of $H$ and $K$ ? (It is not always a direct or a semi-direct product.) A useful example is the decomposition of $SL(2,\mathbf{R})$ into a product of $SO(2,\mathbf{R})$ and the subgroup of upper-triangular matrices with positive diagonal elements from $SL(2,\mathbf{R})$ .
No doubt the literature varies, but often the term factorization is used for the situation in which $G = HK$ for proper subgroups $H, K \le G$ . For the stronger situation you describe, in which moreover $H \cap K = \{1\}$ , the term exact factorization gets used. See for example the following paper. Liebeck, Martin W.; Praeger, Cheryl E.; Saxl, Jan, On factorizations of almost simple groups, J. Algebra 185, No. 2, 409–419 (1996). ZBL0862.20016.
|group-theory|reference-request|terminology|
0
Derivative of a Euclidean distance matrix
Let's say I have $X \in \mathbb{R}^{n \times d}$ , a collection of $n$ row vectors of size $d$ . We can calculate an $n \times n$ distance matrix, $D$ , from $X$ , where each $\{i,j\}$ entry denotes the squared L2 distance between the vectors $X_i$ and $X_j$ . That is, $$ D_{ij} = \|X_{i} - X_{j}\|_2^2 = \sum_{d}(X_{id} - X_{jd})^2 $$ I wish to find the derivative of D with respect to X. Since this is a matrix by matrix operation, I'm aware that this would probably resolve to a fourth-order tensor. This is what I tried so far ( $\delta_{ij}$ is the Kronecker delta): $$ \frac{\partial D_{ij}}{\partial X_{kl}} = \frac{\partial\sum_{d}(X_{id} - X_{jd})^2}{\partial X_{kl}} = 2\sum_{d}(X_{id} - X_{jd}) \left ( \frac{\partial X_{id}}{\partial X_{kl}} - \frac{\partial X_{jd}}{\partial X_{kl}} \right ) $$ $$ = 2\sum_{d}(X_{id} - X_{jd})(\delta_{ik}\delta_{dl} - \delta_{jk}\delta_{dl}) = 2\sum_{d}(X_{id} - X_{jd})(\delta_{ik} - \delta_{jk}) \delta_{dl} = 2(X_{il} - X_{jl})(\delta_{ik} - \delta_{jk}) $$
$ \def\d{\delta} \def\e{\bar{e}} \def\h{\odot} \def\w{\widehat} \def\o{{\tt1}} \def\E{\w{E}} \def\Eij{\E_{ij}} \def\Ekl{\E_{kl}} \def\Elk{\E_{lk}} \def\BR#1{\Big[#1\Big]} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} $ The distance matrix can be written using the Hadamard product $(\h)$ and the all-ones matrix $J$ (with the same dimensions as $X)$ $$\eqalign{ D = (X\h X)^TJ + J^T(X\h X) - 2X^TX \\ }$$ The component-wise self-gradient of a matrix is $$\eqalign{ \grad{X}{X_{kl}} &= \e_k\e_l^T \;=\; \Ekl \\ \grad{X^T}{X_{kl}} &= \gradLR{X}{X_{kl}}^T \,=\; \Elk \\ }$$ where $\e_i$ denotes the $i^{th}$ Cartesian basis vector and $\E_{kl}$ is the matrix whose components are all zero, except for the $(k,l)$ component, which equals $\o$ .
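The component formula $\partial D_{ij}/\partial X_{kl} = 2(X_{il}-X_{jl})(\delta_{ik}-\delta_{jk})$ derived in the question can be confirmed with a central finite difference (toy data, pure Python; not part of the original answer):

```python
import random

random.seed(0)
n, d = 4, 3
X = [[random.random() for _ in range(d)] for _ in range(n)]

def D_entry(X, i, j):
    # squared L2 distance between row vectors X_i and X_j
    return sum((X[i][t] - X[j][t]) ** 2 for t in range(d))

i, j, k, l = 1, 3, 1, 2
analytic = 2 * (X[i][l] - X[j][l]) * ((i == k) - (j == k))

h = 1e-6
X[k][l] += h;      plus = D_entry(X, i, j)
X[k][l] -= 2 * h;  minus = D_entry(X, i, j)
X[k][l] += h       # restore X
numeric = (plus - minus) / (2 * h)

print(abs(analytic - numeric))  # tiny: central difference is exact for quadratics
```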
|derivatives|matrix-calculus|
1
A set which only exists in the $\sigma$-algebra completion
Say I have a $\sigma$ -algebra $\mathcal{F}$ and its completion $\mathcal{F}^{\ast}$ . If a set $E\in \mathcal{F}^{\ast}$ but $E\notin \mathcal{F}$ , does that mean that I definitely have sets $A,B\in\mathcal{F}$ s.t. $A\subset E\subset B$ and $\mu\left(B\backslash A\right)=0$ ? That is how we defined the $\sigma$ -algebra completion in class to prove its existence, but I'm not sure that this guarantees the existence of such sets $A,B$ .
Yes. You can show that the family consisting of sets of the form $f\setminus t$ , where $f$ is any element of $\mathcal{F}$ and $t$ is any subset of any null element of $\mathcal{F}$ , is already a $\sigma$ -algebra containing every subset of every null element of $\mathcal{F}$ . Therefore that family must be the completion of $\mathcal{F}$ itself. Now if $E=f\setminus t$ , then you can choose $B=f$ and $A=f\setminus T$ , where $T$ is a null element of $\mathcal{F}$ such that $t\subset T$ .
|measure-theory|
1
Show discontinuity with $\epsilon-\delta$
I want to use the $\epsilon-\delta$ -criterion to show that for any $c\in\mathbb R$ the function $$ f: \mathbb R \rightarrow \mathbb R, x\mapsto \begin{cases} \frac{1}{x}, &x\neq 0 \\ c, & x=0 \end{cases} $$ is not continuous. Let $x \neq 0$ . Then we have $$ \mid f(x) - f(0)\mid = \mid\frac{1}{x} - c\mid = \mid \frac{1-xc}{x}\mid = \mid \frac{cx-1}{x} \mid = \frac{\mid cx-1 \mid }{\mid x \mid}. $$ From the triangle inequality we have $$ \frac{\mid cx-1 \mid }{\mid x \mid} \leq \frac{\mid cx\mid +1 }{\mid x \mid} = \frac{\mid c\mid \cdot \mid x-0\mid +1 }{\mid x \mid} \leq \frac{\mid c\mid \cdot \delta +1 }{\mid x \mid} $$ But how can I get rid of the denominator? If I can't, what is an argument that it is not possible?
An easier way would be to argue by the sequential definition of continuity. If we can find a sequence $x_n\to 0$ but $f(x_n)\not\to f(0)=c$ then we're done. If you pick the sequence $x_n=\frac{1}{n}$ , that converges to zero, but $f(x_n)\rightarrow +\infty$ . Therefore $f$ is not continuous at $x=0$ ; we're done because we found such a sequence. But if your hand is forced on $\epsilon - \delta$ , I think the error in your method comes from trying to estimate $|f(x)-f(0)|$ from above: really, if you are proving discontinuity, surely you want to try and bound it from below? Remember, the definition of being discontinuous at $x=0$ is that: $$\exists \epsilon>0\ \forall \delta>0\ \exists x \text{ s.t. } |x|<\delta \text{ and } |f(x)-f(0)|\geq\epsilon\ .$$
|inequality|continuity|epsilon-delta|
0
Prove that $I=(I\cap R_1)\oplus (I\cap R_2)\oplus \dots \oplus (I\cap R_n)$
Let $R$ be a ring with identity, $R=R_1\oplus R_2 \oplus \dots \oplus R_n$ , where $R_i \cap R_j =\{0\}$ for $i\neq j$ , and let $I$ be an ideal of $R$ . Prove that $$I=(I\cap R_1)\oplus (I\cap R_2)\oplus \dots \oplus (I\cap R_n)$$ I am learning ring theory and would appreciate any help.
To prove that an ideal $I$ of a ring $R = R_1 \oplus R_2 \oplus \dots \oplus R_n$ is equal to the direct sum of the intersections of $I$ with each $R_i$ , we'll proceed step by step. Step 1: Define the Direct Sum The ring $R$ is the direct sum $R_1 \oplus R_2 \oplus \dots \oplus R_n$ , meaning each element of $R$ can be uniquely expressed as a tuple $(r_1, r_2, \dots, r_n)$ where $r_i \in R_i$ . Step 2: Components of $I$ in $R_i$ Since $I$ is an ideal, for any $(r_1, r_2, \dots, r_n) \in I$ , the elements $r_i$ must belong to $I \cap R_i$ . This follows from the ability to multiply $(r_1, r_2, \dots, r_n)$ by $(1, 0, \dots, 0)$ , and similar elements, to isolate each component. We examine how the elements of the ideal $I$ are expressed within the context of the ring $R$ being a direct sum of other rings $R_1, R_2, \ldots, R_n$ . Every element in $I$ can be represented as a tuple $(r_1, r_2, \ldots, r_n)$ , with each $r_i$ belonging to the corresponding component ring $R_i$ . An importa
|abstract-algebra|ring-theory|ideals|
1
Topology on $\mathbb N$ formed by taking the open sets to be $\emptyset, \mathbb{N}$ and $\{ 1, 2, 3, \ldots, n \} $ for each $n \in \mathbb{N}$
I am having some definition-wise problems. Problem: Prove that we get a topology for $\mathbb{N} = \{ 1, 2, 3, \ldots \} $ by taking the open sets to be $\emptyset, \mathbb{N}$ and $\{ 1, 2, 3, \ldots, n \} $ for each $n \in \mathbb{N}$. My point of confusion: How do we show that any union of the open sets, as defined, is an open set "formally"?
Denote $[n] := \{1,2,...,n\}$ and let $U$ stand for an open set. Case 0: $\bigcup U_i = \bigcup \varnothing = \varnothing$ , hence open. So for nonempty unions: Case 1: $\bigcup U_i$ contains some $U_j = \mathbb{N}$ , then the union is $\mathbb{N}$ , hence open. Case 2: $\bigcup U_i$ does not contain any $U_j = \mathbb{N}$ , so rewrite it as $\bigcup [n_i]$ . If this is a finite union, then $\bigcup [n_i] = [\max\{n_1,...,n_k\}]$ , hence open. If this is an infinite union, then $ \bigcup [n_i] = \mathbb{N}$ , hence open. Why? $ \bigcup [n_i] \subseteq \mathbb{N}$ no doubt, and $\mathbb{N}\subseteq \bigcup [n_i]$ because the $n_i$ 's are distinct, so we may order them as $n_1 < n_2 < \cdots$ , and note $1\le n_1, 2\le n_2,\dots$ ; so if $n \in \mathbb{N}$ then $n \le n_n$ , hence $n\in [n_n] \subseteq \bigcup [n_i]$ . P.S. hello MA 564 people
|general-topology|integers|
0
Keep inner rectangle the same aspect ratio when rotated inside outer rectangle
I am having some problems trying to calculate the correct width/height of a rectangle inside another when rotated. Let's define our problem using algebraic expressions: The outer rectangle has dimensions $(W_o \times H_o).$ The inner rectangle before rotation is positioned with its top-left corner at $(X, Y)$ and has dimensions $W_i \times H_i$ . These dimensions and positions are expressed as percentages of the outer rectangle's dimensions: $X = \alpha W_o$ $Y = \beta H_o$ $W_i = \gamma W_o$ $H_i = \delta H_o$ where $0 < \alpha, \beta, \gamma, \delta < 1$ are the percentage values. Given a rotation of the inner rectangle by an angle $\theta$ about its center, I aim to find the new values $\alpha', \beta', \gamma', \delta'$ that describe the position and dimensions of the rotated inner rectangle such that it remains fully inside the outer rectangle without exceeding its bounds. The challenge arises from ensuring the rotated rectangle's corners do not extend beyond the outer rectangle, thus necessitating an adjustment of t
With the notation introduced in the question statement, and using a Cartesian coordinate system with its origin at the top left corner of the outer rectangle, it follows that the center of the inner rectangle is at $ X_c = X + \dfrac{1}{2} W_i $ $ Y_c = - ( Y + \dfrac{1}{2} H_i )$ And these coordinates are fixed. Now, starting at the top left corner, and moving clockwise, the scaled inner rectangle has its vertices at $ A =( X_c - s W_i /2, Y_c + s H_i/2 ) $ $ B =( X_c + s W_i/2 , Y_c + s H_i/2 ) $ $ C = (X_c + s W_i/2 , Y_c - s H_i/2 ) $ $ D = (X_c - s W_i/2, Y_c - s H_i/2 ) $ where $s$ is the scale factor. Note that this way, the aspect ratio of the inner rectangle is preserved. Let $W'_i = W_i/2 $ and $H'_i = H_i/2 $ Applying a rotation by an angle $\theta$ counter clockwise about the center of the inner rectangle gives the new coordinates of its four vertices as follows: $A'_x = X_c + \cos \theta ( - s W'_i ) - \sin \theta ( s H'_i ) = X_c + s ( - W'_i \cos \theta - H'_i \sin \theta )$
|geometry|
0
Existence of a limit from limsup and liminf
Consider a sequence of random variables $(p_n)$ such that: (1) $\Pr\bigg(\limsup_{n \to \infty} p_n\leq U\bigg)=1$ (2) $\Pr\bigg(\liminf_{n \to \infty} p_n\geq L\bigg)=1$ where $L,U$ are real numbers. Do these assumptions imply $$ \Pr\bigg(\lim_{n \to \infty}d\big(p_n, \big[L,U\big]\big)= 0\bigg)=1, $$ where $d(p_n, [L,U] ):= \inf \{|p_n - y| : y \in [L,U ] \}$ ? In other words, I wonder whether the limit of the distance exists. Intuitively, I think it does. In fact, by (1) and (2), $\inf \{|p_n - y| : y \in [L,U ] \}$ becomes almost surely zero in the limit. Could you help me to show this formally?
There is not much probability here: you show that $\limsup_{n\to\infty} p_n≤U$ and $\liminf_{n\to\infty} p_n ≥ L$ together imply $\lim_{n\to\infty} d(p_n, [L,U]) = 0$ for a non-random sequence $(p_n)$ . This then implies $$ \mathbb P\Big(\lim_{n\to\infty} d(p_n, [L,U]) = 0\Big) ≥ \mathbb P\left(\limsup_{n\to\infty} p_n≤U,\ \liminf_{n\to\infty} p_n ≥ L \right) = 1. $$ To show the non-probabilistic result, note that $d(p_n,[L,U]) = \max(p_n-U,\ L-p_n,\ 0)$ . But since $\limsup_{n\to\infty} (p_n-U)$ , $\limsup_{n\to\infty} (L-p_n)$ and $\limsup_{n\to\infty} 0$ are all $\le 0$ , the limsup of the maximum, i.e. of $d(p_n,[L,U])$ , must also be $\le 0$ . But because the distance is never negative, this already implies that $\lim_{n\to\infty}d(p_n,[L,U]) = 0$ .
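The identity $d(p,[L,U]) = \max(p-U,\,L-p,\,0)$ used in the answer can be checked numerically; the sketch below (my own, with hypothetical function names) compares the closed form against a brute-force minimisation over a fine grid of $[L,U]$:

```python
# Check that d(p, [L, U]) = max(p - U, L - p, 0) agrees with
# inf_{y in [L, U]} |p - y|, approximated on a fine grid.

def dist_to_interval(p, L, U):
    # closed-form distance to the closed interval [L, U] (assumes L <= U)
    return max(p - U, L - p, 0.0)

def dist_by_minimisation(p, L, U, steps=10000):
    # brute-force approximation of inf |p - y| over a grid of [L, U]
    return min(abs(p - (L + (U - L) * i / steps)) for i in range(steps + 1))

L_, U_ = -1.0, 2.0
for p in [-3.0, -1.0, 0.5, 2.0, 4.25]:
    assert abs(dist_to_interval(p, L_, U_) - dist_by_minimisation(p, L_, U_)) < 1e-3
```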
|probability|limits|limsup-and-liminf|almost-everywhere|
1
A matrix manipulation from Berezin's paper
I am reading a paper by Berezin, entitled General Concept of Quantization . He writes: and then: This ought to be some clever manipulation of matrices and yet I am not able to show how (1.4) follows from (1.1) if $\omega$ is given to be invertible. Any leads?
If we multiply, $$\omega^{\gamma k}\frac{\partial \omega^{\alpha\beta}}{\partial x^k} +\omega^{\beta k}\frac{\partial \omega^{\gamma\alpha}}{\partial x^k} +\omega^{\alpha k}\frac{\partial \omega^{\beta\gamma}}{\partial x^k}=0$$ by $\omega_{i\alpha}\omega_{j\beta}\omega_{l\gamma}$ we get, $$ \omega_{i\alpha}\omega_{j\beta}\omega_{l\gamma}\omega^{\gamma k}\frac{\partial \omega^{\alpha\beta}}{\partial x^k} +\omega_{i\alpha}\omega_{j\beta}\omega_{l\gamma}\omega^{\beta k}\frac{\partial \omega^{\gamma\alpha}}{\partial x^k} +\omega_{i\alpha}\omega_{j\beta}\omega_{l\gamma}\omega^{\alpha k}\frac{\partial \omega^{\beta\gamma}}{\partial x^k}=0 $$ Since $\omega_{l\gamma}\omega^{\gamma k}=\delta^k_l$ , $\omega_{j\beta}\omega^{\beta k}=\delta^k_j$ , $\omega_{i\alpha}\omega^{\alpha k}=\delta^k_i$ we have, $$ \omega_{i\alpha}\omega_{j\beta}\delta^k_l\frac{\partial \omega^{\alpha\beta}}{\partial x^k} +\omega_{i\alpha}\omega_{l\gamma}\delta^k_j\frac{\partial \omega^{\gamma\alpha}}{\partial x^k} +\omega_{j\beta}\omega_{l\gamma}\delta^k_i\frac{\partial \omega^{\beta\gamma}}{\partial x^k}=0 $$
|matrices|partial-derivative|matrix-calculus|
1
Non-decreasing expected value of a non-decreasing function under a family of probability densities satisfying the monotone likelihood ratio property.
I am trying to prove Lemma (3.4.2)(1) of the book "Testing Statistical Hypotheses" by Lehmann and Romano. Its statement is: "If $\{p_{\theta}(x)\}$ is a family of densities on the real line with monotone likelihood ratio in $x$ and if $\phi(x)$ is a non-decreasing function of $x$ , then $E_{\theta}(\phi(X))$ is a non-decreasing function of $\theta$ ." In the proof, let $\theta^{'}>\theta$ and define $A$ and $B$ as the sets of $x$ such that $p_{\theta^{'}}(x)<p_{\theta}(x)$ and $p_{\theta^{'}}(x)>p_{\theta}(x)$ respectively. Let $a=\sup_{A}\phi(x)$ and $b=\inf_{B}\phi(x)$ ; then it is clear that $b\ge a$ . Now consider that \begin{equation} \begin{split} \int \phi(x)(p_{\theta^{'}}(x)-p_{\theta}(x))dx&=\int_{A}\phi(x)(p_{\theta^{'}}(x)-p_{\theta }(x))dx+\int_{B}\phi(x)(p_{\theta^{'}}(x)-p_{\theta}(x))dx \\ &\ge a\int_{A}(p_{\theta^{'}}(x)-p_{\theta }(x))dx+b\int_{B}(p_{\theta^{'}}(x)-p_{\theta}(x))dx \\ &= (b-a)\int_{B}(p_{\theta^{'}}(x)-p_{\theta }(x))dx \\ &\ge 0 \end{split} \end{equation} My doubt is th
On $A$ we have $\phi (x) \le a$ and $p_{\theta'} < p_{\theta}$ . So $p_{\theta'}-p_{\theta} < 0$ . And $c\le d$ together with $t < 0$ implies $ct \ge dt$ , right?
|probability|statistics|probability-distributions|expected-value|estimation|
0
Absolute Value Condition in Generalized Binomial Theorem
What is the coefficient of $\frac{y^3}{x^8}$ in $(x+y)^{-5}$ when $|\frac{y}{x}| < 1$ ? (EAMCET 2020) I am using the generalized binomial theorem $$(x+y)^r=\sum_{k=0}^{\infty}{r \choose k} x^{r-k}y^k$$ Substituting $r=-5$ and $k=3$ in the above gives: $${5+3-1 \choose 3}(-1)^3=-{7 \choose 3}=-35$$ I did not use the condition $|\frac{y}{x}| < 1$ anywhere. Was it required as an assumption for what I did? What would removing that condition do? Note: I do not know Laurent expansions (though I do know Taylor series), but neither are included in the syllabus of this exam. So answers that refer to the binomial theorem, and how to use it, would be appreciated. I am looking at the question from a generating function point of view. So convergence issues are not primary (unless they relate to generating functions).
Yes. When $|\frac{y}{x}| < 1$ , the Laurent expansion of $(x+y)^{-5}$ is obtained by taking the Taylor expansion of $(1+\theta)^{-5}$ , substituting $\theta=\frac{y}{x}$ , and multiplying by $x^{-5}$ . When $|\frac{x}{y}| < 1$ , the Laurent expansion of $(x+y)^{-5}$ is obtained by taking the Taylor expansion of $(1+\beta)^{-5}$ , substituting $\beta=\frac{x}{y}$ , and multiplying by $y^{-5}$ . The former series does not converge when $|\frac{x}{y}| < 1$ , and vice versa. Basically, yes it does matter: the coefficients are NOT the same in the other case.
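The coefficient and the convergence condition can both be verified numerically; below is a short sketch (helper name `gen_binom` is mine) computing the generalized binomial coefficient $\binom{-5}{3}$ and checking the partial sums against $(x+y)^{-5}$ for a point with $|y/x|<1$:

```python
# Check that the generalized binomial coefficient C(-5, 3) equals -35,
# i.e. the coefficient of y^3 / x^8 in (x + y)^(-5) for |y/x| < 1.

from math import factorial

def gen_binom(r, k):
    # C(r, k) = r (r-1) ... (r-k+1) / k!  for any real r and integer k >= 0
    num = 1
    for i in range(k):
        num *= (r - i)
    return num / factorial(k)

assert gen_binom(-5, 3) == -35.0

# Numerical confirmation: the series converges to (x + y)^(-5) when |y/x| < 1
x, y = 2.0, 0.5                      # |y/x| = 0.25 < 1
series = sum(gen_binom(-5, k) * x ** (-5 - k) * y ** k for k in range(60))
assert abs(series - (x + y) ** (-5)) < 1e-12
```

For $|y/x| > 1$ the same partial sums blow up, which is exactly the point of the condition.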
|combinatorics|algebra-precalculus|binomial-coefficients|binomial-theorem|
1
Are these two questions related?
Suppose $f$ and $g$ are bounded functions with common domain $D$ ; then $$\sup\limits_{x\in D}\big\{f(x)+g(x)\big\}\leqslant\sup\limits_{x\in D}f(x)+\sup\limits_{x\in D}g(x).$$ Now let $S$ and $T$ be non-empty subsets of $\mathbb{R}$ and define $$S+T=\{z \mid z=s+t,\ s\in S,\ t\in T\};$$ then $$\sup(S+T)=\sup S+\sup T.$$ What I want to know is: are these two results related?
Note that $$\text{Im}(f + g) = \left\{f(x) + g(x) ~|~ x \in D\right\} \subseteq \text{Im}(f) + \text{Im}(g) = \left\{f(x) + g(y) ~|~ x,y \in D\right\}.$$ That's why you have an inequality \begin{align} \sup\limits_{x\in D}\big\{f(x)+g(x)\big\} &= \sup\text{Im}(f + g)\\ &\le \sup \left(\text{Im}(f) + \text{Im}(g)\right)\\ &= \sup \text{Im}(f) + \sup \text{Im}(g)\\ & = \sup\limits_{x\in D}f(x)+\sup\limits_{x\in D}g(x) \end{align}
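A concrete numerical illustration (my own example, not part of the answer) shows why the containment of images can make the inequality strict: take $f(x)=x$ and $g(x)=-x$ on $D=[0,1]$, where the two suprema are attained at different points:

```python
# f(x) = x and g(x) = -x on a grid standing in for D = [0, 1]:
# sup(f + g) = 0 while sup f + sup g = 1, so the inequality is strict.

D = [i / 1000 for i in range(1001)]

f = lambda x: x
g = lambda x: -x

sup_sum = max(f(x) + g(x) for x in D)               # sup of f + g (same x)
sup_f_plus_sup_g = max(map(f, D)) + max(map(g, D))  # sups at different x

assert sup_sum == 0.0
assert sup_f_plus_sup_g == 1.0
assert sup_sum <= sup_f_plus_sup_g                  # strict in this example
```

This matches the set picture: $\mathrm{Im}(f+g)=\{0\}$ is a proper subset of $\mathrm{Im}(f)+\mathrm{Im}(g)=[-1,1]$.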
|analysis|supremum-and-infimum|
1
Product of natural numbers (self-referentiality).
Assume that $\mathbb{R}$ is defined axiomatically as a complete ordered field, and let us define the set of natural numbers $\mathbb{N}$ as the smallest inductive set of $\mathbb{R}$ . In other words, $$\mathbb{N}=\{1,1+1,1+1+1,\ldots\} .$$ This is the standard quick (top-down) approach in Analysis (like in Baby Rudin) to introduce the set of real numbers and its distinguished subsets. Question How can one write a proof that $\mathbb{N}$ is closed under multiplication, i.e., that given two real numbers $x,y\in\mathbb{R}$ , if $x,y\in \mathbb{N}$ then $x\cdot y\in \mathbb{N}$ . The "problem" is that I have to show the proof to first-year calculus students, and I cannot write the proof without expecting philosophical questions about some kind of "self-referentiality" from them. I would like to give concise proof that convinces students there is no problem with that. Any help? PS This "problem" is also present if one wants to show that $\mathbb{N}$ is closed under addition, but it is le
Let $$S=\{x\in\mathbb{N}:(\forall y\in \mathbb{N})(y+x\in\mathbb{N})\}$$ and $$M=\{x\in\mathbb{N}:(\forall y\in\mathbb{N})(yx\in\mathbb{N})\}.$$ Then $S$ and $M$ are inductive subsets of $\mathbb{N}$ , and are therefore equal to $\mathbb{N}$ . By definition of inductive, $y+1\in \mathbb{N}$ for all $y\in\mathbb{N}$ . For $x\in S$ and $y\in\mathbb{N}$ , by definition of $S$ , associativity of addition, and definition of inductive, $y+(x+1)=(y+x)+1\in\mathbb{N}$ . So $x+1\in S$ and $S=\mathbb{N}$ . For all $y\in\mathbb{N}$ , $y\cdot 1=y\in \mathbb{N}$ , so $1\in M$ . For $y\in\mathbb{N}$ and $x\in M$ , $$y(x+1)=yx+y\in \mathbb{N}+\mathbb{N}\subset\mathbb{N},$$ using the fact that $S=\mathbb{N}$ . Since you asked about multiplication but not addition, you may have already established that $S=\mathbb{N}$ , in which case the first half of this solution can just be recalled.
|calculus|analysis|foundations|
1
Is the tensor product of two vector spaces over $\mathbb{R}$ isomorphic to $\mathbb{R}^{d^2}$?
Let $V$ , $W$ be two finite-dimensional vector spaces over the field of real numbers $\mathbb{R}$ . Assume the dimension of both spaces is $d$ . Is there a unique isomorphism between $V \otimes W$ and $\mathbb{R}^{d^2}$ ? I.e., can we interpret elements of $V \otimes W$ as being vectors in an $\mathbb{R}^{d^2}$ space? So in the easiest case scenario, is an element of $V \otimes W$ just a real number?
Let $\{v_1,...,v_n\}$ and $\{w_1,...,w_m\}$ be bases for $V$ and $W$ respectively. The tensor product space $V\otimes W$ is a vector space with basis consisting of the objects (vectors in the product space) of the form $v_i \otimes w_j$ for $1\leq i \leq n$ , $1\leq j\leq m$ . Thus the dimension of $V\otimes W$ is $nm$ . To make things less abstract, if $V,W$ were spaces of column vectors with $n,m$ entries in $\mathbb{R}$ , the tensor product is the space spanned by the $n\times m$ matrices $E_{ij}$ whose $ij$ th entry is $1$ and whose other entries are $0$ . Thus the dimension is $nm$ . Now, I think the word you are looking for is canonical or natural, which to me is the one above, but the set of isomorphisms between two same-dimensional vector spaces over the same field is a group, and in the case of $\mathbb{R}$ there is not just a single such isomorphism (e.g. if $A$ is such an isomorphism, so is $2A$ ).
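The basis description can be made concrete with Kronecker products of standard basis vectors; a dependency-free sketch (names are illustrative only) checking that the $d^2$ products $v_i \otimes w_j$ land exactly on the standard basis of $\mathbb{R}^{d^2}$:

```python
# The d*d Kronecker products of standard basis vectors give the
# standard basis of R^{d^2}, so dim(V ⊗ W) = d^2.

def kron(u, v):
    # Kronecker product of two coordinate vectors
    return [a * b for a in u for b in v]

d = 3
basis_V = [[1.0 if i == k else 0.0 for k in range(d)] for i in range(d)]
basis_W = basis_V

products = [kron(u, v) for u in basis_V for v in basis_W]

# e_i ⊗ e_j has its single 1 at position i*d + j, and all positions occur
positions = sorted(p.index(1.0) for p in products)
assert len(products) == d * d
assert positions == list(range(d * d))
```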
|tensor-products|tensors|vector-space-isomorphism|
0
Does the graph have a Hamiltonian circuit or a Hamiltonian path?
Certain necessary conditions for a Hamiltonian circuit, such as the graph being 2-connected and having no pendant vertices, are met. Dirac's and Ore's theorems provide sufficient conditions, which are not satisfied by this graph, so it may possibly not have a Hamiltonian circuit. But as they are not necessary conditions, we can't yet conclude that it has no Hamiltonian circuit. To me it seems that this graph does not contain a Hamiltonian circuit, as the entrances to the inner subgraphs have 2-degree vertices as their neighbours, so those edges must be traversed; similarly, if we start from the inside of the subgraph, the same situation occurs. But I can't put together a coherent formal argument for it. In the case of a Hamiltonian path, the same thing happens, and I also cannot write it out properly. What way should I go about to argue in detail that this graph does not contain a Hamiltonian path or a circuit? An additional query is, in this graph must only 2-degree vertices be the end vertices of a pat
More detailed hints than in the comments. Why there is no Hamiltonian path in this graph. For convenience, let the vertices of degree 2 be red. Let the other vertices be blue. First, note that the red vertices in each path cannot be neighbors, since they form an independent set. It follows that if a Hamiltonian path starts with a blue vertex, then since there are $8$ red vertices, this path must also end with a blue vertex. Question: Where is the location of vertex $p$ ? Let a Hamiltonian path start from a red vertex. In this case it is from the inner square (why?). By the same reasoning, there should be two blue vertices at the end of this path, and the last one is $p$ . Now find the first vertex on the Hamiltonian path from the outer square. It is preceded by a vertex from the small square. So they are both blue (why?). Addition. I will try to answer the OP's question from the comment below. Recall that since the graph is bipartite and the parts have 8 and 9 elements, a Hamiltonian c
|graph-theory|bipartite-graphs|hamiltonian-path|hamiltonicity|
0
approximate a sum
Is there a way to simplify the given function: $f(n):={\sum}_{x={\lceil\frac{n}{\mathrm e}\rceil}}^{n}\frac{\ln\left(\frac{n}{x+1}\right) \left(x+1\right)}{\left(x-1\right) x}$ , with $n>2$ The result can be an approximation for growing n. I thought there must be a way to get rid of the sum.
I wouldn't expect there to be a simple exact answer unless some lucky coincidence happens. But it's not so hard to approximate. First of all, if we replace $\frac{x+1}{(x-1)x}$ with $\frac1x$ then the error is of order $1/x^2$ ; $\log\frac n{x+1}$ is never much bigger than 1, so the error in the whole sum is at most about $1/n^2$ . Similarly, replacing $\log\frac n{x+1}$ with $\log\frac nx$ makes a small difference for large $n$ . So $f(n)\simeq\sum_{x=\lceil\frac ne\rceil}^n\frac{\log(n/x)}x$ . Now write $x=tn$ ; the sum is approximately $\int_{1/e}^1\frac{-\log t}{tn}\,n\,dt=-\int_{1/e}^1\frac{\log t}t\,dt=-\frac12[(\log t)^2]_{1/e}^1=\frac12$ . (A few numerical evaluations suggest that the error is of order $1/n$ , which should be easy to prove by being just a bit more precise about the approximations above.)
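The claimed limit $f(n)\to\tfrac12$ is easy to check numerically; a short sketch (the function name `f` is mine, mirroring the question's definition):

```python
# Numerical check that f(n) = sum_{x=ceil(n/e)}^{n} ln(n/(x+1)) (x+1)/((x-1)x)
# approaches 1/2 as n grows, as derived above.

from math import log, ceil, e

def f(n):
    lo = ceil(n / e)                  # for n >= 6 this is >= 3, so no x = 0, 1
    return sum(log(n / (x + 1)) * (x + 1) / ((x - 1) * x)
               for x in range(lo, n + 1))

assert abs(f(10000) - 0.5) < 0.01
```

The deviation from $\tfrac12$ appears to shrink like $O(1/n)$, consistent with the error estimates sketched in the answer.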
|summation|approximation|closed-form|ceiling-and-floor-functions|
1
Given a sequence $a_n$ s.t. forall $\varepsilon>0$ exists $m,n_{0} \in \mathbb{N}$ s.t. $\forall n \geq n_0$ $|a_n-a_m|<\varepsilon$
So I have this question: Suppose a sequence $a_n$ satisfies: for all $\varepsilon>0$ there exist $m,n_{0} \in \mathbb{N}$ s.t. $\forall n \geq n_0$ , $|a_n-a_m|<\varepsilon$ . Determine if that sequence is a Cauchy sequence. I think it's not necessarily Cauchy because the constant $a_m$ depends on $\varepsilon$ , but I really struggle to give a counterexample. I will be glad if someone helps me think about a counterexample (or maybe tells me that I'm wrong). Thanks in advance.
It is correct that $m$ depends on $\varepsilon$ , but that condition actually does imply that $(a_n)$ is a Cauchy sequence. Roughly speaking: if $a_n$ and $a_k$ are both close to some $a_m$ then they are close to each other. Formally, this is the triangle inequality: Given $\varepsilon > 0$ there exist $m,n_{0} \in \mathbb{N}$ such that $|a_n-a_m| < \varepsilon/2$ for all $n \ge n_0$ . Then $$ |a_n - a_k| \le |a_n-a_m| + |a_m-a_k| < \varepsilon $$ for all $n, k \ge n_0$ , so that the Cauchy condition is satisfied.
|real-analysis|sequences-and-series|limits|cauchy-sequences|
1
Interpreting $P(\alpha|\text{data})\propto P(\text {data} | \alpha)\cdot P(\alpha)$
In the context of posterior and prior probabilities, one has $P(\alpha|\text{data})\propto P(\text {data} | \alpha)\cdot P(\alpha)$ . What confuses me here is that probability is defined for events, and $\alpha$ is not an event, it is a parameter. But then how to interpret $P(\alpha)$ and $P(\alpha|\text{data})$ if $\alpha$ is not an event?
I'm not familiar with using capital $P$ in this context, which can be confusing. Typically, $P(\alpha)$ and $P(\alpha|\text{data})$ are pdfs/pmfs of the parameters $\alpha$ and $\alpha|\text{data}$ . For example, suppose $\alpha\sim\text{Beta}(\kappa,\lambda)$ and $N\sim \text{Binomial}(n,\alpha)$ , then in your notation, $$P(\alpha)=\frac{\alpha^{\kappa-1}(1-\alpha)^{\lambda-1}}{B(\kappa,\lambda)}.$$ I would prefer to use the notation $f(\alpha)$ instead of $P(\alpha)$ , because $P$ can look like a probability, which, as you said, should be probability of an event. I've also seen authors use $\pi(\alpha)$ , where $\pi$ is for prior. Similarly, if the data we observe is that $N=k$ , then $\alpha|\text{data}\sim\text{Beta}(\kappa+k,\lambda+n-k)$ , and $$P(\alpha|\text{data})\propto\alpha^{\kappa+k-1}(1-\alpha)^{\lambda+n-k-1}.$$ In this context, $\text{data}$ is often also an outcome, not an event, and $P(\text{data}|\alpha)$ is the value of a pdf/pmf.
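The Beta-Binomial example in the answer can be verified on a grid: the pointwise product prior × likelihood, once normalized, should match the $\text{Beta}(\kappa+k,\ \lambda+n-k)$ density. The sketch below (parameter values are my own choices) does exactly that:

```python
# Grid check of Beta-Binomial conjugacy: normalize prior(a) * likelihood(k | a)
# and compare against the Beta(kappa + k, lambda + n - k) posterior density.

from math import comb, gamma

kappa, lam = 2.0, 3.0          # Beta prior parameters
n, k = 10, 4                   # data: N = 4 successes out of n = 10

def beta_pdf(a, p, q):
    return a ** (p - 1) * (1 - a) ** (q - 1) * gamma(p + q) / (gamma(p) * gamma(q))

grid = [i / 1000 for i in range(1, 1000)]            # avoid the endpoints
unnorm = [beta_pdf(a, kappa, lam) * comb(n, k) * a ** k * (1 - a) ** (n - k)
          for a in grid]
Z = sum(unnorm) / 1000                               # crude Riemann normalizer
posterior = [u / Z for u in unnorm]
target = [beta_pdf(a, kappa + k, lam + n - k) for a in grid]

assert max(abs(p - t) for p, t in zip(posterior, target)) < 0.02
```

Note that the constant $\binom{n}{k}$ cancels in the normalization, which is exactly why the answer only needs proportionality.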
|probability|statistics|probability-distributions|statistical-inference|bayesian|
1
Probability of walking
A beetle is moving on the Cartesian lattice. Initially it is at the origin. In each move it chooses one of the four adjacent lattice points uniformly at random and goes to it. After $n$ moves, what is the probability that it is at the origin again? My Attempt: We know the total sample space has size $4^{n}$ , but I can't find the numerator of the probability. Please help.
I am not very sure at the moment. But here's a potential solution. For each move that the beetle makes, it must make a countermove in the exact opposite direction. So, it has the freedom of choice in only half of the moves $(n/2)$ . The other half will be determined automatically. That makes the number of distinct paths that work for us $4^{n/2}$ . Hence the required probability would be $$P = \frac{4^{n/2}}{4^n} = \frac{1}{4^{n/2}}$$ However, since the beetle can only take full steps and not half-steps, $n/2$ must be a whole number. Meaning $n$ must be even. If $n$ is odd, the beetle will not be able to return to the origin. So probability would be $0$ .
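Counts like this can be cross-checked by exhaustive enumeration. The sketch below (my own, not part of the answer) counts length-$n$ walks returning to the origin by dynamic programming, and compares against the known closed form $\binom{2m}{m}^2$ for walks of length $2m$:

```python
# Count 2-D nearest-neighbour walks of length n that end at the origin,
# by dynamic programming over the distribution of endpoints.

from collections import defaultdict
from math import comb

def count_returning_walks(n):
    # dist[(x, y)] = number of length-t walks from the origin ending at (x, y)
    dist = {(0, 0): 1}
    for _ in range(n):
        nxt = defaultdict(int)
        for (x, y), c in dist.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[(x + dx, y + dy)] += c
        dist = dict(nxt)
    return dist.get((0, 0), 0)

# known closed form: C(2m, m)^2 returning walks of length 2m
assert count_returning_walks(2) == 4 == comb(2, 1) ** 2
assert count_returning_walks(4) == 36 == comb(4, 2) ** 2
assert count_returning_walks(3) == 0          # odd length: cannot return
```

The return probability for even $n=2m$ is therefore $\binom{2m}{m}^2/4^{2m}$, and $0$ for odd $n$.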
|probability|combinatorics|
0
Finding Integer Solutions for a Set of Equations Involving Powers and Logarithms
I am trying to find integer solutions for a set of equations and would appreciate any help or insights on methods to determine if solutions exist for certain cases or generally. The equations are as follows: For the case where $\frac{3^n - 1}{2}$ is odd: $$ x = \frac{\left(\frac{3^n - 1}{2}\right) - 2^y}{2 \times \left(3^{n-1} - 2^y\right)} $$ and $$ y = \log_2\left(\frac{3^{n-1} \cdot x + \left(\frac{3^n - 1}{2}\right)}{2 \cdot x + 1}\right) $$ For the case where $\frac{3^n - 1}{2}$ is even: $$ y = \log_2\left(\frac{3^{n-1} \cdot x + \frac{3^n - 1}{4}}{2 \cdot x + 1}\right) $$ where $x$ , $y$ , and $n$ must all be integers. Edit : $n > 1$ . and $y \neq (n-1) \cdot \log(3)$ and $x > 1$ . I have attempted to programmatically search for solutions by iterating over a range of values for $n$ and $y$ , but I am unsure if this approach is coherent, especially without knowing a priori whether solutions exist. Some intermediate calculations suggest the possibility of solutions under specific c
I will solve the first equation for integer $x,y,n \ge 0$ . The condition ( $\frac{3^n - 1}{2}$ is odd) means that $3^n-1\underset{4}\equiv 2$ . Or $3^n\underset{4}\equiv 3$ . So $n$ must be odd. Now consider the equation: $$ x = \frac{\left(\frac{3^n - 1}{2}\right) - 2^y}{2 \times \left(3^{n-1} - 2^y\right)} $$ It can be rewritten as: $$4x(3^{n-1}-2^y)=3^n-1-2^{y+1}.$$ Then as: $$4x\cdot 3^{n-1}-2x\cdot 2^{y+1}=3\cdot 3^{n-1}-1-2^{y+1}.$$ And then as: $$(2x-1)\cdot 2^{y+1}=(4x-3)\cdot 3^{n-1}+1.$$ Let us consider the last equation modulo $4$ . The right hand side is $(-3)\cdot 1+1 \underset{4}\equiv 2$ (remember that $n$ is odd). Now the left hand side. If $y>0$ then it is divisible by $4$ . So $y=0$ . Then the equation becomes: $$4x-2=(4x-3)\cdot 3^{n-1}+1.$$ Or $$4x-3=(4x-3)\cdot 3^{n-1}.$$ Since $4x-3\neq 0$ we see that $x$ can be any number and $n=1$ . Thus, the answer to your first equation is $$x\text{ is any non-negative integer}, y=0, n= 1.$$
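The conclusion can be cross-checked by brute force; the sketch below (my own, with bounds chosen arbitrarily) scans the first equation for integer $x$ over a range of $n>1$ and $y\ge 0$ satisfying the odd-case condition, and finds nothing, consistent with the modular argument above:

```python
# Brute-force cross-check: for the first equation with (3^n - 1)/2 odd,
# no integer x exists for n > 1 within the searched ranges.

def search(max_n=11, max_y=20):
    sols = []
    for n in range(2, max_n + 1):
        if ((3 ** n - 1) // 2) % 2 == 0:
            continue                           # require (3^n - 1)/2 odd
        for y in range(max_y + 1):
            den = 2 * (3 ** (n - 1) - 2 ** y)
            if den == 0:
                continue                       # degenerate denominator
            num = (3 ** n - 1) // 2 - 2 ** y
            if num % den == 0:
                sols.append((num // den, y, n))
    return sols

assert search() == []                          # no solutions with n > 1
```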
|diophantine-equations|
0
How to determine a (compass and straightedge) constructible number of a trigonometric equation?
How to determine if the solutions of $$ \left(\frac{\pi}{12}+\frac{5}{3}x^2-x\sqrt{1-x}\right) = \arcsin x $$ are (compass and straightedge) constructible numbers? I got this question from my students in my calculus class: "find the implicit derivative of " $$ \left(\frac{\pi}{12}+\frac{5}{3}x^2-x\sqrt{1-x}\right) = \arcsin x, $$ which is a simple question, but this leaves me with the curiosity of knowing if this equation really has real solutions. Then Wolfram Alpha helped me to find 2 real solutions. But now I want to know more on these numbers: are these two number (compass and straightedge) constructible? I am aware, having consulted the relevant Wikipedia entry, that " $x$ is constructible if and only if there is a closed-form expression for $x$ using only integers and the operations for addition, subtraction, multiplication, division, and square roots." but how can I apply this to the roots of the above equation?
The answer is no. Any solution of your equation must be transcendental , but a number constructible with a compass and ruler must be algebraic of degree that's a power of $\ 2\ .$ First note that the sum, product and ratio of any pair of algebraic numbers, and the square root of any non-negative algebraic number, must be algebraic, and \begin{align} \sin\frac{\pi}{12}&=\frac{\sqrt{3}-1}{2\sqrt{2}}\\ \cos\frac{\pi}{12}&=\frac{\sqrt{3}+1}{2\sqrt{2}}\ . \end{align} Writing your equation as $$ \frac{5x^2}{3}-x\sqrt{1-x}=\arcsin x-\frac{\pi}{12} $$ and taking the sine of both sides gives \begin{align} \sin\left(\frac{5x^2}{3}-x\sqrt{1-x}\right)&=\sin\left(\arcsin x-\frac{\pi}{12}\right)\\ &=x\cos\frac{\pi}{12}-\sin\frac{\pi}{12}\cos\arcsin x\\ &=\left(\frac{\sqrt{3}+1}{2\sqrt{2}}\right)x-\left(\frac{\sqrt{3}-1}{2\sqrt{2}}\right)\sqrt{1-x^2}\ . \end{align} Now if $\ x\ $ were algebraic, so would be $\ i\left(\frac{5x^2}{3}-x\sqrt{1-x}\right)\ ,$ $\ \left(\frac{\sqrt{3}+1}{2\sqrt{2}}\right)x-
|geometry|geometric-construction|
0
Affine Combinations and Span
I was reading a bit of convex analysis and came across this problem. Let $S$ be convex. Let $A$ be the set of finite affine combinations of points in $S$ (i.e. finite linear combinations whose weights sum to $1$ ). Prove that the difference $A+(-A)$ , where $+$ denotes the Minkowski sum, is the span of $S+(-S)$ . It seems like one direction of the set containment is straightforward (going from the span to the difference). In the other direction starting with the difference, I ran into a roadblock. We can express elements as differences of finite affine combinations, but I did not manage to decompose it as a linear combination of differences of elements in $S$ . I also tried Caratheodory's theorem (since $A$ is convex) to try to force the decomposition, but the question of which differences of elements in $S$ to choose still remains (I am still not sure why $S$ being convex is relevant). I am looking for an efficient method to prove this other direction, as it seems like several other a
Let $r_1 x_1 + \ldots + r_m x_m$ and $s_1 y_1 + \ldots + s_n y_n$ be elements of $A$ , i.e., two affine combinations where all the $x_i$ and $y_j$ belong to $S$ . Then $r_m = 1 - (r_1 + \ldots + r_{m-1})$ and so we may write $$r_1 x_1 + \ldots + r_m x_m = r_1(x_1 - x_m) + \ldots + r_{m-1}(x_{m-1}-x_m) + x_m$$ with a similar observation for $s_1 y_1 + \ldots + s_n y_n$ . Then $$(r_1 x_1 + \ldots + r_m x_m) - (s_1 y_1 + \ldots + s_n y_n) =$$ $$r_1(x_1 - x_m) + \ldots + r_{m-1}(x_{m-1} - x_m) - s_1(y_1 - y_n) - \ldots - s_{n-1}(y_{n-1} - y_n) + 1 \cdot (x_m - y_n)$$ is manifestly a linear combination of elements of $S + (-S)$ , and so $A + (-A) \subseteq \mathrm{span}(S + (-S))$ , which seems to be the direction you want. By the way, the other inclusion is wrong for $S$ the empty set! Because the span always contains the zero element and yet $A$ and $-A$ and $A + (-A)$ for this case are all empty!
|linear-algebra|convex-analysis|convex-geometry|convex-hulls|sumset|
0
What does "convex class of probability measures" mean in the definition of scoring rules?
Taken from Wikipedia ( here ), a scoring rule has the following definition Let $\Omega$ be a sample space, and $\mathcal{A}$ is a $\sigma$ -algebra of subsets of $\Omega$ . Let $\mathcal{P}$ be a convex class of probability measures on $(\Omega, \mathcal{A})$ . A scoring rule is a function $S: \mathcal{P} \times \Omega \to \overline{\mathbb{R}}$ , where $\overline{\mathbb{R}} := \mathbb{R} \cup \{\pm \infty\}$ , such that the integral of $S$ on $\Omega$ exists. What do they mean by "convex class of probability measures"? Do they mean that the probability measures are convex? If a probability measure are (uniquely) identified by the distribution function, does that measure belong to the class $\mathcal{P}$ ? All in all, I am just not sure what "convex class of probability measures" is.
To say that a set $\mathcal{P}$ of probability measures on $(\Omega, \mathcal{A})$ is convex (more standard than saying `a convex class') just means that for all $P, Q \in \mathcal{P}$ , and all $\alpha \in [0,1]$ , the probability measure $\alpha P + (1-\alpha)Q$ is a member of $\mathcal{P}$ . Recall that $\alpha P + (1-\alpha)Q$ is defined pointwise: for any $A \in \mathcal{A}$ , $$(\alpha P + (1-\alpha)Q)(A) := \alpha P(A) + (1-\alpha)Q(A)$$ A bit more generally, convex sets are usually taken to be convex subsets of some ambient vector space, so that convexity is spelled out in terms of the vector space operations. In this case, the vector space could be the vector space of signed measures on $(\Omega, \mathcal{A})$ with addition and scalar multiplication defined pointwise. So spelling things out, $\mathcal{P}$ is a convex class of probability measures if it contains only probability measures and is a convex subset of that vector space.
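A small illustration (my own, not from the answer) that the pointwise convex combination of two probability measures on a finite sample space is again a probability measure:

```python
# A pointwise convex combination alpha*P + (1 - alpha)*Q of two probability
# measures on a finite sample space is again a probability measure.

P = {"a": 0.2, "b": 0.3, "c": 0.5}
Q = {"a": 0.6, "b": 0.1, "c": 0.3}

def mix(alpha, P, Q):
    # (alpha P + (1 - alpha) Q)(A) defined pointwise on the atoms
    return {w: alpha * P[w] + (1 - alpha) * Q[w] for w in P}

for alpha in (0.0, 0.25, 0.5, 1.0):
    M = mix(alpha, P, Q)
    assert all(v >= 0 for v in M.values())          # nonnegativity
    assert abs(sum(M.values()) - 1.0) < 1e-12       # total mass 1
```

So the set of all probability measures on a fixed space is itself convex, and "convex class" just singles out subsets closed under these mixtures.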
|probability-theory|measure-theory|definition|
1
Cardinality of Schemes
I was thinking about set theoretic considerations of scheme theory and a question came to me. I was wondering if there is a way to bound the cardinality of a scheme $S$ (the underlying set of the underlying topological space) if we know a bound on the cardinality of its rings of regular functions ( $\sup_{U\subset S\;\text{open}}\lvert\Gamma(U,\mathcal{O}_S)\rvert\leq\alpha$ ) and let’s say a bound on the cardinality of its topology ( $\rvert\{U\text{open of}\;S\}\rvert\leq\alpha$ ). Any suggestion is much appreciated, Thanks in advance, Max
In fact the problem is quite trivial, as $\Gamma(U,\mathcal{O}_S)=A$ for every affine open subset $U\subseteq S$ written $U=\operatorname{Spec}(A)$ .
|algebraic-geometry|set-theory|cardinals|schemes|
1
Fundamental form $\omega=\sum_{i\leq m}v^*_i\wedge (Jv_i)^* $ with a complex structure $J$
Let $V$ be a $\mathbb{C}-$ vector space, $J$ an almost complex structure on $V$ and take a real orthonormal basis $\langle v_1,Jv_1,\ldots,v_n,Jv_n\rangle $ with a scalar product $\langle,\rangle = \sum_{i\leq m}v^*_i\otimes v^*_i + (Jv_i)^*\otimes (Jv_i)^*$ , where $v^*_i$ are the duals. We can define the fundamental form $\omega (v,w):=\langle Jv, w \rangle$ . It turns out that $\omega = \sum_{i\leq m}v^*_i\wedge (Jv_i)^* $ . I am trying to deduce that, but I fail to do so. I tried to calculate $\omega (v_i,v_j)$ but I am stuck here: $\omega (v_i,v_j)=\sum_{i\leq m}(v^*_i\otimes v^*_i)(Jv_i\otimes v_j) + ((Jv_i)^*\otimes (Jv_i)^*)(Jv_i\otimes v_j)$ Where $(v^*_i\otimes v^*_i)(Jv_i\otimes v_j)=v^*_i(Jv_i)\cdot v^*_i(v_j)$ which gives $0$ when $i\neq j$ and $J$ $(v^*_i(Jv_i)= Jv^*_i(v_i ) $ ? when $i= j$ . And similar for $(Jv_i)^*\otimes (Jv_i)^*)(Jv_i\otimes v_j)$ which doesn't make any sense. Can someone explain what we mean and how to get $\omega = \sum_{i\leq m}v^*_i\wedge (Jv_i)^*$ ?
Your approach is mostly correct. In fact, we can compute $\omega(v, w)$ for all $v,w$ in a set of basis vectors to see that $\omega$ indeed has the proposed expression. But $(v_1, \dots, v_n)$ is not a complete real basis of $V$ , so computing $\omega(v_i, v_j)$ is not enough. In fact, you also have to compute $\omega(v_i, J v_j)$ , $\omega(Jv_i, v_j)$ and $\omega(Jv_i, Jv_j)$ . To do this, note that $(Jv_i)^*$ is the unique element of the dual, which sends $Jv_i$ to $1$ and all other basis vectors to $0$ . If you do this, you find: $$\omega(v_j, v_k) = \sum_{i} \underbrace{{v_i}^*(Jv_j)}_{=0} \cdot {v_i}^*(v_k) + (Jv_i)^*(Jv_j) \cdot \underbrace{(Jv_i)^*(v_k)}_{=0} = 0$$ $$\omega(v_j, Jv_k) = \sum_{i} \underbrace{{v_i}^*(Jv_j)}_{=0} \cdot {v_i}^*(Jv_k) + \underbrace{(Jv_i)^*(Jv_j)}_{=\delta_{ij}} \cdot \underbrace{(Jv_i)^*(Jv_k)}_{=\delta_{ik}} = \delta_{jk}$$ and similarly $\omega(Jv_j, v_k) = -\delta_{jk}$ and $\omega(Jv_j,Jv_k) = 0$ . For the latter two terms you have to use $J^2 = -\mathrm{id}$ .
|inner-products|tensor-products|complex-geometry|multilinear-algebra|exterior-algebra|
1
Check if a general point is inside a given cylinder
For a particular purpose, I want to define a cylinder in 3D space and go through a list of given 3D points and tell if the point is inside or outside the cylinder volume. I can define the cylinder by specifying 2 points along the axis, and the radius of the cylinder. A (x1, y1, z1 ) B (x2, y2, z2 ) and radius = R right now what I'm doing is that I find the vector AB, connecting A and B by AB = A - B then calculate the shortest distance from each point to the vector AB, if the distance is less than R, the point is inside. The problem with this method is that it only works if either A or B is the origin. for example, If I try to find the points inside the cylinder connecting p1 ( 100,10,20) p2 ( 100,-10,20) we get the points inside the cylinder ( 0,20,0) [ which is actually the cylinder formed by ( 0,0,0) and (0,20,0) ] certainly, I'm missing something, can anyone point it out? N.B: For some complicated reason, I can't use an auxiliary coordinate system or shift the origin. What I'm look
We can solve this by constructing the cylinder negatively: start with an infinite space, and throw out everything that isn't within the cylinder. This uses John Alexiou's answer but makes it significantly simpler for the second part, as well as more generally correct as it has no special cases where it doesn't work. This is done with 3 cuts: First , a cylindrical cut of radius R about an infinite line that goes through A and B. This proceeds as per John Alexiou's answer above: Points A and B have vectors $\boldsymbol{r}_A$ and $\boldsymbol{r}_B$ . Direction between them is: $$\boldsymbol{e} = \boldsymbol{r}_B-\boldsymbol{r}_A$$ Distance from any point P at $\boldsymbol{r}_P$ to the line is: $$d = \frac{\| \boldsymbol{e}\times\left(\boldsymbol{r}_{P}-\boldsymbol{r}_{A}\right) \|}{\|\boldsymbol{e}\|}$$ Exclude all points with $d > R$ . This can be optimised slightly by comparing the square of the magnitude and radius, avoiding the square root. Second , a planar cut that throws away the s
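A minimal sketch of the three cuts in code (numpy; the radius in the example is my assumption, since the question doesn't give one). Only the differences $p-a$ and $b-a$ enter the formulas, so nothing needs to be shifted to the origin:

```python
import numpy as np

def in_cylinder(p, a, b, r):
    """Finite-cylinder test via the three cuts described above.

    a, b: endpoints of the axis; r: radius; p: query point.
    All coordinates are absolute; only p - a and b - a are used.
    """
    p, a, b = map(np.asarray, (p, a, b))
    e = b - a                       # axis direction (not normalised)
    w = p - a
    # first cut: squared distance from the infinite line AB (skips the sqrt)
    c = np.cross(e, w)
    d2 = np.dot(c, c) / np.dot(e, e)
    if d2 > r * r:
        return False
    # second and third cuts: planar caps through A and B
    t = np.dot(w, e)                # projection of w on e, scaled by |e|^2
    return bool(0.0 <= t <= np.dot(e, e))

# the questioner's example axis, with an assumed radius of 5
print(in_cylinder((100, 0, 20), (100, 10, 20), (100, -10, 20), 5))  # True
print(in_cylinder((0, 20, 0),   (100, 10, 20), (100, -10, 20), 5))  # False
```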
|geometry|euclidean-geometry|
0
Evaluate the limit involving summation
Evaluate $$\lim_{n\to\infty}\left(\left[\sum_{k=2}^{n+1}\left(1-\frac{1}{k}\right)^{\frac{1}{k}}\right]\left(\left[\sum_{k=1}^{n}\left(1+\frac{1}{k}\right)^{\frac{1}{k+1}}\right]\right)^{-1}\right)$$ where $[.]$ denotes the greatest integer function. I have no idea how to begin. I can't see any Riemann integration here. I plugged it into WolframAlpha but it timed out. Any help is greatly appreciated.
You can apply the Cesàro Mean Theorem to each sum since the terms in each sum approach a constant. Looking at the first sum, we see that $$\left(1 - \frac{1}{k}\right)^{\frac{1}{k}} \to 1$$ as $k\to\infty$ . Similarly $$\left(1+\frac{1}{k}\right)^{\frac{1}{k+1}} \to 1$$ Applying the theorem leads to $$\lim_{n\to\infty}{\ \frac{1}{n}\sum_{k=2}^{n+1}\left(1-\frac{1}{k}\right)^{\frac{1}{k}}} = \lim_{n\to\infty}{\ \frac{1}{n}\sum_{k=1}^{n}\left(1+\frac{1}{k}\right)^{\frac{1}{k+1}}} = 1$$ Finally, since both limits exist and the limit in the denominator is nonzero, the limit of the quotient equals the quotient of the limits, so $$\frac{1}{1} = \frac{\lim_{n\to\infty}{\ \left(\frac{1}{n}\sum_{k=2}^{n+1}\left(1-\frac{1}{k}\right)^{\frac{1}{k}}\right)}}{\lim_{n\to\infty}{\ \frac{1}{n}\sum_{k=1}^{n}\left(1+\frac{1}{k}\right)^{\frac{1}{k+1}}}} = \lim_{n\to\infty}\frac{\sum_{k=2}^{n+1}\left(1-\frac{1}{k}\right)^{\frac{1}{k}}}{\sum_{k=1}^{n}\left(1+\frac{1}{k}\right)^{\frac{1}{k+1}}}$$
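A numeric sanity check of the Cesàro argument (my sketch; it ignores the floor brackets, which change each sum by less than 1 and so don't affect the limit): the ratio of the two raw sums tends to 1.

```python
def ratio(n):
    """Ratio of the two sums (without the greatest-integer brackets)."""
    num = sum((1 - 1/k) ** (1/k) for k in range(2, n + 2))
    den = sum((1 + 1/k) ** (1/(k + 1)) for k in range(1, n + 1))
    return num / den

for n in (10, 1000, 100000):
    print(n, ratio(n))   # approaches 1
```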
|calculus|integration|sequences-and-series|limits|
1
How to evaluate this sum $\sum_{n=1}^{\infty} \frac{(-1)^n}{(n^2 + 3n + 1)(n^2 - 3n + 1)}$
How to evaluate this sum $$\sum_{n=1}^{\infty} \frac{(-1)^n}{(n^2 + 3n + 1)(n^2 - 3n + 1)}$$ My attempt $$\sum_{n=1}^{\infty} \frac{(-1)^n}{(n^2 + 3n + 1)(n^2 - 3n + 1)}$$ $$= \sum_{n=1}^{\infty} \frac{(-1)^{n - 1 + 1}}{(n^2 + 2 \cdot \frac{3}{2}n + 1)(n^2 - 2 \cdot \frac{3}{2}n + 1)}$$ $$ = \sum_{n=1}^{\infty} \frac{(-1)^{n - 1} \cdot (-1)}{(n^2 + 2 \cdot \frac{3}{2}n + \frac{9}{4} - \frac{5}{4})(n^2 - 2 \cdot \frac{3}{2}n + \frac{9}{4} - \frac{5}{4})} $$ $$= - \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{(n^2 + 2 \cdot \frac{3}{2}n + \left(\frac{3}{2}\right)^2 - \left(\frac{\sqrt{5}}{2}\right)^2)(n^2 - 2 \cdot \frac{3}{2}n + \left(\frac{3}{2}\right)^2 - \left(\frac{\sqrt{5}}{2}\right)^2)}$$ $$= - \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{[(n + \frac{3}{2})^2 - (\frac{\sqrt{5}}{2})^2][(n - \frac{3}{2})^2 - (\frac{\sqrt{5}}{2})^2]}$$ $$ = -\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{(n + \frac{3}{2} - \frac{\sqrt{5}}{2})(n + \frac{3}{2} + \frac{\sqrt{5}}{2})(n - \frac{3}{2} - \frac{\sqrt{5}}{2})(n - \f
Let's use a standard method for evaluating such sums. Denoting by $S$ the desired sum, we consider the integral in the complex plane along a large circle $C_R$ of radius $R$ : $$I_{C_R}=\oint_{C_R}\frac\pi{\sin\pi z}\frac{dz}{(z^2 + 3z + 1)(z^2 - 3z + 1)}\to 0 \,\,\text{as}\,\,R\to\infty$$ On the other hand, $$I_{C_R}=2\pi i\sum\operatorname{Res}\frac\pi{\sin\pi z}\frac1{(z^2 + 3z + 1)(z^2 - 3z + 1)}\,\,(\to 0)$$ We have simple poles inside our closed contour: at the points $z=0, \pm1, \pm2,...$ (the residues at these points give $\displaystyle \sum_{n=-\infty}^\infty\frac{(-1)^n}{(n^2 + 3n + 1)(n^2 - 3n + 1)}=2S+1\,\,$ ); and four poles at $z=\pm\frac32\pm\frac{\sqrt5}2$ . Evaluating the residues at these four poles, $$2S+1+\frac{2\pi}{\sin\pi(\frac32+\frac{\sqrt 5}2)}\frac1{3\sqrt5(3+\sqrt5)}+\frac{2\pi}{\sin\pi(-\frac32+\frac{\sqrt 5}2)}\frac1{3\sqrt5(3-\sqrt5)}=0$$ $$S=-\frac\pi{6\cos\frac{\pi\sqrt 5}2}-\frac12$$
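The closed form can be checked numerically against partial sums (the terms decay like $n^{-4}$, so convergence is fast):

```python
import math

def partial_sum(N):
    """Partial sum of (-1)^n / ((n^2+3n+1)(n^2-3n+1))."""
    s = 0.0
    for n in range(1, N + 1):
        s += (-1) ** n / ((n*n + 3*n + 1) * (n*n - 3*n + 1))
    return s

closed = -math.pi / (6 * math.cos(math.pi * math.sqrt(5) / 2)) - 0.5
print(partial_sum(200), closed)   # both ≈ 0.0618
```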
|calculus|sequences-and-series|summation|closed-form|
1
Explanation for sum of sequence
I saw that in a textbook. Could somebody explain how this sum of a sequence was obtained? ⌈n/2⌉+...+⌈n/2⌉+⌈n/2⌉ = ⌈(n+1)/2⌉⌈n/2⌉ OP has indicated in the comments that the above series contains $n/2$ terms.
@Daniel Donnelly Write $k=mp+q$ with $0\le q\le m-1$ and $0\le p\le \left\lfloor\dfrac{n-q}{m}\right\rfloor= \left\lfloor\dfrac{n}{m}\right\rfloor -1$ . Therefore $\max(k)=m\left(\left\lfloor\dfrac{n}{m}\right\rfloor-1\right)+m-1= m\left\lfloor\dfrac{n}{m}\right\rfloor -1$ , and we add the remaining range $m\left\lfloor\dfrac{n}{m}\right\rfloor\le k\le n $ separately: \begin{align*} \sum_{k=0}^n\left\lfloor\dfrac{k}{m}\right\rfloor&=\sum_{p=0}^{\left\lfloor\frac{n}{m}\right\rfloor-1}\sum_{q=0}^{m-1} \left\lfloor\dfrac{mp+q}{m}\right\rfloor+\sum_{k=m \left\lfloor\frac{n}{m}\right\rfloor}^n \left\lfloor\dfrac{k}{m}\right\rfloor \\ &=\sum_{p=0}^{\left\lfloor\frac{n}{m}\right\rfloor-1}mp+\left(n+1-m \left\lfloor\dfrac{n}{m}\right\rfloor\right) \left\lfloor\dfrac{n}{m}\right\rfloor \\ &=\dfrac m2\left(\left\lfloor\dfrac{n}{m}\right\rfloor-1\right) \left\lfloor\dfrac{n}{m}\right\rfloor+ \left(n+1-m \left\lfloor\dfrac{n}{m}\right\rfloor\right) \left\lfloor\dfrac{n}{m}\right\rfloor \\ &= \left\lfloor\dfrac{n}{m}\right\rfloor\left(
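The derivation's closed form (the last complete line of the display) can be brute-force checked for small $n$ and $m$:

```python
def lhs(n, m):
    """Direct sum of floor(k/m) for k = 0..n."""
    return sum(k // m for k in range(n + 1))

def rhs(n, m):
    """The derived closed form (m/2)(f-1)f + (n+1-mf)f with f = floor(n/m)."""
    f = n // m
    return m * (f - 1) * f // 2 + (n + 1 - m * f) * f

for m in range(1, 8):
    for n in range(60):
        assert lhs(n, m) == rhs(n, m)
print("identity verified")
```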
|sequences-and-series|summation|
0
Show that if a ring $R$ satisfies that for every $x\in R$, $x$ or $1-x$ is invertible then $R$ has a unique max ideal
I tried to prove that if $R$ is like that, then the set of all non-invertible elements of $R$ is an ideal, and from there to prove the claim, but that didn't work for me.
Write $m$ for the set of all non-invertible elements of $R$ . Showing that $m$ is indeed an ideal: Let $a\in m$ and $r\in R$ , and suppose $ra$ were a unit. Then we find some $b\in R$ with $b(ra)=1$ , but then $(br)a=1$ and thus $a$ is a unit, which contradicts $a\in m$ ; hence $ra\in m$ . Now let $x,y\in m$ and suppose $x+y$ were a unit. Then $1=b(x+y)=bx+by$ and hence $bx=1-by$ . We already know, by the above argument, that $by$ is not a unit. By assumption $1-by$ must then be a unit, and therefore $bx$ is a unit, making $x$ a unit. This contradicts $x\in m$ .
|abstract-algebra|ring-theory|maximal-and-prime-ideals|
1
Check if a general point is inside a given cylinder
For a particular purpose, I want to define a cylinder in 3D space and go through a list of given 3D points and tell if the point is inside or outside the cylinder volume. I can define the cylinder by specifying 2 points along the axis, and the radius of the cylinder. A (x1, y1, z1 ) B (x2, y2, z2 ) and radius = R right now what I'm doing is that I find the vector AB, connecting A and B by AB = A - B then calculate the shortest distance from each point to the vector AB, if the distance is less than R, the point is inside. The problem with this method is that it only works if either A or B is the origin. for example, If I try to find the points inside the cylinder connecting p1 ( 100,10,20) p2 ( 100,-10,20) we get the points inside the cylinder ( 0,20,0) [ which is actually the cylinder formed by ( 0,0,0) and (0,20,0) ] certainly, I'm missing something, can anyone point it out? N.B: For some complicated reason, I can't use an auxiliary coordinate system or shift the origin. What I'm look
For your information: I think that by a cylinder you mean a body with a circle (not an ellipse) as its base. In $2D$ it simply means that you want to verify if a point $(x_1, y_1)$ is located within the circle with radius $R$ , to which most people will say " Easy: just check if $x_1^2+y_1^2 \le R^2$ ", which is, mathematically speaking, completely correct. However, when dealing with computers, the situation becomes more complicated: although the mentioned formula is still correct, an easy improvement is possible: // This is the normal function function bool inside(float X1, float Y1, float R) { return X1*X1 + Y1*Y1 <= R*R; } As you can easily see, you are doing at least two multiplications of floating point numbers, which might be very time consuming (certainly when you start checking hundreds, thousands, ... of numbers). Therefore you can do the following instead: // This is a faster function function bool inside_fast(float X1, float Y1, float R) { if (abs(X1) > R) return false; if (abs(Y1) > R) return false;
|geometry|euclidean-geometry|
0
Understanding $\mathbb{C}P^2$
I am trying to understand $\mathbb{C}P^2$ . Since I understand the Hopf fibration quite well, I like the following construction: Attach a $\mathbb{D}^2$ (2-cell) to a point $\mathbb{D}^0$ (0-cell) to get $S^2$ (thanks to @Leo Mosher for suggesting the more sensible order of attaching) Now attach $\mathbb{D}^4$ to $S^2$ by gluing the boundary $\partial \mathbb{D}^4=S^3$ to $S^2$ using the projection map P of the Hopf bundle $S^1 \hookrightarrow S^3 \rightarrow S^2$ I picture the result as a $S^2$ bundle over $S^2$ which I know is $\textbf{wrong}$ *. Here is my reasoning. Where did I go wrong? As $S^3 = \partial \mathbb{D}^4$ is a $S^1$ bundle over $S^2$ , the "interior" $\mathbb{D}^4 $ is a $\mathbb{D}^2$ bundle over $S^2$ (by just "filling" every $S^1$ ) Gluing the boundary $\partial \mathbb{D}^4 = S^3$ to $S^2$ via the projection of the Hopf fibraton amounts to gluing the boundary of every fiber (ie $\partial \mathbb{D}^2 = S^1$ ) together, giving us $S^2$ at every point. I think that
Your reasoning goes wrong in your point number $1$ . Just because you have fibered $S^3$ by disjoint $S^1$ 's, you cannot assert without justification that you can extend this to a fibering of $\mathbb D^4$ by disjoint $\mathbb D^2$ 's. And if you attempted to justify this assertion, you would run into this problem: if $D_1,D_2 \subset \mathbb D^4$ are two properly embedded discs with boundary circles $C_1,C_2 \subset S^3$ , then the linking number of $C_1$ and $C_2$ in $S^3$ is $0$ ; but the linking number of any two fibers of the Hopf fibration is $1$ .
|differential-geometry|algebraic-topology|differential-topology|low-dimensional-topology|4-manifolds|
0
Expected path length in a ladder-like graph if edges can be randomly removed.
Question from an old exam: The King of Squares sets out to patrol the City of Squares. The City of Squares is an infinite ladder, i.e., a graph $(V, E)$ where $V = \mathbb{N} \times \{0,1\}$ and the vertex $(x, y)$ is connected to $(x-1, y)$ for $x>0,(x+1, y)$ and $(x, 1-y)$ . Unfortunately, due to the revolt, some parts of the streets are blocked. Each edge independently becomes blocked with probability $\frac{1}{2}$ . The king leaves his palace at the point $(0,0)$ . Let $X$ be the largest value of the coordinate $x$ that the King can reach without passing through the blocked streets. The king can look ahead, so he will choose the best route before he sets off. In the situation in the image below $X=5$ . (a) Find EX. (b) Find the probability that both points $(X, 0)$ and $(X, 1)$ are reachable. My attempt at (a): Let $$ X_i = \begin{cases} i, & \text{if it's possible to reach $(i, 0)$ or $(i,1)$} \\ 0, & \text{otherwise}. \end{cases} $$ Now we need to find the probability $p(i)$ that
For each nonnegative integer $x$ and each $Y\subset\{0,1\}$ let the event $R_{x,Y}$ be: for each $y\in \{0,1\}$ we have $y\in Y$ iff there exists an abscissa-nondecreasing path from $(0,0)$ to $(x,y)$ . Then $p(R_{0,\varnothing})=0$ , $p(R_{0,\{0\}})=\frac 12$ , $p(R_{0,\{1\}})=0$ , and $p(R_{0,\{0,1\}})=\frac 12$ . Moreover, it is easy to see that for each natural $x\ge 0$ we have: $$p(R_{x+1,\varnothing})=p(R_{x,\varnothing})+\frac 12 p(R_{x,\{0\}})+\frac 12 p(R_{x,\{1\}})+ \frac 14 p(R_{x,\{0,1\}}),$$ $$p(R_{x+1,\{0\}})=0p(R_{x,\varnothing})+\frac 14 p(R_{x,\{0\}})+0 p(R_{x,\{1\}})+ \frac 18 p(R_{x,\{0,1\}}),$$ $$p(R_{x+1,\{1\}})=0p(R_{x,\varnothing})+0p(R_{x,\{0\}})+\frac 14 p(R_{x,\{1\}})+ \frac 18 p(R_{x,\{0,1\}}),$$ $$p(R_{x+1,\{0,1\}})=0p(R_{x,\varnothing})+\frac 14 p(R_{x,\{0\}})+\frac 14 p(R_{x,\{1\}})+ \frac 12 p(R_{x,\{0,1\}}).$$ Put $p_x=(p(R_{x,\varnothing}),p(R_{x,\{0\}}),p(R_{x,\{1\}}),p(R_{x,\{0,1\}}))^t.$ Then $p_0=\left(0,\frac 12,0,\frac 12\right)^t$ , and for each n
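The answer breaks off before extracting a number. Iterating its four recurrences numerically (my computation, not stated in the source), with $P(X\ge x)=1-p(R_{x,\varnothing})$ and $EX=\sum_{x\ge 1}P(X\ge x)$, gives $EX=9/5$:

```python
import numpy as np

# Transition matrix on (p_∅, p_{0}, p_{1}, p_{01}), columns = previous state,
# taken directly from the four recurrences above.
A = np.array([
    [1, 1/2, 1/2, 1/4],
    [0, 1/4, 0,   1/8],
    [0, 0,   1/4, 1/8],
    [0, 1/4, 1/4, 1/2],
])
p = np.array([0, 1/2, 0, 1/2])   # p_0

# E[X] = sum_{x>=1} P(X >= x), and P(X >= x) = 1 - p_x[∅].
EX = 0.0
for _ in range(200):             # geometric decay, so 200 steps is plenty
    p = A @ p
    EX += 1 - p[0]
print(EX)   # ≈ 1.8
```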
|probability|expected-value|
0
Some pattern in the gap of the extrema/root of a product using Stirling approximation
Playing with Desmos calculator and Stirling approximation I define : $$x>0,f(x)=\prod_{n=1}^{M}\left(2-\frac{\Gamma(\frac{x}{n}+1)}{\sqrt{2\pi×\frac{x}{n}}\left(\frac{x}{e×n}\right)^{\frac{x}{n}}}\right)$$ If you plot it properly over $(1/2,5/4)$ say $M=30$ some patterns appears : The extrema and roots are very regular in their successive gap and it's oscillating . For $M=30$ the gap is around $3/5$ Question : Can we show the existence of a constant or $constant\simeq root_{i+1}-root_{i}$ or the gap diverges like the harmonic sum ?
$x$ is a root of $f$ iff it makes one of the factors zero; that is, iff for some $n$ we have $g(x/n)=2$ where $g(t)=\frac{\Gamma(1+t)}{\sqrt{2\pi t}e^{-t}t^t}$ . It happens that the function $g$ is strictly decreasing and equals 2 at just one place, near $t=0.5728$ . Call this $\tau$ . So $g(x/n)=2$ when $x=n\tau$ for positive integer $n\leq M$ . So yes, the roots are very regular indeed :-).
|functions|roots|gamma-function|products|
1
Understanding the term almost everywhere in measure theory
I am trying to understand the term "almost everywhere" from measure theory correctly. So given two extended real-valued integrable functions $f, g: X \rightarrow \bar{\mathbb{R}}$ with $$\int_A f d\mu = \int_A g d\mu, $$ for all measurable $A$ , it follows that $f=g$ almost everywhere, i.e., that $f$ and $g$ are equal except on a $\mu$ -nullset. What I now don't understand is whether there is one specific null set on which they don't agree, or whether they don't agree on all null sets. But if they didn't agree on all null sets this would be weird, since we could (with $\mu$ the Lebesgue measure), for example, always take a countable number of real numbers, which form a nullset, and $f$ and $g$ wouldn't be allowed to agree on this set. But then we could take another such set, again with measure 0, and again $f$ and $g$ wouldn't be allowed to agree on them. And we could go on and on like this, and in the end there wouldn't be any set left on which $f$ and $g$ would actually still be equal...
"Almost everywhere" means that the set where whatever-it-is fails is a null set. If you have two things happening "almost everywhere" then the sets where they fail will in general be different null sets. (But it's OK because e.g. the set where at least one of them fails is the union of those two null sets, and the union or two null sets is a null set.) If something holds outside every null set then it holds literally-everywhere because the empty set is a null set. If something fails-to-hold on every null set then it holds literally-nowhere because every singleton set is a null set.
|measure-theory|lebesgue-integral|lebesgue-measure|almost-everywhere|
1
number of combinations for distributing 20 grades (between 0 and 100) with a a difference of at least 4 between every 2 grades
I'm asked to find the number of combinations for distributing 20 grades (between 0 and 100) with a difference of at least 4 between every 2 grades. The approach I tried was that if I choose the first number from 0 to 5 (they can increase by 4 or 5 for 19 times) there's: $6\cdot\mathrm{2}^{19}$ combinations. And for the numbers from 6 to 24 we have: $\sum_{i=0}^{19}{19\choose i}$ , since 5 is the biggest number from which we can start and only add 5s, and by the 20th number we reach 100, which is the maximum number. So starting from 6 we have to add fewer 5s and add 4s instead (in the 19 increments we have to add a 4 somewhere, hence the "choose"). But that seems very counterproductive, and I just realized that I can also add 6 or 7 at least one time, which I didn't consider. What should I do?
Make $19$ blocks of four $\boxed{\bullet\circ\circ\circ}$ plus a "short" block $\boxed{\bullet}$ , with the bullets representing chosen grades. $101-77 = 24$ balls remain, and we now have $44$ entities ( $24$ balls + $20$ blocks). Place the blocks in $\dbinom{44}{20}$ ways, with the "short" block last amongst the blocks.
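The count can be cross-checked by brute force on a scaled-down version of the problem (3 grades in $0..10$, pairwise difference $\ge 4$); the same block construction gives $\binom{10-8+3}{3}=\binom{5}{3}$ there, and $\binom{44}{20}$ for the original:

```python
from itertools import combinations
from math import comb

def brute(top, k, gap):
    """Count k-subsets of {0,...,top} with pairwise differences >= gap."""
    return sum(
        1
        for c in combinations(range(top + 1), k)
        if all(b - a >= gap for a, b in zip(c, c[1:]))
    )

# small analogue: 3 grades in 0..10, difference at least 4
print(brute(10, 3, 4), comb(10 - 4 * (3 - 1) + 3, 3))   # 10 10
# the original problem's answer is comb(100 - 4*19 + 20, 20) == comb(44, 20)
```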
|combinatorics|
0
Is there a $C>0$ such that $\|f\|_{\infty} \leq C \|f\|_2$ for all continuous functions on $[a;b]$?
Question: When $f \in C^2[a;b]$ we define $ \|f\|_2=\sqrt{\int_{[a;b]}|f|^{2}}$ and when $f$ is bounded on $[a;b]$ we define $\|f\|_{\infty} = \sup_{x \in [a;b]} |f(x)| $ . Does there exist a $C>0$ such that for every continuous function on $[a;b]$ we have $\|f\|_{\infty} \leq C \|f\|_2 $ ? Answer: 1- Choose the family of functions $ f_k(x)=\frac{(x-a)^k}{(b-a)^k} , k \geq 1$ . Hence $ \|f_k(x)\|_2 ^2 = \int_a^b \frac{(x-a)^{2k}}{(b-a)^{2k}} dx = \frac{1}{(b-a)^{2k}} [\frac{(x-a)^{2k+1}}{2k+1}]_a^b = \frac{b-a}{2k+1} $ 2- When $k \to \infty$ we have, for every $C$ : $ C \|f_k(x)\|_2 \to 0 $ . 3- On the other hand we have that $ \|f_k(x)\|_{\infty} = f_k(b)=1 $ 4- Thus no matter which $C$ we choose, $ \exists K $ s.t. $ \forall k > K $ we will have $ \|f\|_{\infty} = 1 > C \|f\|_2 $ For example you can choose: $K = \frac{b-a-1}{2}$ Is this correct? Thank you.
The answer is no; there is no such $C$ . For example, it is possible to construct a sequence of functions $(f_n)$ which are all continuous and satisfy both $$ \lim_{n \rightarrow \infty} \|f_n\|_2 = 0 \hspace{1cm} \|f_n\|_\infty = 1 \ \forall \ n=1,2,... $$ The existence of such a sequence makes it impossible for a universal $C > 0$ to exist such that $\|f\|_\infty \leq C \|f\|_2$ holds for all continuous $f$ . To see this, fix an arbitrary $C>0$ . Since the limit is zero and each term is non-negative, there must be some $n$ with $\|f_n\|_2 < \frac{1}{C}$ , but for such an $n$ we have $$\|f_n\|_\infty = 1 = C\frac{1}{C} > C\|f_n\|_2$$ which goes in the opposite direction of what you are asking for. Since this holds for arbitrary $C > 0$ , we can conclude that no universal $C$ exists. Here is one explicit construction of the sequence $f_n$ : $$f_n(x) = \begin{cases} 1 - n(x-a) & a \leq x \leq a+1/n \\ 0 & a + 1/n < x \leq b \end{cases}$$ Graphically, this is a triangle of height $1$ whose base has width $1/n$ .
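A quick numeric check (with $[a,b]=[0,1]$, my choice) that the triangle functions behave as claimed: $\|f_n\|_2 = 1/\sqrt{3n} \to 0$ while $\|f_n\|_\infty = 1$ for every $n$.

```python
from math import sqrt

# ||f_n||_2^2 = ∫_a^{a+1/n} (1 - n(x-a))^2 dx = 1/(3n) exactly,
# while sup |f_n| = f_n(a) = 1 for every n.
def l2_norm(n, samples=100000):
    """Midpoint-rule approximation of ||f_n||_2 on [0, 1]."""
    a, b = 0.0, 1.0
    h = (b - a) / samples
    total = 0.0
    for i in range(samples):
        x = a + (i + 0.5) * h
        fx = max(0.0, 1 - n * (x - a))
        total += fx * fx * h
    return sqrt(total)

for n in (1, 10, 100):
    print(n, l2_norm(n), 1 / sqrt(3 * n))   # numeric vs exact, shrinking to 0
```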
|calculus|integration|analysis|normed-spaces|
1
How to get the max of $(aa^*-bb^*)^2 - [(ab^*)^2 - (a^*b)^2]^2$?
If I have two complex numbers $ a,b \in \mathbb{C} $ with $ |a|^2 + |b|^2 = 1 $ , I need to find the maximum of \begin{aligned} (aa^*-bb^*)^2 - [(ab^*)^2 - (a^*b)^2]^2 \end{aligned} In the answer book, the author lets \begin{aligned} a = \cos \frac{\beta}{2} \\ b = \sin \frac{\beta}{2} e^{i\alpha} \end{aligned} and then we have \begin{aligned} \cos ^2 \beta +\frac14 \sin ^4 \beta \sin ^2 2\alpha \end{aligned} In physics, $\beta$ is the angle with the Z-axis, and $\alpha$ is the angle with the X-axis. But is this transformation reasonable? Why can we take $a$ with no phase factor? Is there some mathematical explanation? Thanks, LiZ.
$|a|^2+|b|^2=1\iff\begin{cases}|a|=\cos(t)\\|b|=\sin(t)\end{cases}\iff\begin{cases}a=\cos(t)e^{iu}\\b=\sin(t)e^{iv}\end{cases}\quad$ with $t,u,v$ real. Once expanded, the expression can be written as $f(a,b)=\Big[\ 2|a|^4|b|^4+|a|^4+|b|^4-2|a|^2|b|^2\ \Big]-\underbrace{\Big(\bar a^4b^4+a^4\bar b^4\Big)}_g$ Since $|e^{iu}|=|e^{iv}|=1$ , the phases play no role in the big square bracket. On the other hand $g(a,b) = \cos(t)^4\sin(t)^4\Big(e^{i(4v-4u)}+e^{i(4u-4v)}\Big)$ But you can notice that if you take instead $\begin{cases}a'=\cos(t)\\b'=\sin(t)e^{i(v-u)}\end{cases}$ you'll get $\quad g(a,b)=g(a',b')$ Therefore in the end, WLOG you can set $\beta=2t$ and $\alpha=v-u$ and get the announced parametrisation.
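The claimed invariance under the global phase $u$, and the final formula, can both be checked numerically (random $t,u,v$; my sketch):

```python
import cmath, math, random

def F(a, b):
    """(a a* - b b*)^2 - [(a b*)^2 - (a* b)^2]^2 for complex a, b."""
    s = (a * a.conjugate() - b * b.conjugate()) ** 2
    t = (a * b.conjugate()) ** 2 - (a.conjugate() * b) ** 2
    return (s - t ** 2).real

random.seed(1)
for _ in range(1000):
    beta  = random.uniform(0, math.pi)
    alpha = random.uniform(0, 2 * math.pi)
    u     = random.uniform(0, 2 * math.pi)   # the global phase being dropped
    a = math.cos(beta / 2) * cmath.exp(1j * u)
    b = math.sin(beta / 2) * cmath.exp(1j * (alpha + u))
    target = math.cos(beta) ** 2 + math.sin(beta) ** 4 * math.sin(2 * alpha) ** 2 / 4
    assert abs(F(a, b) - target) < 1e-10
print("parametrisation verified")
```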
|inequality|complex-numbers|parametrization|
1
What is the quantitative relationship between $∠BAF$ and $∠CHG$ and the line segment that is equal to $AF$?
Question: As shown in the diagram, in isosceles $\triangle ABC,\ AB=AC.$ $H$ is a point on $AC$ , take points $E$ and $F$ in turn on the extension line of $BC$ , and take point $D$ on the extension line of $CB$ , so that $EF=DB$ , make $EG // AC$ across point $E$ to cross the extension line of $DH$ at point $G$ , connect $AF$ , and if $\angle HDF+ \angle F= \angle BAC$ : Explore the quantitative relationship between $\angle BAF$ and $\angle CHG$ , and find a line in the figure that is equal to the line segment $AF$ and justify your conclusion. (Note: The equal segment must be in the original figure and cannot be an auxiliary line.) Attachment A: Image of the question This question will be simple by using congruent triangles. But I still wonder how to deal with this question. One possible way I've got is to: For the "quantitative relationship between $\angle BAF$ and $\angle CHG$ " part, I think $\angle BAF$ = $\angle CHG$ , because $∠F + ∠HDF = ∠BAC$ , it follows that $∠CH
Let $M$ be the midpoint of $DF$ , and reflect the figure in the vertical line through $M$ (so $D \leftrightarrow F$ and $B \leftrightarrow E$ ). Then $G$ is mapped to a point $G'$ which lies on $AB$ , and $AFG'$ is isosceles (with vertex angle $F-D$ ). Therefore $|DG|=|AF|$ .
|geometry|triangles|
0
Question regarding set notation
In an academic research paper in computer science, I have a formula using the set builder notation in order to define a set $A$ , as follows: $$A = \{ y(b_i)=a_i | b_i \in B \}.$$ In the particular example, I want to convey in a concise way that each element $a_i \in A$ results from the transformation given by the function $y(b_i)$ for all $b_i \in B$ . Is that an appropriate and correct formulation, especially the part on the left-hand side (LHS) of the set builder? That is, in set theory, is it allowed/correct to use an equality on the LHS of the set builder in the same way as above?
The notation you have suggested parses, and would likely be understood by most readers. However, a couple of things would make it more clear: You are defining an object $a_i$ , which is an element of $A$ . It is easier to read from left-to-right, so I would suggest writing $a_i = y(b_i)$ (instead of $y(b_i) = a_i$ ). The index $i$ is doing nothing for you in this notation. I would drop it, and write $a = y(b)$ with $b \in B$ . The spacing around the vertical bar is wrong. Either use \mid , or \ |\ (or just use a colon). My version of that notation would be $$ A = \{ a = y(b) \mid b \in B \}. $$ You could also move the specification of $a$ to the right-hand side and write $$ A = \{ a \mid a = y(b), b \in B\}, $$ though I think that my preference would be for the former. As noted in the comments, there are other common notations which would give the same set. If $y$ is a function, then $A$ is the image of $B$ under $y$ . A very short notation for this is $$ A = y(B). $$ You can also comp
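Incidentally, the suggested notation maps one-to-one onto a set comprehension in many languages; in Python (with a hypothetical transformation `y` and index set `B`, placeholders for whatever the paper uses):

```python
# The set-builder line  A = { a = y(b) | b ∈ B }  is exactly a comprehension.
def y(b):
    return 2 * b + 1          # hypothetical transformation

B = {0, 1, 2, 3}
A = {y(b) for b in B}         # reads: "the set of y(b) for b in B"
print(A == {1, 3, 5, 7})      # True
```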
|discrete-mathematics|notation|set-theory|
1
Maximum value for $F(y)=\int_0^1[y'\sin(\pi y)-(y-t)^2]dt$?
This is what I have done thus far: $F(y)=\int_0^1[y'\sin(\pi y)-(y-t)^2]dt=-\frac{1}{\pi}\int_0^1[(\cos(\pi y))\frac{d}{dt}]dt-\int_0^1(y-t)^2dt$ (as $-\frac{1}{\pi}[(\cos(\pi y))\frac{d}{dt}]=-\frac{1}{\pi}\cdot -\pi y'\sin(\pi y)=y'\sin(\pi y)$) I then used the fundamental theorem of calculus to determine the first part of the integral and determined the second part simply as a polynomial. $=-\frac{1}{\pi}\cdot\cos(\pi y)-(y^2-y+\frac{1}{3})$ Then, to find the maximum of $F(y)$, I took the derivative and equated it to $0$. $F'(y)=\sin(\pi y)-2y+1=0$ However, I'm not sure how I should use the above equation to solve for y. Also, did I use the fundamental theorem of calculus correctly above? *Edit: $y(t)=t$
Another way: by the Euler-Lagrange equation in the calculus of variations, $$f_y-\frac{d}{dt}(f_{y'})\\ =\pi\cos(\pi y)y'-2(y-t)-\frac{d}{dt}(\sin\pi y)\\ =-2(y-t)=0,$$ so the extremal is $y(t)=t$ .
|optimization|calculus-of-variations|
0
Graph has at least two colorings
Let $k \geq 2$ and $G$ be an incomplete graph that is $k$ -colorable and has less than $(k-1)|V(G)|-{k\choose 2}$ edges. I want to show that $G$ has at least two $k$ -colorings. It looks really simple but I just don't know how to start here.
Thanks to the great help in the comments I could find a solution. If $k$ is greater than the chromatic number you can take a minimal coloring and change the color of one of the vertices to one of the unused colors. Now let $k$ be the chromatic number and $\mathscr{C} = \{C_1, \dots,C_k\}$ be a $k$ -coloring of $G$ (partition of $|V(G)|$ ). Assume that there is no other $k$ -coloring, we want to lead this to a contradiction. This means that any Kempe chain for two colors must contain all of the vertices of these colors. Therefore, the A,B-Kempe chain is at least a tree induced by $A\cup B$ . Every edge can only be in one chain so we can sum over the edges in every Kempe chain to get $|E(G)|$ . A tree T has $|V(T)|-1$ edges and there are ${k\choose 2}$ chains so by summing over the Kempe chains we get: $$\begin{align*}|E(G)|&\geq \sum_{i=1}^k \sum_{j=i+1}^k(|C_i\cup C_j|-1)\\ &=(\sum_{i=1}^k \sum_{j=i+1}^k|C_i|+| C_j|)-{k\choose 2}\\ &=(k-1)\sum_{i=1}^k|C_i|-{k\choose 2}\\ &=(k-1)|V(G)|-
|graph-theory|coloring|
1
Asymptotic expansion gives Euler-Mascheroni constant
Mathematica gives the following asymptotic expansion: $$ \int_0^\Lambda \frac{1-\cos(qx)}{q} \mathrm{d}q \overset{\Lambda\to\infty}{\sim}\gamma+\log\left(x\Lambda\right) +\mathcal{O}\left(\frac{1}{x\Lambda}\right) $$ where $\gamma\approx 0.577$ is the Euler-Mascheroni constant. How can this expansion be obtained analytically? In particular, where does the $\gamma$ come from?
Let's denote $\displaystyle I(R)=\int_0^R\frac{1-e^{i t}}tdt$ ; then, substituting $t=qx$ , the desired integral is $\displaystyle\int_0^\Lambda \frac{1-\cos(qx)}{q} \mathrm{d}q=\Re \,I(\Lambda x)$ . Integrate along the boundary of the quarter-disc in the complex plane, traversed in the positive direction: the segment $[0,R]$ , a large arc of radius $R$ , and the segment back along the imaginary axis (adding also a small quarter-circle around $z=0$ to close the contour; the integral along this small arc tends to zero): $$\oint\frac{1-e^{i z}}zdz=I(R)+\int_0^{\pi/2}\frac{1-e^{i R e^{i\phi}}}{Re^{i\phi}}iRe^{i\phi}d\phi+\int_R^0\frac{1-e^{-t}}tdt=0$$ as we have no poles inside our contour. Substituting $z=e^{i\phi}$ in the second term, $$\Rightarrow\,\,I(R)=\int_0^R\frac{1-e^{-t}}tdt-\int_1^{e^{\pi i/2}}\frac{1-e^{iRz}}zdz$$ Integrating by parts, $$I(R)=\ln t(1-e^{-t})\bigg|_0^R-\int_0^Re^{-t}\ln tdt-\ln z\bigg|_1^i+\frac1{iR}\frac{e^{iRz}}z\bigg|_1^i+\frac1{iR}\int_1^i\frac{e^{iRz}}{z^2}dz$$ Dropping exponentially small terms, $$I(R)=\ln R-\int_0^\infty e^{-t}\ln tdt-\frac{\pi i}2-\f
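The expansion is easy to confirm numerically; a midpoint-rule evaluation of the integral (my sketch; $x=1$, $\Lambda=50$) already matches $\gamma+\log(x\Lambda)$ to the stated $O(1/(x\Lambda))$:

```python
import math

def I(lam, x=1.0, steps=200001):
    """Midpoint-rule value of ∫_0^Λ (1 - cos(qx))/q dq.

    The integrand extends continuously by 0 at q = 0, and the midpoint
    rule never evaluates at q = 0 anyway.
    """
    h = lam / steps
    total = 0.0
    for i in range(steps):
        q = (i + 0.5) * h
        total += (1 - math.cos(q * x)) / q * h
    return total

gamma = 0.57721566490153286
lam = 50.0
print(I(lam), gamma + math.log(lam))   # agree up to O(1/Λ)
```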
|integration|asymptotics|euler-mascheroni-constant|
1
Maximum and Minimum Entropy Calculations
Consider a bookstore that has an inventory of used books $U' = \{\text{`To Kill a Mockingbird', `1984'}, \ldots\}$ , and an inventory of new books $N' = \{\text{`Project Hail Mary', `The Midnight Library'}, \ldots\}$ . The book selected by a customer buying a used book is a random variable $U$ taking values in the set $U'$ , with entropy $H(U)$ . Similarly, the book chosen by a customer buying a new book is a random variable $N$ taking values in the set $N'$ , with its entropy denoted by $H(N)$ . The bookstore is interested in the random variable $X$ , which represents the next customer's choice of book, where $X$ takes values in the set $X' = U' \cup N'$ . Let $p$ represent the probability that the next customer chooses a used book, then $X$ can be defined as: \begin{equation} X = \begin{cases} U & \text{with probability } p \\ N & \text{with probability } 1 - p \end{cases} \end{equation} Given this scenario, how can the bookstore's uncertainty, $H(X)$ , be expressed in terms of the g
The expansion looks correct. Write $$ H(X)=p \left[H(U)+\log(1/p)\right]+ (1-p) \left[H(N)+ \log (1/(1-p))\right] $$ which is a convex combination of two nonnegative functions. You probably can assume $H(U)$ and $H(N)$ are given constants and then use Lagrange multipliers (or another favourite method) to optimize over $p.$
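A quick check of the expansion with toy distributions (the titles and probabilities below are placeholders, not from the source); it relies on $U'$ and $N'$ being disjoint, as in the problem:

```python
import math

def H(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

U = {'To Kill a Mockingbird': 0.75, '1984': 0.25}
N = {'Project Hail Mary': 0.5, 'The Midnight Library': 0.5}
p = 0.3

# mixture over the disjoint union U' ∪ N'
X = {**{k: p * v for k, v in U.items()},
     **{k: (1 - p) * v for k, v in N.items()}}

direct  = H(X)
formula = p * (H(U) + math.log2(1 / p)) + (1 - p) * (H(N) + math.log2(1 / (1 - p)))
print(direct, formula)   # equal
```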
|information-theory|entropy|
0
How long do I need to wait to get a treadmill?
There are $15$ treadmills at my gym, and each person uses one for at most $40$ minutes. The possible times spent on the treadmill, for example $5$ minutes or $20.23$ minutes, are equally likely. I arrive at my gym and see every treadmill is occupied. What is the probability that at least one person gets off within $10$ minutes of my arrival? This is a practical question so I'm asking it here. My assumption is it's $1$ minus the complement to the $15$ th power.
Your question isn't completely clear to me but I think you mean the possible time someone spends on a treadmill is a uniform distribution from 0 to 40 mins. In that case, first imagine there's one treadmill. Let's say when you walk into the gym, the person on that treadmill will have a total time x spent on it. Let's find the expected value of x. Let's say the gym is open for y minutes a day and take the limit $y\rightarrow \infty$ , then the probability that you walk in on a particular treadmill user, User 1, given that they use it for X minutes is $p(\textrm{walk in on User 1}|x=X)=X/y$ . The average time to use a treadmill is 20 mins, so the total number of treadmill users is $y/20$ , so $p(x=X|\textrm{walk in on User 1})=\frac{p(\textrm{walk in on User 1}|x=X)p(x=X)}{p(\textrm{walk in on User 1})}=\frac{(X/y)(1/40)}{20/y}=X/800$ . The probability that they'll finish within 10 mins given x is $p(\textrm{finish in 10 mins}|x)=10/x$ . The expected probability of them finishing in 10 m
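The answer is truncated; continuing its setup (length-biased density $x/800$ on $(0,40)$, arrival uniform within the session, and $p(\text{finish in 10}\mid x)=\min(1,10/x)$) gives a single-treadmill probability of $\int_0^{10}\frac{x}{800}dx+\int_{10}^{40}\frac{10}{800}dx=\frac{7}{16}$ — my computation, not stated in the source. A Monte Carlo sketch to check it:

```python
import math
import random

random.seed(0)

def single_treadmill_prob(trials=400000):
    """P(the person on one treadmill finishes within 10 min of my arrival),
    with session lengths uniform on (0, 40) and a uniformly random arrival
    instant (hence length-biased sampling of the session I walk in on)."""
    hits = 0
    for _ in range(trials):
        # length-biased draw: density x/800 on (0, 40)  =>  x = 40*sqrt(U)
        x = 40 * math.sqrt(random.random())
        remaining = random.uniform(0, x)   # arrival uniform within session
        if remaining <= 10:
            hits += 1
    return hits / trials

p1 = single_treadmill_prob()
print(p1, 1 - (1 - p1) ** 15)   # p1 ≈ 7/16
```

With that, the asker's "1 minus the complement to the 15th power" (assuming the 15 treadmills behave independently) gives $1-(9/16)^{15}$, which is very close to 1.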
|probability|
0
Is This Proof Involving the Sub-matrix of $D_pF$ Correct?
The Problem to Solve Prove that for any smooth map between manifolds $F\in C^{\infty}(M,N)$ and any given point $p\in M$ , there is an open neighborhood $U$ of $p$ such that $rank_p(F)\leq rank_q(F)$ for all $q\in U$ . Note : The hint provided is to first work out why it is enough to show this fact holds for maps $\mathbb{R}^m \rightarrow \mathbb{R}^n$ , then to show it is true for these maps, choose an invertible sub-matrix of $D_pF$ of size $rank_p(F)\times rank_p(F)$ and think about how the determinant of the sub-matrix varies as p changes. Attempt A smooth map between manifolds locally resembles the map $\mathbb{R}^m \rightarrow \mathbb{R}^n$ . Hence, it is sufficient to show that this fact holds for $\mathbb{R}^m \rightarrow \mathbb{R}^n$ . Consider the smooth map $F: \mathbb{R}^m \rightarrow \mathbb{R}^n$ . Let $p\in \mathbb{R}^m$ , and $rank_pF = r$ . We want to prove that in an open neighborhood $U$ of $p$ , the rank of any other point $q$ is at least $r$ . First, choose an $r\
It is correct. You can improve it a little as follows: Mention explicitly that an $(n \times m)$ -matrix $A$ has rank $\ge r$ if and only if it has an $(r \times r)$ -submatrix $\tilde A$ with $\det \tilde A \ne 0$ . Note, however, that $\tilde A$ is in general not unique. Apply this to the Jacobian matrix $D_pF = DF(p)$ to get an $(r \times r)$ -submatrix $\tilde DF(p)$ with $\det \tilde DF(p) \ne 0$ . Avoid saying "This sub-matrix has the largest linearly independent partial derivatives of $F$ at $p$ "; this sentence does not make much sense. Also there is no need to mention that the submatrix is invertible (although this is correct). The determinant of the sub-matrix $\tilde DF(p)$ is a continuous function of the entries of the sub-matrix. Since the determinant is non-zero at $p$ , by continuity there exists an open neighbourhood $U$ around $p$ such that $\det \tilde DF(q) \ne 0$ for $q \in U$ . Hence, for $q \in U$ , the matrix $DF(q)$ contains the $(r \times r)$ -submat
|differential-geometry|manifolds|smooth-manifolds|
0
How do you prove that $\{ Ax \mid x \geq 0 \}$ is closed?
Let $A$ be a real $m \times n$ matrix. How do you prove that $\{ Ax \mid x \geq 0, x \in \mathbb R^n \}$ is closed (as in, contains all its limit points)? The inequality $x \geq 0$ is interpreted component-wise. This fact is used in some proofs of Farkas's lemma. It seems like it should be easy, but the proof I've seen seems to be unexpectedly complicated. Is there a very clear / easy / obvious proof of this fact? (Note that linear transformations do not always map closed sets to closed sets, as discussed in this question . For example, let $S = \{ (x,y) \in \mathbb R^2 \mid y \geq e^x \}$ and let $T:\mathbb R^2 \to \mathbb R^2$ such that $T(x,y) = (0,y)$. Then $S$ is closed, but $T(S)$ is not closed.) Edit: Here is a simple proof in the case where $A$ has full column rank. (A very similar proof is given in Nocedal and Wright, in the Notes and References at the end of chapter 12.) Let $y^*$ be a limit point of $\Omega = \{ Ax \mid x \geq 0, x \in \mathbb R^n \}$. There exists a sequenc
Because the set $S := \{Ax \colon x \in \mathbb{R}^n_+\}$ is the projection of $\{(x, y) \in \mathbb{R}^{n} \times \mathbb{R}^{m} \colon y = Ax, x \geq 0\}$ onto $\mathbb{R}^m$ , we can obtain an H-representation of $S$ by the Fourier–Motzkin elimination procedure . Because the resulting H-representation is an intersection of finitely many halfspaces, $S$ is closed.
|real-analysis|general-topology|convex-analysis|convex-optimization|
0
Question about properties of Picard-Lindeloef existence theorem
I have a few questions about solutions that arise from differential equations where Picard–Lindelöf can be applied: In the problem $y'=f(t,y)=-y^2$ , solutions have the form $\frac{1}{t-c}$ and always have a discontinuity point; what is causing this? Is it that $f(t,y)$ has a zero for some value of $y$ ? Will this always happen? If a structure function $f$ is locally Lipschitz on an entire interval, do the unique solutions always exist on that entire interval? I don't mean globally Lipschitz, I mean locally Lipschitz on an interval such as $[0,1)$ , for example $f(t,y)=\frac{1}{y-1}$ . What about $y'=\arctan(y)$ ? This is globally Lipschitz but also has a zero; do the solutions have a discontinuity point? (I don't think this is analytically solvable, but I'm guessing it can still be analyzed using numerical methods.)
$f(t,y)$ having a zero doesn't have much to do with whether the solution has a discontinuity (except that if there is $y_0$ such that $f(t,y_0) = 0$ for all $t$ then the constant $y = y_0$ is a solution, which of course is continuous on all of $\mathbb R$ ). If the differential equation is linear, i.e. $f(t,y) = a(t) y + b(t)$ for some continuous functions $a(t)$ and $b(t)$ , then all solutions are defined on all of $\mathbb R$ . For differential equations that may not be linear but where $f(t,y)$ is bounded by a function linear in $y$ , we can bound the solutions using Grönwall's inequality, so it can't go to $\infty$ in finite time. This applies, for example, to your $y' = \arctan(y)$ , so the solutions of that equation are continuous on all of $\mathbb R$ .
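To illustrate the last point numerically (a minimal sketch of my own, not part of the argument above): integrating $y'=\arctan(y)$ with a crude forward Euler scheme shows the solution growing but staying under the linear a-priori bound $|y(t)|\le |y(0)|+\frac{\pi}{2}t$ that Grönwall-type reasoning gives, with no blow-up in finite time.

```python
import math

def euler(f, y0, t0, t1, n):
    """Forward Euler for the autonomous ODE y' = f(y)."""
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        y += h * f(y)
    return y

# y' = arctan(y) is globally Lipschitz (|(arctan)'| <= 1), and since
# |arctan(y)| < pi/2 the solution obeys |y(t)| <= |y(0)| + (pi/2) * t.
T = 50.0
y = euler(math.atan, 1.0, 0.0, T, 200_000)
assert 1.0 < y < 1.0 + (math.pi / 2) * T   # grows, but no finite-time blow-up
print(y)
```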
|ordinary-differential-equations|
0
Proving the convergence of a series with very little information
Let $m \in \mathbb{N}$ be a fixed natural number and $ (a_n)_{n \geq 1}$ be a sequence of positive real numbers such that $\forall n \geq 1\colon a_{n+1} \leq a_n - a_{mn}$ . Prove that the series $\sum_{n=1}^{\infty}n^\alpha a_n$ converges $\forall \alpha \in \mathbb{R}_+$ . My first idea was to show that $a_n$ is decreasing. From the hypothesis, we have that $0 < a_{mn} \leq a_n - a_{n+1}$ . So we can sum this inequality up from $n=s$ to $n=t$ , so we get $\sum_{n=s}^t a_{nm} \leq a_s-a_{t+1}$ . However I do not think that it helps that much.
Let $(a_n)$ be any sequence as in OP. We begin by establishing key lemmas. Lemma 1. For any $l \geq m$ , $$ \sum_{n=l}^{\infty} a_n \leq m a_{\lfloor l/m \rfloor}. $$ Proof of Lemma 1. By the positivity of $(a_n)$ , we have $a_{n+1} \leq a_n - a_{mn} < a_n$ and hence $(a_n)$ is decreasing. Then for any $l \geq m$ , we have $$ a_{\lfloor l/m \rfloor} \geq \sum_{n=\lfloor l/m \rfloor}^{\infty} (a_n - a_{n+1}) \geq \sum_{n=\lfloor l/m \rfloor}^{\infty} a_{mn} \geq \frac{1}{m} \sum_{n=m\lfloor l/m \rfloor}^{\infty} a_{n}. $$ This proves both assertions of Lemma 1. $\square$ Lemma 2. Define $\xi(\alpha) = \sum_{n=1}^{\infty} n^{\alpha} a_n \in [0, \infty]$ . Then $\xi(\alpha) < \infty$ for all $\alpha \leq 0$ . For each $\alpha > 0$ , there exists a constant $K_{\alpha} \in (0, \infty) $ such that \begin{equation}\label{l2} \xi(\alpha) \leq K_{\alpha} (1 + \xi(\alpha - 1)). \tag{1} \end{equation} Proof of Lemma 2. Item 1 is immediate from Lemma 1. For Item 2, let $\alpha > 0$ . Then there exists a constant
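As a quick sanity check (my own illustration, with the concrete sequence $a_n = 2^{-n}$ and $m = 2$, which satisfies $a_{n+1} \le a_n - a_{2n}$ for all $n \ge 1$), Lemma 1 can be verified numerically:

```python
m = 2
a = lambda n: 0.5 ** n   # a_n = 2^{-n}

# hypothesis check: a_{n+1} <= a_n - a_{mn} for n >= 1
for n in range(1, 200):
    assert a(n + 1) <= a(n) - a(m * n)

# Lemma 1: sum_{n >= l} a_n <= m * a_{floor(l/m)}
# (the geometric tail sum_{n=l}^infty 2^{-n} equals 2^{-(l-1)} exactly)
for l in range(m, 60):
    tail = 0.5 ** (l - 1)
    assert tail <= m * a(l // m)
print("Lemma 1 verified for a_n = 2^-n, m = 2")
```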
|real-analysis|calculus|sequences-and-series|analysis|problem-solving|
1
How to find a matrix $A$ such that $\langle x,y\rangle = x^T A y$ for the $L^2 (D)$ inner product
So I know that I should have some matrix $A$ such that $\langle x,y\rangle = x^T A y$ for the $L^2 (D)$ inner product. I would like to find this $A$ . My domain $D$ is a $3 \times 3$ square grid in $\mathbb{R}^2$ . I am assuming that the points making up this grid are $x_i = x_0 + i$ for $i = 0,1,2,3$ and $y_j = y_0 + j$ for $j = 0,1,2,3$ (note here I'm assuming the step size is 1 for both $x$ and $y$ directions). I also am using the standard piecewise linear finite element basis functions as the basis functions for my space which I believe will come into play so $\varphi_{ij} = \phi_i(x) \phi_j(y)$ where $\begin{equation*} {\large \phi _j (y) =\left\{ \begin{array}{ll} \frac{y-y_{j-1}}{h} & y\in I_j \\ \frac{y_{j+1}-y}{h} & y \in I_{j+1} \\ 0 & \mbox{else} \end{array} \right.} \mbox{ for } j = 1,...,N-1 \end{equation*}$ I genuinely have no idea how to proceed and I've tried searching everything I can think of online but cannot find anything helpful. In general it confuses me how a vec
To be a bit more precise, what you have done is select a finite-dimensional subspace $\mathcal{V}_h\subset L^2(D)$ with basis elements $\varphi_k$ given by the product of your 1D basis elements as you have described. If we assume you choose a consistent way to order them, then we represent elements in $\mathcal{V}_h$ as linear combinations like so $$u_h = \sum_{k=1}^N u_k\varphi_k,\quad v_h = \sum_{k=1}^N v_k\varphi_k.$$ Now, we certainly cannot create a finite-dimensional operator $A$ that generates the inner product on all of $L^2$ , but we can create one that generates the $L^2$ inner product restricted to $\mathcal{V}_h$ , since it is finite-dimensional. Let $U = (u_1,\dots,u_N)$ be the vector of coefficients of the function $u_h$ , then we seek a spd matrix $A$ such that $$\langle U, AV\rangle_{\mathbb{R}^N} = \langle u_h, v_h\rangle_{L^2(D)},\quad \forall u_h,v_h\in \mathcal{V}_h,\\ \text{ where } u_h = \sum_{k=1}^N u_k \varphi_k, \, v_h = \sum_{k=1}^N v_k \varphi_k.$$ To obtain this $A$ , expand the inner product: $\langle u_h, v_h\rangle_{L^2(D)} = \sum_{i,j} u_i v_j \langle \varphi_i, \varphi_j\rangle_{L^2(D)}$ , so $A_{ij} = \langle \varphi_i, \varphi_j\rangle_{L^2(D)}$ , which is exactly the mass matrix of the basis.
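A minimal sketch of that assembly (my own illustration, assuming a uniform grid with spacing $h=1$ and lexicographic ordering of the nodes): for tensor-product hat functions the 2D mass matrix is the Kronecker product of the 1D mass matrices, whose entries $\int \phi_i \phi_j$ on a uniform grid are $2h/3$ on the diagonal and $h/6$ on the first off-diagonals.

```python
def mass_1d(n, h=1.0):
    """1D P1 mass matrix for n hat functions on a uniform grid of spacing h."""
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 2.0 * h / 3.0
        if i + 1 < n:
            M[i][i + 1] = h / 6.0
            M[i + 1][i] = h / 6.0
    return M

def kron(A, B):
    """Kronecker product of two dense matrices given as lists of lists."""
    ra, ca, rb, cb = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]

# e.g. 3 hat functions per direction -> a 9x9 mass matrix A_ij = <phi_i, phi_j>
Mx = mass_1d(3)
A = kron(Mx, Mx)
print(len(A), A[0][0])   # 9 rows; diagonal entry (2h/3)^2 = 4/9
```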
|linear-algebra|functional-analysis|numerical-methods|
1
The proof of van der Corput inequality
$\newcommand{\lrp}[1]{\left(#1\right)}$ $\newcommand{\lrmod}[1]{\left|#1\right|}$ I am trying to understand the proof of the van der Corput inequality given in Lemma 1 of this blog entry due to Tao . We will use the Big-O notation, whose definition can be found here. Lemma. (Van der Corput Inequality). Let $a_1, a_2, a_3 , \ldots$ be a sequence of complex numbers bounded in magnitude by $1$ . Then for any $1\leq H\leq N$ we have $$ \lrmod{ \frac{1}{N} \sum_{n=1}^N a_n } \leq \lrp{ \frac{1}{H}\sum_{h=0}^{H-1} \lrmod{ \frac{1}{N} \sum_{n=1}^N a_{n+h}\bar a_n} }^{1/2} + O\lrp{ \frac{H}{N}} $$ The proof Tao has provided proceeds as follows. It is easy to see that $$ \lrmod{ \frac{1}{N} \sum_{n=1}^N a_n - \frac{1}{N} \sum_{n=1}^N a_{n+h} } = O\lrp{\frac{H}{N}} $$ Therefore $$ \frac{1}{H}\sum_{h=0}^{H-1}\lrmod{ \frac{1}{N} \sum_{n=1}^N a_n - \frac{1}{N} \sum_{n=1}^N a_{n+h} } = O\lrp{\frac{H}{N}} $$ and hence by triangle inequality we have that $$ \lrmod{\frac{1}{N} \sum_{n=1}^N a_n - \frac{1}{N
Here is a slightly different proof. Let $a_1,\ldots, a_N$ be complex numbers (not necessarily of modulus $1$ ). Define $a_n=0$ for $n\leq0$ or $n>N$ . For any $1\leq H\leq N$ we have that $$H\sum^N_{n=1}a_n=\sum^{N+H-1}_{p=1}\sum^{H-1}_{h=0}a_{p-h}$$ By Cauchy–Schwarz \begin{align} H\Big|\sum^N_{n=1}a_n\Big| &\leq (N+H-1)^{1/2}\left(\sum^{N+H-1}_{p=1}\left|\sum^{H-1}_{h=0}a_{p-h}\right|^2\right)^{1/2}=(N+H-1)^{1/2}\left(\sum^{N+H-1}_{p=1}\left(\sum^{H-1}_{h=0}a_{p-h}\right)\left(\sum^{H-1}_{h=0}\overline{a_{p-h}}\right)\right)^{1/2}\\ &=(N+H-1)^{1/2}\left(\sum^{N+H-1}_{p=1}\left(\sum^{H-1}_{h=0}|a_{p-h}|^2+2\sum_{0\leq s<r\leq H-1}\operatorname{Re}\big(a_{p-r}\overline{a_{p-s}}\big)\right)\right)^{1/2} \end{align} The nonzero terms in the sum $$S:=\sum^{N+H-1}_{p=1}\sum_{0\leq s<r\leq H-1}a_{p-r}\overline{a_{p-s}}$$ are of the form $a_n\overline{a_{n+h}}$ where $1\leq n\leq N-1$ and $1\leq h\leq H-1$ . For fixed $n=p-r$ and $h=r-s$ in $\{1,\ldots, N-1\}$ and $\{1,\ldots, H-1\}$ respectively, the pairs $(s, r)$ with $0\leq s<r\leq H-1$ that yield $a_n\overline{a_{n+h}}$ are $(0,h),(1,h+1),\ldots, (H-1-h,H-1)$ , and for each su
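The combinatorial identity at the start of the proof can be checked directly (a quick sanity script of my own, extending the sequence by zeros exactly as above):

```python
import random

N, H = 40, 7
rng = random.Random(0)
a = {n: complex(rng.uniform(-1, 1), rng.uniform(-1, 1)) for n in range(1, N + 1)}
av = lambda n: a.get(n, 0j)   # a_n extended by zero for n <= 0 or n > N

# H * sum_{n=1}^N a_n  ==  sum_{p=1}^{N+H-1} sum_{h=0}^{H-1} a_{p-h}
lhs = H * sum(av(n) for n in range(1, N + 1))
rhs = sum(av(p - h) for p in range(1, N + H) for h in range(H))
assert abs(lhs - rhs) < 1e-9
print("the averaging identity holds")
```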
|inequality|
0
Guillemin's proof of manifold boundary dimension
I'm having trouble understanding Guillemin's proof that given a $k$-dimensional manifold $X$ with a boundary, $\partial X$ is a $(k-1)$ dimensional manifold without boundary. I understand up to the point that we need to show $\phi(\partial U)=\partial V$ . However, I am having a hard time understanding why showing $\phi(\partial U) \supset \psi(\partial W)$ holds for a different local parameterization $\psi:W\to V$ would imply $\phi(\partial U) \supset \partial V$ .
The idea is to reduce the property $\phi(\partial U)\supset\partial V$ on a manifold into an equivalent property on the base space $\mathbb{H}^k$ (where the boundary $\partial\mathbb{H}^k$ is easier to understand). For this, one uses the fact that $X$ is a manifold, i.e. the fact that it is covered by charts that behave well under composition: Since $\psi(\partial W)\subset \partial V$ holds for all charts $\psi$ from $W\subset\mathbb{H}^k$ into $V\subset X$ , and because additionally for each $x\in V$ there exists at least one chart $\psi$ with $x\in\psi(W)$ , the property $\phi(\partial U)\supset\partial V$ can be understood as $\phi(\partial U)\supset \psi(\partial W)$ for all charts $(\psi,W)$ with $\psi(W)\subset V$ . This is then translated into $\partial U \supset (\phi^{-1}\circ\psi)( \partial W)$ , which can be proven using the property that a composition $\phi^{-1}\circ\psi$ of two charts is a homeomorphism from $\mathbb{H}^k$ to itself which preserves open sets.
|general-topology|differential-geometry|manifolds|differential-topology|manifolds-with-boundary|
0
Count of each (not necessarily distinct) element of the Powerset of Powerset elements
Let $X$ be a finite set. Recall that the power set of $X$ is the set of all subsets of $X$ . We denote the power set of $X$ by $\mathcal{P}(X)$ . It can be proved that, if $X$ has $n$ elements $(n \in \mathbb{N}_0)$ , then $\lvert \mathcal{P}(X) \rvert = 2^n$ . Question. What is the value of $ \sum_{A \in \mathcal{P}(X)} \lvert \mathcal{P}(A) \rvert$ ? I imagine there is a basic combinatorial identity that I - a humble programmer - am unaware of.
You must first find the distribution of sizes of the parts $A$ of $X$ . This is conveniently expressed by the "size-generating polynomial" $P=\sum_{k=0}^nc_kY^k\in\Bbb{Z}[Y]$ where $c_k$ is the number of parts $A$ of $X$ of size $k$ . You probably know that $c_k=\binom nk$ and therefore $P=\sum_{k=0}^n\binom{n}{k}Y^k=(1+Y)^n$ by the binomial theorem. Now for every part $A$ of size $k$ you want to contribute $2^k$ ; since such a part contributed $1$ to the coefficient $c_k$ of $Y^k$ , your summation simply becomes the result of substituting $Y=2$ into $P$ ; this gives $P[Y:=2]=(1+2)^n=3^n$ as value.
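Since you mention being a programmer: the identity is easy to confirm by brute force (a sketch of my own using the standard itertools powerset recipe):

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, via the standard itertools recipe."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

for n in range(7):
    # sum over A in P(X) of |P(A)| = 2^|A| should equal 3^n
    total = sum(2 ** len(A) for A in powerset(range(n)))
    assert total == 3 ** n
print("sum of |P(A)| over A in P(X) equals 3^n")
```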
|combinatorics|binomial-coefficients|
1
Kernel of an operator defined by a power series
Let $K = K(x,y) \in L^{2}(\mathbb{R}^{d}\times \mathbb{R}^{d})$ . This function defines an integral operator on $L^{2}(\mathbb{R}^{d})$ , which I denote by $K$ , as follows: $$(Kf)(x) = \int_{\mathbb{R}^{d}}K(x,y)f(y)dy$$ This operator $K$ is proven to be a Hilbert-Schmidt operator, with Hilbert-Schmidt norm $\|K\|_{\mathcal{J}_{2}}$ . Analogously, the complex conjugate $\overline{K} = \overline{K(x,y)}$ induces an operator $\overline{K}$ on $L^{2}(\mathbb{R}^{d})$ . Next, consider the operators $A_{K}$ and $B_{K}$ defined by its power series: $$A_{K} = \sum_{n=0}^{\infty}\frac{(K\overline{K})^{n}K}{(2n+1)!} \quad \mbox{and} \quad B_{K} = \sum_{n=1}^{\infty}\frac{(K\overline{K})^{n}}{(2n)!}$$ These two series converge in the Hilbert-Schmidt norm, so these operators are well-defined as Hilbert-Schmidt operators. Hence, both $A_{K}$ and $B_{K}$ can be written as integral operators, with integral kernels $A_{K}(x,y)$ and $B_{K}(x,y)$ , that is: $$(A_{K}f)(x) = \int_{\mathbb{R}^{d}}A_{K}(x
This is more of a comment that wouldn't fit rather than a proper answer, but we can find "simple" expressions of $A_K(y,x)$ and $B_K(y,x)$ relative to $A_{K^*}$ and $B_{K^*}$ thanks to the properties of Hilbert-Schmidt operators which might help. The set $\mathcal{I}_2$ of Hilbert-Schmidt operators is a $*$ -ideal, and for an integral operator we even have that the adjoint $K^*$ of $K$ is exactly the integral operator tied to $K^* : (x,y) \mapsto \overline{K(y,x)} = \overline{K}(y,x)$ . Now, the action of $*$ on operators is an antilinear isometry on $\mathcal{I}_2$ , therefore bounded, meaning that it commutes with infinite series with respect to the Hilbert-Schmidt norm, hence: $$\begin{split}A_K^* = \left(\sum_{n=0}^{\infty}\frac{\left(K\overline{K}\right)^{n}K}{(2n+1)!}\right)^* &= \sum_{n=0}^{\infty}\frac{\left(\left(K\overline{K}\right)^{n}K\right)^*}{(2n+1)!}\\ &= \sum_{n=0}^{\infty}\frac{K^*\left(\overline{K}^*K^*\right)^{n}}{(2n+1)!}\\ &= \sum_{n=0}^{\infty}\frac{\left(K^*\
|functional-analysis|analysis|operator-theory|
0
Proving that a strongly convex function is coercive
I am having trouble with this proof. I am given the following 2 definitions: 1) A function $f$ is coercive if $\lim_{||x|| \rightarrow \infty} f(x) = \infty$ 2) A $C^2$ function $f$ is strongly convex if there exists a constant $c_0 > 0$ such that: $(x - y)^T (\nabla f(x) - \nabla f(y)) \geq c_0 ||x - y||^2 \hspace{5mm}$ $\forall x,y \in \mathbb{R}^n$ The question is to show that if $f$ is strongly convex then it is coercive. I am only allowed to use basic theorems such as Taylor expansion, triangle inequality etc. However, I can use the fact that a function is coercive $\iff$ all its level sets are compact. My instinct is that the answer can be obtained by a doing a Taylor expansion and manipulating the result, but I've been stuck for days using this approach. Any help would be greatly appreciated.
Another route would be by using the alternative characterizations of strong convexity. Def.: A function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is strongly convex if $\exists \mu > 0$ so that $\forall x, y \in \mathbb{R}^n$ and $\lambda \in [0, 1]$ , $f(\lambda x + (1-\lambda) y) + \mu \lambda (1-\lambda) ||x -y||^2 \leq \lambda f(x) + (1-\lambda) f(y).$ Further: Lemma: A continuously differentiable $f$ is strongly convex iff $ \exists \mu >0$ , so that $\forall x, y \in \mathbb{R}^n$ , $ \nabla f(x)^T(y-x) + \mu ||x-y||^2 \leq f(y) - f(x).$ With the second characterisation, which can be obtained for a strongly convex function in a very similar way as the characterisation for convex $f$ from Lemma 2 in the first answer, we can set $x=0$ and obtain $f(y) \geq \nabla f(0)^Ty + \mu ||y||^2 + f(0) \geq -||\nabla f(0)||\cdot||y|| + \mu ||y||^2 + f(0), \forall y \in \mathbb{R}^n$ by Cauchy–Schwarz, and clearly $f(y) \rightarrow \infty$ if $||y|| \rightarrow \infty$ , since the quadratic term dominates the linear one.
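A quick numerical illustration (my own, with a concrete example): $f(y)=y^2+\cos y$ on $\mathbb{R}$ is strongly convex since $f''(y)=2-\cos y\ge 1$, and the quadratic lower bound obtained above with $x=0$ (here with $\mu=\tfrac12$) visibly forces $f(y)\to\infty$ as $|y|\to\infty$:

```python
import math

f = lambda y: y * y + math.cos(y)   # f''(y) = 2 - cos(y) >= 1, strongly convex
mu = 0.5                            # a valid modulus for this f

# lower bound from the lemma with x = 0:  f(y) >= f(0) + f'(0)*y + mu*y^2
# here f(0) = 1 and f'(0) = 0
for k in range(-1000, 1001):
    y = k / 10.0
    assert f(y) >= 1.0 + mu * y * y - 1e-12
print("quadratic lower bound holds on [-100, 100]; f is coercive")
```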
|multivariable-calculus|convex-optimization|nonlinear-optimization|coercive|
0
Can a discontinuous function be increasing or decreasing
How would we decide increasing or decreasing function if the function is not differentiable? Can a discontinuous function be increasing or decreasing? And can it be monotonic?
Here's an example of a function $f$ which is discontinuous in every neighbourhood and strictly increasing everywhere. Let $r_0,r_1,r_2,...$ be an enumeration of the set $D$ of finite decimals between $r_0=0$ and $1$ such that the $0$ -place decimals are counted first, the $1$ -place decimals next, and so on.* Let $f(r_0)=0$ and, for each $k\in\Bbb N$ , define $f(r_{k+1})=f(r_k)+2^{-k}$ . For $x\in[0,1)\setminus D$ , define $$f(x):=\sup\{f(r):r\in D\,;\,r<x\}.$$ All other values of $f$ are determined by the identity $f(x+1)=f(x)+2\,$ ( $x\in\Bbb R$ ). * For example, $(r_0,r_1,...)=(0; 0.1, ...,0.9;0.01,...,0.99;\,...)$ .
|calculus|functions|derivatives|monotone-functions|
0
Problems regarding the denseness of $a\mathbb{Z}+\mathbb{Z}$
I came across a problem related to the denseness of $a\mathbb{Z}+\mathbb{Z}$ . Explicitly, the problem asks: Let $a$ be irrational. (a) For any $y \in [0,1]$ , prove that there are infinitely many $n \in \mathbb{N}$ so that $\{na\} \in [y-\epsilon, y+\epsilon]$ . (b) Prove that there exists $b \in [0,1]$ such that there are infinitely many $n \in \mathbb{N}$ with $\{na\} \in [b-9^{-n}, b+9^{-n}]$ . Where $\{x\}$ denotes the fractional part of $x$ . This is a contest problem ( Bulgaria EGMO TST ). Being always interested in analysis, I tried solving the problem. This is my attempt: Let $\{x\}$ denote the fractional part of $x$ . We choose $n$ big enough so that $\frac{1}{n} < \epsilon$ . For $k=0,1,...,n$ , we let $$ x_k = \{ka \} $$ Divide $[0,1)$ into $n$ equal intervals; then two of the $x_i, x_j$ ( $i < j$ ) fall in the same interval, so $|\{(j-i)a\}| < \frac{1}{n}$ . That is, there exist $p = j-i \leq n$ and $q$ so that $$ |pa - q | < \frac{1}{n} < \epsilon. $$ This proves Dirichlet's approximation theorem. In fact, an implication of which is that $\{na\}$ is dense in $[0,1]$ .
Since $\{ n\alpha \}$ is dense in $[0,1]$ , Part $(a)$ is almost obvious. For Part $(b)$ , let's assume: $$B_1=[\{m_1\alpha\} -\frac{1}{9^{m_1}}, \{m_1\alpha\} +\frac{1}{9^{m_1}}],$$ where $m_1 \in \mathbb N.$ It's easy to choose $m_1$ such that $B_1 \subset [0,1]$ as we will see a similar reasoning in the following. Take $b^* \in B_1$ and $\epsilon >0 $ such that: $$[b^*-\epsilon,b^*+\epsilon] \subset B_1=[\{m_1\alpha\} -\frac{1}{9^{m_1}}, \{m_1\alpha\} +\frac{1}{9^{m_1}}].$$ By Part $(a)$ , there are infinitely many $n$ such that $\{n\alpha\} \in [b^*-\frac{\epsilon}{2},b^*+\frac{\epsilon}{2}].$ Pick such a sufficiently large $n$ , say $m_2$ , such that: $$[\{m_2\alpha\} -\frac{1}{9^{m_2}}, \{m_2\alpha\} +\frac{1}{9^{m_2}}] \subset [b^*-\epsilon,b^*+\epsilon] .$$ Hence, we have: $$B_2=[\{m_2\alpha\} -\frac{1}{9^{m_2}}, \{m_2\alpha\} +\frac{1}{9^{m_2}}] \subset [\{m_1\alpha\} -\frac{1}{9^{m_1}}, \{m_1\alpha\} +\frac{1}{9^{m_1}}]=B_1.$$ Doing the same procedure infinitely many times, we will obtain a nested sequence of closed intervals $B_1 \supset B_2 \supset \cdots$ ; any $b$ in their intersection satisfies $\{m_k\alpha\} \in [b-9^{-m_k}, b+9^{-m_k}]$ for every $k$ , as desired.
|analysis|contest-math|compactness|additive-combinatorics|
0
Partial derivative of sample standard deviation w.r.t individual data points
I would like to derive the partial derivative $\delta f \,/ \, \delta x_j $ for $$ f(x_1,x_2,...,x_{j-1},x_j,x_{j+1},...,x_n)=\sqrt{\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{n-1}} $$ which is the partial derivative of the sample standard deviation with respect to a specific data point $x_j$ . Using the chain rule of differentiation: $$ \frac{\delta f}{\delta x_j}=\frac{\delta f}{\delta u}\frac{\delta u}{\delta x_j} $$ $$ u=\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{n-1} $$ $$ \frac{\delta f}{\delta u}=\frac{1}{2\sqrt{u}} $$ $$ \frac{\delta u}{\delta x_j}=\frac{2}{n-1} \Bigl( \frac{x_j}{n}-\bar{x} \Bigr) $$ $$ \frac{\delta f}{\delta x_j} = \frac{\frac{1}{n-1}(\frac{x_j}{n}-\bar{x})}{s} $$ where $s$ is the sample standard deviation itself. However, I think this is incorrect. In a simple example: $x_1=1, x_2=2, x_3=3$ , then my estimate of $\delta f \,/ \, \delta x_j $ is always negative. It seems to me that $\delta f \,/ \, \delta x_3 $ should be positive. Does anyone know where I might have messed up?
Sorry for the late answer! I got $\frac{\delta u}{\delta x_j} = \frac2{n-1}(x_j - \overline{x}) $ and $\frac{\delta f}{\delta x_j} = \frac{x_j-\overline{x}}{(n-1)s} $. It seems like this formula agrees with sympy for your example:

```python
import sympy

x1, x2, x3 = sympy.symbols('x1 x2 x3')
mean = (x1 + x2 + x3) / 3
sigma2 = ((x1 - mean)**2 + (x2 - mean)**2 + (x3 - mean)**2) / (3 - 1)
sympy.diff(sigma2, x1)
```

which gives $\frac{2x_1}{3} -\frac{x_2}3 - \frac{x_3}3$, matching $\frac{\delta \sigma^2}{\delta x_j} = \frac2{n-1}(x_j-\overline{x}) = \frac{2x_1}{3} -\frac{x_2}3 - \frac{x_3}3$.
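The formula for the standard deviation itself can also be checked against central finite differences, without sympy (a sketch of my own using the stdlib `statistics` module):

```python
import statistics

def dstd(xs, j):
    """Analytic partial derivative of the sample std w.r.t. x_j:
    (x_j - mean) / ((n - 1) * s)."""
    n, mean = len(xs), statistics.fmean(xs)
    s = statistics.stdev(xs)
    return (xs[j] - mean) / ((n - 1) * s)

xs = [1.0, 2.0, 3.0, 7.0]
h = 1e-6
for j in range(len(xs)):
    up = xs.copy(); up[j] += h
    dn = xs.copy(); dn[j] -= h
    numeric = (statistics.stdev(up) - statistics.stdev(dn)) / (2 * h)
    assert abs(numeric - dstd(xs, j)) < 1e-6
print("analytic formula matches central differences")
```

Note in particular that for data above the mean the derivative is positive, as you expected for $x_3$.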
|derivatives|summation|partial-derivative|standard-deviation|
0
Kernel of an operator defined by a power series
Let $K = K(x,y) \in L^{2}(\mathbb{R}^{d}\times \mathbb{R}^{d})$ . This function defines an integral operator on $L^{2}(\mathbb{R}^{d})$ , which I denote by $K$ , as follows: $$(Kf)(x) = \int_{\mathbb{R}^{d}}K(x,y)f(y)dy$$ This operator $K$ is proven to be a Hilbert-Schmidt operator, with Hilbert-Schmidt norm $\|K\|_{\mathcal{J}_{2}}$ . Analogously, the complex conjugate $\overline{K} = \overline{K(x,y)}$ induces an operator $\overline{K}$ on $L^{2}(\mathbb{R}^{d})$ . Next, consider the operators $A_{K}$ and $B_{K}$ defined by its power series: $$A_{K} = \sum_{n=0}^{\infty}\frac{(K\overline{K})^{n}K}{(2n+1)!} \quad \mbox{and} \quad B_{K} = \sum_{n=1}^{\infty}\frac{(K\overline{K})^{n}}{(2n)!}$$ These two series converge in the Hilbert-Schmidt norm, so these operators are well-defined as Hilbert-Schmidt operators. Hence, both $A_{K}$ and $B_{K}$ can be written as integral operators, with integral kernels $A_{K}(x,y)$ and $B_{K}(x,y)$ , that is: $$(A_{K}f)(x) = \int_{\mathbb{R}^{d}}A_{K}(x
Let $L^\intercal$ be the transpose operator of $L$ . Using the integral you can easily prove that $$(L_1 L_2)^\intercal=L_2^\intercal L_1^\intercal.$$ Indeed, \begin{align} \left(L_1L_2\right)^{\intercal}(x, y) = \left(L_1L_2\right)(y, x) &= \int_{\mathbb R^d} L_1(y, z) L_2(z, x) \mathrm d z\\ &= \int_{\mathbb R^{d}} L_2^\intercal(x, z) L_1^\intercal(z, y)\mathrm d z = \left(L_2^\intercal L_1^\intercal\right)(x,y) \end{align} Now apply this and you will have $$A_{K}^\intercal = A_{K^\intercal} = A_K$$ same thing for $B_K$ For $KA_K$ and $K B_K$ they are not a prior symmetric, because: $$\left(KA_K\right)^\intercal = A_{K^\intercal} K^{\intercal} = A_KK.$$
|functional-analysis|analysis|operator-theory|
1
How to find this polynomial chromatic function for this graph?
I have this graph here (picture), and I tried to calculate this myself by removing an edge and contracting an edge (below). I got my polynomial function to be $\lambda(\lambda-1)^{4}(\lambda-2) - \lambda(\lambda-1)^{3}(\lambda-2)$ . But that is not correct according to the answers: when I plug in $3$ I get $24$, but the correct answer they get in the answers section is $30$, and they have $P_G = \lambda(\lambda − 1)(\lambda − 2)^{2} (\lambda^2-2\lambda+2)$ . What have I done wrong here? I don't really know what I have done wrong in this question to get the correct chromatic polynomial.
The chromatic polynomial of a triangle is $\lambda(\lambda-1)(\lambda-2)$ . It is well known that the chromatic polynomial of $C_k$ is $(\lambda-1)^k +(-1)^k(\lambda-1)$ , so the chromatic polynomial of $C_5$ is $(\lambda-1)^5 +(-1)^5(\lambda-1) = (\lambda-1)((\lambda-1)^4 -1) = \lambda(\lambda-1)(\lambda-2)(\lambda^2-2\lambda+2)$ (factorization obtained by applying the standard identity twice). To find the total number of colorings, we take the product, but we need to divide by $\lambda(\lambda-1)$ because the choice of a coloring for the triangle forces the colors of two vertices for the $C_5$ , hence we get the correct result: $$\lambda(\lambda-1)(\lambda-2)^2(\lambda^2-2\lambda+2)$$
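As a sanity check (my own code; I'm assuming, as the answer implies, that the graph is a triangle and a $5$-cycle glued along one edge), a brute-force count of proper $3$-colorings reproduces the value $30$:

```python
from itertools import product

# Assumed graph: triangle on {0,1,2} and 5-cycle 0-3-4-5-1-0 sharing edge {0,1}.
edges = [(0, 1), (1, 2), (0, 2), (0, 3), (3, 4), (4, 5), (5, 1)]
n_vertices = 6

def count_colorings(k):
    """Count proper k-colorings by exhaustive enumeration."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n_vertices)
    )

lam = 3
poly = lam * (lam - 1) * (lam - 2) ** 2 * (lam ** 2 - 2 * lam + 2)
assert count_colorings(3) == poly == 30
print(count_colorings(3))
```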
|graph-theory|coloring|
0
"Simple" group of order $1004913$ problem, fixed point part
Let $G$ be a group of order $1004913 = 3^3 \cdot 7 \cdot 13 \cdot 409$ . We suppose that $G$ is simple. We want to obtain a contradiction. This is the Exercise 29 in Chapter 6.2 of Dummit-Foote. As far as I know, in MSE, there are two QAs: QA1 , QA2 . And I found an exercise and a solution . I have determined $n_3, n_7, n_{13}, n_{409}$ uniquely as mentioned in QAs above. We consider the action of $G$ on $Syl_{409}(G)$ by conjugation. Let $G \to S_{819}$ be the permutation representation of this. I see that elements of order $3$ in $N_G(P)$ fix exactly $3$ points in $S_{819}$ , where $P \in Syl_{409}(G)$ . However, I don’t know how to obtain a contradiction by this. According to QAs above, elements of order $3$ in $G$ fix exactly $9$ points in $S_{819}$ , which follows from Burnside’s lemma or some higher theory. In Dummit-Foote, Burnside’s lemma is stated in the last chapter about character theory. Therefore, I wonder if there is a solution without it, i.e., solution with tools in Cha
Let $\left|G\right|=1004913=3^{3}\cdot 7\cdot 13\cdot 409$ and suppose $G$ is simple. Then the Sylow $3$ -subgroup is elementary abelian and the class equation implies there are at most $2$ cycle types for elements of order $3$ . On the other hand, using the permutation representation in $S_{819}$ , the normalizers for the Sylow $7$ , $13$ , and $409$ subgroups produce $3$ distinct cycle types for elements of order $3$ , a contradiction. It must be that $G$ is not simple. Thus the result.
|group-theory|finite-groups|sylow-theory|simple-groups|
0
Chain rule for partial derivatives from properties of differential
I'm having some head scratching with the chain rule applied to partial derivatives. I know that if $f$ and $g$ are differentiable functions then $d(f \circ g(x)) = df(g(x))dg(x)$ . This is quite clear. I know also that $\frac{\partial}{\partial x_i}f(x) = ⟨\nabla f(x), e_i⟩$ if $e_i$ is the $i$ -th base vector. However, I don't really understand how these two things combine to achieve the partial derivative of composite functions. Suppose for example I have to calculate $\frac{\partial}{\partial x} f(g(x, y), h(x,y))$ . I don't really understand how to do this.
Suppose $f = (f_1, \ldots, f_q)$ is a function with values in $\mathbf{R}^q$ and defined over an open set of $\mathbf{R}^p;$ likewise, let $g = (g_1, \ldots, g_r)$ defined on an open set containing the image set of $f$ and with values on $\mathbf{R}^r.$ Then $Df(x)$ is of dimensions $(q,p)$ and $Dg(y)$ is of dimensions $(r,q);$ in addition, the rows of $Df(x)$ or $Dg(y)$ are the derivatives of $f_j$ and $g_i,$ respectively, and the columns are the partial derivatives. This is so because the partial derivative is the derivative of $t \mapsto f(x + te_k)$ and the chain rule demonstrates this derivative at $t = 0$ is $Df(x) e_k$ which is the $k$ th column. Whereas the function $f_k$ is obtained by dot product $x \mapsto e_k \cdot f(x) = e_k^\intercal f(x)$ (if $f(x)$ is written as a $q$ -column vector) whose derivative is $e_k^\intercal Df(x)$ (chain rule again). Therefore, the $k$ th partial derivative of $g \circ f$ at $x$ (with $y = f(x)$ ) is $D(g \circ f)(x) e_k = Dg(y) (Df(x) e_k)$ , i.e. $Dg(y)$ applied to the $k$ th partial derivative of $f$ at $x$ .
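For the concrete example in the question (a sketch of my own, with hypothetical choices of $g$, $h$ and $f$): with $F(x,y) = f(g(x,y), h(x,y))$, the rule above gives $\partial F/\partial x = f_u\,g_x + f_v\,h_x$ evaluated at $(u,v)=(g,h)$, which we can confirm by central differences:

```python
import math

g = lambda x, y: x * y
h = lambda x, y: x + math.sin(y)
f = lambda u, v: u * u + math.exp(v)

def dFdx(x, y):
    """Chain rule: dF/dx = f_u(g,h) * g_x + f_v(g,h) * h_x."""
    u, v = g(x, y), h(x, y)
    f_u, f_v = 2 * u, math.exp(v)   # partials of f
    g_x, h_x = y, 1.0               # partials of g and h w.r.t. x
    return f_u * g_x + f_v * h_x

x, y, eps = 1.3, 0.7, 1e-7
numeric = (f(g(x + eps, y), h(x + eps, y))
           - f(g(x - eps, y), h(x - eps, y))) / (2 * eps)
assert abs(numeric - dFdx(x, y)) < 1e-5
print("chain rule for the partial derivative checks out")
```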
|multivariable-calculus|
0
How many sets $(A_1,A_2,\cdots, A_k)$ which are subsets of $\{1,2,\cdots ,n\}$
For given natural numbers $n,k$, how many $k$-tuples $(A_1,A_2,\cdots ,A_k)$ are there such that $$A_1\subseteq A_2\subseteq\cdots \subseteq A_k\subseteq \{1,2,3,\cdots ,n\}?$$ I've thought to prove by induction on $k$ that the number of $k$-tuples is equal to $$\sum_{t=0}^n{n\choose t}k^t=(k+1)^n.$$ Though I have no idea what happens when you add another subset, my idea was, when I add another subset, to let it be $A_1$ and shift every other subset index by $1$, then split it into $n+1$ cases such that $|A_2|=t$ for each $t$ from $0$ to $n$. Maybe there is a better way.
A direct combinatorial proof that the number of these $k$ -tuples equals $(k+1)^n$ , generalizing that one for the case $k=2$ , is the following: Each element of $\{1,2,\dots,n\}$ can be chosen to be: in none of the $A_i$ s (case $0$ ) only in the largest one, $A_k$ (case $1$ ) only in $A_k,A_{k-1}$ (case $2$ ) $\dots$ in all of them but $A_1$ (case $k-1$ ) in all of them (case $k$ ). These $k+1$ choices being independent for each element, there are $(k+1)^n$ choices for the $k$ -tuples $(A_1,\dots,A_k)$ .
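The count can be confirmed by brute force for small parameters (a sketch of my own, enumerating all $k$-tuples of subsets and keeping the nested ones):

```python
from itertools import combinations, product

def subsets(n):
    """All subsets of {0,...,n-1} as frozensets."""
    return [frozenset(c) for r in range(n + 1)
            for c in combinations(range(n), r)]

def count_chains(n, k):
    """Count k-tuples A_1 <= A_2 <= ... <= A_k of subsets of an n-set."""
    S = subsets(n)
    return sum(
        all(t[i] <= t[i + 1] for i in range(k - 1))   # <= is subset for frozensets
        for t in product(S, repeat=k)
    )

for n in range(4):
    for k in range(1, 4):
        assert count_chains(n, k) == (k + 1) ** n
print("number of nested k-tuples equals (k+1)^n")
```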
|combinatorics|contest-math|
0
Outer product as an operator in an infinite Hilbert space
The outer product between a bra-ket $|a\rangle\langle a|$ where if $|a\rangle\in\mathcal{H}$ and $\langle a|\in\mathcal{H}_{dual}$ is a vector in the tensor vector space formed by the Hilbert space and its dual. Then it can be related to a linear operator due to the isomorphism with the space of all endomorphisms of the Hilbert space: $$\mathcal{H}\otimes\mathcal{H}_{dual}\simeq End(\mathcal{H})$$ This means that for any vector in the tensor vector space I will be able to find a one-to-one correspondence with a linear transformation $\rho:V\to V$ where $\rho\in End(\mathcal{H})$ . This one-to-one correspondence arises because in finite dimensions, the space $\mathcal{H}\otimes\mathcal{H}_{dual}$ captures all bilinear forms on $\mathcal{H}$ and the set of linear transformations $End(\mathcal{H})$ also represents all linear maps on $\mathcal{H}$ . However, in infinite-dimensional spaces, this correspondence should break down because of the potential presence of additional elements in $\mathcal{H}\otimes\mathcal{H}_{dual}$ .
The isomorphism breaks for infinite-dimensional vector spaces, but in the opposite way to what you are thinking: the tensor product $\mathcal{H} \otimes \mathcal{H}^*$ is too small in this case, not too big. More formally, we can still define a map from $\mathcal{H} \otimes \mathcal{H}^*$ to the space $\mathrm{B}(\mathcal{H})$ of bounded linear operators on $\mathcal{H}$ . To do this, we assign to every rank $1$ tensor $\left|a\rangle\langle b\right|$ the linear map $\left|x\right> \mapsto \left|a\rangle\langle b|x\right>$ , and then extend using linearity and continuity. This map is injective, but for infinite-dimensional vector spaces there exist linear operators which cannot be obtained from tensors, for example, the identity operator. The linear operators in $\mathrm{B}(\mathcal{H})$ which correspond to tensors in the algebraic tensor product are exactly the operators of finite rank, and the linear operators which correspond to tensors in the topological tensor product are called Hilbert–Schmidt operators.
|vector-spaces|quantum-mechanics|quantum-information|
1
Sums and Products over sets of sets
Say I have a set of sets eg. $A:=\{\{1,2\},\{2,3\}\}$ . Does this formula $$\sum_\limits{A_i \in A}\prod_{j\in A_i}x_j$$ yield $x_1x_2+x_2x_3$ ? Or does it even make sense to define a sum over a set of sets?
I assume the product in the sum was meant to be $\prod_{j\in A_i}x_j$ . As long as some indexing set $I$ (possibly a set of sets) and the corresponding value for each index is something compatible with multiplication or addition then $\prod_{i\in I}$ or $\sum_{i\in I}$ respectively makes sense (a formal polynomial in this case). So even though the sum of $A_i$ s themselves may not make sense, we still have $\prod_{j\in A_i}x_j$ which makes sense as $A_i$ varies over elements of $A$ which $\sum_{A_i\in A}$ sums over.
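Concretely (a tiny sketch of my own with sample values), the notation evaluates exactly as you wrote:

```python
from math import prod

# sum over A_i in A of the product over j in A_i of x_j,
# for A = {{1,2},{2,3}}; this should equal x1*x2 + x2*x3
x = {1: 2.0, 2: 3.0, 3: 5.0}
A = [{1, 2}, {2, 3}]   # a set of sets (a list here; frozensets in a literal set)

value = sum(prod(x[j] for j in Ai) for Ai in A)
assert value == x[1] * x[2] + x[2] * x[3]
print(value)
```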
|elementary-set-theory|summation|products|
0
Information-theoretic Inequality
If we have two discrete RVs, X, and Y. How can we show: $$\sum_{x,y} p(x|y)p(y|x) \geq 1.$$ The question goes further with finding a sufficient and necessary condition for equality. My attempt: For equality, assuming that X, Y are independent will enable us to sum over each variable PMF and get exactly one. However, I am stuck with showing how the inequality holds in general, and I appreciate any hints and tips.
Applying Bayes' rule, $$ S:= \sum_{x,y} p(x|y) p(y|x) = \sum_{x,y} \frac{p(x,y)^2}{p(x) p(y)}. $$ Now let $q(x,y) := p(x) p(y)$ , which is the law with independent $X,Y$ that matches the marginal distributions of $p$ . Then we have $$ S = \sum_{x,y} q(x,y) \cdot \left(\frac{p(x,y)}{q(x,y)}\right)^2 =\mathbb{E}_q[ f(X,Y)^2],$$ where $f(x,y) = p(x,y)/q(x,y)$ . But by Jensen's inequality, $$ S \ge (\mathbb{E}_q[f(X,Y)])^2 = 1.$$ In fact, you're essentially looking at the $\chi^2$ -divergence, which is defined as $$D_{\chi^2}(P\|Q) = \mathbb{E}_Q\left[ \left( \frac{\mathrm{d}P}{\mathrm{d}Q}(X) - 1\right)^2\right],$$ which in the discrete case exactly works out to $\sum_{x} \frac{p^2(x)}{q(x)} - 1.$ This is an $f$ -divergence, and has many of the nice properties of KL divergence. In particular, $D_{\chi^2} \ge 0$ , with equality if and only if $P = Q$ (which in your case translates to $p(x,y) = p(x) p(y)$ , i.e., independence). The actual functional you're studying is (roughly) a $\chi^2$ -
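A quick numerical check of both the inequality and the equality condition (a sketch of my own, with a random joint pmf):

```python
import random

rng = random.Random(1)

# random joint pmf p(x, y) on a 3x4 grid
p = [[rng.random() for _ in range(4)] for _ in range(3)]
Z = sum(map(sum, p))
p = [[v / Z for v in row] for row in p]

px = [sum(row) for row in p]                              # marginal of X
py = [sum(p[i][j] for i in range(3)) for j in range(4)]   # marginal of Y

# S = sum_{x,y} p(x|y) p(y|x) = sum p(x,y)^2 / (p(x) p(y)) >= 1
S = sum(p[i][j] ** 2 / (px[i] * py[j]) for i in range(3) for j in range(4))
assert S >= 1 - 1e-12

# equality for the independent coupling q(x, y) = p(x) p(y)
S_ind = sum(px[i] * py[j] for i in range(3) for j in range(4))
assert abs(S_ind - 1) < 1e-12
print(S)
```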
|probability|inequality|probability-distributions|conditional-probability|information-theory|
0
Is ${\log(x)}^2$ uniformly continuous on $(1, \infty)$?
Is $f(x)={\log(x)}^2$ uniformly continuous on $(1,\infty)$ ? I have found the above question in an old exam and was wondering how to solve it. Actually, a few friends and I have found 2 different solutions, one showing that the function is uniformly continuous, while the other showing that it isn't, and we can't figure out which one is wrong. Here is the first one: Let $\varepsilon = 1$ and $\delta>0$ be given. Let $x = \max\{\frac{\delta}{2},e\} > 1$ , and $y = x + \frac{\delta}{2} > 1$ . Then we have $$|x - y| = \left|x - \left(x + \frac{\delta}{2}\right)\right| = \frac{\delta}{2} < \delta$$ and \begin{align*} |f(x)-f(y)| &= \left|{\log(x)}^2 - {\log(y)}^2 \right| \\[0.5em] &= \left|(\log x + \log y)(\log x - \log y) \right| \\[0.5em] &\geq |(\log e)(\log x - \log y)| \\[0.5em] &= |\log x - \log y\:| \\[0.5em] &= \left| \log \left(x+\frac{\delta}{2}\right) - \log x\right| \\[0.5em] &= \left| \log \left( 1+ \frac{\delta}{2x} \right)\right| \\[0.5em] &\geq 1 \\ &= \varepsilon, \end{align*} so $f$ is not uniformly continuous on $(1,\infty)$ .
You already pointed it out. $f'$ is bounded on $(1, \infty)$ , whence $\lvert f(x) - f(y) \rvert \leq \sup_{\xi \in (1, \infty)}\lvert f'(\xi) \rvert \lvert x -y \rvert$ for all $x, y \in (1, \infty)$ . This is uniform continuity (Lipschitz continuity even). Concerning your first argument: The second line already is wrong. There is no way that $\tfrac{\delta}{2} >\delta$ . Moreover, it is logically flawed. To "disprove" uniform continuity, you need to show that there is $\varepsilon >0$ such that for every $\delta> 0$ there is a pair $x_\delta, y_\delta \in (1, \infty)$ such that $\lvert x_\delta - y_\delta \rvert < \delta$ and $\lvert f(x_\delta) - f(y_\delta)\rvert \geq \varepsilon$ . The second line really blurs whether you are attempting to show this. And as Martin R. correctly points out in the comments: For the proof to work, $\delta$ has to be allowed to become arbitrarily small. But then $\left \lvert \log\left( 1+ \tfrac{\delta}{2x} \right) \right \rvert$ approaches $0$ .
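The Lipschitz bound in the answer can be made concrete: on $(1,\infty)$, $f'(x) = 2\ln(x)/x$ attains its maximum $2/e$ at $x = e$, so $|f(x)-f(y)| \le (2/e)|x-y|$. A quick numerical spot-check (the sampling range is my choice):

```python
import math
import random

# Check |f(x) - f(y)| <= (2/e) * |x - y| for f(x) = ln(x)^2 on (1, inf),
# where 2/e is the supremum of f'(x) = 2*ln(x)/x, attained at x = e.
random.seed(1)
f = lambda x: math.log(x) ** 2
L = 2 / math.e  # Lipschitz constant

for _ in range(10000):
    x = 1 + random.random() * 100
    y = 1 + random.random() * 100
    assert abs(f(x) - f(y)) <= L * abs(x - y) + 1e-12
```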
|real-analysis|continuity|uniform-continuity|
0
Proving Hölder's Inequality
Let $f,g,\alpha:[a,b]\rightarrow \mathbb{R}$ with $\alpha$ increasing and $f,g \in \mathscr{R}(\alpha)$, and $p,q>0$ with $\frac{1}{p}+\frac{1}{q}=1$. Prove that $$\left|\int_a^b f(x)g(x)d\alpha\right|\leq \left(\int_a^b \left|f(x)\right|^p d\alpha \right)^{1/p} \left(\int_a^b \left|g(x)\right|^q d\alpha \right)^{1/q}$$ I am using Young's inequality, which states that for $u,v>0$, $uv\leq \frac{1}{p}u^{p}+\frac{1}{q}v^{q}$. This gets me as far as showing that $$\left|\int_a^b f(x)g(x)d\alpha\right|\leq \int\left( \frac {1}{p}|f(x)|^p +\frac{1}{q}|g(x)|^q\right)d\alpha$$ But here I'm stuck. I'm vaguely thinking that I could use the fact that $\frac {1}{p}|f(x)|^p +\frac{1}{q}|g(x)|^q$ is a convex combination and so if I do some Jensen's inequality type thing, but I can't figure out a way to make it work out.
NOTE to the moderators: please don't delete my answer without reason; I have spent a lot of time writing it, and it goes further and uses different methods than the other answers above. Let us go further and prove something a little more general, which I hope will help future readers and students like me. Prove that with $ \frac{1}{p} + \frac{1}{q} = \frac{1}{r}$ and $ p, q ,r \in [0; \infty ] $ we have $$ (\int |f(x)g(x)|^r dx)^\frac{1}{r} = \| fg\|_r \leq \| f \|_p \| g \|_q = (\int |f(x)|^p dx)^\frac{1}{p} (\int |g(x)|^q dx)^\frac{1}{q}. $$ To simplify notation we write $F = \| f \|_p , G = \| g \|_q$ . I/ Special cases 1- If $F=0$ and/or $G=0$, then $0 \leq \| fg\|_r \leq \| f \|_p \| g \|_q =0 $, because $ \| \cdot \|_r \geq 0$. Thus $ \| fg\|_r = 0$ too, and so the inequality holds. 2- If $F= \infty$ and/or $G= \infty $, then obviously $\| fg\|_r \leq F \cdot G = \infty$ . 3- If for example $q = \infty \Rightarrow \frac{1}{q}=\frac{1}{\infty}
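The generalized inequality $\|fg\|_r \le \|f\|_p \|g\|_q$ for $1/p + 1/q = 1/r$ can be spot-checked numerically for discrete "functions" under the counting measure (the exponents and vector sizes below are my choices):

```python
import random

# Spot-check of generalized Hölder: ||fg||_r <= ||f||_p * ||g||_q
# when 1/p + 1/q = 1/r, for vectors under the counting measure.
random.seed(2)

def norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1 / p)

p, q = 3.0, 6.0
r = 1 / (1 / p + 1 / q)          # here r = 2

for _ in range(1000):
    f = [random.uniform(-1, 1) for _ in range(10)]
    g = [random.uniform(-1, 1) for _ in range(10)]
    fg = [a * b for a, b in zip(f, g)]
    assert norm(fg, r) <= norm(f, p) * norm(g, q) + 1e-12
```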
|real-analysis|integration|inequality|
0
What exactly is the orbit-stabilizer theorem?
Obviously, being a professional group theorist, I know what the orbit-stabilizer theorem is. Or at least I thought I did. I thought that the orbit-stabilizer theorem was that if $G$ is a finite group acting on a set $X$ , with stabilizer $H$ of a point $x$ , then $|G|$ is the product of $|H|$ and the length of the $G$ -orbit of $x$ . But now I see another version of the orbit-stabilizer theorem, which is that if $G$ acts transitively on a set $X$ , and $H$ is the point stabilizer, then the action is equivalent to the action on the cosets of $H$ . Applying Lagrange's theorem yields the statement that I think is the orbit-stabilizer theorem. I have no name for this statement about transitive actions being equivalent to coset actions. So my question is: how widespread is this second version? Wikipedia calls the second one the orbit-stabilizer theorem, but a sample of lecture notes that Google served up all used the first version. This is important because, in a second course on group theo
The way I state it in my final year group theory course is: Let $G$ act on $\Omega$ and let $\alpha \in \Omega$ . Then there is a bijection between the right cosets $G_\alpha g$ of $G_\alpha$ in $G$ and $\alpha^G$ defined by $G_\alpha g \mapsto \alpha^g$ . In particular, if $G$ is finite then $|G|=|\alpha ^G| |G_\alpha |$ . That has the advantage of giving more information than the "in particular" part, which is often useful, and also it does not restrict the statement to finite groups. I state the equivalence of transitive actions with coset actions as a separate result.
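The "in particular" statement is easy to verify on a concrete finite example; here is a small sketch with $G = S_4$ acting naturally on $\{0,1,2,3\}$ (the choice of group is mine, for illustration):

```python
from itertools import permutations

# Verify |G| = |orbit(alpha)| * |stabilizer(alpha)| for G = S_4
# acting on {0, 1, 2, 3}; a permutation g acts by alpha -> g[alpha].
G = list(permutations(range(4)))

for alpha in range(4):
    orbit = {g[alpha] for g in G}
    stab = [g for g in G if g[alpha] == alpha]
    assert len(G) == len(orbit) * len(stab)   # 24 == 4 * 6
```

The bijection in the full statement pairs the coset $G_\alpha g$ with the image $\alpha^g$; the count above is the shadow of that bijection.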
|group-theory|permutations|soft-question|finite-groups|orbit-stabilizer|
0
What is $(-8)^\frac{2}{3}$?
I am confused about something. I want to compute $(-8)^\frac{2}{3}$ . Is it $(-2^3)^\frac{2}{3}$ = $(-2)^{3\cdot\frac{2}{3}}$ = $(-2)^2=4$ ? Is there any problem here because the base is negative? Thanks.
As $2$ and $3$ are coprime integers, raising to the ( $\frac23$ )th power (equivalently, taking the ( $\frac32$ )th root) is essentially squaring and cubing. While generally, using the simple $p$ th power then $q$ th root to calculate the ( $\frac pq$ )th power works for only positive real bases, it works in the case of negative numbers for a ( $\frac23$ )th power. $$\sqrt[3]{(-8)^2}=\sqrt[3]{64}=4$$ $$(\sqrt[3]{-8})^2=(-2)^2=4 $$ This does not seem to work for all exponents. $$\sqrt{(-1)^3}=\sqrt{-1}=i$$ if $i^2=-1$ However, $$(\sqrt{-1})^3=i^3=-i\neq i$$ But why? Let us look at the numerator and denominator. We were taking a square root, which is an even root, in this example. In the $(-8)^{\frac23}$ example, we were taking a cube root, which is an odd root. And here we have it. For all nonzero integers $z$ , none of the $2z$ th roots of negative real numbers are real numbers. But for a ( $2z+1)$ th root, every real number, including negative numbers, has a unique real ro
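Both orderings for the ( $\frac23$ )th power of $-8$ can be checked with real cube roots (the `cbrt` helper below is my own, since `x ** (1/3)` fails for negative floats):

```python
# Real cube root, valid for negative arguments too.
def cbrt(x):
    return -((-x) ** (1 / 3)) if x < 0 else x ** (1 / 3)

assert abs(cbrt((-8) ** 2) - 4.0) < 1e-9    # cube root of 64 is 4
assert abs(cbrt(-8.0) ** 2 - 4.0) < 1e-9    # (-2)^2 = 4: same value

# The even-root analogue fails over the reals:
# sqrt((-1)**3) = sqrt(-1) is not a real number at all.
```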
|algebra-precalculus|
0
Parallel lines diverge behind the observer in projective geometry
Of course parallel lines converge at a point at infinity in projective geometry, but visually, they appear to diverge as one gets closer and closer to the start of one's vision, i.e. they diverge behind the observer. I can't find much about this fact. How is this dealt with in the different projective spaces? Are there any interesting hyperbolic 'perspectives' on this fact?
The usual notion of parallelism is foreign to projective geometry. We need to go back to definitions, and the best way to do this is to define a projective space $P(E)$ associated with a vector space $E$ . A projective line $D_i$ is then by definition a projective linear manifold associated with a vector plane $P_i$ of $E$ . And $$D_1\cap D_2=P(P_1)\cap P(P_2)=P(P_1\cap P_2)$$ If $P_1\neq P_2 $ then $P_1\cap P_2 $ is a vector line $D$ and $P(D)$ is a point of $P(E)$ . All these statements are easily illustrated in $E:=\mathbb R^3$ :
|geometry|projective-geometry|projective-space|
0
Show by hand : $e^{e^2}>1000\phi$
Problem: Show by hand without any computer assistance: $$e^{e^2}>1000\phi,$$ where $\phi$ denotes the golden ratio $\frac{1+\sqrt{5}}{2} \approx 1.618034$ . I came across this limit, which shows: $$\lim_{x\to 0}x!^{\frac{x!!^{\frac{2}{x!!-1}}}{x!-1}}=e^{e^2}.$$ I cannot show it without knowing some decimals, and if so, by using power series or continued fractions. It seems challenging, and perhaps a tricky calculation is needed. If you have no restriction on the method, how would you show it with pencil and paper? Some approach: We have, using the incomplete gamma function and continued fractions: $$\int_{e}^{\infty}e^{-e^{-193/139}x^{193/139+2}}dx=\frac{139}{471}\cdot e\cdot\operatorname{Ei}_{332/471}(e^2)>e^{-e^2},$$ where $\operatorname{Ei}$ denotes the exponential integral. Finding an integral for the golden ratio $\phi$ is needed now. Following my comment we have: $$e^{-e^2}<\cdots$$ where the function in the integral follows the stated order for $x\ge e$ . As said before, use the continued fraction of the incomple
Although this calculation is long, no special skill, nor even special patience, is required to carry it out entirely by hand. From e Continued Fraction --- from Wolfram MathWorld , \begin{multline*} e > 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{4 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{6}}}}}}}} = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{4 + \cfrac{1}{1 + \cfrac{6}{7}}}}}}} \\ = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{4 + \cfrac{7}{13}}}}}} = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{13}{59}}}}} \\ = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{59}{72}}}} = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{72}{131}}} = 2 + \cfrac{334}{465} = \cfrac{1264}{465}, \end{multline*} From $465 \times 271{,}827 = 126{,}399{,}555,$ we get $$ e > \frac{1264}{465} > 2.71827. $$ Also, from $738{,}904 \times 465^2 = 184{,}726 \times 8{,}649 \times 100 = 159{,}769{,}517{,}400,$ we get $$ e^2 >
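The hand computation of the convergent can be replayed with exact rational arithmetic; a small sketch using the same truncated continued fraction $[2;1,2,1,1,4,1,1,6]$:

```python
from fractions import Fraction

# Collapse the truncated continued fraction [2; 1, 2, 1, 1, 4, 1, 1, 6]
# from the inside out, exactly as in the hand calculation.
terms = [2, 1, 2, 1, 1, 4, 1, 1, 6]
x = Fraction(terms[-1])
for a in reversed(terms[:-1]):
    x = a + 1 / x

assert x == Fraction(1264, 465)
# The divisibility check 465 * 271827 = 126399555 < 126400000 gives:
assert x > Fraction(271827, 100000)       # i.e. 1264/465 > 2.71827
```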
|inequality|constants|golden-ratio|number-comparison|
0
Prove that $||v||^2 ≥ \dfrac{1}{n}$ - alternative way
Let $v$ = $ \begin{align} & \begin{bmatrix} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{bmatrix} \end{align} ∈ \Bbb R^n$ s.t. $x_1, \dots, x_n ≥ 0$ and $\sum_{i=1}^{n}x_i =1$ . Prove that $||v||^2 ≥ \dfrac{1}{n}$ . The official solution uses the Cauchy-Schwarz inequality, but I solved it in a much simpler way. By definition, the standard norm is $||v||^2$ = $(\sqrt{\langle v,v \rangle})^2= $$(\sqrt{\sum_{i=1}^{n}(x_i)^2})^2$ = $\sum_{i=1}^{n}x_i$ $\cdot$ $\sum_{i=1}^{n}x_i$ (finite sum) = $1$ ≥ $\dfrac{1}{n}$ for every $n≥1$ . Is there something wrong here? It looks too naive.
-Take into account that I am just a student, so I hope that my answer is correct. -You have a simpler proof. 1- We know (by the QM-AM inequality) that $\sqrt{\frac{\sum_{i=1}^n x_i^2}{n}} \geq \frac{\sum_{i=1}^n x_i}{n} = \frac{1}{n}$, as it is given that $\sum_{i=1}^n x_i = 1 $ . -This gives $\sqrt{\frac{\sum_{i=1}^n x_i^2}{n}} \geq \frac{\sum_{i=1}^n x_i}{n} = \frac{1}{n} \Rightarrow \sqrt{\sum_{i=1}^n x_i^2} \geq \frac{1}{\sqrt{n}} $ 2- Now, as $x^2$ is an increasing function for $x \geq 0$, we have that $0 \leq x_1 \leq x_2 \Rightarrow x_1^2 \leq x_2^2$ . Applied here, it gives us $\| v \|^2 = \sum_{i=1}^n x_i^2 =(\sqrt{\sum_{i=1}^n x_i^2})^2 \geq (\frac{1}{\sqrt{n}})^2= \frac{1}{n}$ Q.E.D.
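The inequality (and its equality case at the uniform vector) can be illustrated with a quick Monte-Carlo check over random points of the simplex (the dimension and sample count are my choices):

```python
import random

# For nonnegative x_i summing to 1: ||v||^2 = sum x_i^2 >= 1/n,
# with equality at the uniform vector x_i = 1/n.
random.seed(3)
n = 8

for _ in range(1000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    x = [v / s for v in w]                  # random point of the simplex
    assert sum(v * v for v in x) >= 1 / n - 1e-12

# Equality case: the uniform vector.
assert abs(sum((1 / n) ** 2 for _ in range(n)) - 1 / n) < 1e-12
```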
|linear-algebra|inner-products|cauchy-schwarz-inequality|
0
How to integrate $\int_{0}^{1} \int_{0}^{1} \tanh^{-1}\left(\frac{x}{y} + \frac{y}{x}\right) \,dx\,dy$
how to integrate $$\int_{0}^{1} \int_{0}^{1} \tanh^{-1}\left(\frac{x}{y} + \frac{y}{x}\right) \,dx\,dy$$ My attempt $$\int_{0}^{1} \int_{0}^{1} \tanh^{-1}\left(\frac{x}{y} + \frac{y}{x}\right) \,dx\,dy = \int_{0}^{1} \int_{0}^{1} \tanh^{-1}\left(\frac{x^2 + y^2}{xy}\right) \,dx\,dy$$ $$\int_{0}^{1} \int_{0}^{1} \frac{1}{2} \ln\left[\frac{1 + \frac{x^2 + y^2}{xy}}{1 - \frac{x^2 + y^2}{xy}}\right] \,dx\,dy$$ $$\int_{0}^{1} \frac{1}{2} \int_{0}^{1} \ln\left[\frac{xy + x^2 + y^2}{xy - (x^2 + y^2)}\right] \,dx\,dy$$ $$\frac{1}{2} \int_{0}^{1} \int_{0}^{1} \ln\left[\frac{x^2 + yx + y^2}{-(x^2 - yx + y^2)}\right] \,dx\,dy$$ $$\frac{1}{2} \int_{0}^{1} \int_{0}^{1} \left\{\ln(x^2 + yx + y^2) - \ln\left[(-1)(x^2 - yx + y^2)\right]\right\} \,dx\,dy$$ $$\frac{1}{2} \int_{0}^{1} \int_{0}^{1} \ln(x^2 + yx + y^2) - \ln(-1) - \ln(x^2 - yx + y^2) \,dx \,dy$$ $$\frac{1}{2} \int_{0}^{1} \int_{0}^{1} \left[\ln(x^2 + yx + y^2) - \ln(x^2 - yx + y^2) - \ln(e^{i\pi})\right] \,dx\,dy$$ $$\frac{1}{2} \int_{0}^{1} \left[\i
There is no indication that complex numbers should be used in this exercise. Hence it is tacitly implied that this is a calculation on the real numbers. Now the function $\tanh$ yields a value in the range $(-1,1)$ . Hence the inverse function only accepts values in this interval; other values are outside of its domain. In this case we can quickly establish that the argument lies outside, since $x/y + y/x \ge 2$ for all positive values of $x$ and $y$ . Therefore the integrand does not exist and the integral is undefined.
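The two observations in the answer, that $x/y + y/x \ge 2$ on $(0,1]^2$ (by AM-GM) and that `atanh` rejects arguments outside $(-1,1)$, can be confirmed directly:

```python
import math
import random

# AM-GM: x/y + y/x >= 2 for positive x, y, so the integrand's argument
# never enters the domain (-1, 1) of the real atanh.
random.seed(4)
for _ in range(10000):
    x, y = random.random() or 0.5, random.random() or 0.5  # guard against 0.0
    assert x / y + y / x >= 2 - 1e-12

# atanh is undefined outside (-1, 1): math.atanh raises ValueError there.
try:
    math.atanh(2.0)
    raise AssertionError("atanh should have rejected 2.0")
except ValueError:
    pass
```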
|calculus|integration|multivariable-calculus|definite-integrals|closed-form|
0
Why $(-2)^{2.5}$ isn't equal to $((-2)^{25})^{1/10}?\,$ [Fractional powers of negative numbers]
I've tried both calculations on Wolfram Alpha and it returns different results, but I can't get a grasp of why that is. From my point of view, both calculations should be the same, as $2.5=25/10,$ and $(-2)^{2.5}$ is equal to $(-2)^{25/10},$ relying on the general rule $(a^m)^n=a^{mn}$ . Links to sources: https://www.wolframalpha.com/input/?i=(-2)%5E(2.5) https://www.wolframalpha.com/input/?i=((-2)%5E(25))%5E(1%2F10)
In general, sequential powerings and root extractions are not commutative among the complex numbers. If $c_1$ , $c_2$ , and $c_3$ are complex numbers, $(c_1^{c_2})^{c_3}$ does not always equal $(c_1^{c_3})^{c_2}$ . For example, $\sqrt{(-1)^3}=\sqrt{-1}=i$ , if $i^2=-1$ , but $(\sqrt{-1})^3=i^3=-i\neq i$ . It is always commutative in the special case of positive reals. For positive rational exponents, it is always commutative for all nonnegative real bases. It is also always commutative if the bases are nonzero real and all exponents are rational numbers whose lowest term denominator is an odd number. If the exponents are all positive, then it is commutative for all real bases. If the exponents are integers, then it is commutative for all complex bases. A similar pattern can be seen regarding multiplication of quaternions. Let $i^2=j^2=k^2=ijk=-1$ . Then $ij=k$ , while $ji=-k$ , so multiplication is not always commutative among quaternions. But multiplication is commutative among the co
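The discrepancy from the question reproduces with Python's complex arithmetic, which (like Wolfram Alpha) takes principal branches:

```python
import cmath  # noqa: F401  (complex powers are built in; cmath for context)

# The two orderings from the question, evaluated with principal branches.
a = complex(-2) ** 2.5                    # (-2)^(5/2): principal value, = i*2^2.5
b = (complex(-2) ** 25) ** (1 / 10)       # ((-2)^25)^(1/10): a different branch

assert abs(a - b) > 1e-6                  # the two orderings disagree

# Yet both lie on the circle of radius 2^2.5: only the chosen branch differs.
assert abs(abs(a) - 2 ** 2.5) < 1e-9
assert abs(abs(b) - 2 ** 2.5) < 1e-9
```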
|algebra-precalculus|complex-numbers|exponentiation|
0
Detail in Verification of Suspension-Loop Adjunction in Infinity Category Theory
Let $\mathscr{C}$ be a pointed $\infty$ -category. Then, $\Sigma : \mathscr{C} \rightleftarrows \mathscr{C} : \Omega$ can be defined through $\Sigma = \operatorname{colim}( * \leftarrow X \to *)$ and $\Omega = \lim ( * \to X \leftarrow *)$ . There seems to be some part in the theory of (co-)limits and adjunctions for $\infty$ -categories that I'm not understanding well enough. I wanted to verify $\Sigma \dashv \Omega$ which caused me a lot of confusions. Initially, I thought that this would be a quick computation via \begin{align*} \operatorname{map}(\Sigma x, y) &\simeq \operatorname{map}(*, y) \times_{\operatorname{map}(x,y)} \operatorname{map}(*, y) \\ &\simeq \Omega \operatorname{map}(x,y) \\ &\simeq \operatorname{map}(x,*) \times_{\operatorname{map}(x,y)} \operatorname{map}(x,*) \\ &\simeq \operatorname{map}(x, \Omega y). \end{align*} This seems fine but to show that we get an adjunction one should really prove that $$ \operatorname{map}(\Sigma x, y) \xrightarrow{\Omega} \operator
The functor $\Omega$ is the composite of a bunch of functors, which I will now describe. Write $\mathsf{A}$ for the diagram $2\to 3\leftarrow 1$ . The inclusion $i_3\colon *\to\mathsf{A}, *\mapsto 3$ induces a left Kan extension $\mathrm{Lan}_{i_3}\colon\mathscr{C}\to\mathrm{Fun}(\mathsf{A},\mathscr{C})$ (informally sending $X\in\mathscr{C}$ to $*\to X\leftarrow*$ ). Write $\mathsf{Sq}$ for the category $[1]\times[1]$ , but instead of $(00)$ , $(01)$ , $(10)$ , and $(11)$ , I call the objects $0$ , $1$ , $2$ and $3$ , respectively. Write $i\colon\mathsf{A}\to\mathsf{Sq}$ for the obvious inclusion. This induces a right Kan extension $\mathrm{Ran}_i\colon\mathrm{Fun}(\mathsf{A},\mathscr{C})\to\mathrm{Fun}(\mathsf{Sq},\mathscr{C})$ . Finally, the inclusion $i_0\colon *\to\mathsf{Sq}, *\mapsto 0$ induces a restriction functor $i_0^*\colon\mathrm{Fun}(\mathsf{Sq},\mathscr{C})\to\mathscr{C}$ . The functor $\Omega$ is the composite $$ \mathscr{C}\xrightarrow{\mathrm{Lan}_{i_3}}\mathrm{Fun}(\m
|algebraic-topology|category-theory|homotopy-theory|higher-category-theory|
1
Probability of toasted bread
Here is the problem: The time it takes for a baker to bake a loaf of bread without it being underbaked is normally distributed with a mean of µ minutes and a standard deviation of σ minutes. Bread is considered to be slightly toasted if it is baked for longer than (µ + 1.5σ) minutes. What is the probability that bread randomly selected by the baker will be slightly toasted? Possible answers: A. 93.32% B. 69.12% C. 30.85% D. 6.68% I wonder if it is possible (and how) to calculate it so I don't have to look at the normal distribution curve. I tried to look for some information on the internet but it didn't help.
I think the empirical rule can be applied here. Remember that in normal distributions (with a random variable $T$ - time to bake a loaf of bread without underbaking it) $$P(\mu-\sigma<T<\mu+\sigma)\approx 0.68$$ $$P(\mu-2\sigma<T<\mu+2\sigma)\approx 0.95$$ Now, if you imagine the normal distribution curve in your mind, it is symmetric over the mean $\mu$ , so the probabilities of time being outside these ranges (e.g. below $\mu-\sigma$ or above $\mu+\sigma$ ) are just $1-P(\text{inside these ranges})$ , and (because these probabilities are equal due to symmetry) we divide by $2$ to get the probability of time being bigger than the upper bound of these ranges $$P(T>\mu+\sigma)\approx\frac{1-0.68}{2}\approx0.16\quad(16\%)$$ $$P(T>\mu+2\sigma)\approx\frac{1-0.95}{2}\approx0.025\quad(2.5\%)$$ So, the probability of $T$ being greater than $\mu+1.5\sigma$ has to be between these two percentages. The only answer that applies is D - $6.68\%$ . Hope this helps!
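The exact value behind answer D can be computed from the standard normal CDF, which is expressible via `math.erf`:

```python
import math

# Standard normal CDF built from the error function:
# Phi(z) = (1 + erf(z / sqrt(2))) / 2.
def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# P(T > mu + 1.5*sigma) = 1 - Phi(1.5), independent of mu and sigma.
p = 1 - Phi(1.5)
assert abs(p - 0.0668) < 0.0001          # matches answer D: 6.68%
assert 0.025 < p < 0.16                  # within the empirical-rule bracket
```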
|probability|normal-distribution|
1
A Mal'cev algebra $\mathbf{A}$ have typ$\{\mathbf{A}\} \subseteq \{\mathbf{2},\mathbf{3}\}$
A Mal'cev algebra is an algebra $\mathbf{A}$ with a ternary term $t$ such that $\mathbf{A} \models t(x,x,y) \equiv y, t(x,y,y) \equiv x$ . For the remaining of the question, refer to the definitions and results contained in Hobby and McKenzie's book The Structure of Finite Algebras . I'm trying to prove that every finite Mal'cev algebra $\mathbf{A}$ can only have typ $\{\mathbf{A}\} \subseteq \{\mathbf{2},\mathbf{3}\}$ . My approach is the following: let $(\alpha, \beta)$ be a prime quotient. If $(\alpha, \beta)$ is abelian, $\text{typ}(\alpha, \beta)=\mathbf{2}$ since strong abelianity is incompatible with having a Mal'cev term. But if $(\alpha, \beta)$ is nonabelian, how can I rule out the possibility that $(\alpha, \beta)$ has type $\mathbf{4}$ or $\mathbf{5}$ ? Any suggestion? (I'd prefer a hint rather than an answer). Thanks!
The references below are to results in the book you refer to. If $\mathrm{typ}(\alpha,\beta) \in \{\mathbf{4,5}\}$ , then by Lemma 5.24 there exist two reflexive and admissible relations $\rho_0$ and $\rho_1$ satisfying $\alpha \neq \rho_i \subseteq \beta$ and $\rho_0 \cap \rho_1 = \alpha$ , whence $\alpha \subset \rho_i \subseteq \beta$ . By Lemma 5.22, under these conditions $\rho_i \in \mathbf{Con\,A}$ , because $\mathbf A$ is Mal'cev. Use the conditions above to conclude that $\beta = \alpha$ , a contradiction.
|universal-algebra|
1
Getting azimuth/elevation angle for a sensor that is not in rotation origin
I have the following mathematical problem and somehow I'm not able to solve it. Let's say I have an optical device which is able to rotate around its z-axis for azimuth and around its y-axis for elevation. If I now have an object which I want to observe, it's pretty easy to calculate the angles for elevation/azimuth if the camera is in the coordinate frame origin, e.g. the center of my rotation. But my problem is now that my camera sensor is not in the center of this rotation but has a fixed offset of (x, y, z) to the center. Is there any possibility to calculate the angles of azimuth and elevation so that the camera sensor looks at the object? I know the position of the object and the offset of the camera sensor. I made a very simplified 2D drawing that illustrates the problem. It would be very nice if someone knows how to do it :) hope that illustrates my problem
I will be assuming that the target point (T) is at a much greater distance than the offset of your optical sensor (S) from its effective center of rotation (C). Orient the optical sensor such that its center axis is parallel to the connecting line (CT) between the center of rotation and the target point. Our viewport can then be simplified to a parallel projection along the same connecting line. Now, if you took a photograph in this orientation, you would miss the target by an offset x' and y' that is identical to the offset x and y of your optical sensor to its effective center of rotation. Going back to a perspective view, this x' and y' offset is now experienced as an error in azimuth and elevation, given by arctan(x'/CT) and arctan(y'/CT) respectively. This error can now be subtracted from our original sensor orientation, resulting in a well approximated alignment at an assumed large target distance. The following sketch shows the error in azimuth (as seen from above). Note that the
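The correction can be sketched in a few lines. The function name, the coordinate convention (target given in the frame of the rotation center C, offsets measured across the viewing direction), and the angle conventions below are my assumptions, not from the poster:

```python
import math

def corrected_angles(target, offset_x, offset_y):
    """Far-field approximation: point CT at the target, then subtract the
    arctan(offset/CT) error terms described in the answer.

    target: (X, Y, Z) of T relative to C.
    offset_x, offset_y: sensor offset across the viewing direction.
    Returns (azimuth, elevation) in radians.
    """
    X, Y, Z = target
    ct = math.sqrt(X * X + Y * Y + Z * Z)           # distance C -> T
    az = math.atan2(Y, X) - math.atan(offset_x / ct)
    el = math.asin(Z / ct) - math.atan(offset_y / ct)
    return az, el

# Zero offset recovers the naive angles; a small offset at a far target
# produces only a tiny correction, as the approximation assumes.
az0, el0 = corrected_angles((1000.0, 0.0, 0.0), 0.0, 0.0)
assert az0 == 0.0 and el0 == 0.0
az1, _ = corrected_angles((1000.0, 0.0, 0.0), 0.1, 0.0)
assert abs(az1 + math.atan(0.1 / 1000.0)) < 1e-12
```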
|geometry|rotations|transformation|
0
Is the metric in normal coordinates constant for flat manifolds?
Simple question. Let $(M,g)$ be a flat manifold (i.e. a Riemannian manifold with vanishing Riemannian curvature). Is it true that the metric $g_{ij}$ in normal coordinates around each point is constant and equal to the identity, i.e. $g_{ij}\equiv \delta_{ij}$ ? It looks the Taylor expansion of $g_{ij}$ can be expressed in terms of the Riemannian curvature, which suggests that this is true, but I am not sure (I think that only a few terms of the Taylor expansion have been calculated).
Working locally throughout, we can regard normal coordinates as maps $\varphi:\mathbb{R}^n\to M$ defined by $$ \varphi(x^1,\cdots,x^n)=\exp_p(x^ie_i) $$ where $e_1,\cdots,e_n$ is an orthonormal basis of $T_pM$ . From here, one can prove your claim using the following facts: If $\psi:M\to N$ is a local isometry with $\psi(p)=q$ and $\varphi:\mathbb{R}^n\to M$ is a normal coordinate chart on $M$ centered at $p$ , then $\psi\circ\varphi$ is a normal coordinate chart on $N$ centered at $q$ . The identity $\operatorname{id}:\mathbb{R}^n\to\mathbb{R}^n$ is a normal coordinate chart on the standard Euclidean $n$ -space $(\mathbb{R}^n,\delta)$ . A Riemannian $n$ -manifold is flat iff it is locally isometric to Euclidean $n$ -space.
|riemannian-geometry|coordinate-systems|curvature|
0
Continuous functions for countable dense subset in a separable metric space
Question: Suppose that $(X, d)$ is a separable metric space with countable dense subset $\widetilde{X}$ , and Y a metric space. Let $f : X \to Y$ and $g : X \to Y$ be continuous functions such that $f(x) = g(x)$ for all $x \in \widetilde{X}$ . Show that $f(x) = g(x)$ for all $x \in X$ . Here is my attempt: Since $\forall{x} \in \widetilde{X}$ , $f(x), g(x)$ are continuous and $f(x) = g(x)$ , let $\{{x_n}\}_{n \in \mathbb{N}} \subset \widetilde{X}$ , then $x_n \to x$ implies that $f(x_n)=g(x_n) \to g(x)=f(x)$ , but since $\widetilde{X}$ is a countable dense subset of $X$ , $x_n \subset \widetilde{X} \subset X$ and the limit points $x \in X$ , so $\forall{x} \in X, f(x)=g(x)$ . I appreciate if any one can help me validate the proof or give me suggestions on it, thanks!
Your proof lacks accuracy. I can't see the final argument why $f(x)=g(x)$ for all $x\in X$ . I suggest the following. Fix $x\in X$ . Since $X$ is metric and $\widetilde{X}$ is dense, there exists a sequence $(x_n)$ such that $x_n\in\widetilde{X}$ and $x_n\to x$ . By continuity of $f$ and $g$ , we have $$f(x)=\lim_{n\to\infty}f(x_n)=\lim_{n\to\infty}g(x_n)=g(x).$$ QED. Remark: Some assumptions are redundant here and the following stronger theorem holds (but the proof is different): If $f,g\colon X\rightarrow Y$ are continuous, $Y$ is Hausdorff, and $f(x)=g(x)$ for all $x\in \widetilde{X}$ , where $\widetilde{X}$ is dense, then $f=g$ .
|real-analysis|
1
Under $ad-bc=1$, is every element of a finite field of the form $a^2+b^2+c^2+d^2$?
The Question: Let $x\in\Bbb F_q$ , where $\Bbb F_q$ is the field of $q=p^r$ elements, $p$ prime, $r\in\Bbb N$ . Can we write $$x=a^2+b^2+c^2+d^2\tag{1}$$ for $a,b,c,d\in\Bbb F_q$ such that $ad-bc=1$ ? Thoughts: Relevant theorems include: Lagrange's Four Squares Theorem: Each natural number is the sum of four squares. The number of squares in $\Bbb F_q$ is $q$ if $q$ is even, and $\frac{q+1}{2}$ if $q$ is odd. Each element of a finite field is the sum of two squares. I have tested via GAP that it holds for prime powers up to $q=101$ . Motivation: I was playing around with traces of elements of $\operatorname{SL}_2(\Bbb F_q)$ . I have intuitive, hard-to-articulate reasons to suspect the answer to my question is "yes". It would help my research to know an answer; however, field theory is not my forte. I will, of course, credit whoever answers first (if this is a new problem).
We first treat the case where $\mathbb F_q$ has characteristic two. If $x=0$ , we may take $a=d=1$ and $b=c=0$ . If $x\neq 0$ , then there exists some square root $a$ of $x$ ; note $a\neq 0$ . Taking $b=0$ and $c=d=1/a$ , we have $$ad-bc=a(1/a)-0=1;\qquad a^2+b^2+c^2+d^2=x+0+(1/a)^2+(1/a)^2=x.$$ We now treat the odd characteristic case. We count the number $N$ of solutions to $a^2+b^2+c^2+d^2=ad-bc=0$ . Inspired by the comment of Amateur_Algebraist, whenever this is nonzero modulo the characteristic $p$ , Theorem 2.2(a) of this article : Theorem 2.2 (Chevalley–Warning at the Boundary, Preliminary Form). Let $f_1, \ldots, f_r \in \Bbb F_q[t_1, \ldots, t_n]$ be polynomials of degrees $d_1, \ldots, d_r \in \Bbb Z^+$ , and suppose that $d:= \sum_{j=1}^r d_j \le n.$ Let $E: \Bbb F^n_q \to \Bbb F^r_q$ , $x \mapsto (f_1(x), \ldots, f_r(x))$ be the associated evaluation map. Then: (a) For all $b,c \in \Bbb F_q^r$ we have $ |E^{-1}(b)| \equiv |E^{-1}(c)| (\text{mod } p)$ . will imply that the d
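The question reports a GAP verification up to $q=101$; the prime-field part of that check is easy to reproduce by brute force (the range of primes tested below is my choice, kept small for speed):

```python
from itertools import product

# For each small prime p, collect every value a^2+b^2+c^2+d^2 (mod p)
# attainable under the constraint ad - bc = 1 (mod p), and confirm that
# all of F_p is covered.
for p in [2, 3, 5, 7, 11, 13]:
    reachable = set()
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p == 1:
            reachable.add((a * a + b * b + c * c + d * d) % p)
    assert reachable == set(range(p)), p
```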
|abstract-algebra|field-theory|finite-fields|
0
Integral solutions $(a,b,c)$ for $a^\pi + b^\pi = c^\pi$
We know that $a^n + b^n = c^n$ does not have a solution if $n > 2$ and $a,b,c,n \in \mathbb{N}$, but what if $n \in \mathbb{R}$? Do we have any statement for that? I was thinking about this but could not find any immediate counterexamples. Specifically, can $a^\pi + b^\pi = c^\pi$ for $a,b,c \in \mathbb{N}$? I found this. It has an existential proof that $\exists \ n \in \mathbb{R}$ for any $(a,b,c)$. The question remains open for $n = \pi$. This question is just for fun, to see if we can come up with some simple proof :)
Assuming [Schanuel's conjecture](https://en.wikipedia.org/wiki/Schanuel%27s_conjecture) the answer is negative. The fact that I have to resort to Schanuel's conjecture suggests that the question is impossible to answer provably. Schanuel's conjecture says that if the numbers $x_1, ... x_n$ are linearly independent over the rationals, the transcendence degree of $x_1, e^{x_1} ... x_n, e^{x_n}$ is at least $n$ . To use Schanuel's conjecture I have to divide the argument into cases depending on which of the logarithms of $a$ , $b$ , and $c$ are linearly independent. Note that $c$ cannot be $1$ , as $a$ and $b$ are positive integers and thus at least $1$ (if one of them is $0$ , it's a trivial solution), so that $c^\pi \ge 2$ and $c \gt 1$ , and also note that $a$ and $b$ cannot both be $1$ as that would imply that $c=2^{1/\pi}$ , which is not an integer as $1 \lt 2^{1/\pi} \lt 2$ . Case 1: $a=1$ (meaning that $\log a=0$ ) and $\log b$ and $\log c$ are rational multiples of each other. Thi
|number-theory|diophantine-equations|
0
Coefficients of a rational function that depend meromorphically on a parameter
Let $D \equiv 1, 2 \, (\textrm{mod }4)$ be a positive, squarefree integer. Let \begin{align} r_D(n) = \{(x, y) \in \mathbf{Z}^2 \mid x^2 + Dy^2 = n\} \end{align} for any positive integer $n$ . Consider the Dirichlet series \begin{align} R_D(s) := \sum_{n \geq 1} \frac{r_D(n)}{n^s} = \sum_{\substack{(x, y) \in \mathbf{Z}^2 \\ (x, y) \neq (0, 0)}} \frac{1}{|x + y\sqrt{-D}|^2} = D^{-s/2} \sum_{\substack{(x, y) \in \mathbf{Z}^2 \\ (x, y) \neq (0, 0)}} \frac{(\sqrt{D})^s}{|x + y\sqrt{-D}|^2}. \end{align} Therefore the values taken by $R_D(s)$ are (almost) special values of the Eisenstein series \begin{align} \tilde{G}_{2s}(\tau) = \sum_{\substack{(x, y) \in \mathbf{Z}^2 \\ (x, y) \neq (0, 0)}} \frac{\textrm{Im}(\tau)^s}{|x + y\tau|^{2s}} \end{align} obtained by taking $\tau = \sqrt{-D}$ . The function $\tilde{G}_{2s}(\tau)$ is weakly modular of weight zero with a simple pole at $\infty$ . Any such function is a rational function in $j$ , where \begin{align} j(\tau) = \frac{1}{q} + 744 + 196
Note that the function $\tilde{G}_{2s}(\tau)$ only takes real values, so it cannot be meromorphic unless it is constant (it would be constant on an open set by Cauchy-Riemann, hence constant everywhere by identity theorem). Although it is weakly modular of weight zero, we cannot say that it is a rational function in $j$ , i.e., it is not meromorphic so it is not in the function field of the curve $X(1)$ .
|complex-analysis|number-theory|modular-forms|
1
Conjugate of complex number raised to complex power
Why is $\overline{z^{n}} = (\bar{z})^{n}$ true only for $n \in \mathbb{Z}$ ? What about a real or complex exponent in general? Let $$\log z=\{\ln \rho + i (\theta + 2k\pi)\mid k\in\Bbb Z\}$$ then $$ \overline{\log z} = \overline{\ln \rho + i (\theta + 2k\pi)} = \ln \rho - i (\theta + 2k\pi) = \ln \rho + i (-\theta + 2k\pi) = \log \bar{z} $$ So, let $\alpha \in \mathbb{C}$ , then $$ \overline{z^{\alpha}} = \overline{e^{\alpha \log z}} = e^{\bar{\alpha} \log \bar{z}} = (\bar{z})^{\bar{\alpha}} $$ It seems that the integer exponent is a particular case of a more general property. Did I make some mistake in my proof? Edit: My question is twofold. Is the first equality about the conjugate of the log correct? And if it is, is the second one about the conjugate of $z^\alpha$ correct? Maybe I am doing something wrong, and I'll be glad if somebody could point out where my mistakes are. Thank you.
If we consider $z^\alpha$ to be a multivalued function - in particular " $z^\alpha$ " does not represent a complex number, but rather a set of complex numbers, then, yes, $\overline {z^\alpha} = {\overline z}^{\overline \alpha}$ holds. But we have to be careful about what exactly we mean by it. If $z = \rho e^{i\theta} = \rho e^{i(\theta + 2k\pi)}$ for all $k\in \Bbb Z$ , and $\alpha = x + iy$ , then $$z^\alpha = \{\rho^xe^{-y(\theta + 2k\pi)}e^{i(y\ln \rho + x(\theta + 2k\pi))} : k \in \Bbb Z\}$$ $$\overline{z^\alpha} = \{\rho^xe^{-y(\theta + 2k\pi)}e^{-i(y\ln \rho + x(\theta + 2k\pi))} : k \in \Bbb Z\}$$ while $${\overline z}^{\overline\alpha} = \{\rho^xe^{-(-y)(-\theta - 2k\pi)}e^{i((-y)\ln \rho + x(-\theta - 2k\pi))} : k \in \Bbb Z\}$$ Since $(-y)(-\theta - 2k\pi) = y(\theta + 2k\pi)$ and $i((-y)\ln \rho + x(-\theta - 2k\pi)) = -i(y\ln \rho + x(\theta + 2k\pi))$ . The two sets are the same. Everything that is in one is in the other. I.e., $\overline{z^\alpha} = {\overline z}^{\over
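The set identity can be checked numerically on finitely many branches; the branch values pair up under $k \mapsto -k$ (the truncation bound `K` and the test values of $z$ and $\alpha$ are my choices):

```python
import cmath

# Conjugates of the branch values of z^alpha vs. branch values of
# conj(z)^conj(alpha): branch k of the first matches branch -k of the second.
z, alpha, K = 1 + 2j, 0.3 - 0.7j, 3

def branches(zz, aa):
    r, t = abs(zz), cmath.phase(zz)
    return [cmath.exp(aa * (cmath.log(r) + 1j * (t + 2 * cmath.pi * k)))
            for k in range(-K, K + 1)]

left = [w.conjugate() for w in branches(z, alpha)]
right = branches(z.conjugate(), alpha.conjugate())

# left index i corresponds to k = i - K; pair it with k' = -k on the right.
match = all(cmath.isclose(left[i], right[len(right) - 1 - i], rel_tol=1e-9)
            for i in range(len(left)))
assert match
```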
|complex-numbers|logarithms|
0