Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
How to precisely define the natural domain of a function?
It is commonly defined as every real number that lets the expression make sense. Is there a more precise and rigorous way to define it? Actually I have some idea about this: First, define basic functions like $+,-,\times$ ( $\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ ) and $/$ ( $\mathbb{R}\times(\mathbb{R}\setminus\{0\})\rightarrow\mathbb{R}$ ). Then, define arithmetic on functions. For example: $h=f+g$ , where the domain of $h$ is the intersection of the domains of $f$ and $g$ , and for every $x$ in the domain of $h$ , $h(x)=f(x)+g(x)$ . And then we can talk about the natural domain of functions that are combinations of those operations.
This is more of a comment than an answer, but too long for a comment. We have the operations that you mention ( $+,*,-,/$ ), and you've proposed a way to apply these to functions (pointwise) and a way to define the natural domain of a combination (say $f+g$ ) in terms of the domains of $f,g$ . I like this approach, but I would make the following observations. You have a method of combining functions (say $f,g\mapsto f+g$ ), but we don't yet have any functions to combine. Constant functions and the identity function $f(x)=x$ would be a good start, but that is probably not enough to build up all elementary functions (including exponentials, for example). A general procedure could be: Start with a set $\mathcal{C}$ of functions with specified natural domains. For example, $\mathcal{C}$ could be the functions $+,*,-,/$ that you defined, where each has natural domain $\mathbb{R}\times \mathbb{R}$ except division, because we can't divide by zero. However, $\mathcal{C}$ could contain some functions whose d
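The combination rules described in the question and this answer can be sketched in code. This is only a toy illustration with invented names (the `PartialFn` class and its methods are mine, not from any source): each function carries a domain predicate, combining two functions intersects their domains, and division additionally excludes zeros of the denominator.

```python
# A toy sketch of "functions with natural domains": each object carries a
# membership test for its domain plus an evaluation rule. Combining two
# functions intersects their domains (division adds an extra constraint).
class PartialFn:
    def __init__(self, dom, rule):
        self.dom = dom      # dom(x) -> bool: is x in the natural domain?
        self.rule = rule    # rule(x) -> float, valid whenever dom(x) holds

    def __call__(self, x):
        if not self.dom(x):
            raise ValueError(f"{x} not in natural domain")
        return self.rule(x)

    def __add__(self, g):
        return PartialFn(lambda x: self.dom(x) and g.dom(x),
                         lambda x: self.rule(x) + g.rule(x))

    def __truediv__(self, g):
        # natural domain also excludes zeros of the denominator
        return PartialFn(lambda x: self.dom(x) and g.dom(x) and g.rule(x) != 0,
                         lambda x: self.rule(x) / g.rule(x))

ident = PartialFn(lambda x: True, lambda x: x)
one = PartialFn(lambda x: True, lambda x: 1.0)
recip = one / ident          # natural domain: all reals except 0

assert recip(2.0) == 0.5
assert not recip.dom(0.0)
```

Building further combinations, e.g. `ident + recip`, automatically carries the intersected domain $\mathbb{R}\setminus\{0\}$ along.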
|functions|
1
The partial derivative of a call option with respect to $t$
In Black-Scholes related computations, why do we not treat the stock price $S$ as a function of $t$ when taking partial derivatives with respect to $t$ ? For example, if $$c(t,T)=SN(d_1)-Ke^{-r(T-t)}N(d_2)$$ is the price of a call option and we want to find $\partial c/\partial t$ , we never include the term $\partial S/\partial t$ and don't consider $S$ as a function of $t$ , but treat it as a separate variable. How can this be justified?
Your call formula can be written as a function of $S$ , where $S$ is the initial stock price $$c(t,T, S, K)=SN(d_1)-Ke^{-r(T-t)}N(d_2)$$ In this case, $S$ has no dependence on $t$ (this formula is not stochastic!) so you can take the $t$ -derivative as your usual partial derivative.
|probability|stochastic-calculus|finance|
0
Tautologies in classical logic
According to its truth table, $P \lor \neg P$ is a tautology, i.e. it is true for all truth values of its constituent propositions. But how come that is true in classical logic? $\lor$ is the inclusive 'or', so $P \lor \neg P$ means 'either $P$ , or $\neg P$ , or both of them'. But in classical logic it is never true that both $P$ and $\neg P$ , i.e. $P \land \neg P$ is a contradiction (false for all truth values of its constituent propositions). So why is $P \lor \neg P$ always tautologically true?
"According to its truth table, $P \lor \lnot P$ is a tautology, i.e. it is true for all truth values of its constituent propositions." OP is missing the point that the constituent propositions should be independent to be assigned truth values arbitrarily & independently. We cannot assign truth values to derived terms: we have to evaluate the derived terms to get their truth values from the independent variables. " $X \lor Y$ means 'either (A1) $X$ , or (A2) $Y$ , or (A3) both of them'." Here, $X$ & $Y$ are independent and we can assign the truth values independently & arbitrarily to check whether it is a tautology. When we assign $X=Y=0$ , we get $X \lor Y = 0$ . In every other case [ A1 , A2 , A3 ] we get $X \lor Y = 1$ . Hence it is not a tautology. " $P \lor \lnot P$ means 'either (B1) $P$ , or (B2) $\lnot P$ , or (B3) both of them'." Here, $P$ & $\lnot P$ are not independent and we cannot assign the truth values independently & arbitrarily to check whether it is a tautology. When we
|logic|propositional-calculus|intuition|
1
Existence of a kind of balanced tournament schedule
Recently I was confronted with another tournament design problem: Suppose we have a tournament with $2n$ teams ( $n\in \mathbb{N}$ ). We have $n$ different types of games (say at $n$ distinct locations), to be played by a pair of teams at a time. The tournament will be played in $2n$ rounds. In the first $n$ rounds as well as in the final $n$ rounds, each team should have played at every location. Each team should have confronted each of the other $2n-1$ teams in contest. For which $n$ can this problem be solved and with what method? Relevant references / terminology? I tried to solve it for $n=6$ (for practical application) and nearly succeeded. I got stuck with an arrangement where the final constraint is not completely satisfied.
This is too long for a comment, but not an answer. Obviously we can see that it works for two teams ($n=1$), and doesn't work for four teams ($n=2$). For four teams, if teams $A,B$ play at location $1$ in round $1$ , then in order for them to play at each location in each of the first two rounds, $A$ and $B$ must both move to location $2$ in round $2$ . So $A$ plays $B$ twice. By exactly the same reasoning, whomever $A$ plays in round $3$ will also be played by $A$ again in round $4$ , so $A$ plays against at most two of the three other teams. We can also see that it does not work for six teams ($n=3$). The case $n=4$ also appears not to work: I did write some code to check every single case for $n=4$ , and I did not find any that worked. However, I'm a mathematician, not a programmer, so don't quote me on that. The case $n=5$ was already too large for my code to do a case-by-case check. For $n=3$ , let's suppose we have such a tournament schedule where every one of the six teams plays all five other te
|combinatorics|reference-request|
0
In wheel theory, does $1/0$ have a one-to-one mapping to any set? Or is the concept of a one-to-one mapping outside of its definition?
As I understand it, infinities typically have a one-to-one mapping with known sets. The denumerable infinity has a one-to-one mapping with the integers, rational numbers, and natural numbers. The nondenumerable infinity has a one-to-one mapping to the set of real numbers and the power set of the integers. Is there any such set that maps to $1/0$ in wheel theory? Its definition appears to be a type of infinity. Am I correct about this or is my assumption outside the standard definitions used in wheel theory?
Thinking of $\mathbb R$ as an affine line, one can complete it to a real projective line $\mathbb R P^1$ by adding an "ideal point at infinity" sometimes denoted $\infty$ . Similarly, viewing $\mathbb C$ as an affine complex line, one adds a point at infinity to form the complex projective line $\mathbb C P^1$ . These are examples of wheels in wheel theory where one similarly adds points at infinity. Such a notion of a point at infinity has nothing to do with Cantorian infinite cardinals, whether denumerable or non-denumerable.
|abstract-algebra|infinity|wheel-theory|
1
Why is $2\sin\theta$ the upper limit for this inner integral, and why is $0$ the lower limit?
I am having difficulty understanding how this practice question is supposed to be solved. Here it is: Use polar coordinates to find the exact value of the double integral, $\iint_D x$ $dA$ where $D$ is the region bounded above by the line $y = x$ and below by the circle $x^2 + (y-1)^2 = 1$ . The polar equation for the circle is given by $r = 2\sin\theta$ . The intersection points of the line and the circle are $(0,0)$ and $(1,1)$ . The provided solution uses this step: let $x=r\cos\theta, y = r\sin\theta$ $\iint_D x\, dA = \int_0^{\pi/4}\int_0^{2\sin\theta} (r \cos\theta) r\,dr\,d\theta$ From here on it's pretty straightforward. I understand the outer limits (since $y=x$ forms an angle of $\pi/4$ with the $x$ -axis) and why $x\,dA$ is replaced with $(r \cos\theta)\, r\,dr\,d\theta$ . But why is $2\sin\theta$ the upper limit for this inner integral, and why is $0$ the lower limit, considering the diagram should look like this: (Enclosed area $D$ shaded in diagram)
The inner integral has a constant $\theta$ , say $\theta_0$ . It is integrating along the ray $\theta = \theta_0$ (in polar coordinates), which starts at the origin and moves outwards at angle $\theta_0$ to the $x$ -axis. As is also true in Cartesian coordinates, it has to integrate over all parts of the ray that are inside your area $D$ . The part of the ray inside $D$ starts already at the origin, so integration starts at $r=0$ , as the solution says. The ray leaves $D$ when it hits the circle. As you were given in the problem statement, the polar equation of that circle is $r=2 \sin(\theta)$ , so for this ray it ends at $r=2 \sin(\theta_0)$ , which is the upper limit of the integral. The only difference is that the solution simply uses $\theta$ instead of $\theta_0$ .
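As a sanity check on these limits, one can evaluate the iterated integral numerically. Doing the inner integral in closed form gives $\int_0^{2\sin\theta} r^2\,dr = \tfrac{8}{3}\sin^3\theta$, and the exact value of the whole integral then works out to $\tfrac{8}{3}\cdot\tfrac{\sin^4\theta}{4}\big|_0^{\pi/4} = \tfrac16$ (my computation, not quoted from the provided solution). A quick trapezoidal check in Python:

```python
import math

# outer integrand after doing the inner integral in closed form:
# integral_0^{2 sin t} (r cos t) r dr = (8/3) sin^3(t) cos(t)
def outer(theta):
    return (8.0 / 3.0) * math.sin(theta) ** 3 * math.cos(theta)

# composite trapezoidal rule over [0, pi/4]
n = 100_000
h = (math.pi / 4) / n
total = 0.5 * (outer(0.0) + outer(math.pi / 4))
total += sum(outer(i * h) for i in range(1, n))
value = total * h

assert abs(value - 1.0 / 6.0) < 1e-9  # matches the exact value 1/6
```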
|integration|polar-coordinates|trigonometric-integrals|
1
Mutual information analysis for sum of Bernoulli and Gaussian
Question: I have two random variables $X\sim\text{Bernoulli}(\alpha)$ and $Y\sim\mathcal{N}(0,1)$ . Is it possible to compute the mutual information $I(X;X+Y)$ analytically? My attempt: By the definition of mutual information, we have $$ I(X;X+Y)=H(X)-H(X|X+Y), $$ where $H(X)$ can be worked out analytically but I don't know how to work out $H(X|X+Y)$ .
Let $Z=X+Y$ . Notice that $Z$ has a density $f_Z$ corresponding to a Gaussian mixture. Now $$P(X=0|Z=z)= \frac{f_Z(z|X=0) P(X=0)}{f_Z(z)}=\frac{f_Y(z) (1-\alpha)}{f_Z(z)}=g(z)$$ So $H(X|Z=z)= h_b(g(z))$ where $h_b$ is the binary entropy function. And $$H(X|Z)= \int f_Z(z) h_b(g(z))\, dz$$ It might be easier to compute instead $h(Z)-h(Z|X)$ .
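Following the last suggestion, $I(X;Z)=h(Z)-h(Z|X)$ with $h(Z|X)=h(Y)=\tfrac12\log(2\pi e)$, and $h(Z)$ can be evaluated by numerical integration of the mixture density. A rough numerical sketch (function names, grid range, and node count are my own choices):

```python
import math

def normal_pdf(z, mu=0.0):
    return math.exp(-0.5 * (z - mu) ** 2) / math.sqrt(2 * math.pi)

def mutual_information(alpha, lo=-10.0, hi=11.0, n=40001):
    """I(X; X+Y) in nats for X ~ Bernoulli(alpha), Y ~ N(0,1)."""
    def f_Z(z):  # mixture density of Z = X + Y
        return (1 - alpha) * normal_pdf(z) + alpha * normal_pdf(z, mu=1.0)

    # h(Z) = -integral of f_Z log f_Z, via the trapezoidal rule
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        z = lo + i * h
        fz = f_Z(z)
        val = -fz * math.log(fz) if fz > 0 else 0.0
        total += val * (0.5 if i in (0, n - 1) else 1.0)
    h_Z = total * h

    h_Z_given_X = 0.5 * math.log(2 * math.pi * math.e)  # = h(Y)
    return h_Z - h_Z_given_X

print(mutual_information(0.5))  # small positive value, below log 2 = H(X)
```

The result is bounded above by $H(X)$, and by symmetry of the "channel" $X \mapsto X+Y$ it is maximized at $\alpha=\tfrac12$.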
|probability|analysis|statistics|information-theory|mutual-information|
1
Are the two functions different by a constant?
Started from \begin{align} \int{\frac{1}{\cos x}dx}&=\int{\frac{\cos x}{1-{{\sin }^{2}}x}dx}\\ &=\tanh^{-1}(\sin x)+C\quad\ldots\ldots\ldots\ldots\text{ (*)}\\ &=\ln\sqrt{\left|\frac{\sin x+1}{\sin x-1}\right|}+C \end{align} (*) can be obtained by setting $\tanh Y=\sin x=\frac{e^{2Y}-1}{e^{2Y}+1}$ . On the other hand, if the Euler formula is used, the result takes a substantially different look, i.e., \begin{align} \int{\frac{1}{\cos x}dx}&=\int\frac{2e^{ix}}{e^{2ix}+1}dx\\ &=-2i\tan^{-1}(e^{ix})+C\\ &=\ln\left|\frac{1-ie^{ix}}{1+ie^{ix}}\right|+C \end{align} But I could not convert one to the other. Would it be possible that the two functions are equivalent, $\left|\frac{1-ie^{ix}}{1+ie^{ix}}\right|\leftrightarrow\sqrt{\left|\frac{\sin x+1}{\sin x-1}\right|}$ ? EDIT: The motivation is that in general trigonometric functions can be easier to integrate in exponential form, but the final results can look intimidatingly different from what is expected.
The steps are the following. $$ \sqrt{\left|\frac{e^{ix}-e^{-ix}+2i}{e^{ix}-e^{-ix}-2i}\right|}= \sqrt{\left|\frac{e^{2ix}-1+2ie^{ix}}{e^{2ix}-1-2ie^{ix}}\right|}= \sqrt{\left|\frac{e^{ix}+i}{e^{ix}-i}\right|^2}=\left|\frac{e^{ix}+i}{e^{ix}-i}\right|, $$ and multiplying numerator and denominator by $-i$ gives $\left|\frac{1-ie^{ix}}{1+ie^{ix}}\right|$ .
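One can also spot-check the claimed equivalence numerically (a quick sketch; the sample points are arbitrary):

```python
import cmath
import math

def lhs(x):
    # sqrt(|(sin x + 1)/(sin x - 1)|)
    return math.sqrt(abs((math.sin(x) + 1) / (math.sin(x) - 1)))

def rhs(x):
    # |(1 - i e^{ix})/(1 + i e^{ix})|
    e = cmath.exp(1j * x)
    return abs((1 - 1j * e) / (1 + 1j * e))

for x in (0.3, 1.0, 2.5, -0.7, 4.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
```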
|calculus|trigonometry|
1
Why is the group of rotations in $\mathbb{R}^n$ not $n$-dimensional?
I have a rather basic misunderstanding about Lie groups and Lie algebras. Consider the Lie group $SO(N)$ for $N>3$ of rotations on $\mathbb{R}^N$ . On the one hand this Lie group has dimension $N(N-1)/2$ , since every $SO(N)$ element can be parametrized as $e^{X}$ , where $X$ is an anti-symmetric matrix with $N(N-1)/2$ free parameters. On the other hand, $\mathbb{R}^N$ has $N$ cardinal axes. Can I not express every rotation in $\mathbb{R}^N$ as products of rotations about the axes, implying that $SO(N)$ has dimension $N$ ? Where do the additional degrees of freedom come from? A related follow-up question: each Lie algebra generator $X_{ij}$ , for $1 \leq i < j \leq N$ , generates a one-parameter group of rotations $O(\theta) = e^{\theta X_{ij}}$ . Is there a simple geometric explanation for which axis this rotation is about?
Nobody seems to have mentioned $\mathbb R=\mathbb R^1$ yet, and yet it would seem to be a good answer to the question: the group of rotations of $\mathbb R^1$ is $0$ -dimensional. Namely, it consists of only two "rotations": the identity map and the antipodal map $x\mapsto -x$ .
|linear-algebra|geometry|lie-groups|
0
What does this sentence mean in polynomial rings?
I have trouble understanding this sentence in the context of polynomial rings: "We will assume that polynomials satisfy right evaluation. That is, a polynomial can only be evaluated once it is written so that powers of $x$ appear to the right of their coefficients". I don't understand this statement. I would really appreciate it if someone could explain it to me. Thanks in advance.
(clarification from comments: this is from "Null ideals of subsets of matrix rings over fields" by N. J. Werner, and the polynomials are over a noncommutative ring) They define the evaluation map $ev_r : \ldots \to R$ only on polynomials of the form $f(x) = \sum\limits_{i=0}^n a_i x^i$ , with $f \mapsto \sum\limits_{i=0}^n a_i r^i$ . Unlike in the commutative case, this definition need not extend to a homomorphism $R[x] \to R$ , hence the need for a convention. Edit: see also Noncommutative rings and the evaluation homomorphism
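A concrete way to see why the convention matters (a toy sketch with $2\times 2$ integer matrices, my own example): to right-evaluate $x\,a$ one must first rewrite it as $a\,x$, and $a\,r \neq r\,a$ in general.

```python
def matmul(A, B):
    # 2x2 matrix product over the integers
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[0, 1], [0, 0]]   # a coefficient from the ring M_2(Z)
r = [[0, 0], [1, 0]]   # the evaluation point

# "right evaluation" of a*x at r gives a*r; naively evaluating x*a
# factor-by-factor would give r*a, a different matrix:
assert matmul(a, r) == [[1, 0], [0, 0]]
assert matmul(r, a) == [[0, 0], [0, 1]]
assert matmul(a, r) != matmul(r, a)
```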
|ring-theory|polynomial-rings|
1
Doubt in a problem related to homotopy lifting
This was a problem I came across while studying the proof of the Monodromy Theorem. Let $p: X \to B$ be a covering map (assume $X$ to be path connected). Let $F:I \times I \to B$ be a continuous map ( $I$ is the interval $[0,1]$ ). Let $F(0,0)=b_0$ and let $x_0 \in p^{-1}(b_0)$ . Let $\widetilde{F}:I \times I \to X$ be the unique lift given by the Homotopy Lifting Property such that $\widetilde{F}(0,0) = x_0$ . If $F(0,t)=b_0$ for $t \in I$ , then $\widetilde{F}(0,t) = x_0$ for $t \in I$ . If $F(1,t)=b_1$ for $t \in I$ (for some $b_1 \in B$ ), then there exists an $x_1 \in p^{-1}(b_1)$ such that $\widetilde{F}(1,t) = x_1$ for $t \in I$ . I was able to do the first part using the Path Lifting Property. However I've been stuck on the second part for hours. I tried to do something using the paths $F(t,0)$ and $F(t,1)$ but to no avail. I'm not even sure that the problem is correct (i.e. free of typos). I would appreciate some help.
The second property is also due to a basic application of path lifting: the path $t \mapsto \tilde{F}(1, t)$ is a lift of $t \mapsto F(1, t)$ . Since the latter is constant at $b_1$ , and the constant path at $\tilde{F}(1,0)$ is also a lift starting at the same point, uniqueness of path lifting forces $t \mapsto \tilde{F}(1, t)$ to be constant at some $x_1 \in p^{-1}(b_1)$ .
|algebraic-topology|
1
Logarithmic Equation Involving Trigonometric Functions
Solve the following equation in real numbers: $\log_2(\sin x) + \log_3(\tan x) = \log_4(\cos^2 x) + \log_5(\cot x)$ My approach: $\log_2(\sin x) + \log_3\left(\frac{\sin x}{\cos x}\right) = \log_2(\cos x) + \log_5\left(\frac{\cos x}{\sin x}\right)$ Now the equation can be written as: $\log_2(\sin x) + \log_5(\sin x) + \log_3(\sin x) = \log_2(\cos x) + \log_5(\cos x) + \log_3(\cos x)$ How can I continue my proof?
I think it's better to rearrange as below $$\log_2(\sin x) + \log_3(\tan x) = \log_4(\cos^2 x) + \log_5(\cot x)\\\log_4(\sin^2 x) + \log_3(\tan x) = \log_4(\cos^2 x) + \log_5(\cot x)\\\log_4(\sin^2 x) -\log_4(\cos^2 x) = \log_5(\cot x)- \log_3(\tan x)\\\log_4\left(\frac {\sin^2 x}{\cos^2 x}\right)=\log_5\left((\tan x)^{-1}\right)-\log_3(\tan x)\\\log_4(\tan^2x)=-\log_5(\tan x)-\log_3(\tan x)\\$$ can you take over? Second hint: the left side is increasing in $\tan x$ while the right side is decreasing, and both vanish at the same point, so the only solution is ???
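One can confirm numerically that $x=\pi/4$ satisfies the original equation, with both sides equal to $\log_2(\sqrt2/2) = -\tfrac12$ (my own check, not part of the hint):

```python
import math

def lhs(x):
    return math.log(math.sin(x), 2) + math.log(math.tan(x), 3)

def rhs(x):
    return math.log(math.cos(x) ** 2, 4) + math.log(1.0 / math.tan(x), 5)

x = math.pi / 4
# both sides equal log_2(sqrt(2)/2) = -1/2 at x = pi/4
assert abs(lhs(x) - rhs(x)) < 1e-12
assert abs(lhs(x) + 0.5) < 1e-12
```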
|functions|trigonometry|logarithms|
0
How to integrate $\int \frac{1}{\cos(x)}\,\mathrm dx$
could you help me on this integral ? $$\int \frac{1}{\cos(x)}\,\mathrm dx$$ Here's what I've started : $$\int \frac{1}{\cos(x)}\,\mathrm dx = \int \frac{\cos(x)}{\cos(x)^2}\,\mathrm dx = \int \frac{\cos(x)}{1-\sin(x)^2}\,\mathrm dx$$ Now, I did : $u = \sin(x)$ , so $\mathrm du = 1$ . Now I have : $$\int \frac{\text{???}}{1-u^2}\,\mathrm du$$ But at this point, I think I did the most of the job but I'm stuck. Could you help me to solve this integral please (to the integration by substitution at the end) ? Thanks EDIT : Now I follow the steps and I got : $$\int \frac{1}{1-u^2}\,\mathrm du$$ Doing the partial fraction I got $A = 1/2$ and $B = 1/2$ . So basically I have : \begin{align} & \int \frac{1}{1-u^2}\,\mathrm du = \int \frac{1/2}{1+u}\,\mathrm du + \int \frac{1/2}{1-u}\,\mathrm du \\[8pt] = {} & \frac 1 2 \left(\int \frac{1}{1+u} \, du + \int \frac{1}{1-u} \, du\right) \\[8pt] = {} & \frac 1 2 \ln\left(\frac{1+u}{1-u}\right) = \ln\left(\left(\frac{1+\sin(x)}{1-\sin(x)}\right)^{1/2}\right) + C \end{align}
Actually, the standard approach (Euler formula) when integrating trigonometric functions can be used, but it will take a different form, i.e., \begin{align} \int{\frac{1}{\cos x}dx}&=\int\frac{2e^{ix}}{e^{2ix}+1}dx\\ &=-2i\tan^{-1}(e^{ix})+C\\ &=\ln\left|\frac{1-ie^{ix}}{1+ie^{ix}}\right|+C \end{align} which is equivalent to $\ln\sqrt{\left|\frac{\sin x+1}{\sin x-1}\right|}+C$ since \begin{align} \sqrt{\left|\frac{\sin x+1}{\sin x-1}\right|}= \sqrt{\left|\frac{e^{ix}-e^{-ix}+2i}{e^{ix}-e^{-ix}-2i}\right|}= \sqrt{\left|\frac{e^{2ix}-1+2ie^{ix}}{e^{2ix}-1-2ie^{ix}}\right|}= \sqrt{\left|\frac{e^{ix}+i}{e^{ix}-i}\right|^2}=\left|\frac{1-ie^{ix}}{1+ie^{ix}}\right| \end{align}
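As a numerical check that the exponential-form antiderivative really differentiates back to $\sec x$ (a sketch using a central finite difference; the step size and sample points are my choices, restricted to where $\cos x > 0$):

```python
import cmath
import math

def F(x):
    # the antiderivative ln|(1 - i e^{ix})/(1 + i e^{ix})|
    e = cmath.exp(1j * x)
    return math.log(abs((1 - 1j * e) / (1 + 1j * e)))

h = 1e-6
for x in (-1.0, -0.3, 0.5, 1.2):   # points with cos x > 0
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(deriv - 1.0 / math.cos(x)) < 1e-6
```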
|integration|trigonometry|indefinite-integrals|
0
Simple probability doubt regarding relation between two events.
Say there is a competition with 3 participants A, B, and C. There are no ties. We know that A does not come 3rd. So we can say that the probability of A coming 1st/2nd is 0.5 So will it be correct to say that the probability of the 1st/2nd position being occupied by A is also 0.5 with the probability of it being occupied by B or C being 0.25? Or will it still be 1/3?
Your possible outcomes given that A is not in 3rd place are (order: 1st, 2nd, 3rd): BAC, ABC, CAB, ACB. The probability that A is in the 1st position is 1/2, B 1/4, C 1/4. Same for the second position.
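The enumeration above can be verified mechanically (a trivial sketch, assuming all six finishing orders are equally likely a priori):

```python
from fractions import Fraction
from itertools import permutations

# condition on "A does not finish 3rd"
orders = [o for o in permutations("ABC") if o[2] != "A"]

def prob(team, position):  # position: 0 = 1st, 1 = 2nd, 2 = 3rd
    return Fraction(sum(o[position] == team for o in orders), len(orders))

assert prob("A", 0) == Fraction(1, 2)
assert prob("B", 0) == prob("C", 0) == Fraction(1, 4)
assert prob("A", 1) == Fraction(1, 2)   # same for the second position
```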
|probability|
1
Pairwise Distance Distribution on a Flat Torus
I'm currently trying to work out the following. Given two uniformly distributed random variables on the unit flat torus (i.e. $[0,1]^2$ which wraps around), what is the probability that their distance is less than some real number $r$ ? Further I will need the p.d.f, but this is just differentiating. Also if there's a better name for this I'd love to know it! I've got the following: If $0\leq r \leq 0.5$ , then $P(d(X,Y)\leq r)=\pi r^2$ , so the p.d.f is $f_{d(X,Y)}(r)=2\pi r$ . If $r\geq\sqrt{2}/2$ , the c.d.f. is $1$ . If $0.5 < r < \sqrt{2}/2$ , I think the c.d.f is $\pi r^2$ minus the overlap of the circle around a point when the area of that circle wraps round the torus, something like this: Now I get this area to be $8r\arccos\left(\frac{1}{2r}\right)-2\sqrt{r^2-\frac{1}{4}}$ , which differentiates to $$ 8\arccos\left(\frac{1}{2r}\right)-\frac{8}{r\sqrt{4-r}}+\frac{2r}{\sqrt{r^2-\frac{1}{4}}} $$ for the p.d.f. However, this has a singularity at $r=1/2$ , whereas it should be $\pi$ since that's where the p.d.f for
Your guess is not quite correct. Actually, by translation invariance, you can always fix one point to be the origin (up to a lattice translation). Your problem thus becomes finding the distance distribution of a uniformly distributed point on a square of unit side length with respect to the center of the square. You are correct for the cdf for $r\leq 1/2$ and $r\geq\sqrt2/2$ , but not for the remaining interval. The correct area is rather the area of the intersection of the disk of radius $r$ and the square. To find it, decompose it into circular wedges and triangles. Your formula therefore changes to: $$ F(r)=\pi r^2-4r^2\arccos\left(\frac{1}{2r}\right)+2\sqrt{r^2-\frac{1}{4}},\qquad \frac12\leq r\leq\frac{\sqrt2}{2}, $$ so the total cdf is: $$ F(r) = \begin{cases} \pi r^2 & 0\leq r \leq \frac12 \\ \pi r^2-4r^2\arccos\left(\frac{1}{2r}\right)+2\sqrt{r^2-\frac{1}{4}} & \frac12\leq r\leq\frac{\sqrt2}{2} \\ 1 & r\geq\frac{\sqrt2}{2}, \end{cases} $$ and you can check that $F$ is continuous at $r=\frac12$ and that $F(\sqrt2/2)=1$ , and by differentiation you get the pdf: $$ f(r) = \begin{cases} 2\pi r & r \leq \frac12 \\ 2r(\pi-4\arccos(1/2r)) & r\geq\frac12 \end{cases} $$ It's actually faster to get the pdf directly, as it's just the length of the part of the circle of radius $r$ inside the square (the red dotted lines in your figure); the $2\pi r$ is the length of the full circle.
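A Monte Carlo simulation agrees with the corrected cdf (a sketch; the piecewise `cdf` below uses the formula $\pi r^2-4r^2\arccos(1/(2r))+2\sqrt{r^2-1/4}$ on the middle interval, and the sample size and test radii are my choices):

```python
import math
import random

def torus_dist(p, q):
    # shortest distance on the unit flat torus: each coordinate difference
    # is taken the shorter way around
    dx = abs(p[0] - q[0]); dx = min(dx, 1.0 - dx)
    dy = abs(p[1] - q[1]); dy = min(dy, 1.0 - dy)
    return math.hypot(dx, dy)

def cdf(r):
    if r <= 0.5:
        return math.pi * r * r
    if r >= math.sqrt(2) / 2:
        return 1.0
    return (math.pi * r * r - 4 * r * r * math.acos(1 / (2 * r))
            + 2 * math.sqrt(r * r - 0.25))

random.seed(0)
N = 200_000
samples = [torus_dist((random.random(), random.random()),
                      (random.random(), random.random())) for _ in range(N)]
for r in (0.3, 0.55, 0.65):
    empirical = sum(d <= r for d in samples) / N
    assert abs(empirical - cdf(r)) < 0.01
```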
|probability|geometry|
1
When is cdf $F_{X_1+\dots+X_n}(c)$ of sum of iid zero mean random variables decreasing in sample size $n$?
Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables with mean zero (e.g., $N(0,1)$ ). Let $n > m$ and $c \geq 0$ . I want to show that $$ P\left(\sum_{i=1}^n X_i \leq c \right) \leq P \left( \sum_{i=1}^m X_i \leq c \right).$$ In view of existing concentration bounds like Hoeffding's inequality, which scale with the length of the sequence, i.e. $n$ and $m$ , I would think that the above statement should hold. Edit: Since it was pointed out that this doesn't hold for specific cases where $n$ and $m$ are small and the distribution of $X_i$ is discrete, assume that $X_1, X_2, \dots$ are Gaussian and $n$ sufficiently large.
Take $X_i$ such that $X_i=-2$ with probability $\dfrac{1}{3}$ and $X_i=1$ with probability $\dfrac{2}{3}$ . A direct calculation shows that $\mathbb{E}[X_i]=0$ . Take $n=2, m=1$ , $c=0$ . Obviously, $$P\left(\sum_{i=1}^m X_i \leq c\right)=P(X_1\leq 0)=P(X_1=-2)=\dfrac{1}{3}$$ Now notice that $X_1+X_2>0$ iff $X_1=X_2=1$ , so $$P\left(\sum_{i=1}^n X_i \leq c\right)=1-P(X_1=1, X_2=1)=1-P(X_1=1) P(X_2=1)=1-\left(\dfrac{2}{3}\right)^2=\dfrac{5}{9}>\dfrac{1}{3}$$
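The counterexample probabilities can be confirmed by exhaustive enumeration (a small sketch using exact rational arithmetic):

```python
from fractions import Fraction
from itertools import product

dist = {-2: Fraction(1, 3), 1: Fraction(2, 3)}   # law of each X_i

def prob_sum_le(n, c):
    # P(X_1 + ... + X_n <= c) by enumerating all 2^n outcomes
    total = Fraction(0)
    for outcome in product(dist, repeat=n):
        p = Fraction(1)
        for v in outcome:
            p *= dist[v]
        if sum(outcome) <= c:
            total += p
    return total

assert prob_sum_le(1, 0) == Fraction(1, 3)
assert prob_sum_le(2, 0) == Fraction(5, 9)   # larger, so monotonicity fails
```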
|probability|probability-theory|concentration-of-measure|
0
Proving convexity of a function using Linear Algebra
The question is: State true or false: The function, $f(x_1, x_2, x_3)$ = $2x_1^2 + 3x_2^2 + 2x_3^2− 3x_1x_2 − x_2x_3 − 2x_1x_3$ is convex. This is a part of a course that is mostly just linear algebra and I know it has something to do with finding whether all eigenvalues are positive or not. But that's just dirty work. I could not find any theory behind it to study it more wholly and completely. (I did get the eigenvalues as positive by the way => hence convex)
You have a function $f : \mathbb R^n \to \mathbb R$ which can be written in the form $f(x) = x'Hx$ where $H$ is a symmetric matrix. The intuition is that $f$ is convex if and only if the restriction of $f$ to any line is still convex. That is, you want to check that for any $x \in \mathbb R^n$ and $v \in \mathbb R^n$ , $v \neq 0$ , the functions $$ \phi : \mathbb R \to \mathbb R : t \mapsto f(x + tv) = (x+tv)'H(x+tv) $$ are all convex. With simple algebra you can expand the product $(x + tv)'H(x+tv)$ , which will give you a polynomial in $t$ . Then you will find that the condition for all these functions to be convex is exactly that $$ v'Hv \geq 0 \quad \forall v \in \mathbb R^n, v \neq 0$$ When this condition on $H$ is met we say that $H$ is non-negative definite, which is equivalent to all the eigenvalues of $H$ being greater than or equal to $0$ . To see why negative eigenvalues would cause problems, imagine that we had a unit eigenvector $e_i$ with an eigenvalue $\lambda_i < 0$ ; then we would have $$ e_i' H e_i = \lambda_i e_i' e_i = \lambda_i < 0. $$
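For the specific $f$ in the question, one can also check definiteness without computing eigenvalues at all, via Sylvester's criterion on the leading principal minors (a sketch; the matrix entries are read off from the quadratic form, with off-diagonal entries equal to half the cross-term coefficients):

```python
# H for f = 2x1^2 + 3x2^2 + 2x3^2 - 3x1x2 - x2x3 - 2x1x3
H = [[ 2.0, -1.5, -1.0],
     [-1.5,  3.0, -0.5],
     [-1.0, -0.5,  2.0]]

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Sylvester's criterion: H is positive definite iff all leading
# principal minors are positive
minors = [det([row[:k] for row in H[:k]]) for k in (1, 2, 3)]
print(minors)  # [2.0, 3.75, 2.5] -> all positive, so f is strictly convex
```

Positive definiteness of $H$ is equivalent to all eigenvalues being positive, matching the eigenvalue computation mentioned in the question.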
|linear-algebra|matrices|eigenvalues-eigenvectors|convex-analysis|
0
Is there any prime $p$ such that $6x^3 − p^2 − y^2 = 0$ has an integer solution?
I need to find whether there is any prime $p$ for which $6x^3 − p^2 − y^2 = 0$ has an integer solution. For primes $p \neq 3$ , considering this equation modulo $3$ , I find that there is no solution. But for $p=3$ I could not determine whether there is a solution or not. Please help.
For $p=3$ we have: $$6x^3-9-y^2=0\implies 3\,|\,y$$ Writing $y=3Y$ we then have $$6x^3-9-9Y^2=0\implies 2x^3-3-3Y^2=0\implies 3\,|\,x$$ Writing $x=3X$ we then have $$54X^3-3-3Y^2=0\implies 18X^3-1-Y^2=0$$ And now working $\pmod 3$ shows that there are no solutions.
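The mod-3 obstruction and a brute-force search agree (a quick sketch; the search bound is arbitrary):

```python
from math import isqrt

# quadratic residues mod 3 are {0, 1}, so Y^2 = -1 = 2 (mod 3) is impossible
assert {y * y % 3 for y in range(3)} == {0, 1}

# sanity check: no small solutions of 6x^3 - 9 - y^2 = 0
solutions = []
for x in range(1, 10_000):
    t = 6 * x ** 3 - 9
    if t < 0:
        continue
    y = isqrt(t)
    if y * y == t:
        solutions.append((x, y))
assert solutions == []
```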
|number-theory|modular-arithmetic|polynomial-congruences|
1
Find two coins of weight a, among n coins, where n-2 coins are coins of weight b
Task: Among the $n$ coins there are exactly 2 coins of weight $a$ , and exactly $n − 2$ coins of weight $b$ , with $a < b$ . It is allowed to compare the weight of any two coins in one turn (there are three comparison results: lighter, heavier, of the same weight). Purpose: to determine the weight of each coin. Prove that there is a constant $C$ such that at least $\frac{2n}{3} − C$ comparisons are necessary. Some thoughts: Not sure where to begin. As far as I understand, in this task it is necessary to prove that in fewer than roughly $\frac{2n}{3}$ comparisons it is impossible to determine the weight of each coin. UPD: In the comments, people left really good advice on solving this task, but it is unclear how to link this to the constant $C$ . It is clear that at best it is possible to determine the weight of all coins in 2 comparisons, and in $\frac{2n}{3}$ comparisons in the worst case. But how exactly to relate this to the constant $C$ is not entirely clear to me.
Sudeep has given a nice hint in the comments. First observe that for $n=3$ , you require 2 moves in the worst case, so $C\le 0$ . By "determine" I mean deduce the type of coin. Next, note that we can divide the $n$ coins into $n/3$ parts or $\lfloor n/3\rfloor+1$ parts depending on $n \text{ mod } 3$ . In the second case, the last part contains one or two coins, while all other parts contain three coins. It takes two moves to completely determine a triplet of coins (can you see why?). So, for $n=3k$ , you require $2n/3$ moves, for $n=3k+1$ , the last part has length $1$ and is completely determined by the results of the previous part, so you only need to determine the first $k$ parts. For $n = 3k+2$ , you need to determine all the $k+1$ parts, but for the last part, you only need one move. Note that $\lfloor 2n/3 \rfloor = 2k+1$ , so we're good.
|discrete-mathematics|algorithms|game-theory|algorithmic-game-theory|
1
Exchanging limit and inferior limit
Let $b(k,M)$ be a real sequence such that for any $k$ , $b(k,M)\in [-M,M]$ . I know that $\lim_{M\to+\infty} \sup_{k\geq0} |b(k,M)-a(k)|=0 $ , meaning that $b(k,M)$ converges to $a(k)$ as $M\to+\infty$ uniformly in $k$ . The limit $\lim_{k\to+\infty} a(k)=L\in \mathbb{R}$ exists. I want to prove that $\lim_{k\to +\infty} \lim_{M\to+\infty} b(k,M)\geq \lim_{M\to+\infty} \liminf_{k\to+\infty} b(k,M)$ . If I knew that for every $M$ (or for all sufficiently large $M$ ) the limit $\lim_{k\to +\infty}b(k,M)$ was well defined, then I would actually have equality in the previous inequality due to the uniform convergence. But since I don't know whether $\lim_{k\to+\infty} b(k,M)$ exists, I would be satisfied with the inequality.
To prove the inequality $$\lim_{k \to +\infty} \lim_{M \to +\infty} b(k,M) \geq \lim_{M \to +\infty} \liminf_{k \to +\infty} b(k,M),$$ let's first understand the given conditions and what we're trying to achieve. Given Conditions: $b(k,M)$ is a real sequence such that $b(k,M) \in [-M, M]$ for any $k$ . $\lim_{M \to +\infty} \sup_{k \geq 0} |b(k,M) - a(k)| = 0$ . This means $b(k,M)$ converges uniformly to $a(k)$ as $M \to +\infty$ . $\lim_{k \to +\infty} a(k) = L \in \mathbb{R}$ . To Prove: $$\lim_{k \to +\infty} \lim_{M \to +\infty} b(k,M) \geq \lim_{M \to +\infty} \liminf_{k \to +\infty} b(k,M).$$ Approach and Proof: Since $b(k,M)$ converges uniformly to $a(k)$ , for every $\epsilon > 0$ there exists an $M_\epsilon$ such that for all $M > M_\epsilon$ and for all $k$ , $|b(k,M) - a(k)| < \epsilon$ . Given $\lim_{k \to +\infty} a(k) = L$ , for every $\epsilon > 0$ there exists a $K_\epsilon$ such that for all $k > K_\epsilon$ , $|a(k) - L| < \epsilon$ . Consider $\liminf_{k \to +\infty} b(k,M)$ . This is t
|limits|analysis|uniform-convergence|limsup-and-liminf|
0
how to evaluate $ \int_0^1 \frac{\arctan \left(x \pm \sqrt{x^2+1}\right)}{x+1} d x $
how to evaluate $$ \int_0^1 \frac{\arctan \left(x \pm \sqrt{x^2+1}\right)}{x+1} d x $$ Denote: \begin{align*} I &= \int_0^1 \frac{\arctan \left(x+\sqrt{x^2+1}\right)}{x+1} \,dx \\ J &= \int_0^1 \frac{\arctan \left(x-\sqrt{x^2+1}\right)}{x+1} \,dx \\ \left(x+\sqrt{x^2+1}\right)\left(x-\sqrt{x^2+1}\right) &=-1 \Rightarrow\left(x-\sqrt{x^2+1}\right)=\frac{-1}{\left(x+\sqrt{x^2+1}\right)} \\ \Rightarrow \arctan \left(x-\sqrt{x^2+1}\right) &=-\arctan \left(\frac{1}{x+\sqrt{x^2+1}}\right) \\ \end{align*} \begin{align*} I+J &= \int_0^1 \frac{\arctan \left(x+\sqrt{x^2+1}\right)+\arctan \left(x-\sqrt{x^2+1}\right)}{x+1} \,dx \\ &= \int_0^1 \frac{\arctan \left(\frac{x+\sqrt{x^2+1}+x-\sqrt{x^2+1}}{1-\left(x+\sqrt{x^2+1}\right)\left(x-\sqrt{x^2+1}\right)}\right)}{x+1} \,dx \\ &= \int_0^1 \frac{\arctan \left(\frac{2x}{1-\left(x^2-\left(x^2+1\right)\right)}\right)}{x+1} \,dx=\int_0^1 \frac{\arctan (x)}{x+1} \,dx \end{align*}
Integrate by parts \begin{align} &\int_0^1 \frac{\tan^{-1} \left(x +\sqrt{x^2+1}\right)}{x+1} d x\\ =& \ \tan^{-1}\left(x +\sqrt{x^2+1}\right)\ln(1+x)\bigg|_0^1 -\frac12\int_0^1 \frac{\ln(1+x)}{1+x^2}dx\\ =&\ \frac{3\pi}8\ln2 - \frac12\cdot\frac\pi8\ln2=\frac{5\pi}{16}\ln2\\ \end{align} Similarly \begin{align} &\int_0^1 \frac{\tan^{-1} \left(x -\sqrt{x^2+1}\right)}{x+1} d x=-\frac{3\pi}{16}\ln2\\ \end{align}
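Both closed forms can be checked numerically (a sketch using composite Simpson's rule; the node count is my choice):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

I = simpson(lambda x: math.atan(x + math.sqrt(x * x + 1)) / (x + 1), 0.0, 1.0)
J = simpson(lambda x: math.atan(x - math.sqrt(x * x + 1)) / (x + 1), 0.0, 1.0)

assert abs(I - 5 * math.pi / 16 * math.log(2)) < 1e-8
assert abs(J + 3 * math.pi / 16 * math.log(2)) < 1e-8
```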
|calculus|integration|definite-integrals|
1
Weird Functional Equation problem on the irrationals
This weird question was given by my professor as a part of my assignment : $\textbf{Question :}$ “Let $f: \mathbb{R} \to \mathbb{R}$ be a function such that it satisfies $f(x + y) = x f(\frac{1}{y}) + y f(\frac{1}{x})$ , whenever $x$ and $y$ are both irrational numbers. Then prove that $f(0)$ is always $0$ .” $\textbf{My attempt :}$ Since $0$ is rational, we can’t let either of the variables be $0$ . So I tried the substitution $y = -x$ to get $f(0) = x f(\frac{1}{-x}) - x f(\frac{1}{x})$ . I tried proving $f$ to be an even function after this, so that $f(0)$ becomes $0$ . But I couldn’t do so or proceed any further. Can somebody kindly provide me hints or solutions for this problem ?
With $x = y = t$ (any irrational) you get $$ f(2t) = 2 t f(1/t) $$ and with $x = y = 1/(2t)$ you get $$f(1/t) = f(2t)/t$$ Substitute the first into the second and it says $$ f(1/t) = 2 f(1/t)$$ from which you conclude $f(1/t) = 0$ , i.e. $f$ is $0$ on all irrationals. Then use $y = -x$ to get $f(0) = 0$ .
|functional-equations|
1
State-sum construction of the Drinfeld center of a fusion 2-category
If $\mathcal{C}$ is a fusion 1-category, the Turaev-Viro state-sum produces a 3d TQFT whose modular tensor category is the Drinfeld center of $\mathcal{C}$ . In particular this means that the Turaev-Viro construction for two Morita equivalent fusion categories gives the same 3d TQFT. Equivalently said, if $\mathcal{C}$ has Frobenius algebra objects $A$ we can construct Morita equivalent categories by condensing $A$ , which turn out to be the categories of bimodules of $A$ , and this corresponds to all possible Lagrangian algebra objects of the Drinfeld center. My question is whether this extends to fusion 2-categories and 4d TQFTs. I know that there are state-sum constructions of 4d TQFTs from the datum of a braided fusion 1-category (which can be understood as a fusion 2-category with one simple object), like the Crane-Yetter state-sum, or more generally from the datum of a spherical fusion 2-category (by Douglas and Reutter). It seems that still the braided fusion 2-category associat
(Summary of the comments:) The input of the Turaev-Viro model is a spherical fusion 1-category $\mathcal{C}$ , and it produces a fully extended 3d TQFT in the Morita 3-category of tensor categories (cf. DSPS) whose value on the circle $S^1$ gives you the Drinfeld center $\mathcal{Z}(\mathcal{C})$ . In comparison, the Reshetikhin-Turaev model takes a modular tensor category $\mathcal{M}$ as input, and produces a 3d TQFT extended down to 1d, whose value on the circle $S^1$ gives you the original MTC $\mathcal{M}$ . The former 3d TQFT can be viewed as anomaly-free, while the latter 3d TQFT is anomalous; its anomaly can be described by a 4d TQFT, which is exactly the Crane-Yetter model you mentioned at the beginning. This 4d TQFT is (probably not surprisingly) anomaly-free again, which means you can extend it down to a point. Hypothetically this 4d TQFT should be realized as a symmetric monoidal 4-functor from the cobordism 4-category to the 4-category $\mathbf{4Vect}$ . The target 4-category mig
|category-theory|higher-category-theory|topological-quantum-field-theory|fusion-categories|morita-equivalence|
0
What is the relation of the metric matrix with the signature of a Geometric Algebra?
As far as I know, the metric matrix is used to measure the length of vectors regardless of the chosen basis used to represent them. I read in the book " Álgebra Geométrica e Aplicações " by Fernandes, Lavor & Neto that the diagonalized metric matrix is used to identify the signature of a Geometric Algebra by counting the positive values $p$ , negative values $q$ , and zeros $r$ along its diagonal. For example, if I have the matrix $$A = \begin{pmatrix}5 & -3/4 \\ -3/4 & 5/16\end{pmatrix}$$ composed of non-orthogonal basis vectors, how could it be translated into a signature, if possible? I ask this because the diagonalized version of $A$ does not have the values commonly found on its diagonal in definitions of the geometric product of a basis vector with itself: $e_i^2=[1, -1,\text{ or }0]$ . Does this only apply to orthogonal vectors? Also, why is it necessary for the chosen matrix representing the signature of a Geometric Algebra to be diagonalized?
Geometric algebras can also be constructed with dot products that have non-diagonal quadratic form representations $\mathbf{x} \cdot \mathbf{y} = \mathbf{x}^\text{T} A \mathbf{y},$ such as your example $A =\begin{bmatrix}5 & -3/4 \\ -3/4 & 5/16\end{bmatrix}.$ With fractions like that, this quadratic form must be associated with some North-American's wood working shop. As the matrix is symmetric, it must have an orthogonal diagonalization. The eigenvalues are $\begin{aligned}\lambda_1 &= \frac{1}{{32}} \left( {85 + 3 \sqrt{689}} \right) \\ \lambda_2 &= \frac{1}{{32}} \left( {85 - 3 \sqrt{689}} \right) \\ \end{aligned}$ with associated eigenvectors $\begin{aligned}\mathbf{p}_1 &=\begin{bmatrix}-\frac{1}{{8}} \left( { 25 + \sqrt{689}} \right) \\ 1\end{bmatrix} \\ \mathbf{p}_2 &=\begin{bmatrix}-\frac{1}{{8}} \left( { 25 - \sqrt{689}} \right) \\ 1\end{bmatrix}\end{aligned}.$ These eigenvectors can be orthonormalized $\begin{aligned}\mathbf{p}_1 &=\begin{bmatrix}-0.988034 \\ 0.154233\end{bma
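Since the form is only $2\times 2$, the eigenvalues and the resulting signature can be checked in a few lines. The following sketch (my addition, not part of the answer; standard library only) uses the closed-form eigenvalues of a symmetric $2\times 2$ matrix:

```python
import math

# Symmetric quadratic-form matrix from the question
a11, a12, a22 = 5.0, -3.0 / 4.0, 5.0 / 16.0

tr = a11 + a22               # trace = sum of eigenvalues
det = a11 * a22 - a12 * a12  # determinant = product of eigenvalues

# Eigenvalues of a symmetric 2x2 matrix in closed form
disc = math.sqrt(tr * tr - 4.0 * det)
lam1 = (tr + disc) / 2.0
lam2 = (tr - disc) / 2.0

# Compare against the values (85 +/- 3*sqrt(689))/32 quoted in the answer
assert abs(lam1 - (85 + 3 * math.sqrt(689)) / 32) < 1e-12
assert abs(lam2 - (85 - 3 * math.sqrt(689)) / 32) < 1e-12

# Signature: count positive / negative / zero eigenvalues
p = sum(l > 1e-12 for l in (lam1, lam2))
q = sum(l < -1e-12 for l in (lam1, lam2))
r = 2 - p - q
print(p, q, r)  # both eigenvalues are positive: Euclidean signature (2, 0, 0)
```

So despite the odd diagonal entries, the signature of this form is $(p,q,r)=(2,0,0)$.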
|linear-algebra|geometric-algebras|
0
how to evaluate $ \int_0^1 \frac{\arctan \left(x \pm \sqrt{x^2+1}\right)}{x+1} d x $
how to evaluate $$ \int_0^1 \frac{\arctan \left(x \pm \sqrt{x^2+1}\right)}{x+1} d x $$ Denote: \begin{align*} I &= \int_0^1 \frac{\arctan \left(x+\sqrt{x^2+1}\right)}{x+1} \,dx \\ J &= \int_0^1 \frac{\arctan \left(x-\sqrt{x^2+1}\right)}{x+1} \,dx \\ \left(x+\sqrt{x^2+1}\right)\left(x-\sqrt{x^2+1}\right) &=-1 \Rightarrow\left(x-\sqrt{x^2+1}\right)=\frac{-1}{\left(x+\sqrt{x^2+1}\right)} \\ \Rightarrow \arctan \left(x-\sqrt{x^2+1}\right) &=-\arctan \left(\frac{1}{x+\sqrt{x^2+1}}\right) \\ \end{align*} \begin{align*} I+J &= \int_0^1 \frac{\arctan \left(x+\sqrt{x^2+1}\right)+\arctan \left(x-\sqrt{x^2+1}\right)}{x+1} \,dx \\ &= \int_0^1 \frac{\arctan \left(\frac{x+\sqrt{x^2+1}+x-\sqrt{x^2+1}}{1-\left(x+\sqrt{x^2+1}\right)\left(x-\sqrt{x^2+1}\right)}\right)}{x+1} \,dx \\ &= \int_0^1 \frac{\arctan \left(\frac{2x}{1-\left(x^2-\left(x^2+1\right)\right)}\right)}{x+1} \,dx=\int_0^1 \frac{\arctan (x)}{x+1} \,dx \end{align*}
Noting $$ (x+\sqrt{x^2+1})(\sqrt{x^2+1}-x)=1 $$ one has $$\begin{eqnarray} I+J &=& \int_0^1 \frac{\arctan \left(x+\sqrt{x^2+1}\right)+\arctan \left(x-\sqrt{x^2+1}\right)}{x+1} \,dx\\ &=&\int_0^1\frac{\arctan x}{x+1}\,dx=\frac18\pi\ln2\\ I-J &=& \int_0^1 \frac{\arctan \left(x+\sqrt{x^2+1}\right)-\arctan \left(x-\sqrt{x^2+1}\right)}{x+1} \,dx\\ &=&\int_0^1\frac{\frac\pi2}{x+1}\,dx=\frac12\pi\ln2 \end{eqnarray}$$ and hence $$ I=\frac{5\pi}{16}\ln2,J=-\frac{3\pi}{16}\ln2.$$
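The closed forms can be sanity-checked numerically. A small sketch (my own, standard library only) using composite Simpson's rule:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

f_plus = lambda x: math.atan(x + math.sqrt(x * x + 1)) / (x + 1)
f_minus = lambda x: math.atan(x - math.sqrt(x * x + 1)) / (x + 1)

I = simpson(f_plus, 0.0, 1.0)
J = simpson(f_minus, 0.0, 1.0)

# Final closed forms from the answer
assert abs(I - 5 * math.pi * math.log(2) / 16) < 1e-9
assert abs(J + 3 * math.pi * math.log(2) / 16) < 1e-9
# The two intermediate identities also check out
assert abs((I + J) - math.pi * math.log(2) / 8) < 1e-9
assert abs((I - J) - math.pi * math.log(2) / 2) < 1e-9
```

Numerically $I \approx 0.68050$ and $J \approx -0.40830$, matching $\frac{5\pi}{16}\ln 2$ and $-\frac{3\pi}{16}\ln 2$.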
|calculus|integration|definite-integrals|
0
Notational inconsistency for subsets and strict subsets
I am reading "Introduction to Topology" by Gamelin and Greene (2nd edition) and I am confused by what appears to be an inconsistency in notation for subsets and strict subsets. The image below shows page 61 from this book. From the solid red highlighted expressions, it appears that $A \subset B$ means $A$ is a subset (not strict) of $B$ . However, from the blue dashed highlighted expression, it appears that $A \subseteq B$ means $A$ is a subset (not strict) of $B$ . Is this a typo in the book, or have I misunderstood something? Frustratingly, they don't define the symbols $\subset$ and $\subseteq$ anywhere. UPDATE: Thanks to the helpful comments and answer, it's clear that there is a notational inconsistency and the authors mean "subset" (not "strict subset") with both $\subset$ and $\subseteq$ on pg 61. But, is it safe to assume they always mean "subset" (and never have a use for "strict subset") through the entire book? I'm new to topology, and there are other places where I cannot f
$\subset$ and $\subseteq$ have the same meaning. Most authors prefer $\subset$ , but there are also authors using $\subseteq$ . It is just a matter of taste. However, I agree with you that one should not use both symbols in the same text. Taking a quick look into the book, I only found very few occurrences of $\subseteq$ . I would therefore regard these as "unintended" occurrences. Perhaps the authors (there are two of them!) had different preferences, or made typos in the manuscript, or $\subseteq$ was used erroneously in the typesetting done by the book publisher - but that is pure speculation. Your example $\operatorname{int} S \subset S$ from p. 61 contrasts with $\operatorname{int} Y \subseteq Y$ on p. 3. This should say enough: The use is not standardized in the book.
|elementary-set-theory|notation|
1
Help solving a simple ODE?
Consider the ODE $$h'(t) = -k \sqrt{h(t)} \,,$$ where $k$ is a positive constant. Suppose we also know that $h(0)=16$ and $h(30)=4$ . How do we find the function $h(t)$ ? Attempt: we could try integrating the ODE between $t=0$ and $t=30$ : \begin{align*} \int_0^{30} h'(s) ds &= -k \int_0^{30} \sqrt{h(s)} ds \end{align*} the left side of which is $ h(30) - h(0) = 12$ . For the right side, we can use u-substitution with $u= \sqrt{h(s)}$ , and $du = h'(s) ds$ to get \begin{align*} -k \int_0^{30} \sqrt{h(s)} ds &= -k \int_0^{30} \sqrt{u} \frac{1}{h'(s)} du \\ &= -k \int_0^{30} \sqrt{u} \frac{1}{-k\sqrt{h(t)}} du \\ &= -k \int_0^{30} \frac{1}{-k} du \\ &= -30k \,. \end{align*} Hence $12 = -30k$ . But we don't get any more information about $h(t)$ . attempt 2: separation of variables: $$ \frac{dh}{\sqrt{h}} = -kdt $$ integrating both sides, $$ \int_0^{h(t)} \frac{dh}{\sqrt{h}} = -k\int_0^t ds $$ $$2\sqrt{h(t)} = -kt $$ so, $\sqrt{h(t)} = -\frac{kt}{2}$ which is weird because a square root is
(1) We are taking $\sqrt{y}$ hence $y$ can not be negative. (2) Squaring, we can move the terms to get $[dy/dx]^2=k^2y$ (3) Let $y=c_1x^n+\cdots$ , then $dy/dx=c_1nx^{n-1}$ & $[dy/dx]^2=c_1^2n^2x^{2n-2}$ Hence $n=2n-2$ & $c_1k^2=c_1^2n^2$ Hence $n=2$ & $c_1=k^2/4$ (4) Let the Solution be $y=k^2x^2/4+c_2x+c_3$ $[dy/dx]=k^2x/2+c_2$ $[dy/dx]^2=k^4x^2/4+c_2^2+c_2k^2x$ (5) Equate the terms with $k^2y$ to get the whole Solution : $k^2c_2=c_2k^2$ $k^2c_3=c_2^2$ We then have 2 independent unknowns $k,c_2$ here , while $c_1,c_3$ are dependent on the other 2 unknowns. (6) With the boundary conditions , we can get $k$ & $c_1,c_2,c_3$ , which we can plug into the original equation to check the consistency & verify the Solution.
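As a cross-check of this approach: separating variables in the original equation gives the candidate $h(t)=(4-kt/2)^2$ with $k=2/15$ fixed by the boundary conditions (this closed form is my addition, not part of the answer above). A quick numerical verification:

```python
import math

k = 2.0 / 15.0
h = lambda t: (4.0 - k * t / 2.0) ** 2   # candidate solution h(t) = (4 - kt/2)^2

# Boundary conditions h(0) = 16, h(30) = 4
assert abs(h(0) - 16.0) < 1e-12
assert abs(h(30) - 4.0) < 1e-12

# Check h'(t) = -k*sqrt(h(t)) at several points via central differences
# (exact for a quadratic, up to rounding)
for t in [1.0, 5.0, 10.0, 20.0, 29.0]:
    eps = 1e-6
    dh = (h(t + eps) - h(t - eps)) / (2 * eps)
    assert abs(dh + k * math.sqrt(h(t))) < 1e-6
```

Note that the check only holds while $4-kt/2\ge 0$, i.e. on $[0,60]$, which covers the interval of interest.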
|calculus|ordinary-differential-equations|
0
Why do these three conditions imply divergence in probability?
This is a follow up to A martingale converging in distribution but not a.s. or in probability . Suppose we have a sequence of integer-valued RVs that satisfies (1) $P(X_n=a~i.o.)=1$ for each $a=-1,0,1$ (2) $\sup_n|X_n(\omega)|<\infty$ a.s. (3) For some $p\in(0,1/2)$ , $P(X_n=1),P(X_n=-1)\to p$ and $P(X_n=0)\to 1-2p$ . How does one show this sequence of RVs doesn't converge in probability? My Attempts: Clearly $X_n$ converges in distribution to a RV $X$ with $P(X=1)=P(X=-1)=p,P(X=0)=1-2p$ . Assume by contradiction that $X_n$ converges in probability; then it must be the case that $X_n\to X$ in probability. So there must be a subsequence $X_{n_k}\to X$ a.s.. But I couldn't find a way to go forward.
I'm suspicious of this claim. Here's an example. I fold together the values $1$ and $-1$ from your situation, but that simplification is unimportant. Example: On an appropriate probability space $(\Omega,\mathcal F,P)$ let $U$ be uniformly distributed on $(0,1)$ and let $\xi_1,\xi_2,\ldots$ be a sequence of independent $\{0,1\}$ -valued random variables, independent of $U$ , such that $P(\xi_n =1)=1/n$ . For fixed $p\in(0,1)$ define $$ X_n =\begin{cases} 1,&0<U\le p\text{ and }\xi_n=0,\text{ or }p<U<1\text{ and }\xi_n=1,\\ 0,&\text{otherwise.}\end{cases}$$ Then $P(X_n=1)=p\cdot(1-{1\over n})+(1-p){1\over n}$ , so $\lim_n P(X_n=1)=p$ . Likewise, $\lim_n P(X_n=0)=1-p$ . You have $$ P(X_n=1 \hbox{ i.o.})=p\cdot P(\xi_n=0\hbox{ i.o.})+(1-p)\cdot P(\xi_n=1\hbox{ i.o.})=p+(1-p)=1, $$ and similarly $P(X_n=0\hbox{ i.o.})=1$ . Finally, let $Y:=1_{\{U\le p\}}$ . Then $P(X_n\not= Y)\le 2P(\xi_n=1)=2/n\to 0$ as $n\to\infty$ . Thus $X_n$ converges in probability to $Y$ .
|probability|probability-theory|convergence-divergence|martingales|
1
Exchanging limit and inferior limit
Let $b(k,M)$ be a real sequence such that for any $k$ , $b(k,M)\in [-M,M]$ . I know that $\lim_{M\to+\infty} \sup_{k\geq0} |b(k,M)-a(k)|=0 $ , meaning that $b(k,M)$ converges to $a(k)$ when $M\to+\infty$ uniformly in $k$ . There exists $\lim_{k\to+\infty} a(k)=L\in \mathbb{R}$ . I want to prove that $\lim_{k\to +\infty} \lim_{M\to+\infty} b(k,M)\geq \lim_{M\to+\infty} \liminf_{k\to+\infty} b(k,M)$ . If I knew that for any $M$ (or definitely in $M$ ) the limit $\lim_{k\to +\infty}b(k,M)$ was well defined, then I would actually have the equality in the previous inequality due to the uniform convergence. But since I don't know if $\lim_{k\to+\infty} b(k,M)$ exists, I would be satisfied with the inequality.
Assuming $|b(k,M)|\le M$ is useless. Under your two other hypotheses $\forall\epsilon>0\quad\exists M_\epsilon\quad\forall M\ge M_\epsilon,\forall k\quad|b(k,M)-a(k)|\le\epsilon$ $\forall\epsilon>0\quad\exists k_\epsilon\quad\forall k\ge k_\epsilon\quad|a(k)-L|\le\epsilon$ , let us define $$c(M):=\liminf_{k\to+\infty}b(k,M)$$ and prove that $\lim_{M\to\infty}c(M)$ exists, and is not only $\le L$ but $=L$ : $$\lim_{M\to+\infty}c(M)=L.$$ Let $\epsilon>0$ . From the two hypotheses, we derive $$\forall M\ge M_\epsilon\quad\forall k\ge k_\epsilon\quad b(k,M)\in[L-2\epsilon,L+2\epsilon]$$ hence $$\forall M\ge M_\epsilon\quad\forall j\ge k_\epsilon\quad c(j,M):=\inf_{k\ge j}b(k,M)\in[L-2\epsilon,L+2\epsilon]$$ and therefore $$\forall M\ge M_\epsilon\quad c(M)=\lim_{j\to\infty}c(j,M)\in[L-2\epsilon,L+2\epsilon],$$ which ends the proof.
|limits|analysis|uniform-convergence|limsup-and-liminf|
1
Non-deterministic bounded Lévy process
Does there exist a non-deterministic $\mathbb{R}$ -valued Lévy process $(X_t)_{t \in [0, \infty)}$ such that there exist a $t_0 \in (0, \infty) $ and $R>0$ such that $$ P(0<X_{t_0}<R)=1, $$ i.e. $X_{t_0}$ is almost surely positive and bounded by $R$ ? Since Lévy processes are more or less generated by infinitely divisible random variables, this raises the question whether there exists a non-deterministic $\mathbb{R}$ -valued infinitely divisible random variable $X$ (i.e. its law $\mu_X$ is infinitely divisible with respect to convolution) such that there exists $R>0$ with $P(0<X<R)=1$ ; equivalently, $\mu_X((0,R))=1$ . The question is motivated by the problem here: translation invariance of expectation value of hit counting variable for Lévy process I'm seeking to construct a counterexample with a $\mathbb{R}$ -valued Lévy process $(X_t)_{t \in [0, \infty)}$ such that there exist $s,a,u>0$ with $$\mathbb{E}[M_u(a,s)]= \mathbb{E}[M_0(a,s)] $$ where $M_u(a,s)$ is the random variable counting the number of tiles $
No. (I presume that $X_0=0$ .) Under your hypothesis there exists $\epsilon\in(0,R)$ such that $c:=P(X_{t_0}>\epsilon)>0$ . Then for positive integer $n$ , $$ P(X_{nt_0}>n\epsilon)\ge c^n>0. $$ Now choose $n$ so large that $n\epsilon >R$ to see that $P(X_{nt_0}>R)>0$ . In short, $X$ isn't a.s. bounded above.
|stochastic-processes|levy-processes|
0
Prove that $\left\lfloor{\frac{n}{2}}\right\rfloor+\left\lfloor\frac{\left\lceil\frac{n}{2}\right\rceil}{2}\right\rfloor+\cdots=n-1$.
Prove that, for $n\in \Bbb{Z}^+$ , $$\left\lfloor{\frac{n}{2}}\right\rfloor+\left\lfloor\frac{\left\lceil\frac{n}{2}\right\rceil}{2}\right\rfloor+\left\lfloor\frac{\left\lceil\frac{\left\lceil\frac{n}{2}\right\rceil}{2}\right\rceil}{2}\right\rfloor+\cdots = n - 1\,,$$ where there are $\lceil{\log_2n}\rceil$ addends on the left-hand side. I don't know how I could prove this. Any ideas? There is an intimate relationship here with a binary tree where each addend is the number of nodes on that layer, and $n$ is the number of leaves.
Alternative to the prior post, only the following lemmas need to be considered (also the ones I'd mentioned in the previous post): $$\forall (x,n)\in \mathbb{R}×\mathbb{N}, \; \left \lceil \frac{\lceil x \rceil}{n} \right \rceil = \left \lceil \frac{x}{n} \right \rceil$$ or equivalently, $$\forall (m,n,x)\in \mathbb{Z}×\mathbb{N}×\mathbb{R}, \: \left\lceil \frac{\lceil x \rceil + m}{n} \right\rceil = \left\lceil \frac{x+m}{n} \right\rceil$$ $$\forall (a,b)\in \mathbb{Z}×\mathbb{N}, \; \left\lceil \frac{a}{b} \right\rceil = \left\lfloor \frac{a-1}{b} \right \rfloor +1 \iff \left\lfloor \frac{a}{b} \right \rfloor = \left\lceil \frac{a+1}{b} \right\rceil - 1$$ Last, but not least $$\forall (x,n)\in \mathbb{R}×\mathbb{N}, \; \lceil nx \rceil = \sum_{k=0}^{n-1} \left\lceil x-\frac{k}{n} \right\rceil$$ or in other words, Hermite's Ceiling Function Identity (which can easily be obtained by substituting $x\to -x$ into Hermite's Floor Function Identity) The validity of all of these shall be left
|elementary-number-theory|discrete-mathematics|ceiling-and-floor-functions|trees|binary|
0
How to find missing vertex/point of a tetrahedron
I need to find the missing point of a tetrahedron side length $x$ , with three points being $$v_1=(0,0,0) \quad v_2=\left(\frac{1}{2}x,0,\frac{\sqrt3}{2}x\right) \quad v_3=(x,0,0)$$ I can't seem to find the 4th point. Can anyone help? I just need the equation.
That triangle lives in the $xz$ plane. Find its center $P=(a,0,b)$ (which is $1/3$ the sum of the vertices). There are two possible fourth vertices, each of the form $(a,t,b)$ with $t$ chosen so that the point is $x$ units from the origin, since that is the side length of your regular tetrahedron.
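Carrying that recipe out numerically (an illustrative sketch with $x=1$; the variable names are mine):

```python
import math

x = 1.0
v1 = (0.0, 0.0, 0.0)
v2 = (x / 2, 0.0, math.sqrt(3) * x / 2)
v3 = (x, 0.0, 0.0)

# Centroid of the base triangle: one third of the vertex sum
a = (v1[0] + v2[0] + v3[0]) / 3
b = (v1[2] + v2[2] + v3[2]) / 3

# Solve |(a, t, b)| = x for t: t^2 = x^2 - a^2 - b^2
t = math.sqrt(x * x - a * a - b * b)
v4 = (a, t, b)   # the other choice of fourth vertex is (a, -t, b)

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# The apex is at distance x from all three base vertices
for v in (v1, v2, v3):
    assert abs(dist(v4, v) - x) < 1e-12
print(v4)  # apex (x/2, x*sqrt(2/3), sqrt(3)x/6)
```

The apex sits at height $x\sqrt{2/3}$ above the centroid $(x/2,0,\sqrt3\,x/6)$.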
|geometry|solid-geometry|
0
general Convex set definition
A set $C$ is convex if $tC+(1-t)C\subset C$ where $0\leq t \leq 1$ . How do we manage to extend it to $n$ vectors? I managed to get $t^2 C + t(1-t)C + t(1-t)C + (1-t)^2 C \subset C$ , but induction through this process only gives the convexity definition for $2,4,8,\ldots$ vectors only. There is an alternate derivation for the following definition: $C$ is convex $\iff$ $\sum_{i=1}^{n} a_i C \subset C$ with $\sum_{i=1}^{n}a_i=1$ and $0 \leq a_i \leq 1$ . Can somebody show how the two definitions are equivalent and can be derived from each other?
Suppose $a_i\in[0,1]$ ( $i=1,2,3$ ) with $a_1+a_2+a_3=1$ , and $x_1,x_2,x_3\in C$ . At least one of the $a_i$ s is less than $1$ ; let's assume it's $a_3$ . Then $$ a_1x_1+a_2x_2+a_3x_3 = (1-a_3)\left[{a_1\over 1-a_3}x_1+{a_2\over 1-a_3}x_2\right]+a_3x_3. $$ Now $x^*:={a_1\over 1-a_3}x_1+{a_2\over 1-a_3}x_2\in C$ because ${a_1\over 1-a_3}+{a_2\over 1-a_3}=1$ . Therefore $$ a_1x_1+a_2x_2+a_3x_3 = (1-a_3)x^*+a_3x_3\in C $$ as well. You should be able to base a full induction argument on this observation.
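The key identity of the induction step can be spot-checked numerically for scalars (a sketch of my own, not part of the answer):

```python
import random

random.seed(0)
max_err = 0.0
for _ in range(100):
    # random convex weights a1 + a2 + a3 = 1 with a3 < 1
    a1, a2 = random.random(), random.random()
    s = a1 + a2 + random.random()
    a1, a2 = a1 / s, a2 / s
    a3 = 1.0 - a1 - a2                      # a3 < 1 since a1 + a2 > 0
    x1, x2, x3 = random.random(), random.random(), random.random()
    # the "pulled-out" convex combination x* from the answer
    xstar = (a1 / (1 - a3)) * x1 + (a2 / (1 - a3)) * x2
    lhs = a1 * x1 + a2 * x2 + a3 * x3
    rhs = (1 - a3) * xstar + a3 * x3
    max_err = max(max_err, abs(lhs - rhs))
    # the inner weights are again convex: they sum to 1
    assert abs(a1 / (1 - a3) + a2 / (1 - a3) - 1.0) < 1e-12

assert max_err < 1e-12
```

The same rewriting works verbatim for vectors, which is what drives the induction from $n$ to $n+1$ points.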
|topological-vector-spaces|convex-geometry|
0
general Convex set definition
A set $C$ is convex if $tC+(1-t)C\subset C$ where $0\leq t \leq 1$ . How do we manage to extend it to $n$ vectors? I managed to get $t^2 C + t(1-t)C + t(1-t)C + (1-t)^2 C \subset C$ , but induction through this process only gives the convexity definition for $2,4,8,\ldots$ vectors only. There is an alternate derivation for the following definition: $C$ is convex $\iff$ $\sum_{i=1}^{n} a_i C \subset C$ with $\sum_{i=1}^{n}a_i=1$ and $0 \leq a_i \leq 1$ . Can somebody show how the two definitions are equivalent and can be derived from each other?
Fix $x_1,\ldots, x_n,x_{n+1}$ and positive numbers $w_1,\ldots, w_n,w_{n+1}$ with $\sum_{i=1}^{n+1}w_i=1$ . Then $$\sum_{i=1}^{n+1} w_i x_i = (1-w_{n+1})x+w_{n+1}x_{n+1},$$ where $$x=\sum_{i=1}^n \frac{w_i}{1-w_{n+1}}x_i.$$
|topological-vector-spaces|convex-geometry|
0
How to find missing vertex/point of a tetrahedron
I need to find the missing point of a tetrahedron side length $x$ , with three points being $$v_1=(0,0,0) \quad v_2=\left(\frac{1}{2}x,0,\frac{\sqrt3}{2}x\right) \quad v_3=(x,0,0)$$ I can't seem to find the 4th point. Can anyone help? I just need the equation.
In a plane you have these points forming an equilateral triangle. If you know that medians trisect one another, you know the centroid of this triangle is $\frac{1}{3}$ the way along the indicated line. If you didn't know that, you could calculate the midpoint of either side, derive the line equation, and calculate the intersection, to arrive at the same centroid of $(\frac{x}{2},0,\frac{\sqrt{3}}{6}x)$ . The fourth vertex lies above or below this at whatever distance $d$ forms a side of length $x$ : $\frac{x^2}{4}+d^2+\frac{x^2}{12}=x^2 \Rightarrow d=\pm \frac{2}{\sqrt{6}}x$
|geometry|solid-geometry|
0
Subgroups of $\mathbb{Z}_{2}\oplus\mathbb{Z}_{16}$
I wonder, are there any subgroups of $\mathbb{Z}_{2}\oplus\mathbb{Z}_{16}$ isomorphic to $\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}$ ? It seems like the answer is no since $\mathbb{Z}_{16}$ is not isomorphic to $\mathbb{Z}_{4}\oplus\mathbb{Z}_{4}$ , but I want to know a more explicit and accurate proof. Or maybe there is some classification of all subgroups of any finite abelian group like the one we have for $\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}$ ?
You're right. For instance you can argue that there are $12$ elements of order $4$ in $\Bbb Z_4×\Bbb Z_4,$ but in $\Bbb Z_2×\Bbb Z_{16}$ there are only $4.$ Actually for finitely generated abelian groups the invariant factors of a subgroup have to divide those of the parent group. So since $4\not \mid 2,$ that's another way.
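The element-order counts are easy to verify exhaustively; the helper functions below are hypothetical, written just for this check:

```python
from itertools import product
from math import gcd

def order(a, m):
    """Order of a in Z_m."""
    return m // gcd(a, m)

def count_order(m1, m2, d):
    """Number of elements of order d in Z_m1 x Z_m2."""
    cnt = 0
    for a, b in product(range(m1), range(m2)):
        oa, ob = order(a, m1), order(b, m2)
        o = oa * ob // gcd(oa, ob)   # order of (a, b) = lcm of component orders
        cnt += (o == d)
    return cnt

assert count_order(4, 4, 4) == 12   # Z_4 x Z_4 has twelve elements of order 4
assert count_order(2, 16, 4) == 4   # Z_2 x Z_16 has only four
```

Since an isomorphic copy of $\Bbb Z_4\times\Bbb Z_4$ would need twelve elements of order $4$, no such subgroup fits.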
|abstract-algebra|group-theory|abelian-groups|
1
Non-surjective linear operator that is open?
Is there any non-surjective linear operator that is open? By the open mapping theorem, $T:X \to Y$ is open iff $T$ is surjective. Is it possible then to have a non-surjective linear operator that is open? I have been thinking of $T:\mathbb{R}^2 \to \mathbb{R}^2,\ (x, y) \mapsto (x, 0)$ Here $T$ is not surjective but the set $A = \{(x, 0) : x \in \mathbb{R}\} \subset \mathbb{R}^2$ is closed: Let $(z^{(n)})_n \subset A,\ z^{(n)} = (x_n, y_n)$ be some sequence with $\lim x_n = x$ . Then the sequence converges in $A$ , and therefore $A$ is closed. So this wouldn't work as an example.
No such $T$ exists. $X$ , of course, is open in $X$ , so if $T$ is open then $T(X)$ must be open in $Y$ . But $T(X)$ is a linear subspace of $Y$ . If $T$ is not surjective, then $T(X)$ is a proper linear subspace, and a proper linear subspace can never be open: if it contained a ball around the origin, then, being closed under scalar multiplication, it would contain every scalar multiple of that ball and hence all of $Y$ .
|functional-analysis|linear-transformations|open-map|
0
Tautologies in classical logic
According to its truth table, $P \lor \neg P$ is a tautology, i.e. it is true for all truth values of its constituent propositions. But how come that is true in classical logic? $\lor$ is the inclusive 'or', so $P \lor \neg P$ means 'either $P$ , or $\neg P$ , or both of them'. But in classical logic it is never true that both $P$ and $\neg P$ , i.e. $P \land \neg P$ is a contradiction (false for all truth values of its constituent propositions). So why is $P \lor \neg P$ always tautologically true?
$P \lor \neg P~$ is an axiom. It can be seen as the theoretical basis for the construction of truth tables, which are simply a convenient visualization of a proof by cases. Example Consider the truth table for $A \to B$ : There are 4 cases to consider. It translates: $~~~~~~[(A \land B) \to (A\to B)]~~~~~~~~~~~~~$ (Case 1) $~~\land [(A \land \neg B) \to \neg (A\to B)]~~~~~~~~$ (Case 2) $~~\land [(\neg A \land \ B) \to (A\to B)]~~~~~~~~~~$ (Case 3) $~~\land [(\neg A \land \neg B) \to (A\to B)]~~~~~~~~$ (Case 4) Formal Proof of Case $1$ using a form of natural deduction. Plain text version:
Case 1:
1  A & B               Premise
2     A                Premise
3     B                Split, 1
   Discharge premise on line 2
4     A => B           Conclusion, 2
   Discharge premise on line 1
As Required:
5  A & B => [A => B]   Conclusion, 1
|logic|propositional-calculus|intuition|
0
Solving $x^2y''+2xy'+2y=0$ using power series
One way to solve $x^2y''+2xy'+2y=0$ is using the substitution $x=e^t$ . This time I'm asked to use power series (Frobenius method). Assuming $y=\sum_{n\geq 0}a_nx^{n+r}$ , the diff eq reduces to $$\sum_{n\geq 0}(n+r)(n+r-1)a_nx^{n+r}+\sum_{n\geq 0}2(n+r)a_nx^{n+r}+\sum_{n\geq 0}2a_nx^{n+r}=0$$ $$\sum_{n\geq 0}\{(n+r)(n+r-1)+2(n+r)+2\}a_nx^{n+r}=0$$ This is the part where I'm lost at: We need to impose $(n+r)(n+r-1)+2(n+r)+2=0$ as $a_n\neq 0$ , right? Defining $u\equiv n+r$ we get $$u(u-1)+2u+2=u^2+u+2=0\implies u=n+r=-\frac{1}{2}\pm i\frac{\sqrt 7}{2}$$ So I'm guessing the solution should look like $$y=\sum_{n\geq 0}a_nx^{-\frac{1}{2}+ i\frac{\sqrt 7}{2}}+\sum_{n\geq 0}a_nx^{-\frac{1}{2}- i\frac{\sqrt 7}{2}}$$ But what about the $a_n$ s? What can be said about them? In view that the differential equation is a $2$ nd order one, shouldn't there be only two coefficients $a_0$ and $a_1$ determined by boundary conditions? Therefore, somehow, $$y=a_0x^{-\frac{1}{2}+ i\frac{\sqrt 7}{2}}+a_1x^
Let's call the indicial roots $r_+ = -1/2 + i \sqrt{7}/2$ and $r_- = -1/2 - i \sqrt{7}/2$ . Your solutions (for $x > 0$ ) are $$y = a x^{r_{+}} + b x^{r_{-}} = \frac{a}{\sqrt{x}} \exp(i \sqrt{7} \ln(x)/2) + \frac{b}{\sqrt{x}} \exp(-i \sqrt{7} \ln(x)/2)$$ Now use the formulas $\exp(it) = \cos(t) + i \sin(t)$ and $\sin(t + B) = \sin(t) \cos(B) + \cos(t) \sin(B)$ to express $\sin(\sqrt{7} \ln(x)/2+B)/\sqrt{x}$ in this form.
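One can verify numerically that the resulting real-valued function solves the original equation (a sketch of my own, with finite-difference derivatives; the tolerances are assumptions):

```python
import math

s = math.sqrt(7) / 2

def y(x):
    # Real solution built from x^(-1/2 +/- i sqrt(7)/2): sin(sqrt(7) ln(x)/2)/sqrt(x)
    return math.sin(s * math.log(x)) / math.sqrt(x)

def residual(x, h=1e-4):
    """x^2 y'' + 2x y' + 2y, with central-difference derivatives."""
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return x * x * ypp + 2 * x * yp + 2 * y(x)

# Residual should vanish up to discretization error
for x in (0.5, 1.0, 2.0, 5.0):
    assert abs(residual(x)) < 1e-4
```

The cosine variant $\cos(\sqrt7\ln(x)/2)/\sqrt{x}$ passes the same check, so any real linear combination (equivalently, the phase-shifted sine) is a solution for $x>0$.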
|ordinary-differential-equations|power-series|frobenius-method|
0
matrix with 'row space = column space', but 'null space $\neq$ left null space'.
Would you please give an example of a matrix with row space equal to the column space but null space is not equal to the left nullspace? Or explain why it is not possible? Thanks
Notice that whenever you have the same column and row space for a matrix $A$ , we can express the columns of $A$ with some matrix $M$ utilizing the rows: $MA=A^T$ . (Also, both matrices are square.) Now, if $Ax=0$ , then $(MA)x=0$ , so that $A^Tx=0$ . Which means that $x\in N(A)\rightarrow x\in N(A^T)$ . Similarly, we can achieve the other direction.
|linear-algebra|
0
Book Recommendation for Integer partitions and $q$ series
I have been studying number theory for a little while now, and I would like to learn about integer partitions and $q$ series, but I have never studied anything in the field of combinatorics, so are there any prerequisites or things I should be familiar with before I try to study the latter?
You can try reading An Introduction to q-analysis by Warren P. Johnson. It does not have many prerequisites either.
|combinatorics|reference-request|book-recommendation|integer-partitions|q-series|
0
$L[a] \cap 2^{\omega}$ is $\Sigma_2^1$
I have the following question: Let $a\in \mathbf{R}$ such that $X = L[a] \cap 2^{\omega}$ is uncountable. Why is $X$ a $\Sigma_2^1$ set? $L[a]$ is the inner model that can be built by constructibility relative to $a$ . Thank you very much for any help in advance.
The proof should be available in Kanamori's book, and the proof is not trivial for someone who meets this type of argument for the first time. You may learn how to encode set-theoretic statements via arithmetical statements from Simpson's book on Reverse Mathematics (especially, the chapter on $\beta$ -models.) The trick is that $L[a]$ is absolute for transitive models of a sufficiently strong fragment of $\mathsf{ZFC}$ containing $a$ . Note for the sufficiently strong fragment: $\mathsf{ZFC}^-$ , that is, $\mathsf{ZFC}$ without Powerset but with Collection, should suffice, but $\mathsf{KP}$ also works. You can see that if $x\in L[a]\cap\mathbb{R}$ , then $x\in L_\alpha[a]$ for some countable $\alpha$ . Furthermore, you can see that $L_\alpha[a]$ can be a model of $\mathsf{ZFC^-} + (V=L)$ . Proof. Let $\kappa=\omega_1^{L[a]}$ . Then $L_\kappa[a]$ is a model of $\mathsf{ZFC}^-+(V=L)$ and contains $x$ . Take a countable elementary submodel $M\prec L_\kappa[a]$ containing $x$ and consider the transitive collap
|set-theory|descriptive-set-theory|
1
tensor notation in complex geometry
Suppose I have $g$ , a symmetric, Hermitian metric on a complex manifold of complex dimension $n$ . Can anyone please suggest how to evaluate the following tensorial quantity (assuming Einstein summation convention): $$g^{\gamma \bar{\delta}}g_{\alpha \bar{\delta}}g_{\bar{\beta} \gamma}$$ where $g^{\gamma \bar{\delta}}$ is the inverse metric of $g$ . I was thinking of evaluating this by feeding it vectors i.e. evaluating the quantity $$g^{\gamma \bar{\delta}}g_{\alpha \bar{\delta}}g_{\bar{\beta} \gamma}u^{\alpha}\bar{u}^{\beta}$$ but there also I am not sure how to calculate this. Here is what I think : $$g^{\gamma \bar{\delta}}g_{\alpha \bar{\delta}}g_{\bar{\beta} \gamma}u^{\alpha}\bar{u}^{\beta} = g_{\alpha \bar{\beta}}u^{\alpha}\bar{u}^{\beta} = ||u||^2.$$ Please let me know if I am correct.
First, I am going to switch indices because I want to use $\delta_i^j$ with its standard meaning ( $1$ when $i=j$ and $0$ otherwise). Second, since the conjugate index should be in the second slot, I'm replacing your third term with $g_{\gamma\bar\beta}$ . I will assume this was a typo. Of course, since $g$ is hermitian symmetric, $\overline{g_{\alpha\bar\beta}} = g_{\beta\bar\alpha}$ . But a barred index in the first slot is not conventionally defined unless you mean $g_{\bar\beta\alpha} = \overline{g_{\beta\bar\alpha}}=g_{\alpha\bar\beta}$ . So, let's consider $g^{\gamma\bar\sigma}g_{\alpha\bar\sigma}g_{\gamma\bar\beta}$ . Since we have $$g^{\gamma\bar\sigma}g_{\alpha\bar\sigma} = \delta_\alpha^\gamma,$$ it follows that $$g^{\gamma\bar\sigma}g_{\alpha\bar\sigma}g_{\gamma\bar\beta} = \delta_\alpha^\gamma g_{\gamma\bar\beta} = g_{\alpha\bar\beta}.$$ Then $g^{\gamma\bar\sigma}g_{\alpha\bar\sigma}g_{\gamma\bar\beta}u^\alpha u^{\bar\beta} = g_{\alpha\bar\beta}u^\alpha u^\bar\beta = \langl
|complex-geometry|tensors|
1
Wheel-Spinning Probabilities
I saw this problem a while ago and found the same answer as the textbook, yet I wasn’t truly sure if that was correct. Here was the problem: You have a wheel with 4 equal sections, red, yellow, green and blue. Another wheel has 3 equal sections: red, yellow, and green. Knowing that exactly 1 wheel landed green, what are the odds the second wheel was the one that did so. I immediately counted: if the first wheel is green, there are 2 possibilities for the second wheel: red and yellow. If however the second wheel landed green, there are 3 possibilities for the first wheel. Red, yellow and blue. I hence concluded there are 3 out of 5 possibilities in which the second wheel was green, so the answer is 3/5. This however doesn’t seem correct as we didn’t account for the fact that the first wheel has 4 sections, and the second has 3. Am I correct, or is there something I didn’t account for? Thank you in advance!
The problem is small so we can brute force it by counting to have certainty in the calculation. The possible outcomes are (using 1-4 for colors, wheel 1 then wheel 2): 11 12 13 21 22 23 31 32 33 41 42 43 Of these, there are 5 spins with exactly one green (#1), and of these 3 of them are in the second wheel, 2 in the first, so the odds are 3:2. The probability is $\frac{3}{5}=0.6$ But then brute force is usually not possible, and the more compact route to the solution would be: Probability of green only in wheel 2: $\frac{3}{4}\frac{1}{3}=\frac{1}{4}$ Probability of green only in wheel 1: $\frac{1}{4}\frac{2}{3}=\frac{1}{6}$ Odds: $\frac{\frac{1}{4}}{\frac{1}{6}}=\frac{3}{2}$ Probability: $\frac{\frac{1}{4}}{\frac{1}{6}+\frac{1}{4}}=\frac{\frac{1}{4}}{\frac{2}{12}+\frac{3}{12}}=\frac{12}{20}=\frac{3}{5}$
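The brute-force count can be scripted with exact arithmetic (a sketch of my own, not from the answer):

```python
from fractions import Fraction
from itertools import product

wheel1 = ["red", "yellow", "green", "blue"]   # 4 equal sections
wheel2 = ["red", "yellow", "green"]           # 3 equal sections

total = Fraction(0)         # P(exactly one wheel lands green)
wheel2_green = Fraction(0)  # P(exactly one green, and it is wheel 2)
for a, b in product(wheel1, wheel2):
    prob = Fraction(1, len(wheel1)) * Fraction(1, len(wheel2))
    if (a == "green") != (b == "green"):      # exactly one green
        total += prob
        if b == "green":
            wheel2_green += prob

assert total == Fraction(5, 12)
assert wheel2_green / total == Fraction(3, 5)  # conditional probability
```

Because all $4\times 3=12$ joint outcomes are equally likely, the naive outcome count and the weighted computation happen to agree here, which is exactly the point of the answer.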
|probability|conditional-probability|
1
What is the full axiom of choice (with "$\in$" as the only atomic formula)?
I've seen the axiom of choice as $\forall X\left[\varnothing \notin X\implies \exists f\colon X\rightarrow \bigcup_{A\in X}A,~~~ \forall A\in X\,(f(A)\in A)\right]$ . But isn't there a version which doesn't need the definition of a function and of the union?
If $a$ is a set of pairwise disjoint sets, then there is a set $c$ whose intersection with each nonempty element of $a$ is a singleton. $$\forall a\{\exists s\exists t\exists x\exists y[s\in a\land t\in a\land x\in s\land x\in t\land y\in s\land y\notin t]\lor\exists c\forall s\forall w[s\in a\land w\in s\to\exists x\forall y(y\in c\land y\in s\leftrightarrow\forall z(z\in x\leftrightarrow z\in y))]\}$$
|elementary-set-theory|
1
How many different undirected, planar, connected graphs are there where every edge is in a 3-cycle?
I am interested in undirected, planar, connected graphs where every edge is in a 3-cycle. If there are 4 vertices then, up to isomorphism, there are two such graphs. How many are there for 5 vertices?
There are really not that many graphs with $5$ vertices up to isomorphism, so this is really just a matter of checking them out. Here are all $34$ of them: $K_5$ is the only non-planar one of the bunch, and it's easy to get rid of the unconnected ones.
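The count of $34$ isomorphism classes on $5$ vertices can be confirmed by brute force over all $2^{10}$ labeled graphs (a sketch of my own; `canonical` is a helper introduced just for this check):

```python
from itertools import combinations, permutations

V = range(5)
all_edges = list(combinations(V, 2))  # the 10 possible edges

def canonical(edges):
    """Lexicographically smallest relabeling of the edge set."""
    best = None
    for perm in permutations(V):
        relabeled = tuple(sorted(tuple(sorted((perm[u], perm[v])))
                                 for u, v in edges))
        if best is None or relabeled < best:
            best = relabeled
    return best

# Enumerate every labeled graph on 5 vertices and collect canonical forms
classes = set()
for mask in range(1 << len(all_edges)):
    edges = [e for i, e in enumerate(all_edges) if mask >> i & 1]
    classes.add(canonical(edges))

assert len(classes) == 34   # number of graphs on 5 unlabeled vertices
```

Filtering these $34$ classes down to the connected planar ones in which every edge lies on a triangle is then a finite check.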
|combinatorics|graph-theory|planar-graphs|
0
Can GAP compute this 16-dimensional representation of AlternatingGroup(6)?
I am interested in a particular 16-dimensional representation of $A6$ , the alternating group on 6 things. I first construct an amalgam, gamma, of two copies of SymmetricGroup(4):
F:=FreeGroup(["s1","s2","s3","t1","t2","t3"]);
AssignGeneratorVariables(F);
rel:=Union(
  [s1^2,s2^2,s3^2,s1*s3*s1^-1*s3^-1,(s1*s2)^3,(s2*s3)^3],
  [t1^2,t2^2,t3^2,t1*t3*t1^-1*t3^-1,(t1*t2)^3,(t2*t3)^3],
  [s1*s3*(t1*t3)^-1, s1*(t2*t3*t1*t2)^-1, s2*s3*s1*s2*t1^-1]);
gamma:=F/rel; #an amalgam of two SymmetricGroup(4)
Next, I find an epimorphism from gamma onto $A6$ and take the kernel, $K$ :
A6:=AlternatingGroup(6);
QA:=GQuotients( gamma, A6);
f:=QA[1]; #an epimorphism from gamma onto A6
K:=Kernel(f);
GeneratorsOfGroup(K);
AbelianInvariants(K); # the abelianization of the kernel K of f is free abelian of rank 16
Size(AbelianInvariants(K));
Note that the abelian quotient, $K_{ab}$ , of $K$ is free abelian of rank 16. Now A6 acts (up to inner automorphisms) on $K$ and $K_{ab}$ . For instance, the generators of A6 act o
Take the maximal abelian quotient of $K$ . (I'm doing this in the development version. It is possible that Version 4.12 will act a bit more clunky):
gap> ma:=MaximalAbelianQuotient(K);;
gap> Q:=Range(ma);
gap> gens:=GeneratorsOfGroup(Q);
[ f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, f12, f13, f14, f15, f16 ]
Now we compute, for every generator of gamma, the action on the generators of $Q$ : take $q\in Q$ , preimage in $\Gamma$ , act by conjugation, map back into $Q$ :
gap> act:=List(GeneratorsOfGroup(gamma),x->List(GeneratorsOfGroup(Q),
> y->ImagesRepresentative(ma,PreImagesRepresentative(ma,y)^x)));
and convert to vector/matrix form (writing additively):
act:=List(act,x->List(x,y->ExponentSums(UnderlyingElement(y))));;
The command UnderlyingElement here simply takes the free group word representing the finitely presented group element. These matrices give you a $\mathbb{Z}$ -representation of the factor, i.e. the matrices you asked for:
gap> List(act,Order);
[ 2, 2, 2, 2, 2, 2 ]
gap
|group-theory|representation-theory|gap|
0
Do we need the axiom of choice for choosing a sequence out of countably infinite nonempty sets?
I'm reading a proof of the statement that each closed subset $F \subset \mathbb{R}^n$ can be written as an intersection of a sequence of open subsets. The corresponding sequence is given by $U_n := \{x \in \mathbb{R}^n \mid \|x-y\|_2 < \tfrac{1}{n} \text{ for some } y \in F\}$ , i.e. one wants to prove that $F = \bigcap_{n\in \mathbb{N}} U_n$ . For the direction $\bigcap_{n\in \mathbb{N}} U_n \subset F$ the argument is the following: each $x \in \bigcap_{n\in \mathbb{N}} U_n$ is the limit of a sequence of elements in $F$ . Since $F$ is closed this implies $x \in F$ . Question: Let $x \in \bigcap_{n\in \mathbb{N}} U_n$ . It's clear that for every $n \in \mathbb{N}$ we should be able to choose an element $x_n \in F$ such that $\|x_n-x\|_2 < \tfrac{1}{n}$ , since $U_n$ is not empty (at least when $F$ is not empty). But then we choose infinitely many elements at the same time. Can we somehow justify this or do we need the axiom of choice here?
The argument you gave uses the axiom of choice, but an epsilon argument can be given without needing choice. Specifically, consider an arbitrary $x\in\bigcap_nU_n$ , and let an arbitrary $\varepsilon>0$ be given. For any $n>\frac1{\varepsilon}$ , the fact that $x\in U_n$ implies that there is some $y\in F$ within distance $\varepsilon$ of $x$ . Since $F$ is closed, this means $x\in F$ . More generally, in the absence of countable choice, there can be a severe shortage of sequences. So a statement about sequences can be equivalent, when choice is available, to an epsilon-style statement but no longer equivalent in the absence of choice.
|real-analysis|elementary-set-theory|
1
What does it mean for a map to factor through another map?
In Darij Grinberg's "Hopf algebras in combinatorics", there is a statement about existence of quotient coalgebras: "Indeed, $J ⊗ C + C ⊗ J$ is contained in the kernel of the canonical map $C ⊗ C → (C/J) ⊗ (C/J)$ ; therefore, the condition $∆(J) ⊂ J ⊗ C + C ⊗ J$ shows that the map $C → C ⊗ C \twoheadrightarrow (C/J) ⊗ (C/J)$ factors through a map $∆ : C/J → (C/J) ⊗ (C/J)$ ." As far as I understand, " $f: A\to B$ factors through a set $S$ " means "there exists a surjective map $\pi : A\to S$ and an injective map $i: S \to B$ such that $f=i \circ \pi$ ". But what does it mean for a map to factor through another map?
A map $f:A\to C$ factors through $g:A\to B$ if there exists $h:B\to C$ such that $f=h\circ g$ . In such a setup we can also say that $f$ factors through $h$ . No surjectivity/injectivity conditions are required here, unless explicitly stated by the author. By the way, I don't think surjectivity/injectivity is typically assumed for "factors through $S$ " either, though some authors may assume it.
|abstract-algebra|function-and-relation-composition|morphism|coalgebras|
1
Proof that all congruent shapes that have the same orientation are the result of a rigid motion
It seems pretty intuitive, and I have seen many sources make this claim. From what I've seen, congruence in geometry is defined as a direct isometry between two shapes, that is, an isometry that preserves handedness. Here is what Wikipedia says: "The direct isometries comprise a subgroup of E(n), called the special Euclidean group. They include the translations and rotations, and combinations thereof; including the identity transformation, but excluding any reflections." However, I haven't found any proof that the set of all direct isometries is composed solely of rotations and translations, or that these transformations do preserve handedness. This might be a well-known result, but I'm new to geometry, so if anyone could suggest some books related to this topic, that would be greatly appreciated.
I don't have a book on hand for this, but it's a pleasant exercise to verify that isometries of Euclidean space are of the form $x \mapsto Mx + b$ where $b$ is the translation vector and $M$ is an orthogonal matrix (I'm ignoring orientation for now). It requires a bit of vector algebra (not only to prove, but even to formulate). Let $x \mapsto f(x)$ be an isometry on $\mathbb{R}^n$ . Write $g(x) = f(x) - f(0)$ ; then $g$ is an isometry such that $g(0) = 0$ . It remains to show that $g$ is represented by an orthogonal matrix. Notice $$(g(x) - g(y)) \cdot (g(x) - g(y)) = (x - y) \cdot (x - y) \qquad (1)$$ (both sides represent squared distances); in particular, using $g(0) = 0$ , $$g(x) \cdot g(x) = x \cdot x, \qquad g(y) \cdot g(y) = y \cdot y. \qquad (2)$$ Combining (1) and (2) appropriately, and using distributivity and symmetry properties of the dot product, derive $$g(x) \cdot g(y) = x \cdot y \qquad (3)$$ for all $x, y \in \mathbb{R}^n$ . Then $$g(x + x') \cdot g(y) = (x + x') \cdot y = x \cdot y + x' \cdot y = g(x) \cdot g(y) + g(x') \cdot g(y) = \big(g(x) + g(x')\big) \cdot g(y).$$ By (3), $g$ maps an orthonormal basis to an orthonormal basis, so the image of $g$ spans $\mathbb{R}^n$ , and the displayed identity forces $g(x + x') = g(x) + g(x')$ ; homogeneity follows similarly, hence $g$ is linear, and (3) says its matrix $M$ is orthogonal. Finally, such an isometry preserves handedness exactly when $\det M = 1$ , i.e. when $M$ is a rotation, which gives the decomposition of direct isometries into rotations and translations.
|analytic-geometry|
1
Fredholm index of elliptic operator, reference request
Let $E$ be a vector bundle over a smooth manifold $M$ ( $M$ may need to be compact and without boundary). Furthermore let $$T:\Gamma(M,E)\to\Gamma(M,E)$$ be an elliptic $k$-th order differential operator. I am looking for a reference (and a confirmation) for the following claims: If we pick a metric $h$ on $E$ , a covariant derivative $\nabla$ and set $H^l:=H^l(h,\nabla)$ , then $T$ induces a family of Fredholm operators $(T_l)_{l\in\mathbb Z}$ with $T_l\in L(H^l,H^{l-k})$ . The index of $T_l$ is independent of the choice of $(l,h,\nabla)$ .
This is proposition $2.3(3)$ in David Reutter's essay The Heat Equation and the Atiyah-Singer Index Theorem . I will sketch an incomplete proof. Perhaps someone can complete it, so I decided to add it to this answer: As discussed here we have $\operatorname{coker}(A)\cong \operatorname{ker}(A^*)$ and in addition (this should be checked) $(T_l)^*=(T^*)_l$ (where $T^*$ is the formal adjoint of $T$ ). Hence $$\operatorname{ind}(T_l)=\operatorname{dim}\operatorname{ker}T_l-\operatorname{dim}\operatorname{ker}(T^*)_l$$ and the RHS is independent of $l,h,\nabla$ since the kernel of $A_l$ is simply the kernel of $A$ (this should also be verified).
|functional-analysis|vector-bundles|elliptic-operators|
0
Evaluate $\lim_{x\to0^+}\int_{ax}^{bx}\frac{\tanh y\cdot\tan^2y}{y^4}\,\mathrm{d}y$
I've come across an interesting exercise about calculus. For all $x \neq 0$ , let $$ f(x) = \int_{ax}^{bx} \frac{\tanh y \cdot \tan^2 y}{y^4} \ \mathrm{d}y,$$ where $a$ and $b$ are positive real numbers. Then, the limit as $x \to 0^+$ is: a) $0~~~$ b) $+\infty~~~$ c) undetermined $~~~$ d) $\ln(b/a)~~~$ e) $\arctan(b/a)$ I tried to solve it using integration by parts, but honestly, I couldn't find a good substitution. And if I did, the integral got much worse. To avoid this, I tried to find a relation within the fundamental theorem of calculus (judging by the look of those integration limits xD), and then I'm not sure if I can swap the limits (the one the question asks and the limit of the derivative). I appreciate all the help or tips I can get.
Since $x\to 0^+$ , you can consider the expansion of the integrand function, keeping in mind that $\tanh y \sim y$ and $\tan y \sim y$ : $$\frac{\tanh y\tan^2y}{y^4}\sim \frac1y $$ Hence you have: $$\int_{ax}^{bx}\frac1y\mathrm dy=\log (bx)-\log (ax)=\log(b/a) $$ Edit As suggested in the comments, we have: $$\frac{\tanh y\tan^2y}{y^4}=\frac1y+\frac y3+o(y) $$ so when taking the limit as $x\to 0^+$ , the error goes to $0$ .
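The expansion above can be sanity-checked numerically; here is a small stdlib-only sketch (the helper names `integrand`, `simpson` and the sample values $a=1$, $b=2$, $x=10^{-4}$ are ad hoc choices, not from the original):

```python
import math

def integrand(y):
    # tanh(y) * tan(y)^2 / y^4, which behaves like 1/y near 0
    return math.tanh(y) * math.tan(y) ** 2 / y ** 4

def simpson(f, a, b, n=2000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

a_, b_, x = 1.0, 2.0, 1e-4
approx = simpson(integrand, a_ * x, b_ * x)
exact = math.log(b_ / a_)   # the claimed limit ln(b/a)
```

For small $x$ the quadrature value agrees with $\log(b/a)$ to many digits, consistent with the $o(y)$ error term vanishing.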
|calculus|limits|definite-integrals|
0
Mixing saddle points of a zero-sum game
I am fairly new to proof writing so I would appreciate if you could help me out! Suppose that $(X^*, Y^*)$ and $(X^0, Y^0)$ are saddle points of a zero-sum game with payoff matrix $A$ . Show that $(X^*, Y^0)$ and $(X^0, Y^*)$ are also saddle points. This is what I know: If $(X^*, Y^*)$ is a saddle point for a zero sum game with payoff matrix A (which is an $n \times m$ matrix), then $ E(X,Y^*) \leq E(X^*,Y^*) \leq E(X^*,Y) \ \forall X \in S_n, Y \in S_m $ Where $E(X,Y)$ denotes the expected payoff to Player 1. Similarly, for $(X^0,Y^0)$ we have $E(X,Y^0) \leq E(X^0,Y^0) \leq E(X^0,Y) \ \forall X \in S_n, Y \in S_m $ Now, to check whether $(X^0, Y^*)$ is a saddle point, I know that $E(X,Y^0)\leq E(X^*,Y^0)$ since $X^*$ is an optimal strategy for Player 1, but this feels like an illegal step to make. How should I go about it? Should I use that $E(X,Y)=XAY^{T}=\sum_i\sum_j x_i a_{i,j}y_j$ ? I am stuck since the result looks so obvious but I can't think of a way to justify it.
Label the two saddle-point conditions from the question as $$E(X,Y^*) \leq E(X^*,Y^*) \leq E(X^*,Y) \quad (*)$$ and $$E(X,Y^0) \leq E(X^0,Y^0) \leq E(X^0,Y) \quad (**)$$ for all $X \in S_n, Y \in S_m$ . Specializing these gives the chain of inequalities $E(X^0,Y^0) \le E(X^0,Y^*) \le E(X^*,Y^*)$ and $E(X^*,Y^*)\le E(X^*,Y^0)\le E(X^0,Y^0)$ , so all four payoffs are equal. Combining this with (*) and (**), we have for all $X,Y$ : $$E(X,Y^*)\le E(X^0,Y^*)\le E(X^0,Y)$$ $$E(X,Y^0)\le E(X^*,Y^0)\le E(X^*,Y)$$ Thus, $(X^*,Y^0)$ and $(X^0,Y^*)$ are indeed saddle points.
|proof-writing|game-theory|
0
Tensor product of Banach spaces
Take two Banach spaces $X,Y$ and consider their tensor product $W=X\otimes Y$ . Any $w\in W$ can be written, generally not in a unique way, as a linear combination of simple tensors: $$w=\sum_{i=1}^n (x_i\otimes y_i)\ \ \ \ \text{ with }x_i\in X \text{ and } y_i\in Y\ \forall i\quad\quad\quad (1)$$ Under which assumptions is it possible to say that the function $f$ defined on $W$ as: $$f(w)=\big\{\max_{1\le i\le n} \|x_i\|_X\|y_i\|_Y : \text{$n$ and the $x_i,y_i$ satisfy (1)}\big \}$$ is such that $|f(w)|<\infty$ for every $w\in W$ ?
This happens almost never -- in particular, $f(w)$ is finite for all $w \in W$ if and only if $X = 0$ or $Y = 0$ . One direction is clear -- if $X = 0$ or $Y = 0$ then $f(w) = \{0\}$ for all $w \in W$ . In the other direction, suppose $X \neq 0$ and $Y \neq 0$ . Then pick nonzero vectors $x \in X$ and $y \in Y$ . Now we can write $x \otimes y$ as $(n x) \otimes y - ((n-1)x) \otimes y$ for any positive integer $n$ . This gives $$n\lVert x \rVert \lVert y \rVert = \max\{\lVert nx \rVert \lVert y \rVert, \lVert (n-1)x \rVert \lVert y \rVert\} \in f(x \otimes y)$$ for all positive integers $n$ . Since $\lVert x \rVert \lVert y \rVert \neq 0$ , we conclude that $f(x \otimes y)$ is an infinite set.
|banach-spaces|tensor-products|tensors|
1
find limit without using Cauchy's Definitions
I'm trying to find the limit of this: $\lim_{x\to \infty}x^{\frac{2}{3}}\left[\sqrt[3]{x+1} - \sqrt[3]{x}\right]$ . Since I'm not allowed to use any of Cauchy's definitions of the limit, I tried to bring the factor inside the cube roots: $x^{2/3}\left[\sqrt[3]{x+1} - \sqrt[3]{x}\right]=\sqrt[3]{x^2}\left[\sqrt[3]{x+1} - \sqrt[3]{x}\right]=\sqrt[3]{x^3+x^2} - \sqrt[3]{x^3}=\sqrt[3]{x^3+x^2} - x,$ and so I divided by $x$ and got $\sqrt[3]{1+\frac1x} - 1$ , and since $\frac1x$ approaches $0$ as $x$ approaches infinity this would give $\sqrt[3]{1+0} - 1=0.$ But I already know the answer is supposed to be $\frac13$ , and I'm not sure why this method doesn't work and what else I could try.
For every $a,b\in\mathbb{R}$ , $$a^3-b^3=(a-b)(a^2+ab+b^2)$$ Plugging $a=\sqrt[3]{x+1}, b=\sqrt[3]{x}$ , we get $$1=x+1-x=(\sqrt[3]{x+1})^3-(\sqrt[3]{x})^3=(\sqrt[3]{x+1}-\sqrt[3]{x})\left(\sqrt[3]{(x+1)^2}+\sqrt[3]{(x+1)x}+\sqrt[3]{x^2}\right),$$ so $$\sqrt[3]{x+1}-\sqrt[3]{x}=\dfrac{1}{\left(\sqrt[3]{(x+1)^2}+\sqrt[3]{(x+1)x}+\sqrt[3]{x^2}\right)}$$ and $$x^{2/3}(\sqrt[3]{x+1}-\sqrt[3]{x})=\dfrac{x^{2/3}}{\left(\sqrt[3]{(x+1)^2}+\sqrt[3]{(x+1)x}+\sqrt[3]{x^2}\right)}$$ Dividing numerator and denominator by $x^{2/3}$ , we get that $$\dfrac{x^{2/3}}{\left(\sqrt[3]{(x+1)^2}+\sqrt[3]{(x+1)x}+\sqrt[3]{x^2}\right)}=\dfrac{1}{\left(\sqrt[3]{\dfrac{(x+1)^2}{x^2}}+\sqrt[3]{\dfrac{(x+1)x}{x^2}}+\sqrt[3]{\dfrac{x^2}{x^2}}\right)}=$$ $$=\dfrac{1}{\left(\sqrt[3]{\left(1+\dfrac{1}{x}\right)^2}+\sqrt[3]{1+\dfrac{1}{x}}+1\right)},$$ and the result, $\dfrac{1}{3}$ , follows immediately.
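Incidentally, the rationalized form also explains numerically why the asker's naive evaluation is fragile: the difference of cube roots suffers catastrophic cancellation. A quick sketch (the value $x=10^9$ is an arbitrary test point):

```python
x = 1e9

# naive form: cube roots nearly cancel, so precision is lost
naive = x ** (2 / 3) * ((x + 1) ** (1 / 3) - x ** (1 / 3))

# rationalized form from the answer: no cancellation, the limit 1/3 is read off directly
stable = 1 / ((1 + 1 / x) ** (2 / 3) + (1 + 1 / x) ** (1 / 3) + 1)
```

Both approach $1/3$, but the rationalized form is accurate to nearly machine precision while the naive form is not.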
|calculus|limits|
1
Why is $O(n)$ not the double cover of $SO(n)$?
$O(n)$ has two connected components, $(\det)^{-1}(1)$ and $(\det)^{-1}(-1)$ . While I know it is not true, I am wondering why the above is not enough to say that $O(n)$ is the double cover of $SO(n)$ since $SO(n)$ can be identified with $(\det)^{-1}(-1)$ . Is this because there is no way to associate $(\det)^{-1}(1)$ to elements of $SO(n)$ ?
$\mathrm{O}(n)$ is a double cover of $\mathrm{SO}(n)$ in the purely topological sense. For any $R\in\mathrm{O}(n)\setminus\mathrm{SO}(n)$ , $$ \pi(A)=\begin{cases} A & A\in\mathrm{SO}(n) \\ AR & A\in\mathrm{O}(n)\setminus\mathrm{SO}(n) \end{cases} $$ is a double covering $\mathrm{O}(n)\to\mathrm{SO}(n)$ . However, notice this is not a group homomorphism. In the sense of group theory and Lie theory, a double cover implies the covering map is a homomorphism. In this context, $\mathrm{O}(n)$ is only a double cover of $\mathrm{SO}(n)$ in odd dimension. Assuming a double covering homomorphism does exist, say its kernel was $\{I,R\}$ for $R\in\mathrm{O}(n)$ . Since kernels are normal subgroups, this would force $R$ to be fixed by conjugation, i.e. central. The center is given by $Z(\mathrm{O}(n))=\{\pm I_n\}$ , so the quotient is necessarily the projective orthogonal group $\mathrm{PO}(n)=\mathrm{O}(n)/\{\pm I_n\}$ . If $n$ is even then $-I_n\in\mathrm{SO}(n)$ is in the identity component, so the determinant descends to $\mathrm{PO}(n)$ and exhibits $\mathrm{PO}(n)$ as disconnected, whereas $\mathrm{SO}(n)$ is connected; hence no double covering homomorphism $\mathrm{O}(n)\to\mathrm{SO}(n)$ can exist. If $n$ is odd then $-I_n\notin\mathrm{SO}(n)$ and $\mathrm{O}(n)\cong\mathrm{SO}(n)\times\{\pm I_n\}$ , so projection onto the first factor, $A\mapsto \det(A)\,A$ , is a genuine double covering homomorphism.
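For odd $n$ the covering homomorphism $A\mapsto\det(A)\,A$ can be checked mechanically on a finite subgroup of $\mathrm{O}(3)$. Here is a stdlib-only sketch over the 48 signed permutation matrices (the helper names are ad hoc):

```python
from itertools import permutations, product

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(A):
    # cofactor expansion along the first row
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def phi(A):
    # the candidate covering homomorphism O(3) -> SO(3), A |-> det(A) * A
    d = det3(A)
    return [[d * A[i][j] for j in range(3)] for i in range(3)]

# the 48 signed permutation matrices form a finite subgroup of O(3)
group = []
for p in permutations(range(3)):
    for signs in product((1, -1), repeat=3):
        group.append([[signs[i] if p[i] == j else 0 for j in range(3)]
                      for i in range(3)])

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
hom_ok = all(phi(matmul(A, B)) == matmul(phi(A), phi(B))
             for A in group for B in group)
kernel = [A for A in group if phi(A) == I3]   # should be exactly {I, -I}
```

The homomorphism property holds for all pairs, and the kernel is $\{\pm I_3\}$, so the map is 2-to-1 as claimed.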
|general-topology|lie-groups|covering-spaces|linear-groups|
1
Solving $x^2y''+2xy'+2y=0$ using power series
One way to solve $x^2y''+2xy'+2y=0$ is using the substitution $x=e^t$ . This time I'm asked to use power series (Frobenius method). Assuming $y=\sum_{n\geq 0}a_nx^{n+r}$ , the diff eq reduces to $$\sum_{n\geq 0}(n+r)(n+r-1)a_nx^{n+r}+\sum_{n\geq 0}2(n+r)a_nx^{n+r}+\sum_{n\geq 0}2a_nx^{n+r}=0$$ $$\sum_{n\geq 0}\{(n+r)(n+r-1)+2(n+r)+2\}a_nx^{n+r}=0$$ This is the part where I'm lost: We need to impose $(n+r)(n+r-1)+2(n+r)+2=0$ as $a_n\neq 0$ , right? Defining $u\equiv n+r$ we get $$u(u-1)+2u+2=u^2+u+2=0\implies u=n+r=-\frac{1}{2}\pm i\frac{\sqrt 7}{2}$$ So I'm guessing the solution should look like $$y=\sum_{n\geq 0}a_nx^{-\frac{1}{2}+ i\frac{\sqrt 7}{2}}+\sum_{n\geq 0}a_nx^{-\frac{1}{2}- i\frac{\sqrt 7}{2}}$$ But what about the $a_n$ s? What can be said about them? In view that the differential equation is a $2$ nd order one, shouldn't there be only two coefficients $a_0$ and $a_1$ determined by boundary conditions? Therefore, somehow, $$y=a_0x^{-\frac{1}{2}+ i\frac{\sqrt 7}{2}}+a_1x^{-\frac{1}{2}- i\frac{\sqrt 7}{2}}?$$
When you see groups $x^ky^{(k)}$ where the power for $x$ is the same as the order of derivation for $y$ then you are dealing with a Cauchy-Euler ODE. The recommended substitution in this case is $y(x)=u(\ln(x))$ You will get a linear one with constant coefficients $$u''+u'+2u=0$$ Characteristic equation $r^2+r+2=0\iff r=-\frac 12\pm i\frac{\sqrt{7}}{2}$ $\begin{align}y(x)&=e^{-\frac 12\ln(x)}\Big(A\cos(\frac{\sqrt{7}}2\ln(x))+B\sin(\frac{\sqrt{7}}2\ln(x))\Big)\\\\&=\frac{A}{\sqrt{x}}\cos\left(\frac{\sqrt{7}}2\ln(x)\right)+\frac{B}{\sqrt{x}}\sin\left(\frac{\sqrt{7}}2\ln(x)\right)\end{align}$ Which you can transform to amplitude and phase instead if you wish. Using the Frobenius method here is mostly difficult because the resulting series will be a composition of the series for $\sin,\cos$ and $\ln$ , so it won't be trivially recognizable.
|ordinary-differential-equations|power-series|frobenius-method|
0
How to classify $2^n$ groups up to isomorphism?
Groups of order $2^n$ have the most non-isomorphic types and are notoriously hard to classify. For example, order $32$ has $51$ non-isomorphic groups, and order $64$ has $267$ . In general group theory texts, only groups of order up to $15$ are (completely) classified; even order $16$ is usually not given a full classification. On this website , all groups up to order $500$ are classified with given names. My question is, are groups of order $64$ or $128$ or higher classified manually through presentations or by programming (computational group theory)?
Order 64 was done by hand still -- Hall, Marshall, Jr.; Senior, James K. The groups of order $2^n$ ( $n\le 6$ ). Macmillan, 1964. Order 128 is by computer, but would have been possible by hand, see also O'Brien, Vaughan-Lee The groups with order $p^7$ for odd prime $p$ . J. Algebra 292 (2005), 243–258
|abstract-algebra|group-theory|
0
$|f(z_1)-f(z_2)|=|z_1-z_2|$ implies $f$ being linear
Statement: Let $f=u+\textbf iv$ , where $u, v\in C^{1}(D)$ , $D$ is a subdomain of the complex plane. For all $z_1,z_2\in D$ : $$|f(z_1)-f(z_2)|=|z_1-z_2|$$ Prove that $f(z)=\exp\{\textbf i\theta\} z + a$ or $f(z)=\exp\{\textbf i\theta\} \bar z + a$ , where $\theta\in\mathbb R$ and $a\in\mathbb C$ . My ideas: Suppose that $f$ is holomorphic in $D$ . Then due to the statement equation we have that $|f'|\equiv 1$ in $D$ . It implies that all functions like $$f(z)=\exp\{\textbf i\theta\} z + a$$ are solutions. Another idea I came up with is that if $f(z)$ is a solution, then $\bar f(z), f(\bar z), \bar f(\bar z)$ are solutions too. So we get that all functions like $$f(z)=\exp\{\textbf i\theta\} \bar z + a$$ are solutions. The reasoning is true when $f$ is supposed to be holomorphic. The problem is in the " $\mathbb C$ -differentiation" paragraph, so applying $\frac{\partial }{\partial z}$ or $\frac{\partial}{\partial \bar z}$ can help, but I haven't come up with anything.
WLOG, we may assume $0\in D$ (otherwise we can perform a translation and then apply $f$ ). As $D$ is a domain, we may find $r\in\mathbb{R}$ such that $B(0,r)\subseteq D$ . We prove a lemma first: Lemma : Let $h$ be an isometry defined in a domain $D$ such that $h(0)=0$ , $h(r/2)=r/2$ and $h(ir/2)=ir/2$ , where $r\in\mathbb{R}$ is such that $r/2, ir/2\in D$ . Then $h$ is the identity. As $h$ is an isometry, $|h(z)-h(w)|=|z-w|$ for $z,w\in D$ . Taking $w=0, r/2$ and $ir/2$ respectively, we get that $$|h(z)|=|z|, \space|h(z)-r/2|=|z-r/2|, \space|h(z)-ri/2|=|z-ri/2|$$ So $h(z)$ and $z$ are the same distance from $0,r/2$ and $ri/2$ . As three circles with non-collinear centers intersect in at most one point (this can also be proved analytically, try it!), necessarily $h(z)=z, \forall z\in D$ . $\space\blacksquare$ Let's now consider the function $k(z)=\dfrac{r}{2}\dfrac{f(z)-f(0)}{f(r/2)-f(0)}$ . It is obvious $k$ is an isometry and that it fixes $0$ and $r/2$ . Then, $$|k(ir/2)|=|k(ir/2)-k(0)|=|ir/2-0|=r/2$$ and $$|k(ir/2)-r/2|=|ir/2-r/2|,$$ so $k(ir/2)=\pm ir/2$ (a point is determined up to conjugation by its distances to $0$ and $r/2$ ). If $k(ir/2)=ir/2$ , the lemma gives $k(z)=z$ ; if $k(ir/2)=-ir/2$ , the lemma applied to $\overline{k(z)}$ gives $k(z)=\bar z$ . Unwinding the definition of $k$ and writing $\frac{2}{r}\left(f(r/2)-f(0)\right)=e^{i\theta}$ (it has modulus $1$ because $f$ is an isometry), we conclude $f(z)=e^{i\theta}z+a$ or $f(z)=e^{i\theta}\bar z+a$ with $a=f(0)$ .
|complex-analysis|
1
Fundamental group of simplicial space
This question asks the same as mine, but it was unsuccessful in getting an answer, so I try again. For context I am reading Weibel's K-book and am struggling with proposition 8.4 which computes $K_0$ for Waldhausen categories. Suppose we have a simplicial space $X_\bullet$ with $X_0$ a point, then how can I find the fundamental group? According to Weibel it is generated by path components of $X_1$ subject to the relations $\partial_1([y])=\partial_0([y])\partial_2([y])$ for all $[y]\in\pi_0(X_2)$ . I find it completely counterintuitive that we don't actually care about the topology of $X_1, X_2$ to compute $\pi_1$ , and the fact that it is very hard to visualize the geometric realization of simplicial spaces is not helping me to try and get an intuition. Any help is appreciated, thank you very much.
An element of $\pi_1$ is trivial if it extends to a disk. Note that Weibel's definition basically just identifies two paths, namely $\partial_0$ and $\partial_2$ , if there is a $2$ -cell joining them (a homotopy), which in the geometric realization is an element of $X_2$ . This is in $\pi_1$ if you contract a spanning tree after geometric realization. Write $[n] \mapsto N_\bullet wS_n \mathcal{C}$ for the simplicial set in question. Note that the topology here matters because this becomes a simplicial space after composing with the geometric realization functor. To get a sense for $K_0$ we want to show how to get from objects to elements of $\pi_1$ . $S_0$ is a trivial category, so we are connected. Geometric realization commutes with products, so literally take $(N_\bullet wS_1 \mathcal{C}) \times I$ . Note that $S_1 \mathcal{C}=\mathcal{C}$ and $I= |\Delta^1|$ . Now $S_0$ is the trivial map so collapses $0 \times \Delta^1$ , and $d_0,d_1$ collapse $N_\bullet w \mathcal{C} \times \partial \Delta^1$ .
|algebraic-topology|fundamental-groups|group-presentation|simplicial-stuff|
0
Branch cut integral $\int_{-1}^1\left(1-x^2\right)^{\frac{1}{2}} d x$
Define the branch of $f(z)=\left(1-z^2\right)^{\frac{1}{2}}$ by the branch cut $(-\infty,-1] \cup [1,\infty), f(0)=1$ . Use this branch and a suitably chosen semi-circular contour (with finite radius $R$ greater than 1) in the upper half plane to evaluate $$ \int_{-1}^1\left(1-x^2\right)^{\frac{1}{2}} d x $$ I am thinking of this contour where $\gamma_3,\gamma_4$ should cancel and $\gamma_1,\gamma_2$ evaluate to $0$ . I'm stuck at $\gamma_R$ since there seems to be no place to use the Residue theorem. Any help is appreciated.
The integral over $\gamma_R$ , $R>1$ can be evaluated by noting that $\int_0^\pi e^{i2\phi}\,d\phi = 0$ and that $\sqrt{1-z^2}=iz\sqrt{1-1/z^2}$ for the selected branch cuts and branch. Therefore, we find that $$\begin{align} \int_0^\pi \sqrt{1-R^2e^{i2\phi}}\,iRe^{i\phi}\,d\phi&=\int_0^\pi \left(-R^2e^{i2\phi}\right)\sqrt{1-\frac1{R^2e^{i2\phi}}}\,d\phi\\\\ &=\int_0^{\pi} \left(-R^2e^{i2\phi}\right)\left(1-\frac1{2R^2e^{i2\phi}}+O\left(\frac1{R^4}\right)\right)\,d\phi\\\\ &=\frac\pi 2+O\left(\frac1{R^2}\right) \end{align}$$ Letting $R\to \infty$ , we find that $\int_{-1}^1 \sqrt{1-x^2}\,dx=\pi/2$ .
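As a purely numerical cross-check of the final value (not of the contour estimate itself), here is a stdlib-only sketch; the `simpson` helper is an ad hoc quadrature routine:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# int_{-1}^{1} sqrt(1 - x^2) dx, expected to equal pi/2 (area of a half disk)
val = simpson(lambda t: math.sqrt(1 - t * t), -1.0, 1.0, 100000)
```

The square-root endpoint singularities slow Simpson's rule down, but with this many subintervals the value matches $\pi/2$ to well beyond the displayed precision.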
|complex-analysis|contour-integration|complex-integration|branch-cuts|
1
Kronecker delta expressed as a derivative when there are multiple indices.
For instance, when differentiating four-vectors the result is straightforward: $$\frac{\partial x^\mu}{\partial x^\nu}=\delta_\nu^\mu$$ as the derivative is only non-zero when the Lorentz indices match. Here $\mu, \nu = 0,1,2,3$ . But when differentiating type $(0,2)$ tensors, for example, $$\frac{\partial\left(\partial_cA_d\right)}{\partial\left(\partial_\mu A_\nu\right)}=\delta_c^\mu\delta_d^\nu\tag{1}$$ I note that the order of the indices in $(1)$ (going from left to right) is preserved; $\mu$ is 'paired' with $\mathrm{c}$ , and $\nu$ is paired with $\mathrm{d}$ . My question is very simple, am I allowed to commute the two contravariant or covariant derivatives in $(1)$ ? Put simply, is it true that $$\frac{\partial\left(\partial_cA_d\right)}{\partial\left(\partial_\mu A_\nu\right)}=\delta_c^\mu\delta_d^\nu=\delta_c^\color{red}{\nu}\delta_d^\color{red}{\mu}=\delta_\color{blue}{d}^\mu\delta_\color{blue}{c}^\nu?$$ My reason for asking is that since the condition for $(1)$ to be non-zero is that $c=\mu$ and $d=\nu$ hold simultaneously, it seems at first glance that the way the deltas pair the indices should not matter.
Suppose for a contradiction that $$\frac{\partial\left(\partial_cA_d\right)}{\partial\left(\partial_\mu A_\nu\right)}=\delta_c^\color{red}{\nu}\delta_d^\color{red}{\mu}.$$ Take $c=\mu=0$ and $d=\nu=2$ in such equation. Then we have $$\frac{\partial\left(\partial_0A_2\right)}{\partial\left(\partial_0 A_2\right)}=\delta_0^\color{red}{2}\delta_2^\color{red}{0}=0.$$ This is a contradiction, because $$\frac{\partial\left(\partial_0A_2\right)}{\partial\left(\partial_0 A_2\right)}=1.$$ Therefore, $$\frac{\partial\left(\partial_cA_d\right)}{\partial\left(\partial_\mu A_\nu\right)}\neq\delta_c^\color{red}{\nu}\delta_d^\color{red}{\mu}.$$ Now, suppose for a contradiction that $$\frac{\partial\left(\partial_cA_d\right)}{\partial\left(\partial_\mu A_\nu\right)}=\delta_\color{blue}{d}^\mu\delta_\color{blue}{c}^\nu.$$ Take $c=\mu=0$ and $d=\nu=2$ in such equation. Then we have $$\frac{\partial\left(\partial_0A_2\right)}{\partial\left(\partial_0 A_2\right)}=\delta_2^\color{blue}{0}\delta_0^\color{blue}{2}=0,$$ which is again a contradiction, since the left-hand side equals $1$ . Therefore only the original pairing $\delta_c^\mu\delta_d^\nu$ is correct.
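The contradiction above can be confirmed by exhausting all index combinations; a short sketch (the helpers `delta` and `lhs` are ad hoc names for the Kronecker delta and the defining property of the derivative):

```python
from itertools import product

def delta(a, b):
    return 1 if a == b else 0

def lhs(c, d, mu, nu):
    # the derivative d(d_c A_d)/d(d_mu A_nu) is 1 iff we differentiate
    # a component with respect to exactly that same component
    return 1 if (c, d) == (mu, nu) else 0

idx = range(4)
correct = all(lhs(c, d, m, n) == delta(c, m) * delta(d, n)
              for c, d, m, n in product(idx, repeat=4))
swapped = all(lhs(c, d, m, n) == delta(c, n) * delta(d, m)
              for c, d, m, n in product(idx, repeat=4))
```

Note that $\delta_c^{\nu}\delta_d^{\mu}$ and $\delta_d^{\mu}\delta_c^{\nu}$ are the same product of numbers, so one check rules out both proposed rearrangements at once.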
|calculus|partial-derivative|tensors|index-notation|kronecker-delta|
1
Prove that the maximum difference between $\frac{n}{6}$ and $\lfloor \frac{(n-3)}{6} \rfloor$ is $1 \frac{1}{3}$?
Prove that the maximum difference between $\frac{n}{6}$ and $\lfloor \frac{(n-3)}{6} \rfloor$ where $n$ is an integer is $1 \frac{1}{3}$ ? I know this is true because for $n = 8 + 6i$ where $i$ is an integer $\geq 0$ , the difference is $1 \frac{1}{3}$ . Plugging other values for $n$ gives a difference that is less than $1 \frac{1}{3}$ . How do you prove this without having to plug numbers into the equations and finding a pattern? Edit: question was modified to specify that $n$ is an integer.
The actual approach is essentially a fancier version of "plugging numbers in and finding a pattern". Let $n = 6m + k$ , where $0 \leq k \leq 5$ . Then $\frac{n}{6} = m + \frac{k}{6}$ , and $\lfloor \frac{n - 3}{6} \rfloor = \lfloor (m - 1) + \frac{k + 3}{6} \rfloor = m - 1 + \lfloor \frac{k + 3}{6} \rfloor$ . So $\frac{n}{6} - \lfloor \frac{n - 3}{6} \rfloor = 1 + \frac{k}{6} - \lfloor \frac{k + 3}{6} \rfloor$ , and then we just need to prove that the difference between those last two terms is always less than or equal to $\frac{1}{3}$ (but bigger than $-\frac{7}{3}$ , which is pretty obvious). You can split it into two cases - one where the floor function returns $0$ , and one where it returns $1$ .
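The case split at the end can be verified exhaustively with exact rational arithmetic; here is a sketch (the range of $n$ and the helper name `diff` are arbitrary choices):

```python
from fractions import Fraction

def diff(n):
    # Python's // is the floor function on integers, also for negatives
    return Fraction(n, 6) - (n - 3) // 6

values = {n: diff(n) for n in range(-1000, 1000)}
max_diff = max(values.values())
# residues of n mod 6 at which the maximum is attained
witnesses = {n % 6 for n, v in values.items() if v == max_diff}
```

The maximum is $\frac{4}{3} = 1\frac{1}{3}$, attained exactly when $n \equiv 2 \pmod 6$, matching the asker's observation for $n = 8 + 6i$.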
|ceiling-and-floor-functions|
0
How can we distinguish elements of $D_n$ that include reflections versus those that don't?
For $n \geq 3$ , the dihedral group $D_n =\langle r, s \rangle$ , where $r ^n = s^2 = e$ . Within this group, we can distinguish two types of elements: Those of the form $r^i$ , where $i$ is any integer Those of the form $sr^i$ , where $i$ is any integer The latter includes not only $sr^i$ but any element with an odd number of $s$ in it. Examples are $r^is, sr^isr^js,$ etc. What is a good way to distinguish this second class? What is the best way to define it? The definition above is cumbersome. Can it be defined directly? And: Is there a name for it?
For odd $n$ , the elements of order $2$ are precisely the elements of the form $sr^i$ , $0\leq i\lt n$ . For even $n$ , the non-central elements of order $2$ are precisely those of the form $sr^i$ , $0\leq i\lt n$ . The only element of order $2$ that is central is $r^{n/2}$ . Indeed, every element of the form $sr^i$ is of order $2$ : $(sr^i)^2 = sr^isr^i = ssr^{-i}r^i = s^2=1$ . If $r^i$ has order $2$ , then $2i=n$ . Thus, for odd $n$ these are the only elements of order $2$ . For even $n$ , the exception is $r^{n/2}$ ; but this element is central: it commutes with $r$ , and $sr^{n/2} = r^{-n/2}s = r^{n/2}s$ , so it also commutes with $s$ . And elements of the form $sr^i$ are not central: $(sr^i)r = sr^{i+1}$ , and $r(sr^i) = sr^{-1}r^i = sr^{i-1}$ . If these were equal, then $r^{i+1}=r^{i-1}$ , so $r=r^{-1}$ ; but this requires $n=2$ .
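These claims are easy to check by brute force if one models $D_n$ as the maps $x \mapsto \varepsilon x + i$ on $\mathbb{Z}_n$ (a standard faithful model; the encoding as pairs $(i,\varepsilon)$ is an ad hoc choice here):

```python
# element (i, e) represents the map x -> e*x + i on Z_n, with e = +1 (rotation r^i)
# or e = -1 (reflection s r^i); identity is (0, 1)
def compose(a, b, n):
    (i, e), (j, f) = a, b
    return ((e * j + i) % n, e * f)

def order(g, n):
    k, h = 1, g
    while h != (0, 1):
        h = compose(h, g, n)
        k += 1
    return k

def elements(n):
    return [(i, e) for e in (1, -1) for i in range(n)]

def center(n):
    return [g for g in elements(n)
            if all(compose(g, h, n) == compose(h, g, n) for h in elements(n))]

orders_refl_7 = {order((i, -1), 7) for i in range(7)}   # odd case: all reflections
center_8 = center(8)                                    # even case: center of D_8
```

For $n=7$ every reflection has order 2, and for $n=8$ the center comes out as $\{e, r^4\}$, in line with the answer.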
|abstract-algebra|group-theory|terminology|dihedral-groups|reflection|
1
Variance of time integral of a function on an Ito process
I was struggling a bit with the time integral of an Ito process. Say I have this: $$\int^T_t \alpha\circ X_s\, ds$$ where $(X_t)$ is an Ito process, and $\alpha$ is a continuous function. What can we say about the variance and expectation of such a quantity? Thanks!
Not much: $$E\int^T_t \alpha\circ X_s\, ds=\int^T_t E[\alpha\circ X_s]\, ds$$ and $$E\left(\int^T_t \alpha\circ X_s\, ds\right)^{2}=\iint^T_t E[(\alpha\circ X_s)( \alpha\circ X_r)]\, ds\, dr.$$ Here you need information about $\alpha$ to proceed further. If you further assume $\alpha$ is $L$-Lipschitz (as in the comment), then the expectation is bounded by $$\left|E\int^T_t \alpha\circ X_s\, ds\right|\leq |\alpha(X_{0})|(T-t)+L \frac{T^{2}-t^{2}}{2}$$ and the second moment by $$\left|E\left(\int^T_t \alpha\circ X_s\, ds\right)^{2}\right|\leq \iint^T_t (|\alpha(X_{0})|+Lr)(|\alpha(X_{0})|+Ls)\,dr\,ds$$ $$=|\alpha(X_{0})|^{2}(T-t)^{2}+2|\alpha(X_{0})|(T-t)L\frac{T^{2}-t^{2}}{2}+L^{2}\left(\frac{T^{2}-t^{2}}{2}\right)^{2}.$$
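The first identity (exchanging expectation with the time integral, a Fubini step) can be illustrated by Monte Carlo in a toy case not taken from the answer: $X$ a standard Brownian motion started at $0$, $\alpha(x)=x^2$, and interval $[0,T]$, where $\int_0^T E[B_s^2]\,ds = \int_0^T s\,ds = T^2/2$. A stdlib-only sketch:

```python
import random

random.seed(0)
T, steps, paths = 1.0, 200, 4000
dt = T / steps

total = 0.0
for _ in range(paths):
    b, acc = 0.0, 0.0
    for _ in range(steps):
        acc += (b * b) * dt            # left-point Riemann sum of int_0^T B_s^2 ds
        b += random.gauss(0.0, dt ** 0.5)   # Brownian increment ~ N(0, dt)
    total += acc

mc_mean = total / paths
exact = T * T / 2                      # int_0^T E[B_s^2] ds = T^2 / 2
```

Up to discretization bias and sampling noise, the simulated mean matches $T^2/2$.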
|statistics|stochastic-processes|
0
Solving equation involving shifts of the unknown function.
Let's consider the following equation: $$ a_2\,f(t+2) + a_1\,f(t+1) + a_0\,f(t) = g(t) $$ where $a_0,a_1,a_2$ are non-zero reals and $g(t)$ is a known function. Although it looks like an ordinary differential equation of order 2, the "changing" parts on the left side of this equation don't involve derivatives but rather a shift of the main variable $t$ . Question(s): what is the name of such an equation and what kind of methods do we have to find the unknown function $f(t)$ ?
This is a linear difference equation (a recurrence) of the 2nd order. Assuming that $a_2\ne 0$ , it becomes: $$ f(t+2) = c f(t+1)+d f(t)+h(t). $$ where $c=-a_1/a_2$ and $d=-a_0/a_2$ , and also $h(t)=g(t)/a_2$ . Let $\lambda, \mu$ be the zeroes of the quadratic equation $x^2-cx-d=0$ , then $$ f(t+2) = (\lambda+\mu) f(t+1)-\lambda\mu f(t)+h(t), $$ and hence $$ f(t+2)-\lambda f(t+1) = \mu \big( f(t+1)-\lambda f(t)\big)+h(t). $$ Set $F(t)=f(t+1)-\lambda f(t),$ then $$ F(t+1) = \mu F(t)+h(t). $$ If we know $F\restriction [0,1)$ , then $F$ (and consequently $f$ ) is obtained, the same way we obtain the formula of the recursive sequence $$ A_0=a, \quad A_{n+1}=\mu A_n+h_n, \quad n\in\mathbb N. $$
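The order-reduction step can be verified on a concrete integer-grid case; the coefficients, forcing term, and initial data below are arbitrary illustrative choices:

```python
# f(t+2) = c f(t+1) + d f(t) + h(t), with x^2 - c x - d = (x - lam)(x - mu);
# here c = 3, d = -2 gives roots (lam, mu) = (1, 2)
c, d = 3.0, -2.0
lam, mu = 1.0, 2.0

def h(t):
    return 0.1 * t                      # arbitrary forcing term

f = [0.5, 1.5]                          # initial data f(0), f(1)
for t in range(30):                     # direct iteration of the 2nd-order recurrence
    f.append(c * f[t + 1] + d * f[t] + h(t))

# reduction: F(t) = f(t+1) - lam*f(t) satisfies the 1st-order F(t+1) = mu*F(t) + h(t)
F = [f[1] - lam * f[0]]
for t in range(30):
    F.append(mu * F[t] + h(t))

# reconstruct f from the first-order solution via f(t+1) = F(t) + lam*f(t)
g = [f[0]]
for t in range(31):
    g.append(F[t] + lam * g[t])
```

The reconstructed sequence `g` coincides with the directly iterated `f`, confirming that solving the two cascaded first-order recurrences solves the original second-order one.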
|ordinary-differential-equations|recurrence-relations|functional-equations|
0
Is This Proof For $a_nx^n+a_{n-1}x^{n-1}+...+a_1x^1+a_0=0\Rightarrow a_n=a_{n-1}=...a_1=a_0=0$, $\forall n\forall x$ Okay?
Base Case $n=1$ : if $ax+b=0$ for all values of $x$ , then $a=b=0$ \begin{align} ax+b=0\\ (0)+b=0\\ b=0\\\\ ax+(0)=0\\ ax=0\\ a(1)=0\\ a=0 \end{align} Inductive Step: if $(a_nx^n+a_{n-1}x^{n-1}+...+a_1x^1+a_0=0)\Rightarrow(a_n=a_{n-1}=...=a_0=0)$ for all values of $x$ , then $(a_{n+1}x^{n+1}+a_{n}x^{n}+...+a_1x^1+a_0=0)\Rightarrow(a_{n+1}=a_n=...=a_0=0)$ is also true. $$a_{n+1}x^{n+1}+a_{n}x^{n}+...+a_1x^1+a_0=0$$ $$a_{n+1}(0)^{n+1}+a_n(0)^n+...+a_1(0)^1+a_0=0$$ $$\Rightarrow a_0=0$$ Now all that is left is to show that all the coefficients of $a_{n+1}x^{n+1}+a_{n}x^{n}+...+a_1x^1=0$ are zero. $$a_{n+1}x^{n+1}+a_{n}x^{n}+...+a_1x^1=0$$ $$x(a_{n+1}x^n+a_nx^{n-1}+...+a_1)=0$$ Since I'm not interested in the $x=0$ possibility, set the right factor to equal $0$ . Since that is in the form of what we are assuming to be true, we've shown that if the theorem works for $n$ , it works for $n+1$ . $\therefore a_nx^n+a_{n-1}x^{n-1}+...+a_1x^1+a_0=0\Rightarrow a_n=a_{n-1}=...=a_1=a_0=0$ , $\forall n\forall x$
Your proof has a gap. The equation $$x(a_{n+1}x^n+a_nx^{n-1}+...+a_1)=0 \tag{1}$$ means $x = 0$ or $a_{n+1}x^n+a_nx^{n-1}+\ldots+a_1 = 0$ . This allows to conclude that $$p(x) = a_{n+1}x^n+a_nx^{n-1}+\ldots+a_1 = 0 \text{ for all } x \ne 0 . \tag{2}$$ Induction only works if we know that also $p(0) = 0$ . The argument needed here is continuity . All polynomials are continuous, thus $$p(0) =\lim_{x \to 0} p(x) = 0 .$$
|induction|
1
Expected number of picks from normal distribution such that the sum exceeds $r$? Does the value converge?
I have found the similar question for the Uniform Distribution, i.e., U(0,1), but not for the Normal distribution. So basically the problem is: using a normal random number generator, i.e., draws $X \sim N(\mu, \sigma^2)$ , keep generating and summing until the sum is greater than $r$ . How do you find the expected number of draws you will take? I am not even sure the value is defined. I was modelling the equation like here - Link , and now the equation also has a Normal distribution term on the right hand side when picking a number; this converts it to a form where it appears the convolution theorem might be used, though I am not able to proceed ahead or gain any useful breakthrough. Welcome to any other methods as well
Assume $\mu$ and $r$ are positive. Let $Y_i \sim N(\mu, \sigma^2)$ be i.i.d., let $S_k = Y_1+\ldots+Y_k$ , and let $X_r$ be the minimum number of these variables needed to sum to at least $r$ . Then you have $$ E[X_r]=1+\sum_{k=1}^{\infty}P[X_r > k]. $$ Note that $\{X_r > k\} = \{\max_{j\le k} S_j < r\} \subseteq \{S_k < r\}$ ; for nonnegative summands the two events coincide, but for normal summands the partial sums are not monotone, so $P[X_r>k]\le P[S_k<r]$ and replacing one by the other gives an upper bound. Since $S_k \sim N(k\mu, k\sigma^2)$ , this yields $$ E[X_r]\le 1+\sum_{k=1}^{\infty}\Phi\left(\frac{r-k\mu}{\sigma\sqrt{k}}\right), $$ where $\Phi$ is the c.d.f. of the standard normal distribution. This sum converges; the terms (once $k\mu \gg r$ ) decay exponentially with $k$ , so in particular $E[X_r]$ is finite and well defined.
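A Monte Carlo comparison makes the series concrete. One caveat worth knowing: $P[X_r>k]$ is the probability that the running maximum of the partial sums stays below $r$, which for normal (possibly negative) summands is at most $P[S_k<r]$, so the series $1+\sum_k \Phi((r-k\mu)/(\sigma\sqrt k))$ should come out at or slightly above the simulated mean. A stdlib-only sketch (the parameters $\mu=\sigma=1$, $r=3$ and the truncation at 200 terms are arbitrary choices):

```python
import random
from statistics import NormalDist

random.seed(1)
mu, sigma, r = 1.0, 1.0, 3.0

# truncated series 1 + sum_k Phi((r - k*mu) / (sigma*sqrt(k)))
Phi = NormalDist().cdf
formula = 1 + sum(Phi((r - k * mu) / (sigma * k ** 0.5)) for k in range(1, 200))

def draws_until(threshold):
    # number of N(mu, sigma^2) draws until the running sum first reaches threshold
    s, k = 0.0, 0
    while s < threshold:
        s += random.gauss(mu, sigma)
        k += 1
    return k

trials = 20000
mc = sum(draws_until(r) for _ in range(trials)) / trials
```

With positive drift the loop terminates almost surely, and the simulated mean sits near (and not above) the series value.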
|probability-theory|normal-distribution|expected-value|laplace-transform|stopping-times|
0
Find circle center point that is tangential to line segment and another circle?
Given circle A (center point and radius $R$ is given), and a line segment (2 points are given: $P_{0}$ and $P_{1}$ ). $P_{0}$ is on the circle. There is another circle with given radius $r$ , and it is tangential on the circle and the line segment. What is the circle's location? (there are 2 possible locations)
Let $P_1=(x_1,y_1),\space P_0=(x_0,\sqrt{R^2-x_0^2})$ ; the center must lie on the circle $x^2+y^2=(R-r)^2$ (internal tangency). By the geometry of the figure, the center $C$ can be found as follows: $(1)$ Line $P_1P_0:ax+by=c$ where $a,b,c$ are known. There are two lines parallel to $P_1P_0$ at distance $r$ , whose equations are $$ax+by=c\pm r\sqrt{a^2+b^2}$$ $(2)$ The center is given by the system $$\begin{cases}ax+by=c\pm r\sqrt{a^2+b^2}\\x^2+y^2=(R-r)^2\end{cases}$$ EXAMPLE.- $(R,r)=(5,1),\space P_0=(3,4), P_1=(-2,1)$ and line $P_0P_1:\space 3x-5y=-11$ . Parallel lines at distance $r$ : $3x-5y=-11\pm\sqrt{34}$ . Solutions of the two systems: $(x,y)=(0.889,3.9)$ and $(x,y)=(2.889,2.767)$ . So we get the two circles $$(x-0.889)^2+(y-3.9)^2=1 \text { and } (x-2.889)^2+(y-2.767)^2=1$$ which appear in the attached figure.
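The two tangency conditions for the worked example are easy to verify programmatically: the center must be at distance $R-r$ from the origin and at distance $r$ from the line $P_0P_1$. A sketch (the helper names are ad hoc):

```python
import math

R, r = 5.0, 1.0
P0, P1 = (3.0, 4.0), (-2.0, 1.0)

# line P0P1 in the form a x + b y = c
a, b = P0[1] - P1[1], P1[0] - P0[0]     # (3, -5)
c = a * P0[0] + b * P0[1]               # -11
norm = math.hypot(a, b)                 # sqrt(34)

def dist_to_line(p):
    return abs(a * p[0] + b * p[1] - c) / norm

centers = [(0.889, 3.9), (2.889, 2.767)]   # the two solutions from the answer

# (error in |C| vs R - r, error in distance to line vs r) for each center
checks = [(abs(math.hypot(*C) - (R - r)), abs(dist_to_line(C) - r))
          for C in centers]
```

Both rounded centers satisfy both conditions to about three decimal places, matching the figure.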
|trigonometry|circles|
1
Change of Basis over $\mathbb{Z}$
$[v ]_B$ coordinate vector of v with respect to $B$ $[v ]_C$ coordinate vector of v with respect to $C$ ${}_C[ Id_V ]_B$ change of basis matrix from $B$ to $C$ For example we have the following set of vectors in $\mathbb{Z_3^3}$ $B$ = {[1 0 2] ${^T}$ , [2 1 1] ${^T}$ , [1 1 1] ${^T}$ } $C$ = {[1 1 1] ${^T}$ ,[2 1 1] ${^T}$ , [1 1 2] ${^T}$ } How can I compute the change of basis from $B$ to $C$ using the following formula: ${}_C[ Id_V ]_B [v ]_B = [ v ]_C$ where , ${}_C[ Id_V ]_B = [ [b_1]_C \cdots [b_n]_C]$ Again $V$ belongs to $\mathbb{Z_3^3}$ and $Id_V : V \rightarrow V$ I think the formula is a fairly simple computation but the confusing part is the change of coordinate vectors , namely ${}_C[ Id_V ]_B = [ [b_1]_C \cdots [b_n]_C]$ . Also, I have checked that the given set of vector do form basis of $\mathbb{Z_3^3}$
What strange notation. Note that ${}_I[\mathrm{Id}_V]_B=B$ , where $I$ denotes the standard basis (and, abusing notation, $B$ also denotes the matrix whose columns are the vectors of $B$ ); likewise ${}_C[\mathrm{Id}_V]_I=C^{-1}$ , and therefore ${}_C[\mathrm{Id}_V]_B = {}_C[\mathrm{Id}_V]_I \, {}_I[\mathrm{Id}_V]_B=C^{-1}B$ , with all arithmetic done mod $3$ .
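A quick computational check of $C^{-1}B$ over $\mathbb{Z}_3$ , with the bases from the question. The Gauss-Jordan routine below is a generic helper written for this illustration, not a library call (`pow(x, -1, p)` needs Python 3.8+):

```python
p = 3
# columns of B and C are the given basis vectors
B = [[1, 2, 1],
     [0, 1, 1],
     [2, 1, 1]]
C = [[1, 2, 1],
     [1, 1, 1],
     [1, 1, 2]]

def inv_mod(M, p):
    """Invert a square matrix over Z_p by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] % p)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, p)          # modular inverse of the pivot
        A[col] = [x * inv % p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] % p:
                f = A[r][col]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

def matmul(X, Y, p):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) % p
             for j in range(len(Y[0]))] for i in range(len(X))]

M = matmul(inv_mod(C, p), B, p)   # change-of-basis matrix from B to C
assert matmul(C, M, p) == B       # sanity: C * M == B (mod 3)
print(M)                          # [[0, 0, 1], [1, 1, 0], [2, 0, 0]]
```

The columns of `M` are exactly the coordinate vectors $[b_i]_C$ ; for instance the second column $[0,1,0]^T$ reflects that $b_2 = c_2$ here.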
|linear-algebra|matrices|change-of-basis|
1
Verifying a formula for $\frac{\partial}{\partial u}w_x$, where $w = f(x,y)$, $x = u + v$, and $y = u - v$
If I have $w = f(x,y)$ , where $x = u + v$ and $y = u - v$ , then why is the following true? $$\frac{\partial}{\partial u}w_x = \frac{\partial w_x}{\partial x}\cdot\frac{\partial x}{\partial u} + \frac{\partial w_x}{\partial y}\cdot\frac{\partial y}{\partial u}$$ Is my assumption that $\frac{\partial}{\partial u}w_x = \frac{\partial^2 w}{\partial u \partial x}$ incorrect since $w_x = \frac{\partial w}{\partial x}$ ? Lastly, How does that imply the following? $$\frac{\partial w_x}{\partial x}\cdot\frac{\partial x}{\partial u} + \frac{\partial w_x}{\partial y}\cdot\frac{\partial y}{\partial u} = w_{xx} + w_{xy}$$
If I have $w = f(x,y)$ , where $x = u + v$ and $y = u - v$ , then why is the following true? $$\frac{\partial}{\partial u}w_x = \frac{\partial w_x}{\partial x}\cdot\frac{\partial x}{\partial u} + \frac{\partial w_x}{\partial y}\cdot\frac{\partial y}{\partial u}$$ Is my assumption that $\frac{\partial}{\partial u}w_x = \frac{\partial^2 w}{\partial u \partial x}$ incorrect since $w_x = \frac{\partial w}{\partial x}$ ? Not exactly. The notation $\frac{\partial^2 w}{\partial u\partial x}$ should be avoided, since it is not a second derivative with respect to two independent variables ( $x$ being dependent on $u$ and $v$ ). Also, we should preferably use one style of notation or the other, rather than mixing them, to avoid confusion. $$\begin{align}\dfrac{\partial~~}{\partial u}\dfrac{\partial w}{\partial x}&=\dfrac{\partial^2 w}{\partial x\,^2}\cdot\dfrac{\partial x}{\partial u}+\dfrac{\partial^2 w}{\partial y\,\partial x}\cdot\dfrac{\partial y}{\partial u}\\[1ex] [w_x]_u &= w_{xx}x_u +w_{xy}y_u\end{align}$$ Las
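If it helps to see the chain rule in action numerically, here is a small check with a made-up test function $f(x,y)=x^3y+y^2$ (so $w_x=3x^2y$ , $w_{xx}=6xy$ , $w_{xy}=3x^2$ ), comparing a finite-difference derivative in $u$ against $w_{xx}x_u+w_{xy}y_u$ with $x_u=y_u=1$ :

```python
# w_x as a function of (u, v) through x = u + v, y = u - v,
# for the hypothetical test function f(x, y) = x**3 * y + y**2
def wx(u, v):
    x, y = u + v, u - v
    return 3 * x**2 * y          # w_x = 3 x^2 y

u, v, h = 0.7, 0.3, 1e-6
numeric = (wx(u + h, v) - wx(u - h, v)) / (2 * h)   # d/du of w_x

x, y = u + v, u - v
exact = 6 * x * y + 3 * x**2     # w_xx * x_u + w_xy * y_u
assert abs(numeric - exact) < 1e-6
```

The central difference agrees with $w_{xx}+w_{xy}$ to well within the tolerance, as the chain rule predicts.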
|calculus|multivariable-calculus|derivatives|partial-differential-equations|partial-derivative|
1
If $f$ is surjective, it has a right inverse
I've been struggling to understand how the surjection of a function $f : X \rightarrow Y$ implies that it has a right inverse. My questions basically reside on the application of the axiom of choice to the proof. First question : The axiom of choice states that, for any set $Y$ , if $\varnothing \not\in Y$ , then there exists a function $g : Y \rightarrow \bigcup_{y\in Y} X_y$ . such that $\forall y \in Y g(y)\in X_y$ . Why, if $f : X \rightarrow Y$ is surjective, $\varnothing \not\in Y$ ? I can't find my way on this one. Second question : As far as I understand, $\bigcup_{y\in Y} X_y$ represents the union of a family of the sets $X_y$ indexed by the elements $y\in Y$ . Hence, $\bigcup_{y\in Y} X_y$ is equal to $\{x : \exists y \in Y (x\in X_y)\}$ . If $X_y$ is defined as $\{x\in X : y=f(x)\}$ , then $\{x : \exists y\in Y (x\in X_y)\}$ is equal to $\{x : \exists y[y\in Y \land x\in X \land y=f(x)]\}$ , which is the same as $\{x \in X : f(x) \in Y\}$ . The set $\{x \in X : f(x) \in Y\}$
If $f:X\to Y$ , maybe $\emptyset\in Y$ . I mean, we don't really know or care about that. The things which we require to be nonempty in the standard proof are the fibres $f^{-1}(y)$ , $y\in Y$ , and we would look at a choice function on $K:=\{f^{-1}(y):y\in Y\}$ . Which, by the way, is NOT equal to $\bigcup_{y\in Y}f^{-1}(y)$ ; the line: " $\{X_y:y\in Y\}=\bigcup_{y\in Y}X_y$ " Is incorrect. The elements of the left hand side are just the sets $X_y$ , not the elements of the sets $X_y$ (which are what live in the right hand side). But please note that the choice function $g$ on $K$ is not the function $Y\to X$ that we want. $K$ is not $Y$ ! We do not use a choice function on $Y$ !! We use a choice function on the set $K$ of all fibres,... because that's exactly what we want to do, choose a preimage of $y$ for each $y$ . To your third question: yes, that's the point. Given a choice function $g$ on $K$ , we know $g(f^{-1}(y))\in f^{-1}(y)$ thus $\phi:Y\ni y\mapsto g(f^{-1}(y))\in X$ sati
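For intuition, here is the construction in miniature for a finite surjection, where no choice axiom is needed and "choosing from each fibre" can be done explicitly (the particular `f` is just an arbitrary example):

```python
X = range(6)
f = lambda x: x % 3            # a surjection X -> Y with Y = {0, 1, 2}
Y = {f(x) for x in X}

# the fibres f^{-1}(y), indexed by y
fibre = {y: [x for x in X if f(x) == y] for y in Y}
# "choice function": pick one element from each fibre
g = {y: fibre[y][0] for y in Y}

assert all(f(g[y]) == y for y in Y)   # g is a right inverse of f
```

The choice function lives on the set of fibres, and composing it with $y\mapsto f^{-1}(y)$ gives the right inverse $Y\to X$ , exactly as in the proof.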
|functions|logic|solution-verification|axiom-of-choice|
0
If $f$ is surjective, it has a right inverse
I've been struggling to understand how the surjection of a function $f : X \rightarrow Y$ implies that it has a right inverse. My questions basically reside on the application of the axiom of choice to the proof. First question : The axiom of choice states that, for any set $Y$ , if $\varnothing \not\in Y$ , then there exists a function $g : Y \rightarrow \bigcup_{y\in Y} X_y$ . such that $\forall y \in Y g(y)\in X_y$ . Why, if $f : X \rightarrow Y$ is surjective, $\varnothing \not\in Y$ ? I can't find my way on this one. Second question : As far as I understand, $\bigcup_{y\in Y} X_y$ represents the union of a family of the sets $X_y$ indexed by the elements $y\in Y$ . Hence, $\bigcup_{y\in Y} X_y$ is equal to $\{x : \exists y \in Y (x\in X_y)\}$ . If $X_y$ is defined as $\{x\in X : y=f(x)\}$ , then $\{x : \exists y\in Y (x\in X_y)\}$ is equal to $\{x : \exists y[y\in Y \land x\in X \land y=f(x)]\}$ , which is the same as $\{x \in X : f(x) \in Y\}$ . The set $\{x \in X : f(x) \in Y\}$
Your statement of the Axiom of Choice is unclear, because you never explain what $ X _ y $ is. There are two ways to fix this (and if you're going through a course or a textbook, then you should probably find out which version of AC they're using): If $ \varnothing \notin Y $ , then there's a function $ g \colon Y \to \bigcup Y $ (that is $ g \colon Y \to \bigcup _ { y \in Y } y $ ) such that for each $ y \in Y $ , $ g ( y ) \in y $ . If $ X $ is a $ Y $ -indexed family of sets and for each $ y \in Y $ , $ X _ y \ne \varnothing $ , then there's a function $ g \colon Y \to \bigcup _ { y \in Y } X _ y $ such that for each $ y \in Y $ , $ g ( y ) \in X _ y $ . I'll call these two versions AC1 and AC2. First question: There is no reason why $ \varnothing \notin Y $ . If you use AC1, then the $ Y $ in your hypothesis is not the same as the $ Y $ in AC. Instead of $ Y $ , you have to use $ \{ A \subseteq X \; | \; \exists \, y \in Y , \; A = f ^ * [ \{ y \} ] \} $ , the set of all preimages
|functions|logic|solution-verification|axiom-of-choice|
1
Fundamental group of simplicial space
This question asks the same as mine, but it was unsuccessful in getting an answer, so I am trying again. For context, I am reading Weibel's K-book and am struggling with Proposition 8.4, which computes $K_0$ for Waldhausen categories. Suppose we have a simplicial space $X_\bullet$ with $X_0$ a point; then how can I find the fundamental group? According to Weibel it is generated by the path components of $X_1$ , subject to the relations $\partial_1([y])=\partial_0([y])\partial_2([y])$ for all $[y]\in\pi_0(X_2)$ . I find it completely counterintuitive that we don't actually care about the topology of $X_1, X_2$ to compute $\pi_1$ ; the fact that it is very hard to visualize the geometric realization of simplicial spaces is not helping me to get an intuition. Any help is appreciated, thank you very much.
Do you know that the fundamental group of a CW complex depends only on its 2-skeleton? Then it will no longer be surprising. There are a few ways to see this. I really like the covering-theoretic exposition here which uses a simplicial analogue of the classical theory to demonstrate what you say. Also consult the Kerodon and the text of Goerss and Jardine, "Simplicial Homotopy Theory" for the relevant and supporting material. I realise this is a brief answer but let me know if you're struggling to find a specific proof within these texts or are hung up on one of the details. I mean, all of these texts omit (many) details, as is the norm, so there is always room for confusion and for the reader to do their own work.
|algebraic-topology|fundamental-groups|group-presentation|simplicial-stuff|
0
Derivatives of a polynomials on a Banach space as a multilinear maps
I'm reading "Differential Equations Driven by Rough Paths" by Lyons, Caruana and Lévy and I can't wrap my head around the following part (beginning of Section 1.4.2). Let $V$ be a finite-dimensional Banach space. For an integer $k \geq 0$ let $P: V \to \mathbb{R}$ be a polynomial of degree $k$ . Let $X : [0,T] \to V$ be a Lipschitz continuous path. Then, if $P^1:V \to L(V,\mathbb{R})$ denotes the derivative of $P$ , Taylor's theorem asserts that, for all $t \in [0,T]$ , $$ P(X_t) = P(X_0) + \int_0^t P^1(X_s)dX_s. $$ Let now $P^2 : V \to L(V \otimes V, \mathbb{R})$ denote the second derivative of $P$ . Recall that $L(V \otimes V, \mathbb{R})$ is the space of bilinear forms on $V$ . Then, by substitution into the last expression, we get $$ P(X_t) = P(X_0) + P^1(X_0)\int_0^t dX_s + \int_0^t\!\int_0^s P^2(X_r)\,dX_r\,dX_s. $$ Integrals are understood in Young's sense. I have trouble understanding the following concepts: I'm only familiar with polynomials over rings, not Banach spaces. Where does the multiplication come from? Why is the derivative of a p
A homogeneous polynomial of degree $r$ on a vector space is the composition of an $r$ -linear function $N$ (with values in some vector space) and the $r$ -fold diagonal map $v \mapsto v^{(r)} = (v, \ldots, v) \in V^r.$ Thus, a homogeneous polynomial of degree $r$ is $f_r(v) = N(v^{(r)}).$ A polynomial of degree at most $n \in \mathbf{N}$ is the sum over $r \leq n$ of homogeneous polynomials of degree $r.$ Polynomials on vector spaces do not, in general, have a product defined. However, there is the following construction: suppose $W_1, W_2$ are two vector spaces and $P:W_1 \times W_2 \to X$ is a "product" (i.e., a bilinear map, $X$ being another vector space). Suppose $f_s$ and $g_t$ are homogeneous polynomials of degree $s$ and $t$ with values in $W_1$ and $W_2,$ respectively. The product of $f_s$ and $g_t$ relative to $P$ is $P \circ (f_s, g_t).$ This is a homogeneous polynomial of degree $s+t.$ To see this, consider $s$ -linear and $t$ -linear functions $M$ and $N$ such that $f_s(v) = M(v^{(s)})$
|functional-analysis|polynomials|tensor-products|rough-path-theory|
1
Easy to understand proof of Self-similar IFS fractal dimension
I'm preparing for a seminar talk on fractals, the topic is Self-similar IFS fractal dimension, proving the main theorem used: Given $IFS=\left\{ \mathbb{R}^{n};w_{1},...,w_{N}\right\} $ with attractor $A$ , where each mapping is a similitude with scaling factor $0<r_i<1$ : if the IFS is totally disconnected then the fractal dimension is the only solution $D(A)$ to the equation: $$ \sum_{i=1}^{N}r_{i}^{D(A)}=1 $$ I've gone over the sketch of the proof given in Barnsley, M.F. (2012) Fractals everywhere , and it's just missing the main concept that makes the theorem difficult. It assumes that $D(A)$ , which is defined as the limit $D(A)=\lim_{\epsilon\rightarrow 0}{\frac{\ln\left(\mathcal{N}\left(A,\epsilon\right)\right)}{\ln\left(\frac{1}{\epsilon}\right)}}$ , exists, and that $\mathcal{N}\left(A,\epsilon\right)\sim C\epsilon^{-D(A)}$ , both reasonable assumptions, but they make the proof pretty useless besides giving some intuition why it is correct. I've checked out the proof in a more advanced book Fractal
I think that the reality is that it's not particularly easy to prove that the similarity dimension agrees with some other notion of dimension under reasonably general hypotheses. You might have a look, though, at Chapter 5 of this text whose main purpose is to prove that the similarity dimension agrees with the box-counting dimension under the appropriate assumptions. Some main features of the text include: Self contained descriptions of similarity dimension and box-counting dimension, together with all their properties required for the proof, No essential reference to the Hausdorff dimension or measure theory, and Use of the strong open set condition as characterization of "non-overlapping". As far as I know, there's no published proof of the main result that similarity dimension agrees with box-counting dimension under these hypotheses that doesn't use Hausdorff dimension as an intermediate step. So, I wrote this up to teach these ideas to my undergraduate students. The main complica
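As a concrete companion to the equation $\sum_i r_i^{D}=1$ : since the left side is strictly decreasing in $D$ , the dimension can be found by bisection. Below, the three ratio- $1/2$ maps of the Sierpinski gasket recover $\log 3/\log 2$ (a standard example chosen for this sketch, not taken from the text above):

```python
import math

def similarity_dimension(ratios, lo=0.0, hi=10.0, tol=1e-12):
    # sum_i r_i**D - 1 is strictly decreasing in D for ratios in (0, 1)
    f = lambda D: sum(r**D for r in ratios) - 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Sierpinski gasket: three similitudes with scaling factor 1/2
D = similarity_dimension([0.5, 0.5, 0.5])
assert abs(D - math.log(3) / math.log(2)) < 1e-9   # D = log 3 / log 2
```

Monotonicity of $D\mapsto\sum_i r_i^D$ is also what makes the solution of the equation unique.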
|alternative-proof|education|fractals|
1
Fundamental group of simplicial space
This question asks the same as mine, but it was unsuccessful in getting an answer, so I am trying again. For context, I am reading Weibel's K-book and am struggling with Proposition 8.4, which computes $K_0$ for Waldhausen categories. Suppose we have a simplicial space $X_\bullet$ with $X_0$ a point; then how can I find the fundamental group? According to Weibel it is generated by the path components of $X_1$ , subject to the relations $\partial_1([y])=\partial_0([y])\partial_2([y])$ for all $[y]\in\pi_0(X_2)$ . I find it completely counterintuitive that we don't actually care about the topology of $X_1, X_2$ to compute $\pi_1$ ; the fact that it is very hard to visualize the geometric realization of simplicial spaces is not helping me to get an intuition. Any help is appreciated, thank you very much.
First note that $|X|$ is actually connected (so that we do not have to worry about basepoints). Namely, for a general simplicial space $X$ , we have a surjection $\pi_0X_0\to\pi_0|X|$ . You can either check this by direct inspection, or by a fancy homotopy colimit argument: $|X|$ is the homotopy colimit $\mathrm{hocolim}_\mathrm{[n]\in\Delta^\mathrm{op}}\,X_n$ , and $\pi_0$ sends homotopy colimits of spaces to colimits of sets (it has an underlying left adjoint $\infty$ -functor $\pi_0\colon\mathsf{Spaces}\to\mathsf{Set}$ ), so we can write $\pi_0|X|\cong\mathrm{colim}_{[n]\in\Delta^\mathrm{op}}\,\pi_0X_n$ . The subcategory $1\rightrightarrows 0$ is cofinal in $\Delta^\mathrm{op}$ , so $\pi_0|X|\cong\mathrm{coeq}(\pi_0X_1\rightrightarrows\pi_0X_0)$ , and $\pi_0X_0$ clearly surjects onto this coequalizer. In particular, if $X_0$ is connected, so is $|X|$ . Now, let's compute $\pi_1|X|$ . We use the skeleton filtration $\mathrm{sk}^0X\to\mathrm{sk}^1X\to\ldots\to\mathrm{sk}^nX\to\ldots$
|algebraic-topology|fundamental-groups|group-presentation|simplicial-stuff|
1
When a group homomorphism must be $K$-linear
I say a field $K$ is cool if every group homomorphism between two $K$ -vector spaces is $K$ -linear. $\mathbb{Z}_p$ is cool because multiplication by a scalar is like an iterated sum. $\mathbb{Q}$ is cool too. Let $f$ be such a group homomorphism. $$bf(\frac a b x)=f(\frac a b x)+\ldots +f(\frac a b x)=f(\frac a b x+\ldots+\frac a b x)=f(ax)$$ (the dots mean " $b$ times") and hence $$f(\frac a b x)=\frac a b f(x).$$ The field of $p$ -adic rationals is cool too (same argument). $\mathbb{F}_4=\{0,1,a,a+1=a^2\}$ is not cool. The map $$0\mapsto 0,1\mapsto 1,a\mapsto a+1,a+1\mapsto a$$ respects sums but not multiplication by $a$ . $\mathbb{R}$ is not cool. Let $\{x_1,x_2,\ldots,x_i,\ldots\}$ be an (uncountable) basis for $\mathbb{R}$ as a $\mathbb{Q}$ -vector space. The map $$(\alpha_1,\alpha_2,\ldots,\alpha_i,\ldots)\mapsto (\alpha_2,\alpha_1,\ldots,\alpha_i,\ldots)$$ (swapping the first two components) is $\mathbb{Q}$ -linear and so respects sums but it's not $\mathbb{R}$ -linear because $x_3
The only cool fields are $\mathbb{Q}$ and $\mathbb{Z}/p\mathbb{Z}$ . So "cool field" is a synonym for "prime field". You showed that the prime fields are cool. Now suppose $F$ is not a prime field. Let $k\subsetneq F$ be the prime subfield of $F$ . Then $F$ is a $k$ -vector space of dimension at least $2$ , so it admits a nontrivial $k$ -linear automorphism, which is an additive group automorphism, but which is not $F$ -linear (since the only $F$ -linear automorphism of $F$ is the identity). In particular, the $p$ -adic field $\mathbb{Q}_p$ is not cool.
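The $\mathbb{F}_4$ example from the question can be checked mechanically. Below, $\mathbb{F}_4=\mathbb{Z}_2[a]/(a^2+a+1)$ is encoded by coefficient pairs $(c_0,c_1)$ for $c_0+c_1a$ , and the map $0\mapsto0,\,1\mapsto1,\,a\mapsto a+1,\,a+1\mapsto a$ (which is the Frobenius $x\mapsto x^2$ ) is verified to be additive but not $\mathbb{F}_4$ -linear:

```python
from itertools import product

def add(u, v):
    # addition in GF(4) is componentwise XOR of coefficients
    return (u[0] ^ v[0], u[1] ^ v[1])

def mul(u, v):
    # (c0 + c1 a)(d0 + d1 a) with a^2 = a + 1, coefficients mod 2
    c0, c1 = u
    d0, d1 = v
    return ((c0 * d0 + c1 * d1) % 2, (c0 * d1 + c1 * d0 + c1 * d1) % 2)

# the map from the question: 0->0, 1->1, a->a+1, a+1->a
sigma = {(0, 0): (0, 0), (1, 0): (1, 0), (0, 1): (1, 1), (1, 1): (0, 1)}
F4 = list(sigma)

# additive homomorphism on all of GF(4) ...
assert all(sigma[add(u, v)] == add(sigma[u], sigma[v])
           for u, v in product(F4, F4))
# ... but not GF(4)-linear: sigma(a * 1) != a * sigma(1)
a_, one = (0, 1), (1, 0)
assert sigma[mul(a_, one)] != mul(a_, sigma[one])
```

This is exactly the nontrivial automorphism over the prime subfield $\mathbb{Z}_2$ that the answer's argument produces in general.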
|abstract-algebra|field-theory|
1
ring meniscus at cylinder
I came across a somewhat interesting differential equation while studying the shape of a meniscus ring formed at the bottom of a cylinder. Some 2D cross sections through a cylinder symmetry plane can be computed using numerical integration, assuming a "wetting" surface, i.e. a contact angle of zero. The equilibrium condition is the requirement of constant mean interface curvature. This requirement can be derived from the assumptions of uniform ambient pressure outside the fluid and constant hydrostatic pressure inside the fluid, both valid for menisci small enough to justify the neglect of effects due to gravity. The Young-Laplace equation then asserts that the pressure difference across a curved interface is related by a multiplicative constant (called the surface tension) to the mean interface curvature. Different profiles can thus be characterized by their initial radii of curvature at the vertical walls in the vertical direction. The dotted lines are circular profiles with ra
Capillary meniscus depth, Equation (1): in its simplest cases, an equilibrium consideration gives $R_m$ for the surface mean curvature, $$ \frac{1}{R_1} +\frac{1}{R_2} =\frac{2}{R_m}$$ Such surfaces belong to the family named after Delaunay, the Delaunay unduloids. There are three varieties. These roulettes have maximum volume for a given surface area in contact with air. They are classed as CMC (constant mean curvature) surfaces. The ordinary differential equation is $$ \cos \phi= \frac{a}{r} + \frac{r}{b} $$ where $r$ is the meniscus radius, $\phi$ is the slope angle, and $(a,b)$ are constants depending on the geometry and on the relative adhesive/cohesive forces of the liquid, which determine the contact angle between the glass tube and the water.
|curvature|integral-equations|statics|
1
Is $\omega_0-1$ infinite?
I have read in another answer Is infinity an odd or even number? that $\omega_0$ is the "smallest infinity", but is $\omega_0-1$ not also infinite?
Nearly 8 years after he asked for it, but I'll finally add the answer Mark S requested in the comments on the OP. The answer to the question depends on what is meant by two things: What does it mean to subtract $1$ from $\omega_0$ ? What does it mean to say something is "finite" or "infinite"? The first question is part of a larger question of "where is this taking place"? $\omega_0$ is the least ordinal that is not a natural number, so the obvious choice is in the ordinal numbers, with ordinal arithmetic. And this is just what the other answers and comments before my own discussed. $\omega_0 - 1$ would then mean "the ordinal to which adding $1$ gives $\omega_0$ ". But there is still a problem: ordinal addition is not commutative. In general $a + b \ne b + a$ , so the question arises of "adding $1$ to which side?" However, there is no ordinal $\alpha$ such that $\alpha + 1 = \omega_0$ , but there is a unique ordinal $\beta$ such that $1 + \beta = \omega_0$ . So if $\omega_0 - 1$ is to
|elementary-set-theory|infinity|ordinals|
1
How many initial and discounted raffle tickets should I try to sell to earn $400 when the probability of selling discounted decreases per buyer?
How many initial and discounted raffle tickets should I try to sell to profit $400? The initial ticket costs $10 and each successive ticket costs $3 . $$ f(x) = 10 \\ f(y) = 3 $$ The probability of a person buying a $10 ticket is 0.3 . The probability of any additional purchase starts at 0.5 , and decreases by 0.1 with each following purchase. $$ P(x) = 0.3 \\ P(y_0) = 0.5 \\ P(y_n) = P(y_{n-1}) - 0.1 $$ Initially, I started off trying to maximize 10x+3y ≥ 400 to find how many tickets I should sell of each. $$ 10x+3y =400$$ I realized that the real situation involved probability, so I added in the chance of buying an additional discounted ticket. I want to know how many initial and additional tickets can be expected to sell to hit the goal. How should I approach this problem? And how can I better express this mathematically?
You should start by computing the expected revenue from each person that you approach. With probability $0.7$ you get nothing. With probability $0.3 \cdot (1-0.5)$ you get $\$10$ because they buy only one ticket where the first factor is buying the first ticket and the second is not buying the second ticket, which is why I wrote it that way. Compute the chance of each possible amount of revenue and use that to get the expected value. Then divide that value into $\$400$
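To make the suggested computation concrete, here is one reading of the model in code: each person buys the first ( $10) ticket with probability 0.3 and, given each purchase, buys the next ( $3) ticket with probability 0.5, 0.4, ..., 0.1, then stops. The exact stopping rule after the 0.1 step is my assumption, since the question leaves it open:

```python
prices = [10, 3, 3, 3, 3, 3]               # ticket 1 is $10, the rest $3
probs = [0.3, 0.5, 0.4, 0.3, 0.2, 0.1]     # P(buy ticket k | bought k-1)

expected = 0.0
p_reach = 1.0
for price, p in zip(prices, probs):
    p_reach *= p                  # probability the person buys ticket k
    expected += p_reach * price   # its contribution to expected revenue

# expected revenue per person approached is about $3.696
people_needed = 400 / expected    # expected people needed to reach $400
```

Under these assumptions you would need to approach a little over a hundred people on average.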
|probability|optimization|
0
Do Wikipedia, nLab and several books give a wrong definition of categorical limits?
It seems unlikely that all these sources are wrong about the same thing, but I can’t find a flaw in my reasoning – I hope that either someone will point out my error or I can go fix Wikipedia and write some errata emails. A standard definition of a limit of a diagram is that it’s a terminal object in the category of cones to the diagram. That is, a limit of a diagram is a cone to the diagram with apex $C$ such that for any cone to the diagram with apex $C'$ there is a unique morphism from $C'$ to $C$ that makes everything commute. I was surprised to find that quite a number of sources replace “any cone” by “any other cone”. Wikipedia : A limit of the diagram $F:J\to C$ is a cone $(L, \phi)$ to $F$ such that for every other cone $(N, \psi)$ to $F$ there exists a unique morphism $u:N\to L$ such that $\phi_X\circ u=\psi_X$ for all $X$ in $J$ . The definition has included “other” since this edit $16$ years ago. Similar formulations are in A First Course in Category Theory by Ana Agore (p.
I agree with you and both commenters that the word "other" doesn't belong here. In particular, this would give two inconsistent definitions of "terminal object" itself, since of course a terminal object is the limit of an empty diagram. The subtly wrong definitions that you quote would make the unique object of any one-object category terminal, for instance, which is definitely not what we want. I certainly don't think these authors are intending the mathematically unnatural "for any cone not equal to the given cone" when they say "other", but just writing in more characteristic English and becoming a bit imprecise in doing so. Actually, it's an interesting general question about mathematical terminology: if $x$ is in $X$ and you say $\varphi(x)$ holds if for any other $y\in X,\psi(x,y)$ holds, I'd guess you almost certainly mean $\forall y\in X,$ not $\forall y\in X\setminus \{x\}.$ It'd be interesting to see examples from other areas.
|category-theory|definition|limits-colimits|universal-property|terminal-objects|
0
Entire function bounded in every horizontal line, and has limit along the positive real line
Let $f(z)$ be an entire function (holomorphic on $\mathbb{C}$ ) satisfying the following conditions: $$|f(z)|\leq \max (e^{\operatorname{Im}(z)},1 ),\ \forall z\in\mathbb{C}$$ $$\lim_{\mathbb{R}\ni t\rightarrow+\infty} f(t)=0$$ My question is: can we prove that the following limit exists? $$\lim_{\mathbb{R}\ni t\rightarrow-\infty} f(t)$$ Maybe we can even prove $f(z)=0$ . But I do not know how to prove it. My attempt: if $f(z)$ has finitely many zeros, then by the Hadamard factorization theorem, $f(z)=e^{az+b}P(z)$ where $P(z)$ is a polynomial. Such a function cannot be bounded by $\max(e^{\operatorname{Im}(z)},1)$ unless $P(z)\in\mathbb{C}$ and $ia\in\mathbb{R}$ . Then we know $f(z)$ has to be $0$ . But if $f(z)=0$ has infinitely many solutions, then I do not know how to proceed.
I will sketch the construction of such an $f$ which satisfies all the required conditions but for which the limit at $-\infty$ doesn't exist. Pick a sequence $a_n \to 0, n \to \infty$ very fast, e.g. $a_n=(2n)^{-2n}, n \ge 1$ , and another sequence that is bounded by $1$ but oscillates, e.g. $a_{-2n+1}=1$ and $a_{-2n}=1/2$ for $n \ge 1$ . Consider the function $$g(z)=\sum_{n \in \mathbb Z, n \ne 0}a_n\frac{1}{(z-n)^2}$$ We claim that $g$ is meromorphic with double poles at the nonzero integers and $|g(z)| \le K$ on $\mathbb C -\cup_{n \ne 0}B(n,1/4)$ . The series converges uniformly on any compact set that stays away from the nonzero integers, so $g$ is meromorphic. For the second part, noting that $|a_n| \le 1$ , let $z=x+iy$ ; then the at most three terms with $|x-n| \le 1$ can be bounded by $16$ , since $|z-n| \ge 1/4$ for all $n$ , while for the rest we can bound $|1/(z-n)^2|$ by $1/(x-n)^2$ , and that sum is clearly uniformly bounded by a multiple of $\zeta(2)$ We claim that $h(z)=g(z)\frac{\sin^2
|complex-analysis|harmonic-functions|entire-functions|
0
A seemingly random sequence.
Take the function $f(x,n)=\frac{10^x-1}{9} \bmod n$ and the sequence $s_n$ that follows these rules: $x$ is an integer counting up from $1$ . If $f(x,n)$ repeats at some point, $s_n$ is equal to how long the loop is (for the 3rd sequence, $[1,2,3,4,3,4,3,\ldots]$ , this number would be $2$ ). If $f(x,n)$ stops (loops on a single number), $s_n$ is equal to how long it takes to stop. If you do this, you get the sequence $[0,0,3,1,0,3,6,2,9,0,2,3,12,6,3,3,\ldots]$ , which is seemingly random. My question is: are there infinitely many $0$ 's in this sequence?
The integers $n$ for which $\frac{10^2-1}9\mod n = \frac{10^1-1}9\mod n$ are precisely the divisors of $10$ (and thus the OP found them all). Note that this equality is equivalent to each of \begin{align*} \frac{10^2-1}9 &\equiv \frac{10^1-1}9 \pmod n \\ \frac{10^2-1}9 - \frac{10^1-1}9 &\equiv 0 \pmod n \\ 10 &\equiv 0 \pmod n, \end{align*} and the last congruence is true (by definition) exactly when $n\mid 10$ . Remark 1: If $10$ and $9$ are replaced by $b$ and $b-1$ , respectively, the same argument shows that the answer is all divisors of $b$ . Remark 2: This is one of many examples why (in my opinion) looking at the relation $a\equiv b\pmod n$ is much more helpful than looking at the function $a\mod n$ . Remark 3: Without framing the relationships among the values of $f(x,n)$ in terms of congruences, it's not even clear that the repeating patterns must always continue—nor is it even clear that once a value repeats, it must keep repeating!
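The divisor claim is easy to confirm by brute force; `f` below is the function from the question:

```python
def f(x, n):
    # x-th repunit 1, 11, 111, ... reduced mod n
    return ((10 ** x - 1) // 9) % n

# f(2, n) == f(1, n) holds exactly for the divisors of 10
matches = [n for n in range(1, 200) if f(2, n) == f(1, n)]
assert matches == [1, 2, 5, 10]
```

This matches the congruence argument: $11\equiv 1\pmod n$ exactly when $n\mid 10$ .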
|sequences-and-series|
1
Example of a primitive recursive function whose range is properly general recursive
I know that not every recursive function from $\mathbb{N}$ to $\mathbb{N}$ has a range which is a recursive set. I wonder, is there a primitive recursive function $f$ such that the range of $f$ is a general recursive set, but not a primitive recursive set?
A standard "waiting" trick proves that in fact "r.e. = p.r.e." - any (nonempty) set which is the range of a recursive function is also the range of a primitive recursive function. The key point is that Kleene's $T$ predicate is primitive recursive. Suppose $A=ran(\varphi_e)$ with $\varphi_e$ total recursive (total doesn't really matter here but it simplifies things). Fix $a\in A$ , let $\langle\cdot,\cdot\rangle$ be the Cantor pairing function, and let $$\psi(\langle x,y\rangle)=\begin{cases} \varphi_e(x) & \mbox{ if }\varphi_e(x)[y]\downarrow\\ a & \mbox{ otherwise.} \end{cases}$$ Here " $\varphi_e(x)[y]\downarrow$ " means " $\varphi_e(x)$ halts in at most $y$ steps," and is implemented using the $T$ -predicate. The point is that because of the runtime bound, the function $\psi$ is primitive recursive. And we clearly have $ran(\psi)=A$ .
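Here is a toy Python model of the waiting trick, with a stand-in "machine" whose step count is made explicit; everything below is illustrative, and real $T$ -predicate bookkeeping is of course more involved:

```python
# run(x, y) models "phi(x) halts within y steps": here phi(x) = x*x,
# pretending the computation takes x steps (a stand-in for the T-predicate).
def run(x, y):
    steps_needed = x
    return (y >= steps_needed), x * x

def cantor_unpair(z):
    # inverse of the Cantor pairing <x, y> = (x+y)(x+y+1)/2 + y
    w = int((((8 * z + 1) ** 0.5) - 1) // 2)
    y = z - w * (w + 1) // 2
    return w - y, y

a = 0                          # a fixed element of A = range(phi)

def psi(z):
    x, y = cantor_unpair(z)
    halted, value = run(x, y)
    return value if halted else a

# psi is total with a primitive-recursive flavour, and ran(psi) = ran(phi):
# every square eventually appears once a large enough step budget is paired in
assert {psi(z) for z in range(2000)} == {x * x for x in range(32)}
```

The point mirrored here is that `psi` never has to "wait" unboundedly: the step budget is part of its input, so each evaluation is bounded in advance.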
|computability|
1
$\Gamma <\mathrm{PSL}_2(\mathbb{R})$: non-compact if contains parabolic element.
It seemingly is a fact that a discrete subgroup of $\mathrm{PSL}_2(\mathbb{R})$ acting on hyperbolic space cannot be compact if it contains a parabolic element. I was wondering if the following proof works: Let $\Gamma$ be our group and $T \in \Gamma$ be parabolic. Then we can conjugate $\Gamma$ in $\mathrm{PSL}_2(\mathbb{R})$ such that $T':=gTg^{-1} \in g\Gamma g^{-1}=: \Gamma'$ is of the form $z\overset{T'}{\mapsto} z+a$ for some $a\in\mathbb{R}$ . By discreteness we can choose such $T$ such that $a$ is minimal amongst parabolic elements of $\Gamma$ . Therefore, we can choose a fundamental domain $D$ of $\Gamma'$ such that the lines $i\mathbb{R}_+$ and $a + i\mathbb{R}_+$ are part of the boundary in the sense that there is some $y$ such that for $P_y := \{ c + id : c \in [0,a], d>y\}$ , we have $P_y \cap \overline{D} = P_y$ , where $\overline{D}$ is the closure of $D$ . For the sequence $x_n := i(y+n) \in D $ , $x_n \rightarrow \infty$ and thus $D$ is not compact. It follows that $\G
First of all, a discrete subgroup $\Gamma$ of a locally compact topological group $G$ is called cocompact (not compact!) if $G/\Gamma$ is compact. In the setting of subgroups of $G= PSL(2, {\mathbb R})$ , a subgroup $\Gamma$ is cocompact if and only if $\Gamma$ acts properly discontinuously on ${\mathbb H}^2$ with compact quotient space ${\mathbb H}^2/\Gamma$ . Lemma. If a discrete subgroup $\Gamma$ is cocompact, then $\Gamma$ contains no parabolic elements. Proof. Suppose that $\Gamma$ contains a parabolic element $\gamma$ . Then there exists a sequence $x_n\in {\mathbb H}^2$ such that $$ \lim_{n\to\infty} d(x_n, \gamma x_n)=0. $$ Since $\Gamma$ is cocompact, there exists a compact subset $C\subset {\mathbb H}^2$ which intersects every $\Gamma$ -orbit in at least one point. Thus, there exists a sequence $g_n\in \Gamma$ such that $g_n(x_n)=y_n\in C$ . Since $C$ is compact, after passing to a subsequence, we can assume that the sequence $(y_n)$ converges to some point $y\in C$ . Note that
|group-actions|hyperbolic-geometry|geometric-group-theory|
1
$L^p_{loc}(\Omega)$ is completely metrizable
Let $\Omega \subset \mathbb{R}^n$ be a (not necessarily bounded) domain and $1 \leq p \leq \infty$ . Then define $L^p_{loc}(\Omega)$ to be the set of functions $f: \Omega \rightarrow \mathbb{R}$ such that $$\Big(\int_K |f|^p\Big)^{1/p} < \infty \quad \text{for every compact } K \subset \Omega.$$ I have read that this space is completely metrizable but cannot find a proof, so I would like to prove it myself; however, I am having some trouble. My approach is to find a metric such that convergence with respect to that metric agrees with convergence in $L^p_{loc}$ . I am not sure if this is the right approach, since constructing such a metric is not obvious. Is there a better way to prove this?
Constructing the metric: As explained in my comments above, we want to pick semi-norms $L^p(\Omega_n)$ for "larger and larger" $\Omega_n$ in order to make sure that "locally our metric looks like $L^p$ ". A standard way of doing this (e.g. for Fréchet spaces) is to patch these semi-norms together in the following way: $$ d(f,g) = \sum_{n\geq 1} 2^{-n}\frac{\Vert f-g \Vert_{L^p(\Omega_n)}}{1+\Vert f-g\Vert_{L^p(\Omega_n)}}. $$ First we note that the series above does indeed converge. For $\Vert f-g\Vert_{L^p(\Omega_n)}$ finite for all $n$ we have $$ d(f,g)\leq \sum_{n\geq 1} 2^{-n} =1 $$ as $$ 0\leq\frac{\Vert f-g \Vert_{L^p(\Omega_n)}}{1+\Vert f-g\Vert_{L^p(\Omega_n)}}\leq 1. $$ One can convince oneself that $d$ is symmetric and satisfies the triangle inequality (for the latter, this might help: If $d(x,y)$ is a metric, then $\frac{d(x,y)}{1 + d(x,y)}$ is also a metric ). In order for this to be a metric we need that $d(f,g)=0$ implies $f=g$ . In order for this to be true, we need that $$
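To see the construction do something, here is a numeric sketch with $\Omega=(0,1)$ , $\Omega_n=[\frac1{n+2},1-\frac1{n+2}]$ and $p=1$ , approximating the seminorms by Riemann sums. The function $f(x)=1/x$ is in $L^1_{loc}(0,1)$ but not in $L^1(0,1)$ , yet $d(f,0)$ comes out finite; all the concrete choices here are mine, for illustration only:

```python
def seminorm(f, g, n, steps=4000):
    # Riemann-sum approximation of the L^1(Omega_n) seminorm of f - g
    a, b = 1 / (n + 2), 1 - 1 / (n + 2)
    h = (b - a) / steps
    return sum(abs(f(a + k * h) - g(a + k * h)) for k in range(steps)) * h

def d(f, g, terms=30):
    total = 0.0
    for n in range(1, terms + 1):
        s = seminorm(f, g, n)
        total += 2 ** -n * s / (1 + s)   # each summand is at most 2^-n
    return total

f = lambda x: 1 / x        # locally integrable on (0,1), not integrable
g = lambda x: 0.0
dist = d(f, g)
assert 0 < dist < 1        # finite even though the global L^1 norm is infinite
```

The $2^{-n}$ weights and the $s/(1+s)$ damping are exactly what tame the blow-up of the seminorms as $\Omega_n$ exhausts $\Omega$ .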
|functional-analysis|analysis|lp-spaces|metrizability|
1
Prove If dim($V$) is even, Then there exists a linear transformation $T: V \rightarrow V$ such that Ker($T$) = Image($T$)
Prove If dim( $V$ ) is even, then there exists a linear transformation $T: V \rightarrow V$ such that Ker( $T$ ) = Image( $T$ ) I'm having trouble trying to prove this statement. I've tried to use the Rank-Nullity Theorem to maybe show that the dimensions are the same but I don't think that works. Any help is appreciated. Thank you in advance.
Let the dimension of $V$ be $2n$ and construct a basis $\{u_1,\dots,u_n,v_1,\dots,v_n\}$ . Now define $T$ by $Tu_i = 0$ , $Tv_i = u_i$ . It is obvious that $\mathrm{Ker}(T) = \mathrm{span}\{u_1,\dots,u_n\}$ , and we also have that $\mathrm{Image}(T) = \mathrm{span}\{u_1,\dots,u_n\}$ , so we are done.
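In coordinates the construction is just a zero matrix with an identity block; a quick numerical check for $\dim V = 4$ (numpy used only for convenience):

```python
import numpy as np

# dim V = 4 (n = 2): take u1, u2, v1, v2 to be the standard basis e1..e4.
# T sends u_i -> 0 and v_i -> u_i; as a matrix acting on coordinates:
T = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)

assert np.allclose(T @ T, 0)           # T^2 = 0, so Image(T) lies in Ker(T)
assert np.linalg.matrix_rank(T) == 2   # rank = 2 = nullity, forcing equality
```

$T^2=0$ gives $\mathrm{Image}(T)\subseteq\mathrm{Ker}(T)$ , and rank-nullity then upgrades the inclusion to equality since both have dimension $n$ .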
|linear-algebra|linear-transformations|
1
Finding formula for sequence
Find an explicit formula for a sequence of the form $a_1$ , $a_2$ , $a_3$ ,… with the initial terms given below: $\frac{1}{5}$ , $-1$ , $\frac{1}{6}$ , $-\frac{1}{2}$ , $\frac{1}{7}$ , $-\frac{1}{3}$ , $\frac{1}{8}$ , $-\frac{1}{4}$ , $\frac{1}{9}$ , $-\frac{1}{5}$ , $\frac{1}{10}$ , $-\frac{1}{6}$ . I've figured out that for even $n$ , $a_{n} = \frac{(-1)^{(n-1)}}{(n/2)}$ , and that for odd $n$ , $a_{n} = \frac{(-1)^{(n-1)}}{n + \frac{9-n}{2}}$ , but cannot figure out how to write a single formula that describes every $a_n$ .
The sequence with entries $a(n)$ for $n$ even and $b(n)$ for $n$ odd is $$ \frac{1 + (-1)^n}{2} a(n) + \frac{1 - (-1)^n}{2} b(n) $$
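One concrete choice of the two pieces (an assumption read off from the listed terms, not stated in the answer above) is $-2/n$ for even slots and $2/(n+9)$ for odd slots; the parity-combination formula then reproduces all twelve given terms:

```python
from fractions import Fraction

def a(n):
    # parity combination: (1+(-1)^n)/2 picks even slots, (1-(-1)^n)/2 picks odd slots
    even_part = Fraction(-2, n)       # equals -1/(n/2) for even n
    odd_part = Fraction(2, n + 9)     # equals 1/((n+9)/2) for odd n
    sign = (-1) ** n
    return Fraction(1 + sign, 2) * even_part + Fraction(1 - sign, 2) * odd_part

expected = [Fraction(1, 5), Fraction(-1), Fraction(1, 6), Fraction(-1, 2),
            Fraction(1, 7), Fraction(-1, 3), Fraction(1, 8), Fraction(-1, 4),
            Fraction(1, 9), Fraction(-1, 5), Fraction(1, 10), Fraction(-1, 6)]
assert [a(n) for n in range(1, 13)] == expected
```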
|sequences-and-series|discrete-mathematics|
0
Finding formula for sequence
Find an explicit formula for a sequence of the form $a_1, a_2, a_3,\dots$ with the initial terms given below: $\frac{1}{5}$ , $-1$ , $\frac{1}{6}$ , $-\frac{1}{2}$ , $\frac{1}{7}$ , $-\frac{1}{3}$ , $\frac{1}{8}$ , $-\frac{1}{4}$ , $\frac{1}{9}$ , $-\frac{1}{5}$ , $\frac{1}{10}$ , $-\frac{1}{6}$ . I've figured out that for even $n$, $a_{n} = \frac{(-1)^{(n-1)}}{(n/2)}$ , and for odd $n$, $a_{n} = \frac{(-1)^{(n-1)}}{n + \frac{9-n}{2}}$ , but cannot see how to write a single formula describing every $a_n$.
If $a_{2n}=b_n$ and $a_{2n+1}=c_n$ then $$a_n =\frac12\left(((-1)^{n}+1)\,b_{\lfloor n/2\rfloor} +(1-(-1)^{n})\,c_{\lfloor n/2\rfloor}\right)$$ because $(-1)^{n}+1$ is $2$ for even $n$ and $0$ for odd $n$, while $1-(-1)^{n}$ is $2$ for odd $n$ and $0$ for even $n$. To check, substituting $2n$ and $2n+1$ for $n$, this gives $\begin{array}\\ a_{2n} &=\frac12\left(((-1)^{2n}+1)\,b_{\lfloor (2n)/2\rfloor} +(1-(-1)^{2n})\,c_{\lfloor (2n)/2\rfloor}\right)\\ &=\frac12(2b_{n})\\ &=b_{n}\\ \text{and}\\ a_{2n+1} &=\frac12\left(((-1)^{2n+1}+1)\,b_{\lfloor (2n+1)/2\rfloor} +(1-(-1)^{2n+1})\,c_{\lfloor (2n+1)/2\rfloor}\right)\\ &=\frac12(2c_{n})\\ &=c_n\\\end{array} $
|sequences-and-series|discrete-mathematics|
0
$\int_{-\infty}^{\infty}\exp\left(-\frac{\beta}{4}\left(y^2+x^2-1\right)^2\right)dy$
I'm interested in the following integral: \begin{equation} P_\beta(x) = \int_{-\infty}^{\infty}\exp\left(-\frac{\beta}{4}\left(y^2+x^2-1\right)^2\right)dy, \end{equation} where $\beta>0$ . For context, this integral comes from the probability density function of the displacement (or velocity) of a Van der Pol-Rayleigh oscillator, as shown by Talmadge 1991 (eqn. 9). This integral is very similar in form to this question which has already been answered on here; however, Did's method is only correct for $a>0$ , $b>0$ . For my specific case, $b<0$ (i.e. $x^2-1<0$ ) forms an important solution case. Through some trial and error with Mathematica/MATLAB, I was able to find the general solution for my case: \begin{equation} P_\beta(x) = \sqrt{|x^2-1|}\exp\left(-\frac{\beta}{4}\left(x^2-1\right)^2\right) F(x^2-1,\beta), \end{equation} where \begin{equation} F(x^2-1,\beta) = \sqrt{2}{K}_{1/4}\left(\frac{\beta}{4}\left(x^2-1\right)^2\right)+2 H\left(-(x^2-1)\right){I}_{1/4}\left(\frac{\beta}{4}\left(x^2-1\right)^2\right), \end{equation} with $H$ the Heaviside step function.
Assume $\beta>0$ . Note that by $(12.5.1)$ \begin{align*} P_\beta (x) &= 2\exp \left( { - \tfrac{\beta }{2}(x^2 - 1)^2 } \right)\int_0^{ + \infty } {\exp \left( { - \tfrac{\beta }{2}y^4 - \beta y^2 (x^2 - 1)} \right){\rm d}y} \\ & = \exp \left( { - \tfrac{\beta }{2}(x^2 - 1)^2 } \right)\frac{1}{{\beta ^{1/4} }}\int_0^{ + \infty } {t^{ - 1/2} \exp \left( { - \tfrac{1}{2}t^4 - \sqrt \beta (x^2 - 1)t} \right){\rm d}t} \\ & = \exp \left( { - \tfrac{\beta }{4}(x^2 - 1)^2 } \right)\frac{{\sqrt \pi }}{{\beta ^{1/4} }}U( 0,\sqrt \beta (x^2 - 1)), \end{align*} where $U$ is one of the parabolic cylinder functions . If $x^2>1$ , then by $(12.7.10)$ this further simpliefies to $$ \exp \left( { - \tfrac{\beta }{4}(x^2 - 1)^2 } \right)\sqrt {\frac{{x^2 - 1}}{2}} K_{1/4} \!\left( {\tfrac{\beta}{4} (x^2 - 1)^2 } \right). $$ If $x^2 , we may write it as $$ \exp \left( { - \tfrac{\beta }{4}(x^2 - 1)^2 } \right)\frac{\pi }{{\beta ^{1/4} }}V(0,\sqrt \beta (1 - x^2 )) $$ by $(12.2.15)$ .
|calculus|probability|integration|bessel-functions|
1
Calculating L-smoothness constant for logistic regression.
I am trying to find the $L$ -smoothness constant of the following function (the logistic regression cost function) in order to run gradient descent with an appropriate stepsize. The function is given as $f(x)=-\frac{1}{m} \sum_{i=1}^m\left(y_i \log \left(s\left(a_i^{\top} x\right)\right)+\left(1-y_i\right) \log \left(1-s\left(a_i^{\top} x\right)\right)\right)+\frac{\gamma}{2}\|x\|^2$ where $a_i \in \mathbb{R}^n, y_i \in\{0,1\}$ , and $s(z)=\frac{1}{1+\exp (-z)}$ is the sigmoid function. The gradient is given as $\nabla f(x)=\frac{1}{m} \sum_{i=1}^m a_i\left(s\left(a_i^{\top} x\right)-y_i\right)+\gamma x $ . My idea was that the smoothness constant $L$ has to be bigger than all the eigenvalues of the Hessian of the given function. This follows from the fact that if $f$ is $L$ -smooth, then $g(x)=\frac{L}{2} x^T x-f(x)$ is a convex function and therefore its Hessian has to be positive semi-definite. The second-order partial derivatives of $f$ are given as $ \frac{\partial^2 f}{\partial x_k \partial x_l}=\frac{1}{m} \sum_{i=1}^m s\left(a_i^{\top} x\right)\left(1-s\left(a_i^{\top} x\right)\right) a_{ik} a_{il}+\gamma \delta_{kl}$ .
Here's my idea: Given the Hessian matrix (in your notation, with $y_i\in\{0,1\}$ ): \begin{equation} \begin{aligned} \nabla^2 f(x) &= \frac{1}{m}\sum_{i=1}^{m}s(a_i^Tx)\left(1-s(a_i^Tx)\right)a_ia_i^T + \gamma I_d. \end{aligned} \end{equation} To show $f(x)$ is $L$ -smooth, we apply Theorem 5.12 in [1][p. 114]: if $||\nabla^2 f(x)||_2\leq L$ for any $x\in\mathbb{R}^d$ , then $f(x)$ is $L$ -smooth. We then provide a value of $L$ . Let $A$ be the matrix with rows $a_i^T$ and let $\lambda_{max}$ be the largest eigenvalue of $A^TA$ : \begin{equation} \begin{aligned} ||\nabla^2 f(x)||_2 &\leq ||\frac{1}{m}\sum_{i=1}^{m}\frac{1}{4}a_ia_i^T + \gamma I_d||_2 & \text{by } s(t)(1-s(t))\leq \frac{1}{4} \,\forall t\in\mathbb{R} \\ &\leq\frac{1}{4m}||A^TA||_2 + \gamma & \text{by the triangle inequality and } \sum_i a_ia_i^T=A^TA\\ &= \frac{\lambda_{max}}{4m} + \gamma & \text{by }||A^TA||_2=\lambda_{max}. \end{aligned} \end{equation} Therefore, $f(x)$ is $L$ -smooth with $L=\frac{\lambda_{max}}{4m}+\gamma$ (and a fortiori with the looser constant $\frac{\lambda_{max}}{4}+\gamma$ ). $\blacksquare$ [1] Beck, A. (2017). First-Order Methods in Optimization. Philadelphia: SIAM.
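A numerical sketch of the bound (with assumed shapes: $A$ is $m\times d$ with rows $a_i^T$, random data, and arbitrary $m$, $d$, $\gamma$), checking that $\|\nabla^2 f(x)\|_2 \le \|A^TA\|_2/(4m)+\gamma$ at many random points:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, gamma = 50, 5, 0.1
A = rng.normal(size=(m, d))          # rows are a_i^T

def hessian(x):
    s = 1.0 / (1.0 + np.exp(-A @ x))
    # (1/m) * sum_i s_i (1 - s_i) a_i a_i^T  +  gamma I
    return (A.T * (s * (1 - s))) @ A / m + gamma * np.eye(d)

# candidate smoothness constant: ||A^T A||_2 / (4m) + gamma
L = np.linalg.norm(A.T @ A, 2) / (4 * m) + gamma

for _ in range(100):
    x = 10 * rng.normal(size=d)
    assert np.linalg.norm(hessian(x), 2) <= L + 1e-12
```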
|lipschitz-functions|hessian-matrix|logistic-regression|
0
Showing that the differential is an immersion
If $f: X \rightarrow Y$ is an immersion of smooth manifolds, then show that $df: TX \rightarrow TY$ is also an immersion. The definition of immersion (when dim $X <$ dim $Y$ ) that I have is that $f: X \rightarrow Y$ is an immersion if $df_{x}: T_xX \rightarrow T_{f(x)}Y$ is injective for every $x$ . So, for my problem I suppose I would have to show that the differential of $df$ is also injective. But how do I go about showing that?
In our case, $df_{x}$ is a linear map from the vector space $T_{x} X$ to $T_{f(x)} Y$ . Recall that the derivative of a linear map is the map itself (one way to think of the derivative of a function $f$ is that $df_{x}$ is the linear map which best approximates $f$ near $x$ ). So, if $df_{x}$ is injective for every $x \in X$ , its derivative will be as well.
|derivatives|manifolds|differential-topology|
0
Two ways to interpret $Q = A\cos\phi + \sqrt{A^2\cos^2\phi}$
I've run into an unexpected problem regarding the following equation: $$Q = A\cos\phi + \sqrt{A^2\cos^2\phi}$$ A is always a positive number. My first inclination is to just simplify the eqn as, $$Q = 2A\cos\phi $$ However, I get two different answers if $\cos\phi$ is negative (i.e. if $\phi$ is equal to 180 degrees, for example). If $\cos\phi = -1$ , from Eqn. 1, I would get the following: $$Q = A(-1) + \sqrt{A^2(-1)^2} = -A + A = 0$$ But from the second equation I would get $$Q = 2A(-1) = -2A$$ Not sure what to do about this. From what I'm doing, I believe Eqn 1 is the way it needs to work, but how do I discount Eqn. 2?
In general, we can write $\sqrt{x^2}=|x|$ ; thus, in this case, $\sqrt{A^2\cos^2{\phi}}=|A\cos{\phi}|.$ Our expression simplifies to: $$ A\cos{\phi}+|A\cos{\phi}|. $$ From the properties of the absolute value function, we know $|x|=x$ if $x\ge 0$ , and $|x|=-x$ if $x<0$ . Thus we can conclude: $A\cos{\phi}+|A\cos{\phi}|=2A\cos{\phi}$ if $\cos{\phi}\ge 0$ , and $A\cos{\phi}+|A\cos{\phi}|=0$ if $\cos{\phi}<0$ .
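A small numeric sketch of the point: since $\sqrt{x^2}=|x|$, the "simplification" $Q=2A\cos\phi$ only agrees with the original expression when $\cos\phi\ge 0$ (the value $A=3$ and the angles are arbitrary):

```python
import math

A = 3.0
for phi in [0.0, math.pi / 3, math.pi / 2, 2 * math.pi / 3, math.pi]:
    c = A * math.cos(phi)
    q = c + math.sqrt(c**2)          # the original expression
    correct = c + abs(c)             # using sqrt(x^2) = |x|
    naive = 2 * c                    # the invalid blanket simplification
    assert math.isclose(q, correct, abs_tol=1e-12)
    # the naive form only matches when cos(phi) >= 0
    assert math.isclose(q, naive, abs_tol=1e-12) == (c >= -1e-12)
```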
|algebra-precalculus|
0
Evaluate using combinatorial argument or otherwise :$\sum_{i=0}^{n-1}\sum_{j=i+1}^{n}\left(j\binom{n}{i}+i\binom{n}{j}\right)$
Evaluate using a combinatorial argument or otherwise $$\sum_{i=0}^{n-1}\sum_{j=i+1}^{n}\left(j\binom{n}{i}+i\binom{n}{j}\right)$$ My Attempt By plugging in the values $i=0,1,2,3$ I could observe that this double summation is nothing but $$\left(1+2+3+...+n\right)\left(\binom{n}{0}+\binom{n}{1}+\binom{n}{2}+...+\binom{n}{n}\right)-\left(1\binom{n}{1}+2\binom{n}{2}+...+n\binom{n}{n}\right)$$ which simplifies to $$\frac{n(n+1)}{2}\cdot2^n-n\cdot2^{n-1}=n^22^{n-1}$$ which is indeed the answer. But I am having trouble doing this summation by a combinatorial argument. $$\sum_{i=0}^{n-1}\sum_{j=i+1}^{n}\left(j\binom{n}{i}+i\binom{n}{j}\right)=\sum_{i=0}^{n-1}\sum_{j=i+1}^{n}\left(\binom{n}{i}\binom{j}{1}+\binom{n}{j}\binom{i}{1}\right)$$ That is as far as I could go.
More generally, note that $$\sum_{i,j\,:\,i<j} \left(a_{ij}+a_{ji}\right) = \sum_{i,j\,:\,i\not= j} a_{ij} = \sum_{i,j} a_{ij} - \sum_i a_{ii}.$$ Taking $a_{ij}=j\binom{n}{i}$ yields \begin{align} \sum_{i,j\,:\,i\not= j} j\binom{n}{i} &= \sum_{i,j} j\binom{n}{i} - \sum_{i} i\binom{n}{i} \\ &= \left(\sum_{j=0}^{n} j\right)\left(\sum_{i=0}^{n}\binom{n}{i}\right) - \sum_{i=0}^{n} i\binom{n}{i} \\ &= \frac{n(n+1)}{2}\,2^n - n\,2^{n-1} = n^2 2^{n-1}. \end{align} For a combinatorial interpretation of $$\sum_{i,j\,:\,i\not= j} j\binom{n}{i} = n^2 2^{n-1},$$ maybe consider committees with a chair and vice chair (different from the chair) chosen from people $\{0,1,\dots,n\}$ , subject to the restriction that $0$ cannot be the chair. The RHS is clear. For the LHS, $j$ could represent the larger of the chair and vice chair, leaving $j$ choices $\{0,1\dots,j-1\}$ for the smaller of the two, and $i$ could represent the number of members chosen from among $\{1,2,\dots,n\}$ .
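A brute-force check of the identity for small $n$ (a sanity sketch, not a proof):

```python
from math import comb

for n in range(1, 12):
    # the original double sum over 0 <= i < j <= n
    total = sum(j * comb(n, i) + i * comb(n, j)
                for i in range(n) for j in range(i + 1, n + 1))
    assert total == n**2 * 2**(n - 1)
```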
|combinatorics|algebra-precalculus|summation|binomial-coefficients|combinatorial-proofs|
1
Evaluate using combinatorial argument or otherwise :$\sum_{i=0}^{n-1}\sum_{j=i+1}^{n}\left(j\binom{n}{i}+i\binom{n}{j}\right)$
Evaluate using combinatorial argument or otherwise $$\sum_{i=0}^{n-1}\sum_{j=i+1}^{n}\left(j\binom{n}{i}+i\binom{n}{j}\right)$$ My Attempt By plugging in values of $i=0,1,2,3$ I could observe that this double summation is nothing but $$\left(1+2+3+...+n\right)\left(\binom{n}{0}+\binom{n}{1}+\binom{n}{2}+...+\binom{n}{n}\right)-\left(1\binom{n}{1}+2\binom{n}{2}+...+n\binom{n}{n}\right)$$ which simplifies to $$\frac{n(n+1)}{2}.2^n-n.2^{n-1}=n^22^{n-1}$$ which is indeed the answer. But I am having problem to do this summation by combinatorial argument. $$\sum_{i=0}^{n-1}\sum_{j=i+1}^{n}\left(j\binom{n}{i}+i\binom{n}{j}\right)=\sum_{i=0}^{n-1}\sum_{j=i+1}^{n}\left(\binom{n}{i}\binom{j}{1}+\binom{n}{j}\binom{i}{1}\right)$$ That is as far as I could go.
Reindex the double sum as a product of two single sums minus diagonal terms: $$\sum_{0\le i,j\le n,i\ne j}i\binom nj=\sum_{0\le i,j\le n}i\binom nj-\sum_{i=0}^ni\binom ni$$ $$=\left(\sum_{i=0}^ni\right)\left(\sum_{j=0}^n\binom nj\right)-\sum_{i=0}^ni\binom ni$$ $$=\frac{n(n+1)}2\cdot2^n-\sum_{i=0}^ni\binom ni$$ This last sum is well-known to be $n2^{n-1}$ (and can be proved combinatorially), so the answer is $$n(n+1)2^{n-1}-n2^{n-1}=n^22^{n-1}$$
|combinatorics|algebra-precalculus|summation|binomial-coefficients|combinatorial-proofs|
0
Natural isomorphism between tensor product and exterior product
I am requesting help with the following problem. Below all rings are commutative with unit, and for a ring $R$ , we define an $R$ -algebra to be a ring $R'$ with a ring homomorphism $f: R \to R'$ . This induces an $R$ -module structure on $R'$ by defining scalar multiplication as $ab := f(a)b$ . Let $R$ be a ring, $A$ an $R$ -module, and $S$ an $R$ -algebra. For every positive integer $k$ construct an isomorphism $(\bigwedge^{k} A) \otimes_{R} S \cong \bigwedge^{k} (A \otimes_{R} S)$ . I know that I should use the universal properties of the tensor and exterior products, but I am stuck.
The notation below will look as though I'm working with left modules instead of right, but in fact, since we are working with commutative rings, every left module $M$ can be rendered as a right module by the definition $mr = rm$ , and in this way all (left) modules can be rendered as bimodules. The following is a somewhat "high-level" categorical explanation. To prove this, it suffices to observe that the functor $$S \otimes_R -: R\textbf{-Mod} \to S\textbf{-Mod}$$ preserves colimits and is a symmetric monoidal functor. The latter means that the canonical map $$(S \otimes_R M) \otimes_S (S \otimes_R N) \to S \otimes_R (M \otimes_R N)$$ defined by the assignment $$(s \otimes m) \otimes (t \otimes n) \mapsto st \otimes (m \otimes n)$$ is an isomorphism that is suitably compatible with the associativity and symmetry isomorphisms pertaining to the tensor products $\otimes_R$ and $\otimes_S$ (see the nLab, starting here and following links as needed). The inverse is defined by $s \otimes (m \otimes n) \mapsto (s \otimes m) \otimes (1 \otimes n)$ .
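On elementary tensors, the resulting isomorphism can be written down explicitly. The following display is a sketch of the map (well-definedness and $S$-linearity still need to be checked via the universal properties of the tensor and exterior products):

```latex
\[
  \Phi\colon \Big(\textstyle\bigwedge^{k} A\Big)\otimes_{R} S
     \;\longrightarrow\; \textstyle\bigwedge^{k}\big(A\otimes_{R} S\big),
  \qquad
  (a_{1}\wedge\dots\wedge a_{k})\otimes s
     \;\longmapsto\; s\cdot\big((a_{1}\otimes 1)\wedge\dots\wedge(a_{k}\otimes 1)\big).
\]
```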
|abstract-algebra|ring-theory|modules|tensor-products|exterior-algebra|
0
Characteristics and additional conditions for differential equation
I need to solve such a DE: $$(1+x^2)u_x+u_y=0$$ And then I need to draw its characteristics. The second part of the task says: write three additional conditions such that this equation: has exactly one solution; has infinitely many solutions; has no solution. My idea: Let's use the fact that the characteristics satisfy the ODE $\frac{dy}{dx}=\frac{1}{1+x^2}$. So the characteristics are the curves $y=\arctan(x)+C$. Now I claim that the solution is constant on the characteristics, because $\frac{d}{dx}u(x,\arctan(x)+C)=u_x+\frac{1}{1+x^2}u_y=\frac{1}{1+x^2}((1+x^2)u_x+u_y)=0$. So the solution is $f(C)=f(y-\arctan(x))$. But I don't know how to continue. How do I find the conditions?
If the additional condition is $u(0,y)=f(y)$ , then the solution $u(x,y)=f(y-\arctan(x))$ is unique; If the additional condition is $u(x,\arctan(x))=0$ , then $u(x,y)=g(y-\arctan(x))$ is a solution for any differentiable function $g$ such that $g(0)=0$ ; If the additional condition is $u(x,\arctan(x))=h(x)$ , where $h$ is a nonconstant function, then there is no solution, since every solution $g(y-\arctan(x))$ takes the constant value $g(0)$ along the characteristic $y=\arctan(x)$ and so cannot match a nonconstant $h$ .
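The general solution $u(x,y)=f(y-\arctan(x))$ can be verified symbolically (a SymPy sketch, with $f$ left as an arbitrary function):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# general solution: constant along the characteristics y = arctan(x) + C
u = f(y - sp.atan(x))

# plug into the PDE (1 + x^2) u_x + u_y = 0
residual = sp.simplify((1 + x**2) * sp.diff(u, x) + sp.diff(u, y))
assert residual == 0
```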
|ordinary-differential-equations|partial-differential-equations|
0
How does this integral based on the $\Phi$ function equal $x$?
So a stats problem involving normal random variables has a solution involving a step where the following simplification occurs (based on inspection): $$\int_{-\infty}^{ \infty } y\frac{e^{-(y-x)^2/2}}{\sqrt{2\pi}} dy = x$$ But I don't understand how they arrived at that. I know that: $$\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{t^2}{2}}dt$$ and: $$\int_{-\infty}^{\infty}e^{-x^2}dx = \sqrt{\pi}$$ Context: problem solution that involves this simplification
Making the substitution $y\rightarrow y+x$ gives $$ \int_{-\infty}^{\infty}y\frac{e^{-(y-x)^2/2}}{\sqrt{2\pi}}dy=\int_{-\infty}^{\infty}(y+x)\frac{e^{-y^2/2}}{\sqrt{2\pi}}dy = x \int_{-\infty}^{\infty}\frac{e^{-y^2/2}}{\sqrt{2\pi}}dy + \int_{-\infty}^{\infty}y\frac{e^{-y^2/2}}{\sqrt{2\pi}}dy=x+0=x. $$ (The second integral is zero by symmetry: the integrand is an odd function.)
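The substitution argument can be sanity-checked numerically with SciPy (the test values of $x$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def mean_integral(x):
    # integral of y * N(x, 1) density over the whole line
    f = lambda y: y * np.exp(-(y - x)**2 / 2.0) / np.sqrt(2.0 * np.pi)
    val, _ = quad(f, -np.inf, np.inf)
    return val

for x in [-3.0, 0.0, 0.5, 2.0]:
    assert abs(mean_integral(x) - x) < 1e-8
```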
|calculus|probability-distributions|exponential-function|normal-distribution|cumulative-distribution-functions|
0
How does this integral based on the $\Phi$ function equal $x$?
So a stats problem involving normal random variables has a solution involving a step where the following simplification occurs (based on inspection): $$\int_{-\infty}^{ \infty } y\frac{e^{-(y-x)^2/2}}{\sqrt{2\pi}} dy = x$$ But I don't understand how they arrived at that. I know that: $$\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{t^2}{2}}dt$$ and: $$\int_{-\infty}^{\infty}e^{-x^2}dx = \sqrt{\pi}$$ Context: problem solution that involves this simplification
Applying the substitution $u=(y-x)$ yields: $$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}(u+x)e^{-\frac{u^2}{2}}du$$ $$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}ue^{-\frac{u^2}{2}}du+\frac{x}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\frac{u^2}{2}}du$$ By symmetry, the first integral is equal to $0$ , which leaves us with $$\frac{x}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\frac{u^2}{2}}du=\frac{x}{\sqrt{2\pi}}\cdot\sqrt{2\pi}=x$$ $$\therefore\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}ye^{-\frac{(y-x)^2}{2}}dy=x$$ as expected.
|calculus|probability-distributions|exponential-function|normal-distribution|cumulative-distribution-functions|
1