Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Why is it an equivalent definition of a triangulated full subcategory?
We know that an additive full subcategory $\mathcal{S}$ of a triangulated category $\mathcal{T}$ is called a triangulated subcategory if it is closed under isomorphisms and shifts, and if any two objects of a distinguished triangle in $\mathcal{T}$ lie in $\mathcal{S}$, then so does the third. See Neeman's book, section 1.5. Why is this equivalent to the inclusion functor being exact, in the sense that it preserves distinguished triangles? And what happens if we only assume an additive subcategory that may not be full?
Let's contemplate the two possible definitions of triangulated subcategory that OP is hinting at. For simplicity, the translation functor of a triangulated category (that I shall denote as $[1]$ ) is always assumed to be an automorphism. The class of distinguished triangles of a (pre)triangulated category $\mathcal{D}$ is called the (pre)triangulated structure of $\mathcal{D}$ . Definition 1. Let $\mathcal{D}$ be a (pre)triangulated category. Let $\mathcal{D}'\subset\mathcal{D}$ be an additive subcategory such that $\mathcal{D}'[1]=\mathcal{D}'$ . $\mathcal{D}'$ is a Neeman (pre)triangulated subcategory of $\mathcal{D}$ if it is a full replete subcategory satisfying that for every distinguished triangle $X\to Y\to Z\to X[1]$ in $\mathcal{D}$ , if $X,Y$ are in $\mathcal{D}'$ , then $Z$ too (by TR2, this is the same as saying that if two out of three among $X,Y,Z$ are in $\mathcal{D}'$ , then so is the third). $\mathcal{D}'$ is an SP (pre)triangulated subcategory of $\mathcal{D}$ if $\ma
|category-theory|triangulated-categories|
0
Proof using the Lambert W function that 1 = 0 - What went wrong?
All values that satisfy $x^2=2^x$ would satisfy $\ln(2)x^3 = x\ln(2)e^{x\ln(2)}$ , and would therefore satisfy the relationship $W(\ln(2)x^3) = x\ln(2)$ . The problem is that when I graph these functions, there are four real solutions to this relation: $x \approx -0.77$ , and $x=4, 2, 0$ . The first three are solutions to $2^x=x^2$ , but $0$ is not. So what happened?
$$ x^2 =2^x $$ $$ 2\ln|x|=x\ln(2) $$ Note that $x \neq 0$, since $\lim_{x\to0}\ln|x| = -\infty$. So you accidentally introduced an extraneous solution. It is much like the classic fake proof: $$ a =b $$ $$a^2 = ab$$ $$a^2-b^2 = ab - b^2$$ $$ (a-b)(a+b) = b(a-b)$$ $$a+b = b$$ But $b=a$, so $$2b = b$$ $$2=1$$ (for any $a = b \neq 0$). The mistake is that we can't divide by $(a-b)$, since $a-b = 0$.
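To see concretely which candidate values survive, one can test each against both the derived $W$-relation and the original equation. A sketch of mine (the helper `lambert_w` is a hand-rolled Newton iteration for the principal branch, not a library call):

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch of w * exp(w) = z via Newton's method (assumes z >= -1/e)."""
    w = 0.0 if z < 1 else math.log(z)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

ln2 = math.log(2)
for x in (4.0, 2.0, 0.0):
    # all of these satisfy W(ln(2) * x^3) = x * ln(2) ...
    assert math.isclose(lambert_w(ln2 * x**3), x * ln2, abs_tol=1e-9)

# ... but x = 0 is extraneous: it fails the original equation 2^x = x^2
assert 2.0**0.0 != 0.0**2
```

The negative root $x\approx-0.77$ also satisfies the relation (there $\ln(2)x^3 > -1/e$, so the principal branch still applies); only $x=0$ is an artifact of taking logarithms.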
|algebra-precalculus|exponential-function|lambert-w|
0
I need to verify if I set up the following integral for $u(\textbf{x})= \int\int f(\textbf{x}_0)G(\textbf{x},\textbf{x}_0) dA_0$
I have a 2-D Poisson equation with homogeneous boundary conditions, and I used an eigenfunction expansion along $x$: a Fourier sine series expands the solution as $u(x,y) = \sum_{n=1}^\infty a_n(y)\sin(\frac{n\pi}{L}x)$. Plugging this into our Poisson equation, I get the following Green's function: $$G(\textbf{x},\textbf{x}_0) = \sum_{n=1}^\infty\frac{2\sin(\frac{n\pi x_0}{L})\sin(\frac{n\pi x}{L})}{n\pi\sinh(\frac{n\pi H}{L})} \cases{\sinh(\frac{n\pi(y_0-H)}{L})\sinh(\frac{n\pi y}{L}) & $y<y_0$ \\ \sinh(\frac{n\pi(y-H)}{L})\sinh(\frac{n\pi y_0}{L}) & $y>y_0$}$$ And I know that, using the Green's function, the solution is of the form: $$\tag{1}\boxed{u(\textbf{x}) = \int f(\textbf{x}_0)G(\textbf{x},\textbf{x}_0)dA_0}$$ Expanding out the integral: $$\tag{2}u(x,y) = \int_0^L\int_0^H f(x_0,y_0) G(x,y,x_0,y_0) dy_0 dx_0$$ and plugging in for our Green's function (which is piecewise, so it has a different definition on different parts of the domain of integration), I get: $$\tag{3}u(x,y) = \int_0^L \Bigg[\int_0^y\bigg(\frac{2}{\pi}\sum_{n=1}^\infty \frac{\sin(\frac{n\pi x_0
I figured out what I did wrong: the integrands were in the incorrect places, because I misunderstood the inequalities. When we integrate along $y_0$ from $0$ to $H$, that is to say $0\le y_0 \le H$, with $0<y<H$, then for the point $y=y_0$ on the number line, $y>y_0$ to the left of the point and $y<y_0$ to the right. So, the integral for (3) should have been: $$\tag{3}u(x,y) = \int_0^L \Bigg[\int_0^y\bigg(\frac{2}{\pi}\sum_{n=1}^\infty \frac{\sin(\frac{n\pi x_0}{L})\sin(\frac{n\pi x}{L})}{n \sinh(\frac{n\pi H}{L})}f(x_0,y_0)\sinh(\frac{n\pi y_0}{L})\sinh(\frac{n\pi (y-H)}{L}) \bigg)dy_0 \\ + \int_y^H \bigg(\frac{2}{\pi}\sum_{n=1}^\infty \frac{\sin(\frac{n\pi x_0}{L})\sin(\frac{n\pi x}{L})}{n \sinh(\frac{n\pi H}{L})}f(x_0,y_0)\sinh(\frac{n\pi(y_0-H)}{L})\sinh(\frac{n\pi y}{L}) \bigg)dy_0 \Bigg]dx_0$$ and from there, it can be simplified further to: $$u(x,y) = \frac{2}{\pi}\sum_{n=1}^\infty \frac{\sin(\frac{n\pi x}{L})}{n \sinh(\frac{n\pi H}{L})} \Bigg[\int_0^L \bigg(\sinh(\frac{n\pi (y-H)
|integration|partial-differential-equations|greens-function|
0
How to solve the Fokker-Planck equation using a space-time Laplace transform?
I was wondering how to solve a simple linear Fokker-Planck equation using a space-time Laplace transform on the spatial interval $[0,+\infty)$, $$\frac{\partial f(x,t)}{\partial t}= k_1 \frac{\partial f(x,t)}{\partial x} + k_2 \frac{\partial^2 f(x,t)}{\partial x^2}.$$ The usual method is to take a Laplace transform in time and then solve the spatial differential equation by transforming it into a Sturm-Liouville problem. But I feel that in the special semi-infinite case, i.e., $[0,\infty)$, one can solve it easily using a Laplace transform in both space and time instead of time only. EDIT: The OP didn't specify boundary conditions, making the problem ill-defined. Among the many possible boundary conditions confining the process to the interval $[0,+\infty)$, one of the most commonly used is the reflecting boundary condition, $$\left[k_1 f(x,t)+k_2\frac{\partial}{\partial x}f(x,t)\right]_{x=0}=0,\quad \forall t.$$
This is the farthest I could reach; it does not solve the question, but may help others give a definitive answer. Suppose the initial condition for the process is a localized state, $$ f(x,t=0)=\delta (x-x_0)$$ with $x_0>0$. We can compute the temporal Laplace transform: $$ \tilde{f} (x,s)=\int _0^{+\infty} dt \, f(x,t)\,e^{-st}, $$ and $$ -\delta(x-x_0) +s\tilde{f} =k_1 \,\partial_x \tilde{f}+k_2\, \partial^2_x \tilde{f}.$$ Now introducing the spatial Laplace transform together with the reflecting boundary condition, $$ \tilde{g} (z,s)=\int _0^{+\infty} dx \, \tilde{f}(x,s)\,e^{-zx}, $$ and $$ -e^{-z\,x_0} +s\tilde{g} =k_1 \,z \,\tilde{g}+k_2\, z^2 \tilde{g}-k_2\,z\, \tilde{f}(x=0,s).$$ Therefore, $$ \tilde{g}(z,s)=\frac{-e^{-z\,x_0}+k_2\,a\,z}{k_1 z+k_2 z^2-s}.$$ To solve the problem, one should perform the spatio-temporal inverse Laplace transform of $ \tilde{g}(z,s)$, bearing in mind the consistency condition $$ a= \tilde{f}(x=0,s),$$ which does not seem easy to do.
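One standard way to pin down the remaining unknown $a=\tilde f(x=0,s)$ (my own addition, not part of the answer above): as a Laplace transform in $x$, $\tilde g(z,s)$ must be analytic for $\operatorname{Re}z>0$, while the denominator $k_1z+k_2z^2-s$ has exactly one positive root for $s>0$; the numerator must therefore vanish at that root. With the convention $\int_0^\infty \delta(x-x_0)e^{-zx}\,dx=e^{-zx_0}$, the numerator is $-e^{-zx_0}+k_2\,a\,z$, and so:

```latex
z_{+}=\frac{-k_{1}+\sqrt{k_{1}^{2}+4k_{2}s}}{2k_{2}}>0,
\qquad
-e^{-z_{+}x_{0}}+k_{2}\,a\,z_{+}=0
\;\Longrightarrow\;
a=\tilde f(x=0,s)=\frac{e^{-z_{+}x_{0}}}{k_{2}\,z_{+}}.
```

With $a$ fixed this way, only the double inversion of $\tilde g$ remains.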
|ordinary-differential-equations|partial-differential-equations|stochastic-processes|
0
Pdf conditioned on inequality
I'm sorry if this ends up being a duplicate, but I cannot find an answer to my question (or maybe I just can't phrase it right!). Suppose we have two continuous random variables $X$ and $Y$. Suppose I want to write (with an awful abuse of notation, but I hope it gets my idea across) $$ f_X(x)=\int f_X(x|C)dP_C $$ where $P_C$ is the probability of the event $C:=\{X<Y\}$. This feels very wrong! I think my intuition is coming from $P(A|B) = P(A,B)/P(B)$, but I don't think it makes sense to write $f_X(x|C) = f_{X,C}(x,C)/f(C)$, since $C$ doesn't have a 'density' as such. Does it make more sense to condition on the event $C$, or would it be better to do something like this: let $Z = X-Y$, $$ f_X(x)=\int_0^\infty f_X(x|Z=z)f_Z(z)dz $$ But in this case I'm not sure whether the integration variable or limits are right. Any help would be greatly appreciated (as well as suggestions for better notation in the first part!!). Thanks.
Let $X$ be any random variable. Fix $n$ as a positive integer and let $\{B_i\}_{i=1}^n$ be a partition of the sample space into $n$ disjoint events, so $B_i\cap B_j=\phi$ for $i\neq j$ , and $\cup_{i=1}^n B_i=\Omega$ . Then for each $x \in \mathbb{R}$ we have by the law of total probability \begin{align*} F_X(x) &= P[X\leq x]\\ &=\sum_{i=1}^n P[X\leq x|B_i]P[B_i]\\ &= \sum_{i=1}^n F_{X|B_i}(x)P[B_i] \end{align*} which gives us the "law of total CDF" (where we implicitly remove those terms $i$ for which $P[B_i]=0$ ). If we assume PDFs exist that can be obtained by differentiation then by differentiating both sides we obtain the "law of total PDF" $$f_X(x) = \sum_{i=1}^n f_{X|B_i}(x)P[B_i]$$ For your case we can use $n=2$ , $B_1=C, B_2=C^c$ to obtain $$\boxed{f_X(x) = f_{X|C}(x)P[C]+f_{X|C^c}(x)P[C^c]}$$ where we simply remove the corresponding term if $P[C]=0$ or $P[C^c]=0$ . Now let $Z$ be a continuous random variable for which marginal PDF $f_Z(z)$ and joint PDF $f_{X,Z}(x,z)$ exist.
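A discrete sanity check of the boxed identity (my own illustration, not from the answer: two independent fair dice and the conditioning event $C=\{X>Y\}$, in exact rational arithmetic):

```python
from fractions import Fraction
from itertools import product

# all outcomes of two independent fair dice (X, Y), each with probability 1/36
outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(1, 36)

def pmf_X(x):
    return sum((p for a, b in outcomes if a == x), Fraction(0))

def pmf_X_given(x, event):
    pev = sum((p for a, b in outcomes if event(a, b)), Fraction(0))
    return sum((p for a, b in outcomes if a == x and event(a, b)), Fraction(0)) / pev

C = lambda a, b: a > b    # the conditioning event C = {X > Y}
Cc = lambda a, b: a <= b  # its complement
pC = sum((p for a, b in outcomes if C(a, b)), Fraction(0))

# law of total PMF: f_X(x) = f_{X|C}(x) P(C) + f_{X|C^c}(x) P(C^c)
for x in range(1, 7):
    assert pmf_X(x) == pmf_X_given(x, C) * pC + pmf_X_given(x, Cc) * (1 - pC)
```

The identity holds exactly for every face value, mirroring the $n=2$ case of the boxed formula.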
|probability|statistics|conditional-probability|
1
The intuition behind an eigenbasis?
I'm learning linear algebra from scratch and I'm trying to grasp the intuition behind the formula that's used to operate with a diagonalized transformation. As far as I understand, the process begins, roughly speaking, like this: you first get the eigenvectors of a transformation, then change the basis of a vector x with this matrix (you assume it exists, but you don't include it in the computation), then apply the original transformation, then apply the inverse of the change of basis, and then you've got the diagonalized matrix. Something like this:

import numpy as np

# some vector
vec = np.array([1, 1])
# some transformation
t = np.array([[1, 1],
              [0, 2]])
# change of basis matrix (columns are the eigenvectors of t)
cb = np.array([[1, 1],
               [0, 1]])
# inverse of cb
cb_inv = np.linalg.inv(cb)
# eigenbasis form of t (cb_inv * t * cb)
eb = np.dot(cb_inv, np.dot(t, cb))
# array([[1., 0.],
#        [0., 2.]])

This doesn't sound so crazy to me. But what I can't wrap my head around is why the "second step" looks like this (an inve
Conceptually, change of basis is a type of translation in the sense of languages. Let's say we have a linear transformation $T$, and on the standard basis vectors $e_{1}, e_{2}, \dots, e_{n}$ effecting $T$ is multiplication by some $n \times n$ matrix $A$. Let's call multiplying by $A$ "performing $T$ in English." For general reasons, however, there is another language, Diaglish, with coordinate axes spanned by eigenvectors $v_{1}, v_{2}, \dots, v_{n}$. For each index $j$, the value $T(v_{j}) = \lambda_{j} v_{j}$ is a scalar multiple of $v_{j}$ for known eigenvalues $\lambda_{j}$. Since $T$ scales the Diaglish coordinate axes, "performing $T$ in Diaglish" is multiplication by some diagonal matrix $D$. The algebraic work of finding the eigenvalues and corresponding eigenvectors amounts to compiling an English-to-Diaglish dictionary: the matrix $P^{-1}$ whose columns are the standard representations of the eigenvectors. The inverse matrix $(P^{-1})^{-1} = P$ translates
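The dictionary picture can be checked numerically with the matrix from the question (a sketch of mine in NumPy; here the columns of `V` play the role of the answer's $P^{-1}$):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])     # "performing T in English"

evals, V = np.linalg.eig(A)    # columns of V are eigenvectors: the dictionary
D = np.diag(evals)             # "performing T in Diaglish"

# translate into Diaglish, act diagonally, translate back: reproduces A
assert np.allclose(V @ D @ np.linalg.inv(V), A)

# equivalently, conjugating A by the dictionary diagonalizes it
assert np.allclose(np.linalg.inv(V) @ A @ V, D)
```

Note that `np.linalg.eig` normalizes its eigenvector columns, so `V` differs from the question's `cb` by column scaling; the reconstruction identities hold either way.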
|linear-algebra|eigenvalues-eigenvectors|
1
A standard 6-sided fair die is rolled until the last 3 rolls are strictly ascending. What is the probability that the first such roll is a 1, 2, 3, or 4?
A standard $6$-sided fair die is rolled until the last $3$ rolls are strictly ascending. What is the probability that the first such roll is a $1$, $2$, $3$, or $4$? My attempt We can investigate the $3$-roll (when there are exactly $3$ rolls), the $4$-roll, the $5$-roll, ..., the $n$-roll. I did that via SQL, basically $n$ times a Cartesian product of a table with digits $1-6$, with some constraints in place. There are $20$ ascending triplets that can be thrown. Starting with $1$: $123, 124, 125, 126, 134, 135, 136, 145, 146, 156$. I call those triplets $T_1$. There are $10$ $T_1$'s. Likewise $T_2$: $234, 235, 236, 245, 246, 256$. There are $6$ $T_2$'s. $T_3$: $345, 346, 356$. There are $3$ $T_3$'s. $T_4$: $456$. There is only one $T_4$. We can start with the $3$-roll: The probability of a $3$-roll is $\frac{20}{216}=\frac{5}{54}$. Each triplet has the same probability of being rolled. So the probability of each triplet is $\frac{1}{20}$. The $4$-roll. Not any digit can be the
The modelling probability space is the space of all infinite tuples with entries among $1,2,3,4,5,6$. There is a stopping time $\tau$, the first occurrence of a pattern $$a<b<c$$ among the three last rolls $a,b,c$. When such a pattern is seen, the "game stops". We have $\tau<\infty$ with probability one. So we break the infinite tuples at the point when such an ascending pattern occurs, and deal only with finite tuples. Such a tuple will be written as a word. So instead of $1,3,1,3,1,3,1,4,2,6,4,1,2,2,5,3,3,5,6$ we simply write $w=1313131426412253356$, and stop here. This word stands for the event of all tuples starting with the digits in the word, taken exactly in the same order. We need the probabilities $p^{(1)}$ for $a=1$, below the "[ONE WINS]" case, $p^{(2)}$ for $a=2$, below the "[TWO WINS]" case, $p^{(3)}$ for $a=3$, below the "[THREE WINS]" case, $p^{(4)}$ for $a=4$, below the "[FOUR WINS]" case, for the event occurring when, at the moment we stop, the last three rolls are $a,b,c$, and
|probability|dice|
0
Basic Die Game expected payout after re-roll
Alice rolls a fair 6-sided die with the values 1-6 on the sides. She sees the value showing and then is allowed to decide whether or not she wants to roll again. Each re-roll costs $1. Whenever she decides to stop, Alice receives a payout equal to the upface of the last die she rolled. Note that there is no limit on how many times Alice can re-roll. Assuming optimal play by Alice, what is her expected payout in this game?
The following answer considers only strategies of the form: "stop after the $n$-th roll $X_n$ iff $X_n\geq t$", where $t$ is a threshold integer value. Assume $0\leq t\leq 6$. Let $\tau=\inf \{n\geq 1:X_n \geq t\}$, so that $\tau$ has a geometric distribution with parameter $(6-t+1)/6$, and note that the final payoff is $X_\tau -\tau +1$. The expected final payoff is therefore $$E[X_\tau] -\frac{6}{6-t+1}+1.$$ Since $E[X_\tau] = E[X_1|X_1\geq t]= \frac{t+6}{2}$, it only remains to maximize $t\mapsto \frac{t+6}{2}-\frac{6}{6-t+1}+1$ over the set $\{1,\ldots,6\}$. The maximal expected payoff is $4$, and it is reached for the threshold $t=3$ (the threshold $t=4$ attains the same value). Thus one optimal strategy is to stop rolling as soon as the roll is $\geq 3$, and keep rolling otherwise.
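As a cross-check against unrestricted strategies, one can iterate the optimality equation $E=\frac16\sum_{k=1}^{6}\max(k,\,E-1)$: keep a face worth at least the continuation value $E-1$, otherwise pay 1 and reroll. A sketch of mine (function name made up):

```python
def optimal_value(n_sides=6, cost=1.0, iters=200):
    # fixed-point iteration of E = (1/n) * sum_k max(k, E - cost)
    E = 0.0
    for _ in range(iters):
        E = sum(max(face, E - cost) for face in range(1, n_sides + 1)) / n_sides
    return E

assert abs(optimal_value() - 4.0) < 1e-9
```

At the fixed point $E=4$, a roll of $3$ exactly ties the continuation value $E-1=3$, which is why the thresholds $t=3$ and $t=4$ give the same expected payout.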
|probability-theory|expected-value|dice|
1
Elements of $\infty$-cats Corollary 4.1.3
I am currently studying the book Elements of $\infty$-Cats and stumbled across the following corollary (this is Corollary 4.1.3 on page 88 in the book): I am a bit confused about how the induced fibered equivalence $\text{Hom}_A(fb,a) \simeq \text{Hom}_B(b,ua)$ is obtained from the fibered equivalence $(\star)\ \text{Hom}_A(f,A) \simeq \text{Hom}_B(B,u)$. In principle, I know it would be enough to see that the map $(\star)$ plus the comma category $\text{Hom}_A(fb,a)$ induce a cone for the diagram underlying the limit $\text{Hom}_B(b,ua)$ - yet I am struggling to see this somehow. Note: The comma $\infty$-categories above are defined by
In the commutative diagram $$\require{AMScd}\begin{CD} \mathsf{Hom}_A(fb,a) @>>> \mathsf{Hom}_A(f,A)@>{\varphi}>> A^{\mathbf{2}}\\ @VVV @VV{(p_1,p_0)}V @VV{(p_1,p_0)}V\\ X\times Y @>{a\times b}>> A\times B @>{\mathrm{id}_A\times f}>> A\times A \end{CD}$$ both the right square and the outer rectangle are pullback squares, and hence the left square is also a pullback square. Likewise, the square $$\require{AMScd}\begin{CD} \mathsf{Hom}_B(b,ua) @>>> \mathsf{Hom}_B(B,u)\\ @VVV @VV{(p_1,p_0)}V\\ X\times Y @>{a\times b}>> A\times B \end{CD}$$ is a pullback square. But then the equivalence $\mathsf{Hom}_A(f,A)\simeq\mathsf{Hom}_B(B,u)$ over $A\times B$ can be pulled back to an equivalence $\mathsf{Hom}_A(fb,a)\simeq\mathsf{Hom}_B(b,ua)$ over $X\times Y$ (as we are pulling back along an isofibration). You could also phrase this as pullbacks along isofibrations being stable under replacing an object in the cospan by an equivalent object.
|category-theory|homotopy-theory|higher-category-theory|pullback|
1
Nonexistence of conformal map from the punctured unit disk to the annulus
Prove that there is no one-to-one conformal map ($=:f$) from the punctured unit disk $\{z:0<|z|<1\}$ onto the annulus $\{z:1<|z|<2\}$. Proof. Any holomorphic function from the punctured unit disk to the annulus is bounded near $0$, so it can be extended to a function that is holomorphic at $0$. In particular it has a square root, as it is a nonvanishing holomorphic function on a simply connected region. However, there is a holomorphic function from the annulus to itself without a square root (for example the identity function). So the punctured unit disk cannot be conformal to the annulus. Let $\tilde{f}$ be the extension of $f$ to the unit disk $\Bbb D$. I understand the existence of the square root $g$, i.e., $g^2 = \tilde{f}$. But I can't see why this implies the existence of a holomorphic function from the annulus to itself (so that the contradiction makes sense). Can you explain this?
Here is another way to do it. By the open mapping theorem, $\tilde f(\mathbb{D})$ is open, and so $\tilde f(0)$ cannot be on the boundary of the annulus. Say $\tilde f(0)=b$; since $f$ is a biholomorphism, there exists $a\in\mathbb{D}^*$ such that $f(a)=b$, which means $\tilde f(a)=b$. Now let $\Delta_0,\Delta_a$ be open neighborhoods of $0$ and $a$ respectively that are disjoint from each other. $\tilde f(\Delta_0)$, $\tilde f(\Delta_a)$ are both open neighborhoods of $b$ by the open mapping theorem. Let $\Delta_b\subseteq\tilde f(\Delta_0)\cap\tilde f(\Delta_a)$ be an open neighborhood of $b$. Let $\beta\in\Delta_b\setminus\{b\}$. Then there exist $\alpha\in\Delta_a$ such that $\tilde f(\alpha)=\beta$ and $\gamma\in\Delta_0$ such that $\tilde f(\gamma)=\beta$. $\gamma\not=0$ because $\tilde f(0)=b\not=\beta=\tilde f(\gamma)$. So $\gamma\in\mathbb{D}^*$. This contradicts the injectivity of $f$, because $f(\alpha)=\beta=f(\gamma)$.
|complex-analysis|solution-verification|
0
$A, B$ closed subsets of a compact metric space $X$, $A \subsetneq B$ implies $f_n(A) \subsetneq f_n(B)$
Let $(X,d)$ be a compact metric space, and $D=\{x_n: n \in \mathbb{N} \}$ a countable dense subset of $X$. For each $n$, let $f_n:X \rightarrow [0,1]$ be defined by $$f_n(x)=\frac{1}{1+d(x_n,x)}$$ Prove there exists $n \in \mathbb{N}$ such that for all closed subsets $A, B$ of $X$ with $A \subsetneq B$, we have $f_n(A) \subsetneq f_n(B)$. If $f_n(A)=f_n(B)$, there exist $a \in A, b \in B$ such that $d(x_n, a) =d(x_n, b)$; if we prove $b \in B \setminus A$ and $d(a, b) =0$ we are done, but it's not clear to me. Could you give a hint for this, or another way to solve it?
Firstly, the question as stated is wrong, and a counter-example can be constructed as follows. Let $(X,d)=([0,1],d)$, where $d(x,y)=|x-y|$. Let $D=\mathbb{Q}\cap X$. Fix an enumeration $D=\{x_{n}\mid n\in\mathbb{N}\}$ of $D$. Let $A=\{0\}$ and $B=\{0,1\}$. Clearly, $A$ and $B$ are closed, and $A\subsetneq B$. Choose $n$ such that $x_{n}=\frac{1}{2}$. Note that $f_{n}(A)=f_{n}(B)=\{\frac{2}{3}\}$. It seems that the question should be rephrased as: For any closed subsets $A,B\subseteq X$ with $A\subsetneq B$, there exists $n$ such that $f_{n}(A)\subsetneq f_{n}(B).$ Proof: We consider the case that $A\neq\emptyset$. Choose $b\in B\setminus A$. Since $A$ is closed, $d(b,A):=\inf\{d(b,x)\mid x\in A\}>0$. Let $r=\frac{1}{2}d(b,A)>0$. We now show that there exists $n$ such that $f_{n}(b)\notin f_{n}(A)$, by contradiction. Suppose to the contrary that for each $n$, $f_{n}(b)\in f_{n}(A)$; then there exists $a_{n}\in A$ such that $f_{n}(b)=f_{n}(a_{n})$. Therefore, $d(x_{n},b)=d(x_{n},a_{n
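The counter-example can be verified in exact arithmetic (a quick sketch of mine):

```python
from fractions import Fraction

def f(x_n, x):
    # f_n(x) = 1 / (1 + d(x_n, x)) on ([0,1], |.|), computed with exact rationals
    return 1 / (1 + abs(x_n - x))

x_n = Fraction(1, 2)               # choose the index n with x_n = 1/2
A = {Fraction(0)}
B = {Fraction(0), Fraction(1)}

fA = {f(x_n, a) for a in A}
fB = {f(x_n, b) for b in B}
assert fA == fB == {Fraction(2, 3)}  # images coincide although A is a proper subset of B
```

Both endpoints are at distance $\frac12$ from $x_n$, so both map to $\frac{2}{3}$, exactly as claimed.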
|general-topology|metric-spaces|
0
A standard 6-sided fair die is rolled until the last 3 rolls are strictly ascending. What is the probability that the first such roll is a 1, 2, 3, or 4?
A standard $6$-sided fair die is rolled until the last $3$ rolls are strictly ascending. What is the probability that the first such roll is a $1$, $2$, $3$, or $4$? My attempt We can investigate the $3$-roll (when there are exactly $3$ rolls), the $4$-roll, the $5$-roll, ..., the $n$-roll. I did that via SQL, basically $n$ times a Cartesian product of a table with digits $1-6$, with some constraints in place. There are $20$ ascending triplets that can be thrown. Starting with $1$: $123, 124, 125, 126, 134, 135, 136, 145, 146, 156$. I call those triplets $T_1$. There are $10$ $T_1$'s. Likewise $T_2$: $234, 235, 236, 245, 246, 256$. There are $6$ $T_2$'s. $T_3$: $345, 346, 356$. There are $3$ $T_3$'s. $T_4$: $456$. There is only one $T_4$. We can start with the $3$-roll: The probability of a $3$-roll is $\frac{20}{216}=\frac{5}{54}$. Each triplet has the same probability of being rolled. So the probability of each triplet is $\frac{1}{20}$. The $4$-roll. Not any digit can be the
This can be modelled using a Markov chain . We need $15$ transient states: One initial state in which we have nothing (either because we just started or because the last roll didn’t leave enough room for the remaining ascent), four states in which we obtained $1$ , $2$ , $3$ and $4$ , respectively, and ten states in which we obtained $12$ , $13$ , $14$ , $15$ , $23$ , $24$ , $25$ , $34$ , $35$ and $45$ , respectively. There are also $4$ absorbing states in which we’ve obtained an ascending triple, but I’ll treat them separately. The transition matrix is $$ T=\frac16\,\pmatrix{ 2&1&1&1&1&0&0&0&0&0&0&0&0&0&0\\ 1&1&0&0&0&1&1&1&1&0&0&0&0&0&0\\ 1&1&1&0&0&0&0&0&0&1&1&1&0&0&0\\ 1&1&1&1&0&0&0&0&0&0&0&0&1&1&0\\ 1&1&1&1&1&0&0&0&0&0&0&0&0&0&1\\ 0&1&1&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&1&1&1&0&0&0&0&0&0&0&0&0&0&0\\ 0&1&1&1&1&0&0&0&0&0&0&0&0&0&0\\ 1&1&1&1&1&0&0&0&0&0&0&0&0&0&0\\ 0&1&1&1&0&0&0&0&0&0&0&0&0&0&0\\ 0&1&1&1&1&0&0&0&0&0&0&0&0&0&0\\ 1&1&1&1&1&0&0&0&0&0&0&0&0&0&0\\ 0&1&1&1&1&0&0&0&0&0&0&0&0&0&0\\
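The absorption probabilities of such a chain can be computed exactly with a little code. The sketch below is my own (the state encoding and function names are made up, not from the answer): a state is the current "usable" strictly ascending suffix, `()` for nothing, `(a,)` for a single viable start, `(a, b)` for a viable pair, and the linear system $(I-T)x=r$ is solved in exact rational arithmetic.

```python
from fractions import Fraction

def step(state, d):
    """Advance the ascending-suffix state by one roll d; return ('WIN', a) on completion."""
    if state == ():
        return (d,) if d <= 4 else ()        # 5 or 6 cannot start a triple
    if len(state) == 1:
        a = state[0]
        if d > a:
            return (a, d) if d <= 5 else ()  # a pair (a, 6) can never complete
        return (d,)                          # d <= a <= 4: new single start
    a, b = state
    if d > b:
        return ('WIN', a)                    # ascending triple completed
    return (d,) if d <= 4 else ()

states = [()] + [(a,) for a in range(1, 5)] + \
         [(a, b) for a in range(1, 5) for b in range(a + 1, 6)]
idx = {s: i for i, s in enumerate(states)}   # 15 transient states, as in the answer
n = len(states)

def win_prob(winner):
    # solve (I - T) x = r, where r[i] = P(absorbed by `winner` in one step from state i)
    A = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    r = [Fraction(0)] * n
    for i, s in enumerate(states):
        for d in range(1, 7):
            t = step(s, d)
            if t == ('WIN', winner):
                r[i] += Fraction(1, 6)
            elif t in idx:
                A[i][idx[t]] -= Fraction(1, 6)
    for c in range(n):                       # exact Gauss-Jordan elimination
        piv = next(k for k in range(c, n) if A[k][c] != 0)
        A[c], A[piv], r[c], r[piv] = A[piv], A[c], r[piv], r[c]
        inv = A[c][c]
        A[c] = [v / inv for v in A[c]]
        r[c] /= inv
        for k in range(n):
            if k != c and A[k][c] != 0:
                fac = A[k][c]
                A[k] = [u - fac * v for u, v in zip(A[k], A[c])]
                r[k] -= fac * r[c]
    return r[idx[()]]                        # start from the empty state

p = {a: win_prob(a) for a in range(1, 5)}
assert sum(p.values()) == 1                  # the chain is absorbed almost surely
```

The four exact fractions sum to $1$, and the $15$ transient states match the answer's state count.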
|probability|dice|
0
What is wrong about this seemingly simple false proof?
It crossed my mind when doing an exercise in calculus. Consider the following inequalities: $1\leq 2\leq 5$, $1\leq 3\leq 5$. Adding the inequalities up is totally fine. But subtracting them, we get: $0\leq 2-3\leq 0$?? I fail to understand what is wrong with this step. If someone could formally explain it, it would be greatly appreciated.
Subtracting is, by definition, adding the negative: $a-b = a+(-b)$. But if 1
|calculus|inequality|fake-proofs|
0
What is wrong about this seemingly simple false proof?
It crossed my mind when doing an exercise in calculus. Consider the following inequalities: $1\leq 2\leq 5$, $1\leq 3\leq 5$. Adding the inequalities up is totally fine. But subtracting them, we get: $0\leq 2-3\leq 0$?? I fail to understand what is wrong with this step. If someone could formally explain it, it would be greatly appreciated.
When dealing with inequalities, adding inequalities that are both true is a valid operation that preserves the truth of the individual inequalities. However, subtraction requires a bit more caution, especially in how it affects the direction and integrity of the inequalities. Let's clarify the situation with your example. You start with two true inequalities: $1 \leq 2 \leq 5$ and $1 \leq 3 \leq 5$. Adding them is straightforward, and the result would indeed preserve the truth of the inequalities, since addition is a direct operation that doesn't invert the direction of the inequalities. However, when you "subtract" the inequalities, what you're effectively trying to do is not directly supported by the rules of inequalities. Let's break it down: from $1 \leq 2$ and $1 \leq 3$, if you subtract the second inequality from the first, you're not actually performing a direct operation on the inequalities themselves. Instead, you might be thinking of subtracting the terms within each inequal
|calculus|inequality|fake-proofs|
1
Find gcd of two multivariate polynomials
Is there a simple way to find the $\gcd$ of $x^2y$ and $xy^2+1$ ? I tried adding multiples of $x^2y$ to the other and vice-versa but I found no easy way to find the gcd. For another example I found $(xy,x^3y+1)=(xy,x^3y+1-x^3y)=1$ by adding $-x^2$ of the first to the second.
You can use Buchberger's algorithm for this. The algorithm finds generating sets of multivariate polynomials for a given ideal, the so-called Gröbner bases, from which questions about the ideal generated by your two polynomials can be answered.
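As a concrete illustration (my own, assuming SymPy is available): for the first pair of polynomials, the gcd is $1$, and a Gröbner basis computation even shows that the ideal they generate is the whole ring.

```python
from sympy import symbols, gcd, groebner

x, y = symbols('x y')
f, g = x**2 * y, x * y**2 + 1

# the multivariate gcd directly:
assert gcd(f, g) == 1

# Buchberger-style computation: the reduced Groebner basis of (f, g) is [1],
# since x = x*g - y*f and then 1 = g - y**2 * x both lie in the ideal
G = groebner([f, g], x, y, order='lex')
assert list(G.exprs) == [1]
```

The hand reductions `x*g - y*f = x` and `g - y**2*x = 1` are exactly the kind of S-polynomial steps Buchberger's algorithm performs.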
|gcd-and-lcm|
0
What is wrong about this seemingly simple false proof?
It came accros my mind when doing an exercise in calculus. Consider the following inequalities: $1\leq 2\leq 5$ $1\leq 3\leq 5$ Adding the equaions up is totally fine. But subtracting them, we get: $0\leq 2-3\leq 0$ ?? I fail to understand what is wrong with this step. If someone could formally explain it, it would be greatly appreciated.
The subtraction is equivalent to multiplying the second line by $-1$ , then adding it to the first line. However, the first operation changes the inequalities in the second line to $$ -1\geq -3 \geq -5, \quad\text{or}\ -5 \leq -3 \leq -1. \tag{1} $$ Adding $(1)$ to $1\leq 2\leq 5$ , you now obtain the valid result $$ -4\leq -1 \leq 4. \tag{2} $$
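In interval terms, the valid subtraction rule is $[a_1,a_2]-[b_1,b_2]=[a_1-b_2,\,a_2-b_1]$ (negate, then add), which a two-line sketch of mine makes concrete:

```python
def sub_intervals(a, b):
    # [a1, a2] - [b1, b2] = [a1 - b2, a2 - b1]: negate b, then add
    return (a[0] - b[1], a[1] - b[0])

# the example from the question: 2 is in [1,5] and 3 is in [1,5], so 2 - 3 lies in [-4,4]
assert sub_intervals((1, 5), (1, 5)) == (-4, 4)
```

This reproduces the valid result $(2)$ above: the difference $2-3=-1$ indeed lies in $[-4,4]$, not in $[0,0]$.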
|calculus|inequality|fake-proofs|
0
four-digit number is equal to the product of the sum of its digits multiplied by the square of the sum of the squares of its digits
I'm trying to find a four-digit number that is equal to the product of (the sum of its digits) multiplied by (the square of the sum of the squares of its digits). I've tried running all combinations in Python and found two solutions (2023 and 2400). However, my maths teacher gave it to me and said there was a way to solve it analytically. $\sum_{i=0}^{3}10^{3-i}a_i = (\sum_{i=0}^{3} a_i) \times \left(\sum_{i=0}^{3} a_i^2\right)^2$ The only thing I found is that no $a_i$ for $i \in \{0, 1, 2, 3\}$ can be greater than or equal to six: otherwise the right-hand side would be at least $6\cdot(6^2)^2 = 6^5 = 7776$, so $a_0$ would have to be at least $7$, and then the right-hand side would be at least $7^5 = 16807 > 10000$.
I’d look for the cases where the sum of digits times the square of the sum of digit squares is not too large. Let the sum of digits be $s$ , the sum of digit squares $S$ , and $T=s \cdot S^2$ . The sum of digit squares is smallest if the digits are as close to $\frac{s}{4}$ as possible. $s=13$ , then $S \geq3^2+3^2+3^2+4^2 = 43$ , and $T \geq 24037$ . $s=12$ makes $S \geq 36$ and $T \geq 15552$ . $s=11$ makes $S \geq 31$ and $T \geq 10571$ . $s=10$ makes $S \geq 26$ and $T \geq 6760$ . Because the digit sum of $T$ is only $10$ , the first digit of $T$ is at least $7$ , which makes $S \geq 7^2+1^2+1^2+1^2=52$ . However, this makes $T$ too large, and so we can disregard this case. $s=9$ makes $S \geq 21$ and $T \geq 3969$ . Similarly, the first digit of $T$ is at least $4$ , making $S \geq 25$ and $T \geq 5625$ . Repeating this logic, we get the first digit of $T$ is at least $6$ , making $S \geq 39$ and $T \geq 10000$ . Therefore, $s \leq 8$ . Additional Note: If we look at larger numbe
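The brute force mentioned in the question is short enough to state explicitly; a sketch of mine confirming the two four-digit solutions:

```python
solutions = []
for n in range(1000, 10000):
    digits = [int(c) for c in str(n)]
    s = sum(digits)                    # digit sum
    S = sum(d * d for d in digits)     # sum of squared digits
    if n == s * S * S:                 # n = s * S^2
        solutions.append(n)

assert solutions == [2023, 2400]       # 2023 = 7 * 17^2, 2400 = 6 * 20^2
```

Both solutions are consistent with the bound $s \leq 8$ derived above ($s=7$ and $s=6$ respectively).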
|elementary-number-theory|discrete-mathematics|
0
Homeomorphism between the first octant of 2-sphere $S^2$ and the ball $B^2$
It would be great if someone could help me with the proof that $B$ is homeomorphic to the ball $B^2$, where $$S^2=\{(x_1,x_2,x_3):x_1^2+x_2^2+x_3^2=1\}$$ $$B=S^2 \cap \{(x_1,x_2,x_3):x_1 \geq 0,x_2 \geq 0,x_3 \geq 0\} $$ and $$B^2=\{(x_1,x_2):x_1^2+x_2^2 \leq 1\}$$ My attempt is to project $B$ onto the intersection of the first quadrant of ${\bf R}^2$ and $B^2$ by the function $$p(x_1,x_2,x_3)=(x_1,x_2)$$ Since the projection map $p$ is continuous and bijective, and the inverse function $p^{-1}(x_1,x_2)=(x_1,x_2,\sqrt{1-x_1^2-x_2^2})$ is continuous, $p$ is a homeomorphism between $B$ and the intersection of the first quadrant of ${\bf R}^2$ and $B^2$. I think if we could somehow show that the intersection of the first quadrant of ${\bf R}^2$ and $B^2$ is homeomorphic to $B^2$, then the proof is complete, but I don't know how... It would be great if someone could help me figure it out.
You know that $B$ is homeomorphic to $B' =$ intersection of the first quadrant of $\mathbb R^2$ and $B^2$ . In my answer to Homeomorphism between the unit disc and the unit square I constructed a homeomorphism $f : B^2 \to Q = [-1,1]^2$ . It is easy to see that $f(B') = [0,1]^2$ , i.e. we have $B' \approx [0,1]^2$ . The square $[0,1]^2$ is clearly homeomorphic to $Q$ and thus homeomorphic to $B^2$ .
|general-topology|algebraic-topology|
1
What is the shortest path that visits every node and edge in the dodecahedral graph?
The dodecahedral graph is the Platonic graph corresponding to the connectivity of the vertices of a dodecahedron, with 20 nodes and 30 edges. Assuming all edges have a length (weight) of 1, I want to find the shortest path that visits every node at least once, traverses every edge at least once (regardless of direction), never traverses the same edge twice in a row (it must visit other edges before returning), and starts and ends at the same node. There are thirty edges, so covering all of them requires a path length of at least 30. I know that because each node is connected to three edges, at least one of those edges must be traversed more than once to cover all three. However, I don't know how many edges need to be double-counted, or if any need to be triple-counted (or more). Is there a good way to figure this out besides brute-force checking all possible paths?
As an initial observation, each vertex will need to be visited at least twice (at most two edges are used on the first visit, so a second visit is necessary to use the third edge), so a clear lower bound is that the path needs at least 40 steps. As a second observation, if you add ten more edges such that each vertex is involved in exactly one of the newly added edges, then the graph becomes 4-regular and thus Eulerian (it has a circuit which uses each edge exactly once). As a final observation, if we could find a perfect cover (a perfect matching), those ten edges to be added could simply duplicate the edges used in the perfect cover. Indeed, trying to find a perfect cover in a greedy fashion is not difficult. As all vertices in the graph with duplicated edges have even degree, there exists an Eulerian circuit here that uses each edge exactly once and is therefore of length $40$. This circuit corresponds to a circuit in the original graph where we simply reuse edges where we would otherwise have traveled along
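The construction in the final observation can be made fully explicit: build the graph, find a perfect matching, duplicate those ten edges, and run Hierholzer's algorithm. The sketch below is my own; I take the dodecahedral graph's LCF notation to be $[10,7,4,-4,-7,10,-4,7,-7,4]^2$ (an assumption worth double-checking), though the length-40 conclusion only needs 3-regularity plus the matching.

```python
# dodecahedral graph from LCF notation [10, 7, 4, -4, -7, 10, -4, 7, -7, 4]^2
N = 20
jumps = [10, 7, 4, -4, -7, 10, -4, 7, -7, 4] * 2
edge_set = {frozenset({i, (i + 1) % N}) for i in range(N)}
edge_set |= {frozenset({i, (i + jumps[i]) % N}) for i in range(N)}
edges = sorted(tuple(sorted(e)) for e in edge_set)
assert len(edges) == 30

def perfect_matching(free, edges):
    """Backtracking search for a perfect matching on the vertex set `free`."""
    if not free:
        return []
    v = min(free)
    for a, b in edges:
        if v in (a, b):
            u = b if a == v else a
            if u in free:
                rest = perfect_matching(free - {v, u}, edges)
                if rest is not None:
                    return [(a, b)] + rest
    return None

M = perfect_matching(frozenset(range(N)), edges)
assert M is not None and len(M) == 10

# duplicate the matching edges: every vertex now has even degree 4
multi = edges + M
adj = {v: [] for v in range(N)}
for eid, (a, b) in enumerate(multi):
    adj[a].append((b, eid))
    adj[b].append((a, eid))

# Hierholzer's algorithm for an Eulerian circuit in the multigraph
used = [False] * len(multi)
stack, circuit = [0], []
while stack:
    v = stack[-1]
    while adj[v] and used[adj[v][-1][1]]:
        adj[v].pop()
    if adj[v]:
        u, eid = adj[v].pop()
        used[eid] = True
        stack.append(u)
    else:
        circuit.append(stack.pop())

assert len(circuit) == len(multi) + 1  # 40 edges, 41 vertices in the closed walk
```

This verifies the length-$40$ circuit; the additional "no immediate edge repeat" condition from the question would need a further argument about how the subtours are spliced.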
|graph-theory|planar-graphs|
1
Are there two contradictory definitions of the limit of a function at a point?
I have found two different definitions of the limit of a function f at a point a in very popular calculus/analysis texts. In one, the function f is asked to be defined on an open interval containing the point a, possibly excluding said point (Stewart, Wade). The other definition, more popular in analysis books, simply asks that the point a be a limit point (also called a cluster point) of the domain of the function (Bartle and Sherbert, Abbott). In both cases the epsilon-delta condition is requested. In the first definition, for the limit to exist it is necessary that x can tend to a from both sides, that is, with values greater and less than a. In the second version, since any extreme of an interval is also a limit point, x is not required to tend to a on both sides. The latter also implies that the one-sided limits are particular cases of the general definition. Furthermore, the theorem which is usually stated as "the limit of f at a point a exists and is equal to L if and only
Yes, these are indeed two different definitions of the limit of a function at a point. In my experience, your observations can be generalized: most introductory textbooks use the first definition (where the function is required to be defined in a punctured neighborhood of the point, and thus in particular one-sided limits must be defined separately), while most advanced books use the second definition (where the domain of the function is less constrained). In and of itself, a definition cannot be correct/incorrect—a definition is just giving a shorthand name to a longer mathematical statement. And both definitions are stated rigorously and can be used self-consistently when developing examples and further theory. So the real question is: why does this difference exist? The idea of limit points/cluster points belongs to topology. Even in the case of subsets of the real line, it requires some time and effort to define limit points and teach students how to understand and work with them.
|calculus|limits|analysis|
1
Show that $\int_0^{\frac\pi 2}\frac{\theta-\cos\theta\sin\theta}{2\sin^3\theta}d\theta=\frac{2C+1}4$
While trying to evaluate $\int_0^1 k^2K(k)dk$ , related to the elliptic integral of the first kind , by switching the order of integration, I reached the trigonometric integral $$\int_0^{\frac\pi 2}\frac{\theta-\cos\theta\sin\theta}{2\sin^3\theta}d\theta$$ which is evaluated to $\frac{2C+1}4$ by Wolfram Alpha. Here, $C$ is the Catalan constant, sometimes denoted by $G$ . This integral is complicated for me. Are there other methods to evaluate the integrals $\int_0^1k^nK(k)dk$ , $n\geq2$ ? Or can someone please evaluate the trigonometric integral above? Thank you.
With $K$ and $E$ as the Complete Elliptic Integral of the First and Second Kind respectively with parameter $k$ . Denote $$K_n=\int_0^1k^nK\ dk$$ $$E_n=\int_0^1k^nE\ dk$$ then one can prove that, $$n^2K_n-(n-1)^2K_{n-2}=1\tag{1}$$ with the initial values of $$K_0=2G,\quad K_1=1$$ One may have the following closed forms by building upon the recurrence, $$\int_{0}^{1}k^{2n+1}K\ dk=\frac{2^{4n}}{\left(2n+1\right)^{2}}\binom{2n}{n}^{-2}\sum_{k=0}^{n}\binom{2k}{k}^2\frac{1}{2^{4k}}$$ $$\int_{0}^{1}k^{2n}K\ dk=\frac{1}{2^{4n}}\binom{2n}{n}^2\left[2G+\frac{1}{4}\sum_{k=1}^n\frac{2^{4k}}{k^2\displaystyle\binom{2k}{k}^2}\right]$$ The proof of $(1)$ is as follows Use IBP, Integrate $kK$ and differentiate $k^{n-1}$ Using the following result $$\int kK\ dk=E-(1-k^2)K$$ $$K_n=1-(n-1)\int_0^1k^{n-2}[E-(1-k^2)K]\ dk\tag{2}$$ Now we need relation between $E_n$ and $K_n$ One can use the Derivative of $E$ and obtain $$[k^nE]'=k^{n-1}[E-K]+nk^{n-1}E$$ Then integrate to obtain $$1+K_n=(n+2)E_n\tag{3}$$ Us
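The recurrence and the initial values lend themselves to a quick numeric sanity check (my own sketch, not part of the answer). Note that mpmath's `ellipk` takes the parameter $m = k^2$, matching the modulus convention used above.

```python
from mpmath import mp, quad, ellipk, catalan, sin, cos, pi

mp.dps = 40
G = catalan

# the trigonometric integral from the question equals (2G + 1)/4
I = quad(lambda t: (t - cos(t) * sin(t)) / (2 * sin(t) ** 3), [0, pi / 2])
assert abs(I - (2 * G + 1) / 4) < 1e-12

def K_int(n):
    """K_n = int_0^1 k^n K(k) dk (mpmath's ellipk takes m = k^2)."""
    return quad(lambda k: k ** n * ellipk(k ** 2), [0, 1])

assert abs(K_int(0) - 2 * G) < 1e-12   # K_0 = 2G
assert abs(K_int(1) - 1) < 1e-12       # K_1 = 1
for n in range(2, 7):                  # recurrence (1)
    assert abs(n ** 2 * K_int(n) - (n - 1) ** 2 * K_int(n - 2) - 1) < 1e-12
```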
|integration|definite-integrals|trigonometric-integrals|elliptic-integrals|catalans-constant|
1
A connected k-regular bipartite graph is 2-connected.
I've been struggling with this exercise; all ideas have been unfruitful, leading to dead ends. It is from Balakrishnan's A Textbook of Graph Theory , in the connectivity chapter: Prove that a connected k-regular bipartite graph is 2-connected. (That is, deletion of one vertex alone is not enough to disconnect the graph). I think the objective is to make use of Whitney's theorem according to which a graph (with at least 3 vertices) is 2-connected iff any two of its vertices are connected by at least two internally disjoint paths. But I'll welcome any ideas or solutions. Thank you!
One could also use induction on $k$ as follows: If $k=2$ and $G$ is connected, then it is a cycle, which is indeed 2-connected. Otherwise, let $k>2$ . Since $G$ is $k$ -regular bipartite, by Hall's Theorem it has a perfect matching $M$ . But then $G-M$ is $(k-1)$ -regular and, by the induction hypothesis, $2$ -connected. Since $G$ is obtained from $G-M$ by adding back the edges of $M$ , and adding edges cannot destroy $2$ -connectedness, $G$ is also $2$ -connected.
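As a sanity check of the statement itself (not of the induction), one can test small $k$-regular bipartite graphs for cut vertices. The circulant family below is a hypothetical construction of my own, and the connectivity test is a plain graph search.

```python
from itertools import product

def bipartite_circulant(n, k):
    """k-regular bipartite graph: left vertex i joined to right (i+j) % n, j = 0..k-1."""
    adj = {('L', i): set() for i in range(n)}
    adj.update({('R', i): set() for i in range(n)})
    for i, j in product(range(n), range(k)):
        adj[('L', i)].add(('R', (i + j) % n))
        adj[('R', (i + j) % n)].add(('L', i))
    return adj

def is_connected(adj, removed=None):
    """Depth-first connectivity test, with one optional vertex deleted."""
    verts = [v for v in adj if v != removed]
    seen, stack = {verts[0]}, [verts[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w != removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(verts)

for n, k in [(5, 2), (7, 3), (9, 4), (11, 5)]:
    B = bipartite_circulant(n, k)
    assert is_connected(B)
    # no single vertex disconnects the graph, i.e. it is 2-connected
    assert all(is_connected(B, removed=v) for v in B)
```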
|graph-theory|
0
Are there two contradictory definitions of the limit of a function at a point?
I have found two different definitions of the limit of a function f at a point a in very popular calculus/analysis texts. In one, the function f is required to be defined on an open interval containing the point a , possibly excluding said point ( Stewart, Wade ). The other definition, more popular in analysis books, simply asks that the point a be a limit point (also called a cluster point) of the domain of the function ( Bartle and Sherbert, Abbott ). In both cases the epsilon-delta condition is required. In the first definition, for the limit to exist it is necessary that x can tend to a from both sides, that is, with values both greater and less than a . In the second version, since an endpoint of an interval is also a limit point of it, x is not required to tend to a on both sides. The latter also implies that the one-sided limits are particular cases of the general definition. Furthermore, the theorem which is usually stated as " There exists the limit of f at a point a and is equal to L if and only
The first definition is specific to functions defined on subsets of $\Bbb R$ . The second definition is more general and applies to functions on any metric space . The definitions agree in all cases where the first definition's domain requirement is met, i.e. $f$ is defined in an open neighborhood of $a$ (possibly excluding $a$ ). The only way they can disagree is when the second definition gives a limit but the first definition refuses to give one because $f$ is "not defined on enough of $\Bbb R$ near $a$ ". In these cases, $f$ can be extended to a larger domain in such a way that the first definition gives the same limit.
|calculus|limits|analysis|
0
long division algorithm
Let's say we are running the long division algorithm ( this long division algorithm ) on two integers $A,B$ and we want to compute $\frac{B}{A}$ . Why are we guaranteed never to have to place a digit greater than or equal to $10$ at the top of the division bracket? Why is this guaranteed to be the case? An ideal explanation will draw on the fact that we use a base 10 number system.
You can actually divide in such a way that you are not guaranteed such a condition. It's kind of like the normal long division algorithm with a twist. The picture below shows how typical long division works. All you do is subtract large groupings of the divisor (in this case 6) over and over until you exhaust your dividend (in this case 7,850) and keep track of how many times you subtract (which you do at the top of the division sign). So normally for 7,850/6, you’d subtract 1000 groups of 6 first, then 300, then 8, and then you’d be done, having only a remainder of 2 left. But other than the training you receive in school, nothing says you HAVE to choose those groups exactly. Now here’s the interesting part: If you scroll down further, you will see the same problem with the same answer, calculated with entirely DIFFERENT groupings. The best part is even if you overshoot and use too many groups, you can still use NEGATIVE groups to get back to the answer. It's a much freer and flashy v
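The grouping idea can be sketched in a few lines; `divide_by_groups` is a hypothetical helper of my own, not something from the answer.

```python
def divide_by_groups(dividend, divisor, groups):
    """Subtract groups[i] * divisor in turn; return the running tally and what is left."""
    quotient, remainder = 0, dividend
    for g in groups:
        quotient += g
        remainder -= g * divisor
    return quotient, remainder

# the usual school grouping for 7850 / 6 ...
assert divide_by_groups(7850, 6, [1000, 300, 8]) == (1308, 2)
# ... and an overshoot repaired with a negative group reaches the same answer
assert divide_by_groups(7850, 6, [1500, -200, 8]) == (1308, 2)
```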
|arithmetic|
0
Is ∞ a limit point of $\mathbb R$ ? If not, how to understand Rudin's definition at the beginning of chapter $4$ (PMA)?
I am starting to read the $4^{th}$ chapter of PMA by Walter Rudin. The chapter is about continuity and it defines $$\lim_{x\to a} f(x)$$ (for a function mapping a metric space $E$ into a metric space $F$ ) saying $a$ is a limit point of $E$ . I don't understand: for a function whose domain is $\mathbb R$ , does it mean that for $\lim_{x\to \infty} f(x)$ we have ∞ being a limit point of $\mathbb R$ ? According to this post , $\mathbb R '=\mathbb R$ , and ∞ is not a limit point of $\mathbb R$ . PS: Here is the (beginning of the) definition:
Rudin introduces the extended real number system $\mathbb R \cup \{-\infty, \infty \}$ in Definition 1.23 on p.11. Let us denote it by $\overline{\mathbb R}$ . In the section entitled "Infinite limits and limits at infinity" (p.97 ff) he explains how to work in the extended real number system: ... we shall now enlarge the scope of Definition 4.1 by reformulating it in terms of neighborhoods. This is made explicit in Definition 4.33 and answers your question what $\lim_{x \to \infty} f(x)$ means. It does not mean that $\infty$ is a limit point in the metric space $\mathbb R$ (this is impossible because $\infty \notin \mathbb R$ ). It has a precise meaning in terms of neighborhoods of $\infty$ in $\overline{\mathbb R}$ . Actually $\infty$ can be regarded as a limit point in $\overline{\mathbb R}$ , although not in the sense of Definition 2.18 (because Rudin does not introduce a metric on $\overline{\mathbb R}$ ).
|general-topology|functions|
0
Relation between the eigenvalues of a general complex matrix and its realification
Let $A=A_R+iA_I$ be an $n\times n$ complex matrix. If we want to solve a linear system involving $A$ without using complex arithmetic, then we often form the following real matrix: \begin{equation} \widehat{A} = \begin{bmatrix}A_R&-A_I\\ A_I&A_R \end{bmatrix}. \end{equation} Furthermore, when $A$ is Hermitian, the eigenpairs of $A$ can also be calculated from the eigenpairs of $\widehat{A}$ . My question is: for a general complex matrix $A$ , are the eigenvalues of $A$ related to the eigenvalues of $\widehat{A}$ ? Numerical experiments suggest that eig( $A$ ) belongs to eig( $\widehat{A}$ ), and that the other $n$ eigenvalues are their conjugates. But how can this be proved, and how can eig( $A$ ) be recovered?
Partial Answer Consider the map $S: \mathbb C^n \to \mathbb R^{2n}$ defined on each coordinate by sending $x_j + i y_j$ to $x_j$ and $y_j$ , separated by $n$ indices. As an example, when $n = 2$ , we have $$ S\pmatrix{3 + 4i \\ 7 - 2i} = \pmatrix{3 \\7 \\4 \\-2}. $$ This map is $\mathbb R$ -linear, but not $\mathbb C$ -linear. For $p$ and $q$ real numbers, and $v$ a vector in $\mathbb C^n$ , we have $$ S(pv) = p S(v) $$ but $$ S(qi v) = q H(S(v)) $$ where $$ H(w) = \pmatrix{0_n & -I_n \\ I_n & O_n}w $$ so that $$ S((p + qi)v) = pS(v) + qH(S(v)) $$ There's an obvious inverse to $S$ -- let's call it $T$ -- as well. If $v$ is a vector in $\mathbb C^n$ , I believe a little experimentation in the $n = 2$ case will convince you that $$ S(Av) = A' S(v). $$ where I'm using prime instead of "hat" because it's easier to type. If experimentation doesn't convince you, writing out everything in terms of $A_R, A_I, v_R,$ and $v_I$ (where these are the real and imaginary parts of the vector $v$ ) s
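The spectrum relation conjectured in the question, eig($\widehat{A}$) = eig($A$) together with its conjugates, can be checked numerically; this is my own sketch, not part of the partial answer above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_hat = np.block([[A.real, -A.imag], [A.imag, A.real]])  # the realification

eig_A = np.linalg.eigvals(A)
eig_hat = np.linalg.eigvals(A_hat)

# every eigenvalue of A, together with its conjugate, appears among eig(A_hat)
for lam in np.concatenate([eig_A, eig_A.conj()]):
    assert np.min(np.abs(eig_hat - lam)) < 1e-8
```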
|numerical-linear-algebra|matrix-decomposition|
0
Elementary proof that, in a topos, every epimorphism is regular?
Mac Lane and Moerdijk have a neat proof on p. 197 that, in a topos $\mathcal{E}$ , every epimorphism $f \colon C \to B$ is a coequalizer. But this depends on the assumption that the slice category $\mathcal{E}/B$ is itself a topos. The Elephant gets there sooner, and has a proof by p. 38 that, in a (pre)topos, every epimorphism is a coequalizer. But again this depends on some moderately fancy earlier moves. So here's the question: is there a significantly more elementary proof of the result which requires less scene-setting, requires only relatively small steps on from the definition of an elementary topos? I had the impression that I'd somewhere, sometime, seen a simpler proof (indeed, maybe even here on math.se??). But I can't locate one: so maybe I'm just having a senior moment!
We will show that in fact, any epimorphism $f : C \to B$ in a topos is effective, i.e. $f$ is the coequalizer of the two projections $C \times_B C \to C$ . Thus, suppose we have a morphism $g : C \to X$ such that $g \circ \pi_1 = g \circ \pi_2$ . Then set $I$ to be the image of $(f, g) : C \to B \times X$ . We claim that the first projection $I \to B$ is an isomorphism; then the required morphism $B \to X$ will be the composition of the inverse of $I \to B$ with the second projection $I \to X$ (and the uniqueness of the map $B \to X$ will be an immediate consequence of $f$ being an epimorphism). To see this, first note that the projection $\bar \pi_1 : I \to B$ is certainly an epimorphism, since $\pi_1 \circ (f, g) = f$ is an epimorphism. Therefore, we reduce to showing that the projection $I \to B$ is a monomorphism. To see this, suppose we have two morphisms $i, j : U \to I$ such that $\bar\pi_1 \circ i = \bar\pi_1 \circ j$ . Then there exists an epimorphism $V \to U$ and two morphis
|category-theory|topos-theory|
0
Is infinity norm submultiplicative? What about its power?
Here is the setting $X$ : Positive semi-definite; Non-negative entries; $||X||_\infty = \lambda$ (largest absolute row sum is $\lambda$ ). $G$ : Diagonal matrix; Non-negative diagonal entries $g_i$ ; All $g_i \leq \tau$ (for some positive $\tau$ ). $XG$ : all of $XG$ 's eigenvalues lie within the unit circle. I want to bound the largest element of $(XG)^2$ . Two approaches: Using submultiplicativity: $||XG||_\infty \leq ||X||_\infty \cdot ||G||_\infty \leq \lambda\tau$ so $||(XG)^2||_\infty = ||XG \cdot XG||_\infty \leq ||XG||_\infty \cdot ||XG||_\infty \leq (\lambda\tau) \cdot (\lambda\tau) = \lambda^2 \tau^2$ Using the matrix definition: $|c_{ij}| \leq \sum_{k=1}^n |x_{ik}| \cdot |x_{kj}| \cdot |g_k| \cdot |g_j| \leq \sum_{k=1}^n \lambda \cdot \lambda \cdot \tau \cdot \tau \leq n \lambda^2 \tau^2$ Obviously, something must be wrong/sloppy. Is $\lambda^2 \tau^2$ the stricter bound? If it is indeed correct and stricter, can we also derive it using the matrix definition? Appreciated!
What about this? Using the matrix definition. Since $G$ is diagonal, $(XG)_{ik} = X_{ik}g_k$ , so the $(i,j)$ th element of $(XG)^2$ is $$((XG)^2)_{ij} = \sum_k X_{ik}\, g_k\, X_{kj}\, g_j.$$ Taking absolute values, using the triangle inequality and $0 \le g_k \le \tau$ , and noting that $X$ has non-negative entries (so $|X_{ik}| = X_{ik}$ ), we get $$|((XG)^2)_{ij}| \le \tau^2 \sum_k X_{ik} X_{kj}.$$ Now each entry of $X$ is bounded by its row sum, so $X_{kj} \le \sum_l X_{kl} \le \lambda$ , and also $\sum_k X_{ik} \le \lambda$ . Substituting these into the inequality above, we get $$|((XG)^2)_{ij}| \le \tau^2 \cdot \lambda \cdot \lambda = \lambda^2\tau^2.$$ So the matrix definition does recover the stricter bound; the extra factor $n$ in your second approach appears only because each $|x_{ik}|$ was bounded by $\lambda$ separately, instead of bounding the whole row sum $\sum_k |x_{ik}|$ by $\lambda$ once.
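A quick randomized test of the $\lambda^2\tau^2$ bound discussed above (my own check; the construction $X = BB^T$ with non-negative $B$ is an assumption made only to sample PSD matrices with non-negative entries, and the spectral condition on $XG$ is not needed for the bound):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(200):
    B = np.abs(rng.standard_normal((5, 5)))
    X = B @ B.T                          # PSD with non-negative entries
    g = rng.uniform(0, 1, 5)
    lam = np.abs(X).sum(axis=1).max()    # ||X||_inf, largest absolute row sum
    tau = g.max()
    M = (X @ np.diag(g)) @ (X @ np.diag(g))
    assert np.abs(M).max() <= lam ** 2 * tau ** 2 + 1e-9
```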
|linear-algebra|matrices|matrix-norms|
0
Summing a binomial series that arose while counting functions
Define $f:A \to A$ where $A$ contains $n$ distinct elements. How many functions exist such that $ \forall x \in A, f^m(x)=x$ ($m<n$ , and $m$ is prime to avoid the mistake pointed out in the comments), where $f^m$ represents the $m^{th}$ composition of $f(x)$ ? The key idea I recognized here was that there are only two ways in which this rule can be followed; assigning elements in the domain and range identically or to create "loops" of size $m$ , i.e., for example, making $a_1$ map to $a_2$ which then maps to $a_3$ and so on until $a_m$ maps to $a_1$ . Now, I began to consider cases where I took $0,1,2...\lfloor \dfrac{n}{m} \rfloor$ loops while assigning the remaining elements identically; When we assign all elements identically, only $1$ such function exists. When one loop is formed, the number of functions becomes $\binom {n}{n-m}\cdot(m-1)!$ (using the idea of cyclic permutations). When two loops are formed, we must first form two groups in $\dfrac{(2m)!}{2!m!^2}$ ways. Now the total number of
The exponential generating function for the number of permutations of $n$ elements is (see Wikipedia ) \begin{eqnarray*} \frac1{1-z} &=& \exp\left(\log\left(\frac1{1-z}\right)\right) \\ &=& \exp\left(\sum_{k\ge1}\frac{z^k}k\right)\;, \end{eqnarray*} where the term $\frac{z^k}k=\frac{(k-1)!z^k}{k!}$ represents the $(k-1)!$ labelled cycles of length $k$ . To count the permutations all of whose cycle lengths divide $m$ , restrict the sum to the divisors of $m$ to obtain the exponential generating function $$ \exp\left(\sum_{k\mid m}\frac{z^k}k\right)\;. $$ For instance, for $m=6$ , this is $$ \exp\left(z+\frac{z^2}2+\frac{z^3}3+\frac{z^6}6\right)=1+z+z^2+z^3+\frac34z^4+\frac{11}{20}z^5+\frac{11}{20}z^6+\frac{57}{140}z^7+\cdots $$ ( Wolfram|Alpha computation ), so for example there are $7!\cdot\frac{57}{140}=2052$ permutations of $7$ elements whose $6$ -th power is the identity. I’d be surprised if you find a closed form for this. Wikipedia gives a single summation for the case w
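The count $2052$ can be confirmed by brute force over all $7! = 5040$ permutations; a small sketch of my own:

```python
from itertools import permutations

def power_is_identity(perm, m):
    """True iff every cycle length of perm divides m, i.e. perm^m = id."""
    for start in range(len(perm)):
        x, length = perm[start], 1
        while x != start:           # walk the cycle through start
            x, length = perm[x], length + 1
        if m % length:
            return False
    return True

# permutations of 7 elements whose 6th power is the identity: 7! * 57/140
count = sum(power_is_identity(p, 6) for p in permutations(range(7)))
assert count == 2052
```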
|combinatorics|functions|permutations|binomial-coefficients|
1
Solve $a+b=\gcd(a^3,b^3),\;b+c=\gcd(b^3,c^3),\;c+a=\gcd(c^3,a^3)$ for positive integers $a,b,c$
Solve $$\begin{cases}a+b=\gcd(a^3,b^3)\\ b+c=\gcd(b^3,c^3)\\ c+a=\gcd(c^3,a^3)\end{cases}$$ for positive integers $a,b,c$ . We can rewrite as: $$\begin{cases}a+b=\gcd(a,b)^3\\ b+c=\gcd(b,c)^3\\ c+a=\gcd(c,a)^3\end{cases}$$ I ran a program that checked $1\le a\le b\le c \le 500$ , and it didn’t find any solutions. @RandomGuy checked all the numbers up to $200000$ and didn’t find any solutions either (see in the comments). Probably there aren’t any. Progress I can show that $\color{red}{\gcd(a,b,c)=1}$ . Indeed, let $d=\gcd(a,b,c)$ , $a=da_1$ , $b=db_1$ , $c=dc_1$ . Note that $\gcd(a,b)=\gcd(da_1,db_1)=d\cdot\gcd(a_1,b_1)$ . Our system then becomes: $$\begin{cases}a_1+b_1=d^2\cdot\gcd(a_1,b_1)^3\\ b_1+c_1=d^2\cdot\gcd(b_1,c_1)^3\\ c_1+a_1=d^2\cdot\gcd(c_1,a_1)^3\end{cases}\tag1$$ Note that $a_1+b_1+ c_1$ and $d$ are coprime. If $p\mid a_1+b_1+ c_1$ and $p\mid d$ then $p\mid d^2\cdot\gcd(a_1,b_1)^3 = a_1+b_1$ . But then also $p\mid (a_1+b_1+ c_1)-(a_1+b_1)=c_1$ . As well as $p\mid a_1$ and
I notice that $\gcd(a,b)$ is much smaller than $a$ and $b$ , so I'll try to find a way to iterate through values of the gcd instead. Let $$u = \gcd\left(a,b\right), v = \gcd\left(b,c\right), w = \gcd\left(a,c\right)$$ then $$ a = a_b u = a_c w$$ $$ b = b_a u = b_c v$$ $$ c = c_a w = c_b v$$ The system can be reduced as $$ \begin{cases} \begin{align} a_b + b_a &= u^2\\ b_c + c_b &= v^2\\ c_a +a_c &= w^2 \end{align} \end{cases} $$ Since $\gcd(a,b,c) = 1$ as shown by Aig, $u,v,w$ are pairwise coprime. This means $(u \mid a_c, b_c)$ , $(v \mid b_a, c_a)$ and $(w \mid a_b, c_b)$ . Notice that since $a_b u = a_c w$ , then $a_b/w = a_c/u$ . Let $x = a_b/w = a_c/u$ , $y = b_c/u = b_a/v$ , $z = c_a/v = c_b/w$ , then the system can be written as follows $$ \begin{cases} \begin{align} xw + yv &= u^2\\ yu + zw &= v^2\\ zv +xu &= w^2 \end{align} \end{cases} $$ Also since $1 = \gcd(a_b,b_a) = \gcd(xw, yv)$ , $x$ and $y$ are coprime; similarly, $x,y,z$ are pairwise coprime. As mentioned by Random guy, at
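For what it is worth, the emptiness of the solution set over a small box, consistent with the searches quoted in the question, can be reproduced in a few lines (the symmetry of the system justifies assuming $a \le b \le c$):

```python
from math import gcd

N = 100
solutions = [(a, b, c)
             for a in range(1, N + 1)
             for b in range(a, N + 1)      # WLOG a <= b <= c by symmetry
             for c in range(b, N + 1)
             if a + b == gcd(a, b) ** 3
             and b + c == gcd(b, c) ** 3
             and c + a == gcd(c, a) ** 3]
assert solutions == []
```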
|elementary-number-theory|systems-of-equations|gcd-and-lcm|
0
Show that $\mathfrak{su}(m,n), \mathfrak{sp}(n,\mathbb R), \mathfrak{so}^*(2n)$ are closed under the conjugate transpose
I’m trying to check that $\mathfrak{su}(m,n), \mathfrak{sp}(n,\mathbb R), \mathfrak{so}^*(2n)$ are closed under the conjugate transpose as described in Anthony Knapp’s “Lie Groups Beyond an Introduction”. On page 60, Knapp asserts that this follows from the definitions of these groups as $$\mathfrak{su}(m,n) = \{X\in\mathfrak{sl}(m+n,\mathbb C) \mid X^*I_{m,n} + I_{m,n} X = 0\}$$ $$\mathfrak{sp}(n,\mathbb R) = \{X\in\mathfrak{gl}(2n,\mathbb R) \mid X^tJ_{n,n} + J_{n,n} X = 0\}$$ $$\mathfrak{so}^*(2n) = \{X\in\mathfrak{su}(n,n) \mid X^tI_{n,n}J_{n,n} + I_{n,n}J_{n,n} X = 0\}$$ and the fact that $I_{m,n}^*=I_{m,n}$ and $J_{n,n}^* = - J_{n,n}$ . I thought that the argument should be simple. However, to show for example that $X^*\in \mathfrak{su}(m,n)$ if $X\in \mathfrak{su}(m,n)$ I need to show that $XI_{m,n} + I_{m,n} X^* = 0$ if $X^*I_{m,n} + I_{m,n} X = 0$ . Unfortunately, this doesn’t follow from simply taking the conjugate transpose of $X^*I_{m,n} + I_{m,n} X = 0$ , which is \begin{a
The proof may turn out to be easier when making a detour through the associated Lie groups, as it is shown below for the first case. Let $X \in \mathfrak{su}(m,n)$ be an element of the Lie algebra. Then, $A(t) := e^{tX} \in SU(m,n)$ defines a curve in the corresponding Lie group, in such a way that $A^*I_{m,n}A = I_{m,n}$ . Now, let's consider $B(t) := e^{tX^*}$ . It satisfies : $$ B^*I_{m,n}B = e^{tX}I_{m,n}e^{tX^*} = AI_{m,n}A^* = AI_{m,n} \cdot I_{m,n}(I_{m,n}A)^{-1} = AI_{m,n}^2A^{-1}I_{m,n}^{-1} = I_{m,n}, $$ since $I_{m,n}^2 = 1$ and $I_{m,n}^{-1} = I_{m,n}$ . In consequence, one concludes that $B \in SU(m,n)$ , hence $X^* = \dot{B}(0) \in \mathfrak{su}(m,n)$ in the end.
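The closure claim can also be checked numerically for, say, $(m,n)=(2,1)$. The parametrization $X = I_{m,n}S$ with $S$ skew-Hermitian (and zero diagonal, to kill the trace) is my own device for sampling elements of $\mathfrak{su}(m,n)$, not something taken from Knapp.

```python
import numpy as np

m, n = 2, 1
I_mn = np.diag([1.0] * m + [-1.0] * n)

def in_su_mn(X):
    """Membership test for su(m, n): X* I_{m,n} + I_{m,n} X = 0 and tr X = 0."""
    return (np.allclose(X.conj().T @ I_mn + I_mn @ X, 0)
            and abs(np.trace(X)) < 1e-12)

rng = np.random.default_rng(2)
B = rng.standard_normal((m + n, m + n)) + 1j * rng.standard_normal((m + n, m + n))
S = B - B.conj().T          # skew-Hermitian
np.fill_diagonal(S, 0)      # zero diagonal, so that tr(I_mn @ S) = 0
X = I_mn @ S                # then X lies in su(m, n), since I_mn X is skew-Hermitian

assert in_su_mn(X)
assert in_su_mn(X.conj().T)  # ... and so does its conjugate transpose
```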
|lie-algebras|
1
Infinite wacky race
Dick Dastardly is taking part in an infinite wacky race. What is infinite about it, you ask? Well, just everything! There are infinitely many racers, every one of which can run infinitely fast, and the finish line is infinitely far away. To be more precise, the race will take place on the natural numbers, starting at zero. At every moment in time, every player can move from its position forward to any natural number. However, Dick has an advantage: the power of cheating! Every racer he overtakes will be snatched by one of his traps and be out of the race for good. What is the largest number of racers $\Delta$ that Dick can be sure to win against? For the sake of clarity, say Dick and the rest of the racers alternate movements, i.e., first every player moves, then Dick moves, then it repeats. So, for instance, if Dick is racing against countably many people, he is sure to win. All he has to do is number the racers and, at the $n$ th moment, overtake the $n$ th racer. If there are continuum many p
It turns out I misunderstood the question, so my answer addresses a different, but related, problem. I'll leave it up anyway, since it might be of interest to some readers. Every non-empty set of cardinals has a least element, but it is not true that every non-empty set of cardinals has a greatest element. Thus it is preferable to ask "what is the least cardinal $\kappa$ such that there exists a set of racers of cardinality $\kappa$ which Dick cannot win against?" rather than "what is the greatest cardinal $\kappa$ such that Dick can win against every set of racers of cardinality $\kappa$ ?" Thus, I am interpreting your question as follows: What is the least cardinal $\kappa$ with the property that there exists a set $S$ of non-decreasing sequences with $|S| = \kappa$ such that for all non-decreasing $d$ , there exists $f\in S$ such that for all $t\in \mathbb{N}$ , $d(t) \leq f(t)$ ? In fact, this cardinal $\kappa$ is the dominating number $\mathfrak{d}$ , so its exact value is indepen
|set-theory|cardinals|forcing|infinite-games|
0
How to approach a Hyperbolic Integral that doesn't appear to be solvable in closed form.
I'm interested in tackling the following integral: $$\int_{-\ln (2+\sqrt 5)}^{\ln (2+\sqrt 5)} \sqrt{4+\sinh^2(x)} dx$$ While I've attempted various techniques, it appears challenging to find a closed-form solution for this integral. I'm beginning to suspect that it might not have one. Do you have any insights into expressing it as an infinite series or in terms of special functions? Any guidance or suggestions on alternative approaches would be greatly appreciated. Thank you for your assistance!
Let's first translate out of hyperbolic language by substituting $y=e^x$ and integrating by parts. $\newcommand{\arsinh}{\operatorname{arsinh}}$ $$\begin{align*} I &= \int_{-\log (2+\sqrt 5)}^{\log (2+\sqrt 5)} \sqrt{4+\sinh^2(x)} \, dx \\ &= \frac12 \int_{-\log(2+\sqrt5)}^{\log(2+\sqrt5)} \sqrt{e^{2x} + e^{-2x} + 14} \, dx \\ &= \frac12 \int_{-2+\sqrt5}^{2+\sqrt5} \frac{\sqrt{y^4 + 14y^2 + 1}}{y^2} \, dy \\ &= \int_{-2+\sqrt5}^{2+\sqrt5} \frac{y^2+7}{\sqrt{y^4+14y^2+1}} \, dy \end{align*}$$ Now consult this answer which demonstrates how to approach the integrand $\dfrac{ax^2+b}{\sqrt{cx^4+dx^2+e}}$ . WolframAlpha produces the following antiderivatives, which I've adjusted to agree with the non-Wolfram conventions for $E,F$ used in the linked answer. $$\begin{align*} \int \frac{dy}{\sqrt{y^4+\alpha y^2+1}} &= -i A\, F\left(-\arsinh^2(Ay) ; \frac1{A^4}\right) +C \\ \int \frac{y^2\,dy}{\sqrt{y^4+\alpha y^2+1}} &= -i A\, \left[E\left(-\arsinh^2(Ay) ; \frac1{A^4}\right) - F\left(-\arsinh^2
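Each rewriting step above preserves the value of the definite integral (in particular, the integration-by-parts boundary terms cancel between the two endpoints), which is easy to confirm numerically; a quick check of my own:

```python
from math import log, sinh, sqrt
from scipy.integrate import quad

L = log(2 + sqrt(5))
# the original hyperbolic integral
I0, _ = quad(lambda x: sqrt(4 + sinh(x) ** 2), -L, L)
# after the substitution y = e^x
I1, _ = quad(lambda y: sqrt(y ** 4 + 14 * y ** 2 + 1) / (2 * y ** 2),
             sqrt(5) - 2, sqrt(5) + 2)
# after integrating by parts
I2, _ = quad(lambda y: (y ** 2 + 7) / sqrt(y ** 4 + 14 * y ** 2 + 1),
             sqrt(5) - 2, sqrt(5) + 2)
assert abs(I0 - I1) < 1e-9 and abs(I0 - I2) < 1e-9
```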
|calculus|integration|hyperbolic-functions|
1
Maximize weights in a weighted average using Lagrange multipliers
For values $\mathbf{a}=(a_1, a_2, ..., a_N)$ with corresponding weights $\mathbf{w}=(w_1, w_2, ..., w_N)$ (assume $\sum_{i=1}^{N} w_i = 1$ and $0 \le w_j \le 1$ for $j \in {\{1,2,...,N\}}$ ), the weighted average is $$ \bar{a} = \sum_{i=1}^{N} w_i a_i $$ How can I solve for the weights $\mathbf{w}$ that maximize $\bar{a}$ using the method of Lagrange multipliers? It is obvious that the solution will give a value 1 to the weight that corresponds to the largest value in $\mathbf{a}$ , and 0 for the remaining weights, e.g. $\mathbf{a}=(1,2,3)$ will give $\mathbf{w}=(0,0,1)$ and $\bar{a}=3$ . However, I am somehow not able to show this using Lagrange Multipliers. What I have tried is to solve the following set of equations ( $j \in {\{1,2,...,N\}}$ ) $$ \dfrac{\partial \bar{a}}{\partial w_j} = \lambda \dfrac{\partial g}{\partial w_j} $$ with the constraint equation $$ g = \sum_{i=1}^{N} w_i = 1 $$ I then calculate $\dfrac{\partial \bar{a}}{\partial w_j} = a_j$ and $\dfrac{\partial g}{\part
You cannot use the Lagrangian method while ignoring the $0 \le w_j \le 1$ constraints. In fact, for optimization problems with inequality constraints, you need to use the generalized version of the Lagrangian method known as the KKT conditions. However, if you want to use the classical Lagrangian method, you need to somehow modify your optimization problem such that it does not have inequality constraints. For instance, letting $w_i = v_{i}^{2}$ , you can solve the following optimization problem: maximize $\sum_{i=1}^{n} v_{i}^{2} a_i$ subject to $\sum_{i=1}^{n} v_{i}^{2} =1.$ After finding all the $v_i$ s in the previous problem, you can retrieve all the $w_i$ s using the relationship between $v_i$ and $w_i$ . P.S. I leave it to you to convince yourself why these two optimization problems are equivalent.
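As a numeric illustration (mine, not the answer's) that the optimum sits at a vertex of the simplex: sampling many feasible weight vectors never beats putting all the weight on the largest $a_i$.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
rng = np.random.default_rng(3)

# sample many weight vectors from the probability simplex
W = rng.dirichlet(np.ones(len(a)), size=10_000)
assert (W @ a).max() <= a.max() + 1e-12       # no feasible point beats max(a)
assert np.isclose(np.array([0.0, 0.0, 1.0]) @ a, a.max())  # the vertex attains it
```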
|lagrange-multiplier|average|
0
Compute $\int \frac{\sin(x)\cos(x)}{x^2+1}dx$
Compute $$\int \frac{\sin(x)\cos(x)}{x^2+1}dx$$ Attempt $u=\sin(x)\Rightarrow du=\cos(x)dx$ $$\int\frac{u}{x^2+1}du$$ But, now I have both $u$ and $x$ in the function. How do I resolve this? Second attempt: using the Maclaurin series for $\sin(x)$ and $\cos(x)$ , $$\sin(x)=x-x^3/3!+x^5/5!...$$ $$\cos(x)=1-x^2/2!+x^4/4!...$$ $$\sin(x)\cos(x)/(x^2+1)=(x-x^3/3!+x^5/5!...)(1-x^2/2!+x^4/4!...)/(x^2+1),$$ we can simplify the numerator and divide each term by $x^2+1$ , but how do we simplify the numerator? How do we simplify this expression in order to integrate this integrand?
I doubt that this can be massaged into a nice expression, but we have; $$\int\frac{\sin(x)\cos(x)}{x^2+1}dx=\frac{1}{2}\int\frac{\sin(2x)}{x^2+1}dx$$ $$=\sum_{n=0}^{\infty}\frac{(-1)^n4^{n}}{(2n+1)!}\int\frac{x^{2n+1}}{x^2+1}dx$$ Applying the substitution $u=x^2+1$ yields; $$=\frac{1}{2}\sum_{n=0}^{\infty}\frac{4^{n}}{(2n+1)!}\int\frac{(1-u)^{n}}{u}du$$ By the binomial theorem we will then have; $$=\frac{1}{2}\sum_{n=0}^{\infty}\frac{4^{n}}{(2n+1)!}\sum_{k=0}^{n}\frac{(-1)^k\cdot n!}{k!\cdot (n-k)!}\int{u^{k-1}}du$$ Our second sum will produce a divergence when $k=0$ if we simply apply the primitive of $u^{k-1}$ . So we must peel the first term of our sum off and evaluate it separately; $$=\frac{1}{2}\sum_{n=0}^{\infty}\frac{4^{n}}{(2n+1)!}\bigg(\int u^{-1}du+\sum_{k=1}^{n}\frac{(-1)^k\cdot n!}{k!\cdot (n-k)!}\int{u^{k-1}}du\bigg)$$ This gives us our final giant mess; $$=\frac{1}{2}\sum_{n=0}^{\infty}\frac{4^{n}}{(2n+1)!}\bigg(\ln(x^2+1)+\sum_{k=1}^{n}\frac{n!\cdot (-(x^2+1))^k}{k\cdot
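The termwise integration above converges quickly, which can be spot-checked numerically over $[0,1]$ (my own sketch; twelve terms are far more than needed at double precision):

```python
from math import cos, factorial, sin
from scipy.integrate import quad

X = 1.0
lhs, _ = quad(lambda x: sin(x) * cos(x) / (x ** 2 + 1), 0, X)
# sum of (-1)^n 4^n / (2n+1)! * int_0^X x^(2n+1)/(x^2+1) dx
rhs = sum((-1) ** n * 4 ** n / factorial(2 * n + 1)
          * quad(lambda x: x ** (2 * n + 1) / (x ** 2 + 1), 0, X)[0]
          for n in range(12))
assert abs(lhs - rhs) < 1e-10
```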
|calculus|integration|
0
Proving the convexity of $f(x,y) = x^2 + y^2 + |xy|$
I am trying to show that the function $f(x,y) = x^2 + y^2 + |xy|$ is convex. I can show that both $x^2 + y^2 + xy$ and $x^2 + y^2 - xy$ are convex because their Hessians are diagonally dominant matrices with strictly positive entries on the diagonal. So this shows that my function is convex within each of the 4 quadrants. But I would like to prove my function is convex everywhere - not just within each quadrant. Any steps on how I can go about this?
Let $g(x,y)=x^2+y^2+xy$ for $x,y\ge 0$ . The Hessian of $g$ is $\begin{pmatrix}2 & 1\\1 & 2\end{pmatrix}$ . As an alternative to the approach of the OP we may compute the eigenvalues $1$ and $3$ , which shows that the Hessian is positive definite and that $g$ is (strictly) convex. Further, notice that $g$ is (strictly) increasing in the first coordinate, and also increasing in the second coordinate. Now, notice that $f(x,y)=g(|x|,|y|)$ . Fix a (non-trivial) convex combination $z=\alpha(x,y)+\beta(x',y')$ . Since the absolute value $|x|$ is convex and $g$ is increasing in each coordinate, we have $$f(z) = g(|\alpha x+\beta x'|,\,|\alpha y+\beta y'|) \le g(\alpha|x|+\beta|x'|,\ \alpha|y|+\beta|y'|).$$ But now, since $g$ is (strictly) convex, this yields $$f(z) \le \alpha\, g(|x|,|y|) + \beta\, g(|x'|,|y'|) = \alpha f(x,y)+\beta f(x',y').$$ This shows that $f$ is (strictly) convex.
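The convexity inequality can be spot-checked at random points (a numeric illustration of mine, not a substitute for the proof):

```python
import numpy as np

def f(p):
    """f(x, y) = x^2 + y^2 + |xy|, with p an indexable pair."""
    return p[0] ** 2 + p[1] ** 2 + abs(p[0] * p[1])

rng = np.random.default_rng(4)
for _ in range(10_000):
    p, q = rng.uniform(-5, 5, 2), rng.uniform(-5, 5, 2)
    t = rng.uniform()
    # the defining inequality of convexity, up to floating-point slack
    assert f(t * p + (1 - t) * q) <= t * f(p) + (1 - t) * f(q) + 1e-9
```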
|convex-analysis|convex-optimization|convex-geometry|
0
Question regarding an implication of the compactness of $S_n(T)$ for countable and complete theory $T$.
Suppose that there is an $n$ such that the set of $L$ -formulas in variables $x_1,\dots,x_n$ up to $T$ -equivalence is infinite. That is, there is no finite set of formulas $\{\varphi_i\}_{i\in I}$ (with $I$ finite) such that for all $L$ -formulas $\psi(x_1, x_2, \ldots, x_n)$ there is an $i \in I$ such that $$T \models \forall x_1, x_2, \ldots, x_n(\psi(x_1, x_2, \ldots, x_n) \leftrightarrow \varphi_i(x_1, x_2, \ldots, x_n)).$$ Then somehow by the topological compactness of $S_n(T)$ , there must be a non-isolated type $p(x)$ . My question is how to conclude that from the hypothesis. I understand that if $S_n(T)$ is compact then for an open cover $\mathcal{C}$ of $S_n(T)$ there is a finite subcover $\mathcal{C}'$ of $S_n(T)$ , that is, if $S_n(T) = \bigcup_{[\varphi] \in \mathcal{C}} [\varphi]$ , then $S_n(T)= \bigcup_{[\varphi] \in \mathcal{C}'} [\varphi]$ , but from this information I do not know how to obtain a non-isolated type $p(x)\in S_n(T)$ . Is there any other property of the co
Let's prove the contrapositive. Suppose every type in $S_n(T)$ is isolated. Then for all $p\in S_n(T)$ , $\{p\}$ is open, and $\bigcup_{p\in S_n(T)}\{p\}$ is an open cover of $S_n(T)$ . By compactness, $S_n(T)$ is finite. Now we can write $S_n(T) = \{p_1,\dots,p_k\}$ , and since each $p_i$ is isolated, we can find a formula $\varphi_i(x_1,\dots,x_n)$ such that $[\varphi_i] = \{p_i\}$ . For each $S\subseteq \{1,\dots,k\}$ , let $\varphi_S$ be $\bigvee_{i\in S} \varphi_i$ . Then $[\varphi_S] = \{p_i\mid i\in S\}$ . Finally, for any formula $\psi(x_1,\dots,x_n)$ , let $S_\psi = \{i\mid p_i\in [\psi]\}\subseteq \{1,\dots,k\}$ . Then $[\psi] = [\varphi_{S_\psi}]$ , so $$T\models \forall x_1\dots x_n (\psi(x_1,\dots,x_n)\leftrightarrow \varphi_{S_\psi}(x_1,\dots,x_n)).$$ Since there are only finitely many formulas of the form $\varphi_S$ , this shows that there are only finitely many formulas up to $T$ -equivalence.
|model-theory|
1
Fifth cyclotomic polynomial over a finite field
Consider the polynomial $g(x)=x^4+x^3+x^2+x+1 \in \mathbb{F}_3[x]$ . It's possible to show that $g$ is irreducible over $\mathbb{F}_3$ . If we let $\alpha$ be a root of $g$ , then $\alpha^4+\alpha^3+\alpha^2+\alpha+1=0$ , and it can be shown from this that $\alpha^5=1$ . There's an interesting result that the relation $g(\alpha)=0$ gives us. If $K$ is the splitting field of $g$ , then $5 \big| p^m-1$ , where $m$ is the degree of the splitting field over $\mathbb{F}_3$ (here $p=3$ ). This comes from Lagrange's theorem, since, if $|K|=p^m$ , then $K^*$ is a group under multiplication of order $p^m-1$ . One can pretty easily check that $|K|\ge p^4$ , since $5$ divides neither $3^2-1$ nor $3^3-1$ . My question is, if we've found the smallest field in which $g$ could split, whether it necessarily does split. That is, given the relation that $\alpha^5=1$ , and that $5\big|3^4-1$ , whether that is sufficient to assure $\mathbb{F}_{3^4}$ is the splitting field of $g$ , or whether the only way to prove $\
There is a calculation-free proof that "the smallest field over which $g$ could split" is indeed the splitting field. The key lemma is that a finite field has a unique extension of cardinality $n$ for any $n$ . Indeed, let $F$ be a finite field of cardinality $q$ and let $K$ be a degree $n$ extension of $F$ . The cardinality of $K^\times$ is $q^n - 1$ , and so every element $K^\times$ satisfies the polynomial $x^{q^n} - x$ . This polynomial is separable (the formal derivative is $-1$ ), so it has $q^n$ distinct roots in $K$ -- but the cardinality of $K$ is $q^n$ , so its roots are exactly the elements of $K$ . That means that this polynomial splits in $K$ and over no subfield of $K$ , and therefore $K$ is the splitting field of this polynomial, which makes it unique up to isomorphism.
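Concretely, the example from the question can be verified with a few lines of arithmetic in $\mathbb{F}_3[x]/(g)$: $\alpha^5=1$, and $\alpha^{3^4}=\alpha$ while $\alpha^{3^2}\neq\alpha$, so $\alpha$ lies in $\mathbb{F}_{81}$ but in neither $\mathbb{F}_9$ nor $\mathbb{F}_3$. The helpers `mulmod`/`powmod` are my own, and irreducibility of $g$ is taken as given from the question.

```python
p = 3
g = [1, 1, 1, 1, 1]   # coefficients of 1 + x + x^2 + x^3 + x^4, low degree first

def mulmod(a, b):
    """Multiply two elements of F_p[x]/(g), given as length-4 coefficient lists."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    d = len(g) - 1
    while len(res) > d:
        c = res.pop()             # x^deg = -(g_0 + ... + g_{d-1} x^{d-1}) x^(deg-d)
        for k in range(d):
            res[len(res) - d + k] = (res[len(res) - d + k] - c * g[k]) % p
    return res + [0] * (d - len(res))

def powmod(a, e):
    """Square-and-multiply exponentiation in F_p[x]/(g)."""
    r = [1, 0, 0, 0]
    while e:
        if e & 1:
            r = mulmod(r, a)
        a = mulmod(a, a)
        e >>= 1
    return r

alpha = [0, 1, 0, 0]
assert powmod(alpha, 5) == [1, 0, 0, 0]   # alpha^5 = 1
assert powmod(alpha, 81) == alpha         # fixed by Frob^4: alpha lies in F_81
assert powmod(alpha, 9) != alpha          # not fixed by Frob^2: alpha not in F_9
```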
|field-theory|finite-fields|irreducible-polynomials|splitting-field|cyclotomic-polynomials|
1
What is the formula for finding the summation of the sequence : $1,2,5,12,26,51,...$ upto $n$ terms?
Q) What is the formula for finding the summation of the sequence $1,2,5,12,26,51,...$ upto $n$ terms ? I know how to find the summation of sequences like $1,2,3,...,$ upto $n$ terms ; $1,2,4,8,...,$ upto $n$ terms, $1,2,4,7,11,...$ , upto $n$ terms. Mainly what I am trying to say is that I know how to find the summations of an Arithmetic Progression, Geometric Progression, Arithmetico-Geometrico Progression. The formula for finding the summation of a finite A.P. is $\frac{n}{2}[2a+(n-1)d]$ ; where $a$ is the $1^{st}$ term of the A.P., $d$ is the Common Difference of the A.P. and $n$ is the no. of terms of the A.P. The formula for finding the summation of a finite G.P. is $a(\frac{r^{n}-1}{r-1})$ when $|r|$ is $> 1$ and $a(\frac{1-r^{n}}{1-r})$ when $|r| < 1$ ; where $a$ is the $1^{st}$ term of the G.P., $r$ is the common ratio of the G.P., and $n$ is the no. of terms of the G.P. Similarly I also know how to find the summation of an Arithmetico-Geometrico Progression. But I don't know how to
The sum is $$S_n=\frac{1}{120}n(n^4+15n^2+104).$$ You just write the successive difference sequences and sum them up level by level. Explicitly: $$\{a_n\}=1,2,5,12,26,51,\cdots$$ $$\{b_n\}=1,3,7,14,25,\cdots$$ $$\{c_n\}=2,4,7,11,\cdots$$ $$\{d_n\}=2,3,4,\cdots=\{n+1\}.$$ Then $$c_n=c_1+\sum_{k=1}^{n-1}d_k=\frac{n^2+n+2}{2};$$ $$b_n=b_1+\sum_{k=1}^{n-1}c_k=\frac{n^3+5n}{6};$$ $$a_n=a_1+\sum_{k=1}^{n-1}b_k=\frac{n^4-2n^3+11n^2-10n+24}{24};$$ then finally $$S_n=\sum_{k=1}^n a_k.$$
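A quick way to sanity-check both closed forms is to rebuild the sequence from its difference table and compare term by term. The helper names (`a_term`, `s_sum`) are mine, not from the answer; this is just a verification sketch:

```python
from fractions import Fraction

def a_term(n):
    # closed form for the n-th term of 1, 2, 5, 12, 26, 51, ...
    return Fraction(n**4 - 2*n**3 + 11*n**2 - 10*n + 24, 24)

def s_sum(n):
    # closed form for the partial sum S_n
    return Fraction(n * (n**4 + 15*n**2 + 104), 120)

# rebuild the sequence from the difference table, starting from d_n = n + 1
N = 12
d = [k + 1 for k in range(1, N + 1)]   # 2, 3, 4, ...
c, b, a = [2], [1], [1]
for k in range(N - 1):
    c.append(c[-1] + d[k])
    b.append(b[-1] + c[k])
    a.append(a[-1] + b[k])

assert a[:6] == [1, 2, 5, 12, 26, 51]
assert all(a_term(n) == a[n - 1] for n in range(1, N + 1))
assert all(s_sum(n) == sum(a[:n]) for n in range(1, N + 1))
```

Exact rational arithmetic avoids any floating-point ambiguity in the check.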
|sequences-and-series|arithmetic-progressions|
0
Compute $\int \frac{\sin(x)\cos(x)}{x^2+1}dx$
Compute $$\int \frac{\sin(x)\cos(x)}{x^2+1}dx$$ Attempt $u=\sin(x)\Rightarrow du=\cos(x)dx$ $$\int\frac{u}{x^2+1}du$$ But, now I have both $u$ and $x$ in the function. How do I resolve? Second attempt: using Maclaurin series for $\sin(x)$ and $\cos(x)$ , $$\sin(x)=x-x^3/3!+x^5/5!-\cdots$$ $$\cos(x)=1-x^2/2!+x^4/4!-\cdots$$ $$\sin(x)\cos(x)/(x^2+1)=(x-x^3/3!+x^5/5!-\cdots)(1-x^2/2!+x^4/4!-\cdots)/(x^2+1),$$ we can simplify the numerator and divide each term by $x^2+1$ , but how do we simplify the numerator? How do we simplify this expression in order to integrate this integrand?
\begin{align*}&\int \frac{\sin(x)\cos(x)}{x^2+1}dx \stackrel{u=2x}{=}\int\frac{\sin u}{u^2+4}du=\frac1{4i}\int\frac{\sin u}{u-2i}du-\frac1{4i}\int\frac{\sin u}{u+2i}du\\ &=\frac1{4i}\int\frac{\sin(v+2i)}{v}dv-\frac1{4i}\int\frac{\sin(w-2i)}{w}dw \\ &=\frac{\cos2i}{4i}\int\frac{\sin v}{v}dv+\frac{\sin2i}{4i}\int\frac{\cos v}v dv-\frac{\cos2i}{4i}\int\frac{\sin w}w dw+\frac{\sin2i}{4i}\int\frac{\cos w}{w}dw \\ &=\frac1{8e^2} (-(e^4+1)i\operatorname{Si}(v)+(e^4-1)\operatorname{Ci}(v)\\ &\quad\;+(e^4+1)i\operatorname{Si}(w)+(e^4-1)\operatorname{Ci}(w) )+C\\ &=\frac1{8e^2} (-(e^4+1)i\operatorname{Si}(2x-2i)+(e^4-1)\operatorname{Ci}(2x-2i)\\ &\quad\;+(e^4+1)i\operatorname{Si}(2x+2i)+(e^4-1)\operatorname{Ci}(2x+2i) )+C\end{align*} which is the same answer given by Maths lover in the comments, since $\operatorname{Si}$ is an odd, $\operatorname{Ci}$ is an even function.
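The closed form above involves $\operatorname{Si}$ and $\operatorname{Ci}$ at complex arguments, which is awkward to check with the standard library alone, but the first step (the substitution $u=2x$ , using $\sin x\cos x=\tfrac12\sin 2x$ ) is easy to verify numerically. The `simpson` helper is my own throwaway quadrature routine; this is only a sanity check of that step:

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, m // 2))
    return s * h / 3

f = lambda x: math.sin(x) * math.cos(x) / (x**2 + 1)
g = lambda u: math.sin(u) / (u**2 + 4)

# check that  int_0^T f dx  =  int_0^{2T} g du  for a few values of T
for T in (0.5, 1.0, 3.0):
    assert abs(simpson(f, 0, T) - simpson(g, 0, 2 * T)) < 1e-9
```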
|calculus|integration|
0
Joint law of Brownian motion maximum and its values at different points
Let $B$ be a standard Brownian motion on $[0, T]$ and let $\tau$ be the (a.s. unique) moment at which it attains its maximum value: $M = \sup_{t \in [0, T]} B(t) = B (\tau)$ . There is a well-known formula for the joint density of $(M, B(T))$ , but I cannot find a joint density of $(M, B(u))$ for $u < T$ . Is it possible? I found a very similar question here , but the given answer is incomplete. Also, I am looking for the distribution of $(M, B(u), B(v))$ with some $0 < u < v < T$ . If the previous question is doable, maybe the joint pdf of this vector may also be found? Finally, I am trying to compute $\mathbb{E} \{ \tau B(u) \}$ for some fixed $u$ . If I knew the joint law from above, I'd be able to find this easily. But maybe there is a chance to calculate this without knowing the joint pdf?
It seems that Thomas Kojar's suggestion does help to solve the problem. Let $0 < u < T$ . Then, since $$ M_T = \max \{ M_u, B_u + \max_{t \in [u, T]} ( B_t - B_u ) \} = \max \{ M_u, B_u + M_{T-u}^* \}, $$ by law of total probability we obtain $$ \mathbb{P} \{ M_T By $$ \mathbb{P} \{ M_u \in d\xi, \ B_u \in dz \} = \frac{2 ( 2\xi - z)}{u^{3/2} \sqrt{2 \pi}} \exp \left( -\frac{( 2 \xi - z )^2}{2u} \right) $$ and $$ \mathbb{P} \{ M_T we obtain $$ \mathbb{P} \{ M_T Simplifying, we obtain $$ \mathbb{P} \{ M_T Okay, now I probably see how to apply this procedure to finding the law of $(M_T, B_u, B_v)$ : $$ \mathbb{P} \{ M_T where $$ \begin{aligned} B_t^* & = B_{t+u} - B_u, \quad t \in [0, v-u], & M_t^* & = \max_{t \in [0, v - u]} B_t^*, \\ B_t^{**} & = B_{t+v} - B_v, \quad t \in [0, T - v], & M_{T-v}^{**} & = \max_{t \in [ 0, T - v ]} B_t^{**}. \end{aligned} $$ Most importantly, random variables with different numbers of stars (0, 1 and 2) are independent of each other. This motivates us to condition
|probability|stochastic-processes|brownian-motion|
0
What is the probability of drawing 5 cards of the same type from a deck of 52 cards?
My textbook has the following problem: There are 52 cards in a deck and 13 cards of each type/color. You are drawing 5 cards. What's the probability of all these 5 cards being the same type? My solution: There is a $\frac{52}{52}$ probability of drawing the first card, then a $\frac{12}{51}$ chance of drawing a second card of the same type as the first one and so on... $$\frac{52}{52} \cdot \frac{12}{51} \cdot \frac{11}{50} \cdot \frac{10}{49} \cdot \frac{9}{48} = \frac{33}{16660}$$ I solved the same problem by calculating the individual probabilities for each card type and then adding all the probabilities together: $$4\cdot(\frac{13}{52} \cdot \frac{12}{51} \cdot \frac{11}{50} \cdot \frac{10}{49} \cdot \frac{9}{48}) = \frac{33}{16660}$$ My textbook says the solution is $(\frac{13}{51} \cdot \frac{12}{50} \cdot \frac{11}{49} \cdot \frac{10}{48})$ without any explanation. Although I don't see how you arrive at this answer. What's wrong with my way of solving the problem and how do you arr
Your solution $4\cdot(\dfrac{13}{52}\cdot\dfrac{12}{51}\cdot\dfrac{11}{50}\cdot\dfrac{10}{49}\cdot\dfrac{9}{48})$ is correct. This does simplify to $\dfrac{33}{16660}$ . Your textbook actually just has the incorrect answer.
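One can confirm this with exact rational arithmetic; the counting-argument variant ( $4\binom{13}{5}/\binom{52}{5}$ ) is an equivalent formulation I've added only for cross-checking:

```python
from fractions import Fraction
from math import comb

# sequential-draw computation from the answer
p_seq = 4 * Fraction(13, 52) * Fraction(12, 51) * Fraction(11, 50) \
          * Fraction(10, 49) * Fraction(9, 48)

# equivalent counting argument: 4 suits, choose 5 of 13 cards, over C(52, 5) hands
p_count = Fraction(4 * comb(13, 5), comb(52, 5))

assert p_seq == p_count == Fraction(33, 16660)

# the textbook's expression evaluates to something else entirely
p_book = Fraction(13, 51) * Fraction(12, 50) * Fraction(11, 49) * Fraction(10, 48)
assert p_book != Fraction(33, 16660)
```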
|probability|combinatorics|card-games|
0
Could I use Green's Theorem here?
I want to solve for the line integral: $$\tag{1}\oint \alpha\nabla \phi_i\cdot \hat{\textbf{n}} ds$$ on the square boundary: $(0\le x \le 1, 0), (1,0 \le y \le 1), (1 \le x \le 0, 1),(0,1\le y \le 0)$ Where the function $\phi_i = \sin(n\pi x)\sin(n\pi y)$ and $\alpha = 1$ on the boundary. I don't know how $\hat{\textbf{n}}$ is oriented with respect to the gradient of $\phi_i$ , so would it be possible to instead use Green's theorem $\oint_C Pdx + Qdy = \int\int_D \Big(\frac{\partial Q}{\partial x} -\frac{\partial P}{\partial y}\Big) dA$ and compute the double integral instead?
The gradient of $$ \phi(x,y) = \sin(n\pi x)\sin(n\pi y) $$ is $$ \nabla\phi=n\pi\pmatrix{\cos(n\pi x)\sin(n\pi y)\\\sin(n\pi x)\cos(n\pi y)}\,. $$ The divergence of that gradient is $$ \Delta\phi(x,y)=-2\,n^2\pi^2\sin(n\pi x)\sin(n\pi y)\,. $$ By Gauss' and Green's theorem \begin{align} \oint_{\partial\,Q}\nabla\phi\cdot\hat{\mathbf{n}}\,ds =\int_Q\Delta\phi\,dA \end{align} where $Q$ is your square and $\hat{\mathbf{n}}$ the outward pointing unit normal at the boundary. It should be very easy to calculate the $dA$ integral over the square. Whether it is zero or not depends on whether $n$ is odd or even. Since the function $\alpha$ is one on the boundary it has no influence on these results.
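The flux/area identity can be spot-checked numerically on the unit square. All names below are mine, and `simpson` is a throwaway quadrature helper; this is a sketch, not part of the answer:

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, m // 2))
    return s * h / 3

def flux(n):
    # outward flux of grad(phi) through the four edges of the unit square
    dphix = lambda x, y: n * math.pi * math.cos(n*math.pi*x) * math.sin(n*math.pi*y)
    dphiy = lambda x, y: n * math.pi * math.sin(n*math.pi*x) * math.cos(n*math.pi*y)
    bottom = simpson(lambda x: -dphiy(x, 0.0), 0, 1)
    top    = simpson(lambda x:  dphiy(x, 1.0), 0, 1)
    left   = simpson(lambda y: -dphix(0.0, y), 0, 1)
    right  = simpson(lambda y:  dphix(1.0, y), 0, 1)
    return bottom + top + left + right

def area(n):
    # the Laplacian integral is separable: -2 n^2 pi^2 (int sin(n pi t) dt)^2
    I = simpson(lambda t: math.sin(n*math.pi*t), 0, 1)
    return -2 * (n*math.pi)**2 * I * I

for n in (1, 2, 3):
    assert abs(flux(n) - area(n)) < 1e-6
assert abs(area(1) + 8) < 1e-6   # n odd: the integral is -8
assert abs(area(2)) < 1e-9       # n even: the integral vanishes
```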
|integration|greens-theorem|
1
The analogue of magazine "Quant"
As you know, there is an amazing math magazine "Quant" , written in Russian, which contains a lot of interesting and challenging contest math problems. Is there any analogue in English? I would be very grateful for any information.
Check out the free Crux Mathematicorum archive at: https://cms.math.ca/publications/crux/
|reference-request|contest-math|book-recommendation|
0
Conditional Kullback Divergence
Let X be a discrete random variable drawn according to probability mass function $p(x)$ over alphabet $X$ , and let random variables $Y_1$ and $Y_2$ take values in alphabet $Y$ with probability $p_1(y)$ and $p_2(y)$ , respectively. The divergence and conditional divergence in this notation are: Can conditioning never reduce or never increase the divergence, or none of the above? I have information that conditioning increases KL divergence, but have no clue how to show it. Also, I am not sure if I can say $p(x)p_1(y|x)= p_1(x,y)$ . Any hints, or tips?
I prefer the notation $D(P_{Y|X}\|Q_{Y|X}|P_X),$ since this makes the law over $X$ explicit. For a pair of laws $P_{XY},Q_{XY},$ the chain rule for KL divergence is $$ D(P_{XY}\|Q_{XY}) = D(P_X\|Q_X) + D(P_{Y|X}\|Q_{Y|X}|P_X).$$ Now, if $P_X = Q_X$ as in the question, then the first term is $0$ . But exchanging the role of $X$ and $Y$ , we can also write $$ D(P_{XY}\|Q_{XY}) = D(P_Y\|Q_Y) + D(P_{X|Y}\|Q_{X|Y}|P_Y), $$ and the final term here must be nonnegative (why?). We can thus infer that $$ D(P_Y\|Q_Y) \le D(P_{XY}\|Q_{XY}) = D(P_{Y|X}\|Q_{Y|X}|P_X).$$
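The resulting inequality (the marginal divergence is at most the conditional divergence when the $X$ -marginals agree) can be spot-checked numerically on random discrete distributions; all names below are mine, and this is only an illustration of the chain-rule argument:

```python
import math
import random

def kl(p, q):
    # Kullback-Leibler divergence of finite distributions given as lists
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

random.seed(0)
def rand_dist(n):
    w = [random.random() + 0.01 for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

nx, ny = 4, 5
px = rand_dist(nx)                        # common law p(x) of X
p1 = [rand_dist(ny) for _ in range(nx)]   # p1(y|x)
p2 = [rand_dist(ny) for _ in range(nx)]   # p2(y|x)

# conditional divergence D(p1 || p2 | px)
d_cond = sum(px[i] * kl(p1[i], p2[i]) for i in range(nx))

# divergence of the Y-marginals
m1 = [sum(px[i] * p1[i][j] for i in range(nx)) for j in range(ny)]
m2 = [sum(px[i] * p2[i][j] for i in range(nx)) for j in range(ny)]
d_marg = kl(m1, m2)

assert d_marg <= d_cond + 1e-12   # conditioning can only increase the divergence
```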
|inequality|information-theory|
0
What is the formula for finding the summation of the sequence : $1,2,5,12,26,51,...$ upto $n$ terms?
Q) What is the formula for finding the summation of the sequence $1,2,5,12,26,51,...$ upto $n$ terms ? I know how to find the summation of sequences like $1,2,3,...,$ upto $n$ terms ; $1,2,4,8,...,$ upto $n$ terms, $1,2,4,7,11,...$ , upto $n$ terms. Mainly what I am trying to say is that I know how to find the summations of an Arithmetic Progression, Geometric Progression, Arithmetico-Geometrico Progression. The formula for finding the summation of a finite A.P. is $\frac{n}{2}[2a+(n-1)d]$ ; where $a$ is the $1^{st}$ term of the A.P., $d$ is the Common Difference of the A.P. and $n$ is the no. of terms of the A.P. The formula for finding the summation of a finite G.P. is $a(\frac{r^{n}-1}{r-1})$ when $|r|$ is $> 1$ and $a(\frac{1-r^{n}}{1-r})$ when $|r| < 1$ ; where $a$ is the $1^{st}$ term of the G.P., $r$ is the common ratio of the G.P., and $n$ is the no. of terms of the G.P. Similarly I also know how to find the summation of an Arithmetico-Geometrico Progression. But I don't know how to
This can efficiently be done using generating functions. Summing the coefficient sequence corresponds to multiplying the generating function by $\frac1{1-x}=\sum_{k=0}^\infty x^k$ . There are some complications because you’re not including the first term as the first difference, but that can be accounted for by some modifications: \begin{eqnarray*} \frac1{1-x}&=&1+x+x^2+x^3+\cdots\;,\\ \frac1{(1-x)^2}&=&1+2x+3x^2+4x^3+\cdots\;,\\ \frac1{(1-x)^2}+1&=&2+2x+3x^2+4x^3+\cdots\;,\\ \left(\frac1{(1-x)^2}+1\right)\frac1{1-x}&=&2+4x+7x^2+11x^3+\cdots\;,\\ \left(\left(\frac1{(1-x)^2}+1\right)\frac x{1-x}+1\right)\frac1{1-x}&=&1+3x+7x^2+14x^3+\cdots\;,\\ \left(\left(\left(\frac1{(1-x)^2}+1\right)\frac x{1-x}+1\right)\frac x{1-x}+1\right)\frac1{1-x}&=&1+2x+5x^2+12x^3+\cdots\;.\\ \left(\left(\left(\frac1{(1-x)^2}+1\right)\frac x{1-x}+1\right)\frac x{1-x}+1\right)\frac x{(1-x)^2}&=&x+3x^2+8x^3+20x^4+\cdots\;.\\ \end{eqnarray*} Multiplying out and performing a partial fraction expansion yields ( Wolf
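The series expansion can be checked with exact rational arithmetic by multiplying truncated power series; the helpers below are mine, and the construction mirrors the final displayed expression step by step:

```python
from fractions import Fraction

N = 10  # number of series coefficients to keep

def mul(a, b):
    # truncated power-series product
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

geom  = [Fraction(1)] * N            # 1/(1-x)
geom2 = mul(geom, geom)              # 1/(1-x)^2
x_geom  = [Fraction(0)] + geom[:-1]  # x/(1-x)
x_geom2 = [Fraction(0)] + geom2[:-1] # x/(1-x)^2

def plus1(s):
    t = s[:]
    t[0] += 1
    return t

# build the final expression from the answer, inside out
g = plus1(geom2)            # 1/(1-x)^2 + 1
g = plus1(mul(g, x_geom))   # (...) * x/(1-x) + 1
g = plus1(mul(g, x_geom))   # (...) * x/(1-x) + 1
g = mul(g, x_geom2)         # (...) * x/(1-x)^2

# coefficients of x^n should be the partial sums of 1, 2, 5, 12, 26, 51, ...
seq = [1, 2, 5, 12, 26, 51, 92, 155]
partial, tot = [], 0
for a in seq:
    tot += a
    partial.append(tot)
assert g[1:9] == partial
```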
|sequences-and-series|arithmetic-progressions|
0
Multivariable limit $\lim_{{(x,y) \to (0,0)}} \frac{x^2y^2}{x^2+y^4}$
I'm struggling to find if the following limit exists: $\lim_{{(x,y) \to (0,0)}} \frac{x^2y^2}{x^2+y^4}$ Intuitively, I believe it does not, but every direction I approach it from ( $y=x$ , $y=x^2$ , $y=0$ , $x=0$ etc.) seems to provide the same limit of zero. Any help is very much appreciated, as well as any tips on finding a suitable direction to prove a limit doesn't exist.(Excluding the polar coordinate conversion method)
Notice that $$\frac{x^2y^2}{x^2+y^4} \leqslant \frac{x^2y^2}{x^2}=y^2$$ and hence the function is bounded near $(0,0)$ by $y^2$ . So, the limit is $0$ .
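The bound is easy to probe numerically near the origin (a sketch only, not a proof; the inequality above is the actual argument):

```python
import random

random.seed(1)
def f(x, y):
    return (x*x * y*y) / (x*x + y**4)

# probe random points close to the origin: the value never exceeds y^2
for _ in range(10000):
    x = random.uniform(-1e-3, 1e-3)
    y = random.uniform(-1e-3, 1e-3)
    if x == 0.0 and y == 0.0:
        continue
    assert f(x, y) <= y*y + 1e-18
```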
|limits|multivariable-calculus|
0
Probability of pairing 20 questions to 20 answers correctly
I am working with a problem from a statistics book as follows: You have 20 questions and 20 answers. You are supposed to pair the correct answer to the correct question. In the book they say that the probability of guessing and getting everything right is 1/400. However doesn't order matter here? Is the answer not 1/20! ? Apologies in advance for the dumb question.
I agree with @Srini, however you can simply prove it with 20 questions right away, without having to just use smaller examples. Let's start by calculating the total number of possibilities. For the first question, there are $20$ possible answers we can match to it. As for the second, we can use the twenty minus the one we used for the first question, or $19$ options. This keeps going until there is one option for the last question. This is of course $20!$ possibilities. Since exactly one of them is fully correct, there is a $\dfrac{1}{20!}$ chance of getting it all right, as you said. To go more into detail, if an answer overlaps or there is the same answer $x$ times, you will have $\dfrac{x!}{20!}$ .
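The distinct-answer case can be verified exhaustively for a small instance (brute force over $n=4$ ; the same counting argument scales to $20$ ):

```python
from itertools import permutations
from math import factorial

# exhaustive check of the same argument for a small case, n = 4 questions
n = 4
all_matchings = list(permutations(range(n)))
perfect = [m for m in all_matchings if m == tuple(range(n))]
assert len(all_matchings) == factorial(n) == 24
assert len(perfect) == 1     # so the probability of a perfect pairing is 1/4!
```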
|probability|
0
Comultiplication on the tensor algebra
Let $k$ be a commutative base ring. We have a category $\operatorname{Mod}_k$ of $k$ -modules and a category $\operatorname{grMod}_k$ of $\mathbb{Z}$ -graded $k$ -modules. Both of these have monoidal structures, and the first has a canonical symmetric monoidal structure. The second category has (at least) two symmetric monoidal structures/braidings $\beta : V \otimes W \to W \otimes V$ : the "non-oriented" braiding given on homogeneous simple tensors by $\beta(v \otimes w) = w \otimes v$ and the "oriented"/"Koszul" braiding given on homogeneous simple tensors by $\beta(v \otimes w) = (-1)^{|v| \cdot |w|}w \otimes v$ . I'm writing some notes where I try to get the signs right in DG/graded algebra and to maintain consistency I'm trying to do everything using the Koszul braiding. Now consider the tensor algebra $T(V)$ for a $k$ -module $V$ . This can be defined without ambiguity as the free graded algebra on the graded module $\Sigma V$ ( $V$ concentrated in degree $1$ ). We can use the u
I worked out the proof that the ideal generated by $v^2$ is a bi-ideal, and it actually requires that we use the Koszul sign rule. In that convention the two mixed terms $(1\otimes v)(v \otimes 1)$ and $(v \otimes 1)(1\otimes v)$ cancel, while in the unoriented convention they add up to $2 \cdot (v \otimes v)$ .
|tensor-products|noncommutative-algebra|multilinear-algebra|hopf-algebras|coalgebras|
1
Solve $\begin{cases} (x-y)^2=3-2x-2y\\ y(x-y+1)=x(y-x+1) \end{cases}$
Solve $\begin{cases} (x-y)^2=3-2x-2y \\ y(x-y+1)=x(y-x+1) \end{cases} $ $\Leftrightarrow\begin{cases} x^2-2xy+y^2=3-2x-2y\\xy-y^2+y=xy-x^2+x \end{cases}$ Simplifying the second equation: $\Leftrightarrow x^2-y^2=x-y \\ \Leftrightarrow (x+y)(x-y)=x-y\\ \Leftrightarrow x+y=1$ The first equation becomes: $\Leftrightarrow x^2-2xy+y^2=3-2(x+y)=1\\ \Leftrightarrow (x-y)^2=1 \\ \Leftrightarrow x-y=\pm1$ Hence: $\begin{cases} x+y=1\\x-y=\pm1 \end{cases} \Rightarrow x=1,y=0$ or $x=0,y=1$ which are the correct answers. However the problem is $x=\dfrac{3}{4}, y=\dfrac{3}{4}$ is also given as an answer. But I can't see how its possible for me to derive this answer from the given equations. Thanks for the help.
This is wrong: $$(x+y)(x-y)=x-y \Leftrightarrow x+y=1.$$ Instead, $$(x-y)(x+y-1)=0.$$ Two cases are possible: $x+y=1$ or $x=y$ . The first case is done by you. The second case turns the first equation into $$0=3-4x.$$ Hence the $$x=y=3/4.$$ It’s always a bad idea to divide by something that may be equal to $0$ ( $x-y$ in this case).
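All three solution pairs can be checked with exact rational arithmetic (helper names are mine):

```python
from fractions import Fraction

def eq1(x, y):
    return (x - y)**2 == 3 - 2*x - 2*y

def eq2(x, y):
    return y*(x - y + 1) == x*(y - x + 1)

solutions = [(Fraction(1), Fraction(0)),
             (Fraction(0), Fraction(1)),
             (Fraction(3, 4), Fraction(3, 4))]
assert all(eq1(x, y) and eq2(x, y) for x, y in solutions)
```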
|algebra-precalculus|
1
Solve $\begin{cases} (x-y)^2=3-2x-2y\\ y(x-y+1)=x(y-x+1) \end{cases}$
Solve $\begin{cases} (x-y)^2=3-2x-2y \\ y(x-y+1)=x(y-x+1) \end{cases} $ $\Leftrightarrow\begin{cases} x^2-2xy+y^2=3-2x-2y\\xy-y^2+y=xy-x^2+x \end{cases}$ Simplifying the second equation: $\Leftrightarrow x^2-y^2=x-y \\ \Leftrightarrow (x+y)(x-y)=x-y\\ \Leftrightarrow x+y=1$ The first equation becomes: $\Leftrightarrow x^2-2xy+y^2=3-2(x+y)=1\\ \Leftrightarrow (x-y)^2=1 \\ \Leftrightarrow x-y=\pm1$ Hence: $\begin{cases} x+y=1\\x-y=\pm1 \end{cases} \Rightarrow x=1,y=0$ or $x=0,y=1$ which are the correct answers. However the problem is $x=\dfrac{3}{4}, y=\dfrac{3}{4}$ is also given as an answer. But I can't see how its possible for me to derive this answer from the given equations. Thanks for the help.
You made a mistake when simplifying the second equation. $(x+y)(x-y) = x-y \Leftrightarrow x+y = 1$ is only true if $x-y \neq 0$ . I also don't get how you got $(x-y)^2 = 1$ from the first equation. The first equation simplifies to $y = x \pm 2 \sqrt{1-x}-1$ . From there you get $(0, 1)$ , $(1, 0)$ and $(\frac{3}{4},\frac{3}{4})$
|algebra-precalculus|
0
Multivariable limit $\lim_{{(x,y) \to (0,0)}} \frac{x^2y^2}{x^2+y^4}$
I'm struggling to find if the following limit exists: $\lim_{{(x,y) \to (0,0)}} \frac{x^2y^2}{x^2+y^4}$ Intuitively, I believe it does not, but every direction I approach it from ( $y=x$ , $y=x^2$ , $y=0$ , $x=0$ etc.) seems to provide the same limit of zero. Any help is very much appreciated, as well as any tips on finding a suitable direction to prove a limit doesn't exist.(Excluding the polar coordinate conversion method)
There is an alternative but less evident method $$|xy^2|\le {1\over 2}(x^2+y^4)$$ Hence $${x^2y^2\over x^2+y^4}={|x|y^2\over x^2+y^4}|x|\le {1\over 2}|x|$$ Thus the limit is $0$ if $x\to 0.$ In other solutions the limit is $0$ if $y\to 0.$
|limits|multivariable-calculus|
0
Random variable function is random variable?
I'm starting to study random variables, and during class my teacher mentioned that a non-decreasing function of a random variable is again a random variable. Then he said that $|X|$ and the indicator function $I(A_x)$ would be random variables. Intuitively I can see they are. But how do I prove it in the general case, or for these two examples?
Let us suppose a non empty set $\Omega$ and a $\sigma$ -algebra $\mathcal F$ on it. A function $X: \Omega \to \Bbb R$ is said to be $\mathcal F$ - measurable , if $X^{-1}(B)\in \mathcal F$ for every Borel set $B$ (i.e. a set inside the minimum $\sigma$ -algebra generated by the open sets of $\Bbb R$ or equivalently from intervals of the form $(-\infty, x]$ ) $^1$ . If $(\Omega, \mathcal F, P)$ is a probability space, then $X$ is a random variable . My suggestion about the way that one can prove that a function like the above is a random variable, is to check if the sets of the form $X^{-1}((-\infty, x])$ are contained in the given $\sigma$ -algebra $\mathcal F$ for an arbitrary $x\in \Bbb R$ .
|probability-theory|measure-theory|random-variables|measurable-functions|
0
Determinant of a matrix that has 0 on the diagonal
$$ \begin{pmatrix} 0&1&1&\cdots&1&1 \\ 1&0&x&\cdots&x&x \\ 1&x&0&\cdots&x&x \\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots \\ 1&x&x&\cdots&0&x \\ 1&x&x&\cdots&x&0 \end{pmatrix} $$ I have tried to solve it in different ways; mostly I tried to reduce it to a lower triangular matrix. Usually I use the approach with $\lambda$ , but I still don't have the right answer
Assume that we are looking at the $n\times n$ matrix. This works for all $n \geq 2$ . I also use $|\cdot|$ to mean determinant. One of the most useful facts for this problem is that if you add or subtract a multiple of a row (or column) from a different row (or column) then the determinant does not change. Subtracting the last row from all but itself and the first one has \begin{equation} \left | \begin{bmatrix} 0 & 1 & 1 & \cdots & 1 & 1 \\ 1 & 0 & x & \cdots & x & x \\ 1 & x & 0 & \cdots & x & x \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & x & x & \cdots & 0 & x \\ 1 & x & x & \cdots & x & 0 \\ \end{bmatrix} \right | = \left | \begin{bmatrix} 0 & 1 & 1 & \cdots & 1 & 1 \\ 0 & -x & 0 & \cdots & 0 & x \\ 0 & 0 & -x & \cdots & 0 & x \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -x & x \\ 1 & x & x & \cdots & x & 0 \\ \end{bmatrix} \right | \end{equation} Now add the second to second-to-last rows to the last row \begin{equation} \left | \
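The row-operation step can be sanity-checked numerically: the determinant is unchanged by subtracting the last row from rows $2$ through $n-1$ . The rational-arithmetic determinant below is my own throwaway helper, not part of the answer:

```python
from fractions import Fraction

def det(m):
    # determinant by Gaussian elimination over the rationals
    m = [[Fraction(v) for v in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * d

def original(n, x):
    # ones in the first row/column (except the corner), zero diagonal, x elsewhere
    m = [[x] * n for _ in range(n)]
    for i in range(n):
        m[i][i] = 0
        m[0][i] = m[i][0] = 1
    m[0][0] = 0
    return m

def transformed(n, x):
    # subtract the last row from rows 2 .. n-1, as in the answer
    m = original(n, x)
    for r in range(1, n - 1):
        m[r] = [a - b for a, b in zip(m[r], m[-1])]
    return m

for n in (3, 4, 6):
    for x in (Fraction(2), Fraction(5, 3)):
        assert det(original(n, x)) == det(transformed(n, x))
```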
|linear-algebra|matrices|determinant|
1
Fifth cyclotomic polynomial over a finite field
Consider the polynomial $g(x)=x^4+x^3+x^2+x+1 \in \mathbb{F}_3[x]$ . It's possible to show that $g$ is irreducible over $\mathbb{F}_3$ . If we let $\alpha$ be a root of $g$ , then $\alpha^4+\alpha^3+\alpha^2+\alpha+1=0$ , and it can be shown from this that $\alpha^5=1$ . There's an interesting result that the relation $g(\alpha)=0$ gives us. If $K$ is the splitting field of $g$ , then $5 \big| p^m-1$ , where $m$ is the degree of the splitting field over $\mathbb{F}_3$ . This comes from Lagrange's theorem, since, if $|K|=p^m$ , then $K^*$ is a group under multiplication of order $p^m-1$ . One can pretty easily check that $|K|\ge p^4$ , since $5$ divides neither $3^2-1$ nor $3^3-1$ . My question is, if we've found the smallest field in which $g$ could split, whether it necessarily does split. That is, given the relation that $\alpha^5=1$ , and that $5\big|3^4-1$ , whether that is sufficient to assure $\mathbb{F}_{3^4}$ is the splitting field of $g$ , or whether the only way to prove $\
Here's another trick. Let $F(x)=x^3$ be the Frobenius map. As soon as we get $F^n(\alpha) =\alpha, $ we will have the right extension. $F^2(\alpha) =\alpha ^9=\alpha^4=\alpha ^{-1}\not =\alpha .$ $F^3(\alpha) =\alpha ^{27}=\alpha ^2\not =\alpha. $ $F^4(\alpha) =\alpha ^{81}=\alpha. $ Thus the extension is degree $4.$
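Since $\alpha^5=1$ , the Frobenius iterates depend only on the exponent mod $5$ , so the calculation in the answer reduces to a one-line orbit computation (a sketch of the same check):

```python
# Since alpha^5 = 1, Frobenius iterates depend only on the exponent mod 5
e, orbit = 1, []
while True:
    e = (e * 3) % 5      # F(alpha^e) = alpha^(3e)
    orbit.append(e)
    if e == 1:
        break

assert orbit == [3, 4, 2, 1]   # alpha^3, alpha^9 = alpha^4, alpha^27 = alpha^2, alpha^81 = alpha
assert len(orbit) == 4         # F^4 is the first iterate fixing alpha, so the degree is 4
```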
|field-theory|finite-fields|irreducible-polynomials|splitting-field|cyclotomic-polynomials|
0
Ordering of group elements in MultiplicationTable vs CosetTable in GAP
I construct a finitely presented discrete group $G$ in GAP, a normal subgroup $H\triangleleft G$ of finite index $N$ , and the factor group $G/H$ of order $N$ . Assume for simplicity that $G$ has two generators $\gamma_1,\gamma_2$ . For each coset $[g_k]\in G/H$ , I wish to construct the $N\times N$ matrices of the left-regular representation of $G/H$ , i.e., $$ U_{ij}([g_k])=\delta_{[g_i],[g_k][g_j]}, $$ as well as the matrix $$ A_{ij}(\gamma_\alpha)=\delta_{[g_j],[g_i][\gamma_\alpha]}, $$ for each generator $\gamma_\alpha$ , $\alpha=1,2$ . Here, $\delta_{[g],[g']}=1$ if $[g]=[g']$ and zero otherwise. For any $[g_k]$ and $\gamma_\alpha$ , the matrices $U$ and $A$ commute. To compute $U_{ij}([g_k])$ , I use the Cayley table of $G/H$ obtained from MultiplicationTable (below, GmodH:=FactorGroup(G,H); ): N:=Order(GmodH);; M:=MultiplicationTable(GmodH);; U:=NullMat(N,N);; for j in [1..N] do i:=M[k][j];; U[i][j]:=1;; od; where k ranges from $1$ to $N$ and denotes which $[g_k]$ I am consider
The ordering of MultiplicationTable is the ordering of the list of its Elements . This will depend on the way how the group is represented. The ordering of the cosets in CosetTable is so that the table is standardized. This means that when reading it row by row, new cosets arise in the natural order of integers. I think it will be easiest to avoid building the two tables, but just work with the transversal: rt:=RightTransversal(G,H); and then for an element $g$ (e.g. a representative from the transversal) U:=NullMat(N,N); A:=NullMat(N,N); for i in [1..N] do A[i][PositionCanonical(rt,rt[i]*g)]:=1; U[PositionCanonical(rt,g*rt[i])][i]:=1; od;
|group-theory|quotient-group|gap|finitely-generated|cayley-table|
1
Multivariable limit $\lim_{{(x,y) \to (0,0)}} \frac{x^2y^2}{x^2+y^4}$
I'm struggling to find if the following limit exists: $\lim_{{(x,y) \to (0,0)}} \frac{x^2y^2}{x^2+y^4}$ Intuitively, I believe it does not, but every direction I approach it from ( $y=x$ , $y=x^2$ , $y=0$ , $x=0$ etc.) seems to provide the same limit of zero. Any help is very much appreciated, as well as any tips on finding a suitable direction to prove a limit doesn't exist.(Excluding the polar coordinate conversion method)
If in a neighborhood of some point $a$ you have that $|f(x) - L| \leq g(x)$ and $\displaystyle \lim_{x\to a} g(x) = 0$ , you have that $\displaystyle \lim_{x\to a} f(x) =L$ . In this case, as it was mentioned in the comments, we have that $$ \left| f(x,y) -0 \right|=\frac{x^2y^2}{x^2+y^4} \leq \frac{(x^2+y^4)y^2}{x^2+y^4} = y^2 \to 0, \quad \text{as } (x,y)\to (0,0), $$ which shows that the limit exists and is zero.
|limits|multivariable-calculus|
0
Why is every vertex in $q$ sets?
I am reading the paper " Choosability and fractional chromatic numbers ". It concerns the fractional chromatic numbers of a graph $G$ : let $\mathcal{S}(G)$ be the collection of independent sets in $G$ . A fractional coloring is a function $f:\mathcal{S}(G)\rightarrow\mathbb{R}_{\ge 0}$ such that $\sum_{S\in\mathcal{S}(G):v\in S}f(S)\ge 1$ for every $v\in V(G)$ . The fractional chromatic number $\chi^*(G)$ is $\min \sum_{S\in \mathcal{S}(G)}f(S)$ over all fractional coloring $f$ . It is proved that an optimal solution has the form $f(S)=p_S/q$ for $S$ in a subcollection $\mathcal{S}_0(G)$ of $\mathcal{S}$ , where $p_S,q$ are positive integers, and the weight on all other $S$ is 0. At the end of Section 2, it is claimed that "by taking $p_S$ multiplicity of each $S\in\mathcal{S}_0$ , then each vertex is in exactly $q$ sets". I am wondering why this is true.
It is not precisely true that every vertex lies in exactly $q$ sets; however, we can reduce to that case without loss of generality. First, what do we know? If we take each $S \in \mathcal S_0$ with multiplicity $p_S$ , then for every vertex $v$ , we have $$\sum_{S \ni v} f(S) \ge 1 \iff \sum_{S \ni v} \frac{p_S}{q} \ge 1 \iff \sum_{S \ni v} p_S \ge q.$$ The last inequality says that the total number of sets $S \in \mathcal S_0$ containing $v$ , counted with the multiplicity with which they are taken, is at least $q$ . It is not necessarily true that the total number of sets containing each vertex is exactly $q$ , counted with multiplicity. To see this, consider the graph below: 1 --- 2 3 This graph has fractional chromatic number $2$ (or $\frac21$ ); one way to achieve this is to take $\{1,3\}$ with multiplicity $1$ and $\{2,3\}$ with multiplicity $1$ . Then, vertex $3$ is included in more independent sets than expected. However, in this case, there is a different solution: take the s
|combinatorics|graph-theory|linear-programming|coloring|
1
A (square) integer matrix which doesn't diagonalize over $\mathbb{Z}$ also doesn't diagonalize over some $\mathbb{Z}/p^{k}$
Motivated by a series of recent Twitter polls by Daniel Litt, we ask (and answer, since the issue doesn't seem to have previously been addressed on this site) the following natural question: Given a natural $\text{d}$ , a $\text{d}\times\text{d}$ matrix $A$ with integer entries, and a commutative ring $\mathbb{E}$ , say that $A$ diagonalizes over $\mathbb{E}$ if there exists a matrix $F\in\text{GL}_{\text{d}}\left(\mathbb{E}\right)$ with $F^{-1}AF$ diagonal (over $\mathbb{E}$ ). Clearly if $A$ diagonalizes over $\mathbb{Z}$ , then it diagonalizes over every $\mathbb{E}$ . Question: If $A$ doesn't diagonalize over $\mathbb{Z}$ , must there exist a prime $p$ and natural $k$ such that $A$ also doesn't diagonalize over $\mathbb{Z}/p^{k}$ ?
Here's a much shorter means by which to handle the long step of the other answer (i.e., fitting Case II into the classification), albeit dependent upon the axiom of countable choice and without the effective bound of the more involved argument. Maintaining its notation, Claim: If $A$ 's characteristic polynomial splits into (not necessarily distinct) linear factors over $\mathbb{Z}$ , with (distinct) roots $\lambda_{0},\dots,\lambda_{\text{n}-1}$ of respective multiplicities $e_{0},\dots,e_{\text{n}-1}>0$ , and if for all primes $p$ and naturals $k$ the matrix $A$ diagonalizes over each $Z/p^{\text{k}}$ , then $$\text{coker}\left(\bigoplus_{\text{j}\colon\text{n}}\text{ker}\left(\lambda_{\text{j}}-A\right)\ \overset{\left(\text{inc.}_{\text{j}}\right)_{\text{j}\colon\text{n}}}{\to}\ \mathbb{Z}^{\text{d}}\right)\ \simeq\ 0\text{.}$$ (The original claim in Case II being the contrapositive.) Proof: If $A$ diagonalizes over each $\mathbb{Z}/p^{\text{k}}$ , then by König's lemma (applied se
|linear-algebra|number-theory|diagonalization|
0
Fremlin 112Y(e) - Sum of measure of counting sets
I am trying to solve Exercise 112Y(e) from Fremlin's Measure Theory (Volume 1): Let $(X,\Sigma,\mu)$ be a measure space and let $(E_k)_{k\in\mathbb{N}}\subset\Sigma$ . Define $H_k=\{x\in X:|\{n\in\mathbb{N}:x\in E_n\}|\geq k\}$ . The goal is to show that $\sum_{k=1}^\infty\mu(H_k)=\sum_{k=1}^\infty\mu(E_k)$ . I proved that $H_k=\bigcup_{S\subset\mathbb{N},|S|=k}\bigcap_{i\in S}E_i$ . Fremlin's hint is to first consider the case where only finitely many $E_k$ are non-empty. In this case one needs to prove $\sum_{k=1}^n\mu\left(\bigcup_{S\subset\{1,\ldots,n\},|S|=k}\bigcap_{i\in S}E_i\right)=\sum_{k=1}^n\mu(E_k)$ . The infinitary case is then clear (see this question ). I tried proving this formula by induction, using the inclusion-exclusion principle, but that led nowhere. It seems that I am missing some elementary set-theoretic manipulation. Any help would be greatly appreciated!
We consider the required case when there are only finitely many sets $E_1,\dots, E_n$ . Put $Y=\{0,1\}^n$ and for each $y=(y_1,\dots,y_n)\in Y$ put $|y|=\sum_{i=1}^n y_i$ . Let $\chi:X\to Y$ be the diagonal product of the sequence $(\chi_{E_1},\dots,\chi_{E_n})$ of the characteristic maps, that is, for each $x\in X$ and each natural $k\le n$ the $k$ th component of $\chi(x)$ equals $1$ , if $x\in E_k$ , and equals $0$ , otherwise. Then $$\sum_{k=1}^n\mu(E_k)=\sum_{y\in Y}\mu(\chi^{-1}(y))|y|= \sum_{k=1}^n \sum_{y\in Y,\,|y|\ge k}\mu(\chi^{-1}(y))=$$ $$\sum_{k=1}^n \mu\left(\bigcup_{y\in Y,\,|y|\ge k}\chi^{-1}(y)\right)=\sum_{k=1}^n \mu(H_k).$$
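For counting measure on a finite set the identity reduces to counting multiplicities, which makes a quick randomized check possible (all names below are mine):

```python
import random

random.seed(7)
X = range(12)          # finite ground set, counting measure
n = 6
E = [{x for x in X if random.random() < 0.4} for _ in range(n)]

# H_k = points lying in at least k of the sets
count = {x: sum(x in Ek for Ek in E) for x in X}
H = [{x for x in X if count[x] >= k} for k in range(1, n + 1)]

# both sides count each point once per set containing it
assert sum(len(Hk) for Hk in H) == sum(len(Ek) for Ek in E)
```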
|measure-theory|elementary-set-theory|
1
Question on manipulation step of trigonometric integral
In our book we have the manipulation \begin{align*} b\int\limits_0^{2\pi}\sqrt{\left(1-\left(1-\frac{a^2}{b^2}\right)\sin^2(t)\right)}~\!dt=4b\int\limits_0^{\frac{\pi}{2}}\sqrt{\left(1-\left(1-\frac{a^2}{b^2}\right)\sin^2(t)\right)}~\!dt \end{align*} where $a,b>0$ . Where does $4$ come from and why the change of the upper integration bound to $\frac{\pi}{2}$ ? As the author didn't add a $4$ to $t$ inside the sine function it can't be integration by substitution, so what justifies this step?
The first integral is taken over the interval $[0, 2\pi]$ , the second only over $[0, \frac{\pi}{2}]$ . The integrand depends on $t$ only through $\sin^2(t)$ . Now $\sin^2(t)$ has period $\pi$ , so the integral over $[0, 2\pi]$ is twice the integral over $[0, \pi]$ ; and since $\sin(\pi - t) = \sin(t)$ , the integrand on $[0, \pi]$ is symmetric about $t = \frac{\pi}{2}$ , so the integral over $[0, \pi]$ is in turn twice the integral over $[0, \frac{\pi}{2}]$ . If we denote the integrand as $f(t)$ , then the given equation can be rewritten as: $$ \int_0^{2\pi} f(t) \, dt = 4 \int_0^{\frac{\pi}{2}} f(t) \, dt $$
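A numeric spot check of the quarter-period identity for a couple of $(a,b)$ pairs (the `simpson` helper is my own throwaway quadrature routine):

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, m // 2))
    return s * h / 3

for a, b in ((1.0, 2.0), (3.0, 5.0)):
    f = lambda t: math.sqrt(1 - (1 - a*a/(b*b)) * math.sin(t)**2)
    full = simpson(f, 0, 2*math.pi)
    quarter = simpson(f, 0, math.pi/2)
    assert abs(full - 4*quarter) < 1e-9
```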
|real-analysis|integration|trigonometry|trigonometric-integrals|
1
Significance value in Hypothesis testing
A builder claims that heat pumps are installed in $70\%$ of all homes being constructed today in Chennai. Would you agree with this claim if a random survey of new homes in this city showed that $8$ out of $15$ had heat pumps installed? Use a $10\%$ level of significance. Here the null hypothesis(H $_0$ ) is $p = 0.7$ and alternative hypothesis(H $_A$ ) : $p \neq 0.7$ . So, it is a both-ended test. I want to compute the value of the type-1 error using the definition P ( $A^c$ | $H_0$ is true), where A is the acceptance set, but I cannot formulate it correctly. How do we define the acceptance set? I have completed the following two steps. $\bullet$ Reject $H_0$ if $\big|T-10.5\big| > c$ . $\bullet$ P( $A^c$ | $H_0$ is true ) = $\text{P} (T > c+10.5 \,\, \text{or} \,\, T < 10.5 - c \,\,| \,\, p =0.7) = \text{P} (T > c+10.5 \,\,| \,\, p =0.7) + \text{P} ( T < 10.5 - c \,\,| \,\, p =0.7)$ . The above step is creating confusion. Any help would be appreciated.
For a two-sided test that controls the Type I error, the critical values for a sample size of $n = 15$ are integers $L$ and $U$ such that $$\Pr[X \le L \mid H_0] \le \alpha/2, \quad \Pr[X \ge U \mid H_0] \le \alpha/2,$$ where $\alpha = 0.10$ is the overall significance level of the test, and $$X \mid H_0 \sim \operatorname{Binomial}(n = 15, p = 0.7)$$ is the random number of homes with heat pumps installed in a sample of $15$ homes surveyed. Thus we want to find the largest such $L$ and smallest such $U$ . To this end, we calculate the cumulative distribution of $X$ under the null hypothesis: $$\begin{align} \Pr[X = 0 \mid H_0] &\approx 1.43489 \times 10^{-8} \\ \Pr[X \le 1 \mid H_0] &\approx 5.16561 \times 10^{-7} \\ \Pr[X \le 2 \mid H_0] &\approx 8.71935 \times 10^{-6} \\ \Pr[X \le 3 \mid H_0] &\approx 9.16587 \times 10^{-5} \\ \Pr[X \le 4 \mid H_0] &\approx 0.000672234 \\ \Pr[X \le 5 \mid H_0] &\approx 0.00365252 \\ \Pr[X \le 6 \mid H_0] &\approx 0.0152425 \\ \Pr[X \le 7 \mid H_0] &
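The search for the largest such $L$ and smallest such $U$ can be automated; a small sketch (the helper names are mine, not from the answer):

```python
import math

def binom_cdf(k, n, p):
    # P[X <= k] for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n, p, alpha = 15, 0.7, 0.10
# largest L with P[X <= L] <= alpha/2 and smallest U with P[X >= U] <= alpha/2
L = max(k for k in range(n + 1) if binom_cdf(k, n, p) <= alpha / 2)
U = min(k for k in range(n + 1) if 1 - binom_cdf(k - 1, n, p) <= alpha / 2)
print(L, U)  # reject H0 only if X <= L or X >= U; the observed X = 8 lies strictly between
```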
|statistics|hypothesis-testing|
1
Recursive Definition of Normal Form with explicit substitution
Context I assume a simply typed lambda calculus, probably written with de Bruijn indices. With $\to_\beta$ I denote the $\beta$ -reduction as a relation. Also, my question eventually will use this $\lambda$ -calculus with explicit substitution (originally called $\lambda\sigma$ -calculus), where substitution is introduced on term level via closures $M[s]$ with $\lambda$ -term $M$ and substitution $s$ . A few definitions are needed. Definition: Normal Form A $\lambda$ -term $M$ is in normal form, if there is no $\lambda$ -term $N$ such that $M\to_\beta N$ . Besides this definition, there is also another definition I'm ultimately interested in: defining normal form recursively: Definition II: Neutral and Normal Form Mutually defining neutral and normal forms: Neutral Form: Every variable is in neutral form. If $M$ is in neutral form and $N$ is in normal form then $(M N)$ is in neutral form. Normal Form: If $M$ is in neutral form, then $M$ is in normal form. If $M$ is in normal form, then $\lambda x.M$ is in normal form.
Different recursive definitions - cast in the guise of abstract machines - abound, such as the Krivine Machine or CEK Machine , which are suitable for normal order reduction, or the SECD Machine , I believe, for applicative order reduction. You forgot a key definition: head-normal form . All pure λ-terms are either of the form $xY⋯Z$ headed by a variable $x$ , where $Y$ , ..., $Z$ are other λ-terms, or are a λ-abstraction $λx·A$ , where $A$ is some other λ-term, or else are headed by a β-redex: $(λx·A)BY⋯Z$ . The first two are in head-normal form. In the first case, normal form is reached when $Y$ , ..., $Z$ are each reduced to normal form. In the second case, the same holds true when $A$ is reduced to normal form, with the additional qualifier that an η-reduction $λx·Ax → A$ also occurs if $x ∉ ∂A$ , where $∂A$ will be used here and below to denote the set of variables that occur freely in a λ-term $A$ . So, we can define $(Ax)/x = A$ , if $x ∉ ∂A$ , and $A/x = λx·A$ otherwise. Only i
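Definition II translates directly into a pair of mutually recursive predicates. A minimal Python sketch for named terms without explicit substitutions (the tuple encoding of terms is my own), together with a redex check matching Definition I:

```python
# Terms: ('var', name), ('lam', name, body), ('app', fun, arg)

def is_neutral(t):
    # variables are neutral; an application (M N) is neutral
    # when M is neutral and N is normal
    if t[0] == 'var':
        return True
    if t[0] == 'app':
        return is_neutral(t[1]) and is_normal(t[2])
    return False

def is_normal(t):
    # neutral terms are normal; an abstraction is normal when its body is
    if is_neutral(t):
        return True
    if t[0] == 'lam':
        return is_normal(t[2])
    return False

def has_redex(t):
    # Definition I: a beta-redex is an application whose head is a lambda
    if t[0] == 'var':
        return False
    if t[0] == 'lam':
        return has_redex(t[2])
    return t[1][0] == 'lam' or has_redex(t[1]) or has_redex(t[2])

# identity = λx.x is normal; (λx.x) y contains a redex
I = ('lam', 'x', ('var', 'x'))
redex = ('app', I, ('var', 'y'))
print(is_normal(I), has_redex(I))          # True False
print(is_normal(redex), has_redex(redex))  # False True
```

On these samples the two definitions agree: a term satisfies the recursive `is_normal` exactly when `has_redex` fails.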
|substitution|lambda-calculus|
0
Do the projective modules in an exact category determine the admissible epimorphisms?
In an exact category $(\mathbf{C}, \mathcal{E})$ , we define an object $P \in \mathbf{C}$ to be projective if for every admissible epimorphism $g : A \rightarrow B$ $$ \mathbf{C}(P,A) \rightarrow \mathbf{C}(P,B) $$ is a surjective map of abelian groups. Let $\mathcal{P}$ be the class of projective objects in $(\mathbf{C},\mathcal{E})$ . If, for an epimorphism $g : A \rightarrow B$ , $$ \mathbf{C}(P,A) \rightarrow \mathbf{C}(P,B) $$ is surjective for all $P \in \mathcal{P}$ , then is $g$ an admissible epimorphism? If this doesn't hold in general, is there some natural condition on the exact category that ensures that this property holds?
There are examples, even with enough projectives, where the projectives don't determine the admissible epimorphisms. Let $\operatorname{Vect}_k$ be the abelian category of vector spaces over some field $k$ , and let $\mathbf{C}=\{V\mid\dim_k(V)\neq1\}$ be the full subcategory of vector spaces that are not precisely one-dimensional. Then $\mathbf{C}$ is closed under extensions in the abelian category $\operatorname{Vect}_k$ of all vector spaces, and so is an exact category with the exact structure inherited from $\operatorname{Vect}_k$ . It is easy to check that the epimorphisms in $\mathbf{C}$ are just the surjective maps. The admissible epimorphisms in $\mathbf{C}$ are the epimorphisms with kernel in $\mathbf{C}$ . Clearly, every object of $\mathbf{C}$ is projective, and so $\mathbf{C}$ has enough projectives. But a surjective map $k^3\to k^2$ is a non-admissible epimorphism in $\mathbf{C}$ such that $\mathbf{C}(P,k^3)\to\mathbf{C}(P,k^2)$ is surjective for every $P$ . This example de
|abstract-algebra|category-theory|homological-algebra|
0
Extension of Discrete Valuation Rings
Let $(R, m_R, \kappa_R) \subset (B, m_B, \kappa_B)$ be an extension of discrete valuation rings whose corresponding extension of fields of fractions $K :=\text{Frac}(R) \subset \text{Frac}(B) =: L$ is finite of degree $n:=[L:K]$ . It is well known that in general we have only the inequality $[L:K] \ge e_{B/R} \cdot [\kappa_B: \kappa_R]$ , where $e_{B/R}$ is the ramification degree, i.e., minimal $e \ge 1$ with $m_B^e = m_RB$ for the maximal ideals. Questions: I'm looking for a reference for the proof that the equality $[L:K] = e_{B/R} \cdot [\kappa_B: \kappa_R]$ holds iff $R \to B$ is finite. For instance, it is well known that this holds for $L/K$ separable, if we localize the rings of integers at a place for number fields. Secondly, I'm looking for an example of field extension generated by a single element, i.e., $L=K(\alpha)$ , such that we have proper inequality $[L:K] >e_{B/R} \cdot [\kappa_B: \kappa_R]$ . In light of the quoted result in first part, this is equivalent to that th
If $R\subset B$ is finite, then $B$ is a free module over $R$ , since a finitely generated torsion-free module over a DVR is free. The rest is easy. For the second, take $R=\mathbb{C}[x]_{(x)}$ and $L=K(y)$ with $y^2=x+1$ . Then $B= R[y]_{(y-1)}$ is a DVR with the properties you want.
|abstract-algebra|ring-theory|commutative-algebra|
1
Counterexamples about the differentiability of several variables
I've learned about differentiability of several variables. If $f(x,y)$ is differentiable then we can use chain rule on it. But I suspect the converse of this proposition is not right. So, is there a function $f(x,y)$ , such that the partial derivatives $\frac{\partial f}{\partial x}(0,0),\frac{\partial f}{\partial y}(0,0)$ exist, and for every functions $x(t)$ and $y(t)$ differentiable at $0$ satisfying $(x,y)(0)=(0,0)$ , the chain rule $$\frac{df(x(t),y(t))}{dt}(0)=\frac{\partial f}{\partial x}(0,0)\frac{dx}{dt}(0)+\frac{\partial f}{\partial y}(0,0)\frac{dy}{dt}(0)$$ holds, but $f$ is not differentiable at $(0,0)$ ? I'm curious about the counterexamples.
"If $f(x,y)$ is differentiable then we can use chain rule on it. But I suspect the converse of this proposition is not right" : surprisingly, your guess is wrong, i.e. the converse is right. More formally: Theorem . Let $f(x,y)$ be a function, and $a,b\in\Bbb R$ . If, for every pair of differentiable functions $x(t)$ and $y(t)$ satisfying $x(0)=y(0)=0$ , $$\frac{df(x(t),y(t))}{dt}(0)=ax'(0)+by'(0)$$ holds, then $f$ is differentiable at $(0,0)$ (and of course, $df_{(0,0)}(h,k)=ah+bk$ ). Proof (by contraposition). Assume $$\frac{\partial f}{\partial x}(0,0)=a,\quad \frac{\partial f}{\partial y}(0,0)=b,$$ but $f$ is not differentiable at $(0,0)$ , i.e. $\frac{f(h,k)-f(0,0)-ah-bk}{\|(h,k)\|}\not\to0$ as $(h,k)\to(0,0)$ , and let us construct a pair $(x,y)$ of differentiable functions such that $x(0)=y(0)=0$ and $\frac{f(x(t),y(t))-f(0,0)}t\not\to ax'(0)+by'(0)$ as $t\to0$ . Since $\frac{f(h,k)-f(0,0)-ah-bk}{\|(h,k)\|}\not\to0$ as $(h,k)\to(0,0)$ , there exists an $\epsilon>0$ and a sequence $x_n+iy
|calculus|derivatives|differential|
0
Combinatoric identity
I'm trying to get a combinatorial proof of the following identity by making up some story. $$\sum_{k=1}^n {k \choose j}k = {n+1 \choose j+1}n - {n+1 \choose j+2}$$ I can do it, by simplifying the expression first, but I wanted to prove it without any changes to the identity.
Suppose we have $n+1$ people and we would like to select $j+1$ of them to be on a sports team, with one of these being captain. The $n+1$ people are ordered in increasing age and we have been told that the captain should be older than anyone else on the team. Furthermore we need to pick someone to organise the fixtures, and this person may not necessarily be one of the $j+1$ on the team (but they must be younger than the captain). If the $(k+1)$ st person is captain we can select $\binom{k}{j}$ teams from this and we can pick any of the $k$ people younger than the captain to be the organiser, giving $\binom{k}{j} k$ . Since the captain could be any one of the $n+1$ people (except the first $j$ , which correspond to the $k$ for which $\binom{k}{j}=0$ anyway), this gives $\sum_{k=1}^n \binom{k}{j} k$ teams in total. On the other hand, we could instead first choose our team of $j+1$ from the $n+1$ people and then select someone (from the $n$ people other than the captain) to be the or
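The double count above can be sanity-checked numerically over a range of parameters; a quick sketch:

```python
from math import comb

def lhs(n, j):
    # sum over the possible captains, as in the story above
    return sum(comb(k, j) * k for k in range(1, n + 1))

def rhs(n, j):
    # the closed form from the identity
    return comb(n + 1, j + 1) * n - comb(n + 1, j + 2)

checks = [(n, j) for n in range(1, 12) for j in range(0, n + 1)]
print(all(lhs(n, j) == rhs(n, j) for n, j in checks))  # True
```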
|combinatorics|combinatorial-proofs|
1
Show that $[\sqrt{n}]=[\sqrt{n}+\frac{1}{n}]$, for any $n\in N, n\geq 2$
Show that $[\sqrt{n}]=[\sqrt{n}+\frac{1}{n}]$ , for any $n\in N, n\geq 2$ I let $a=\sqrt{n}$ , and we know that $k\leq a < k+1$ , where $k\in N$ . From now all we have to do is to show that $k \leq \frac{1}{a^2}+a < k+1$ . I tried processing the first inequality but got to nothing useful. I hope one of you can help me! Thank you!
Thinking aloud. $\sqrt n$ is between two integers. $\frac 1n < 1$ , so adding $\frac 1n$ to $\sqrt n$ will either a) keep the result between the same two integers (in which case $[\sqrt n + \frac 1n] =[\sqrt n]$ ), or b) $\frac 1n$ was just enough to push the result above the upper integer. In this case $[\sqrt n + \frac 1n] =[\sqrt n] + 1$ and there is an integer $k$ so that $\sqrt n < k \le \sqrt n + \frac 1n$ . It is now our job to prove that $\sqrt n < k \le \sqrt n + \frac 1n$ simply can never happen (if $n \ge 2$ ). $\sqrt n < k$ so $n < k^2$ , so $k^2 \ge n+1$ and $k \ge \sqrt {n+1}$ . If we can show $\sqrt{n+1} - \sqrt n \ge \frac 1n$ we will be done. Let $e = \sqrt{n+1} -\sqrt n = \frac 1{\sqrt{n+1} + \sqrt n}>\frac 1{2\sqrt{n+1}}$ . We have $\frac 1{2\sqrt{n+1}}\ge \frac 1n \iff n \ge 2\sqrt{n+1} \iff n^2\ge 4n + 4 \iff n^2 -4n + 4 \ge 8\iff n-2 \ge 2\sqrt 2 \iff n\ge 2\sqrt 2+2$ which holds for $n\ge 5$ , but we have to show it also holds for $n = 2,3,4$ . In these cases we may have $\sqrt{n+1} - \sqrt n < \frac 1n$ but we probably do not have $k =\sqrt{n+1}$ . If $n = 2$
|inequality|radicals|
0
Unique rotation matrix using two axes when rotating a vector
I have a vector $v1$ , and a rotated vector $v2$ . I want to find two rotation matrices $Rx$ and $Ry$ , which are rotation matrices around the x-axis and y-axis, respectively, so that $Rx \times Ry \times v1 = v2$ . My intuition tells me both $Rx$ and $Ry$ are unique if I restrict $\theta$ , which corresponds to $Rx$ , to $[-90, 90]$ , and $\phi$ , which corresponds to $Ry$ , to $[0, 360]$ . But how am I going to find those matrices? To simplify the problem, let $v1=(0,0,1)$ . I finally get this equation: $\begin{pmatrix} \sin(\phi) \\ -\sin(\theta)\cos(\phi) \\ \cos(\theta)\cos(\phi) \\ \end{pmatrix}=\begin{pmatrix}x\\y\\z\end{pmatrix}.$ But I don't feel right about that. Is $\phi$ determined by $x$ alone? It's so easy to find a counterexample, e.g. $v1=(0, 0, 1), v2=(x, y, 0)$ . In this case, $\phi$ should be 90 or 270. Where am I wrong?
The rotation about the $x$ axis by an angle $\theta$ is given by the rotation matrix: $ R_x = \begin{bmatrix} 1 && 0 && 0 \\ 0 && c_1 && -s_1 \\ 0 && s_1 && c_1 \end{bmatrix} $ where $c_1 = \cos \theta, s_1 = \sin \theta $ And the rotation about the $y$ axis by an angle $\phi$ is given by the rotation matrix: $R_y = \begin{bmatrix} c_2 && 0 && s_2 \\ 0 && 1 && 0 \\ -s_2 && 0 && c_2 \end{bmatrix} $ where $c_2 = \cos(\phi), s_2 = \sin(\phi) $ So, the combined rotation is $ R = R_x R_y = \begin{bmatrix} 1 && 0 && 0 \\ 0 && c_1 && -s_1 \\ 0 && s_1 && c_1 \end{bmatrix} \begin{bmatrix} c_2 && 0 && s_2 \\ 0 && 1 && 0 \\ -s_2 && 0 && c_2 \end{bmatrix} = \begin{bmatrix} c_2 && 0 && s_2 \\ s_1 s_2 && c_1 && -s_1 c_2 \\ - c_1 s_2 && s_1 && c_1 c_2 \end{bmatrix} $ Now given two arbitrary unit vectors given as follows: $v_1 = ( x_1, y_1, z_1 ) $ and $v_2 = (x_2, y_2, z_2 ) $ where $x_1^2 + y_1^2 + z_1^2 = 1 = x_2^2 + y_2^2 + z_2^2 $ And you want $v_2 = R v_1 $ , so $ \begin{pmatrix} x_2 \\ y_2 \\ z
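For the simplified case $v_1=(0,0,1)$, the angles can be recovered from the components of $R\,e_3=(\sin\phi,\;-\sin\theta\cos\phi,\;\cos\theta\cos\phi)$ as derived above. A sketch (note that `asin` confines $\phi$ to $[-90^\circ,90^\circ]$, a different convention from the question's $[0,360)$):

```python
import math

def angles_from_target(v2):
    # v2 = Rx(theta) Ry(phi) e3 = (sin phi, -sin theta cos phi, cos theta cos phi)
    x, y, z = v2
    phi = math.asin(max(-1.0, min(1.0, x)))  # phi in [-pi/2, pi/2]
    theta = math.atan2(-y, z)                # from the last two components
    return theta, phi

def apply(theta, phi):
    # third column of R = Rx Ry, i.e. R applied to (0, 0, 1)
    s1, c1 = math.sin(theta), math.cos(theta)
    s2, c2 = math.sin(phi), math.cos(phi)
    return (s2, -s1 * c2, c1 * c2)

# round-trip check on a sample unit vector
v2 = (0.3, -0.4, math.sqrt(1 - 0.09 - 0.16))
theta, phi = angles_from_target(v2)
print(apply(theta, phi))  # recovers v2
```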
|matrices|geometry|trigonometry|3d|rotations|
1
Wedge of vectors and wedge of forms
Consider the $\mathbb{R}^3$ vectors expressed in terms of the canonical basis $$X=\sum_{i=1}^3x_ie_i,\,Y=\sum_{i=1}^3 y_ie_i,$$ so the wedge product of vectors is $$X\wedge Y=(x_2y_3-x_3y_2)e_1-(x_1y_3-x_3y_1)e_2+(x_1y_2-x_2y_1)e_3.$$ I was wondering if there's any relation between this wedge and the usual wedge of forms, so I associate a $1$ -form to each of these vectors, on the basis $\{de_i\}$ of $\Omega^1(\mathbb{R}^3),$ namely $$X=\sum_{i=1}^3 x_ide_i,\,Y=\sum_{i=1}^3 y_ide_i,$$ so the wedge as forms is $$X\wedge Y=(x_1y_2-x_2y_1)de_1\wedge de_2+(x_1y_3-x_3y_1)de_1\wedge de_3+(x_2y_3-x_3y_2)de_2\wedge de_3.$$ Clearly there is much in common between both expressions, but I can no longer make a formal statement. I thought about $e_i\mapsto de_j\wedge de_k$ for $i,j,k=1,2,3$ as some sort of map, though the minus sign is missing when $e_2$ is mapped to $de_1\wedge de_3$ . Any thoughts about this?
$ \newcommand\Ext{\mathop{\textstyle\bigwedge}} \newcommand\K{\mathbb K} \newcommand\R{\mathbb R} \newcommand\ExtPow[1]{\mathop{\textstyle\bigwedge^{\mkern-1mu#1}}} \newcommand\form[1]{\langle#1\rangle} \newcommand\dd{\mathrm d} $ You have (partly) discovered the Poincaré isomorphism (which is also intimately related to the Hodge star). One reference is Werner Greub's Multilinear Algebra . It should be noted that a "differential form" is properly a type of field over a manifold. This field structure is irrelevant for this discussion; in the following, any time I say "differential form" what I really mean is more like "differential form at some fixed point on a manifold". Let $V$ be a finite dimensional vector space over an arbitrary field $\K$ . Your "wedge product of vectors" is poor terminology, and it is better to just call it the cross product and reserve the "wedge product" for the exterior algebra $\Ext V$ . This is exactly the analog of differential forms $\Ext V^*$ but built up
|differential-geometry|differential-forms|multilinear-algebra|
1
Density of $\mathbb{Q}$ in $\mathbb{R}$ seemingly contradictory to infinite set cardinality
In my real analysis module we proved that both $\mathbb{Q}$ and $\mathbb{R \setminus Q}$ are dense in $\mathbb{R}$ , meaning that between any two real numbers there exists a rational and an irrational number respectively. In my own reading of set theory and the cardinality of infinite sets, I have learnt that $$\left| \mathbb{Q} \right| = \aleph_0 \text{ , } \left| \mathbb{R} \setminus \mathbb{Q} \right| = \aleph_1 \text{ , } \left| \mathbb{R} \right| = \aleph_1 \quad \colon \aleph_1 > \aleph_0$$ From this it appears that there is a contradiction, since given that the cardinality of the real numbers is strictly greater than the cardinality of the rational numbers, it doesn't follow that the rationals can be dense in the reals. That is to say that given that there are "more" real numbers than there are rational numbers, there simply aren't enough rationals to fill the gaps between the reals. Given that the cardinality of the irrational numbers is the same as the cardinality of the reals
"Given that the cardinality of the irrational numbers is the same as the cardinality of the reals their density does follow since there are enough to go around."—This is an invalid argument. The interval $[0,1]$ also has the same cardinality as the real numbers but is certainly not dense in the reals. I include this in an answer rather than just a comment because I think it's actually quite relevant to the crux of the misunderstanding. Cardinality is a property of "bare" sets—sets with no other structure other than how many elements they have. Density is a property of ordered sets—sets with the structure of an inequality relation. So we shouldn't, for example, expect to be able to tell whether a subset of $\Bbb R$ is dense based only on its cardinality. (Sometimes we can—finite sets aren't dense—but that's an extreme case and an anomaly.)
|real-analysis|set-theory|intuition|fake-proofs|
0
Could I use Green's Theorem here?
I want to solve for the line integral: $$\tag{1}\oint \alpha\nabla \phi_i\cdot \hat{\textbf{n}} ds$$ on the square boundary: $(0\le x \le 1, 0), (1,0 \le y \le 1), (1 \le x \le 0, 1),(0,1\le y \le 0)$ Where the function $\phi_i = \sin(n\pi x)\sin(n\pi y)$ and $\alpha = 1$ on the boundary. I don't know how $\hat{\textbf{n}}$ is oriented with respect to the gradient of $\phi_i$ , so would it be possible to instead use Green's theorem $\oint_C Pdx + Qdy = \int\int_D \Big(\frac{\partial Q}{\partial x} -\frac{\partial P}{\partial y}\Big) dA$ and compute the double integral instead?
So going off of Kurt G, for $\alpha = 1$ on the boundaries, I should get: $$\oint_C 1\nabla \phi_i \cdot \hat{\textbf{n}} ds = \int\int_D \nabla \cdot \nabla\phi_i dA$$ And computing the double integral, I should get: $$-2(n\pi)^2\int_0^1 \int_0^1 \sin(n\pi x)\sin(n \pi y)dxdy = \\ 2n\pi \int_0^1 \Big(\cos(n\pi x)|_0^1\Big)\sin(n\pi y)dy = \\ 2n\pi \Big((-1)^n -1\Big)\int_0^1 \sin(n\pi y)dy = \\ -2\Big((-1)^n -1\Big)\Big(\cos(n\pi y)|_0^1\Big) = \\ \boxed{-2\Big((-1)^n -1\Big)^2}$$ Is that correct? EDIT: Plugging this result into $$b_i = \int\int f(x,y) \phi(x,y)_i dA +\oint \alpha \nabla \phi_i \cdot \hat{\textbf{n}}ds$$ to obtain the solution $$u(x,y) = \sum_i b_i\phi_i$$ , yielded a solution about 99.3% accurate compared to the desired solution within the domain. This can be seen by comparing the 2 plots below. The problem with the plot on the left which I intentionally excluded the values of the solution on the boundary, also evaluated to 0 meaning this solution for whatever reason
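The boxed value can be checked against a direct numerical evaluation of $\iint_D \nabla\cdot\nabla\phi_i\,dA$; a sketch using composite Simpson's rule (the mode numbers $1,2,3$ are sample choices):

```python
import math

def double_simpson(g, n=200):
    # composite Simpson's rule on [0,1] x [0,1]; n must be even
    h = 1.0 / n
    w = lambda i: 1 if i in (0, n) else (4 if i % 2 else 2)
    s = sum(w(i) * w(j) * g(i * h, j * h)
            for i in range(n + 1) for j in range(n + 1))
    return s * h * h / 9

results = {}
for mode in (1, 2, 3):
    lap = lambda x, y, m=mode: (-2 * (m * math.pi)**2
                                * math.sin(m * math.pi * x)
                                * math.sin(m * math.pi * y))
    results[mode] = (double_simpson(lap), -2 * ((-1)**mode - 1)**2)
print(results)  # numeric value vs boxed closed form: -8, 0, -8
```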
|integration|greens-theorem|
0
Density of $\mathbb{Q}$ in $\mathbb{R}$ seemingly contradictory to infinite set cardinality
In my real analysis module we proved that both $\mathbb{Q}$ and $\mathbb{R \setminus Q}$ are dense in $\mathbb{R}$ , meaning that between any two real numbers there exists a rational and an irrational number respectively. In my own reading of set theory and the cardinality of infinite sets, I have learnt that $$\left| \mathbb{Q} \right| = \aleph_0 \text{ , } \left| \mathbb{R} \setminus \mathbb{Q} \right| = \aleph_1 \text{ , } \left| \mathbb{R} \right| = \aleph_1 \quad \colon \aleph_1 > \aleph_0$$ From this it appears that there is a contradiction, since given that the cardinality of the real numbers is strictly greater than the cardinality of the rational numbers, it doesn't follow that the rationals can be dense in the reals. That is to say that given that there are "more" real numbers than there are rational numbers, there simply aren't enough rationals to fill the gaps between the reals. Given that the cardinality of the irrational numbers is the same as the cardinality of the reals
The way I visualize it (vaguely) is that the irrationals are by definition the gaps between the rationals , not the other way around. As you move along the number line, you pass an irrational not every time you pass a rational but every time you finish passing an entire infinite set of rationals like $\{q\in \Bbb Q:q^2 < 2\}$ . Since the rationals are densely ordered, this is happening all the time, and it turns out there are more of these sets than there are rationals. The fact that there is a rational between any two irrationals doesn't imply that the sets have the same cardinality, because any such interval contains infinitely many elements of both sets; you can never "zoom in far enough" to see the elements "alternating" in a way that would provide a bijection.
|real-analysis|set-theory|intuition|fake-proofs|
0
Pointwise absolute and uniform convergence of a series of functions for $0<k<1$
To study for $k\in\Bbb R^+$ the pointwise, absolute and uniform convergence in $\Bbb R$ of the following series of functions: $$\sum_{n=1}^\infty\frac{\cos n^k x}{1+n^k}$$ One has obviously, for each $x\in\Bbb R$ : $$\left|\frac{\cos n^k x}{1+n^k}\right|\leq\frac1{1+n^k}\qquad\forall x\in\Bbb R,$$ and the series $\sum_{n=1}^\infty1/(1+n^k)$ converges if $k>1$ , so in that case there is total convergence (hence uniform, absolute and pointwise convergence over all $x\in\mathbb R$ ). If $k=1$ the series becomes $$\sum_{n=1}^\infty\frac{\cos(nx)}{1+n};$$ this series is shown, by the Abel-Dirichlet criterion, or by techniques peculiar to Fourier series, to be pointwise convergent, except for $x\in2\pi\Bbb Z$ ; and it is also shown to converge uniformly on any set that has strictly positive distance from $2\pi\Bbb Z$ , again by techniques related to this criterion. But I would not be able to discuss its absolute convergence, and I could say nothing in the case $0 < k < 1$ .
The answer is that the series never converges absolutely, while conditional convergence doesn't hold for any $x$ when $k \le 1/2$ and it does hold (uniformly for $x$ in compacts away from $0$ ) for $x \ne 0$ when $1/2 < k < 1$ . Since (by the Fejér criterion) it is known that for any $x \ne 0$ we have that $n^kx$ is uniformly distributed modulo $2\pi$ when $0 < k < 1$ , it immediately follows that the series can never converge absolutely in this case (as for every $N$ we can find a positive proportion $\sim cN$ of $N/2 \le n \le N$ , $c>0$ fixed, for which $|\cos n^kx| \ge 1/10$ say, so the partial sum from $N/2$ to $N$ of the absolute series is at least $cN^{1-k}/10$ , so it is unbounded). Note that $|\cos b - \cos a| \le |b-a|$ so $\cos b \ge \cos a -|b-a|$ . But for $1 \le m \le n$ and $0 < k < 1$ we have (by MVT) $|m^k - n^k| \le k(n-m)m^{k-1}$ so $\cos (m+j)^kx \ge \cos m^kx-kxjm^{k-1}$ so summing from $j=1,\dots,A$ we have $$\sum_{j=1}^A\frac{\cos (m+j)^kx }{1+(m+j)^k} \ge \cos m^kx\sum_{j=1}^A\frac{1 }{1+(m+j)^k}-kxm^{k
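The failure of absolute convergence for $0<k<1$ is easy to see numerically: the partial sums of $|\cos(n^kx)|/(1+n^k)$ grow roughly like $N^{1-k}$. A quick sketch with the sample values $k=1/2$, $x=1$:

```python
import math

def abs_partial_sum(N, k, x=1.0):
    # partial sum of the series of absolute values up to n = N
    return sum(abs(math.cos(n**k * x)) / (1 + n**k) for n in range(1, N + 1))

k = 0.5
sums = [abs_partial_sum(N, k) for N in (10**3, 10**4, 10**5)]
print(sums)  # each tenfold increase in N multiplies the sum by roughly 10**(1-k)
```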
|real-analysis|calculus|sequences-and-series|
1
Density of $\mathbb{Q}$ in $\mathbb{R}$ seemingly contradictory to infinite set cardinality
In my real analysis module we proved that both $\mathbb{Q}$ and $\mathbb{R \setminus Q}$ are dense in $\mathbb{R}$ , meaning that between any two real numbers there exists a rational and an irrational number respectively. In my own reading of set theory and the cardinality of infinite sets, I have learnt that $$\left| \mathbb{Q} \right| = \aleph_0 \text{ , } \left| \mathbb{R} \setminus \mathbb{Q} \right| = \aleph_1 \text{ , } \left| \mathbb{R} \right| = \aleph_1 \quad \colon \aleph_1 > \aleph_0$$ From this it appears that there is a contradiction, since given that the cardinality of the real numbers is strictly greater than the cardinality of the rational numbers, it doesn't follow that the rationals can be dense in the reals. That is to say that given that there are "more" real numbers than there are rational numbers, there simply aren't enough rationals to fill the gaps between the reals. Given that the cardinality of the irrational numbers is the same as the cardinality of the reals
Density is not about how many numbers you are considering; it is about how those numbers are distributed. $\Bbb Q$ and $\Bbb Z$ have the same cardinality, but one is dense and the other is not. The Cantor set $C$ and the set of irrationals have the same cardinality, yet one is nowhere dense (its closure has empty interior) and the other is dense (its closure is the whole $\Bbb R$ ).
|real-analysis|set-theory|intuition|fake-proofs|
0
Is there a fast, reasonable way to compute the approximate value of $\log(- \log x)$?
I have a practical application where it would be useful to have a fast way to compute the approximate value of $\log (- \log x)$, for $0 < x < 1$.
If you’re working in a language that provides access to the bit representation of floating point values (e.g. Java or C), you can use that to compute approximate logarithms: extract the exponent, multiply it by $\log2$ and add an approximation of the logarithm of the significand by linear interpolation from a table. Here’s Java code that does this with a table size of $1024$ and achieves two-digit precision; you can adjust the table size if you want more precision.
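The Java code referred to in the answer is not reproduced here. The following Python sketch implements the same idea, with `math.frexp` standing in for bit extraction of the exponent, and a $1024$-entry table of logarithms of the significand interpolated linearly (all names are my own):

```python
import math

TABLE_BITS = 10
# table of log(t) for t = 1, 1 + 1/1024, ..., 2 (inclusive endpoints)
TABLE = [math.log(1.0 + i / 2**TABLE_BITS) for i in range(2**TABLE_BITS + 1)]
LN2 = math.log(2.0)

def fast_log(x):
    # x = m * 2**e with 0.5 <= m < 1, so log x = e*ln2 + log m
    m, e = math.frexp(x)
    t = 2.0 * m                     # map m in [0.5, 1) to t in [1, 2)
    u = (t - 1.0) * 2**TABLE_BITS
    i = int(u)
    frac = u - i
    log_t = TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])  # linear interpolation
    return (e - 1) * LN2 + log_t

def fast_loglog(x):
    # log(-log x) for 0 < x < 1
    return fast_log(-fast_log(x))

print(fast_loglog(0.5), math.log(-math.log(0.5)))
```

Note that the relative error deteriorates as $x\to1^-$, since $-\log x$ is then tiny and the absolute interpolation error gets amplified by the outer logarithm.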
|numerical-methods|approximation|
0
Number of continuous points of a graph (Problem 34 from 97-99 Math GRE practice questions booklet)
Problem Statement: Let $f$ be a function with domain $[-1, 1]$ such that the coordinates of each point $(x, y)$ of its graph satisfy $x^2 + y^2 = 1$ . What is the total number of points at which $f$ is necessarily continuous? My confusion: I've been treating the first sentence as a roundabout way of saying that we have a circle of radius 1 centered at the origin. Then it seems "obvious" to me that $f$ is necessarily continuous at an infinite number of points. However, the correct answer is 2. What am I missing? Update: I now get that it'd be a half-circle not a full circle (since it's a function), but even more than that, I see that I was imagining $f$ as a continuous function...now I'm seeing it as more of a connect-the-dots semi-circle function, but then in this case, now I'd like to think that $f$ has no continuous points. Any hints on how to get to the middle ground of 2 continuous points?
As JonathanZ said, we are dealing with a function, not a graph. Think about the vertical line test. There are only two points, $(-1,0)$ and $(1,0)$ that pass the vertical line test, therefore $f(x)$ is continuous at these points. That's how you get the two points. In more detail: For a function to be continuous at x, it must be defined at x, its limit must exist at x, and the value of the function at that point must equal the value of the limit at x. If we have two possible points for our "function" $(x,y_1)$ and $(x,y_2)$ , we can deduce that one of $y_1,y_2$ is positive and the other is negative. Then we can construct a function that satisfies the conditions for $f(x)$ that alternates between points below and above the $x-$ axis. Then none of our $x$ have a defined limit and therefore can't be continuous at $x$ unless there is only one possible $y$ ; this only occurs at $(−1,0)$ and $(1,0)$ . So many of the math GRE problems are just based on attention to sneaky details rather than r
|continuity|graphing-functions|
0
Local $\partial \bar{\partial}$-lemma..
I am trying to prove the local $\partial \bar{\partial}$ lemma. This says that for a polydisc in $\mathbb{C}^{n}$, a form in $A^{p,q}(U)$ being $d$-closed implies that it is $\partial \bar{\partial}$-exact. I've tried to use the $\partial$ and $\bar{\partial}$ Poincaré lemmas to no avail. I'd like a hint as to where I should get started; I'd greatly appreciate it. EDIT: Due to my lack of conceptual tools and methods of approach, I guess I would like to try going down this avenue: Given that my form (say, $\alpha$) is $d$-closed, it is $\partial$-closed. By the Poincaré lemma, I can find a form $\gamma$ in $A^{p-1,q}(U)$ such that $\partial \gamma = \alpha$. As a stab in the dark, I would like to attempt to decompose $\gamma$ as $P + \bar{\partial}Q$ for a $\partial$-closed form $P$. If I can accomplish this, then $\partial \gamma = \partial (P + \bar{\partial}Q) = \partial \bar{\partial}Q$ and I'll be done. Do you think that this is a long shot? I'm just trying to get my hands dirty a
Jack's answer is not correct. We may not assume $\eta=\eta^{(p,q-1)}+\eta^{(p-1,q)}$ since we may need the terms of other types to cancel $\partial \eta^{(p,q-1)}$ and $\bar{\partial} \eta^{(p-1,q)}$ . Here is a correct approach. Let $n=p+q-1$ and decompose $\eta$ with $d\eta=\alpha$ into parts $\eta^{(k,n-k)}$ of type $(k,n-k)$ . By looking at the components of $d\eta$ of extremal type $(n+1,0)$ and $(0,n+1)$ we conclude $\partial \eta^{(n,0)}=\bar{\partial}\eta^{(0,n)}=0$ . By the Poincaré lemmas, $\eta^{(n,0)}=\partial \gamma_n$ and $\eta^{(0,n)}=\bar{\partial} \rho_n$ . And so, $\bar{\partial} \eta^{(n,0)}=-\partial\bar{\partial}\gamma_n$ and $\partial \eta^{(0,n)} = \partial\bar{\partial} \rho_n$ are $\partial\bar\partial$ -exact. In the case $p$ and/or $q>1$ , from the equation $d\eta = \alpha$ in types $(n,1)$ and $(1,n)$ we find that, $$\partial \eta^{(n-1,1)} = -\bar\partial\eta^{(n,0)} = \partial\bar\partial\gamma_n \quad\text{ and }\quad \bar\partial \eta^{(1,n-1)}=-\partial
|differential-geometry|differential-forms|complex-geometry|
0
Given a set of 2D vectors, determine whether a subset of them sums to a vector with two "large" coordinates
Consider the following decision problem: We have a set of vectors $S \subseteq \mathbb Z^2$ , and a target $k$ . Is there a subset $S' \subseteq S$ such that $\sum_{v \in S'} v$ has both coordinates greater than or equal to $k$ ? This question came up naturally in the course of doing some (hobby) research on another problem, and it would help if it were easy to quickly solve instances of this problem. Does this decision problem have a name? Is it tractable, is it NP-complete, or something else? It looks a lot like the classical knapsack decision problem. That one is NP-complete, but it becomes tractable if the weights are required to be "small" (i.e., they are at most polynomial in the size of the input). Does it help if $k$ and the coordinates are relatively small in the same way?
The problem is weakly $\mathcal{NP}$ -complete. Here is a reduction from the knapsack problem, with capacity $B$ , $n$ objects of weight $w_i$ and value $c_i$ , and objective value $K$ (we want to decide whether we can achieve objective value at least $K$ ). We take the coordinates $(K+B, 0), (-w_i, c_i)_{i=1...n}$ and target $k = K$ . We must include the first vector. We then need to find a set $S$ of coordinates $(-w_i, c_i)_{i\in S}$ such that $\sum_{i\in S}c_i \geq K$ and $K+B + \sum_{i\in S} -w_i\geq K$ , so $ \sum_{i\in S} w_i\leq B$ . Conversely, we can reduce to the knapsack problem. Given $(x_i, y_i)_{i=1...n}$ a set of coordinates and $K$ the target, we build a knapsack instance with capacity $B = \sum_{x_i\geq 0} x_i - K$ , objects of weight $|x_i|$ and value $-\text{sign}(x_i)y_i$ , and target $K-\sum_{x_i\geq 0} y_i$ (with $\text{sign}(0) = 1$ ).
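Weak NP-completeness suggests a pseudo-polynomial algorithm when the coordinates are small, answering the last question affirmatively: a DP over achievable first-coordinate sums that keeps, for each, the best second-coordinate sum. A sketch (runtime $O(n\cdot W)$ where $W$ is the range of first-coordinate subset sums; the function name is mine):

```python
def has_large_subset(vectors, k):
    # best[s] = max achievable second-coordinate sum over subsets
    # whose first coordinates sum to s - offset
    neg = sum(x for x, _ in vectors if x < 0)
    pos = sum(x for x, _ in vectors if x > 0)
    offset = -neg                        # shift so indices are >= 0
    size = pos - neg + 1
    NEG_INF = float('-inf')
    best = [NEG_INF] * size
    best[offset] = 0                     # the empty subset
    for x, y in vectors:
        nxt = best[:]
        for s in range(size):
            if best[s] > NEG_INF and 0 <= s + x < size:
                nxt[s + x] = max(nxt[s + x], best[s] + y)
        best = nxt
    # need first-coordinate sum >= k and second-coordinate sum >= k
    return any(best[s] >= k for s in range(offset + k, size))

print(has_large_subset([(3, -1), (-2, 5), (4, 2)], 4))  # True: all three sum to (5, 6)
```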
|reference-request|algorithms|computational-complexity|
1
Normal and surface area of a plane
Find the area of the portion of the plane $2x+3y+4z=28$ lying above the rectangle $1\le x \le 3$ , $2 \le y\le 5$ in the xy-plane. I know this is solved by $\|N\| = \sqrt{1+g_x^2+g_y^2}$ where $g(x,y)=(x,y,7-\frac{x}2 -\frac{3y}4 )$ . But how come I cannot simply use $\langle 2,3,4\rangle$ for the normal in the formula $\int_ {1}^{3} \int_{2}^{5} \|N(x,y)\| \,dx\,dy$ ?
The equation of the plane is not unique. For example, you could multiply the entire equation by 10 and still have the same plane but the normal vector would be 10 times longer, yielding 10 times more surface area with the given formulation. Recall that for the formula for surface area we need to start with a surface of the form $z = f(x,y)$ and work from there. Rearranging the plane equation into that form, we obtain $N = \langle -1/2, -3/4, 1 \rangle$ , so $\|N\| = \sqrt{29}/4$ .
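Since the integrand is constant over the rectangle, the area reduces to a single multiplication; a quick check, including the factor-of-$4$ discrepancy one gets from using the raw normal $\langle2,3,4\rangle$:

```python
import math

gx, gy = -0.5, -0.75                      # partials of z = 7 - x/2 - 3y/4
norm = math.sqrt(1 + gx**2 + gy**2)       # = sqrt(29)/4
area = norm * (3 - 1) * (5 - 2)           # integrand is constant on the rectangle
raw = math.sqrt(2**2 + 3**2 + 4**2) * 6   # using <2,3,4> directly: 4x too large
print(area, raw / area)
```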
|multivariable-calculus|
1
How to show that two generating functions share congruent coefficients
I have two generating functions. $A(x)= \frac{x^{\frac{m \left(m-1\right)}{2}}}{\left(1-x\right)\cdot \left(1-x^{2}\right)\cdot ...\cdot \left(1-x^{m}\right)}$ $B(x)= \frac{A(x)}{x^t} = \frac{x^{\frac{m \left(m-1\right)}{2}}}{\left(1-x\right)\cdot \left(1-x^{2}\right)\cdot ...\cdot \left(1-x^{m}\right) \cdot x^t}$ If $m=2$ and $t=4$ we get: $A(x)= \frac{x}{\left(1-x\right)\cdot \left(1-x^{2}\right)} = x+x^{2}+2 x^{3}+2 x^{4}+3 x^{5}+3 x^{6}+4 x^{7}+4 x^{8}+5 x^{9} + ...$ $B(x)= \frac{1}{\left(1-x\right)\cdot \left(1-x^{2}\right) \cdot x^3 } =x^{-3}+x^{-2}+2 x^{-1}+2+3 x+3 x^{2}+4 x^{3}+4 x^{4}+5 x^{5}+5 x^{6}+6 x^{7}+6 x^{8}+7 x^{9} $ We see that the coefficients are congruent modulo $m=2$ , for the terms with positive exponents. I.e. $3 = 1 \mod 2$ $4 = 2 \mod 2$ $5 = 3 \mod 2$ ... I have been able to prove this for the pairs $t,m = (2,4), (3,18)$ by considering the coefficients of the the partial fraction decomposition of $B(x)-A(x)$ which for $m=2$ and $t=4$ is: $\frac{2}{1-x}+\frac
I would do the computations in the Laurent power series ring in the variable $x$ with coefficients in the field $$ F=\Bbb F_2 $$ with two elements, $0,1$ . Then for instance for the case $m=2$ and $t=4$ the computation is, writing $A=A_{m,t}$ , and $B=B_{m,t}$ for the corresponding $A$ -polynomial and $B$ -polynomial: $$ \begin{aligned} B(x)-A(x)&=(1-x^4)\cdot B(x) \\ &=\frac {1-x^4}{(1-x)(1-x^2)x^3}=\frac{(1+x)^4}{(1+x)(1+x)^2x^3}= \color{brown}{\frac {1+x}{x^3}} = \color{blue}{\frac1{x^3}+\frac1{x^2}}\ , \end{aligned} $$ so the power series for $A$ , $B$ coincide in all powers (in $F$ , or considered modulo $2$ ) except for degrees $-3,-2$ . The next case, $m=3$ and $t=18$ , is similar; we are working now in characteristic $3$ , i.e. over the field $$ F=\Bbb F_3\ , $$ I saw this relatively late, and the similar computation is $$ \begin{aligned} &B(x)-A(x)\\ &\ =(1-x^{18})\cdot B(x) \\ &\ =\frac {1-x^{18}}{(1-x)(1-x^2)(1-x^3)x^{15}} =\frac{(1-x^2)^9}{(1-x)\;(1-x)(1+x)\;(1-x)^3\;x^{15}}
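The $m=2$, $t=4$ congruence can also be machine-checked, using the closed form $[x^j]\,\frac{1}{(1-x)(1-x^2)}=\lfloor j/2\rfloor+1$ (the number of partitions of $j$ into parts $1$ and $2$); the shift relating $B$ to $A$ is the one from the question:

```python
# c[j] = [x^j] 1/((1-x)(1-x^2)) = floor(j/2) + 1
N = 200
c = [j // 2 + 1 for j in range(N)]

# A(x) = x/((1-x)(1-x^2))  ->  [x^n] A = c[n-1] for n >= 1
# B(x) = A(x)/x^4          ->  [x^n] B = c[n+3]
for n in range(1, N - 4):
    assert (c[n - 1] - c[n + 3]) % 2 == 0   # coefficients agree mod m = 2
print("congruence mod 2 verified for n = 1 ..", N - 5)
```

Indeed $c_{j+4}=c_j+2$, which makes the mod-$2$ agreement immediate, consistent with the answer's observation that $B-A$ only differs from $0$ in degrees $-3,-2$.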
|modular-arithmetic|generating-functions|partial-fractions|
1
What is the origin of the terms fences, funnels, and separatrix in differential equations?
What is the origin of the terms fences, funnels, and separatrix in differential equations? I am not asking what they mean but rather who introduced them and how. They seem to be very recent terms, not used in the classical literature; at the same time, they seem to be taken for granted in some contemporary sources. I found that Hubbard's paper uses fence and funnel , but it's not clear from reading if he is introducing them or simply referring to them.
I'm pretty sure "separatrix" is older than "fence" and "funnel". For example, Lefschetz " Lectures on Differential Equations " from 1946 contains "separatrix". [EDIT] It seems "fence" and "funnel" were introduced in 1991 in Hubbard and West's book "Differential Equations: A Dynamical Systems Approach". The Preface says "In order to accomplish this goal, we must introduce some new terminology right at the beginning, in Chapter 1. The descriptive terms "fence," "funnel," and "antifunnel" serve to label simple phenomena that have exceedingly useful properties not exploited in traditional treatments of differential equations."
|geometry|ordinary-differential-equations|reference-request|terminology|math-history|
1
Show that $f$ is differentiable at $a$ if and only if $f(tx+(1-t)a)$ has derivative at $0$ for all $x$
In the book "A Graduate Course on Statistical Inference" (as you can see in this link) there's the following passage: "[a function] $f(\theta )$ is differentiable at $\theta _0$ if and only if, for every $\theta$ in a neighborhood of $\theta_0$ , the function $f((1-t)\theta_0+t\theta )$ is differentiable with respect to $t$ at $t=0$ ". I think what the book meant is: Let $U\subseteq\mathbb{R}^m$ be open, $a\in U$ and $f:U\to \mathbb{R}$ . Then the following propositions are equivalent: $f$ is differentiable at $a$ There are $r,s>0$ such that the function $g_x:(-s,s)\to \mathbb{R}$ given by $g_x(t):=f(tx+(1-t)a)$ is well defined and has derivative at $0$ for all $x\in B_r(a)\subseteq U$ . My questions are: Did I understand correctly what the book meant? If no, what did the book want to say? If yes, how can I prove implication $(2)\Rightarrow (1)$ ? I was able to prove the implication $(1)\Rightarrow (2)$ , however I don't know how to prove the other implication. I know that if there i
It is not true. The requirement For every $\theta$ in a neighborhood of $\theta_0$ [which we call $V$ ], the function $f((1-t)\theta_0+t\theta)$ is differentiable with respect to $t$ at $t=0$ means nothing else than that all directional derivatives of $f$ at $\theta_0$ exist. In fact, consider $f_{\theta}(t) := f((1-t)\theta_0+t\theta) = f(\theta_0 + t(\theta - \theta_0))$ . For each $v \in \mathbb R^p$ there exists $r > 0$ such that $\theta_0 + tv \in V$ for $\lvert t \rvert \le r$ . Let $\theta = \theta_0 + rv$ . Then $$\frac{\partial f}{\partial v}(\theta_0) = \lim_{t \to 0}\frac{f(\theta_0 + tv)- f(\theta_0)}{t} = \lim_{t \to 0}\frac{f(\theta_0 + t\frac{\theta - \theta_0}{r})- f(\theta_0)}{t} \\= r\lim_{t \to 0}\frac{f(\theta_0 + \frac t r (\theta - \theta_0))- f(\theta_0)}{\frac t r} = r\lim_{s \to 0}\frac{f(\theta_0 + s (\theta - \theta_0))- f(\theta_0)}{s} \\= r\lim_{s \to 0}\frac{f_\theta(s) -f_\theta(0)}{s} = rf'_\theta(0).$$ It is well-known that the exi
|analysis|multivariable-calculus|derivatives|normed-spaces|
1
Curvature of a connection $\nabla + A$ on a vector bundle $E$.
Let $E$ be a vector bundle over a smooth manifold $M$ and $\nabla$ a connection on $E$ . If $A$ is an $\text{End}(E)$ -valued $1$ -form do we have that $$F_{\nabla+A}=F_\nabla+\nabla A+ A \wedge A?$$ I'm a bit confused about the term $\nabla A$ in this expression as $A$ is not really a section of $E$ so I do not know what does this mean.
Write $D=\nabla+A$ . Note that $\nabla$ being a connection on $E$ induces in an obvious manner a connection on $\text{End}(E)$ ; we shall continue to denote this as $\nabla$ . Also, $A$ is an $\text{End}(E)$ -valued $1$ -form, so we can take the exterior covariant derivative $d_{\nabla^{\text{End}(E)}}A$ , which again, I’ll henceforth just write as $d_{\nabla}A$ . Assuming you followed along with the various stuff about vector-bundle valued forms I told you about in previous answers, we have: for any $E$ -valued $k$ -form $\psi$ (we can actually just stick to $k=0$ , but it doesn’t simplify the algebra one bit), \begin{align} R_{D}\wedge_{\text{ev}}\psi &=d_{D}^2\psi\\ &=d_{\nabla}(d_D\psi)+A\wedge_{\text{ev}}d_D\psi\\ &=d_{\nabla}(d_{\nabla}\psi+A\wedge_{\text{ev}}\psi)+ A\wedge_{\text{ev}}(d_{\nabla}\psi+A\wedge_{\text{ev}}\psi)\\ &=d_{\nabla}^2\psi+\underbrace{d_{\nabla}(A\wedge_{\text{ev}}\psi)+A\wedge_{\text{ev}}d_{\nabla}\psi}_{=d_{\nabla}A\wedge_{\text{ev}}\psi}+\underbrace{A\we
|differential-geometry|
0
Showing TFAE for a compact metric space $X$.
Let $X$ be a compact metric space and $\mu,\mu_n$ Borel finite measures such that $\mu_n(X) \to \mu(X)$ . Prove TFAE: $\int f d \mu_n \to \int f d \mu$ for all $f \in C(X)$ . $\limsup_{n \to \infty} \mu_n(F) \leq \mu(F)$ for all closed $F \subset X$ . $\liminf_{n \to \infty} \mu_n(G) \geq \mu(G)$ for all open $G \subset X$ . $\lim_{n \to \infty}\mu_n(A)=\mu(A)$ whenever $A \subset X$ is Borel and the measure of the boundary is zero. So for $(1) \Rightarrow (2)$ I used the fact that if $X$ has finite measure and the range of the function has min and max values then the integral on the space is bounded below by the min value times the measure of the space and bounded above by the max value times the measure of the space, so let $a \leq f(x) \leq b$ for $a,b \in \Bbb{R}$ and $x \in X$ . Then I have $$a\mu_n(F) - b\mu(F) \leq \int_F f d \mu_n - \int_F f d \mu \leq b \mu_n(F) - a\mu(F).$$ So I get $\mu_n(F) \leq \mu(F)$ . For $(2) \Rightarrow (3)$ I said since $\mu_n(X) \to \mu(X)$ , fo
For (a) implies (b): I claim for any closed $F \subset X$ and any finite Borel measure $\nu$ that $$\nu(F)=\inf \{\int f d \nu: f \in C(X), f \geq \chi_F\}.$$ Clearly, if $f \geq \chi_F$ , then $\nu(F) \leq \int f d \nu$ ; it remains to show the other inequality. For any $G \supset F$ open, there exists $f \in C(X)$ such that $f(x) \in [0,1]$ and $f=1$ on $F$ and $0$ on $G^c$ , thus \begin{align} \inf_{f \geq \chi_F} \int f d \nu &\leq\inf_{G \supset F, \text{$G$ open}} \nu(G)\\ &=\nu(F). \end{align} And our claim is proven. So let $F \subset X$ be closed. Then \begin{align} \mu(F)&=\inf_{f \geq \chi_F}\{\int f d \mu\} &&\text{by our claim}\\ &=\inf_{f \geq \chi_F} \lim_{n \to \infty} \int f d \mu_n && \text{by (a)}. \end{align} But \begin{align} \lim_{n \to \infty} \int f d \mu_n&=\limsup_{n \to \infty} \int f d\mu_n\\ &\geq\limsup_{n \to \infty} \mu_n(F), \end{align} and we are done. Someone besides me upvote this if it’s correct!
|real-analysis|measure-theory|solution-verification|
1
Quantifiers uniform continuity
According to this answer: https://math.stackexchange.com/a/2582334/1098426 We know $\forall x \ \exists y \ \forall z$ differs from $\forall x \forall z\ \exists y$ insofar as $y$ depends on $x$ versus $y$ depends on $x$ and $z$ . I traditionally translate the definition of uniform continuity on some set $A\subseteq\mathbb{R}$ as $\forall \epsilon > 0 \ \exists d > 0 \ \forall x,y \in A \ : |x-y| < d \rightarrow |f(x)-f(y)| < \epsilon$ . Edit: The flaw in my original statement is this: $d$ is independent of $x$ and $y$ . Including the $\forall$ as first quantifier suggests a different $d$ for different $x$ and $y$ . Could this be written with the third quantifier as the first? $\forall x,y \in A \ \forall \epsilon > 0 \ \exists d > 0 \ : |x-y| < d \rightarrow |f(x)-f(y)| < \epsilon$ Typical continuity then would be something like: $\exists c \in A \ \forall \epsilon > 0 \ \exists d > 0 \ : |x-c| < d \rightarrow |f(x)-f(c)| < \epsilon$ Is this a reasonable interpretation? The revised question: If the definition of uniformly continuous is $\forall \epsilon > 0
The formal definition of " $f$ is continuous on $A$ " is $$ \forall x\in A,\ \forall\epsilon>0,\ \exists \delta>0,\ \forall y\in A: |x-y|<\delta \implies |f(x)-f(y)|<\epsilon. \tag{1} $$ Note that this is just saying " $f$ is continuous at every point $x\in A$ ," where for fixed $x\in A$ , " $f$ is continuous at $x\in A$ " means $$ \forall\epsilon>0,\ \exists\delta>0,\ \forall y\in A:|x-y|<\delta \implies |f(x)-f(y)|<\epsilon. $$ When you compare $(1)$ with the definition of " $f$ is uniformly continuous on $A$ ": $$ \forall\epsilon>0,\ \exists\delta>0,\ \forall x,y\in A:|x-y|<\delta \implies |f(x)-f(y)|<\epsilon, $$ we see that " $f$ is uniformly continuous on $A$ " implies " $f$ is continuous on $A$ " in a formal way.
|real-analysis|calculus|definition|uniform-continuity|quantifiers|
1
Ask the proof of compact linear transformations are necessarily bounded.
A linear transformation $T$ from a normed space $X_1$ into another normed space $X_2$ is compact if for any bounded sequence $\{ x_n\} \subset X_1$ , $\{Tx_n \}$ contains a convergent subsequence in $X_2$ . Note that compact linear transformations are necessarily bounded. The textbook gives an outline of proof: Suppose instead that $T$ were unbounded. Then we could find a bounded sequence $\{ x_n\} \subset X_1$ for which $\|Tx_n\|_2 \geq n$ for each $n$ and, consequently, $\{Tx_n \}$ would not contain a convergent subsequence. I am trying to give a more rigorous justification of the step "we could find a bounded sequence $\{ x_n\} \subset X_1$ for which $\|Tx_n\|_2 \geq n$ for each $n$ ". An operator $T$ is said to be unbounded if there does not exist a constant $M > 0$ such that $\|Tx\|_2 \leq M\|x\|_1$ for all $x \in X_1$ . In formal terms, this means: $$\nexists M > 0 : \|Tx\|_2 \leq M\|x\|_1, \forall x \in X_1.$$ Given that $T$ is unbounded, for each positive integer $n$ , there exists a vector $x_n \in X_1$
It is worth mentioning that each time you found a vector $x_{n} \in X_{1}$ such that $\|T(x_{n}) \|_{2} > n\|x_{n}\|_{1}$ , it follows that $x_{n} \neq 0$ , so that is why you can divide by $\|x_{n}\|_{1}$ as it is positive. You had the right idea, but note that also dividing by a factor of $n$ in your $y_{n}$ terms caused a problem later on when looking at $\|T(y_{n})\|_{2}$ . Instead, for each $n\in\mathbb{N}$ , try defining $$y_{n} := \frac{1}{\|x_{n}\|_{1}}x_{n}.$$ Then you can check that $\|y_{n}\|_{1} = 1$ for each $n\in\mathbb{N}$ , so that $(y_{n})_{n\in\mathbb{N}}$ is a bounded sequence. From there, you can show that $\|T(y_{n})\|_{2} > n$ for each $n\in\mathbb{N}$ . This obtains your desired result. It is worth noting that if you use that a linear operator from one normed space into another normed space is bounded if and only if its image on the closed unit ball is bounded, then the result is almost immediate. Because if $T$ is not bounded, then $\{ \|T(x)\|_{2} : x\in B_{X_{
|functional-analysis|
1
Question in proof about differentiation of Fourier Series ( Manfred Stoll's Real Analysis book )
I am reading Manfred Stoll's Introduction to Real Analysis, the proof of Theorem 9.5.8, and am stuck at one statement. 9.5.8. THEOREM Let $f$ be a continuous function on $[-\pi, \pi]$ with $f(-\pi)=f(\pi)$ , and let $f'$ be piecewise continuous on $[-\pi,\pi]$ . If $$f(x)=\frac{1}{2}a_0 + \sum_{n=1}^{\infty}(a_n \cos nx+b_n\sin nx) , \quad x\in [-\pi,\pi] ,$$ is the Fourier series of $f$ , then at each $x\in (-\pi, \pi)$ where $f''(x)$ exists, $$ f'(x)=\sum_{n=1}^{\infty}(-na_n\sin nx+nb_n\cos nx).$$ First I state the main question; it is asserted in the proof of Theorem 9.5.8 in Stoll's book. Q. Let $f$ be a continuous function on $[-\pi , \pi]$ with $f( -\pi) =f(\pi)$ , and let $f'$ be piecewise continuous on $[-\pi,\pi]$ . Fix $x_0 \in (-\pi , \pi)$ such that $f''(x_0)$ exists. Then since $f'$ is continuous at $x_0$ , by Dirichlet's theorem, is $f'(x_0)= \lim_{n\to \infty} S_n(f')(x_0)$ ? Here $S_n(f')(x_0)$ is the $n$ th partial
Let me answer my original question. Let $I:=[-\pi,\pi]$ . Since $f'$ is (right) continuous at $x_0 \in \operatorname{int}I:=(-\pi, \pi) $ , $f'(x_0)=f'(x_0^+)$ ( $\because$ Manfred's book 4.4.3. theorem ). So, $$ \lim_{t\to 0} \frac{f'(x_0+t)-f'(x_0^+)}{t} = \lim_{t\to 0} \frac{f'(x_0+t)-f'(x_0)}{t} =: f''(x_0)$$ So, for $\epsilon >0 $ , there exists $\delta>0$ such that if $0 < t < \delta$ , then $| \frac{f'(x_0+t) - f'(x_0^+)}{t} - f''(x_0) | < \epsilon$ , so that $|f'(x_0 +t) - f'(x_0^+)| < (|f''(x_0)| + \epsilon)\,t$ . Letting $M:=|f''(x_0)| + \epsilon$ , we have $$ |f'(x_0 +t)-f'(x_0^+) | \le Mt $$ for all $t$ with $0 < t < \delta$ . Similarly, we can also show the existence of a constant $M$ and a $\delta>0$ such that $$ |f'(x_0-t)-f'(x_0^-)| \le Mt $$ for all $t$ with $0 < t < \delta$ . Thus $f'$ satisfies the hypothesis of Dirichlet's theorem. (Correct?)
|real-analysis|fourier-analysis|fourier-series|
1
Prove that $b_n = [x^n]\frac{1-x^2+x^3}{1-2x+x^2-x^3}$
A block of a $\{0,1\}$ -string is a maximal nonempty substring consisting only of $0$ s or only of $1$ s. Let $b_n$ be the number of $\{0, 1\}$ strings of length $n$ in which no block has length exactly two. Use the fact that $\{0,1\}^* = \{1\}^* (\{0\}\{0\}^*\{1\}\{1\}^*)^*\{0\}^*$ to prove that $b_n = [x^n]\frac{1-x^2+x^3}{1-2x+x^2-x^3}$ . Here is what I have done so far. Let $S$ be the set of strings in $\{0,1\}$ of length $n$ where no block has length exactly two. We know that \begin{align} \{0,1\}^* &= \{1\}^* (\{0\}\{0\}^*\{1\}\{1\}^*)^*\{0\}^* \\ &= \{\epsilon, 1, 111, \dots\}(\{0\}\{\epsilon,0,000,\dots\}\{1\}\{\epsilon,1,111,\dots\})^*\{\epsilon,0,000,\dots\} \end{align} We see that \begin{align} \Psi(S) &= \Psi(\{\epsilon, 1, 111, \dots\})\Psi(\{0\}\{\epsilon, 0,000,\dots\}\{1\}\{\epsilon,1,111,\dots\})^*)\Psi(\{\epsilon,0,000,\dots\}) \\ &= \Psi(1+x+x^3+\dots)\Psi(1-x^2(1+x+x^3+\dots)^2)^{-1}\Psi(1+x+x^3+\dots), \end{align} and now I'm stuck. Probably this doesn't make any s
\begin{eqnarray*} \{0,1\}^* &= \underbrace{\{1\}^*}_{1+x+\frac{x^3}{1-x}} \underbrace{(\{0\}\{0\}^*\{1\}\{1\}^*)^*}_{\frac{1}{1-\left(x^2+\frac{2x^4}{1-x}+\frac{x^6}{(1-x)^2} \right)}} \underbrace{\{0\}^* }_{1+x+\frac{x^3}{1-x}}\\ \end{eqnarray*} After a bit of algebra & note $1-2x+2x^3-3x^4+2x^5-x^6=(1-2x+x^2-x^3)(1-x^2+x^3)$ ... the result follows.
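Both the counting claim and the series can be machine-checked; a brute-force sketch (the recurrence below just encodes the denominator $1-2x+x^2-x^3$, with initial terms read off by expanding the rational function):

```python
from itertools import product, groupby

def brute(n):
    """Count {0,1}-strings of length n in which no maximal block has length 2."""
    return sum(
        1
        for s in product("01", repeat=n)
        if all(len(list(g)) != 2 for _, g in groupby(s))
    )

# Series of (1 - x^2 + x^3)/(1 - 2x + x^2 - x^3): the denominator gives
# b_n = 2 b_{n-1} - b_{n-2} + b_{n-3} for n >= 4; first terms by expansion.
b = [1, 2, 2, 4]
for n in range(4, 15):
    b.append(2 * b[n - 1] - b[n - 2] + b[n - 3])

assert all(b[n] == brute(n) for n in range(1, 15))
print(b[:8])   # [1, 2, 2, 4, 8, 14, 24, 42]
```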
|combinatorics|generating-functions|bit-strings|
1
How to construct an algebraic function that is not rational?
How to construct an algebraic, non-rational function $f(x)$ such that $$f(x)= \sum_{i=1}^{\infty}a_i x^i$$ with $a_1, a_2,\dots, a_i,\dots \in \mathbb{N}$ ? A reference is appreciated. Update : an instance is $$f(x)=\frac{2}{1+\sqrt{1-4x}} =\sum C_n x^n$$ with $C_n$ being the $n$ th Catalan number. Thanks to Jyrki Lahtonen. And some more are here. Hoping for more examples.
Something like the Catalan series produces the Patalan numbers (somebody named them that way): $$\frac{1-(1-p^2 x)^{\frac{1}{p}}}{p x} = \sum_{n\ge 0} C_{p,n} x^n$$ with $C_{p,n}$ positive integers for every nonzero integer $p$ (for $p=2$ we get $C_{2,n} = C_n$ , the usual Catalan numbers). From here one also concludes that $$-\frac{1-(1-p^2 x)^{-\frac{q}{p}}}{p x}$$ also has integer coefficients, which are, for every $n$ , $$\frac{p^{n-1}q(q+p)\cdots (q+(n-1) p)}{n!}$$ (and an older result from this)
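The claimed integrality of the first family is easy to test with exact rational arithmetic; a sketch expanding $(1-p^2x)^{1/p}$ via the binomial series:

```python
from fractions import Fraction

def patalan(p, n):
    """[x^n] of (1 - (1 - p^2 x)^(1/p)) / (p x), via exact rationals."""
    a = Fraction(1, p)
    k = n + 1
    binom = Fraction(1)             # generalized binomial coefficient C(1/p, k)
    for i in range(k):
        binom *= (a - i)
        binom /= (i + 1)
    # (1 - p^2 x)^(1/p) = sum_k C(1/p, k) (-p^2 x)^k, so
    # [x^n] of the whole expression is -C(1/p, n+1) (-p^2)^(n+1) / p
    return -binom * Fraction(-p * p) ** k / p

for p in (2, 3, 5):
    for n in range(10):
        c = patalan(p, n)
        assert c.denominator == 1 and c > 0   # a positive integer
print([patalan(2, n) for n in range(6)])      # Catalan numbers for p = 2
```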
|functions|algebraic-geometry|reference-request|power-series|
0
intuitive understanding when negating quantifiers with sets ("for some" to "union")
Consider the following statements regarding union and their equivalent statements ( $\alpha$ indexes a family of sets $A_\alpha$ ): $x \in \bigcup_\alpha A_\alpha \iff x \in A_\alpha \text{ for some } \alpha$ $x \notin \bigcup_\alpha A_\alpha \iff x \notin A_\alpha \text{ for all } \alpha$ And the following for intersection: $x \in \bigcap_\alpha A_\alpha \iff x \in A_\alpha \text{ for all } \alpha$ $x \notin \bigcap_\alpha A_\alpha \iff x \notin A_\alpha \text{ for some } \alpha$ So far so good. Now consider the following, which uses the above results, where $A_\alpha$ are subsets of $X$ : $$\begin{align}x \in X \setminus (\bigcup_\alpha A_\alpha) &\iff (x \in X) \land (x \notin \bigcup_\alpha A_\alpha) \\ \\ &\iff (x \in X) \land (x \notin A_\alpha \text{ for all } \alpha) \\ \\ &\iff x \in \bigcap_\alpha (X \setminus A_\alpha) \end{align}$$ The last step seems to make sense to me - the "for all $\alpha$ " translates into an intersection. (It is possible my confidence in understanding this
Rather than using the symbols, try to reason it out in natural language. The first two equivalences simply expand what is meant by the symbols. For the last one, if $x \in X$ and there exists an $A_\alpha$ that does not contain $x$ , then $x$ is not in the intersection of the $A_\alpha$ . So there is some $X\setminus A_\alpha$ that does contain $x$ . Therefore $x$ must be in the union of the $X\setminus A_\alpha$ , which is what was to be shown.
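The identity itself is also easy to sanity-check on a finite example; a small sketch with an arbitrary family of three subsets:

```python
X = set(range(10))
family = [{0, 1, 2}, {2, 3, 4}, {4, 5}]   # an arbitrary indexed family A_alpha

# x not in the union  <=>  x lies in every complement
lhs = X - set().union(*family)            # X \ (union of the A_alpha)
rhs = set(X)
for A in family:
    rhs &= X - A                          # intersection of the X \ A_alpha
assert lhs == rhs
print(lhs)                                # {6, 7, 8, 9}
```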
|elementary-set-theory|
0
$L^p(T)$ is separable for $1 \le p < \infty$, but $L^\infty(T)$ is not separable
Update: I understand the separability of $L^p(T)$ now, but I'm unable to prove the non-separability of $L^\infty(T)$ . Could someone please provide some more details/hints? Thank you! Prove that $L^p(T)$ is separable for $1 \le p < \infty$ , and $L^\infty(T)$ is not separable. $T$ is the unit circle in $\mathbb C$ , i.e. $T = \{z\in\mathbb C: |z| = 1\}$ . We define $L^p(T)$ for $1\le p < \infty$ as the class of all complex, Lebesgue measurable, $2\pi$ -periodic functions on $\mathbb R^1$ , for which the norm $$\|f\|_p = \left\{\frac{1}{2\pi}\int_{-\pi}^\pi |f(t)|^p\, dt \right\}^{1/p}$$ is finite. We define $L^\infty(T)$ similarly. First of all, what metric do we use to talk about density here? I assume we're using the metric induced by the $L^p$ norm. Attempt for $L^p(T)$ with $p\in [1,\infty)$ : I am trying to explicitly construct a countable dense subset of $L^p(T)$ . I have a proposal for a countable dense subset, which I am sure is countable, but I'm unable to show it is dense. In case it is not dens
Here is a short proof for $p \in [1,\infty)$ : Note $C([0,1]) \subset L^p([0,1])$ is dense and separable. Let $A \subset C([0,1])$ be a countable dense subset. Fix $f \in L^p([0,1])$ and $\epsilon>0$ ; then there exists a $g \in C([0,1])$ such that $$\vert \vert g - f \vert \vert_p < \epsilon.$$ Furthermore, as $C([0,1])$ is separable in the uniform norm, there exists some $h \in A$ such that $$\vert \vert h - g \vert \vert_\infty < \epsilon.$$ Observe then that \begin{align} \vert \vert h - g \vert \vert_p&=\bigg(\int \vert h - g \vert^p\bigg)^{\frac{1}{p}}\\ &\leq (m([0,1])\vert \vert h - g \vert \vert_\infty^p)^{\frac{1}{p}}\\ &=\vert \vert h - g \vert \vert_\infty. \end{align} Thus by the triangle inequality, \begin{align} \vert \vert h - f \vert \vert_p &\leq \vert \vert h - g \vert \vert_p + \vert \vert g - f \vert \vert_p\\ &< 2\epsilon. \end{align} As $\epsilon$ was arbitrary, $A$ is dense in $L^p$ .
|real-analysis|functional-analysis|separable-spaces|
0
Turns of closed path $\lambda$ around origin is related to area of paralelogram $\lambda(t), \lambda'(t)$
Given the closed path $\lambda:[a,b]\to\mathbb{R^2}-\{0\}$, $C^1$, call $A(t)$ the 'oriented area' of the parallelogram determined by the vectors $\lambda(t)$ and $\lambda'(t)$ for each $t\in [a,b]$. Show that the number of turns that $\lambda$ makes around the origin is: $$n(\lambda, 0) = \frac{1}{2\pi} \int_a^b \frac{A(t)}{|\lambda(t)|^2}\ dt$$ The oriented area of a parallelogram whose sides are given by vectors $v,w\in \mathbb{R^2}$, in this order, is positive when the rotation from $v$ to $w$ is through the smallest anti-clockwise angle, negative otherwise, and $0$ if the vectors are linearly dependent. As I understood it, for each $t$ we have one $\lambda(t)$ and one $\lambda'(t)$. We must consider the parallelogram formed by these $2$ vectors for each $t$ and call $A(t)$ its area at the point $t$. But I honestly can't see a relation from this to turns around the origin.
A bit of an old reply, but I've been thinking about this exercise and, unless I missed something, there's a much simpler way to solve it. I believe I'm reading the same book OP is/was reading. In there, the number of turns around the origin is defined as $$n(\lambda; 0) = \frac{1}{2\pi}\int_{\lambda} \delta \theta,\text{ where }\delta \theta = -\frac{y}{x^{2}+y^{2}}dx + \frac{x}{x^2+y^{2}}dy.$$ Now, since $\lambda$ is a $C^{1}$ path and $\delta \theta$ is a continuous $1$ -form in $\mathbb{R}^{2}-\{0\}$ , we have the following: $$\int_{\lambda} \delta \theta = \int_{a}^{b} \delta \theta(\lambda(t)) \cdot\lambda'(t) dt = \int_{a}^{b} \frac{-\lambda_{2}(t)\lambda_{1}'(t) + \lambda_{1}(t) \lambda_2'(t)}{|\lambda(t)|^{2}}dt.$$ Note that the expression in the numerator is the determinant of the matrix $\Lambda(t)$ , where $$\Lambda(t) = \begin{bmatrix} \lambda_1(t) & \lambda_1'(t) \\ \lambda_2(t) & \lambda_2'(t) \end{bmatrix},$$ which is exactly the signed area determined by $\lambda(t)$ and $\lambda'(t)$.
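The formula is also easy to check numerically for a concrete path; a sketch using the midpoint rule, with a circle of radius $2$ traversed twice (so the winding number should be $2$):

```python
import math

def winding(path, dpath, a, b, steps=100_000):
    """(1/2pi) * integral of det[path, path'] / |path|^2, midpoint rule."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        t = a + (i + 0.5) * h
        x, y = path(t)
        dx, dy = dpath(t)
        total += (x * dy - y * dx) / (x * x + y * y) * h
    return total / (2 * math.pi)

# circle of radius 2 traversed twice, counterclockwise
n2 = winding(lambda t: (2 * math.cos(t), 2 * math.sin(t)),
             lambda t: (-2 * math.sin(t), 2 * math.cos(t)),
             0.0, 4 * math.pi)
print(round(n2))   # 2
```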
|calculus|real-analysis|integration|differential-geometry|
0
Finding bound on the solution of systems
Consider $$ \begin{gathered} \dot{x}_1=x_1 x_2 \\ \dot{x}_2=-x_2 . \end{gathered} $$ Show that $$ \left|x_1(t)\right|+\left|x_2(t)\right| \leq \alpha\left(\left|x_1(0)\right|+\left|x_2(0)\right|\right), $$ where $$ \alpha(s) \triangleq s e^s . $$ Here $\alpha$ denotes K class function. I found solution of $x_2$ and have $x_2(t)=x_2(0)e^{-t}$ . Then I plugged that in the first equation and got $x_1(t)=x_1(0)e^{e^{-t}}$ . I am not sure how to proceed with this proof.
First, a correction: the solution to $\dot{x}_1=x_1x_2=x_1x_2(0)e^{-t}$ is $$ x_1(t)=x_1(0)e^{x_2(0)(1-e^{-t})}. \tag{1} $$ Now we have a sequence of straightforward inequalities: if $t\geq 0$ , then $$ |x_2(t)|=|x_2(0)e^{-t}|\leq |x_2(0)| \tag{2} $$ and $$ |x_1(t)|=\left|x_1(0)e^{x_2(0)(1-e^{-t})} \right| \leq |x_1(0)|e^{|x_2(0)(1-e^{-t})|} \leq |x_1(0)|e^{|x_2(0)|}. \tag{3} $$ Therefore, \begin{align} |x_1(t)|+|x_2(t)|&\leq |x_1(0)|e^{|x_2(0)|}+|x_2(0)| \\ &\leq (|x_1(0)|+|x_2(0)|)e^{|x_2(0)|} \\ &\leq (|x_1(0)|+|x_2(0)|)e^{|x_1(0)|+|x_2(0)|} \\ &=\alpha(|x_1(0)|+|x_2(0)|). \quad{\square} \tag{4} \end{align}
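A quick numeric sanity check of the chain of inequalities, using the closed-form solution $(1)$ and a few arbitrary initial conditions (the sample values are my choice):

```python
import math

def alpha(s):
    # the class-K function alpha(s) = s * e^s from the problem statement
    return s * math.exp(s)

for x10, x20 in [(1.5, -2.0), (-0.3, 0.7), (2.0, 1.0)]:
    bound = alpha(abs(x10) + abs(x20))
    for k in range(400):
        t = 0.05 * k
        x2 = x20 * math.exp(-t)                            # x2(t)
        x1 = x10 * math.exp(x20 * (1.0 - math.exp(-t)))    # x1(t), eq. (1)
        assert abs(x1) + abs(x2) <= bound + 1e-9
print("bound verified on sampled trajectories")
```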
|dynamical-systems|upper-lower-bounds|
1
Infinite wacky race
Dick Dastardly is taking part in an infinite wacky race. What is infinite about it, you ask? Well, just about everything! There are infinitely many racers, every one of which can run infinitely fast, and the finish line is infinitely far away. To be more precise, the race will take place on the natural numbers, starting at zero. At every moment in time, every player can move forward from its position to any natural number. However, Dick has an advantage: the power of cheating! Every racer he overtakes will be snatched by one of his traps and be out of the race for good. What is the largest number of racers $\Delta$ that Dick can be sure to win against? For the sake of clarity, say Dick and the rest of the racers alternate movements, i.e., first every other player moves, then Dick moves, then it repeats. So, for instance, if Dick is racing against countably many people, he is sure to win. All he has to do is number the racers and, at the $n$ th moment, overtake the $n$ th racer. If there are continuum many p
As I understand it, the question is about the following game $G(\kappa)$ of length $\omega$ played on a set $S$ of cardinality $\kappa$ . At the $n^\text{th}$ turn, first White chooses a function $\varphi_n:S\to\mathbb N$ , and then Black (a.k.a. Dick) chooses a number $k_n\in\mathbb N$ ; Black wins if $\forall x\in S\ \exists n\in\mathbb N\ \varphi_n(x)\le k_n$ . (More colorfully: at each turn, White cuts what's left of $S$ into countably many pieces, and Black eats any finite number of those pieces; Black wins if he ends up eating the whole thing.) For small values of $\kappa=|S|$ Black has a winning strategy, for large values White has a winning strategy, and for in-between values (if any) the game is undetermined. We want to find the cutoff points. In the notation of Cichoń's diagram , $\mathfrak d$ is the minimum cardinality of a family of functions $F\subseteq\mathbb N^\mathbb N$ such that $\forall g\in\mathbb N^\mathbb N\ \exists f\in F\ \forall n\in\mathbb N\ g(n)\lt f(n)$ , an
|set-theory|cardinals|forcing|infinite-games|
1
Prove that if $A, B$ (both $n\times n$ matrix) are similar, then Null($A$)$=$Null($B$).
Prove that if $A,B$ (both $n\times n$ matrix) are similar, then Null( $A$ ) $=$ Null( $B$ ). Defn: If $A,B$ (both $n\times n$ matrix) are similar, then there exists $P$ which is a $n\times n$ matrix such that $A=PBP^{-1}$ . I used the definition of similarity between matrices to first prove that Null( $A$ ) is a subset of Null( $B$ ). If $x$ is an element of Null( $A$ ), $Ax=0$ $PBP^{-1}x=0$ I need to show that $x$ is an element of Null( $B$ ), but I’m not sure how to.
This is actually false. Here is a counterexample: let $A$ be \begin{bmatrix}1&1\\0&0\end{bmatrix} and take the invertible matrix $P$ given by \begin{bmatrix}1&0\\0&2\end{bmatrix} so that $P^{-1}$ is \begin{bmatrix}1&0\\0&1/2\end{bmatrix} By the similarity $A=PBP^{-1}$ , we get $B=P^{-1}AP$ , which is \begin{bmatrix}1&2\\0&0\end{bmatrix} Then Null( $A$ ) $=\operatorname{span}\begin{bmatrix}-1\\1\end{bmatrix}$ , which is different from the null space of $B$ , so the statement is false.
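The counterexample is easy to verify directly; a sketch in plain Python with matrices as nested lists:

```python
A = [[1, 1], [0, 0]]
P = [[1, 0], [0, 2]]
Pinv = [[1, 0], [0, 0.5]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

B = matmul(matmul(Pinv, A), P)        # B = P^{-1} A P, so A = P B P^{-1}
assert apply(A, [-1, 1]) == [0, 0]    # (-1, 1) spans Null(A) ...
assert apply(B, [-1, 1]) != [0, 0]    # ... but is not in Null(B)
assert apply(B, [-2, 1]) == [0, 0]    # Null(B) is spanned by (-2, 1) instead
print(B)                              # [[1, 2], [0, 0]] up to float formatting
```

(Similarity does preserve the *dimension* of the null space, just not the null space itself.)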
|linear-algebra|
1
Determinant of a matrix that has 0 on the diagonal
$$ \begin{pmatrix} 0&1&1&\cdots&1&1 \\ 1&0&x&\cdots&x&x \\ 1&x&0&\cdots&x&x \\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots \\ 1&x&x&\cdots&0&x \\ 1&x&x&\cdots&x&0 \end{pmatrix} $$ I have tried to solve it differently, mostly I tried to make a lower triangle matrix. Usually I use way with lambda, but I still don't have right answer
Call your matrix $A$ and let $J$ be the matrix of ones. The eigenvalues of $J$ are $n,0,\ldots,0$ , so the eigenvalues of $J-I$ are $n-1,-1,\ldots,-1$ . Thus $\det(J-I) = (-1)^{n-1}(n-1)$ . Now compute the determinant of the matrix $$x(J-I) = \begin{bmatrix} 0 & x & \cdots & x & x \\ x & 0 & \cdots & x & x \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x & x & \cdots & 0 & x \\ x & x & \cdots & x & 0\end{bmatrix}$$ in two different ways. On one hand, $\det(x(J-I)) = x^n\det(J - I) = x^n(-1)^{n-1}(n-1)$ . On the other hand, by factoring out an $x$ from the first row and column we see $\det(x(J-I)) = x^2 \det(A)$ . Equating our two expressions reveals that $\det(A) = x^{n-2}(-1)^{n-1}(n-1)$ .
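The formula can be spot-checked symbolically; a sketch using SymPy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')

def build(n):
    """The n x n matrix from the question: 0 diagonal, 1s bordering, x inside."""
    M = sp.zeros(n, n)
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = 0
            elif i == 0 or j == 0:
                M[i, j] = 1
            else:
                M[i, j] = x
    return M

for n in range(2, 7):
    det = sp.expand(build(n).det())
    formula = sp.expand(x**(n - 2) * (-1)**(n - 1) * (n - 1))
    assert det == formula, (n, det)
print("det = (-1)^(n-1) (n-1) x^(n-2) verified for n = 2..6")
```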
|linear-algebra|matrices|determinant|
0
How to find all positive integers $n,k$ such that ${n\choose k}=m$ for a given $m$?
This question is motivated by a simple exercise in Peter Cameron's Combinatorics: Topics, Techniques, Algorithms : A restaurant near Vancouver offered Dutch pancakes with ‘a thousand and one combinations’ of toppings. What do you conclude? The intended solution (according to Cameron's website ) is the following: since ${14 \choose 4}=1001$ , most likely there were $14$ possible toppings and each serving of pancakes allowed the patron to choose $4$ toppings. However, this begs a much more general question: For a given positive integer $m$ , how can we find all positive integers $n$ and $k$ such that ${n \choose k}=m$ ? This seems like a very natural and obvious question to ask, but preliminary searches didn't yield much apart from specific values of $m$ . Any insight to this question for more general classes of positive integers $m$ , or links to relevant references, would be appreciated.
It turns out to be quite fast. If $k = 1$ then $n = m$ . If $k > 1$ then $\frac{n^k}{k!} > m > \frac{(n-k)^k}{k!}$ , therefore $(k!m)^{1/k} < n < (k!m)^{1/k} + k$ . Notice that $\left(\begin{matrix}n\\k\end{matrix}\right) = \left(\begin{matrix}n\\n-k\end{matrix}\right)$ so we only need to check for $k \leq n/2$ . This means $$2k \leq n < (k!m)^{1/k} + k,$$ therefore $$\frac{k^k}{k!} < m.$$ As the left-hand side grows quite fast in $k$ (as shown in the below plot), only a handful of candidate values of $k$ need to be checked. Or one can use the Stirling approximation $$\frac{k^k}{k!} \approx \frac{e^k}{\sqrt{2\pi k}}$$ UPDATE: The range of possible values of $k$ can be improved even further by directly using the Stirling series : $$ m = \left(\begin{matrix}n \\ k\end{matrix}\right) \geq \left( \begin{matrix}2k \\ k\end{matrix}\right) > \frac{4^k\left(1+\frac{1}{12k}\right)}{\sqrt{k\pi}\left(1+\frac{1}{12k}+\frac{1}{288k^2}\right)^2}$$ although it's probably faster to compute some values of $\left(\begin{matrix}2k\\k\end{matrix}\right)$ beforehand.
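The bounds above translate directly into a short search procedure; a sketch (`math.comb` requires Python ≥ 3.8; the bound $\binom{2k}{k}\le m$ limits the values of $k$ to try, and for each $k$ a binary search finds the unique candidate $n$, since $\binom{n}{k}$ is increasing in $n$):

```python
from math import comb

def binom_solutions(m):
    """Return all (n, k) with C(n, k) == m, for m >= 2."""
    sols = []
    k = 1
    while comb(2 * k, k) <= m:
        # binary search for n >= 2k with C(n, k) == m
        lo, hi = 2 * k, max(2 * k, m) + 1
        while lo < hi:
            mid = (lo + hi) // 2
            if comb(mid, k) < m:
                lo = mid + 1
            else:
                hi = mid
        if comb(lo, k) == m:
            sols.append((lo, k))
            if k < lo - k:                 # add the mirror C(n, n-k)
                sols.append((lo, lo - k))
        k += 1
    return sorted(sols)

print(binom_solutions(1001))   # [(14, 4), (14, 10), (1001, 1), (1001, 1000)]
```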
|combinatorics|discrete-mathematics|reference-request|combinations|binomial-coefficients|
1
Inconsistent Function Monotonicity from hand and Mathematica image
$g(x)=\frac{\phi(x)}{1-\Phi(x)}$ , where $\phi(x)$ and $\Phi(x)$ are the p.d.f. and c.d.f. of the standard normal distribution respectively. $g'(x)=\frac{\phi'(x)(1-\Phi(x))+\phi^2(x)}{(1-\Phi(x))^2}=\frac{\phi(x)}{(1-\Phi(x))^2} \left( \phi(x)-x(1-\Phi(x)) \right)$ since $\phi'(x)=-x\phi(x)$ . Let $h(x)=\phi(x)-x(1-\Phi(x))$ ; then $h'(x)$ is $\phi'(x)-(1-\Phi(x))+x\phi(x)=\Phi(x)-1 \leq 0$ . So $h(x)$ is non-increasing, and $\lim_{x\to \infty} h(x)=0 $ , thus $h(x)\geq0$ ; and since $\frac{\phi(x)}{(1-\Phi(x))^2}\geq 0$ , we get $g'(x)\geq0$ . $g(x)$ must be non-decreasing. But in Mathematica, the plot of $g$ comes out as a wavy line. Mathematica Image What's the problem, please?
You need to use FullSimplify , after which you will obtain $$g(x) = \sqrt{\frac{2}{\pi}} \frac{e^{-x^2/2}}{\operatorname{erfc}(x/\sqrt{2})},$$ and this function evaluates correctly since the implementation of Erfc avoids the loss of precision due to evaluating Erf at arguments that are so large that it is almost $1$ . So for instance, input PDF[NormalDistribution[0, 1], x]/(1 - CDF[NormalDistribution[0, 1], x]) // FullSimplify to yield the form of $g$ shown above, then plot this function over the desired interval.
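The same loss-of-precision effect is easy to reproduce outside Mathematica; a sketch in Python with the standard library's `erf`/`erfc`, mirroring what `FullSimplify` achieves:

```python
import math

SQRT2 = math.sqrt(2.0)

def g_naive(x):
    """phi(x) / (1 - Phi(x)) with the CDF built from erf: cancels badly."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(x / SQRT2))
    return phi / (1 - Phi)

def g_stable(x):
    """sqrt(2/pi) * exp(-x^2/2) / erfc(x/sqrt(2)): no cancellation."""
    return math.sqrt(2 / math.pi) * math.exp(-x * x / 2) / math.erfc(x / SQRT2)

# The stable form is monotone, as proved in the question; the naive form
# degrades as 1 - Phi(x) loses digits and eventually underflows to exactly 0.
xs = [5 + 0.1 * k for k in range(40)]
assert all(g_stable(a) < g_stable(b) for a, b in zip(xs, xs[1:]))
print(g_stable(9.0))   # close to x + 1/x for large x
```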
|real-analysis|calculus|derivatives|normal-distribution|
1
How can I use the change of basis theorem to find the standard matrix of a linear transformation?
Let $T:\Bbb R^{3}\to\Bbb R^{3}$ be the linear transformation given by reflecting across the plane $x_1 - 2x_2 + 2x_3 = 0$ . Use the change of basis formula to find its standard matrix. What I have tried to do: I know we have to start with three standard basis vectors in $\Bbb R^{3}$ . Then, apply the linear transformation to these standard basis vectors to get the columns of the standard matrix of $T$ . But, I’m not sure how does reflecting these vectors across the plane change them.
The easiest way to do this is to first find $3$ basis vectors whose reflections will be easy to calculate after applying $T$ . The easiest way to achieve this is to choose $2$ linearly independent vectors on the plane of reflection and one that is perpendicular to the plane, such that $$T(v_1)=v_1,\space T(v_2)=v_2 , \space T(v_3)=-v_3$$ After we have chosen our vectors we will be able to write our reflection matrix in terms of our chosen basis thusly: $$T_B=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$ Finding the standard matrix for $T$ is now a simple matter of applying the change of basis formula $S\circ T_B\circ S^{-1}$ where we have $$S=\begin{bmatrix} v_1 & v_2 & v_3 \\ \end{bmatrix}$$ So, in total, we will have $$\therefore T_S=\begin{bmatrix} v_1 & v_2 & v_3 \\ \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}\begin{bmatrix} v_1 & v_2 & v_3 \\ \end{bmatrix}^{-1}$$ I'll leave the choice of basis and specific calculations to you.
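For the concrete plane in the question, here is a sketch of the whole computation (my choice of basis vectors; any basis of the plane works), cross-checked against the Householder formula $I - 2\,\frac{nn^{T}}{n^{T}n}$:

```python
import numpy as np

n = np.array([1.0, -2.0, 2.0])     # normal to x1 - 2x2 + 2x3 = 0
v1 = np.array([2.0, 1.0, 0.0])     # in the plane: 2 - 2 + 0 = 0
v2 = np.array([0.0, 1.0, 1.0])     # in the plane: 0 - 2 + 2 = 0

S = np.column_stack([v1, v2, n])
T_B = np.diag([1.0, 1.0, -1.0])    # reflection in the adapted basis
T = S @ T_B @ np.linalg.inv(S)     # change of basis back to standard

H = np.eye(3) - 2 * np.outer(n, n) / (n @ n)   # Householder reflection
assert np.allclose(T, H)
print(np.round(T * 9))             # 9*T has integer entries
```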
|linear-algebra|
1