| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
Class number of $\mathbb{Q}(\sqrt{11})$
|
I want to prove that the class number $h_{11}$ of $K=\mathbb{Q}(\sqrt{11})$ is $1$ . Here is what I got: We know that the discriminant $\Delta_K$ of $K$ is $44$ because $11 \equiv 3 \mod 4$ . We also know that for every non-zero prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$ there exists a unique rational prime $p$ such that $\mathfrak{p} \mid p\mathcal{O}_K$ . For such $p$ we know $\operatorname{Norm}(\mathfrak{p}) = p^m$ for some $m \geq 1$ . Further, the class group $\operatorname{CL}(K)$ is generated by $$ \{\overline{\mathfrak{p}} \mid \mathfrak{p} \text{ is a prime ideal and }\operatorname{Norm}(\mathfrak{p}) \leq B_K\}, $$ where $$ B_K = \left(\frac{2!}{2^2}\right)\sqrt{\Delta_K} = \sqrt{11} $$ is the Minkowski bound. Therefore, for every such prime ideal $\mathfrak{p}$ we get $\operatorname{Norm}(\mathfrak{p}) \in \{2, 3\}$ . I don't know how to proceed.
|
We first compute $\mathfrak O_K$ . If $a+b\sqrt{11}$ is integral for $a,b\in \mathbb Q$ , then $a-b\sqrt{11}$ is its conjugate and hence their minimal polynomial is $$f\left(x\right)=x^2-2a x+a^2-11b^2.$$ Then $2a$ and $a^2-11b^2$ are both rational integers. Put $a=\frac n 2$ and $b=\frac m 2$ , then $$a^2-11b^2=\frac {n^2-11m^2}{4}$$ is a rational integer and hence $$n^2-11m^2\equiv n^2+m^2\equiv 0\mod 4.$$ Therefore $n$ and $m$ are even and hence $\mathfrak O_K=\mathbb Z\left[\sqrt{11}\right]$ . Since $11$ is not a quadratic residue modulo $3$ , $3\mathfrak O_K$ is a prime ideal. Hence if the norm of $\mathfrak p$ is $3$ , then $\mathfrak p$ divides $3$ and hence $\mathfrak p$ is exactly $3\mathfrak O_K$ . This is impossible since the norm of $3\mathfrak O_K$ is $3^2=9$ . Otherwise the norm of $\mathfrak p$ is $2$ . In this case, since $2$ divides the discriminant, $2$ is ramified in $K$ , namely $2\mathfrak O_K= P^2$ for a prime ideal $P$ . We shall prove that $$P=\left(3+\sqrt{11}\right).$$ Indeed, the element $3+\sqrt{11}$ has norm $3^2-11=-2$ , so the principal ideal $\left(3+\sqrt{11}\right)$ has norm $2$ ; since $P$ is the unique prime ideal of norm $2$ , we get $P=\left(3+\sqrt{11}\right)$ . Hence every generator of $\operatorname{CL}(K)$ is principal and $h_{11}=1$ .
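A quick numerical sanity check of the two arithmetic facts used above (this is an illustration, not a proof): $11$ is a quadratic non-residue modulo $3$, and $3+\sqrt{11}$ has field norm $-2$, so the ideal it generates has absolute norm $2$.

```python
# Quadratic residues modulo 3: only 0 and 1 are squares, so 11 ≡ 2 is a non-residue.
residues_mod_3 = {x * x % 3 for x in range(3)}
assert 11 % 3 not in residues_mod_3

# The norm of a + b*sqrt(11) in Q(sqrt(11)) is a^2 - 11*b^2.
def norm(a, b):
    return a * a - 11 * b * b

# 3 + sqrt(11) has norm -2, so the ideal (3 + sqrt(11)) has absolute norm 2
# and must be the ramified prime P above 2.
assert norm(3, 1) == -2
print("checks pass")
```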
|
|algebraic-number-theory|
| 1
|
Convergence of Pólya urns: what does it mean "expanding real polynomials in a basis"?
|
I'm reading these notes , and in the proof of this Proposition 2 , which states: Let $d\ge 2$ and $S\ge 1$ be integers. Let also $(\alpha_1,\dots,\alpha_d)\in\mathbb{N}^d \setminus\{ 0 \}$ . Let $(P_n)_{n\ge0}$ be the $d$ -color Pólya urn random process having $S\cdot Id$ as replacement matrix and $(\alpha_1,\dots,\alpha_d)$ as initial composition. Then, almost surely and in any $L^t$ , $t\ge 1$ , $$\frac{P_n}{nS}\to V\ \ \text{as}\ n\to\infty$$ where $V$ is a $d$ -dimensional Dirichlet-distributed random vector, with parameters $(\frac{\alpha_1}S,\dots,\frac{\alpha_d}S)$ . I'm stuck near the end of the proof, when it says "Besides, expanding real polynomials $X^p=X_1^{p_1}\dots X_d^{p_d}$ in the basis $(\Gamma_p)_{p\in\mathbb{N}^d}$ , one gets formulae $$X^p=S^{|p|}\Gamma_p+\displaystyle\sum_{{k\in\mathbb{N}^d}\atop {|k|\le |p|-1}}a_{p,k}\Gamma_k{X}\tag{1}$$ where the $a_{p,k}$ are rational numbers, $p=(p_1,\dots,p_d)\in\mathbb{N}^d$ , $|p|=\sum_{k=1}^d p_k$ , $S\ge 1$ is an integer a
|
The second question from the first: I also agree that $J_n$ should be $P_n$ . Apply expectation to $P_n^p$ , using (1). Except for the first term, all other terms end up $O(n^{|p|-1})$ . Then it becomes $O(1/n)$ after dividing by $(\alpha + nS)^{|p|}$ . The first question: In formula (1), the $X$ on the right side must be removed. Note that for $X=(x_1,\ldots, x_d)$ , $$\Gamma_p(X)= \left(\frac{x_1}S \cdot (\frac{x_1}S+1) \cdots (\frac{x_1}S+p_1-1)\right) \cdots \left(\frac{x_d}S \cdot (\frac{x_d}S+1) \cdots (\frac{x_d}S+p_d-1)\right)$$ by repeatedly applying $\Gamma(s+1)=s\Gamma(s)$ . Thus, expanding the polynomial in the basis $\Gamma_p$ means writing monomials $X^p$ in terms of the $\Gamma_p$ polynomials as above. This can be done through Stirling numbers of the second kind $S(p,k)$ . It is enough to have, for a single variable $x$ and $p\in\mathbb{N}$ , $$ x^p=(-1)^p\sum_{k=0}^p S(p,k) (-1)^k x(x+1)\cdots (x+k-1). $$ Substituting $x/S$ for $x$ , we have $$ x^p=(-1)^pS^p\sum_{k=0}^p S(p,k) (-1)^k \frac xS \left(\frac xS+1\right)\cdots \left(\frac xS+k-1\right). $$
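The single-variable identity with rising factorials can be checked directly in exact integer arithmetic; since both sides are polynomials of degree $p$, agreement at enough integer points confirms it (illustrative check only):

```python
# Verify x^p = (-1)^p * sum_k S(p,k) * (-1)^k * x(x+1)...(x+k-1)
# with S(p,k) the Stirling numbers of the second kind.

def stirling2(n, k):
    # Recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def rising(x, k):
    # Rising factorial x(x+1)...(x+k-1); the empty product is 1.
    out = 1
    for i in range(k):
        out *= x + i
    return out

for p in range(6):
    for x in range(-5, 6):  # exact integer arithmetic, no rounding
        rhs = (-1) ** p * sum(
            stirling2(p, k) * (-1) ** k * rising(x, k) for k in range(p + 1)
        )
        assert rhs == x ** p
print("identity verified for p < 6")
```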
|
|expected-value|markov-chains|martingales|polya-urn-model|
| 1
|
Can we conclude that Peano's axioms are consistent from soundness?
|
One of the corollaries of soundness says that if $\Gamma$ is satisfiable, then $\Gamma$ is consistent. I am wondering whether we can conclude that Peano's axioms $\mathsf{PA}$ are consistent from the fact that the standard model of arithmetic $\mathcal{M}_A=(\mathbb N, 0^{\mathcal{M}_A}, s^{\mathcal{M}_A}, +^{\mathcal{M}_A}, \times^{\mathcal{M}_A})$ satisfies $\mathsf{PA}$ ? Perhaps not?
|
Yes, absolutely. It is a theorem that $\mathsf{PA}$ is consistent. The easiest way to prove this theorem is to exhibit $\mathbb{N}$ as a model of $\mathsf{PA}$ .
|
|logic|model-theory|peano-axioms|
| 0
|
An amazing approximation of $e$
|
As we can read in Wolfram Mathworld's article on approximations of $e$, the base of natural logarithm, An amazing pandigital approximation to e that is correct to $18457734525360901453873570$ decimal digits is given by $$\LARGE \left(1+9^{-4^{6 \cdot 7}}\right)^{3^{2^{85}}}$$ found by R. Sabey in 2004 (Friedman 2004). The cited paragraph raises two natural questions. How was it found? I guess that Sabey hasn't used the trial and error method. Using which calculator can I verify its correctness "to $184\ldots570$ decimal digits"?
|
This may be best answered by looking for an improvement to that formula, and explaining how to find it. The basic idea behind all those approximations for $e$ is to write some large number $N$ in two different ways, using exactly the digits from $2$ to $9$ , and then take $(1+\frac{1}{N})^{N}$ as an approximation for $e$ . In Sabey's old (2004) example, he writes $N_{old}=3^{2^{85}}=9^{4^{6\cdot 7}}$ . However, using a small digit like $3$ as the base of the exponential tower isn't optimal. For example, if we could find a way to replace the $3$ by a $5$ , that would make $N$ much larger, and the resulting approximation much better... Finding such an $N$ that works actually isn't too hard: Using $N_{new}=5^{3^{84}}$ frees up the digit $2$ , which conveniently can be used to write $\frac{1}{5}$ as either $.2$ or $0.2$ , depending on how you like to write your decimal numbers. At the same time, we have lost access to the digit $4$ , which we have to take care of: Using properties of the exponenti
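The quoted digit count can be sanity-checked with logarithms alone: the error of $(1+\frac1N)^N$ against $e$ is about $\frac{e}{2N}$, so the number of correct decimal digits is roughly $\log_{10} N + \log_{10}\frac{2}{e} \approx \log_{10} N - 0.1333$. With $N = 3^{2^{85}}$, $\log_{10} N = 2^{85}\log_{10} 3$, and $2^{85}$ converts to a float exactly (a heuristic leading-digits check, not a high-precision verification):

```python
import math

# log10(N) for N = 3^(2^85); 2^85 is a power of two, so float(2**85) is exact.
log10_N = float(2 ** 85) * math.log10(3)

# Digit count quoted from MathWorld for Sabey's approximation.
claimed = 18457734525360901453873570

# The -0.1333 correction is invisible at this relative tolerance; this checks
# that the leading digits of 2^85 * log10(3) match the claimed count.
assert abs(log10_N / claimed - 1) < 1e-10
print(f"log10(N) ≈ {log10_N:.6e}")
```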
|
|approximation|constants|
| 0
|
How do I parameterize this double integral?
|
I am new to multivariable calculus and I am trying to learn how surface area double integrals work. I am stuck on how to parameterize this function: $\iint_S x dS$ where $S$ is the part of the cylinder $x^2 + 2y^2 = 2$ bounded between the planes $ z = -4$ and $z = y + 4x + 12$ . I first tried to convert to polar integrals and settled with this integral: $\int_0^{\frac{\pi}{2}} \int_{-4} ^{z = y + 4x + 12} r \cos\theta \, r\, \mathrm dr \, \mathrm d\theta$ but I couldn't solve that integral. So next I tried isolating x and y and making it in terms of $\mathrm dz\, \mathrm dx$ , and I got $\int_{-4} ^{y\sqrt{2-2y^2+12}} \int_{-\sqrt{2-2y^2}}^{\sqrt{2-2y^2}} x \, \mathrm dx \, \mathrm dz$ Which I also couldn't solve. Can anyone help point out what I am doing wrong here? This question is really stumping me!
|
The condition $x^2+2y^2=2$ is a very common type of condition, where a good idea is to use modified cylindrical coordinates, by which I mean $$x=\sqrt{2}\cos{\theta},\quad y=\sin{\theta},\quad \theta \in \left[0,2\pi \right[.$$ Then, in lack of a better idea, to restrict $z$ to be in between $-4$ and $y+4x+12$ , we can parametrize it as $$z= -4 + \left(y+4x+16\right)t = -4 + \left(\sin{\theta}+4\sqrt{2}\cos{\theta}+16\right)t, \quad t \in \left[0,1\right].$$ This gives us the parametrization $$\mathbf{r}(\theta,t)=(\sqrt{2}\cos{\theta},\sin{\theta},-4 + \left(\sin{\theta}+4\sqrt{2}\cos{\theta}+16\right)t),$$ which has the following partial derivatives $$\partial_\theta \mathbf{r}(\theta,t)=(-\sqrt{2}\sin{\theta},\cos{\theta},\left(\cos{\theta}-4\sqrt{2}\sin{\theta}\right)t),$$ $$\partial_t\mathbf{r}(\theta,t)=(0,0,\sin{\theta}+4\sqrt{2}\cos{\theta}+16).$$ Because of the form of these, it is simple enough to calculate their cross product, and as such, $$\left|\left| \partial_\theta \mathbf{r}\times\partial_t \mathbf{r}\right|\right|=\left(\sin{\theta}+4\sqrt{2}\cos{\theta}+16\right)\sqrt{1+\sin^2{\theta}},$$ so that $$\iint_S x\,\mathrm dS=\int_0^1\int_0^{2\pi}\sqrt{2}\cos{\theta}\left(\sin{\theta}+4\sqrt{2}\cos{\theta}+16\right)\sqrt{1+\sin^2{\theta}}\,\mathrm d\theta\,\mathrm dt.$$
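The parametrization can be evaluated numerically as a sketch: compute $\partial_\theta\mathbf r$ and $\partial_t\mathbf r$, take the cross product, and integrate $x\,\lVert\partial_\theta\mathbf r\times\partial_t\mathbf r\rVert$ with a midpoint rule (the grid sizes here are arbitrary choices):

```python
import math

def partials(th, t):
    # Partial derivatives of r(theta, t) for the elliptic cylinder x^2 + 2y^2 = 2.
    h = math.sin(th) + 4 * math.sqrt(2) * math.cos(th) + 16
    r_th = (-math.sqrt(2) * math.sin(th), math.cos(th),
            (math.cos(th) - 4 * math.sqrt(2) * math.sin(th)) * t)
    r_t = (0.0, 0.0, h)
    return r_th, r_t

def cross_norm(th, t):
    (a1, a2, a3), (b1, b2, b3) = partials(th, t)
    c = (a2 * b3 - a3 * b2, a3 * b1 - a1 * b3, a1 * b2 - a2 * b1)
    return math.hypot(*c)

# At theta = 0 the norm should be 16 + 4*sqrt(2), independent of t.
assert abs(cross_norm(0.0, 0.5) - (16 + 4 * math.sqrt(2))) < 1e-12

# Midpoint-rule approximation of the surface integral of x over S.
n_th, n_t = 2000, 50
dth, dt = 2 * math.pi / n_th, 1.0 / n_t
total = 0.0
for i in range(n_th):
    th = (i + 0.5) * dth
    x = math.sqrt(2) * math.cos(th)
    for j in range(n_t):
        t = (j + 0.5) * dt
        total += x * cross_norm(th, t) * dth * dt
print(f"surface integral of x over S ≈ {total:.6f}")
```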
|
|integration|multivariable-calculus|definite-integrals|surfaces|surface-integrals|
| 0
|
Solution for a stochastic process driven by two Brownian motions.
|
If a stochastic process follows $dX(t) = a(t)X(t)\mathrm dt + \sigma_1(t)X(t)\mathrm dW_1(t)+ \sigma_2(t)X(t)\mathrm dW_2(t)$ , does the solution take the form $$ X(t) = X(0) \exp\left(\int_0^t (a(s) - (1/2) \sigma_1(s)^2 - (1/2) \sigma_2(s)^2)\mathrm ds + \int_0^t\sigma_1(s)\mathrm dW_1(s) + \int_0^t\sigma_2(s)\mathrm dW_2(s)\right) $$ where $W_1$ and $W_2$ are two Brownian motions (we do not know if they are independent or not)? I cannot use two different $X(t)$ to define the process, as it is assumed that there are 2 stochastic variables $b_1(t)$ and $b_2(t)$ that drive $X(t)$ together. Here, I assume $b_1(t)$ to be a simple stochastic process that follows $$db_1(t) = \sigma_1 dW_1(t)$$ and likewise for $b_2(t)$ .
|
Indeed, I just noticed that the OP is talking about possibly dependent BMs. In that case, the OP needs to specify the dependence. If they just have constant correlation $\rho$ , we have $W^{1}=B^{1}$ and $$B^{2}=\sqrt{1-\rho^{2}}W^{2}+\rho B^{1},$$ for two independent BMs $W^1,W^2$ . So by rearranging we can just assume that we have two independent BMs but now the coefficients will depend on $\rho$ : $\tilde{\sigma}_1=\sigma_1+\sigma_2\rho$ and $\tilde{\sigma}_2=\sigma_2\sqrt{1-\rho^{2}}$ . Using that the two Brownian motions are independent we let $$Y^{1}_t:=E[X_t| \sigma(W^1)]\text{ and }Y^{2}_t:=E[X_t| \sigma(W^2)].$$ So we have $$X_{t}=Y^{1}_{t}+Y^{2}_{t}-E[X_{t}]$$ ( Conditional expectation property for independent sub-sigma algebras ). We also get the following two SDEs due to the linearity of the conditional expectation $$dY^{i}_t=a(t)Y^{i}_{t}dt+\tilde{\sigma}_{i}(t)Y^{i}_{t}dW^{i}_{t},\ i=1,2$$ and an ODE for $u_{t}:=E[X_{t}]$ $$du_{t}=a(t)u(t)dt\Rightarrow u(t)=u(0)e^{\int_0^t a(s)\,ds}.$$
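The ODE for the mean can be checked by simulation: for a linear SDE the martingale parts vanish in expectation, so $E[X_t]=X_0 e^{\int_0^t a\,ds}$ regardless of the correlation. Below is a seeded Euler–Maruyama sketch with arbitrary constant coefficients ($a=0.05$, $\sigma_1=0.2$, $\sigma_2=0.1$, $\rho=0.3$ are illustration choices, not from the question):

```python
import math
import random

random.seed(0)
a, s1, s2, rho = 0.05, 0.2, 0.1, 0.3   # arbitrary illustrative constants
T, N, M = 1.0, 50, 5000                # horizon, time steps, sample paths
dt = T / N
sq = math.sqrt(dt)

total = 0.0
for _ in range(M):
    X = 1.0
    for _ in range(N):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        dW1 = sq * z1
        # Correlated increment: dW2 = rho*dW1 + sqrt(1-rho^2)*independent noise
        dW2 = sq * (rho * z1 + math.sqrt(1 - rho * rho) * z2)
        X += a * X * dt + s1 * X * dW1 + s2 * X * dW2
    total += X
mean = total / M

# Mean should track u(t) = u(0) * exp(a*t) despite the correlation.
assert abs(mean - math.exp(a * T)) < 0.02
print(f"simulated mean {mean:.4f} vs exp(aT) {math.exp(a * T):.4f}")
```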
|
|stochastic-processes|stochastic-calculus|
| 0
|
How is the volume not double counted in the shell method for volume of revolution?
|
If the radius of every shell $k$ , where $k$ indexes the $k^{th}$ subinterval, is $\left(a + (k-\tfrac{1}{2}) \frac{b-a}{n}\right)$ , you have an increasing sequence of radii $(a+\dfrac{(b-a)}{2n}),(a+\dfrac{3(b-a)}{2n}),(a+\dfrac{5(b-a)}{2n}),\ldots$ The sequence means the second shell includes the first shell, and is not hollow; the third shell includes the first and second shells. The third shell is not hollow. Just imagine $a=0, b=5$ , and the height of each shell is a constant one. Your radii include $\dfrac{5}{2n},\dfrac{15}{2n},\dfrac{25}{2n}.$ Your cylinders' volumes are $2\pi$ times those radii. The third cylinder includes the first and second radii, and you're double counting volume. The correct way to conceptualize the cylindrical shell is to start from a correct understanding of the Riemann sum's partition structure, then take a representative subinterval and compute its volume as it is swept. If we assume that the axis of rotation is the $y$ -axis and that $0 \le a < b$ , then a representative subinterval
|
The radius of the shell is not $a + (k-\tfrac{1}{2})\frac{b-a}{n}$ . As explained in my previous answer , there are two radii: $$r_1 = a + k \frac{b-a}{n}, \quad r_2 = a + (k-1)\frac{b-a}{n}.$$ When we compute the volume of the cylindrical shell, we take the area of the annular base and multiply by the height. The annulus has area $$\pi (r_1^2 - r_2^2),$$ and this happens to simplify because we can factor: $$r_1^2 - r_2^2 = (r_1 + r_2)(r_1 - r_2).$$ The quantity $r_1 - r_2$ is the width of the annulus, which is just the width of the subinterval. The quantity $r_1 + r_2$ is twice the mean radius. This is not the radius of the shell and the shells do not overlap. If I asked you to calculate the area of a circle of radius $r = 10$ , you would compute $A = \pi r^2 = 100\pi$ . But if I told you to do it by adding up the individual areas of a series of concentric, non-overlapping annular regions, each with width $1$ , by partitioning the radius into the subintervals $$[0, 10] = [0,1) \cup [1,2) \cup \cdots \cup [9,10],$$ you would get exactly the same answer: the annuli are disjoint, and their areas $\pi\left(k^2 - (k-1)^2\right)$ sum to $100\pi$ with no double counting.
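The annulus decomposition can be checked in a couple of lines: partition the radius $[0,10]$ into unit-width annuli and sum $\pi(r_1^2-r_2^2)$; the disjoint pieces add up to exactly $\pi\cdot 10^2$.

```python
import math

R = 10
# Sum the areas of the concentric annuli [k-1, k], k = 1..10.
total = sum(math.pi * (k**2 - (k - 1) ** 2) for k in range(1, R + 1))

# No overlap, no double counting: the annuli tile the full disk.
assert abs(total - math.pi * R**2) < 1e-9
print(total, math.pi * R**2)
```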
|
|calculus|integration|volume|
| 1
|
Every point is umbilical, then plane or sphere. Proving from local to global.
|
I'm trying to fill in some details in Do Carmo's curves and surfaces proof on p.149, If all points of a connected surface $S$ are umbilical points, then $S$ is either contained in a sphere or in a plane. also discussed here . We have proven the local version, i.e., for every point in the surface $S$ there is a connected coordinate neighborhood $V_p$ such that every point in $V_p$ is contained in either a plane or a sphere. I am trying to understand how to conclude the global version, i.e., that all points in $S$ belong to the same plane or sphere . The author first proves that we can connect each point in $S$ to any other point $r$ with a curve $\alpha$ , and that we can cover the curve with a finite number of $V_p$ neighborhoods satisfying the above. Then he just states that If the points of one of these neighborhoods are on a plane, all the others will be on the same plane. Since $r$ is arbitrary, all the points of $S$ belong to this plane. And similarly, if the points of one of these neighborhoods are on a sphere, all the others will be on the same sphere.
|
If $N$ is the surface normal, then in the planar case we know $N=N_0$ is constant. Now consider the quantity $N_0\cdot x$ (where $x$ is the position vector of the surface). Take a curve $\alpha(t)$ in the surface joining any two points (here we use connectedness). Then $(N_0\cdot\alpha)'(t)=N_0\cdot\alpha'(t)=0$ for all $t$ . Then $N_0\cdot x = N_0\cdot\alpha(t_0)=c$ for every $x$ in the surface. This says the surface is contained in a single plane. The spherical case is analogous.
|
|general-topology|differential-geometry|surfaces|geometric-topology|spheres|
| 0
|
How to show that $\mu(\liminf_{n\to\infty} E_n)=0$?
|
Let $(\mathbb{R}, \mathcal{F}, \mu)$ be a measure space and $E_n$ be measurable subsets for $n\in \mathbb{N}$ . Let $f: \mathbb{R}\to (0,\infty]$ be a positive measurable function. If $\liminf_{n\to\infty} \int_{E_n} fd\mu=0$ , show that $$ \mu(\liminf_{n\to\infty} E_n)=0. $$ I try to apply Fatou's Lemma: $$ 0=\liminf_{n\to\infty} \int_{E_n} fd\mu=\liminf_{n\to\infty} \int fI_{E_n}\ge \int f\times\liminf_{n\to\infty}I_{E_n} $$ I am stuck here. It seems that since $f>0$ , we should get $\mu(\liminf_{n\to\infty} E_n)=0$ ... but how do I conclude?
|
As you have correctly identified, Fatou's lemma yields \begin{align*} \int f\liminf_n I_{E_n} d\mu = 0. \end{align*} This implies that the integrand $f\liminf_n I_{E_n} = 0$ a.e. $\mu$ , i.e., \begin{align*} \mu\left(f\liminf_n I_{E_n} \neq 0\right) = 0. \end{align*} Because $f$ is strictly positive, this implies that \begin{align*} \mu\left(\liminf_n I_{E_n} \neq 0\right) = 0. \end{align*} Since for any $x \in \mathbb{R}$ , $\{I_{E_n}(x)\}$ is a sequence of $0$ s and $1$ s, $\liminf_n I_{E_n}(x) \neq 0$ then means $\liminf_n I_{E_n}(x) = 1$ , which requires $I_{E_n}(x) = 1$ for all sufficiently large $n$ , i.e., $x \in E_n$ for all but finitely many $n$ . In other words, $\{x: \liminf_n I_{E_n}(x) \neq 0\} = \liminf_n E_n$ . This completes the proof.
|
|real-analysis|
| 1
|
How to take the logarithm to get the limit $\lim_{n \to \infty}\prod_{k=1}^{n}{\left ( 1+\frac{1}{2^{2^{k} } } \right )} $
|
$\lim_{n \to \infty}\prod_{k=1}^{n}{\left ( 1+\frac{1}{2^{2^{k} } } \right )} $ Let $ a_n=\prod_{k=1}^{n}{\left ( 1+\frac{1}{2^{2^{k} } } \right )} $ , $ \left( 1-\frac{1}{2^2} \right) a_n=\left( 1-\frac{1}{2^2} \right) \left( 1+\frac{1}{2^2} \right) \cdots \left( 1+\frac{1}{2^{2^{n}}} \right) = 1-\frac{1}{2^{2^{n+1}}}$ $a_n=\frac{4}{3}\left( 1-\frac{1}{2^{2^{n+1}}} \right) ,n\longrightarrow \infty ,a_n\longrightarrow \frac{4}{3}$ I want to solve it by taking the logarithm: $\lim _{n \to \infty } \exp \left [ \ln{\left ( 1+\frac{1}{{4}^{k} } \right ) }-\ln 1 \right ], $ $\left[ \ln \left( 1+\frac{1}{4^k} \right) -\ln 1 \right] =\frac{1}{1+\theta \frac{1}{4^k}}\frac{1}{4^k}\text{,}\ \theta \in \left( 0,1 \right) $ but how to do next?
|
One basic way to solve this is to see what the partial products are... $$a_1=\Bigg(1+\frac{1}{2^2}\Bigg)$$ $$a_2=\Bigg(1+\frac{1}{2^2}\Bigg)\Bigg(1+\frac{1}{2^4}\Bigg)=1+\frac{1}{2^2}+\frac{1}{2^4}+\frac{1}{2^6}$$ $$a_3=\Bigg(1+\frac{1}{2^2}\Bigg)\Bigg(1+\frac{1}{2^4}\Bigg)\Bigg(1+\frac{1}{2^8}\Bigg)=1+\frac{1}{2^2}+\frac{1}{2^4}+\frac{1}{2^6}+\frac{1}{2^8}+\frac{1}{2^{10}}+\frac{1}{2^{12}}+\frac{1}{2^{14}}$$ proceeding we get $$a_n=1+\frac{1}{2^2}+\frac{1}{2^4}+\frac{1}{2^6}+\frac{1}{2^8}+\frac{1}{2^{10}}+\frac{1}{2^{12}}+\frac{1}{2^{14}}+...+\frac{1}{2^{2(2^n-1)}}$$ Which is essentially a geometric series with common ratio $\frac{1}{2^2}$ and hence the limit is $$\frac{1}{1-\frac{1}{2^2}}=\frac{4}{3}$$
|
|limits|
| 0
|
Probability of international travel
|
The following is a probability question I got during an interview: I took two trips in the last year. At least one of them was an international trip in December; what is the probability that both trips were international? The probability of traveling in one specific month is $\frac {1}{12}$ , and the probability of a trip being international is $\frac {1}{2}$ . Each trip is independent. The correct answer is $\frac {23}{47}$ , but I don't know how to get it.
|
The question as worded is a bit ambiguous, but looking at the answer, we can consider the following problem. Imagine $24$ balls organized into $12$ bins; in each bin there is one red ball (representing an international trip) and a green ball (representing a non-international trip). You draw $2$ balls at random, with replacement, and observe that at least one of them is red and from bin $12$ . What is the probability that the other ball is red? First, we compute the number of draws that yield at least one red ball from bin $12$ . If the ball was drawn first, then there are $24$ choices for the second ball. If the ball was drawn second, there are also $24$ choices for the first ball. But we have overcounted $1$ drawing, which is where the red ball from bin $12$ was drawn twice. Therefore, there are $(2) (24) - 1 = 47$ equally-likely drawings that satisfy the required condition. Now, we compute the number of these drawings where both drawn balls are red. This works in basically the same way: if the red ball from bin $12$ was drawn first, there are $12$ choices of red ball for the second draw; if it was drawn second, there are $12$ choices for the first draw; and again we subtract the one double-counted drawing. That gives $(2)(12) - 1 = 23$ drawings, so the desired probability is $\frac{23}{47}$ .
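The balls-and-bins model above is small enough to enumerate exhaustively, which confirms the $\frac{23}{47}$ answer:

```python
from fractions import Fraction
from itertools import product

# 12 bins, each holding one red (international) and one green ball.
balls = [(bin_, color) for bin_ in range(1, 13) for color in ("red", "green")]

# All 24^2 ordered draws with replacement.
draws = list(product(balls, repeat=2))

# Condition: at least one drawn ball is the red ball from bin 12.
cond = [d for d in draws if (12, "red") in d]

# Event: both drawn balls are red.
both_red = [d for d in cond if d[0][1] == "red" and d[1][1] == "red"]

assert len(cond) == 47 and len(both_red) == 23
print(Fraction(len(both_red), len(cond)))  # 23/47
```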
|
|probability|
| 1
|
Strategy for Black&White game
|
Consider the following game. Let $n$ be a positive integer. There are two players, $\newcommand\A{\mathrm{A}}\A$ and $\newcommand\B{\mathrm{B}}\B$ , and a referee. $\A$ and $\B$ first agree on a strategy. After this step, $\A$ and $\B$ cannot communicate. Initially, the referee chooses a sequence $\{c_i\}$ of length $n$ , where each entry is either “black” or “white”. $\A$ is made aware of this entire sequence, but $\B$ is not. There are a series of $n$ rounds, where $\A$ and $\B$ simultaneously guess either “black” or “white”. During the $k^\text{th}$ round, the team wins a point if $\A$ ’s guess and $\B$ ’s guess are both equal to the $k^\text{th}$ entry of the referee’s sequence. After each round, $\B$ is told what $\A$ ’s guess was, and what the referee’s pick was, for that round. This means $\B$ makes each decision while aware of all information from previous rounds. Find a strategy for $\A$ and $\B$ so that they can guarantee winning $g(n)$ times, where $\lim\limits_{n\to+\infty}\frac{g(n)}{n}\ge\frac{3}{4}$ .
|
This problem is analyzed in the paper Online Matching Pennies , by Gossner, Hernandez, and Neyman, available at this link . They prove that the upper limit for the proportion of rounds won is $x^*$ , where $x^*\in (0,1)$ is the unique solution to $$ x^*\cdot \log_2(\tfrac1{x^*})+(1-x^*)\cdot \log_2(\tfrac1{1-x^*})+(1-x^*)\cdot \log_23=1. $$ That is, for any $x < x^*$ , there exists a strategy for which $\lim_{n\to\infty} g(n)/n=x$ . You can calculate $x^*\approx 0.8107$ . This shows that a ratio of $3/4=0.75$ is attainable. However, the paper only gives a probabilistic proof of this fact. In what follows, I will give an explicit strategy which shows how to attain $\lim_n g(n)/n=3/4$ . Define a covering code $C$ of length $\ell$ and distance $d$ to be a subset $C\subseteq \{0,1\}^\ell$ such that, for any word $w\in \{0,1\}^\ell$ , there exists $c\in C$ such that $w$ and $c$ differ in at most $d$ places. For an example with $\ell=3$ and $d=1$ , $C=\{000,111\}$ is an $(\ell,d)$ -covering code. Given $
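Two small illustrative checks: that $\{000,111\}$ really is a covering code of length $3$ and distance $1$, and a bisection solve of the entropy equation for $x^*$ (the bracketing interval is an assumption based on the function being decreasing there):

```python
import math
from itertools import product

# (1) Every 3-bit word is within Hamming distance 1 of 000 or 111.
code = ["000", "111"]
for w in product("01", repeat=3):
    word = "".join(w)
    assert any(sum(a != b for a, b in zip(word, c)) <= 1 for c in code)

# (2) Solve x*log2(1/x) + (1-x)*log2(1/(1-x)) + (1-x)*log2(3) = 1 by bisection.
def f(x):
    return (x * math.log2(1 / x) + (1 - x) * math.log2(1 / (1 - x))
            + (1 - x) * math.log2(3) - 1)

lo, hi = 0.75, 0.99   # f(0.75) > 0 > f(0.99), and f is decreasing on this range
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

assert abs(lo - 0.8107) < 1e-3
print(f"x* ≈ {lo:.6f}")
```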
|
|probability|combinatorics|information-theory|
| 1
|
What's the optimal way to approximate a binomial distribution with a Poisson distribution?
|
Conventionally, we approximate a binomially-distributed variable $X\sim B(n, p)$ with the Poisson-distributed variable $Y\sim Po(np)$ , with the means of $X$ and $Y$ being identical. However, we could also approximate $X$ with $Z\sim Po(np(1-p))$ . While the means of $X$ and $Z$ are different, the variances are identical. That leads me to wonder whether the "best" way (perhaps under something like mean squared error) to approximate $X$ (with $n\gg 1$ and $p\ll \frac{1}{2}$ ) would be taking a Poisson distribution with $\lambda$ somewhere between $np(1-p)$ and $np$ . So what would that optimal value $\lambda$ be?
|
I very much doubt that there's a closed-form solution for the optimal $\lambda$ . I tried an example: $n = 10$ , $p = 1/10$ . The objective is to minimize $$ g(\lambda) = \sum_{k=0}^\infty (\mathbb P(X = k) - \mathbb P(Y = k))^2$$ The optimal solution turns out to be the root of $$ 2 \,{\mathrm e}^{-2 \lambda} I_{1}\! \left(2 \lambda \right)-2 I_{0}\! \left(2 \lambda \right) {\mathrm e}^{-2 \lambda}+\frac{{\mathrm e}^{-\lambda} \left(\lambda^{10}+890 \lambda^{9}+319950 \lambda^{8}+60361200 \lambda^{7}+6503263200 \lambda^{6}+408316749120 \lambda^{5}+14624406014400 \lambda^{4}+279631499616000 \lambda^{3}+2473292401776000 \lambda^{2}+7029357352416000 \lambda -1405871470483200\right)}{18144000000000000} $$ which is approximately $1.042309798$ , according to Maple. Note that this is greater than $np = 1$ , while your $np(1-p) = 0.9$ is less.
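The same optimum can be located with a crude grid search, without the closed-form derivative (a rough numeric sketch; the grid bounds and step are arbitrary choices):

```python
import math

n, p = 10, 0.1

def binom_pmf(k):
    # Binomial(10, 0.1) pmf; zero beyond k = n.
    return math.comb(n, k) * p**k * (1 - p) ** (n - k) if k <= n else 0.0

def poisson_pmf(k, lam):
    return lam**k * math.exp(-lam) / math.factorial(k)

def g(lam):
    # Squared pmf distance, truncated where both pmfs are negligible.
    return sum((binom_pmf(k) - poisson_pmf(k, lam)) ** 2 for k in range(41))

# Scan lambda over [0.9, 1.15) in steps of 1e-4.
best = min((i * 1e-4 for i in range(9000, 11500)), key=g)

# Should land near Maple's root 1.042309798.
assert abs(best - 1.042309798) < 2e-3
print(f"optimal lambda ≈ {best:.4f}")
```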
|
|probability-distributions|poisson-distribution|binomial-distribution|approximation-theory|
| 0
|
Convergence of a specific expression
|
I would like to prove the convergence of what follows $$ \lim_{N\to\infty}\sum_{n=0}^{N}\left[(1+\frac{\alpha}{2^{2N+2}})^{2^{2n+1}}-(1+\frac{\alpha}{2^{2N+2}})^{2^{2n}}\right] $$ where $\alpha$ is a real parameter. If needed one can assume $0\le\alpha\le1$ , but numerical computations seem to suggest the series converges for arbitrary positive $\alpha$ . It is unclear if and how usual tests can be applied, since the terms of the series depend on the upper limit $N$ of the partial sum.
|
Replace $n$ by $N-n$ , then take the limit termwise. For any real (or even complex) $\alpha$ , the result is $$\sum_{n=0}^\infty\left(e^{\alpha/2^{2n+1}}-e^{\alpha/2^{2n+2}}\right)\color{gray}{=\sum_{n=1}^\infty(-1)^{n-1}\left(e^{\alpha/2^n}-1\right)}\color{LightGray}{=\sum_{n=1}^\infty\frac{\alpha^n}{n!(2^n+1)}}.$$
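The index substitution can be verified numerically for $\alpha = 1$: the partial sums track $\sum_n \left(e^{\alpha/2^{2n+1}}-e^{\alpha/2^{2n+2}}\right)$ closely. Using `log1p` keeps $(1+\alpha/2^{2N+2})^m$ accurate even for huge exponents $m$ (and for $\alpha=1$ the base offset is an exact power of two in floating point):

```python
import math

alpha = 1.0

def partial_sum(N):
    # The original finite sum: sum_{n=0}^{N} [(1+x)^(2^(2n+1)) - (1+x)^(2^(2n))]
    # with x = alpha / 2^(2N+2), computed via exp(m * log1p(x)) for accuracy.
    x = alpha / 2 ** (2 * N + 2)
    total = 0.0
    for n in range(N + 1):
        t1 = math.exp(2 ** (2 * n + 1) * math.log1p(x))
        t2 = math.exp(2 ** (2 * n) * math.log1p(x))
        total += t1 - t2
    return total

# Termwise limit from the answer, truncated far into its rapidly-decaying tail.
target = sum(math.exp(alpha / 2 ** (2 * n + 1)) - math.exp(alpha / 2 ** (2 * n + 2))
             for n in range(60))

assert abs(partial_sum(15) - target) < 1e-8
print(partial_sum(15), target)
```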
|
|sequences-and-series|convergence-divergence|summation|
| 1
|
Division by zero?
|
I came across a problem where we write dividend = divisor $\times$ quotient + remainder, the dividend being divided by the divisor. The exact question was: $f(x) = x^4 + 9x^3 + 35x^2 -x + 4$ ; find its value at $x=-5+4i$ . Here we write $x+5 = 4i$ and, by squaring both sides, reach the equation $x^2 + 10x + 41 = 0$ , and ultimately divide $f(x)$ by it to get remainder $-160$ . By dividend = divisor $\times$ quotient + remainder, the divisor here is $x^2+10x+41 = 0$ (LHS = RHS = $0$ , so aren't we dividing by $0$ ?). The answer given is $-160$ , obtained after setting the divisor equal to $0$ to get the value at $x=-5+4i$ .
|
You can use long division. Not sure of the best way to lay it out here, so I'm separating parts with a vertical bar $|$ . The first part is the polynomial I'm dividing, the second part is the factor I'm dividing by, the third part is what I want to multiply the factor by, and the fourth part is the product. The first part on the next line is the remainder; otherwise things remain the same. Stop when the degree of the remainder is less than the degree of the factor, then add up the third part from each line to get the quotient. $x^4+9x^3+35x^2-x+4|x^2+10x+41|x^2|x^4+10x^3+41x^2$ $-x^3-6x^2-x+4|x^2+10x+41|-x|-x^3-10x^2-41x$ $4x^2+40x+4|x^2+10x+41|4|4x^2+40x+164$ $-160|x^2+10x+41|\ldots$ So: $x^4+9x^3+35x^2-x+4= (x^2-x+4)(x^2+10x+41)-160$ , and since $x^2+10x+41=0$ at $x=-5+4i$ , the value there is $-160$ . (Nothing is divided by zero: the identity holds for every $x$ , and we merely evaluate it.)
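The division identity and the final value can both be checked with exact arithmetic: multiply the quotient by the divisor, add the remainder, compare coefficients, then evaluate $f$ at $-5+4i$ directly.

```python
def mul(p, q):
    # Polynomial product; coefficient lists in ascending order of degree.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

f = [4, -1, 35, 9, 1]      # x^4 + 9x^3 + 35x^2 - x + 4
divisor = [41, 10, 1]      # x^2 + 10x + 41
quotient = [4, -1, 1]      # x^2 - x + 4
remainder = -160

prod = mul(quotient, divisor)
prod[0] += remainder
assert prod == f           # quotient * divisor + remainder == f, exactly

x = -5 + 4j
assert abs(x**4 + 9 * x**3 + 35 * x**2 - x + 4 - (-160)) < 1e-9
print("f(-5+4i) = -160")
```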
|
|complex-numbers|
| 0
|
Need help modelling a question that appears to be an expected value question but isn't
|
I'm playing a game right now, and one of the formulas for damage they have is: Your character has 11 equipment slots. Let's call it head, neck, body, back, hand, hand, ring, ring, arm, waist, leg. Your goal is to find out how to maximize your damage. The "formula" for damage is as follows: You have 1100 "total" points of damage. Assume you can assign these points of damage in increments of 100 to any number of equipment slots. Each equipment slot has a 10% chance of dealing its assigned damage value to the enemy. However, only 1 equipment slot will ever be able to deal damage, meaning if even 1 piece of equipment deals damage, the damage rolls for the other pieces of equipment are 0. To illustrate extremes, I can decide to allocate 1100 points of damage to head, or evenly distribute 100 points of damage across all pieces of equipment. I can also decide to allocate 500, 500, and 100 to head, neck, and back, respectively. What's the best point distribution to maximize my damage?
|
Each equipment slot has a 10% chance of dealing its assigned damage value to the enemy. However, only 1 equipment slot will ever be able to deal damage; the damage rolls for the other pieces of equipment are 0. If we take this literally as written, it says that there is a $10\%$ chance that the head deals damage and no other slot deals damage. There is also a $10\%$ chance that the neck deals damage and no other slot (including the head) deals damage. Obviously these two events are mutually exclusive. So are the other nine events for each of the other nine slots, each of which has a $10\%$ chance of occurring according to what you wrote. But you can't have eleven mutually exclusive events, each of which has $10\%$ chance of occurring. The probabilities of mutually exclusive events have to add up to $100\%$ or less, and $11 \times 10\% = 110\%$ . Therefore what you wrote cannot literally be the rules of the game. So you must have meant something else. We can only guess what. A reasonable
|
|probability|
| 0
|
Does a functor from one category to another imply a homomorphism between automorphism groups?
|
I have two categories $C$ and $D$ and a functor $F: C\rightarrow D$ . I select an object $A\in ob(C)$ , is there guaranteed to be a homomorphism $Aut(A)\rightarrow Aut(F(A))$ ? It seems like there should be, but I am having a hard time formalizing the argument, given I'm somewhat new to the subject. If such a homomorphism is guaranteed to exist, do the properties of the functor imply anything about the group homomorphism? Perhaps faithfulness implies injectivity, or fullness implies surjectivity? I suspect that if the functor isn't injective on the objects that might complicate things.
|
Everything you've said is correct except fullness doesn't imply surjectivity. Part of the data of a functor is a family of maps $\operatorname{Hom}(X, Y) \to \operatorname{Hom}(F(X), F(Y))$ , and in particular we have such a map when $X = Y$ . In this case the laws for being a functor say the map $\operatorname{End}(X) \to \operatorname{End}(F(X))$ is a monoid homomorphism, ie preserves the operation (composition) and the identity element (the identity morphism). Any homomorphism between monoids sends invertible elements to invertible elements (exercise) and so restricts to a homomorphism between their maximal subgroups/groups of units. In particular we get a group homomorphism $\operatorname{Aut}(X) \to \operatorname{Aut}(F(X))$ . Fullness implies $\operatorname{End}(X) \to \operatorname{End}(F(X))$ will be surjective, but this property can break when we restrict to groups of units. For example we have a surjective monoid homomorphism $\mathbb{N} \times \mathbb{N} \to \mathbb{Z}$ given by $(a,b)\mapsto a-b$ , but the only unit of $\mathbb{N}\times\mathbb{N}$ is $(0,0)$ , so the induced map on groups of units is the trivial map into $\mathbb{Z}$ , which is far from surjective.
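The counterexample can be tested concretely on a finite window of $\mathbb N\times\mathbb N$ (a sketch, checking the homomorphism law and the units on small samples only):

```python
from itertools import product

# f : (N x N, +) -> (Z, +), f(a, b) = a - b, is a surjective monoid homomorphism.
def f(pair):
    a, b = pair
    return a - b

# Homomorphism law f(u + v) = f(u) + f(v), and f(identity) = 0, on a sample.
for (a1, b1), (a2, b2) in product(product(range(5), repeat=2), repeat=2):
    assert f((a1 + a2, b1 + b2)) == f((a1, b1)) + f((a2, b2))
assert f((0, 0)) == 0

# Units of (N x N, +): (a, b) is invertible iff there is (c, d) with
# (a + c, b + d) = (0, 0), which in N forces (a, b) = (0, 0).
units = [(a, b) for a, b in product(range(5), repeat=2)
         if any((a + c, b + d) == (0, 0) for c, d in product(range(5), repeat=2))]
assert units == [(0, 0)]

print("image of units:", {f(u) for u in units})  # {0}: a proper subgroup of Z
```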
|
|group-theory|category-theory|automorphism-group|functors|
| 1
|
Prove completely regular space has the initial topology of all continuous functions into the unit interval
|
From the definition, for a completely regular space $X$ , a nonempty open set $U\subset X$ , and a point $x\in U$ , there exists a cts $f: X\to[0,1]$ s.t. $f(X-U)=0$ and $f(x)=1$ . Then $x\in f^{-1}((1/2,1])\subset U$ . Does this qualify as a proof of the statement in the title? But aren't we supposed to show there are finitely many cts functions $f_i: X\to [0,1]$ such that $x\in \cap_1^nf_i^{-1}(V_i)\subset U$ ? I can't seem to find a proof in a textbook. That is, open preimages should only be a subbase, not a base, right? Thanks. Update It seems, in this particular case, preimages of open sets in $[0,1]$ do indeed form a base, not just a subbase, due to the following fact: For $V_1, \cdots, V_n$ open sets of $[0,1]$ , and $f_1,\cdots, f_n$ cts functions from $X$ to $[0,1]$ , the intersection $U':=\cap_1^n f_i^{-1}(V_i)$ is open in $\tau$ , and so, for every $x\in U'$ , by definition of complete regularity, there exists a cts $f:X\to [0,1]$ s.t. $f(X-U')=0$ and $f(x)=1$ . So $U'':=f^{-1}((1/2,1])$ satisfies $x\in U''\subset U'$ , and $U''$ is the preimage of a single open set.
|
Let $\tau$ be the topology on $X$ and $\tau_w$ be the initial topology from all continuous functions to $[0,1]$ . Your argument shows that members of $\tau_w$ are a base for $\tau$ . So every member of $\tau$ is a union of members of $\tau_w$ , so every member of $\tau$ is a member of $\tau_w$ . So $\tau\subseteq \tau_w$ . For the other direction, let $x\in S\in\tau_w$ . By definition there are sets $V_1,\ldots,V_n\subseteq [0,1]$ such that the $V_i$ are open in $[0,1]$ and continuous functions $f_1,\ldots,f_n$ such that $x\in\bigcap_{i=1}^n f_i^{-1}(V_i)\subseteq S$ . But each $f_i^{-1}(V_i)$ is $\tau$ -open because the $f_i$ are continuous. So $\bigcap_{i=1}^n f_i^{-1}(V_i)$ is $\tau$ -open. This shows that members of $\tau$ are a base for $\tau_w$ . So by a similar argument to the above, $\tau_w\subseteq \tau$ .
|
|general-topology|
| 1
|
Why do we use a Poisson distribution here rather than Binomial?
|
Approximately 80,000 marriages took place in the state of New York last year. Estimate the probability that for at least one of these couples, (a) both partners were born on April 30 I understand how to find the answer (~.45) but on an exam, what language gives away that this question would be asking for a Poisson distribution rather than a Binomial one?
|
Let's do the calculation both ways and see what happens: Under a binomial model, we have $n = 80000$ , and under the assumption that any individual has an equal probability of being born on any one of $365$ days in a year, and that partners in marriage are not any more or less likely to marry someone sharing the same birthday than any two randomly selected people, then $p = \frac{1}{365^2} = \frac{1}{133225} \approx 7.5061 \times 10^{-6}$ . Then $$X \sim \operatorname{Binomial}(n,p)$$ models the random number of married couples who were both born on April 30, and the question is asking for $$\Pr[X \ge 1] = 1 - \Pr[X = 0] = 1 - \binom{n}{0} p^0 (1-p)^{n-0} = 1 - (1 - p)^n \approx 0.45145729806317.$$ Under a Poisson model, the event rate of couples sharing an April 30 birthday is $$\lambda = np = \frac{80000}{133225} = \frac{3200}{5329} \approx 0.600488.$$ This rate has the units of couples per year, under the assumption that exactly $80000$ couples marry in New York each year; equivalently
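Running both models side by side (with the same assumptions as above: $n = 80000$ couples, $p = (1/365)^2$, leap years ignored) shows how close the two answers are:

```python
import math

n = 80000
p = 1 / 365**2                        # both partners born on April 30

binomial = 1 - (1 - p) ** n           # P(X >= 1) under Binomial(n, p)
poisson = 1 - math.exp(-n * p)        # P(X >= 1) under Poisson(np)

assert abs(binomial - 0.4514573) < 1e-6
assert abs(binomial - poisson) < 1e-5   # the approximation error is tiny here
print(f"binomial: {binomial:.8f}, poisson: {poisson:.8f}")
```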
|
|probability|poisson-distribution|
| 0
|
Evaluate $\lim\limits_{n\to\infty}\int _{0}^{1}\frac{nf(x)}{1+n^2x^2}\,dx$
|
Suppose $f\colon [0, 1] \to \mathbb{R}$ is a continuous function. Find the following limit: $$\lim\limits_{n\to\infty}\int _{0}^{1}\frac{nf(x)}{1+n^2x^2}\,dx$$ I know the answer equals $\frac{\pi}{2}f(0) $, but I can't prove it. Any ideas or insight would be greatly appreciated.
|
Since $f$ is continuous on $[0,1]$ , for every $\epsilon >0$ there exists $\delta>0$ such that $|f(x)-f(0)|<\epsilon$ whenever $|x|<\delta$ . Since continuous functions attain both maximum and minimum values on compact sets, let $M = 2\sup_{x\in [0,1]} |f(x)|$ . Then $$\int_{0}^{1} \frac{nf(x)}{1+n^2x^2}\,dx= \int_{0}^{\delta} \frac{nf(x)}{1+n^2x^2}\,dx+\int_{\delta }^{1} \frac{nf(x)}{1+n^2x^2}\,dx $$ $$= \int_{0}^{\delta} \frac{n(f(x)- f(0))}{1+n^2x^2}\,dx+ \int_{0}^{\delta} \frac{nf(0)}{1+n^2x^2}\,dx +\int_{\delta }^{1} \frac{nf(x)}{1+n^2x^2}\,dx $$ $$=f(0) \arctan(n\delta)+ \int_{0}^{\delta} \frac{n(f(x)- f(0))}{1+n^2x^2}\,dx + \int_{\delta }^{1} \frac{nf(x)}{1+n^2x^2}\,dx. $$ Since $ \displaystyle \lim_{x \to \infty } \arctan(x) =\frac{\pi}{2}$ , for every $\epsilon>0$ there exists $\alpha \in \mathbb{R}$ such that $|\arctan(x)- \frac{\pi}{2}|<\epsilon$ whenever $x >\alpha$ . Choose $N$ such that $N\delta >\alpha$ ; then for all $n \ge N$ , $$\bigg|\frac{\pi f(0)}{2}-f(0) \arctan(n\delta)- \int_{0}^{\delta} \frac{n(f(x)- f(0))}{1+n^2x^2}\,dx- \int_{\delta }^{1} \frac{nf(x)}{1+n^2x^2}\,dx\bigg|$$ can be estimated term by term using $\epsilon$ , $M$ , and the choice of $\delta$ .
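A numerical sanity check of the limit $\frac{\pi}{2}f(0)$, with the arbitrary test choice $f(x)=\cos x$ (so the limit should be $\pi/2$). This is an illustration under that assumption, not a proof; the midpoint rule and step count are my own choices:

```python
import math

# Midpoint-rule approximation of int_0^1 n*cos(x)/(1 + n^2 x^2) dx.
# The integrand is sharply peaked near 0 on the scale 1/n, so a fine grid
# is used.
def integral(n, m=200_000):
    h = 1.0 / m
    return h * sum(n * math.cos((i + 0.5) * h) / (1 + (n * (i + 0.5) * h)**2)
                   for i in range(m))

print(integral(1000), math.pi / 2)  # integral(1000) is within ~0.002 of pi/2
```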
|
|real-analysis|calculus|integration|limits|definite-integrals|
| 0
|
Does every triangle satisfy $a^c + b^c - c^c < \pi$
|
Let $(a,b,c)$ be the sides of a triangle and let its circumradius be $1$ . Is it true that $$ a^c + b^c - c^c < \pi\,? $$ My progress: In the special case where at least two of the three sides are equal, I have been able to show that the upper bound is about $3.13861$ . If two sides are equal and the third side is $c = x$ , then the two equal sides have length $a = b = \sqrt{2 + \sqrt{4-x^2}}$ each. Maximizing the expression using Wolfram Alpha $$ 2\left(\sqrt{2 + \sqrt{4-x^2}}\right)^x - x^x $$ gives $3.13861$ as the upper bound in this case. Furthermore, simulations show that this is also the unconditional maximum, but I have not been able to prove it. Since $3.13861$ is very close to $\pi$ , I expressed the above inequality in terms of $\pi$ to present it in an elegant form, but it seems the proximity is just a coincidence unless I am missing something.
|
The three vertices of any such triangle can be taken to be on the unit circle; without loss of generality, one of them can always be $(1,0)$ . Therefore the entire configuration space of triangles can be parametrized by two angles $x$ and $y$ , with the other two vertices being $(\cos x,\sin x)$ and $(\cos y,\sin y)$ . This is a very reasonable numerical optimization problem, and Mathematica finds the maximum value to be about $3.1386122590888217$ when $x=-1.5106639429716202$ and $y=2.3862607049785898$ , which is to say when the three vertices are $(1,0)$ , $(0.060096151557482914, -0.9981925929238206)$ , and $(-0.728044022440954, 0.6855303796244158)$ . (The lengths of the sides of this triangle are $1.8590556694615026$ , $1.8590556863316139$ , and $1.3710607925562726$ , so perhaps the optimal triangle is really isosceles and the accuracy isn't as good as the decimal places indicate.) I strongly suspect that the maximum value is some random transcendental number having nothing to do with $\pi$ .
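As a rough cross-check of the reported maximum, here is a coarse grid search over the same parametrization (pure Python, no optimizer; trying all three labelings of which side plays the role of $c$ is my own assumption, since the problem does not fix it):

```python
import math

# Vertices (1,0), (cos x, sin x), (cos y, sin y) on the unit circle, as in
# the answer.  Coarse grid, so this only sanity-checks the value ~3.13861.
def best_value(steps=400):
    best = 0.0
    for i in range(1, steps):
        x = -math.pi + 2 * math.pi * i / steps
        for j in range(i + 1, steps):
            y = -math.pi + 2 * math.pi * j / steps
            p = [(1.0, 0.0), (math.cos(x), math.sin(x)), (math.cos(y), math.sin(y))]
            s = [math.dist(p[0], p[1]), math.dist(p[0], p[2]), math.dist(p[1], p[2])]
            for k in range(3):
                c = s[k]
                a, b = s[(k + 1) % 3], s[(k + 2) % 3]
                if c > 0:
                    best = max(best, a**c + b**c - c**c)
    return best

val = best_value()
print(val)  # close to 3.13861, and below pi
```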
|
|geometry|algebra-precalculus|inequality|triangles|maxima-minima|
| 0
|
How to show a function is holomorphic
|
Let $f:\mathbb{D}\to\mathbb{D}$ be a holomorphic function s.t. $0$ is a zero of order $k\geq 1$ . Prove that $f(z)=z^kg(z)$ , in which $g:\mathbb{D}\to\mathbb{D}$ is holomorphic. My attempt: Since $f$ is holomorphic on $\mathbb{D}$ , by Taylor expansion we have $$f(z)=\sum\limits_{n=0}^{\infty}c_nz^n,\quad c_n=\dfrac{f^{(n)}(0)}{n!}.$$ Since $0$ is a zero of order $k\geq 1$ , we have $c_0=c_1=...=c_{k-1}=0,\ c_k\neq 0$ . $\Rightarrow\quad f(z)=z^kg(z)$ , in which $g(z)=c_k+c_{k+1}z+...$ My question is how can I show such $g$ is holomorphic on $\mathbb{D}$ and $|g(z)| \le 1$ ? Could someone help me? Thanks in advance!
|
$g$ is holomorphic because it is given by a power series, and holomorphic is equivalent to analytic: $c_k+c_{k+1}z+\dots$ is a power series with the same radius of convergence as the one for $f$ . For the bound, note that $|f(z)|=|g(z)|\,|z|^k$ , so on the circle $|z|=r<1$ we get $|g(z)| = |f(z)|/r^k \le 1/r^k$ . By the maximum modulus principle this bound holds on all of $|z|\le r$ , and letting $r\to 1$ gives $|g(z)|\le 1$ on the whole disk.
|
|complex-analysis|
| 1
|
Fixing a variable of a $H^1_0(\Omega)$ function
|
Let $\Omega \subset \mathbb{R}^{N} = \mathbb{R}^{N_1} \times \mathbb{R}^{N_2}$ . Suppose that $\Omega \subset A_1 \times A_2$ , where $A_i \subset \mathbb{R}^{N_i}$ . If $u \in C^\infty_0(\Omega)$ , I know that, for a fixed $x \in A_1$ , the function $v_x(y) := u(x,y)$ is in $C^{\infty}_0(A_2)$ . I wonder if the same happens for a function in $H^1_0(\Omega)$ , that is, if $u \in H^1_0(\Omega)$ , is $v_x \in H^1_0(A_2)$ ?
|
This is a particular case of the Sobolev embedding theorems. See Adams, Sobolev spaces, Theorem 5.4. There, embedding theorems are formulated in a more general way that includes your situation. That is, embedding theorems are given for $W^{m,p}(\Omega) \hookrightarrow W^{j,q}(\Omega^k)$ , where $\Omega^k$ is the intersection of $\Omega$ and a $k$ -dimensional hyperplane.
|
|lebesgue-integral|sobolev-spaces|weak-derivatives|
| 1
|
Is the sum of a strictly concave function and a log-concave function log-concave?
|
So I know that a sum of log-concave functions need not be log-concave. But I was wondering if the sum of a strictly concave function and a log-concave function is log-concave. I am considering functions that are the sum of a sigmoid and a (negative) squared norm. Something like this: $$ \tfrac{e^x}{1+e^x} - (x-c)^2 $$ When I plot this, visually it looks log-concave. But I can't find a more structured way to reason about this.
|
To make the search for a counterexample more systematic, consider the second derivative of the logarithm: $$ (\log f)'=\frac{f'}f\;,\\ (\log f)''=\frac{ff''-f'^2}{f^2}\;. $$ So a function can be log-concave without being concave because the $-f'^2$ term can make it so despite $f$ and $f''$ being positive. That suggests that for a counterexample we need to increase the first term without appreciably changing the second. One way to do that is to just add a constant to increase $f$ . For instance, $$\frac{\mathrm e^x}{1+\mathrm e^x}+10$$ isn’t log-concave. Now just add a small convex part; for instance, $$\frac{\mathrm e^x}{1+\mathrm e^x}+10-\frac{x^2}{100}$$ isn’t log-concave, but it’s the sum of a log-concave and a strictly concave function.
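The counterexample can be checked symbolically; a short sketch (sympy assumed available), sampling $(\log f)''$ at a few integer points:

```python
import sympy as sp

x = sp.symbols('x', real=True)
sigmoid = sp.exp(x) / (1 + sp.exp(x))

# Candidate from the answer: log-concave part + large constant + small
# strictly concave part.
f = sigmoid + 10 - x**2 / 100

# Log-concavity would mean (log f)'' <= 0 everywhere.
second = sp.diff(sp.log(f), x, 2)

# A positive sample value witnesses failure of log-concavity.
vals = [float(second.subs(x, t)) for t in range(-10, 11)]
print(max(vals) > 0)  # True: f is not log-concave
```

The witness turns up at moderately negative $x$ (around $x=-3$), where the constant term dominates $f$ but the sigmoid still contributes positive curvature.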
|
|optimization|convex-optimization|
| 1
|
Compute $\lim_{n\to \infty} \int_0^\infty \frac{n\sin(x/n)}{1+x^4}dx$.
|
I am trying to apply the dominated convergence theorem to compute the limit of the following integral: $$ \lim_{n\to \infty} \int_0^\infty \frac{n\sin(x/n)}{1+x^4}dx $$ Note that if we let $y=x/n$ , then $$ \frac{n^2\sin(y)}{1+n^4y^4}\le \frac{n^2}{1+n^4y^4}\le \frac{n^2}{2n^2y^2}=\frac{1}{2y^2} $$ But $\int_0^\infty \frac{1}{2y^2}\,dy$ is not convergent... So I am stuck here...
|
Without making the substitution you can use $g(x)=\frac x {1+x^{4}}$ as a dominating integrable function. [ $|\sin t|\le t$ for all $t>0$ ].
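The domination $|n\sin(x/n)| \le x$ behind the choice of $g$ can be spot-checked numerically (a sanity check on a few sample points, not a proof):

```python
import math

# |n*sin(x/n)| <= n*(x/n) = x, since |sin t| <= t for t > 0.
ok = all(abs(n * math.sin(x / n)) <= x + 1e-12
         for n in (1, 2, 5, 50) for x in (0.1, 1.0, 10.0, 100.0))
print(ok)  # True
```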
|
|real-analysis|
| 0
|
Semidirect product of groups
|
In recent days I have been studying semidirect products of groups, and I have come up with the following question, which has already been answered here ( From semidirect to direct product of groups ), but I can't understand its solution from the comments. Can you please give me a step-by-step clarification? Let $G = N \rtimes_{\varphi} H$ . If there exists a homomorphism $f: G \rightarrow N$ which is the identity on $N$ , then is it true that $G$ is the direct product of $N$ and $H$ ?
|
Since the question concerns equality rather than isomorphism, it seems to me that, with the standard convention for identifying subgroups of $N \rtimes_\phi H$ with $N$ and $H$ , the answer to the question is no. Let $H=N$ be any nonabelian group, and let $\phi$ be the action of $H$ on $N$ by conjugation. Then there is a homomorphism from $G = N \rtimes_\phi H$ to $N$ which is the identity on $N$ and has kernel $\{(h,h^{-1}) : h \in H \}$ , but $G$ is not the direct product of its subgroups $\{(h,1): h \in H \}$ and $\{(1,n): n \in N\}$ , which are the subgroups that are customarily identified with $H$ and $N$ in the semidirect product. Of course we do have $G \cong N \times H$ , but that was not the question.
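The example can be verified by brute force for a small nonabelian group; a sketch with $H = N = S_3$ encoded as permutation tuples (the encoding, helper names, and the choice $f(n,h)=nh$ for the homomorphism are my own illustrative assumptions):

```python
from itertools import permutations, product

# S3 as permutation tuples; compose(p, q) = p after q.
S3 = list(permutations(range(3)))
e = (0, 1, 2)
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))
def inv(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

# G = N x| H with N = H = S3 and phi the conjugation action:
# (n1, h1)(n2, h2) = (n1 * (h1 n2 h1^{-1}), h1 h2).
def mul(g1, g2):
    (n1, h1), (n2, h2) = g1, g2
    return (compose(n1, compose(h1, compose(n2, inv(h1)))), compose(h1, h2))

# f(n, h) = n h is a homomorphism G -> N that is the identity on N:
def f(g):
    n, h = g
    return compose(n, h)

G = list(product(S3, S3))
assert all(f(mul(a, b)) == compose(f(a), f(b)) for a in G for b in G)
assert all(f((n, e)) == n for n in S3)

# ...yet G is not the internal direct product of {(n,1)} and {(1,h)}:
# those two subgroups fail to commute elementwise.
noncommuting = any(mul((n, e), (e, h)) != mul((e, h), (n, e))
                   for n in S3 for h in S3)
print(noncommuting)  # True
```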
|
|abstract-algebra|group-theory|normal-subgroups|semidirect-product|direct-product|
| 1
|
How to prove $a+b+c\ge 3$ with the strange condition?
|
Problem. Let $x,y,z\ge 0$ satisfy both constraints $$xyz\ge \frac{5\sqrt{3}}{9}, \qquad 3(xyz)^2-3+2(x^2+y^2+z^2-x^2y^2-y^2z^2-z^2x^2)=0.$$ Prove that $x+y+z\ge 3.$ I want to ask how to use the condition $3(xyz)^2-3+2(x^2+y^2+z^2-x^2y^2-y^2z^2-z^2x^2)=0.$
|
Some thoughts. Remark: it is still complicated. Let $a := x^2, b := y^2, c := z^2$ . The conditions are written as $$abc \ge \frac{25}{27}, \quad 3abc -3 + 2(a + b + c - ab - bc - ca) = 0. \tag{1}$$ We need to prove that $\sqrt{a} + \sqrt{b} + \sqrt{c} \ge 3$ . WLOG, assume that $a \ge b \ge c$ . We can prove that $$ab \ge 1. \tag{2}$$ Let $p = a + b, q = ab$ . Then $p^2 \ge 4q$ . (1) is written as $$c = \frac{3 + 2q - 2p}{3q + 2 - 2p}, \quad q\cdot \frac{3 + 2q - 2p}{3q + 2 - 2p} \ge \frac{25}{27}. \tag{3}$$ From (2) and (3), we have $3q + 2 - 2p > 0$ and $3 + 2q - 2p > 0$ . From (3), we have $$p \le \frac{27q^2 + 3q - 25}{27q - 25}.$$ We need to prove that $$f(p) := \sqrt{p + 2\sqrt{q}} + \sqrt{\frac{3 + 2q - 2p}{3q + 2 - 2p}} \ge 3. \tag{4}$$ (Note: $\sqrt{a} + \sqrt{b} = \sqrt{a + b + 2\sqrt{ab}}$ .) We can prove that $f(p)$ is concave in $p$ on the range $0 < p \le \frac{27q^2 + 3q - 25}{27q - 25}$ . (Note: both $\sqrt{p + 2\sqrt{q}}$ and $\sqrt{\frac{3 + 2q - 2p}{3q + 2 - 2p}}$ are concave; take the second derivative.) Also, we have
|
|inequality|
| 1
|
Proving that $\text{SL}_2 (\mathbb{Z})$ is closed under inverses
|
I am trying to verify that $\text{SL}_2 (\mathbb{Z})$ is a group under matrix multiplication. The only property I am not certain of is closure under inverses. Given $A \in \text{SL}_2 (\mathbb{Z})$ , I know that $A^{-1}$ exists (it has non-zero determinant). I need to show that $A^{-1}$ has determinant $1$ and integer entries. It's true in general that $\det(A^{-1}) = \frac{1}{\det(A)}$ , so if $\det(A) = 1$ , then $\det(A^{-1}) = 1$ . Proving that $A^{-1}$ has integer entries is a bit tougher, because I don't think it is necessarily true that if $\det(A^{-1}) = 1$ , then all of its entries are integers (though I struggle to think of a counterexample). The only proof I could think of involves the adjugate matrix, $\text{adj}(A)$ . In general, we have $$ A \operatorname{adj}(A) = \det(A) I. $$ As $\det(A) = 1$ , the inverse of $A$ is the adjugate matrix. The adjugate matrix is the transpose of the matrix of cofactors, whose entries are partial determinants around entry $(i,j)$ , up to a sign. T
|
Proving that $A^{−1}$ has integer entries is a bit tougher, because I don't think it is necessarily true that if $\det(A^{-1})=1$ , then all of its entries are integers (though I struggle to think of a counterexample). This logic is not valid. There is no such counterexample. If $A$ is in $SL_2(\Bbb Z)$ , then we cannot find a counterexample, because actually $SL_2(\Bbb Z)$ is a group, as we know. If $A\in M_2(\Bbb Z)$ just satisfies $\det(A^{-1})=1$ , then we have $\det(A)\det(A^{-1})=\det(AA^{-1})=\det(I)=1$ , so that $\det(A^{-1})=1$ implies also $\det(A)=1$ , i.e., $A\in SL_2(\Bbb Z)$ . So there is no counterexample. If you drop the condition that $A$ has integer coefficients, and only require $\det(A^{-1})=1$ , then of course you can take $$ A=\begin{pmatrix} \frac{1}{2} & 0 \cr 0 & 2\end{pmatrix}. $$
|
|group-theory|
| 0
|
A question about the monotonicity of $y_1 = x^e$ and $y_2 = x^{\pi}$
|
As we know from first-year calculus, for the function $f(x) = x^n$ , $x \in \mathbb{R}$ , $n \in \mathbb{N}$ : if $n$ is odd, then $f(x)$ is increasing; if $n$ is even, $f(x)$ is increasing when $x \geq 0$ and decreasing when $x < 0$ . But here we only have the cases where $n$ is a positive integer. We don't know how it goes if $n$ is replaced by some irrational number like $\pi$ or $e$ . So I plotted $y_1 = x^e$ and $y_2 = x^{\pi}$ in MATLAB to study their monotonicity (the code and plots are omitted here). From the plots, we can see that $y_1 = x^e$ and $y_2 = x^{\pi}$ are monotonically increasing on $\mathbb{R}$ , and I am quite curious how to rigorously prove their monotonicity. Can somebody give me a hint on which mathematical tool I should use here, for example the derivative? Thanks!
|
As you can see, there is a warning given which says that the imaginary parts are ignored. It is actually trying to say that for functions like these we cannot have $x<0$ , because then we would get a non-real number. (Recall that $(\text{negative number})^{\text{irrational}}$ yields a complex number.) Now let's say that you define $f:\mathbb{R}\to\mathbb{C}$ and you want to study the nature of the curve when $x<0$ . This is actually absurd, as there is no such thing as a "larger number" among complex numbers. Suppose $x<0$ ; then let $x=-a$ where $a>0$ . Now $$(-a)^{e}=(-1)^ea^e$$ $$=(e^{i\pi})^e a^e=a^e\cdot e^{i\pi e}$$ $$=a^e(\cos(\pi e)+i\sin(\pi e))$$ which is indeed a complex number.
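The identity $(-a)^e = a^e\,e^{i\pi e}$ can be spot-checked in Python, whose complex power also uses the principal branch (the choice $a=2$ is arbitrary):

```python
import cmath

# Principal-branch check of (-a)^e = a^e * e^{i*pi*e} for a > 0.
a = 2.0
e = cmath.e
lhs = complex(-a) ** e                       # principal value of (-2)^e
rhs = a**e * cmath.exp(1j * cmath.pi * e)
print(abs(lhs - rhs) < 1e-9)                 # True
print(lhs.imag)                              # a genuinely nonzero imaginary part
```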
|
|exponential-function|monotone-functions|
| 1
|
$\sigma$-weak continuity of $x \mapsto x \otimes 1$ from $B(H)$ to $B(H \otimes H)$
|
Let $H$ be a separable Hilbert space. Write $B(H)$ for the set of bounded linear operators on $H$ and $H \otimes H$ for the Hilbert space tensor product. For every $A,B \in B(H)$ we can define an operator $A \otimes B$ on $H \otimes H$ by $(A \otimes B)(x \otimes y) = Ax \otimes By$ . Thus, we can define $\omega : T \mapsto T \otimes 1$ from $B(H)$ to $B(H \otimes H)$ . Given how simple $\omega$ is, I guessed that it should be continuous with respect to the $\sigma$ -weak topologies on both sides. That's what I want to show. Here is what I've done so far: Let $(A_i)_i$ be a $\sigma$ -weakly converging net in $B(H)$ and let $A$ be its limit. Let $(\xi_n)_n, (\eta_n)_n$ be sequences in $H \otimes H$ such that $\sum_n \|\xi_n\|^2, \sum_n \|\eta_n\|^2 < \infty$ . We have to show that $\sum_n (\omega(A_i)\xi_n \mid \eta_n) \xrightarrow[i \to +\infty]{}\sum_n (\omega(A)\xi_n \mid \eta_n)$ . For every $n$ , we can write $$\eta_n = \sum_r x_r^n \otimes y_r^n , \hspace{2mm} \xi_n = \sum_s z_s^n \otimes t_s^n$$ First I'll d
|
Here is something useful to keep in mind. A linear map $\pi: M\to N$ between von Neumann algebras is normal if its adjoint $\pi^*$ maps $N_*$ into $M_*$ . In your case, you have $$\pi: B(H)\to B(H\otimes H): x \mapsto x\otimes 1.$$ Consider vectors $\xi, \eta, \xi', \eta'\in H$ . Then we have for $x\in B(H)$ that $$\pi^*(\omega_{\xi, \eta}\otimes \omega_{\xi', \eta'})(x) = \omega_{\xi, \eta}(x) \langle \xi', \eta'\rangle$$ so that $$\pi^*(\omega_{\xi, \eta}\otimes \omega_{\xi', \eta'}) = \langle \xi', \eta'\rangle \omega_{\xi, \eta} \in B(H)_*.$$ Since the functionals $\omega_{\xi, \eta}\otimes \omega_{\xi', \eta'}$ are linearly dense in $B(H\otimes H)_*$ , we conclude that $$\pi^*(B(H\otimes H)_*))\subseteq B(H)_*$$ whence $\pi$ is normal. Remark: No separability of $H$ was used, and you can replace the second factor of the tensor product by an arbitrary Hilbert space. Alternatively, you can use that a $*$ -isomorphism between two von Neumann algebras is automatically $\sigma$ -weakly continuous.
|
|functional-analysis|operator-theory|tensor-products|operator-algebras|
| 1
|
One more time, ZF based proof that set of all sets does not exist
|
I know it has been asked several times, but there is always one step that I don't see. My argument goes as follows: Given the set $V$ of all sets, we have $V \in V$ . By the Axiom Schema of Specification we can use the property $P(x) = x \not\in x$ and prove that there exists $A = \{x \in V : x \not\in x\}$ . We can derive a contradiction by checking whether $A \in A$ (since $A \in V$ , if $A \not\in A$ then $A \in A$ , and if $A \in A$ then $A \not\in A$ , a contradiction). This proves that such a set $A$ does not exist, but I cannot see why we can conclude that $V$ does not exist, since we could have started the analysis with any other set than $V$ (that is, the proof is independent of the initial set) that contains $A$ .
|
There is no other set that contains the same $A$ . Start with a set $Y$ and form $$ S=\{p\in Y\colon p\not\in p\} $$ What can we say about $S$ ? Is $S\not\in Y$ ? Possibly. Is $S\in Y$ and $S\not\in S$ ? No, because then by definition of $S$ , $S\in S$ . Is $S\in Y$ and $S\in S$ ? No, because then by definition of $S$ , $S\not\in S$ . Conclusion: The first possibility must be true and $S\not\in Y$ . Corollary: For any set $Y$ , $\mathcal P(Y)\not\subseteq Y$ where $\mathcal P$ denotes the powerset. Now if $V$ is a set of all sets, it must satisfy $\mathcal P(V)\subseteq V$ , because all sets are in $V$ .
|
|elementary-set-theory|set-theory|axioms|
| 1
|
Application of the Intermediate Value Theorem
|
Let $f$ be a continuous function on the set of real numbers. If $f(x)f(1-x)\le 0$ for each $x\in (0,\frac{1}{2})$ , then find $f(\frac{1}{2})$ . I guess we obtain $f(\frac{1}{2})=0$ by applying the intermediate value theorem. However, I am unable to implement it. Help please.
|
No need for IVT. Let $x \to \frac 12$ to get $f(\frac 1 2)f(\frac 1 2) \leq 0$ . This implies $f(\frac 1 2)=0$ . [ $a^{2}\ge 0$ for every real number $a$ ].
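A tiny sanity check with the hypothetical example $f(x) = x - \tfrac12$, which satisfies the hypothesis since $f(x)f(1-x) = -(x-\tfrac12)^2 \le 0$:

```python
# f(x)*f(1-x) = (x - 1/2)*(1/2 - x) = -(x - 1/2)^2 <= 0 on (0, 1/2),
# and indeed f(1/2) = 0, matching the continuity argument.
f = lambda x: x - 0.5
ok = all(f(x) * f(1 - x) <= 0 for x in (i / 100 for i in range(1, 50)))
print(ok, f(0.5))  # True 0.0
```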
|
|real-analysis|calculus|
| 0
|
One variable equation on real numbers
|
Find the real solutions of the following equation $$\frac{4x-1}{x^2-2x+2}+\frac{4x+7}{x^2+2x+2}=\frac{4(2x+3)}{x^2+2}.$$ I think the problem was given in a junior olympiad. My best idea was to denote $x-1=a$ , $x+1=b$ and after adding $2$ the LHS becomes $$\frac{(a+2)^2}{a^2+1}+\frac{(b+2)^2}{b^2+1}$$ which looks like a Cauchy-Schwarz inequality. Unfortunately, I couldn't find a good form for the RHS. I also tried $x^2-2x+2=a$ , $x^2+2x+2=b$ which leads to a somewhat nice looking $$\frac{b-1}{a}+\frac{7-a}{b}=\frac{4(b-a+6)}{a+b}$$ but the computations are not nice. By plotting, there are four solutions among which are $\pm\sqrt{2}$ .
|
By multiplying with a common denominator the equation is equivalent to $$ - 3x^4 + 8x^3 + 12x^2 - 16x - 12=0 $$ By writing it as $-(3x^2+ax+b)(x^2+cx+d)$ and comparing coefficients we see that this is just $$ (3x^2 - 8x - 6)(x^2 - 2)=0. $$ This gives four real solutions.
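The algebra can be double-checked with sympy (assumed available): clearing all three denominators and factoring reproduces the quartic up to an overall constant.

```python
import sympy as sp

x = sp.symbols('x')

# Clear all three denominators of the original equation; the resulting
# polynomial is the answer's quartic times an overall constant.
num = sp.expand(
    (4*x - 1)*(x**2 + 2*x + 2)*(x**2 + 2)
    + (4*x + 7)*(x**2 - 2*x + 2)*(x**2 + 2)
    - 4*(2*x + 3)*(x**2 - 2*x + 2)*(x**2 + 2*x + 2)
)
print(num)  # -6*x**4 + 16*x**3 + 24*x**2 - 32*x - 24

# Dividing by -2 gives 3*x**4 - 8*x**3 - 12*x**2 + 16*x + 12, which factors
# as the product of (x**2 - 2) and (3*x**2 - 8*x - 6), as claimed:
print(sp.factor(num / -2))
```

In particular $x=\pm\sqrt2$ are roots, matching the plot mentioned in the question.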
|
|algebra-precalculus|contest-math|real-numbers|
| 1
|
Proving that $\text{SL}_2 (\mathbb{Z})$ is closed under inverses
|
I am trying to verify that $\text{SL}_2 (\mathbb{Z})$ is a group under matrix multiplication. The only property I am not certain of is closure under inverses. Given $A \in \text{SL}_2 (\mathbb{Z})$ , I know that $A^{-1}$ exists (it has non-zero determinant). I need to show that $A^{-1}$ has determinant $1$ and integer entries. It's true in general that $\det(A^{-1}) = \frac{1}{\det(A)}$ , so if $\det(A) = 1$ , then $\det(A^{-1}) = 1$ . Proving that $A^{-1}$ has integer entries is a bit tougher, because I don't think it is necessarily true that if $\det(A^{-1}) = 1$ , then all of its entries are integers (though I struggle to think of a counterexample). The only proof I could think of involves the adjugate matrix, $\text{adj}(A)$ . In general, we have $$ A \operatorname{adj}(A) = \det(A) I. $$ As $\det(A) = 1$ , the inverse of $A$ is the adjugate matrix. The adjugate matrix is the transpose of the matrix of cofactors, whose entries are partial determinants around entry $(i,j)$ , up to a sign. T
|
The argument with the adjugate matrix is correct and using it will get you to the heart of the matter: Any $n\times n$ matrix $A$ with entries in any commutative ring $R$ is invertible (in $R^{n\times n}$ ) if and only if $\det(A)$ is a unit in $R$ . Having established this, it is then clear that $\mathrm{SL}_n(R)$ as the kernel of $\det$ only contains invertible matrices and is therefore a group by your previous arguments.
|
|group-theory|
| 0
|
Proving that $\text{SL}_2 (\mathbb{Z})$ is closed under inverses
|
I am trying to verify that $\text{SL}_2 (\mathbb{Z})$ is a group under matrix multiplication. The only property I am not certain of is closure under inverses. Given $A \in \text{SL}_2 (\mathbb{Z})$ , I know that $A^{-1}$ exists (it has non-zero determinant). I need to show that $A^{-1}$ has determinant $1$ and integer entries. It's true in general that $\det(A^{-1}) = \frac{1}{\det(A)}$ , so if $\det(A) = 1$ , then $\det(A^{-1}) = 1$ . Proving that $A^{-1}$ has integer entries is a bit tougher, because I don't think it is necessarily true that if $\det(A^{-1}) = 1$ , then all of its entries are integers (though I struggle to think of a counterexample). The only proof I could think of involves the adjugate matrix, $\text{adj}(A)$ . In general, we have $$ A \operatorname{adj}(A) = \det(A) I. $$ As $\det(A) = 1$ , the inverse of $A$ is the adjugate matrix. The adjugate matrix is the transpose of the matrix of cofactors, whose entries are partial determinants around entry $(i,j)$ , up to a sign. T
|
You can just use the explicit form of the inverse of a $2 \times 2$ matrix $A=\begin{pmatrix}a & b \\ c & d \end{pmatrix}$ , which is given by $$ A^{-1}=\frac{1}{\det(A)} \begin{pmatrix}d & -b \\ -c & a \end{pmatrix}, $$ and this has integer entries if $A$ has only integer entries and $\det(A)= \pm 1$ .
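A quick sketch of this in code (the sample matrix and the helper `inv2` are my own illustrative choices):

```python
from fractions import Fraction

# The answer's formula: for A = [[a, b], [c, d]] with det = ad - bc,
# the inverse is (1/det) * [[d, -b], [-c, a]].
def inv2(a, b, c, d):
    det = a*d - b*c
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

# An element of SL_2(Z): det = 2*3 - 1*5 = 1, so the inverse has the
# integer entries 3, -1, -5, 2.
A_inv = inv2(2, 1, 5, 3)
print(all(entry.denominator == 1 for row in A_inv for entry in row))  # True
```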
|
|group-theory|
| 0
|
How to find a complex integral when the singular point is on the given curve
|
How do I evaluate $\oint \frac{1}{z-2}\,dz$ around the square with vertices $2\pm 2i$ , $-2\pm 2i$ ? The function is not analytic at $z=2$ , but $z=2$ lies on the given curve, so I can't apply Cauchy's integral formula. How can I do this?
|
The solution in my first answer IS Cauchy's Principal Value, it has nothing to do with an average. This is a completely geometrical result, it comes from the fact that we are integrating over half a circle. I'll try to make all the steps, but still skip a few easy ones: Let $C$ be a square with corners at $\pm2 + i \pm2$ . We want to calculate the $PV\oint_C\mathrm{d}z/(z-2)$ : (For the parametric contour see my first post, second image.) \begin{eqnarray} PV \oint_C \dfrac{\mathrm{d}z}{z-2} &=& \lim_{\epsilon\rightarrow0^+} \left( \int_{\theta=3\pi/2}^{\pi/2}\dfrac{-i\epsilon e^{-i\theta}\mathrm{d}\theta}{(2-\epsilon e^{-i\theta})-2} + \right. \nonumber\\ && + \int_{z=-2-2i}^{2-2i}\dfrac{\mathrm{d}z}{z-2} + \int_{z=2-2i}^{2-i\epsilon}\dfrac{\mathrm{d}z}{z-2} + \int_{z=2+i\epsilon}^{2+2i}\dfrac{\mathrm{d}z}{z-2} \nonumber\\ &&\left. + \int_{z=2+2i}^{-2+2i}\dfrac{\mathrm{d}z}{z-2} + \int_{z=-2+2i}^{-2-2i}\dfrac{\mathrm{d}z}{z-2} \right) \nonumber\\ &=&\lim_{\epsilon\rightarrow{0^+}} \int
|
|contour-integration|complex-analysis|
| 0
|
Limit Points of a Sequence in the Excluded Point Topology
|
Consider $\mathbb{R}$ equipped with the excluded point topology $\tau_0 = \{U \subseteq \mathbb{R} : 0 \notin U \text{ or } U = \mathbb{R}\}$ . Identify the limit points of the sequence $(1+\frac{1}{n})_{n \in \mathbb{N}}$ in $(\mathbb{R}, \tau_0)$ . So, I've gotten quite far with this; I think I have shown that $0$ is a limit point of this sequence. For points $x\neq 0$ I am struggling to reach a conclusion. This is what I have: Consider $x \neq 0$ in $\mathbb{R}$ . Let the elements of $\{U \subseteq \mathbb{R} : 0 \notin U\}$ be denoted by $U_i$ for $i\in I$ where $I$ is some index set, and note that these are all open sets. The open sets which contain $x$ are $\{x\}$ , $\mathbb{R}$ , and those $U_i$ containing $x$ . Note that $\{x\} \subseteq \mathbb{R}$ and $\{x\} \subseteq U_i$ . Thus, $x$ is a limit point for the sequence if there exists an $N \in \mathbb{N}$ such that $1+\frac{1}{n} \in \{x\}$ for all $n \geq N$ . So, would it be correct to conclude that
|
Your definitions are not quite right. The correct definition is the following: $x$ is a limit point of a sequence $a_n$ if for all open sets $U$ that contain $x$ and all $n\in \mathbb{N}$ there exists an index $m>n$ such that $a_m \in U$ . First, let's prove that $0$ is a limit point: the only open set that contains $0$ is $\mathbb{R}$ itself, and for all $n\in \mathbb{N}$ there exists a $m>n$ (take $m=n+1$ for example) such that $a_m \in \mathbb{R}$ . Now take an $x \neq 0$ . The set $\{x\}$ is open in $\tau_0$ . If $x$ is a limit point we must have $a_n = x$ for infinitely many $n \in \mathbb{N}$ which cannot happen since the terms of $a_n$ are distinct. Hence the set of limit points of $a_n$ is $\{0\}$ .
|
|general-topology|
| 1
|
Compute $\lim_{n\to \infty} \int_0^\infty \frac{n\sin(x/n)}{1+x^4}dx$.
|
I am trying to apply the dominated convergence theorem to compute the limit of the following integral: $$ \lim_{n\to \infty} \int_0^\infty \frac{n\sin(x/n)}{1+x^4}dx $$ Note that if we let $y=x/n$ , then $$ \frac{n^2\sin(y)}{1+n^4y^4}\le \frac{n^2}{1+n^4y^4}\le \frac{n^2}{2n^2y^2}=\frac{1}{2y^2} $$ But $\int_0^\infty \frac{1}{2y^2}\,dy$ is not convergent... So I am stuck here...
|
For the pleasure of computing an interesting integral. $$I=\int\frac{n \sin \left(\frac{x}{n}\right)}{x^4+1}\,dx=\int \frac{n^2 \sin (t)}{n^4 t^4+1}\,dt=\int \frac{n^2 \sin (t)}{n^4 \prod_{k=1}^4 (t-r_k)}\,dt$$ where $$r_{1,2}=\pm\frac{1+i}{\sqrt{2} n}\qquad \text{and}\qquad r_{3,4}=\pm\frac{1-i}{\sqrt{2} n}$$ Partial fraction decomposition $$\frac{n^2 }{n^4 \prod_{k=1}^4 (t-r_k)}=\frac 1 {n^2} \sum_{k=1}^4 \frac {A_k}{t-r_k}$$ and then four integrals $$J_k=\int_0^\infty \frac {\sin(t)}{t-r_k}\,dt$$ Obvious changes of variable and expansion of the sine function lead to $$J_k=\frac{1}{2} (2 \text{Si}(r_k)+\pi ) \cos (r_k)-\text{Ci}(-r_k) \sin (r_k)$$ Back to $x$ and the bounds, using series, $$K=\int_0^\infty\frac{n \sin \left(\frac{x}{n}\right)}{x^4+1}\,dx=\frac{\pi }{4}-\frac{6 \log (n)-6 \gamma +11}{36 n^2}-\frac{\pi}{480 n^4}+O\left(\frac{1}{n^6}\right)$$ which is quite a good approximation even for small values of $n$ .
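The leading term $\pi/4$ and the $1/n^2$ correction can be checked numerically; a rough sketch with a composite Simpson rule (the truncation point and step count are arbitrary choices of mine, and the tail beyond the cutoff is bounded by $n/(3\cdot 500^3)$):

```python
import math

def integrand(x, n):
    return n * math.sin(x / n) / (1 + x**4)

def simpson(f, a, b, m):
    # Composite Simpson's rule with m (even) subintervals.
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

n = 50
approx = simpson(lambda x: integrand(x, n), 0.0, 500.0, 100_000)

gamma = 0.5772156649015329  # Euler-Mascheroni constant
asymptotic = math.pi / 4 - (6 * math.log(n) - 6 * gamma + 11) / (36 * n**2)
print(approx, asymptotic)  # both ~0.78505, just below pi/4
```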
|
|real-analysis|
| 0
|
If $ f(x \cdot f(y) + f(x)) = y \cdot f(x) + x $, then $f(x)=x$
|
Let $ f : \mathbb{Q} \rightarrow \mathbb{Q} $ be a function which has the following property: $$ f(x \cdot f(y) + f(x)) = y \cdot f(x) + x \;,\; \forall \; x, y \in \mathbb{Q} $$ Prove that $ f(x) = x \;,\; \forall \; x \in \mathbb{Q} $. So far, I've found that $f(f(x)) = x$, $f(0) = 0$ and $f(-1) = -1$. (For $f(0)=0$, we substitute $x=0$ to arrive at $f(f(0))-yf(0)$ identically $0$ for all rational $y$; for $f(f(x))=x$, we substitute $y=0$ and use $f(0)=0$. For $f(-1) = -1$, substitute $x=y=-1$ to get $f(0)=-f(-1)-1$, and use $f(0)=0$.)
|
Putting $x = y = 0$ , we get $f(f(0)) = 0.$ Now, putting $x=0$ , we get $f(f(0)) = yf(0)$ $\forall y$ . Hence, $f(0) = 0$ as well. Substituting $y = 0$ , we get that $f(f(x)) = x.$ Also, putting $y = f(y)$ in original equation, we get : $f(xy+f(x)) = f(x)f(y)+x$ . Putting $x=y=1$ in this equation, we get $f(1+f(1)) = f^{2}(1)+1$ Now, putting $x=y=1$ in original equation, we get $f(2f(1)) = f(1)+1$ $\implies 2f(1) = f(f(1) + 1). $ But, $f(f(1)+1) = f^{2}(1)+1 = 2f(1).$ Thus, we get $f(1) = 1.$ Now, assuming $f(n)=n$ and putting $x=1$ and $y=n$ respectively in original equation gives us : $f(n+1) = n+1$ . Hence, $f(x) = x$ $\forall x \in \mathbb{N}$ by induction. Now, putting $x=1,y=-1$ in $f(xy+f(x)) = f(x)f(y) + x$ , we get : $f(f(1)-1) = f(-1) + 1 = 0.$ Hence, $f(-1) = -1.$ Now, putting $x=-1$ and $y=n$ in original equation, we get: $f(-(n+1)) = -(n+1).$ Hence, $f(x) = x$ $\forall x \in \mathbb{Z}.$ Now, putting $y = \frac{p}{q}$ and $x=q$ , we get : $f(q.f(\frac{p}{q})+q) = p+q$ . Ap
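As a quick consistency check (not part of the proof), one can verify on a grid of rationals that $f(x)=x$ does satisfy the functional equation:

```python
from fractions import Fraction

# Verify f(x*f(y) + f(x)) == y*f(x) + x for f = identity on exact rationals.
# This confirms the claimed solution; it says nothing about uniqueness.
f = lambda t: t
xs = [Fraction(p, q) for p in range(-4, 5) for q in range(1, 5)]
ok = all(f(x * f(y) + f(x)) == y * f(x) + x for x in xs for y in xs)
print(ok)  # True
```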
|
|functional-equations|rational-numbers|
| 0
|
Recursive definition of union seemingly informal?
|
In my set theory course, we introduced the Axiom of Union. For any set $x$ there exists a set $y$ s.t. all elements of $y$ are elements of some $z\in x$ . Notation for this set is $y= \bigcup x$ . $\forall x\ \exists y \ \forall z (z\in y \leftrightarrow \exists w \ (z\in w \land w\in x))$ My professor then said We define $\{x_1,...,x_{n+1}\}$ recursively as $\{x_1,...,x_n\}\cup\{x_{n+1}\}$ . This makes intuitive sense to me, but the idea of a "recursive definition" doesn't come up until way later in the course! We need the axiom of infinity to define things recursively on $\mathbb{N}$ . This definition of the union was given in lecture 1. Showing recursion on $\mathbb{N}$ was in lecture 13! If this definition doesn't work, we can't guarantee existence of sets of an arbitrary finite size. If I wanted to work with a set of $n$ elements I could easily just apply Pairing and Union and show it existed, but I can't generalise this without the recursiveness definition. Am I missing something
|
There is a subtle point, so subtle and confusing, that it is often better to leave it out and let students ask for clarification than to confuse the entire class so early on. When we do set theory, or really any sort of mathematics, we still work inside a universe. We simply study set theory as an object. This mathematical universe can be "material" (i.e., a larger universe of some set theory) or "syntactic" (i.e., manipulating strings of characters) but it has some kind of understanding of what is induction and recursion. After all, formulas that we use to write down things are often defined by recursion. The difficulty arises when one is exposed to the fact that the meta-universe may disagree with the universe on the concept of the natural numbers. In other words, given a model of set theory inside a large universe of set theory, the model and the universe may very well disagree about the natural numbers and some of their properties. One thing, however, that we can say, is that the u
|
|set-theory|axioms|
| 1
|
One more time, ZF based proof that set of all sets does not exist
|
I know it has been asked several times, but there is always one step that I don't see. My argument goes as follows: Given the set $V$ of all sets, we have $V \in V$ . By the Axiom Schema of Specification we can use the property $P(x) = x \not\in x$ and prove that there exists $A = \{x \in V : x \not\in x\}$ . We can derive a contradiction by checking whether $A \in A$ (since $A \in V$ , if $A \not\in A$ then $A \in A$ , and if $A \in A$ then $A \not\in A$ , a contradiction). This proves that such a set $A$ does not exist, but I cannot see why we can conclude that $V$ does not exist, since we could have started the analysis with any other set than $V$ (that is, the proof is independent of the initial set) that contains $A$ .
|
Suppose there is a set containing all sets, say $A$ . Then by the axiom of comprehension, there is a set $B$ such that $\forall x(x \in B \iff (x \in A \wedge x \notin x))$ . Since this holds for all $x$ , it holds for $B$ , so $(B \in B \iff (B \in A \wedge B \notin B))$ . But $B \in A$ as $A$ contains all sets, so $(B \in B \iff B \notin B)$ , a contradiction. Hence there is no set containing all sets.
|
|elementary-set-theory|set-theory|axioms|
| 0
|
One more time, ZF based proof that set of all sets does not exist
|
I know it has been asked several times, but there is always one step that I don't see. My argument goes as follows: Given the set $V$ of all sets, we have $V \in V$ . By the Axiom Schema of Specification we can use the property $P(x) = x \not\in x$ and prove that there exists $A = \{x \in V : x \not\in x\}$ . We can derive a contradiction by checking whether $A \in A$ (since $A \in V$ , if $A \not\in A$ then $A \in A$ , and if $A \in A$ then $A \not\in A$ , a contradiction). This proves that such a set $A$ does not exist, but I cannot see why we can conclude that $V$ does not exist, since we could have started the analysis with any other set than $V$ (that is, the proof is independent of the initial set) that contains $A$ .
|
By the axiom schema of separation, for any set $A$ and formula $\phi$ : $B$ is a set if $(\forall x)(x\in B\iff x \in A \land \phi(x))$ . Now suppose there is a universal set $V$ . Then we can replace $A$ in the axiom with $V$ , and the axiom trivially becomes unrestricted comprehension: $(\forall x)(x\in B\iff x \in V \land \phi(x))$ implies $(\forall x)(x\in B\iff \phi(x))$ . Now plug in Russell's paradox.
|
|elementary-set-theory|set-theory|axioms|
| 0
|
Differentiable $f: [0, 1] → \mathbb{R}: \int_{0}^{1} f(x)dx = \int_{0}^{1}xf(x)dx.$ Prove $\exists c \in (0, 1):f(c) = 2018\int_{0}^{c}f(x)dx$
|
$f: [0, 1] \rightarrow \mathbb{R}$ is a differentiable function such that $\int_{0}^{1} f(x)dx = \int_{0}^{1}xf(x)dx.$ Prove that there exists $c \in (0, 1)$ such that $f(c) = 2018\int_{0}^{c}f(x)dx$ My attempt: $\int_{0}^{1} f(x)dx = \int_{0}^{1}xf(x)dx$ $\iff \int_{0}^{1}f(x)(x-1)dx = 0 \quad (*)$ Let $F(x)$ be an antiderivative of $f(x)(x-1)$ . $(*) \iff F(1) - F(0) = 0$ $\iff F(1) - F(0) = F(0) - F(0) = 0$ Let $G(x) = F(x) - F(0) = \int_{0}^{x}f(t)(t-1)dt$ and $H(x) = e^{-2018x}G(x)$ $F(x)$ is differentiable on $[0, 1]$ , so $G(x)$ and $H(x)$ are also differentiable on $[0, 1].$ Since $G(1) = G(0) = 0$ , we have that $H(0) = H(1) = 0$ . By Rolle's theorem, there exists $c \in (0, 1)$ such that: $H'(c) = 0$ $\iff -2018e^{-2018c}G(c) + G'(c)e^{-2018c} = 0$ $\iff -2018\int_{0}^{c}f(x)(x - 1)dx + f(c)(c-1) = 0$ $\iff 2018\int_{0}^{c}f(x)dx - f(c) = 2018\int_{0}^{c}xf(x)dx - cf(c)$ At this point, I'm stuck. I need to show that the $LHS = 0$ in order to get the desired outcome, but I hav
|
Let $$G(x) = e^{-2018x}\int_0^x f(t)dt\quad x\in(0,1)$$ $G$ is differentiable with $$G'(x)=e^{-2018x}\left(f(x)-2018\int_0^x f(t)dt\right)$$ so the goal is to find a $c\in(0,1)$ such that $G'(c)=0$ . Rolle's theorem is a good candidate for that. We already have $G(0)=0$ so we need some $x_1 > 0$ such that $G(x_1)=0$ which is equivalent to $\int_0^{x_1} f(t)dt = 0$ . Let's define $$H(x) = x\int_0^x f(t)dt - \int_0^x tf(t)dt$$ $H$ is differentiable with $$H'(x)=\int_0^x f(t)dt$$ so it's enough to find $x_1 \in (0,1)$ such that $H'(x_1)=0$ . From the assumption we have $H(0)=H(1)=0$ and so the existence of such $x_1$ is immediate by Rolle's theorem. Finally, we apply Rolle's theorem again, this time for $G$ in $[0, x_1]$ , to get the required $c \in (0, x_1) \subseteq (0, 1)$ .
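The argument above can be sanity-checked numerically. Below is a small sketch (an addition, not part of the proof) using the concrete choice $f(x) = 3x-1$, which satisfies the hypothesis; the function names are ours, not from the answer.

```python
# Sanity check with f(x) = 3x - 1, which satisfies the hypothesis, since
#   int_0^1 (3x-1) dx = 1/2  and  int_0^1 x(3x-1) dx = 1 - 1/2 = 1/2.
# We hunt for c in (0,1) with f(c) = 2018 * int_0^c f(t) dt by bisection.

def f(x):
    return 3 * x - 1

def F(x):                      # F(x) = int_0^x f(t) dt, in closed form
    return 1.5 * x * x - x

def g(c):                      # a root of g  <=>  f(c) = 2018 * F(c)
    return f(c) - 2018 * F(c)

lo, hi = 0.1, 1.0              # g(0.1) > 0 > g(1), so a root lies between
assert g(lo) > 0 > g(hi)
for _ in range(200):           # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
c = (lo + hi) / 2
print(c)
```

The root found lies in $(0,1)$, as the theorem predicts.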
|
|real-analysis|calculus|rolles-theorem|
| 1
|
$\int_{1/2}^1t^{{1\over f(x)}-3} \sin t^2dt$
|
I got this integral $\int_{1/2}^1t^{{1\over f(x)}-3} \sin t^2\,dt$ (note: $t$ is raised to the power ${1\over f(x)}-3$ , in case it's hard to see). I am really struggling to find the answer to this; I'm not even sure how to approach it. The original question was $\lim_{x\to 0^+} \int_{1/2}^1t^{\frac{1}{f(x)}-3}\sin(t^2)\,dt$ , where we also know that $f$ is a real function with $f(0)=0$ , continuous, and differentiable for $x \neq 0$ , with $f'(x)= \frac{e^x}{3f^2(x)+2f(x)+1}$ . There also exists an $x_0$ with $0 < x_0$ such that $f(x_0)=1$ . I have found: $f^3(x)+f^2(x)+f(x)=e^x-1$ ; $f$ is differentiable at $x=0$ with $f'(0)=1$ ; $\int_0^{x_0} e^xf(x)\,dx= {23\over 12}$ . Although I am not sure how many of these will be needed. EDIT: I am aware of the solution by DTC but I was hoping to find a "simpler" (for lack of a better word) one if possible; any help would be much appreciated!
|
It's easy to see that $3x^2+2x+1 = 2x^2 + (x+1)^2 >0$ for all $x\in \mathbb{R}$ and so $f'(x)>0$ for all $x\in \mathbb{R}$ . So $f$ is strictly increasing and in particular $f(x)\geq 0$ for $x\geq 0$ . Furthermore, $|\sin(t)| \leq |t|$ for all $t\in \mathbb{R}$ and in particular for $0\leq t\leq \pi$ we have $0\leq \sin(t)\leq t$ . So for $t\in [1/2, 1]$ we get $$0\leq t^{\frac{1}{f(x)}-3}\sin(t^2)\leq t^{\frac{1}{f(x)}-3}\cdot t^2 = t^{\frac{1}{f(x)}-1}$$ Integrating both sides from $1/2$ to $1$ we get $$0\leq \int_{1/2}^1 t^{\frac{1}{f(x)}-3}\sin(t^2)\,dt\leq f(x)\left(1-\left(\frac{1}{2}\right)^\frac{1}{f(x)}\right)$$ Finally, notice that $1/f(x) \to \infty$ as $x\to 0^+$ and so the limit of the RHS is equal to $0$ . From the squeeze theorem, we get $$\lim_{x\to0^+}\int_{1/2}^1 t^{\frac{1}{f(x)}-3}\sin(t^2)\,dt=0$$
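The squeeze can be illustrated numerically (an addition to the answer): writing $s$ for the role of $f(x)$, the integral should stay between $0$ and $s\left(1-(1/2)^{1/s}\right)$ and shrink as $s \to 0^+$.

```python
import math

# Midpoint-rule evaluation of I(s) = int_{1/2}^1 t^(1/s - 3) sin(t^2) dt,
# where s stands in for f(x); compare against the upper bound from the answer.
def integral(s, n=20000):
    a, b = 0.5, 1.0
    h = (b - a) / n
    ts = (a + (k + 0.5) * h for k in range(n))
    return h * sum(t ** (1.0 / s - 3.0) * math.sin(t * t) for t in ts)

svals = (0.5, 0.1, 0.02)
vals = [integral(s) for s in svals]
bounds = [s * (1 - 0.5 ** (1 / s)) for s in svals]
print(vals, bounds)
```

Each value sits inside its bound, and both shrink toward $0$ as $s$ decreases, consistent with the squeeze-theorem conclusion.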
|
|calculus|integration|
| 1
|
How to simplify the boolean expression $(x\times y)'+(y\times z)$?
|
Expression: $$(x\times y)'+(y\times z)$$ My attempt: $(xy)' + (yz)$ $(x'+y') + (yz) \quad \textit{After applying de Morgan's Axiom}$ $x' + (y' + yz) \quad \textit{After applying 1st Distributive Axiom}$ $x'+ (y' + y) + (y'z) \quad \textit{Rewriting}$ $x' + 1 + (y'z) \quad \textit{After applying Inverse Axiom}$ $x' + y' + z \quad \textit{After applying second Identity Axiom}$ The answer is supposed to be: $(x'+y')+z$ N.B. I am still new to this and learning, I am using the following book: Discrete Mathematics for Computing / Edition 3 by Peter Grossman Question: What am I missing or have I done wrong?
|
We want to simplify $(x\times y)'+(y\times z)$ . I will first show how I would solve this task; after that, let us look at your attempt. Simplifying the boolean expression: I will use the $\equiv$ symbol to indicate that two boolean expressions $A$ and $B$ have the same value, and the symbol $\top$ for the value true (or one). \begin{align*} &(x\times y)' + (y\times z)&\overset{(1)}{\equiv} \\ &(x' + y') + (y\times z) &\overset{(2)}{\equiv} \\ &((x'+y')+y)\times((x'+y')+z) &\overset{(3)}{\equiv}\\ &(x'+(y'+y))\times(x'+y'+z) &\overset{(4)}{\equiv} \\ &(x'+\top)\times(x'+y'+z)&\overset{(5)}{\equiv} \\ &\top\times(x'+y'+z)&\overset{(6)}{\equiv} \\ &(x'+y'+z)&\overset{(7)}{\equiv} \\ &(x'+y')+z \end{align*} We end up with the solution presented in the book. Now I will briefly present the single steps; $A, B$ and $C$ are arbitrary boolean expressions. In (1) I used de Morgan's law. In (2) I applied the distributivity rule $A+(B\times C)\equiv(A+B)\times(A+C)$ .
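A quick exhaustive check (an addition to the answer, not part of the book's derivation) confirms that the simplified form agrees with the original expression on all eight inputs:

```python
from itertools import product

# Truth-table check: (x*y)' + y*z  versus the simplified  x' + y' + z.
def original(x, y, z):
    return (not (x and y)) or (y and z)

def simplified(x, y, z):
    return (not x) or (not y) or z

equal = all(original(*v) == simplified(*v)
            for v in product([False, True], repeat=3))
print(equal)
```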
|
|logic|boolean-algebra|boolean|
| 0
|
Evaluating $\lim_{(x, y) \rightarrow (0, 0)} \frac{x^2 y + x \sin y}{\sqrt{x^2 + y^2 - x y}}$
|
I'm trying to find the following limit: $$\lim_{(x, y) \rightarrow (0, 0)} \frac{x^2 y + x \sin y}{\sqrt{x^2 + y^2 - x y}}$$ In both iterated limits I have obtained $0$ . I have also tried to get the limit in the line $y = x$ , and I also get $0$ . I don't know how to deal with that numerator. I have also tried: $$\frac{x^2 y + x \sin y}{\sqrt{x^2 + y^2 - x y}} = \frac{x (x y + \sin y)}{\sqrt{x^2 + y^2 - x y}} = \frac{x y (x + \frac{\sin y}{y})}{\sqrt{x^2 + y^2 - x y}}$$ I would like to use something like $|x| \le \sqrt{x^2 + y^2}$ and $|y| \le \sqrt{x^2 + y^2}$ , but the negative term in the square root doesn't allow me to use this method.
|
As suggested in the comments, we have $x^2 + y^2 - x y \ge |xy|$ , and then $$\frac{|x^2 y + x \sin y|}{\sqrt{x^2 + y^2 - x y}}\le\frac{|x^2 y| + |x \sin y|}{\sqrt{|xy|}}=\sqrt{|xy|}\left(|x|+\frac{|\sin y|}{|y|}\right) \to 0$$ As noticed by Riemann, when $(x,0)\to(0,0)$ or $(0,y)\to(0,0)$ the intermediate bound is not defined (it involves division by $\sqrt{|xy|}$ and by $|y|$ ), but it is not needed there, because on both axes the expression is identically zero.
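A numeric illustration (an addition, not from the answer): evaluating the expression along rays of shrinking radius shows it tending to $0$, consistent with the squeeze above.

```python
import math

# g(x, y) along rays (r cos t, r sin t); the denominator never vanishes
# for r > 0 since x^2 + y^2 - xy = r^2 (1 - cos t sin t) >= r^2 / 2.
def g(x, y):
    return (x * x * y + x * math.sin(y)) / math.sqrt(x * x + y * y - x * y)

vals = [abs(g(r * math.cos(t), r * math.sin(t)))
        for r in (1e-2, 1e-4, 1e-6)
        for t in (0.3, 1.0, 2.5, 4.0)]
print(max(vals))
```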
|
|real-analysis|limits|multivariable-calculus|
| 0
|
What trigonometric identity step am I missing with the following: (1+sinx)/cos x=(cos^2(x/2)+sin^2(x/2))+(sin2(x/2))/cos(2(x/2))
|
I may have to revise the question if I've made a mistake in typing it, but it should resemble the below pic, which is part of the solution for the entire trigonometric identity equation. I only included the intermediary step as the entire proof for the identity is rather long. I just figured I would ask about this step because I cannot see how it comes to be. I understand that cos x can be changed into cos 2(x/2). I understand that sin x can be changed into sin 2(x/2) by the same method. But, I am stumped as to how 1 can be changed into cos^2(x/2)+sin^2(x/2). I have a feeling it's a simple thing but it eludes me. This is one of the later exercises in the Chapter review I am studying on trig identities, so it makes sense that it is more involved as they want you to really think through the problems, not just solve them. So, even mistakes are informative, but I'd really like to figure this one out.
|
$$\sin x \rightarrow \sin\left[2\left(\frac{x}{2}\right)\right]$$ $$\cos x \rightarrow \cos\left[2\left(\frac{x}{2}\right)\right]$$ $$\sin^2\frac{x}{2}+\cos^2\frac{x}{2} = 1$$ piecing this all together leaves you with the formula...
|
|trigonometry|
| 0
|
What trigonometric identity step am I missing with the following: (1+sinx)/cos x=(cos^2(x/2)+sin^2(x/2))+(sin2(x/2))/cos(2(x/2))
|
I may have to revise the question if I've made a mistake in typing it, but it should resemble the below pic, which is part of the solution for the entire trigonometric identity equation. I only included the intermediary step as the entire proof for the identity is rather long. I just figured I would ask about this step because I cannot see how it comes to be. I understand that cos x can be changed into cos 2(x/2). I understand that sin x can be changed into sin 2(x/2) by the same method. But, I am stumped as to how 1 can be changed into cos^2(x/2)+sin^2(x/2). I have a feeling it's a simple thing but it eludes me. This is one of the later exercises in the Chapter review I am studying on trig identities, so it makes sense that it is more involved as they want you to really think through the problems, not just solve them. So, even mistakes are informative, but I'd really like to figure this one out.
|
It is a common identity that $$\sin^2\theta+\cos^2\theta=1\:\:\forall\theta\in\mathbb{R}$$ To prove this you can use the definitions of sine and cosine in a right-angled triangle and do the computations, or you can use the fact that $\sin x=\frac{e^{ix}-e^{-ix}}{2i}$ and $\cos x=\frac{e^{ix}+e^{-ix}}{2}$
|
|trigonometry|
| 0
|
Compute $\lim_{n\to \infty} \int_0^\infty \frac{n\sin(x/n)}{1+x^4}dx$.
|
I try to apply the dominated convergence theorem to compute the limit of the following integral: $$ \lim_{n\to \infty} \int_0^\infty \frac{n\sin(x/n)}{1+x^4}dx $$ Note that, letting $y=x/n$ , $$ \frac{n^2\sin(y)}{1+n^4y^4}\le \frac{n^2}{1+n^4y^4}\le \frac{n^2}{2n^2y^2}=\frac{1}{2y^2} $$ But $\int_0^\infty \frac{1}{2y^2}\,dy$ is not convergent... So I am stuck here...
|
Using $$0\leq x-\sin x\leq\frac{x^3}{6},\quad x\geq0,$$ we have $$\left|\frac{n\sin(x/n)}{1+x^4}-\frac{x}{1+x^4}\right| =\frac{n}{1+x^4}\left(\frac{x}{n}-\sin\frac{x}{n}\right) \leq\frac{n}{1+x^4}\cdot\frac{x^3}{6n^3} =\frac{1}{6n^2}\cdot\frac{x^3}{1+x^4} \leq\frac{M}{6n^2},$$ where $M=\max\limits_{x\in[0,\infty)}\frac{x^3}{1+x^4}$ . This implies $\frac{n\sin(x/n)}{1+x^4}$ converges to $\frac{x}{1+x^4}$ uniformly on $[0,\infty)$ . For $n\in\mathbb N$ and $x\geq0$ , $$\left|\frac{n\sin(x/n)}{1+x^4}\right|\leq\frac{x}{1+x^4},$$ which implies that $$\int_0^\infty \frac{n\sin(x/n)}{1+x^4}dx$$ is uniformly convergent with respect to $n$ . So $$\lim_{n\to\infty}\int_0^\infty\frac{n\sin(x/n)}{1+x^4}dx =\int_0^\infty\lim_{n\to\infty}\frac{n\sin(x/n)}{1+x^4}dx\\ =\int_0^\infty\frac{x}{1+x^4}dx =\frac12\int_0^\infty\frac{dt}{1+t^2}=\frac{\pi}{4}.$$
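A numeric check (an addition, not part of the proof): for large $n$ the integral should already be close to $\int_0^\infty \frac{x}{1+x^4}\,dx = \frac{\pi}{4}$.

```python
import math

# Midpoint rule on [0, 60]; the neglected tail int_60^inf x/(1+x^4) dx
# is below 1/(2 * 60^2) ~ 1.4e-4, so agreement to ~1e-3 is expected.
def approx(n, upper=60.0, steps=120000):
    h = upper / steps
    return h * sum(n * math.sin(x / n) / (1 + x ** 4)
                   for x in ((k + 0.5) * h for k in range(steps)))

value = approx(10000)
print(value, math.pi / 4)
```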
|
|real-analysis|
| 1
|
Does every triangle satisfy $a^c + b^c - c^c < \pi$
|
Let $(a,b,c)$ be the sides of a triangle and let its circumradius be $1$ . Is it true that $$ a^c + b^c - c^c < \pi \, ? $$ My progress: In the special case where at least two of the three sides are equal, I have been able to show that the upper bound is about $3.13861$ . If two sides are equal and the third side is $c = x$ then the two equal sides have length $a = b = \sqrt{2 + \sqrt{4-x^2}}$ each. Maximizing the expression $$ 2\left(\sqrt{2 + \sqrt{4-x^2}}\right)^x - x^x $$ using Wolfram Alpha gives $3.13861$ as the upper bound in this case. Furthermore, simulations show that this is also the unconditional maximum, but I have not been able to prove it. Since $3.13861$ is very close to $\pi$ , I expressed the above inequality in terms of $\pi$ to present it in an elegant form, but it seems the proximity is just a coincidence unless I am missing something.
|
Some thoughts. The conditions are given by $$0 < a, b, c, \quad a + b > c, \quad b + c > a, \quad c + a > b,$$ $$R = \frac{abc}{\sqrt{(a + b + c)(a + b - c)(b + c - a)(c + a - b)}} = 1. \tag{1}$$ Since $x \mapsto x^{c/2}$ is concave, we have $$a^c + b^c = (a^2)^{c/2} + (b^2)^{c/2} \le 2 \left(\frac{a^2 + b^2}{2}\right)^{c/2}. \tag{2}$$ From (1), we have $$(abc)^2 = (a + b + c)(a + b - c)(b + c - a)(c + a - b)$$ and thus $$(a^2 + b^2 - c^2)^2 = a^2b^2(4 - c^2) \le \frac{(a^2 + b^2)^2}{4}(4 - c^2)$$ which results in $$a^2 + b^2 \le 4 + 2\sqrt{4 - c^2}. \tag{3}$$ Using (2) and (3), we have $$a^c + b^c - c^c \le 2\Big(2 + \sqrt{4 - c^2}\Big)^{c/2} - c^c.$$ Let $f(c) := 2\Big(2 + \sqrt{4 - c^2}\Big)^{c/2} - c^c$ . Numerical experiments show that the maximum of $f(c)$ on $[0, 2]$ is near $3.13861$ when $c$ is near $1.371$ . Numerical experiments also show that $f$ is concave on $[0, 2]$ . Thus, the maximum occurs at $f'(c) = 0$ .
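The numerical values quoted above can be reproduced with a simple grid search (an addition to the answer, not a proof):

```python
import math

# Maximize f(c) = 2*(2 + sqrt(4 - c^2))^(c/2) - c^c on a fine grid in (0, 2).
def f(c):
    return 2 * (2 + math.sqrt(4 - c * c)) ** (c / 2) - c ** c

grid = [k / 100000 for k in range(1, 200000)]   # step 1e-5, endpoints excluded
best_c = max(grid, key=f)
best_v = f(best_c)
print(best_c, best_v)
```

The maximum comes out near $c \approx 1.371$ with value $\approx 3.13861$, strictly below $\pi$.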
|
|geometry|algebra-precalculus|inequality|triangles|maxima-minima|
| 0
|
How to calculate "$(1, 2, 3, 4, 5) \cdot (1, 2)$"?
|
For a $k$ -cycle $(1, 2, 3, 4, 5)$ and a transposition $(1, 2)$ .Calculation of " $(1, 2, 3, 4, 5) \cdot (1, 2)$ ". We all know that groups satisfy the associative property, i.e. $(a \cdot b) \cdot c=a \cdot (b \cdot c)$ . I will get $ \left \{ 1, 2, 3, 4, 5 \right \} \longmapsto \left \{ 2, 3, 4, 5, 1 \right \} $ , when first calculate the $k$ -cycle. Then after the transposition $(1, 2)$ we get $ \left \{ 2, 3, 4, 5, 1 \right \} \longmapsto \left \{ 3, 2, 4, 5, 1 \right \} $ . But according to the associative property, I first calculate the transposition $(1, 2)$ we get $ \left \{ 1, 2, 3, 4, 5 \right \} \longmapsto \left \{ 2,1, 3, 4, 5 \right \} $ . Then after the $k$ -cycle $(1, 2, 3, 4, 5)$ we get $ \left \{ 2, 1, 3, 4, 5 \right \} \longmapsto \left \{ 1, 3, 4, 5, 2 \right \} $ . (1)Why are the results of these two calculations different? Which step is wrong? (2)Which is the correct calculation?
|
You are confusing associativity, which has nothing to do with your question, and commutativity, which indeed does not hold for permutations.
|
|group-theory|permutation-cycles|
| 0
|
How to calculate "$(1, 2, 3, 4, 5) \cdot (1, 2)$"?
|
For a $k$ -cycle $(1, 2, 3, 4, 5)$ and a transposition $(1, 2)$ .Calculation of " $(1, 2, 3, 4, 5) \cdot (1, 2)$ ". We all know that groups satisfy the associative property, i.e. $(a \cdot b) \cdot c=a \cdot (b \cdot c)$ . I will get $ \left \{ 1, 2, 3, 4, 5 \right \} \longmapsto \left \{ 2, 3, 4, 5, 1 \right \} $ , when first calculate the $k$ -cycle. Then after the transposition $(1, 2)$ we get $ \left \{ 2, 3, 4, 5, 1 \right \} \longmapsto \left \{ 3, 2, 4, 5, 1 \right \} $ . But according to the associative property, I first calculate the transposition $(1, 2)$ we get $ \left \{ 1, 2, 3, 4, 5 \right \} \longmapsto \left \{ 2,1, 3, 4, 5 \right \} $ . Then after the $k$ -cycle $(1, 2, 3, 4, 5)$ we get $ \left \{ 2, 1, 3, 4, 5 \right \} \longmapsto \left \{ 1, 3, 4, 5, 2 \right \} $ . (1)Why are the results of these two calculations different? Which step is wrong? (2)Which is the correct calculation?
|
Let $\sigma=(12345)(12)=\sigma_1\circ \sigma_2$ , with $\sigma_1=(12345)$ and $\sigma_2=(12)$ . The correct calculation is as follows: $$ \sigma(1)=(\sigma_1\circ \sigma_2)(1)=\sigma_1( \sigma_2(1))=\sigma_1(2)=3, $$ and similarly $\sigma(2)=2,\sigma(3)=4,\sigma(4)=5$ and $\sigma(5)=1$ . Hence $2$ is fixed by $\sigma$ , and we have $\sigma=(2)(1345)=(1345)$ , since fixed points are omitted in cycle notation.
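The computation can be checked mechanically (an addition to the answer); the helpers below use the same right-to-left convention, applying $(1\,2)$ first and then $(1\,2\,3\,4\,5)$:

```python
# Permutations on {1,...,5} represented as dicts; composition is (p∘q)(i) = p(q(i)).
def cycle(*elts):
    """Return the permutation corresponding to a single cycle."""
    p = {i: i for i in range(1, 6)}
    for a, b in zip(elts, elts[1:] + elts[:1]):
        p[a] = b
    return p

def compose(p, q):
    return {i: p[q[i]] for i in q}

sigma = compose(cycle(1, 2, 3, 4, 5), cycle(1, 2))
print(sigma)   # {1: 3, 2: 2, 3: 4, 4: 5, 5: 1}, i.e. the cycle (1 3 4 5)
```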
|
|group-theory|permutation-cycles|
| 0
|
You are making cookies and add N chips to dough randomly, and split it into 100 equal cookies, again at random. How many chips should go into dough?
|
Question: You are making chocolate chip cookies. You add N chips randomly to the dough and you randomly split the dough into 100 equal cookies. How many chips should go into the dough to give a probability of at least 90% that every cookie has at least one chip? I tried to attempt to solve this using IID random variables. I am not sure how to set the problem up. I know that there should at least be 100 chocolate chips or else the cookies will not meet the "at least 1 chip per cookie" requirement and that there is 10% chance that the cookies do not have a chip.
|
Modeling assumptions: The dough (a subset of $\mathbb R^2$ ) has area $1$ . The dough is divided "randomly" into $100$ disjoint cookies $C_1,\ldots, C_{100}$ each of area $\frac 1{100}$ . Let $\{X_n \in C_i\}$ denote the event that the $n$ -th chip belongs to the $i$ -th cookie. The collections of events $\mathcal A_n = \sigma(\{\{X_n \in C_i\}:1\leq i \leq 100\})$ are independent, and we have $P(\{X_n \in C_i\}) = \frac 1 {100}$ no matter $n$ and $i$ . We look for the smallest $N$ such that $P(\bigcup_{i=1}^{100} \bigcap_{n=1}^N \{X_n \in C_i\}^c)\leq 0.1$ . Let $E_i=\bigcap_{n=1}^N \{X_n \in C_i\}^c$ . Given a subset $I$ of $\{1,\ldots,100\}$ with cardinality $k$ , the event $\bigcap_{i\in I} E_i$ can be rewritten as $\bigcap_{n=1}^N \left[\bigcup_{i\in I} \{X_n \in C_i\} \right]^c$ . By the independence assumption, $$\begin{aligned} P(\bigcap_{i\in I} E_i) &= [1-P(\bigcup_{i\in I} \{X_n \in C_i\})]^N \\&= [1-\sum_{i\in I} P(\{X_n \in C_i\})]^N \\&= \left(1- \frac k{100}\right)^N \end{aligned}$$ By inclusion–exclusion, $$P\Big(\bigcup_{i=1}^{100} E_i\Big)=\sum_{k=1}^{99}(-1)^{k+1}\binom{100}{k}\left(1-\frac k{100}\right)^N,$$ and we seek the smallest $N$ making this at most $0.1$ .
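Under the same modeling assumptions (chips placed independently and uniformly), the smallest such $N$ can be found by evaluating the inclusion–exclusion sum directly; a sketch:

```python
from math import comb

def p_some_empty(n_chips, cookies=100):
    """P(at least one cookie gets no chip), by inclusion-exclusion."""
    return sum((-1) ** (k + 1) * comb(cookies, k) * (1 - k / cookies) ** n_chips
               for k in range(1, cookies))

# For N < 100 some cookie is certainly empty, so start the search at 100
# (this also keeps the alternating floating-point sum well-behaved).
N = 100
while p_some_empty(N) > 0.1:
    N += 1
print(N, p_some_empty(N))
```

The search lands in the high 600s, matching the rough estimate $100 \cdot 0.99^N \le 0.1$, i.e. $N \approx \ln(1000)/\ln(100/99) \approx 687$.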
|
|probability|statistics|probability-theory|discrete-mathematics|
| 0
|
Does the operation stitch preserve non-Hamiltonicity?
|
A planar graph is triangulated if its faces are bounded by three edges. A triangle of a planar graph is a separating triangle if it does not form the boundary of a face. That is, a separating triangle has vertices both inside it and outside it; therefore its removal separates the graph. Given planar graphs $H_1$ and $H_2$ , a graph $G$ is a stitch of $H_1$ and $H_2$ if there is a planar drawing $ϕ$ of $G$ and there are proper subgraphs $G_1$ and $G_2$ of $G$ isomorphic to $H_1$ and $H_2$ respectively, such that $G_1 ∪ G_2 = G$ and $T := G_1 ∩ G_2$ is a facial triangle with respect to both $ϕ|G_1$ and $ϕ|G_2$ . Let $G$ be a triangulated planar graph with a separating triangle $T$ . We assume that $G$ be the stitch of $G_1$ and $G_2$ where $T := G_1 ∩ G_2$ . My question is, if $G_1$ and $G_2$ are non-Hamiltonian, is $G$ non-Hamiltonian? This question can be posed more strongly: If either $G_1$ or $G_2$ is non-Hamiltonian, is $G$ necessarily non-Hamiltonian?
|
Let $G_1$ and $G_2$ have more than four vertices. Let $u_1, u_2, u_3$ be the vertices of the stitch triangle. Suppose a Hamiltonian cycle of $G$ goes $u_1 \rightarrow ... \rightarrow u_2 \rightarrow ... \rightarrow u_3 \rightarrow ... \rightarrow u_1$ . This divides the cycle into $3$ paths, each lying fully in $G_1$ or fully in $G_2$ . WLOG two of them are in $G_1$ and the third is in $G_2$ . Thus the path in $G_2$ visits all vertices of $G_2$ except one vertex of the triangle; appending the two triangle edges through that vertex extends it to a Hamiltonian cycle of $G_2$ . Same for $G_1$ : the two paths meet at a vertex of $T$ , and their two other endpoints are joined by an edge of $T$ , which gives a Hamiltonian cycle of $G_1$ . By contraposition, we have shown the stronger result: if $G_1$ OR $G_2$ is non-Hamiltonian, then their stitch along a triangular face is also non-Hamiltonian, regardless of whether $G_1$ and $G_2$ are triangulated.
|
|graph-theory|planar-graphs|hamiltonicity|
| 1
|
Image of a linear combination by a continuous function
|
Let $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$ be a continuous function. Take $x,y \in \mathbb{R}^{n}$ with $f(x) < f(y)$ and let $L = \{\alpha x + (1-\alpha)y$ : $\alpha \in [0,1]\}$ be the straight line between $x$ and $y$ . Is it true that the continuity of $f$ implies that $f(L) \subset [f(x), f(y)]$ ?
|
No this is not true. Take $f = x^2-1$ , $x = -1$ and $y = 1$ . Then $[f(x),f(y)] = \{0 \}$ but $f(L)$ is not contained in this.
|
|real-analysis|convex-analysis|
| 0
|
Proving that $\text{SL}_2 (\mathbb{Z})$ is closed under inverses
|
I am trying to verify that $\text{SL}_2 (\mathbb{Z})$ is a group under matrix multiplication. The only property I am not certain about is closure under inverses. Given $A \in \text{SL}_2 (\mathbb{Z})$ , I know that $A^{-1}$ exists (it has non-zero determinant). I need to show that $A^{-1}$ has determinant $1$ and integer entries. It's true in general that $\det(A^{-1}) = \frac{1}{\det(A)}$ , so if $\det(A) = 1$ , then $\det(A^{-1}) = 1$ . Proving that $A^{-1}$ has integer entries is a bit tougher, because I don't think it is necessarily true that if $\det(A^{-1}) = 1$ , then all of its entries are integers (though I struggle to think of a counterexample). The only proof I could think of involves the adjugate matrix, $\text{adj}(A)$ . In general, we have $$ A \,\text{adj}(A) = \det(A) I. $$ As $\det(A) = 1$ , the inverse of $A$ is the adjugate matrix. The adjugate matrix is the transpose of the matrix of cofactors, whose entries are partial determinants around entry $(i,j)$ , up to a sign.
|
Here's another approach for proving that $SL_n(\mathbb Z)$ is closed under inverses. Let $A\in SL_n(\mathbb Z)$ . Consider the characteristic polynomial of $A$ , $p_A(x)=\det(xI_n-A)=x^n+a_{n-1}x^{n-1}+\ldots+a_1x+a_0$ . Note that $a_0=p_A(0)=\det(-A)=(-1)^n\det(A)=(-1)^n$ and $a_i\in\mathbb Z$ . By the Cayley–Hamilton theorem it follows that $p_A(A)=0$ , and hence $$0=A(A^{n-1}+a_{n-1}A^{n-2}+\ldots+a_1I_n)+(-1)^nI_n,$$ which implies that $$A^{-1}=(-1)^{n+1}(A^{n-1}+a_{n-1}A^{n-2}+\ldots+a_1I_n).$$ Then necessarily $A^{-1}$ has integer entries.
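For $n=2$ the Cayley–Hamilton formula specializes to $A^{-1} = \operatorname{tr}(A)\,I - A$ when $\det(A)=1$, which visibly has integer entries. A small check (an addition, with an arbitrarily chosen matrix):

```python
# Verify A^{-1} = tr(A) I - A for a sample A in SL_2(Z).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 3], [1, 2]]                  # det = 2*2 - 3*1 = 1
tr = A[0][0] + A[1][1]
A_inv = [[tr * (i == j) - A[i][j] for j in range(2)] for i in range(2)]
print(A_inv, mat_mul(A, A_inv))       # inverse has integer entries; product is I
```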
|
|group-theory|
| 0
|
Image of a linear combination by a continuous function
|
Let $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$ be a continuous function. Take $x,y \in \mathbb{R}^{n}$ with $f(x) < f(y)$ and let $L = \{\alpha x + (1-\alpha)y$ : $\alpha \in [0,1]\}$ be the straight line between $x$ and $y$ . Is it true that the continuity of $f$ implies that $f(L) \subset [f(x), f(y)]$ ?
|
No. Take $f(x)=x^2$ , $x=-1$ and $y=2$ . We have that $f(x)=1 < f(y)=4$ , $L=[-1,2]$ and $[f(x),f(y)]=[1,4]$ . Now, $0\in L$ but $f(0)=0\notin [f(x),f(y)]$ .
|
|real-analysis|convex-analysis|
| 1
|
What is the regular language for L = {w | w has even length, and starts and ends with the same symbol}?
|
I think it is 0(01)*(01)*0 U 1(01)*(01)1 where: two versions: one that starts and ends with 0, the other that starts and ends with 1, connected by plus, which does not mean union of both but either-or; repeat of (01)* to make the string even. For example, for 0010 you would start with 0, pick 0 from the first 01s, pick 0 from the second 01s, end with 0. UPDATE: I think this is a more clear and accurate answer: (0+1)(00+11+01+10)∗(0+1) 0+1: You choose either 0 or 1 00 + 11 + 01 + 10: All possible options for an even length 0+1: last symbol as first
|
Your original $$\begin{align}&(0(0+1)^\ast(0+1)^\ast0) +\\ &(1(0+1)^\ast(0+1)^\ast1) \end{align}$$ suggestion gets part of the requirements correct: every string begins and ends with the same symbol. But it doesn't get the even-length requirement right, because there is nothing about $(0+1)^\ast(0+1)^\ast$ that forces the two groups to have the same length. The first one could have 7 symbols and the other could have 12, and then the total length would be 19, which you don't want. Your second suggestion, $$(0+1)(00+11+01+10)^\ast(0+1)$$ gets the length requirement right, but doesn't guarantee that every string has the same beginning and ending symbol, because there is nothing that forces the two $(0+1)$ parts to produce the same symbol. One could be $0$ and the other could be $1$ . You need to combine these two ideas.
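One way to combine the two ideas (a sketch, not necessarily the intended answer — and note the empty string is excluded, since "starts and ends" presumes a nonempty string) can be checked mechanically against a brute-force enumeration:

```python
import re
from itertools import product

# Fix the first and last symbol to agree, with an even-length filler of
# symbol pairs in between:  0(pair)*0 | 1(pair)*1.
pattern = re.compile(r'0(00|01|10|11)*0|1(00|01|10|11)*1')

def in_language(w):
    return len(w) >= 2 and len(w) % 2 == 0 and w[0] == w[-1]

ok = all((pattern.fullmatch(w) is not None) == in_language(w)
         for n in range(7)
         for w in (''.join(p) for p in product('01', repeat=n)))
print(ok)
```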
|
|regular-language|regular-expressions|
| 0
|
Basis free proof of the Frobenius formula
|
Let $G$ be a finite group and $H \leq G$ a subgroup. Let $V$ be a representation of $H$ with character $\chi$ . The Frobenius formula states that the character of the induced representation $\text{Ind}_H^G(V) := \mathbb{k}G\otimes_{\mathbb{k}H}V$ is given by: $$\chi_{\text{Ind}_H^GV}(g) = \frac{1}{|H|} \sum_{\substack{x\in G \\x^{-1}gx\in H}}\chi(x^{-1}gx)$$ The usual proof of this fact (for instance the ones in Serre or Fulton&Harris) starts by considering the basis $(g_i \otimes e_j)$ of $\mathbb{k}G\otimes_{\mathbb{k}H}V$ , where the $g_i$ are a set of representatives of the cosets $G/H$ , and the $e_j$ are a basis of $V$ . Then one studies how an element $g\in G$ acts on $\mathbb{k}G\otimes_{\mathbb{k}H}V$ with respect to this basis, and from this, deduces the above formula. Now I was wondering: Is there an alternative proof of this, that doesn't involve a choice of basis of $\mathbb{k}G\otimes_{\mathbb{k}H}V$ ?
|
There is a less basis-dependent proof by observing $g\in G$ acts on $$\mathbb kG\otimes_{\mathbb kH}V=\bigoplus_{xH\in G/H}\mathbb k(xH)\otimes_{\mathbb kH}V$$ by preserving the direct sum decomposition. Thus, the trace of $g$ acting on $\mathbb kG\otimes_{\mathbb kH}V$ is the sum over cosets $xH\in G/H$ fixed by $g$ of the trace of $g$ on $\mathbb k(xH)\otimes_{\mathbb kH}V$ . That is, $$\begin{align*} \chi_{\mathrm{Ind}_H^G V}(g)&=\sum_{\substack{xH\in G/H\\gxH=xH}}\mathrm{tr}(g;\mathbb k(xH)\otimes_{\mathbb kH}V)\\ &=\sum_{\substack{xH\in G/H\\x^{-1}gx\in H}}\chi_V(x^{-1}gx)\\ &=\frac1{|H|}\sum_{\substack{x\in G\\x^{-1}gx\in H}}\chi_V(x^{-1}gx). \end{align*}$$ Here, the second equality is given by observing $g$ acts on $\mathbb k(xH)\otimes_{\mathbb kH}V$ as $x\otimes v\mapsto gx\otimes v=x(x^{-1}gx)\otimes v=x\otimes (x^{-1}gx)v$ since $x^{-1}gx\in H$ .
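The coset-counting interpretation can be verified on a small example (an addition to the answer): take $G = S_3$, $H = A_3$, and $V$ the trivial representation, so the induced character counts cosets of $G/H$ fixed by $g$, and the Frobenius sum must reproduce that count.

```python
from itertools import permutations

G = list(permutations(range(3)))          # S3 as tuples

def compose(p, q):                        # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def sign(p):                              # parity via inversion count
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return (-1) ** inv

H = [p for p in G if sign(p) == 1]        # A3
cosets = []                               # G/H as frozensets of elements
for g in G:
    c = frozenset(compose(g, h) for h in H)
    if c not in cosets:
        cosets.append(c)

for g in G:
    frobenius = sum(1 for x in G
                    if compose(compose(inverse(x), g), x) in H) / len(H)
    fixed = sum(1 for c in cosets
                if frozenset(compose(g, y) for y in c) == c)
    assert frobenius == fixed
print("Frobenius formula verified on S3 / A3")
```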
|
|finite-groups|representation-theory|tensor-products|
| 0
|
The Green function of Schrodinger operator is positive for non-negative q(x)
|
Consider $\left(E_0\right): L y:=-y^{\prime \prime}+q(x) y=0$ with non-negative function $q \in C(\mathbb{R})$ . Let $y_1$ be the solution of $\left(E_0\right)$ satisfying $y_1(0)=0, y_1^{\prime}(0)=1$ ; and $y_2$ be the solution of $\left(E_0\right)$ satisfying $y_2(1)=0, y_2^{\prime}(1)=-1$ . By ODE theory, we know that $y_1$ and $y_2$ are uniquely determined $C^2$ functions over $\mathbb{R}$ . Let $H$ denote the standard Heaviside function, and $$ G(x, t)=-\frac{1}{W_0}\left[H(x-t) y_1(t) y_2(x)+H(t-x) y_1(x) y_2(t)\right], \quad \forall t, x \in \mathbb{R} . $$ Prove that for any $t \in(0,1), G(0, t)=G(1, t)=0$ . Prove that for any $x, t \in(0,1)$ , there holds $G(x, t)>0$ . I have searched on the internet that we are using variation of parameters to find the particular solutions, and the Wronskian is constant, but in this case in which q(x) is non-negative, I don't know how to prove that this Wronskian is negative, and also I don't know how to prove the above properties, I know if
|
Claim: $y_1$ cannot have a zero in $(0,1]$ . Indeed, suppose $t_0\in(0,1]$ is the first zero of $y_1$ after $0$ . Then $y_1>0$ on $(0,t_0)$ , so $y_1''=q\,y_1\ge0$ there; hence $y_1'$ is nondecreasing on $[0,t_0]$ and $y_1'\ge y_1'(0)=1>0$ , so $y_1$ is strictly increasing on $[0,t_0]$ , contradicting $y_1(t_0)=0$ . In particular, since $y_1(0)=0$ and $y_1'(0)=1>0$ , one must have $y_1(1)>0$ . Now $$ (W_0(y_1,y_2))'=(y_1y_2'-y_1'y_2)'=y_1y_2''-y_1''y_2=0 $$ which implies that $W_0(y_1,y_2)$ is constant and hence $$ W_0(y_1,y_2)=(y_1y_2'-y_1'y_2)(1)=-y_1(1)<0. $$
|
|ordinary-differential-equations|distribution-theory|eigenfunctions|greens-function|
| 0
|
Character group and tate module of algebraic group
|
Let $k$ be a field and $\bar k$ a separable closure. For an algebraic $k$ -torus, denote by $G_* = \mathrm{Hom}(\mathbb{G}_{m, \bar k}, G_{\bar k})$ its group of cocharacters. This is a finitely generated, free abelian group with a continuous $\mathrm{Gal}_k = \mathrm{Gal}(\bar k , k)$ action. Then it is claimed at various places that there exists a natural isomorphism of $\mathrm{Gal}_{k}$ -modules $$ G_* \otimes_{\mathbb{Z}} \mathbb{Z}_{\ell}(1) \cong T_\ell(G) $$ where $T_\ell(G)$ denotes the $\ell$ -adic Tate module of $G$ , i.e. $$ T_\ell(G) = \varprojlim_{n} G[\ell^n](\bar k) $$ Here $G[\ell^n](\bar k)$ denotes $\bar k$ -valued $\ell^n$ -torsion points of $G$ . Since $G_*$ is a flat and finitely-presented $\mathbb{Z}$ -module, it suffices to show $$ G_* \otimes \mu_{\ell^n} \cong G[\ell^n](\bar k) $$ I am unable to find a proof of this nor did I find a reference. Any suggestion is appreciated.
|
There is an isomorphism $$\begin{align*} \mathrm{Hom}(\mathbb G_{m,\overline k},G_{\overline k})\otimes \mu_{\ell^n}&\simeq G[\ell^n](\overline k)\\ \varphi\otimes z&\mapsto \varphi(z). \end{align*}$$
|
|algebraic-geometry|galois-theory|algebraic-groups|
| 0
|
perpendicular lines through $F$ project to perpendicular lines
|
Let $C(0,0,\sqrt{2})$ and $F(0,\sqrt{2},0)$ be two points in $\Bbb R^3$ . $AF,BF$ are arbitrary perpendicular lines through $F$ on the plane $y=\sqrt{2}$ . These lines project to the lines $DF,EF$ under the perspective projection centered at $C$ onto $xy$ -plane. Then the lines $DF,EF$ perpendicular? This question is based on the last sentence of this post about why perpendicular lines project to perpendicular lines. I simplified the setting by an isometry $(x,\frac{y}{\sqrt{2}},\frac{-y}{\sqrt{2}},z)\mapsto(x,y,z)$ from the subspace $y+z=0$ of the $(x,y,z,w)$ -space to the the $(x,y,z)$ -space. Then I generalized the question to make the angle $\alpha$ between two planes be arbitrary: Let $\alpha\in(0,\frac\pi4)$ be arbitrary. Let $A=(0,0,0),S=(0, 1,\tan(α))$ be two points in $\Bbb R^3$ . $BA,CA$ are arbitrary perpendicular lines through $A$ on the plane $z=0$ . These lines project to the lines $DA,EA$ under the perspective projection centered at $S$ onto the plane $z =\tan(2α) y$ . T
|
Hint (for a straightforward but maybe a bit tedious solution): You can choose points $A$ and $B$ to be given by $\vec a=\left(\cos\phi,\sqrt 2,\sin\phi\right)$ and $\vec b =\left(-\sin\phi,\sqrt 2,\cos\phi \right)$ . The projection onto the $xy$ plane is then given by $\vec\alpha =P\vec a= \vec c +\mu \left(\vec a -\vec c\right)$ , with $\mu$ chosen such that $\alpha_z=0$ , and similarly for $\vec b$ . Then you just need to show that $$\left(\vec f -\vec\alpha\right)\cdot\left(\vec f -\vec\beta\right)=0\,.$$
|
|solid-geometry|
| 0
|
Tricky integral $\int_0^\pi{12\cos x\ \mathrm{sech}(\frac \pi2 \tan\frac x2)}\mathrm{d}x=\pi^2$
|
I need to show that the following tricky integral: $$\int_0^\pi{12\cos x\ \mathrm{sech}\left(\frac {\pi}2\tan\frac x2\right)}\mathrm{d}x$$ is equal to exactly $\pi^2$ . I have no idea how to start. I tried the substitution $u=\tan\frac x2$ and ended up with: $$\int_0^\infty24\ \mathrm{sech}\frac{\pi u}{2}\frac{1-u^2}{(1+u^2)^2}\mathrm{d}u$$ I might have to use residue theorem or Fourier transformation here, but I'm lost. Thank you!
|
COMMENT. By definition, $$I=\int_0^\pi{12\cos(x)\ \mathrm{sech}\left(\tfrac 12\pi\tan\left(\tfrac x2\right)\right)}\mathrm{d}x = \int_0^\pi\frac{24\cos(x)}{e^A+e^{-A}}\,dx\text{ where } A={\tfrac 12\pi\tan(\tfrac x2)},$$ or $$I=\int_0^\pi\frac{24\cos(x)}{e^B+e^{-B}}\,dx\text{ where } B=\tfrac12\pi\sqrt{\frac{1-\cos(x)}{1+\cos(x)}},$$ since $\tan(\tfrac x2)=\sqrt{\frac{1-\cos(x)}{1+\cos(x)}}$ on $(0,\pi)$ . What results from the possible changes of variables is always non-elementary. My opinion is that there is no clever trick possible as you believe.
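Whatever closed-form route one takes, the claimed value can at least be confirmed numerically (an addition, not a proof):

```python
import math

# Midpoint rule on [0, pi] for 12 cos(x) sech((pi/2) tan(x/2)); sech of a
# huge argument is replaced by 0 to avoid overflow (it underflows anyway,
# since the integrand decays super-exponentially as x -> pi).
def integrand(x):
    a = 0.5 * math.pi * math.tan(0.5 * x)
    if a > 700.0:
        return 0.0
    return 12.0 * math.cos(x) / math.cosh(a)

n = 200000
h = math.pi / n
total = h * sum(integrand((k + 0.5) * h) for k in range(n))
print(total, math.pi ** 2)
```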
|
|calculus|integration|definite-integrals|fourier-transform|trigonometric-integrals|
| 0
|
What trigonometric identity step am I missing with the following: (1+sinx)/cos x=(cos^2(x/2)+sin^2(x/2))+(sin2(x/2))/cos(2(x/2))
|
I may have to revise the question if I've made a mistake in typing it, but it should resemble the below pic, which is part of the solution for the entire trigonometric identity equation. I only included the intermediary step as the entire proof for the identity is rather long. I just figured I would ask about this step because I cannot see how it comes to be. I understand that cos x can be changed into cos 2(x/2). I understand that sin x can be changed into sin 2(x/2) by the same method. But, I am stumped as to how 1 can be changed into cos^2(x/2)+sin^2(x/2). I have a feeling it's a simple thing but it eludes me. This is one of the later exercises in the Chapter review I am studying on trig identities, so it makes sense that it is more involved as they want you to really think through the problems, not just solve them. So, even mistakes are informative, but I'd really like to figure this one out.
|
It's a common misunderstanding: Everybody knows: $$\sin^2x+\cos^2x=1$$ ... but many people forget this is true for every value (not just $x$ , although $x$ represents just any value :-) ), so: $$\sin^2(x+1)+\cos^2(x+1)=1$$ $$\sin^2(3x-8)+\cos^2(3x-8)=1$$ $$\sin^2(x^x+\frac{1}{x!})+\cos^2(x^x+\frac{1}{x!})=1$$ ... and obviously, your example: $$\sin^2(\frac{x}{2})+\cos^2(\frac{x}{2})=1$$
|
|trigonometry|
| 0
|
"Histogram Equalization" Problem
|
Here we deal with the problem of Histogram Equalization. Let $\Omega$ be the set of all pixels in an image, but you can just treat it as some arbitrary set in this case. Let $f: \Omega \rightarrow \{0,...,255\}$ be a map from pixels to pixel intensities. We then have the pixel intensity distribution given by: $$p_f(y) = \frac{|\{x \in \Omega : f(x) = y\}|}{|\Omega|}$$ Its cumulative distribution function is defined as usual: $$F_f(y) = \int_{0}^{y} p_f(y') dy' = \sum_{y'=0}^{y} p_f(y')$$ The goal is to find a pixel intensity mapping $h: \{0,...,255\} \rightarrow [0,255]$ such that $$p_h(z) = \frac{|\{x \in \Omega : h(f(x)) = z\}|}{|\Omega|} = \frac{1}{256},$$ meaning the intensity values would be distributed evenly over the whole intensity range $[0,255]$. In other words, the histogram of intensities would be equalized. This is a very useful operation for enhancing image contrast. Now, the operator one can find that solves this problem is supposed to be defined like this: $$h(f(x)) = 255 \cdot F_f(f(x)),$$ i.e. the rescaled cumulative distribution function applied to the raw intensity.
|
Find the lowest $\frac 1{256}$ of the raw intensities and map them to $0$ . The next $\frac 1{256}$ get mapped to $1$ and so on. You need to do some rounding at the breakpoints.
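A minimal sketch of this recipe in Python. The toy intensity list is my own illustrative choice, and I use the rounded-CDF map $h(v)=\operatorname{round}(255\,F_f(v))$, which approximates the "lowest $\frac1{256}$" partition described above rather than implementing it exactly:

```python
from collections import Counter

def equalize(intensities, levels=256):
    """Map raw intensities through the rescaled empirical CDF,
    which approximately flattens the histogram."""
    n = len(intensities)
    counts = Counter(intensities)
    cdf, running = {}, 0
    for y in sorted(counts):          # F_f(y) = sum_{y' <= y} p_f(y')
        running += counts[y]
        cdf[y] = running / n
    # h(v) = round((levels - 1) * F_f(v))
    return [round((levels - 1) * cdf[v]) for v in intensities]

# toy "image": intensities bunched in a narrow band get spread out
raw = [100] * 4 + [101] * 4 + [102] * 4 + [103] * 4
eq = equalize(raw)
```

On this toy input the four bunched levels end up at $64, 128, 191, 255$, i.e. spread (up to rounding) evenly across the range.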
|
|probability-distributions|computer-vision|
| 0
|
Translation (or shift) on chain complexes and the translated (or shifted) differential
|
On p. 31 of Sheaves on Manifolds by Kashiwara & Schapira, they define the translation or shift of degree $[k]$ on the category of chain complexes in an abelian category as follows. Given a morphism of chain complexes $f: X \to Y$, they define $$X[k]^n = X^{n+k},$$ $$d^n_{X[k]}= (-1)^k d^{n+k}_X,$$ and $$f[k]^n = f^{n+k}.$$ I am wondering why there is a factor of $(-1)^k$ in front of the shifted differential? Is this because chain complexes are coalgebras for the shift endofunctor, and they want this shift to give the right notion of a double complex?
|
To my understanding, the introduction of the minus sign in $d^n_{X[k]}=(-1)^kd^{n+k}_X$ is to enable the definition of distinguished triangles in $K(\mathcal{A})$ and $D(\mathcal{A})$ (these categories are, resp., the homotopy category of cochain complexes with terms in an abelian category $\mathcal{A}$ and the derived category of $\mathcal{A}$ ). The class of d.t.'s can be equivalently defined as the respective equivalence class of triangles spawned from two possible classes: either from the class of triangles $$ \label{cone}\tag{1} X^\bullet\xrightarrow{f}Y^\bullet\to C(f)^\bullet\xrightarrow{-p} X^\bullet[1] $$ arising from the cone of any cochain map $f$ (cf. 014E and comments after) or from the class of triangles $$ \label{tws}\tag{2} A^\bullet\to B^\bullet\to C^\bullet\xrightarrow{\delta}A^\bullet[1] $$ arising from term-wise split ses's of cochain complexes (by 05SS the induced triangle in $K(\mathcal{A})$ does not depend on the choice of splittings in each term; by 014L , the t
|
|homological-algebra|abelian-categories|
| 0
|
Random variable independence
|
Consider two discrete binary random variables $A$ and $B$. If $P(A=1,B=0) = P(A=1)P(B=0)$ , can we conclusively say that $A$ and $B$ are independent? My response is that we cannot, because $P(A=a,B=b) = P(A=a)P(B=b)$ should hold for all possible values $a$ and $b$ of $A$ and $B$.
|
In this special case, the single equation $P(A=1, B=0)=P(A=1)P(B=0)$ actually implies all the other equations. For instance, \begin{align} P(A=0, B=0) &= P(B=0) - P(A=1, B=0) \\ &= P(B=0) - P(A=1) P(B=0) \\ &= (1 - P(A=1))P(B=0) \\ &= P(A=0)P(B=0). \end{align}
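The algebra above can be checked with exact rational arithmetic. The marginals $P(A{=}1)=\frac13$ and $P(B{=}0)=\frac25$ below are arbitrary illustrative choices; the point is that the whole joint law is forced by the marginals together with the single assumed equation:

```python
from fractions import Fraction as F

pA1, pB0 = F(1, 3), F(2, 5)            # marginals (arbitrary choices)
p10 = pA1 * pB0                        # the single assumed equation
# the remaining cells are forced by the marginals:
p11 = pA1 - p10                        # P(A=1) = P(A=1,B=0) + P(A=1,B=1)
p00 = pB0 - p10                        # P(B=0) = P(A=1,B=0) + P(A=0,B=0)
p01 = 1 - p10 - p11 - p00
joint = {(1, 0): p10, (1, 1): p11, (0, 0): p00, (0, 1): p01}

marg_A = {1: pA1, 0: 1 - pA1}
marg_B = {0: pB0, 1: 1 - pB0}
# every cell factorizes, so A and B are independent
independent = all(joint[a, b] == marg_A[a] * marg_B[b]
                  for a in (0, 1) for b in (0, 1))
```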
|
|probability|
| 1
|
Counterexamples about the differentiability of several variables
|
I've learned about differentiability of several variables. If $f(x,y)$ is differentiable then we can use the chain rule on it. But I suspect the converse of this proposition is not right. So, is there a function $f(x,y)$ such that the partial derivatives $\frac{\partial f}{\partial x}(0,0),\frac{\partial f}{\partial y}(0,0)$ exist, and for all functions $x(t)$ and $y(t)$ differentiable at $0$ satisfying $(x,y)(0)=(0,0)$ , the chain rule $$\frac{df(x(t),y(t))}{dt}(0)=\frac{\partial f}{\partial x}(0,0)\frac{dx}{dt}(0)+\frac{\partial f}{\partial y}(0,0)\frac{dy}{dt}(0)$$ holds, but $f$ is not differentiable at $(0,0)$ ? I'm curious about the counterexamples.
|
The chain rule for functions of several variables generally fails if only the existence of the corresponding partial derivatives is assumed. Consider the example $$f(x,y)=\begin{cases} \frac{x^2y}{x^2+y^2}, &x^2+y^2>0 \\ 0, &(x,y)=(0,0) \end{cases}$$ This function has partial derivatives everywhere, including at the point $(0,0)$ , where $f'_x(0,0)=f'_y(0,0)=0$ . (Incidentally, it is exactly there that the partial derivatives are discontinuous.) If we take the simple functions $x(t)=y(t)=t$ and assume that the chain rule works, then we should have $$\frac{df}{dt}=f'_xx'_t+f'_yy'_t=0$$ at the point $t=0$ . But if we actually substitute $x,y$ into $f$ , we obtain $$f(x(t),y(t))=\frac{t^2\cdot t}{t^2+t^2}=\frac{1}{2}t,$$ whose derivative is $\frac{1}{2}$ everywhere, in particular at $t=0$ . Addition . I explained partly in a comment below why I hope this answer is useful. I am also adding the following example: consider $$g(t)=\begin{cases}\frac{1}{t}, & t\ne 0 \\ 0, & t=0 \end{cases}$$ and $$f(x,y)=\b
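The mismatch is easy to see numerically with finite differences (step size and tolerances below are arbitrary):

```python
def f(x, y):
    # the example function, with its separate definition at the origin
    return x * x * y / (x * x + y * y) if (x, y) != (0.0, 0.0) else 0.0

h = 1e-6
# both partial derivatives at the origin exist and are 0:
fx0 = (f(h, 0.0) - f(0.0, 0.0)) / h
fy0 = (f(0.0, h) - f(0.0, 0.0)) / h
# but along x(t) = y(t) = t the composite is f(t, t) = t/2,
# so its derivative at 0 is 1/2, not fx0*1 + fy0*1 = 0:
slope = (f(h, h) - f(0.0, 0.0)) / h
```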
|
|calculus|derivatives|differential|
| 0
|
Compute directly that the mapping cone of a homotopy equivalence is contractible
|
Let's consider the category $Ch_R$ of cochain complexes of modules over a commutative ring $R$. I'm trying to prove that if the chain map $\phi:M\rightarrow N$ is a homotopy equivalence then its mapping cone is contractible, WITHOUT using results from triangulated categories (i.e. by crude computation): by hypothesis I have that there is $\psi:N\rightarrow M$ such that $$ \psi^k\phi^k-1^k_M=d^{k-1}_M\rho^k+\rho^{k+1}d_M^k \quad \text{and}\quad \phi^k\psi^k-1^k_N=d^{k-1}_N\sigma^k+\sigma^{k+1}d_N^k $$ for some $\rho:M\rightarrow M[-1]$ and $\sigma:N\rightarrow N[-1]$, and I need to find $R:cone\phi\rightarrow cone\phi[-1]$ such that $$ 1^k_{cone\phi}=\bigg( \matrix{1^k_{M[1]} & 0 \\ 0 & 1^k_N} \bigg)=\bigg(\matrix{-d^k_M & 0 \\ \phi^k & d_N^{k-1}} \bigg)R^k+R^{k+1}\bigg(\matrix{-d_M^{k+1} & 0 \\ \phi^{k+1} & d_N^k} \bigg). $$ I already worked out that $R^k=\bigg(\matrix{\rho^{k+1} & \psi^k \\ X^k & -\sigma^k}\bigg)$, but I'm stuck with this last equation $$ \sigma^{k+1}\phi^{k+1}-\phi^k\rho^{k
|
I'd like to add another answer that seems to occupy some middle ground between the explicit formula in Ranicki's paper and the (for some of us) enigmatic statement "it follows from results about triangulated categories". The following is adapted from Proposition 0.7 (page 7) in Brown's book "Cohomology of Groups". I'll talk about chain complexes instead of cochain complexes, since that's what Brown does. Preparatory observation : For any pair of chain complexes $(A_*,\partial^A)$ and $(B_*,\partial^B)$ , there is a chain complex $(\operatorname{Hom}(A_*,B_*),\partial)$ whose degree $n$ part consists of the homomorphisms $\varphi : A_* \to B_*$ of degree $n$ (meaning $\varphi(A_k) \subset B_{k+n}$ for each $k$ ), with boundary operator given by $$ \partial\varphi := \partial^B \circ \varphi - (-1)^{|\varphi|} \varphi \circ \partial^A. $$ In particular, this makes the obvious evaluation map $\operatorname{Hom}(A_*,B_*) \otimes A_* \to B_*$ into a chain map, and as a special case, taking
|
|modules|homological-algebra|homotopy-theory|graded-modules|
| 0
|
Are Cesàro summation and Abel summation frowned upon in the mathematical community?
|
If we agree to define $S=\lim_{n\to \infty}\sum_{k=0}^n a_k$ , then only convergent series can have a meaningful $S$ . But if we instead decide to define $S$ to be the Cesàro mean, we can also assign a real number $S$ to some divergent series. My question is: is such a relaxation of the definition frowned upon in the mathematical community? The reason I ask is that I have read several blog posts whose authors are adamant that only convergent series are summable and that only summability in the ordinary sense is useful. In those articles, they usually point to the Ramanujan sum as an example of getting nonsensical answers. They especially have huge problems with results obtained by analytic continuation of the Riemann zeta function. To me, Cesàro summation and Abel summation are natural extensions of summation in the ordinary sense. Where do you draw the line? P.S. I am very new to this topic and mathematics in general, so please bear with me for my naive question.
|
Mathematicians don't have any inherent problem with Cesàro sums, regularizations, etc.; they're well-defined and useful. Cesàro sums pop up in Fourier analysis, for example, and Tauberian theorems in number theory have at least the same spirit as the sort of regularization that comes up in dealing with these divergent sums. The problems are with treating these operations on infinite sequences as the same as the ordinary sum (i.e., the limit of partial sums) and blithely applying the same calculus to them. There's a meaningful and useful way to define a function on some infinite sequences that takes the value $-1/12$ at $(a_n) = (1, 2, 3, \dots)$ , to take the inevitable example, but it's absolutely not true in any sense that the sum $1 + 2 + 3 + \cdots$ converges to $-1/12$ .
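As a concrete illustration, the partial sums of Grandi's series $1-1+1-1+\cdots$ oscillate between $1$ and $0$ forever, yet their Cesàro means converge to $1/2$ (a small sketch; the number of terms is arbitrary):

```python
def cesaro_means(terms):
    """Running averages of the partial sums: (s_1 + ... + s_k) / k."""
    partial, total, means = 0, 0, []
    for k, a in enumerate(terms, start=1):
        partial += a          # s_k, the k-th partial sum
        total += partial      # s_1 + ... + s_k
        means.append(total / k)
    return means

grandi = [(-1) ** n for n in range(10000)]   # 1, -1, 1, -1, ...
means = cesaro_means(grandi)
```

The partial sums here are $1, 0, 1, 0, \dots$, so the means tend to $1/2$ even though the series diverges in the ordinary sense.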
|
|divergent-series|
| 1
|
on the dimension of a C*-algebra
|
I was reading a book and came across the following: If $f$ is a linear functional on a C*-algebra $A$ such that $a\leq f(a)1$ for all $a\geq 0$ , then $A$ must be finite dimensional. One possible way to see this is by way of contradiction. That is, if $A$ is infinite dimensional, then there must exist a linear functional $f$ such that the above inequality does not hold. I could not come up with any such example and could not see any other way to go about it. Any help would be highly appreciated.
|
This is definitely using far heavier machinery than necessary. But assume to the contrary that we have such an $f$ defined on an infinite-dimensional $A$ . Note that $f$ is clearly positive and therefore necessarily bounded. By passing to the double dual, we may assume $A$ is a von Neumann algebra and $f$ is normal. Any infinite-dimensional von Neumann algebra admits a separable infinite-dimensional abelian von Neumann subalgebra, so by restricting to such a subalgebra, we may assume $A$ is a separable infinite-dimensional abelian von Neumann algebra and $f$ is normal. But the structure of such an algebra is known and we see that there exists a sequence of nonzero projections $p_n$ converging to $0$ in SOT. But then $f(p_n)1 \to 0$ in norm since $f$ is normal, but $p_n$ all have operator norm $1$ , contradicting the assumption that $p_n \leq f(p_n)1$ for all $n$ .
|
|functional-analysis|c-star-algebras|
| 1
|
Existence of half derivative of 1
|
I'm trying to find general properties for a half derivative operator, yet I encountered what seems to be a paradox. Let $H$ be a linear operator such that $H^2f = Df$ where $D$ is the first derivative. Then we have the following consequences $$H0=0$$ $$H^31=DH1=HD1 \Rightarrow DH1 = H0=0 \Rightarrow H1 = c $$ $$H^2 x = 1 \Rightarrow H^3 x = c \Rightarrow Hx = cx + d \tag{1}\label{eq1}$$ where $c$ and $d$ are some constant functions. Consequently $$1 = H(Hx) = cHx + cd = c\left(Hx + d\right)\tag{2}\label{eq2}$$ $$\Rightarrow Hx = \frac{1}{c}-d\tag{3}\label{eq3}$$ By \eqref{eq3}, $c \neq 0$ . However, by \eqref{eq1} and \eqref{eq3}, $c = 0$ , which is contradiction. So does this mean that $H1$ does not exist? Or is there something wrong with my calculations?
|
Problems. There are multiple problems, and one of them is that $H$ has many different definitions. We call $\operatorname{H}$ a semi-derivative, and $\operatorname{H}\left( \text{constant} \right)$ does not have to be a constant (e.g. see the Riemann–Liouville operator applied to a constant). Problem $3$ is that not all fractional derivatives are linear (you can quickly see this if you apply them to the series expansions of functions), and $\operatorname{D}_{x}^{\mu}\left[ \operatorname{D}_{x}^{\nu}\left[ f\left( x \right) \right] \right] = \operatorname{D}_{x}^{\nu}\left[ \operatorname{D}_{x}^{\mu}\left[ f\left( x \right) \right] \right]$ is not always true for fractional derivatives. Problem $2$ : Let's use the Riemann–Liouville operator. According to its definition (if $-b \in \mathbb{C} \setminus \mathbb{N}$ ): \begin{align*} \operatorname{^{RL}D}_{x}^{\alpha}\left[ a \cdot x^{b} \right] &= a \cdot \frac{\Gamma\left( b + 1 \right)}{\Gamma\left( b - \alpha + 1 \right)} \cdot x^{b - \alpha}\\ \operatorname{^
|
|calculus|fractional-calculus|
| 0
|
Proof of Goursat's lemma in Serre's Finite Groups An Introduction
|
On page 8 in Serre's Finite Groups An Introduction about the proof of Goursat's lemma it states that: Proposition 1.6 (Goursat's lemma). For $i=1,2$ , let $N_i$ be a normal subgroup of $G_i$ and let $\varphi : G_1/N_1\to G_2/N_2$ be an isomorphism. Let $H_{N_1,N_2,\varphi}$ be the set of all $(g_1,g_2)\in G_1\times G_2$ which are such that $\varphi(\bar{g_1})=\bar{g_2}$ where $\bar{g_i}$ is the image of $g_i$ in $G_i/N_i$ . Then: i) $\text{pr}_i(H_{N_1,N_2,\varphi})=G_i$ for $i= 1,2$ . ii) Every subgroup $H$ of $G_1\times G_2$ such that $\mathrm{pr}_i(H)= G_i$ for $i = 1,2$ is equal to some $H_{N_1,N_2,\varphi}$ for a unique choice of $N_1,N_2$ and $\varphi$ . [Note that $H_{N_1,N_2,\varphi}$ may be viewed as a kind of graph of $\varphi$ .] Proof of ii). Define $N_1$ as $H\cap G_1$ , where $G_1$ is viewed as a subgroup of $G_1\times G_2$ , namely the kernel of $\text{pr}_2$ . Define similarly $N_2 = H\cap G_2$ . By assumption, we have $H.G_1=G_1\times G_2$ , since both $H$ and $G_2$ no
|
I think this is perhaps less than clear and you are misinterpreting what is meant. Rather than " $H$ and $G_1$ ", it's probably best to say that $H$ normalizes both $H$ and $G_1$ , and $G_2$ centralizes $G_1$ , hence normalizes $H\cap G_1$ . I also think there is a typo there. Explicitly: we want to show $H\cap(G_1\times\{1\})$ is normal. Note $G_2$ centralizes $G_1\times\{1\}$ , so it centralizes (hence normalizes) any subgroup of $G_1\times\{1\}$ ; in particular it normalizes $H\cap (G_1\times\{1\})$ . And $H$ normalizes both $H$ and $G_1\times\{1\}$ (the latter because it is normal in $G_1\times G_2$ ). So $H$ normalizes $H\cap (G_1\times\{1\})$ . Thus, both $H$ and $G_2$ normalize $H\cap (G_1\times \{1\})$ . Now, ( typo alert ) we have $HG_2=G_1\times G_2$ . Since $H$ and $G_2$ both normalize $H\cap(G_1\times\{1\})$ , and $H$ and $G_2$ generate $G_1\times G_2$ , it follows that $H\cap (G_1\times\{1\})$ is normal in $G_1\times G_2$ . The symmetric argument, using $G_1$ instead of $G
|
|abstract-algebra|group-theory|
| 0
|
If $A$ is a Jacobson ring, so is $A[X]$
|
I am studying Jacobson rings, using this file by Matthew Emerton and this source. I am trying to understand the proof of the following: Theorem. if $A$ is a Jacobson ring (that is, $\mathfrak p= \operatorname{Jac}(\mathfrak p)$ for all primes $\mathfrak p$ of $A$ ), then so is the ring of polynomials $A[X]$ . It appears in page 4 of Emerton's notes. He uses two lemmata: Lemma 1. If $A\subseteq B$ are domains such that $A$ is Jacobson, and for some $0\neq a \in A$ , the induced morphism $A_a \to B_a$ is integral, then $\operatorname{Jac}(B)=0$ . Lemma 2. If $\operatorname{Jac}( A ) = 0$ , then $\operatorname{Jac} (A[X] ) = 0$ too. (For clarity, following the notation of Bosch , $\operatorname{Jac}(B)$ is the intersection of all maximal ideals of $B$ , and $\operatorname{Jac}(\mathfrak a) = \bigcap_{\mathfrak {a \subseteq m}} \mathfrak m$ is the intersection of all maximal ideals containing $\mathfrak a$ ). The proof Emerton gives goes as follows: Proof. Let $\varphi:A[X]\to B$ be an epi
|
In $A'[x]$ , there is a well-defined notion of degree. Let $\mathfrak p$ be the kernel of the map $A'[x]\to B$ and let $f\in\mathfrak p-0$ be an element of minimal degree $n$ . By construction, $n>0$ . After localising at the leading coefficient of $f$ , if necessary, we may suppose that $f$ is monic. (I will not clutter the notation any further and just assume that $f$ is monic. Make sure you realise that and why the localisation cannot introduce a "new" element of smaller degree, which might happen in the presence of zero-divisors.) It is indeed the case that $f$ then generates $\mathfrak p$ (see below), but that's actually immaterial to the proof that $B/A'$ is finite, for if the kernel of $\varphi\colon A'[x]\to B$ contains a monic polynomial $f=x^n-\sum_{i=0}^{n-1}f_ix^i$ , $f_i\in A'$ , then $\varphi(x^n)=\sum_{i=0}^{n-1}f_i\varphi(x^i)$ is a linear combination of the $\varphi(x^i)$ , $i<n$ , and by a straightforward inductive argument, so is each $\varphi(x^m)$ , $m>n$ . Therefore
|
|ring-theory|commutative-algebra|ideals|maximal-and-prime-ideals|integral-extensions|
| 0
|
Attraction of events
|
I don't know if the next statement is true or false: Let $(\Omega,\mathcal{F},P)$ be a probability space and let $A,B$ and $C$ be events in $\mathcal{F}$ such that $P(A)>0$ . If $P(B|A)>P(B)$ and $P(C|A)>P(C)$ , then $P(B\cap C|A)\geq P(B\cap C)$ . I have been able neither to prove it nor to find a counterexample. Can anyone help determine the truth or falsity of the statement, please?
|
This is not the case. Say we uniformly randomly pick an integer from $[1,19]$ . Then it has probability $\frac{10}{19}$ to be odd, $\frac8{19}$ to be prime and $\frac7{19}$ to be an odd prime. If I now tell you that it’s in $[1,9]$ , the probability that it’s odd increases to $\frac59$ and the probability that it’s prime increases to $\frac49$ , but the probability that it’s an odd prime decreases to $\frac39$ .
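The counterexample is small enough to check exhaustively with exact arithmetic (a direct transcription of the numbers above, nothing new):

```python
from fractions import Fraction as F

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

omega = set(range(1, 20))                 # uniform pick from {1, ..., 19}
A = {n for n in omega if n <= 9}          # "it's in [1, 9]"
B = {n for n in omega if n % 2 == 1}      # odd
C = {n for n in omega if is_prime(n)}     # prime

def P(ev):
    return F(len(ev), len(omega))

def P_given_A(ev):
    return F(len(ev & A), len(A))

raised_B = P_given_A(B) > P(B)            # 5/9 > 10/19
raised_C = P_given_A(C) > P(C)            # 4/9 > 8/19
dropped = P_given_A(B & C) < P(B & C)     # 3/9 < 7/19
```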
|
|probability|conditional-probability|examples-counterexamples|
| 1
|
Proof of the universal property of free abelian groups
|
Let $S$ be a set. The group with presentation $(S, R)$ , where $R = \{\ [s , t] \mid s,t\in S\ \}$ is called the free abelian group on $S$ -- denote it by $A (S)$ . Prove that $A (S)$ has the following universal property: if $G$ is any abelian group and $\varphi:S\to G$ is any set map, then there is a unique group homomorphism $\phi : A (S) \to G$ such that $\phi\mid_S=\varphi$ . Deduce that if $A$ is a free abelian group on a set of cardinality $n$ then $$ A\cong\mathbb{Z}\times\mathbb{Z}\times\cdots\times\mathbb{Z}\ \ (n\text{ factors})\ . $$ For all $N\unlhd G$ containing $R$ , we have $[s,t]\in R\subseteq N$ , then $[s,t]N=N$ , and so $$ \begin{aligned} \,[s,t]\langle R\rangle&=[s,t]\bigcap_{R\subseteq N\unlhd F(S)}N \\&=\bigcap_{R\subseteq N\unlhd F(S)}N \\&=\langle R\rangle\ , \end{aligned} $$ hence $$ \begin{aligned} st\langle R\rangle&=ts\langle R\rangle \\s\langle R\rangle\,t\langle R\rangle&=t\langle R\rangle\,s\langle R\rangle \end{aligned} $$ implies $F(S)/\langle R\ran
|
Lemma. The normal subgroup of $F(S)$ generated by $R$ , $\langle R\rangle^{F(S)}$ , is precisely $[F(S),F(S)]$ . Proof. Since $R$ is generated by elements of $[F(S),F(S)]$ , then we have $\langle R\rangle^{F(S)}\leq [F(S),F(S)]$ . Conversely, using the commutator identities (I use the convention $[a,b]=a^{-1}b^{-1}ab$ ; similar identities hold for the other convention), we have: $$\begin{align*} [x,zy] &= [x,y][x,z]^y\\ [xz,y] &= [x,y]^z[z,y]\\ [x,y^{-1}] &= [y,x]^{y^{-1}}\\ [x^{-1},y] &= [y,x]^{x^{-1}} \end{align*}$$ it follows that any element of $[F(S),F(S)]$ can be written as a product of conjugates of elements of the form $[s,t]$ with $s,t\in S$ , so $[F(S),F(S)]\leq \langle R\rangle^{F(S)}$ , giving equality. (Alternatively, you already proved that $[F(S),F(S)]\leq \langle R\rangle^{F(S)}$ using the universal property of the abelianization, but I figured one universal property at a time is enough). $\Box$ Proof of the Universal Property. Existence . Let $G$ be an abelian group, a
|
|abstract-algebra|group-theory|category-theory|abelian-groups|free-groups|
| 0
|
A few questions regarding random points on a disk.
|
Today a few questions popped up in my head, and I'm curious to know the answers. a). Let $C$ be a disk of radius $R$ centered at the origin. Given a point on the disk $P = (x,y)$ , what is the expected distance between $P$ and $P'$ , where $P'$ is a point chosen uniformly at random somewhere on the disk. b). Let $C,R,P $ and $P'$ be as above. What is the probability that the distance between $P$ and $P'$ is $\geq r$ , with $r\in [0,2R]$ . Help with either would be greatly appreciated, thanks in advance!
|
$a$ .) Let the initial point chosen be $p=(x_0, y_0)$ . This gives us a distance from some other randomly chosen point on the disc of; $$D(x,y)=\sqrt{(x-x_0)^2+(y-y_0)^2}$$ Integrating this over the entire disc and dividing by $A=\pi R^2$ gives us; $$E(D)=\frac{1}{\pi R^2}\iint_{C}\sqrt{(x-x_0)^2+(y-y_0)^2}dxdy$$ Switching to polar coordinates yields; $$E(D)=\frac{1}{\pi R^2}\iint_{C}\sqrt{(r\cos(\theta)-x_0)^2+(r\sin(\theta)-y_0)^2}rdrd\theta$$ Choosing $p=(0,0)$ makes this integral trivial but it is, in general, a big mess that you are free to crank through if you have the time. $b$ .) Now, we can consider the probability that the distance between $p$ and some other randomly chosen point $p'$ on the disc is $\geq r\in [0,2R]$ by considering the area of intersection between overlapping circles. That is, consider the disc of radius $R$ centered at the origin and a separate disc of radius $r$ centered at your chosen point $p$ . Let $A_1$ be the overlap between these discs and let $A_2=\
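For part $a$.), in the special case where $P$ itself is also chosen uniformly at random on the disk, the expected distance has the known closed form $\frac{128R}{45\pi}$; a Monte Carlo sketch for $R=1$ (sample size, seed, and the rejection-sampling scheme are my own choices):

```python
import math, random

def random_disk_point(rng, R=1.0):
    # rejection sampling: uniform on the bounding square, keep disk hits
    while True:
        x, y = rng.uniform(-R, R), rng.uniform(-R, R)
        if x * x + y * y <= R * R:
            return x, y

rng = random.Random(12345)
N = 100_000
total = 0.0
for _ in range(N):
    x0, y0 = random_disk_point(rng)
    x1, y1 = random_disk_point(rng)
    total += math.hypot(x1 - x0, y1 - y0)

est = total / N
exact = 128 / (45 * math.pi)   # known mean distance for the unit disk, ~0.9054
```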
|
|probability|statistics|probability-distributions|
| 1
|
Support of $f \in A$ in Spec $A$ for a reduced ring $A$
|
Let $A$ be a reduced ring (i.e. no nilpotents). Take any $f \in A$ and I would like to show that $\operatorname{Supp} f = \overline{D(f)}$, where $D(f)$ denotes the distinguished open set of $\operatorname{Spec}A$ where $f$ does not vanish. I suspect that the fact that $A$ is reduced is crucial here, but somehow my argument doesn't use it. I can't see where I am going wrong and I would appreciate some assistance. Thanks!
|
I didn't see a proof of the claim in the posts above, so I thought I'd post one. Suppose $A$ is reduced, and $p\in supp(f)$ . We want to show that $p \in \overline{D(f)}$ . Let $D(g)$ be a basic neighborhood of $p$ . We want to show $D(g)$ intersects $D(f)$ , i.e. we want to show that there exists a prime ideal $q$ which contains neither $f$ nor $g$ . If no such $q$ exists, then every prime ideal contains either $f$ or $g$ . But this means every prime contains $fg$ . But $A$ is reduced, so the intersection of all the prime ideals is zero. Thus $fg=0$ . But recall that $g\notin p$ , and thus $f/1= 0$ in $A_p$ , contrary to the assumption $p \in supp(f)$ .
|
|algebraic-geometry|affine-schemes|
| 0
|
If $ f(x \cdot f(y) + f(x)) = y \cdot f(x) + x $, then $f(x)=x$
|
Let $ f : \mathbb{Q} \rightarrow \mathbb{Q} $ be a function which has the following property: $$ f(x \cdot f(y) + f(x)) = y \cdot f(x) + x \;,\; \forall \; x, y \in \mathbb{Q} $$ Prove that $ f(x) = x, \; \forall \; x, y \in \mathbb{Q} $. So far, I've found that $f(f(x)) = x$, $f(0) = 0$ and $f(-1) = -1$. (For $f(0)=0$, we substitute $x=0$ to arrive at $f(f(0))-yf(0)$ identically $0$ for all rational $y$; for $f(f(x))=x$, we substitute $y=0$ and use $f(0)=0$. For $f(-1) = -1$, substitute $x=y=-1$ to get $f(0)=-f(-1)-1$, and use $f(0)=0$.)
|
So far we know $f(f(x)) =x$ and $f(0) = 0$ . Differentiating $f(f(x)) = x$ on both sides gives us $f'(f(x))f'(x) = 1$ , which implies $f'(0) = \pm1$ . Now, differentiating the original equation on both sides with respect to $y$ (keeping in mind that $x$ is a constant with respect to $y$ ) we get $$f'(xf(y) + f(x))f'(y)x = f(x).$$ Now plug in $y=0$ to get $f'(f(x))f'(0)x = f(x)$ , and substitute the value $f'(f(x)) = \frac{1}{f'(x)}$ to get $f'(0)x = f(x)f'(x)$ . This leads to the differential equation $$f'(0)x\,dx = f(x)\,d(f(x))$$ and integrating both sides gives us $$f'(0)x^2 = f^2(x),$$ the constant of integration being $0$ since $f(0)=0$ . Thus $f(x) = \pm \sqrt{f'(0)}\,x$ , which gives us $f(x) = x$ .
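Independently of the calculus argument above (which tacitly assumes differentiability, something not given for a map $\mathbb Q\to\mathbb Q$), one can at least sanity-check that $f(x)=x$ satisfies the functional equation on a sample of rationals, and that the other involution $f(x)=-x$ does not; this is a check, not a proof:

```python
from fractions import Fraction as Q

def satisfies(f, samples):
    """Check f(x*f(y) + f(x)) == y*f(x) + x on all sample pairs."""
    return all(f(x * f(y) + f(x)) == y * f(x) + x
               for x in samples for y in samples)

samples = [Q(n, d) for n in range(-4, 5) for d in (1, 2, 3)]
identity_ok = satisfies(lambda x: x, samples)          # f(x) = x works
negation_fails = not satisfies(lambda x: -x, samples)  # f(x) = -x does not
```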
|
|functional-equations|rational-numbers|
| 0
|
strictly semifinite weights restrict to a von Neumann subalgebra
|
Suppose $\omega$ is a strictly semifinite weight on a von Neumann algebra $M$ and $N$ is a von Neumann subalgebra of $M$ . Is $\omega|_N$ a strictly semifinite weight on $N$ ?
|
No. Say $\omega$ is the canonical trace on $M = B(l^2)$ and $N \subset M$ is a type III factor.
|
|operator-algebras|von-neumann-algebras|
| 1
|
Group structure on the covering space of a topological group
|
I am trying to prove the following fact: Let $(G, \cdot)$ be a topological group and let $E$ be a universal covering space of $G$ . If $G$ and $E$ are both locally path-connected, then for any choice of element $e$ in the fiber over $1_G \in G$ , there exists a unique topological group structure on $E$ , with $e$ as the identity, for which the covering map $p: E \to G$ is a homomorphism. Following the idea found on Wikipedia : The construction is as follows. Let $a$ and $b$ be elements of $E$ and let $f$ and $g$ be paths in $E$ starting at $e$ and terminating at $a$ and $b$ respectively. Define a path $h : I \to G$ by $h(t) = p(f(t))p(g(t))$ . By the path-lifting property of covering spaces there is a unique lift of $h$ to $E$ with initial point $e$ . The product $ab$ is defined as the endpoint of this path. By construction we have $p(ab) = p(a)p(b)$ . One must show that this definition is independent of the choice of paths $f$ and $g$ , and also that the group operations are continuou
|
I will skip the tedious proof of well-definedness and the group axioms since you have already got them. Hence I will only focus on the proof of the continuity of the map $$\star\colon E\times E\to E, \qquad (a,b)\mapsto a\star b^{-1}$$ From the construction and the fact that $p$ is a group homomorphism, we have a natural commutative diagram $$\require{AMScd} \begin{CD} E\times E @>{\star}>> E\\ @VVV @VVV \\ G\times G @>{\cdot}>> G \end{CD}$$ where the vertical arrows are the product map $p\times p$ and $p\colon E\to G$ respectively, and the bottom horizontal arrow is the continuous map $\cdot \colon G\times G\to G$ sending $(g_1,g_2)\mapsto g_1\cdot g_2^{-1}$ . $\textbf{Definition}$ Given an open subset $U\subseteq E$ , I say that $U$ is ${\it elementary}$ with respect to $p$ if $p_{|U} \colon U\to p(U)$ is a homeomorphism. $\textbf{Lemma:}$ There exists a topological basis of $E$ consisting of sets elementary with respect to $p$ . $\textit{proof:}$ Let $U$ be an open set of $E$ and $x\in U$ . It th
|
|algebraic-topology|topological-groups|covering-spaces|
| 1
|
Why is $p$ (or $p + 1$) not the upper bound of the number of solutions of an elliptic curve mod $p$ (finite field)?
|
In the build-up to the enunciation of the Birch and Swinnerton-Dyer conjecture (BSD), the following back-of-the-envelope idea comes up: Because half of the $1$ to $n-1$ elements mod $p$ are quadratic residues (squares), for the expression $y^2= x^3 +\cdots$ mod a certain prime $p,$ on average $1/2$ of the input $x$ values will return a quadratic residue, and taking the square root will ultimately yield two $y$ values. Now, since the higher the rank, the more rational numbers are going to fall on the curve, regardless of whether the field is $\mathbb Q$ or $\mathbb F_p,$ it follows that in the limit $\prod_{p \leq X} \frac {N_p}p \approx 1$ , i.e. an expectation of $p$ points on the curve. I bet there is some misunderstanding in the way I paraphrased this concept, explaining why I don't see why this $1$ is not the upper bound, regardless of the rank, instead of being the expectation: Given that only half of the integers mod $p$ are quadratic residues, how is it possible to get more
|
Exactly half of the integers $1,\dots,p-1$ are quadratic residues and half are quadratic nonresidues, for sure. However, the values of the cubic polynomial in $x$ , as $x$ runs through $0,1,\dots,p-1$ , are not just the residue classes $0,1,\dots,p-1$ in a different order; rather, the values of the cubic polynomial will hit some of those residue classes once, some more than once, and some not at all. So it really matters whether the more-than-once/not-at-all images are quadratic residues or nonresidues; a given cubic polynomial can prefer values that are residues or that are nonresidues. One can see this with specific examples like $y^2 = x^3+x+1$ over $\Bbb F_5$ : there are $8$ points on this elliptic curve, namely $(0,\pm1)$ , $(2,\pm1)$ , $(3,\pm1)$ , and $(4,\pm2)$ (and an extra point at infinity if we're counting those), and $8>5$ . Finally, if we were to believe that there can never be more than $p$ (affine) points on an elliptic curve over $\Bbb F_p$ , then we would be forced to
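The count for $y^2=x^3+x+1$ over $\Bbb F_5$ is easy to verify by brute force (affine points only, as in the answer):

```python
def affine_points(p, a=1, b=1):
    """Affine points of y^2 = x^3 + a*x + b over F_p, by brute force."""
    return [(x, y) for x in range(p) for y in range(p)
            if (y * y - (x ** 3 + a * x + b)) % p == 0]

pts = affine_points(5)    # the curve y^2 = x^3 + x + 1 over F_5
```

This returns the $8$ points listed above, so the affine count alone already exceeds $p=5$.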
|
|elliptic-curves|
| 1
|
Minimizing Frobenius norm involving inverse
|
I am looking for methods to solve the following minimization. Let $A\in \mathbb{R}^{n\times k}$ , $B\in \mathbb{R}^{n\times m}$ , $C\in \mathbb{R}^{m\times m}$ and $E\in \mathbb{R}^{m\times k}$ , where $n>m>k$ . Additionally, $C$ is positive definite. $$G(d)=||A-B(C+\text{diag}(d))^{-1}E||^2_F$$ Is there any hope of finding the positive minimizer (is there just one?) $d\geq0$ of $G$ ?
|
$ \def\R#1{{\mathbb R}^{#1}} \def\o{{\tt1}} \def\g{\gamma} \def\t{\times} \def\bR#1{\big(#1\big)} \def\BR#1{\Big(#1\Big)} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\diag#1{\op{diag}\LR{#1}} \def\Diag#1{\op{Diag}\LR{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\mt{\mapsto} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} $ Let's use a variable naming convention wherein an upper/lower letter denotes a matrix/vector and a Greek letter denotes a scalar. Under this scheme, the only variable that needs to be renamed is $\,G\to\g$ . Also, to avoid confusion with differential operations, let's rename $d\to x.$ To enforce positivity, construct $x$ from an unconstrained vector $w$ $$\eqalign{ x &= \exp(w) \qiq x \ge 0 \\ dx &= x\odot dw \\ }$$ where $\exp(w)$ is applied elementwise and $\odot$ denotes the elementwise product. For typing convenience, define the auxiliary
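Since the answer above is heavy on notation, here is a minimal numerical sketch of its central idea: the substitution $d=\exp(w)$ to enforce $d\ge 0$. The dimensions and random data are arbitrary, the gradient is taken by central differences rather than the closed form derived above, and a backtracking line search keeps the objective non-increasing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 6, 4, 2                       # n > m > k as in the question
A = rng.standard_normal((n, k))
B = rng.standard_normal((n, m))
E = rng.standard_normal((m, k))
M = rng.standard_normal((m, m))
C = M @ M.T + m * np.eye(m)             # positive definite

def G(d):
    """Objective ||A - B (C + diag(d))^{-1} E||_F^2."""
    return np.linalg.norm(A - B @ np.linalg.solve(C + np.diag(d), E)) ** 2

def grad_w(w, eps=1e-6):
    """Central-difference gradient of w -> G(exp(w))."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        g[i] = (G(np.exp(wp)) - G(np.exp(wm))) / (2 * eps)
    return g

w = np.zeros(m)                         # start at d = exp(0) = 1
g_start = G(np.exp(w))
for _ in range(100):
    g = grad_w(w)
    step = 0.1
    while step > 1e-12 and G(np.exp(w - step * g)) >= G(np.exp(w)):
        step *= 0.5                     # backtrack until G decreases
    if step <= 1e-12:
        break                           # numerically stationary; stop
    w = w - step * g
g_end = G(np.exp(w))
```

Whether the minimizer found this way is unique is exactly the open question; the sketch only shows the constrained problem can be attacked as an unconstrained one.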
|
|optimization|matrix-equations|matrix-calculus|least-squares|
| 1
|
Statements which feel like they shouldn't be first-order expressible, but are
|
I recently had the opportunity to study some (very basic) model theory, and that made some theorems in ring theory become immediately more interesting. For example, one characterization of the Jacobson radical of a ring $R$ makes it possible to express the formula " $x \in J(R)$ " as a first-order formula in the language of rings: $$\forall r \exists s (s(1-rx) = 1)$$ as it is known that $x \in J(R) \iff 1 - rx$ is invertible for all $r \in R$ . Similarly, the statement "every $R$ -module is flat", which feels very far from first-order expressible, can be written as the sentence: $$\forall a \exists x (a = axa)$$ as both of these statements are satisfied precisely by the von Neumann regular rings. I am also aware of this question , which talks about a way of expressing the sentence $\dim(R) \leqslant k$ for a commutative ring $R$ . In that spirit, I was hoping for more examples (not necessarily from algebra) of seemingly "complex" statements which are equivalent to some first-order fo
|
Here's a possible example that occurred in my own research. If $(a(n))$ is an automatic sequence, then the first-order theory of $\langle \mathbb{N}, +, n \rightarrow a(n) \rangle$ is decidable, and there is a free theorem-prover called Walnut that implements a decision procedure for it. One property of sequences that people are interested in is balance: we say a sequence over a finite alphabet $(a(n))$ is balanced if for all $\ell$ the number of occurrences of each letter $c$ in any two length- $\ell$ contiguous blocks of $(a(n))$ differs by at most $1$ . However, there's no obvious way to express this property in first-order logic, because it seems to require counting the number of $c$ 's occurring in length- $\ell$ blocks. However, in the case where $(a(n))$ takes only two distinct values, say $0$ and $1$ , we can use an alternative characterization of the balance property, which says that a finite block is unbalanced if and only if it contains both $0w0$ and $1w1$ for some block $w
|
|ring-theory|first-order-logic|model-theory|big-list|
| 0
|
Munkres' Proof of Well Ordering Property
|
In Munkres' Topology the proof for the Well-Ordering Property is stated as follows: I'm having trouble with the first and second underlined parts: "Let A be the set of all positive integers n for which the statement holds." Isn't this the very statement that we are trying to show is true in this subproof? Can you please restate this in a different way, because I feel as though it might be ambiguous? When he says "this set" in the last underlined part, does he mean the set $C \cap \{1,..., n \}$ or the set $A$? And I'm sorry to put it so generally, but what is the key idea behind the subproof; is it to show that the intersection between the set of the first $n$ integers and any subset of it has a least element? I don't see how such an intersection should imply the existence of a least element. Thanks in advance guys, I hate asking to clarify so many little points but this one just ain't clicking for me.
|
The answer by @eric-wofsey is excellent. I also spent an embarrassing amount of time trying to figure that out. If I may add something though. Munkres is structuring the proof in two parts for a reason: The first part uses the principle of induction to prove a statement (namely the existence of a smallest element) for a certain class of finite (in size) subsets of $\mathbb{Z}_+$ : the $\{1...n\}$ family of subsets of $\mathbb{Z}_+$ which are indeed finite in size for every $n \in \mathbb{Z}_+$ . These sets not only have a specific shape (they consist of all positive integers less than or equal to $n$ , i.e. no "holes") but, more crucially, they are finite in size. You are not able to use induction to prove that statement if it were extended to sets of that shape having infinite size (actually only one such shaped set of infinite size exists, $\mathbb{Z}_+$ itself, which is what you are trying to prove in the first place!). The second part uses what was proven in the first part (with in
|
|real-analysis|elementary-set-theory|induction|proof-explanation|intuition|
| 0
|
a curious approach to differentiability
|
Given $f:\mathbb{R}^{m}\rightarrow \mathbb{R}^{n}$ , suppose that there exists a function $h:\mathbb{R}^{m}\rightarrow L(\mathbb{R}^{m},\mathbb{R}^{n})$ such that for every smooth function $g:\mathbb{R}^{l}\rightarrow \mathbb{R}^{m}$ , $f(g(x))$ is smooth and $h(g(x))\circ g'(x)= (f\circ g)'(x)$ . Does it follow that $f$ is smooth and $f'=h$ ?
|
I'm interpreting your question as follows: if $h$ satisfies the chain rule equation for $f$ and any of its possible parameterizations, is $h' = f'$ ? The answer is no, but up to an additive constant yes. If $h'$ satisfies the desired equation for any such $g$ , it in particular satisfies it for $g(x) = x$ . Then your equation specializes to $h'(x) = f'(x)$ . The solutions to this are precisely $f(x) + c$ . Notice that additive constants do work: replacing $f(x)$ with $f(x) + c$ , we have $(f(g(x)) + c)' = f'(g(x))g'(x)$ , as desired. Edit: As has been pointed out, once you fixed your formatting, you wanted $h$ itself (not $h'$ ) to satisfy the desired equation. In that case the solution is even simpler; the $g(x) = x$ case forces $h(x) = f'(x)$ .
|
|analysis|derivatives|
| 0
|
Prove that the limit of $\sqrt{n+1}-\sqrt{n}$ is zero
|
How would I go about proving that $\lim_{n\to\infty}\sqrt{n+1}-\sqrt{n}=0$? I have tried to use Squeeze theorem but have not been able to come up with bounds that converge to zero. Additionally, I don't think that converting to polar is possible here.
|
I am gonna attempt to answer in a way that doesn't seem like pulling a rabbit outta the hat, and shows how I arrived at a solution. It suffices to have that $(\sqrt{n+1} - \sqrt n)^2\to 0$ , i.e. , $2n + 1 - 2\sqrt{n^2 + n}\to 0$ . Now, note that $2\sqrt{n^2 + n}$ is large enough to cancel $2n$ , but then we also have to take care of the lingering $1$ . This motivates us to take a nonnegative function $f$ such that $$ \sqrt{n^2 + n}\ge n + f(n)\tag{1} $$ so that $(\sqrt{n+1} - \sqrt n)^2\le 1- 2f(n)$ . Thus it suffices to have that $$ \lim_{n\to\infty}f(n) = 1/2.\tag{2} $$ We thus find $f$ satisfying (1) and (2): In view of (2), take $f$ to be the constant function $1/2$ . But then, one quickly sees that this makes (1)'s RHS too large to be true. Thus we can try to reduce $f$ , but in such a way that (2) still holds. Well, the first obvious thing that came to my mind was this: $$ f(n) := \frac{1}{2}\, \frac{n}{n + 1} $$ Now, it's straightforward to check that this indeed satisfies (1).
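A quick numeric sanity check of the final bound, as a Python sketch (not part of the argument). Note that the standard alternative is to rationalize: $\sqrt{n+1}-\sqrt n = \frac{1}{\sqrt{n+1}+\sqrt n}\le \frac{1}{2\sqrt n}$ , which gives the limit immediately.

```python
import math

# Sanity check of the bound derived in the answer:
# (sqrt(n+1) - sqrt(n))**2 <= 1 - 2*f(n) = 1/(n+1), with f(n) = n/(2*(n+1)).
for n in [1, 10, 100, 10_000, 1_000_000]:
    diff_sq = (math.sqrt(n + 1) - math.sqrt(n)) ** 2
    bound = 1 / (n + 1)
    assert diff_sq <= bound
    print(n, diff_sq, bound)
```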
|
|limits|radicals|
| 0
|
Why does conjugating one of the complex vectors in the inner product, still give the correct dot product?
|
In the complex space, I understand that we need to conjugate one of the vectors in the dot product to avoid getting $\langle v, v \rangle \leq 0$ . What I am unable to understand is how taking the dot product between $w$ as shown in picture 1 and $v$ conjugated still gives the correct dot product. How can the dot product of $w$ and $v$ conjugated still represent how closely the two vectors align, when we instead use the conjugated $v$ ? Is there some sort of interpretation of the complex plane that I'm missing? I've tried using the formula for multiplication of two complex numbers as in picture 2 (I don't know how to write formulas on the forum) and Euler's formula to maybe get a better understanding of how taking the inner product with the conjugate still ends up giving the correct relation and length if you take the norm. I think this makes sense if you take the inner product $\langle v, v \rangle$ (with the second $v$ conjugated), as you'll end up getting $|v|^2$ and therefrom that the norm is equal to $|v|$ . But this obviously won't wor
|
We define the standard inner product on the vector space $\mathbb{C}$ over the field $\mathbb{C}$ as the function $\langle \cdot, \cdot \rangle: \mathbb{C} \times \mathbb{C} \to \mathbb{C}$ via $\langle \cdot, \cdot \rangle:(\boldsymbol{u}, \boldsymbol{v}) \mapsto \boldsymbol{u} \boldsymbol{\bar{v}}$ . This notation can be cumbersome so we denote $\langle \cdot, \cdot \rangle((\boldsymbol{u},\boldsymbol{v}))$ as $\langle \boldsymbol{u},\boldsymbol{v} \rangle$ . We can check that this satisfies the defining properties of an inner product (namely conjugate symmetry, linearity in the first argument, and positive-definiteness). The inner product $\langle \boldsymbol{w}, \boldsymbol{v}\rangle$ is computed as $\boldsymbol{w} \boldsymbol{\bar{v}}$ and not $\boldsymbol{w} \boldsymbol{v}$ . It follows that the inner product between $\boldsymbol{w}$ and $\boldsymbol{\bar{v}}$ is $\langle \boldsymbol{w}, \boldsymbol{\bar{v}}\rangle = \boldsymbol{w} \boldsymbol{\bar{\bar{v}}}=\boldsymbol{w} \bolds
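The computation can be mirrored with Python's built-in complex type; a small sketch (the helper name `inner` is mine) where `conjugate()` plays the role of the bar:

```python
# Sketch of the standard inner product on C (as a 1-dimensional complex
# vector space): <u, v> = u * conjugate(v).
def inner(u: complex, v: complex) -> complex:
    return u * v.conjugate()

v = 3 + 4j
w = 1 - 2j

# Positive-definiteness: <v, v> = |v|^2 is a nonnegative real.
assert inner(v, v) == abs(v) ** 2 == 25

# The point of the answer: pairing w with the *conjugate* of v undoes
# the conjugation, giving the plain product w*v rather than <w, v>.
assert inner(w, v.conjugate()) == w * v
assert inner(w, v) == w * v.conjugate()
```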
|
|linear-algebra|geometry|complex-numbers|inner-products|
| 0
|
Small $o$ notation and the convergence of sum of identically distributed random variables
|
I am struggling with $o$ notation. Here $k, n$ are natural numbers; why do we have $o\Big(\sum_{k=0}^{n-1}\frac{(k\log\log k)^{1/2}}{k+1}\Big)=o((n\log\log n)^{1/2})$ as $n\rightarrow\infty$ ? And another one about convergence: why does $\sum_{k=0}^{n-1}\frac{|X_k|}{k^2}$ converge in $L^2$ and a.s. as $n\rightarrow\infty$ ? Here $(X_k)_{k\in\mathbb{N}}$ are normally distributed random variables with mean $0$ and variance $k$ , and they have independent and stationary increments (but they themselves are not independent of each other). Comment: I apologise that I changed the second part of the question a bit, since the original question was not formulated in the correct way.
|
I think I came up with a solution; it would be much appreciated if you could help me double-check that it is correct. For the first small $o$ notation: since $k < n$ , we have $\sum_{k=0}^{n-1}\frac{(k\log\log k)^{1/2}}{k+1}\leq \sqrt{\log\log n} \sum_{k=1}^{n-1}\frac{1}{\sqrt{k}}\leq 2\sqrt{n\log\log n}$ (using $\sum_{k=1}^{n-1}k^{-1/2}\leq 2\sqrt{n}$ ), so the sum is $O((n\log\log n)^{1/2})$ , and anything that is $o$ of the sum is therefore $o((n\log\log n)^{1/2})$ , which gives the first result. Regarding the second question: we use convergence of series of independent centered $L^2$ random variables. Here we exchange the order of summation below: $\sum_{k=1}^{n-1}\frac{|X_k|}{k^2}\leq \sum_{k=1}^{n-1}\sum_{i=1}^{k-1}\frac{|X_{i+1}-X_{i}|}{k^2}=\sum_{i=1}^{n-2}\sum_{k=i+1}^{n-1}\frac{|X_{i+1}-X_{i}|}{k^2}\leq\sum_{i=1}^{n-2}\Big(\frac{1}{i}-\frac{1}{n-1}\Big)|X_{i+1}-X_{i}|$ using $\frac{1}{k^2}\leq\frac{1}{(k-1)k}$ and telescoping. Here $(X_{i+1}-X_{i})_{i\geq 1}$ is a sequence of independent random variables with mean $0$ and variance $1$ (because of the independent and stationary increments). We only need to check if $\lim_{n\rightarrow\infty}\sum_{i=1}^{n-1}\mathbb{E}((\frac{1}{i}-\frac{1}{n-1})(X_{i+1}-X_{i}))^2$ converges,
|
|probability|sequences-and-series|probability-theory|convergence-divergence|asymptotics|
| 0
|
Generalizing the idea of a circle kissing the vertex of a parabola
|
Recently, while using GeoGebra, I was able to generalize the idea of a circle kissing the vertex of a parabola. I have achieved a wonderful result, but I cannot prove it. If anyone can prove it, please do so. Let $P$ be a parabola whose focus is $F$ , whose directrix is $L$ , whose vertex is $O$ , and whose axis of symmetry is $D$ . The distance between $F$ and $L$ is equal to $d$ . Let $Q$ be an ellipse whose center is $S∈D$ , one of whose vertices is $O$ ; let $M$ be one of the two vertices of $Q$ that do not lie on $D$ , and let $k=OS/MS$ . Let $g=OS/d$ . The largest possible ellipse $Q$ with the given value of $k$ that touches $P$ at only one point is the ellipse that makes $g=k²$ . Any positive real number can be chosen as the value of $k$ . The circle kissing the vertex of the parabola is the special case $k=1$ . Is this result already known? If so, please provide references that include this result. This is a linguistic description of what I have achieved: I have created the lar
|
Your finding is not new: the limiting "kissing" ellipse is such that its curvature $\kappa$ at vertex $O$ is the same as the curvature of the parabola at $O$ . In fact: $$ \kappa_\text{ellipse}={SO\over SM^2}, \quad \kappa_\text{parabola}={1\over d}, $$ where $d$ is the distance between focus and directrix of the parabola (note that $\kappa$ is in both cases the reciprocal of the latus rectum). Just multiply both curvatures by $SO$ to get your equality.
|
|geometry|ordinary-differential-equations|conic-sections|
| 1
|
Need help naming a fractal/image generated by exponential
|
I'm taking an introductory comp-sci course this semester, and my professor pulled up this graphic while explaining mergesort (the graphic shows blocks of size $2^{-n}$ ). For some reason, the vertical lines that appear in this photo look familiar to me, but I can't seem to find anything online or put my finger on it. Is there a name for a fractal that's generated in this manner?
|
Looks like the position of the first nonzero bit when counting in binary. An expression might be $v_2(n)$ where $v_m(n)$ is the largest power of $m$ that divides $n$ . This can be expressed in a few ways. $m^{v_m(n)} | n$ , $m^{v_m(n)+1} \not\mid n$ . Algorithmically, $v_m(n)= \begin{cases} m | n &: 1+v_m(n/m)\\ \text{else}&: 0\\ \end{cases} $
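The recursive definition translates directly into code; a short Python sketch (the function name `v` is mine):

```python
# The m-adic valuation v_m(n): the exponent of the largest power of m
# dividing n, following the recursive definition in the answer.
def v(m: int, n: int) -> int:
    if n % m == 0:
        return 1 + v(m, n // m)
    return 0

# For m = 2 this is the index of the lowest set bit of n, which is the
# column-height pattern visible in the mergesort graphic.
print([v(2, n) for n in range(1, 17)])
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]
```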
|
|logarithms|computer-science|fractals|
| 0
|
Tricky integral $\int_0^\pi{12\cos x\ \mathrm{sech}(\frac \pi2 \tan\frac x2)}\mathrm{d}x=\pi^2$
|
I need to show that the following tricky integral: $$\int_0^\pi{12\cos x\ \mathrm{sech}\left(\frac {\pi}2\tan\frac x2\right)}\mathrm{d}x$$ is equal to exactly $\pi^2$ . I have no idea how to start. I tried the substitution $u=\tan\frac x2$ and ended up with: $$\int_0^\infty24\ \mathrm{sech}\frac{\pi u}{2}\frac{1-u^2}{(1+u^2)^2}\mathrm{d}u$$ I might have to use residue theorem or Fourier transformation here, but I'm lost. Thank you!
|
Continue with \begin{align} &\int_0^\infty \operatorname{sech}\frac{\pi x}{2}\frac{1-x^2}{(1+x^2)^2}dx\\ =& \int_0^\infty \operatorname{sech}\frac{\pi x}{2}\bigg(\int_0^\infty e^{-y} y \cos(x y) dy\bigg) dx\\ =& \int_0^\infty e^{-y}y \int_0^\infty \frac{\cos (xy)}{\cosh\frac{\pi x}2}dx \ dy =\int_0^\infty \frac{ e^{- y}y} {\cosh y} \overset{t=e^{-2y}}{dy}\\ =& - \frac12\int_0^1 \frac{\ln t}{1+t}dt\overset{ibp}= \frac12\int_0^1 \frac{\ln (1+t)}{t}dt=\frac{\pi^2}{24}\\ \end{align} where $\int_0^\infty \frac{\cos (xy)}{\cosh\frac{\pi x}2}dx=\text{sech}\ y$ and $ \int_0^1 \frac{\ln (1+t)}{t}dt =\frac{\pi^2}{12}$ .
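A numeric cross-check of the closed form, as a Python sketch using composite Simpson's rule (the cutoff at $40$ is an arbitrary choice justified by the exponential decay of $\operatorname{sech}$):

```python
import math

# Numeric check: the u-substituted integral should equal pi^2/24,
# so the original integral equals 24 * pi^2/24 = pi^2.
def integrand(u: float) -> float:
    sech = 1 / math.cosh(math.pi * u / 2)
    return sech * (1 - u * u) / (1 + u * u) ** 2

# Composite Simpson's rule on [0, 40]; the tail beyond 40 is negligible.
a, b, m = 0.0, 40.0, 40_000          # m panels (even)
h = (b - a) / m
total = integrand(a) + integrand(b)
for i in range(1, m):
    total += (4 if i % 2 else 2) * integrand(a + i * h)
approx = total * h / 3

print(approx, math.pi ** 2 / 24)  # both ~0.41123...
```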
|
|calculus|integration|definite-integrals|fourier-transform|trigonometric-integrals|
| 1
|
Expected number of darts thrown in a game
|
$A$ and $B$ play a game where they take turns throwing darts at a dart board. The winner is the first person to hit the board. $A$ hits with probability $a$ and $B$ with probability $b$ . I showed (by computing a sum) that the expected number of darts thrown in one game is $$\frac{\alpha}{a}+\frac{\beta}{b}$$ where $\alpha$ and $\beta$ are the probabilities that $A$ and $B$ win, respectively. This made me think there should be a solution using the law of total probability (since $\alpha+\beta=1$ ). That would suggest that the expected number of darts thrown given $A$ wins is $\frac{1}{a}$ . This is nonsense, so I dismissed this as a coincidence. However, I ran a simulation for the analogous game for three players $A$ , $B$ and $C$ and found that the expected number of darts thrown in the game is $$\frac{\alpha}{a}+\frac{\beta}{b}+\frac{\gamma}{c}$$ Can someone explain what's going on here? I'm expecting someone will be able to arrive at this answer (and presumably the result for $n$ players) in a much more satisfy
|
With the following approach the question is trivial. Let us consider three players with probabilities $p_1$ , $p_2$ , $p_3$ of hitting and $q_1$ , $q_2$ , $q_3$ of not hitting. And suppose they throw consecutively, for $N$ rounds, $N$ times each. Then: \begin{align*} & Np_1 &&\text{average number of times the first player will hit and win.}\\ & Nq_1p_2 &&\text{average number of times the second player will win.}\\ & Nq_1q_2p_3 &&\text{average number of times the third player will win.}\\ & N(1+q_1+q_1q_2) &&\text{average number of throws to count.}\\ & N(1-q_1q_2q_3)&&\text{average number of games played.} \end{align*}
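The identity can be checked numerically. A small Python sketch (the probabilities are chosen arbitrarily) computing both sides exactly from the geometric-series formulas for the winning probabilities:

```python
# Check of E[throws] = alpha/p1 + beta/p2 + gamma/p3 for three players
# throwing in order until someone hits.
p1, p2, p3 = 0.3, 0.5, 0.2
q1, q2, q3 = 1 - p1, 1 - p2, 1 - p3

Q = q1 * q2 * q3                      # prob. a full round has no hit
alpha = p1 / (1 - Q)                  # P(player 1 wins)
beta = q1 * p2 / (1 - Q)              # P(player 2 wins)
gamma = q1 * q2 * p3 / (1 - Q)        # P(player 3 wins)

# The answer's ratio: average throws to count / average games played.
expected_throws = (1 + q1 + q1 * q2) / (1 - Q)
identity = alpha / p1 + beta / p2 + gamma / p3

assert abs(alpha + beta + gamma - 1) < 1e-12
assert abs(expected_throws - identity) < 1e-12
print(expected_throws)
```

The agreement is no accident: $\alpha/p_1 = 1/(1-Q)$, $\beta/p_2 = q_1/(1-Q)$, $\gamma/p_3 = q_1q_2/(1-Q)$, and these sum to exactly the expected-throws expression.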
|
|probability|algebra-precalculus|conditional-probability|
| 0
|
Prove that $\sqrt{18} - \sqrt{12}-\sqrt{45} + \sqrt{6}$ is irrational
|
Prove that $\sqrt{18} - \sqrt{12}-\sqrt{45} + \sqrt{6}$ is irrational I tried to let $x = \sqrt{18} - \sqrt{12}-\sqrt{45} + \sqrt{6}$ , and assume for the sake of contradiction that $x$ is rational. Then I squared numerous times to attempt to isolate an irrational on one side, and a supposedly rational expression on another side, since we assumed $x$ is rational, to arrive at a contradiction. However, all of my attempts have resulted in very complicated expressions that I am unable to complete. Is this the right approach, or are there other methods such as looking at it as a polynomial root?
|
To assist OP & at the risk of downvoting , I will add a very simple answer which the Particular numbers involved here allow (The Duplicates do not seem to allow this method) . . . . Let $X=\sqrt{18} - \sqrt{12}-\sqrt{45} + \sqrt{6}=3\sqrt{2} - 2\sqrt{3}-3\sqrt{5} + \sqrt{2}\sqrt{3}$ Hence $X - 3\sqrt{2} + 2\sqrt{3} - \sqrt{2}\sqrt{3} = - 3\sqrt{5}$ Squaring will eliminate $\sqrt{5}$ , while retaining $\sqrt{2}$ & $\sqrt{3}$ & $\sqrt{2}\sqrt{3}$ Move the terms around to get : $\square 1+\square \sqrt{3} = \square \sqrt{2} + \square \sqrt{2}\sqrt{3} = \sqrt{2} ( \square + \square \sqrt{3} )$ , where $\square$ indicates some arbitrary Co-Efficients involving $X$ , $X^2$ & Constants , though not involving radicals. Square this to eliminate $\sqrt{2}$ , while retaining $\sqrt{3}$ on either side. Collect all $\sqrt{3}$ terms to make it $\sqrt{3}=\square/\square$ which is a Contradiction. [[ Checking that it is not $0/0$ is left out here , though it is not very hard ]] Summary / Overview : Th
|
|radicals|irrational-numbers|
| 0
|
Parametric recursive sequence depending on initial value
|
Define the sequence $(x_n)_{n \geq 0}$ with $x_0 >0$ and recursive equation $x_{n+1} = \sqrt{2+x_n}$ for all $n \geq 0$ . I wish to determine the limit of the sequence $(x_n)_{n \geq 0}$ in terms of the parameter $x_0 \in (0, \infty)$ . If $x_0 < 2$ , then $x_n > 0, (\forall)n \geq 0$ , $x_n < 2$ and $(x_n)_n$ is strictly increasing. By Weierstrass' property, there exists $\ell = \lim_{n \to \infty} x_n \in \mathbb{R}$ . Taking the limit, obtain $\ell^2-\ell-2=0$ with solutions $\ell \in \{-1,2 \}$ . But since $x_n>0, (\forall)n \geq 0$ , $\ell \geq 0$ so $\lim_n x_n=2$ . If $x_0=2$ , then $x_n=2, (\forall)n \geq 0$ , a constant sequence, so $\lim_n x_n=2$ . If $x_0>2$ , then $x_n > 0, (\forall)n \geq 0$ and $(x_n)_n$ is strictly increasing. However, because $x_0>2$ we would have $x_n > 2, (\forall)n \geq 0$ . But we know already that the only possible limit points of the sequence are $\lim_n x_n \in \{ -1,2, \pm \infty \} \subset \overline{\mathbb{R}}$ , so does that mean the sequence diverges
|
You've made a mistake somewhere. For $x_{n+1} = \sqrt{2 + x_n}$ , and assuming that all $x_n > 0$ , $x_{n+1} > x_n$ for $x_n < 2$ , and $x_{n+1} < x_n$ for $x_n > 2$ . So in either case $x_{n+1}$ is closer to $2$ than $x_n$ (either increasing from below or decreasing from above). Why? Let's look at the $x_n > 2$ case. \begin{align} x_n > 2 &\implies x_n - 2 > 0 \quad\text{and}\quad x_n + 1 > 0 \\ &\implies 0 < (x_n - 2)(x_n + 1) = x_n^2 - x_n - 2 \\ &\implies x_n + 2 < x_n^2 \\ &\implies \sqrt{x_n + 2} < x_n, \ \text{i.e.}\ x_{n+1} < x_n. \end{align} The $x_n < 2$ case is similar, noting that for $0 < x_n < 2$ , we're between the two roots of the quadratic, so its value is negative. There's a more systematic way to think about this for recursively defined sequences near a fixed point. Suppose that $x_{n+1} = f(x_n)$ for some function $f$ with fixed point $x = L$ : i.e., $f(L) = L$ . We get convergence if $x_{n+1} > x_n$ for $x_n < L$ , and $x_{n+1} < x_n$ for $x_n > L$ , or in other words if $f(x) > x$ for $x < L$ , and $f(x) < x$ for $x > L$ on some interval around $L$ . In your example, this interval is $(0, \infty)$ . You can see in a cobweb plot that as long as the graph
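A quick numeric illustration of the two-sided convergence, as a Python sketch:

```python
import math

# Iterating x_{n+1} = sqrt(2 + x_n) from several starting points: the
# iterates approach the fixed point 2 whether x_0 < 2 or x_0 > 2.
for x0 in [0.1, 1.0, 2.0, 5.0, 100.0]:
    x = x0
    for _ in range(60):
        x = math.sqrt(2 + x)
    print(x0, x)  # x is 2 to machine precision in every case
```

The convergence is fast because $f'(2) = \frac{1}{2\sqrt{4}} = \frac14$, so each step roughly quarters the distance to the fixed point.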
|
|sequences-and-series|
| 1
|
Statements which feel like they shouldn't be first-order expressible, but are
|
I recently had the opportunity to study some (very basic) model-theory, and that made some theorems in ring theory become immediately more interesting. For example, one characterization of the Jacobson radical of a ring $R$ makes it possible to express the formula " $x \in J(R)$ " as a first-order formula in the language of rings: $$\forall r \exists s (s(1-rx) = 1)$$ as it is known that $x \in J(R) \iff 1 - rx$ is invertible for all $r \in R$ . Similarly, the statement "every $R$ -module is flat", which feels very far from first-order expressible, can be written as the sentence: $$\forall a \exists x (a = axa)$$ as both of these statements are satisfied precisely by the von Neumann regular rings. I am also aware of this question , which talks about a way of expressing the sentence $\dim(R) \leqslant k$ for a commutative ring $R$ . In that spirit, I was hoping for more examples (not necessarily from algebra) of seemingly "complex" statements which are equivalent to some first-order fo
|
Many important classes of finite groups can be described by some family of sentences in the first-order language of group theory; see these slides of John Wilson: finite soluble groups of derived length at most $d$ ; groups of odd order. Finite non-abelian simple groups can be defined by a single sentence [1]. Finite soluble groups can also be defined by a single sentence [2]: a finite group is soluble if and only if it satisfies the following first order sentence, which states that no non-trivial element $g$ is a product of $56$ commutators of pairs of conjugates of $g$ : $$ \forall g\ \forall x_1 \dotsm \forall x_{56}\ \forall y_1 \dotsm \forall y_{56}\ \bigl(g = 1 \vee g \not= [g^{x_1}, g^{y_1}] \dotsm [g^{x_{56}}, g^{y_{56}}]\bigr) $$ [1] U. Felgner, 'Pseudo-endliche Gruppen', Proc. 8th Easter Conf. on Model Theory (Humboldt-Universität, Berlin, 1990) 82–96. [2] J. S. Wilson, Finite axiomatization of finite soluble groups, J. London Math. Soc. (2) 74 (2006), 566–582.
|
|ring-theory|first-order-logic|model-theory|big-list|
| 0
|
How can I solve a Bessel equation with Reduction of order?
|
If $y_{1}(x) = \frac{\sin(x)}{\sqrt{x}}$ is one solution of the differential equation $$x^2y'' +xy' + (x^2-\frac{1}{4})y = 0$$ find the second solution $y_{2}(x)$ . My effort using Wronskian The general form of the homogeneous linear 2nd order differential equation is : $$ y'' + p(x)y' +q(x)y =0$$ The given differential equation is a Bessel equation with $n =\frac{1}{2}$ (so that $n^2 = \frac{1}{4}$ ): $$x^2 y'' +xy' + (x^2 - \frac{1}{4})y = 0$$ Rearranging : \begin{align*} x^2 y'' +xy' + (x^2 -\frac{1}{4})y = 0 \\ y'' +\frac{x}{x^2}y' + \frac{(x^2 -\frac{1}{4})}{x^2}y = 0 \\ y'' +\frac{1}{x}y' + (1 - \frac{1}{4x^2}) y = 0 \\ \end{align*} Therefore $$p(x) = \frac{1}{x}$$ Using Abel's Theorem we have : $$y_{2}(x) = y_{1}(x) \int \left( \frac{ e^{-\int p(x)dx}}{ y_{1}^2(x)} \right) dx$$ Substituting : $$\begin{align*} y_{2}(x) &= \frac{\sin(x)}{\sqrt{x}} \int \left( \frac{ e^{-\int \frac{1}{x}dx}}{
|
Let $y_2=\frac{\sin(x)}{\sqrt x}u$ be a solution. Then $$ y_2'=\frac{2 x \sin (x) u'(x)+u(x) (2 x \cos (x)-\sin (x))}{2 x^{3/2}}$$ and $$ y_2''=\frac{u(x) \left(\left(3-4 x^2\right) \sin (x)-4 x \cos (x)\right)+4 x \left(x \sin (x) u''(x)+u'(x) (2 x \cos (x)-\sin (x))\right)}{4 x^{5/2}}. $$ Now putting them in the equation gives $$ x^{3/2}(2\cos(x)u'+\sin(x)u'')=0 $$ which gives $$ \frac{u''}{u'}=-\frac{2\cos(x)}{\sin(x)}. $$ Integrating both sides gives $$ u'=\frac{c_1}{\sin^2(x)} $$ from which one has $$ u=-c_1\cot(x)+c_2. $$ Choose $c_1=-1,c_2=0$ and then $u=\cot(x)$ . So the second solution is $$ y_2=\frac{\sin(x)}{\sqrt x}\cot(x)=\frac{\cos(x)}{\sqrt{x}}. $$
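As a sanity check (not part of the derivation), a finite-difference Python sketch confirming that the residual of the ODE vanishes for $y_2 = \cos(x)/\sqrt{x}$ :

```python
import math

# Finite-difference check that y2(x) = cos(x)/sqrt(x) satisfies
# x^2 y'' + x y' + (x^2 - 1/4) y = 0 at a few sample points.
def y2(x: float) -> float:
    return math.cos(x) / math.sqrt(x)

h = 1e-5
for x in [0.5, 1.0, 2.0, 5.0]:
    d1 = (y2(x + h) - y2(x - h)) / (2 * h)            # central 1st derivative
    d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / h**2   # central 2nd derivative
    residual = x**2 * d2 + x * d1 + (x**2 - 0.25) * y2(x)
    print(x, residual)  # ~0 up to finite-difference error
```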
|
|ordinary-differential-equations|wronskian|reduction-of-order-ode|
| 1
|
Construct perspective projection of rotating tesseract by perpendicular lines intersecting ellipse
|
The construction was used in two different sources on the web: a Geogebra resource and a video using inRm3D, so I think it must be documented and proved somewhere, but I didn't find anything. Here is the construction (following the second source): On the xy-plane, draw a conic symmetric about the $y$ -axis. (Ignore the small green circle; its only purpose is to rotate $C$ on it.) Take a point $A$ on the $y$ -axis. Draw two perpendicular lines through $A$ , intersecting the conic at $E,F,E_1,F_1$ . Rotate $E,F,E_1,F_1$ by $45+n⋅90$ °, $(n=0,1,2,3)$ around the $x$ -axis to get $16$ points in total: When the perpendicular lines rotate around $A$ , the points $E, F, E_1, F_1$ will move on the ellipse, and he claimed that the 16 points will be a perspective projection of the 16 vertices of the tesseract $\{(\pm1,\pm1,\pm1,\pm1)\}$ rotated about the $xw$ -plane. To verify this, I take $(0,0,0,\text{distance})$ as the center of projection (where $\text{distance}$ is a constant to be determined) and take the dis
|
Also, is there a better way to show that the right angle at the center of the square is preserved by the perspective projection and $xw$ -rotation when $\text{distance}=\sqrt{2}$ ? The center of the square $P_2P_3P_{10}P_{11}$ is $(0,1,-1,0)$ . In the post the "view space" is $w=\text{distance}-1$ . We change this to $w=0$ (will just rescale the projection) then the point $(0,1,-1,0)$ lies on the "view space", so it will be fixed under this perspective projection. To apply this post with the two planes (the original plane $P_2P_3P_{10}P_{11}$ and the image plane $A_2A_3A_{10}A_{11}$ ), we check the conditions are satisfied: First, $(0,1,-1,0)$ is the orthogonal projection of $(0,0,0,\text{distance})$ onto the intersection line of two planes; Second, the perspective center $(0,0,0,\text{distance})$ lies on a bisector plane of the two planes if and only if $\text{distance}=\pm\sqrt{2}$ . so the angle at $(0,1,-1,0)$ will be preserved, by that post. In the figure below, the point $(x,\fra
|
|solution-verification|3d|projective-geometry|projection-matrices|hyperspace|
| 1
|
Minimal polynomial of all elements in a finite field extension of $\mathbb{F}_3$.
|
Let $i \in \overline{\mathbb{F}_3}$ be a zero of $x^2+ 1$ . Let $\mathbb{F} = \mathbb{F}_3(i)$ , and determine the minimal polynomial of each $\alpha\in \mathbb{F}$ over $\mathbb{F}_3$ . I'm trying to solve the question above. I showed that $\mathbb{F}$ is a field extension with 9 elements. Thus $\mathbb{F}\cong \mathbb{F}_9$ . It's clear that we shouldn't bother about the elements contained in $\mathbb{F}_3$ , nor $\pm i$ , since they're trivial for the former and we already know them for the latter. But there are 4 more elements to go. I don't know what they look like, so it's hard for me to imagine how I could even get their respective minimal polynomials. I would really appreciate any help possible.
|
As lulu mentioned, the elements of $\mathbb{F}_{3}(i)$ are precisely those of the form $a + bi$ where $a, b \in \mathbb{F}_{3}$ Consider $\alpha = a + bi \in \mathbb{F}_{3}(i)$ . Now let $x = a + bi$ , where $a, b \in \mathbb{F}_{3}$ . Then $$x - a = bi \iff (x-a)^{2} = -b^{2} \iff x^{2} - 2ax + a^{2} + b^{2} = 0.$$ Hence $h = x^{2} - 2ax + a^{2} + b^{2} \in \mathbb{F}_{3}[x]$ is possibly the desired minimal polynomial. We have that $\alpha$ is a root of $h$ , the other root of $h$ is its conjugate $\overline{\alpha} = a - bi$ . Now either $b = 0$ or $b \neq 0$ . If $b = 0$ : we have that $\alpha = a \in \mathbb{F}_{3}$ , and the linear (thus irreducible) polynomial $X - a \in \mathbb{F}_{3}[x]$ is the minimal polynomial of $\alpha$ over $\mathbb{F}_{3}$ If $b \neq 0$ : we have that $\alpha = a + bi \in \mathbb{F}_{3}(i)$ . Note that $h$ is irreducible over $\mathbb{F}_{3}$ since $i \notin \mathbb{F}_{3}$ . Then $h$ is the minimal polynomial of $\alpha$ over $\mathbb{F}_{3}$
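The case analysis above can be verified by brute force. A minimal Python sketch (the representation and helper names are mine) that models $\mathbb{F}_9 = \mathbb{F}_3(i)$ as pairs $(a,b)$ with $i^2 = -1$ and searches for the monic minimal polynomial of each element:

```python
from itertools import product

# Arithmetic in F_9 = F_3(i), elements stored as pairs (a, b) = a + b*i.
def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def evaluate(coeffs, x):
    # coeffs lie in F_3, constant term first; x lies in F_9
    acc, power = (0, 0), (1, 0)
    for c in coeffs:
        acc = add(acc, mul((c % 3, 0), power))
        power = mul(power, x)
    return acc

def min_poly(x):
    # try monic linear, then monic quadratic polynomials over F_3
    for c0 in range(3):
        if evaluate((c0, 1), x) == (0, 0):
            return (c0, 1)
    for c0, c1 in product(range(3), repeat=2):
        if evaluate((c0, c1, 1), x) == (0, 0):
            return (c0, c1, 1)

for a, b in product(range(3), repeat=2):
    print((a, b), min_poly((a, b)))
```

For example $1+i$ comes out with minimal polynomial $x^2 + x + 2$, which is $x^2 - 2x + 2$ reduced mod $3$, exactly the formula $x^2 - 2ax + a^2 + b^2$ from the answer.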
|
|abstract-algebra|field-theory|finite-fields|
| 0
|
How can I motivate and deal with speculation?
|
I am a student of mathematics and have been increasingly feeling doubtful about whether or not I really understand the theory. What I mean by this is highly attributed to the style of textbooks in presenting information. Authors simply write definitions and make vague connections, but I never get to know why or how theorems and their definitions are developed. How could I have discovered it for myself, roughly speaking?
|
I (and probably most people here) can empathize with you. I believe this is a symptom of how papers are meant to be written - most of the dirty groundwork that motivated everything gets swept under the rug, and can become really mysterious. That said, some books and some survey papers actually attempt to give some motivation for the things they present. It's one of the reasons I love Herstein's "Topics in Algebra", for instance. As for what you, as a student, can do, I would say it's often hard to "rediscover" everything. But, after reading some definition, try to create some of your own examples. Pause and ponder for a bit to see where everything fits to what you've been studying so far. If you struggle with this (as I often do when I'm just learning about something), try to study the history of the topic, to see where the concept inserts in the mathematical framework of its time - what were the people who discovered this thinking about? Sometimes, authors purposely present a very opa
|
|soft-question|self-learning|learning|
| 1
|