title | question_body | answer_body | tags | accepted
string | string | string | string | int64
|---|---|---|---|---|
open half-disk is homeomorphic to upper half-plane
|
Let $A=[0,\infty)\times\mathbb{R}$ (upper half-plane) and $B=[0,a)\times(c,d)$ (open half-disk). Then $[0,\infty)$ is homeomorphic to $[0,a)$ and $\mathbb{R}$ is homeomorphic to $(c,d)$. Why does it follow that $A$ is homeomorphic to $B$?
|
Let $\overline{\mathbb H^2}$ denote the upper half-plane. The map $z\mapsto e^{i\psi }\dfrac{z-z_0}{z-z_0^*}$, where $\psi \in \mathbb R$ and $\operatorname{Im}(z_0)>0$, is a homeomorphism $\overline{\mathbb H^2}\rightarrow D$, where $D$ is the disk. Then, to get to the upper half-disk, compose that map with $(x,y)\mapsto \left(x, y+\dfrac{\sqrt{1-x^2}}{2}\right)$ (note that here I'm passing between $\mathbb C$ and $\mathbb R^2$ via $z=x+iy$). Your $B$ looks like a rectangle with open sides and one closed side along the $y$-axis, with holes at $(0,c)$ and $(0,d)$; you might be able to show homeomorphisms between such rectangles and disks, but I'm not sure if that is what you meant. What I chose to show here is that the upper half-plane is homeomorphic to the half-disk; the rest of your question is somewhat unclear, as you might be asking about products of intervals.
|
|general-topology|
| 0
|
Vakil's FOAG Exercise 1.2.B.
|
From the text: If $\mathcal{A}$ is an object in a category $\mathcal{C}$, show that the invertible elements of Mor($\mathcal{A}$, $\mathcal{A}$) form a group (called the automorphism group of $\mathcal{A}$, denoted Aut($\mathcal{A}$)). Show that two isomorphic objects have isomorphic automorphism groups. (For readers with a topological background: if $X$ is a topological space, then the fundamental groupoid is the category where the objects are points of $X$, and the morphisms $x\to y$ are paths from $x$ to $y$, up to homotopy. Then the automorphism group of $x_0$ is the (pointed) fundamental group $\pi_1(X,x_0)$. In the case where $X$ is connected, and $\pi_1(X)$ is not abelian, this illustrates the fact that for a connected groupoid - whose definition you can guess - the automorphism groups of the objects are all isomorphic, but not canonically isomorphic.) Now, I understand everything up to the final bolded sentence. Here are a couple of questions I have: My guess is that a connected groupoid…
|
You are right, a groupoid is connected if every pair of objects has a morphism between them. The phrase "In the case where $X$ is connected" is definitely misleading, but I wouldn't say it is wrong. In fact, the groupoid $X$ is connected if and only if the topological space $X$ is path-connected. Thus it would be better to write "In the case where the groupoid $X$ is connected" or "In the case where the topological space $X$ is path-connected". You are also right that in a path-connected space $\pi_1(X)$ refers to $\pi_1(X,x_0)$ for any $x_0\in X$, since these pointed fundamental groups are all isomorphic. Strictly speaking the notation $\pi_1(X)$ does not make sense, but one could understand it as the collection of mutually isomorphic groups $\pi_1(X,x_0)$ with $x_0 \in X$. However, for path-connected spaces $X$ it is common usage to say things like "the fundamental group of $X$ is abelian, finite, etc." instead of "$\pi_1(X,x_0)$ is abelian, finite, etc." Actually it is a slight abuse of notation to speak about the fundamental…
|
|algebraic-topology|category-theory|
| 0
|
open half-disk is homeomorphic to upper half-plane
|
Let $A=[0,\infty)\times\mathbb{R}$ (upper half-plane) and $B=[0,a)\times(c,d)$ (open half-disk). Then $[0,\infty)$ is homeomorphic to $[0,a)$ and $\mathbb{R}$ is homeomorphic to $(c,d)$. Why does it follow that $A$ is homeomorphic to $B$?
|
Given a ray $r$ in the upper half-plane $A$ , we can consider the intersection $R=r\cap B$ of the ray with the open half-disk. If $(a,b)\in R$ we can send it to $\frac{1}{1-|(a,b)|}(a,b)$ , where $|(a,b)|=\sqrt{a^2+b^2}$ is the distance from $(a,b)$ to the origin. You can check that this is a homeomorphism that stretches $R$ to $r$ (try drawing a picture to see how this "moves"). But this map works to stretch any ray in $B$ to a ray in $A$ simultaneously (picture the boundary of $B$ being stretched to infinity). Taking the domain of this map $(a,b)\mapsto\frac{1}{1-|(a,b)|}(a,b)$ to be $B$ instead of just a single ray in $B$ , you can check that this is a homeomorphism from $B$ to $A$ .
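The answer's map and its inverse can be sanity-checked numerically. Here `stretch` and `squeeze` are hypothetical names of my own for the map $p\mapsto p/(1-|p|)$ and its inverse $q\mapsto q/(1+|q|)$, taking $B$ to be the open unit half-disk $\{(x,y): x\ge 0,\ x^2+y^2<1\}$:

```python
import math
import random

def stretch(p):
    """The answer's map: send p in the open unit half-disk B to p/(1 - |p|)."""
    x, y = p
    r = math.hypot(x, y)
    return (x / (1 - r), y / (1 - r))

def squeeze(q):
    """Candidate inverse: send q in the closed half-plane A back to q/(1 + |q|)."""
    x, y = q
    r = math.hypot(x, y)
    return (x / (1 + r), y / (1 + r))

random.seed(0)
samples = []
for _ in range(200):
    # random point of B = {(x, y) : x >= 0, x^2 + y^2 < 1}
    rad = 0.999 * math.sqrt(random.random())
    ang = random.uniform(-math.pi / 2, math.pi / 2)
    samples.append((rad * math.cos(ang), rad * math.sin(ang)))
```

Algebraically the round trip is exact: if $q=p/(1-|p|)$ then $|q|=|p|/(1-|p|)$, so $q/(1+|q|)=p$, and the half-plane condition $x\ge 0$ is preserved in both directions.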
|
|general-topology|
| 0
|
Die rolling contest problem
|
I got this problem as part of an interview... Problem Statement: You have two players, Frank and Jane, that take turns rolling a fair k-sided die. Whoever rolls a k first wins the game. The Python program should output the probability that Frank wins the game for k=6 through 99. That is, the output will be an array of probabilities where index 0 is the probability when k = 6; index 1 when k = 7; etc. Note that it doesn't state who goes first or put any limit on the number of rolls. I asked for clarification because it sure seems to me the probability of Frank winning is always 50%, regardless of the value of k. I believe there is a slight advantage if Frank rolls first, but that's not in the problem statement. The person responded insisting that the probability of Frank winning does depend on k. I don't see it. A very similar problem is covered here: Probability of winning a game by rolling the die first, but I think that solution depends on knowing who rolls first, correct?
|
Moderately alternative approach. Given a $~k~$ sided die, let $~P~$ denote the probability that the first player wins, and $~Q~$ denote the probability that the second player wins. This implies that $~P + Q = 1.$ Then you have the equation $$P = \frac{1}{k} + \frac{k-1}{k} \,Q = \frac{1}{k} + \frac{k-1}{k} \,(1-P). \tag1 $$ The equation in (1) above is explained by saying that either the first player wins immediately, or the first player steps into the position of being the second player. That is, if the first player does not win on their very first roll, the probability of their winning is now equivalent to the probability of the second player winning, at the start of the game. Then, you have $$P + P \,\frac{k-1}{k} = \frac{1}{k} + \frac{k-1}{k} = 1 \implies $$ $$P \times \frac{2k - 1}{k} = P \times \left[ ~\frac{k}{k} + \frac{k-1}{k} ~\right] = 1 \implies $$ $$P = \frac{k}{2k-1}.$$
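Given the closed form $P=\frac{k}{2k-1}$ derived above, the Python program the interview asked for is essentially a one-liner (the function name is my own choice; it assumes Frank rolls first):

```python
def first_player_win_prob(k):
    """P(first player wins) when each roll hits k with probability 1/k."""
    return k / (2 * k - 1)

# index 0 -> k = 6, ..., index 93 -> k = 99, as the problem statement asks
probs = [first_player_win_prob(k) for k in range(6, 100)]
```

Note that $P\to 1/2$ from above as $k\to\infty$, matching the intuition that the first-mover advantage shrinks as the die gets harder to win with.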
|
|probability-theory|
| 0
|
BPI in Cohen Model
|
I'm reading Miroslav Repicky's paper "A proof of the independence of the Axiom of Choice from the Boolean Prime Ideal Theorem" and I'm confused by the claim that an ideal $I$ on a Boolean algebra $B$ being prime in $OD^{V[G]}[A,f\frown h]$ implies that it must be a maximal ideal of $B$ in $OD^{V[G]}[A,f\frown h]$ . For context, below is a screenshot of the relevant section of the argument: My concern is that while $I$ selects between $x$ and $\neg x$ for every $x\in B\cap OD^{V[G]}[A,f\frown h]$ , it is possible that there is some ideal $I'\subseteq B\setminus OD^{V[G]}[A,f\frown h]$ so that $I$ and $I'$ generate a larger proper ideal $J$ of $B$ , with $I,I',J\in OD^{V[G]}[A,f\frown h]$ . In other words, if $OD^{V[G]}[A,f\frown h]$ is not transitive, $I$ may be prime here without being maximal. Is there something I'm missing? Any help would be greatly appreciated! Edit: It seems like the argument runs properly if we take $k$ to be the least natural number so that for some $h'\in^{k+1}A
|
The argument is not trying to show $I$ is prime in $OD^{V[G]}[A,f\frown h]$ , it is trying to show that $I$ is prime in $M$ . I think this is what you're confused about. I'll try to repeat relevant parts of their argument. As they described, you can construct $I$ . $I\subseteq B$ , so every element of $I$ is in $M$ , and $I$ has the same support as $B$ , so $I\in M$ . We want to show that $M\models I\text{ is prime}$ ; that is, for every $x\in M$ , either $x\in I$ or $\lnot x\in I$ . It will follow that $M\models I\text{ is maximal}$ . Suppose for contradiction that $I$ is not prime in $M$ . Then there must be some $x\in M$ so that neither $x$ nor $\lnot x$ is in $I$ . Let us choose such an $x$ with minimal support. [the rest of their argument will reach a contradiction from assuming $x$ has minimal support; it's not in the screenshot so I assume you understood it]
|
|logic|set-theory|boolean-algebra|foundations|forcing|
| 0
|
on the representation of a field
|
Consider an irreducible polynomial with coefficients in a field $\Bbb K$ ($f(x)\in\Bbb K[x]$) with $\deg f(x)=n$; in my textbook it is written that: $$ {\Bbb K[x]\over \langle f(x)\rangle} = \{a_0+a_1x+...+a_{n-1}x^{n-1}+\langle f(x)\rangle \mid a_i\in\Bbb K\} $$ Is it true? And if so, how can I prove it? (I apologize if the question seems silly.)
|
Expanding on Paul Garrett's answer, note that $\mathbb{K}[x]$ is generated by the set $\{1,x,x^2,...\}$ as a $\mathbb{K}$-vector space. Via the division algorithm for polynomials over a field, every polynomial $g \in \mathbb{K}[x]$ can be written as $$ g = qf + r $$ with $r\equiv 0$ or $\deg r < \deg f$. Passing to the quotient, it follows that $g+(f) = r+(f)$, and $r$ can be expressed uniquely as a $\mathbb{K}$-linear combination of the set $\{1,x,...,x^{n-1}\}$.
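The division-with-remainder step can be illustrated computationally for $\mathbb K=\mathbb Q$. Here `poly_divmod` is a hypothetical helper of my own (coefficient lists with constant term first, a sketch rather than library code):

```python
from fractions import Fraction

def poly_divmod(g, f):
    """Division with remainder in Q[x]; coefficient lists, constant term first.
    Assumes f has a nonzero leading coefficient.
    Returns (q, r) with g = q*f + r and deg r < deg f."""
    r = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(g) - len(f) + 1, 1)
    while r and r[-1] == 0:
        r.pop()
    while len(r) >= len(f):
        shift = len(r) - len(f)
        coef = r[-1] / Fraction(f[-1])   # kill the leading term of r
        q[shift] += coef
        for i, c in enumerate(f):
            r[i + shift] -= coef * Fraction(c)
        while r and r[-1] == 0:
            r.pop()
    return q, r

# g = x^3 + 2x + 3 divided by f = x^2 + 1 gives q = x and remainder r = x + 3,
# so g + (f) = (x + 3) + (f): a representative of degree < deg f
q, r = poly_divmod([3, 2, 0, 1], [1, 0, 1])
```

Every coset thus has a representative of degree less than $n$, which is exactly the textbook's description of the quotient.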
|
|abstract-algebra|
| 0
|
Can the General Operator mapping of Eikonal Equation be Continuous?
|
I'm trying to make some general statements about the solution operator of the Eikonal equation (I'm not an expert on PDEs or functional analysis), namely \begin{eqnarray}|\nabla u(x)| = f(x)\end{eqnarray} for $f(x) \in C^0(\Omega;\mathbb{R}^d)$ where $\Omega$ is a bounded domain. Since $f(x) \in C^0$, I believe the equation has a unique viscosity solution $u(x) \in C^0$, following the papers [1], [2] below. But now I want to consider the continuity of the solution operator, namely \begin{eqnarray}\mathcal{G}: C(\Omega) \rightarrow C(\Omega)\end{eqnarray} defined as the mapping from $f(x)$ to the solution $u(x)$. From an intuitive standpoint, is it ever possible that this mapping is continuous, given $f_1, f_2 \in C(\Omega)$ such that $\|f_1\|_\infty, \|f_2\|_\infty < U$ for some $U>0$, even if $\nabla u(x)$ is not necessarily defined everywhere? I have tried proving continuity using approaches such as Poincaré on $L_2$ as something…
|
If you are still interested, you can get continuity from the maximum principle (or rather, the comparison principle), which says that $u_1\leq u_2$ whenever $f_1\leq f_2$ (assuming here that $u_1=u_2=0$ on the boundary). To see how to do this, suppose $0 < \theta \leq f_1, f_2 \leq U$. Uniform positivity is required even for uniqueness, and the upper bound is used later. Set $g_1 = \lambda f_1$ where $\lambda>0$ is chosen large enough so that $g_1 \geq f_2$, so $\lambda = \max_\Omega (f_2/f_1)$. Then the solution corresponding to $g_1$ is $\lambda u_1$, and by comparison $\lambda u_1 \geq u_2$. Therefore $$u_2 - u_1 \leq (\lambda - 1)u_1 \leq \left(\max_\Omega \frac{f_2 - f_1}{f_1}\right)u_1 \leq \theta^{-1}\|f_1-f_2\|_{L^\infty(\Omega)}u_1.$$ We can easily bound $u_1 \leq C_\Omega\theta^{-1}$, where $C_\Omega$ is something like the diameter of $\Omega$, so we get $$u_2 - u_1 \leq C_\Omega \theta^{-2}\|f_1-f_2\|_{L^\infty(\Omega)}.$$ The same inequality clearly holds for $u_1-u_2$ by swapping $u_1$ and $u_2$, so $
|
|functional-analysis|partial-differential-equations|continuity|hamilton-jacobi-equation|
| 1
|
What are the conditions to compose limits to infinity?
|
I've recently become a little obsessed with all the ways limits can be composed - and the preconditions for this to take place. I took $A,B_{1},B_{2},C \subseteq \mathbb{R}$ and $f:A\to B_{1},\ g:B_{2} \to C$ and further assumed $f[A] \subseteq B_{2}$ . And then I listed out all the ways to compose $f$ and $g$ with limits: \begin{align} 1.&&\lim_{x\to a}f(x)=b,\lim_{x\to b}g(x)=c&&\implies&&\lim_{x\to a}g(f(x))=c \\\\ 2.&&\lim_{x\to a}f(x)=b,\lim_{x\to b}g(x)=\infty&&\implies&&\lim_{x\to a}g(f(x))=\infty \\\\ 3.&&\lim_{x\to a}f(x)=\infty,\lim_{x\to\infty}g(x)=c&&\implies&&\lim_{x\to a}g(f(x))=c \\\\ 4.&&\lim_{x\to a}f(x)=\infty,\lim_{x\to\infty}g(x)=\infty&&\implies&&\lim_{x\to a}g(f(x))=\infty \\\\ 5.&&\lim_{x\to\infty}f(x)=b,\lim_{x\to b}g(x)=c&&\implies&&\lim_{x\to\infty}g(f(x))=c \\\\ 6.&&\lim_{x\to\infty}f(x)=b,\lim_{x\to b}g(x)=\infty&&\implies&&\lim_{x\to\infty}g(f(x))=\infty \\\\ 7.&&\lim_{x\to\infty}f(x)=\infty,\lim_{x\to\infty}g(x)=c&&\implies&&\lim_{x\to\infty}g(f(x))=c \\\\
|
The general intuition behind all of these compositions of limits is that as the input of $f$ approaches a limit $L_1$ (which I'm using merely as a symbol to represent either the finite or infinite limit), the first limit statement says the output of $f$ approaches a limit $L_2$ , which means the input of $g$ also approaches $L_2$ , and then the second limit statement says the output of $g$ approaches a limit $L_3.$ The reason why you need extra conditions in many of your cases -- and the principle that will tell when the extra conditions are required -- is that there's a small hole in that intuitive argument. (Or one might say that there is not a hole where we need one, but that's using the word "hole" in a completely different way.) The general idea of all of the formulas for limits of $f(x)$ in your examples is that there is a limit point of the input ( $a$ or $\infty$ ) and a limit point of the output ( $b$ or $\infty$ ). We use a parameter $\delta$ to define a family of ever-shrink
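A minimal Python sketch of the "hole" described above: take $f$ constant at $0$ and $g$ with a mismatch at $0$. Both limit hypotheses of case 1 hold (with $a=0$, $b=0$, $c=0$), yet the composite limit fails, because $f$ keeps hitting the point $b$ itself:

```python
def f(x):
    return 0.0                       # constant, so lim_{x->0} f(x) = 0 = b

def g(x):
    return 1.0 if x == 0 else 0.0    # lim_{x->b} g(x) = 0 = c, but g(b) = 1

# along any sequence x_n -> 0, f(x_n) sits exactly at b, so the composition
# is stuck at g(0) = 1 even though case 1 would predict the limit c = 0
vals = [g(f(10.0 ** -n)) for n in range(1, 10)]
```

The usual extra condition (that $f$ eventually avoids $b$, or that $g$ is continuous at $b$) rules out exactly this failure mode.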
|
|real-analysis|limits|proof-writing|
| 1
|
A $2D$ random walk with step size $1$ starts on the edge of a disk of radius $r$. What is the probability that the walk will return to the disk?
|
Consider a two-dimensional random walk with step size $1$ and each step in a random direction, with the angle $\theta$ uniformly distributed in $[0,2\pi)$. The walk starts on the perimeter of a disk of radius $r$. What is the probability that the walk will ever return to the disk, in terms of $r$? By "the walk will ever return to the disk", I mean the walk will have a vertex on or within the perimeter of the disk, besides the initial one. I came up with this question when thinking about the fact that a two-dimensional lattice walk will return to the origin with probability $1$, as proved by Pólya in 1921. I have not been able to find any reference that answers my question. Context:
|
For $2$ D random walks, Lord Rayleigh proved that for large number of steps, $N$ , the PDF of the final position of each walk distributed around some initial point follows the Rayleigh distribution: $$f_R(r)=\frac{2r}{N}\exp\left(-\frac{r^2}{N}\right)$$ Since it is normalized only in the radius coordinate, we’ll have to normalize it for the angle too if we want to work in polar coordinates: $$\int_0^{2\pi}\int_0^\infty \frac{2r}{N}\exp\left(-\frac{r^2}{N}\right) r\text dr\text d\theta=\pi\sqrt{\pi N}$$ Therefore, we are looking at $$f_{R;\Theta}(r;\theta)= \frac{2r}{(\pi N)^{\frac{3}{2}}}\exp\left(-\frac{r^2}{N}\right). $$ Since the initial point is on the perimeter of the disk of radius $\rho$ (you said $r$ , but $r$ and $R$ are already taken), without loss of generality and in view that the disk is polar-symmetric, we can center the coordinate system at, for convenience, $(x,y)=(-\rho,0)$ . Thus, the distribution will be centered there and the region of return will be a disk to the r
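A crude Monte Carlo estimate can complement the distributional argument. This is my own sketch, with an arbitrary cap on the number of steps, so it only lower-bounds the true return probability:

```python
import math
import random

def returns_to_disk(r, max_steps=5_000):
    """One unit-step walk started at (r, 0); True if some later vertex lands
    within distance r of the origin (truncated at max_steps, so this only
    lower-bounds the true return probability)."""
    x, y = r, 0.0
    for _ in range(max_steps):
        t = random.uniform(0.0, 2.0 * math.pi)
        x, y = x + math.cos(t), y + math.sin(t)
        if math.hypot(x, y) <= r:
            return True
    return False

random.seed(1)
trials = 200
est = sum(returns_to_disk(5.0) for _ in range(trials)) / trials
```

Even the first step alone returns with probability close to $1/2$ for large $r$ (the set of angles pointing back into the disk approaches a half-circle), so the estimate should be well above $1/2$.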
|
|probability|reference-request|random-walk|
| 0
|
What is an example of a non-Riemannian manifold?
|
A Riemannian manifold $(M,g)$ is a manifold equipped with a metric tensor $g$ that is both symmetric and positive definite. Now if the metric tensor is symmetric but not positive definite, then it's a pseudo-Riemannian manifold. But what about the case of a manifold $(M,g)$ equipped with a tensor $g$ that is not symmetric (regardless of whether it is positive definite)? That is, for all $x \neq y \in T_{p}(M)$, $g(x,y)\neq g(y,x)$ at any point $p$ in $M$.
|
Let me first try to make sense of the question (in its current revised version; my guess is that this is not the last revision). Recall that a metric tensor by the very definition is required to be symmetric and positive definite. Hence, a "nonsymmetric metric tensor" is an oxymoron. My best guess is that OP really means not a metric tensor, but a covariant 2-tensor $g$ on a manifold $M$ . The nonsymmetry condition, as stated in the revised version, says: For every $p\in M$ and every two distinct vectors $x, y$ in $T_pM$ , we have $g(x,y)\ne g(y,x)$ . Note that if $\lambda\in {\mathbb R}$ is different from $1$ and $x\in T_pM$ is a nonzero vector, then (by the definition of a tensor!) $$ g( \lambda x, x)= \lambda g(x, x)= g(x, \lambda x). $$ Hence, for $y=\lambda x$ , we get $g(x,y)=g(y,x)$ . Thus, there are no tensors satisfying the required inequality. OK, maybe OP really means that $x, y$ are linearly independent in the stated assumption (nonvanishing of $x, y$ is then automatic). Th
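The scaling argument is easy to check numerically: for any bilinear form, even a visibly non-symmetric one, $g(x,\lambda x)=\lambda g(x,x)=g(\lambda x,x)$ automatically, so the "everywhere non-symmetric" condition fails on every pair of proportional vectors. The matrix `B` below is an arbitrary choice:

```python
import random

# a generic non-symmetric bilinear form g on R^2, given by a matrix B
B = [[0.0, 1.0], [-2.0, 3.0]]   # B[0][1] != B[1][0], so g is not symmetric

def g(u, v):
    return sum(B[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

random.seed(0)
x = (random.random(), random.random())
lam = 2.5
y = (lam * x[0], lam * x[1])    # y proportional to x forces g(x, y) = g(y, x)
```

On linearly independent vectors $g$ is genuinely non-symmetric ($g(e_1,e_2)=1$ while $g(e_2,e_1)=-2$), which is why the only salvageable reading of the question restricts to independent pairs.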
|
|differential-geometry|manifolds|riemannian-geometry|
| 1
|
For $(G,X)$ - manifolds can we assume $X$ is simply connected?
|
This is based on my rough understanding, so let me know which part if any is wrong. Suppose $M$ is a $(G,X)$ -manifold, where $X$ is a homogeneous $G$ -space. Pullback a $(G,X)$ structure from $X$ to its universal cover $\widetilde{X}$ . The $(G,X)$ -automorphisms $\mathrm{Aut}(\widetilde{X})$ form a Lie group acting transitively on $\widetilde{X}$ , making it a homogeneous $\mathrm{Aut}(\widetilde{X})$ -space. There is a surjective homomorphism $\phi:\mathrm{Aut}(\widetilde{X})\to G$ pushing down automorphisms whose kernel is equal to the deck transformations of $\widetilde{X}$ . The map $\widetilde{X}\to X$ is equivariant with respect to $\phi$ and the group actions from $\mathrm{Aut}(\widetilde{X})$ and $G$ . Because of this any $(\mathrm{Aut}(\widetilde{X}),\widetilde{X})$ -manifold is automatically a $(G,X)$ -manifold. Finally, each $(G,X)$ structure on $M$ lifts to a unique $(\mathrm{Aut}(\widetilde{X}),\widetilde{X})$ structure on $M$ making the correspondence bijective. If the
|
Here's a partial proof in terms of developing maps. Let's take for granted that $(\mathrm{Aut}(\widetilde{X}),\widetilde{X})$ is a homogeneous space. Let $\mathrm{dev}:\widetilde{M}\to X$ be a developing map. From the universal property, there exists a lift $\widetilde{\mathrm{dev}}:\widetilde{M}\to \widetilde{X}$ of the developing map to the universal cover. It suffices to show the existence of a holonomy representation $\widetilde{h}:\pi_1(M)\to \mathrm{Aut}(\widetilde{X})$. That is, for each deck transformation $\gamma\in \pi_1(M)$ we have to prove the existence of a transformation $\widetilde{h}(\gamma)\in \mathrm{Aut}(\widetilde{X})$ such that $\widetilde{h}(\gamma)\circ \widetilde{\mathrm{dev}}=\widetilde{\mathrm{dev}}\circ \gamma$. Let $\pi:\widetilde{X}\to X$ be the projection, and let $h:\pi_1(M)\to G$ be the holonomy representation for the $(G,X)$ structure. By the universal property, there exists an automorphism $\sigma:\widetilde{X}\to \widetilde{X}$ such that $\pi \circ \sigma =
|
|covering-spaces|homogeneous-spaces|
| 0
|
Question about the expected value of random variable, using expected value properties
|
Suppose that the number of children $N$ of a randomly chosen family satisfies $P(N=n) = \dfrac{3}{5} \left(\dfrac{2}{5}\right)^n$ for $n=1,2,3,...$ Now suppose that a child is equally likely to be a girl or a boy, and let $X$ be the number of daughters in a randomly chosen family. I am trying to find $E[X]$. (By the way, I found $E[N]=\frac{2}{3}$.) My initial thought is to use $E[X|N = n]$: the number of daughters in a family of $n$ children is a binomial random variable with parameters $n$ and $p=\frac{1}{2}$, so $E[X|N=n]=np=\frac{n}{2}$. Now I am stuck here. I am thinking of using $E[E[X|N]]$ but I am not sure if this is appropriate for calculating $E[X]$.
|
That is exactly correct. Because the conditional variable $X \mid N = n$ is binomial with parameters $n$ and $p = 1/2$ , you computed $$\operatorname{E}[X \mid N] = N/2,$$ hence $$\operatorname{E}[X] = \operatorname{E}[\operatorname{E}[X \mid N]] = \operatorname{E}[N/2] = \operatorname{E}[N]/2.$$ What are your reservations regarding this approach?
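This can be checked exactly with rational arithmetic. One caveat I should flag: the stated probabilities only sum to $1$ if the support is $n=0,1,2,\dots$ (starting at $n=1$ they sum to $2/5$), and the OP's value $E[N]=2/3$ matches that reading, so I assume it here:

```python
from fractions import Fraction

# P(N = n) = (3/5)(2/5)^n for n = 0, 1, 2, ...; truncating at n = 200 makes
# the exact-rational tail smaller than 10^-79, negligible for the checks below
terms = [Fraction(3, 5) * Fraction(2, 5) ** n for n in range(200)]
total = sum(terms)
EN = sum(n * t for n, t in enumerate(terms))
EX = EN / 2   # tower rule: E[X] = E[E[X | N]] = E[N / 2] = E[N] / 2
```

So $E[X]=\tfrac12 E[N]=\tfrac13$, exactly as the tower-rule argument predicts.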
|
|probability|statistics|expected-value|
| 0
|
Inequality $\{a,b,c,d\}\subset[0,1]\Rightarrow\sum\limits_{cyc}(a^4+a^2b^2)+8\prod\limits_{cyc}(1-a)\geq1$
|
Let $\{a,b,c,d\}\subset[0,1]$. Prove that: $$a^4+b^4+c^4+d^4+a^2b^2+b^2c^2+c^2d^2+d^2a^2+8(1-a)(1-b)(1-c)(1-d)\geq1$$ I tried convexity, the substitution $a=\frac{x}{x+1}...$ and more, but without success.
|
Here is a human verifiable proof. The desired inequality is written as $$(a^4 + c^4) + (b^4 + d^4) + (a^2 + c^2)(b^2 + d^2) + 8(1 - a)(1 - c) \cdot (1 - b)(1 - d) \ge 1. \tag{1}$$ WLOG, assume that $a \le c$ , $b \le d$ , and $c \le d$ . We can prove that \begin{align*} &(a^4 + c^4) + (b^4 + d^4) + (a^2 + c^2)(b^2 + d^2) + 8(1 - a)(1 - c) \cdot (1 - b)(1 - d)\\ \ge{}& \frac{(a^2 + c^2)^2}{2} + (b^4 + d^4) + (a^2 + c^2)(b^2 + d^2) \\ &\qquad + 8\left(1 - \sqrt{\frac{a^2 + c^2}{2}}\right)^2 (1 - b)(1 - d). \tag{2} \end{align*} (The proof of (2) is given at the end.) Thus, it suffices to prove that \begin{align*} &\frac{(a^2 + c^2)^2}{2} + (b^4 + d^4) + (a^2 + c^2)(b^2 + d^2)\\ &\qquad + 8\left(1 - \sqrt{\frac{a^2 + c^2}{2}}\right)^2 (1 - b)(1 - d) \ge 1. \tag{3} \end{align*} Let $x = \sqrt{\frac{a^2 + c^2}{2}}$ . Then $0 \le x \le 1$ . (3) is written as $$2x^4 + (b^4 + d^4) + 2x^2(b^2 + d^2) + 8(1 - x)^2 (1 - b)(1 - d) \ge 1. \tag{4}$$ Let $p := 2 - b - d, q := (1 - b)(1 - d)$ and $u :=
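A quick numerical spot-check of the inequality (not a proof, just random sampling of the cube $[0,1]^4$, plus the equality case $(1,0,0,0)$; the helper name is my own):

```python
import random

def lhs(a, b, c, d):
    """Left-hand side of the inequality to be proved."""
    return (a**4 + b**4 + c**4 + d**4
            + a*a*b*b + b*b*c*c + c*c*d*d + d*d*a*a
            + 8 * (1 - a) * (1 - b) * (1 - c) * (1 - d))

random.seed(0)
worst = min(lhs(*(random.random() for _ in range(4))) for _ in range(50_000))
```

Equality holds, e.g., at $(a,b,c,d)=(1,0,0,0)$: the quartic sum is $1$, all cyclic cross terms vanish, and the product term is killed by the factor $(1-a)=0$.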
|
|inequality|contest-math|
| 0
|
A teacher gave her students a paper square. The first student cut this square into two shapes, using one straight cut not through
|
A teacher gave her students a paper square. The first student cut this square into two shapes, using one straight cut not through any of the paper’s corners. The second student cut one of the resulting shapes, using one straight cut not through any of that shape’s corners, and so on. After ten students had made their cuts, there were eleven shapes, including seven triangles, two quadrilaterals, and a pentagon. How many sides were in the remaining shape?
|
The remaining shape has ten sides, since the total number of sides goes up by $4$ each time someone cuts a shape in two: two sides are added by splitting the two sides that the cut starts and ends on, and two more sides are added along the cut itself, one on each piece. Ten students cutting the paper therefore adds forty sides; together with the four sides we started with, that gives forty-four sides in total. Adding up the sides of all the other shapes gives $7 \cdot 3 + 2 \cdot 4 + 5 = 21 + 8 + 5 = 34$, so the remaining shape has $44 - 34 = 10$ sides.
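The counting argument amounts to three lines of arithmetic:

```python
cuts = 10
total_sides = 4 + 4 * cuts         # each straight cut adds 4 sides across all pieces
known = 7 * 3 + 2 * 4 + 1 * 5      # seven triangles, two quadrilaterals, a pentagon
remaining = total_sides - known    # sides of the one unaccounted-for shape
```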
|
|geometry|
| 0
|
Why is this differential equation's interval not (-$\infty$, $\infty$)
|
I am currently using A First Course in Differential Equations with Modeling Applications, 10th Edition, by Dennis G. Zill, Section 2.3, Question #9. Find the general solution of the given differential equation. Give the largest interval $I$ over which the general solution is defined. Determine whether there are any transient terms in the general solution. The differential equation given was: $$ x\frac{dy}{dx} - y = x^2 \sin(x) $$ Here is my work: Divide by $x$ to get the standard form: $$ \frac{dy}{dx} - \frac{1}{x} y = x \sin(x) $$ Apply the integrating factor: $$ \mu = \exp\biggl(\int{-\frac{1}{x}}\,dx\biggr) = e^{-\ln(x)} = e^{\ln(x^{-1})} = \frac{1}{x} $$ Multiply both sides by the integrating factor: $$ \frac{dy}{dx}\frac{1}{x} - \frac{1}{x^2}y = \sin(x) $$ Notice the derivative of a product: $$ \frac{d}{dx} \biggl[ \frac{y}{x} \biggr] = \sin(x) $$ Take the integral of both sides: $$ \int \frac{d}{dx} \biggl[ \frac{y}{x} \biggr] \, dx = \int \sin(x) \, dx \quad\implies\quad
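Finishing the integration gives $y/x=-\cos(x)+C$, i.e. $y=Cx-x\cos(x)$. A finite-difference check (my own sketch) confirms this family satisfies $xy'-y=x^2\sin(x)$:

```python
import math

def y(x, C=2.0):
    # finishing the integration: y/x = -cos(x) + C  =>  y = C*x - x*cos(x)
    return C * x - x * math.cos(x)

def residual(x, C=2.0, h=1e-6):
    """x*y' - y - x^2*sin(x), with y' approximated by a central difference;
    should be ~0 wherever y solves the ODE."""
    dy = (y(x + h, C) - y(x - h, C)) / (2 * h)
    return x * dy - y(x, C) - x * x * math.sin(x)
```

The residual vanishes for every $x$ and $C$, which is why the question about the largest interval $I$ hinges on the division by $x$ in the standard form, not on the formula for $y$ itself.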
|
You have the right idea when it comes to solving the equation in closed form, but for finding the largest interval over which the ODE has a solution you need to write it as the IVP $$y'(t)=F(t,y(t)):=t\sin(t)+y(t)/t \\ y(t_0)=y_0\tag{1}$$ Away from $t=0$, $F$ is continuous in its first argument and Lipschitz continuous in its second argument, so there exists an open interval $I\subseteq \Bbb R$ on which the IVP has a well-defined solution. By standard theory of first-order ODEs, the largest such interval is of the form $(t_0-\alpha,t_0+\alpha)$ where $$\alpha=\min(a,b/M)$$ Here, $a$ and $b$ are the dimensions of the largest compact cylinder $C$ on which $F$ is defined, and $M=\sup_C |F|$. We can see that $F(\cdot, y)$ is defined $\forall y$, but $F(0,\cdot)$ is undefined; therefore, we have $$a=t_0~~,~~b=\infty$$ Therefore (assuming $t_0>0$; the case $t_0<0$ is analogous), given our initial condition at $t_0$, we have a well-defined solution on the interval $(0,2t_0)$. However, in general, we can
|
|real-analysis|ordinary-differential-equations|analysis|continuity|
| 0
|
Action of exponential of multiplication operator on $L^2$
|
Let $X: L^2(\mathbb{R}) \rightarrow L^2(\mathbb{R})$ be a multiplication operator, i.e. $f(x) \mapsto xf(x)$ for $f \in L^2$ . Multiplication operators are known to be self-adjoint on some dense subset of $L^2$ , and so by Stone's theorem on one-parameter unitary groups $X$ is the infinitesimal generator of some one-parameter group of operators $U(t)$ such that $U(t) = e^{itX}$ . I am interested in explicitly finding the action of $e^{itX}$ on an $L^2$ function $f$ . In an analogy with the heat semigroup, I think this can be determined by thinking of $e^{itX}$ as the solution to the differential equation $$\partial_t f = ixf$$ but I don't think this would make sense since $f$ is only assumed to be a function of $x$ . How can one find and/or describe how $e^{itX}$ acts on $L^2$ functions? Can anything be said about $e^{tx}$ ?
|
Not sure what the differential equation angle gives you. Maybe someone else can answer that. But the action is very easy to describe. If $X$ is the map $f(x) \mapsto xf(x)$ , then $U(t)$ is the map $f(x) \mapsto e^{itx}f(x)$ . It’s still an action by pointwise multiplication, but now it’s a one-parameter ( $t$ ) family of unitary operators on $L^2$ .
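The unitarity is easy to see numerically: multiplying by $e^{itx}$ leaves the $L^2$ norm unchanged because $|e^{itx}|=1$ pointwise. A crude grid discretization (grid size and test function are my own arbitrary choices):

```python
import math
import cmath

# crude discretization of L^2([-L, L]); the Gaussian is negligible at |x| = L
L, n = 20.0, 4000
dx = 2 * L / n
xs = [-L + dx * k for k in range(n + 1)]

def norm_sq(vals):
    """Discrete approximation of the squared L^2 norm."""
    return sum(abs(v) ** 2 for v in vals) * dx

f = [math.exp(-x * x / 2) for x in xs]           # Gaussian, comfortably in L^2
t = 1.7
Utf = [cmath.exp(1j * t * x) * v for x, v in zip(xs, f)]   # (U(t)f)(x) = e^{itx} f(x)
```

By contrast, $e^{tx}$ for real $t\neq 0$ is an unbounded multiplier, so "$e^{tX}$" does not define a bounded operator on all of $L^2$, consistent with the last remark in the question.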
|
|functional-analysis|operator-theory|semigroup-of-operators|
| 1
|
Proof of expected length in random division of $[0,1]$ interval
|
I have trouble fully understanding the proof, in a formal manner, for the expected length of the $k^{th}$ largest interval when we randomly divide the $[0,1]$ interval using $n$ points. The $k^{th}$ largest interval's expected length is equal to $$\frac{\frac{1}{k} + \frac{1}{k+1} + \dots + \frac{1}{n+1}}{n+1}$$ Proof : Without loss of generality, assume the $[0,1]$ segment is broken into segments of length $s_1 \geq s_2 \geq \dots \geq s_n \geq s_{n+1}$, in that order. We are given that $ s_1 + \dots + s_{n+1} = 1$, and want to find the expected value of each $s_k$. Set $ s_i = x_i + \dots + x_{n+1} $ for each $ i = 1, \dots, n+1 $. Then, we have $ x_1 + 2x_2 + \dots + (n+1)x_{n+1} = 1 $, and want to find the expected value of $ s_k = x_k + \dots + x_{n+1} $. If we set $y_i = ix_i $, then we have $ y_1 + \dots + y_{n+1} = 1 $, so by symmetry $ E[y_i] = \frac{1}{n+1} $ for all $ i $. Thus, $ E[x_i] = \frac{1}{i(n+1)} $ for each $ i $, and now by linearity of expectation $ E[s_k] = \sum_{i=k}^{n+1} \frac{1}{i(n+1)} $, as claimed.
|
This feels a bit like the conjugate partition picture (see for example conjugate partition definition ), and it involves counting in the horizontal and vertical directions. In your situation, you are doing nonnegative real numbers $$ s_1\geq s_2\geq \dots\geq s_{n+1}\geq 0 $$ such that $\sum_{i=1}^{n+1} s_i = 1$ . You can represent these numbers as areas of stripes with width 1 and length $s_i$ . Then the total area of the shape is 1. Now compute the area column first. Then you partition the shape into $n+1$ rectangles, where the $i$ -th one (from the right ) would have width $i$ and length $x_i$ , since $x_i=s_i-s_{i+1}$ . Its area is $$ ix_i=y_i,\quad\text{and }\sum_{i=1}^{n+1} y_i=1. $$ Now it remains to say that to randomly partition an area of 1 into $n+1$ horizontal stripes is to randomly partition it into $n+1$ vertical rectangles. So this helps explain what the $y_i$ are, and we have $E[y_i]=\frac{1}{n+1}$ . Here is a drawing of what I mean, which is drastically not up to scale
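A Monte Carlo check of the closed form, using the ordering $s_1\ge\dots\ge s_{n+1}$ from the proof (so $k=1$ is the largest gap); the function names are my own:

```python
import random

def kth_largest_mean(n, k, trials=20_000):
    """Monte Carlo estimate of E[s_k] for the ordering s_1 >= ... >= s_{n+1},
    where n uniform points cut [0, 1] into n + 1 gaps."""
    total = 0.0
    for _ in range(trials):
        pts = sorted(random.random() for _ in range(n))
        gaps = sorted((b - a for a, b in zip([0.0] + pts, pts + [1.0])),
                      reverse=True)
        total += gaps[k - 1]
    return total / trials

def formula(n, k):
    """Claimed closed form: (1/k + 1/(k+1) + ... + 1/(n+1)) / (n+1)."""
    return sum(1.0 / i for i in range(k, n + 2)) / (n + 1)

random.seed(7)
est = kth_largest_mean(4, 2)
```

For $k=1$ the formula gives $H_{n+1}/(n+1)$, the well-known expected length of the largest gap.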
|
|probability|combinatorics|proof-explanation|expected-value|
| 0
|
Decay property of the Fourier transform
|
Let $f,g:\mathbb{R} \to (0,\infty)$ be positive functions on $\mathbb{R}$ such that $$ \lim_{|x|\to \infty} \frac{g(x)}{f(x)} = 0. $$ Can we conclude anything about the decay of their respective Fourier transforms $\hat f,\hat g$? They are defined by $$ \hat f(y) = \int_{\mathbb{R}} e^{-ixy} f(x)dx, $$ and similarly for $\hat g$. In particular, could we conclude anything such as $$ \lim_{|y| \to \infty} \frac{|\hat g(y)|}{|\hat f(y)|} = 0, $$ provided that the quotient is well-defined? A related question is Decay of Fourier Transform of a Schwartz Function.
|
Hint: Take $g(x)=e^{-|x|}, f(x)=e^{-|x|/2}$. The Fourier transforms of these functions can be written down explicitly [refer to the Cauchy distribution on Wikipedia]. The ratio of the FTs actually tends to a positive constant.
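Using the classical transforms $\int e^{-ixy}e^{-a|x|}\,dx = \frac{2a}{a^2+y^2}$, the hint can be verified in a few lines; the ratio tends to $2$, not $0$:

```python
def ft_two_sided_exp(a, y):
    # classical transform: the FT of e^{-a|x|} is 2a / (a^2 + y^2)
    return 2.0 * a / (a * a + y * y)

# g = e^{-|x|} decays strictly faster than f = e^{-|x|/2} in x, yet the
# ratio of transforms g_hat / f_hat = 2(1/4 + y^2)/(1 + y^2) tends to 2
ratios = [ft_two_sided_exp(1.0, y) / ft_two_sided_exp(0.5, y)
          for y in (10.0, 100.0, 1000.0)]
```

So pointwise domination at infinity in $x$ says essentially nothing about the ratio of transforms: both transforms here decay at the same polynomial rate $y^{-2}$.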
|
|reference-request|fourier-analysis|fourier-transform|
| 1
|
About the direct sum.
|
Today I decided to study some topics in algebra and came across the definition of the direct sum of modules. To put us on the same page, the definition that I'm talking about is: Given a ring $R$ and a family $(M_i)_{i\in I}$ of left $R$-modules, the direct sum of the family $(M_i)$ is defined to be the set of all sequences $(\alpha_i)$ such that $\alpha_i\in M_i$ and $\alpha_i \neq 0$ for only a finite number of indices $i\in I$. I also checked the definition of the direct product, which is basically the same, except for the last property, that $\alpha_i = 0$ for almost every index. My question is exactly about that property. Is there a reason to require it? Is it demanded just to guarantee that the direct sum has a basis? Or is there another reason to justify it? I'm grateful for any help on this subject and I apologize for any errors in my English. Also, this post Why is cofiniteness included in the definition of direct sum of submodules? seems to ask the same question, but it doesn't give the
|
Given a family $(M_i)$ of $R$ -modules, the direct product $\Pi_i M_i$ , together with the family of projection maps $\pi_i \colon \Pi_i M_i \to M_i$ , has a very important universal property . Namely, given any module $N$ and module homomorphisms $\phi_i \colon N \to M_i$ for each $i$ , there is a unique homomorphism $\phi \colon N \to \Pi_i M_i$ such that for all $i$ , $\phi_i = \pi_i \circ \phi$ . Moreover, $\Pi_i M_i$ is unique up to a unique isomorphism. That is a nice property that gets used all the time, so a natural question is whether the picture can be reversed. It turns out that $\bigoplus_i M_i$ , together with the injections $\iota_i \colon M_i \to \bigoplus_i M_i$ , is exactly what is needed. That is, for any module $N$ and homomorphisms $\psi_i \colon M_i \to N$ , there is a unique homomorphism $\psi \colon \bigoplus_i M_i \to N$ such that for all $i$ , $\psi_i = \psi \circ \iota_i$ . Moreover, $\bigoplus_i M_i$ is also unique up to a unique isomorphism. That provides on
|
|abstract-algebra|modules|direct-sum|
| 1
|
Question about John Lee's proof of Fundamental Theorem for Nonautonomous ODEs by using the result for autonomous ODEs
|
The following is from John Lee's Introduction to Smooth Manifolds that gives the proof to the Fundamental theorem for nonautonomous ODEs. The proof is given in Theorem D.6 below, and it uses Theorem D.1, which is the Fundamental theorem for autonomous ODEs and extend it to cover the nonautonomous case. However, I cannot see how this proof actually works. It says in the proof that Theorem D.1 guarantees that there is an interval $J_0 \subset \mathbb{R}$ containing $s_0$ and an open subset $W_0 \subset J \times U$ containing $(s_0,x_0)$ , such that for any $(t_0,c)\in W_0$ there exists a unique solution to (D.20) defined for $t\in J_0$ , and the solution depends smoothly on $(t,t_0,c)$ . However, if you look at the statement of Theorem D.1, the domain of $V$ is an open subset of $\mathbb{R}^n$ , and the open subset $W_0$ is also a subset of $\mathbb{R}^n$ , and so I cannot see how Theorem D.1 applies here. I can see that by the first statement of the proof, we only need to solve for $y^1
|
The idea is to augment the dimension of $V$ . Replace $V = (V^1,\ldots,V^n)$ with $V = (V^0,V^1,\ldots,V^n)$ , where $$ V^0 = V^0(y^0(t),\ldots,y^n(t)) = 1. $$ Also set $c^0 = y^0(t_0) = t_0$ . Then for $i=0,\ldots,n$ we have $$ \dot{y}^i(t) = V^i(y^0(t),\ldots,y^n(t)),~~~ y^i(t_0) = c^i. $$ Now we're in the setting of Theorem D.1, with $U\subset\mathbb{R}^n$ replaced by $J\times U \subset\mathbb{R}^{n+1}$ , $V:J\times U\to\mathbb{R}^{n+1}$ , and $c = (c^0,\ldots,c^n)\in\mathbb{R}^{n+1}$ .
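Numerically, the trick looks like this; below is a Python sketch (my own illustration, not from Lee's book) in which the nonautonomous equation $y'=t$ is integrated through its autonomous augmentation:

```python
# Autonomization sketch: adjoin y0 with y0' = 1, y0(t0) = t0, so y0(t) = t
# and the time dependence is absorbed into the state vector.

def euler_autonomous(V, c, t0, t1, steps=100000):
    """Integrate the autonomous system z' = V(z), z(t0) = c, by Euler's method."""
    h = (t1 - t0) / steps
    z = list(c)
    for _ in range(steps):
        dz = V(z)
        z = [zi + h * dzi for zi, dzi in zip(z, dz)]
    return z

# Nonautonomous example: y' = t, y(0) = 0, with exact solution y(t) = t^2 / 2.
# Augmented autonomous field: (y0, y1)' = (1, y0).
V_aug = lambda z: [1.0, z[0]]

z = euler_autonomous(V_aug, [0.0, 0.0], 0.0, 2.0)
# z[0] tracks t itself; z[1] approximates y(2) = 2.
```

The first component just reproduces $t$, which is exactly what makes the augmented system equivalent to the original nonautonomous one.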
|
|real-analysis|ordinary-differential-equations|analysis|
| 1
|
How can I calculate a Blue-Red Hackenbush position value using the Simplest Number Tree?
|
In this document (pages 5-7), the Simplest Number Tree is used to explain how to assign a value to any arbitrary Blue-Red Hackenbush position. I'm having trouble following its approach. I think I understand the "simplest number that fits" idea, at least in one direction; given a position value $x$ , travel upward from vertex $x$ along the edges of the Simplest Number Tree, and $x =$ {the first encountered number less than (positioned to the left of) $x$ | the first encountered number greater than (positioned to the right of) $x$ }. I'm not sure I understand the other direction, i.e., how to find $x$ given { $-\frac{3}{4} | \frac{7}{8}$ }. And I certainly don't understand how the document is assigning a value to a pictorial Hackenbush position. It claims the process is inductive, but the example it uses is too trivial to demonstrate how it works. Can someone clarify how to use the tree to value an arbitrary Hackenbush position? A better description and perhaps a less trivial example may be
|
For finite numbers and dyadic fractions, if the left and right values straddle an integer take the smallest integer in absolute value, so your example would be $0$ . If no integer fits you want the dyadic fraction with the smallest denominator that fits, so $\{\frac 5{16}|\frac 78\}=\frac 12$ or $\{3\frac 5{16}|3\frac{15}{32}\}=3\frac 38$
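Here is a Python sketch of that rule (function names are my own), using exact rational arithmetic: if an integer fits strictly between the two values, take the one closest to $0$; otherwise take the dyadic fraction with the smallest power-of-two denominator that fits.

```python
from fractions import Fraction
from math import floor, ceil

def simplest_between(L, R):
    """Simplest number x with L < x < R, for dyadic rational endpoints L < R."""
    lo, hi = floor(L) + 1, ceil(R) - 1   # integer candidates strictly inside (L, R)
    if lo <= hi:
        if lo <= 0 <= hi:
            return Fraction(0)           # 0 is the simplest number of all
        return Fraction(lo if lo > 0 else hi)  # integer closest to 0 that fits
    den = 1
    while True:                          # try denominators 2, 4, 8, ...
        den *= 2
        num = floor(Fraction(L) * den) + 1   # smallest num with num/den > L
        if Fraction(num, den) < R:
            return Fraction(num, den)    # first hit is automatically in lowest terms
```

With this, `simplest_between(Fraction(-3, 4), Fraction(7, 8))` returns $0$, and the two worked examples above come out as $\frac12$ and $3\frac38$ respectively.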
|
|algorithms|fractions|trees|combinatorial-game-theory|surreal-numbers|
| 0
|
Describe the set of complex numbers (locus) $z$ such that $z(1-z)$ is a real number.
|
Describe the set of complex numbers (locus) $z$ such that $z(1-z)$ is a real number. My solution goes like this: Let $z=a+ib$ then $z(1-z)=z-z^2=(a+ib)-(a^2-b^2+2abi)=a+ib-a^2+b^2-2abi=r,$ where $r\in \Bbb R.$ Thus, $a-a^2+b^2+i(b-2ab)=r$ and comparing the real and imaginary parts in LHS and RHS of the equation, we get, $a-a^2+b^2=r$ and $b(1-2a)=0.$ Now if $b(1-2a)=0,$ then $b=0$ or $a=\frac 12.$ If $b=0$ then from $a-a^2+b^2=r$ we have, $a^2-a+r=0.$ This is a quadratic equation in $a$ and $a\in \Bbb R$ so, the discriminant must be $\geq 0$ and hence, $1-4r\geq 0.$ Considering $r$ to be a fixed real number in the question and assuming $r$ satisfies $1-4r\geq 0,$ we have at most two values of $a.$ So, if $b=0$ we obtain at most $2$ points on the $x$ axis (or real axis in Argand plane). Now, if $a=\frac 12, $ then from $a-a^2+b^2=r$ and assuming $r$ is so chosen such that $b\in\Bbb R,$ then we have at most $2$ values of $b.$ Thus, we again get at most two points on the line $x=\frac 12.$ So
|
$4z(1-z) \in \ \mathbb{R} \Rightarrow [z+1-z]^2-[z-(1-z)]^2 \in \ \mathbb{R} $ $\Rightarrow (2z-1)^2 \in \ \mathbb{R} $ so that $z \in \ \mathbb{R} $ or $2z-1 = ki, k \in \ \mathbb{R} \Rightarrow z = \dfrac{1}{2}+\dfrac{ki}{2} $ We can see that the real axis and the line $x=\dfrac{1}{2}$ are the loci
|
|solution-verification|complex-numbers|
| 0
|
uniform continuity of $f(x) = \frac{x}{1 +x^2}$ on $\mathbb{R}$
|
$f(x) = \frac{x}{1 +x^2}$ on $\mathbb{R}$ . To comment about its uniform continuity, I tried to work from the definition " A function $f(x)$ is said to be uniformly continuous on a set $S$ , if for given $\epsilon > 0$ , there exists $\delta > 0$ such that for $x$ , $y \in S$ , $|x − y| < \delta$ implies $|f(x) − f(y)| < \epsilon$ " but couldn't get where to start. Then I thought about breaking down the interval into some closed and open intervals: as $f(x)$ is continuous on a closed bounded interval, it would be uniformly continuous there as well. I also observed that the limits as $x$ tends to infinity and minus infinity exist and equal zero, but I don't know how to use that. Any hint would be highly appreciated.
|
It is enough to prove that $f$ is Lipschitz continuous. By simple calculation, $$ \left| {f(x) - f(y)} \right| = \frac{{\left| {xy - 1} \right|}}{{(x^2 + 1)(y^2 + 1)}}\left| {x - y} \right|. $$ Observe that $$ \left| {xy - 1} \right| \le |xy|+1=\sqrt {x^2 y^2 } + 1 \le \frac{{x^2 + y^2 }}{2} + 1 = \frac{{(x^2 + 1) + (y^2 + 1)}}{2}. $$ Whence $$ \frac{{\left| {xy - 1} \right|}}{{(x^2 + 1)(y^2 + 1)}} \le \frac{{(x^2 + 1) + (y^2 + 1)}}{{2(x^2 + 1)(y^2 + 1)}} = \frac{1}{{2(y^2 + 1)}} + \frac{1}{{2(x^2 + 1)}} \le \frac{1}{2} + \frac{1}{2} = 1. $$ Therefore, $|f(x)-f(y)|\le |x-y|$ for all $x,y\in \mathbb R$ .
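A quick numerical spot-check in Python (my own sketch) of the bound just derived, $|f(x)-f(y)|\le|x-y|$:

```python
import random

def f(x):
    return x / (1 + x * x)

# Sample random pairs and confirm the Lipschitz bound |f(x) - f(y)| <= |x - y|
# (the small epsilon only absorbs floating-point rounding).
random.seed(0)
for _ in range(10000):
    x, y = random.uniform(-100, 100), random.uniform(-100, 100)
    assert abs(f(x) - f(y)) <= abs(x - y) + 1e-12
```

Of course this checks nothing rigorously; the algebraic argument above is the proof, and Lipschitz continuity with constant $1$ immediately gives uniform continuity with $\delta = \epsilon$.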
|
|limits|analysis|continuity|uniform-continuity|
| 1
|
uniform continuity of $f(x) = \frac{x}{1 +x^2}$ on $\mathbb{R}$
|
$f(x) = \frac{x}{1 +x^2}$ on $\mathbb{R}$ . To comment about its uniform continuity, I tried to work from the definition " A function $f(x)$ is said to be uniformly continuous on a set $S$ , if for given $\epsilon > 0$ , there exists $\delta > 0$ such that for $x$ , $y \in S$ , $|x − y| < \delta$ implies $|f(x) − f(y)| < \epsilon$ " but couldn't get where to start. Then I thought about breaking down the interval into some closed and open intervals: as $f(x)$ is continuous on a closed bounded interval, it would be uniformly continuous there as well. I also observed that the limits as $x$ tends to infinity and minus infinity exist and equal zero, but I don't know how to use that. Any hint would be highly appreciated.
|
Hint By MVT, $f(x)-f(y)=f^\prime(c)(x-y)$ with $|f^\prime(c)|=\bigg{|}\dfrac{1-c^2}{(1+c^2)^2} \bigg{|}\leq \ \dfrac{1+c^2}{(1+c^2)^2}\ = \dfrac{1}{1+c^2} \leq 1$ ; it follows that $|f(x)-f(y)|\leq |x-y|$ for all $x,y \in \mathbb{R}$ .
|
|limits|analysis|continuity|uniform-continuity|
| 0
|
Can $x\sin(x)$ be algebraic when it is not $0$?
|
It's easy to show (using the Lindemann-Weierstrass theorem ) that, for $x\ne 0$ , at least one of $x$ and $\sin(x)$ must be transcendental. But what about $x\sin(x)$ ? After all, the product of two transcendental numbers could be algebraic. Hence: Can the product $x\sin(x)$ be non-zero and algebraic? Might it even be rational? (The requirement that $x\sin(x)\ne 0$ is meant to rule out the trivial case of $x=k\pi$ .)
|
By request, this is my comment, promoted to an answer: $\limsup_{x\to\infty} x \sin(x)=\infty$ and $\liminf_{x\to\infty} x \sin(x)=-\infty$ , so by continuity $x \sin(x)$ takes every real value, in particular all rational and all algebraic ones.
|
|trigonometry|examples-counterexamples|irrational-numbers|transcendental-numbers|algebraic-numbers|
| 1
|
Why is $((p \land q) \Rightarrow z) \Rightarrow (p \Rightarrow z) \lor (q \Rightarrow z)$ true?
|
I will propose a counterexample to $$((p \land q) \Rightarrow z) \Rightarrow ((p \Rightarrow z) \lor (q \Rightarrow z)).$$ Let's assume that $p$ is " $n$ is divisible by $2$ ", $q$ is " $n$ is divisible by $5$ " and $z$ is " $n$ divisible by $10$ ". Then, while it's true that ( $n$ is divisible by $2$ and $n$ is divisible by $5$ ) is sufficient for ( $n$ is divisible by $10$ ), the disjunction ( $n$ is divisible by $2$ is sufficient for $n$ is divisible by $10$ ) OR ( $n$ is divisible by $5$ is sufficient for $n$ is divisible by $10$ ) is actually false. What is wrong with my above counterexample and understanding? Why is $$((p \land q) \Rightarrow z) \Rightarrow ((p \Rightarrow z) \lor (q \Rightarrow z))$$ true?
|
I will first present a speedy proof of this, and then an explanation as to why this works that I hope is clearer than the others that have been given. Assume that $((p \land q) \implies z) \implies ((p \implies z) \lor (q \implies z))$ is false. Then the antecedent $(p \land q) \implies z$ is true while the consequent $(p \implies z) \lor (q \implies z)$ is false, meaning $$(p \land q \implies z) \land \neg ((p \implies z) \lor (q \implies z))\\ (\neg(p \land q) \lor z) \land ((p \land \neg z) \land (q \land \neg z))$$ This is forced to be a contradiction. The second conjunct makes $p$ and $q$ true and $z$ false; but then the first conjunct $\neg(p \land q) \lor z$ is false, since $p \land q$ is true and $z$ is false. Therefore, the statement cannot be false and is a tautology. $\square$ Now, let's talk about what the sentence means and why your example doesn't make sense. I think the issue here is your propositions don't exactly
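For a statement over three variables, a brute-force truth table also settles it; here is a quick Python sketch (my own addition) checking all eight valuations:

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b is false only when a is true and b is false
    return (not a) or b

# ((p and q) -> z) -> ((p -> z) or (q -> z)) over every truth assignment
tautology = all(
    implies(implies(p and q, z), implies(p, z) or implies(q, z))
    for p, q, z in product([False, True], repeat=3)
)
```

`tautology` comes out `True`, confirming the formula holds under every assignment, which is the propositional-logic sense of "true" at issue here.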
|
|logic|propositional-calculus|boolean-algebra|
| 0
|
Proof of expected length in random division of $[0,1]$ interval
|
I have trouble fully understanding the proof in a formal manner for the expected length of the $k^{th}$ smallest interval when we randomly divide the $[0,1]$ interval using $n$ points. The $k^{th}$ smallest interval's expected length is equal to $$\frac{\frac{1}{k} + \frac{1}{k+1} + \dots + \frac{1}{n+1}}{n+1}$$ Proof : Without loss of generality, assume the $[0,1]$ segment is broken into segments of length $s_1 \geq s_2 \geq \dots \geq s_n \geq s_{n+1}$ , in that order. We are given that $ s_1 + \dots + s_{n+1} = 1$ , and want to find the expected value of each $s_k$ . Set $ s_i = x_i + \dots + x_{n+1} $ for each $ i = 1, \dots, n+1 $ . Then, we have $ x_1 + 2x_2 + \dots + (n+1)x_{n+1} = 1 $ , and want to find the expected value of $ s_k = x_k + \dots + x_{n+1} $ . If we set $y_i = ix_i $ , then we have $ y_1 + \dots + y_{n+1} = 1 $ , so by symmetry $ E[y_i] = \frac{1}{n+1} $ for all $ i $ . Thus, $ E[x_i] = \frac{1}{i(n+1)} $ for each $ i $ , and now by linearity of expectation $ E[s
|
Carrying the above idea's geometric intuition (for the case $n=3$ ): The different colours represent different $x_i$ , and the $s_i$ are separated by darker lines. Call this figure 1. Now, consider a rearrangement (figure 2): Note that the $ix_i$ 's are separated by three (or $n$ for the general case) dark lines. Once you insert these 3 dark lines, you can easily insert the lighter lines (in exactly one way), i.e, the $ix_i$ s are uniquely determined by the $n$ dark lines. Since these three lines can be randomly put anywhere inside the square, and are independent, the expected value of $ix_i$ is $1/4 = 1/(n+1)$ . More directly, there is a clear bijection between the representation of any configuration $(s_1;s_2;s_3;s_4)$ (or $(s_1;s_2;\ldots;s_{n+1})$ in the general case) between figure 1 and figure 2. So instead of going from constructing figure 1, and converting to figure 2, we do the reverse: In a square, randomly draw $3$ (or $n$ ) vertical lines. Call the $4$ regions so formed $y_
|
|probability|combinatorics|proof-explanation|expected-value|
| 0
|
If $v$ is a bounded linear functional on $C^{\infty}_0$, then $v \in H^1$.
|
In a proof I am reading, we are given a function $v \in L^2(\omega)$ , and we must show that $v \in H^1(\omega).$ They argue that "since $|\int_{\omega} v \partial_i \psi dx |\leq C||\psi||_{L^2(\omega)}$ ", we get that $v \in H^1(\omega)$ where $\psi \in C^{\infty}_0(\omega).$ I tried to make sense of this through the definition of weak derivatives, but could not progress.
|
Since $|\int_\omega v \partial_i \psi \, dx| \leq C \|\psi\|_{L^2(\omega)}$ for all $\psi \in C_0^\infty(\omega)$ and as $C_0^\infty(\omega)$ is dense in $L^2(\omega)$ , by Riesz representation theorem there exists $w_i \in L^2(\omega)$ s.t. $\int_\omega v \partial_i \psi \, dx = \int_\omega w_i \psi \, dx$ , i.e., $-w_i$ is the $i$ -th weak partial derivative of $v$ . So $v$ is a function in $L^2$ admitting all first-order weak partial derivatives which are all in $L^2$ , i.e., $v \in H^1(\omega)$ .
|
|functional-analysis|measure-theory|lebesgue-measure|sobolev-spaces|
| 0
|
Self-intersection of exceptional divisor of blowing-up along a singular point
|
Let $X$ be an $n$ -dimensional projective variety with a simple singularity $p \in X$ which can be resolved by a blow-up $\pi\colon \tilde X \to X$ along $p$ . The examples that I am considering are when $p$ is a node ( $\widehat{\mathscr{O}}_{X,p} \cong \mathbb C[\![x_1,...,x_{n+1}]\!]/\langle x_1^2 + \cdots + x_{n+1}^2\rangle$ ) or a cusp ( $\widehat{\mathscr{O}}_{X,p} \cong \mathbb C[\![x_1,...,x_{n+1}]\!]/\langle x_1^2 + \cdots + x_{n}^2 + x_{n+1}^3\rangle$ ). In these cases, the exceptional divisor $E$ is either a smooth quadric in $\mathbb{P}^n$ or a cone over a smooth quadric in $\mathbb{P}^{n-1}$ . I would like to compute the top self-intersection $E^n$ in $\tilde X$ . For $n = 2$ , the resolution is well-understood: when $p$ is a node, $E \cong \mathbb{P}^{1}$ is a $(-2)$ -curve; when $p$ is a cusp, $E$ is a union of two $(-2)$ -curves intersecting at one point. For both cases we have $E^2 = -2$ . Now I am stuck at $n = 3$ . It might be helpful to compute the normal bundle $\m
|
The blowup $\tilde{X}$ of $p$ on $X$ is the strict transform of $X$ in the blowup of the ambient smooth variety (say $Y$ ) of dimension $n + 1$ at $p$ and the exceptional divisor $E_X \subset \tilde{X}$ is the preimage of the exceptional divisor $E_Y \subset \tilde{Y}$ , hence $$ \mathcal{N}_{E_X/\tilde{X}} \cong \mathcal{N}_{E_Y/\tilde{Y}}\vert_{E_X}. $$ Therefore, the normal bundle is $\mathcal{O}(-1)$ both in the case of a node and in the case of a cusp. The formula for $E_X^n$ works regardless of the smoothness of $E_X$ , so the self-intersection is $$ E_X^n = (-1)^{n-1}\cdot 2 $$ in both cases.
|
|algebraic-geometry|blowup|birational-geometry|
| 0
|
Use the MVT for Integrals to bound $\int_0^1\frac{x^6}{\sqrt{1+x^2}}dx$
|
I have an exercise to use the mean value theorem for integrals to show that $$\frac{1}{7\sqrt 2}\le\int_0^1\frac{x^6}{\sqrt{1+x^2}}\ dx\le\frac{1}7$$ I've determined that the integrand is increasing on the given interval and therefore minimized at $x=0$ and maximized at $x=1$ . By the integral MVT there exists some point $c\in [0,1]$ such that $$\int_0^1\frac{x^6}{\sqrt{1+x^2}}\ dx = f(c)(1-0) = f(c)$$ Since $0\le f(c)\le 1/\sqrt 2$ I get a bound but not as tight as the one requested. But at this point I'm not sure where a factor of 7 comes from. I could guess that I should start actually integrating some stuff but then that seems like I'm not using the integral MVT as instructed. If I tried doing just a little, I could set $u=1+x^2$ so that the integral becomes $$\int_1^{2}\frac{x^6}{u^{1/2}} \left(\frac{du}{2x^2}\right) = \frac 1 2 \int_1^2 \frac{x^4\ du}{u^{1/2}} = \frac 1 2 \int_1^2\frac{(u-1)^2\ du}{u^{1/2}}$$ But like ... now I'm just computing the integral.
|
By MVT, we have some $c\in (0,1)$ such that $$ \int_0^1 \frac{x^n}{\sqrt{1+x^2}} d x =\frac{1}{\sqrt{1+c^2}} \int_0^1 x^n d x =\frac{1}{(n+1) \sqrt{1+c^2}} $$ Meanwhile, $$ \begin{aligned} & \frac{1}{\sqrt{2}} \leqslant \frac{1}{\sqrt{1+c^2}} \leqslant \frac{1}{1}=1 \\ \Rightarrow & \frac{1}{(n+1)\sqrt{2}} \leqslant \frac{1}{(n+1) \sqrt{1+c^2}} \leqslant \frac{1}{n+1} \end{aligned} $$ Hence, in general, $$ \boxed{\frac{1}{(n+1) \sqrt{2}} \leqslant \int_0^1 \frac{x^n}{\sqrt{1+x^2}} d x \leqslant \frac{1}{n+1}} $$
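A numerical sanity check of the boxed bounds for the original case $n=6$ (a midpoint-rule sketch of my own):

```python
import math

def integral_xn(n, steps=100000):
    """Midpoint-rule estimate of the integral of x^n / sqrt(1 + x^2) over [0, 1]."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += x ** n / math.sqrt(1 + x * x)
    return h * total

val = integral_xn(6)
# The claimed bounds for n = 6: 1/(7*sqrt(2)) <= val <= 1/7.
```

The computed value sits strictly inside $(\tfrac{1}{7\sqrt2}, \tfrac17) \approx (0.101, 0.143)$, as the MVT argument predicts.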
|
|integration|inequality|definite-integrals|mean-value-theorem|
| 0
|
Can someone explain the Borel-Cantelli Lemma?
|
I’m looking for an informal and intuitive explanation of the Borel-Cantelli Lemma. The symbolic version can be found here . What is confusing me is what ‘probability of the limit superior equals $ 0 $’ means. Thanks!
|
Application of First Borel-Cantelli Lemma: let's consider a scenario where we flip a fair six-sided die infinitely many times. Define the event $A_n$ as the event that the first $n$ flips all show a 6. Since the die is fair and the flips are independent, the probability of each event is $P(A_n) = \left(\frac{1}{6}\right)^n$ . Now, let's consider the sum of the probabilities of the events $A_n$ . It is a convergent geometric series: $\sum_{n=1}^\infty P(A_n) = \sum_{n=1}^\infty \left(\frac{1}{6}\right)^n = \frac{1}{5} < \infty$ . According to the first Borel-Cantelli lemma, $P(\limsup_n A_n) = 0$ . Here $\limsup_n A_n$ is the event that infinitely many of the $A_n$ occur, so "the probability of the limit superior equals $0$ " means: with probability $1$ , only finitely many of the events $A_n$ happen. In this example that is intuitive: as soon as a single flip fails to show a 6, no later $A_n$ can occur.
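A simulation sketch in Python (my own illustration) of a summable-events setup with the same die: take $B_n$ to be the event that the first $n$ flips all show a 6, so $\sum_n P(B_n) = \sum_n 6^{-n} = \frac15$ is finite and, as the first lemma predicts, every run sees only finitely many of the $B_n$:

```python
import random

def run_length_of_sixes(flips):
    """Largest n such that flips[0..n-1] are all 6 (0 if the first flip isn't)."""
    n = 0
    while n < len(flips) and flips[n] == 6:
        n += 1
    return n

rng = random.Random(1)
# In each run, B_n occurs exactly for n = 1, ..., run length: always finitely many.
lengths = [run_length_of_sixes([rng.randint(1, 6) for _ in range(50)])
           for _ in range(10000)]
frac_any = sum(1 for n in lengths if n > 0) / len(lengths)
# frac_any should be near P(B_1) = 1/6, and no run sees more than a handful of events.
```

The point of the simulation is only intuition: "limsup has probability $0$" shows up as every sampled outcome triggering at most a few of the events and then never again.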
|
|probability-theory|measure-theory|intuition|limsup-and-liminf|borel-cantelli-lemmas|
| 0
|
Reduce the base $11$ fraction $\dfrac{587}{749}$ to its lowest terms.
|
Reduce the base $11$ fraction $\dfrac{587}{749}$ to its lowest terms. $(\dfrac{587}{749})_{11}=\dfrac{5\times 11^2 + 8\times 11 + 7}{7\times 11^2 + 4\times 11 + 9}$ But $\dfrac{...+7}{...+9}$ can't be simplified any further, so I'm not sure how else to approach this problem.
|
Reducing a fraction to lowest terms means dividing the numerator and denominator by their greatest common factor. There are a couple ways to do this. First, you could use the Euclidean algorithm, which could get an answer without being able to factor either number. The alternative is, of course, to factor numerator and denominator. We're actually pretty lucky in this case that it's very easy to check if a number in base 11 is divisible by 10. For example, in the numerator, we have $$5(11^2)+8(11)+7\equiv5(1^2)+8(1)+7\equiv 20\equiv 0\pmod{10}$$ Similar to the divisibility test for 9 in base 10 numbers, all you need to do is sum the digits and see if the sum is divisible by 10 (or 5 or 2). In this case, both numerator and denominator sum to 20 and are therefore divisible by 10. Dividing by ten, we get $\left(\frac{587}{749}\right) _{11}=\left(\frac{64}{82}\right) _{11}$ . Again, both numerator and denominator sum to a multiple of ten, so we can divide both again to get $\left(\frac{64}{
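The whole computation can be scripted; here is a Python sketch (helper names are my own):

```python
from math import gcd

def from_base11(s):
    return int(s, 11)          # int() parses digit strings in bases 2..36

def to_base11(n):
    digits = "0123456789a"     # using 'a' for the digit ten
    out = ""
    while n:
        n, d = divmod(n, 11)
        out = digits[d] + out
    return out or "0"

num, den = from_base11("587"), from_base11("749")   # 700 and 900 in decimal
g = gcd(num, den)                                   # 100, i.e. ten squared
reduced = (to_base11(num // g), to_base11(den // g))
# reduced == ("7", "9"): the fraction in lowest terms is 7/9 in base 11.
```

This matches the digit-sum route above: dividing out ten twice takes $\left(\frac{587}{749}\right)_{11}$ to $\left(\frac{64}{82}\right)_{11}$ and then to $\left(\frac{7}{9}\right)_{11}$.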
|
|number-systems|
| 0
|
Is there a Hamiltonian cycle of $m$ x $n$ rectangular lattice points (these are the vertices) in $\mathbb{R}^2$ such that no two edges are parallel?
|
Let $m,n\geq 2$ and consider the rectangular lattice of $mn$ vertices in $\mathbb{R}^2,\ (i,j);\ i\in \{1,2,\ldots,m\},\ j\in \{1,2,\ldots,n\}.\ $ Call these vertices $X_1, X_2, \ldots, X_{mn}.$ Is there a Hamiltonian cycle of these $mn$ points i.e. vertices such that no two lines (i.e. edges) are parallel? [Edges are allowed to intersect each other and edges are allowed to pass through vertices.] I have tried to find an example for small values of $m$ and $n$ but to no avail. Maybe the answer is no, at least in part due to pigeonhole principle and some induction argument. Alternatively, maybe there are examples for large enough $m,n.$
|
As discussed in the comments, the length of the $n$ -th Farey sequence , that is, the number of irreducible fractions between $0$ and $1$ with denominators up to $n$ , is asymptotic to $\frac3{\pi^2}n^2$ . Since the differences in an $n\times n$ lattice go up to $n-1$ , their possible ratios are given by the $(n-1)$ -th Farey sequence and its additive and multiplicative inverses. Thus, asymptotically a square $n\times n$ lattice has $\frac{12}{\pi^2}n^2\approx1.2n^2$ different directions, enough for each of the $n^2$ edges to use a different one. Here’s Java code that performs an exhaustive search for all solutions. Here’s the number of solutions for all lattices with at most $27$ vertices: \begin{array}{c|ccc} m\setminus n&3&4&5&6&7&8&9\\\hline 3&❌&16&❌&1800&❌&365128&❌\\ 4&&1880&56120&34669920\\ 5&&&❌ \end{array} There are no solutions for any of the $2\times n$ lattices (even though they have just enough directions). An ❌ indicates that the lattice has fewer directions than vertices,
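The direction count from the first paragraph is easy to reproduce; here is a Python sketch (my own, separate from the linked Java search):

```python
from math import gcd

def num_directions(m, n):
    """Number of distinct edge directions in an m x n lattice: primitive
    vectors (dx, dy) with |dx| <= m-1, |dy| <= n-1, identified up to sign."""
    dirs = set()
    for dx in range(-(m - 1), m):
        for dy in range(-(n - 1), n):
            if (dx, dy) == (0, 0):
                continue
            g = gcd(abs(dx), abs(dy))
            v = (dx // g, dy // g)
            dirs.add(max(v, (-v[0], -v[1])))   # v and -v are the same direction
    return len(dirs)

# A Hamiltonian cycle uses m*n edges, so num_directions(m, n) >= m*n is necessary.
# E.g. the 3x3 lattice offers only 8 directions for its 9 edges, hence no solution.
```

For square lattices the ratio `num_directions(n, n) / n**2` approaches $12/\pi^2 \approx 1.22$, the asymptotic count quoted above.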
|
|graph-theory|induction|examples-counterexamples|pigeonhole-principle|hamiltonian-path|
| 1
|
Eigenvalues of a weighted mean where the weights are positive definite matrices
|
Suppose I have some positive definite matrices $A_1, A_2, \dots A_k \in \mathbb{R}^{n \times n}$ and some values $s_1 \leq s_2 \leq \dots \leq s_k \in \mathbb{R}$ . Consider the matrix $A = (\sum_{i=1}^k A_i s_i)(\sum_{i=1}^k A_i)^{-1}$ . For $n=1$ (the scalar case), $A$ becomes a weighted mean of the $s_i$ 's, and thus necessarily satisfies $s_1 \leq A \leq s_k$ . For higher $n$ , can we say that the eigenvalues of $A$ must lie between $s_1$ and $s_k$ ? I'm particularly interested if this can be shown (or disproven) in the special case where each $A_i$ is of the form $A_i = x_i x_i^\top$ .
|
This is true. We first observe the following: for two matrices $A$ and $B$ , the spectra of $AB$ and of $BA$ coincide. Indeed, $\sigma(AB)\setminus\{0\} = \sigma(BA)\setminus\{0\}$ is true in all unital algebras. For matrices, $AB$ is invertible iff $A$ and $B$ are both invertible iff $BA$ is invertible, so we also have $\sigma(AB)$ contains $0$ iff $\sigma(BA)$ contains $0$ . Applying to the current case, we have the spectrum of $A = (\sum_{i=1}^k A_i s_i)(\sum_{i=1}^k A_i)^{-1} = (\sum_{i=1}^k A_i s_i)(\sum_{i=1}^k A_i)^{-1/2}(\sum_{i=1}^k A_i)^{-1/2}$ coincides with the spectrum of $A’ = (\sum_{i=1}^k A_i)^{-1/2}(\sum_{i=1}^k A_i s_i)(\sum_{i=1}^k A_i)^{-1/2}$ . Observe that $A’$ is self-adjoint. Furthermore, because $A_i$ are positive and $s_1 \leq s_i \leq s_k$ for all $i$ , we have, $$\begin{split} (\sum_{i=1}^k A_i)^{-1/2}(\sum_{i=1}^k A_i s_i)(\sum_{i=1}^k A_i)^{-1/2} &\leq (\sum_{i=1}^k A_i)^{-1/2}(\sum_{i=1}^k A_i s_k)(\sum_{i=1}^k A_i)^{-1/2}\\ &= s_k (\sum_{i=1}^k A_i)^{-1/
|
|linear-algebra|matrices|eigenvalues-eigenvectors|positive-definite|
| 1
|
Is a Number Divisible by 40
|
One of the "shortcuts" for determining if a number is divisible by 8 is to see if the last three digits are divisible by 8 . One of the "shortcuts" for determining if a number is divisible by 5 is to see whether the last digit is a 5 or a 0 . If I have a number of arbitrary length, is it acceptable to say that the number is divisible by 40 if the number passes both shortcuts?
|
Yes. Any number whose last $3$ digits form a multiple of $40$ is a multiple of $40$ . It also works if the $100$ s digit is even and the last $2$ digits are $00$ , $40$ or $80$ , or if the $100$ s digit is odd and the last $2$ digits are $20$ or $60$ .
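A brute-force check in Python (my own illustration) that the combined shortcut agrees with divisibility by $40$:

```python
# n is divisible by 40 exactly when its last three digits are divisible by 8
# (since 1000 = 8 * 125) and its last digit is 0 or 5 (divisibility by 5).
def passes_shortcuts(n):
    s = str(n)
    return int(s[-3:]) % 8 == 0 and s[-1] in "05"

ok = all(passes_shortcuts(n) == (n % 40 == 0) for n in range(200000))
```

The agreement over a range is no proof, but the reason it works is in the comment: $40 = 8 \times 5$ with $8$ and $5$ coprime, and each factor has its own last-digits test.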
|
|divisibility|
| 0
|
Solutions of a special matrix equation
|
Let $R_1$ , $R_2 \in \mathbb R^{N \times M}$ with $M > N$ be matrices of rank $N-1$ . When can we find a symmetric matrix $P\in \mathbb R^{M\times M}$ such that the matrix equation $$ R_1 P = R_2, $$ is satisfied? When does a solution exist? I was thinking that an approach could be rewriting the equation by vectorizing $P$ , but since it is a symmetric matrix, I don't know how to rewrite $R_1$ and $R_2$ accordingly. I implemented such a system of equations in Matlab and it could not find a solution when $P$ is symmetric (even if there are more variables than equations), but it found a solution for a nonsymmetric matrix $P$ .
|
Let $USV^T$ be a singular value decomposition of $R_1$ . The equation can then be rewritten as $SP_1=R$ , where $P_1=V^TPV$ is symmetric and $R=U^TR_2V$ . Now partition $S$ and $R$ as $$ \pmatrix{\Sigma&0_{(N-1)\times(M-N+1)}\\ 0_{1\times(N-1)}&0_{1\times(M-N+1)}} \quad\text{and}\quad \pmatrix{X&Y\\ z^T&w^T} $$ respectively. It becomes clear that the equation $SP_1=R$ is solvable for a symmetric $P_1$ if and only if $\Sigma^{-1}X$ is symmetric and $z^T,w^T$ are zero vectors. When this necessary and sufficient condition is satisfied, the general solution is given by $P=VP_1V^T$ where $$ P_1=\pmatrix{\Sigma^{-1}X&\Sigma^{-1}Y\\ Y^T\Sigma^{-1}&Z} $$ and $Z$ is any $(M-N+1)\times(M-N+1)$ symmetric matrix. (In particular, you may take $Z=0$ ). In terms of $R_1$ and $R_2$ , this means the equation is solvable if and only if $(I_N-R_1R_1^+)R_2=0$ and $R_1^+R_2R_1^+R_1$ is symmetric. When this necessary and sufficient condition is satisfied, the general solution is given by $$ P=R_1^+R_2+(I_M-R_1^+R_1)(R_1^+R_2)^
|
|linear-algebra|systems-of-equations|matrix-equations|
| 1
|
How to evaluate the limit $\int_{0}^{\frac{\pi}{2}}Re^{-R\sin\theta}d\theta \quad (\text{as } R \rightarrow \infty)$
|
While doing a mathematical exercise (Stein, Complex Analysis , Chapter 2, Exercise 3), I managed to reduce the problem to the following one: $$\int_{0}^{\omega}Re^{-R\cos\theta}d\theta \rightarrow 0 \quad (\text{as } R \rightarrow \infty)$$ where $0\le \omega < \frac{\pi}{2}$ . I can prove this without much difficulty: $$\int_{0}^{\omega}Re^{-R\cos\theta}d\theta \le \int_{0}^{\omega}Re^{-R\cos\omega}d\theta =\omega Re^{-R\cos\omega} \rightarrow 0 \quad (\text{as } R \rightarrow \infty)$$ It is crucial that $\omega $ is strictly less than $\frac{\pi}{2}$ . This led me to raise another interesting problem: what will the limit be if we replace $\omega$ by $\frac{\pi}{2}$ ? After changing $\cos\theta$ to $\sin\theta$ (this doesn't matter), my question is $$\int_{0}^{\frac{\pi}{2}}Re^{-R\sin\theta}d\theta \rightarrow ? \quad (\text{as } R \rightarrow \infty)$$ I have no idea how to calculate this; I don't even know if the limit exists.
|
Let $F(R)=\int_0^{\pi/2}e^{-R\cos\theta}\,\mathrm d\theta$ . Note that \begin{align*} F(R) &=\int_0^{\pi/2}\sin\theta e^{-R\cos\theta}\,\mathrm d\theta+G(R)\\ &=\frac{1-e^{-R}}{R}+G(R), \end{align*} where \begin{align*} 0\leq G(R) &=\int_0^{\pi/2}(1-\sin\theta)e^{-R\cos\theta}\,\mathrm d\theta\\ &=\int_0^{\pi/2}(1-\cos\theta)e^{-R\sin\theta}\,\mathrm d\theta\\ &=2\int_0^{\pi/2}\sin^2(\theta/2)e^{-R\sin\theta}\,\mathrm d\theta\\ &\leq\frac{1}{2}\int_0^{\pi/2}\theta^2e^{-2R\theta/\pi}\,\mathrm d\theta\\ &\leq\frac{1}{2}\int_0^{\infty}\theta^2e^{-2R\theta/\pi}\,\mathrm d\theta\\ &=\frac{\pi^3}{8R^3} \end{align*} This proves that $$F(R)=\frac{1}{R}+O\left(\frac{1}{R^3}\right),\quad R\to\infty.$$ So $$\lim_{R\to\infty} \int^{\pi/2}_0 Re^{-R \sin\theta}\,\mathrm d\theta=1.$$
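The asymptotics can be confirmed numerically; a midpoint-rule sketch in Python (my own):

```python
import math

def R_times_integral(R, steps=200000):
    """Midpoint-rule estimate of R times the integral of exp(-R sin t)
    over [0, pi/2]."""
    h = (math.pi / 2) / steps
    s = sum(math.exp(-R * math.sin((k + 0.5) * h)) for k in range(steps))
    return R * h * s

# Consistent with F(R) = 1/R + O(1/R^3): the values tend to 1,
# and the deviation shrinks roughly like 1/R^2.
```

For instance `R_times_integral(50)` and `R_times_integral(200)` both land very close to $1$, with the larger $R$ closer, matching the error term above.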
|
|integration|limits|definite-integrals|
| 0
|
Imaginary components in Discrete Fourier Transform of Gaussian?
|
I am trying to understand Discrete Fourier Transform (DFT), after only having experience with the continuous transformation. The natural idea was to try to understand DFT on the simplest function, being an exponential, which should transform into an exponential too. To check this in practice, I've drafted up a Python script to perform DFT on a 6D Gaussian. Surprisingly, the result seems to have both real and imaginary parts. Based on arguments from the continuous Fourier transform, I would only expect an imaginary part if the function is not even - which the Gaussian should be. The grid is centered symmetrically around zero, so there is no issue with numerical asymmetry either. Should imaginary numbers appear in the DFT of a six-D Gaussian, and, if yes, what is the significance of this observation? I believe my Python script is correct, but just in case, here is the implementation: import numpy as np from scipy.fft import fftn def sixdgauss(coords): return np.exp(-np.pi * (coords[0] **
|
You should use simpler examples when you’re looking into such things. The same thing already happens in one dimension. The transform is real if the input is symmetric about the origin. The origin is at $0$ , not in the middle of the interval. The transform doesn’t know that you used the coordinate value $0$ to generate the function value in the middle of the interval.
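Here is a one-dimensional Python illustration of this point (my own sketch): with an odd number of grid points centered on $0$, the naive FFT of a sampled Gaussian has a genuinely complex output, while moving the origin to index $0$ with `ifftshift` before transforming makes it numerically real.

```python
import numpy as np

# 1-D analogue of the issue: a Gaussian sampled on a grid centered at 0.
n = 65
x = np.linspace(-4, 4, n)          # x[32] == 0, the middle of the interval
f = np.exp(-np.pi * x**2)

# Naive FFT: the DFT treats index 0 as the origin, so the symmetric-looking
# input is actually shifted, which introduces a phase factor and a
# substantial imaginary part.
F_naive = np.fft.fft(f)

# Move the sample at x = 0 to index 0 first; the transform is then
# real up to floating-point roundoff.
F_shifted = np.fft.fft(np.fft.ifftshift(f))
```

Note that `ifftshift` (not `fftshift`) is the correct pre-transform shift; for an odd number of points the two differ by one sample.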
|
|numerical-methods|fourier-analysis|python|fast-fourier-transform|
| 1
|
How would you explain a tensor to a computer scientist?
|
How would you explain a tensor to a computer scientist? My friend, who studies computer science, recently asked me what a tensor was. I study physics, and I tried my best to explain what a tensor is, saying something along the lines of "a mathematical object that is described in between the mappings of vector spaces", but he wasn't satisfied with that definition. I understood why, since it is a pretty wordy definition, so I decided to give a more down-to-earth definition, describing a tensor as some array with $n$ dimensions. However, he was still kind of confused by this. Can anyone synthesise a decent definition, tailored to a computer scientist's understanding?
|
(Answer from a software engineer who has followed a video series on tensors and who would share his personal experience). I think it is a mistake to assume that computer scientists only think in terms of concrete data structures like arrays. Although a tensor can be represented by a multi-dimensional array, we know that isn't what one is. Computer languages have plenty of methods for representing abstract behaviours (e.g., interfaces and abstract classes) and many pieces of software contain complex abstractions and behaviours that are comparable to what you might see in mathematics. The difference is that software tends to be far more verbose and descriptive. Calling a function ƒ in mathematics is normal. Calling a function f in your program (even ASCII f) will get you a comment in the code review, plus a request to specify the parameters, return value and types and a lengthy structured comment that can generate API documentation. (And before you downvote - I don't mean to suggest that
|
|linear-algebra|vector-analysis|tensors|
| 0
|
Mitchell's embedding theorem in Rotman's "An Introduction to Homological Algebra"
|
Theorem 5.99 in Rotman's "An Introduction to Homological Algebra" is If $\mathcal{A}$ is a small abelian category, then there is a covariant full faithful exact functor $F: \mathcal{A} \to Ab$ . Is this correct? I thought one gets only a covariant full faithful exact functor $F: \mathcal{A} \to \text{R-Mod}$ for some (maybe non-commutative) ring $R$ not necessarily $R = \mathbb{Z}$ ?
|
Rotman's errata for An Introduction to Homological Algebra can still be found on the Internet Archive Wayback Machine , and they do indeed include a correction to this theorem: "Page 316 line 15 should read $F: \mathcal{A} \to _R\!\!\mathbf{\text{Mod}}$ for some ring $R$ ." It's not true that every small abelian category has a full exact embedding into the category of abelian groups. If it were, then every object of a small abelian category would have an endomorphism ring isomorphic to that of an abelian group. And so every right Noetherian ring $R$ would be the endomorphism ring of an abelian group, since $R\cong\text{End}(R_R)$ , where $R_R$ is the regular right module, which is an object of the small abelian category of finitely generated right $R$ -modules. But there is no abelian group with endomorphism ring $\mathbb{Q}\times\mathbb{Q}$ , since such a group would have the structure of a vector space over $\mathbb{Q}$ , and no vector space over $\mathbb{Q}$ has endomorphism ring $\
|
|abstract-algebra|category-theory|homological-algebra|
| 1
|
Prove that there are infinitely many distinct natural numbers $a,b$ for which $\sqrt{a+b}, \sqrt{a-b}$ are simultaneously rational
|
the question Prove that there are infinitely many distinct natural numbers $a,b$ for which $\sqrt{a+b}, \sqrt{a-b}$ are simultaneously rational. the idea A radical is rational only if the number below it is a square number. This means that for both of them to be simultaneously rational we get $$a+b=x^2, a-b=y^2$$ From here I tried to take these two to a more general form that would lead us to infinitely many possibilities, but didn't get anything useful. I hope one of you can help me! Thank you!
|
There are infinitely many odd squares $x^2$ and $y^2$ with $x>y>0$ . Let $a=\dfrac{x^2+y^2}2$ and $b=\dfrac{x^2-y^2}2$ .
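A quick Python check of this construction (helper names are my own):

```python
from math import isqrt

def pair_from_odd_squares(x, y):
    """Given odd x > y > 0, return (a, b) with a + b = x^2 and a - b = y^2."""
    assert x % 2 == 1 and y % 2 == 1 and x > y > 0
    return (x * x + y * y) // 2, (x * x - y * y) // 2

def is_square(n):
    return isqrt(n) ** 2 == n

pairs = [pair_from_odd_squares(x, y) for x, y in [(3, 1), (5, 1), (5, 3), (7, 5)]]
# e.g. (3, 1) gives a = 5, b = 4: sqrt(5 + 4) = 3 and sqrt(5 - 4) = 1.
```

Oddness of $x$ and $y$ makes $x^2 \pm y^2$ even, so $a$ and $b$ are genuinely natural numbers, and distinct choices of $(x, y)$ give distinct pairs.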
|
|rational-numbers|square-numbers|radical-equations|
| 1
|
On the remainder term in selberg's proof of prime number theorem
|
Here is an explanation of Selberg's proof of the prime number theorem. Here by using Mertens' theorem they show that $$\sum_{k≤n}\frac{R(k)}{k^2}=O(1)$$ and then proceed to bound $\left|\frac{R(y)}{y}\right|$ ; the explanation is shown in the picture. My question: $(1)$ Why does $$\sum_{x≤n≤x'}\frac{1}{n}\inf_{x≤y≤x'}\left|\frac{R(y)}{y}\right|≤\left|\sum_{x≤n≤x'}\frac{R(n)}{n^{2}}\right|$$
|
There seems to be a mistake there (no judgment on whether it's a major or minor mistake in context). For every $n\in[x,x']$ , we certainly have $$ \inf_{x\le y\le x'} \biggl| \frac{R(y)}y \biggr| \le \biggl| \frac{R(n)}n \biggr| $$ by the definition of infimum. Applying this to each term yields $$ \sum_{x\le n\le x'} \frac 1n \inf_{x\le y\le x'} \biggl| \frac{R(y)}y \biggr| \le \sum_{x\le n\le x'} \biggl| \frac{R(n)}{n^2} \biggr|. $$ However, this does not immediately imply $$ \sum_{x\le n\le x'} \frac 1n \inf_{x\le y\le x'} \biggl| \frac{R(y)}y \biggr| \le \biggl| \sum_{x\le n\le x'} \frac{R(n)}{n^2} \biggr| $$ as claimed. Certainly for abstract functions $R(t)$ , it's possible for the right-hand side to equal $0$ , while the left-hand side is always positive when $R(t)$ is not identically $0$ .
|
|number-theory|prime-numbers|analytic-number-theory|
| 0
|
Can an irrational number raised to an irrational power be rational?
|
Can an irrational number raised to an irrational power be rational? If it can be rational, how can one prove it?
|
$e^{\ln(2)}$ , when $e$ is the base of the natural logarithm. Hermite proved that $e$ is transcendental. https://web.math.utk.edu/~freire/m400su06/transcendence%20of%20e.pdf $e$ is a real number. If it were rational, it would be the solution of some $ax+b=0$ for integers $a$ and $b$ , which is impossible per Hermite. So $e$ is irrational. To prove $\ln(2)$ is irrational, we assume coprime integers $p$ and $q$ such that $\frac{p}{q}=\ln(2)$ . As $e^{\ln(2)}=2$ , we can substitute $\frac{p}{q}=\ln(2)$ to get $$e^{\frac pq}=2 $$ Raise both sides to the $q$ th power. $$e^p=2^q$$ Now consider $$x^p-2^q=0$$ This is a polynomial equation in $x$ with integer coefficients and degree $p$ , and $e$ would be a root, contradicting the transcendence of $e$ per Hermite above. So $\ln(2)$ is irrational.
|
|irrational-numbers|
| 0
|
Find the derivative of a piecewise constant function
|
I want to differentiate the given function: \begin{equation} f(x)= \begin{cases}1 \ \ \ \ \ x\in[0,1] \\ 0 \ \ \ \ \ \ x\not\in[0,1] \end{cases} \end{equation} However, the simple $f'(x)=0$ seems a little bit too simple, since this is similar to the Dirac delta function, just stretched over the interval $[0,1]$ instead of at $x=0$ . So I tried to represent the function as a generalized function, $$f(x)=\int_0^1\delta(x)\phi(x)\text{d}x,$$ where $\phi(x)$ is a locally summable function, infinitely differentiable, $\in C^\infty$ . and use the differentiation rule $$(D^\alpha f,\phi)=(-1)^{|\alpha|}(f,D^{\alpha}\phi),$$ with $\alpha=1$ : $$(D^1 f,\phi)=(-1)^{|1|}(\delta(x),D^{1}\phi)$$ $$(D^1 f,\phi)=-1(\delta(x),\phi'(x))$$ However, I am not sure how to continue. Any hints? Thanks
|
Notice that just from the definition of the derivative, $f'(x) = 0$ for all $x\neq 0, 1$ . In particular, for any $φ\in C^\infty_c(\mathbb{R})$ , $$0 = \int φ(x)f'(x) dx = -\int φ'(x)f(x) dx = -\int_0^1 φ'(x) dx.$$ Choosing any test function $φ$ such that the last integral is nonzero (there are plenty of such functions) gives a contradiction. Hence $f$ is not (weakly) differentiable.
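A concrete such test function is a smooth bump supported in $(\frac12,\frac32)$, for which $\int_0^1 φ'(x)\,dx = φ(1)-φ(0) = e^{-1} \neq 0$ by the fundamental theorem of calculus. A small sketch (the parametrisation is my own choice):

```python
import math

def bump(x, c=1.0, w=0.5):
    # smooth function, compactly supported on (c - w, c + w);
    # a standard C-infinity bump centred at c
    t = (x - c) / w
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0
```

With the default parameters, bump(0) = 0 while bump(1) = e^{-1}, so the boundary term is nonzero, as required for the contradiction.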
|
|calculus|derivatives|
| 0
|
Does $\int_0^\infty\frac{\ln(1+x)}{x(1+x^n)}dx$ have a general form?
|
Does $$I_n=\int_0^\infty\frac{\ln(1+x)}{x(1+x^n)}dx$$ have a general form? I tried to evaluate some small $n$ s. For $n=1$ , $I_1$ is obviously $\frac16\pi^2$ . For $n=2$ , see here . $I_2=\frac5{48}\pi^2$ . For $n=3$ , I put it in Mathematica and get $$\small{\frac{1}{108} \left(9 \left(4 \left(\text{Li}_2\left(\frac{\sqrt[6]{-1}}{\sqrt{3}}\right)+\text{Li}_2\left(-\frac{(-1)^{5/6}}{\sqrt{3}}\right)\right)+\log ^2(3)\right)+5 \pi ^2\right)}$$ Use the result $$\Re\operatorname{Li}_2\left(\frac{1+ti}2\right)=\frac1{12}\pi^2-\frac12\arctan^2t-\frac18\ln^2\frac{1+t^2}4,$$ I'm able to show $I_3=\frac5{54}\pi^2$ . For $n=4$ , I numerically found $I_4=\frac{17}{192}\pi^2$ . I'm not able to find the general form with $n\in \mathbb{Z}^+$ .
|
We first split the integral into two. $$ I=\int_0^{\infty} \frac{\ln (1+x)}{x\left(1+x^n\right)} d x = \int_0^1 \frac{\ln (1+x)}{x\left(1+x^n\right)} d x+\int_1^{\infty} \frac{\ln (1+x)}{x\left(1+x^n\right)} d x $$ For the second integral, let $x\mapsto \dfrac{1}{x}$ , then $$ \int_1^{\infty} \frac{\ln (1+x)}{x\left(1+x^n\right)} d x =\int_0^1 \frac{x^{n-1}[\ln (1+x)-\ln x]}{x^n+1}dx $$ Plugging back yields $$ \begin{aligned} I & =\int_0^1\left[\frac{\ln (1+x)}{x\left(1+x^n\right)}+\frac{x^{n-1} \ln (1+x)}{x^n+1}\right] d x-\int_0^1 \frac{x^{n-1} \ln x}{x^n+1} d x \\ & =\int_0^1 \frac{\ln (1+x)}{x} d x-\frac{1}{n} \int_0^1 \ln x \, d \ln \left(x^n+1\right) \\ & =\int_0^1 \frac{\ln (1+x)}{x} d x+\frac{1}{n} \int_0^1 \frac{\ln \left(x^n+1\right)}{x} d x \end{aligned} $$ For any natural number $n$ , expanding the logarithm gives $$ \begin{aligned} \int_0^1 \frac{\ln \left(x^n+1\right)}{x} d x & =\int_0^1 \frac{1}{x} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k} x^{n k}\, d x =\sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k} \int_0^1 x^{n k-1} d x \\ & =\frac{1}{n} \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k^2}=\frac{\pi^2}{12 n} \end{aligned} $$ Together with $\int_0^1 \frac{\ln (1+x)}{x} d x=\frac{\pi^2}{12}$ , this yields $$ I_n=\frac{\pi^2}{12}\left(1+\frac{1}{n^2}\right), $$ which recovers $I_1=\frac{\pi^2}{6}$ , $I_2=\frac{5\pi^2}{48}$ , $I_3=\frac{5\pi^2}{54}$ and $I_4=\frac{17\pi^2}{192}$ .
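The closed form $I_n=\frac{\pi^2}{12}\left(1+\frac1{n^2}\right)$ can be checked numerically; a quick sketch (assuming SciPy is available; the function names are mine):

```python
from math import inf, log, pi
from scipy.integrate import quad

def I(n):
    # numerical value of the integral over (0, infinity)
    val, _ = quad(lambda x: log(1.0 + x) / (x * (1.0 + x ** n)), 0.0, inf, limit=200)
    return val

def closed_form(n):
    # the derived general form: pi^2/12 * (1 + 1/n^2)
    return (pi ** 2 / 12.0) * (1.0 + 1.0 / n ** 2)
```

For $n=2,3,4$ the two values agree to within the quadrature tolerance, matching the OP's data points.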
|
|calculus|integration|definite-integrals|
| 0
|
How to integrate $\int_{0}^{1} \int_{0}^{1} \ln\left(\frac{1}{\sinh^2(x) + \cosh^2(y)}\right) \,dx\,dy$
|
How to integrate $$\int_{0}^{1} \int_{0}^{1} \ln\left(\frac{1}{\sinh^2(x) + \cosh^2(y)}\right) \,dx\,dy$$ My attempt $$\int_{0}^{1} \int_{0}^{1} \ln\left(\frac{1}{\sinh^2(x) + \cosh^2(y)}\right) \,dx\,dy = - \int_{0}^{1} \int_{0}^{1} \ln(\sinh^2(x) + \cosh^2(y)) \,dx\,dy$$ $$= - \int_{0}^{1} \int_{0}^{1} \ln\left(\frac{e^{2x} + e^{-2x} + e^{2y} + e^{-2y}}{4}\right) \,dx\,dy$$ $$ =- \int_{0}^{1} \int_{0}^{1} \ln\left(e^{2x} + \frac{1}{e^{2y}} + e^{2y} + \frac{1}{e^{2x}}\right) \,dx\,dy + 2\ln(2)$$ $$= - \int_{0}^{1} \int_{0}^{1} \ln\left(\frac{e^{2x+2y} + 1}{e^{2y}} + \frac{e^{2x+2y} + 1}{e^{2x}}\right) \,dx\,dy + 2\ln(2)$$ $$=- \int_{0}^{1} \int_{0}^{1} \ln(e^{2(x+y)} + 1) \,dx\,dy +\int_{0}^{1} \int_{0}^{1} \ln(e^{2(x+y)}) \,dx\,dy - \int_{0}^{1} \int_{0}^{1} \ln(e^{2x} + e^{2y}) \,dx\,dy + 2\ln(2)$$ $$= - \int_{0}^{1} \int_{0}^{1} \ln(e^{2x} + e^{2y}) \,dx\,dy - \int_{0}^{1} \int_{0}^{1} \ln\left(1 + e^{2(x+y)}\right) \,dx\,dy + 2\ln(2) + 2$$ I need help with the last two integrals .
|
$$ \begin{split} J &= \int \frac{\ln(y(x^2+1)+x(y^2+1))}{xy}\, dx\\ & = \frac{1}{y} \int \frac{\ln(y(x^2+1)+x(y^2+1))}{x}\, dx \\ & = \frac{1}{y} \int \frac{\ln(x+y)+\ln(x+\frac{1}{y})+\ln(y)}{x}\, dx \\ & = \frac{1}{y} (K + L + \ln(x)\ln(y)) \end{split} $$ $$ \begin{split} K & = \int \frac{\ln(x+y)}{x}\, dx \\ & = \int \frac{\ln(\frac{x}{y}+1)}{x}\, dx + \ln(y)\int \frac{1}{x}\, dx \\ & \qquad \textrm{Substitution} \quad \boxed{\begin{aligned} u&=-\frac{x}{y},\\ du&=-\frac{1}{y}dx \end{aligned}} \\ & = -\int - \frac{\ln(1-u)}{u}\, du + \ln(y)\int\frac{1}{x}\, dx \\ & = -\textrm{Li}_2(-\frac{x}{y}) +\ln(x)\ln(y) \end{split} $$ $$ \begin{split} L & = \int \frac{\ln(x+\frac{1}{y})}{x}\, dx \\ & = \int \frac{\ln(xy+1)}{x}\, dx - \ln(y)\int \frac{1}{x}\, dx \\ & \qquad \textrm{Substitution} \quad \boxed{\begin{aligned} u&=-xy,\\ du&=-ydx \end{aligned}} \\ & = -\textrm{Li}_2(-xy)-\ln(x)\ln(y) \end{split} $$ $$ J = \frac{-\textrm{Li}_2(-xy)-\textrm{Li}_2\left(-\frac{x}{y}\right)+\ln(y)\ln|x|}{y}+C $$
|
|calculus|integration|multivariable-calculus|definite-integrals|closed-form|
| 1
|
Solving a pretty difficult convergence for series
|
Let $P_n(x)=x^n-nx+1$ be a sequence of polynomials, where $P_n\colon[1, +\infty) \to \mathbb{R}$ and $n \ge 2$ a) Show that for each $n$ , $P_n(x)=0$ has exactly one solution, and for each $n$ let $x_n$ be that unique number such that $P_n(x_n)=0$ . b)Show that $\lim_{n \to \infty}x_n=1$ and study the convergence of the series $\sum_{n \ge 2} (x_n-1)^{\alpha}$ , where $\alpha \in \mathbb{R}$ My approach: For once $\frac{dP_n(x)}{dx}=n(x^{n-1}-1) \ge 0, \forall x \ge 1$ , so $P_n$ is increasing. $P_n(1)=2-n \le 0$ and $\lim_{x \to +\infty}P_n(x)=\infty$ , so by the intermediate value property there is at least one point $x_n$ , where $P_n(x_n)=0$ and it is unique by monotonicity. Now for the limit. Take $b \in (0, 1]$ . Calculate $$P_n\left(1+\frac{1}{n^b}\right)=\left(1+\frac{1}{n^b}\right)^n-n\left(1+\frac{1}{n^b}\right)+1=2-n+\sum_{k=2}^n {n\choose k}\frac{1}{n^{kb}} \ge 2-n + \sum_{k=2}^m{n\choose k}\frac{1}{n^{kb}}$$ where $m \le n$ . The left side is a polynomial in $n$ . The high
|
Too long for a comment. Consider that you look for the largest zero of the function $$f(x)=x^n -n x +1$$ It is negative for $x=1$ but very stiff. So, it looks to me that it is better to look at the zero of the function $$g(x)=n\log(x)-\log(nx-1)$$ Expanded as a series $$g(x)=-\log (n-1)+\sum_{k=1}^\infty (-1)^k \,\frac 1 k \left(\left(\frac{n}{n-1}\right)^k-n\right)\,(x-1)^k $$ Using power series reversion $$\large\color{red}{x=1+t\Bigg(1+ \sum_{k=1}^\infty \frac {P_k(n)}{(k+1)!}\,u^k\Bigg)}$$ where $$\color{blue}{t=\frac{(n-1) }{(n-2) n}\,\log (n-1)}\qquad \text{and}\qquad \color{blue}{u=\frac{t}{(n-2) (n-1)}}$$ The coefficients of the first polynomials are (they are given from the constant term to the highest power $n^{2k}$ ) $$\left( \begin{array}{cc} k & P_k(n) \\ 1 & \{1,-3,1\} \\ 2 & \{-1,-4,11,-6,1\} \\ 3 & \{-1,5,16,-43,30,-9,1\} \\ 4 & \{13,-22,6,-76,173,-140,58,-12,1\} \\ 5 & \{-47,89,-125,154,197,-645,621,-326,95,-15,1\} \\ 6 & \{-73,480,-795,842,-648,-876,2657,-2814,1728,-624,141,-
|
|real-analysis|sequences-and-series|analysis|problem-solving|limsup-and-liminf|
| 1
|
Calculations involving the Ricci tensor on $S^2$.
|
I am trying to do some calculations with involving the Ricci tensor, but I am off by a factor 2. Here's my problem: The manifold is $S^2$ with the standard metric $g=e^{2\phi}\delta_{ij}$ , with $\phi=\ln2-\ln(1+x^2+y^2)$ . Using formulas from this wikipedia page I have $$\Gamma^1_{11}=-\Gamma^1_{22}=\Gamma^2_{12}=\phi_{,1}$$ and $$\Gamma^1_{12}=-\Gamma^2_{11}=\Gamma^2_{22}=\phi_{,2}.$$ If I understand correctly, the Ricci tensor is $$R_{ij}=g_{ij}$$ since the radius of my sphere is 1. I want to calculate $\nabla_m\nabla_iY^m$ . Using the Ricci tensor, I would expect $$ \nabla_m\nabla_iY^m=\nabla_i\nabla_mY^m+R_{im}Y^m =(d(div(Y)))_i+(Y^\flat)_i. $$ I want to test this formula with a simple vector field $Y^m=x\delta^m_2-y\delta^m_1$ (with, as usual $x=x^1$ and $y=x^2$ ). I have $$ \nabla_iY^j=\frac{x^2+y^2}{1+x^2+y^2}(\delta^2_i\delta^j_1-\delta^1_i\delta^j_2)=A(x,y) (\delta^2_i\delta^j_1-\delta^1_i\delta^j_2). $$ It follows that $$div(Y)=\nabla_iY^i=0.$$ Also $$ \nabla_m\nabla_iY^j=A_
|
Too long for another comment. Your metric can be written as \begin{align} g_{11}&=g_{22}=\frac{4}{(1+x^2+y^2)^2}\,, &g_{12}&=g_{21}=0\,. \end{align} I verified with a symbolic calculator that in this case the Ricci tensor $R_{ij}$ is indeed equal to $g_{ij}$ and the scalar curvature is two, like we get on the unit sphere from the standard metric that I mentioned in a comment. So the question arises: What were your coordinates and where does this exercise come from? Another remark on your calculation of Christoffel symbols. Do you know the difference between $\Gamma_{ijk}$ and $\Gamma^i_{jk}\,?$ In calculations you start with the former, and in your case \begin{align} \Gamma_{111} &= -\frac{8x}{(1+x^2+y^2)^3}\\ \end{align} while $\Gamma^1_{11}=g^{11}\Gamma_{111}$ happens to be equal to $\frac \partial{\partial x}\phi$ in your special case. You expect an identity for the covariant directional derivative $\nabla_m\nabla_iY^m\,.$ Proof. By the Ricci identity in your link, $$ \nabla_k\nabla_iY^m-\nabla_i\nabla_kY^m=-R_{ki\ell}{}^mY^
|
|differential-geometry|curvature|
| 0
|
Find a Mobius map sending the intersection of two discs to a wedge with a specific constraint on a third point
|
Find a Mobius transformation sending the region $D$ between $|z-1|=1$ and $|z|=1$ to ${0 such that 1 is mapped to $i$ . My idea: The two circles intersect at $x+iy= \frac{1}{2} \pm \frac{\sqrt{3}}{2}i$ , while the two rays of the wedge intersect at $0$ and $\infty$ . I therefore thought that a reasonable way to approach this is to find a Mobius map with the following properties: $\frac{1}{2} - \frac{\sqrt{3}}{2}i \to 0$ , $1\to i$ , $\frac{1}{2} + \frac{\sqrt{3}}{2}i\to \infty$ and then find the map using the cross-ratio preservation: $$ \frac{z-1}{\left( \frac{1}{2} - \frac{\sqrt{3}}{2}i \right)-1} \frac{\left( \frac{1}{2} - \frac{\sqrt{3}}{2}i \right)-\left( \frac{1}{2} + \frac{\sqrt{3}}{2}i \right)}{z-\left( \frac{1}{2} - \frac{\sqrt{3}}{2}i \right)-\left( \frac{1}{2} + \frac{\sqrt{3}}{2}i \right)} = \frac{M(z)-i}{0-i}, $$ where $M$ is the Mobius transformation I am looking for. My question is: What is the image of the unit disc $|z|=1$ under this transformation? From a known theore
|
Let $C_1$ be the circle $|z|=1$ , $C_2$ the circle $|z-1|=1$ , and denote their intersections with $a = 1/2 + i \sqrt 3/2$ and $\bar a = 1/2 - i \sqrt 3/2$ . Let us ignore the condition $M(1) = i$ for a moment, and map $D$ to a sector domain. $a$ and $\bar a$ must be mapped to $0$ and $\infty$ (in some order), so a good start is $$ T(z) = \frac{z-a}{z-\bar a} \, . $$ Then $C_1$ is mapped to a line $L_1$ through the origin and $$ T(1) = \frac{1-a}{1-\bar a} = -a = -\frac 12 - i\frac{\sqrt 3}{2}\, , $$ and $C_2$ is mapped to a line $L_2$ through the origin and $$ T(0) = \frac{a}{\bar a} = - \bar a = -\frac 12 + i\frac{\sqrt 3}{2}\, . $$ Möbius transformations are continuous and bijective functions (a.k.a. “homeomorphisms” or “topological isomorphisms”) from the extended complex plane onto itself. Therefore the domain $D$ is mapped to one of the four sectors delimited by the lines $L_1$ and $L_2$ . Since $T(1/2) = -1$ , $D$ is mapped to the sector $$ \frac{2\pi}{3} < \arg w < \frac{4\pi}{3}\, , $$ so that $$ M(z) = e^{-2\pi i/3}\, T(z) $$ maps $D$ onto a wedge with vertex at the origin.
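One can sanity-check numerically that $T$ straightens $C_1$ into the line through $0$ and $-a$ (a small Python sketch; variable names are mine):

```python
import cmath

a = 0.5 + 1j * 3 ** 0.5 / 2  # intersection point of |z| = 1 and |z - 1| = 1

def T(z):
    # Moebius map sending a -> 0 and conj(a) -> infinity
    return (z - a) / (z - a.conjugate())
```

Points $e^{it}$ of $C_1$ land on real multiples of $-a$, confirming that a circle through both $a$ and $\bar a$ becomes a line through the origin.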
|
|complex-analysis|mobius-transformation|
| 1
|
Is there a relationship with how many lines are necessary to divide a circle?
|
I want to use the example of a circle being divided, increasing by 1 each time ( 1/1 , 1/2 , 1/3 , 1/4 , etc.). In order, it would take this number of lines: 0, 1, 3, 2, 5, 3, 7, and so on. Is it that an odd division would require an inverse amount of lines (i.e. dividing by 9 correlates to 9 lines) while even division would require half of its inverse (i.e. dividing by 16 would require 8 lines)? And if that is the case, is there any equation to express this relationship? Would the series diverge or converge?
|
I’m assuming that you need to find the number of line segments required to divide a circle into $n$ sectors of equal area. In that case each sector has a central angle $\frac{2\pi}n$ . Let $D(n)$ denote the number of line segments you need to perform the required division. You can think of it this way: draw the radii for the first sector inclined to each other at the angle $\frac{2\pi}{n}$ calling them $r_1$ and $r_2$ , then draw the next radius $r_3$ inclined to $r_2$ at the same angle, and continue on until you have drawn the $n$ th radius. It’s clear from here that we have drawn $n$ radii, so if none of them form a straight line segment then $D(n)=n$ . Denote by $k = \frac{n}2$ . Clearly if $k \in \mathbb Z$ , then $r_m$ and $r_{m+k}$ for $m \le k$ would form a straight line segment since $k\frac{2\pi}n = \pi$ . That would mean exactly $k$ line segments would be required for the division since $k$ radii get added up with the remaining $k$ radii in pairs to form line segments. So we
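The resulting count — $n$ segments for odd $n$ and $\frac n2$ for even $n$ (taking $n\ge 2$) — is a one-liner; a sketch in Python with a name of my own choosing:

```python
def segments_needed(n):
    # line segments through the centre needed to cut a disk into n >= 2
    # equal sectors: n radii are drawn, and for even n the radii pair up
    # into n / 2 diameters
    return n // 2 if n % 2 == 0 else n
```

This reproduces the question's list 1, 3, 2, 5, 3, 7, ... for n = 2, 3, 4, 5, 6, 7.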
|
|algebra-precalculus|circles|divisibility|
| 0
|
How to rewrite $\cos{2n\theta}$ as a summation of $\sin\theta$
|
I want to rewrite $\cos{2 n \theta}$ as $$ \cos{2 n \theta}=\sum_{m=0}^{M} a_m \sin^m\theta $$ How do I determine $M$ and the coefficients $a_m$ ? Any comment is much appreciated. Many thanks in advance.
|
Expanding De Moivre's theorem directly leads to $$ \begin{aligned} \cos(2n\theta)+i\sin(2n\theta)&=(\cos\theta+i\sin\theta)^{2n} \\ &=\sum\limits_{r=0}^{2n}{2n \choose r}\cos^r\theta (i\sin\theta)^{2n-r} \\ \end{aligned} $$ Taking the real part, we find that only the even powers of $\cos\theta$ and $\sin\theta$ survive, which yields $$ \cos(2n\theta)=\sum\limits_{r=0}^{n}(-1)^{n-r}{2n \choose 2r}\cos^{2r}\theta\sin^{2n-2r}\theta $$ Substitute $\cos^2\theta=1-\sin^2\theta$ : $$ \begin{aligned} \cos(2n\theta)&=\sum\limits_{r=0}^{n}(-1)^{n-r}{2n \choose 2r}(1-\sin^2\theta)^r\sin^{2n-2r}\theta \\ &=\sum\limits_{r=0}^{n}(-1)^{n-r}{2n \choose 2r}\sum\limits_{s=0}^r{r \choose s}(-1)^s\sin^{2s}\theta\sin^{2n-2r}\theta \end{aligned} $$ Collect the $\sin^{2m}\theta$ term with $s+n-r=m$ , whose coefficient is $$ (-1)^m\sum\limits_{r=n-m}^{n}{2n \choose 2r}{r \choose r+m-n} $$ We can use the combinatorial identity $$ \sum\limits_{r=n-m}^{n}{2n \choose 2r}{r \choose r+m-n}=\frac{4n^2(4n^2-2^2)\cdots[4n^2-(2m-2)^2]}{(2m)!} $$ (with the empty product for $m=0$ read as $1$ ), so that $$ \cos(2n\theta)=\sum\limits_{m=0}^{n}(-1)^m\,\frac{4n^2(4n^2-2^2)\cdots[4n^2-(2m-2)^2]}{(2m)!}\,\sin^{2m}\theta\,. $$ In particular $M=2n$ and only even powers of $\sin\theta$ appear.
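A quick numerical check of the product formula for the coefficients $(-1)^m\,4n^2(4n^2-2^2)\cdots(4n^2-(2m-2)^2)/(2m)!$ (a Python sketch; the function name is mine):

```python
from math import cos, sin, factorial

def sin_power_coeffs(n):
    # a[m] = coefficient of sin(theta)^(2m) in cos(2*n*theta):
    # a[m] = (-1)^m * 4n^2 (4n^2 - 2^2) ... (4n^2 - (2m-2)^2) / (2m)!
    a = []
    for m in range(n + 1):
        prod = 1
        for j in range(m):
            prod *= 4 * n * n - (2 * j) ** 2
        a.append((-1) ** m * prod // factorial(2 * m))
    return a
```

For $n=3$ this returns $[1,-18,48,-32]$, i.e. $\cos 6\theta = 1-18\sin^2\theta+48\sin^4\theta-32\sin^6\theta$.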
|
|calculus|analysis|
| 0
|
A $2D$ random walk with step size $1$ starts on the edge of a disk of radius $r$. What is probability that the walk will return to the disk?
|
Consider a two dimensional random walk with step size $1$ and each step in a random direction, with the angle $\theta$ uniformly distrbuted in $[0,2\pi)$ . The walk starts on the perimeter of a disk of radius $r$ . What is probability that the walk will ever return to the disk, in terms of $r$ ? By "the walk will ever return to the disk", I mean the walk will have a vertex on or within the perimeter of the disk, besides the initial one. I came up with this question when thinking about the fact that a two dimensional lattice walk will return to the origin with probability $1$ , as proved by Polya in 1921. I have not been able to find any reference that answers my question. Context:
|
A circular sector centered at the starting point with some non-zero angle $\alpha$ and non-zero radius $\rho$ lies entirely within the disk of radius $r$ . By symmetry and linearity of expectation, the expected number of returns to this sector is $\frac\alpha{2\pi}$ times the expected number of returns to the disk of radius $\rho$ about the starting point. Thus, if the expected number of returns to this disk of radius $\rho$ is infinite, so is the expected number of returns to the circular sector, and hence also the expected number of returns to the disk of radius $r$ . This implies that the probability to return to that disk is $1$ . Since this only depends on the long-term behaviour, we can approximate the distance from the origin after $N$ steps by the Rayleigh distribution $$ f(r)=\frac{2r}N\mathrm e^{-\frac{r^2}N} $$ given in Joan S. Guillamet F.’s answer. The probability to be within $\rho$ of the starting point after $N$ steps is thus approximated as $$ \int_0^\rho\frac{2r}N\mathrm e^{-\frac{r^2}N}\,\mathrm dr=1-\mathrm e^{-\frac{\rho^2}N}\approx\frac{\rho^2}N\,, $$ and since $\sum_N\frac{\rho^2}N$ diverges, the expected number of returns is infinite.
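A Monte Carlo sketch of the original question (names are mine; with finitely many steps the estimate necessarily undershoots the limiting return probability of $1$):

```python
import math
import random

def returns_to_disk(r, steps, rng):
    # one unit-step walk started on the perimeter of the disk of radius r;
    # True if some later vertex lands on or inside the disk
    x, y = r, 0.0
    for _ in range(steps):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        x += math.cos(ang)
        y += math.sin(ang)
        if x * x + y * y <= r * r:
            return True
    return False

rng = random.Random(1)
frac = sum(returns_to_disk(3.0, 2000, rng) for _ in range(300)) / 300
```

Already the first step returns with probability close to $\frac12$, so the simulated fraction is well above one half even at this modest horizon.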
|
|probability|reference-request|random-walk|
| 1
|
Solving $(2n)^{\log 2}=(5n)^{\log 5}$
|
I came across this equation via a link named Asisten and German Academy (it is a Facebook video) where a complicated solution is given (I invite you to watch it) for $$(2n)^{\log 2}=(5n)^{\log 5}$$ I have adopted, instead, this approach: $$(2)^{\log 2}(n)^{\log 2}=(5)^{\log 5}(n)^{\log 5} \iff 2n^{\log 2}=5n^{\log 5}$$ After $$\frac{n^{\log 2}}{n^{\log 5}}=\frac 52 \iff n^{(\log 2-\log 5)}=\frac 52$$ $$\log(n^{(\log 2-\log 5)})= \log 5-\log 2 $$ $$(\log 2-\log 5)\log n=\log 5-\log 2 \iff \log n =-1$$ Hence $$e^{\log n}=e^{-1}\implies n=\frac 1e$$ Now I have seen the solution is $n=1/10$ . Is my solution different because my base is $e$ and not $10$ ? Generally I have seen that $\log=\log_{10}$ . In Italy we often use $\log=\log_e$ . I had also thought, with the old notation, that $\operatorname{Log}=\log_{10}$ . I have not often adopted $\ln$ for the Napierian base.
|
The question's equation (repeated here for easier reference) is: $$ \left(2 n \right)^{\log 2} = \left( 5 n \right)^{\log 5} \tag{Eq. 1}$$ Then, $$\begin{align*} \left(2 \right)^{\log 2}\left(n \right)^{\log 2} &= \left( 5 \right)^{\log 5} \left( n \right)^{\log 5} \tag{Eq. 2}\\ \frac{\left(n \right)^{\log 2}}{ \left( n \right)^{\log 5}} &= \frac{\left( 5 \right)^{\log 5}} { \left(2 \right)^{\log 2}} \tag{Eq. 3}\\ n^{\log 2-\log 5 } &= \frac{\displaystyle \left( 5 \right)^{\log 5}} {\displaystyle \left(2 \right)^{\log 2}} \tag{Eq. 4} \end{align*}$$ Finally, the result is given in Equation 5 as: $$ \boxed{ n = \left( \frac{\displaystyle \left( 5 \right)^{\log 5}} { \displaystyle \left(2 \right)^{\log 2}} \right) ^{ \frac{\displaystyle 1} { \displaystyle \log 2 - \log 5 } } = \frac{1}{10} }\tag{Eq. 5a}$$ Also, since all of the symbolic algebraic manipulation was independent of the base of $\log_b$ , the same result holds independent of the base $b$ so long as $b>0$ and $b \ne 1$ (the algebra is valid for any such base).
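A numerical check that $n=\frac1{10}$ works in every base, while the proposed $n=\frac1e$ does not satisfy the equation even in base $e$ (a sketch; the function name is mine):

```python
import math

def residual(n, base):
    # difference between the two sides of (2n)^log(2) = (5n)^log(5),
    # with log taken in the given base
    lg = lambda x: math.log(x, base)
    return (2 * n) ** lg(2) - (5 * n) ** lg(5)
```

The residual vanishes at $n=0.1$ for bases $2$, $e$ and $10$ alike, confirming the base-independence noted above.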
|
|algebra-precalculus|logarithms|
| 0
|
Extended and Contracted Ideals in Ring of Fractions
|
In some lecture notes on commutative algebra I'm reading, one defines for a multiplicative subset $S\subseteq A$ the canonical map $f:A\to S^{-1}A$ , $a\mapsto \tfrac{a}{1}$ , then for an ideal $I$ of $A$ , the extended ideal $I^e$ is the ideal of $S^{-1}A$ generated by $f(I)$ , and for an ideal $J$ of $S^{-1}A$ , the contracted ideal $J^c$ is $f^{-1}(J)$ . I can see that for any ideal $J$ of $S^{-1}A$ we have $(J^c)^e=J^{ce}= J$ , so every ideal of $S^{-1}A$ is an extended ideal, and that for an ideal $I$ of $A$ , $I^e=S^{-1}I$ . Now they claim that $I$ is a contracted ideal, i.e. there is some ideal $J$ of $S^{-1}A$ such that $I=J^c$ , if and only if $(I^e)^c=I^{ec}\subseteq I$ . I can prove one implication. If $I=J^c$ , then $I^e=J^{ce}=J$ , so $I^{ec}=J^c=I$ . But if $I^{ec}\subseteq I$ , I can't see how to show that $I=J^c$ for some $J$ .
|
Note that in general $I^{ec}\supseteq I$ , so the condition $I^{ec}\subseteq I$ is equivalent to $I^{ec}=I$ . Then you can just take $J=I^{e}$ .
|
|abstract-algebra|ring-theory|ideals|
| 1
|
$10$ different balls are to be placed in $4$ distinct boxes at random. The probability that two of these boxes contain exactly $2$ and $3$ balls is:-
|
If $10$ different balls are to be placed in $4$ distinct boxes at random, then what is the probability that two of these boxes contain exactly $2$ and $3$ balls? Solving: total ways $$n(S)=4^{10}$$ (as, for each ball, there are 4 boxes to select from). Probability $${\frac{{4\choose2}.{10\choose5}.{5\choose3}.2.2^5}{4^{10}}} = \frac{945}{2^{10}}$$ (as for the rest of the balls there are 2 boxes to select from). Way-2: making the occupancy tuples $(2,3,5,0);(2,3,4,1); (2,3,2,3)$ $$n(E)= {\frac{10!×4!}{2!3!5!0!}}+{\frac{10!×4!}{2!3!4!1!}} +{\frac{10!×4!}{2!3!2!3!×2!2!}}$$ Hence, the probability equals $${\frac{n(E)}{n(S)}}={\frac{945×17}{2^{15}}}$$ Here, the correct answer is from way-2 and the answer of way-1 is incorrect, but can anyone please explain in detail what is wrong in way-1, as we are getting the wrong answer from it?
|
Your first solution is prone to double counting - the usual suspect. Consider the following cases. Let's label the boxes A, B, C, and D, and the balls ${b_1, b_2 ... b_{10}}$ CASE $\mathbf 1$ You pick A and B as the two chosen boxes. A gets two balls - $b_1$ and $b_2$ . B gets three balls $b_3$ , $b_4$ and $b_5$ . The remaining $5$ balls are free to go into C or D - no restrictions. So, $b_6$ and $b_7$ go to C. and $b_8$ , $b_9$ and $b_{10}$ go to D. CASE $\mathbf 2$ You pick C and D as the two chosen boxes. C gets two balls - $b_6$ and $b_7$ . D gets three balls $b_8$ , $b_9$ and $b_{10}$ . The remaining $5$ balls are free to go into A or B. So, $b_1$ and $b_2$ go to A. And $b_3$ , $b_4$ and $b_5$ go to B. The solution counts these as two distinct cases. When, in fact, they are one and the same. Meaning, the number of ways is over-estimated. And so is the probability.
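An exact check confirms way-2: sum multinomial counts over all occupancy vectors of the four labelled boxes that contain both a box of $2$ and a box of $3$ (a sketch; the function name is mine):

```python
from fractions import Fraction
from math import comb

def exact_probability():
    # occupancy vectors (c0, c1, c2, c3) of 10 balls in 4 labelled boxes;
    # count the assignments whose vector contains both a 2 and a 3
    total = 0
    for c0 in range(11):
        for c1 in range(11 - c0):
            for c2 in range(11 - c0 - c1):
                c3 = 10 - c0 - c1 - c2
                if 2 in (c0, c1, c2, c3) and 3 in (c0, c1, c2, c3):
                    total += comb(10, c0) * comb(10 - c0, c1) * comb(10 - c0 - c1, c2)
    return Fraction(total, 4 ** 10)
```

This returns $\frac{945\cdot 17}{2^{15}}$, matching way-2 and confirming that way-1 overcounts.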
|
|probability|combinatorics|permutations|combinations|
| 1
|
Let $G$ be a group with $|G|=455$. Show that $G$ is a cyclic group.
|
Let $G$ be a group with $|G|=455$. Show that $G$ is a cyclic group.
|
Let $|G|=pqr$ and WLOG $p<q<r$ . Your case is the particular one for $p=5$ , $q=7$ and $r=13$ , where the fortunate conditions $p\nmid q-1$ and $p\nmid r-1$ and $q\nmid r-1$ take place, which allow the following counting argument. By Sylow III, $n_p\mid qr$ and $n_p\equiv 1\pmod p$ . So, $n_p=1,q,r,qr$ and $n_p=1+kp$ . If $p\nmid q-1$ and $p\nmid r-1$ , then $n_p\ne q,r$ and hence $n_p=1,qr$ . Likewise, $n_q\mid pr$ and $n_q\equiv 1\pmod q$ . So, $n_q=1,p,r,pr$ and $n_q=1+lq$ . Now, $q\nmid p-1$ (because $q>p$ ); so, if $q\nmid r-1$ , then $n_q\ne p,r$ and hence $n_q=1,pr$ . Suppose there are $qr$ $p$ -Sylow subgroups and $pr$ $q$ -Sylow subgroups. Since all these subgroups intersect pairwise trivially (they have order $p$ or $q$ ), their union's size amounts to $qr(p-1)+pr(q-1)+1$ , which is greater than $pqr$ . $^\dagger$ Therefore, there isn't enough room in $G$ for so many $p$ -Sylows and $q$ -Sylows at the same time, and hence either $n_p=1$ or $n_q=1$ . If $n_p=1$ , then the only $p$ -S
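For $|G|=455=5\cdot7\cdot13$ the Sylow III constraints can be enumerated directly (a sketch; the helper name is mine). It shows $n_7=n_{13}=1$ at once, and $n_5\in\{1,91\}$:

```python
def sylow_candidates(p, order):
    # Sylow III for a group order exactly divisible by p: the number of
    # Sylow p-subgroups divides order / p and is congruent to 1 mod p
    m = order // p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]
```

Since $n_7=n_{13}=1$, the Sylow $7$- and $13$-subgroups are normal, which is the starting point of the structure argument above.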
|
|abstract-algebra|group-theory|cyclic-groups|
| 0
|
Does this iterative process always reach $1$?
|
This is an iterative process I thought of: Start with any integer $n$ . If $n$ is prime, then divide it by $3$ and round it. If it’s not prime, add $1$ . For example, for $n = 27$ , we would add 1: $27 \implies 28$ 28 is still composite, so let’s add 1: $27 \implies 28 \implies 29$ Now 29 is prime, so we divide: $[\frac{29}{3}] = 10$ Hence, $27 \implies 28 \implies 29 \implies 10$ . If you’d like it in a more formal notation: $$ \begin{cases} n + 1 & \text{if } n \text{ is not prime} \\ [\frac{n}{3}] & \text{if } n \text{ is prime} \end{cases} $$ My claim is that this reaches 1 for all integers. This is the sequence of $27$ : $27 \implies 28 \implies 29 \implies 10 \implies 11 \implies 4 \implies 5 \implies 2 \implies 1$ This is the sequence for 100: $ 100 \implies 101 \implies 34 \implies 35 \implies 36 \implies 37 \implies 12 \implies 13 \implies 4 \implies 5 \implies 2 \implies 1$ We could even construct a directed graph here. I would think that the sequences generally have a downward tr
|
This follows directly from Bertrand's postulate; for any integer $n>3$ there is a prime number $p$ such that $n<p<2n-2$ .
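A quick sketch of the process itself (names are mine), checking the two sequences from the question and termination for a range of starting values:

```python
def is_prime(n):
    # trial division is plenty for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def trajectory(n):
    # primes step to round(n / 3); non-primes step to n + 1; stop at 1
    seq = [n]
    while n != 1:
        n = round(n / 3) if is_prime(n) else n + 1
        seq.append(n)
    return seq
```

Both example sequences from the question are reproduced exactly, and every start up to a few hundred reaches $1$.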
|
|elementary-number-theory|
| 0
|
Does this iterative process always reach $1$?
|
This is an iterative process I thought of: Start with any integer $n$ . If $n$ is prime, then divide it by $3$ and round it. If it’s not prime, add $1$ . For example, for $n = 27$ , we would add 1: $27 \implies 28$ 28 is still composite, so let’s add 1: $27 \implies 28 \implies 29$ Now 29 is prime, so we divide: $[\frac{29}{3}] = 10$ Hence, $27 \implies 28 \implies 29 \implies 10$ . If you’d like it in a more formal notation: $$ \begin{cases} n + 1 & \text{if } n \text{ is not prime} \\ [\frac{n}{3}] & \text{if } n \text{ is prime} \end{cases} $$ My claim is that this reaches 1 for all integers. This is the sequence of $27$ : $27 \implies 28 \implies 29 \implies 10 \implies 11 \implies 4 \implies 5 \implies 2 \implies 1$ This is the sequence for 100: $ 100 \implies 101 \implies 34 \implies 35 \implies 36 \implies 37 \implies 12 \implies 13 \implies 4 \implies 5 \implies 2 \implies 1$ We could even construct a directed graph here. I would think that the sequences generally have a downward tr
|
Very simplified answer: Because of Bertrand's postulate , we know that somewhere between $n$ and $2n$ there is a prime. So when you do find your first prime, you will divide by $3$ , and thus end up at at most $\frac{2n}{3}$ . So no matter where you start, the division will take you lower than your starting point. If you now consider this your new starting point, we see that we must necessarily go further and further down. I have skipped details regarding what happens for the smallest starting points (say $n < 10$ ) and, if we're being completely strict here, you could conceivably end up at $\left\lceil \frac{2n}3\right\rceil$ rather than below $\frac{2n}3$ after finding your first prime. But these are minor issues that don't matter for larger numbers, as the main point is that you end up strictly below $n$ itself, and any case below, say, $n = 10$ can easily be checked by hand rather than through this theory.
|
|elementary-number-theory|
| 0
|
Enlargement or reduction of metric may destroy completeness
|
I'm looking for examples in which the following occurs: We have a set $M$ and metrics $d_1$ , $d_2$ on $M$ such that $\forall x, y \in M: d_1(x, y) \leq d_2(x, y)$ . And then either: $(M, d_1)$ is complete but $(M, d_2)$ is not $(M, d_2)$ is complete but $(M, d_1)$ is not I already know an example of (2.): Take $M = \mathbb{R}$ and $d_2(x, y) = |x - y|$ . Now let $f: (0, 1) \to \mathbb{R}, f(x) = \frac{2x - 1}{x - x^2}$ . Let $d_1(x, y) = |f^{-1}(x) - f^{-1}(y)|$ . Then we have $d_1(x, y) \leq d_2(x, y)$ , $(M, d_2)$ is complete, but $(M, d_1)$ is incomplete. However, I have been unable to come up with an example of (1.).
|
For an example of $2$ , you took the usual metric for $d_2$ and made it so that a sequence $x_n$ which is going to $\infty$ becomes Cauchy by shrinking the distances. This is exactly the right idea. The function $f:(-1,1)\to \mathbb{R}$ given by $$f(x)=\frac{x}{1-|x|}$$ also works, and has inverse $$f^{-1}(x)=\frac{x}{1+|x|}.$$ This is just your function composed with a map taking $(0,1)$ to $(-1,1)$ . For the other case, assume $d_1(x,y)\leqslant d_2(x,y)$ for all $x,y\in M$ . Assume that $(M,d_1)$ is complete and $(M,d_2)$ is not. Then if $(x_n)_{n=1}^\infty$ is $d_2$ -Cauchy, this means $$\lim_N\sup_{m,n\geqslant N}d_1(x_m,x_n)\leqslant \lim_N\sup_{m,n\geqslant N}d_2(x_m,x_n)=0,$$ because $(x_n)_{n=1}^\infty$ is $d_2$ -Cauchy. Therefore it is also $d_1$ -Cauchy, and therefore $d_1$ -convergent to some $x$ . What we would like is that $(x_n)_{n=1}^\infty$ is not convergent to $x$ in the $d_2$ metric, so $d_2(x_n,x)$ stays large. So what's to stop us from making all the other points of
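The pair of maps above is easy to test numerically (a small sketch):

```python
def f(x):
    # homeomorphism (-1, 1) -> R
    return x / (1.0 - abs(x))

def f_inv(y):
    # its inverse, a 1-Lipschitz map R -> (-1, 1)
    return y / (1.0 + abs(y))
```

Under $d_1(x,y)=|f^{-1}(x)-f^{-1}(y)|$ the sequence $x_n=n$ becomes Cauchy, since $f^{-1}$ squeezes all of $\mathbb{R}$ into $(-1,1)$ and is $1$-Lipschitz, so $d_1\le d_2$ for the usual $d_2$.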
|
|metric-spaces|
| 0
|
Do matrices really rotate and stretch vectors or is that definition incorrect?
|
I come from the applied math and statistics world, but I was talking to my friend who comes from the pure math and number theory world—in particular Galois representation theory. I mentioned something about the confusing definition of "matrices" in textbooks. Many textbooks talk about a matrix as the solution to a linear system of equations, or other abstract descriptions. The definition that I have always found useful, was the sense that matrices rotate and scale vectors through linear transformations. Now, coming from the pure math side, my friend said that this definition was not accurate. I am trying to paraphrase some of his comments, but he said that in higher dimensions, matrices can stretch and rotate vectors only locally . He also said that it depends on what vectors the matrix acts upon. His response threw me for a loop and I was trying to understand how to resolve his statements. First, is my understanding of matrices incorrect? It is fine if this idea of rotating and stretc
|
Consider the matrix $A=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$ . In the Euclidean plane, this is the matrix, in the canonical basis, of the rotation by $+\frac{\pi}2$ . For example, $D=(5,2)\mapsto D'=(-2,5)$ . But let $u=(5,0)$ and $v=(2,3)$ . In the basis $\{u,v\}$ , the linear map $f$ such that $f(u)=v$ and $f(v)=-u$ also has the matrix $A$ . But then we won't talk about rotation. Maybe that's why "he also said that it depends on what vectors the matrix acts upon." You have to be precise, even during the Super Bowl ;)
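The first claim is a one-line check in NumPy (sketch):

```python
import numpy as np

# rotation by +pi/2 in the canonical basis of the Euclidean plane
A = np.array([[0, -1],
              [1, 0]])

D = np.array([5, 2])
D_rotated = A @ D
```

The same array of numbers represents a very different linear map once a different basis is fixed, which is the point of the answer.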
|
|linear-algebra|abstract-algebra|matrices|representation-theory|
| 0
|
Definition of Gödel's $\beta$-function in Shoenfield's book "Mathematical Logic"
|
In Section 6.4, page 116, Shoenfield defines $$ \beta(a,i)=\mu x_{x<a}(\cdots) $$ with $\text{OP}(u,v):=(u+v)\cdot (u+v)+u+1$ and $\text{Div}(u,v):\longleftrightarrow \exists t(tv=u)$ . Furthermore, $\mu x P(x,\overrightarrow{a})$ is defined as the least $x$ such that $P(x,\overrightarrow{a})$ holds, assuming the existence of an $x$ with $P(x,\overrightarrow{a})$ . My question is the following: Is the definition of $\beta$ correct, given that not every natural number $a$ is of the form $\text{OP}(y,z)$ for a pair $(y,z)$ of natural numbers? Even if $\beta$ is not total, would it still fulfill its purpose of coding finite sequences of natural numbers? I am aware that questioning the correctness of this definition is daunting, as Shoenfield's book is a classic.
|
Check out p. 112, where Shoenfield defines his bounded $\mu$ operator, which is the one in use here. The definition ensures that $\mu x_{x<a}\,\varphi(x)$ is always defined -- even if there is no $n<a$ such that $\varphi(n)$ is true.
|
|logic|recursion|
| 0
|
Do matrices really rotate and stretch vectors or is that definition incorrect?
|
I come from the applied math and statistics world, but I was talking to my friend who comes from the pure math and number theory world—in particular Galois representation theory. I mentioned something about the confusing definition of "matrices" in textbooks. Many textbooks talk about a matrix as the solution to a linear system of equations, or other abstract descriptions. The definition that I have always found useful was the sense that matrices rotate and scale vectors through linear transformations. Now, coming from the pure math side, my friend said that this definition was not accurate. I am trying to paraphrase some of his comments, but he said that in higher dimensions, matrices can stretch and rotate vectors only locally. He also said that it depends on what vectors the matrix acts upon. His response threw me for a loop and I was trying to understand how to resolve his statements. First, is my understanding of matrices incorrect? It is fine if this idea of rotating and stretc
|
You are quite close. One can always decompose a matrix using the singular value decomposition (SVD). For some matrix $M$ this is written as $$M=U\Sigma V^*,$$ where $V^*$ means the conjugate transpose of $V$ , $U$ and $V$ are unitary matrices and $\Sigma$ is a diagonal matrix. Unitary means $VV^*=1\!\!1$ and it is the complex equivalent of a rotation matrix. In the case that $M$ is real-valued and a square $(n\times n)$ matrix, $U$ and $V$ can be chosen to be real , which means unitarity becomes orthogonality, i.e. $VV^T=1\!\!1$ . So in that case $U$ and $V$ are rotation matrices (or reflections). So this means that any real, square matrix can be seen as a rotation, a scaling and some other rotation. And maybe a reflection as well.
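The "rotation, scaling, rotation" picture is easy to verify numerically; a small sketch with `numpy` (the random matrix and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))      # an arbitrary real square matrix

U, s, Vt = np.linalg.svd(M)          # M = U @ diag(s) @ Vt

# U and V are orthogonal: rotations, possibly composed with a reflection.
assert np.allclose(U @ U.T, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(3))

# M acts as: rotate (V^T), scale along the axes (diag(s)), rotate (U).
assert np.allclose(U @ np.diag(s) @ Vt, M)
print(s)  # the scaling factors (singular values), all nonnegative
```

Any real square matrix factors this way, which is exactly the "rotate, stretch, rotate" reading of the answer.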
|
|linear-algebra|abstract-algebra|matrices|representation-theory|
| 0
|
Being an integral weight can be checked on a base.
|
Let $\Phi$ be a root system and $\Delta$ a base for a finite dimensional semi-simple complex Lie algebra. For $\lambda \in H^*$ we say that $\lambda $ is integral if $\frac{2(\lambda,\alpha)}{(\alpha,\alpha)}\in \mathbb{Z}$ for all $\alpha\in\Phi$ . Now my question is why we can check this condition on $\Delta$ . Here is my approach so far: By definition of a base we can write $\alpha=\sum_{i=0}^nk_i \alpha_i$ with $k_i\in \mathbb{Z}$ . This transforms our condition into $$\frac{\sum_{i=0}^nk_i 2(\lambda,\alpha_i)}{\sum_{i,j=0}^nk_ik_j(\alpha_i,\alpha_j)}\in \mathbb{Z}.$$ Now we know that $\frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}\in \mathbb{Z}$ . I feel like I should get somewhere with this but I'm a bit stuck here. Thanks for any hints or answers in advance!
|
Use the fact that every root $\alpha$ is conjugate to a simple root $\alpha_i \in \Delta$ by a product of simple reflections $\sigma = \prod_i \sigma_{\alpha_i} \in W$ where $W$ is the Weyl group of $\Phi$ . That means $\sigma(\alpha) = \alpha_i \in \Delta$ . Now define $\langle \lambda, \alpha \rangle:= \frac{2 (\lambda, \alpha)}{(\alpha, \alpha)}$ . This function is invariant under elements of the Weyl group. That means, with the chosen $\sigma$ from above, $\langle \lambda, \alpha\rangle = \langle \sigma(\lambda), \sigma(\alpha)\rangle$ . Notice that $\sigma(\alpha)$ is now a simple root. That means $\langle \lambda, \sigma(\alpha) \rangle \in \mathbb{Z}$ , since you have checked the condition on the simple roots. The last open question is: How can we go from $\langle \sigma(\lambda), \sigma(\alpha) \rangle$ to $\langle \lambda, \sigma(\alpha) \rangle$ ?
|
|lie-algebras|semisimple-lie-algebras|
| 0
|
Construct ellipse with given foci such that it is tangent to a given circle
|
Given two points $F_1$ and $F_2$ and a circle centered at $r_0$ with radius $s$ , I'd like to construct the ellipse with foci $F_1$ and $F_2$ that is tangent to the given circle. That is the question. My attempt: Let $r = [x, y]^T $ be the position vector of a point in the plane. To simplify the analysis, I'll introduce a new coordinate reference with its origin at the center of the ellipse. This is known, because the center of the ellipse is just the midpoint of the two foci. So let $ C = \dfrac{1}{2} (F_1 + F_2) $ And define the unit vector $ u_1 = \dfrac{ F_2 - F_1}{\| F_2 - F_1 \| } $ And let $u_2 $ be a unit vector that is perpendicular to $u_1$ . Now define the rotation matrix $ R = [u_1, u_2] $ By letting the $x'$ axis point along $u_1$ and the $y'$ axis point along $u_2$ , then if $p' = (x',y')$ is the local coordinate of a (world) point $p$ with resepct to this new frame, then then two vectors are related by $ p = C + R p' $ So that $ p' = R^T (p - C) $ Using these new coordin
|
I have the impression you're looking for a single solution. You do realise there are two solutions, don't you? P.S. I realise very well this should be a comment and not an answer, but there's no way to add an image to a comment.
|
|geometry|conic-sections|
| 0
|
Is the argument for approaching of an approximated quantity to its true value sound?
|
Consider a quantity, $a$ , which could represent various measures such as the area under a curve. Initially, we estimate the value of $a$ . As we refine our approximation, we find that we can make it arbitrarily close to a number $\ell$ . This suggests that $a$ equals $\ell$ . But why is this the case? Here's the rationale: Suppose, for contradiction, that $a$ does not equal $\ell$ , but instead equals a different number, say $m$ . In the process of approximating $a$ , it should approach $m$ (or in other words, we find that we can make it arbitrarily close to $m$ ), as $m$ is its true value. However, $a$ is approaching $\ell$ (Because we said that we can make it arbitrarily close to a number $\ell$ ), and this is a contradiction. Therefore, the actual value of $a$ must be $\ell$ , as any other assumption leads to a contradiction. Is this reasoning sound? Edit: According to Charles Hudgins's comment, my reasoning wasn't sound and as I understood, the sound proof is as follows: $a$ can g
|
What you have done in that first part is argue the logic behind the definition of the limit. However, I would argue the beginning logic is slightly flawed. Imagine if I consider the sequence $x_n = \frac {1}{n}$ . Obviously $x_n\to 0$ as $n\to\infty$ , but $x_n$ never becomes zero, so $x_n$ will never "equal" zero as you have written. The actual proof for that statement is explained well in the comments by Charles Hudgins (and for the record, that is probably the way you should look at this proof); I recommend you look at that!
|
|calculus|limits|logic|
| 0
|
Solving $(2n)^{\log 2}=(5n)^{\log 5}$
|
I have seen this equation from a link named Asisten and German Academy (it is a Facebook video), where there is a complicated solution (I invite you to watch it) for $$(2n)^{\log 2}=(5n)^{\log 5}$$ I have adopted, instead, this approach: $$(2)^{\log 2}(n)^{\log 2}=(5)^{\log 5}(n)^{\log 5} \iff 2n^{\log 2}=5n^{\log 5}$$ After that: $$\frac{n^{\log 2}}{n^{\log 5}}=\frac 52 \iff n^{(\log 2-\log 5)}=\frac 52$$ $$\log(n^{(\log 2-\log 5)})= \log 5-\log 2 $$ $$(\log 2-\log 5)\log n=\log 5-\log 2 \iff \log n =-1$$ Hence $$e^{\log n}=e^{-1}\implies n=\frac 1e$$ Now I have seen that the solution is $n=1/10$ . Is my solution different because my base is $e$ and not $10$ ? Generally I have seen that $\log=\log_{10}$ . In Italy we often use $\log=\log_e$ . I had thought, with the old notation, that $\operatorname{Log}=\log_{10}.$ I have not often used $\ln$ , where the base is Napier's number $e$ .
|
If we let $n=\frac k{10}$ , then $$ \begin{align} &(2n)^{\log 2}=(5n)^{\log 5} \\ \iff &(\tfrac k5)^{\log2}=(\tfrac k2)^{\log5}\\ \iff &5^{-\log2}k^{\log2}=2^{-\log5}k^{\log5}\\ \stackrel{(*)}{\iff} &k^{\log2}=k^{\log5}\\ \iff &k^{\log5-\log2}=1\\ \iff &k=1 \;\text{and}\; n=\frac1{10} \end{align} $$ $(*)$ By taking $\log$ of both sides, $$5^{-\log2}=2^{-\log5}\iff -\log5\log2=-\log2\log5.$$
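Worth noting: $n=\frac1{10}$ satisfies the original equation regardless of whether $\log$ means $\log_{10}$ or $\ln$ , since the identity used at step $(*)$ holds in any base. A quick numeric check in plain Python:

```python
from math import log, log10, isclose

n = 1 / 10
for lg in (log10, log):   # base-10 and natural logarithm
    lhs = (2 * n) ** lg(2)
    rhs = (5 * n) ** lg(5)
    assert isclose(lhs, rhs)
    # the identity used at step (*) also holds in either base
    assert isclose(5 ** -lg(2), 2 ** -lg(5))
print("n = 1/10 works in both bases")
```

So the base of the logarithm does not change the answer here; only an algebra slip can produce $n=1/e$ .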
|
|algebra-precalculus|logarithms|
| 0
|
Exterior/wedge product of two vectors as an area?
|
It is often said that the exterior/wedge product of two vectors represents an area, or signed area with undefined shape, and they show a picture of two vectors in 2 dimensions (could be i and j ) with an exterior product that is the parallelogram spanned by the vectors, or a circle with the same area as the parallelogram, or so; and a connection is made with the usual cross product, cause both exterior and cross products have the same scalar magnitude or area. So far so good. Now in 3 dimensions instead of i and j let's take the vectors u =(0.853, -0.146, 0.5) and v =(-0.146, 0.853, 0.5); these vectors are just the i , j rotated, so they are just unit vectors spanning a 90º angle. In regard to the cross product nothing essential has changed, it will be again a vector and its length will be the same area as it was with i and j . But according to Wikipedia (Exterior Algebra, section Cross And Triple Products) the exterior product is now: u Λ v = (u1·v2-u2·v1)( i Λ j ) + (u2·v3-u3·v2)( j
|
The exterior product of two vectors doesn't represent a simple area, but an oriented area. It's just that in two dimensions, this becomes a scalar since it only has one vector component (but it can still be considered oriented because of the sign of the component). In three dimensions, an oriented area is equivalent to the combination of an area and a normal vector for that area. Two bivectors that are the results of two different exterior products between vectors can have the same magnitude (the same area), but their normal directions can still be different (in which case the two bivectors are also different). Just like a vector that isn't aligned with any of the basis axes will have more than one non-zero component, a bivector representing a surface that isn't aligned with two of the basis axes will also have more than one non-zero component. In four dimensions or higher, the orientation of the area can't be described by a single normal vector; you would instead describe the orientat
|
|linear-algebra|abstract-algebra|vectors|vector-analysis|
| 0
|
Proof by contradiction for q $\Rightarrow (p \land $ $\neg p)$
|
Hi everyone, I'm doing a logic exercise from David Gries's book The Science of Programming; one of the exercises says: Prove that from $\neg q$ follows q $\Rightarrow (p \land $ $\neg p)$ . I proved that in the following way: From $\neg q$ infer $q \Rightarrow (p \land $ $\neg p)$ $\neg q$ Premise 1. from q infer p. 2.1. q Premise 1. 2.2 From $\neg p$ infer q $\land$ $\neg q$ 2.2.1 q $\land$ $\neg$ q ( $\land$ -I, 1, 2.1). 2.3 p ( $\neg$ -E, 2.2) from q infer $\neg$ p. 3.1. q Premise 1. 3.2 From p infer q $\land$ $\neg q$ 3.2.1 q $\land$ $\neg$ q ( $\land$ -I 1, 3.1). 3.3 $\neg$ p ( $\neg$ -I, 3.2) p $\land$ $\neg$ p ( $\land$ -I,2,3). q $\Rightarrow (p \land $ $\neg p)$ . ( $\Rightarrow$ -I, 2, 4). From my understanding this proof indicates that if q is false then q $\Rightarrow (p \land $ $\neg p)$ is true. The doubt I have is with this proof: Prove that from q follows q $\Rightarrow (p \land $ $\neg p)$ that I proved in this way: From q infer $q \Rightarrow (p \land $ $\
|
Yes, your first proof is fine; see (3.3.13) page 41: From $p, \lnot p$ infer $q$ . Thus, starting with premises $\lnot q$ and $q$ we have a 1st sub-proof: From $\lnot q, q$ infer $p$ . In the same way, we have a 2nd sub-proof: From $\lnot q, q$ infer $\lnot p$ . From them $(p \land \lnot p)$ follows using $(\land \text {-I})$ and we conclude with $q \to (p \land \lnot p)$ using $(\to \text {-I})$ . In conclusion we have: From premise $\lnot q$ , the conclusion $q \to (p \land \lnot p)$ follows. Regarding your second attempt, you have $q$ as premise and you have derived the contradiction $(p \land \lnot p)$ assuming also $\lnot q$ . Thus, when you conclude with $q \to (p \land \lnot p)$ , there is still the premise $\lnot q$ . Obviously, we cannot prove $q \to (p \land \lnot p)$ because it is not a tautology . Formula $(p \land \lnot p)$ does not follow from $\lnot q$ alone (nor from $q$ alone) [see logical consequence ] because $(p \land \lnot p)$ is a contradiction , i.e. a formu
|
|logic|propositional-calculus|
| 1
|
Proving if a linear part of a function exists using algebra.(proof of the second derivative being zero at some point/points)
|
I know that the following question can be resolved using derivatives but nonetheless, I would like to hear a more fluid and lucid approach. Any hand-waving explanations and approximations are welcome. The question is: Prove that there exists a linear part of the function $f(x)=\frac{1}{\sqrt{(x^2+1)^3}}$ at $x=-1/2$ and $x=1/2$ . Now initially I thought we could use the binomial expansion to expand the quantity, then I would evaluate it around the said $x$ values and pray to the almighty that it would resemble a linear function. But I could get nowhere. Thanks in advance!!!
|
Let $\varepsilon=\pm1$ . $$\begin{align}f\left(\frac\varepsilon2+h\right)&=\left(\left(\frac\varepsilon2+h\right)^2+1\right)^{-3/2}\\ &=\left(\frac54+\varepsilon h+h^2\right)^{-3/2}\\ &=\left(\frac54\right)^{-3/2}\left(1+\frac{4\varepsilon}5h+\frac45h^2\right)^{-3/2}\\ &=\left(\frac54\right)^{-3/2}\left(1-\frac32\left(\frac{4\varepsilon}5h+\frac45h^2\right)+\frac{\left(-\frac32\right)\left(-\frac52\right)}2\left(\frac{4\varepsilon}5h\right)^2+o(h^2)\right)\\ &=\left(\frac54\right)^{-3/2}\left(1-\frac{6\varepsilon}5h+o(h^2)\right). \end{align} $$
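The vanishing of the $h^2$ term reflects the fact that $f''(\pm\frac12)=0$ , which can be confirmed symbolically (this sketch assumes `sympy`, which is not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 + 1) ** sp.Rational(-3, 2)

# The second derivative vanishes exactly at x = +-1/2,
# which is why the expansion above has no h^2 term there.
f2 = sp.diff(f, x, 2)
assert sp.simplify(f2.subs(x, sp.Rational(1, 2))) == 0
assert sp.simplify(f2.subs(x, sp.Rational(-1, 2))) == 0

# Series at x = 1/2 (the case eps = +1): constant + linear term only.
print(sp.series(f, x, sp.Rational(1, 2), 3))
```

In fact $f''(x)=3(4x^2-1)(x^2+1)^{-7/2}$ , so $x=\pm\frac12$ are exactly the points where the quadratic term dies.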
|
|functions|
| 1
|
semi-ellipse - vector calculus exercise
|
We have the semi-ellipse curve $C$ defined by $$\frac{x^2}{9} + \frac{y^2}{4} = 1$$ where $$x \le 0$$ Assuming clock-wise orientation (from $(0,-2)$ to $(0,2)$ ) of this curve, find the unit tangent vector and the principal normal vector at the point $P = (-3/\sqrt{2}, 2/\sqrt{2})$ Also find the curvature at that point. I was able to solve this one but I just want to compare my answers. Here are my answers: $T = (3/\sqrt{13}, 2/\sqrt{13}, 0)$ $N = (2/\sqrt{13}, -3/\sqrt{13}, 0)$ $\kappa = (24 \cdot \sqrt{13} ) / 169$ The parametrization which I came up with and which I used was this one $x = -3 \cos{t}$ $y = 2 \sin{t}$ $t \in [-\pi/2, \pi/2]$ When $t=\pi/4$ we get the point $P$ on the semi-ellipse.
|
Everything seems fine to me, except the curvature. Probably it's due to a small calculation mistake. For a curve expressed in terms of a generic parametrization $r(t)=(x(t),y(t),z(t))$ , we have that: $$\kappa(t)=\frac{\|r'(t)\times r''(t)\|}{\|r'(t)\|^3}$$ Your curve is parametrized by $r(t)=\left(-3\cos t,2\sin t,0\right)$ , so that $r'(t)=(3\sin t,2\cos t,0)$ and $r''(t)=(3\cos t,-2\sin t,0)$ . The cross product is: $$r'(t)\times r''(t)=\begin{vmatrix} \textbf i & \textbf j & \textbf k \\ 3\sin t & 2\cos t & 0 \\ 3\cos t & -2\sin t & 0 \end{vmatrix}=(0,0,-6\sin^2t-6\cos^2t)=(0,0,-6)$$ So: $$\kappa(t)=\frac{6}{(9\sin^2t+4\cos^2t)^\frac{3}{2}}=\frac{6}{(5\sin^2t+4)^\frac{3}{2}}$$ At the point $P=r\left(\frac{\pi}{4}\right)$ the curvature is: $$\kappa\left(\frac{\pi}{4}\right)=\frac{6}{\left(\frac{5}{2}+4\right)^\frac{3}{2}}=\frac{12\sqrt 2}{13\sqrt{13}}=\frac{12\sqrt{26}}{169}$$
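A numeric cross-check of the value $\frac{12\sqrt{26}}{169}$ with `numpy` (the parametrization is the one from the question; the helper names are mine):

```python
import numpy as np

def rp(t):   # r'(t) for r(t) = (-3 cos t, 2 sin t, 0)
    return np.array([3 * np.sin(t), 2 * np.cos(t), 0.0])

def rpp(t):  # r''(t)
    return np.array([3 * np.cos(t), -2 * np.sin(t), 0.0])

t = np.pi / 4
kappa = np.linalg.norm(np.cross(rp(t), rpp(t))) / np.linalg.norm(rp(t)) ** 3

# closed form obtained in the answer: 12*sqrt(26)/169
assert np.isclose(kappa, 12 * np.sqrt(26) / 169)
print(kappa)  # approximately 0.362
```

This agrees with $\kappa(\pi/4)=6/(13/2)^{3/2}\approx 0.362$ , not with $24\sqrt{13}/169\approx 0.512$ .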
|
|calculus|solution-verification|analytic-geometry|
| 1
|
Finding Weak* limit of a sequence in the space of sequences
|
For $n \in \mathbb{N}$ we define the functional $\phi_n$ on $l^{\infty}$ by $\phi_n(x) = \frac{1}{n} \sum_{j=1}^{n} x_j$ where $x$ denotes the sequence $\{x_j \}^{\infty}_{j=1}.$ It is clear that $\phi_n$ is linear. Using the triangle inequality and the definition of the $\infty$ -norm, I was also able to prove that $\phi_n$ is in fact in $(l^{\infty})^*.$ Now, I am stuck with the remaining part of the problem. The problem also says that the sequence $\{\phi_n\}^{\infty}_{n=1}$ has a weak* cluster point $\phi$ in $(l^{\infty})^*$ , and $\phi$ does not arise from an element of $l^1.$ This is an exercise (#19, Section 6.2) from Folland's Real Analysis book. Any help/suggestions would be highly appreciated. Thank you so much. Stay safe. Edit: Suppose $f: l^1 \rightarrow (l^{\infty})^*$ is given by $(f(y))(x) = \sum_{j=1}^{\infty} x_{j}y_{j}$ where $y$ denotes the sequence $\{y_j\}^{\infty}_{j=1}$ in $l^1.$ I want to prove that the functional $\phi$ is not in the image of $f$ i.e. $\phi \notin
|
It is straightforward to show $\phi_n\in B_{X^*}$ , which is weak* compact by Banach–Alaoglu. Any compact set with an infinite subset is such that the infinite subset has a cluster point [see Thm 28.1 in Munkres's topology]. Suppose the cluster point is $y\in l_1$ . The following is an open set in the weak star topology: $$V_1=\{x^* \: | \:\: |x^*(e_j)-y(e_j)|<1 \text{ for } j\le 1\}$$ Take $\phi_{n_1}\in V_1\setminus\{y\}$ (this is possible because we have a cluster point). Because the weak star topology is Hausdorff, we may take an open set $\hat{V}_2\ni y$ containing no $\phi_{j}$ for $j\in \{1,...,n_1\}$ . We define: $$V_2=\hat V_2\cap\{x^* \: | \:\: |x^*(e_j)-y(e_j)|<\tfrac12 \text{ for } j\le 2\}$$ We take $\phi_{n_2}\in V_2\setminus\{y\}$ . Because the weak star topology is Hausdorff, we may take an open set $\hat{V}_3\ni y$ containing no $\phi_{j}$ for $j\in \{1,...,n_2\}$ . Define: $$V_3=\hat V_3\cap\{x^* \: | \:\: |x^*(e_j)-y(e_j)|<\tfrac13 \text{ for } j\le 3\}$$ If we proceed inductively, it is clear we have built a sequence such that for every $k\in \mathbb{N}$ : $$\left|\frac{1}{n_k}-y_j\right|<\frac1k \text{ for } j\le k$$ Taking the limit in $k$ (notice the $n_k$ are in
|
|real-analysis|functional-analysis|linear-transformations|dual-spaces|
| 0
|
Proof that $P(A \mid C) = 1$ implies $P(A \mid B∩ C) = 1$
|
This isn't homework. I'm reading Probabilistic Graphical Models by Koller et al.$\space$, and an easy problem in chapter $3$ made me think of a more general problem (which I'm now stuck on). I have everything in place except for this: Let $P(B \cap C) \neq 0$. Then $P(A \space|\space B ∩ C) = 1$ if $P(A \mid C) = 1$. It's intuitively obvious, but I haven't been able to formally prove it. I get $$P(A \mid B∩C) = \dfrac{P(A∩B∩C)}{P(B∩C)} = \dfrac{P(C) P(A\mid C) P(B\mid A∩C)}{P(B∩C)} = \dfrac{P(B\mid A∩C)}{P(B|C)}$$ I don't see why that ratio on the right hand side is $1$. Is it? What am I not seeing?
|
Assuming $P(B\cap C)>0$ (hence $P(C)>0$ ), we have $P(A\mid C)=1\iff P(\bar A\mid C)=0\iff P(\bar A\cap C)=0$ and similarly, $P(A\mid B\cap C)=1\iff P(\bar A\cap B\cap C)=0$ . Since $P(\bar A\cap B\cap C)\le P(\bar A\cap C)$ , the result follows.
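For a sanity check, one can sample random measures on the eight atoms generated by $A,B,C$ , force $P(\bar A\cap C)=0$ , and confirm $P(A\mid B\cap C)=1$ ; a small sketch in plain Python (the atom encoding by membership flags is my own):

```python
import random

random.seed(1)
checked = 0
for _ in range(1000):
    # random mass on the 8 atoms; key (a, b, c) flags membership in A, B, C
    m = {(a, b, c): random.random() for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    # force P(A | C) = 1 by giving the atoms of (not A) ∩ C zero mass
    m[(0, 0, 1)] = m[(0, 1, 1)] = 0.0
    total = sum(m.values())
    p_bc = (m[(0, 1, 1)] + m[(1, 1, 1)]) / total   # P(B ∩ C)
    if p_bc > 0:
        p_abc = m[(1, 1, 1)] / total               # P(A ∩ B ∩ C)
        assert abs(p_abc / p_bc - 1) < 1e-12       # P(A | B ∩ C) = 1
        checked += 1
print(checked, "random measures checked")
```

Of course this proves nothing by itself; it just illustrates the mechanism of the proof: killing the mass of $\bar A\cap C$ kills the mass of $\bar A\cap B\cap C$ too.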
|
|probability|conditional-probability|
| 0
|
What exactly is being asked in Spivak's Calculus, Ch. 11, "Significance of the Derivative", problem 49, regarding Cauchy Mean Value Theorem?
|
The following is a problem from chapter 10, "Significance of the Derivative", from Spivak's Calculus: Prove that the conclusion of the Cauchy Mean Value Theorem can be written in the form $$\frac{f(b)-f(a)}{g(b)-g(a)}=\frac{f'(x)}{g'(x)}\tag{1}$$ under the additional assumptions that $g(b) \neq g(a)$ and that $f'(x)$ and $g'(x)$ are never simultaneously $0$ on $(a,b)$ . I am not sure exactly what is being asked here. Here is the Cauchy Mean Value Theorem as it appears in Spivak's Calculus Theorem 8 (The Cauchy Mean Value Theorem) If $f$ and $g$ are continuous on $[a,b]$ and differentiable on $(a,b)$ , then there is a number $x$ in $(a,b)$ such that $$[f(b)-f(a)]g'(x)=[g(b)-g(a)]f'(x)$$ (If $g(b) \neq g(a)$ , and $g'(x) \neq 0$ , this equation can be written $$\frac{f(b)-f(a)}{g(b)-g(a)}=\frac{f'(x)}{g'(x)}$$ Starting from the consequent of the theorem, we need only the additional assumptions $g(b) \neq g(a)$ and $g'(x) \neq 0$ to reach $(1)$ . Problem 49 is asking us to alter the secon
|
The Cauchy Mean-Value theorem says there exists some $x$ where the equality is satisfied. You aren't assuming that for that specific $x$ , $g'(x)$ is nonzero; rather, that wherever $g'$ is $0$ , $f'$ isn't (and vice-versa). Hence, for the specific $x$ found by the Cauchy Mean-Value theorem, $g'(x)$ is also non-zero: if we had $g'(x)=0$ , then $[f(b)-f(a)]g'(x)=[g(b)-g(a)]f'(x)$ together with $g(b)\neq g(a)$ would force $f'(x)=0$ as well, contradicting the assumption that $f'$ and $g'$ are never simultaneously $0$ . Hence the second form.
|
|calculus|derivatives|proof-explanation|
| 0
|
Construct an ellipse with given foci such that it is tangent to a given line
|
Given two points $F_1$ and $F_2$ , and a line $n \cdot (r - r_0) = 0$ , where $r = (x,y) , r_0 = (x_0, y_0), n = (n_1, n_2)$ , I'd like to construct the ellipse having foci $F_1$ and $F_2$ and tangent to the line. In this post , the same problem is addressed, but the question there asks for construction using a straightedge and a compass. And the answer gives a description of how to construct such an ellipse using a string. There is no mention of the equation of the ellipse. I was wondering if someone could shed more light on this problem and produce a closed-form equation of the ellipse in terms of the given $F_1, F_2$ and $ n $ and $ r_0 $ . This is my attempt: Let $r = [x, y]^T $ be the position vector of a point in the plane. To simplify the analysis, I'll introduce a new coordinate reference with its origin at the center of the ellipse. This is known, because the center of the ellipse is just the midpoint of the two foci. So let $ C = \dfrac{1}{2} (F_1 + F_2) $ And define the unit
|
Given $$ \cases{ p = (x,y)\\ F_1=(x_1,y_1)\\ F_2=(x_2,y_2)\\ p_0 = (x_0,y_0) } $$ an ellipse is a geometric locus such that $$ \|p-F_1\|+\|p-F_2\|=l,\ \ \ l\gt \|F_1-F_2\| $$ and a line $$ y = y_0 + m(x-x_0) $$ the ellipse locus is contained in $$ 4l^2\|p-F_2\|^2-(l^2-\|p-F_1\|^2+\|p-F_2\|^2)^2=0 $$ or $$ 4l^2((x-x_2)^2+(y-y_2)^2)-(l^2-x_1^2+2x(x_1-x_2)+x_2^2+2y y_1-y_1^2-2y y_2+y_2^2)^2=0 $$ As this locus is tangent to the line, after eliminating the line, i.e. substituting $y = y_0 + m(x-x_0)$ , there remains a polynomial in $x$ which should have a double root due to tangency. After substitution we have $$ c_0 + c_1l^2-l^4+c_2 x + c_3 l^2x + c_4x^2+c_5 l^2x^2=k_0(x-r_0)^2 $$ now calling $l^2=L$ $$ c_0 + c_1L-L^2+c_2 x + c_3 Lx + c_4x^2+c_5 Lx^2-k_0(x-r_0)^2=0 $$ and this should be true for all $x$ then $$ \cases{ c_0+c_1 L-L^2-k_0 r_0^2=0\\ c_2+c_3L+2k_0r_0=0\\ c_4-k_0+c_5L=0 } $$ Given $$ \cases{ F_1=(0,0)\\ F_2=(3,3)\\ p_0=(-2,-2)\\ m=-4 } $$ we have $$ -6084 + 556 L - L^2 - 2808 x + 356 L
|
|linear-algebra|geometry|conic-sections|
| 1
|
Pythagorean theorem for operators
|
For numbers $a,b\in\mathbb{R}_+$ , we have that $$\sqrt{a^2+b^2} \leq a+b.$$ Can we extend this to operators? In other words, given positive semi-definite operators $A,B$ (of the same size), is it the case that \begin{align} \sqrt{A^2+B^2} \leq A + B \end{align} where the ordering is the matrix ordering ( $Q\geq0$ means $Q$ is positive semi-definite)? I have been looking into this note , where they show many useful results regarding operator monotonicity and convexity, though I could not find a good tool for answering this one. Remark : there exist positive semi-definite matrices $A,B$ such that $AB+BA$ is non-positive (i.e. $AB+BA$ can have negative eigenvalues). (Not sure if the title is a good one! The reason behind choosing it is that given a right-angled triangle with legs $a,b$ the hypotenuse is $\sqrt{a^2+b^2}$ , and from the triangle inequality it follows that $\sqrt{a^2+b^2}\leq a+b$ .)
|
No. Towards a counterexample, note that if $A$ and $B$ are orthogonal projections ( $A^2 = A$ , $B^2 = B$ ), the inequality rewrites to $$ \sqrt{A + B} \leq A + B.$$ Since $\sqrt{A + B}$ and $A + B$ commute, this is equivalent to the squared inequality $$ A + B \leq (A + B)^2 = A^2 + B^2 + A B + BA = A + B + A B + BA \\ \Leftrightarrow 0 \leq A B + B A.$$ It is hence enough to find orthogonal projections $A$ and $B$ such that $A B + B A$ is not positive semidefinite. One such pair is $$A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, B = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{bmatrix} \Rightarrow A B + B A = \begin{bmatrix} 1 & \frac{1}{2} \\ \frac{1}{2} & 0 \end{bmatrix}.$$ These matrices in fact appear more or less in the Example 2.2 of the linked note. One can actually prove that if $A$ and $B$ are orthogonal projections, $A B + B A$ is positive semidefinite only if $A$ and $B$ commute, so any noncommuting projection pair works as a counterexamp
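The counterexample is easy to check numerically; a sketch with `numpy` (the PSD square root is computed via the spectral decomposition, my own choice of method):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Both are orthogonal projections (A^2 = A, B^2 = B), hence PSD.
assert np.allclose(A @ A, A) and np.allclose(B @ B, B)

# AB + BA has a negative eigenvalue:
assert np.linalg.eigvalsh(A @ B + B @ A).min() < 0

# PSD square root of A^2 + B^2 via the spectral decomposition
w, Q = np.linalg.eigh(A @ A + B @ B)
root = Q @ np.diag(np.sqrt(w)) @ Q.T

# (A + B) - sqrt(A^2 + B^2) is NOT PSD: it has a negative eigenvalue.
print(np.linalg.eigvalsh(A + B - root))
```

Here $A^2+B^2=A+B$ , so the claimed inequality reduces to $\sqrt{A+B}\leq A+B$ , which fails exactly as argued above.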
|
|matrices|inequality|convex-analysis|
| 1
|
Showing that for an $\mathcal{O}_X$-module $\mathcal{F}$ we have $\operatorname{Hom}(\mathcal{O}_X, \mathcal{F}) \cong \mathcal{F}(X)$.
|
Let $I$ be an injective $\mathcal{O}_X$ -module. For every open set $U \subset X$ and $\mathcal{O}_U$ -module $\mathcal{F}$ , define an $\mathcal{O}_X$ -module $j_!(\mathcal{F})$ by $$ j_!(\mathcal{F})(V) = \begin{cases} \mathcal{F}(V), & \text{if } V \subset U \\ 0, & \text{if } V \not\subset U. \end{cases} $$ I'm trying to show that $$\operatorname{Hom}_{\mathcal{O}_X}(j_!(\mathcal{O}_U), I) \cong I(U)$$ but I don't quite see how the map should be constructed. For a ring $R$ and an $R$ -module $M$ the isomorphism $\operatorname{Hom}(R,M) \cong M$ is given by $\varphi \mapsto \varphi(1)$ , but here I have a morphism of sheaves $\varphi: j_!(\mathcal{O}_U) \to I$ i.e. a family $\varphi_V:j_!(\mathcal{O}_U)(V) \to I(V)$ so the same map doesn't really work. For each morphism I should get back a section of $I(U)$ . Will $$ \varphi\mapsto \varphi(U)(1) $$ work here? The $1$ is a bit ambiguous. If we have a sheaf of rings with unity then there exist such an element, but what if we consider
|
In general, if $f : (X, \mathcal O_X) \to (Y, \mathcal O_Y)$ is an open immersion of ringed spaces with image $U$ and $\mathcal F$ is a sheaf of $\mathcal O_X$ -modules, you can define the presheaf of $\mathcal O_Y$ -modules by $$ f_{p!}\mathcal{F}(V) = \begin{cases} \mathcal{F}(f^{-1}V), & \text{if } V \subset U \\ 0, & \text{if } V \not\subset U. \end{cases} $$ The sheaf of $\mathcal O_Y$ -modules associated is denoted by $f_!\mathcal F$ . You can show that $f_!$ is a functor between the categories of sheaves of $\mathcal O_X$ -modules and sheaves of $\mathcal O_Y$ -modules (it is even an exact functor). Exercise 1: The functor $f_!$ is a left-adjoint of the functor $f^*$ , in other words for every sheaf of $\mathcal O_X$ -modules $\mathcal F$ and for every sheaf of $\mathcal O_Y$ -modules $\mathcal G$ , we have a natural bijection : $$\operatorname{Hom}_{\mathcal O_X} (\mathcal F, f^*\mathcal G) \simeq \operatorname{Hom}_{\mathcal O_Y} (f_! \mathcal F, \mathcal G)$$ Exercise 2: Ther
|
|algebraic-geometry|sheaf-theory|
| 1
|
Showing that for an $\mathcal{O}_X$-module $\mathcal{F}$ we have $\operatorname{Hom}(\mathcal{O}_X, \mathcal{F}) \cong \mathcal{F}(X)$.
|
Let $I$ be an injective $\mathcal{O}_X$ -module. For every open set $U \subset X$ and $\mathcal{O}_U$ -module $\mathcal{F}$ , define an $\mathcal{O}_X$ -module $j_!(\mathcal{F})$ by $$ j_!(\mathcal{F})(V) = \begin{cases} \mathcal{F}(V), & \text{if } V \subset U \\ 0, & \text{if } V \not\subset U. \end{cases} $$ I'm trying to show that $$\operatorname{Hom}_{\mathcal{O}_X}(j_!(\mathcal{O}_U), I) \cong I(U)$$ but I don't quite see how the map should be constructed. For a ring $R$ and an $R$ -module $M$ the isomorphism $\operatorname{Hom}(R,M) \cong M$ is given by $\varphi \mapsto \varphi(1)$ , but here I have a morphism of sheaves $\varphi: j_!(\mathcal{O}_U) \to I$ i.e. a family $\varphi_V:j_!(\mathcal{O}_U)(V) \to I(V)$ so the same map doesn't really work. For each morphism I should get back a section of $I(U)$ . Will $$ \varphi\mapsto \varphi(U)(1) $$ work here? The $1$ is a bit ambiguous. If we have a sheaf of rings with unity then there exist such an element, but what if we consider
|
Note that $I(U)\cong \operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_U, I)$ (this holds for any $\mathcal{O}_X$ -module, not just injective ones) with the isomorphism given by the map $\varphi\mapsto \varphi(U)(1)$ . Thus it suffices to show that $\operatorname{Hom}_{\mathcal{O}_X}(j_!(\mathcal{O}_U), I) \cong \operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_U, I)$ but this is precisely an expression of the fact that $I$ is an injective object (you have an obvious monomorphism $j_!(\mathcal{O}_U)\to \mathcal{O}_U$ so any morphism $j_!(\mathcal{O}_U)\to I$ can be extended to a unique morphism $\mathcal{O}_U \to I$ ).
|
|algebraic-geometry|sheaf-theory|
| 0
|
If $x+y=2$ is a tangent to the ellipse with foci $(2, 3)$ and $(3, 5)$, what is the square of the reciprocal of its eccentricity?
|
If $x+y=2$ is a tangent to the ellipse with foci $(2, 3)$ and $(3, 5)$ , what is the square of the reciprocal of its eccentricity? This could be done using the property that the product of the perpendiculars drawn from the foci to a tangent is equal to $b^2$ . But I couldn't figure out where the error in my approach is.
|
The reciprocal of the square of the eccentricity is $\frac{41}5,$ as you can read off the equations in the two focus/directrix forms below. One way to get the third listed equation is to start with $$\sqrt{(x-2)^2+(y-3)^2}+\sqrt{(x-3)^2+(y-5)^2}=2a$$ only to square a bit, then to dualize and substitute $(-\frac12,-\frac12)$ (the point in the dual plane corresponding to the line $-\frac{x}2-\frac{y}2+1=0$ ), giving the equation $128a^2(4a^2-41)(4a^2-5)=0,$ where the other positive solution gives $-4(y-2x+1)^2=0,$ which technically also is a conic section tangent to $x+y-2=0$ going through the foci, but degenerate, so not having the two points as foci. Interestingly this is for $a^2=c^2$ and $b^2=0.$ The fourth listed equation is a version of the standard form, where the semi-axes lengths can be read off. They come out at $\approx 3.2$ and $3.$ $$41((x-2)^2+(y-3)^2-\frac1{\frac{41}5}\frac{(x+2y+10)^2}5)=\\41((x-3)^2+(y-5)^2-\frac1{\frac{41}5}\frac{(x+2y-31)^2}5)=\\ 40x^2 - 4xy + 37y^2 -
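As a numeric confirmation, the focal-distance-product property quoted in the question gives the same value $\frac{41}5$ (plain Python; `dist` is a hypothetical helper for the distance to the line $x+y-2=0$ ):

```python
from math import hypot, isclose

F1, F2 = (2.0, 3.0), (3.0, 5.0)

def dist(p):
    # perpendicular distance from point p to the tangent line x + y - 2 = 0
    return abs(p[0] + p[1] - 2) / hypot(1, 1)

b2 = dist(F1) * dist(F2)                         # product of focal distances = b^2
c2 = ((F2[0]-F1[0])**2 + (F2[1]-F1[1])**2) / 4   # c = half the focal distance
a2 = b2 + c2

assert isclose(b2, 9.0)
assert isclose(a2 / c2, 41 / 5)   # 1/e^2 = a^2/c^2
print(a2 / c2)                    # 8.2
```

Here $b^2=\frac{3}{\sqrt2}\cdot\frac{6}{\sqrt2}=9$ , $c^2=\frac54$ , so $a^2=\frac{41}4$ and $1/e^2=a^2/c^2=\frac{41}5$ .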
|
|conic-sections|
| 0
|
Texas Hold'em Poker odds - calculating opponent's odds
|
Scenario: we have reached the river, i.e. there are 5 cards on the table, and three of them are hearts. I have no heart in my hand, and there are 6 remaining players other than me. What is the probability that (at least) one of them has (any) two hearts in hand? Now if there was just one player other than me, the computation for him I believe would be this: $$\frac{\binom{10}2}{\binom{45}2}=\frac{1}{22} \approx 4.55 \%$$ because there are $10$ remaining hearts among the unseen cards and there are $45$ unseen cards in total. Is this correct so far? Now, is the probability for $6$ remaining players as simple as multiplying that previous number by $6$ ? One could argue it is, because from the combinatorial point of view it doesn't matter whether an unknown card is in the deck or in a hand. But then for $22$ players I get this: $$22\frac{\binom{10}2}{\binom{45}2} = 100 \%$$ , exactly. But $45$ unseen cards is $22$ and a half players. So it seems the simplified calculation is not entirely correct, though close. What matters to
|
One approach is to apply inclusion/exclusion. Number the remaining players from $1$ to $6$ , and let's say a deal has "property $i$ " if player $i$ holds two hearts. Define $S_j$ to be the total probability of the deals with $j$ of the properties, for $1 \le j \le 5$ . (It's not possible for a deal to have $6$ of the properties.) Then $$S_j = \binom{6}{j} \prod_{i=0}^{j-1} \binom{10-2i}{2} \bigg{/} \binom{45-2i}{2}$$ The probability that a deal has at least one of the properties, i.e. at least one player holds two hearts, is $$S_1-S_2+S_3-S_4+S_5 = 0.252098$$
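The inclusion/exclusion sum is easy to evaluate directly (plain Python, using `math.comb`):

```python
from math import comb

def S(j):
    """Total probability of deals with j of the properties."""
    prod = 1.0
    for i in range(j):
        prod *= comb(10 - 2 * i, 2) / comb(45 - 2 * i, 2)
    return comb(6, j) * prod

# inclusion/exclusion: S1 - S2 + S3 - S4 + S5
p = sum((-1) ** (j + 1) * S(j) for j in range(1, 6))
print(round(p, 6))  # 0.252098
```

Note that the naive answer $6\cdot\frac1{22}\approx 27.3\%$ is only the first term $S_1$ ; the correction terms bring it down to about $25.2\%$ .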
|
|probability|combinatorics|poker|
| 1
|
Strange behaviour of $x^2+5x+7$ under iteration
|
If any of the following exposition is unclear, please write a comment. In essence, I am looking at the graph $G$ that is generated by the polynomial $q(x) = x^2+ax+b$ ( $a,b \in \mathbb{Z}$ ) via the edge set $$\{(n, q(n) \text{ mod } p) : n =0,\ldots,p-1\}$$ for some prime $p \in \mathbb{P}$ . After some thought, it should be clear that the statements $$``\text{Iterating } q \text{ for any input will always eventually lead to divisibility by some prime at least once.} ``$$ and $$``G \text{ has a path from any node to 0.}``$$ are equivalent. Furthermore, "at least once" can be replaced by "periodically infinitely many times" and "every node has a path to $0$ " can be characterized by " $G$ is weakly connected and has exactly one loop containing $0$ ". With that out of the way, let's get to the question: The natural question now is to know for which polynomials $q$ we have this nice property of "eventual divisibility" by some prime no matter what number we input, i.e. finding connected
|
We will never achieve eventual divisibility with $x^2+5x+7$ . Cases $p=2$ and $p=3$ fail easily, so we must accept $p>3$ and then reckon with the cases below. Suppose $p=3k+1$ . Then we will have an "isolated" nonzero residue if any such residue solves $x^2+5x+7\equiv x$ , thus $x^2+4x+7\equiv 0$ . But $x^2+4x+7$ has discriminant $-12$ which is a quadratic residue $\bmod p=3k+1$ . So isolated nonzero residues always exist. (The case $p=7$ does give a zero root, but the other root is nonzero.) Suppose $3k-1$ instead. This time we cannot identify isolated residues, but then $x^2+5x+7$ has discriminant $-3$ which is not a quadratic residue $\bmod p=3k-1$ . We can of course try zero residue as an initial condition but would never be able to regenerate it. The case $x^2+5x+7$ is not unique; a similar result is obtainable whenever we specify $x^2+ax+b$ with $(a-1)^2-4b$ a rational square times $a^2-4b$ and $p=2$ fails. The reader is invited to test this using $x^2+17x+75$ . A second approach
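The two discriminant claims can be spot-checked with Euler's criterion; this sketch (my own, not from the answer) tests the primes below 100:

```python
# Euler's criterion: a is a QR mod an odd prime p iff a^((p-1)/2) ≡ 1 (mod p).
def is_qr(a, p):
    return pow(a % p, (p - 1) // 2, p) == 1

primes = [p for p in range(5, 100)
          if all(p % q for q in range(2, int(p ** 0.5) + 1))]
# -12 should be a residue for p ≡ 1 (mod 3); -3 a non-residue for p ≡ 2 (mod 3)
ok_1mod3 = all(is_qr(-12, p) for p in primes if p % 3 == 1)
ok_2mod3 = all(not is_qr(-3, p) for p in primes if p % 3 == 2)
print(ok_1mod3, ok_2mod3)  # True True
```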
|
|number-theory|polynomials|graph-theory|prime-numbers|recreational-mathematics|
| 0
|
Show by hand : $e^{e^2}>1000\phi$
|
Problem: Show by hand without any computer assistance: $$e^{e^2}>1000\phi,$$ where $\phi$ denotes the golden ratio $\frac{1+\sqrt{5}}{2} \approx 1.618034$ . I come across this limit showing: $$\lim_{x\to 0}x!^{\frac{x!!^{\frac{2}{x!!-1}}}{x!-1}}=e^{e^2}.$$ I cannot show it without knowing some decimals and if so using power series or continued fractions. It seems challenging and perhaps a tricky calculation is needed. If you have no restriction on the method, how to show it with pencil and paper ? Some approach: We have, using the incomplete gamma function and continued fractions: $$\int_{e}^{\infty}e^{-e^{-193/139}x^{193/139+2}}dx=\frac{139}{471}\cdot e\cdot\operatorname{Ei}_{332/471}(e^2)>e^{-e^2},$$ where $\operatorname{Ei}$ denotes the exponential integral. Finding an integral for the golden ratio $\phi$ is needed now. Following my comment we have : $$e^{-e^2} Where the function in the integral follow the current order for $x\ge e$ .As said before use continued fraction of incomple
|
Ok, so my answer works but requires you to: Be insanely good at arithmetic (so maybe does not meet your criteria of doable by hand ) Know that $\ln 10 (this might be known to you if you frequently change bases between logarithms, or may be calculated by series expansion) Edit (D S) : I noticed that getting a better approximation for $\ln 10$ significantly reduces the work required for $\ln(\phi)$ . So here it goes: $$\ln(10) = \ln(2)+\ln(5) = 3\ln(2) + \ln(5/4)$$ $$\begin{align}\ln(2) &= \ln((1+1/3)/(1-1/3)) \\ &= 2\sum_{k=0}^\infty \frac{(1/3)^{2k+1}}{2k+1}\\ & and $$\begin{align}\ln(5/4) &= \ln((1+1/9)/(1-1/9)) \\ &= 2\sum_{k=0}^\infty \frac{(1/9)^{2k+1}}{2k+1}\\ & Hence $\ln10 . Let $x$ be such that $$x^{x^2} = 1000\phi$$ or $$x^2\ln(x) = 3\ln(10)+\ln(\phi)$$ We are interested in showing $$3\ln(10)+\ln(\phi) since $x^2\ln(x)$ is increasing in $x$ . First, we approximate $\ln(\phi)$ . Note that the series expansion of $\ln(1+x)$ for $|x|>1$ is given by $$\ln(1+x) = \ln(x) - \sum_{k=1
|
|inequality|constants|golden-ratio|number-comparison|
| 1
|
Does the inner semidirect product of Lie groups need these two subgroups both be closed?
|
I am studying GTM218 and found an unproven theorem that the author left for exercise. Here is the theorem: Theorem 7.35 (Characterization of Semidirect Products). Suppose $G$ is a Lie group, and $N;H \subseteq G$ are closed Lie subgroups such that $N$ is normal, $N\cap H = \{e\}$ , and $N+H =G$ . Then the map $N\rtimes H \rightarrow G$ is a Lie group isomorphism between $N\rtimes H$ and G, where $\theta : H\rtimes N \rightarrow N$ is the action by conjugation: $\theta_{h}(n) = hnh^{-1}$ I think the condition ' $H$ is closed' is not needed. Here is my proof of this theorem without that condition: Consider the immersion $H\rightarrow G$ and $N\rightarrow G$ , we combine them to get the immersion $H\times N \rightarrow G\times G$ . By calculating its derivative we know that the combined map is an immersion too. Then we compose this function with the map $H\times N \rightarrow G: (h,n) \rightarrow hnh^{-1}$ . Hence we construct a smooth map from $H \times N$ to $G$ . Because $N$ is a close
|
You're right. See the note about page 169 on my list of corrections .
|
|lie-groups|smooth-manifolds|
| 1
|
Does $\sin (nx)$ converge in $L^2$?
|
I was just introduced to the concept that if $(f_n)$ converges in the $L^2$ topology to $g(x)\in L^2([0,2\pi])$ , then $\lim_{n\to\infty}\int^{2\pi}_0|f_n(x)-g(x)|^2dx=0$ . I would appreciate any hint on how to start proving, for example, that $\sin(nx)$ diverges in $L^2$ . I know that in the usual sense it doesn't even have a pointwise limit on the reals. Also given was a definition of weak convergence: a function $g(x)\in L^2([0,2\pi])$ such that for any continuous $h$ on $[0,2\pi]$ we have $\lim_{n\to\infty}\int^{2\pi}_0(f_n(x)-g(x))h(x)dx=0$ . Is $\sin (nx)$ convergent in this sense?
|
In fact, $\sin(nx) \to 0$ weakly. Consider $X= L^2([0,2 \pi])$ ; then the dual space of $X$ is $X^* = (L^2([0,2 \pi]))^*=L^2([0,2 \pi])$ , which is the space of all continuous linear functionals $F$ from $X$ to $\mathbb{F}$ (say $\mathbb F =\mathbb R$ here). That is, for each $g \in X$ we define (via the inner product) $ X^* \ni F_g: X \to \mathbb R$ $$\langle f, g \rangle=: F_g(f):=\int_0^{2 \pi} f(x) g(x)dx, \,\,\, \forall f \in X.$$ Let $f_n(x)= \sin(nx)$ . Fix $\epsilon >0$ ; by the density of simple functions in $L^p$ spaces ( $L^2$ here) we have a step function $s$ such that $$\|g-s\|_{L^2[0,2\pi]} < \epsilon.$$ Since a simple function is a finite linear combination of characteristic functions, it suffices to show the weak convergence against one characteristic function, say $c_i \chi_{[a_i,a_{i+1}]}(x)$ . Then, $$\begin{eqnarray} \lim\limits_{n\to\infty}\int_{a_i}^{a_{i+1}} s(x)\sin(nx)dx &=& c_i\lim\limits_{n\to\infty}\int_{a_i}^{a_{i+1}} \sin(nx)dx\\ &=&c_i \lim\limits_{n\to\infty}
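The heart of the argument is that $\int_a^b \sin(nx)\,dx \to 0$ on any interval, which forces $\int_0^{2\pi}\sin(nx)h(x)\,dx \to 0$ . A quick numerical illustration with an arbitrary test function of my choosing:

```python
import math

# Midpoint-rule estimate of the integral of sin(nx) h(x) over [0, 2*pi]
# for a sample h; the values should shrink toward 0 as n grows (here ~1/n).
def osc_integral(n, h, steps=50000):
    dx = 2 * math.pi / steps
    return sum(math.sin(n * ((i + 0.5) * dx)) * h((i + 0.5) * dx)
               for i in range(steps)) * dx

h = lambda x: x * x + 1.0          # arbitrary continuous test function
vals = [abs(osc_integral(n, h)) for n in (1, 4, 16, 64)]
print(vals)
```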
|
|real-analysis|integration|limits|convergence-divergence|lp-spaces|
| 0
|
Geometry problem: finding coordinates on outer circle using extrusion from centre through point of inner circle
|
My problem involves an inner circle (centre $I$ ) with a displaced placement within an outer circle (centre $O$ ). I have a point placed somewhere on the circumference of the inner circle ( $P$ ). I wish to extrude from point $P$ orthogonally until it reaches and intersects the outer circle at position $Q$ , so I need to calculate the position of $Q$ . The radius of the outer circle ( $r_1$ ), inner circle ( $r_2$ ) and displacement of inner circle ( $r_3$ ) are known. Any help would be greatly appreciated. Some background: I actually need to do this for multiple points on the inner circle ( $P_1$ , $P_2$ etc, but I have all these coordinates). What I have so far: $x_Q = x_I + \frac{(x_P - x_I) \cdot r_1}{\sqrt{(x_P - x_I)^2 + (y_P - y_I)^2}}$ $y_Q = x_I + \frac{(y_P - y_I) \cdot r_1}{\sqrt{(x_P - x_I)^2 + (y_P - y_I)^2}}$ which gives me something like this, but the magnitude is wrong: I then tried adding the original displacement of the inner circle, $x_Q = x_I + \frac{(x_P - x_I) \cd
|
In order to do this, you presumably know the coordinates of $I$ and $O$ in your figure. Then, knowing the coordinates of $P$ , you can compute the angle $\angle OIP$ and its sine, $\sin(\angle OIP)$ . Specifically, let $u$ be the vector from $I$ to $O$ and let $v$ be the vector from $I$ to $P$ . Then \begin{align} \cos(\angle OIP) &= \frac{u \cdot v}{\lVert u\rVert\lVert v\rVert}, \\ \sin(\angle OIP) &= \sqrt{1 - \cos^2(\angle OIP)}, \\ \angle OIP &= \arccos(\cos(\angle OIP)). \end{align} Note that this always gives an angle between zero and $\pi$ radians, so you'll also need to keep track of which "side" of the line $IO$ the point $P$ is on so you can project it to the correct point on the outer circle and not the mirror-image point across line $IO.$ Now apply the law of sines and the fact that $\angle OIP = \angle OIQ$ in the triangle $\triangle OIQ$ : $$ \frac{\sin(\angle OIQ)}{\lVert OQ\rVert} = \frac{\sin(\angle IQO)}{\lVert OI\rVert}. $$ Therefore $$ \sin(\angle IQO) = \frac{\lVert OI\rVert}{\lVert OQ\rVert}\sin(\angle OIP) = \frac{r_3}{r_1}\sin(\angle OIP), $$ so $\angle IQO = \arcsin\bigl(\frac{r_3}{r_1}\sin(\angle OIP)\bigr)$ .
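If trigonometry feels heavy, the same point $Q$ can also be found by intersecting the ray from $I$ through $P$ with the outer circle directly; the helper below is my own sketch, with $O$ , $I$ , $P$ , $r_1$ as in the question:

```python
import math

# Intersect the ray from I through P with the circle of radius r1
# centred at O, returning the intersection point Q on the forward ray.
def extrude_to_outer_circle(I, P, O, r1):
    dx, dy = P[0] - I[0], P[1] - I[1]
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm          # unit direction of the ray
    fx, fy = I[0] - O[0], I[1] - O[1]      # ray origin relative to O
    # Solve |I + t d - O|^2 = r1^2 for t (quadratic in t with leading 1).
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r1 * r1
    t = (-b + math.sqrt(b * b - 4 * c)) / 2  # larger root lies forward
    return (I[0] + t * dx, I[1] + t * dy)

Q = extrude_to_outer_circle(I=(1.0, 0.0), P=(1.5, 0.0), O=(0.0, 0.0), r1=3.0)
print(Q)  # (3.0, 0.0)
```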
|
|geometry|vectors|analytic-geometry|
| 0
|
If $\sum \|f_i\|$ converges and $\sum_{i=1}^\infty f_i$ exists in a normed function space, do we get for free that $\|\sum_{i=1}^n f_i - f\| \to 0$?
|
Let $(V,\|-\|)$ be a normed function space, and suppose that $(f_i)_i$ is a sequence of elements of $V$ so that $\sum \|f_i\| < \infty$ . Further suppose that this data implies that $f = \sum f_i$ , where $\sum_{i=1}^n f_i \to f$ pointwise , is an element of $V$ . Then I would think that we can show immediately that $\sum_{i=1}^n f_i \to f$ in norm, as: $$\left\|\sum_{i=1}^n f_i - f \right\| = \left\|\sum_{i=1}^n f_i- \sum_{i=1}^\infty f_i\right\| = \left\|\sum_{i=n+1}^\infty f_i\right\| \leq \sum_{i=n+1}^\infty \|f_i\|$$ which is sensible, as we can show that any tail of $\sum f_i$ is in $V$ as well. However, as $\sum \|f_i\| < \infty$ , it follows that the right-hand-side goes to zero as $n \to \infty$ , thus proving that $\sum_{i=1}^n f_i \to f$ in norm. Here is my question: Have I made a mistake in the above reasoning? The reason I am suspicious is because whenever I have seen an example of this situation in books (e.g., showing $L^p$ is a Banach space), the author shows convergence in norm using some
|
Unfortunately there is a gap. The inequality $$ \|\sum_{i=n+1}^\infty f_i\|\leq \sum_{i=n+1}^\infty \|f_i\| $$ is not justified in your argument, because the series on the left hand side converges just pointwise. The inequality itself is trivial if the partial sums converge in $V$ , but it is not trivial at all assuming only pointwise convergence. If your function space satisfies the condition that any pointwise converging series satisfies the above inequality, then your argument works. Otherwise, you have no way of proving it in general, and in fact the inequality could be false for some function spaces. Edit. An example where such a step is justified would be any Banach space $X$ that satisfies the following: if $g_n\to g$ in $X$ and $g_n\to h$ pointwise, then $g=h$ . In fact, in that case the series $$ \sum_{i=1}^\infty f_i $$ converges in $X$ to some $g$ since absolutely convergent series converge in Banach spaces, and $f=g$ if the above property holds. In particular, the series co
|
|functional-analysis|convergence-divergence|normed-spaces|
| 1
|
Does path-connected imply simple path-connected?
|
Let $X$ be a path-connected topological space, i.e., for any two points $a,b\in X$ there is a continuous map $\gamma\colon[0,1]\to X$ such that $\gamma(0)=a$ and $\gamma(1)=b$. Note that beyond continuity little is required about $\gamma$. Is it always possible to make $\gamma$ a simple (=injective) curve? (Of course we consider only the case $a\ne b$). If the answer depends on $X$, what are some mild sufficient conditions on $X$? If $X$ is sufficiently nice, e.g., an open subset of $\Bbb R^n$, the answer is "yes"; more generally, what one would call "locally simple-path-connected" suffices (the set of points reachable from $a\in X$ by a simple curve is both open and closed). Also, for arbitrary $X$ it is clear that a single instance of self-crossing can be short-cut. But for infinitely many possible short-cuts the situation may become somewhat hairy, I'm afraid. EDIT: To avoid Mike Miller 's counterexample (indiscrete topology on $X$ with $|X| necessary condition - I am still looking f
|
Since we are looking for conditions as mild as possible, it's worth noting that as long as one is only asking for an injective path joining two points (rather than a homeomorphic image of an interval), one can relax the Hausdorff condition to $US$ , or unique sequential limits. While it is likely possible to wade through various proofs and show that Hausdorff can be replaced by $US$ in each individual step (with or without modifications), we alternatively can make a quick argument to extend the result from the Hausdorff case to the $US$ case as follows: If $(X,\tau)$ is $US$ , then $(X, \tau_s)$ is $US$ and sequential (sequentially closed subsets are closed), where $\tau_s$ is the family of sequentially open sets in $\tau$ . Note that since $[0,1]$ is sequential, $f\colon [0,1]\to X$ is continuous with respect to $\tau$ if and only if it is continuous with respect to $\tau_s$ (in both cases, the continuous maps are precisely the sequentially continuous maps). Hence $(X,\tau)$ is inject
|
|general-topology|connectedness|path-connected|
| 0
|
$\lim_{x \rightarrow 0} \frac{x -\sin x}{x (1 - \cos x)}$
|
I'm trying to find the following limit (without L'Hopital, not there yet): $$\lim_{x \rightarrow 0} \frac{x -\sin x}{x (1 - \cos x)}$$ Taking into account that: $$\lim_{x \rightarrow 0} \frac{\sin x}{x} = 1$$ and $$\lim_{x \rightarrow 0} \frac{1 - \cos x}{\frac{x^2}{2}} = 1$$ If I divide by $x$ I get: $$\lim_{x \rightarrow 0} \frac{x -\sin x}{x (1 - \cos x)} = \lim_{x \rightarrow 0} \frac{1 - \frac{\sin x}{x}}{(1 - \cos x)}$$ And I still get a $\frac{0}{0}$ indeterminate limit. I have also tried dividing the numerator by $\frac{x^2}{2}$ . $$\lim_{x \rightarrow 0} \frac{x -\sin x}{x (1 - \cos x)} = \lim_{x \rightarrow 0} \frac{1 - \frac{\sin x}{x}}{ \frac{x^2}{2} \frac{1 - \cos x}{\frac{x^2}{2}}}$$ But I still get a $\frac{0}{0}$ indeterminate limit. How can I solve this limit problem without L'Hopital?
|
Consider the leading behavior of $\sin(x)$ and $\cos(x)$ : $$\sin(x)=x-\frac{x^3}{3!}+O(x^5)$$ $$\cos(x)=1-\frac{x^2}{2!}+O(x^4)$$ So we have; $$\lim_{x \rightarrow 0} \frac{x-x+\frac{x^3}{3!}-O(x^5)}{x(1-1+\frac{x^2}{2!}-O(x^4))}$$ $$=\lim_{x \rightarrow 0} \frac{\frac{x^3}{3!}-O(x^5)}{\frac{x^3}{2!}-O(x^5)}$$ $$=\lim_{x \rightarrow 0} \frac{\frac{1}{3!}-O(x^2)}{\frac{1}{2!}-O(x^2)}$$ $$= \frac{\frac{1}{3!}}{\frac{1}{2!}}$$ $$=\frac{1}{3}$$
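A numerical sanity check of the value $1/3$ (my own, independent of the series argument):

```python
import math

# Evaluate (x - sin x) / (x (1 - cos x)) for shrinking x; it should
# approach 1/3, matching the leading-order computation above.
vals = [(x - math.sin(x)) / (x * (1 - math.cos(x))) for x in (0.5, 0.1, 0.01)]
print(vals)
```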
|
|real-analysis|
| 1
|
Regarding the specifity of the A.M.-G.M. inequality in finding maximum and minimum in Number Theory
|
Today, my math teacher solved a problem which asked to find the maximum value of the expression $x^2y^3$ when $x$ and $y$ are related as $3x+4y=5$ . It was solved using the classic A.M.-G.M. inequality by splitting $3x$ as twice of $3x/2$ and $4y$ as thrice of $4y/3$ . He finally got an inequality as $x^2y^3\le3/16$ and he directly mapped the maximum value of the expression to 3/16 by solely depending only on this inequality. I don't get this point of assigning its maximum value. What if some other inequality says that it is even lesser than 3/16 or restricts its range even further to numbers which are less than 3/16. Can't we give any counterexample inequalities ? Or won't any such inequality ever exist? If yes, then why? I'm a pre-calculus student and I'm still a beginner and am sorry if my question is too dishonorable.
|
AM-GM tells us that $x^2y^3 \le \frac{3}{16}$ . It also has an equality case, when all of the terms in the AM-GM are equal—in your case, that would be when $\frac{3x}{2} = \frac{4y}{3}$ . Combining this with the assumption $3x + 4y = 5$ , this gives us only one pair where equality holds: $(x, y) = (\tfrac{2}{3},\tfrac{3}{4})$ . So, if I understand your question correctly, you’re asking why we can outright say that $\frac{3}{16}$ is the maximum. After all, some stronger inequality might be able to prove that $x^2y^3$ is always less than a value less than $\frac{3}{16}$ . The word ‘maximum’ essentially consists of two parts. So, when your teacher is saying that $x^2y^3$ has a maximum value of $\frac{3}{16}$ , he/she is saying two things: $x^2y^3 \le \frac{3}{16}$ for all positive reals $x$ and $y$ such that $3x + 4y = 5$ . There exists a pair $(x, y)$ that satisfies $3x + 4y = 5$ such that $x^2y^3 = \frac{3}{16}$ . The second condition is equivalent to saying that the bound of $\frac{3}{
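Both parts of the claim (the bound and its attainment) can be seen numerically with a brute-force scan over the constraint; this check is my own, not part of the argument:

```python
# Grid search over the constraint 3x + 4y = 5 (x, y > 0): the maximum of
# x^2 y^3 should be 3/16, attained near (x, y) = (2/3, 3/4).
best_val, best_x = 0.0, None
n = 200000
for i in range(1, n):
    x = (5.0 / 3.0) * i / n          # x ranges over (0, 5/3)
    y = (5.0 - 3.0 * x) / 4.0
    v = x * x * y ** 3
    if v > best_val:
        best_val, best_x = v, x
print(best_val, best_x)  # ≈ 0.1875, ≈ 0.6667
```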
|
|algebra-precalculus|inequality|contest-math|maxima-minima|a.m.-g.m.-inequality|
| 0
|
Determine the real numbers $a, b, c, d \in [1,3]$, knowing that the relation $(a + b + c + d)^2 = 3(a^2 + b^2 +c^2 + d^2)$.
|
the question Determine the real numbers $a, b, c, d \in [1,3]$ , knowing that the relation $(a + b + c + d)^2 = 3(a^2 + b^2 +c^2 + d^2)$ . my idea $(a+b+c+d)^2=a^2+b^2+c^2+d^2+2(ab+ac+ad+bc+bd+cd)$ $=> 2(ab+ac+ad+bc+bd+cd)=2(a^2 + b^2 +c^2 + d^2)=> ab+ac+ad+bc+bd+cd=a^2 + b^2 +c^2 + d^2$ From here I've been trying to get it to a form where 0 will equal the product of some numbers but I didn't get to anything helpful. I hope one of you can help me! Thank you!
|
COMMENT. Since $(3r+x)^2=3(3r^2+x^2)\iff r=\dfrac{x}{3}$ , the only solution with three equal variables is $(a,b,c,d)=(1,1,1,3)$ and its permutations, since $\dfrac13$ and $\dfrac23$ are not allowed. Astonishingly, it seems to be the only solution; I neither affirm nor deny this statement in this simple comment (I suspect it is so). Take $a,b,c$ very close to the smallest allowed value, for example $(a,b,c,d)=(1,1.01,1.01,x)$ , which gives the equation $(3.02+x)^2=3(3.0402+x^2)$ whose solutions are $x\approx0$ and $x\gt3$ ; for greater values the solutions will not be allowed either. I think that what I suspect is true.
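On an integer grid the candidate above is easy to confirm as the only solution (real non-integer solutions are not ruled out by this check, which is mine):

```python
from itertools import product

# Check which quadruples with entries in {1, 2, 3} satisfy
# (a+b+c+d)^2 = 3(a^2+b^2+c^2+d^2); collect them up to permutation.
sols = sorted({tuple(sorted(q)) for q in product([1, 2, 3], repeat=4)
               if sum(q) ** 2 == 3 * sum(t * t for t in q)})
print(sols)  # [(1, 1, 1, 3)]
```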
|
|algebra-precalculus|square-numbers|
| 0
|
If $\sum \|f_i\|$ converges and $\sum_{i=1}^\infty f_i$ exists in a normed function space, do we get for free that $\|\sum_{i=1}^n f_i - f\| \to 0$?
|
Let $(V,\|-\|)$ be a normed function space, and suppose that $(f_i)_i$ is a sequence of elements of $V$ so that $\sum \|f_i\| . Further suppose that this data implies that $f = \sum f_i$ , where $\sum_{i=1}^n f_i \to f$ pointwise , is an element of $V$ . Then I would think that we can show immediately that $\sum_{i=1}^n f_i \to f$ in norm, as: $$\left\|\sum_{i=1}^n f_i - f \right\| = \left\|\sum_{i=1}^n f_i- \sum_{i=1}^\infty f_i\right\| = \left\|\sum_{i=n+1}^\infty f_i\right\| \leq \sum_{i=n+1}^\infty \|f_i\|$$ which is sensible, as we can show that any tail of $\sum f_i$ is in $V$ as well. However, as $\sum \|f_i\| , it follows that the right-hand-side goes to zero as $n \to \infty$ , thus proving that $\sum_{i=1}^n f_i \to f$ in norm. Here is my question: Have I made a mistake in the above reasoning? The reason I am suspicious is because whenever I have seen an example of this situation in books (e.g., showing $L^p$ is a Banach space), the author shows convergence in norm using some
|
This is too long for a comment, but is only a partial answer. The point Ben is making, is that your mistake lies in the step $$\left\|\sum_{i=1}^n f_i - f \right\| = \left\|\sum_{i=1}^n f_i- \sum_{i=1}^\infty f_i\right\|.$$ The point here is that we do not yet know that $f= \sum_{i=1}^\infty f_i$ in $V$ . Sure, we know that for every $x$ in our domain, $f(x)=\lim_{n\to\infty}\sum_{i=1}^nf_i(x)$ , but that doesn't mean we can replace $f$ by $\sum_{i=1}^\infty f_i$ within the norm, because by doing that you are implicitly assuming that $\left\|\sum_{i=1}^n f_i-f\right\|\to 0$ . $\sum_{i=1}^\infty f_i$ only makes sense as an entity in $V$ by defining it as $\lim_{n\to\infty}\sum_{i=1}^nf_i$ , with the limit taken in $V$ .
|
|functional-analysis|convergence-divergence|normed-spaces|
| 0
|
Proof of Rudin's Theorem 3.10
|
Definition. Let $E$ be a nonempty subset of $X$ , and let $S$ be the set of all real numbers of the form $d(p, q)$ , with $p,q\in E$ . The sup of $S$ is called the diameter of $E$ . Theorem 3.10. If $\overline{E}$ is the closure of a set $E$ in a metric space $X$ , then $$\text{diam }\overline{E} = \text{diam }E.$$ Proof: Fix $\varepsilon>0$ , and choose $p, q \in \overline{E}$ . By the definition of $\overline{E}$ , there are points $p',q' \in E$ such that $d(p,p')<\varepsilon$ and $d(q,q')<\varepsilon$ . Hence $$d(p, q) \le d(p,p') + d(p', q') + d(q', q) < 2\varepsilon + \text{diam }E.$$ Ok until here. But then they use the inequality above to come up with $$\text{diam }\overline{E} \le 2\varepsilon + \text{diam }E$$ where I can only see the strict inequality, because of the strict inequality relation made in the inequalities above.
|
The logical flow in a more generic setting makes it clearer. Given $\epsilon >0$ , and a non-empty $S\subseteq \mathbb{R}$ . If for all $s \in S$ we have $s < 2\epsilon + b$ , then $2\epsilon + b$ is an upper bound on $S$ . (Defn of upper bound). Then $\sup S \leq 2 \epsilon + b$ , as the supremum is always less than or equal to all other upper bounds (Defn of supremum). Then $\sup S \leq b$ . Proof: Suppose not; then $\sup S > b$ . Then choosing $\epsilon \in (0, \frac{\sup S - b}{2} ) $ implies $2 \epsilon + b < \sup S$ , a contradiction to 1. Identifying $S = \{ d(p, q) \mid p, q \in \bar{E} \}$ , and $ b =$ diam $ E $ proves the result.
|
|real-analysis|inequality|proof-explanation|
| 0
|
Exercise 3.2.2.d in Grimmett & Stirzaker's 'Probability and Random Processes'
|
The question asks: Let $X$ and $Y$ be independent random variables taking values in the positive integers and having the same mass function $f(x)=2^{-x}$ for $x=1,2,...$ find: $P(X\geq kY)$ for a given positive integer $k$ $$\begin{align*} P(X\geq kY) &= \sum_{y=1}^\infty P(X\geq kY, Y=y) \\\\ &= \sum_{y=1}^\infty P(X\geq ky)P(Y=y) \;\;\;\; \text{due to independence} \\\\ &= \sum_{y=1}^\infty \sum_{x=0}^\infty \frac{1}{2^{x+ky}} \frac{1}{2^y} \\\\ &= \sum_{y=1}^\infty \sum_{x=0}^\infty \frac{1}{2^{x+ky+1}} \end{align*}$$ The answer is $$\frac{2}{2^{k+1}-1}$$ generally for a single geometric series the sum is: $$\frac{1}{1-r}$$ $$\frac{1}{1-2^{-(k+1)}}$$ $$\frac{2^{k+1}}{2^{k+1}-1}$$ Which has the correct denominator; can someone please explain how to get to the answer shown?
|
Careful, you have an algebraic error in your last equality. You should get: $$\begin{align*} \sum_{y=1}^\infty \sum_{x=0}^\infty \frac{1}{2^{x+{\color{red}{(k+1)y}}}} &= \sum_{y=1}^\infty \frac{1}{2^{(k+1)y}} \underbrace{\sum_{x=0}^\infty \frac{1}{2^{x}}}_{=2} \\ &= 2\sum_{y=1}^\infty \frac{1}{(2^{k+1})^y} \\ &= 2\cdot \frac{2^{-(k+1)}}{1 - 2^{-(k+1)}} \\ &= \frac{2}{2^{k+1}-1} \end{align*}$$ where in the last step I multiplied the numerator and denominator by $2^{k+1}$ .
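The corrected double sum can be verified against the closed form by truncating the series (cutoff value is my choice):

```python
from fractions import Fraction

# Truncated double sum for P(X >= kY) with mass function f(x) = 2^{-x},
# compared against the closed form 2 / (2^{k+1} - 1).
def p_at_least(k, cutoff=40):
    total = Fraction(0)
    for y in range(1, cutoff):
        for x in range(k * y, (k + 1) * cutoff):
            total += Fraction(1, 2 ** (x + y))
    return total

vals = {k: float(p_at_least(k)) for k in (1, 2, 3)}
print(vals)
```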
|
|probability|sequences-and-series|discrete-mathematics|
| 1
|
Number of edges in a graph construction
|
Suppose we have a graph on the vertex set $[n]^m$ , i.e, each vertex is of the form $(v_1, ..., v_m)$ , where each $v_i \in \{1, ..., n\}$ . We draw an edge between two vertices $v = (v_1, ..., v_m)$ and $u = (u_1, ..., u_m)$ iff $u_i = v_i$ for some $i \in [m]$ . My question is, how many edges are in our graph? My attempt: We clearly have $n^m$ vertices. Fix some $v = (v_1, ..., v_m)$ . Then the number of vertices in which every entry is different from $v$ is $(n-1)^m$ , so $v$ has $n^m -1- (n-1)^m $ neighbours (the $-1$ is to avoid counting $v$ itself). By the handshake lemma, the number of edges in the graph is $$\frac{1}{2}n^m(n^m -1- (n-1)^m).$$ Is this correct? Does this mean the number of edges is asymptotically proportional to $n^{2m}$ ? I'm not sure why this seems wrong to me. Thank you!
|
The formula is correct; there are indeed $\frac12 n^m(n^m - (n-1)^m - 1)$ edges in this graph. The asymptotics are a bit trickier. When there are two variables involved, our asymptotic analysis has to be done with an understanding of which variable is going to infinity - or if both of them are, how quickly relative to each other they are doing so. Taking the expression $\frac12 n^m (n^m - (n-1)^m - 1)$ and dismissing $(n-1)^m$ (and $1$ ) as a lower-order term relative to $n^m$ is fair if $n$ is constant as $m \to \infty$ . More generally, it is valid if $\frac{(n-1)^m}{n^m} \to 0$ as $n$ and $m$ go to $\infty$ at their own paces, which happens when $\frac nm \to 0$ as $n$ and $m$ go to $\infty$ . If $m \sim cn$ for a constant $c$ , then $(n-1)^m \sim e^{-c} n^m$ , and so the number of edges is asymptotically more like $\frac12 e^{-c} n^{2m}$ , or $\frac12 e^{-m/n} n^{2m}$ . (It's still fair to say that the number of edges is asymptotically proportional to $n^{2m}$ here, but only if we
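For small $n$ and $m$ the closed form can be checked against a brute-force count (my own verification):

```python
from itertools import product, combinations

# Brute-force the edge count for small n, m and compare with the closed
# form n^m (n^m - (n-1)^m - 1) / 2 from the question.
def edges_brute(n, m):
    verts = list(product(range(n), repeat=m))
    return sum(1 for u, v in combinations(verts, 2)
               if any(a == b for a, b in zip(u, v)))

def edges_formula(n, m):
    return n ** m * (n ** m - (n - 1) ** m - 1) // 2

checks = [(n, m, edges_brute(n, m) == edges_formula(n, m))
          for n in (2, 3) for m in (2, 3)]
print(checks)
```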
|
|combinatorics|graph-theory|
| 1
|
Special Type of Locally Metrizable Space
|
We say a topological space is locally metrizable if for every $x\in X,$ there is an open set $U$ containing $x$ which is metrizable. That is, the subspace topology on $U$ is the topology induced by some metric $d_U.$ In my research, I've come across a space which is locally metrizable such that for some $d:X^2\rightarrow[0,\infty]$ and all such metrizable neighborhoods $U$ , we have that $d_U$ is a metric equivalent to $d$ restricted to $U^2.$ Intuitively, the space only fails to be a metric space insofar as there exist $x,y\in X$ such that $d(x,y)=\infty.$ This notion seems much stronger, as it states that each pair of local metrics coincides (up to equivalence) everywhere they are defined. Is there a name for such a space, or a related condition? Are there any obvious corollaries? (ideally that $X$ is metrizable haha)
|
Another perspective on this. Suppose your $d:X^2\to[0,\infty]$ has the usual metric properties, where for the purpose of the triangle inequality we have $\infty+x=\infty$ for all $x\in[0,\infty]$ , and you require that your topology is generated by the sets $B(x,r)=\{y:d(x,y)<r\}$ defined by this extended metric. Then let $d':X^2\to[0,1]$ be defined by $d'(x,y)=\lim_{z\to d(x,y)}\frac{z}{z+1}$ . It can be shown that $d'$ satisfies the properties of a metric, and generates the same topology. Therefore, it's okay to define an infinite distance between points to prove a space is metrizable, provided the metric requirements hold globally. But if you can only show your function locally satisfies the metric properties, then your locally metrizable space need not have any strong separation properties, as MW's answer shows.
|
|general-topology|metric-spaces|metrizability|
| 1
|
Can you explain how to convert this notation to a function to someone unfamiliar with group theory?
|
While working on a project, I came across a paper ( "On $C^2$ -smooth Surfaces of Constant Width" by Brendan Guilfoyle and Wilhelm Klingenberg ) that includes the definition $g ∈ $ on page 15, where $$ is the "tetrahedral group" ("a discrete subgroup of isometries $ ⊂ O(3)$ "). The paper then uses $g(z)$ as a function with a complex input, where $z$ is "the local complex coordinate on the unit 2-sphere in $\mathbb{R}^3$ obtained by stereographic projection from the south pole". Is it possible to define an algebraic expression for $g(z)$ in a way that is understandable to a person with no background in group theory?
|
The group $O(3)$ is the group of isometric symmetries of the $2$ -dimensional sphere $S^2$ in Euclidean 3-dimensional space. It has an index 2 subgroup of rotational symmetries denoted $SO(3)$ . When a regular tetrahedron $T$ is inscribed in $S^2$ , the rotational symmetries of $T$ itself form a subgroup of $SO(3)$ with $12$ elements, called the tetrahedral group. But, in order to write the elements of the tetrahedral group as functions of a complex variable, I have to guess a little bit at the intentions of the unknown author. (If this guess is wrong, then my whole answer is wrong). My best guess is that the $2$ -sphere $S^2$ is being transformed to the extended complex plane $\mathbb C \cup \{\infty\}$ using stereographic projection . When this is done, the individual elements of $SO(3)$ can all be written as fractional linear transformations of $\mathbb C \cup \{\infty\}$ , meaning functions of the format $$g(z) = \frac{az+b}{cz+d} $$ where $a,b,c,d$ are complex numbers required to
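To make the "functions of a complex variable" point concrete, here is a small sketch (my own illustration, not taken from the paper): composing fractional linear transformations corresponds to multiplying their $2\times2$ coefficient matrices, and a rotation about the poles appears as $z \mapsto \omega z$:

```python
import cmath

# A fractional linear transformation g(z) = (a z + b) / (c z + d) given by
# the matrix M = ((a, b), (c, d)); composition = matrix multiplication.
def apply_flt(M, z):
    (a, b), (c, d) = M
    return (a * z + b) / (c * z + d)

def compose(M, N):
    (a, b), (c, d) = M
    (e, f), (g, h) = N
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

w = cmath.exp(2j * cmath.pi / 3)   # rotation by 120 degrees about the poles
R = ((w, 0), (0, 1))
z0 = 0.3 + 0.4j
lhs = apply_flt(R, apply_flt(R, z0))   # apply twice
rhs = apply_flt(compose(R, R), z0)     # multiply matrices, apply once
print(abs(lhs - rhs))  # ≈ 0
```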
|
|group-theory|
| 1
|
If $\sum \|f_i\|$ converges and $\sum_{i=1}^\infty f_i$ exists in a normed function space, do we get for free that $\|\sum_{i=1}^n f_i - f\| \to 0$?
|
Let $(V,\|-\|)$ be a normed function space, and suppose that $(f_i)_i$ is a sequence of elements of $V$ so that $\sum \|f_i\| < \infty$ . Further suppose that this data implies that $f = \sum f_i$ , where $\sum_{i=1}^n f_i \to f$ pointwise , is an element of $V$ . Then I would think that we can show immediately that $\sum_{i=1}^n f_i \to f$ in norm, as: $$\left\|\sum_{i=1}^n f_i - f \right\| = \left\|\sum_{i=1}^n f_i- \sum_{i=1}^\infty f_i\right\| = \left\|\sum_{i=n+1}^\infty f_i\right\| \leq \sum_{i=n+1}^\infty \|f_i\|$$ which is sensible, as we can show that any tail of $\sum f_i$ is in $V$ as well. However, as $\sum \|f_i\| < \infty$ , it follows that the right-hand-side goes to zero as $n \to \infty$ , thus proving that $\sum_{i=1}^n f_i \to f$ in norm. Here is my question: Have I made a mistake in the above reasoning? The reason I am suspicious is because whenever I have seen an example of this situation in books (e.g., showing $L^p$ is a Banach space), the author shows convergence in norm using some
|
The key step is $\left\|\lim\limits_{m \to \infty} \sum\limits_{i=n+1}^m f_i\right\| \leq \sum\limits_{i=n+1}^\infty \|f_i\|$ . And in general pointwise convergence doesn't have to do anything significant with the norm. Let $X$ be the space of sequences that are eventually $0$ with norm $\|x\|_X = \sup_n \frac{|x_n|}{2^n}$ and $Y$ be $X \times \mathbb R$ with norm $\|(x, r)\|_Y = \|x\|_X + |r|$ . Let $V$ be the space of sequences that are eventually constant with norm $\|v\|_V = \sup_n \frac{|v_n - \lim v|}{2^n} + |\lim v|$ - it is indeed a norm, as $V$ is isomorphic to $Y$ via $f(y) = f((x, r)) = (x_1 + r, x_2 + r, \ldots)$ . Now, let $f_i = e_i$ and $f = (1, 1, \ldots)$ . Then $\sum f_i \to f$ pointwise, $\sum \|f_i\|_V = 1$ , but $\left\|\sum\limits_{i = 1}^n f_i - f\right\|_V > 1$ , so the series doesn't converge in norm.
|
|functional-analysis|convergence-divergence|normed-spaces|
| 0
|
Continuity of line integral given a sequence of curves
|
I have this question regarding the continuity of a line integral. Suppose we have a continuous function $f:\mathbb{R}^n \to \mathbb{R}$ , and a curve $C_n$ such that $C_n \to C$ pointwise as $n\to \infty$ . The curves $C_n$ and $C$ can have unbounded domain. I was wondering what kind of assumptions one can make so that $$\int_{C_n}f(s)ds \to \int_{C}f(s)ds$$ I believe some assumptions enabling the application of the dominated convergence theorem could be the answer, but I am not very knowledgeable about the topic. Any insight or reference would be much appreciated!
|
With a little help from my professor I was able to come up with some sufficient conditions. Assume that $f$ is continuous on some compact $K$ containing all curves $C_n$ and $C$ . Moreover, assume that the parametrizations $\psi_n : [0,1] \rightarrow \mathbf{R}^n$ (for $C_n$ ) converge uniformly (on $[0,1]$ ) to the parametrization $\psi : [0,1]\rightarrow \mathbf{R}^n$ (for $C$ ). And, assume that also $\psi_n’$ converge uniformly to $\psi’$ on $[0,1]$ . These conditions are strong enough. We’ll need the following lemma. Let $g:\mathbf{R}^n\rightarrow \mathbf{R}$ be uniformly continuous on some $U\subseteq \mathbf{R}^n$ . Let $\psi_n$ and $\psi$ be as above and their images be contained in $U$ , then also $g\circ \psi_n \overset {[0,1]}{\rightrightarrows}g \circ \psi$ . This can obviously be generalized quite a bit and a proof of this can be found here . We’ll also need that we can interchange limit and integral if the integrand converges uniformly, and that the product of uniformly c
|
|integration|convergence-divergence|lebesgue-integral|line-integrals|
| 0
|
If $x+y=2$ is a tangent to the ellipse with foci $(2, 3)$ and $(3, 5)$, what is the square of the reciprocal of its eccentricity?
|
If $x+y=2$ is a tangent to the ellipse with foci $(2, 3)$ and $(3, 5)$ , what is the square of the reciprocal of its eccentricity? This could be done using the property that the product of the perpendiculars drawn from the foci to a tangent is equal to $b^2$ . But I couldn't figure out where the error in my approach is.
|
The center of the ellipse is the midpoint between the foci, $M=\frac{(2,3)+(3,5)}2=(\frac52,4)$ , and the unit vectors of the axes are $v_1=(\tfrac1{\sqrt5},\tfrac2{\sqrt{5}})$ and $v_2=(-\tfrac2{\sqrt5},\frac1{\sqrt{5}})$ . The ellipse equation is given by $$(x,y)=M+a\cos\theta v_1+b\sin\theta v_2.$$ So the given ellipse is $$\frac{(x+2y-\frac{21}2)^2}{a^2}+\frac{(y-2x+1)^2}{b^2}=5.\tag1$$ The distance between the foci is $\sqrt{(3-2)^2+(5-3)^2}=2c=\sqrt5$ . Therefore, $$|a^2-b^2|=c^2=\frac54.\tag2$$ On the other hand, if we put the tangent line $y=2-x$ in $(1)$ and equate the discriminant of the resulting quadratic polynomial to zero, after a tedious calculation, we find $$36a^2+4b^2=405.\tag3$$ From $(2)$ and $(3)$ , the semi-major axis length is $\frac{\sqrt{41}}2$ and hence, $e=\frac c{\sqrt{41}/2}=\frac{\sqrt5}{\sqrt{41}}$ and $\frac1{e^2}=\frac{41}5.$
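The final value can be double-checked with the focal-pedal property quoted in the question (the product of the distances from the foci to a tangent equals $b^2$ ); this numeric check is mine:

```python
import math

# Focal-pedal check: product of distances from the foci to the tangent
# line x + y - 2 = 0 gives b^2; then a^2 = b^2 + c^2 and 1/e^2 = a^2/c^2.
F1, F2 = (2.0, 3.0), (3.0, 5.0)
def dist(p):  # distance from p to the line x + y - 2 = 0
    return abs(p[0] + p[1] - 2.0) / math.sqrt(2.0)

b2 = dist(F1) * dist(F2)                                    # = 9
c2 = ((F2[0] - F1[0]) ** 2 + (F2[1] - F1[1]) ** 2) / 4.0    # = 5/4
a2 = b2 + c2                                                # = 41/4
inv_e2 = a2 / c2
print(b2, inv_e2)  # 9.0 8.2  (8.2 = 41/5)
```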
|
|conic-sections|
| 0
|
Why is the derivative of a vector field along a curve not defined on a generic manifold?
|
In his book "Differential Geometry", Loring Tu writes the derivative of a vector field $V$ along a curve $c: [a,b] \to \mathbb{R}^n$ as $\frac{dV(t)}{dt}=\sum \frac{dv^i(t)}{dt} \partial_i |_{c(t)}$ . He then goes on to say that such a derivative is only defined on the manifold $\mathbb{R}^n$ and not on an arbitrary manifold, because an arbitrary manifold doesn't have a canonical frame $\partial_1, \ldots, \partial_n$ the way $\mathbb{R}^n$ does. But what exactly does this mean? Could it mean that on an arbitrary manifold the derivative is not defined globally, because we don't have an atlas with a single chart like $\mathbb{R}^n$ ? But the derivative could still exist locally, because by considering a single chart $(U, x_1, \ldots, x_n)$ around a point I could still use the chart coordinates to create a frame, couldn't I?
|
Suppose $X$ is a vector field on a smooth manifold $M$ , and $\gamma:I\rightarrow M$ a smooth curve. Let $(U,\phi)$ and $(V,\psi)$ be coordinate charts with $U\cap V$ and $\gamma(I)\cap U\cap V$ nonempty. Let $U$ have coordinates $x^i$ , and $V$ have coordinates $y^i$ . Then in both frames we can write: $$X=v^i\frac{\partial}{\partial x^i}\qquad X=w^i\frac{\partial}{\partial y^i}$$ Since $X$ is globally defined, these must agree on the overlap, i.e. the transition function given by the Jacobian of $\phi\circ\psi^{-1}$ relates the two coordinate frames. This means that on $U\cap V$ , we can write $X$ in terms of the $x$ frame as: \begin{align} X=w^i\frac{\partial}{\partial y^i}=w^i\frac{\partial x^j}{\partial y^i}\frac{\partial}{\partial x^j} \end{align} so we have that $w^i\,\partial x^j/\partial y^i=v^j$ . Now in the $V$ coordinates, we would write the derivative of $X$ along $\gamma$ as: \begin{align} \frac{d}{dt}X(t)=\frac{dw^i(t)}{dt}\frac{\partial}{\partial y^i}, \end{align} while in the $U$ coordinates we would write $\frac{d}{dt}X(t)=\frac{dv^i(t)}{dt}\frac{\partial}{\partial x^i}$ . Differentiating the relation $v^j=w^i\,\partial x^j/\partial y^i$ along the curve produces extra terms coming from derivatives of the Jacobian, so these two candidate definitions disagree on the overlap: the componentwise derivative is chart-dependent, which is why it is not well defined on an arbitrary manifold. (Fixing this chart dependence is exactly what a connection does.)
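A concrete sketch of this chart dependence (my own illustrative example, not from the answer above): take the constant field $X=\partial/\partial x$ on $\mathbb R^2\setminus\{0\}$ along the unit circle. Its Cartesian components are constant, but its polar components are not, so the naive componentwise derivative is zero in one chart and nonzero in the other:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Curve: gamma(t) = (cos t, sin t), the unit circle.
# Field: X = d/dx everywhere.  Cartesian components are constant:
v = sp.Matrix([1, 0])
naive_cartesian = v.diff(t)              # componentwise derivative: (0, 0)

# In polar coordinates, d/dx = cos(theta) d/dr - (sin(theta)/r) d/dtheta.
# Along the curve r = 1, theta = t, so the polar components depend on t:
w = sp.Matrix([sp.cos(t), -sp.sin(t)])
naive_polar = w.diff(t)                  # (-sin t, -cos t): NOT zero
```

Both expressions describe the same field $X$ along the same curve, yet one naive derivative vanishes and the other does not, so the recipe cannot be chart-independent.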
|
|calculus|differential-geometry|manifolds|smooth-manifolds|
| 1
|
Galois group of degree 7 polynomial
|
From Bosch's Algebra (p. 368): Determine the Galois group of $$X^7 - 8X^5 - 4X^4 + 2X^3 - 4X^2 + 2 \in \mathbb Q[X]$$ and decide whether it's solvable or not. How does one find the Galois group of this polynomial? My usual way is to find the roots $\alpha_1, \ldots, \alpha_n$ , form the splitting field $L = \mathbb Q(\alpha_1, \ldots, \alpha_n)$ , and then study the structure of the Galois group $\operatorname{Gal}(L/\mathbb Q) = \operatorname{Gal}(f)$ as a subgroup of $S_n$ . However, that does not seem to work in this case, since already finding the roots seems difficult. What is a useful approach here?
|
Hint: (1) Use Eisenstein's criterion to show that $f$ is irreducible. (2) Show that $f$ has exactly $5$ real roots. (Since $f$ has negative discriminant, $f$ has either exactly $1$ or exactly $5$ real roots, hence it suffices to show that $f$ has at least $2$ real roots.) What kinds of permutations do (1) and (2) (separately) imply occur in $\operatorname{Gal}(f)$ ? 1. Since $f$ is irreducible of prime degree $7$ , $\operatorname{Gal}(f)$ contains a $7$ -cycle. 2. Complex conjugation exchanges the $2$ nonreal roots of $f$ , hence it determines a transposition in $\operatorname{Gal}(f)$ . As $7$ is prime, a subgroup of $S_7$ containing both a $7$ -cycle and a transposition is all of $S_7$ , which is not solvable.
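Both facts in the hint can be checked computationally (a verification sketch, not part of the original hint), using sympy's exact irreducibility test and Sturm-based real root isolation:

```python
from sympy import Poly, real_roots, symbols

x = symbols('x')
f = Poly(x**7 - 8*x**5 - 4*x**4 + 2*x**3 - 4*x**2 + 2, x)

# Eisenstein at p = 2: every non-leading coefficient (0, -8, -4, 2, -4, 0, 2)
# is divisible by 2, and the constant term 2 is not divisible by 2^2 = 4.
irreducible = f.is_irreducible

# Exact count of the real roots (sympy isolates them with Sturm sequences);
# the hint predicts exactly 5 real roots, hence 2 nonreal ones.
n_real = len(real_roots(f))
```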
|
|abstract-algebra|field-theory|galois-theory|
| 1
|