Issues with motivating the conditional connective
The conditional operator $\Rightarrow$ can be tricky to motivate in the cases where it is True. An approach taken by Elliott Mendelson in Number Systems and the Foundations of Analysis is to examine the expression $(C\land D)\Rightarrow D$ . My understanding is as follows: because English sentences that can be paraphrased with this schema are regarded as true by virtue of their structure, we axiomatically define any interpretation of the sentence letters in this schema to evaluate to a True truth value. For example, ``If I like ice cream and Au is the chemical abbreviation for gold, then Au is the chemical abbreviation for gold'' is obviously a true sentence by its structure, even if I hate ice cream. In particular, when $C$ is false and $D$ is true, we get that $F\Rightarrow T$ evaluates to $T$ . The above makes sense to me, but if I again try to use English to motivate the truth table of the conditional operator, I run into issues. In particular, the following English structure sounds clearly false, irrespective…
This is really just an extended comment, but you are unlikely to get a satisfactory answer of how the logical properties of the ordinary English if should be understood. There is a vast amount of discussion. For example, Humberstone, The Connectives , is a study of the four basic connectives ( and , or , if , not ), and weighs in at 1492 pages! For a short introduction to issues specifically about if , and a wide range of associated logics, see here .
|logic|propositional-calculus|
0
An orthogonal system in an inner product space
Let $X$ be an (infinite-dimensional) inner product space with a (countable) orthogonal system $\lbrace e_i \rbrace_{i \in \mathbb{N}}$ . The field is $\mathbb{R}$ or $\mathbb{C}$ . Here $X$ may not be a Banach space. Suppose further that $\operatorname{Span}(\lbrace e_i \rbrace_{i \in \mathbb{N}})$ is dense in $X$ ; do we still have an infinite series representation of each element? I.e., I'm wondering whether for all $x \in X$ there exists $\lbrace a_n \rbrace \subseteq \mathbb{R}$ (or $\mathbb{C}$ ) such that $$x = \sum_{n=1}^{\infty} a_ne_n.$$ If $X$ is a Hilbert space, then the above is just a trivial application of a Hilbert basis. I'm quite interested in the situation where $X$ is not complete. Any help or idea is appreciated.
If $\ x\in X\ $ let $$ x_n\stackrel{\text{def}}{=}\sum_{i=1}^n\frac{\langle x,e_i\rangle}{\|e_i\|^2}e_i $$ for $\ n\in\mathbb{N}\ .$ Then $\ \big\langle x-x_n, e_i\big\rangle=0\ $ for all $\ i\le n\ .$ Since $\ \operatorname{Span}\big(\big\{e_i\big\}_{i\in\mathbb{N}}\big)\ $ is dense in $\ X\ ,$ for any $\ \epsilon>0\ $ there exists $\ y=\sum\limits_{i=1}^m\xi_ie_i\in \operatorname{Span}\big(\big\{e_i\big\}_{i\in\mathbb{N}}\big)\ $ such that $\ \|x-y\|<\epsilon\ .$ Then for any $\ r\ge m\ $ \begin{align} \epsilon^2&>\|x-y\|^2\\ &=\big\|x-x_r+x_r-y\big\|^2\\ &=\big\|x-x_r\big\|^2+2\mathfrak{Re}\big\langle x-x_r, x_r-y\big\rangle+\big\|x_r-y\big\|^2\\ &=\big\|x-x_r\big\|^2+\big\|x_r-y\big\|^2\ , \end{align} because $\ x_r-y\ $ is a linear combination of $\ \big\{e_i\big\}_{i=1}^r\ ,$ all of which are orthogonal to $\ x-x_r\ .$ It follows from this that, for all $\ r\ge m\ ,$ $\ \big\|x-x_r\big\|<\epsilon\ ,$ and hence, from the arbitrariness of $\ \epsilon\ ,$ that $$ x=\lim_{n\rightarrow\infty}x_n=\sum_{i=1}^\infty\frac{\langle x,e_i\rangle}{\|e_i\|^2}e_i\ . $$
|real-analysis|functional-analysis|analysis|hilbert-spaces|banach-spaces|
0
Proof verification: Differentiability implies continuity (epsilon-delta)
I have never had a formal background in mathematics and am relatively new to formal proof writing. I am trying to prove that differentiability at an interior point of a function's domain implies that the function is continuous there, using the $\epsilon$-$\delta$ definition. Any tips for the proof or for my math in general would be appreciated. Here is my attempt: Let $f$ be differentiable at $a$ . We wish to show that $f$ being differentiable at $x=a$ implies continuity: $$\forall \hspace{1pt} \epsilon>0 \hspace{1pt} \exists \hspace{1pt} \delta >0 \ni |x-a|<\delta \implies |f(x)-f(a)|<\epsilon.$$ $f$ is differentiable at $a$ $\implies$ $f'(a) = {\lim_{x \to a}} \left( \frac{f(x)-f(a)}{x-a}\right)$ exists $\implies \forall\epsilon_0>0 \hspace{5pt}\exists \hspace{3pt} \delta_0>0 \hspace{5pt} \ni \hspace{3pt} 0<|x-a|<\delta_0 \implies \left|\frac{f(x)-f(a)}{x-a}-f'(a)\right|<\epsilon_0.$ Suppose, for a particular $\epsilon_0$ and $\delta_0$ , we choose a $\delta\le\delta_0$ so that when $x\neq a$ we have $$0<|x-a|<\delta$$ $$\implies \left|\frac{f(x)-f(a)}{x-a}\right|<\epsilon_0+|f'(a)|.$$ Let $\epsilon$ > 0 and let $$\delta = \min\left(\delta_0,\ \frac{\epsilon}{\epsilon_0+|f'(a)|}\right),$$ so that $|x-a|<\delta$ gives $|f(x)-f(a)|<|x-a|\,(\epsilon_0+|f'(a)|)<\epsilon.$
I think this is a version of what you have done which is more concise, and the flow of logic is easier to follow... $$0<|x-a|<\delta \implies \left|\frac{f(x)-f(a)}{x-a}\right|<\epsilon+|f'(a)| \implies |f(x)-f(a)|<|x-a|\left(\epsilon+|f'(a)|\right).$$ So for any $\epsilon_1>0$ we can define... $$\delta_1 \equiv \frac{\epsilon_1 \delta}{\epsilon + |f'(a)|}$$ Such that ... $$ |x-a|<\delta_1 \implies |f(x)-f(a)|<\epsilon_1.$$
|real-analysis|solution-verification|continuity|
0
Proving that $\operatorname{image}(T^2)$ = $\operatorname{image}(T)$ implies $\ker(T)$ = $\ker(T^2)$
$\newcommand{\image}{\operatorname{image}}$ For an assignment, I have to prove the equivalence of 5 statements for a linear transformation $T: V \rightarrow V$ : a) $\image(T^2) = \image(T)$ b) $\ker(T) = \ker(T^2)$ c) $\image(T) \cap \ker(T) = \{0\}$ d) $V = \image(T) \oplus \ker(T)$ e) $\ker(T|_{\image(T)}) = \{0\}$ So I try to assume a) is true and show that this implies b). From $\image(T^2) = \image(T)$ , I know I have to use the rank-nullity theorem, and that $\image(T^2) = \image(T)$ implies each $v \in \image(T)$ can be represented as the image under $T$ of another vector in $\image(T)$ . But I don't know how to make that connection and would appreciate some help. Also, can I have a hint for what $\ker(T|_{\image(T)})$ means?
(a) implies (e): Since $\operatorname{image}(T)=\operatorname{image}(T^2)$ , $T|_{\operatorname{image}(T)}$ is an isomorphism. So its kernel is zero. (e) implies (c): Suppose there is a vector $v\in\operatorname{image}(T)\cap\ker(T)$ . Then $Tv=T|_{\operatorname{image}(T)}v=0$ implies $v=0$ (because $T|_{\operatorname{image}(T)}$ is an isomorphism). (c) implies (d): You have assumed that $V$ is finite dimensional. Use rank-nullity theorem and count the dimensions. (d) implies (b): Let $v$ be a vector with $T^2v=0$ . I must show that $Tv=0$ . Indeed, $T^2v=T(Tv)=0$ implies $Tv$ is in the kernel of $T$ . But it is also in the image of $T$ , so $V=\operatorname{image}(T)\oplus \ker T$ implies $Tv=0$ . Can you work for the other way round? Hint: use duality.
|linear-algebra|linear-transformations|
0
Counting commutative binary operations on a set S with n elements
I understand why on a set S with n elements there are $n^{n^2}$ binary operations, but how exactly do you count the number of commutative binary operations? I saw this post: Counting binary operations on a set with $n$ elements but I do not really understand their answer. For a set with 4 elements I get that there are $4^{12}$ commutative operations, not $4^{10}$ .
A binary operation is a mapping from $S \times S$ to $S$ . For each ordered pair from $S \times S$ , there are $|S|$ possible images in the codomain. That means if $|S|=n$ , then $n$ gets multiplied as many times as there are ordered pairs, so the total is $n^{n^2}$ . There are fewer independent choices to make for a commutative operation, since $*(x,y)=*(y,x)$ . So the question becomes: how many pairs are we working with, and what may they be mapped to? Suppose $x,y \in S$ . We have one set of pairs where $x=y$ and another set where $x\ne y$ . There are $n$ pairs of the form $*(x,x)$ and there are $\binom{n}{2}$ pairs where $x\ne y$ . That's a total of $\frac{n(n-1)}{2}+\frac{2n}{2}=\frac{n(n+1)}{2}$ pairs. And as in the previous case, we multiply $n$ by itself as many times as there are pairs, so $n^{\frac{n(n+1)}{2}}$ .
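The count can be sanity-checked by brute force for tiny sets (a quick sketch; the function name `count_commutative` is my own, and the enumeration is only feasible for $n \le 3$ since there are $n^{n^2}$ operation tables):

```python
from itertools import product

def count_commutative(n):
    """Brute-force: count operation tables * : S x S -> S with x*y == y*x."""
    S = range(n)
    pairs = [(x, y) for x in S for y in S]
    count = 0
    # Each operation is an assignment of a value in S to every ordered pair.
    for table in product(S, repeat=n * n):
        op = dict(zip(pairs, table))
        if all(op[(x, y)] == op[(y, x)] for x in S for y in S):
            count += 1
    return count

for n in (1, 2, 3):
    print(n, count_commutative(n), n ** (n * (n + 1) // 2))
```

For $n=2$ this gives $8 = 2^3$ and for $n=3$ it gives $729 = 3^6$, matching $n^{n(n+1)/2}$.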
|abstract-algebra|
0
Sum equals the product
A while ago, just playing with some numbers, I noticed that $1+2+3=1\cdot2\cdot3$, so I started thinking about the non-zero integer solutions of the equation $$\prod_{i=1}^na_i=\sum_{i=1}^na_i$$ For example, for $n=2$ the only solution is the pair $(2,2)$, and for $n=3$ the only solutions are $(1,2,3)$ and $(-1,-2,-3)$; that's all I have so far. I solved the case $n=2$ using divisibility. For the case $n=3$, I proved that if $|a_1|\leq|a_2|\leq|a_3|$ then there is no solution with $a_2>2$, so I just analyzed the remaining cases. Can someone help me with the general case?
This is a partial answer. We want to find all the non-zero integer solutions of $\displaystyle\prod_{j=1}^{n}a_j=\sum_{j=1}^{n}a_j$ for $n\ge 2$ . This answer is only interested in the solutions $(a_1,a_2,\cdots,a_n)$ satisfying $|a_1|\le |a_2|\le \cdots \le|a_n|$ . If $a_j=-a_{j+1}$ , then this answer is only interested in $(a_j,a_{j+1})$ satisfying $a_j\lt a_{j+1}$ . (For example, for $n=4$ , this answer is interested in $(a_1,a_2,a_3,a_4)=(\color{red}{-1},\color{red}1,-2,-2)$ , not $(\color{red}1,\color{red}{-1},-2,-2)$ .) This answer proves the following claims : Claim 1 : If $n\equiv 1\pmod 4$ , then the equation has infinitely many non-zero integer solutions. Claim 2 : If $n\not\equiv 1\pmod 4$ , then the equation has only finitely many non-zero integer solutions. Claim 3 : If $n\ge 3$ , then the equation has at least one non-zero integer solution in which at least one $a_i$ is negative. Claim 4 : If $n=2$ , the only non-zero integer solution of the equation is $(a_1,a_2)=(2,2)$ .
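The small cases behind Claims 3 and 4 can be checked with a bounded brute-force search (an illustrative sketch, not a proof; `solutions` and the bounds are my own choices):

```python
import math
from itertools import product

def solutions(n, bound):
    """All multisets of non-zero integers a_i with |a_i| <= bound whose
    product equals their sum; each multiset is reported once, sorted by |.|."""
    vals = [v for v in range(-bound, bound + 1) if v != 0]
    found = set()
    for combo in product(vals, repeat=n):
        if math.prod(combo) == sum(combo):
            found.add(tuple(sorted(combo, key=abs)))
    return found

print(solutions(2, 5))   # {(2, 2)}
print(solutions(3, 3))   # {(1, 2, 3), (-1, -2, -3)}
```

Within these bounds only $(2,2)$ appears for $n=2$, and for $n=3$ the negative solution $(-1,-2,-3)$ shows up alongside $(1,2,3)$, as Claim 3 predicts.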
|number-theory|
0
Is "The present King of France is bald" studied by maths?
Intuitively, "The present King of France is bald." is false. But on Bertrand Russell's account, its falsity would mean that "The present King of France is not bald." is true, which also seems false. This apparently leads to a contradiction. In mathematics, could assertions about things which don't exist fail to be false (or even be true)? For example, does $\frac{1}{0}=3$ mean anything, since $\frac{1}{0}$ doesn't exist?
Contrary to Russell, given that there is presently no King of France, it is vacuously true that if $x$ is presently the King of France, then $x$ is bald. PROOF: Let $K$ and $B$ be unary predicates such that $K(x)$ = "$x$ is presently the King of France" and $B(x)$ = "$x$ is bald". $\forall a: \neg K(a)~~~~$ (Assume) $K(x)~~~~$ (Assume) $\neg B(x)~~~~$ (Assume) $\neg K(x)~~~~$ (U Spec, 1) $K(x) \land \neg K(x)~~~~$ (Join 2, 4) $\neg \neg B(x)~~~~$ (Discharge 3, 5) $B(x)~~~~$ (Elim $\neg\neg$ , 6) $\forall a: [K(a) \implies B(a)]~~~~$ (Discharge 2, 7) $\forall a: \neg K(a) \implies \forall a: [K(a) \implies B(a)]~~~~$ (Discharge 1, 8) Note that we could as easily have proven that the nonexistent King of France is NOT bald: $\forall a: \neg K(a) \implies \forall a: [K(a) \implies \neg B(a)]$ Hint: change line 3 to "$B(x)$" instead of "$\neg B(x)$". EDIT Re: $\frac{1}{0} = 3$ The expression $\frac{1}{0} = 3$ is "meaningless" in the sense that its truth value cannot be determined. We can define di…
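The vacuous-truth point can be mirrored in a couple of lines, since a universal claim over an empty domain always holds; in Python, `all` over an empty iterable is `True` (a toy illustration; the names `kings` and `bald` are mine):

```python
# The domain of "present Kings of France" is empty, so both universally
# quantified statements below are vacuously true, exactly as in the proof.
kings = []                      # K: no x satisfies "x is presently King of France"
bald = lambda x: x == "bald"    # B: any predicate at all will do here

print(all(bald(x) for x in kings))        # -> True  (every king is bald)
print(all(not bald(x) for x in kings))    # -> True  (every king is not bald)
```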
|logic|
0
Measure is real if the integral of every real-valued continuous function is real
Let $X$ be a compact Hausdorff space and $\cal A$ be a $\sigma$ -algebra over $X$ . Let $\mu$ be a complex measure on $(X,\cal A)$ . Let $C(X)$ denote the set of all complex-valued continuous functions on $X$ . Let $C(X)_\mathbb R:=\{f\in C(X)\mid f=\overline f\}.$ I want to show that if $\displaystyle \int f\,d\mu\in \mathbb R$ for all $f\in C(X)_\mathbb R$ , then the measure $\mu$ is real. My approach: Let $f\in C(X)_\mathbb R$ . Then $f$ is a real-valued continuous function on $X$ , and writing $\mu=\mu_1+i\mu_2$ we have $$\int f \,d\mu=\int f \,d\mu_1+i\int f\,d\mu_2.$$ Now by the given condition $\displaystyle \int f\,d\mu\in \mathbb R$ , so $\displaystyle \int f\,d\mu_2=0.$ Now it is enough to show that $\displaystyle \int f\,d\mu_2=0$ for every $f\in C(X)_\mathbb R$ gives $\mu_2=0$ . If I could use indicator functions, then for every set $A$ the equation $\displaystyle \int 1_A \,d\mu_2=0$ would give $\mu_2(A)=0$ and we would be done. But indicator functions are not continuous. Please help me to solve this. Thanks in advance.
$\int f \,d\mu=\overline {\int f \,d\mu}=\int f \,d\overline \mu$ if $f \in C(X)_{\mathbb R}$ , so $\nu \equiv \frac{1}{2i}(\mu-\overline \mu)$ (the imaginary part of $\mu$ ) is a real measure with $\int f\,d\nu=0$ for every $f \in C(X)_{\mathbb R}$ , hence, by linearity, for every $f\in C(X)$ . This implies that $\nu=0$ (for regular measures this is the uniqueness part of the Riesz representation theorem).
|real-analysis|integration|measure-theory|
1
White and Red Chip Combinatorics Problem
I am doing this problem: A box contains exactly five chips, three red and two white. Chips are randomly removed one at a time without replacement until all the red chips are drawn or all the white chips are drawn. What is the probability that the last chip drawn is white? What I tried to do is to look at total permutations of the five chips RRRWW, which is of course $5!/(3!\,2!)=10$; the $3!$ and $2!$ come from the ways to arrange the R's and W's, as they are identical. Next I set a W at the end, which represents it being picked last; there are four slots left, so there are $4$ ways. This equates to $4/10=2/5$. But the answer is $3/5$. I don't know what I did wrong. I would appreciate it if someone could point out any mistakes.
Since you stop drawing when either color is exhausted, white must be exhausted first, which means that a red must be at the tail of the full queue, directly giving the answer $\Pr = \frac35$. If you must use permutations instead (though more complicated), $$\frac{\text{permutations of two W, two R}}{\text{permutations of two W, three R}} = \frac{4!/(2!\,2!)}{5!/(2!\,3!)} = \frac{6}{10} = \frac35.$$
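The stopping rule can also be verified by exhaustive enumeration of the $5!$ orderings (a quick sketch; `prob_last_drawn_white` is a name I made up):

```python
from fractions import Fraction
from itertools import permutations

def prob_last_drawn_white():
    favorable = total = 0
    for order in permutations("RRRWW"):   # 120 equally likely orderings
        drawn_r = drawn_w = 0
        for chip in order:
            if chip == "R":
                drawn_r += 1
            else:
                drawn_w += 1
            if drawn_r == 3 or drawn_w == 2:   # stop: a color is exhausted
                favorable += (chip == "W")
                break
        total += 1
    return Fraction(favorable, total)

print(prob_last_drawn_white())  # 3/5
```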
|combinatorics|
0
Whether we can calculate a limit by knowing the values of two other ones
This is not an exercise found in a textbook, just a question that arose while I was working on a larger project. The question may sound vague, so let me know if I have skipped some information. So $f, g : \mathbb{R}\rightarrow\mathbb{R}$ are continuous in an interval that contains zero. $$\lim\limits_{x\rightarrow 0}\frac{f^2(x)}{x}=l_1\in \mathbb{R}$$ $$\lim\limits_{x\rightarrow 0}\frac{1-g^2(x)}{x}=l_2\in \mathbb{R}$$ Can we say anything about the limit $$\lim\limits_{x\rightarrow 0}\frac{f(x)g(x)}{x}=\ ?$$ You can assume that the functions are differentiable if that helps. Are the three limits independent (in the sense that we need more information about the two functions to answer the question)?
The first limit can only exist if it is in an indeterminate form, which forces $f(0)=0$ ; otherwise the limit wouldn't exist. By L'Hôpital's rule, the limit must be $2f'(0)f(0)=l_1$ , so $l_1=0$ . By similar arguments $g(0)^2=1$ and $l_2=-2g'(0)g(0)=\mp 2g'(0)$ . The third limit only exists if $f(0)g(0)=0$ , and it has to be $l_3=f(0)g'(0)+f'(0)g(0)=0\cdot g'(0) \pm f'(0)=\pm f'(0)$ . The hypotheses tell us nothing about $f'(0)$ , so the most we can say is $l_3=\pm f'(0)$ : the third limit is not determined by the other two.
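A concrete family of examples confirms this: with $f(x)=cx$ and $g(x)\equiv 1$ we get $l_1=l_2=0$ for every $c$, while $l_3=c$ is arbitrary. A quick numerical check (the function names and the evaluation point $x=10^{-8}$ are my own choices):

```python
# With f(x) = c*x and g(x) = 1 (both continuous and differentiable):
#   l1 = lim f(x)^2 / x = 0 and l2 = lim (1 - g(x)^2) / x = 0 for every c,
# but l3 = lim f(x)g(x) / x = c, so l1 and l2 do not determine l3.
def limits(c, x=1e-8):
    f = lambda t: c * t
    g = lambda t: 1.0
    return f(x) ** 2 / x, (1 - g(x) ** 2) / x, f(x) * g(x) / x

for c in (1.0, 2.0, -3.0):
    print(c, limits(c))
```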
|real-analysis|calculus|limits|
0
Finding $\int_0^\infty \frac{(\log x)^3}{x^2 +2x +2} dx$
I need help with this integral $$\int_0^\infty \frac{(\log x)^3}{x^2 +2x +2} dx$$ What I have done so far: \begin{align*} I &= \int_0^1 \frac{(\log x)^3}{x^2 + 2x + 2} dx + \int_1^\infty \frac{(\log x)^3}{x^2 +2x +2} dx\\ &=−(\int_1^\infty \frac{(\log x)^3}{2x^2 + 2x + 1} dx) + \int_1^\infty \frac{(\log x)^3}{x^2 +2x +2} dx \end{align*} After this I am not sure how to proceed. According to wolframalpha the approximate value of the integral is $2.55128$
For $-1<s<1$ , define \begin{align*} I(s) &= \int_0^\infty \frac{x^s}{x^2 + 2x + 2} \ \mathrm{d}x. \end{align*} Then, we see the integral required is $I'''(0)$ . Hence, it suffices to evaluate the above integral. Let $f(z) = \frac{z^s}{z^2 + 2z + 2}$ for $z \in \mathbb{C}$ . Consider an anticlockwise keyhole contour $\Gamma$ of outer radius $R$ and inner radius $\varepsilon$ about the positive real axis. Let $\gamma_R$ denote the outer arc, $\gamma_\varepsilon$ denote the inner arc, and $\gamma_{\pm}$ denote the segments going away from and towards the origin respectively. Note that this contour contains two singularities of $f$ , namely $z = \sqrt{2}\exp\left (\frac{3\pi i}{4}\right ), \sqrt{2}\exp\left (\frac{5\pi i}{4}\right )$ , both of order $1$ . Hence, by the Residue Theorem, we have \begin{align*} \oint_\Gamma \frac{z^s}{z^2 + 2z + 2} \ \mathrm{d}z &= 2\pi i \left (\operatorname{Res}\left (f(z), z = \sqrt{2}\exp\left (\frac{3\pi i}{4}\right )\right ) + \operatorname{Res}\left (f(z), z = \sqrt{2}\exp\left (\frac{5\pi i}{4}\right )\right )\right ). \end{align*}
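The value can be checked numerically against the $\approx 2.55128$ quoted in the question, e.g. with Simpson's rule after the substitution $x=e^t$ (a rough sketch; the truncation interval and step count are my own arbitrary choices):

```python
import math

def integral(a=-40.0, b=40.0, n=20000):
    """Simpson's rule for I = ∫_0^∞ (log x)^3 / (x^2 + 2x + 2) dx after x = e^t,
    which turns it into ∫ t^3 e^t / (e^{2t} + 2 e^t + 2) dt with decaying tails."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        t = a + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        et = math.exp(t)
        total += w * t**3 * et / (et * et + 2 * et + 2)
    return total * h / 3

print(integral())  # ≈ 2.55128, the approximate value quoted in the question
```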
|calculus|definite-integrals|contour-integration|
0
Proof of tensor characterisation lemma in Riemannian Geometry by Lee
Is there any book that proves the following theorem? The proof that is mentioned is only for covariant tensors, and the second part is not done at all. I think that, given Prop. B.1, the proof of the second part is rather short, isn't it? "$\implies$": If $F$ is a $C^{\infty}$-multilinear map, then by Prop. B.1, $F$ can be identified with a $(k+1,l)$-tensor, namely $(\omega_{1},\dots,\omega_{k},v_{1},\dots,v_{l},\omega_{k+1})\mapsto\omega_{k+1}\big(F(\omega_{1},\dots,\omega_{k},v_{1},\dots,v_{l})\big)$. "$\Longleftarrow$": Given a $(k+1,l)$-tensor $\sigma$, Prop. B.1 says that $\Phi^{-1}[\sigma]$ is a $C^{\infty}$-multilinear map. Proof attempt for Proposition B.1: First let's look at the special case mentioned in Exercise B.2. Let $\Phi:\operatorname{End}(V)\to T^{(1,1)}(V),A\mapsto\Phi[A](\omega,v):=\omega(Av)$. Claim: $\Phi$ is bijective. Injective: Suppose $\Phi[A]=\Phi[B]\iff\omega(Av)=\omega(Bv)\iff\omega((A-B)v)=0$, by linearity of $\omega$. Since this is true for all $\omega\in V^{*}$ and $v\in V$, it is also true for $dx_{i},e_{i}$, i.e. basis…
The reason why Ted is so confident in comments that the proof is exactly the same is because we have the following more general lemma, which generalizes both the covariant and contravariant cases. This is one of those times where generalizing makes everything simpler. Below, $\Gamma(E)$ is the $C^\infty(B)$ -module of sections of $E$ . Lemma. Let $E \to B$ and $F \to B$ be finite rank smooth real vector bundles (over a paracompact base). Every $C^\infty(B)$ -linear map $\Gamma(E) \to \Gamma(F)$ is induced by a vector bundle map $E \to F$ , and conversely. Proof. Of course, a map of vector bundles $E \to F$ induces a $C^\infty(B)$ -linear map of sections $\Gamma(E) \to \Gamma(F)$ by post-composition. Now suppose that $\phi : \Gamma(E) \to \Gamma(F)$ is $C^\infty(B)$ -linear. We want to define a map $T : E \to F$ for each $e \in E_b$ by letting $\sigma \in \Gamma(E)$ be some section such that $\sigma(b) = e$ and then declaring $T(e) := \phi(\sigma)(b)$ ; it suffices to show that $T(e)$ does not depend on the choice of $\sigma$ .
|differential-geometry|riemannian-geometry|tensors|
0
Miklos Schweitzer 1968 P11 by Renyi
Source: Miklos Schweitzer 1968, Problem 11. Let $ A_1,...,A_n$ be arbitrary events in a probability field. Denote by $ C_k$ the event that at least $ k$ of $ A_1,...,A_n$ occur. Prove that $$\prod_{k=1}^n P(C_k) \leq \prod_{k=1}^n P(A_k).$$ Attempt: I believe the first step to tackle this would be to employ the multiplication rule for conditional probabilities: $$P(C_n)=P(A_1\cap\cdots\cap A_n)=P(A_2\cap\cdots\cap A_n\mid A_1)P(A_1),$$ as here we accrue a $P(A_1)$ term times something less than or equal to one, which is promising for the inequality to show. Then my guess would be to continue to establish similar correspondences with the other $P(A_i)$ terms. I try that by conditioning further to get $$P(C_n)=P(A_1)P(A_2\mid A_1\cap A_3\cap\cdots\cap A_n)P(A_3\cap\cdots\cap A_n\mid A_1).$$ However this seems to lead to a dead end (at least with $C_n$ ), because I'm conditioning the other $A_k$ terms on some measurable subset of the sample space, so I can't obtain the other $P(A_k)$ terms unconditioned without magic. Another approach or follow-up…
Solution. We start with three remarks: Comment $A$ . The statement is true for $n=2$ , that is, $$ P(A+B) P(A B) \leq P(A) P(B) . $$ To see this, use the notation $A B=x, A \bar{B}=y$ , and $\bar{A} B=z$ . Then our inequality becomes $$ (P(x)+P(y)+P(z)) P(x) \leq(P(x)+P(y))(P(x)+P(z)), $$ which obviously holds. Comment $B$ . If $A_1 \supseteq \cdots \supseteq A_n$ , then $C_k=A_k$ . Comment $C$ . $C_k$ is identical to the event that at least $k$ occur from the events $A_1+A_2, A_1 A_2, A_3, \ldots, A_n$ ; that is, from the events $A_1+A_2, A_1 A_2$ , $A_3, \ldots, A_n$ , exactly as many occur as from the events $A_1, A_2, \ldots, A_n$ . This statement is trivial. Our proof is based on the above comments. Put $A_i^0=A_i\ (i=1, \ldots, n)$ , and assume that the events $A_1^\mu, \ldots, A_n^\mu$ are already defined. Next, define the events $A_1^{\mu+1}, \ldots, A_n^{\mu+1}$ as follows: choose a pair $A_i^\mu, A_j^\mu$ of events, $i<j$ , for which $A_i^\mu \nsupseteq A_j^\mu$ and $A_j^\mu \nsupseteq A_i^\mu$…
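Comment $A$ can be spot-checked exhaustively over a small probability space (uniform measure on four points; a sanity check of the remark, not part of the proof, and the setup is my own choice):

```python
from fractions import Fraction
from itertools import chain, combinations

# Check P(A+B) P(AB) <= P(A) P(B) for every pair of events A, B in a
# uniform 4-point probability space (here A+B is union, AB is intersection).
omega = range(4)
events = list(chain.from_iterable(combinations(omega, r) for r in range(5)))

def P(ev):
    return Fraction(len(ev), 4)

ok = all(
    P(set(A) | set(B)) * P(set(A) & set(B)) <= P(A) * P(B)
    for A in events for B in events
)
print(ok)  # -> True
```

Of course the general proof is the algebraic identity in Comment $A$: the difference of the two sides is $P(y)P(z)\ge 0$.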
|probability|probability-theory|contest-math|
0
Implicit equation of all points covered by a circle tracing along a 2d parametric curve
I want to find an implicit equation that contains the points that fall within a circle whose origin follows a 2d parametric curve, which would look like you painted a circle along that curve. I need to be able to do this for (cubic) Bézier curves, but if there is a more generalized solution, that would be preferable. It would be nice for the solution to be numerically stable, and for it to avoid computationally expensive operations, but a working solution is the main priority. For example, I have a function of a unit circle $f(x,y)=x^2+y^2-1$ and a function of a cubic Bézier curve $g(t)=(1-t)^3P_0+3(1-t)^2tP_1+3(1-t)t^2P_2+t^3P_3$ where $0\le t\le 1$ . How would I get a function where $F(x,y)=0$ when $(x,y)$ is in the path? Here is a graph with an approximation of the shape that I am trying to achieve.
So, saying it another way, you want to find all points whose distance to the Bézier curve is less than some given radius $r$ . My advice is to approximate the Bézier curve by some simpler curves whose distance function is easy to compute. The obvious choice is just a piecewise linear curve (i.e. a polyline). A slightly more sophisticated approach is to approximate with circular arcs by using biarc techniques. Look up “biarc”, or look at the answers to this question . Measuring the distance to each line or circular arc is easy, and then the distance to the Bézier curve is (approximately) the minimum of these distances. Everything you need to know about offset curves is in two papers by Rida Farouki and Andrew Neff in 1990: R. T. Farouki and C. A. Neff, “Algebraic properties of plane offset curves,” Computer Aided Geometric Design, vol. 7, no. 1–4, pp. 101–127, 1990. R. T. Farouki and C. A. Neff, “Analytic properties of plane offset curves,” Computer Aided Geometric Design, vol. 7, no. 1–4, pp. 83–99, 1990.
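A minimal sketch of the polyline approach (all names, the sample count, and the point-to-segment projection are my own choices, not from the cited papers):

```python
import math

def bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bézier with the standard Bernstein weights."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def seg_dist(p, a, b):
    """Distance from point p to the segment ab (clamped projection)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def F(x, y, ctrl, r, samples=256):
    """Approximate implicit function: F <= 0 iff (x, y) lies within distance r
    of the curve, using a polyline approximation of the Bézier."""
    pts = [bezier(*ctrl, i / samples) for i in range(samples + 1)]
    d = min(seg_dist((x, y), pts[i], pts[i + 1]) for i in range(samples))
    return d - r
```

For a degenerate "straight-line" Bézier along the x-axis, `F(1.5, 1.0, ctrl, 0.5)` returns the exact signed value `0.5`, since the polyline reproduces a straight curve exactly.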
|geometry|parametric|bezier-curve|implicit-function|
1
Find $\sup X$, $\inf X$, $\max X$ and $\min X$ for a set
Find $\sup X$, $\inf X$, $\max X$ and $\min X$ when $X = \left\{ \frac{n}{n + m^2 + 1}: n,m \in \mathbb{N}\right\} $, without using limits. All I could find is that the supremum must be at most $1$ and the infimum at least $0$. I think I could prove this and find a better answer, but I am stuck, because I have not done an exercise with two variables $m$ and $n$ before, and I am not sure how to approach and explain this.
Let's talk about the infimum first. Here, we want to prove that $0$ is the infimum of the given set $X$ . Obviously, $\dfrac{n}{n+m^2+1}>0$ , $\forall n,m \in \mathbb{N}$ , so $0$ is in the set of lower bounds of $X$ . Now, if we can show that every neighbourhood of $0$ contains at least one member of the set $X$ , then we are done! For small $\epsilon >0$ , there exists a positive integer $m\in \mathbb{N}$ such that $\dfrac{1}{m^2+2}<\epsilon$ ; set $x:=\dfrac{1}{m^2+2}$ . Then $x \in X$ (take $n=1$ ), and $x\in (0-\epsilon,0+\epsilon)$ for arbitrarily small $\epsilon$ . Q.E.D. Since the set $X$ never attains this value, it does not have a minimum. Now, for the supremum, we are going to prove that $1$ is the supremum of this set $X$ . First, we show that $1$ is in the set of upper bounds of $X$ . Suppose we could find an element $x\in X$ such that $x>1$ , i.e., $$ x=\dfrac{n}{n+m^2+1} >1$$ $$ \Rightarrow n > n+m^2+1,$$ a contradiction, as $m\in \mathbb{N}$ . So $1$ is an upper bound of the given set $X$…
|sequences-and-series|analysis|
0
Expected Value of Infinite Geometric Sequence
Find the sum of the following series: $\frac{7}{10} \sum_{n=1}^{\infty}{n\,(3/10)^n}$ . The original question is: You have a 30% chance to gain an extra "reroll" after every reroll. What is the expected number of rerolls after one reroll? For context, this is from the game TFT and is the augment "Golden Ticket". I'm interpreting this as $\frac{7}{10} \sum_{n=1}^{\infty}{n\,(3/10)^n}$ . I'm taking the derivative of $\sum_{n=1}^{\infty}{(3/10)^n}$ because that is just $\frac{A_1}{1-r}$ . This gives us $\sum_{n=1}^{\infty}{n\,(3/10)^{n-1}}$ , and then multiplying by $3/10$ and also $7/10$ gets back to my original equation. However my answer is $9/70$ , which makes no sense because the probability of getting only one free roll is $21/100$ . Thanks.
Your equation seems incorrect. The number of rolls is always one more than the number of rerolls. So it should be $(n+1)$ in the summation of GP, not $(n)$ , where n goes from $0$ to infinity. The correct answer is $\textbf{$\frac{10}7$}$ . Let's solve it using your method of GP and using an intuitive faster method as well. Method $1$ : Geometric Progression $E = \frac{7}{10}\cdot1 + \frac{3}{10}\cdot\frac{7}{10}\cdot2 + (\frac{3}{10})^2\cdot\frac{7}{10}\cdot3 +...$ $=> E = \frac{7}{10} \cdot \left( 1 + \frac{3}{10}\cdot2 + (\frac{3}{10})^2\cdot3 +... \right)$ $=> E = \frac7{10} \cdot \left( \sum_{n=0}^{\infty}{(n+1)\cdot (\frac3{10})^n} \right)$ $=> E = \frac7{10} \cdot \left( \frac{100}{49} \right)$ [You can look at the calculations for this step here ] $=> E = \frac{10}{7}$ Method $2$ : Recursion Note that we can define the entire EV recursively as follows: $$E = \frac{7}{10} \cdot 1 + \frac{3}{10} \cdot (1 + E)$$ $$=> E = \frac{10}{7}$$
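Both methods above can be checked numerically (a quick sketch; the truncation at 200 terms is my arbitrary choice):

```python
from fractions import Fraction

# Truncate the series E = (7/10) * sum_{n>=0} (n+1)(3/10)^n and compare with
# both the closed form 10/7 and the recursion E = 7/10 * 1 + 3/10 * (1 + E).
p = 0.3
series = 0.7 * sum((n + 1) * p**n for n in range(200))
print(series)  # close to 10/7 ≈ 1.428571

E = Fraction(10, 7)
assert Fraction(7, 10) + Fraction(3, 10) * (1 + E) == E  # the recursion holds exactly
```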
|sequences-and-series|
0
If $\rho_{AB}$ is separable then the partial transpose w.r.t. A is PSD
Def: The partial transpose w.r.t. $A$ of a linear operator $\rho_{AB}$ over a Hilbert space $H_A \otimes H_B$ is defined for a product operator $\rho_{AB}=\rho_A \otimes\rho_B$ as $\rho^{T_A}_{AB}=\rho_A^T \otimes\rho_B$ ; the definition extends to a general linear operator by linearity. I want to prove the partial transpose test: if $\rho_{AB}$ is separable (unentangled), then the partial transpose w.r.t. $A$ is PSD (positive semidefinite). My try: My definition of PSD is that a Hermitian operator is PSD if it has non-negative eigenvalues. Assume that $\rho_{AB}=\rho_A \otimes\rho_B$ . Since $\rho_A^T$ and $\rho_A$ have the same eigenvalues, they both have non-negative eigenvalues. I think I can say they are both PSD, but they still have to be Hermitian for that, and it looks like the transpose is not Hermitian: $(\rho_A^T)^\dagger=\overline{(\rho_A^T)^T}=\overline{\rho_A}$ . Furthermore, how do I conclude that $\rho^{T_A}_{AB}=\rho_A^T \otimes\rho_B$ is Hermitian and has non-negative eigenvalues?
The transpose of a Hermitian matrix is again Hermitian, and your attempt got quite close. All you have to do is, essentially, the same calculation again: $$ (\rho^T)^\dagger=\overline{(\rho^T)^T}=\overline{\rho}=\overline{\rho^\dagger}=\overline{\overline{\rho^T}}=\rho^T $$ (In the third step we used that $\rho$ is Hermitian)
|eigenvalues-eigenvectors|hilbert-spaces|quantum-mechanics|hermitian-matrices|quantum-information|
0
On average, how many times must I roll a dice until I get a $6$?
On average, how many times must I roll a dice until I get a $6$? I got this question from a book called Fifty Challenging Problems in Probability. The answer is $6$, and I understand the solution the book has given me. However, I want to know why the following logic does not work: The chance that we do not get a $6$ is $5/6$. In order to find the number of dice rolls needed, I want the probability of there being a $6$ in $n$ rolls being $1/2$ in order to find the average. So I solve the equation $(5/6)^n=1/2$, which gives me $n=3.8$-ish. That number makes sense to me intuitively, where the number $6$ does not make sense intuitively. I feel like on average, I would need to roll about $3$-$4$ times to get a $6$. Sometimes, I will have to roll less than $3$-$4$ times, and sometimes I will have to roll more than $3$-$4$ times. Please note that I am not asking how to solve this question, but what is wrong with my logic above. Thank you!
Consider an experiment with $n$ outcomes $P_1,\ldots,P_n$ , uniformly distributed. Let $m$ be the expected number of tries until a specific outcome, say $P_k$ , occurs. Since all outcomes occur with the same probability $1/n$ , we must have $m\geq n$ ; otherwise $P_k$ would occur with higher probability than the others, a contradiction. On the other hand, if $m>n$ , then the pigeonhole principle guarantees the existence, in any sequence of tries, of some $P_i$ that occurs twice before $P_k$ , which is again impossible as the distribution is uniform. Thus $m=n$ .
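A simulation makes the asker's confusion concrete: $(5/6)^n=1/2$ locates the median of the number of rolls (which is $4$, since $P(N\le 3)\approx 0.42$ while $P(N\le 4)\approx 0.52$), whereas the mean is pulled up to $6$ by the long tail. A quick seeded sketch (sample size is my arbitrary choice):

```python
import random

random.seed(1)

def rolls_until_six():
    """One experiment: count rolls of a fair die until a six appears."""
    n = 0
    while True:
        n += 1
        if random.randrange(6) == 5:   # one face in six
            return n

samples = [rolls_until_six() for _ in range(200000)]
mean = sum(samples) / len(samples)
median = sorted(samples)[len(samples) // 2]
print(round(mean, 2), median)   # mean ≈ 6, median 4: the two answers differ
```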
|probability|expectation|dice|
0
Ring of regular functions on a principal open set
Let $X$ be a closed subset of $\mathbb{A}^n$ (with ground field $k$) and $f\in k[T_1,...,T_n]$. Let $V(f)$ denote the zeros of $f$ and $D(f)$ denote $X\backslash V(f)$. How can I prove that $k[D(f)]=k[X][f^{-1}]$? According to Shafarevich, I should be able to prove this using the fact that $D(f)$ is isomorphic to the closed set $Z$ in $\mathbb{A}^{n+1}$ defined by the equations defining $X$ plus the equation $f(T_1,...,T_n)T_{n+1}=1$. But I don't know how to give a rigorous proof. Thanks in advance.
$D(f)$ is an affine variety because it is isomorphic to a closed subset of $\mathbb{A}^{n+1}$ according to the construction you mentioned. Explicitly, suppose $G_1=\cdots=G_m=0$ are the equations of $X\subset \mathbb{A}^n$ , then $D(f)\subset \mathbb{A}^{n+1}$ is the vanishing locus of the following polynomials \begin{align*} F_i(t_1,\ldots,t_{n+1})&:=G_i(t_1,\ldots,t_n)\quad i=1,\ldots,m\\ F_{m+1}(t_1,\ldots,t_{n+1})&:=f(t_1,\ldots,t_n)t_{n+1}-1. \end{align*} In this case, the regular functions on $D(f)$ is given by $$k[D(f)]:=\frac{k[t_1,\ldots,t_{n+1}]}{(F_1,\ldots,F_{m+1})}=\frac{k[t_1,\ldots,t_n][t_{n+1}]}{(G_1,\ldots,G_m,ft_{n+1}-1)}\cong \frac{k[t_1,\ldots,t_n]}{(G_1,\ldots,G_m)}[t_{n+1}]/(ft_{n+1}-1)\cong k[X][f^{-1}],$$ where the nontrivial isomorphism could be proven by computing the kernel of $$\Phi:k[t_1,\ldots,t_n][t_{n+1}]\overset{}{\to} \frac{k[t_1,\ldots,t_n]}{(G_1,\ldots,G_m)}[t_{n+1}]\overset{}{\to} \frac{k[t_1,\ldots,t_n]}{(G_1,\ldots,G_m)}[t_{n+1}]/(ft_{n+1}-1).$$
|algebraic-geometry|
0
What does it mean to say that similar matrices represent the same linear map under (possibly) different bases?
For example: how exactly do $$\begin{pmatrix} 0 & -1 \\ 1 &0\end{pmatrix}$$ and $$\begin{pmatrix} 2 & -5/3 \\ 3 & -2\end{pmatrix}$$ represent the same linear map? I do understand how to express the same vector under different bases. I believe my question is the same as "What does it mean to be the same linear transformation in different bases?" However, I do not understand how the one current answer is meant to answer the question, which may very well be because I'm missing some insight into the implications of the statements made in the answer. I'm having trouble figuring out what it means to say that similar matrices represent the same linear map under (possibly) different bases. Firstly, I don't even have a good guess as to what is meant by the basis corresponding to a particular matrix representation of a linear map. I had the following idea as to one way to make the concept clear: first, come up with two non-identical similar $2\times 2$ matrices $A$ and $B$ . Then, point out the bases…
$A,B$ are 'similar' when $A = P^{-1} B P$ for a single invertible change of basis $P$, used on both sides. (If one allows different invertible transforms $U,V$ on the two sides, $A = U^{-1} B V$, one gets the weaker notion of equivalence of maps between two spaces, whose only invariant is the dimension of the kernel, i.e. the rank.) For equivalence, calling all such maps "the same map in different bases" is certainly a misleading formulation, at least in non-categorical language usage. Don't try to tell that deep truth to an optical specialist. It's a classical case of Goethe's view of pure mathematics: "Mathematicians are like Frenchmen." The productive potential of the definition of similarity evolves when it is applied to chains of linear maps, characterized by their kernels and cokernels.
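For the two matrices in the question, similarity can be certified with exact arithmetic: both satisfy $M^2=-I$, i.e. both have characteristic polynomial $\lambda^2+1$, which is irreducible over $\mathbb{R}$, and two real $2\times 2$ matrices with the same irreducible characteristic polynomial are similar. A small check (my own sketch):

```python
from fractions import Fraction as Fr

def matmul(X, Y):
    """2x2 matrix product with exact rational entries."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[Fr(0), Fr(-1)], [Fr(1), Fr(0)]]
B = [[Fr(2), Fr(-5, 3)], [Fr(3), Fr(-2)]]
minus_I = [[Fr(-1), Fr(0)], [Fr(0), Fr(-1)]]

det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

assert matmul(A, A) == minus_I and matmul(B, B) == minus_I   # both square to -I
assert A[0][0] + A[1][1] == B[0][0] + B[1][1] == 0           # equal traces
assert det(A) == det(B) == 1                                 # equal determinants
print("A and B share the characteristic polynomial x^2 + 1")
```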
|linear-algebra|
0
Compute the exterior dihedral angle between two hyperplanes
Let's say, in $d$ -dimensional space, we have two hyperplanes: $\mathbf{n_1}\cdot\mathbf{x}+b_1=0$ and $\mathbf{n_2}\cdot\mathbf{x}+b_2=0$ respectively, and their normal directions satisfy $\Vert\mathbf{n_1}\Vert=1$ and $\Vert\mathbf{n_2}\Vert=1$ . We define the exterior as the intersection of two positive hyperplane regions: $\{\mathbf{x}\in\mathbb{R}^d\mid\mathbf{n_1}\cdot\mathbf{x}+b_1\ge 0 \;\text{and} \; \mathbf{n_2}\cdot\mathbf{x}+b_2\ge 0\}$ . We observe these two hyperplanes from the exterior, so the "exterior dihedral angle" could be 180°. My question is how to determine the type of the angle. The above figure illustrates the three types of "exterior dihedral angle" in 2D. We can of course observe their types in 2D by eye, but is there a computational method (e.g. an algorithm) to determine their types in a general $d$ -dimensional space?
Update: the hyperplanes should be half-hyperplanes intersecting at a ridge (e.g. the hinge point shown above). I found an answer; it is surprisingly simple. Basic idea: we just sample one point on each half-hyperplane, say points $P_1$ and $P_2$ , and we check whether $P_1$ lies in the positive region of the second hyperplane and whether $P_2$ lies in the positive region of the first hyperplane; the other cases are invalid. If they are both in the positive regions, then the angle is
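A minimal sketch of this sampling test in Python (the function names, the tolerance `tol`, and the concrete 2D example are my own; the mapping from the sign pattern to the three angle types is cut off in the update above, so the code only reports the signs):

```python
def side(n, b, p):
    # signed value of the hyperplane's affine function n·p + b at point p;
    # nonnegative means p lies in the positive region of that hyperplane
    return sum(ni * pi for ni, pi in zip(n, p)) + b

def both_positive(n1, b1, p1, n2, b2, p2, tol=1e-12):
    # p1 is a sample point on half-hyperplane 1 away from the ridge,
    # p2 one on half-hyperplane 2; check each against the other plane
    return side(n2, b2, p1) >= -tol and side(n1, b1, p2) >= -tol

# 2D example: positive regions y >= 0 and x >= 0, exterior = first quadrant
n1, b1 = (0.0, 1.0), 0.0
n2, b2 = (1.0, 0.0), 0.0
p1 = (1.0, 0.0)   # on half-hyperplane 1, off the ridge (the origin)
p2 = (0.0, 1.0)   # on half-hyperplane 2
print(both_positive(n1, b1, p1, n2, b2, p2))
```

Sampling the "wrong" half of a hyperplane (e.g. `(-1.0, 0.0)` instead of `p1`) flips the corresponding sign, which is exactly the dimension-independent information the test extracts.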
|geometry|algorithms|computational-geometry|
1
Prove $(a+b+c-3)\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}-3\right)+abc+\frac{1}{abc}\ge 2.$
For positive real numbers $a,b,c$ , prove $$(a+b+c-3)\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}-3\right)+abc+\frac{1}{abc}\ge 2.$$ I've tried assuming $a+b+c\ge 3.\quad(1)$ By AM-GM $$abc+\frac{1}{abc}\ge 2$$ and we need to prove $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}-3\ge 0$$ By Cauchy-Schwarz $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\ge \frac{9}{a+b+c}$$ but $$\frac{9}{a+b+c}\ge 3 \iff a+b+c\le 3$$ which gives a contradiction to $(1).$ Similarly when $a+b+c<3$ . I cannot find a good approach for either assumed condition. Could you please give some hint to solve it? Thank you.
Here we need to assume that $a,b,c>0$ , and this condition will be used later in this proof. Since we have the inequality $$a+b+c \geq 3\sqrt[3]{abc}$$ with $''=''$ iff $a=b=c$ , we get $$(a+b+c-3)(\frac{1}{a} +\frac{1}{b} + \frac{1}{c} -3) + abc + \frac{1}{abc}$$ $$\geq (3\sqrt[3]{abc} - 3)(3\frac{1}{\sqrt[3]{abc}} - 3) + abc + \frac{1}{abc} \cdots (*) $$ $$= 9(t-1)(\frac{1}{t} -1 ) + t^3 + \frac{1}{t^3}$$ $$= t^3 + \frac{1}{t^3} - 9(t + \frac{1}{t}) + 18$$ where $t = \sqrt[3]{abc}$ . And then, since $$(t+\frac{1}{t})^3 = t^3 + \frac{1}{t^3} + 3(t+ \frac{1}{t})$$ we have $$t^3 + \frac{1}{t^3} - 9(t + \frac{1}{t}) + 18$$ $$= (t+\frac{1}{t})^3 - 3(t+ \frac{1}{t})- 9(t + \frac{1}{t}) + 18$$ $$= (t+\frac{1}{t})^3 -12(t+ \frac{1}{t}) + 18$$ Let $x = t+\frac{1}{t} \geq 2$ with $''=''$ iff $t = 1$ , and set $$f(x) = x^3 - 12x + 18$$ we have $$f'(x) = 3x^2 - 12$$ and we can see that when $x \geq 2$ , $f'(x) \geq 0$ , and this means $f(x)$ is increasing when $x \geq 2$ . So we have $
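A quick numerical sanity check of the one-variable reduction $f(x)=x^3-12x+18\ge 2$ for $x\ge 2$ , and a random spot check of the original inequality (sampling ranges are arbitrary choices; this is not a proof):

```python
import random
random.seed(1)

# one-variable reduction: with x = t + 1/t >= 2, the claim becomes f(x) >= 2
def f(x):
    return x**3 - 12*x + 18

assert f(2) == 2   # equality case x = 2, i.e. t = 1, i.e. a = b = c = 1
assert min(f(2 + i * 0.001) for i in range(5001)) >= 2

# spot check of the original inequality for random positive a, b, c
def lhs(a, b, c):
    return (a + b + c - 3) * (1/a + 1/b + 1/c - 3) + a*b*c + 1/(a*b*c)

for _ in range(20000):
    a, b, c = (random.uniform(0.05, 5) for _ in range(3))
    assert lhs(a, b, c) >= 2 - 1e-9
```

Note that equality also holds along the whole curve $b=c=1$ (the identity $(a-1)(\frac1a-1)+a+\frac1a=2$ ), so the bound is tight far away from $a=b=c$ as well.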
|inequality|contest-math|
0
Stuck trying to simplify a logical expression
Simplify: $\neg[p \to \neg(p \land q)] $ My Attempt: 1: $\neg(p \lor \neg(p \land q)) \quad \textit{Implication Law} $ 2: $\neg((p \lor \neg p) \lor q) \quad \textit{First de Morgan's Law} $ 3: $\neg(\neg p \lor \neg q) \quad \textit{Second Idempotent Law} $ I am stuck, and cannot decide on the next step. My mind thinks of the Double Negation law or alternatively one of the de Morgan's laws.
$\phi\rightarrow \psi$ is logically equivalent to $\neg \phi \lor \psi$ (sometimes called the implication law ). $\therefore \neg(p\rightarrow \neg(p\land q)) = \neg(\neg p \lor \neg(p\land q))$ $\neg(\neg \phi \lor \neg \psi)$ is logically equivalent to $\phi\land\psi$ $\therefore \neg(\neg p \lor \neg (p\land q)) = p \land (p\land q)$ (this is one of De Morgan's Laws ). $\land$ is associative , IE: $\phi\land(\psi\land\chi)$ is equivalent to $(\phi\land\psi)\land\chi$ , So we can write $p\land(p\land q)$ as $(p\land p)\land q$ . This $p\land p$ is equivalent to just $p$ . Therefore the answer is $p\land q$ . You might have figured out this last step by looking at $p\land(p\land q)$ but this is the more detailed reason why. This can be verified using a truth table.
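The whole chain can also be verified mechanically, as the answer suggests, by a four-row truth table; a small Python sketch using `not`/`and`/`or` for the connectives:

```python
from itertools import product

# phi -> psi is logically (not phi) or psi;
# check that ¬(p -> ¬(p ∧ q)) has the same truth table as p ∧ q
equivalent = all(
    (not ((not p) or (not (p and q)))) == (p and q)
    for p, q in product([False, True], repeat=2)
)
print(equivalent)
```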
|logic|
0
What is meant by a flat morphism of relative dimension n?
I am trying to learn about etale morphisms, and one definition that is given often is a smooth morphism of relative dimension $0$ . I know that relative dimension can be defined generally in the case of flat morphisms, but there seems to be some confusion in the literature about the definition. Can anyone tell me definitively which of the following is actually meant by "relative dimension $n$ ": Every non-empty fiber has dimension $n$ , every non-empty fiber has pure dimension $n$ , every fiber has dimension $n$ , or every fiber has pure dimension $n$ . I have noticed some sources do say literally every fiber, but then a corollary of this is that any flat morphism is surjective, since otherwise some fibers would be empty. So it seems more likely that it is only referring to non-empty fibers.
There's an equivalent definition to KReiser's one (see the Lemma below). For completeness, I will include the proof of their equivalence. The proof arose from the discussion I maintained with KReiser here . If $f:X\to S$ is a morphism of schemes and $s\in S$ , we denote by $X_s$ the scheme-theoretic fiber at $s$ . Recall that given a topological space $T$ , the Krull dimension of $T$ at $t\in T$ is defined as: $$ \dim_tT=\min\{\dim U\mid U\subset T\text{ open neighborhood of }t\}. $$ Lemma. Let $f:X\to S$ be a morphism of schemes locally of finite type, and let $d$ be a non-negative integer. Then $d=\dim_x X_{f(x)}$ for all $x\in X$ if and only if all non-empty fibers of $f$ are equidimensional of dimension $d$ . Exercise. A topological space $X$ is said to have locally finitely many irreducible components if each point of $X$ has a neighborhood with finitely many irreducible components. For such a space, show that if $Z\subset X$ is an irreducible component, then there is a non-empty o
|algebraic-geometry|schemes|
0
How to evaluate $\int_{-\infty}^{+\infty}\frac{\cos x}{\left(1+x+x^2\right)^2+1}\mathrm{~d}x$
Question $$\int_{-\infty}^{+\infty}\frac{\cos x}{\left(1+x+x^2\right)^2+1}\mathrm{~d}x$$ Wolfram alpha says it is $$\int_{-\infty}^{\infty} \frac{\cos(x)}{\left(1 + x + x^2\right)^2 + 1} \,dx = \frac{\pi (1 + 2\sin(1) + \cos(1))}{5e} \approx 0.745038$$ My try \begin{align} &\quad\int_{-\infty}^{+\infty}\frac{\cos x}{\left(1+x+x^2\right)^2+1}\mathrm{~d}x\\ &=\underbrace{\int_{-\infty}^{0}\frac{\cos x}{\left(1+x+x^2\right)^2+1}\mathrm{~d}x}_{x\to-x}+\int_{0}^{+\infty}\frac{\cos x}{\left(1+x+x^2\right)^2+1}\mathrm{~d}x\\ &=\int_{0}^{\infty}\left[\frac{\cos x}{\left(1+x+x^2\right)^2+1}+\frac{\cos x}{\left(1-x+x^2\right)^2+1}\right]\mathrm{~d}x\\ &=\int_{0}^{\infty}\frac{\cos x}{\left(1+x^2\right)}\left[\frac{1}{2+2x+x^2}+\frac{1}{2-2x+x^2}\right]\mathrm{~d}x\\ &=2\int_{0}^{\infty}\frac{\left(2+x^2\right)\cos x}{\left(1+x^2\right)\left(4+x^4\right)}\mathrm{~d}x\\ &=\frac{2}{5}\int_0^\infty\frac{\cos x}{1+x^2}\mathrm{~d}x+\frac{12}{5}\int_0^\infty\frac{\cos x}{4+x^4}\mathrm{~d}x\\ &\quad-\fr
Noting that $$ \left(1+x+x^2\right)^2+1 = \left[\left(x^2+1\right)^2+2 x\left(x^2+1\right)+x^2\right]+1 = \left(x^2+1\right)\left(x^2+2 x+2\right), $$ the integrand has $4$ simple poles at $\pm i$ and $-1\pm i$ , of which $i$ and $-1+i$ lie in the upper half-plane. Using contour integration in the anti-clockwise direction along the path $$\gamma=\gamma_{1} \cup \gamma_{2} \textrm{ where } \gamma_{1}(t)=t+i0 \;(-R \leq t \leq R) \textrm{ and } \gamma_{2}(t)=R e^{i t}\; (0\leq t\leq \pi),$$ and letting $R\to\infty$ , we have $$ \begin{aligned} &\int_{-\infty}^{+\infty} \frac{\cos x}{\left(1+x+x^2\right)^2+1} d x \\ =&\Re \int_\gamma \frac{e^{iz}}{\left(z^2+1\right)\left(z^2+2 z+2\right)} d z \\ =&\Re\left[2 \pi i \left(\lim _{z \rightarrow i}(z-i) \frac{e^{iz}}{\left(z^2+1\right)\left(z^2+2 z+2\right)}+\lim _{z \rightarrow-1+i}(z+1-i) \frac{e^{iz}}{\left(z^2+1\right)\left(z^2+2 z+2\right)}\right)\right] \\ =&2 \pi \Re\left[i \left(\lim _{z \rightarrow i} \frac{e^{iz}}{(z+i)\left(z^2+2 z+2\right)} + \lim _{z \rightarrow-1+i} \frac{e^{iz}}{\left(z^2+1\right)(z+1+i)}\right)\right] \\ =&2\pi
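A numerical sanity check of the closed form $\frac{\pi(1+2\sin 1+\cos 1)}{5e}$ (plain composite Simpson rule on a truncated interval; the cutoff $\pm 200$ and step size are my own choices, and the tail beyond the cutoff is of order $10^{-7}$ since the integrand decays like $x^{-4}$ ):

```python
import math

def g(x):
    return math.cos(x) / ((1 + x + x*x)**2 + 1)

# composite Simpson rule on [-200, 200] with 400,000 subintervals (n even)
a, b, n = -200.0, 200.0, 400_000
h = (b - a) / n
s = g(a) + g(b)
for i in range(1, n):
    s += g(a + i*h) * (4 if i % 2 else 2)
num = s * h / 3

closed = math.pi * (1 + 2*math.sin(1) + math.cos(1)) / (5 * math.e)
print(num, closed)
```

Both values come out near the $0.745038$ reported by Wolfram Alpha.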
|calculus|integration|definite-integrals|closed-form|
0
Solving XOR matrix
I have these XOR equations: (x1 ⊕ x2 = 1), (x2 ⊕ x3 = 1), (x3 ⊕ x4 = 1), (x4 ⊕ x5 = 1), (x5 ⊕ x1 = 1). I want to show that I can transform this unsolvable system into another solvable one; for example, if x1 ⊕ x2 = 0 then the new system is solvable. But before I can do this I need to understand how to handle these XOR equations, and I have a few questions about them. What is the rank of the matrix of this system, and how do I calculate it? I am not sure how to handle XOR equations. I know that this system is unsolvable, but why? Here is what I already tried: use a sequence of elementary row operations until each line has exactly one variable with a 1. But that would mean that the rank of this matrix is 5, while I think it is 4. If someone could explain to me how to "take the sum of all the rows and observe that a nontrivial linear combination equals zero", that would be great. Edit: \begin{pmatrix}1&1&0&0&0\\ 0&1&1&0&0\\ 0&0&1&1&0\\ 0&0&0&1&1\\ 1&0&0&0&1\end{pmatrix} \begin{pmatrix}1&1&0&0&0\\ 0
First proof : based on associativity/commutativity property of $\oplus$ : By contradiction : Assume a solution $x_1,x_2,x_3,x_4,x_5$ exists. Let : $b=x_1\oplus x_2\oplus x_3\oplus x_4\oplus x_5\oplus x_1\tag{1}$ Using associativity of $\oplus$ : $b=(x_1\oplus x_2)\oplus (x_3\oplus x_4)\oplus (x_5\oplus x_1)$ we get $$b=1 \oplus 1 \oplus 1 = 1\tag{2}$$ But, as $\oplus$ is commutative (and associative) : $b=(x_1\oplus x_1)\oplus (x_2\oplus x_3)\oplus (x_4\oplus x_5)=0 \oplus 1 \oplus 1=0$ Contradiction with (2)... Second proof : (in a linear algebra spirit) Let $M=\pmatrix{1&1&0&0&0\\0&1&1&0&0\\0&0&1&1&0\\0&0&0&1&1\\1&0&0&0&1}.$ First manner : $$M\underbrace{\pmatrix{1\\1\\1\\1\\1}}_{V_1}=\pmatrix{0\\0\\0\\0\\0}\tag{3}$$ Therefore the kernel of $M$ contains the non zero vector $V_1$ : as a consequence $M$ isn't invertible. Second (equivalent) manner : $$M \underbrace{\pmatrix{1\\0\\0\\1\\0}}_{V_2}=\underbrace{\pmatrix{1\\0\\1\\1\\1}}_{V'_2} \ \ \ \text{and} \ \ \ M\underbrace{\pmatrix{0\
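Both proofs can be checked by brute force in a few lines (the encoding of the five cyclic equations as row tuples is mine):

```python
from itertools import product

rows = [(1,1,0,0,0), (0,1,1,0,0), (0,0,1,1,0), (0,0,0,1,1), (1,0,0,0,1)]

# XOR (sum mod 2) of all five rows is the zero row: a nontrivial
# linear combination equals zero, so the rank is at most 4 ...
col_sums = [sum(r[j] for r in rows) % 2 for j in range(5)]
# ... while the XOR of the right-hand sides 1,1,1,1,1 is 1, i.e. "0 = 1"
rhs_sum = 5 % 2

# brute force over all 2^5 assignments confirms unsolvability
no_solution = not any(
    all((x[i] ^ x[(i + 1) % 5]) == 1 for i in range(5))
    for x in product((0, 1), repeat=5)
)
print(col_sums, rhs_sum, no_solution)
```

This is exactly the first proof in computational form: an odd cycle cannot carry alternating values.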
|linear-algebra|matrices|
0
Solving XOR matrix
i have this XOR equations: (x1 ⊕ x2 =1) (x2 ⊕ x3 =1) (x3 ⊕ x4 =1) (x4 ⊕ x5 =1) (x5 ⊕ x1 =1) i want to show that i can transform this unsolvable matrix into antoher solveable matrix , for example if x1 ⊕ x2 =0 then the new matrix is solveable. but before i can do this i need to understand how to handle this xor equations. i have a few questions about it. what is the rank of this equations matrix and how to calculate it ? iam not sure how to handle this xor equations. i know that this matrix is unsolvable , but why ? here is what i already tried, use a sequence of elemetrary row operations and then in each line there is exactly one variable with 1. but its mean that the rank of this matrix is 5 , but i think it 4. if someone could explain to me how to "take the sum of all the rows and observe that a nontrivial linear combination equals zero" it will be great. Edit: \begin{pmatrix}1&1&0&0&0\\ \:0&1&1&0&0\\ \:0&0&1&1&0\\ \:0&0&0&1&1\\ \:1&0&0&0&1\end{pmatrix} \begin{pmatrix}1&1&0&0&0\\ \:0
XOR is addition mod $2\,.$ When you add the first row to the last row you get $$ \pmatrix{1&1&0&0&0\\0&1&1&0&0\\0&0&1&1&0\\0&0&0&1&1\\1&0&0&0&1}\longrightarrow \pmatrix{1&1&0&0&0\\0&1&1&0&0\\0&0&1&1&0\\0&0&0&1&1\\\color{red}0&1&0&0&1}\,. $$ Now repeat this with the second, third and fourth row. The resulting matrix has rank $4\,:$ $$\pmatrix{1&1&0&0&0\\0&1&1&0&0\\0&0&1&1&0\\0&0&0&1&1\\\color{red}0&\color{red}0&\color{red}0&\color{red}0&\color{red}0}\,.$$ When we do this with the augmented matrix $$ \pmatrix{1&1&0&0&0&|&1\\0&1&1&0&0&|&1\\0&0&1&1&0&|&1\\0&0&0&1&1&|&1\\1&0&0&0&1&|&1} $$ we get a system that is clearly unsolvable: $$\pmatrix{1&1&0&0&0&|&1\\0&1&1&0&0&|&1\\0&0&1&1&0&|&1\\0&0&0&1&1&|&1\\\color{red}0&\color{red}0&\color{red}0&\color{red}0&\color{red}0&|&\color{red}1}\,$$ because of $1+1+1+1+1\mod 2=\color{red}1\,.$
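A sketch of this elimination in Python, with each row stored as an integer bitmask over GF(2) (the `gf2_rank` helper is a standard bit-twiddling trick, not taken from the answer):

```python
def gf2_rank(rows):
    # Gaussian elimination over GF(2); each row is an int bitmask
    rows = list(rows)
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot          # lowest set bit of the pivot row
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

M = [0b11000, 0b01100, 0b00110, 0b00011, 0b10001]
aug = [r << 1 | 1 for r in M]         # right-hand side 1 appended as bit 0
print(gf2_rank(M), gf2_rank(aug))
```

The coefficient matrix has rank 4 while the augmented matrix has rank 5, which is exactly the inconsistency shown by the red row above.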
|linear-algebra|matrices|
0
How to show a decreasing sequence is convergent and how to find its limit?
I just started studying real analysis and am having some trouble with this question: Show that the sequence given by $x_{n+1} = \frac{1}{5-x_n}$ , $x_0=4$ converges and find its limit. I can observe that the sequence is decreasing and bounded below and that $\lim_{n\rightarrow\infty}x_{n+1}=0$ . Since $x_1 >x_2$ can I assume that $x_n > x_{n+1}$ then by induction $x_{n+1} > x_{n+2}$ so therefore the sequence is monotone decreasing. Would this be correct? Knowing the limit is $0$ and $x_1\ge 0$ and $x_0\ge 0$ can I assume that $x_n\ge 0$ , so then $\frac{1}{5-x_n}\ge0$ , so $x_{n+1}\ge0$ . Hence the sequence is bounded below by 0? Therefore by the monotone convergence theorem the sequence converges. Is any of this correct or on the right track? I feel like I'm missing something or need more explaining in some of the induction bits, and I'm not entirely sure if I'm using things correctly. I would really appreciate any help. Thank you!
Let $P_n$ be the statement $0 < x_n \le 4$ . Surely $P_0$ is true, since $x_0=4$ . Suppose $P_n$ is true and consider $x_{n+1}=\frac 1{5-x_n}$ . Since $5-x_n$ lies between $1$ and $5$ , it follows that $x_{n+1}$ lies between $\frac 1 5$ and $1$ ; in particular $0 < x_{n+1} \le 4$ . By induction, $P_n$ holds for every $n$ . For monotonicity, note first that $x_1=\frac 1{5-4}=1\le 4=x_0$ ; and if $x_{n+1}\le x_n$ , then $5-x_{n+1}\ge 5-x_n>0$ , so $x_{n+2}=\frac 1{5-x_{n+1}}\le \frac 1{5-x_n}=x_{n+1}$ . Thus, $(x_n)$ is monotonically decreasing and bounded below by $0$ . Hence it is convergent. Finding the limit is easy: The limit $l$ must satisfy the equation $\frac 1 {5-l} =l$ which gives $l^{2}-5l+1=0$ . The roots are $\frac {5\pm \sqrt {21}} 2$ . But $\frac {5+ \sqrt {21}} 2>4$ so the limit has to be $\frac {5- \sqrt {21}} 2$ .
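A quick numerical check of the monotonicity claim and the limit value (not a substitute for the induction proof; the iteration count is arbitrary):

```python
import math

# iterate x_{n+1} = 1 / (5 - x_n) starting from x_0 = 4
xs = [4.0]
for _ in range(60):
    xs.append(1.0 / (5.0 - xs[-1]))

# monotonically decreasing (tiny tolerance for float dithering) and positive
assert all(b <= a + 1e-15 for a, b in zip(xs, xs[1:]))
assert all(x > 0 for x in xs)

limit = (5 - math.sqrt(21)) / 2
print(xs[-1], limit)
```

The iterates settle at about $0.20871$ , matching $\frac{5-\sqrt{21}}2$ .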
|real-analysis|sequences-and-series|limits|convergence-divergence|
0
Evaluating $\int_0^1 \log \log \left(\frac{1}{x}\right) \frac{dx}{1+x^2}$
Show that $\displaystyle{\int_0^1 \log \log \left(\frac{1}{x}\right) \frac{dx}{1+x^2} = \frac{\pi}{2}\log \left(\sqrt{2\pi} \Gamma\left(\frac{3}{4}\right) / \Gamma\left(\frac{1}{4}\right)\right)}$ This question was posted as part of this question: Solve the integral $S_k = (-1)^k \int_0^1 (\log(\sin \pi x))^k dx$ I cannot think of a change of variable nor other integrating methods. Maybe there is a known method that I am missing.
Fourier Series of the Log-Gamma Function: For $s \in(0,1)$ , we have $$ \log \Gamma(s)=\left(\frac{1}{2}-s\right)(\gamma+\log 2)-\frac{1}{2} \log (\sin (\pi s))+(1-s) \log (\pi)+\frac{1}{\pi} \sum_{n=1}^{\infty} \frac{\sin (2 \pi n s) \log n}{n} $$ Proof: It suffices to do the following integrals: $$ \begin{aligned} \int_0^1 \log \Gamma(s) d s & =\frac{1}{2} \log (2 \pi) \\ \int_0^1 \log \Gamma(s) \cos (2 \pi n s) d s & =\frac{1}{4 n} \quad \forall n \in \mathbb{Z}^{+} \\ \int_0^1 \log \Gamma(s) \sin (2 \pi n s) d s & =\frac{\gamma+\log (2 n \pi)}{2 n \pi} \quad \forall n \in \mathbb{Z}^{+} \end{aligned} $$ The first integral can be evaluated with the help of Euler's reflection formula: $$ \begin{aligned} \int_0^1 \log \Gamma(s) d s & =\frac{1}{2} \int_0^1 \log \Gamma(s) d s+\frac{1}{2} \int_0^1 \log \Gamma(1-s) d s \\ & =\frac{1}{2} \int_0^1 \log \left(\frac{\pi}{\sin (\pi s)}\right) d s \\ & =\frac{\log (\pi)}{2}-\frac{1}{2} \int_0^1 \log (\sin (\pi s)) d s \\ & =\frac{\log (2 \pi)}{
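Before wading through the Fourier-series proof, the claimed closed form can be sanity-checked numerically with a crude trapezoid rule (the cutoff `1e-8` and grid size are arbitrary choices of mine; both endpoint singularities of the integrand are integrable, so the truncation error is tiny):

```python
import math

def integrand(x):
    # log log(1/x) / (1 + x^2); log-type singularities at x = 0 and x = 1
    return math.log(math.log(1.0 / x)) / (1.0 + x * x)

a, b, n = 1e-8, 1.0 - 1e-8, 500_000
h = (b - a) / n
total = 0.5 * (integrand(a) + integrand(b))
for i in range(1, n):
    total += integrand(a + i * h)
numeric = total * h

closed = (math.pi / 2) * math.log(
    math.sqrt(2 * math.pi) * math.gamma(0.75) / math.gamma(0.25))
print(numeric, closed)
```

Both values come out near $-0.26045$ .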
|calculus|integration|definite-integrals|logarithms|gamma-function|
0
What does it mean to say that similar matrices represent the same linear map under (possibly) different bases?
For example: how exactly do $$\begin{pmatrix} 0 & -1 \\\ 1 &0\end{pmatrix}$$ and $$\begin{pmatrix} 2 & -5/3 \\\ 3 & -2\end{pmatrix}$$ represent the same linear map? I do understand how to express the same vector under different bases. I believe my question is the same as What does it mean to be the same linear transformation in different bases? However, I do not understand how the one current answer is meant to answer the question, which may very well be because I'm missing some insight as to the implications of the statements made in the answer. I'm having trouble figuring out what it means to say that similar matrices represent the same linear map under (possibly) different bases. Firstly, I don't even have a good guess as to what is meant by the basis corresponding to a particular matrix representation of a linear map. I had the following idea as to one way to make the concept clear: First, come up with two non-identical similar 2x2 matrices $A$ and $B$ . Then, point out the basis $
Let $u_1, \dots, u_n$ be a basis of a vector space $V$ and let $T$ be a linear transformation $V \to V$ . Then we can write each $T u_i$ as a linear combination $\sum_j a_{ji} u_j$ , so writing these coefficients as the $i$ th column we get a matrix $A$ . This is called the matrix representing $T$ in the basis $(u_i)$ . Now suppose $v_1, \dots, v_n$ is another basis of the same vector space $V$ . Then the same recipe gives a new matrix $B$ representing $T$ in the basis $(v_i)$ . We can ask what the relationship is between $A$ and $B$ . Note that we can also write each $v_i$ as a linear combination $\sum_j p_{ji} u_j$ , and taking these coefficients to be the $i$ th column we get an $n \times n$ matrix $P = (p_{ij})$ . It is not hard to check that $PB = AP$ , or $PBP^{-1} = A$ , once you know that $P^{-1}$ exists, ie $P$ is invertible. In fact $P^{-1}$ is given by writing $u_i$ as linear combinations of $v_j$ , inverting the recipe for $P$ as one might expect. In this situation the matr
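To make this concrete with the two matrices from the question: one invertible $P$ satisfying $AP=PB$ is $P=\begin{pmatrix}3&0\\-6&5\end{pmatrix}$ (my own choice, obtained by solving $AP=PB$ ; any solution with $\det P\neq 0$ works). An exact-arithmetic check:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[F(0), F(-1)], [F(1), F(0)]]        # rotation by 90 degrees, standard basis
B = [[F(2), F(-5, 3)], [F(3), F(-2)]]    # the second matrix from the question
P = [[F(3), F(0)], [F(-6), F(5)]]        # columns = new basis v1 = (3,-6), v2 = (0,5)

# P B = A P, i.e. B = P^{-1} A P: B represents the same rotation in basis (v1, v2)
print(matmul(A, P) == matmul(P, B))

# concretely: T(v1) = A v1 = (6, 3) = 2*v1 + 3*v2, giving B's first column (2, 3)
Av1 = [A[0][0]*3 + A[0][1]*(-6), A[1][0]*3 + A[1][1]*(-6)]
print(Av1 == [2*3 + 3*0, 2*(-6) + 3*5])
```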
|linear-algebra|
0
solve $x^3-2x^2-x+1 = 0 \pmod{7^3}$
I'm trying to solve the equation $x^3-2x^2-x+1 = 0 \pmod {7^3} $ . I tried first to find a solution $\mod 7$ , I got that $3$ is a solution , denote $x =$ $3+7t $ . now find a solution $\mod 7^2 $ , I got that for $t= 37 $ , there is a solution , thus $x =$ $3+7\times37 \pmod {49} $ $=$ $ 17 \pmod {49} $ . Now in the same way , denote $x =$ $17 +49t$ , solve for $ 7^3 $ , now here I got that there is no solutions and I can't decide if there is no solution or that I did something wrong. Is there a solution for this equation ?
Let $x = 3 + y$ ; then $P(x) = y^3 + 7 y^2 + 14 y + 7$ . Reducing mod $7$ shows that $7$ divides $y$ . But then each of $y^3$ , $7y^2$ and $14y$ is divisible by $49$ , so reducing mod $49$ would force $49$ to divide $7$ , which is false. There is no solution.
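A brute-force check of both the substitution and the non-existence claim (the helper name `P` is mine; a failure already modulo $49$ is enough, but $343$ is small enough to check directly):

```python
def P(x):
    return x**3 - 2*x**2 - x + 1

# the substitution x = 3 + y from the answer
for y in range(-10, 11):
    assert P(3 + y) == y**3 + 7*y**2 + 14*y + 7

roots_mod_7   = [x for x in range(7)   if P(x) % 7   == 0]
roots_mod_49  = [x for x in range(49)  if P(x) % 49  == 0]
roots_mod_343 = [x for x in range(343) if P(x) % 343 == 0]
print(roots_mod_7, roots_mod_49, roots_mod_343)
```

The only root mod $7$ is $3$ , and it does not lift: there are no roots mod $49$ , hence none mod $343$ .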
|number-theory|elementary-number-theory|
0
Chain rule of the derivative on smooth manifold
Let $M$ be a smooth manifold. We will use the following definitions and results. Consider a local chart $\,x=\big(x_1,\cdots,x_n\big):\,U\subseteq M\longrightarrow\mathbb R^n\,$ at a point $p\in U$ . The partial derivative operator at $p$ is \begin{align*} \frac{\partial}{\partial x_i}\bigg|_p:\ C^\infty(p)\,&\longrightarrow\,\mathbb R \newline f \, &\longmapsto \,\frac{\partial}{\partial x_i}\bigg|_p(f)\,=\,\frac{\partial\big(f\circ x^{-1}\big)}{\partial x_i}\big(x(p)\big)\,=:\,\frac{\partial f}{\partial x_i}(p). \end{align*} The tangent space at $p$ of $M$ is \begin{align*} \left\langle \frac{\partial}{\partial x_1}\bigg|_p,\cdots,\frac{\partial}{\partial x_n}\bigg|_p \right\rangle \,=:\,TM_p. \end{align*} For any smooth curve $\,\alpha:\,I\longrightarrow M$ , the velocity vector of $\alpha$ at $t_0\in I$ is a map \begin{align*} \alpha'(t_0):\ C^\infty(p)\,&\longrightarrow\,\mathbb R \newline f \, &\longmapsto \, \alpha'(t_0)[f]\,=\,\frac{d\big(f\circ\alpha\big)}{dt}(t_0). \end{align*} For any smooth curve $\,\alpha:\,I\longrightarrow M\,$ and local chart $\,x:\,U\longrightarrow\mathbb
Just notice that, by the definition of $\alpha'(t_0)[f]$ , you can apply it to $y_j\circ f$ , which is now an element of $C^{\infty}(p)$ . Therefore, one automatically gets \begin{align} (y_j\circ f\circ\alpha)'(t_0)\,=\,\alpha'(t_0)[y_j\circ f]. \end{align}
|derivatives|differential-topology|smooth-manifolds|
0
Cantor-Bendixson Theorem
I know two different proofs of the Cantor-Bendixson theorem; however, both explicitly construct the perfect set and the countable set directly, without using the fact that closed sets have the perfect set property. I was wondering: assuming that we know that a closed set has the perfect set property (let's say by the Gale-Stewart theorem), is there an easier way to show the Cantor-Bendixson theorem?
I think the complexity is in showing that the other part is countable. Let $X$ be a closed subset of $\Bbb{R}$ . Let $I$ be the subset of isolated points of $X$ . Since $I$ is a set of isolated points of $X$ it is countable (there's a rational in each interval isolating a point in $I$ .) Note that since $X$ is closed in $\Bbb{R}$ , taking a closure of a subset of $X$ relative to $X$ is the same as taking a closure in $\Bbb{R}$ . Now let $X_1 = \overline{I}$ and let $X_2=\overline{X\setminus X_1}=\overline{\operatorname{int}_X(X\setminus I)}$ . The set $X_2$ , if non-empty is perfect. To show this, let $G=\operatorname{int}_X(X\setminus I)$ . If $y\in X_2\setminus G$ then $y$ is a limit point of $G$ , so $y$ is not isolated. If $y\in G$ , assume by contradiction that $y$ is isolated in $X_2$ . Then there is an open set $U$ (of $\Bbb{R})$ such that $U\cap X_2=\{y\}$ . Since $G$ is relatively open in $X$ , there is an open set $V$ (of $\Bbb{R}$ ) such that $V\cap X=G$ . Then $y\in U\cap G
|analysis|elementary-set-theory|set-theory|alternative-proof|descriptive-set-theory|
0
understanding group closure
As we know from the group axioms, for all elements $a, b$ in $G$ , $a * b$ is in $G$ . I am just wondering, if $a=b$ , whether or not $a * b$ is still in $G$ . I am trying to prove that the set $\{1, 2, \ldots, p-1\}$ of $p-1$ integers under multiplication modulo $p$ is not a group when $p$ is not prime. When considering closure, my professor chose $p = 4$ and wrote $2\cdot 2 \bmod 4$ as the counterexample. $2$ and $2$ are the same, so I am not sure why they used the same element here.
Closure of a group $(G,*)$ says that for any two elements $a, b \in G$ , $a*b$ is also in $G$ . There's nothing wrong with taking $a=b$ . Indeed, closure demands that for any $a \in G$ , $a*a=a^2 \in G$ . Here, taking $a=2$ , for closure we require that $2 * 2 = 4 \in G$ , which is not true. We only need to find one counterexample to show that closure does not hold. If we were trying to prove closure instead, we would have to show it is true for any $a,b \in G$ , and then we could not assume that $a=b$ .
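The professor's counterexample in a few lines of Python:

```python
p = 4
G = set(range(1, p))                       # {1, 2, 3} under multiplication mod 4
products = {(a * b) % p for a in G for b in G}
closed_under_mult = products <= G          # closure fails: 2*2 % 4 == 0 is not in G
print(products, closed_under_mult)
```

The single pair $a=b=2$ already breaks closure, which is all a counterexample needs.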
|group-theory|
0
What does it mean to say that similar matrices represent the same linear map under (possibly) different bases?
For example: how exactly do $$\begin{pmatrix} 0 & -1 \\\ 1 &0\end{pmatrix}$$ and $$\begin{pmatrix} 2 & -5/3 \\\ 3 & -2\end{pmatrix}$$ represent the same linear map? I do understand how to express the same vector under different bases. I believe my question is the same as What does it mean to be the same linear transformation in different bases? However, I do not understand how the one current answer is meant to answer the question, which may very well be because I'm missing some insight as to the implications of the statements made in the answer. I'm having trouble figuring out what it means to say that similar matrices represent the same linear map under (possibly) different bases. Firstly, I don't even have a good guess as to what is meant by the basis corresponding to a particular matrix representation of a linear map. I had the following idea as to one way to make the concept clear: First, come up with two non-identical similar 2x2 matrices $A$ and $B$ . Then, point out the basis $
Okay, Read this carefully to know what actually it means when we are saying that " A matrix representing a linear map"... See, Let consider a linear map $T:V \rightarrow V$ where $V$ is finite dimensional vector spaces(I am considering only one vector space for better explanation). Since, this vector space is finite dimensional vector space so it has a bases. Now suppose dimension is $n$ and bases is $\beta=\{x_1,x_2,x_3,\ldots,x_n\}$ . Remember domain and co domain both are same V.S. and we are taking same bases for this calculation (We can take another bases for co domain since we have plenty of them) Now, we find image of each element of this bases $\beta $ as then we get image vectors which are also element of $V$ . Also we know that we can write this image vectors as a linear combination of bases vector as shown below.. $$ T(x_1)=y_1=a_{11}x_1+a_{21}x_2+\ldots a_{n1}x_n$$ $$T(x_2)=y_2=a_{12}x_1+a_{22}x_2+\ldots a_{n2}x_n$$ $~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\v
|linear-algebra|
1
What is the mistake in evaluating $\lim_{n \to \infty } \sum\limits_{k=1}^n \frac{k^n}{n^n} $
I saw this problem: Find $L=\lim_{n \to \infty } \sum\limits_{k=1}^n \frac{k^n}{n^n} $ Solutions to this limit can be found here . I got $L=1.5$ which is wrong but I don't know why & where I went wrong. We should have got $L=e/(e-1)$ My attempt : By Faulhaber's formula $\displaystyle \sum_{k\le n} k ^r = \frac{n^{r+1}}{r+1}+ \frac{n^r}{2} + P_{n-1}(r) $ where $P_m(x) $ is a polynomial of degree $m$ . So $$L=\lim_{n \to \infty }\sum_{k\le n}\frac{k^n}{n^n}= \lim_{n \to \infty }\frac{n}{n+1}+ \frac{n^n}{2n^n}+ \frac{P_{n-1}(n)}{n^n} =1.5$$ Since all the coefficients in Faulhaber's formula have absolute value less than $1$ , the last term should tend to $0$ because $$|\frac{P_{n-1}(n)}{n^n}|
I'll quote a section of the German Wikipedia about Faulhaber's formula (the English version seems not say anything similar to this) Die niedrigen Koeffizienten in Stammbrüchen, wie man sie bei kleinem $k$ aus der Schulmathematik kennt, sind aber für den weiteren Verlauf überhaupt nicht typisch. Bereits bei $k = 11$ tritt zum ersten Mal ein Koeffizient > 1 auf; bei noch höheren Potenzen wird das zur Regel. Grund dafür sind die Bernoulli-Zahlen, die nach einer Reihe von niedrigen Werten stark ansteigen, sogar stärker als jede Exponentialfunktion, und gegen Unendlich gehen. That translates (slightly shortened) to The small coefficients, as known from school math for small values of $k$ , are not typical for higher $k$ . Already at $k=11$ comes the first coefficient bigger than $1$ , for higher powers that becomes the norm. The reason are the Bernoulli numbers which, after some small values initially, increase significantly (even faster than any exponential function) and tend to infinity.
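The growth of the Bernoulli numbers is easy to see with exact arithmetic. A sketch using the standard recurrence $\sum_{k=0}^{n}\binom{n+1}{k}B_k=0$ (with the $B_1=-\tfrac12$ convention; the cutoff $N=30$ is arbitrary):

```python
from fractions import Fraction
from math import comb

N = 30
B = [Fraction(1)]                 # B_0 = 1
for n in range(1, N + 1):
    # B_n = -1/(n+1) * sum_{k<n} C(n+1, k) B_k
    B.append(-Fraction(1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n)))

print(B[12], B[14], B[20])        # -691/2730, 7/6, -174611/330
```

$|B_{14}|=7/6$ is the first even-index Bernoulli number exceeding $1$ , and by $B_{20}\approx -529$ the blow-up quoted above is well under way, which is why the asker's "all coefficients have absolute value less than $1$" step fails.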
|real-analysis|calculus|limits|solution-verification|
0
Do any whole number solutions exist for $a^2 + b^3 = 2^{2023}$
Do any whole number solutions exist for $a^2 + b^3 = 2^{2023}$ ? And if so, would it be possible to give an example of one? This problem is meant to be solved with modular-arithmetic and I could not even get started. Guessing that you must find the right modulus such that no squares or cubes can be added to reach the mod of $2^{2023}$ . For example all squares in mod $8$ are $0$ , $1$ or $4$ . And all cubes have in mod $13$ class $0$ , $\pm1$ , $\pm 5$ . Any help appreciated!
The question is Do any whole number solutions exist for $a^2 + b^3 = 2^{2023}$ One potential solution would be if $a^2 = b^3 = 2^{2022}$ . Note that this has obvious solutions $a = 2^{2022/2} = 2^{1011}$ and $b = 2^{2022/3} = 2^{674}.$ There may be other non-obvious solutions. This is an example of an elliptic curve. The base case in standard Weierstrass form is $$ y^2 = x^3 + 2 \tag1 $$ which is the curve with Cremona label 1728a1 . This curve has the obvious points $(x,y) = (-1,\pm 1).$ Each is a generator for the group of rational points. All other rational points are not integral. In your case, you want something like $$ y^2 = x^3 + 2^{1+6k} \tag2 $$ or equivalently, $$ (y/8^k)^2 = (x/4^k)^3 + 2. \tag3 $$ The problem now becomes to find a solution $(x,y)$ of equation $(1)$ where the denominator of $x$ is $4^k$ which seems to be impossible if $k>0$ .
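The "obvious" solution is quick to verify with exact integer arithmetic:

```python
a = 2**1011
b = 2**674
# a^2 = 2^2022 and b^3 = 2^2022, so a^2 + b^3 = 2 * 2^2022 = 2^2023
assert a**2 == 2**2022 and b**3 == 2**2022
print(a**2 + b**3 == 2**2023)
```

This works because $2023 = 1 + 2022$ and $2022$ is divisible by both $2$ and $3$ .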
|modular-arithmetic|exponentiation|
1
Internal angle sum of a triangle
I teach a 5th grade class geometry, and I came up with the following alternative proof (?) to show that the internal angle sum of a triangle is $180^\circ$ . I remember reading that this result is equivalent to the parallel postulate , however I can't see where I have used this axiom. I would be very happy if anyone could point out my mistake. Here we go: The pencil starts at the side $CB$ The pencil turns clockwise $\angle B$ The pencil turns clockwise $\angle A$ The pencil turns clockwise $\angle C$ . Since the pencil points in the opposite direction it has turned $180^\circ$ (Or maybe $180^\circ+360^\circ\cdot k$ ?)
Define a 3D axis system in space at (0, 0, 0), with X, Y and Z axes drawn visually; a point at (1, 1, 1) lies in one of the eight octants the axes form. Let's trade the 1 for +#, meaning any positive value, and write -# for any negative value. Starting with (+#, +#, +#), if we rotate all values 45 degrees clockwise, we end up at (0, 0, 0): and how? Removing the Z axis from the view and doing the same rotation again, ignoring the Z value, we rotate (+#, +#) clockwise 45 degrees and it becomes (+#, 0); rotate another 45 degrees clockwise and it becomes (-#, -#). What is the significance of this? Any two axis rotations can reach any 3D point. In a 2D situation we observe the same collapse no matter how many axes there are, since rotation requires that two or more coordinates be involved. So what I see is your pencil as if it really collapses. The reason is that every angle has a complementary back angle, i.e. a 45 degree angle has a 315 degree angle clockwise or counterclockwise to reach the same concl
|geometry|solution-verification|euclidean-geometry|
0
For $x,y,z \in \mathbb{R} $, prove that $ x(x+y)^5+y(y+z)^5+z(z+x)^5 \geq \frac{32}{243}(x+y+z)^6 $
For real numbers $x,y,z$ ,prove that $$x(x+y)^5+y(y+z)^5+z(z+x)^5 \geq \frac{32}{243}(x+y+z)^6$$ For $x,y,z>0$ I have a simple solution: By Hölder inequality: $$\sum x(x+y)^5 \geq \frac{(\sum x(x+y))^5}{(\sum x)^4}\ge \frac{(\frac{2}{3}(\sum x)^2)^5}{(\sum x)^4}=\frac{32}{243}(x+y+z)^6 $$ But for $x,y,z \in \mathbb{R}$ the solution failed. How can I prove the inequality for $x,y,z \in \mathbb{R}$ ?
Let $$f(a,b,c)=a(a+b)^5+b(b+c)^5+c(c+a)^5- \frac{32}{243}(a+b+c)^6$$ and the SOS tool gives the following result $$\begin{aligned} f(a,b,c) & = \frac{10}{818181}\sum \left(- 156 a^{3} - 544 a^{2} b + 143 a^{2} c - 403 a b^{2} + 395 a c^{2} + 149 b^{2} c + 260 b c^{2} + 156 c^{3}\right)^{2} \\ & + \frac{1}{4332331332}\left(\sum a \left(34418 a^{2} + 39597 a b - 14871 b^{2} - 59144 b c\right)\right)^{2} \\ & + \frac{1}{45175369079215116}\left(\sum a b \left(31282489 a - 450959 b - 30831530 c\right)\right)^{2} + \frac{48435918625}{218758445577}\left(\sum a b \left(b - c\right)\right)^{2} \end{aligned}$$ By the way, the original problem is true by Buffalo Way. See Wolfram Alpha
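The SOS identity above is unwieldy to re-verify by hand, but the inequality itself can be spot-checked numerically over all reals, including the equality case $a=b=c$ (random sampling with arbitrary ranges; this is not a proof):

```python
import random
random.seed(0)

def lhs(x, y, z):
    return x*(x + y)**5 + y*(y + z)**5 + z*(z + x)**5

def rhs(x, y, z):
    return 32 * (x + y + z)**6 / 243

# equality along x = y = z (both sides are homogeneous of degree 6)
assert abs(lhs(1, 1, 1) - rhs(1, 1, 1)) < 1e-9

for _ in range(20000):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    scale = abs(lhs(x, y, z)) + abs(rhs(x, y, z)) + 1
    assert lhs(x, y, z) - rhs(x, y, z) >= -1e-9 * scale
```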
|inequality|
0
A doubt about a functor in Dummit & Foote's Abstract Algebra
Exercise 3 on page 915 in D&F's Abstract Algebra states: Show that the map from $\mathsf{Ring}$ to $\mathsf{Grp}$ sending a ring to its group of units (i.e., $R\mapsto R^{\times}$ ) defines a functor. But for a non-trivial ring $R$ without $1$ , how do we define $R^{\times}\in\mathsf{Grp}$ ? I think it is an empty set, and so can it be in $\mathsf{Grp}$ ? Furthermore, if $R=0$ is the zero ring, is it true that $R^\times\cong\{0\}$ where $\{0\}$ is the trivial group?
On page 912 the category $\mathbf{Ring}$ is defined to be the category of unital rings (=rings with a unit) with unital morphisms (mapping units to units). On page 226 the group $R^\times$ is defined for rings with unit $1 \neq 0$ . I would assume that for the zero ring, one would define $R^\times$ to be the trivial group.
|abstract-algebra|group-theory|elementary-set-theory|ring-theory|category-theory|
0
An implication of Lipschitz functions
Let the function $f:\mathbb R^n \rightarrow \mathbb R^n$ be Lipschitz with constant $L$ . In a paper, they say the condition that $f:\mathbb R^n \rightarrow \mathbb R^n$ is Lipschitz is equivalent to the condition that there exists a $B<\infty$ such that \begin{align} (i)&: \quad \Vert f(x) \Vert \leq B(1+\Vert x\Vert) \\ (ii)&: \quad (f(x)-f(y))^\top(x-y) \leq B\Vert x-y\Vert^2 \end{align} holds. Condition (i) is clear for me: Lipschitz implies that the function grows at most linearly with a finite $L$ , which implies (i). However, for condition (ii) I do not see how it is fulfilled only requiring that $f$ is Lipschitz. In fact, condition (ii) has for me some similarity with the (reversed) condition for strongly convex functions, implying in this case that the primitive function of $f$ is less than $B \Vert x \Vert^2$ , i.e. less than a quadratic function. This kind of gives the idea that $f$ has to grow at most linearly. However, that is not really a proof. Does someone know why condition (ii) is f
I would say that, thanks to Cauchy-Schwarz inequality, \begin{align} (f(x)-f(y))^{T}(x-y) &\leq \|f(x)-f(y)\|\,\|x-y\|\\ &\leq L\|x-y\|\,\|x-y\|=L\|x-y\|^2 \end{align}
|real-analysis|analysis|continuity|convex-analysis|lipschitz-functions|
1
How to use limits of $f'(x)$ to show that f is differentiable at a given $x_0$
I am trying to prove the following statement (assuming that it is true): Given that a function $f:\mathbb R \longrightarrow \mathbb R$ satisfies the following: Defined in a $\delta$ neighborhood of $0$ Differentiable in the deleted $\delta$ -neighborhood of $0$ , $ \mathit{N}^*_\delta=(-\delta,0) \cup (0,\delta)$ $ \displaystyle \lim_{x \to 0} f'(x) = 1$ Then $f$ is differentiable at $0$ and $f'(0)=1$ I understand that $\displaystyle \lim_{x \to 0} f'(x) = 1$ implies that, either $f'$ is continuous at $x=0$ and then $f'$ exists at $x=1$ and and $f'(0)=1$ , or $f'$ has a removable discontinuity at $x=0$ I am trying to prove that the second option is not possible. Any ideas or sugestions would be great. Thanks!
What if $f(x)$ is defined by $$f(x) = \begin{cases} x \quad &\text{if $x 0$}\\ 1 \quad &\text{if $x=0$} \end{cases}$$ ?
|calculus|
1
Kernel of the action of GL(V) on exterior square of V
I wonder whether anyone knows a reference for the following result? I can give a shortish proof, but would prefer to cite the literature if possible. Theorem Let $V=F^n$ be an $n$ -dimensional $F$ -space where $F$ is a field. The action of ${\rm GL}(V)$ on $\Lambda^2(V)$ induces a homomorphism $\phi_n\colon{\rm GL}_n(F)\to{\rm GL}_d(F)$ of matrix groups where $d=\binom{n}{2}$ . The kernel of $\phi_n$ is ${\rm GL}_1(F)$ if $n=1$ , ${\rm SL}_2(F)$ if $n=2$ , and $\langle -I_n\rangle$ if $n\ge3$ . I will give a proof if there is no easy reference and one is requested.
I can't think of a reference for the exact result, but two possible short arguments. Maybe something like this is what you had in mind, so this might not answer your question. But in any case, this is too long for a comment. Proof 1: there are not many normal subgroups of $GL_n(F)$ in general, so you can just check from this what the kernel has to be. Proof 2: in $\wedge^2(V)$ , for nonzero elements $v \wedge w = v' \wedge w'$ if and only if $\langle v,w \rangle = \langle v', w' \rangle$ in $V$ . This is basically the fact that the Plücker map is injective, which has textbook references. For $n = 2$ the map $\phi_n$ is the determinant map, so the result is clear. Assume $n > 2$ . If $g \in \operatorname{ker} \phi_n$ , then $g$ leaves every $2$ -dimensional subspace of $V$ invariant. This implies that every line $\langle v \rangle$ is invariant under $g$ . Indeed, let $\{v,v',v''\}$ be a linearly independent set. Since $\langle v,v' \rangle$ and $\langle v,v'' \rangle$ are $g$ -invarian
|group-theory|reference-request|multilinear-algebra|exterior-algebra|
0
What is $(-1)^{2r-1}$?
What is $(-1)^{2r-1}$ ? $$(-1)^{2r-1}=\dfrac{(-1)^{2r}}{(-1)}=\dfrac{[(-1)^2]^r}{(-1)}=\dfrac{1^r}{-1}=\dfrac{1}{-1}=-1.$$ Would this be correct? It's quite confusing. Also is it mathematically legal to have a negative sign in the denominator only? Is there a difference being in the denominator vs. the numerator? Thanks.
The other answer focuses on whether or not the simplification is correct, but the question itself asks questions about the representation of rational numbers, which have nothing to do with exponentiation, e.g. "...is it mathematically legal to have a negative sign in the denominator only?" Remember what a "numerator" and "denominator" are. In slightly more advanced language, there is no reason to define a "division" operation. Instead, only multiplication is defined, along with the observation that if $q$ is a rational number, then there exists a multiplicative inverse of $q$ , i.e. another rational number $r$ [1] with the property that $$qr = 1. $$ There are a couple of notations for this multiplicative inverse: generally, one writes either $$ r = q^{-1} \qquad\text{or}\qquad r = \frac{1}{q}. $$ These are two equivalent notations for the same thing. In this context, there is also no need to define a "subtraction" operation. Instead, only addition is defined, along with the analogous o
|algebra-precalculus|
1
Let $VABC$ be a triangular pyramid with $VA<VB<VC$. Prove that there is a point $P$ inside the triangle $ABC$ such that $VP= \frac{VA+VB+VC}{3}$.
The drawing Let $VABC$ be a triangular pyramid with $VA . Prove that there is a point $P$ inside the triangle $ABC$ such that $VP= \frac{VA+VB+VC}{3}$ . The idea Let $VO \perp (ABC)$ The first thought I got is clearly that $AO because $VA and using the Phytaghora Theorem we get this inequality I tried imaging P being in different posts such as the midpoint of $AB,BC,AC$ or as the center of the circumscribed circle or inscribed circle of the triangle, in some of them I got that P doesn't satisfy the equality or got to nothing useful to demonstrate. Hope one of you can help me! Thank you!
Let $D$ be a point on the line segment $AB$ such that $VD=\frac{VA+VB}{2}$ . (Such a point $D$ exists. The reason is as follows : Let $A'$ be a point on the half line $VA$ such that $VA'=\frac{VA+VB}{2}$ . Let $B'$ be a point on the half line $VB$ such that $VB'=\frac{VA+VB}{2}$ . Then, we have $VA'\gt VA$ since $$2VA\lt VA+VB\implies VA\lt \frac{VA+VB}{2}=VA'$$ Similarly, $VB'\lt VB$ since $$2VB\gt VA+VB\implies VB\gt\frac{VA+VB}{2}=VB'$$ Now on the plane $VAB$ , let us consider a circle whose center is $V$ with radius $\frac{VA+VB}{2}$ . We see that $A',B'$ are on the circle. Since $VA'\gt VA$ and $VB'\lt VB$ , there is a point $D$ on the line segment $AB$ such that $D$ is on the circle. For such a point $D$ , we have $VD=\frac{VA+VB}{2}$ .) Let $E$ be a point on the line segment $VD$ such that $VE=\frac 23VD$ . Let $F$ be a point on the line segment $VC$ such that $VF=\frac 13VC$ . Let $G$ be a point on the half line $VD$ such that $VG=VE+VF$ . Let $H$ be a point on the half line $V
|geometry|inequality|
1
Lie algebra cohomlogy of $\mathbb{R}^2$
Consider $\mathbb{R}^2$ as a real Lie algebra. How can we prove that $\mathcal{H}^2 (\mathbb{R}^2)\cong \mathbb{R}$ , the second cohomology space of $\mathbb{R}^2$ ? I really appreciate if someone could help me about it. Thanks in advance.
For an abelian Lie algebra $L$ over a field $K$ , the second cohomology space $H^2(L,K)$ , by definition of the Chevalley-Eilenberg complex, is just given by ${\rm Hom} (\Lambda^2 (L),K)$ which has dimension $\binom{d}{2}$ , where $d=\dim(L)$ .
|lie-algebras|homology-cohomology|
0
Relationship between eigenvalues when the same change is applied to two different adjacency matrices of graphs with the same eigenvalues
I'm attacking to solve the graph isomorphism problem using the adjacent matrices. I would appreciate it if someone could show me whether following conjecture works or not. I expect this conjecture to work. Let $A$ and $B$ be $n \times n$ symmetric matrices. Let the diagonal components of $A$ and $B$ are $0$ , and the other components are $0$ or $1$ . Assume that there is no permutation matrix $P$ such that $A=P^t B P$ . Let $D$ be a diagonal matrix $Diag(d_{11},...,d_{nn})$ , where $d_{ll} > 0$ , and the other diagonal entries are $0$ . Let $d_{ll}$ be an integer. Let the eigenvalues of $A$ and $B$ be $\lambda_i$ , $\mu_i$ , and arrange them in the descending order. Assume that $\lambda_i=\mu_i$ for all $i$ . Let the eigenvalues of $A+D$ and $B+D$ be $\lambda'_i$ , $\mu'_i$ and arrange them in the descending order. Assume that $\lambda'_j \neq \lambda_j$ for more than one $j$ . Also, assume that $\mu'_k \neq \mu_k$ for more than one $k$ . Conjecture. Under these conditions, there exist
I will disprove your conjecture in the case $n=2$ . (A general suggestion: If you have a linear algebra conjecture, try to check it for $2\times 2$ matrices first.) Consider the following family of symmetric $2\times 2$ matrices with nonnegative entries and zero determinant: $$ A=\begin{bmatrix} a& 1 \\ 1 & a^{-1}\end{bmatrix}. $$ Its eigenvalues are $a+a^{-1}=Tr(A)$ and $0$ . For $t > 1$ take the matrix $$ T=\begin{bmatrix} t& 0\\ 0 & t^{-1}\end{bmatrix} $$ and let $B:= T A T^{-1}$ , $$ B= \begin{bmatrix} a& t^{2}\\ t^{-2} & a^{-1}\end{bmatrix}. $$ Its eigenvalues, of course, are the same as that of $A$ . Since $t>1$ , there are no permutation matrices conjugating $A$ and $B$ . Now, take $D=Diag(0,d)$ , the diagonal matrix with diagonal entries $0, d$ , where $d\ne 0$ (the case $D=Diag(d,0)$ is similar). Then $$ A'= A+D, B'=B+D. $$ The new matrices $A', B'$ still satisfy $TA'T^{-1}=B'$ . Hence, $A', B'$ still have the same sets of eigenvalues. At the same time, neither $0$ nor $Tr(A)$
|linear-algebra|matrices|eigenvalues-eigenvectors|symmetric-matrices|
0
Detemine whether the interval [4,8] is well-ordered. Explain.
I don't think this interval is well-ordered because the subset (4,8) would not have a smallest value. I'm stuck on how to show (4,8) has no smallest value.
Assume $(4,8)$ has a smallest value. Name it $x$ . Now $4 . Therefore $x$ is not the smallest value as $y and $y\in(4,8)$ .
|discrete-mathematics|well-orders|
0
$\lim_ {n \to \infty} \int_{0}^{\infty} n\sin(x/n)(x(1+x^2))^{-1}dx$
I'm trying to compute $\lim_ {n \to \infty} \int_{0}^{\infty} n\sin(x/n)(x(1+x^2))^{-1}dx$ . This is exercise 2.28c in Folland. I know all the big convergence theorems (dominated, monotone, etc.), but I'm not sure where to start with this problem. Any hints?
There is nothing wrong with the accepted answer except that I believe this exercise intends to test the understanding of convergence theorems. For the sake of completeness, let us fill in the details of the methods mentioned in the comments section. The Dominated Convergence Theorem asserts that if there is a sequence $f_n$ of integrable functions $(i)$ , such that $f_n \to f$ pointwise a.e. $(ii)$ , and there exists a non-negative $g$ such that $|f_n| \lt g$ a.e. for all $n$ $(iii)$ , then $\int f = lim_{n \to \infty} \int f_n$ . Let $f_{n}(x) = nsin(\frac{x}{n})[x(1+x^2)]^{-1}$ , if one fixed $n \in \mathbb{N}$ , note that $nsin(\frac{x}{n})[x(1+x^2)]^{-1} \lt \frac{1}{1+x^2} $ as $\sin(\frac{x}{n}) \lt \frac{x}{n}$ . And $ \int_{0}^{\infty} \frac{1}{1+x^2} = arctan(x)|_{0}^{\infty} = \frac{\pi}{2} \lt \infty$ . Therefore you have (i). Now fix $x$ , note that $\lim_{n \to \infty} nsin(\frac{x}{n}) = x$ , hence $\lim_{n \to \infty} f_n = \frac{1}{1+x^2}$ . Therefore you have (ii). By
|real-analysis|measure-theory|improper-integrals|
0
Trying to maximize $\int_a^b L(t,q(t),\dot{q}(t)) dt$ subject to $|\dot{q}(t)| = 1$
I am trying to find the differential equation which implies a smooth path $q:[a,b] \rightarrow \mathbb{R}^n$ subject to $|\dot{q}(t)| = 1$ (i.e. $q$ has unit speed) is a stationary point of $$ \int_a^b L(t,q(t), \dot{q}(t)) \ dt $$ for a certain $L(t,q(t), \dot{q}(t))$ . This question and answer here appears to be exactly what I need, but when I tried to apply their formula to an easy example, it didn't work. Did I make a mistake or is their formula wrong? The example I tried to apply it to is below. Following the notation of their question, suppose $F(x,y) = x^2+y^2$ and $g(x',y') = {x'}^2 + {y'}^2 -1 = 0$ and $x(0) = y(0) = 0$ . So you are trying to maximize $$ \int_0^1 x(t)^2 + y(t)^2 dt \tag 1 $$ subject to $(x(t),y(t))$ having unit speed. Because $x(t)^2 + y(t)^2 \leq t^2$ , $(1)$ is less than $\int_0^1 t^2 dt = 1/3$ . On the other hand, this maximal value is obtainable by having $(x(t),y(t))$ move in a straight line at unit speed. However their equation $$ \frac{\partial F}{\part
The problem is simpler in polar coordinates. The augmented Lagrangian, then, is given by $$ L=r^2+\lambda(\dot{r}^2+r^2\dot{\theta}^2-1). \tag{1} $$ The correct Euler-Lagrange equations are $^{(*)}$ \begin{align} \frac{\partial L}{\partial r}-\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{r}}\right)=0 &\implies \left(1+\lambda\dot{\theta}^2\right)r-\dot{\lambda}\dot{r}-\lambda\ddot{r}=0, \tag{2} \\ \frac{\partial L}{\partial \theta}-\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{\theta}}\right)=0 &\implies \frac{d}{dt}(\lambda r^2\dot{\theta})=0 \implies \lambda r^2\dot{\theta}=C, \tag{3} \\ \frac{\partial L}{\partial \lambda}=0 &\implies \dot{r}^2+r^2\dot{\theta}^2-1=0. \tag{4} \end{align} The initial condition $x(0)=y(0)=0$ , or $r(0)=0$ , implies $C=0$ in Eq. $(3)$ . There are, then, three possibilities: $r(t)=0$ : this is not consistent with Eq. $(4)$ , which becomes $\dot{r}^2=1$ ; $\lambda(t)=0$ : Eq. $(2)$ then implies $r(t)=0$ , which we have seen is not consistent with Eq.
|calculus-of-variations|euler-lagrange-equation|
0
Six Children and 3 Flavors of Ice Cream
$\textbf{Q}$ : There are $6$ children each offered a single scoop of any of $3$ flavors of ice cream. In how many ways can each child choose a flavor for their scoop so that some flavor of ice cream is selected by exactly $3$ children? I struggled with this for a bit. I'm counting this as $ 3 \cdot C(6,3) \cdot 2 \cdot 2 \cdot 1$ where $3$ is for the number of colors, $C(6,3)$ is choosing $3$ kids from $6$ to have one flavor and $2 \cdot 2 \cdot 1$ comes from the choices that the last $3$ children have. I'm doing something wrong though as this is not correct. Could someone help point me in the right direction? Thank you.
There are two possibilities that result in some flavor being chosen by exactly three children: three children choose one flavor, two choose another and one chooses the remaining one; three children choose one flavor and the other three choose a second flavor. For the first case, the number of ways to divide up the children like this is $\binom 63\binom32=60$ . Then we can assign the flavors to groups in $3!$ ways, giving a total of $360$ ways. For the second case there are only $\binom 52$ ways to divide up the children into two groups of three, since it is equivalent to choosing two other children to be in the same group as child 1. Then we can assign the flavors to (child 1's group), (other group), (no-one) in $3!$ ways, giving another $60$ ways. Therefore the answer is $420$ .
|combinatorics|
0
From a deck of $52$ cards, we draw $5$ cards. Find the probability that we get at least $4$ face cards if we know that we have an ace and no hearts.
From a deck of $52$ cards, we draw $5$ cards without returning. Determine the probability that we get at least $4$ face cards (w if we know that we have an ace and no hearts. $A_4 =$ we get $4$ face cards (at least one of them is an ace) and $1$ card that is not a figure; there is no heart among all those cards $A_{4.1} = $ we get $1$ ace, $3$ different face cards and $1$ card that is not a figure; there is no heart among all those cards $A_{4.2} = $ we get $2$ aces, $2$ different face cards and $1$ card that is not a figure; there is no heart among all those cards $A_{4.3} = $ we get $3$ aces, $1$ different face card and $1$ card that is not a figure; there is no heart among all those cards $A_5 =$ we get $5$ face cards (at least one of them is an ace); there is no heart among all those cards $A_{5.1} = $ we get $1$ ace, $4$ different face cards; there is no heart among all those cards $A_{5.2} = $ we get $2$ aces, $3$ different face cards; there is no heart among all those cards $A_{
I assume you intend a "figure" card to be one whose rank is J, Q, K, or A. Under that interpretation, most of your calculations make sense up until you get to calculating the counts for your $B$ events. $B_1$ for instance should have been the count of the number of possible hands with exactly once ace and no hearts in hand. You choose which ace it was, and then from all remaining non-heart cards ( of which there are many ) you choose four additional cards to make up the hand. You erroneously used $\binom{3\cdot 3}{4}$ here to count how many ways there were to finish out the hand, but that is the count had the cards to finish out the hand all been face cards. We are not limiting ourselves to face cards here. There are $12$ ranks remaining which are not aces for a total of $3\cdot 12$ non-ace non-heart cards to choose from, many more than the $3\cdot 3$ that you accidentally limited yourself to. Using $3\cdot 12$ here in those calculations for the $B$ 's should correct the mistake.
|probability|combinatorics|card-games|
1
Why is differentiability defined on open interval?
If I have a subset $A \subseteq \mathbb{R}$ , a function $f: A \to \mathbb{R}$ , and a point $a \in A$ , then (according to the definitions I came across) for $f$ to be differentiable at $a$ , then there must exist some open interval $I$ so that $a \in I$ and $I \subseteq A$ . I tried understanding why this requirement exists, by first discarding it and seeing if it naturally arises from the other bits of the definition. NOTATION: For any $x \in X\subseteq \mathbb{R}$ , I define $D[x;X] \subseteq \mathbb{R}$ as: $$D[x;X] = \{ \ h \in \mathbb{R} \setminus \{ 0 \} \ | \ x + h \in X \ \}$$ and for any $\phi: X \to S$ such that $S \subseteq \mathbb{R}$ , I define $\mathrm{Q}^{\phi}_{x} : D[x;X] \to \mathbb{R}$ as: $$\mathrm{Q}^{\phi}_{x}(h) = \frac{\phi(x+h) - \phi(x)}{h}$$ DEFINITION: First I will fix the point $a \in A$ , and then define the derivative of $f$ at $a$ as: $$f'(a) = \lim_{ h \to 0 } \mathrm{Q}^{f}_{a}(h)$$ REFLECTION: If $f'(a)$ exists, then $\lim_{ h \to 0 } \mathrm{Q}^{f}
The idea of derivative, in general, is "linear approximation". In fact, a function is differentiable at $a_0$ if there exists a linear map, which is the derivative, that approximates the function in the neighborhood of $a_0$ . This is represented in the equation $$ f(a_0 + h) - f(a) = \frac{df}{dx}(a_0)h + ε(h); \quad \lim_{h\to 0} \frac{\varepsilon(h)}{h} = 0. $$ To do justice to this idea and to garuantee the function is defined for $a_0 + h$ for any $h$ small enough, we ask for the function be defined in an open set, which is a neighborhood of all of it's point.
|calculus|analysis|
1
Counting crossings of a particular complete bipartite graph
This type of complete bipartite graph has two vertex sets $(V,U)$ where $V$ has points $v_n$ along a straight line parallel to the set of points $u_m$ . How should I go about finding a general formula to count the amount of crossing in this particular type of graph? There are 18 crossings, not including crossings that occur at the sets of vertices on the top and bottom lines themselves, for $K_{4,3}$ . $K_{4,3}$ " />
Let the graph be $K_{n,m}$ . If you fix the $i$ -th vertex on the upper line and the $j$ -th on the lower line, then all lines formed with vertices to the left of $i$ and the right of $j$ will touch this line and also all the lines formed to the right of $i$ and the left of $j$ will touch this line, and so the total crossing will be $$\frac{1}{2}\sum _{\substack{1\leq i\leq n\\1\leq j\leq m}}(i-1)(m-j)+(n-i)(j-1)$$ It is half because you are actually counting each crossing twice. This is equivalent to choose two points on the upper line and two points on the lower line (join them in a cross) so it will be $$\binom{n}{2}\binom{m}{2}.$$
|combinatorics|discrete-mathematics|graph-theory|bipartite-graphs|
0
If $T_1$ and $T_2$ are two well-ordered sets, then $T_1 \cap T_2$ is also well-ordered.
Prove that if $T_1$ and $T_2$ are two well-ordered sets, then $T_1 \cap T_2$ is also well-ordered. I want to say yes, but I'm not sure how to explain that. If they are both well-ordered they both have smallest elements. I suppose the answer could be no, that if the elements that are in both form a new set that does not have a smallest. Any help is appreciated!
A set is called well-ordered if every non-empty subset has a smallest element. If $A$ and $B$ are well-ordered then their intersection $A\cap B$ is well-ordered because it is a subset of both $A$ and $B$ . Any subset of $A\cap B$ is a subset of $A$ and thus has a smallest element.
|discrete-mathematics|elementary-set-theory|
0
Is it consistent with ZC that a well-order of type $\omega_\omega$ does not exist?
Working in Zermelo's set theory (with choice for simplicity) - the construction in Hartogs' theorem shows that starting with a set $X$ , there is a set $X'$ in at most $\mathcal{P}^4(X)$ (where $\mathcal P$ denotes power set) and a well-order $ in at most $\mathcal{P}^6(X)$ such that the well-order type of $(X', is the least well-order such that $X'$ is not injectible into $X$ . It is known that $V_{\omega+\omega}$ is model of $ZC$ so a model of $ZC$ may not have the von-Neumann ordinal $\omega+\omega$ . However, it does have well order types corresponding to $\omega_n$ for all $n\in\omega$ . Start with $\omega$ . Construct a set $P_1$ by the Hartogs procedure with well-order type $\omega_1$ . Then continue inductively to construct $P_n$ with well-order type $\omega_n$ for all $n\in\omega$ . If we want to continue to construct a well-order of type $\omega_\omega$ - we run into a block. Since $\{P_n\colon n\in\omega\}$ may not be a set - so there's no axiom making it possible to take un
Yes. Work in a model of $ZFC$ plus the generalized continuum hypothesis and consider the substructure $V_{\omega\cdot 2}\vDash ZC$ . Every element of $V_{\omega\cdot 2}=\bigcup_{\alpha will have cardinality less than some $\beth_n=\aleph_n . Since you don't have replacement in $V_{\omega\cdot 2}$ you can no longer speak about aleph numbers, however you have that in $V_{\omega\cdot 2}$ any sequence of infinite sets $(P_n)_{n\in\omega}$ will be contained in some $V_{\omega +m}$ . If $V_{\omega\cdot 2}\vDash\forall n\in\omega\;|P_{n+1}|>|P_{n}|$ then you would have that $\beth_m=|V_{\omega+m}|>\aleph_\omega$ which contradicts our assumption that the generalized continuum hypothesis holds. edit: You need to be cautious in general since cardinality is not absolute between transitive structures, however it is for structures of the form $V_\alpha$ for $\alpha$ a limit ordinal.
|set-theory|cardinals|well-orders|
1
von Neumann subalgebra which is isomorphic to $R$
Let $M$ be a von Neumann algebra and $N$ be the von Neumann subalgebra of $M$ . If $N$ is $*$ -isomorphic to the hyperfinite type II $_1$ factor $R$ . Can we conclude that $N$ is also a hyperfinite type II $_1$ factor?
Yes, the only II $_1$ -subfactors of the hyperfinite II $_1$ -factor $R$ are copies of $R$ . This follows from Connes 1976 Annals' paper, where he proves that every injective II $_1$ -factor is hyperfinite. So $R$ is injective, and because a $\text{II}_1$ -factor has conditional expectations onto any of its subfactors, any subfactor of $R$ is injective and hence hyperfinite.
|operator-algebras|von-neumann-algebras|
1
How do you solve second order differential equations with variables as the coefficients using a Laplace transform?
Up until now I have used the Laplace transform to solve all the second order differential equations I have met. However I was recently set this equation as a homework problem and have been unable to solve it. $ x^2y'' + 5xy' +4y=0 $ I have been made aware that you can also use a Fourier transform to solve these and am just curious as to how you solve them with both. is one method better than the other for situations like this? and how do I solve this using the Laplace transform?
In general, Laplace transform is useful for equations where the coefficients are constants. In this case, you can use the change of variables $x = \exp(t)$ to make this into a constant-coefficient differential equation to which you can apply the Laplace transform.
|fourier-transform|laplace-transform|
1
How to compute the numerical radius of the right shift operator?
Let $T$ be the right shift operator on $\ell^2$ defined by $T(x_1,x_2,\ldots)=(0,x_1,x_2,\ldots)$ . The numerical radius of $T$ is defined by $w(T)=\sup\{|\langle Th,h\rangle|:\, \|h\|=1\}$ . It is well known that $r(T)\le w(T)\le \lVert T\rVert$ where $r(T)$ denotes the spectral radius of $T$ . Since $T^n$ is isometry for all $n$ , $r(T)=\lVert T\rVert=1$ . Hence, $w(T)=1$ . But I want to compute $w(T)$ elementarily using the definition. By definition, $$w(T)=\sup\left\{\left|\sum\limits_{i=1}^\infty \overline{x_{i+1}}x_i\right|:\, \sum\limits_{i=1}^\infty |x_i|^2=1\right\}$$ If I consider the vector $x=(x_1,x_2,\ldots)\in\ell^2$ with $x_i=\frac{1}{2^{i/2}}$ . Then $\left|\sum\limits_{i=1}^\infty \overline{x_{i+1}}x_i\right|=\frac{1}{\sqrt{2}}$ . This shows that $w(T)\ge \frac{1}{\sqrt{2}}$ . But how can I arrive at $1$ ? Can anyone help me in this regard? Thanks for your help in advance.
For $|z| let $h_i=\sqrt{1-|z|^2}z^{i-1}.$ Then $\|h\|=1.$ Moreover $$\langle Th,h\rangle =(1-|z|^2)\sum_{i=1}^\infty \overline{z}^iz^{i-1} =\overline{z}$$ Therefore $$ \{z\,:\,|z| (the second containment follows from $\|T\|=1).$ In particular the numerical radius is equal $1.$ Remark Actually the numerical range of $T$ is equal $\{z\,:\,|z| as $|\langle Th,h\rangle| for $\|h\|=1.$ This follows from the property that $T$ does not admit eigenvalues, in particular eigenvalues in the unit circle.
|functional-analysis|operator-theory|hilbert-spaces|lp-spaces|operator-algebras|
0
What is the indeterminate in the set of all symbols in $R[x]$ and what does the elements in $R[x_1,x_2]$ looks like?
I was studying polynomial rings over commutative rings from the book Topics in Algebra by I.N Herstein. From there what I understood was, that: If $R$ be a commutative ring with a unit element then $R[x]$ is defined to be the set of all symbols $a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$ where, $a_0,a_1,...,a_n\in R$ and $x$ is called an indeterminate. However, I have a doubt regarding, what is meant by the word "indeterminate". It seems to me that in those set of symbols that are present in $R[x]$ of the form mentioned in the above lines, the symbol $x$ can be any element in the set $R.$ I think it's the same way we worked with polymials over the set of real numbers in early courses, i.e, a polynomial in $\Bbb R$ is defined as the set of all symbols or precisely set of mappings $f$ from $\Bbb R$ into itself such that $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$ where, $a_0,a_1,...,a_n\in \Bbb R$ and $x$ is just a "variable" that can take any value in the domain of $f$ which is $\Bbb R$ in this case
To say "R[x] is defined to be the set of all symbols ..." is not clear. What is a symbol ? More formally, a polynomial is a sequence $$(a_0,a_1, a_2,...)$$ where the $a_i \in R$ are almost all zero, which means that there exists an $n$ such that $$\forall m>n, a_m=0$$ The indeterminate $X$ is defined as $$\boxed{X:=(0,1,0,0,0,0,0,0,...)}$$ Then, very classically, you define an addition and multiplication on $R[X]$ , which makes $(R[X],+,\times)$ is a ring. With the definition of the multiplication chosen, it is easy to verify that $$X^2=(0,0,1,0,0,0,0,0,....)$$ $$X^3=(0,0,0,1,0,0,0,0,....)$$ $$...$$ and finally $$(a_0,a_1, a_2,...)=a_0+a_1X+a_2X^2+a_3X^3+...$$ Then $R_1:=R[X]$ being itself a ring, you can by the previous construction build $R_1[X_2]$ ... It's just a formal construction for the two-variable polynomials we're all used to. For example, $2XY+X^2=X^2+2X.Y=a_0+a_1Y$ , with $a_0=X^2, a_1=2X$
|polynomials|ring-theory|commutative-algebra|definition|
0
How are the two definitions of Eulerian posets equivalent?
I have been following Stanley's Enumerative combinatorics for the definition of an Eulerian poset. It is defined as follows: Definition: A finite graded poset $P$ with $\hat{0}$ and $\hat{1}$ is Eulerian if $\mu_P(s,t)=(-1)^{l(s,t)}$ for all $s\leq t$ in $P$ . Also after checking online , I found another definition that says an Eulerian poset is a graded poset in which every non-trivial interval has the same number of elements of even ranks as of odd ranks. Can someone guide me a little to see how these two definitions are equivalent? I might be missing something pretty basic here.
It is indeed true that $\mu(s,t)=(-1)^{\ell(s,t)}\iff $ every nontrivial interval has an equal number of elements with even and odd ranks. $(\!\!\implies\!\!)$ Fix $s . Since $\mu$ is the inverse of the zeta function, $\zeta(s,t)=1$ for all $s\le t$ , we know that $$ \begin{align} 0 =\sum_{s\le r\le t}\mu(s,r)\color{gray}{\cdot 1} =\sum_{s\le r\le t}(-1)^{\ell(s,r)} =(-1)^{\ell(\hat 0,s)}\sum_{s\le r\le t}(-1)^{\ell(0,r)}. \end{align} $$ Note that $\ell(0,r)$ is the rank of $r$ . Since the sum of $(-1)^{\text{rank}(r)}$ over the interval $[s,t]$ is zero, it follows that $[s,t]$ has an equal number of elements of even and odd rank. ( $\Longleftarrow$ ) To prove, $\mu(s,t)=(-1)^{\ell(s,t)}$ , define $\sigma(s,t):=(-1)^{\ell(s,t)}$ . It suffices to show that $\sigma$ is an inverse to the zeta function. For any $s , we compute $$ (\sigma*\zeta)(s,t) =\sum_{s\le r\le t}\sigma(s,r)\cdot \zeta(r,t) =\sum_{s\le r\le t}(-1)^{\ell(s,r)}\cdot 1 =(-1)^{\ell(\hat 0,s)}\sum_{s\le r\le t}(-1)^{\text{
|combinatorics|order-theory|
1
What conditions are needed to have $F(f,g) \in L^\infty$ for $f, g \in L^\infty$?
Let $\Omega \subset \mathbb{R}$ and $f, g \in L^\infty(\Omega)$ . Define $F: \Omega \times \Omega \rightarrow \mathbb{R}$ by $F(x) = \tilde{f}(f(x), g(x))$ for some $\tilde{f}$ . I am interested in what conditions can be placed on $\tilde{f}$ so that $F \in L^\infty(\Omega)$ . I am thinking of two extreme cases: $\tilde{f} \in C^0$ and $\tilde{f} \in C^\infty$ . If $\Omega$ was bounded this seems to be trivial since we know $f, g \in L^\infty$ so we can take their maximum and use that to bound $\tilde{f}$ . However I'm not sure about the case where $\Omega$ is unbounded.
In relation to my comment on the previous answer, it is sufficient to choose $\tilde{f} \in L^\infty_{\mathrm{loc}}(\mathbb{R}^2)$ . Since $f,g$ are bounded and measurable the set $D=(f(\mathbb{R}), g(\mathbb{R}))$ is bounded as a subset of $\mathbb{R}^2$ and so \begin{align} \mathrm{ess}\sup_{(x,y)\in\mathbb{R}^2} F(x,y)\leq \mathrm{ess}\sup_{(x,y)\in\bar{D}} \tilde{f}(x,y) For a given $f,g$ you can make the condition sharper by requiring only $\tilde{f} \in L^\infty(D)$ where $D$ is as above.
|real-analysis|functional-analysis|lp-spaces|
1
Derivative of $\|X-\alpha Y\|_2$ with respect to $\alpha$.
Let $X$ and $Y$ be operators on a real or complex Hilbert space $\mathcal{H}$ and $f(\alpha) = \|X - \alpha Y\|_2$ where $\alpha$ is real and $\|A\|_2 = \sigma_{\mathsf{max}}(A)$ is the $\ell^2$-induced operator norm. What is $\frac{df}{d\alpha}$? Even if the function is not differentiable everywhere, $f$ is convex in which case a sub-gradient will suffice. Also, if it helps we can assume $X=I$ and $Y$ is positive definite but I'd rather see a more general result. Also considering $f^2$ instead of $f$ is also fine if that helps. Plots of $f$ : I ran two simple numerical examples which might be enlightening. In the following plot 1 , $X = I\in M_{50}(\mathbb{R})$ and $Y = Z^\mathsf{T}Z + I$ where $Z_{ij}\sim\mathcal{N}(0,1)$ is normally distributed. As we can see, the plot seems piecewise linear. In the next plot 2 , we take $X_{ij}\sim\mathcal{N}(0,1) - I \in M_{50}(\mathbb{R})$ and $Y_{ij}\sim\mathcal{N}(0,1)$. Note that neither $X$ nor $Y$ are symmetric. This example looks differenti
$ \def\R#1{{\mathbb R}^{#1}} \def\a{\alpha} \def\s{\sigma} \def\k{\otimes} \def\h{\odot} \def\t{\times} \def\o{{\tt1}} \def\bR#1{\big(#1\big)} \def\BR#1{\Big[#1\Big]} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\vc#1{\op{vec}\LR{#1}} \def\rank#1{\op{rank}\LR{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\mt{\mapsto} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\deriv#1#2{\frac{d #1}{d #2}} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} $ Given the Singular Value Decomposition of a matrix $A\in\R{m\t n},\;\,r=\rank A$ $$\eqalign{ A &= \sum_{k=\o}^r \s_k\,u_k\,v_k^T \qquad&{\rm where}\;\; \s_\o\gt\s_2\ge\ldots\ge\s_r\gt0 \\ \s_\o &= \|A\|_2 \quad&\{{\rm Spectral\ norm}\} \\ }$$ then from this post (and this one ) the gradient of the Spectral norm (wrt $A$ ) is $$\eqalign{ \grad{\s_\o}{A} &= u_\o v^T_\o \qiq d\s_\o = u_\o^T\,\c{dA}\:v_\o \\ }$$ Substituting $\,A=\bR{Y\a-X}\,$ yields the desired derivativ
|functional-analysis|derivatives|operator-theory|convex-analysis|spectral-norm|
0
How to prove this vector identity?
I've seen this vector identity in the book [1] on page 89: $$ (\nabla p)\times\nu =0\ \text{ on }\ \partial\Omega,$$ where $\nu$ is the outer normal vector of $\partial \Omega$ and $ p \in H_0^1(\Omega)$. I tried to calculate directly $(\nabla p)\times \nu =(\nu_3\partial_2 p-\nu_2\partial_3 p,\nu_1\partial_3 p-\nu_3\partial_1 p,\nu_2\partial_1 p-\nu_1\partial_2 p)^T,$ but I don't know how to show it equals $0$. [1] Monk, Peter, Finite Element Methods for Maxwell's Equations (Oxford, 2003; online edn, Oxford Academic, 1 Sept. 2007)
Also, $|(\nabla p)\times \nu|=|\nabla p|\,|\nu|\sin(\theta)$ where $\theta$ is the angle between the two vectors. For the cross product to be zero the two vectors must be parallel. You must show that $\nabla p$ is parallel to $\nu$.
|partial-differential-equations|vector-analysis|sobolev-spaces|electromagnetism|
0
$\{A_i\}, \{B_i\}$ are chains of sets indexed by the same linearly ordered set. If $|A_i|\le |B_i|$ for all $i$, does $|\cup A_i|\le |\cup B_i|$?
Let $I$ be a linearly ordered set and $\{A_i\}, \{B_i\}$ be two collections of sets indexed by $I$ , in an order-preserving way (i.e. $A_i\subsetneqq A_j\iff i<j$ and $B_i\subsetneqq B_j\iff i<j$ ). If for all $i\in I$ , ${\rm card} A_i\le {\rm card}B_i$ , do we have \begin{equation*} {\rm card}\ \bigcup_{i\in I} A_i \le {\rm card}\ \bigcup_{i\in I} B_i ? \end{equation*} I guess the answer is yes. At first I wanted to construct an injection $f:\bigcup_{i\in I} A_i\to \bigcup_{i\in I} B_i$ by "gluing". But since $I$ may not be finite or countable, this seems hard to write rigorously.
I think the inequality is true (assuming Choice, at least). I will make fairly heavy use of cardinal arithmetic/facts. First, we can assume without loss of generality that $I$ is a limit ordinal, by passing to an appropriate cofinal subset (and because if $I$ has a maximum element, it's obvious). I will argue by proving a nice formula for the cardinality of each union. Lemma. If $I$ is a limit ordinal and we have sets $A_i$ for each $i \in I$ such that $A_i \subsetneq A_j$ iff $i < j$ , then $|\bigcup_i A_i| = \max\{|I|, \sup_i |A_i|\}$ . Proof. Let's write $A = \bigcup_i A_i$ . There is an injection $I \to A$ given by sending $i$ to some element of $A_{i + 1} \setminus A_i$ , so $|I| \le |A|$ . It's also clear that for any $i$ , we have $|A_i| \le |A|$ . So $\sup_i |A_i| \le |A|$ . Conversely, we have $|A| \le |I| \cdot \sup_i |A_i|$ , for example because if $S$ is a set of cardinality $\sup_i |A_i|$ , then clearly $A$ injects into $I \times S$ : for each $i$ , choose an injection $\iota_i
|set-theory|cardinals|
1
Could any "incoherently involutive" endofunctor be made "coherently involutive"?
Given a category $C$ with an endofunctor $F:C \to C$ and a natural isomorphism $\epsilon:FF \cong 1_C$ , call the pair $(F, \epsilon)$ an involutive endofunctor. Also, call the pair $(F, \epsilon)$ a coherently involutive endofunctor if $(F, F, \epsilon^{-1}, \epsilon)$ forms an adjoint equivalence, or equivalently, if $F\epsilon_X=\epsilon_{FX}$ for all objects $X$ of $C$ . Then, given any involutive endofunctor $(F, \epsilon)$ , is there always another natural isomorphism $\epsilon^{\prime}:FF \cong 1_C$ for which $(F, \epsilon^{\prime})$ is coherently involutive? It is known that incoherent equivalences could always be made into coherent adjoint equivalences by changing one of the involved isomorphisms without also changing the other.
If $\mathcal C$ is group $G$ considered as a one-object category, the question reduces to whether there exists an automorphism $f\colon G\to G$ with the property that $\{e\in G:f^2(g)=e^{-1}ge$ for all $g\in G\}$ is non-empty and disjoint from $\{e\in G:f(e)=e\}$ . Since any automorphism preserves the identity of the group, it follows that $f^2$ cannot be the trivial automorphism, whence $G$ cannot be abelian. Moreover, if $f(g)=h^{-1}gh$ , then $ff(g)=(h^2)^{-1}gh^2$ and $f(h^2)=h^{-1}h^2h=h^2$ , so $f$ cannot be an inner automorphism. Finally, $ff(g)=ffff^{-1}(g)=f(e^{-1}f^{-1}(g)e)=f(e)^{-1}gf(e)$ shows that $e$ and $f(e)$ define the same inner automorphism. Thus $G$ has to have a non-trivial center. The group of automorphisms of the dihedral group $G=(\mathbb Z/3)^\times\lhd\mathbb Z/n$ for $n>2$ is isomorphic to the semi-direct product $(\mathbb Z/n)^\times\lhd\mathbb Z/n=\{bx+c:b\in(\mathbb Z/n)^\times,c\in\mathbb Z/n\}$ with elements $(s,a)\in(\mathbb Z/3)^\times\times\mathbb Z/
|category-theory|involutions|equivalence-of-categories|
0
Permutohedron Facets
The permutohedron P4 is the 3d truncated octahedron. Its facets consist of 8 hexagons and 6 squares. The permutohedron P5 is a 4d polytope with 30 3d facets: there are 10 truncated octahedra, and 20 hexagonal prisms (I believe). The permutohedron P6 is a 5d polytope with 62 4d facets: there are 12 P5 polytopes, 30 truncated octahedral prisms, and 20 polytopes with 36 vertices. These latter seem to be 4d polytopes with 36 vertices and 12 3d facets, each of which is a hexagonal prism. Is that correct?
The permutatopes $P(n)$ are just the omnitruncated $n-1$ dimensional simplices. Accordingly these are represented by the Coxeter-Dynkin diagrams $x3x3x3...3x3x$ with $n-1$ nodes and in here each node symbol $x$ represents a ringed node (in a typewriter friendly way). As is known for Coxeter Dynkin diagrams, the according facets are obtained by omission of any possible single node each (as long as those do not become degenerate). I.e. in here we have $$\begin{array}{ccl}.\ x3x3...3x3x & = & P(0)\times P(n-1)\\ x\ .\ x3...3x3x & = & P(1)\times P(n-2)\\ ... & = & P(k)\times P(n-k-1)\\ x3x3x3...3x\ . & = & P(n-1)\times P(0)\end{array}$$ The respective counts each are provided by the $n$ -th row of the Pascal triangle (excluding the extremal 1 each), e.g. the truncated octahedron $P(4)=x3x3x$ has $choose(4,1)=4$ hexagons $x3x\ .$ , $choose(4,2)=6$ squares $x\ . \ x$ , and $choose(4,3)=4$ more hexagons $.\ x3x$ . As for your question about the 5D $P(6)$ this esp. provides $choose(6,1)=6\ P(5
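The facet counts described in this answer can be tabulated mechanically; the following Python sketch just applies the stated rule (facets of $P(n)$ are the products $P(k)\times P(n-k)$ with multiplicity $\binom{n}{k}$, Pascal's row without the outer 1s) and checks it against the numbers in the question:

```python
from math import comb, factorial

def facet_counts(n):
    # facets of the permutohedron P(n) are products P(k) x P(n-k), k = 1..n-1,
    # each occurring C(n, k) times (the n-th Pascal row without the extremal 1s)
    return {(k, n - k): comb(n, k) for k in range(1, n)}

def vertices(n):
    # P(n) has n! vertices (one per permutation of n coordinates)
    return factorial(n)

# P(4), the truncated octahedron: 4 + 4 = 8 hexagons and 6 squares
c4 = facet_counts(4)
print(c4)

# P(6): 6 + 6 = 12 copies of P(5), 15 + 15 = 30 truncated-octahedral prisms,
# and C(6,3) = 20 hexagon-hexagon duoprisms, 62 facets in total
c6 = facet_counts(6)
print(sum(c6.values()))            # 62
# the P(3) x P(3) facet has 6 * 6 = 36 vertices, as guessed in the question
print(vertices(3) * vertices(3))   # 36
```

This reproduces the counts claimed in the question (10 + 20 = 30 facets for P5, and 12 + 30 + 20 = 62 for P6).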
|geometry|polytopes|
1
Generalized cohomology and stable homotopy groups of spectra
Suppose two spectra $E$ and $F$ have the same stable homotopy groups $\pi_k$ for $k\geq0$ , equivalently, $E^{-k}(S)=F^{-k}(S)$ . If we replace the sphere spectrum $S$ by another spectrum $X$ , I wonder if the statement is still true: $$E^{-k}(X)=F^{-k}(X), \text{for } k\geq0.$$
Assume that $[S^k, E] = 0$ for all $k \geq 0$ , and assume that $[S^{-1}, E] \neq 0$ . That is, assume that $E^0(S^{-1}) \neq 0$ . Let $F$ be the zero spectrum. Then $E$ and $F$ have the same homotopy groups in non-negative dimensions, but they disagree in degree $0$ cohomology for the spectrum $S^{-1}$ .
|algebraic-topology|homology-cohomology|homotopy-theory|stable-homotopy-theory|
0
Approximation of $x^2-x-1 = 0$ .
An exercise in Calculus With Applications by Peter D. Lax confused me, here is the original text: 1.34. Solve $x^2 − x − 1 = 0$ as follows. Restate the equation as $x = 1 + 1/x$ , which suggests the sequence of approximations $$x_0 = 1,\quad x_1 = 1+ 1/x_0,\quad x_2=1 + 1/x_1, ...$$ Explain the following items to prove that the sequence converges to a solution. (a) $x_0 (b) $x_0 (c) The even sequence $x_{2k}$ increases to a limit $L$ , and the odd sequence $x_{2k+1}$ decreases to a limit $R ≥ L$ . (d) The distances $(x_{2k+3} −x_{2k+2})$ satisfy $(x_{2k+3} −x_{2k+2}) (e) $R=L=\lim\limits_{k\to∞}x_k$ is a solution to $x^2 −x−1 = 0.$ For (c) I have trouble explaining why the even-numbered sequence increases, although it is obvious by listing a few values. I can tell that the even terms are bounded, but I can't verify that the even terms are increasing. Please help me.
This is a proof for step (b): First we notice that $$ x_{i+2}=1+1/x_{i+1}=1+\frac{1}{1+1/x_i}=1+\frac{x_i}{x_i+1}=\frac{2x_i+1}{x_i+1} $$ and $x_i>0$ . Next, we try to prove that the two-step is strictly increasing $$ x_{i+2}>x_i\\ \Leftrightarrow \frac{2x_i+1}{x_i+1}>x_i\\ \Leftrightarrow x_i+1>x_i^2 $$ So in order to show that the two step is strictly increasing we need to prove that the last inequality holds true for all $i$ . We do this by induction Initial step: $x_0=1\Rightarrow1+1>1$ (Here we use that we are actually talking about the even indices) Induction step: assume that $x_i+1>x_i^2$ , now show that $x_{i+2}+1>x_{i+2}^2$ . $$ x_{i+2}+1>x_{i+2}^2\\ \Leftrightarrow \frac{2x_i+1}{x_i+1}+1>\frac{(2x_i+1)^2}{(x_i+1)^2}\\ \Leftrightarrow \frac{3x_i+2}{x_i+1}>\frac{(2x_i+1)^2}{(x_i+1)^2}\\ \Leftrightarrow (3x_i+2)(x_i+1)>(2x_i+1)^2\\ \Leftrightarrow x_i+1>x_i^2\\ $$ The proof for the decreasing odd indices works the same way. To show that the even indices are always strictly belo
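As a quick numerical illustration of the behaviour this answer proves (even-indexed terms strictly increase, odd-indexed terms strictly decrease, both toward the golden ratio), here is a small Python sketch; the number of iterations is an arbitrary choice:

```python
def iterates(x0=1.0, n=20):
    # the iteration from the exercise: x_{k+1} = 1 + 1/x_k
    xs = [x0]
    for _ in range(n):
        xs.append(1 + 1 / xs[-1])
    return xs

xs = iterates()
evens, odds = xs[0::2], xs[1::2]
phi = (1 + 5 ** 0.5) / 2  # positive root of x^2 - x - 1

print(evens[:4])          # increasing toward phi
print(odds[:4])           # decreasing toward phi
print(abs(xs[-1] - phi))  # small
```

The iterates are the ratios of consecutive Fibonacci numbers, which is why convergence is so fast.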
|sequences-and-series|
1
Fourier transform of a generalized function of $e^{ix}$
I want to compute $\mathscr{F}\{e^{ix}\}(\xi)$ in the sense of distributions. This is what I did: $$\mathscr{F}\big(e^{ix}\big)(\xi) = \int e^{ix} e^{i(\xi,x)}\text{d}x$$ But how do I proceed from here? I used the formula in Vladimirov, on p. 108, but this seems to me too general as a solution. Any help appreciated
It can be a bit confusing with the variables in the definition of the Fourier transform of a tempered distribution. We are used to having $x$ or $t$ as variable for the plain function, and $\xi$ or $\omega$ for the transformed function, but of course we can have any variable. Below I will use $y$ for the transformed function. Hopefully it helps you see what is going on. Let $u$ be a tempered distribution. Its Fourier transform $\hat u$ is defined by $ \langle \hat{u}, \varphi \rangle = \langle u, \hat\varphi \rangle $ for all $\varphi\in\mathcal{S}.$ Abusing integral notation this means $$ \int \left( \int u(x) e^{-ixy} \, dx \right) \varphi(y) \, dy = \int u(x) \left( \int e^{-ixy} \varphi(y) \, dy \right) dx . $$ For $u(x) = e^{ix}$ this becomes $$ \langle \mathcal{F}\{e^{ix}\}, \varphi \rangle = \int \mathcal{F}\{e^{ix}\}(y) \, \varphi(y) \, dy = \int \left( \int e^{ix} e^{-ixy} \, dx \right) \varphi(y) \, dy \\ = \int e^{ix} \left( \int e^{-ixy} \varphi(y) \, dy \right) \, dx = \in
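With the convention of this answer, the outcome is $\mathcal{F}\{e^{ix}\} = 2\pi\,\delta(y-1)$, i.e. $\langle \mathcal{F}\{e^{ix}\}, \varphi\rangle = 2\pi\varphi(1)$. Here is a small numerical sanity check of that pairing with a Gaussian test function (whose transform under this convention is known in closed form); the integration window and step count are arbitrary choices:

```python
import math, cmath

def phi(y):
    # Gaussian test function in the Schwartz class
    return math.exp(-y * y / 2)

def phi_hat(x):
    # its Fourier transform under the convention \hat{phi}(x) = int e^{-ixy} phi(y) dy
    return math.sqrt(2 * math.pi) * math.exp(-x * x / 2)

def pairing(a=-20.0, b=20.0, n=20001):
    # <F{e^{ix}}, phi> = <e^{ix}, \hat{phi}> = int e^{ix} \hat{phi}(x) dx,
    # approximated with the trapezoid rule (the integrand decays very fast)
    h = (b - a) / (n - 1)
    total = 0j
    for k in range(n):
        x = a + k * h
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * cmath.exp(1j * x) * phi_hat(x)
    return total * h

lhs = pairing()
rhs = 2 * math.pi * phi(1.0)   # what 2*pi*delta(y - 1) predicts
print(abs(lhs - rhs))          # tiny
```

The agreement reflects the formal computation in the answer: the inner $x$-integral concentrates the mass of $\varphi$ at $y=1$.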
|fourier-transform|distribution-theory|
0
matrix with univariate entries: rank deficit of specialization ≤ vanishing order of determinant, part II: commutative ring
a special case of this question with coefficients in a field was recently asked and answered . fix a commutative ring with unit $R$ , and an $n \times n$ matrix $M(X)$ with entries in $R[X]$ . the determinant $\det(M(X))$ is itself a polynomial, say $D(X)$ . suppose that an element $x \in R$ is such that the specialized matrix $M(x)$ has $\text{rank}(M(x)) \leq n - d$ , in the sense that there is a subset of $M$ 's columns, of size $d$ , such that each column in that subset is a linear combination (over $R[X]$ ) of $M$ 's other $n - d$ columns. (this straightforwardly implies that each of $M$ 's $(n - d + 1) \times (n - d + 1)$ minors vanish in $R[X]$ , by the properties of the determinant.) Claim. In this case, the determinant $D(X) := \det(M(X))$ vanishes at $x$ to order $\geq d$ ; that is, $(X - x)^d \mid D(X)$ in $R[X]$ .
i was able to solve this, in the following way: let $x \in R$ and $d$ be as in the hypothesis; i.e., $\text{rank}(M(x)) \leq n - d$ . fix any entry $(i, j) \in \{0, \ldots , n - 1\} \times \{0, \ldots , n - 1\}$ . by Euclidean division, we can re-express the polynomial entry $M_{i, j}(X) = (X - x) \cdot M'_{i, j}(X) + M_{i, j}(x)$ , where $M'_{i, j}(X) \in R[X]$ is some arbitrary other polynomial. now consider the determinant of the resulting matrix $M(X)$ , where we write the variant $(X - x) \cdot M'_{i, j}(X) + M_{i, j}(x)$ in each cell $(i, j)$ . we get a complicated determinant of binomials. as in the usual binomial theorem, we can "stratify" the result according to how many powers of $(X - x)$ are present. our goal is to show that the entire resulting expression is divisible by $(X - x)^d$ . of course we can simply ignore each stratum containing at least $d$ powers of $(X - x)$ . the goal is to show that each stratum containing fewer than $d$ powers of $(X - x)$ vanishes. fix an
|linear-algebra|abstract-algebra|matrices|matrix-rank|algebras|
0
Extracting the leading non-analytic piece of $\int_0^{1/2} \frac{t dx}{x^{-t} - x^t}$ at small real $t$
Consider the integral $$I(t) = \int_0^{1/2} \frac{t dx}{x^{-t} - x^t}$$ in the vicinity of $t=0$ . When $t$ is viewed as a real variable, $I(t)$ is very well-behaved. For example, it's infinitely differentiable. On the other hand, when $t$ is on the imaginary axis, the integrand is very poorly behaved as a function of $x$ close to zero. The denominator becomes a rapidly oscillating function that repeatedly passes through zero. This is easiest to see by writing $x^{-t} - x^t = 2\sinh(t \log(1/x))$ which is $2i\sin(\Im(t) \log(1/x))$ for imaginary $t$ . Thus $I(t)$ is infinitely-differentiable for real $t$ , but is non-analytic in the complex plane about $t=0$ . Note that I expect that $I(t)$ will be asymptotic to some power series in $t$ for small real $t$ , $I(t) \sim \sum_{n=0}^\infty c_n t^n$ , but that is not quite what I'm looking for. Rather, I'm interested in the leading "non-analytic" contribution to $I(t)$ at small real $t$ . For example, one might guess such a piece to go some
Long Comment. Part 1. We first show that $I(t)$ extends to an analytic function on $\Omega = \mathbb{C}\setminus B$ , where $B$ is the union of branch cuts given by $$ B = \left\{ \sigma + \frac{2\pi i k}{\log 2} : \sigma \in (-\infty, 0] \text{ and } k \in \mathbb{Z} \right\}. $$ Below is a complex plot of $I(t)$ : Indeed, assume for a moment that $t > 0$ . Substituting $x = \frac{1}{2}u^{\frac{1}{2t}}$ , $I(t)$ can be written as $$ I(t) = \int_{0}^{\frac{1}{2}} \frac{t x^t}{1 - x^{2t}} \, \mathrm{d}x = 2^{-t-2} \int_{0}^{1} \frac{u^{\frac{1}{2t} - \frac{1}{2}}}{1 - 4^{-t}u} \, \mathrm{d}u. \tag{1} $$ Note that the last integral can be expressed in terms of the hypergeometric function, which is useful for computing $I(t)$ numerically. Then for any integer $N \geq 1$ , invoking the identity $\frac{1}{1-a} = 1 + a + \cdots + a^{N-1} + \frac{a^N}{1-a}$ yields \begin{align*} I(t) &= 2^{-t-2} \int_{0}^{1} u^{\frac{1}{2t} - \frac{1}{2}} \left( \sum_{k=0}^{N-1} (4^{-t}u)^k + \frac{(4^{-t}u)^
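The substitution step (1) is easy to double-check numerically; this Python sketch compares the two sides at one arbitrary sample point $t=0.3$ (the quadrature rule and node count are implementation choices, not from the post):

```python
def simpson(f, a, b, n=4000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

t = 0.3

def lhs_integrand(x):
    # t / (x^{-t} - x^t) = t x^t / (1 - x^{2t}), with the removable limit 0 at x = 0
    if x == 0.0:
        return 0.0
    return t / (x ** (-t) - x ** t)

def rhs_integrand(u):
    # 2^{-t-2} u^{1/(2t) - 1/2} / (1 - 4^{-t} u), from substituting x = (1/2) u^{1/(2t)}
    return 2 ** (-t - 2) * u ** (1 / (2 * t) - 0.5) / (1 - 4 ** (-t) * u)

I_direct = simpson(lhs_integrand, 0.0, 0.5)
I_subst = simpson(rhs_integrand, 0.0, 1.0)
print(I_direct, I_subst)   # should agree to several digits
```

The same check can be repeated for other real $t>0$ by changing the sample point.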
|analysis|asymptotics|
0
Question about finding cube root by hand
To find the square root of $125$ by hand: $$ \begin{array}{l|l} & 1,25.00 & 11.1 \\ &1 & \\ \hline 21 & \space \space \space \space 25 & \\ &\space \space \space \space21 & \\ \hline 221& \space \space \space \space \space \space 4.00& \\ &\space \space \space \space \space \space221& \end{array} $$ Thus the answer is $11.1$ and however many decimals one would desire. If you look at the method to calculate the square root, it quickly becomes obvious it is utilizing $(a+b)^2=a^2+2ab+b^2 = a^2+b(2a+b)$ . The first digit of the solution is $1$ , which is $a$ , then this value is doubled and a $b$ is estimated, and finally multiplied by $b$ in the answer, or in other words $b(2a+b)$ . The cube root is something similar. Finding the cube root of $1278$ by hand: $$ \begin{array}{l|l} & 1,278 & 1 \\ &1 & \\ \hline 300\times1^2=300 & \space \space \space \space 278 & \\ 30\times1\times b=?&\space \space \space \space & \\ b^2=?& & \\ \hline \end{array} $$ At this juncture obviously $300$ excee
Hint: If you have a two digit number represented by $10a+b$ , $b$ represents the units. What does $a$ represent? (In general, a number can be represented by $10^a x_1 + 10^{a-1} x_2 + 10^{a-2} x_3 \dots 10^{a-(n-1)} x_{n-1} + 10^{(a-n)} x_n$ , where $x_1, x_2, x_3, \dots x_{n-1}, x_n$ are the digits of that number.)
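The digit-by-digit cube-root scheme the question describes can be written out in full; this Python sketch is one possible implementation of the schoolbook method (the function name and return convention are my own), using the identity $(10a+b)^3-(10a)^3 = b(300a^2+30ab+b^2)$ that underlies the $300\times a^2$ trial divisor:

```python
def icbrt_digits(n):
    """Integer cube root by the schoolbook digit method.

    For each group of three digits, pick the largest digit b such that
    b*(300a^2 + 30ab + b^2) fits into the running remainder, where a is
    the root found so far.
    """
    groups = []
    while n:
        groups.append(n % 1000)
        n //= 1000
    a, r = 0, 0
    for g in reversed(groups or [0]):
        r = r * 1000 + g
        for b in range(9, -1, -1):
            step = b * (300 * a * a + 30 * a * b + b * b)
            if step <= r:
                r -= step
                a = 10 * a + b
                break
    return a, r  # integer cube root and remainder

# the question's example: first digit 1, then 300*1^2 = 300 > 278
# forces the next digit to be 0, so the root starts 10.xxx
print(icbrt_digits(1278))   # (10, 278)
print(icbrt_digits(1728))   # (12, 0)
```

Continuing past the decimal point amounts to appending groups of three zeros to the remainder, exactly as with pairs of zeros in the square-root method.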
|arithmetic|
1
Find the Itô representation of the following random variable
Let $(B_t)_{t \geq 0}$ be a Brownian motion and $T \geq 0$ . Find a constant $z_T \in \mathbb{R}$ and a process $\phi_T(s,\omega) \in \mathcal{V}([0,T])$ such that $F_T(\omega) = z_T + \int_{0}^{T} \phi_T(s,\omega)dB_s(\omega)$ when $F_T(\omega) = \int_0^T B^3_s(\omega) ds$ . The idea is to rewrite $\int_0^T B^3_s ds$ using Itô's lemma, but it seems like I'm stuck in a loop. We have \begin{equation*} \begin{split} d(tB_t^3) &= B_t^3dt + 3tB_t^2dB_t + 3t B_t d \langle B \rangle_t \iff \\ \int_0^T B_t^3 dt&= TB_T^3 - \int_0^T 3tB_t^2 d B_t - \int_0^T 3t B_t dt \end{split} \end{equation*} Now two new terms appeared, $TB_T^3$ and $\int_0^T 3t B_t dt$ . If we plug in $$ TB_T^3 = \int_0^T B_t^3 dt + \int_0^T 3tB_t^2 d B_t + \int_0^T 3t B_t dt $$ everything cancels, yielding $0=0$ . Otherwise we can proceed by using Itô on $\int_0^T 3t B_t dt $ to see what we get, but I'm not quite sure if that's gonna give anything. Any hints? Edit, solution: We have that \begin{equation*} \begin{split} &\int_0^T 3t B_t dt
It seems I obtain something different, which cannot be, since the representation should be unique. By Ito, $$\tag{1} B_t^3=\textstyle 3\int_0^tB_s^2\,dB_s+3\int_0^tB_s\,ds\,. $$ By integration by parts, $$ \textstyle\int_0^sB_u\,du=\textstyle\int_0^s(s-u)\,dB_u\,.\tag{2} $$ By stochastic Fubini, \begin{align} \textstyle\int_0^t\int_0^sB_u\,du\,ds&\stackrel{(2)}=\textstyle\int_0^t\int_0^s(s-u)\,dB_u\,ds\stackrel{stoch. Fub.}=\int_0^t\int_u^t(s-u)\,ds\,dB_u\,.\tag{3}\\ \end{align} Then, using (1), \begin{align} F_t&=\textstyle\int_0^tB_s^3\,ds\stackrel{(1)}=3\int_0^t\int_0^sB_u^2\,dB_u\,ds+3\int_0^t\int_0^sB_u\,du\,ds\\[2mm] &\stackrel{stoch. Fub.}=\textstyle3\int_0^t\int_u^tB_u^2\,ds\,dB_u+3\int_0^t\int_0^sB_u\,du\,ds\\[2mm] &\stackrel{(3)}=\textstyle3\int_0^tB_u^2(t-u)\,dB_u+3\int_0^t\int_u^t(s-u)\,ds\,dB_u\\[2mm] &=\textstyle3\int_0^t\Big(B_u^2(t-u)+\frac{t^2-u^2}{2}-u(t-u)\Big)\,dB_u\,.\tag{4} \end{align}
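The final representation (4), with $z_T = 0$, can be checked pathwise by simulation, since the identity holds almost surely path by path. The sketch below discretizes both sides with left-point sums; the step and path counts, seed, and tolerance are arbitrary choices, and the residual gap is pure discretization error:

```python
import math, random

def check_representation(t=1.0, n_steps=2000, n_paths=100, seed=1):
    # Pathwise check of F_t = int_0^t B_s^3 ds
    #   = 3 int_0^t ( B_u^2 (t-u) + (t^2-u^2)/2 - u(t-u) ) dB_u,
    # i.e. representation (4) with z_T = 0.
    rng = random.Random(seed)
    dt = t / n_steps
    gaps, fs = [], []
    for _ in range(n_paths):
        B = 0.0
        lhs = rhs = 0.0
        for k in range(n_steps):
            u = k * dt
            dB = rng.gauss(0.0, math.sqrt(dt))
            lhs += B ** 3 * dt                                   # Riemann sum
            rhs += 3 * (B * B * (t - u)
                        + (t * t - u * u) / 2 - u * (t - u)) * dB  # Ito sum
            B += dB
        gaps.append(abs(lhs - rhs))
        fs.append(lhs)
    return sum(gaps) / n_paths, sum(fs) / n_paths

mean_gap, mean_f = check_representation()
print(mean_gap)   # small discretization error
print(mean_f)     # near 0, consistent with z_T = E[F_t] = 0
```

Note that the integrand in (4) simplifies, since $\frac{t^2-u^2}{2}-u(t-u)=\frac{(t-u)^2}{2}$, matching what the Clark–Ocone formula would give.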
|stochastic-calculus|
0
Integral $\int_0^\infty\frac{\tanh^2(x)}{x^2}dx$
It appears that $$\int_0^\infty\frac{\tanh^2(x)}{x^2}dx\stackrel{\color{gray}?}=\frac{14\,\zeta(3)}{\pi^2}.\tag1$$ (so far I have about $1000$ decimal digits to confirm that). After changing variable $x=-\tfrac12\ln z$, it takes an equivalent form $$\int_0^1\frac{(1-z)^2}{z\,(1+z)^2 \ln^2z}dz\stackrel{\color{gray}?}=\frac{7\,\zeta(3)}{\pi^2}.\tag2$$ Quick lookup in Gradshteyn—Ryzhik and Prudnikov et al. did not find this integral, and it also is returned unevaluated by Mathematica and Maple . How can we prove this result? Am I overlooking anything trivial? Further questions: Is it possible to generalize it and find a closed form of $$\mathcal A(a)=\int_0^\infty\frac{\tanh(x)\tanh(ax)}{x^2}dx,\tag3$$ or at least of a particular case with $a=2$? Can we generalize it to higher powers $$\mathcal B(n)=\int_0^\infty\left(\frac{\tanh(x)}x\right)^ndx?\tag4$$ Thanks to nospoon 's comment below, we know that $$\mathcal B(3)=\frac{186\,\zeta(5)}{\pi^4}-\frac{7\,\zeta(3)}{\pi^2}\tag5$$ I checked h
Solution 1. By the residue theorem, for any positive integer $k$ , $$ \int_{\gamma_k} \frac{\tanh ^2(z)}{z^2} d z=2 \pi i \sum_{j=1}^k \operatorname{Res}\left(\frac{\tanh ^2(z)}{z^2}, z_j\right) $$ where $\gamma_k$ is the boundary of the rectangle $[-k, k] \times[0, i \pi k]$ oriented counter-clockwise and $z_j=$ $i \pi(j-1 / 2)$ for $j \in \mathbb{Z}$ are the poles of the integrand function. We note that along the two vertical sides of $\gamma_k$ , $$ |\tanh ( \pm k+i y)|=\left|\frac{e^{ \pm k+i y}-e^{\mp k-i y}}{e^{ \pm k+i y}+e^{\mp k-i y}}\right| \leq \frac{e^k+e^{-k}}{e^k-e^{-k}} \leq \frac{1}{\tanh (k)} \leq \frac{1}{\tanh (1)}$$ whereas, along the horizontal upper side, $|\tanh (x+i \pi k)|=|\tanh (x)|$ . Hence, as $k \rightarrow+\infty$ , $$ \left|\int_{\gamma_k \backslash[-k, k]} \frac{\tanh ^2(z)}{z^2} d z\right| \leq 2 \int_0^{\pi k} \frac{4}{k^2+y^2} d y+\int_{-k}^k \frac{1}{x^2+\pi^2 k^2} d x \leq \frac{8 \pi k}{k^2}+\frac{2 k}{\pi^2 k^2}=\frac{8 \pi^3+2}{\pi^2 k} \rightarrow
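Before (or after) the contour argument, the claimed closed form $(1)$ is easy to confirm to high precision numerically; in this Python sketch the truncation point, node count, and tail estimate are arbitrary implementation choices:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def integrand(x):
    # (tanh x / x)^2, with the removable limit 1 at x = 0
    if x == 0.0:
        return 1.0
    return (math.tanh(x) / x) ** 2

# integral over [0, 50] plus the tail: for x >= 50, tanh(x) = 1 up to ~1e-43,
# so the tail is int_50^inf dx/x^2 = 1/50 to within negligible error
numeric = simpson(integrand, 0.0, 50.0) + 1.0 / 50.0
zeta3 = sum(1.0 / k ** 3 for k in range(1, 100000))
closed = 14 * zeta3 / math.pi ** 2
print(numeric, closed)   # both approximately 1.70511
```

This is of course only a consistency check on $(1)$, not a proof; the residue computation above is what establishes it.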
|calculus|integration|definite-integrals|closed-form|hyperbolic-functions|
0
Help in proving $\text{Bdry}(A)= \overline{A} \setminus \text{Int}(A)$
I would like to prove the following: $\text{Bdry}(A)= \overline{A} \setminus \text{Int}(A)$. So here is my attempt. Proof: ( $\supseteq$ ) Suppose that $z \in \overline{A} \setminus \text{Int}(A)$ , then we get that $z \in \overline{A}$ and $z \notin \text{Int}(A)$ , then for all $r >0$ we have that $B(z;r) \cap A \neq \emptyset$ and since $z \notin \text{Int}(A)$ there doesn't exist any open ball centered at $z$ such that $B(z;r) \subseteq A$ , which means that for all $r >0$ we get that $B(z;r) \cap A^c \neq \emptyset$ , so $z \in \text{Bdry}(A)$ . $(\subseteq)$ I've got problems in this one. Suppose $z \in \text{Bdry}(A)$ then for all $r>0$ : $B(z;r) \cap A \neq \emptyset$ and $B(z;r) \cap A^c \neq \emptyset$ , this means that there isn't any open ball centered at $z$ such that $B(z;r) \subseteq A$ , but from here I can't see a way to conclude, any clue for this reverse containment would be appreciated.
In general topology, a point $x$ is in the boundary iff it is neither in the interior nor in the exterior of the set. For your case you can adapt this to metric spaces: $$ \begin{array}{lllr} x\ \in Bdry(A) & \Leftrightarrow & x\notin \overset{ \ \circ}{A} \ \wedge \ x \notin (X-A)^\circ & \\ & \Leftrightarrow & x\notin \overset{ \ \circ}{A} \ \wedge \ x \notin (X- \overset{ \ -}{A}) & \ \ \ ...\left( (X-A)^\circ =X- \overset{ \ -}{A} \right) \\ & \Leftrightarrow & x\notin \overset{ \ \circ}{A} \ \wedge \ x \in \overset{ \ -}{A}& \\ & \Leftrightarrow & x\in \overset{ \ -}{A}- \overset{ \ \circ}{A} & \end{array}% $$
|real-analysis|general-topology|
0
A table is made of $\frac{n(n + 1)}{2}$ boxes where we randomly write numbers. Calculate the probability that $\max_1 < \max_2 < ... < \max_n$
Given a natural number $n \geq 1$ , consider a table made up of $\frac{n(n + 1)}{2}$ boxes, arranged in $n$ rows: one box in the first row, two in the second, etc., $n$ boxes in the $n$ th row. In the boxes of the table we randomly write numbers from $1$ to $\frac{n(n + 1)}{2}$ . Let $m_k$ be the largest of the numbers in the $k$ th row. Calculate the probability that $m_1 < m_2 < \cdots < m_n$ . I think that it can be illustrated with such an array, in the example $n = 6$ . That gives us $\frac{6(6 + 1)}{2} = 21$ boxes. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 The max values in the rows are: $1, 3, 6, 10, 15, 21$ . First I thought that all the arrangements of values in an array that are correct would be those made by permuting numbers in each row. However, that is not that simple since for example we could change places of $2$ and $4$ and that would be correct. However, not all permutations in columns are valid, since interchanging places of $1$ and $4$ would make the arrangement of values incor
Let $p_n$ be the probability that $m_1 < m_2 < \cdots < m_n$ . Suppose $n\ge 1$ . In order for the event $m_1 < \cdots < m_n$ to occur, two things must happen. First of all, the largest number of the triangle, $n(n+1)/2$ , must be placed in the $n^\text{th}$ row. There are $n$ spots in this row, so the probability that this occurs is $n/(n(n+1)/2)=2/(n+1)$ . Second, we need it to be the case that $m_1 < \cdots < m_{n-1}$ . The event that this occurs is just a smaller version of the same problem, with only $n-1$ rows. Therefore, the probability of this is $p_{n-1}$ . Note that these two events are independent of each other. This is because the second event only depends on the relative orderings of the entries in the first $n-1$ rows, so it does not matter which entries were placed in the $n^\text{th}$ row. Since these two are independent, we conclude $$ p_n=\frac2{n+1}\cdot p_{n-1}\qquad (n\ge 2) $$ This leads to a proof by induction that $p_n=2^{n}/(n+1)!$ for all $n\ge 1$ .
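The closed form $p_n = 2^n/(n+1)!$ can be confirmed by exhaustive enumeration for small $n$; this Python sketch checks every arrangement of the numbers in the triangle (feasible up to $n=3$, where there are $6! = 720$ arrangements):

```python
from itertools import permutations
from math import factorial

def p_exact(n):
    # exhaustive check over all arrangements of 1..n(n+1)/2 in the triangle
    total = n * (n + 1) // 2
    rows, start = [], 0
    for k in range(1, n + 1):
        rows.append((start, start + k))   # slice bounds of row k
        start += k
    good = 0
    for perm in permutations(range(1, total + 1)):
        maxima = [max(perm[a:b]) for a, b in rows]
        if all(x < y for x, y in zip(maxima, maxima[1:])):
            good += 1
    return good / factorial(total)

# compare with the answer's formula p_n = 2^n / (n+1)!
for n in (1, 2, 3):
    print(n, p_exact(n), 2 ** n / factorial(n + 1))
```

For $n=3$ exactly $240$ of the $720$ arrangements satisfy $m_1<m_2<m_3$, giving $p_3 = 1/3 = 2^3/4!$.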
|probability|
1
From a deck of $52$ cards, we draw $5$ cards. Find the probability that we get at least $4$ face cards if we know that we have an ace and no hearts.
From a deck of $52$ cards, we draw $5$ cards without replacement. Determine the probability that we get at least $4$ face cards if we know that we have an ace and no hearts. $A_4 =$ we get $4$ face cards (at least one of them is an ace) and $1$ card that is not a figure; there is no heart among all those cards $A_{4.1} = $ we get $1$ ace, $3$ different face cards and $1$ card that is not a figure; there is no heart among all those cards $A_{4.2} = $ we get $2$ aces, $2$ different face cards and $1$ card that is not a figure; there is no heart among all those cards $A_{4.3} = $ we get $3$ aces, $1$ different face card and $1$ card that is not a figure; there is no heart among all those cards $A_5 =$ we get $5$ face cards (at least one of them is an ace); there is no heart among all those cards $A_{5.1} = $ we get $1$ ace, $4$ different face cards; there is no heart among all those cards $A_{5.2} = $ we get $2$ aces, $3$ different face cards; there is no heart among all those cards $A_{
Assuming that "figure" includes $J,Q,K,A$ , you could simplify computations by computing [Hands with $4/5$ figures] - [Such hands without any ace] The restricted sample space excluding hearts is $9+3$ ace figures, and $27$ "others" $$ Pr = \frac{\left[\binom{12}4\binom{27}1+\binom{12}5\right]- \left[\binom94\binom{27}1 +\binom95\right]}{\binom{39}5 - \binom{36}5}$$
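This complementary count is easy to evaluate and to cross-check by brute force over the restricted $39$-card deck; the index convention below (cards $0$–$2$ are the aces, $3$–$11$ the other figures) is my own labeling:

```python
from math import comb
from itertools import combinations

# deck with hearts removed: 39 cards = 12 figures (J, Q, K, A in three suits,
# of which 3 are aces) plus 27 non-figures
with_4_or_5_figures = comb(12, 4) * comb(27, 1) + comb(12, 5)
without_any_ace = comb(9, 4) * comb(27, 1) + comb(9, 5)
favourable = with_4_or_5_figures - without_any_ace

# conditioning event: no hearts and at least one ace
possible = comb(39, 5) - comb(36, 5)
print(favourable, possible, favourable / possible)   # 10629 198765 ~0.0535

# brute-force cross-check: cards 0-2 are aces, 3-11 other figures, 12-38 rest
fav = pos = 0
for hand in combinations(range(39), 5):
    if hand[0] < 3:                          # combinations are sorted, so
        pos += 1                             # hand[0] < 3 <=> at least one ace
        if sum(c < 12 for c in hand) >= 4:   # at least 4 figures
            fav += 1
print(fav == favourable, pos == possible)    # True True
```

So the answer's expression evaluates to $10629/198765 \approx 0.0535$.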
|probability|combinatorics|card-games|
0
get dihedral angles of octahedron given all triangles
An octahedron (not necessarily regular) consists of 8 triangles. You can see it as two pyramids glued together (from now on I only consider this case). Call the triangles in the upper pyramid $T_1, T_2, T_3, T_4 $ (clockwise) and in the lower pyramid $T_5, …, T_8$ (clockwise). The seams are $(T_i, T_{i+1}) (i=1,2,3,5,6,7), (T_4, T_1), (T_8, T_5)$ and $(T_i, T_{i+4}) (i=1,2,3,4)$ . Assume you know all information about the triangles. Call the angles and lengths for triangle $T_i$ (starting from the top (for $i = 1,...,4$ ), resp. bottom (for $i = 5,6,7,8$ ) and go clockwise): $(a_i, b_i, c_i)$ for lengths and $(\alpha_i, \beta_i, \gamma_i)$ for the angles. You know that they form an octahedron, the question is now: How can I find all dihedral angles? I guess one can find equations which these dihedral angles need to fulfill, and that by solving this system of equations I can find the dihedral angles. Does someone have ideas for getting these equations?
Another way to solve this problem is as follows. Place $A,B,C,D$ on the $xy$ plane. Let $ a = AB , b = BC , c = CD, d = DA $ , and let $V$ be the apex with a $z$ coordinate that is positive. Let $ e= VA, f = VB, g = VC, h = VD $ . Now express the coordinates of all the points using three angles. Angles $\theta$ and $\phi$ are the angles that $AD$ and $BC$ make with the positive $x$ axis. And angle $\psi$ is the angle of rotation of the face $ABV$ from the triangle $ABV'$ which is congruent to it, but has $V'$ on the $xy$ plane with the $y$ coordinate of $V'$ negative. The rotation is by angle $\psi$ clockwise about the $x$ -axis. Since the sides of this triangle are known to be $a, e, f$ , then we can drop a perpendicular from $V'$ onto $AB$ , with the altitude being $ h_1 = e \sin \alpha $ and the offset of the foot of the perpendicular from point $A$ is $ f_1 = e \cos \alpha $ , where $ \alpha = \angle BAV' = \cos^{-1} \left( \dfrac{a^2 + e^2 - f^2 }{2 a e }\right)$ . With this
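Once a coordinate placement like the one in this answer is found, reading off the dihedral angles is mechanical. The Python sketch below is my own illustration of that last step (it does not solve the inverse problem in the question): project the off-edge vertices onto the plane perpendicular to the shared edge and measure the angle between the projections, checked here on a regular octahedron, where every dihedral angle is $\arccos(-1/3)\approx 109.47^\circ$:

```python
import math

def sub(p, q): return [a - b for a, b in zip(p, q)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def dihedral(a, b, p, q):
    # interior dihedral angle along edge ab between faces abp and abq:
    # project p - a and q - a onto the plane perpendicular to the edge
    e = sub(b, a)
    def perp(x):
        t = dot(x, e) / dot(e, e)
        return [xi - t * ei for xi, ei in zip(x, e)]
    u, v = perp(sub(p, a)), perp(sub(q, a))
    c = dot(u, v) / (norm(u) * norm(v))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# regular octahedron with vertices (+-1,0,0), (0,+-1,0), (0,0,+-1)
A, B = [1, 0, 0], [0, 1, 0]
top, bottom = [0, 0, 1], [0, 0, -1]
print(dihedral(A, B, top, bottom))   # ~109.4712
```

The same `dihedral` function applies to each of the 12 seams of a general octahedron once its vertex coordinates are known.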
|geometry|trigonometry|3d|
0
For $g,h\in{\rm SL}(2,q)\setminus\{\pm I\}$ with ${\rm tr}(g)={\rm tr}(h)$, how does $h$ ${\rm SL}(2,q)$-conjugated relate to $g^{\pm 1}?$
This question is less open-ended than you might think. The Question: For $g,h\in{\rm SL}(2,q)\setminus\{\pm I\}$ with ${\rm tr}(g)={\rm tr}(h)$ , how does $h$ ${\rm SL}(2,q)$ -conjugated relate to $g^{\pm 1}?$ Context: According to a preprint : [I]f $g,h\in \operatorname{SL}(2,\Bbb R)\setminus \{\pm I\}$ and $\operatorname{tr}(g)=\operatorname{tr}(h)$ then $h$ is $\operatorname{SL}(2,\Bbb R)$ -conjugate to $g^{\pm 1}$ , while if $g,h\in\operatorname{SL}(2,\Bbb C)\setminus\{\pm I\}$ and $\operatorname{tr}(g)=\operatorname{tr}(h)$ then $h$ is $\operatorname{SL}(2,\Bbb C)$ -conjugate to $g$ . Thoughts: My guess is there's some function $\varphi: \{(h,g)\}\to\operatorname{SL}(2,q)$ or maybe $\psi:\{(h,g)\}\to \Bbb Z$ such that $$k_{h,g}hk_{h,g}^{-1}=\varphi(h,g)\tag{1}$$ or $$l_{h,g}hl_{h,g}^{-1}=g^{\psi(h,g)},\tag{2}$$ where $k_{h,g}, l_{h,g}\in\operatorname{SL}(2,q)$ . I think $(1)$ says nothing, now that I think about it, but I'll leave it here to articulate what I mean exactly. Somethi
There is no such relation as $(2)$ if $q=p^{2m}$ where $p$ is an odd prime number. Note that any element of $GF(p)$ is a square in $GF(q)$ . Indeed, if $x$ is a square in $GF(p)$ , it is a square in $GF(q)$ . If it is not a square in $GF(p),$ then $GF(p)(\sqrt{x})$ is nothing but $GF(p^2)$ , which is a subfield of $GF(q)$ , and $x$ is a square in $GF(q)$ . Now, for $\alpha\in GF(q)^\times$ , set $T(\alpha)=\begin{pmatrix}1 & \alpha \cr 0 & 1\end{pmatrix}$ . Direct computation shows that if $T(\alpha)$ and $T(\beta)$ are conjugate, then $\beta\alpha^{-1}$ is a square in $GF(q).$ It is even an equivalence. We also have $T(\alpha_1)T(\alpha_2)=T(\alpha_1+\alpha_2)$ . In particular, $T(\alpha)^m=T(m \alpha)$ for all $m\in\mathbb{Z}$ . Note that $GF(q)$ has characteristic $p$ , so all the various powers of $T(\alpha)$ may be obtained by taking $m=0,\ldots, p-1$ . For example, $T(\alpha)^{-1}=T((p-1)\alpha)$ . Take $g=T(1)$ and $h=T(\varepsilon)$ , where $\varepsilon$ is a nonsquare element of
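For the smallest case $q = 9$, the counterexample can be verified by exhaustive computation; this Python sketch models $GF(9)=GF(3)[i]/(i^2+1)$ as pairs $(a,b)=a+bi$ (my own encoding) and checks that $T(\varepsilon)$ is conjugate to no power $T(1)^{\pm1}$ in $SL(2,9)$:

```python
import itertools

# GF(9) = GF(3)[i] with i^2 = -1: elements are pairs (a, b) meaning a + b*i
F = [(a, b) for a in range(3) for b in range(3)]
def fadd(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def fneg(x): return ((-x[0]) % 3, (-x[1]) % 3)
def fmul(x, y):
    return ((x[0] * y[0] - x[1] * y[1]) % 3, (x[0] * y[1] + x[1] * y[0]) % 3)

ZERO, ONE = (0, 0), (1, 0)

# every element of GF(3) is a square in GF(9); GF(9) still has nonsquares
squares = {fmul(x, x) for x in F if x != ZERO}
eps = next(x for x in F if x != ZERO and x not in squares)

def mmul(A, B):
    return tuple(tuple(fadd(fmul(A[i][0], B[0][j]), fmul(A[i][1], B[1][j]))
                       for j in range(2)) for i in range(2))
def det(A):
    return fadd(fmul(A[0][0], A[1][1]), fneg(fmul(A[0][1], A[1][0])))
def T(alpha):
    return ((ONE, alpha), (ZERO, ONE))

SL = [M for M in (((a, b), (c, d))
                  for a, b, c, d in itertools.product(F, repeat=4))
      if det(M) == ONE]                                   # |SL(2,9)| = 720

# nontrivial powers of g = T(1): T(1) and T(2) = T(1)^{-1} (characteristic 3)
powers_of_g = {T((1, 0)), T((2, 0))}

# h = T(eps) has tr(h) = tr(g) = 2, yet is conjugate to neither g nor g^{-1}
found = any(mmul(mmul(M, T(eps)), inv) in powers_of_g
            for M in SL
            for inv in [((M[1][1], fneg(M[0][1])),
                         (fneg(M[1][0]), M[0][0]))])      # inverse when det = 1
print(found)   # False
```

This matches the square-class criterion: $m\varepsilon^{-1}$ is a nonsquare for $m\in\{1,2\}$, so no conjugation can work.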
|linear-algebra|matrices|finite-fields|trace|
1
Connection $1$-forms and the local expression $d + A$
Let $M$ be a smooth manifold and $E \to M$ a vector bundle over $M$ with a connection $\nabla$ . Locally on an open set $U \subset M$ with a frame $(E_1,\dots,E_k)$ , we can write any section $s$ of $E|_U$ as $s = \sum c^iE_i$ for smooth functions $c^i:U \to \Bbb R$ . This means that we can write $$ \begin{align*} \nabla_X s &= \nabla_X\left(\sum_i c^i E_i\right) \\ &= \sum_i \nabla_X(c^iE_i) \\ &= \sum_i (Xc^i)E_i+c^i\nabla_X E_i \end{align*} $$ or alternatively $$ \begin{align*} \nabla s &= \nabla\left(\sum_i c^i E_i\right) \\ &= \sum_i \nabla(c^iE_i) \\ &= \sum_i dc^i \otimes E_i+c^i\nabla E_i. \end{align*} $$ In any case this can be computed by knowing $\nabla E_i$ 's. Since $\nabla E_i$ is a section of $E$ over $U$ it is a linear combination of $E_j$ 's and so $$ \nabla_X E_i = \sum_j \omega^j_i(X)E_j $$ and we define the connection $1$ -forms $\omega = [\omega^j_i]$ to be the coefficients in this sum. Some authors state that locally any connection can be locally written as $$d +
Fix any connection $\nabla_0$ on $E$ , and let $\nabla$ be an arbitrary connection on $E$ . Consider the operator $\nabla_0-\nabla:\Gamma(E)\to\Gamma(E\otimes T^*M)$ . You can show that this is an endomorphism valued $1$ -form: $$(\nabla_0-\nabla)(fs)=df\otimes s +f\nabla_0s-df\otimes s -f\nabla s =f(\nabla_0-\nabla)s$$ This $C^\infty(M)$ -linearity is what characterises $\nabla_0-\nabla$ as a tensor - in this case a $1$ -form. Thus, since the difference of any two connections is an endomorphism valued $1$ -form, it is also true that any connection can be obtained from a fixed basepoint $\nabla_0$ by adding an endomorphism valued $1$ -form. In particular, on a trivialisation, one can take $d$ to be that basepoint, and then locally any connection can be expressed as $\nabla=d+A$ . So yes, the local isomorphism $E|_U\cong U\times\mathbb{R}^r$ is crucial, because it gives a basepoint for the space of connections (in the given trivialisation, of course).
|differential-geometry|vector-bundles|connections|
0
The distribution of the minimum value among $mn$ non-independent random variables and Expected average distance in greedy matching on a circle
Now we have several independent and identically distributed random variables following the uniform distribution on the interval $[0, 1]$. They are denoted as $x_1, x_2, x_3, ..., x_m$ and $y_1, y_2, ..., y_n$ respectively. Let $$ D_{i,j} = \begin{cases} x_i-y_j, & x_i\ge y_j \\\\ 1+x_i-y_j, & x_i< y_j \end{cases} $$ In reality, these points are distributed on a circle with a length of 1 and can only move clockwise, so $$ \begin{aligned} F_{D_{ij}}(d) &= P(D_{ij} \leq d) \\\\ &= P(D \leq d | x_i \geq y_j)P(x_i \geq y_j) + P(D \leq d | x_i < y_j)P(x_i < y_j) = d. \end{aligned} $$ Therefore, $D_{ij}$ also follows a uniform distribution on the interval $[0, 1]$. However, unfortunately, $D_{ij}$ , for all $i$ and $j$ , are not independent. For example, suppose $D_{11} = x_1 - y_1$ and $D_{12} = x_1 - y_2$ . Then $E(D_{11}D_{12}) = E((x_1 - y_1)(x_1 - y_2)) = E(x_1^2) - E(x_1y_1) - E(x_1y_2) + E(y_1y_2) = \frac{1}{12} \neq 0$ . So it's difficult to determine the distribution function of the random variable $Y = \min(D_{11}, D_{12}, ..., D_{1n}, D_{21}, D_{2
Your variable $D_{ij}$ is the distance from $x_i$ to $y_j$ in, say, the clockwise direction if you bend the unit interval into a circle. So you want the distribution of the minimum $Y$ of the clockwise distances from $m$ points $x_i$ to $n$ points $y_j$ . This can be determined by distinguishing the cases in which the $x_i$ and the $y_j$ each form $k$ contiguous blocks, with $1\le k\le\min(m,n)$ . In such an arrangement, for $Y$ to be $\ge d$ there must be clockwise gaps of length $d$ from an $x$ -block to a $y$ -block, whereas there’s no constraint on the clockwise gap from a $y$ -block to an $x$ -block. Pick one of the points at which to cut the circle into an interval. Then successively remove each of the $k$ gaps of length $d$ by subtracting $d$ from all points that come after it. This moves all the points into an interval of length $1-kd$ , and this transformation is bijective: Each configuration with all points in this interval is transformed into a configuration with $k$ gaps of length at least $d$ from an $x$ -block to a $y$ -block.
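Independently of the exact formula, the setup is easy to explore by simulation; the sketch below (function names and sample sizes are my own choices) estimates the mean of $Y$ empirically:

```python
import random

def min_clockwise_distance(m, n, rng):
    """One sample of Y = min over i,j of the clockwise distance
    from x_i to y_j on a circle of circumference 1."""
    xs = [rng.random() for _ in range(m)]
    ys = [rng.random() for _ in range(n)]
    # D_ij = x_i - y_j if x_i >= y_j else 1 + x_i - y_j, i.e. (x_i - y_j) mod 1.
    return min((x - y) % 1.0 for x in xs for y in ys)

def estimate_mean(m, n, trials=100_000, seed=0):
    """Monte Carlo estimate of E[Y]."""
    rng = random.Random(seed)
    return sum(min_clockwise_distance(m, n, rng) for _ in range(trials)) / trials
```

For $m=n=1$, $Y=D_{11}$ is uniform on $[0,1]$, so the estimate should be close to $1/2$; for larger $m,n$ the minimum of several distances is markedly smaller.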
|probability|integration|cumulative-distribution-functions|
1
A couple of clarifications about curvature, T(t), N(t)
Suppose $f: \mathbb{R} \rightarrow \mathbb{R^3}$ is, for example, a 3-times differentiable but not necessarily smooth function (smooth meaning $f'(t)$ exists and $f'(t) \ne \overrightarrow{0}$ for each $t$ ). Here $f$ is not a path-length parametrization. Suppose also that: $T(t)$ is the unit tangent at $t$ , $N(t)$ is the principal normal at $t$ , and $\kappa(t)$ is the curvature at $t$ . Is it correct that: when $f'(a) = \overrightarrow{0}$ , then $N(a)$ and $\kappa(a)$ are not defined? when $f'(a) \ne \overrightarrow{0}$ , then $N(a)$ and $\kappa(a)$ are defined? the image of $f$ (which is the curve $C$ in $\mathbb{R^3}$ ) has "a corner" at the point $t = a$ if and only if $f'(a) = 0$ ? Note: What do I mean by "a corner"? Well, I am not sure of the exact definition but e.g. the image of the function $g(t) = (t^2, t^3, 0)$ has a corner at $t = 0$ . So are the above statements true or not? I am asking because of this formula. I got confused by chapter 2.8 of this book https://www.amazon.com/Vector-Calculus-
There are a lot of salient questions here, but I think that addressing the following might be satisfactory. Suppose that we are given a 3-times differentiable function $f:\Bbb R \to \Bbb R^3$ . Let $s:\Bbb R \to \Bbb R$ be the path-length function of $f$ , so that $f \circ s^{-1}$ is parameterized by path-length. We define a cusp in the path to be a value $a \in \Bbb R$ such that $f \circ s^{-1}$ has derivative zero at $s(a)$ . Is it possible to detect whether $f$ has a cusp at $a$ without calculating $s(t)$ or its inverse? If $f$ does not have a cusp at $a$ but $f'(a) = 0$ , can the curvature and principal normal of the path be computed? First of all, let's take stock of what can go "wrong" here. It is possible that $f$ is 3-times differentiable, but the curve has a cusp. For instance, $(t^2,t^3)$ describes a curve with a cusp. It is possible that the curvature and principal normal fail to exist even where no cusp is present and $f$ is 3-times differentiable. For instance, consider the
|calculus|differential-geometry|vector-analysis|
0
Showing that $S^1 \not\cong (0, 1)$ and $\not\cong [0, 1]$
So far, I have shown that $S^1 \setminus \{(1, 0)\}$ is homeomorphic to $(0, 1) \subset \mathbb{R}$ : Define the function $f: S^1 \setminus \{(1, 0)\} \rightarrow (0, 1)$ by $f\big((\cos(\theta), \sin(\theta))\big) = \theta / (2\pi)$ (since each point $(x, y) \in S^1 $ may be represented uniquely by $x = \cos(\theta)$ and $y = \sin(\theta)$ ). By definition of trigonometric parameterization, each $(x, y) \in S^1 \setminus \{(1, 0)\}$ may be uniquely represented by some $0 < \theta < 2\pi$ such that $x = \cos(\theta)$ and $y = \sin(\theta)$ . Then, each $\theta$ is scaled by a factor of $1/(2\pi)$ so that each $\theta$ corresponds to a unique point in the interval $(0, 1)$ . Therefore, $f$ is injective. Next, given some $a \in (0, 1)$ , we have the corresponding point $(x, y) \in S^1$ where $x = \cos(2\pi a)$ , $y = \sin(2\pi a)$ . Therefore, $f$ is surjective. Next, let $U \subseteq S^1 \setminus \{(1, 0)\}$ be an open arc in $S^1$ . That is, $U = (\theta_1, \theta_2)$ where $0 < \theta_1 < \theta_2 < 2\pi$ . Then we have $$f(U) = \left\{\frac{\vartheta
$S^{1}$ isn't simply connected but both intervals are, so no homeomorphism can exist.
|general-topology|
0
Showing that $S^1 \not\cong (0, 1)$ and $\not\cong [0, 1]$
So far, I have shown that $S^1 \setminus \{(1, 0)\}$ is homeomorphic to $(0, 1) \subset \mathbb{R}$ : Define the function $f: S^1 \setminus \{(1, 0)\} \rightarrow (0, 1)$ by $f\big((\cos(\theta), \sin(\theta))\big) = \theta / (2\pi)$ (since each point $(x, y) \in S^1 $ may be represented uniquely by $x = \cos(\theta)$ and $y = \sin(\theta)$ ). By definition of trigonometric parameterization, each $(x, y) \in S^1 \setminus \{(1, 0)\}$ may be uniquely represented by some $0 < \theta < 2\pi$ such that $x = \cos(\theta)$ and $y = \sin(\theta)$ . Then, each $\theta$ is scaled by a factor of $1/(2\pi)$ so that each $\theta$ corresponds to a unique point in the interval $(0, 1)$ . Therefore, $f$ is injective. Next, given some $a \in (0, 1)$ , we have the corresponding point $(x, y) \in S^1$ where $x = \cos(2\pi a)$ , $y = \sin(2\pi a)$ . Therefore, $f$ is surjective. Next, let $U \subseteq S^1 \setminus \{(1, 0)\}$ be an open arc in $S^1$ . That is, $U = (\theta_1, \theta_2)$ where $0 < \theta_1 < \theta_2 < 2\pi$ . Then we have $$f(U) = \left\{\frac{\vartheta
For any real $a < b$ , the intervals $[a,b)$ , $(a,b]$ , $[a,b]$ and $(a,b)$ are all contractible, so their fundamental groups (or, if you prefer, homology groups) are trivial. But $H_1(\mathbb{S}^1)=\pi_1(\mathbb{S}^1)=\mathbb{Z}$ . With regard to the homeomorphism you construct, observe that there is a simpler method. Let $N$ be any point on $\mathbb{S}^1$ . Stereographic projection says that $\mathbb{S}^1\setminus\{N\}$ is homeomorphic to $\mathbb{R}$ , and $\mathbb{R}$ is homeomorphic to $(0,1)$ .
|general-topology|
0
Essentially infinite continuous maps $\prod_{\mathbf{N}} \mathbf{N} \rightarrow \mathbf{N}$?
Let $\mathbf{N}$ be the natural numbers with the discrete topology (or really, any countable set), and consider the space of natural number sequences $\prod_{\mathbf{N}} \mathbf{N}$ with the product topology (sometimes called "the Baire space"). Is there an "explicit" example of a continuous function $\prod_{\mathbf{N}} \mathbf{N} \rightarrow \mathbf{N}$ that does not factor through projection $\prod_{\mathbf N} \mathbf{N} \rightarrow \prod_{i=0}^{n} \mathbf{N}$ onto some finite product? I can't think of one off the top of my head. If no explicit example, does such a map at least exist abstractly?
The answer by user469053 explains the right strategy for obtaining an example, but there's a slightly simpler implementation: $$ f(x(1),x(2),x(3),\dots)=x(x(1)). $$ This doesn't factor through a projection to any finite subproduct because $x(1)$ can be arbitrarily large. But it's continuous because $f(x)$ depends on only two components of $x$ .
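A minimal sketch of this function in Python (the encoding of sequences as tuples is mine; the answer's 1-based indexing is kept):

```python
def f(x):
    """The map f(x) = x(x(1)): read the first term of the sequence,
    then output the term at that (1-based) position. A finite prefix
    of the point of Baire space suffices to evaluate f."""
    return x[x[0] - 1]   # x[0] is x(1); convert the 1-based index to 0-based
```

For example `f((2, 7, 9))` is `7`: the first term is $2$, so we read the second term. Continuity holds because $f$ agrees on any two sequences that share their first $\max(1, x(1))$ terms; no single finite set of coordinates works for all $x$, since $x(1)$ is unbounded.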
|general-topology|descriptive-set-theory|
0
Image of an Open Disk Given Function
Say I am given a function $f : \mathbb{C} \to \mathbb{C}$ , where $f(z) = z^n$ and $n \geq 1$ is a positive integer, and given a disk $D(0, \varepsilon) = \{z \in \mathbb{C} : |z| < \varepsilon\}$ , and I want to find the image of $D(0, \varepsilon)$ under $f$ , i.e. I find $f(D(0, \varepsilon))$ . So by definition of the image of a function, I have \begin{align*} f(D(0, \varepsilon)) &= \{f(z) \in \mathbb{C} : |f(z)| < \varepsilon\} \tag{$*$} \end{align*} Now I consider cases where $n$ is even and when $n$ is odd. If even then \begin{equation*} f(D(0, \varepsilon)) = \{z \in \mathbb{C} : 0 \le |z| < \varepsilon^n\} \end{equation*} and when $n$ is odd, \begin{equation*} f(D(0, \varepsilon)) = \{z \in \mathbb{C} : |z| < \varepsilon^n\} \end{equation*} Would this be the correct way of finding the image under a disk? In particular where I labelled (*)?
The correct way would be $$f(D(0, \varepsilon)) = \{f(z) \in \mathbb{C} : |z| < \varepsilon\}.$$ What you found is the pre-image of $D(0, \varepsilon)$ , that is, the set of points that map to $D(0, \varepsilon)$ by $f$ .
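A quick numerical sanity check (the function name and all parameter choices are mine) that the image really is the disk $D(0, \varepsilon^n)$ for any $n \ge 1$: every point of $D(0,\varepsilon)$ maps into $D(0,\varepsilon^n)$, and every point of $D(0,\varepsilon^n)$ has an $n$-th root in $D(0,\varepsilon)$:

```python
import cmath
import random

def in_image_check(n=3, eps=0.8, trials=1000, seed=1):
    rng = random.Random(seed)
    # Forward: points of D(0, eps) map into D(0, eps**n).
    for _ in range(trials):
        r = eps * rng.random() ** 0.5           # any sampling of the disk works
        t = 2 * cmath.pi * rng.random()
        z = r * cmath.exp(1j * t)
        assert abs(z ** n) < eps ** n * (1 + 1e-9)
    # Backward: every w in D(0, eps**n) has a preimage in D(0, eps).
    for _ in range(trials):
        r = (eps ** n) * rng.random()
        t = 2 * cmath.pi * rng.random()
        w = r * cmath.exp(1j * t)
        # One n-th root of w, chosen with argument phase(w)/n:
        z = abs(w) ** (1 / n) * cmath.exp(1j * cmath.phase(w) / n)
        assert abs(z) <= eps * (1 + 1e-12) and abs(z ** n - w) < 1e-9
    return True
```

Note the check behaves identically for even and odd $n$: over $\mathbb{C}$ there is no case split, unlike for $x \mapsto x^n$ on $\mathbb{R}$.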
|complex-analysis|solution-verification|
1
Normality relation between product of subgroups
I need to solve this exercise for the proof of Dedekind's modular law. Let $G$ be a group, $N$ a normal subgroup of $G$ , $H$ and $K$ subgroups of $G$ with $H$ normal subgroup of $K$ . I want to show that the product $NH$ is a normal subgroup of $NK$ . I know that $NH$ is a group and I started by choosing $nh\in NH, mk \in NK$ . I want to prove that $$k^{-1}m^{-1}nhmk\in NH.$$ Defining $g=mk$ , I wrote it as $$(g^{-1}ng)(g^{-1}hg).$$ Now, the first term is in $N$ , but I can't prove that the other one is in $H$ .
Consider the map $x\mapsto x^{-1}$ from $NH$ to itself. This is a bijection, and hence $NH=HN$ . A similar argument shows that $NK=KN$ . Therefore there exist $x\in K$ and $y\in N$ such that $xy=g$ . Then $$g^{-1}hg =y^{-1}x^{-1}hxy\in y^{-1}Hy,$$ since $x^{-1}hx\in H$ by normality of $H$ in $K$ . Now since $$Hy\subset HN=NH,$$ we have $$g^{-1}hg\in y^{-1}Hy\subset y^{-1}NH=NH.$$
|group-theory|
1
If $K$ is a field of characteristic $0$ in which every polynomial of degree $\geq3$ is reducible, then $\bar{K}/K$ has degree $1$ or $2$
Is it true that if $K$ is a field of characteristic $0$ in which every polynomial of degree 3 is reducible, then $\bar{K}/K$ has degree 1 or 2? I have thought of taking $\alpha$ algebraic over $K$ . Then $K\subset K(\alpha)\subset\bar{K}$ . From the condition on the degree of irreducible polynomials $[K(\alpha):K]=1,2$ and $[\bar{K}:K]=[\bar{K}:K(\alpha)][K(\alpha):K]$ . Can I guess something about $[\bar{K}:K(\alpha)]$ ? The whole situation looks just like $\mathbb{R}$ or $\mathbb{C}$ ...
The Primitive Element Theorem says that every finite separable extension is simple; that is, if $L/K$ is a field extension that is finite, and $L$ is separable over $K$ , then there exists $\alpha\in L$ such that $L=K(\alpha)$ . Extensions of fields of characteristic $0$ are necessarily separable, so finite extensions of fields of characteristic zero are always simple. If $\alpha\in \overline{K}$ , then $\alpha$ is algebraic over $K$ , so its minimal polynomial, being irreducible, has degree either $1$ or $2$ by hypothesis. If you always get degree $1$ , then $\alpha\in K$ for all $\alpha\in\overline{K}$ , so you have $\overline{K}=K$ . If there is at least one $\alpha$ with $[K(\alpha):K]$ of degree $2$ , let $\beta\in \overline{K}$ . Then we have a tower $K\subseteq K(\alpha)\subseteq K(\alpha,\beta)$ . Since $K(\alpha,\beta)=K(\gamma)$ for some $\gamma$ by the Primitive Element Theorem, we have $[K(\beta,\alpha):K] = [K(\gamma):K]\leq 2$ . Therefore, $$2\geq [K(\gamma):K] = [K(\gamma):K(\alpha)][K(\alpha):K]
|abstract-algebra|field-theory|extension-field|
1
Link between soudness, completeness, and the use of truth tables
I do think there is a link, I just can't quite catch it. I have looked it up but I'm pretty new to logic so the explanations all seemed a little confusing. I think a system needs to be sound and complete in order to rely on truth tables to make proofs but I couldn't be less sure.
Truth tables are used in defining the notion of logical consequence. A conclusion $C$ is a logical consequence of a set $\mathcal H$ of hypotheses if, whenever an assignment of truth values to the propositional variables (a row of a truth table) makes all the hypotheses in $\mathcal H$ true, it also makes $C$ true. Notice that this definition of logical consequence is entirely about truth values; there is no mention of axioms or rules of inference. A conclusion $C$ is deducible , in a formal system $X$ , from a set $\mathcal H$ of hypotheses if there is a deduction of $C$ from hypotheses in $\mathcal H$ and axioms of $X$ by means of the rules of inference of $X$ . Notice that this definition of deducible is entirely about axioms and rules of inference; there is no mention of truth values. The notions of "logical consequence" and "deducible in $X$ ", despite their entirely different provenance, turn out to be equivalent provided $X$ was intelligently designed. In more detail, $X$ is s
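The semantic notion above is mechanical enough to code directly. Here is a small sketch (the encoding is my own: formulas as Boolean functions of an assignment) that checks logical consequence by enumerating the rows of a truth table:

```python
from itertools import product

def entails(variables, hypotheses, conclusion):
    """Semantic consequence: every assignment (row of the truth table)
    making all hypotheses true also makes the conclusion true.
    Formulas are functions from an assignment dict to bool."""
    for values in product([False, True], repeat=len(variables)):
        row = dict(zip(variables, values))
        if all(h(row) for h in hypotheses) and not conclusion(row):
            return False   # found a countermodel
    return True

# Example formulas over variables P, Q:
P = lambda v: v["P"]
Q = lambda v: v["Q"]
impl = lambda v: (not v["P"]) or v["Q"]   # P -> Q
```

Modus ponens, `entails(["P", "Q"], [P, impl], Q)`, returns `True`; affirming the consequent, `entails(["P", "Q"], [impl, Q], P)`, returns `False` because the row $P=\mathrm{F}, Q=\mathrm{T}$ is a countermodel.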
|logic|
1
Is the adjunction space of two Hausdorff spaces also Hausdorff?
I was reading the definition of CW-complex in terms of pushouts given by Lück's Algebraische Topologie: Homologie und Mannigfaltigkeiten (Chapter 3). It is stated (though not proven) that such a topological space is always Hausdorff. I guess this follows readily by induction if we can prove that the adjunction space $Z$ formed by attaching $Y$ to $X$ along $f$ , where $f:A\rightarrow X$ is some continuous map and $A$ is a closed subset of $Y$ , is Hausdorff whenever $X$ and $Y$ are (alternatively, $Z$ is just the pushout of $(\iota, f)$ , with $\iota: A\hookrightarrow Y$ being the inclusion). However, the quotient of a Hausdorff space is not Hausdorff in general, so I was wondering how to prove it in this particular case. If I choose two distinct $z,z'\in Z$ , it is easy to separate them if they both lie in $q(Y\setminus A)$ , with $q:X\amalg Y\rightarrow Z$ being the quotient map, as $q(Y\setminus A)$ is open in $Z$ and $q\vert_{Y\setminus A}$ is a topological embedding. Now, if both
Here's a counterexample. Let $Q=\Bbb{Q}\cap (0,1)$ . Let $Y$ be $\Bbb{R}$ with the following topology: a set $U\subseteq Y$ is open if it's open in the usual topology on $\Bbb{R}$ or $U=V\setminus Q$ where $V$ is open in the usual topology on $\Bbb{R}$ . (This is due to Munkres: a Hausdorff space that's not regular.) Since $Y$ 's topology contains the usual topology on $\Bbb{R}$ , it's Hausdorff. Let $A=Q\cup\{0\}$ . $A$ , as a subspace of the Hausdorff space $Y$ , is also Hausdorff. $Q$ is closed in $Y$ as the complement of the open set $Y\setminus Q$ . $\{0\}$ is also closed in $Y$ . So $A$ is a closed subset of $Y$ . Let $X=\{0,1\}$ . In the subspace topology of $A$ , $Q$ is closed and $\{0\}$ is closed, so there is a continuous map $f\colon A\to X$ sending $Q$ to $0$ and $f(0)=1$ . In the quotient topology $X\sqcup Y/f$ , the points $0,1$ from $X$ don't have disjoint open neighborhoods. One such neighborhood must be an open neighborhood of $0$ in $Y$ , containing all irrationals in some interval $(0
|general-topology|quotient-spaces|cw-complexes|
0
Proof of the Cycloid Parametric Equation
One of the steps of deriving the equations for the parametric curve of a cycloid is the following: Here we establish that the distance PT is equal to the distance OT, which then (alongside other steps) allows us to derive the parametric equation of the cycloid. Every video or written proof I have seen of deriving the parametric curve considers this step to be "intuitive", but this does not strike me as intuitive at all. Am I missing something obvious? Is there some way we could prove this fact? Is my intuition "broken"? (I hope not). The image in the post was taken from this video: https://www.youtube.com/watch?v=wUDQFRZyE9Y&ab_channel=TimHodges (1:04)
As the circle rolls without slipping on the line the arc length from $P$ to $T$ matches the length along the $x$ axis. Imagine that the circle has a tape measure wound around it anchored at the origin that unrolls as the circle rolls.
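Once $|PT| = |OT|$ is accepted, the standard parametrization $x = r(\theta-\sin\theta)$, $y = r(1-\cos\theta)$ follows. A small numeric sketch (radius and angles are arbitrary choices of mine) of the resulting curve, checking the rolling condition:

```python
import math

def cycloid(theta, r=1.0):
    """Point P on the rim after the circle of radius r has rolled through
    angle theta. The contact point T sits at x = r*theta, which equals the
    arc length |PT| unrolled so far (the rolling-without-slipping condition)."""
    return (r * (theta - math.sin(theta)), r * (1 - math.cos(theta)))

# After one full turn the tracked point is back on the ground,
# exactly one circumference 2*pi*r to the right.
x_full, y_full = cycloid(2 * math.pi, r=2.0)
```

At $\theta=\pi$ the point is at the top of its arch, height $2r$; at $\theta=0$ it sits at the cusp $(0,0)$.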
|proof-explanation|intuition|curves|parametric|cycloid|
0
How to generalise inner product to measures without densities
Let $(E, \mathcal{E}, \lambda)$ be a metric finite measure space, and let $\mu, \nu$ be finite measures with densities $f,g$ with respect to $\lambda$ . Then, I am interested in considering the functional: $$\langle \mu, \nu \rangle = \int_E g d \mu = \int_E fg \;d\lambda = \int f d \nu$$ which has the desired properties of an inner product. However, I would possibly like to define this product for measures which do not have densities. How could I best generalise this for the case where neither $\mu, \nu$ have densities with respect to $\lambda$ ? Inspired by the Lebesgue theory of integration, my best attempt would be to define: $$\langle \mu, \nu \rangle = \sup \{\rho(E): \rho(A) \le \mu(A) \nu(A) \; \forall A \in \mathcal{E} \}$$ where $\rho$ must be a measure on $(E,\mathcal{E})$ . Does this seem reasonable? If so, is there a more elegant way to write this? It would be very helpful to know before I start trying to prove that this definition coincides with the definition for pairs o
My hypothesis was dead wrong. Consider the example where $E = [0,1]$ and $\lambda, \mu$ and $\nu$ are all the Lebesgue measure. Then, clearly, we wish to have: $$\langle \mu, \nu \rangle = 1$$ But this is not what we get with the latter definition. Note that if $I_n$ is any interval of width $2^{-n}$ , then we must have: $$\rho(I_n) \le \mu(I_n) \nu(I_n) \le 2^{-2n}$$ Covering $[0,1]$ by $2^n$ such intervals, countable additivity gives $$\rho([0,1]) \le 2^n \cdot 2^{-2n} = 2^{-n}$$ and therefore, since $n$ was arbitrary: $$\rho([0,1]) = 0$$ Intuitively, the issue here is the second definition obliquely refers to measures $\rho$ such that $d \rho \le d \mu \; d \nu$ , and the double infinitesimal still gives zero after integrating. It clearly makes sense to wonder if the solution is to integrate twice. Revised solution : Let $\mu, \nu$ be finite measures on $E$ . We define: $$\langle \mu, \nu \rangle = (\mu \times \nu) (E \times E)$$ where $\mu \times \nu$ is the product measure on $E \times E$ , given the product $\sigma$ -algebra as usual
|integration|probability-theory|measure-theory|probability-distributions|
1
Scuola Normale Superiore's IVth year Admission Exam 2014 - Probability question (Exercise 3)
Let $X,Y$ be real-valued random variables with the same law and such that the product $XY$ is almost surely zero. Prove that $P(X=0)\geq 1/2$ . Find an explicit example such that $P(X=0)=1/2$ . Link here . (My attempt is in the answers - this is a question+answer post)
Take $\mathcal{N} = \{XY = 0\}$ ; we know that $\mathbb{P}(\mathcal{N}) = 1$ . Next, we have that $\{Y\neq 0\}\cap \mathcal{N}\subset \{X = 0\}$ , and it follows that $\mathbb{P}(\{Y\neq 0\}\cap \mathcal{N})\leq \mathbb{P}(X = 0)$ . Since $\mathbb{P}(\mathcal{N}) = 1$ one has $\mathbb{P}(\{Y\neq 0\}\cap \mathcal{N}) = \mathbb{P}(Y\neq 0)$ , thus $\mathbb{P}(Y\neq 0)\leq \mathbb{P}(X=0)$ . Finally, as $X,Y$ have the same law, it follows that $\mathbb{P}(Y\neq 0) = \mathbb{P}(X\neq 0)$ , so we conclude $\mathbb{P}(X\neq 0) \leq \mathbb{P}(X= 0)$ . But we also know $\mathbb{P}(X=0) + \mathbb{P}(X\neq 0)=1$ , hence $\mathbb{P}(X=0)\geq 1/2$ . Consider $\Omega = \{u,v\}$ , $\mathcal{F} = \mathcal{P}(\Omega)$ and the probability $\mathbb{P}$ defined by $\mathbb{P}(u) = \mathbb{P}(v) = 1/2$ . Define $X(\omega) = \mathscr{1}_{\{\omega = u\}}$ and $Y(\omega) = \mathscr{1}_{\{\omega = v\}}$ ; then $X,Y$ have the same law, $XY = 0$ , and $\mathbb{P}(X=0) = 1/2$ .
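The explicit example lives on a two-point sample space, so all three claimed properties can be checked by brute enumeration (the encoding below is mine):

```python
from fractions import Fraction

Omega = ["u", "v"]
P = {"u": Fraction(1, 2), "v": Fraction(1, 2)}
X = {"u": 1, "v": 0}   # X = indicator of {omega = u}
Y = {"u": 0, "v": 1}   # Y = indicator of {omega = v}

# XY = 0 almost surely (here: at every point of Omega).
assert all(X[w] * Y[w] == 0 for w in Omega)

# X and Y have the same law: P(X = k) = P(Y = k) for every value k.
law = lambda Z, k: sum(P[w] for w in Omega if Z[w] == k)
assert all(law(X, k) == law(Y, k) for k in (0, 1))

# P(X = 0) = 1/2, so the bound of the first part is attained.
p_x0 = law(X, 0)
```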
|probability|
0
Derivative of matrix exponential w.r.t. to each element of the matrix
I have $x= \exp(At)$ where $A$ is a matrix. I would like to find the derivative of $x$ with respect to each element of $A$ . Could anyone help with this problem?
We may verify Hyperplane’s answer directly. The Fréchet derivative of the matrix exponential $$ f:A\mapsto e^A=I+A+\frac{1}{2!}A^2+\frac{1}{3!}A^3+\cdots $$ is \begin{align*} Df(A):H&\mapsto H+\frac{1}{2!}(HA+AH)+\frac{1}{3!}(HA^2+AHA+A^2H)+\cdots\\ &=\sum_{k=0}^\infty \sum_{r=0}^k \frac{1}{(k+1)!} A^r H A^{k-r}\\ &=\sum_{k=0}^\infty \sum_{r=0}^k \frac{B(k-r+1, r+1)}{r!(k-r)!} A^r H A^{k-r}\\ &=\sum_{k=0}^\infty \sum_{r=0}^k \frac{\int_0^1 s^r(1-s)^{k-r} ds}{r!(k-r)!} A^{k-r} H A^r\\ &=\int_0^1 \sum_{k=0}^\infty \sum_{r=0}^k \frac{(1-s)^{k-r}s^r}{r!(k-r)!}A^{k-r} H A^r ds\\ &=\int_0^1 \left(\sum_{r=0}^\infty \frac{1}{r!}(1-s)^rA^r\right) H \left(\sum_{r=0}^\infty \frac{1}{r!}s^rA^r\right) ds\\ &=\int_0^1 e^{(1-s)A} H e^{sA} ds.\\ \end{align*} For $g(A)=e^{tA}=(f\circ L)(A)$ where $L(A)=tA$ , we have $$ Dg(A)(H)=Df\big(L(A)\big)DL(A)(H)=tDf(tA)(H). $$ Therefore \begin{align*} Dg(A)(H) =\sum_{k=0}^\infty \sum_{r=0}^k \frac{t^{k+1}}{(k+1)!} A^r H A^{k-r} =t\int_0^1 e^{(1-s)tA} H e^{stA}\,ds. \end{align*}
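The identity $Df(A)(H)=\int_0^1 e^{(1-s)A}He^{sA}\,ds$ can also be checked numerically. The sketch below (matrix size, seed and step counts are arbitrary choices, and the Taylor-series `expm` is only adequate for the small matrices used here) compares a central finite difference of $A \mapsto e^A$ with a midpoint-rule discretisation of the integral:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via its Taylor series; fine for small-norm
    matrices, not a production implementation."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def frechet_integral(A, H, n_steps=2000):
    """Midpoint-rule approximation of integral_0^1 e^{(1-s)A} H e^{sA} ds."""
    total = np.zeros_like(A, dtype=float)
    for i in range(n_steps):
        s = (i + 0.5) / n_steps
        total += expm((1 - s) * A) @ H @ expm(s * A)
    return total / n_steps

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((3, 3))
H = 0.3 * rng.standard_normal((3, 3))

# Central finite difference of f(A) = e^A in the direction H.
h = 1e-5
fd = (expm(A + h * H) - expm(A - h * H)) / (2 * h)
err = np.max(np.abs(fd - frechet_integral(A, H)))
```

The two $3\times 3$ matrices agree entrywise to well below $10^{-5}$, consistent with the derivation above.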
|matrices|derivatives|matrix-calculus|matrix-exponential|
0
Solve 1-D coupled differential equations analytically
I'm currently going through an article where I came across these two 1-D coupled differential equations. $$\frac{dA}{dz} = a_1B(z)e^{-i\beta z} $$ $$\frac{dB}{dz} = a_2A(z)e^{i\beta z} $$ with these three conditions $$ \frac{d}{dz}(|A(z)|^2 +|B(z)|^2)=0 $$ $$ a_1 =-a_2^* $$ $$ B(0)=B_0, A(0)=0 $$ The article then skips all steps and arrives at these two solutions, subject to the three conditions, $$ A(z)=B_0\frac{2a_1e^{-i\beta z/2}}{\sqrt{4|a_1|^2+\beta^2}}\sin\left(\frac{1}{2}\sqrt{4|a_1|^2+\beta^2}\,z \right) $$ $$ B(z)=B_0e^{i\beta z/2}\left[\cos\left(\frac{1}{2}\sqrt{4|a_1|^2+\beta^2}\,z\right)-i\frac{\beta^2}{\sqrt{4|a_1|^2+\beta^2}}\sin\left(\frac{1}{2}\sqrt{4|a_1|^2+\beta^2}\,z\right)\right] $$ Any idea on how the author arrived at these two expressions? I tried integrating by parts but ended up with $A'(z)$ and $\int{A(z)}$ terms that I can't get rid of. The author also made no mention of trial solutions. Any help is appreciated.
Hint One can eliminate one of the functions via differentiation and substitution to produce an o.d.e. in, say, $A$ alone. Differentiating the equation for $A'$ gives $$ A'' = a_1 e^{-i \beta z} (B' - i \beta B). $$ Substituting for $B'$ using the second equation (in particular using that $a_1 a_2 = a_1(-\overline{a_1}) = -|a_1|^2$ ) and for $B$ using the first equation gives a second-order, linear, constant-coefficient equation, $$A'' + i \beta A' + |a_1|^2 A = 0 ,$$ in $A$ alone. Additional hint We can solve this equation in the usual way: Substitute the ansatz $A(z) := \exp \lambda z,$ which yields the characteristic equation for $\lambda$ : $$\lambda^2 + i \beta \lambda + |a_1|^2 = 0.$$ Can you take it from here?
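The reduction in the hint can be cross-checked numerically. The sketch below (the RK4 integrator and all parameter values are my own choices, not from the article) integrates the first-order system directly, confirms that $|A|^2+|B|^2$ is conserved, and compares against the quoted closed form for $A(z)$ with $\gamma=\sqrt{4|a_1|^2+\beta^2}$:

```python
import cmath

def rk4_step(f, z, y, h):
    """One classical Runge-Kutta step for y' = f(z, y), with y a list."""
    k1 = f(z, y)
    k2 = f(z + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
    k3 = f(z + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
    k4 = f(z + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(a1, beta, B0, z_max, n=5000):
    """Integrate A' = a1 B e^{-i beta z}, B' = a2 A e^{i beta z}
    with a2 = -conj(a1), A(0) = 0, B(0) = B0."""
    a2 = -a1.conjugate()
    def rhs(z, y):
        A, B = y
        return [a1 * B * cmath.exp(-1j * beta * z),
                a2 * A * cmath.exp(1j * beta * z)]
    y, h = [0j, B0], z_max / n
    for i in range(n):
        y = rk4_step(rhs, i * h, y, h)
    return y

# Arbitrary illustrative parameters (not taken from the article):
a1, beta, B0, zm = 0.7 + 0.2j, 1.3, 1.0, 5.0
A, B = integrate(a1, beta, B0, zm)
gamma = (4 * abs(a1)**2 + beta**2) ** 0.5
A_closed = B0 * 2 * a1 / gamma * cmath.exp(-1j * beta * zm / 2) \
           * cmath.sin(gamma * zm / 2)
```

The numerical $A(zm)$ matches the closed form, and $|A|^2 + |B|^2$ stays at $|B_0|^2$, which is exactly the first of the three conditions and follows from $a_1 = -a_2^*$.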
|ordinary-differential-equations|coupling|
0
Measurability of the set of elements who belong to a infinite amount of subsets in a sequence
I've been struggling to prove the following statement: Let $(X, \mathcal{M}, \mu)$ be a finite measure space and let $(A_n)_{n\in\mathbb{N}}$ be a sequence of measurable sets in $X$ . Now consider $M$ , the set of elements of $X$ that belong to infinitely many of the sets $A_n$ . Prove that $M$ is measurable and $$\inf_{n\in\mathbb{N}} \mu(A_n) \leq \mu(M)$$ I have checked for a solution here but I do not understand either the solution given (infinite does not mean "all") or the approach of the person who asked. Thanks in advance.
$x\in A_n$ for infinitely many $n$ is equivalent to saying that for any $m\in\mathbb{N}$ there exists an $n\in\mathbb{N}$ such that $n\geq m$ and $x\in A_n$ , which is further equivalent to the fact that $x\in\cup_{n=m}^{\infty}\,A_n$ for all $m\in\mathbb{N}$ . So we get that your required set of points is $M=\cap_{m=1}^{\infty}\,\cup_{n=m}^{\infty}\,A_n$ . Since a countable union/intersection of measurable sets is measurable, $M$ is measurable. Since $\mu$ is a finite measure we have $\mu(M)=\lim_{m\to\infty}\mu(\cup_{n=m}^{\infty}\,A_n)$ . Again observe that $\mu(\cup_{n=m}^{\infty}A_n)\geq\mu(A_n)$ for all $n\geq m$ , for each fixed $m$ . Again, $\mu(A_n)\geq\inf_n\mu(A_n)$ . Now take the limit over $m$ to achieve the desired inequality.
|real-analysis|measure-theory|proof-writing|lebesgue-measure|outer-measure|
0
Definition of sets and functions
The main problem is that I do not understand what the hell is even being asked in this question. Is anyone familiar with inductive definitions that could help me go through these definitions and understand what the question is about? I think that for the base case of the definition I need to define the variables of type $t$ and the constants also of type $t$ , and that the inductive step will be the functions that take more parameters. Otherwise, I am totally and completely stumped by the formulation of the question.
Let $I = \langle \phi, \theta \rangle$ and $V$ be given as you have stated (but replacing $n$ by $1$ in the type declarations for the functions, which I think is what you meant). There are two other bits of data in your problem that we need names for: $F = \mathrm{dom}(\phi)$ (the set of constant and function names) and $\tau$ , the function $V \to \theta$ that assigns a type to each variable. The set $E_t = E_{I, V, t}$ of type-correct expressions of type $t$ , for a type $t \in \theta$ , is the smallest set of expressions that contains $\{v \in V \mid \tau(v) = t\}$ (the variables of type $t$ ) and $\{c \in F \mid \phi(c) = t\}$ (the constants of type $t$ ), and that is closed under the operation on sets of expressions $X$ that, given $f \in F$ with $\phi(f) = t_1 \times \cdots \times t_k \to t$ and expressions $e_1 \in E_{t_1}, e_2 \in E_{t_2}, \ldots, e_k \in E_{t_k}$ , adds $f(e_1, e_2, \ldots, e_k)$ to $X$ . Because we are having to define the $E_t$ in parallel, a more formal definition would define the $E_{t}$ (at
|logic|induction|natural-numbers|
0
A problem on tangent and secant lines
My question is the following: Assume a point $A$ outside some circle, and a point $X$ on the same circle, such that $AX$ is not a diameter of the circle. Draw $Y$ as the point of intersection between the circle and the (extension of the) line $AX$ . Draw the tangents to the circle in points $X$ and $Y$ , and denote their point of intersection with $Z$ . Also draw the two tangents $AB$ , $AC$ to the circle, where $B$ , $C$ are points on the circle. Prove that $Z$ lies on the extension of the line $BC$ . The approaches I have tried until now involved either analyzing triangle similarities and searching for equal angles (many pairs of angles subtend the same arcs), calculating cross ratios (considering the drawing as a two-point perspective drawing of a circle), establishing a particular origin of linear coordinates and searching vector expressions for the positions of all points, writing all points in polar coordinates, although without any significant progress. Can you please help me for
We're going to use two results on the polar(*) of a point with respect to a circle. (1) if $P$ is exterior to a circle $\mathcal C, $ then the polar of $P$ is $(TT')$ , where $T$ and $T'$ are the "tangent points"; (2) if $P$ and $M $ are two points different from the center of $\mathcal C$ , then $M$ belongs to the polar of $P$ iff $P$ belongs to the polar of $M$ . According to (1), $\color{green}{BC}$ is the polar of $\color{blue}{A}$ and $\color{grey}{XY}$ the polar of $\color{red}{Z}$ . According to (2), as $\color{blue}{A}$ is on $\color{grey}{XY}, \color{red}{Z}$ is on $\color{green}{BC}$ . $\square $ (*) To define the polar of a point $P$ in a circle $\mathcal C$ with center $O$ and radius $R$ , you first determine $P' \in OP$ s.t. $$\overline{OP}\cdot\overline{OP'}=R^2$$ then the polar is the perpendicular to $OP$ passing through $P'$ . Let's prove (1) quickly: Let $P$ be a point outside the circle, $T$ the tangent point, and $H$ the foot of the perpendicular to $OP$ passing through $
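The polar argument can be verified numerically on the unit circle, where the polar of a point $A$ is the line $\{v : v \cdot A = 1\}$. In the sketch below the external point $A$ and the secant direction are arbitrary choices of mine:

```python
import math

def tangent_intersection(X, Y):
    """Intersection Z of the tangents to the unit circle at X and Y.
    The tangent at a point P of the unit circle is {v : v . P = 1},
    so Z solves a 2x2 linear system (Cramer's rule)."""
    (x1, y1), (x2, y2) = X, Y
    det = x1 * y2 - y1 * x2
    return ((y2 - y1) / det, (x1 - x2) / det)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# External point A and a secant through A (coordinates arbitrary).
A = (2.5, 0.7)
t = 0.9
X = (math.cos(t), math.sin(t))
# Second intersection Y of the line through A and X with the circle:
# |A + u (X - A)|^2 = 1 is a quadratic in u with one root u = 1 (giving X),
# so the other root is the product of roots, c/a.
d = (X[0] - A[0], X[1] - A[1])
a = dot(d, d)
c = dot(A, A) - 1.0
u2 = c / a
Y = (A[0] + u2 * d[0], A[1] + u2 * d[1])

Z = tangent_intersection(X, Y)
# Z should lie on the polar of A, which is the line BC = {v : v . A = 1}:
err = abs(dot(Z, A) - 1.0)
```

The assertion $Z \cdot A = 1$ is exactly the statement that $Z$ lies on the polar of $A$, i.e. on the line $BC$.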
|geometry|analytic-geometry|
0
Conditional Probability without replacement and with unique picks
Say I have a bag of 1000 balls. There are 50 uniquely coloured balls, each with known frequency. Every time I pick a ball out, whatever colour it is, all of those coloured balls are then removed from the bag. What is the probability of picking a certain colour ball in 20 picks? Applicability: I work in finance. There is an investment package that allows the customer to choose 20 stocks. We have the data that shows how many customers have this package and the amount of times a certain stock is chosen. For example, say I have 1,000 customers who use this investment package, 550 of them choose NTFLX as one of their 20 stocks. Goal: I am looking for a way to determine the probability of a stock being picked as one of the 20. All of the probabilities are known for each possible stock choice, I suppose the ultimate goal would be to figure out the probability of being chosen algorithmically.
Suppose you have $n$ balls, $m$ colors and you want $k$ picks. In your case, $n=1000$ , $m=50$ and $k=20$ . For each color $i$ , let $x_i$ be its frequency. It is assumed then that the sum of the $x_i$ 's is equal to $n$ . The probability of getting the colors $(c_1, c_2, \ldots, c_k)$ in a pick in that order is equal to $$ \frac{x_{c_1}}{n} \cdot \frac{x_{c_2}}{n-x_{c_1}} \cdots \frac{x_{c_k}}{n-(x_{c_1}+\cdots+x_{c_{k-1}})} $$ The desired probability is then the sum of the probabilities (as above) over all the tuples that contain the fixed color. Without assuming anything else, that's as far as I can get, as the probability of two tuples with the same elements is not necessarily the same. Hope that helps!
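The tuple-sum above can be organised as a recursion over the remaining colours. A small exact sketch (function names are mine; the recursion is exponential in the number of colours, so it is only practical for small instances or as a reference implementation):

```python
from fractions import Fraction

def pick_probability(freqs, target, k):
    """Exact probability that colour `target` appears among the first k picks,
    where picking a colour removes all balls of that colour from the bag.
    `freqs` maps colour -> number of balls of that colour."""
    def go(remaining, picks_left):
        if picks_left == 0 or target not in remaining:
            return Fraction(0)
        n = sum(remaining.values())
        total = Fraction(remaining[target], n)      # pick the target right now
        for c, x in remaining.items():              # or pick some other colour first
            if c != target:
                rest = {d: y for d, y in remaining.items() if d != c}
                total += Fraction(x, n) * go(rest, picks_left - 1)
        return total
    return go(dict(freqs), k)
```

For instance, with two colours of one ball each, the target is picked first with probability $1/2$ and within two picks with probability $1$; with frequencies $\{a\!:\!2, b\!:\!1, c\!:\!1\}$ and $k=2$ one gets $\tfrac12 + 2\cdot\tfrac14\cdot\tfrac23 = \tfrac56$.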
|probability|conditional-probability|
0