Finding Area of $\triangle ABF$ in a Right Triangle with Given Side Lengths and Median Intersection
In right triangle $ABC$ , we have $\angle ACB=90^{\circ}$ , $AC=2$ , and $BC=3$ . Medians $AD$ and $BE$ are drawn to sides $BC$ and $AC$ , respectively. $AD$ and $BE$ intersect at point $F$ . Find the area of $\triangle ABF$ . I aim to use the Shoelace Theorem to calculate the area of $\triangle ABF$ , but I'm stuck on determining the coordinates of point $F$ . Could someone assist me in finding the coordinates of point $F$ so that I can proceed with calculating the area of $\triangle ABF$ using the Shoelace Theorem? Or is there another method altogether?
Point $F$ is called the centroid of the triangle. The medians of $\triangle ABC$ are always concurrent there. Note: centroids are usually denoted with the letter $G$. In the image provided, draw a line containing $F$ perpendicular to $\overline {BC}$, and call the intersection point $G$. Since the centroid divides each median in the ratio $2:1$ from the vertex, $\overline {BG}:\overline {GC}=\frac{2}{3}:\frac{1}{3}$. Taking $C = (0, 0)$, $B = (3, 0)$, and $A = (0, 2)$, the $x$-coordinate of $F$ is $1$ and the $y$-coordinate of $F$ is $\frac{2}{3}$, so $F$ is located at $(1, \frac{2}{3})$.
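As a quick sanity check (a sketch of my own, not part of the answer), the centroid coordinates and the Shoelace area can be verified with exact rational arithmetic, using the placement $C=(0,0)$, $B=(3,0)$, $A=(0,2)$:

```python
from fractions import Fraction as F

# Place C at the origin, so A = (0, 2) and B = (3, 0).
A, B, C = (F(0), F(2)), (F(3), F(0)), (F(0), F(0))

# The centroid is the average of the three vertices.
Fx = (A[0] + B[0] + C[0]) / 3
Fy = (A[1] + B[1] + C[1]) / 3

def shoelace(p, q, r):
    """Triangle area via the Shoelace formula."""
    return abs(p[0] * (q[1] - r[1]) + q[0] * (r[1] - p[1]) + r[0] * (p[1] - q[1])) / 2

area = shoelace(A, B, (Fx, Fy))
```

This confirms $F=(1,\frac{2}{3})$ and an area of $1$ for $\triangle ABF$.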
|geometry|analytic-geometry|
0
Can someone please explain the below and if it means that if x=m/2^n and if m is odd then x can be represented in binary in two ways?
Reference image containing the statement. Can someone please explain the below text and what it means in simple words? From what I understood, it means that if $x=m/2^n$ and $m$ is odd, then $x$ can be represented in binary in two ways? But how is that possible? I tried with $1/2$, but how can $0.5$ be represented as both $0.1$ and $0.0111\dots$?
Note that the $1$ s go on forever. They don't stop after the first three. This is the binary version of the infamous fact that $$ 0.999\cdots = 1 $$ in decimal. (This equation has its own Wikipedia page.) I cannot see what equation $(2)$ is in your book, but here's a definition of binary representations which is likely equivalent: A binary number $(.a_1a_2a_3\cdots)_2$ represents the real number $x$ if $$ \sum_{n = 1}^\infty \frac{a_n}{2^n} = x. $$ For your example, $x = \frac12$ , the $0.1$ representation is short for $0.1000\cdots$ . The infinite sum is then $$ \sum_{n = 1}^\infty \frac{a_n}{2^n} = \frac12 + 0 + 0 + 0 +\cdots = \frac12. $$ But for the other representation, $0.0111\cdots$ , the sum is $$ \sum_{n = 1}^\infty \frac{a_n}{2^n} = 0 + \frac14 + \frac18 + \frac1{16} + \cdots. $$ This is a geometric series which evaluates to $1/2$ , as desired.
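A quick numerical illustration (my own addition; the variable names are arbitrary): truncating $0.0111\dots$ after finitely many bits leaves a gap of exactly one unit in the last place, so the gap halves with every extra $1$-bit and the infinite series converges to $\frac12$:

```python
from fractions import Fraction as F

first = F(1, 2)                                  # 0.1000... is exactly 1/2
second = sum(F(1, 2**i) for i in range(2, 60))   # 0.0111...1, truncated after 59 bits

# Each additional 1-bit halves the remaining gap to 1/2.
gap = first - second
```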
|binary|
1
Help completing proof. Calculating sum of series.
I'm trying to solve exercise 5.11 from "Probability Essentials", which asks to show that if $X$ is Poisson( $\lambda$ ) then $\mathbb{E}|X-\lambda| = \frac{2\lambda^\lambda e^{-\lambda}}{(\lambda -1)!} $ . I've shown that $\mathbb{E}|X-\lambda|$ = ${2\lambda^\lambda e^{-\lambda}} $ $\sum_{k=1}^\infty \left(\frac{k\lambda^k}{(k+\lambda)!}\right) $ but I'm having trouble calculating the sum of this series: $\sum_{k=1}^\infty \left(\frac{k\lambda^k}{(k+\lambda)!}\right) $ . Could somebody help? EDIT: This is what I have done: $\mathbb{E}|X-\lambda|$ = $\sum_{j=0}^\infty \left(\frac{|j-\lambda|\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\lambda \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $\sum_{j=0}^\infty \left(\frac{(\lambda - j)\lambda^je^{-\lambda}}{j!}\right)$ - $\sum_{j=\lambda+1}^\infty \left(\frac{(\lambda-j)\lambda^je^{-\lambda}}{j!}\right) $ + $\sum_{j=\lambda+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right) $ = $2\sum
I show that $$ \color{blue}{\mathbb{E}|X-\lambda| = \frac{2\lambda^{m+1} e^{-\lambda}}{m!} } $$ with $\color{blue}{m=\lfloor \lambda \rfloor},$ which is nothing but the median of the Poisson distribution. When $\lambda$ is an integer, the formula given in the OP is obtained. Unifying the summation makes the problem difficult. Instead, the following method works: $$ \mathbb{E}|X-\lambda| = \sum_{j=0}^\infty \left(\frac{|j-\lambda|\lambda^je^{-\lambda}}{j!}\right)=\sum_{j=0}^m \left(\frac{(\lambda-j)\lambda^je^{-\lambda}}{j!}\right) +\sum_{j=m+1}^\infty \left(\frac{(j-\lambda)\lambda^je^{-\lambda}}{j!}\right)= \\ \sum_{j=0}^m \left(\frac{\lambda\lambda^je^{-\lambda}}{j!}\right)-\sum_{j=1}^m \left(\frac{\lambda^je^{-\lambda}}{(j-1)!}\right) +\sum_{j=m+1}^\infty \left(\frac{\lambda^je^{-\lambda}}{(j-1)!}\right)-\sum_{j=m+1}^\infty \left(\frac{\lambda\lambda^je^{-\lambda}}{j!}\right) = \\ \left [ \sum_{j=0}^m \left(\frac{\lambda\lambda^je^{-\lambda}}{j!}\right)-\color{blue}{\sum_{j=0}^{m-1} \left(\frac{\lambda\la
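As a numerical sanity check (my own sketch; the function names are mine), one can sum the Poisson pmf directly and compare it against the closed form $2\lambda^{m+1}e^{-\lambda}/m!$ with $m=\lfloor\lambda\rfloor$:

```python
import math

def mean_abs_dev(lam, terms=200):
    """E|X - lam| for X ~ Poisson(lam), by direct summation of the pmf."""
    p = math.exp(-lam)  # pmf at j = 0
    total = 0.0
    for j in range(terms):
        total += abs(j - lam) * p
        p *= lam / (j + 1)  # pmf at j + 1 from pmf at j
    return total

def closed_form(lam):
    """2 * lam**(m + 1) * exp(-lam) / m!, with m = floor(lam)."""
    m = math.floor(lam)
    return 2 * lam ** (m + 1) * math.exp(-lam) / math.factorial(m)
```

For integer $\lambda$, e.g. $\lambda=3$, this agrees with the $\frac{2\lambda^\lambda e^{-\lambda}}{(\lambda-1)!}$ form quoted in the question.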
|calculus|probability|sequences-and-series|
1
Can someone please explain the below and if it means that if x=m/2^n and if m is odd then x can be represented in binary in two ways?
Reference image containing the statement. Can someone please explain the below text and what it means in simple words? From what I understood, it means that if $x=m/2^n$ and $m$ is odd, then $x$ can be represented in binary in two ways? But how is that possible? I tried with $1/2$, but how can $0.5$ be represented as both $0.1$ and $0.0111\dots$?
By definition, $x = (.a_1a_2\dots a_n\dots)_2$ iff $$x=\sum_{i=1}^{\infty}a_i2^{-i}$$ If $x = \frac{m}{2^n}$ , for $m$ odd, i.e., $m=2k+1$ , then $x=\frac{2k+1}{2^n}=\frac{k}{2^{n-1}}+\frac{1}{2^n}=(.a_1a_2\dots a_{n-1}1000\dots)_2$ . But note that $x$ is also equal to $(.a_1a_2\dots a_{n-1}0111\dots)_2$ by definition, since \begin{align*} \sum_{i=1}^{\infty}a_i2^{-i} & = a_12^{-1}+\dots+a_{n-1}2^{-(n-1)}+\sum_{i=n+1}^\infty2^{-i}\\ & = \frac{k}{2^{n-1}}+\frac{2^{-n-1}}{1-\frac12}\\ & = \frac{k}{2^{n-1}}+\frac{1}{2^n}\\ & = x \end{align*} so the reason why such reals have two binary expansions is precisely that $\frac12 = 0.1 = 0.0111\dots$ in binary: both series converge to the same number. In fact this occurs in any numerical base $b$ , since $$(0.1)_b=b^{-1}=\sum_{i=2}^\infty (b-1)b^{-i}=\frac{(b-1)b^{-2}}{1-\frac1b}=(0.0(b-1)(b-1)\dots)_b$$ which is why, for example, $1 = 0.999\dots$ in decimal.
|binary|
0
Computing upper probability bound for randomly arrangement of 2n individuals at a round table
A group of 2n individuals consisting of n couples, are randomly arranged at a round table. You are required to find an upper bound for the probability that none of the couples are seated next to each other. Solution: This is a combinatorial problem. Let's denote the total number of ways to arrange 2n individuals around a round table as T, and the number of ways to arrange them such that no couples are seated next to each other as S. The probability that none of the couples are seated next to each other is then given by $P = \displaystyle\frac{S}{T}.$ Total arrangements (T): Since the table is round, we can fix one person and arrange the remaining 2n-1 people. This can be done in (2n-1)! ways. Arrangements with no couples together (S): This is a bit trickier. We can think of each couple as a single entity first. So we have n entities to arrange, which can be done in (n-1)! ways (again, because the table is round). Now, within each couple, we have 2 people that can be arranged in 2! ways
What you have computed as $S= (n-1)!(2!)^n$ actually counts the arrangements in which each couple is glued together as a single unit, i.e. every couple seated side by side — not the arrangements with no couple together. For $k$ chosen couples seated together (the remaining people unrestricted), $\;\;S_k=\binom{n}{k}2^k(2n−1−k)!$ [ Couples to be together are chosen, glued together and flippable. The rest are free to be permuted in the remaining part on an unnumbered circle ] And the probability of no couples together can be computed using inclusion-exclusion as $$Pr = \frac{1}{(2n-1)!}\;\sum_{k=0}^n \left[(-1)^k\cdot\binom{n}{k}\cdot 2^k\cdot (2n-1-k)!\right]$$ PS: This is the upper bound for a given $n$ , because it is the probability that all couples are apart, and you can't have a probability higher than that. PPS: If instead you want to know the highest value $Pr$ might reach as you increase $n$ , I suspect it might be $e^{-1}$ , as gleaned from the table below: $n\quad\quad\; Pr$ $04\quad\quad 0.2952...$ $10\quad\quad 0.3395.
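The inclusion-exclusion sum is easy to check with exact integer arithmetic (a sketch of my own; the function name is mine). For $n=4$ it gives $1488$ favourable arrangements out of $7!=5040$:

```python
from math import comb, factorial

def pr_no_couples_adjacent(n):
    """Probability that no couple sits together among 2n people at a round
    table, via the inclusion-exclusion sum above."""
    favourable = sum((-1) ** k * comb(n, k) * 2 ** k * factorial(2 * n - 1 - k)
                     for k in range(n + 1))
    return favourable / factorial(2 * n - 1)
```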
|probability|combinatorics|upper-lower-bounds|
1
Identifying Probability Distribution
I am looking to calculate the CDF of this PDF and I was wondering if anyone could help me to identify if this is a common PDF function of a well known distribution: $$f(x) = \dfrac12e^{-|x|}$$
It is called the Laplace distribution.
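Integrating each exponential branch of the density gives the CDF in closed form. Here is a small sketch (my own addition, with hypothetical names), checked against a midpoint-rule integral of the density:

```python
import math

def laplace_cdf(x):
    """CDF of f(t) = 0.5 * exp(-|t|): integrate the left branch for x < 0,
    and use the symmetry of the density for x >= 0."""
    if x < 0:
        return 0.5 * math.exp(x)
    return 1.0 - 0.5 * math.exp(-x)
```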
|probability|probability-distributions|
0
Constructing a Free Resolution of a Cokernel Given Free Resolutions of Domains and Codomains
Let's state this question for cokernels first; the case of kernels is easily solved since free modules are projective. Let $0 \to P \to N \to M \to 0$ be a short exact sequence of modules over a ring $R$ . Given free resolutions $0 \to L_n \to L_{n-1} \to \cdots \to L_0 \to N \to 0$ and $0 \to K_m \to K_{m-1} \to \cdots \to K_0 \to P \to 0$ , how does one construct a free resolution of $M$ ? The idea is to fit the given information into an exact diagram, but I cannot prove that the canonical injections $K_n \to L_n$ guaranteed by projectivity of free modules split. Would this work with some extra care, or do I need to follow a different approach?
I am not sure if I understood you correctly. But if I did, the problem is that the inclusions do not split in general. Take the usual example of a non-split SES $$0\rightarrow \Bbb Z \overset{\cdot 2}\rightarrow \Bbb Z \rightarrow \Bbb Z/2 \rightarrow 0$$ and note that the first two factors are already free. Hence their free resolutions turn out to be just $\Bbb Z \overset{=}\rightarrow \Bbb Z$ . But then the natural inclusion is again the map $\Bbb Z \overset{\cdot 2}\rightarrow\Bbb Z$ , which does not split. This is to say, in general you won't be able to construct a free resolution of the quotient as the quotient of the free resolutions (which I think you were trying to do).
|commutative-algebra|homological-algebra|
0
How many homomorphisms are there from $D_5$ to $V_4$?
Question: How many homomorphisms are there from $D_5$ to $V_4$ , where $D_5$ is the dihedral group of order $10$ and $V_4$ the Klein four-group? I've used the fact that since $V_4$ is abelian, the commutator subgroup of $D_5$ is contained in the kernel of any homomorphism. However, I am having trouble determining the order of the commutator subgroup. Thanks in advance :)
Since $2\nmid 5$ , all the four elements of order $5$ must be sent to $1$ . As for the five elements of order $2$ , $\varphi(sr^k)=$ $\varphi(s)\varphi(r^k)=$ $\varphi(s)$ . Therefore, for $V_4=\{1,a,b,ab\}$ : \begin{alignat}{1} &s\mapsto a \\ &s\mapsto b \\ &s\mapsto ab \\ \end{alignat} completely define the only three nontrivial homomorphisms.
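The count of $4$ homomorphisms ($3$ nontrivial plus the trivial one) can be confirmed by brute force over the presentation $D_5=\langle r,s\mid r^5=s^2=1,\ srs^{-1}=r^{-1}\rangle$ (a sketch of my own; $V_4$ is modeled as $\Bbb Z_2\times\Bbb Z_2$):

```python
from itertools import product

# V4 as Z2 x Z2 under componentwise addition mod 2; every element is an involution.
V4 = list(product([0, 1], repeat=2))
e = (0, 0)

def mul(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def power(u, n):
    out = e
    for _ in range(n):
        out = mul(out, u)
    return out

# A homomorphism is determined by the images R, S of the generators r, s,
# subject to the defining relations of D5.
count = sum(1 for R, S in product(V4, repeat=2)
            if power(R, 5) == e and power(S, 2) == e
            and mul(mul(S, R), S) == R)  # s r s^-1 = r^-1; R is its own inverse in V4
```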
|group-theory|finite-groups|group-homomorphism|dihedral-groups|
0
Solve the equation $\left(\frac{1+\sqrt{1-x^2}}{2}\right)^{\sqrt{1-x}} = (\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}}$
Solve in $\mathbb{R}$ : $ \left(\frac{1+\sqrt{1-x^2}}{2}\right)^{\sqrt{1-x}} = (\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}} $ My approach: Let $a = \sqrt{1-x}$ and $b = \sqrt{1+x}$ so $a^2 + b^2 = 2$ . The equation becomes $\left(\frac{1+ab}{2}\right)^a = a^{a+b}$ , which is equivalent to $\left(\frac{1+ab}{a^2+b^2}\right)^a = a^{a+b}$ . After taking the natural logarithm, we get $a \ln(1+ab) - a \ln(a^2+b^2) = a \ln(a) + b \ln(a)$ . I thought of considering a function but I couldn't find it. Any help is appreciated.
Note that $a^2+b^2 = 2 \implies a^2+b^2+2ab = 2(1+ab)$ . Hence $$\left(\frac{1+ab}{2}\right)^a = a^{a+b} \implies \left(\frac{a+b}{2}\right)^{2a} = a^{a+b} \implies \left(\frac{a+b}{2}\right)^{2/(a+b)} = a^{1/a} \tag{*}$$ Claim : $f(x) = x^{1/x}$ is injective for $x \in [0,e]$ (Define $f(0)$ as $\lim_{x \to 0}x^{1/x} = 0$ ). Proof : We show it is strictly increasing in the said interval. $f'(x) = x^{1/x} \left(\frac{1-\ln(x)}{x^2}\right)$ which is non-negative in the said interval (it is zero only at the two points $0$ and $e$ , and otherwise strictly positive). $\blacksquare$ Using the claim on $(*)$ , and noting that $a,\frac{a+b}2 \in [0,e]$ (as $a,b \le \sqrt{2}$ ), we must have $\frac{a+b}2 = a$ , which gives the desired answer $\boxed{x=0}$ .
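A quick numeric cross-check (my own sketch; the function name is mine): $x=0$ solves the equation exactly, and a coarse scan of $(-1,1)$ finds no other root:

```python
import math

def h(x):
    """LHS minus RHS of the original equation, for -1 < x < 1."""
    a = math.sqrt(1 - x)
    b = math.sqrt(1 + x)
    return ((1 + a * b) / 2) ** a - a ** (a + b)
```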
|functions|inequality|logarithms|systems-of-equations|exponential-function|
0
Let $S$ be infinite and $A\subset S$ be finite. Prove that $|S| = |S\setminus A|$
Let $S$ be infinite and $A\subset S$ be finite. Prove that $|S| = |S\setminus A|$ Given Solution Let $A = \{s_1,\ldots s_n\}$ . Since $S$ is infinite the set $S\setminus A$ is non-empty. Pick any $s_{n+1}\in S\setminus A = S\setminus \{s_1,\dots, s_n\}$ . Next, since $S\setminus \{s_1,\dots , s_{n+1}\}\neq\emptyset$ we can choose $s_{n+2}\in S\setminus \{s_1,\ldots, s_{n+1}\}$ . Proceeding by induction we can construct $s_{m+1}\in S\setminus \{s_1,\dots, s_m\}$ for any $m\geq n$ . Now define $f : S\to S\setminus A$ by the formula $f (s_i) = s_{i+n}$ for any $i$ and $f (x) = x$ if $x\in S\setminus \{s_1, s_2,\ldots\}$ . By construction, $f$ is 1-1 and onto. Similar but concrete example I could fully understand $(0,1)\sim [0,1]$ by $$f(x) = \begin{cases} \frac{1}{10} & \text{if } x = 0, \\ \frac{1}{100} & \text{if } x =1.\\ \frac{1}{10^{n+2}} & \text{if } x=\frac{1}{10^n}.\\ x, & \text{otherwise } \end{cases}$$ The subtle difference here is that $[0,1]$ is uncountably infinite, so we can
If you let $S=\mathbb{N}$ and $A=\{1,\ldots, 10\}$ , then we can just let $f:S \to S \setminus A$ be $f(x)=x+10$ . Clearly the function $f$ is bijective. The proof does not make use of countability.
|real-analysis|analysis|elementary-set-theory|proof-explanation|set-theory|
0
Is it correct to call $A^2$ as "the square of the set $A$" and to call $A^n$ as "the set $A$ raised to the $n$th power"?
Let $A$ be a set. The Cartesian product of $A$ with itself is $A^2=A\times A$ . Likewise, the Cartesian product of $A$ with itself $n$ times is $A^n=A\times \cdots \times A$ . Is it correct to call $A^2$ the square of the set $A$ and to call $A^n$ the set $A$ raised to the $n$ th power?
Personally I have not seen the terminology "square" and would use "second (Cartesian) power". The term "(Cartesian) power of a set" is used commonly in some fields; in fact, in category theory there is a whole concept called a power , which generalizes the operation $X^{I} = X \times ... \times X \;(\#I\text{ times})$ on sets. The problem with "power of a set" is of course that it might lead to confusion with the "powerset of a set", but I think this is not a big problem, since in the end $\mathcal{P}(X) = \mathbf{2}^X$ .
|elementary-set-theory|terminology|
1
Finding Area of $\triangle ABF$ in a Right Triangle with Given Side Lengths and Median Intersection
In right triangle $ABC$ , we have $\angle ACB=90^{\circ}$ , $AC=2$ , and $BC=3$ . Medians $AD$ and $BE$ are drawn to sides $BC$ and $AC$ , respectively. $AD$ and $BE$ intersect at point $F$ . Find the area of $\triangle ABF$ . I aim to use the Shoelace Theorem to calculate the area of $\triangle ABF$ , but I'm stuck on determining the coordinates of point $F$ . Could someone assist me in finding the coordinates of point $F$ so that I can proceed with calculating the area of $\triangle ABF$ using the Shoelace Theorem? Or is there another method altogether?
Great question. Please refer to my rough diagram. $$\angle CAB = \angle CAD(=θ_1) + \angle BAD(=θ_2) = \tan ^{-1} \frac{CB}{AC} = \arctan \left(\frac{3}{2}\right) $$ $$θ_1=\arctan \left(\frac{CD}{AC}\right)=\arctan\left(\frac{1.5}{2}\right)$$ So $$θ_2=\angle CAB - θ_1 = 19.4400348282°$$ $$ \angle CBA = \angle CBE(=θ_3) + \angle EBA(=θ_4)= \arctan \frac{AC}{BC}=\arctan \left(\frac{2}{3}\right) $$ $$θ_3 = \arctan \frac{CE}{BC} = \arctan \left(\frac{1}{3}\right)$$ $$θ_4=\angle CBA - θ_3= 15.2551187031°$$ In a triangle, if two angles (= $α_1,α_2$ ) and the length of the side common to them (= $b$ ) are known, then the area of the triangle is $\frac{1}{2}\frac{b^2}{\cot α_1+\cot α_2}$ . By the Pythagorean theorem, $AB=\sqrt{13}\,\mathrm{cm}$ . So we know exactly the required amount of data to apply the formula given above: we know $\angle DAB,\angle EBA$ and $AB$ . Thus Area = $$\frac{AB^2}{2(\cot θ_2+\cot θ_4)}= \frac{13}{13} = 1\ \mathrm{cm}^2 $$ The area formula is easy to prove, and the proof can be left as an exercise to the reader. Also you might not need a calculator by using the
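The angle values and the cotangent area formula can be verified numerically (a check of my own, using the same placement $C=(0,0)$, $A=(0,2)$, $B=(3,0)$):

```python
import math

# D and E are the midpoints of BC and AC respectively.
theta2 = math.atan(3 / 2) - math.atan(1.5 / 2)  # angle DAB
theta4 = math.atan(2 / 3) - math.atan(1 / 3)    # angle EBA
AB_sq = 13                                      # AB^2 by the Pythagorean theorem

# Area = b^2 / (2 (cot a1 + cot a2)), with the common side b = AB.
area = AB_sq / (2 * (1 / math.tan(theta2) + 1 / math.tan(theta4)))
```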
|geometry|analytic-geometry|
1
Solve of $\int \frac{dx}{\left(x^2+9\right)^2}$ with Partial Integration
$$ \int \frac{dx}{\left(x^2+9\right)^2}$$ How would you solve this with partial integration (without trigonometry)?
$$I=\int 1\cdot\frac{1}{x^2+9}\,dx=x\cdot\frac{1}{x^2+9}-\int x\left(-\frac{1}{(x^2+9)^2}\right)(2x)\,dx$$ $$=\frac{x}{x^2+9}+2\int \frac{x^2}{(x^2+9)^2}\,dx$$ $$I=\frac{x}{x^2+9}+2\int \frac{x^2+9-9}{(x^2+9)^2}\,dx=\frac{x}{x^2+9}+2\int \frac{1}{x^2+9}\,dx-18\int \frac{1}{(x^2+9)^2}\,dx$$ $$I=\frac{x}{x^2+9}+2I-18\int \frac{1}{(x^2+9)^2}\,dx$$ $$18\int \frac{1}{(x^2+9)^2}\,dx=\frac{x}{x^2+9}+I=\frac{x}{x^2+9}+\frac{1}{3}\tan^{-1}\frac{x}{3}+C$$ $$\int \frac{1}{(x^2+9)^2}\,dx=\frac{x}{18(x^2+9)}+\frac{1}{54}\tan^{-1}\frac{x}{3}+C$$
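The resulting antiderivative is easy to sanity-check by differentiating numerically (my own sketch; names are mine):

```python
import math

def F(x):
    """Antiderivative obtained above (integration constant omitted)."""
    return x / (18 * (x**2 + 9)) + math.atan(x / 3) / 54

def integrand(x):
    return 1 / (x**2 + 9) ** 2
```

A central finite difference of `F` reproduces the integrand at every sampled point.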
|calculus|integration|
0
Formula for number of $(3,k)$ magic squares
Let $n,k\in\mathbb{N}.$ By a $(n,k)$ magic square, we mean a $n×n$ matrix containing non-negative integer entries such that the sum of entries of any given row or column is $k.$ Note that we don't need the diagonals to add up to $k.$ Prove that the number of $(3,k)$ magic squares is ${{k+4}\choose{4}}+{{k+3}\choose{4}}+{{k+2}\choose{4}}.$ I tried induction on $k,$ but that doesn't seem to work. In an attempt to work backwards, I realised that if we fill the $4$ top left entries of the matrix, then the remaining elements are uniquely determined. Maybe this is where the $4$ in the ${{k+4}\choose{4}}+{{k+3}\choose{4}}+{{k+2}\choose{4}}$ comes from. I didn't get far with this either. Other than this, I've noticed that the number of $(2,k)$ magic squares is $k+1.$ Also, if a $(3,k)$ magic square contains $k$ somewhere, that row and column must be filled with zeroes. Then, the remaining $4$ elements of the matrix form a $(2,k)$ magic square. The formula makes it seems like we may use stars a
Let me begin by invoking some heavy machinery. All this theory can be intimidating, but it will pay off in the end. I will put the part it's okay to skim inside a quote (though I'm not quoting anyone in particular). If we take a $(3,k)$ magic square and divide all its entries by $k$ , we get a doubly stochastic matrix: a nonnegative matrix whose rows and columns all add up to $1$ . Geometrically, we can think of the doubly stochastic matrices as points in $\mathbb R^9$ (where the $9$ entries of the matrix are coordinates), but the row and column sum conditions define an affine subspace of $\mathbb R^9$ which has dimension $4$ . (This is essentially the observation already stated in the question: the $4$ top left entries of a magic square determine the other $5$ .) Additionally, we don't get the whole affine subspace: we carve off a bounded region of the subspace by the $9$ nonnegativity constraints. This $4$ -dimensional polytope in $\mathbb R^9$ is called the Birkhoff polytope
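Before the heavy machinery, the claimed count can be confirmed for small $k$ by brute force over the top-left $2\times2$ block, which determines the rest of the matrix (my own sketch; function names are mine):

```python
from math import comb

def count_magic(k):
    """Count 3x3 nonnegative integer matrices whose rows and columns all sum
    to k. The top-left 2x2 block (a, b, d, e) determines the other entries."""
    count = 0
    for a in range(k + 1):
        for b in range(k + 1 - a):          # keeps c = k - a - b >= 0
            for d in range(k + 1 - a):      # keeps g = k - a - d >= 0
                for e in range(k + 1 - b):
                    rest = (k - d - e,          # f
                            k - b - e,          # h
                            a + b + d + e - k)  # bottom-right entry
                    if min(rest) >= 0:
                        count += 1
    return count

def formula(k):
    return comb(k + 4, 4) + comb(k + 3, 4) + comb(k + 2, 4)
```

For $k=1$ both give $6$, the permutation matrices.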
|combinatorics|contest-math|magic-square|
1
Point-set level object/category
What is the meaning behind the terminology "point-set level (object or category)” in context of stable homotopy theory? This appears e.g. in following excerpt quoted from Tom Bachmann's Thesis Invertible Objects in Motivic Homotopy Theory (p 2): [...] Just as the homotopy category of spaces can be obtained from several “point-set level” categories of spaces, the stable homotopy category SH can be obtained from several categories of “point-set level objects” called spectra by passing to an appropriate equivalence relation on maps, also called weak equivalence.
In the context of homotopy theory the language of $\infty$ -categories is used frequently. Building it up in forms of models founded on the theory of ordinary sets and their points is quite elaborate, but it proves to be really worth it. Since many things can be stated quite easily and elegantly in this language, it is common to skip these point-set details and work in a somewhat axiomatic fashion with this language. From this axiomatic point of view, $\infty$ -categories exist on their own but can be represented by point-set models. Picking up on Bachmann's example, the stable $\infty$ -category of spectra is "just" the stabilization of the $\infty$ -category of anima/spaces/homotopy types. But you can model this $\infty$ -category as the model category of sequential spectra, or as the model category of symmetric spectra, or as the model category of orthogonal spectra...
|algebraic-topology|definition|homotopy-theory|stable-homotopy-theory|
1
How many homomorphisms are there from $D_5$ to $V_4$?
Question: How many homomorphisms are there from $D_5$ to $V_4$ , where $D_5$ is the dihedral group of order $10$ and $V_4$ the Klein four-group? I've used the fact that since $V_4$ is abelian, the commutator subgroup of $D_5$ is contained in the kernel of any homomorphism. However, I am having trouble determining the order of the commutator subgroup. Thanks in advance :)
The commutator subgroup of $D_5$ is $\langle r\rangle \cong \Bbb Z_5.$ As a result it is contained in the kernel (there are two ways to see that: $V_4$ has order $4$ , which is coprime to $5$ , and $V_4$ is abelian), and each homomorphism sends every reflection $$r^ks,\quad 1\le k\le5$$ to one and the same element of order $2$ . So there are $3$ nontrivial homomorphisms, $4$ including the trivial one.
|group-theory|finite-groups|group-homomorphism|dihedral-groups|
0
Is this set uncountable? $A = \{A_n \colon n \in \mathbb{N}\}$ where $A_n$ is the set $\mathbb{N}$ with the number $n$ removed from it
The set in the title is presented in this answer as an example of a similar set to the $P(\mathbb{N})$ (in the context of explaining the necessity of the axiom of choice in the existence of a well ordering on reals), but the definition of it in terms of enumerating the $\mathbb{N}$ gives me the impression it's actually a countable set. (This should actually have been a comment to the original answer but I don't have enough reputation. Feel free to remove if I'm violating any guidelines).
By definition, a set is countable if there exists a bijection between it and the naturals — in other words, a one-to-one function from $A$ onto $\mathbb{N}$ that does not leave out any natural number. In this case, it is easy to find such a function: $$ f: \mathbb{N} \rightarrow A, \ f(n) = A_n $$ It is easy to show that $f$ is one-to-one and that for every $a \in A$ there exists $n_a \in \mathbb{N}$ such that $f(n_a) = a$ . In conclusion, the set $A$ is countable.
|elementary-set-theory|well-orders|
1
Solve the system of equations $x+y+z=1, x y z=1,|x|=|y|=|z|=1, x, y, z \in \mathbb{C}$
Solve the system of equations $x+y+z=1, x y z=1,|x|=|y|=|z|=1, x, y, z \in \mathbb{C}$ I tried with polar representation, letting $x=\exp{(ia)}$ , $y=\exp{(ib)}$ , $z=\exp{(ic)}$ , with this I got $a+b+c=2 \pi n$ , so the three numbers are essentially three points on a unit circle, with their centroid being the point $1/3$ . But I don't know how to proceed.
If $x,y,z$ are distinct, then $x+y+z=1$ means that the orthocenter of the triangle they form is $1$ and lies on the circumcircle (the unit circle). Hence the triangle is right-angled, with the right-angle vertex coinciding with $1$ . So, taking $x=1$ , we have $y+z=0$ and $yz=1$ , so that the other two variables are $i,-i$ .
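A one-line verification (my own addition) that $\{1, i, -i\}$ satisfies all three conditions:

```python
# Check the claimed solution set {1, i, -i} against all three conditions.
sols = (1 + 0j, 1j, -1j)
total = sum(sols)                    # should equal 1
prod = sols[0] * sols[1] * sols[2]   # should equal 1
moduli = [abs(z) for z in sols]      # should all equal 1
```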
|complex-numbers|
0
Optional Stopping Theorem and Stopped $\sigma$-fields
This is a simple exercise needed to prove the Optional Stopping Theorem that I'm working on. Suppose $(X_n)$ is a supermartingale and we have stopping times $T, S$ . Then we already know in general that $\mathbb{E}(X_T|F_S)\leq X_{\min(S,T)}$ by splitting $X_T=X_{\min(S,T)} +\sum_{k=0}^n (X_{k+1}-X_k)1_{S\leq k< T}$ , however I want to show directly, with $T\leq S$ , that $$\mathbb{E}[X_T|F_S]\leq X_T$$ This seems trivial if the expectation was conditioned upon $F_n$ where $n\leq T$ almost surely, but I'm unsure of the proof for a stopped $\sigma$ -algebra. I can do this: $$\mathbb{E}(\mathbb{E}[X_T|F_S]1_{A})\leq \mathbb{E}(X_T 1_A)$$ for any $A \in F_S$ , i.e. $A\cap\{S\leq n\}\in F_n$ for any $n\geq 0$ . This yields, by earlier assumption, that $A\cap \{S\leq n\}\cap \{T\leq S\}\in F_n$ , but can we then assume from there that $A\cap \{T\leq n\}\in F_n$ ? I guess the above is the same as trying to show that $X_T$ is $(F_S)$ -measurable if $T\leq S$ a.s. I would appreciate any brief
For real $x$ you have $$ \{X_T\le x\}\cap\{S\le n\} = \cup_{k=0}^n\{X_k\le x, T=k, S\le n\}. $$ which is clearly $\mathcal F_n$ -measurable, for each $n$ . It follows that $X_T$ is $\mathcal F_S$ -measurable when $T\le S$ . In this case, $\Bbb E[X_T|\mathcal F_S] = X_T$ (ignoring questions of integrability of $X_T$ ).
|probability-theory|stochastic-processes|martingales|stopping-times|
1
Differentiating with respect to a sum
Suppose I have 2 arbitrary functions $f(\theta)$ and $g(\theta)$ , and I need an expression for $\frac{\partial f}{\partial \theta +\partial g}$ . How would I do so? Is the above expression equivalent to $\frac{\partial f}{\partial \theta}+\frac{\partial f}{\partial g}$ ? Any response will be appreciated. Many thanks in advance. Edit: For context of the problem, I was deriving the angle of rotation of a beam element of arclength $ r d\phi$ , undergoing axial displacements of $du_z$ and azimuth displacements of $du_\theta$ . Then, the tangent of overall rotation angle as in this image should be given by $\frac{1}{r}\frac{\partial u_z}{\partial \phi +\partial u_\theta}$ . However, it seems that this form cannot be solved in as a differential equation easily, even if only a numerical approximation is desired. Hence, I was looking to simplify this expression in anyway possible so that I arrive at a form that can be solved as an differential equation.
I am assuming that you have three functions $\phi,v,w$ depending on some variable $\theta$ . (I renamed $u_z$ respectively $u_\theta$ as $v$ respectively $w$ .) The change of angle between $\theta$ and $\theta+t$ can be described via $$\tan\beta(\theta,t)=\frac{v(\theta+t)-v(\theta)}{r(\phi(\theta+t)-\phi(\theta))+r(w(t+\theta)-w(\theta))},$$ i.e. $$\beta(\theta,t)=\arctan\frac{1}{r}\frac{(v(\theta+t)-v(\theta))/t}{((\phi+w)(\theta+t)-(\phi+w)(\theta))/t}.$$ This means that the momentary change of the angle $\alpha$ at $\theta$ is described by $$ \begin{aligned} \alpha'(\theta)=\lim_{t\to 0}\beta(\theta,t) &=\arctan\frac{1}{r}\frac{f'(\theta)}{g'(\theta)} \end{aligned} $$ for the functions $f=v$ and $g=\phi+w$ . Then it holds that $$\alpha(\theta)=\alpha(t)+\int_{t}^\theta\arctan\frac{1}{r}\frac{f'(x)}{g'(x)}dx$$ Notice that $\arctan$ is continuous, so that the limit for $t\rightarrow 0$ can be interchanged. More specifically $$ \begin{aligned} \alpha'(\theta)=\lim_{t\rightarrow 0}\beta(\theta,t)
|derivatives|
0
Complex Integrate $\int_{-\infty}^{\infty}e^{-|\lambda t|}e^{itx}dt$
I'm working through Big Rudin's (Real and Complex Analysis) Fourier Transform chapter, and the following complex integral is part of a discussion on the Inverse Transform that Rudin mentions briefly without resolving, and I'm very curious how this is done. $\int_{-\infty}^{\infty}e^{-|\lambda t|}e^{itx}dt$ What I remember from my Complex Analysis days, is to take the limit of the integral evaluated around a contour from -R to R, and then around $C_R$, the semicircle connecting -R and R. Since this is closed I can use the Cauchy Residue Theorem, and by Jordan's Lemma hopefully the integral around $C_R$ will vanish, so I'm left with $\int_{-\infty}^{\infty}e^{-|\lambda t|}e^{itx}dt=2\pi*\{\text{sum of residues}\}$ However, this integral has no residues, so simply taking the limit of this integral from -R to R should work, but my rusty integration techniques aren't helping and Mathematica is not giving me any results. Any help in solving this is much appreciated!
Use is made of the evenness of the integrand and the Laplace transform: \begin{align} F(x)&=\int_{-\infty}^\infty e^{-|\lambda t|}e^{itx}\,dt\\ &\overset{s=|\lambda|}=2\int_{0}^\infty e^{-s t}\cos(tx)\,dt\\ &=\frac{2s}{s^2+x^2}. \end{align}
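The closed form is easy to confirm numerically (a sketch of my own; the function name and discretization parameters are mine). By symmetry, only the cosine part survives and can be doubled over $[0,T]$:

```python
import math

def fourier_two_sided_exp(x, lam=1.0, T=40.0, n=200_000):
    """Midpoint-rule approximation of the integral of exp(-|lam t|) exp(i t x).
    The tail beyond T decays like exp(-lam T) and is negligible for T = 40."""
    h = T / n
    total = sum(math.exp(-abs(lam) * (k + 0.5) * h) * math.cos((k + 0.5) * h * x)
                for k in range(n))
    return 2 * h * total
```

For $\lambda=1$ the result matches $\frac{2}{1+x^2}$ at every sampled $x$.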
|integration|complex-analysis|fourier-analysis|complex-integration|
0
Center position of an orthogonal rectangle that has a side or corner touching a circumference
I need to find how distant the center of an orthogonal rectangle is from the center of a circle, given a specific angle. The dimensions of the rectangle are proportional to the circle radius, so they use a 0-1 range, where 1 is the radius of the circle. The objective is to get the position of the center of that rectangle, knowing its size, so that when an angle is given one of its corner (or sides) is exactly on the circumference. To clarify, here are three image examples. Here is a rectangle that touches the circle on one of its sides: The same rectangle, with a different angle, now touching the circle at one of its corners: Another rectangle, with the same angle as the last, showing a different position for its center: The above is just an example, the final purpose is to get an arbitrary function , no matter the angle: I'm actually trying to put a given text as close as possible to the circle. The bounding rectangle of the text is known (its size is proportional to the circle radius
$\mathrm{Fig.\space 1}$ shows vertex $A$ of the rectangle $ABCD$ , having measurements $2\lambda r\times 2\mu r$ , touching a circle of radius $r$ , the center of which is located at the origin $O$ of the $xy-$ coordinate system. The center of the rectangle is at $M$ . It is given that the line $OM$ joining the centers of the circle and the rectangle makes an angle $\phi$ with the $x-$ axis. Our aim is to determine the length $OM$ in terms of the known variables $r, \lambda, \mu$ , and $\phi$ . Although OP has stated the condition $0\lt \lambda ,\mu\le 0.5$ in his problem statement, we are going to ignore it because we found that the derived formulae are valid as long as $\lambda ,\mu\gt 0$ . To facilitate the derivation of the sought formula, we have drawn a line parallel to $OM$ through the vertex $A$ to intersect the $y-$ axis and the vertical line $MN$ , which passes through $M$ , at $G$ and $F$ respectively. This makes $\measuredangle FAE = \phi$ . We also drop a perpendicular to the line $
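Independently of the closed-form derivation, the sought distance $OM$ can be computed numerically (a sketch of my own, not the answer's formula): the touching condition is exactly that the distance from $O$ to the axis-aligned rectangle equals $r$, and that distance is monotone in $d=|OM|$, so bisection handles both the side-contact and corner-contact cases uniformly.

```python
import math

def center_distance(phi, lam, mu, r=1.0, tol=1e-12):
    """Distance d = |OM| with M = d (cos phi, sin phi), such that the
    axis-aligned rectangle centered at M with half-sizes lam*r, mu*r
    just touches the circle of radius r about the origin O."""
    cx, cy = math.cos(phi), math.sin(phi)

    def dist_to_rect(d):
        # Distance from the origin to the rectangle centered at d*(cx, cy).
        dx = max(abs(d * cx) - lam * r, 0.0)
        dy = max(abs(d * cy) - mu * r, 0.0)
        return math.hypot(dx, dy)

    lo, hi = 0.0, (3 + lam + mu) * r  # at hi the rectangle is clear of the circle
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dist_to_rect(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, at $\phi=0$ the near side touches, giving $d = r + \lambda r$; at $\phi=\pi/4$ with $\lambda=\mu$ the corner touches, giving $d = r + \lambda r\sqrt2$.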
|geometry|trigonometry|circles|rectangles|collision-detection|
1
Is $(f-g)(x)$ concave when $f,g$ are positive valued concave functions and $f\ge g$.
Let $f$ be a positive-valued concave function and $g$ another positive-valued concave function, with $f:\mathbb{R}_+ \mapsto \mathbb{R}_+$ and $g:\mathbb{R}_+ \mapsto \mathbb{R}_+$. Additionally, $f\ge g$ over the whole domain $\mathbb{R}_+$. Is their difference $f-g$ a concave function too? This question is different from this one.
I was trying to find out necessary conditions for it to be true. I was wondering about $f>g$ . In fact, a slight modification of the above counter-example is sufficient to get another counter-example for the case $f>g$ . I came up with another counter-example. In fact the difference between two concave functions such that $f>g$ can be convex. Let $f$ be defined over $\mathbb{R}$ , $$ f(x)= \left\{ \begin{array}{l} -\vert x \vert^3 \quad\text{if}\quad x\in [-0.25,0.25] \\ -3/16\vert x\vert +1/32 \quad\text{else}, \end{array} \right. $$ and $g(x)= -x^2$ . Then $f$ and $g$ are concave and $f>g$ but $h = f-g$ is convex. You can see an illustration below. I have also illustrated the counter-examples given by Erik M and GLay.
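The counter-example can be checked numerically via discrete second differences (my own sketch; names are mine). Note that as written $f(0)=g(0)=0$, so the check below tests $f\ge g$, with equality only at $0$:

```python
def f(x):
    """Concave: -|x|^3 near 0, continued linearly with matching slope at 0.25."""
    return -abs(x) ** 3 if abs(x) <= 0.25 else -3 / 16 * abs(x) + 1 / 32

def g(x):
    return -x ** 2

def second_diff(fn, x, h=0.01):
    """Discrete second difference: <= 0 on a grid indicates concavity,
    >= 0 indicates convexity."""
    return fn(x - h) - 2 * fn(x) + fn(x + h)
```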
|convex-analysis|
0
Hilbert space has countable basis
Let $H$ be a Hilbert space. I want to show that if $H$ has a countable orthonormal basis, then every orthonormal basis for $H$ must be countable. I spent almost a day but could not solve this problem. I am trying to solve it by some kind of contradiction: I started by assuming that there exists an uncountable basis, and then I want to derive something crazy, but I cannot find it. Could you give me some hints or suggestions?
Suppose $H$ has a countable dense subset, $\{c_n\}_{n \in \Bbb{N}} \subset H$ (a countable orthonormal basis yields one: take finite linear combinations with rational coefficients). Let $\{u_a\}_{a \in A}$ be orthonormal, and assume towards a contradiction that $A$ is uncountable. Then for any $a \neq b \in A$ , \begin{align} \vert \vert u_a - u_b \vert \vert^2&=\langle u_a-u_b, u_a-u_b \rangle \\ &=\langle u_a,u_a \rangle+ \langle u_b,u_b \rangle-2 \text{Re}(\langle u_a,u_b \rangle)\\ &=\vert \vert u_a \vert \vert^2 + \vert \vert u_b \vert \vert^2\\ &=2. \end{align} So any two distinct orthonormal elements differ by a distance of $\sqrt{2}$ in the norm, and hence the balls of radius $\frac{1}{2}$ centered at the $u_a$ are pairwise disjoint by the triangle inequality. By density, each ball $B_{\frac{1}{2}}(u_a)$ must contain some $c_n$ , and distinct balls contain distinct $c_n$ ; this gives an injection from the uncountable set $A$ into $\Bbb{N}$ , a contradiction.
|real-analysis|linear-algebra|functional-analysis|hilbert-spaces|
0
Given $f(x) = \frac{x+1}{x^2+2 \sqrt{x}}$, How to prove $\lim_{x \rightarrow 4} f(x)$ using $\varepsilon$-$\delta$ definition?
I was wondering how I could use the limit of a function definition to prove that $\lim_{x \rightarrow 4} f(x) = 1/4$ , where $f$ is defined as $$f(x) = \frac{x+1}{x^2+2 \sqrt{x}}.$$ Here's some work so far. $$\left| \frac{x+1}{x^2+2 \sqrt{x}} - \frac{1}{4} \right| \leq \frac{\left| x+1 \right|}{ \left| x^2 + 2 \sqrt{x}\right|} + \frac{1}{4} = \frac{\left|x-4\right| + 3}{\left|x^2 + 2 \sqrt x \right|} + \frac{1}{4}$$ Suppose we let $\delta_1 = 5$ , so that $1 and $3 . Can I say that $$ and for any $\varepsilon$ , choose $\delta = \min ( 5, (12 \varepsilon - 15)/4 )$ to satisfy $0 < |x-4| < \delta$ and $|f(x)-1/4| < \varepsilon$ ? Also, should I mention that $\delta$ should be chosen so that both it and $\varepsilon$ are greater than $0$ ? Thanks in advance.
You have \begin{align}|f(x)-f(4)|&=\left|\frac{x+1}{x^2+2\sqrt x}-\frac14\right|\\&=\frac{\left|4-2\sqrt x+4x-x^2\right|}{4(x^2+2\sqrt x)}\\&=\frac{\left|4-2\sqrt x+4x-16-x^2+16\right|}{4(x^2+2\sqrt x)}\\&\leqslant\frac{2\left|\sqrt x-2\right|+4|x-4|+|x^2-16|}{4(x^2+2\sqrt x)}\\&=\frac{2\frac{|x-4|}{\sqrt x+2}+4|x-4|+|x+4||x-4|}{4(x^2+2\sqrt x)}\\&=\frac{2\frac1{\sqrt x+2}+4+|x+4|}{4(x^2+2\sqrt x)}|x-4|.\end{align} Now, suppose that $|x-4|<3$ . Then $x>1$ , and therefore $$x^2+2\sqrt x>3\quad\text{and}\quad\sqrt x+2>3.$$ On the other hand, $x<7$ , and therefore $|x+4|<11$ . So, $$\frac{2\frac1{\sqrt x+2}+4+|x+4|}{4(x^2+2\sqrt x)}<\frac{\frac23+4+11}{12}<2.$$ Therefore, given $\varepsilon>0$ , if you take $\delta=\min\left\{3,\frac{\varepsilon}2\right\}$ , then, when $|x-4|<\delta$ , you have \begin{align}|f(x)-f(4)|&\leqslant\frac{2\frac1{\sqrt x+2}+4+|x+4|}{4(x^2+2\sqrt x)}|x-4|\\&<2|x-4|<2\cdot\frac{\varepsilon}2=\varepsilon.\end{align}
|real-analysis|limits|epsilon-delta|
1
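The answer's final estimate, $|f(x)-\frac14|\le 2|x-4|$ for $|x-4|<3$, can be sanity-checked numerically. A quick sketch (grid and tolerance are arbitrary choices):

```python
import math

def f(x):
    return (x + 1) / (x * x + 2 * math.sqrt(x))

# sample x in (1, 7), i.e. |x - 4| < 3, and test |f(x) - 1/4| <= 2|x - 4|
xs = [1 + 6 * k / 10000 for k in range(1, 10000)]
bound_holds = all(abs(f(x) - 0.25) <= 2 * abs(x - 4) + 1e-12 for x in xs)
```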
Solve the following recurrence relation: $f(N,d) = p*f(N-1,d-1) + (1-p)*f(N-1,d+1)$ subject to constraints in the body.
The constraints are $f(0,0)=1$ , $f(0,k)=0\ \forall k \neq 0$ , $f(N,k)=0\ \forall k>N$ , and $0\leq p\leq 1$ . When working on a probability problem, I came across this recursion when working with random walks. For symmetric random walks, with $p=\frac{1}{2}$ , I found a closed form solution of $2^{-N}\binom{N}{\lfloor k/2 \rfloor}$ . To do so, I wrote out the first few terms, noticed a pattern, then used induction. I cannot find a nice pattern writing out the first few terms of the general form with $p$ . My gut instinct is that I will get something with binomial coefficients, since this recurrence relation looks an awful lot like the recurrence relation for binomial coefficients, and the solution for $p=\frac{1}{2}$ has binomial coefficients. Does anyone have any insights or suggestions? I also will add, I tried turning this into a PDE and I tried some generating function approaches, but the initial/boundary conditions have given me difficulty.
$f(n,d)$ is the probability that, when you repeat $n$ independent events that have a probability of success of $p$ , and a probability of failure of $1-p$ , that the number of successes minus the number of failures is $d$ . Let $s$ be the number of successes and $f$ be the number of failures. Since $s+f=n$ and $s-f=d$ , it follows that $s=(n+d)/2$ . Therefore, we need $(n+d)/2$ successes out of $n$ independent events with probability $p$ . This is answered by a binomial distribution. Therefore, $$ f(n,d)=\binom{n}{(n+d)/2}p^{(n+d)/2}(1-p)^{(n-d)/2}\;. $$ However, there is an exception. If $n$ and $d$ have opposite parities, then it is impossible for the difference between the number of successes and failures to be $d$ . In this case, $f(n,d)=0$ .
|combinatorics|recurrence-relations|binomial-coefficients|recursion|
1
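The closed form in the answer can be checked against the recurrence for small $n$. A sketch, with a hypothetical value of $p$ (any $0\le p\le 1$ should behave the same):

```python
from functools import lru_cache
from math import comb

P = 0.3  # hypothetical choice of p

@lru_cache(maxsize=None)
def f_rec(n, d):
    # recurrence from the question, with f(0,0)=1 and f(0,k)=0 otherwise
    if n == 0:
        return 1.0 if d == 0 else 0.0
    return P * f_rec(n - 1, d - 1) + (1 - P) * f_rec(n - 1, d + 1)

def f_closed(n, d):
    # binomial formula from the answer; zero when n and d differ in parity
    if (n + d) % 2 != 0 or abs(d) > n:
        return 0.0
    s = (n + d) // 2  # number of successes
    return comb(n, s) * P ** s * (1 - P) ** (n - s)

agree = all(
    abs(f_rec(n, d) - f_closed(n, d)) < 1e-12
    for n in range(10)
    for d in range(-n, n + 1)
)
```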
Analysis Question: Projection.
Let $S$ be the sphere $x^2 + y^2 + z^2 = 1$ . For every point $\vec{p}=(x,y,z) \in S$ , consider the line $L_{\vec{p}}$ that passes through the pole $(0,0,1) \in S$ and through the point $\vec{p}$ . Let $\vec{q}$ be the intersection of $L_{\vec{p}}$ and the plane $XY$ . The function $f$ that takes $\vec{p}$ to $\vec{q}$ is called the stereographic projection. If $(u,v) = f(x,y,z)$ , then what is the value of $(u,v)$ ? I really don't get it; it has been hard even to interpret the question, to be honest. The solution is: $(u,v) = (\frac{x}{1-z}, \frac{y}{1-z})$ .
The line that passes through $(0,0,1)$ and $\mathbf p=(x,y,z)$ in parametric form is $$L_{\mathbf p}=\left\{\begin{align} &x'=xt\\ &y'=yt\\ &z'=1+\left(z-1\right)t \end{align}\right.,\ t\in\mathbf R$$ Imposing $z'=0$ we'll find $t$ when the line passes through the $XY$ plane, i.e., $$z'=1+\left(z-1\right)t=0\iff t=\frac{1}{1-z}$$ Evaluating $L_{\mathbf p}$ at that $t$ we get $$L_{\mathbf p}\left(t=\frac{1}{1-z}\right)=\left\{\begin{align} &x'=\frac{x}{1-z}\\ &y'=\frac{y}{1-z}\\ &z'=0 \end{align}\right\}= \begin{pmatrix} u\\ v\\ 0 \end{pmatrix}$$
|real-analysis|stereographic-projections|
1
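The formula can be verified by checking that the computed image point actually lies on the line through the pole. A sketch with a hypothetical sample point on the sphere:

```python
import math

def stereo(x, y, z):
    # intersection of the line through (0,0,1) and (x,y,z) with the plane z = 0
    t = 1 / (1 - z)  # parameter at which z' = 1 + (z - 1)t vanishes
    return (x * t, y * t)

# hypothetical point on the unit sphere (away from the north pole)
theta, phi = 0.7, 1.2
p = (math.sin(theta) * math.cos(phi),
     math.sin(theta) * math.sin(phi),
     math.cos(theta))
u, v = stereo(*p)

# directions from the pole (0,0,1) to p and to (u, v, 0) must be parallel
d1 = (p[0], p[1], p[2] - 1)
d2 = (u, v, -1.0)
cross = (d1[1] * d2[2] - d1[2] * d2[1],
         d1[2] * d2[0] - d1[0] * d2[2],
         d1[0] * d2[1] - d1[1] * d2[0])
collinear = all(abs(c) < 1e-12 for c in cross)
```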
maximum value of complex numbers
Question - If $z_0,z$ are complex numbers s.t. $|z-i|\leq2$ and $z_0=5+3i$ , find $|iz+z_0|_{max}$ . My Approach - by the Triangle Inequality, we know $|z_1+z_2|\leq|z_1|+|z_2|\implies|z_1+z_2|_{max}=|z_1|+|z_2|$ . Given $z_0=5+3i$ , we get $|z_0+iz|_{max}=|z_0|+|iz|=\sqrt{5^2 + 3^2}+|iz|=\sqrt{34} + |z|$ . Now, it's given that $|z-i|\leq2\implies|z-i|_{max}=|z+(-i)|_{max}=2\implies|z|+|-i|=2\implies|z|=1$ . From the above, we can write $|z_0+iz|_{max}=\sqrt{34}+|z|=\sqrt{34}+1$ . However, my teacher says the correct answer is $7$ . Can somebody help me!
If $|z - i| \le 2$ , then $z = 3i$ certainly satisfies this inequality, since $|3i - i| = |2i| = 2$ , yet $|3i| = 3 > 1$ . So you clearly have an error. Where is it? Well, the triangle inequality does say that $$2 = |z-i| = |z+(-i)| \color{red}{\le} |z| + |-i| = |z| + 1. \tag{1}$$ But the problem here is that you write $$|z - i| = 2 \implies |z| + 1 = 2,$$ when instead the inequality $(1)$ clearly is saying $$2 \color{red}{\le} |z| + 1,$$ or $$|z| \color{red}{\ge} 1.$$ A better approach would be to let $w = iz+z_0$ and note that if $|z-i| \le 2$ , then $$\begin{align} 2 &\ge |z-i| \\ &= |i||z-i| \\ &= |iz-i^2| \\ &= |iz+1| \\ &= |iz+(5+3i)-(4+3i)| \\ &= |iz+z_0 - (4+3i)| \\ &= |w - (4+3i)|. \tag{2} \end{align}$$ We want to find, for all $w$ satisfying $(2)$ , the one with maximum $|w|$ . Geometrically, what is $(2)$ ? It is the disk in the complex plane with radius $2$ and center $4+3i$ . The point in this disk that is furthest away from the origin, and thus has the largest magnitude, lies at distance $|4+3i|+2 = 5+2 = 7$ from the origin, so $|iz+z_0|_{max} = 7$ .
|complex-numbers|absolute-value|
1
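The value $7$ can be confirmed numerically by sampling the boundary circle $|z-i|=2$, where the maximum is attained. A quick sketch (sample count arbitrary):

```python
import cmath
import math

z0 = 5 + 3j
n = 20000
# z = i + 2*exp(i*theta) runs over the boundary circle |z - i| = 2
best = max(
    abs(1j * (1j + 2 * cmath.exp(2j * math.pi * k / n)) + z0)
    for k in range(n)
)
# best should be very close to |4 + 3i| + 2 = 7
```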
Angle of attack between surface and vector
I'm trying to make a simplified sailing ship simulation and want to get the angle-of-attack between a surface normal vector (representing a sail) and a vector (representing the wind velocity). The result should preferably be in the range of 0-180 deg. I can calculate the angle between the surface normal and the wind vector, but that is always the positive angle between the two, and I need to differentiate between the angle-of-attacks according to the illustration below. For example, I need to figure out the difference between situation 2 and 4 (since one is generating positive lift and the other negative lift). Both give a 45 deg angle between the two vectors, but I need to differentiate the two: the former should be 45 deg and the latter should be 135 deg (see the desired output written in orange next to each situation). Just for some more context: I model in 3D space, and the 2D illustration is just to give a general idea of my problem. One could picture the examples in the illustrati
Assuming the sail is a flat plane as shown in the figures (it isn't, but let's ignore that problem for a moment), you have two choices for the normal vector of that plane. Whatever choice you made when you initially modeled the plane, you can always find the other choice of normal vector by simply reversing the modeled normal vector (multiply by $-1$ ). If you want to have a vector representing the "up" direction (the direction of positive lift provided by the sail) for the interaction between the flat-plate sail and the wind, you can choose the normal vector that has a positive dot product with the wind vector. In other words, you can easily change examples 6, 7, and 8 in your diagram to examples 2, 3, and 4 respectively. This gives good results because (for example) the angle you want to find in example 2 is the same as the angle in example 6. If the chosen normal vector is $\hat{\mathbf n}$ and the wind vector is $\mathbf w$ , project $\hat{\mathbf n}$ onto $\mathbf w$ and subtract
|linear-algebra|vectors|3d|
1
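The normal-flip step described in the answer can be sketched as follows. The vectors are hypothetical, and the point is only that both modeled orientations of the same sail plane yield the same angle to the wind after the flip:

```python
import math

def align_normal(normal, wind):
    # pick the orientation of the plane normal with non-negative dot product
    dot = sum(n * w for n, w in zip(normal, wind))
    return normal if dot >= 0 else tuple(-c for c in normal)

def angle_deg(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

wind = (1.0, 0.0, 0.0)                      # wind along +x
n_up = (math.cos(math.radians(45)), math.sin(math.radians(45)), 0.0)
n_down = tuple(-c for c in n_up)            # same plane, reversed normal
a1 = angle_deg(align_normal(n_up, wind), wind)
a2 = angle_deg(align_normal(n_down, wind), wind)
# a1 == a2: the modeled orientation of the normal no longer matters
```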
Analysis Question: Projection.
Let $S$ be the sphere $x^2 + y^2 + z^2 = 1$ . For every point $\vec{p}=(x,y,z) \in S$ , consider the line $L_{\vec{p}}$ that passes through the pole $(0,0,1) \in S$ and through the point $\vec{p}$ . Let $\vec{q}$ be the intersection of $L_{\vec{p}}$ and the plane $XY$ . The function $f$ that takes $\vec{p}$ to $\vec{q}$ is called the stereographic projection. If $(u,v) = f(x,y,z)$ , then what is the value of $(u,v)$ ? I really don't get it; it has been hard even to interpret the question, to be honest. The solution is: $(u,v) = (\frac{x}{1-z}, \frac{y}{1-z})$ .
Before studying stereographic projection in space, it is better to study its analogue in the plane; it is then easy to go back to 3D-space, and this moreover recovers the answer of @Joan S. Guillamet. We call $I=(0,1)$ the pole, and we take a point $P=(x,y)\in S=\{(x,y)\in \mathbb R^2:x^2+y^2=1\}$ . First, we look at the line $l=IP$ : $M=(\xi,\eta)\in l\iff \vec {IM}\parallel \vec{IP}$ , where $\vec {IM}\begin{pmatrix}\xi-0 \\\eta-1\end{pmatrix},\overrightarrow{IP}\begin{pmatrix}x \\y-1\end{pmatrix}$ . Then $M\in l\iff \begin{vmatrix}\xi & x \\\eta-1 & y-1\end{vmatrix}=0\iff\xi(y-1)-x(\eta-1)=0$ . Now let's apply the condition that the image point $Q$ lies on the $x$ -axis, i.e. $\eta=0$ . We have then $$\begin{cases}\xi(y-1)-x(\eta-1)=0 \\ \eta=0 \end{cases}$$ So, $$\begin{cases}\xi(1-y)=x \\ \eta=0 \end{cases}$$ and finally $$\begin{cases}\xi=\frac{x}{1-y} \\ \eta=0 \end{cases}$$ Otherwise written with the notation $Q=(u,v)$ , $$\begin{cases}u=\frac{x}{1-y} \\ v=0 \end{cases}$$
|real-analysis|stereographic-projections|
0
For polynomials $p,q$ of the same degree, find $\lim_{x\to+\infty}\left(\frac{p(x)}{q(x)}\right)^x$ without using l'Hôpital's rule
How to use the result $\lim_{x\to+\infty}\left(1+\frac{1}{x}\right)^x=e$ to find $\lim_{x\to+\infty}\left(\frac{x^2-2x-3}{x^2-3x-2}\right)^x$ ? I rewrite the fraction as $1+\frac{x-1}{x^2-3x-2}$ in the hope of using the result about $e$ , but then I have no clues. I wanna know if there's a way that doesn't apply l'Hôpital's rule.
In general $$(1+f(x))^x=\left(1+f(x)\right)^{\frac{xf(x)}{f(x)}}=\left[(1+f(x))^{\frac{1}{f(x)}}\right]^{xf(x)}$$ so if $(1+f(x))^x$ has the indeterminate form $1^{\infty}$ , the limit is equal to $e^{\lim_{x\to\infty}xf(x)}$ . In this particular case $f(x)=\frac{x-1}{x^2-3x-2}$ , so $$\lim_{x\to\infty}x\cdot\frac{x-1}{x^2-3x-2}=1$$ and the limit is $e$ .
|limits|limits-without-lhopital|
1
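Numerically the claim checks out: the expression approaches $e^1=e$. A quick sketch (the evaluation point and tolerance are arbitrary):

```python
import math

def g(x):
    return ((x * x - 2 * x - 3) / (x * x - 3 * x - 2)) ** x

val = g(1e6)  # should be close to e, since lim x*f(x) = 1
```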
How do I remove the specific assumptions in the proof of the rank theorem in the special case of injective differential?
I have the following proof in my notes. It is a particular case of the more general rank theorem. Theorem (Rank theorem for injective differential). Suppose $M$ is a smooth manifold of dimension $m$ , and that $N$ is a smooth manifold of dimension $n$ . Suppose $F : M \to N$ is smooth. Let $p \in M$ . If $dF_p$ is injective, then there are charts $(U, \varphi)$ of $M$ around $p$ and $(V,\psi )$ of $N$ around $F(p)$ such that $F(U) \subseteq V$ and, for all $x \in\varphi(U)$ , $\psi\circ F \circ \varphi^{−1}(x) = (x, 0_{n−m})$ . Proof. We prove the theorem in the case that $M$ is an open subset of $\Bbb R^m$ , $N$ an open subset of $\Bbb R^n$ , $p = 0_m$ and $F(p) = 0_n$ ; by using appropriate charts around $p$ and $F(p)$ , we can prove the general case. Suppose $dF_p$ is injective. Then $m \le n$ . Define $Q: M \to \Bbb R^m$ and $R: M \to \Bbb R^{n−m}$ by $F(x) = (Q(x), R(x))$ for $x \in M$ . Since $DF(p)$ is injective, its matrix has an $m × m$ invertible submatrix. We can do this by possibly exchang
The proof is awkwardly written and obscures the central ideas. We assume that we're in Euclidean space and the derivative at $p=0$ is $\begin{bmatrix} I\\0\end{bmatrix}$ . Consider the map $g(x,y)=F(x)+(0,y)$ . The inverse function theorem gives a local inverse $\tau$ for $g$ . Then look at $(\tau\circ g)(x,0)=(\tau\circ F)(x)$ .
|linear-algebra|analysis|differential-geometry|smooth-manifolds|inverse-function-theorem|
0
How to simplify a combination inside summation
Is it possible to simplify the following expression, where $k$ is a given constant? I want to simplify it to something in terms of only $k$ : $ \sum_{i=0}^{\lfloor \frac{k}{2} \rfloor} 2^{k} \binom{k-i}{i}$ . I was confused about how to deal with the $i$ in the combination.
I'm not sure if you meant to write $2^i$ instead of $2^k$ , because otherwise you can just take the $2^k$ out of the summation. For the latter case, we have $$\sum\limits_{i=0}^{\lfloor\frac{k}{2}\rfloor}2^k\begin{pmatrix}k-i\\i\end{pmatrix} = 2^k\cdot\sum\limits_{i=0}^{\lfloor\frac{k}{2}\rfloor}\begin{pmatrix}k-i\\i\end{pmatrix}$$ Let $a_k$ denote the summation on the right hand side, then we have \begin{align*} a_k &= \sum\limits_{i=0}^{\lfloor\frac{k}{2}\rfloor}\begin{pmatrix}k-i\\i\end{pmatrix} = \sum\limits_{i=0}^{\lfloor\frac{k}{2}\rfloor}\left(\begin{pmatrix}k-i-1\\i-1\end{pmatrix}+\begin{pmatrix}k-i-1\\i\end{pmatrix}\right)\\ &= \sum\limits_{i=0}^{\lfloor\frac{k}{2}\rfloor}\begin{pmatrix}k-i-1\\i-1\end{pmatrix} + \sum\limits_{i=0}^{\lfloor\frac{k}{2}\rfloor}\begin{pmatrix}k-i-1\\i\end{pmatrix}\\ &= \sum\limits_{i=0}^{\lfloor\frac{k}{2}\rfloor-1}\begin{pmatrix}k-2-i\\i\end{pmatrix} + \sum\limits_{i=0}^{\lfloor\frac{k-1}{2}\rfloor}\begin{pmatrix}k-1-i\\i\end{pmatrix}\\ &= a_{k-2}+a_{k-1}. \end{align*} This is the Fibonacci recurrence; with $a_0=a_1=1$ we get $a_k=F_{k+1}$ , so the sum equals $2^kF_{k+1}$ .
|combinations|
0
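The Fibonacci-type behavior of the diagonal sums in the answer above can be confirmed directly. A sketch:

```python
from math import comb

def a(k):
    # diagonal sum of Pascal's triangle, as in the answer
    return sum(comb(k - i, i) for i in range(k // 2 + 1))

vals = [a(k) for k in range(12)]
# starts 1, 1, 2, 3, 5, ... and satisfies the Fibonacci recurrence
is_fib = all(vals[k] == vals[k - 1] + vals[k - 2] for k in range(2, 12))
```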
Prove that removing an open 2-cell from $S^2$ results in a contractible space
Let $X$ be a cellular decomposition of $S^2$ . I want to show that if $r\in X^{(2)}$ then $X\setminus \text{Int}(r)$ is a contractible space. I don't know much topology, so I don't know if this is completely trivial or requires a bit of work. I can see why it should be true but I cannot rigorously prove it.
Note that $S^2 \setminus \operatorname{Int}(r)$ is a deformation retract of $S^2 \setminus \{x\}$ where $x \in \operatorname{Int}(r)$ is any point, as a consequence of the fact that $D^2 \setminus \{y\}$ where $y$ is any interior point of $D^2$ deformation retracts onto the boundary $S^1 \subset D^2$ . But $S^2 \setminus \{x\}$ is contractible since it is homeomorphic to $\mathbb{R}^2$ , hence so is $X \setminus \operatorname{Int}(r)$ .
|general-topology|algebraic-topology|cw-complexes|
1
What it means for a differential form to be on a manifold and how to "find" it
In most of the cases I have studied, I haven't seen what it means for a form "to be" on the manifold. We usually let $M$ be a manifold, and we let $\omega\in \bigwedge ^kT^*_pM$ in the exterior algebra of the dual tangent space; we then say $\omega$ is of the form $\sum_{I}f_Idx^I$ ( $I$ a multi-index) and we don't give it too much meaning. But all this is abstract and very general. How can we make sense of a form being on a manifold, apart from the obvious, i.e. saying that $\omega$ "takes values" from the tangent space of the manifold? Can we say anything about the coefficients (functions) of the form? I have been trying to answer this by looking at the example of the sphere $S^2$ , trying to deduce some information about the functions of the form by looking at some evaluations of vector fields on the sphere. So I think a nice answer to this question would be to give an example of a $2$ -form on the sphere. Any example of your choice that answers this will do. But I always take the sphere as a
A quote from Lee's Smooth Manifolds second edition page 360. In any smooth chart, a k-form can be written locally as $$\omega = \sum_I\omega_I dx^{i_1}\land \dots \land dx^{i_k}$$ where the coefficients $\omega_I$ are continuous functions defined on the coordinate domain. Proposition $10.22$ shows that $\omega$ is smooth on $U$ if and only if the coordinate functions $\omega_I$ are smooth. So the answer to your question is that the coordinate functions must be continuous to be a k-form, and smooth to be a smooth k-form. Here is an example in $\mathbb{R}^3$ : A 0-form is a continuous real valued function. A 1-form is a co-vector field. Some examples of 2-forms are $$\omega = (\sin xy)dy\land dx$$ which is smooth because $\sin xy$ is smooth and $$\eta = dx\land dy + dx \land dz + dy \land dz$$ Every 3-form on $\mathbb{R}^3$ is a continuous real-valued function times $dx\land dy\land dz$ In conclusion the thing that we can say about the coefficients is that they are (smooth) continuous re
|differential-geometry|riemannian-geometry|differential-forms|
0
Understanding The Math Behind Elchanan Mossel’s Dice Paradox
So earlier today I came across Elchanan Mossel's Dice Paradox , and I am having some trouble understanding the solution. The question is as follows: You throw a fair six-sided die until you get 6. What is the expected number of throws (including the throw giving 6) conditioned on the event that all throws gave even numbers? Quoted from Jimmy Jin in "Elchanan Mossel’s dice problem" In the paper it goes on to state why a common wrong answer is $3$. Then afterwards explains that this problem has the same answer to, "What is the expected number of times you can roll only $2$’s or $4$’s until you roll any other number?" I don't understand why this is the case. If the original problem is asking for specifically a $6$, shouldn't that limit many of the possible sequences? I also attempted to solve the problem using another method, but got an answer different from both $3$ and the correct answer of $1.5$. I saw that possible sequences could have been something like: $$\{6\}$$ $$\{2,6\}, \{4,6\}
Possible Confusion To see the problem with simply ignoring the odd rolls, let's simplify things and consider a three sided die and the probability of rolling a $\{3\}$ or a $\{2,3\}$ . Only roll $\boldsymbol{2}$ or $\boldsymbol{3}$ If we are not ignoring $1$ 's, the probability of rolling a $\{3\}$ is $\frac13$ and the probability of rolling a $\{2,3\}$ is $\frac19$ . That is, if $E$ is the event that we have only rolled a $\{3\}$ or a $\{2,3\}$ , $$ \begin{align} \operatorname{Pr}(\{3\}|E)&=\frac{\frac13}{\frac13+\frac19}=\frac34\tag{1a}\\ \operatorname{Pr}(\{2,3\}|E)&=\frac{\frac19}{\frac13+\frac19}=\frac14\tag{1b} \end{align} $$ Ignore rolls of $\boldsymbol{1}$ However, if we are ignoring $1$ 's, rolling any other number may also involve rolling and ignoring any number of $1$ 's. Denote these by $\langle1\rangle$ . Then, $$ \operatorname{Pr}(\{\langle1\rangle,2\})=\operatorname{Pr}(\{\langle1\rangle,3\})=\left[\sum_{n=0}^\infty\left(\frac13\right)^n\right]\frac13=\frac12\tag2 $$ $(2
|probability|conditional-expectation|means|
0
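The answer's point can be backed by an exact computation. A qualifying outcome of length $k$ consists of $k-1$ throws in $\{2,4\}$ followed by a $6$, with probability $(2/6)^{k-1}(1/6)$. A sketch (the series is truncated, which is harmless since the tail is geometric):

```python
N = 200  # truncation point; the geometric tail is negligible
p_event = sum((1 / 3) ** (k - 1) * (1 / 6) for k in range(1, N))
e_length = sum(k * (1 / 3) ** (k - 1) * (1 / 6) for k in range(1, N))
cond_exp = e_length / p_event  # expected throws given all throws were even
```

This gives $P(E)=\frac14$ and a conditional expectation of $\frac32$, not $3$.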
Composition of Lebesgue measurable function and Invertible linear transformation is Lebesgue measurable
Let $f:\mathbb{R}^n\to \mathbb{R}$ be a Lebesgue measurable function and $T\in GL(n,\mathbb{R})$ ; then show that $f\circ T$ is also Lebesgue measurable. For Borel measurable functions it's easy to show. I think we need to use the fact that if $E$ is a Lebesgue measurable set then $E=B\cup N$ where $B$ is an $F_\sigma$ set (in particular a Borel set) and $N$ is a set with outer measure $0$ , with $N\subset F$ where $F$ is a Borel set of measure zero; this last fact comes from the fact that Lebesgue measure is the completion of Borel measure. There is a similar question: let $f:\mathbb{R}^n\to \mathbb{R}$ be a Lebesgue measurable function and $T: \mathbb{R}^n \to \mathbb{R}^n$ a $C^1$ diffeomorphism; then show that $f\circ T$ is also Lebesgue measurable.
Pick a Borel function $g$ on $\mathbb{R}^n$ such that $g = f$ ae. Let $N \subseteq \mathbb{R}^n$ be the null set where this equality fails. Then $M := T^{-1}(N)$ is a null set: If $\lambda$ is Lebesgue measure, then $$\lambda( T^{-1}(N) ) = | \det T^{-1} | \lambda(N) = 0$$ since $T$ is linear and invertible. For $x \notin M$ one has $Tx \notin N$ , hence $g(Tx) = f(Tx)$ . So $g \circ T = f \circ T$ ae. and $g \circ T$ is Borel by reasons you know. So $f \circ T$ is measurable since it equals ae. the Borel function $g \circ T$ .
|real-analysis|measure-theory|measurable-functions|outer-measure|
1
What kind of notation is $\pi: b\ \varnothing\ M$?
In the paper Magnetic Bloch analysis and Bochner Laplacians of Ruedi Seiler, this expression appears in this context: The conventional notation for a fiber bundle would be $\pi: b\ \to\ M$ . where $\pi$ is a continuous surjection from $b$ to $M$ . Does $\varnothing$ mean a continuous surjection? Is this a conventional notation for this? Or does it mean something else here?
Looks like bad OCR to me. Below is a screenshot of the paper from a different source.
|notation|fiber-bundles|
1
When is $\sqrt n$ in $\Bbb Q[\omega_m]$?
Given a positive integer $n$ and a primitive $m^{th}$ root of unity $\omega_m$ over $\Bbb Q$ , how could one determine if $\sqrt{n}$ lies in $\Bbb Q[\omega_m]$ ? In the case of $n=p>0$ being an odd prime, the question is known by some algebraic number theory. Indeed primes ramified in $\Bbb Q\left[\sqrt p\right]$ are $p$ and if $p\equiv 3$ modulo $4$ , also $2$ . If $p\equiv 1$ modulo $4$ , the inclusion $\Bbb Q\left[\sqrt p\right]\subset\Bbb Q\left[\omega_m\right]$ requires $p$ to also ramify in $\Bbb Q[\omega_m]$ , which happens iff $p|m$ . If $p\equiv 3$ modulo $4$ , then $2$ also need to ramify in $\Bbb Q[\omega_m]$ , forcing $4p|m$ . Finally $\sqrt 2\in\Bbb Q[\omega_m]$ iff $8|m$ . Conversely one verifies $p\equiv 1$ modulo $4$ implies $\sqrt{p}\in\Bbb Q[\omega_p]$ and that $p\equiv 3$ modulo $4$ implies $\sqrt{p}\in\Bbb Q[\omega_{4p}]$ . The following criteria summarize the situation: Theorem 1. Let $p>0$ be a positive prime. If $p\equiv 1$ modulo $4$ , then $\sqrt p\in\Bbb Q[\om
The other solution uses class field theory. I will avoid that here. Your question is the same as asking when $\mathbf Q(\sqrt{n}) \subset \mathbf Q(\omega_m)$ . By replacing $n$ by its biggest squarefree factor, we can assume $n$ is squarefree . And surely you are not interested in $n$ being a square, so we take $n$ to be a squarefree integer other than $1$ . When $\mathbf Q(\sqrt{n}) \subset \mathbf Q(\omega_m)$ , primes that ramify in the quadratic field also ramify in the cyclotomic field. When $n$ is squarefree and not $1$ , which primes ramify in $\mathbf Q(\sqrt{n})$ ? An odd prime ramifies in $\mathbf Q(\sqrt{n})$ if and only if it divides $n$ , while $2$ ramifies in $\mathbf Q(\sqrt{n})$ if and only if $n \equiv 2, 3 \bmod 4$ . Which primes ramify in $\mathbf Q(\omega_m)$ ? When $p \nmid m$ , $p$ is unramified in $\mathbf Q(\omega_m)$ since $x^m - 1 \bmod p$ is separable. When $p \mid m$ and $p > 2$ , $p$ is ramified in $\mathbf Q(\omega_p)$ , so $p$ is ramified in the larger fi
|algebraic-number-theory|cyclotomic-fields|
1
Finding a Polynomial Basis for a linear operator
I am working on this question: Let $\mathcal{P}_2(\mathbb{R})$ be all polynomials of degree at most 2 with real coefficients. Let $\mathcal{S}$ be a subspace defined by $\mathcal{S} = \{p \in \mathcal{P}_2(\mathbb{R}): p(0) = p(1) \}$ . Let $T: \mathcal{S} \rightarrow \mathcal{S}$ be a linear operator defined by $T(p) = p+p'$ . Prove or disprove that $T$ is diagonalizable. My approach is to use the theorem " $T$ is diagonalizable iff its algebraic and geometric multiplicities are the same". So, I started by finding a basis of $\mathcal{S}$ . Since every element in $\mathcal{S}$ is of the form $ax^2-ax+b$ , we have $T(ax^2-ax+b) = ax^2+(a-1)x+b$ . I feel like the basis should only contain 2 elements (since it is governed by 2 variables), but besides 1, what could be the other basis element? AI gives me the opinion that $\{1, x, x^2-x\}$ is a basis, but I do not know if I should trust this. Any help is appreciated!
Elements $ax^2+bx+c$ are in $\mathcal{P}_2(\mathbb{R})$ , and the elements of $\mathcal{S}\subset\mathcal{P}_2(\mathbb{R})$ are exactly those of the form $a(x^2-x)+b$ ; in particular, the constant polynomials are included ( $a=0$ ). $\mathcal{S}$ cannot contain nonconstant linear polynomials, because $p(0)=p(1)$ while nonconstant linear polynomials are injective. Hence $\mathcal{S}$ cannot have the same dimension as $\mathcal{P}_2(\mathbb{R})$ .
|linear-algebra|
0
Finding a Polynomial Basis for a linear operator
I am working on this question: Let $\mathcal{P}_2(\mathbb{R})$ be all polynomials of degree at most 2 with real coefficients. Let $\mathcal{S}$ be a subspace defined by $\mathcal{S} = \{p \in \mathcal{P}_2(\mathbb{R}): p(0) = p(1) \}$ . Let $T: \mathcal{S} \rightarrow \mathcal{S}$ be a linear operator defined by $T(p) = p+p'$ . Prove or disprove that $T$ is diagonalizable. My approach is to use the theorem " $T$ is diagonalizable iff its algebraic and geometric multiplicities are the same". So, I started by finding a basis of $\mathcal{S}$ . Since every element in $\mathcal{S}$ is of the form $ax^2-ax+b$ , we have $T(ax^2-ax+b) = ax^2+(a-1)x+b$ . I feel like the basis should only contain 2 elements (since it is governed by 2 variables), but besides 1, what could be the other basis element? AI gives me the opinion that $\{1, x, x^2-x\}$ is a basis, but I do not know if I should trust this. Any help is appreciated!
It is correct that every element in $S$ is of the form $ax^2-ax+b$ (possibly with $a=0$ ), and $$T(ax^2-ax+b)=ax^2+ax+b-a.$$ A basis of $S$ is given by $\{1,x^2-x\}$ : the elements are linearly independent and span $S$ . AI is wrong: $x$ is not part of the basis, and it is not even contained in $S$ .
|linear-algebra|
0
Find all projective transformations that fix two conics
Given two distinct nonsingular conics $C_1,C_2$ in a complex projective plane, I try to find all projective transformations that fix $C_1$ and fix $C_2$ . Then I consider the intersections of $C_1,C_2$ . There are 5 cases: four single points; two single points, one double point; two double points; one single point, one triple point; one quadruple point. Case 1. Obtain three pairs of lines by joining disjoint pairs of these four points, each such pair of lines meets in a point $v_i$ where $\{v_1, v_2, v_3\}$ is a basis relative to which the matrices of $C_1,C_2$ are simultaneously diagonalizable. So there is a basis in which the two conics are defined by ( $\lambda_i$ are distinct) $$x_1^2+x_2^2+x_3^2=0, \quad \lambda_1 x_1^2+\lambda_2 x_2^2+\lambda_3 x_3^2=0$$ so there are four projective transformations that fix $C_1$ and fix $C_2$ $$\begin{array}l [x_1,x_2,x_3]\mapsto[x_1,x_2,x_3]\cr [x_1,x_2,x_3]\mapsto[-x_1,x_2,x_3]\cr [x_1,x_2,x_3]\mapsto[x_1,-x_2,x_3]\cr [x_1,x_2,x_3]\mapsto[x_1,
Let's use $G \subset PGL(n, \mathbb{C})$ to denote the group of projective transformations preserving $C_1$ and $C_2$ . Case 2 Yes, your reasoning is correct. $G \cong \mathbb{Z} / 2\mathbb{Z}$ . Case 4 In fact, only the identity transformation preserves $C_1$ and $C_2$ . $G \cong \{ e \}$ . Proof: Let $p_1, p_2$ denote the intersection points of degree 1 and 3 respectively. Let $\ell_1$ denote the common tangent line through $p_1$ , and let $\ell_2$ denote the other common tangent line. The line $\ell_2$ intersects $C_1$ and $C_2$ at points $q_1$ and $q_2$ respectively. The points $p_1, p_2, q_1, q_2$ are all pairwise different, and no three of them are colinear. A projective transformation preserving $C_1$ and $C_2$ would have to preserve each of these four points, so it can only be the identity. Case 5 Yes, there are infinitely many such transformations. We have $G \cong (\mathbb{Z} / 2 \mathbb{Z}) \times (\mathbb{C}, +)$ . Let $p$ denote the intersection point of $C_1$ and $C_2$ ,
|conic-sections|projective-geometry|
1
Every closed 1 form on a compact, simply connected manifold M is exact.
Suppose that $M$ is a simply connected, compact manifold. Then I want to show that every closed $1$-form on $M$ is exact. This is the same as showing that the first de Rham cohomology space of $M$ is $0$ . Question: Is it possible to answer this without using the universal coefficients theorem? (The reason for not using the universal coefficients theorem is that it contains a mysterious term Ext, which I'm not comfortable using. Digging deeper into Ext, it turns out that it is a core algebraic notion which I'm not good enough at at this stage.) Since $M$ is simply connected, its fundamental group $\pi_1$ is $0$ . The first homology group $H_1$ is the abelianization of $\pi_1$ , so $H_1 =0$ . To conclude $H^1=0$ , I would need the universal coefficients theorem, which is what I want to avoid. Does the following work? Suppose that $\omega$ is a closed $1$-form on $M$ . Consider any $p$ in $M$ . Take a chart at $p$ and call it $h$ . Then $\tau=(h^{-1})^\ast \omega$ is a $1$-form on $\mathbb R^n$ hence equal to $df$ for som
To answer your Question , note that one does not need universal coefficients to state at least some link between $\newcommand{\HdR}{H_\mathrm{dR}}\HdR^1(M)$ and $\pi_1(M)$ . Namely, you can define a map $\Phi\colon \HdR^1(M) \to \operatorname{Hom}_{\mathrm{Grp}}(\pi_1(M, x), \mathbb{R})$ (where $x \in M$ is any point) via $$ \Phi([\omega])([\gamma]) = \int_{\tilde{\gamma}} \omega $$ where $\tilde{\gamma} \in [\gamma]$ is a smooth representative (here the integral is the usual line integral, i.e. it is given by $\int_{\tilde{\gamma}} \omega = \int_{[a, b]} \tilde{\gamma}^* \omega$ for $\tilde{\gamma}\colon [a, b] \to M$ a smooth curve). It turns out that this map is an isomorphism, and you can at least see directly that it is injective without resorting to de Rham's theorem and singular cohomology at all: In fact, if $\int_{\tilde{\gamma}} \omega = 0$ for all choices of $\tilde{\gamma}$ , then it is not hard to see that $\int_\sigma \omega = 0$ for $\sigma$ a smooth closed curve based a
|differential-geometry|algebraic-topology|manifolds|
0
Proof of product rule for limits
Let $$\lim_{x \to a} f(x) = L$$ $$\lim_{x \to a} g(x) = M$$ Where $L$ and $M$ are finite reals. Then I want to prove that $$\lim_{x \to a} f(x) g(x) = LM$$ Let $\epsilon > 0$. We need a $\delta > 0$ such that for all $x$ we have $0 < |x - a| < \delta \implies |f(x)g(x) - LM| < \epsilon$. Rearrange: $$\begin{align}|f(x)g(x)-LM|&=|f(x)g(x)-Lg(x)+Lg(x)-LM|\\ &=|g(x)(f(x)-L)+L(g(x)-M)|\\ &\le|g(x)||f(x)-L|+|L||g(x)-M| \\ &\lt|g(x)||f(x)-L|+(|L| + 1)|g(x)-M|\end{align}$$ Since the limits for $g(x)$ and $M$ approach the same value $M$, there exists a $\delta_1 > 0$ such that for all $x$, $0 < |x - a| < \delta_1$ implies $|g(x)-M| < \frac{\epsilon}{2(|L|+1)}$, so $$\begin{align}|f(x)g(x)-LM| &\lt |g(x)||f(x)-L|+(|L|+1)|g(x)-M|\\ &\lt |g(x)||f(x)-L|+\frac{\epsilon}{2}\end{align}$$ Since the limits for $f(x)$ and $L$ approach the same value $L$, there exists a $\delta_2 > 0$ such that for all $x$, $0 < |x - a| < \delta_2$ implies $|f(x)-L| < \frac{\epsilon}{2(|M|+1)}$, so $$\begin{align}|f(x)g(x)-LM| &\lt |g(x)||f(x)-L|+\frac{\epsilon}{2} \\ &\lt |g(x)|\frac{\epsilon}{2(|M|+1)}+\frac{\epsilon}{2}\end{align}$$ Now we prove $|g(x)| \leq |M|+1$: $$|g(x)| = |g(x) - M + M| \leq |g(x) - M| + |M| \leq |M|+1$$ Subtracting $|M|$ from both sides, we see that we need: $$|g(x) - M| \leq 1$$ Since the limits for $g(x)$ and $M$ approach the s
Let $l_{f} = \displaystyle \lim_{x \to a} f(x)$ and $l_{g} = \displaystyle \lim_{x \to a} g(x)$ . For any $\epsilon > 0$ , there must be a $\delta > 0$ such that $|x - a| < \delta$ implies \begin{equation*} |f(x) - l_{f}| < \epsilon \quad\text{and}\quad |g(x) - l_{g}| < \epsilon. \end{equation*} We have \begin{alignat*}{1} |f(x)g(x) - l_{f} l_{g}| &= |(f(x) - l_{f}) l_{g} + l_{f} (g(x) - l_{g}) + (f(x) - l_{f})(g(x) - l_{g})| \\ &\le |f(x) - l_{f}||l_{g}| + |l_{f}||g(x) - l_{g}| + |f(x) - l_{f}||g(x) - l_{g}| \\ &\lt \epsilon\left(|l_{g}| + |l_{f}| + \epsilon\right). \end{alignat*} Hence $|x - a| < \delta$ implies $|f(x)g(x) - l_{f} l_{g}| < \epsilon\left(|l_{g}| + |l_{f}| + \epsilon\right)$ , and since this bound can be made arbitrarily small, $\displaystyle \lim_{x \to a} f(x)g(x) = l_{f} l_{g}$ .
|calculus|limits|proof-verification|proof-writing|epsilon-delta|
0
$A$ is a Borel subset of $\mathbb{R}^2$ and we know that $\lambda_2(A) = \pi$. Prove that: $\int_A (x^2 + y^2) \ d \lambda_2(x,y) \geq \frac{\pi}{2}$
We assume that $A$ is a Borel subset of $\mathbb{R}^2$ and that $\lambda_2(A) = \pi$ . Prove that: $$\int_A (x^2 + y^2) \ d \lambda_2(x,y) \geq \frac{\pi}{2}$$ I see that it might be useful to use the polar coordinate system. That way we would get that: $$x = r\cos(\theta) \ , \ y = r\sin(\theta)$$ And we know that $\lambda_2(A) = \pi$ , so we would get that: $$\int_{\theta} \int_r \ r^2 \cdot \pi \ dr \ d\theta = \pi \int_{\theta} \int_r \ r^2 \ dr \ d\theta$$ But how to define / how to set the limits of integration so that: $$\int_{\theta} \int_r \ r^2 \ dr \ d\theta \geq \frac{1}{2}$$ Why would that be the case?
The following lemma says roughly that your integral will be larger if the set is ‘further out’. Lemma Suppose $f:\Bbb{R}^n\to\Bbb{R}$ is a radially-increasing function. If $E,F\subset\Bbb{R}^n$ are measurable sets such that $\lambda(E)=\lambda (F)$ , and for some $R\geq 0$ , we have $E\subset B_R(0)$ and $F\subset B_R(0)^c$ , then \begin{align} \int_Ef\,d\lambda\leq\int_Ff\,d\lambda. \end{align} The proof is pretty simple. The hypothesis means that we can write $f(x)=\phi(\|x\|)$ for some (weakly) increasing function $\phi:[0,\infty)\to\Bbb{R}$ . So, because $\phi$ is increasing, we have \begin{align} \int_Ef\,d\lambda&\leq \int_E\phi(R)\,d\lambda=\phi(R)\cdot \lambda(E)=\phi(R)\cdot \lambda(F)=\int_F\phi(R)\,d\lambda\leq\int_Ff\,d\lambda. \end{align} We have the following simple corollary: Corollary Let $f:\Bbb{R}^n\to\Bbb{R}$ be a radially-increasing function, and $A\subset\Bbb{R}^n$ a finite-measure set, and $R\in[0,\infty)$ be the unique value such that $\lambda(B_R(0))=\lambda(A)$ . Then \begin{align} \int_{B_R(0)}f\,d\lambda\leq\int_Af\,d\lambda. \end{align} In your problem, $f(x,y)=x^2+y^2$ is radially-increasing and $\lambda_2(A)=\pi=\lambda_2(B_1(0))$ , so $R=1$ and \begin{align} \int_A(x^2+y^2)\,d\lambda_2\geq\int_{B_1(0)}(x^2+y^2)\,d\lambda_2=\int_0^{2\pi}\int_0^1 r^2\cdot r\,dr\,d\theta=\frac{\pi}{2}. \end{align}
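As a numerical sanity check on the extremal case (this sketch and its function name are mine, not part of the argument): the inequality is tight when $A$ is the unit disk, where a midpoint Riemann sum of $x^2+y^2$ should land near $\pi/2 \approx 1.5708$.

```python
import math

def integral_over_unit_disk(n=400):
    # Midpoint Riemann sum of x^2 + y^2 over the unit disk B_1(0),
    # using an n-by-n grid on the bounding square [-1, 1]^2.
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1 + (i + 0.5) * h
        for j in range(n):
            y = -1 + (j + 0.5) * h
            if x * x + y * y <= 1:
                total += (x * x + y * y) * h * h
    return total

print(integral_over_unit_disk())  # close to pi/2
```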
|real-analysis|measure-theory|lebesgue-integral|lebesgue-measure|
1
What is the probability of drawing a blue ball on the 3rd trial (without replacement)?
I am self-studying Professor Ery Arias-Castro's Principles of Statistical Analysis (it's a pretty great book so far), and I could use some help on this problem. Suppose we draw without replacement from an urn with $r$ red balls and $b$ blue balls. At each stage, every ball remaining in the urn is equally likely to be picked. Use (1.15) to derive the probability of drawing a blue ball on the 3rd trial. Equation 1.15 is $$ \mathbb{P}(A) = \mathbb{P}(A | B) \mathbb{P}(B) + \mathbb{P}(A | B^\complement) \mathbb{P}(B^\complement). $$ Using the notation $B_n$ / $R_n$ for the event of drawing a blue/red ball on the $n$ th trial, I calculated \begin{align} \mathbb{P}(B_2) &= \mathbb{P}(B_2 | B_1) \mathbb{P}(B_1) + \mathbb{P}(B_2 | R_1) \mathbb{P}(R_1)\\ &= \frac{b - 1}{b + r -1} \frac{b}{b + r} + \frac{b}{b + r - 1} \frac{r}{b + r}\\ &= \frac{b}{b + r}. \end{align} Similarly, we get $\mathbb{P}(R_2) = r / (b + r)$ , but I am having a hard time setting up the equation for $\mathbb{P}(B_3)$ . I
My detailed solution to the problem. Thank you to @stochasticboy321 for helping me clarify the ideas. First, we note the general law of total probability in the discrete case. Theorem (Law of Total Probability): Let $(\Omega, \Sigma, \mathbb{P})$ be a probability space, and suppose $\{B_n\}_{n=1}^\infty \subseteq \Sigma$ is a countable collection of mutually exclusive ( $B_i \cap B_j = \varnothing$ for $i \neq j$ ) and collectively exhaustive ( $\cup_{n = 1}^\infty B_n = \Omega$ ) events. Then, for any $A \in \Sigma$ , we have \begin{align} \mathbb{P}(A) &= \sum_{n=1}^\infty\mathbb{P}(A \cap B_n)\\ &= \sum_{n=1}^\infty\mathbb{P}(A | B_n)\mathbb{P}(B_n). \end{align} In the problem, our experiment is drawing balls out of an urn without replacement until we empty the urn. Our sample space $\Omega$ consists of words of length $b+r$ in two letters $B$ and $R$ . Let $B_k$ and $R_k$ represent the event of choosing a blue or red ball on the $k$ th draw respectively. We note that $$ (B_2 \cap B_1) \cup (
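As a sanity check on the target value (the code and function name are my own sketch): brute-force enumeration over all equally likely orderings confirms $\mathbb{P}(B_k) = \frac{b}{b+r}$ for every trial $k$, including $k = 3$.

```python
from itertools import permutations
from fractions import Fraction

def prob_blue_on_trial(b, r, k):
    # Label the b + r balls individually so that every permutation of
    # them is equally likely, then count orderings whose k-th draw is blue.
    balls = ['B'] * b + ['R'] * r
    total = 0
    hits = 0
    for seq in permutations(range(b + r)):
        total += 1
        if balls[seq[k - 1]] == 'B':
            hits += 1
    return Fraction(hits, total)

print(prob_blue_on_trial(3, 2, 3))  # equals b/(b+r) = 3/5
```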
|probability|conditional-probability|
0
$\sum_n |f(x_n)| < \infty \ \forall f \in X^* \implies \sum_n |f(x_n)| < C \|f\|$
I am working on the following problem Let $X$ be a normed space, and let $(x_n)_{n \geq 1}\in X$ , and suppose that $\sum_n |f(x_n)| < \infty$ for all $f \in X^*$ . Prove that there exists a constant $C > 0$ such that $\sum_n |f(x_n)| \leq C \|f\|$ for all $f \in X^*$ . What I've done so far. By considering the set $A =\{\hat{x}_n : n \in \mathbb{N}\}$ , where $\hat{x}_n$ is the canonical embedding of $x_n$ into $X^{**}$ , I can apply the principle of uniform boundedness to $A$ , to conclude that the maps $\hat{x}_n$ are uniformly bounded. However, this only gives that $|f(x_i)| \leq C \|f\|$ which is not very helpful I think. I also tried extending $A$ to contain sums of the form $x_1+\cdots + x_n$ , which works similarly, and I get $|f(x_1 + \cdots + x_n)| \leq C \|f\|$ , but again I don't really know how to finish this either. I also tried to consider the map $F:X^* \to \mathbb{R}$ , given by $F(f) = \sum_n f(x_n)$ , which I believe is well-defined and linear, and then apply the closed graph theorem, but again, I didn't succeed. Any help would be appreciated.
Consider the operators $T_n: X^*\to \ell^1$ defined by $$T_n(f)=(f(x_1),f(x_2),\ldots, f(x_n),0,\ldots)$$ Clearly the operators $T_n$ are bounded. By assumptions the operators $T_n$ are bounded pointwise. Thus by the uniform boundedness theorem they are uniformly bounded, i.e. $$\sum_{k=1}^n|f(x_k)| =\|T_n(f)\|_1\le C\|f\|$$ for a constant independent of $n.$ As $n$ is arbitrary we get the conclusion.
|functional-analysis|
0
Finding a Polynomial Basis for a linear operator
I am working on this question: Let $\mathcal{P}_2(\mathbb{R})$ be all polynomials of degree at most 2 with real coefficients. Let $\mathcal{S}$ be a subspace defined by $\mathcal{S} = \{p \in \mathcal{P}_2(\mathbb{R}): p(0) = p(1) \}$ . Let T: $\mathcal{S} \rightarrow \mathcal{S}$ be a linear operator defined by T(p) = p+p'. Prove or disprove T is diagonalizable. My approach is to use the theorem "T is diagonalizable iff its algebraic and geometric multiplicity is the same". So, I started by finding a basis of $\mathcal{S}$ . Since every element in $\mathcal{S}$ is of the form $ax^2-ax+b$ , we have $T(ax^2-ax+b) = ax^2+ax+(b-a)$ . I feel like the basis should only contain 2 elements (since it is governed by 2 variables), but besides $1$ , what could be the other basis element? AI gives me $\{1, x, x^2-x\}$ as a basis, but I do not know if I should trust this. Any help is appreciated!
Just like @nor affirmed, there doesn't exist $\mathbf T:\mathcal S\to\mathcal S$ . In case you did the general case with $\mathbf T:\mathcal P_2(\mathbf R)\to\mathcal P_2(\mathbf R)$ , just for curiosity purposes, whether it'd be diagonalizable or not, you'd proceed like this: Assuming basis $\{1,x,x^2\}$ , notice $$\mathbf T p(x)=p(x)+p'(x)=[\mathbf I+\mathbf D]p(x)$$ Then, $$\left.\begin{align} &\frac{\mathrm d}{\mathrm dx}1=0=(0,0,0)\\ &\frac{\mathrm d}{\mathrm dx}x=1=(0,0,1)\\ &\frac{\mathrm d}{\mathrm dx}x^2=2x=(0,2,0) \end{align}\right\}\implies\mathbf D\equiv \begin{pmatrix} 0 & 0 & 0\\ 2 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix}\implies \mathbf T=\mathbf I+\mathbf D\equiv \begin{pmatrix} 1 & 0 & 0\\ 2 & 1 & 0\\ 0 & 1 & 1 \end{pmatrix}$$ $\mathbf T$ 's characteristic polynomial is $$p(\mathbf T)=(1-\lambda)^3,$$ and so, $\lambda=1$ has algebraic multiplicity $3$ . The rank of $\mathbf T-\mathbf I$ is $2$ since the determinant of $\mathbf T-\mathbf I$ is $0$ but the minor $$\begin{vmatri
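Here is a concrete check of the $\mathcal{P}_2(\mathbb{R})$ case analyzed above (a pure-Python sketch of mine, representing $p = c_2x^2 + c_1x + c_0$ as the tuple $(c_2, c_1, c_0)$). Since $(\mathbf T - \mathbf I)p = p'$, the eigenspace of the single eigenvalue $1$ is the kernel of differentiation, i.e. the constants, so the geometric multiplicity is $1 < 3$.

```python
def deriv(p):
    # p = (c2, c1, c0) represents c2*x^2 + c1*x + c0; return p'.
    c2, c1, c0 = p
    return (0, 2 * c2, c1)

def T(p):
    # T(p) = p + p'
    d = deriv(p)
    return tuple(a + b for a, b in zip(p, d))

# (T - I)p = p' vanishes only when c2 = c1 = 0, so ker(T - I) is spanned
# by the constant polynomial alone: geometric multiplicity 1, while the
# algebraic multiplicity of the eigenvalue 1 is 3, hence not diagonalizable.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
kernel = [p for p in basis if deriv(p) == (0, 0, 0)]
print(len(kernel))  # 1
```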
|linear-algebra|
1
Torsion elements of $SL_3(\mathbb{F}_p[x])$? (Quick question)
Is every element of $SL_3(\mathbb{F}_p[x])$ a torsion element? Here are my thoughts: First of all, the group is noncommutative, so a torsion element is an element of finite order. I'm thinking of examples of torsion elements: Any generator (since a generator p times is the identity.) Any element that has a rotation matrix as one of its factors. And there are a bunch more examples. Then I can't think of any element that does not have torsion in this group. Could someone please refer me to a result that makes this more precise? Your help will be very much appreciated!
In $\mathrm{GL}_n(\mathbf{F}_q(t))$ , the characteristic polynomial $\det(B-x\mathrm{Id})$ of every element of finite order lies in $\mathbf{F}_q[x]$ . Indeed, all its roots being roots of unity, they lie in $\overline{\mathbf{F}_q}$ , hence so do their symmetric polynomials. Since the only elements in $\mathbf{F}_q(t)$ that are algebraic over $\mathbf{F}_q$ are the constants, this observation follows. Hence, consider a companion matrix $M$ , relative to some monic polynomial $P$ of degree $n$ (here $n=3$ ) with coefficients in $\mathbf{F}_q(t)$ , i.e., $P\in\mathbf{F}_q(t)[x]$ . If $P\notin\mathbf{F}_q[x]$ (and $P(0)\neq 0$ to ensure that $M$ is invertible), then $M$ has infinite order. The determinant of $M$ is $(-1)^nP(0)$ . For instance, for $n=2,3$ the following (companion, for the polynomials $x^2-tx+1$ , $x^3-tx^2-1$ respectively) matrices are in $\mathrm{SL}_n(\mathbf{F}_p[t])$ and have infinite order and determinant 1: $$\begin{pmatrix}0 & -1\\1 & t\end{pmatrix},\qquad \begin{pmatrix}0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & t\end{pmatrix}$$
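A computational sanity check on the $2\times 2$ example (my own sketch, storing polynomials over $\mathbf{F}_p$ as coefficient lists in $t$): the powers of the companion matrix of $x^2-tx+1$ have entries of strictly growing degree in $t$, so no power can equal the identity.

```python
# Polynomials over F_p as coefficient lists: poly[i] is the coefficient
# of t^i, reduced mod p.
P = 5  # sample prime; the argument is uniform in p

def padd(a, b):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % P
            for i in range(n)]

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def deg(p):
    # Degree of a coefficient list; -1 for the zero polynomial.
    return max((i for i, c in enumerate(p) if c % P), default=-1)

def matmul(A, B):
    # 2x2 matrix product with polynomial entries.
    return [[padd(pmul(A[i][0], B[0][j]), pmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

# Companion matrix of x^2 - t*x + 1 from the answer: [[0, -1], [1, t]].
M = [[[0], [P - 1]],
     [[1], [0, 1]]]

A, degrees = M, []
for _ in range(12):
    A = matmul(A, M)
    degrees.append(max(deg(e) for row in A for e in row))
print(degrees)  # strictly increasing, so M has infinite order
```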
|abstract-algebra|group-theory|algebraic-geometry|geometric-group-theory|torsion-groups|
0
In what sense are similar matrices "the same," and how can this be generalized?
I sort of intuitively see why we care about similar matrices, i.e., when $A=S^{-1}BS$ for some invertible matrix $S$ . But I want to make this intuition more precise and abstract. Matrices: First of all, as mentioned here , Because matrices are similar if and only if they represent the same linear operator with respect to (possibly) different bases, similar matrices share all properties of their shared underlying operator. This is followed by a long list of shared properties. However, I feel this doesn't give the full story. For example, we also care about when two different operators are similar, in which case (conversely), they can be represented by the same matrix with appropriate bases. In this case, the long list of properties is still shared by both operators. How can one precisely say what type of properties are shared by similar operators, and what's the most abstract way to understand this? Generalizations: If $A, B, S$ are elements of a group $G$ , then $A = S^{-1}BS$ is desc
Here's one way of approaching matrix similarity, from an abstract POV. Instead of an $n\times n$ matrix $M$ , let's consider an arbitrary endomorphism $\phi$ of an $n$ dimensional vector space $V$ . An endomorphism $\phi$ of $V$ is equivalent to a $k[x]$ module structure on $V$ , where $x\cdot v:=\phi(v)$ . From this perspective, similarity of endomorphisms is when the associated modules are isomorphic. So any properties of finite dimensional $k[x]$ modules (of course invariant under isomorphism) are what similar matrices share. For instance, the structure theory of modules over a PID gives normal form theorems, generalised eigenvalues are the isomorphism classes of simple modules in the composition series, etc. The category of finite dimensional $k[x]$ modules also has duals, and (symmetric) tensor products, so we can understand transpose and exterior powers without bases, giving characteristic polynomials categorically. As a concrete example, from this perspective, we can prove easil
|linear-algebra|abstract-algebra|group-actions|matrix-decomposition|similar-matrices|
1
How did Artin discover the function $f(x)=\frac{(x^2-x+1)^3}{x^2(x-1)^2}$ with the properties $f(x)=f(1-x)=f(\frac{1}{x})$?
In Artin's "Galois Theory", p. 38, he says the function $$f(x) = \frac{(x^2 - x + 1)^3}{x^2(x-1)^2}$$ satisfies $f(x)=f(1-x)=f(\frac{1}{x})$ . Is the function found by some systematic procedure, or just by a flash of insight? If $f(0)$ were a number, then $f(0) = f(\frac{1}{0})$ , so the domain of $f(x)$ should not include $0$ (maybe; I know it's not rigorous). Then the domain of $f(x)$ does not include $1$ either. Thus I think it is a function like $f(x)=\frac{g(x)}{x^a(x-1)^bh(x)}$ with $h(0)\cdot h(1) \neq 0$ . I tried $a=1, b=1$ , which failed, but $a = 2, b = 2$ succeeded. However, I think that's a really weird way to go about it. Is there an easy way to solve questions like " $f(x)$ is a rational function that satisfies $f(x) = f(g_1(x)) = f(g_2(x)) = \cdots = f(g_n(x))$ , where each $g_k(x)$ is a rational function; give an example of $f(x)$ "?
Let $K = \mathbf C(x)$ . This is a field with automorphisms $r$ and $s$ where $r(h(x)) = h(1/(1-x))$ and $s(h(x)) = h(1/x)$ : In other words, $r$ and $s$ are linear-fractional changes of variables $x \mapsto 1/(1-x)$ and $x \mapsto 1/x$ . Check $r$ has order $3$ , $s$ has order $2$ , and $r(s(h)) = s(r^2(h))$ for all $h \in K$ . So $\langle r,s\rangle \cong S_3$ . Since $r(s(h)) = r(h(1/x)) = h(1/(1/(1-x))) = h(1-x)$ , the way you describe your problem puts emphasis on $s$ and $rs$ , which both have order $2$ , but they generate the same group as $r$ and $s$ : $\langle r,s\rangle = \langle rs,s\rangle$ . If a finite group $G$ acts as automorphisms on a field $K$ , then $K^G := \{\alpha \in K : g(\alpha) = \alpha {\rm \ for \ all \ } g \in G\}$ is the subfield of $K$ consisting of elements in $K$ fixed by all of $G$ and Artin proves in the book that $K$ is a Galois extension of $K^G$ with ${\rm Gal}(K/K^G) \cong G$ . When $\alpha \in K$ , the polynomial $\prod_{g \in G} (T - g(\alpha))$
|functions|polynomials|galois-theory|rational-functions|
0
The fundamental group of $\Pi_{n=1}^\infty S^1$ with the $\operatorname{sup}$-metric
Consider $S^1 \subseteq \mathbb{C}$ with the Euclidean metric. Let $X = \Pi_{n=1}^\infty S^1$ as a set and define $d_\infty: X \times X \to [0, 2], d_\infty((x_n)_{n=1}^\infty, (y_n)_{n=1}^\infty) = \operatorname{sup}_{n=1}^\infty d(x_n, y_n)$ Now $(X, d_\infty)$ is a metric space. What is the fundamental group of this space? Let $G = \{(a_n)_{n=1}^\infty \in \Pi_{n=1}^\infty\mathbb{Z} \mid \exists C \in \mathbb{Z}_{\geq 0} \forall n \in \mathbb{N} : |a_n| \leq C\}$ . I believe that we should have $\pi_1(X) = G$ . The corresponding paths are defined by $\varphi_{(a_n)_{n=1}^\infty}: S^1 \to X, z \mapsto (z^{a_n})_{n=1}^\infty$ for $(a_n)_{n=1}^\infty \in G$ . The reason I believe that $\pi_1(X) = G$ is that the fundamental group of $X$ endowed with the product topology is simply $\Pi_{n=1}^\infty\mathbb{Z}$ , and $G \subseteq \Pi_{n=1}^\infty\mathbb{Z}$ is precisely the subgroup of elements whose corresponding paths are still continuous when viewed as mappings from $S^1$ to $X$ equipped with the $\sup$-metric $d_\infty$ .
Suppose $f:S^1\to X$ is a loop. By compactness, $f$ is uniformly continuous, and it follows that the winding numbers of the coordinates of $f$ are bounded (if $\delta>0$ is such that $|x-y| < \delta$ implies $d_\infty(f(x),f(y)) < 1$ , say, then you can bound the winding number of each coordinate of $f$ in terms of $\delta$ ). So there is a homomorphism $p:\pi_1(X)\to G$ which takes each loop to its sequence of winding numbers on each coordinate. As you have observed, $p$ is surjective, so it suffices to check that $p$ is also injective. So, suppose $p([f])=0$ , so $f=(f_n)$ has winding number $0$ on each coordinate. Note that each $f_n$ has a canonical nullhomotopy given by lifting it to the universal cover $\mathbb{R}$ and using the linear homotopy. It is then easy to check that these linear nullhomotopies can be joined together to give a nullhomotopy of $f$ (because continuity of $f$ implies the $f_n$ are equicontinuous and their lifts to $\mathbb{R}$ are uniformly bounded, and then it follows that the combined homotopy is continuous with respect to $d_\infty$ ).
|general-topology|algebraic-topology|homotopy-theory|fundamental-groups|
1
Understanding Computer Code to Test for an Injective Function
I'm learning about types of function in discrete maths. I like to learn by exploring concepts through computer code. I came across the following code purporting to determine whether a function is injective: def is_injective(function): # Check if the function is injective (one-to-one) codomain_set = set(function.values()) return len(codomain_set) == len(function) # Example usage domain = ["A","B","C"] # Replace with your domain codomain = [1,2,3,4] # Replace with your codomain function = { "A": 1, # Replace with your function mapping "B": 2, # Replace with your function mapping "C": 4, # Replace with your function mapping } if is_injective(function): print("The function is injective (one-to-one).") I'm having a bit of trouble understanding how this works or whether it is correct. In is_injective() , codomain_set seems to create a set from the function values (which I think represent the range?). I assume creating a set is to remove duplicates. However, I'm not seeing how comparing a set
Since I am not able to add comments, I will post this as an answer. As you probably know, the standard definition of injectivity is: $$x_1 \ne x_2 \implies f(x_1) \ne f(x_2)$$ The codomain_set in your code collects the distinct values the function takes, while len(function) counts the elements of the domain. If those two counts are equal, then the function must yield different values for different elements. If, for example, $f(A)=4=f(B)$ , then the value $4$ is counted just once in the set, while $A$ and $B$ are two different domain elements. This gives two different cardinalities for those sets.
|functions|python|
0
Understanding Computer Code to Test for an Injective Function
I'm learning about types of function in discrete maths. I like to learn by exploring concepts through computer code. I came across the following code purporting to determine whether a function is injective: def is_injective(function): # Check if the function is injective (one-to-one) codomain_set = set(function.values()) return len(codomain_set) == len(function) # Example usage domain = ["A","B","C"] # Replace with your domain codomain = [1,2,3,4] # Replace with your codomain function = { "A": 1, # Replace with your function mapping "B": 2, # Replace with your function mapping "C": 4, # Replace with your function mapping } if is_injective(function): print("The function is injective (one-to-one).") I'm having a bit of trouble understanding how this works or whether it is correct. In is_injective() , codomain_set seems to create a set from the function values (which I think represent the range?). I assume creating a set is to remove duplicates. However, I'm not seeing how comparing a set
Making some (hopefully) reasonable assumptions about what function.values() and len(function) do, your understanding of the code seems correct. Correctness of the function itself then follows from the following observation that $f\colon X \to Y$ where $X$ is a finite set is injective iff $|X| = |f(X)|$ : Clearly $|X| = |f(X)|$ if $f$ is injective since it matches elements of the domain uniquely to elements of its range, whereas if $|X| = |f(X)|$ and you assume that $f$ is somehow not injective, you can find some $y \in f(X)$ with $|f^{-1}(\{y\})| \geq 2$ ; but if you then consider $f|_{X \setminus \{x\}}\colon X \setminus \{x\} \to f(X)$ for some choice of $x \in f^{-1}(\{y\})$ you'll have a surjection from a set of cardinality $|f(X)| - 1$ onto one of strictly greater cardinality $|f(X)|$ which is absurd, which is to say that $f$ must have been injective in the first place.
|functions|python|
0
Does anyone know what font this is?
I am trying to figure out how to produce this exact integral sign (in pdfLaTeX, if possible; if not, in Microsoft Word?). Would anyone know how to? I know that it's really specific, but it could really make the difference for me in a class. I've also attached an image of this integral with an integrand. Thank you! The integral The integral with an integrand
It looks like Computer Modern Unicode Serif (also known as CMU Serif) and should be available as a TTF download for Windows, which means it can then be used in Word and/or PDFs.
|integration|
0
How to pull back the differential form $\omega = \frac{-ydx+xdy}{\sqrt{x^2+y^2}}$ to $S^2$
Consider the stereographic projection chart on $S^2$ which doesn't include the north pole $$(X,Y)=\varphi(x,y,z)=\left(\frac{x}{1-z}, \frac{y}{1-z}\right).$$ I want to pull back the 1-form $\omega = \frac{-ydx+xdy}{\sqrt{x^2+y^2}}$ from $\mathbb{R}^2$ to $S^2$ but I am not sure about a step in the calculation. The definition of pullback of a form under a smooth map $\varphi $ is $$\varphi^* \omega=(f \circ \varphi) d\left(X \circ \varphi\right)+(g \circ \varphi) d\left(Y \circ \varphi\right)$$ Then, $$ \begin{aligned} &\varphi^*\omega =\frac{\frac{-y}{1-z}}{\sqrt{\frac{x^2}{(1-z)^2}+\frac{y^2}{(1-z)^2}}} d\left(X \circ \varphi\right)+\frac{\frac{x}{1-z}}{\sqrt{\frac{x^2}{(1-z)^2}+\frac{y^2}{(1-z)^2}}} d\left(Y \circ \varphi\right) \\ & =\frac{-y}{\sqrt{x^2+y^2}} d(X \circ \varphi)+\frac{x}{\sqrt{x^2+y^2}} d(Y \circ \varphi) \end{aligned} $$ How do I compute $d(X\circ \varphi)$ and $d(Y\circ \varphi)$ ? Intuitively, this feels like some sort of product rule would have to occur.
To answer your question in coordinates: if we have $f:U \longrightarrow \mathbb{R}$ for $U \subset\mathbb{R}^n$ open, then $$df=\sum_{i=1}^n\frac{\partial f}{\partial x_i}dx_i$$
|calculus|geometry|analysis|manifolds|differential-topology|
0
What in the definition of a pentagon prevents three vertices on the same edge
The definition of a pentagon ( https://en.wikipedia.org/wiki/Pentagon ) says "five-sided polygon", but let ABCD be a square and E the midpoint of AB. Is AEBCD a pentagon? What in the definition of a pentagon "prevents" this from happening? And more generally for hexagons, etc.? Does "five-sided" fail because the side defined by AE lies on the same line as the one defined by EB?
If the shape you describe is considered a pentagon then it can, by extension, be described as an $n$ -gon (for $n>5$ ) by simply adding additional points on the line segment. So, to avoid this, there is an assumption when we say shape $X$ is an $n$ -gon that it has $n$ angles, none of which are equal to $\pi$ . There is nothing wrong with describing your shape as a pentagon, but many would consider it a trivial case.
|geometry|
1
Calculate "vanishing points" of line of latitude on orthographic map projection
For an orthographic map projection (of radius $r$ ) centered at latitude $\varphi_0$ , I believe the ellipse defining the line of latitude $\varphi$ is centered at $y$ coordinate $r\cos(\varphi_0)\sin(\varphi)$ , with major axis radius $r\cos(\varphi)$ and minor axis radius $r\sin(\varphi_0)\cos(\varphi)$ . What I'm having trouble with is identifying the coordinates of the "vanishing points" where this ellipse intersects the horizon—where the ellipse is tangent to the circle defining the earth. Equivalently (if less intuitively) I'm looking for the zero, one, or two points that satisfy both $$ x^2 + y^2 = r^2 $$ and $$ \frac{x^2}{(r\cos(\varphi))^2} + \frac{(y - r\cos(\varphi_0)\sin(\varphi))^2}{(r\sin(\varphi_0)\cos(\varphi))^2} = 1 $$ It seems like there should be some (simple?) relation between $\varphi_0$ , $\varphi$ , and the longitude at which the line of latitude disappears, from a minimum of 90° ( $\frac{\pi}{2}$ ) at $\varphi = 0$ to a maximum of 180° ( $\pi$ ) at $\varphi = \frac{\pi}{2} - \varphi_0$ .
I might be completely missing the point here but I will give it a quick try anyway. I will take the part of your question stating that you want " the longitude at which the line of latitude disappears " and ignore the rest since I am not sure if it is all correct. Referencing the Wikipedia page you linked, the equations for $x$ and $y$ are $$ \begin{align} x & = R \cos{\varphi} \sin{(\lambda - \lambda_0)} \\ y & = R (\cos{\varphi_0} \sin{\varphi} - \sin{\varphi_0} \cos{\varphi} \cos{(\lambda - \lambda_0)}) \end{align} $$ and the equation for "the angular distance $c$ from the center of the orthographic projection" is $$ \cos{c} = \sin{\varphi_0} \sin{\varphi} + \cos{\varphi_0} \cos{\varphi} \cos{(\lambda - \lambda_0)} $$ meaning that the projection vanishes at the point where $c$ is either $- \frac \pi 2$ or $\frac \pi 2$ . Therefore, we can find the "vanishing point" longitude in terms of the latitude as $$ \begin{align} 0 & = \sin{\varphi_0} \sin{\varphi} + \cos{\varphi_0} \cos{\varphi} \cos{(\lambda - \lambda_0)} \\ \cos{(\lambda - \lambda_0)} & = - \tan{\varphi_0} \tan{\varphi} \\ \lambda & = \lambda_0 + \arccos{(- \tan{\varphi_0} \tan{\varphi})} \end{align} $$
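A small numeric sketch of the relation $\cos(\lambda - \lambda_0) = -\tan\varphi_0\tan\varphi$ (the function name and the handling of the no-crossing case are my own choices):

```python
import math

def vanishing_longitude(phi0, phi):
    # Longitude offset (lambda - lambda_0, radians) where the latitude-phi
    # circle crosses the horizon, from cos(lambda - lambda_0) = -tan(phi0)*tan(phi).
    # If |tan(phi0)*tan(phi)| > 1 the circle never crosses the horizon.
    t = -math.tan(phi0) * math.tan(phi)
    if abs(t) > 1:
        return None
    return math.acos(t)

# The equator vanishes at 90 degrees regardless of the center latitude:
print(round(math.degrees(vanishing_longitude(math.radians(40), 0.0)), 6))  # 90.0
```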
|trigonometry|map-projections|
0
Writing function without piecewise
This question was inspired by the solution to this problem: Piecewise linear function and absolute value . Question: Suppose you have the real-valued piecewise function: \begin{equation*} f(x) = \begin{cases} f_1(x), & \text{if } x < a \\ f_2(x), & \text{if } x \geq a \end{cases} \end{equation*} where $f_1$ is continuous and differentiable on the left side of $a$ and $f_2$ is continuous and differentiable on the right side of $a$ . Is there a systematic way to write $f(x)$ without making it piecewise? I suspect the answer will have something like the form: $f(x)=f_1(x) \space + \space $ (something involving $f_1'$ and $f_2'$ ) $ \space \cdot \space$ (some mix of $f_1$ and $f_2$ along with $a$ involving absolute values), but the only reason I suspect this is because of the special case where the $f_i$ s are linear. Note: The case where $f$ is continuous and piecewise-linear is straightforward: \begin{equation*} f(x) = \begin{cases} \alpha_1 x+\beta_1, & \text{if } x < a \\ \alpha_2 x+\beta_2, & \text{if } x \geq a \end{cases} \end{equation*} \begin{equation} = (\alpha_1 x+\beta_1)+\frac{\alpha_2-\alpha_1}{2}\cdot(|x-a|+x-a). \end{equation}
You can write indicator functions of intervals $[a,\infty)$ in terms of absolute values: $$1_{[a,\infty)}(x)=\frac{1}{2}\left(\frac{|x-a|}{(x-a)}+1\right)$$ so you can write indicator functions of intervals $[a,b)$ in terms of absolute values: $$1_{[a,b)}(x)=1_{[a,\infty)}(x)-1_{[b,\infty)}(x)=\frac{1}{2}\left(\frac{|x-a|}{(x-a)}-\frac{|x-b|}{(x-b)}\right)$$ (away from the points $a$ and $b$ themselves, where the quotients are undefined). From there you can just multiply your $f_i$ by their respective interval indicator functions and take the sum.
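A quick numeric sanity check of this recipe (the sample functions $f_1(x)=2x+1$, $f_2(x)=-x+7$ with $a=2$ are mine, chosen to agree at $x=a$; note the quotient $\frac{|x-a|}{x-a}$ is undefined exactly at the breakpoint):

```python
a = 2.0

def f_piecewise(x):
    return 2 * x + 1 if x < a else -x + 7

def step(x, c):
    # 1 for x > c, 0 for x < c; undefined at x = c (division by zero).
    return 0.5 * (abs(x - c) / (x - c) + 1)

def f_closed_form(x):
    # f1 plus the jump to f2, switched on by the absolute-value step.
    f1 = 2 * x + 1
    f2 = -x + 7
    return f1 + (f2 - f1) * step(x, a)

for x in [-3.0, 0.5, 1.9, 2.1, 4.0]:
    assert abs(f_piecewise(x) - f_closed_form(x)) < 1e-12
print("piecewise and closed form agree (away from x = a)")
```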
|calculus|
0
Derive $\sin x$ expansion without using calculus
We know that $$\sin x = x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots = \sum_{n\ge 0} \frac{(-1)^n}{(2n+1)!}x^{2n+1}$$ But how could we derive this without calculus? There are some approaches using $e^{ix} = \cos x + i \sin x$ , but please notice I would also like to avoid that definition, since to prove $e^{ix} = \cos x + i \sin x$ we again need the expansions of $\sin x$ and $\cos x$ . One approach I tried is to start with $\sin^2 x$ : let $$\sin^2 x := \sum_{n\ge 1} a_n x^{2n}$$ Note: I guess such an ansatz since $\sin x$ is an odd function -- so $\sin x$ has only odd powers of $x$ , and $\sin^2 x$ only has even powers of $x$ . Now if I can arrive at $$\sin^2 x = \sum_{n\ge 1} \frac{(-1)^{n+1} 2^{2n-1} }{(2n)!} x^{2n},$$ then via $\cos 2x = 1-2\sin^2 x$ I can get the expansion of $\cos x$ and then $\sin x$ . To derive $a_n$ , first I use $\lim_{x\rightarrow 0} \frac{\sin x}{x} = 1$ from the geometric interpretation (the arc is almost the opposite side for small angle $x$ ), to get $$a_1
Let us first correct your recursive formula, which should rather be $$ \left(1-2^{2n-2}\right) a_n= \sum_{k=1}^{n-1} a_k a_{n-k}.$$ To derive from it (and from $a_1=1$ ) that $$\forall n\in\Bbb N\quad a_n=\frac{(-1)^{n+1} 2^{2n-1} }{(2n)!},$$ just check that this sequence satifies our recursive formula, i.e. check that $$ \left(1-2^{2n-2}\right)\frac{(-1)^{n+1} 2^{2n-1} }{(2n)!}= \sum_{k=1}^{n-1}\frac{(-1)^{(k+1)+(n-k+1)} 2^{(2k-1)+(2n-2k-1)} }{(2k)!(2n-2k)!}.$$ After some easy simplifications, the task is to prove that $$\sum_{k=0}^n\binom{2n}{2k}=2^{2n-1}.$$ This is done for instance here .
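Both the corrected recursion and the final binomial identity are easy to check by machine; here is a small sketch of mine using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def a(n):
    # Claimed closed form: a_n = (-1)^(n+1) * 2^(2n-1) / (2n)!
    return Fraction((-1) ** (n + 1) * 2 ** (2 * n - 1), factorial(2 * n))

# Corrected recursion: (1 - 2^(2n-2)) * a_n = sum_{k=1}^{n-1} a_k * a_{n-k}
for n in range(2, 12):
    lhs = (1 - Fraction(2) ** (2 * n - 2)) * a(n)
    rhs = sum(a(k) * a(n - k) for k in range(1, n))
    assert lhs == rhs

# ...which reduces to the binomial identity sum_{k=0}^{n} C(2n, 2k) = 2^(2n-1)
for n in range(1, 12):
    assert sum(comb(2 * n, 2 * k) for k in range(n + 1)) == 2 ** (2 * n - 1)
print("recursion and binomial identity verified")
```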
|sequences-and-series|algebra-precalculus|
1
Definition of natural numbers in category theory
It is said that category theory serves as an alternative foundation of mathematics, as such it must define natural numbers in terms of categories as it is done in set theory, when we consider set theory as a foundation for mathematics and where we define recursively natural numbers as $0:=\emptyset$ and $n+1:=n\cup \{n\}$ . So, what is the definition of natural numbers in category theory?
I'm sure there are plenty, but this is the one that I have encountered, I believe it is rather standard. In the category of small categories, consider the category C with one object and only the identity morphism, and the category D with two objects, a single morphism between the two objects, the two identity morphisms, and no other morphisms. Then there are two functors $C\to D$ . The natural numbers $\Bbb N$ is the coequalizer of these two functors. This coequalizer $\Bbb N$ has a single object, and there is a coequalizing functor $D\to \Bbb N$ . The morphism between the two objects in $D$ is mapped to the morphism $1$ in $\Bbb N$ , and apart from the identity morphism in $\Bbb N$ , all other morphisms can be obtained in a unique way by composing $1$ with itself. Composition is the addition operation. (Note that this construction is mentioned in the "Examples" section of the coequalizer Wikipedia article.)
|category-theory|set-theory|natural-numbers|
0
Continuous Extension of $f : D \rightarrow \mathbb{R}$, $x \mapsto \frac{{x_1 \sin(x_2) + x_2 \sin(x_1)}}{{\sqrt{x_1^2 + x_2^2}}}$
Investigate at which points on $\partial D$ the function can be continuously extended, and in this case, provide the continuous extension. $f : D \rightarrow \mathbb{R}$ , $x \mapsto \frac{{x_1 \sin(x_2) + x_2 \sin(x_1)}}{{\sqrt{x_1^2 + x_2^2}}}$ I have tried to disprove that $f(x_1, x_2)$ has a continuous extension at $(0,0)$ . Specifically, I used straight lines, but the limit always seems to approach zero. Therefore, I plotted the graph, and indeed, it seems like a continuous extension is possible. My question is as follows: If that is the case, how do I show it? It seems impossible to prove in every possible direction. Can I maybe use sequences, or what should I do in this situation? Any help is appreciated.
In polar coordinates we have $f(r \cos \theta, r \sin \theta)=\cos \theta \sin (r \sin \theta)+\sin \theta \sin (r \cos \theta)$ , so $|f(r \cos \theta, r \sin \theta)| \le |\sin (r \sin \theta)|+|\sin (r \cos \theta)|$ . So $f(r \cos \theta, r \sin \theta) \to 0$ as $(r \cos \theta, r \sin \theta) \to (0,0)$ .
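A numeric sketch of this estimate (my own code): since $|\sin(r\sin\theta)|+|\sin(r\cos\theta)| \le 2r$, the values of $f$ must shrink with $r$, so extending $f$ by $0$ at the origin is continuous.

```python
import math

def f(x1, x2):
    return (x1 * math.sin(x2) + x2 * math.sin(x1)) / math.sqrt(x1 ** 2 + x2 ** 2)

# |f| <= |sin(r sin t)| + |sin(r cos t)| <= 2r on the circle of radius r:
for r in [1e-1, 1e-3, 1e-5]:
    worst = max(abs(f(r * math.cos(t), r * math.sin(t)))
                for t in [k * math.pi / 50 for k in range(100)])
    assert worst <= 2 * r
print("f -> 0 along shrinking circles, consistent with the bound")
```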
|multivariable-calculus|trigonometry|continuity|
1
Solve the equation $\left(\frac{1+\sqrt{1-x^2}}{2}\right)^{\sqrt{1-x}} = (\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}}$
Solve in $\mathbb{R}$ : $ \left(\frac{1+\sqrt{1-x^2}}{2}\right)^{\sqrt{1-x}} = (\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}} $ My approach: Let $a = \sqrt{1-x}$ and $b = \sqrt{1+x}$ so $a^2 + b^2 = 2$ . The equation becomes $\left(\frac{1+ab}{2}\right)^a = a^{a+b}$ , which is equivalent to $\left(\frac{1+ab}{a^2+b^2}\right)^a = a^{a+b}$ . After taking the natural logarithm, we get $a \ln(1+ab) - a \ln(a^2+b^2) = a \ln(a) + b \ln(a)$ . I thought of considering a function but I couldn't find it. Any help is appreciated.
Clearly, $x = 0$ is a solution. We claim that $x = 0$ is the only solution. If $-1 \le x < 0$ , we have $$\Big(\frac{1+\sqrt{1-x^2}}{2}\Big)^{\sqrt{1-x}} \le 1 < (\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}}.$$ If $0 < x \le 1$ , we have $$\Big(\frac{1+\sqrt{1-x^2}}{2}\Big)^{\sqrt{1-x}} \ge \frac{1+\sqrt{1-x^2}}{2} > \frac{\sqrt{1 - x} + \sqrt{1 - x}}{2},$$ and $$(\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}} \le \sqrt{1 - x}.$$ Thus, $\Big(\frac{1+\sqrt{1-x^2}}{2}\Big)^{\sqrt{1-x}} > (\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}}$ . The claim is proved.
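A quick floating-point check of the claim (the code is mine): the two sides agree at $x=0$ and separate on either side of it.

```python
import math

def lhs(x):
    return ((1 + math.sqrt(1 - x * x)) / 2) ** math.sqrt(1 - x)

def rhs(x):
    return math.sqrt(1 - x) ** (math.sqrt(1 - x) + math.sqrt(1 + x))

# x = 0 is a solution, and the two sides never meet elsewhere on this grid:
assert abs(lhs(0.0) - rhs(0.0)) < 1e-12
for x in [-0.9, -0.5, -0.1]:
    assert lhs(x) < rhs(x)
for x in [0.1, 0.5, 0.9]:
    assert lhs(x) > rhs(x)
print("x = 0 is the only crossing on the sample grid")
```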
|functions|inequality|logarithms|systems-of-equations|exponential-function|
1
Prove that no three of the six angles trisectors meet at one point
I.e., that there are 12 distinct intersection points of the angle trisectors. $ABC$ is a triangle: with its 6 angle trisectors. Prove that $M, N, O, P, Q, R, S, T, U, V, W, X$ (the trisector's intersections) are all distinct. The only formula I found about angle trisectors is Morley's Formula, saying that $PQU$ is equilateral.
Disclaimer : Not a direct answer but a methodology. As your question can be understood as "how can I attack such an issue ?", I would like here to propose two combined tools for such questions involving angle bisectors in a triangle : trilinear coordinates $(u:v:w)$ (abbreviation here : t.c.) and their use with isogonal conjugation $(u:v:w) \leftrightarrow (\tfrac{1}{u}:\tfrac{1}{v}:\tfrac{1}{w})$ . I will show it through a configuration of 8 points (see figures 1 and 2) sharing some points with your own configuration . Fig. 1 : A case where the angles aren't trisected in 3 equal values (not a "Morley configuration"). Fig. 2 : A Morley configuration for a general triangle. This configuration is determined by a single point with t.c. $(u:v:w)$ in the following way (see the correspondence with Fig. 1 (and your own figure) : $$\begin{cases} (u:v:w)&\text{red disk}\\ (\tfrac{1}{u}:v:w)&\text{blue star ; your point O}\\ (u:\tfrac{1}{v}:w)&\text{green disk ; your point S}\\ (u:v:\tfrac{1}{w}
|triangles|
0
Understanding Computer Code to Test for an Injective Function
I'm learning about types of function in discrete maths. I like to learn by exploring concepts through computer code. I came across the following code purporting to determine whether a function is injective:

def is_injective(function):
    # Check if the function is injective (one-to-one)
    codomain_set = set(function.values())
    return len(codomain_set) == len(function)

# Example usage
domain = ["A", "B", "C"]  # Replace with your domain
codomain = [1, 2, 3, 4]  # Replace with your codomain
function = {
    "A": 1,  # Replace with your function mapping
    "B": 2,  # Replace with your function mapping
    "C": 4,  # Replace with your function mapping
}

if is_injective(function):
    print("The function is injective (one-to-one).")

I'm having a bit of trouble understanding how this works or whether it is correct. In is_injective() , codomain_set seems to create a set from the function values (which I think represent the range?). I assume creating a set is to remove duplicates. However, I'm not seeing how comparing the size of that set with the size of the function dictionary tells us whether the function is injective.
The way I see this, function is a collection of pairs, something like function = ((1,A),(2,B),(3,C),(4,B)) [not injective, as both 2 and 4 map to B]. Then function.values() is a list (A,B,C,B). A list can contain repeated objects. set() is a function that turns the list into a set, which cannot contain repeated objects, so set(A,B,C,B) will return (A,B,C). Now function has length 4, and function.values() has length 4, but codomain_set = (A,B,C) has length 3, so they are unequal. The only way that can happen is if function.values() had repeated elements that got stripped, and that means the function was not injective. On the other hand, if function = ((1,A),(2,B),(3,C),(4,D)) and it is injective, then function.values() = (A,B,C,D) has no repeated values, so set(A,B,C,D) will return (A,B,C,D), nothing is stripped, and function, function.values(), and codomain_set = set(function.values()) will all have length 4. So despite my initial critique of this being bad code and useless, I've changed my mind.
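To make the argument above concrete, here is the checker from the question boiled down to a runnable sketch (the sample dicts are my own, not from the post):

```python
def is_injective(function):
    # A mapping (dict) is injective iff no two keys share a value,
    # i.e. deduplicating the values loses nothing.
    return len(set(function.values())) == len(function)

# injective: every value is hit by at most one key
assert is_injective({"A": 1, "B": 2, "C": 4})

# not injective: "A" and "C" both map to 1, so the set of values shrinks
assert not is_injective({"A": 1, "B": 2, "C": 1})
```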
|functions|python|
1
minimum of $-(\cos k_1 + \cos k_2 + \cos k_3)$
I want the minimum value of the function $$- (\cos k_1 + \cos k_2 + \cos k_3)$$ under the constraint of $$ k_1 + k_2 +k_3 = K , $$ where the constant $K \in [-\pi, \pi]$ . I guess the minimum is achieved at $k_1 = k_2 = k_3 = K/3$ , but find it difficult to prove it. The problem is that $-\cos x $ is not completely convex on the real axis. Here we have three terms. It is conjectured that if we have $N\geq 3 $ terms, the minimum is always achieved by $k_1 = k_2 = \ldots = k_N = K/N $ . This problem is relevant to solid state physics. I want to minimize the kinetic energy of a few electrons in a lattice.
I confirm your guess for the case $N=3$ . By your hint in the comments, $K$ is a constant in $[-\pi, \pi]$ , while $k_{1..3}$ are almost free. That is why I initially disbelieved your assumption that the minimum is at $k_{1..3}=K/N$ . My reasoning: with $k_1=k_2=0$ the "terms" $\cos(k_1)$ and $\cos(k_2)$ have a maximum impact on the result, and only $\cos(k_3)$ with $k_3=K$ is what it is. But, at $k_n=0$ the effect of a "little change" may be lower than it is at $k_3=K$ . Hence I modified your function $- (\cos k_1 + \cos k_2 + \cos k_3)$ to the ansatz $$-\left(2\cos\left(e\right)+\cos\left(K-2e\right)\right)$$ The first derivative with respect to $e$ , set to $0$ and solved for $e$ , yields four solutions: $$\left\{e=\frac{4m\pi+2\pi+K}{3}\mathrm{,}\\e=\frac{4m\pi+K}{3}\mathrm{,}\\e=4n\pi+\pi+K\mathrm{,}\\e=4n\pi-\pi+K\right\}$$ with $m$ and $n$ arbitrary integers including $0$ and negative values; I opted for $m=n=0$ . Next I checked the sign of the second derivative at these solutions, to see whether each yields a minimum or not. It d
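A brute-force grid scan (my own quick check, with an arbitrary grid resolution, so only suggestive rather than a proof) is consistent with the equal-split conjecture for $N=3$ : no grid point on the constraint surface beats $k_1=k_2=k_3=K/3$ .

```python
import math

def f(k1, k2, k3):
    # the objective -(cos k1 + cos k2 + cos k3)
    return -(math.cos(k1) + math.cos(k2) + math.cos(k3))

for K in [-math.pi, -1.0, 0.0, 0.7, math.pi]:
    equal_split = f(K / 3, K / 3, K / 3)
    # scan (k1, k2) over [-pi, pi]^2; k3 is forced by the constraint
    n = 80
    for i in range(n + 1):
        for j in range(n + 1):
            k1 = -math.pi + 2 * math.pi * i / n
            k2 = -math.pi + 2 * math.pi * j / n
            k3 = K - k1 - k2
            assert f(k1, k2, k3) >= equal_split - 1e-9
```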
|optimization|maxima-minima|
0
Divergence of Tensor Field
The author of this textbook is introducing this coordinate-free definition of the divergence of tensor fields. Can someone help break this down for me? I am just really confused. First of all, does "a" represent a vector field? And then the dot product of the two vector fields would give us a scalar field, right? Also, the right-hand side of this equation would represent a scalar field as well. Any help would be greatly appreciated! The Definition
$ \def\r#1{\color{red}{#1}} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\p{\partial_i} \def\c{\cdot} \def\n{\nabla} \def\a{a_k} \def\T{T_{ik}} $ You will have a hard time proving the linked formula because it is wrong. To see this, expand $\n\c\LR{T\c a}$ using the Einstein summation convention $$\eqalign{ \p\LR{\T\a} &= \LR{\p\T}\a + \T\LR{\p\a} \\ }$$ then rearrange the terms $$\eqalign{ \LR{\p\T}\a &= \p\LR{\T\a} - \T\LR{\p\a} \\ }$$ and convert back to vector notation $$\eqalign{ \LR{\n\c T}\c a &= \n\c\LR{\r T\c a} - T:\LR{\n a} \\ }$$ or in terms of named operators $$\eqalign{ {\op{div}\LR{T}}\c a &= \op{div}\LR{\r T\c a} - \op{trace}\LR{T^T\!\c{\n a}} \\ }$$ Note that the term in red is $\r{T^T}$ in the linked formula, which is incorrect. $\sf NB\!:\:$ I could have expanded $\n\c\LR{T^T\c a}\,$ but then the linked document would have two typos instead of one, so the above is actually the most generous interpretation.
|calculus|multivariable-calculus|vector-analysis|
1
Does $\sum\limits_{k=5}^n\binom{k-3}2\binom{k-1}3$ have a closed form?
This summation came up in a problem I was working on. My first attempt in coming up with a formula yielded the result $\binom{n-1}3\binom{n-2}3-\sum\limits_{k=3}^{n-3}\binom{k+1}2\binom k3$ at which point I was unable to simplify any further. I then devised a second method to achieve a formula and came up with the result $\sum\limits_{k=5}^n\binom{k-3}2\binom{k-1}3$ . Both appear to be equivalent, which I was unable to prove directly, but could with induction if needed. I tried using this to come up with a closed formula for one summation or the other but was unable to figure out a way to do so. If worse came to worse, I could treat the summand as a quadratic multiplied by a cubic, in which case, the result of the sum would be some degree 6 polynomial, but is there a nicer closed form for one of these summations?
Inspired by ultralegend5385's solution, I think I see another way to solve this problem without using WolframAlpha. The inspiration comes from seeing it in the form $$\binom{k-3}{2}\binom{k-1}{3}=\frac{1}{12}(k-1)(k-2)(k-3)^2(k-4)$$ I think I see a way to manipulate this further into something more workable. From here we have $$\frac{1}{12}(k-1)(k-2)(k-3)(k-4)(k-5+2)=$$ $$\frac1{12}(k-1)(k-2)(k-3)(k-4)(k-5)+\frac16(k-1)(k-2)(k-3)(k-4)=10\binom{k-1}5+4\binom{k-1}4$$ Now summing this from $5$ to $n$ , we can use the hockey-stick identity to get $10\binom n6+4\binom n5$ .
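The closed form obtained above, $10\binom{n}{6}+4\binom{n}{5}$ , is easy to verify directly against the original summation (a quick check, not part of the derivation):

```python
from math import comb

def closed_form(n):
    # hockey-stick result: sum_{k=5}^n C(k-3,2) C(k-1,3) = 10 C(n,6) + 4 C(n,5)
    return 10 * comb(n, 6) + 4 * comb(n, 5)

for n in range(5, 40):
    s = sum(comb(k - 3, 2) * comb(k - 1, 3) for k in range(5, n + 1))
    assert s == closed_form(n)
```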
|summation|binomial-coefficients|
0
If $F(x)=(F_1(x),\dots, F_n(x))$, is it always possible to find $F_i(x)=\min\{F_1(x),\dots, F_n(x)\}$?
Let $n\ge 2$ , ( $n$ finite) and $F:\mathbb R\to\mathbb R^n$ be a positive function, meaning that $$ F(x)=(F_1(x),\dots, F_n(x))$$ and $F_i(x)\ge 0$ for any $i\in\{1,\dots, n\}$ . My question is: is it always possible to find the minimum (or the maximum) among the $n$ components of $F$ ? More precisely, is it possible to find $i\in\{1,\dots,n\}$ such that $F_i(x)=\min\{F_1(x),\dots, F_n(x)\}$ ? On the one hand, I think that the answer is yes because $F_i(x)\in\mathbb R$ for any $i\in\{1,\dots,n\}$ , which means that $(F_1(x),\dots, F_n(x))$ is a vector made of real numbers and I can always find the minimum (or maximum) among $n$ real numbers. On the other hand, I am confused about the presence of the variable $x$ . Could anyone please help me in understanding this?
Let $F_1(x) = x, F_2(x) = 1 - x, x \in [0, 1]$ . Then, $$ \min\{F_1(x), F_2(x)\} = \begin{cases} x, & \text{ if } 0 \le x \le 1/2 \\ 1 - x, & \text{ if } 1/2 < x \le 1 \end{cases} $$ There is no $i \in \{1,2\}$ such that $F_i(x) = \min\{F_1(x), F_2(x)\}$ for all $x \in [0, 1]$ . But, for a fixed $x \in [0, 1]$ , you can find $i \in \{1,2\}$ such that $F_i(x) = \min\{F_1(x), F_2(x)\}$ . In particular, $$ i = \begin{cases} 1, & \text{ if } 0 \le x \le 1/2 \\ 2, & \text{ if } 1/2 < x \le 1 \end{cases} $$ You may even notice that when $x = 1/2$ , the choice for $i$ is not unique, as $F_1(1/2) = F_2(1/2) = 1/2$ . In conclusion, the statement: $$ \exists i, \forall x, F_i(x) = \min\{F_1(x), \ldots, F_n(x)\} $$ is in general, false. But the statement: $$ \forall x, \exists i \text{ (may depend on }x \text{) }, F_i(x) = \min\{F_1(x), \ldots, F_n(x)\} $$ is true.
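The pointwise choice of index in the answer can be phrased computationally: the minimizing component exists for every fixed $x$ , but which one it is depends on $x$ . (The helper below is my own illustration, not from the post.)

```python
def argmin_component(F, x):
    """Return the index i (0-based) with F[i](x) minimal; i depends on x."""
    values = [f(x) for f in F]
    return values.index(min(values))

F = [lambda x: x, lambda x: 1 - x]
assert argmin_component(F, 0.2) == 0   # F_1(x) = x is smaller on [0, 1/2]
assert argmin_component(F, 0.8) == 1   # F_2(x) = 1 - x is smaller on (1/2, 1]
```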
|real-analysis|calculus|multivariable-calculus|vectors|
1
The Meaning of the Fundamental Theorem of Calculus
I am currently taking an advanced Calculus class in college, and we are studying generalizations of the FTC. We just started on the version for Line Integrals, and one can see the explicit symmetry between the 1-dim version and this version. But as the technical meaning of this theorem sank in, I realized I never really understood the meaning of FTC, going back to 1-dim. I understand that FTC creates a bond between the two fundamental studies of calculus, the integral and the derivative. But is there some geometric or tangible meaning to this bond? What does it really mean that the integral and the derivative are "inverse processes"? Is this in the same exact sense as an inverse function? It would seem that the derivative measures instant change, and the integral measures area; what connection can be drawn between the two ideas? Is the equation in FTC just purely mechanical? Any ideas on this subject are greatly appreciated; I'm just trying to understand a nice theorem. Also, if anyone ca
I made this picture to try to explain the fundamental theorem of calculus in the easiest and most intuitive way possible. It's color coded in an attempt to make the colors in the formula match the colors of the words in the explanation below. As an example, suppose you have something measuring temperature over time. If you want to find out the total temperature change for a full hour, you have two ways you can do it. The hard way of measuring the change each minute and then adding all sixty of those changes together, or the easy way of finding the temperature at the end of the hour and subtracting the temperature at the beginning. Both give you the exact same answer. This explanation is very clear for the discrete world, and the formality of the fundamental theorem extends that idea to the continuous world.
|calculus|integration|multivariable-calculus|intuition|vector-analysis|
0
f inverse of f(a)
$f\left(x\right)=x^{2}-4x+8$ , for $x\ge2$ . If $a\le0$ , find $f^{-1}\left(f\left(a\right)\right)$ . I know $f^{-1}\left(x\right)=2+\sqrt{x-4}$ . How do I solve my question then?
If $a \le 0$ , since $f(x)$ has domain $[2,\infty)$ , $f(a)$ does not exist (or is not defined, depending on your preferred terminology). The rest of the question is therefore meaningless. If you would like to check your question and ask it again, I'm sure people would be glad to assist.
|inverse|
0
How to show that if y=f(x+a) is an even function where f(x)=(x-6)^2sin(bx) then a must be 6?
With latex: How to show that if $y=f(x+a)$ is an even function where $f(x)=(x-6)^2\sin(\omega x)$ then $a$ must be 6? The original question was to solve for what $\omega$ s are possible. And there was a step like this: since $y=f(x+a)$ is even, so is $y=(x+a-6)^2$ and $y=\sin(\omega x+\omega a)$ . I'm confused by this step.
We exclude the trivial case where $\omega=0$ . Then, what makes it hard is that there is a $\sin$ term that changes the scale of the quadratic $(x+a-6)^2$ , so we consider the case when $\sin$ is at its maximum. If $a>6$ , for every $x>0$ , $(x+a-6)^2>(-x+a-6)^2$ ; pick an $x>0$ such that $\sin(\omega(x+a))$ is at its maximum $1$ , then clearly $f(x+a)>f(-x+a)$ , making $y=f(x+a)$ not even. For $a<6$ , it is basically the same story.
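One can also check the $a=6$ direction numerically. With $a=6$ , evenness of $f(x+6)=x^2\sin(\omega x+6\omega)$ forces $\cos(6\omega)=0$ , e.g. $\omega=\pi/12$ ; that particular value of $\omega$ is my own sample choice, not from the question.

```python
import math

def f(x, w):
    return (x - 6) ** 2 * math.sin(w * x)

# with a = 6 and the (assumed admissible) value w = pi/12, 6w = pi/2,
# so g(x) = f(x + 6) = x^2 * cos(w x), which is even: g(-x) = g(x)
w = math.pi / 12
for x in [0.3, 1.7, 2.5, 4.0]:
    assert abs(f(x + 6, w) - f(-x + 6, w)) < 1e-9

# while a = 5 visibly fails: f(x + 5) is not even (compare x = 1 and x = -1)
assert abs(f(1 + 5, w) - f(-1 + 5, w)) > 0.1
```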
|functions|trigonometry|
0
Union of two events is at least as likely as the product of the events' probabilities
If $A$ and $B$ aren't disjoint and $A \cup B \neq \Omega$ , then is $P(A \cup B) \geq P(A)P(B)$ ? My only idea is to use $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ but there's a minus in front of the intersection and the events don't have to be independent.
The short answer is yes, and I believe that the reason is more fundamental than you suggest. Since $P(A\cup B)\geq P(A)$ and $0 \leq P(B) \leq 1$ it follows quite naturally that $P(A) \geq P(A)\times P(B)$ and hence $P(A\cup B)\geq P(A)P(B)$
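Since the argument is purely order-theoretic ($P(A\cup B)\ge P(A)\ge P(A)P(B)$ ), it can be confirmed exhaustively on a small finite probability space; the four-point space below is my own arbitrary choice.

```python
from fractions import Fraction
from itertools import chain, combinations

# a four-point probability space with unequal masses summing to 1
p = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}

def prob(event):
    return sum((p[w] for w in event), Fraction(0))

points = list(p)
# all 16 events (subsets of the sample space)
events = list(chain.from_iterable(combinations(points, r) for r in range(5)))

for A in events:
    for B in events:
        union = set(A) | set(B)
        # P(A u B) >= P(A) >= P(A) P(B), since P(B) <= 1
        assert prob(union) >= prob(A) * prob(B)
```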
|probability|
1
Show that if $a, b \in \mathbb{R}$ and $a \neq b$, the intersection of the $\epsilon$ neighborhoods of a and b is $\emptyset$
I know the $\epsilon$-neighborhood for $a \in \mathbb{R}$ is defined as the set $V_\epsilon(a)$ of $x \in \mathbb{R}$ such that $|x-a| < \epsilon$ .
Not necessarily true. For example: take $2,\frac{3}{2}\in\mathbb{R}$ ; then, for $\epsilon =0.6$ we have $N_\epsilon(2)=(2-0.6,2+0.6)=(1.4,2.6)$ and $N_\epsilon(\frac{3}{2})=(\frac{3}{2}-0.6 ,\frac{3}{2}+0.6)=(0.9,2.1)$ and clearly, $(1.4, 2.6) \cap(0.9, 2.1)=(1.4,2.1)\neq\emptyset$ . Your statement needs some correction. The following statement is true: if $a,b\in\mathbb{R}$ and $a\neq b$ , then there exist neighborhoods $N_{\epsilon_1}(a)$ , $N_{\epsilon_2}(b)$ of $a, b$ respectively, such that $N_{\epsilon_1}(a)\cap N_{\epsilon_2}(b)=\emptyset$ .
|real-analysis|real-numbers|
0
Expressing the diagonal of a matrix filtered by a vector
I am trying to extract the mathematical expression of creating a set of elements from the diagonal of a matrix, but after filtering by a vector. For example: Let $A$ be a $3 \times 3$ matrix, $\mathbf{v}$ a $3 \times 1$ vector, and $\mathbf{R}$ be the resulting $3 \times 1$ vector: $A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}$ $\mathbf{v} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$ $\mathbf{R} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$ What would be the expression of this operation? I am looking for a notation expressed in a way that specifies the relationship between $\mathbf{R}$ and $A$ and $\mathbf{v}$ . I have thought of something like: $\mathbf{D} = \{ A_{ii} \mid i \in [0, N] \}$ $\mathbf{R} = \{D_{i} \land \mathbf{v}_{i} \mid i \in [0, N] \}$ Would that be correct?
Let diag() return the main diagonal of its matrix argument as a column vector, and Diag() construct a diagonal matrix from its vector argument. Also note that the Hadamard product $(\odot)$ between two vectors is equivalent to making one of them a diagonal matrix and multiplying it by the other. Therefore the filtering operation is $$\eqalign{ R \;=\; v\odot{\rm diag}(A) \;=\; {\rm Diag}(v)\cdot{\rm diag}(A) }$$ Note that all elements of $v\,$ and ${\rm diag}(A)$ are either $\,0$ or ${\tt1}.\,$ Thus a logical AND operation between elements is equivalent to integer multiplication, so a standard Hadamard product is sufficient to describe the filtering operation.
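The $v\odot{\rm diag}(A)$ recipe is a one-liner in code; here is a plain-Python sketch of the operation on the example from the question:

```python
def filter_diagonal(A, v):
    """R[i] = v[i] * A[i][i], i.e. the Hadamard product v o diag(A)."""
    return [v[i] * A[i][i] for i in range(len(v))]

A = [[1, 0, 1],
     [0, 1, 1],
     [0, 0, 0]]
v = [1, 0, 1]

# matches the R in the question: diag(A) = [1, 1, 0], filtered by v
assert filter_diagonal(A, v) == [1, 0, 0]
```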
|linear-algebra|matrices|vectors|
0
How do I remove the specific assumptions in the proof of the rank theorem in the special case of injective differential?
I have the following proof in my notes. It is a particular case of the more general rank theorem. Theorem (Rank theorem for injective differential). Suppose $M$ is a smooth manifold of dimension $m$ , and that $N$ is a smooth manifold of dimension $n$ . Suppose $F : M \to N$ is smooth. Let $p \in M$ . If $dF_p$ is injective, then there are charts $(U, \varphi)$ of M around p and $(V,\psi )$ of N around $F(p)$ such that $F(U) \subseteq V$ and for all $x \in\varphi(U)$ , and $\psi\circ F \circ \varphi^{−1}(x) = (x, 0_{n−m})$ Proof We prove the theorem in the case that $M$ is an open subset of $\Bbb R^m$ , $N$ open subset of $\Bbb R^n$ , $p = 0_m$ and $F(p) = 0_n$ By using appropriate charts around p and F(p), we can prove the general case. Suppose $dF_p$ is injective. Then $m \le n$ . Define $Q: M \to \Bbb R^m$ and $R: M \to \Bbb R^{n−m}$ by $F(x) = (Q(x), R(x))$ for $x \in M$ . Since $DF(p)$ is injective, its matrix has an $m × m$ invertible submatrix. We can do this by possibly exchang
See Product manifolds in Dieudonné, Vol. 3: Reduction of proofs to open sets in $\Bbb{R}^n$ for some indications of how to handle such reductions. Just to be super explicit for this question, suppose you prove the rank theorem for injective differentials in your given special case. For completely general $F:M\to N$ , fix a point $p\in M$ about which we’re going to prove things. fix an arbitrary chart $(V_0,\psi_0)$ in $N$ around $F(p)$ , and consider the translation $\tau_1:\Bbb{R}^n\to\Bbb{R}^n$ given by $\tau_1(y)=y-\psi_1(F(p))$ (translations are obviously diffeomorphisms). Now, define $V_1=V_0$ , and $\psi_1=\tau_1\circ\psi_0$ . Then, $(V_1,\psi_1)$ is a chart for $N$ around the point $F(p)$ , and furthermore, $\psi_1(F(p))=0\in\Bbb{R}^n$ . fix an arbitrary chart $(U_0,\phi_0)$ in $M$ around $p$ . and consider the translation $\tau_0:\Bbb{R}^m\to\Bbb{R}^m$ given by $\tau(x)=x-\phi_0(p)$ (obviously a diffeomorphism). Now, define $U_1=U_0\cap F^{-1}(V_0)$ (so that we can compose thin
|linear-algebra|analysis|differential-geometry|smooth-manifolds|inverse-function-theorem|
0
Two subgroups $H, K$ of coprime and finite index are such that $G = HK$.
I am asked to prove that if $H, K$ are subgroups of $G$ such that $$h = [G:H] \quad\text{and}\quad k = [G:K]$$ are finite and coprime, then we have $$G = HK$$ I tried using the fact that $$|HK| = \frac{|H||K|}{|H\cap K|}$$ and concluded I needed to prove $hk|H \cap K| = |G|$ but couldn't go any further. Immediately above, in the same exercise, I proved that $[H: H\cap K] \leq [G:K]$ if $[G:K]$ is finite. EDIT : Following the hint in the accepted answer , we have $$|G| \leq |HK| = \frac{|H||K|}{|H\cap K|}$$ But obviously $HK \subseteq G$ hence $|G| = |HK|$.
I feel that this might be closer to what the exercise wants you to do (for infinite $G$ ): You proved above that $|H:H\cap K| \le |G:K|$ . Similarly, $|K:H\cap K| \le |G:H|$ . We then know $|G:H||H:H\cap K| = |G:K||K:H\cap K| = |G:H\cap K|$ . Let $$n=\frac{|G:H|}{|K:H\cap K|} = \frac{|G:K|}{|H:H\cap K|},$$ we see that $n=1$ because $n$ divides both $|G:H|$ and $|G:K|$ . Then we are done, by what you proved above.
|abstract-algebra|group-theory|
0
Counterexample to Poincaré's inequality.
Give an example of a sequence of functions in $C^1(\mathbb{R})$ to show that the Poincaré inequality does not hold on a general non-compact domain. I tried to find a function in $C^1(\mathbb{R})$ that does not belong to $L^2$ on an unbounded domain but whose derivative is in $L^2$ on that unbounded domain; but this function has to vanish on the boundary. Consider $f(x)= \begin{cases} x^{-1/3} &\text{if}\,\, x\neq 0\\ 0 & \text{if}\,\, x = 0 \end{cases}$ and $f'(x)= \begin{cases} -\dfrac{1}{3}x^{-4/3} &\text{if}\,\, x\neq 0\\ 0 & \text{if}\,\, x = 0 \end{cases}$ on the domain $(1, \infty).$ $$\int_1^{\infty}x^{-2/3}dx=+\infty$$ but $$\int_1^{\infty} \left[-\dfrac{1}{3}x^{-4/3}\right]^2dx=\dfrac{1}{15}.$$ It is easy to find an example that is not square integrable on an unbounded domain while its derivative is. But the problem is that in my example $f$ is not continuously differentiable at $0$ and does not vanish on the boundary. So I have trouble finding a function that belongs to $C^1
The example I gave in my last comment can be simplified a lot, you can just take $$ \Large f(x) = \frac{1}{(1+x^2)^{\frac14}}. $$ It is not square integrable because we have $$ \Large \frac{1}{(1+x^2)^{\frac12}} \geq \frac{1}{(2x^2)^{\frac12}} = \frac{1}{\sqrt{2}}\frac{1}{x} $$ for $x \geq 1$ and its derivative $$ \Large f'(x) = -\frac{x}{2(1+x^2)^{\frac54}} $$ is square integrable because $$ \Large \frac{x^2}{4(1+x^2)^{\frac52}} \leq \frac{1+x^2}{(1+x^2)^{\frac52}} \leq \frac{1}{1+x^2} $$ for all $x \in \mathbb{R}$ .
|analysis|partial-differential-equations|integrable-systems|
1
Max/Min of Bernoulli variables
Let $X,Y$ be independent random variables following the Bernoulli law with parameter $p$ . What is the law of $\min(X,Y)$ and $\max(X,Y)$ ? I said that $P(\min(X,Y)=1) = 1-p^2$ and $P(\max(X,Y)=1)=p^2$ ; is this right?
$\min (X, Y)$ takes two values $0$ and $1$ then it follows the Bernoulli law with the parameter : $$q = P(\min(X, Y) = 1) = P(X = 1, Y = 1) = P(X = 1) P(Y = 1) = p^2$$ because $X$ and $Y$ are independent. $\max (X, Y)$ takes two values $0$ and $1$ then it follows the Bernoulli law with the parameter : $$r = P(\max(X, Y) = 1) = 1 - P(\max(X, Y) = 0) = 1 - P(X = 0, Y = 0) = 1 - P(X = 0) P(Y = 0) = 1 - (1 - p)^2$$ because $X$ and $Y$ are independent.
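As a quick sanity check, one can enumerate the four joint outcomes exactly (note that $P(X=0)=1-p$ , so the parameter of the max is $1-(1-p)^2=2p-p^2$ ); the sample value $p=1/3$ below is my own arbitrary choice.

```python
from fractions import Fraction

p = Fraction(1, 3)           # an arbitrary sample parameter

def pmf(x):
    # Bernoulli(p): P(X = 1) = p, P(X = 0) = 1 - p
    return p if x == 1 else 1 - p

p_min_1 = sum(pmf(x) * pmf(y) for x in (0, 1) for y in (0, 1) if min(x, y) == 1)
p_max_1 = sum(pmf(x) * pmf(y) for x in (0, 1) for y in (0, 1) if max(x, y) == 1)

assert p_min_1 == p ** 2                # min(X, Y) ~ Bernoulli(p^2)
assert p_max_1 == 1 - (1 - p) ** 2      # max(X, Y) ~ Bernoulli(1 - (1-p)^2)
```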
|probability|
0
$(H,*)$ group with some properties $\Rightarrow H$ not an interval.
Let $(H, \ast)$ be a group, where $H \subseteq (0, \infty)$ , which has these properties: $x \in H \Rightarrow \frac{1}{x} \in H$ ; $2023 \in H$ ; and $x \ast y = \frac{1}{x} \ast \frac{1}{y}$ for any $x, y$ . Prove that $H$ is not an interval. As far as my attempts go, I haven't been able to do much, apart from modifying the " $\ast$ " relation a bit and finding things such as $x = \frac{1}{x} \ast \frac{1}{e} = \frac{1}{e} \ast \frac{1}{x}$ and so on (where $e$ is the identity element). I could see an outline of what I'm supposed to prove: if I were to take any values, either higher or lower than 2023, and place them on an axis $oX$ , they would create an interval between $(0,1)$ and $(1,\infty)$ , but the two would only unite if $1$ were part of $H$ . Could that be where I'm supposed to be "heading" with my proof?
You're on the right track. It remains to show that $1 \not\in H$ . Hint Suppose otherwise. Then, the third property implies that for all $x \in H$ , $1 \ast x = 1 \ast \frac1x$ , hence $x = \frac1x$ , so $x = 1$ , i.e., $H = \{1\}$ . Remark Such a group actually exists: We can take $H := \Bbb R^+ \setminus \{1\}$ equipped with the operation $x \ast y := \exp(\log x \log y)$ ---which is just the multiplicative subgroup $(\Bbb R^\times, \cdot)$ of the field $\Bbb R$ conjugated by the (restriction) $\exp : \Bbb R^\times \to \Bbb R^+ \setminus \{1\}$ of the exponential map. The subgroup $(1, \infty) \subset H$ is connected, but it doesn't satisfy the second property.
|abstract-algebra|group-theory|real-numbers|
1
How to show that only this regular expression solves this equation
Consider the equation $x=v\cdot x + w$ where $x$ is a variable regular expression, $v, w$ are fixed regular expressions, $v$ has no variables inside it, and $w$ has no $x$ inside it. It is easy to show that $x=v^*\cdot w$ solves this equation: $v\cdot v^*\cdot w+w=(v\cdot v^*+1)\cdot w=v^*\cdot w$ . But how to show that there are no other solutions?
As already noted, you run into problems if you formulate everything in terms of equations. This idea stems from the 1960's literature - and is wrong. The right way to formulate this emerged in the 1990's, which is as a system of inequations , with the least solution being the one sought for. In effect, it's a non-numeric form of optimization, though neither the literature of the time (nor since) has explicitly laid it out that way. Textbooks, apparently, have not come up to speed on this, if we're still seeing this formulated in terms of equations. So, for your problem, the system is a single inequation: $$x ≥ v x + w.$$ Since the variable is on both sides of the inequation, then it's a fixed-point system: a system of the form $x ≥ φ(x)$ where one seeks out a fixed point of $x ↦ φ(x)$ . The least fixed point is referred to as $μx·φ(x)$ . Correspondingly, the expression you're looking for is $$μx·(v x + w).$$ For Kleene algebras , one of the fundamental properties is that if $x ≥ vx$ then $x ≥ v^*x$ . Combined with $x ≥ w$ , this gives $x ≥ v^*x ≥ v^*w$ , so $v^*w$ is indeed the least solution.
|computability|automata|regular-language|regular-expressions|
1
Row Spaces - Prove $\mathrm{Row}\ A = \mathrm{Row}\ A^T A$
The question is: Prove that the following statement is true. For any rectangular $m\times n$ matrix A, $(\mathrm{Nul}( A ))^\perp = \mathrm{Row} (A^T A)$ Now, my understanding of the Row space is that it is simply $\mathrm{Col}\ A^T$ (since it is just the Span of all linear combinations of row vectors of A.) I also know that $(\mathrm{Row}\ A)^\perp = \mathrm{Nul}\ A$ . Since we can take a double orthogonal complement, we can interchange it: $((\mathrm{Row}\ A)^\perp)^\perp = (\mathrm{Nul}\ A)^\perp \implies \mathrm{Row}\ A = (\mathrm{Nul}\ A)^\perp$ . Hence, the actual proof is now simplified to proving that $\mathrm{Row}\ A = \mathrm{Row}\ A^TA$ . Here is my (incomplete) proof for the above: Let the matrix A be $m \times n$ , consisting of entries of the form of vectors: $A = \left[\begin{array}{cccc}\vec{v_1} & \vec{v_2} & \cdots & \vec{v_n}\end{array}\right]_{m\times n}$ , where $\vec{v_i} \in \mathbb{R}^m$ . Now, we take the transpose of A: $A^T = \left[\begin{array}{c}\vec{v_1}^T \\ \vdots \\ \vec{v_n}^T\end{array}\right]$
Easier: Show that $N(A)=N(A^\top A)$ , where $N$ is the nullspace. (1) $N(A)\subset N(BA)$ for any matrix $B$ . (2) $N(A^\top A)\subset N(A)$ : If $(A^\top A)x=0$ , then $0=x\cdot A^\top (Ax) = Ax\cdot Ax$ , so $Ax=0$ .
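The $N(A)=N(A^\top A)$ route can be illustrated on a small concrete matrix (a plain-Python sketch with a rank-1 example of my own choosing, not a proof):

```python
def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    Yt = transpose(Y)
    return [[sum(a * b for a, b in zip(row, col)) for col in Yt] for row in X]

A = [[1, 2],
     [2, 4],
     [0, 0]]                       # rank 1, so Nul(A) is spanned by (2, -1)
AtA = matmul(transpose(A), A)      # = [[5, 10], [10, 20]]

# a null vector of A is also a null vector of A^T A ...
x_null = [2, -1]
assert matvec(A, x_null) == [0, 0, 0]
assert matvec(AtA, x_null) == [0, 0]

# ... and a vector outside Nul(A) stays outside Nul(A^T A)
x_other = [1, 0]
assert matvec(A, x_other) != [0, 0, 0]
assert matvec(AtA, x_other) != [0, 0]
```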
|linear-algebra|orthogonality|
1
Finding all prime powers of the form $\frac{n(n-1)}{10}$
I was trying to come up with a problem whose solutions would be the numbers 5, 6, 10 and 11. It seems like "finding all integers $n$ where $\frac{n(n-1)}{10}$ is a prime power" is a good attempt, as there are at least no other solutions for $n$ that I found. However, I didn't manage to prove that there are no other numbers matching this criterion, so I wonder whether there are none, or whether it's just that prime powers are quite rare and there could be very large numbers still matching my criterion. Can anyone help me out there?
Hint Since $n - 1$ and $n$ are coprime, at least one of them must be a factor of $10$ , hence $n \leq 11$ .
|number-theory|prime-numbers|perfect-powers|
0
Finding Area of $\triangle ABF$ in a Right Triangle with Given Side Lengths and Median Intersection
In right triangle $ABC$ , we have $\angle ACB=90^{\circ}$ , $AC=2$ , and $BC=3$ . Medians $AD$ and $BE$ are drawn to sides $BC$ and $AC$ , respectively. $AD$ and $BE$ intersect at point $F$ . Find the area of $\triangle ABF$ . I aim to use the Shoelace Theorem to calculate the area of $\triangle ABF$ , but I'm stuck on determining the coordinates of point $F$ . Could someone assist me in finding the coordinates of point $F$ so that I can proceed with calculating the area of $\triangle ABF$ using the Shoelace Theorem? Or is there another method altogether?
Also I would like to mention another very easy process, using data given by @Wasu Chanyasubkit. You know the coordinates of $A(0,2),B(3,0),F(1,\frac{2}{3})$ . You can apply the formula for the area of a triangle with vertices at 3 coordinates: $$A=\frac{1}{2}\left|x_1(y_2-y_3)+x_2(y_3-y_1)+x_3(y_1-y_2)\right|$$ to get $A=1$ .
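The whole computation (centroid as the average of the vertices, then the shoelace-style area formula) can be carried out exactly with rationals; this is just a verification of the numbers above:

```python
from fractions import Fraction

A = (Fraction(0), Fraction(2))
B = (Fraction(3), Fraction(0))
C = (Fraction(0), Fraction(0))

# centroid F = average of the three vertices = (1, 2/3)
F = tuple((a + b + c) / 3 for a, b, c in zip(A, B, C))
assert F == (Fraction(1), Fraction(2, 3))

def area(P, Q, R):
    (x1, y1), (x2, y2), (x3, y3) = P, Q, R
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

assert area(A, B, F) == 1          # area of triangle ABF
```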
|geometry|analytic-geometry|
0
2 Players playing a game to guess a number between 1 and 100 with decreasing payout
2 players are playing a game to guess a number between 1 and 100, such that the higher of the two guesses wins $100 - \max(guess_{1}, guess_{2})$ . What should be the ideal guess in this game? My thought is first to assume a uniform distribution for the opponent, as we don't know anything about how they might guess, and in turn get an expected payout $E[x_{1}] = a(100 - a)$ for a guess $a$ between 1 and 100, yielding the optimal guess 50. But I am not sure how to continue from here, as I am not convinced this is the optimal answer.
Okay I'm considering that you two are choosing at random. Let you be $A$ and $B$ be the other friend. I'm using my basic high school combinatorics knowledge. Please correct me if there is a mistake. Let $x_A,x_B$ be the numbers you choose. $x_A,x_B ∈$ { ${1,2,3,4,...100}$ } So there are total $$100×100=10000$$ possible pairs of $x_A,x_B$ . Out of which $$99×100=9900$$ pairs have unequal values of $x_A,x_B$ . Out of these, exactly half the pairs have $x_A>x_B$ . Thus the total pairs in this case are $$\frac{9900}{2}=4950$$ Hence if $E$ is the event of having $x_A>x_B$ , i.e. of you winning the payout, then $$P(E)=\frac{4950}{10000}=0.495$$ Hence there is a $49.5\%$ chance of you winning the payout, $49.5\%$ of your friend winning it, and $1\%$ for equality. Regarding the ideal choice, you get this probability for any random choice. But by studying data collections, you can check that people tend to choose a value between 30 to 70 usually. It gives them a sense of "Not very much but not very less" thinking t
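The counting in the answer is small enough to verify by exhaustive enumeration (purely a check of the arithmetic above):

```python
# all ordered pairs of guesses (x_A, x_B) with values in 1..100
pairs = [(a, b) for a in range(1, 101) for b in range(1, 101)]
assert len(pairs) == 10000

wins_A = sum(1 for a, b in pairs if a > b)
ties   = sum(1 for a, b in pairs if a == b)

assert ties == 100
assert wins_A == 4950          # so P(x_A > x_B) = 4950/10000 = 0.495
```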
|probability|expected-value|game-theory|
0
The operation $ (a,b)(c,d)=(ac-bd,ad+bc) $ on $\Bbb R\times\Bbb R\backslash (0,0)$ yields a group
Here is the binary operation $\ast$ on $\mathbb{R}\times \mathbb{R} \backslash \{(0,0)\}$ defined by $ (a,b)(c,d)=(ac-bd,ad+bc) $ . My idea is that to show $(\mathbb{R}\times \mathbb{R} \backslash \{(0,0)\}, \ast )$ is a group, I need to show that $ \ast $ is well-defined and associative and then show it has an identity and inverses. I am struggling to do the first part. How do I show $ \ast $ is well-defined (and is the first part required)? Is showing that $ ac-bd=0,ad+bc=0 $ will only be true if $a=b=c=d=0$ sufficient?
As explained here, they are isomorphic to complex numbers: $\,(a,b)\sim a+b\,i,\,$ so we can simply mimic the norm-based complex number proof that a product is zero iff some factor is zero, i.e. $\alpha = (a,b),\ \bar\alpha := (a,-b)\,\Rightarrow\, \alpha\bar\alpha = (|\alpha|,0),\,$ with $|\alpha| := a^2\!+\!b^2\!=\! 0\!\color{#c00}\iff\! \alpha\!=\!(0,0)\!=:\!\bf 0$ $\alpha\beta\!=\! {\bf 0} \overset{\!\times\,\bar\alpha\bar\beta }\Longrightarrow {\bf 0}\!=\! \alpha\bar\alpha(\beta\bar\beta)\!=\! (|\alpha|,0)(|\beta|,0)\!=\! (|\alpha||\beta|,0)\!$ $\!\iff\!\! |\alpha|\,{\rm or}\,|\beta|\!=\!0\!\color{#c00}\iff\! \alpha\,{\rm or}\,\beta\!=\! {\bf 0}$ Similarly we can compute inverses as in $\Bbb C$ using Key Idea multiplying by a conjugate allows us to rationalize (here $\color{#c00}{\textit{real-ize}}$ ) denominators, lifting "existence of inverses of $\,r\ne 0$ " from $\mathbb R$ to $\mathbb C,\,$ i.e. since $\mathbb R$ is a field, $\rm\ 0\ne r\in \mathbb R\, \Rightarrow\, r^{-1}\in
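The identification with complex multiplication is easy to see computationally; the snippet below (my own illustration) checks the product rule, the identity $(1,0)$ , and the conjugate-based inverse:

```python
def star(p, q):
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

# the operation is exactly complex multiplication under (a, b) ~ a + bi
z, w = complex(2, 3), complex(-1, 5)
prod = z * w
assert star((2, 3), (-1, 5)) == (prod.real, prod.imag)

# identity element (1, 0)
assert star((2, 3), (1, 0)) == (2, 3)

# inverse via the conjugate: (a, b)^{-1} = (a, -b) / (a^2 + b^2)
a, b = 2, 3
n = a * a + b * b
x, y = star((a, b), (a / n, -b / n))
assert abs(x - 1) < 1e-12 and abs(y) < 1e-12
```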
|abstract-algebra|group-theory|functions|binary-operations|
0
Bounded denominators for modular forms
I recently saw a conjecture that a modular form is a congruence modular form if and only if it has bounded denominators. I wonder if one direction or the other is already known to be true? EDIT: For completeness, here is the answer I received at Math Overflow when I asked a while back. https://mathoverflow.net/questions/59498/bounded-denominators-for-modular-forms
The unbounded denominators conjecture was proven in the affirmative and is now a theorem. https://arxiv.org/abs/2109.09040 https://www.quantamagazine.org/long-sought-math-proof-unlocks-more-mysterious-modular-forms-20230309/ Basically, the authors proved that there was an upper bound of possible noncongruence modular forms with bounded denominators, and if one existed, it would imply the existence of so many other such forms that it would exceed the upper bound.
|number-theory|modular-forms|
0
Computing the angle between two vectors using the inner product
The question is: Consider $\mathbb{R}^3$ with the inner product $\langle u1,u2\rangle=x_1x_2+3y_1y_2+z_1z_2$ where $u_1=(x_1,y_1,z_1)$ and $u_2=(x_2,y_2,z_2)$ are two vectors in $\mathbb{R}^3$. What is the cosine of the angle between the vectors $v=(2,1,3)$ and $w=(-1,2,-1)$ with respect to the above inner product? I don't even understand what exactly the inner product is. I tried using the formula $\cos(\theta)=\frac{\langle v,w\rangle}{|v||w|}$ but I don't understand how to use the inner product in this case. Any help would be much appreciated; I don't understand what to even substitute in the formula.
I’m answering this question exactly 10 years later, i.e. 31 March 2024, so I’m not sure how relevant it might be, but to anyone who needs it in the future, here is a hint: The numerator can be expressed as $\langle u1,u2 \rangle = x_1x_2 + 3y_1y_2 +z_1z_2$ , which is fairly easy to compute. For the denominator, recollect that the norm induced by the inner product is defined as $||u1|| = \sqrt{\langle u1,u1 \rangle} = \sqrt{x_1^2 + 3y_1^2 + z_1^2 }$ . Use similar steps to calculate the length of the vector $u2$ , based on the newly defined inner product, and then you can plug them into the formula for the angle to obtain the result.
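Carrying the hint through for the given vectors: $\langle v,w\rangle=-2+6-3=1$ , $\lVert v\rVert=\sqrt{16}=4$ , $\lVert w\rVert=\sqrt{14}$ , so $\cos\theta=\frac{1}{4\sqrt{14}}$ . A quick check:

```python
import math

def inner(u, v):
    # the weighted inner product <u, v> = x1*x2 + 3*y1*y2 + z1*z2
    return u[0] * v[0] + 3 * u[1] * v[1] + u[2] * v[2]

def norm(u):
    return math.sqrt(inner(u, u))

v = (2, 1, 3)
w = (-1, 2, -1)

cos_theta = inner(v, w) / (norm(v) * norm(w))
assert abs(cos_theta - 1 / (4 * math.sqrt(14))) < 1e-12
```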
|linear-algebra|
0
dual space of l2 with strange norm
Consider $\displaystyle(\ell_2, \lVert\cdot\rVert_\star), \lVert x\rVert_\star = \sum\limits_{k=1}^{\infty}\frac{|x(k)|}{k}$ . What is its dual space? Is this space reflexive? My idea is to consider $\displaystyle\varphi: (\ell_2, \lVert\cdot\rVert_\star) \rightarrow (\ell_1, \lVert\cdot\rVert_1), \varphi(\{x(k)\}_{k=1}^\infty) = \left\{ \frac{x(k)}{k} \right\}_{k=1}^\infty$ . Then $\lVert\varphi(x)\rVert_1 = \lVert x\rVert_\star$ and $\varphi$ is an isometric isomorphism onto its image. But $\text{Im}\varphi \subsetneq \ell_1$ because, for example, $\displaystyle\left\{\frac{1}{k^{3/2}}\right\} \in \ell_1, \left\{\frac{1}{k^{1/2}}\right\} \notin \ell_2$ . So how does one find the dual space in this case?
As you have noticed, we can identify our space with $X:=\mathrm{Im}(\varphi)$ which is a dense subspace of $\ell^1$ (as $X$ contains the subspace of sequences with finitely many nonzero entries). The identification part gives us that $(\ell^2,\Vert \cdot \Vert_*)^*$ is isomorphic as normed space to $(X, \Vert \cdot \Vert_1)^*$ via the map $$G: (\ell^2,\Vert \cdot \Vert_*)^*\rightarrow (X, \Vert \cdot \Vert_1)^*, f \mapsto f\circ \varphi^{-1}.$$ Using the density we get that $$F: (\ell^1, \Vert \cdot \Vert_1)^*\rightarrow (X, \Vert \cdot \Vert_1)^*$$ yields an isomorphism of normed spaces, where $F$ is the restriction to $X$ , i.e. for $f\in (\ell^1)^*$ $$F(f): X \rightarrow \mathbb{C}, F(f)(x)=f(x).$$ The linear map $F$ is clearly bounded. It is injective (if a continuous function vanishes on a dense subset, then it vanishes everywhere, i.e. $F$ has trivial kernel). Furthermore, $F$ is surjective too as every continuous linear $g: X\rightarrow \mathbb{C}$ can be extended to $\tilde{g}: \ell^1 \rightarrow \mathbb{C}$ by density (a bounded linear functional extends uniquely from a dense subspace to the whole space), and then $F(\tilde{g})=g$.
|sequences-and-series|functional-analysis|normed-spaces|banach-spaces|dual-spaces|
1
Integer solutions to $ax+by -cz = 0$ in relation to Pythagorean triples.
In a paper by Frink (1987), it was shown that if $(x,y,z)$ are solutions to the equation $x^2+y^2 = z^2 +1$ and $(a,b,c)$ are primitive Pythagorean triples i.e, solutions to $a^2 + b^2 = c^2$ and $\gcd(a,b,c) = 1$ (pairwise relatively prime), then: $$(x,y,z) = (x+a, y+ b, z+c) \\ (x', y',z') = (x'+ a, y'+b, z'+c) $$ where $x+ x' = a, y+y' = b, \mbox{and } z+z' =c$ are pairs of solutions to $x^2+y^2 = z^2 +1$ . I tried to prove this algebraically by showing that: \begin{aligned} &(x+a)^2 + (y+b)^2 - (z+c)^2 = 1 \\ &(x^2 + y^2 -z^2) + (a^2+b^2-c^2) + 2(ax+by-cz) = 1 \\ &\implies 2(ax+by-cz) = 0 \end{aligned} I've tried to verify this for some $(x,y,z)$ and $(a,b,c)$ and the last equation does hold. However, I am not certain if this holds in general . My question is how do I show that $(ax+by-cz) = 0$ given the properties provided?
I think what you mean is that if $x^2+ y^2 = z^2 + 1$ and $a^2 + b^2 = c^2$ then $a x + b y = cz$ if and only if $(x+a)^2 + (y+b)^2 = (z+c)^2 + 1$ . This is simply from the fact that $$(x+a)^2 + (y+b)^2 - (z+c)^2 = (x^2 + y^2 - z^2) + 2 (a x + by - cz) + (a^2 + b^2 - c^2)$$
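A small numeric check (with a sample solution of my own choosing, not taken from Frink's paper) illustrates the equivalence:

```python
# Numeric check of the identity with a hand-picked example:
# (x, y, z) = (2, 1, 2) satisfies x^2 + y^2 = z^2 + 1, and (a, b, c) = (3, 4, 5)
# is a primitive Pythagorean triple with a*x + b*y = c*z.
x, y, z = 2, 1, 2
a, b, c = 3, 4, 5

assert x**2 + y**2 == z**2 + 1        # solution of x^2 + y^2 = z^2 + 1
assert a**2 + b**2 == c**2            # Pythagorean triple
assert a*x + b*y - c*z == 0           # the condition in question

# Then the shifted point (5, 5, 7) is again a solution: 25 + 25 = 49 + 1.
assert (x + a)**2 + (y + b)**2 == (z + c)**2 + 1
```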
|algebra-precalculus|elementary-number-theory|
1
Use tensor product to show that $ \mathscr{T}^n=\sum_{j \geq 1} \lambda_j^n e_j \otimes e_j $?
I learned the tensor product before. The definition is Definition 3.4.6 Let $x_1, x_2$ be elements of Hilbert spaces $\mathbb{H}_1$ and $\mathbb{H}_2$ , respectively. The tensor product operator $\left(x_1 \otimes_1 x_2\right): \mathbb{H}_1 \rightarrow \mathbb{H}_2$ is defined by $$ \left(x_1 \otimes_1 x_2\right) y=\left\langle x_1, y\right\rangle_1 x_2 $$ for $y \in \mathbb{H}_1$ . If $\mathbb{H}_1=\mathbb{H}_2$ we use $\otimes$ in lieu of $\otimes_1$ . Suppose that $\mathbb{H}_i=\mathbb{R}^{p_i}, i=1,2$ for some finite positive integers $p_2, p_1$ . Then, $$ x_1 \otimes_1 x_2=x_2 x_1^T $$ for $x_i \in \mathbb{R}^{p_i}, i=1,2$ : namely, $x_1 \otimes_1 x_2$ is the vector outer product of $x_2$ and $x_1$ . Please use my definition to prove that: The representation of a self-adjoint compact operator $\mathcal{T}$ : $$\mathcal{T} = \sum_{j \geq 1} \lambda_j e_j \otimes e_j$$ satisfies $$ \mathscr{T}^n=\sum_{j \geq 1} \lambda_j^n e_j \otimes e_j $$ for any positive integer $n$ : namely, $\left(\lambda_j^n, e_j\right)$ are eigenvalue–eigenvector pairs of $\mathscr{T}^n$.
Your definitions imply : $$ (e_j \otimes e_j)(e_k \otimes e_k) = e_je_j^Te_ke_k^T = e_j \langle e_j,e_k \rangle e_k^T = \delta_{jk} (e_j \otimes e_k), $$ where we assumed that the basis of the Hilbert space is orthonormal in order to write $\langle e_j,e_k \rangle = \delta_{jk}$ , with $\delta_{jk}$ the Kronecker delta. In consequence, one has : $$ \sum_{j\geq1}\sum_{k\geq1} \lambda_j^n\lambda_k (e_j \otimes e_j)(e_k \otimes e_k) = \sum_{j\geq1}\sum_{k\geq1} \lambda_j^n\lambda_k \delta_{jk} (e_j \otimes e_k) = \sum_{j\geq1} \lambda_j^{n+1} (e_j \otimes e_j) = \mathcal{T}^{n+1} $$
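As a sanity check of this computation, here is a small finite-dimensional sketch (my own) in $\mathbb{R}^2$, using the identification $e_j \otimes e_j = e_j e_j^T$ from the question's definition, with a non-standard orthonormal basis so the check is non-trivial:

```python
import math

# Orthonormal basis of R^2 that is not the standard one: e1, e2 rotated by 45°.
s = 1 / math.sqrt(2)
e1, e2 = (s, s), (s, -s)
lam = (2.0, -0.5)   # eigenvalues lambda_1, lambda_2 (arbitrary choices)

def outer(u):
    # e_j ⊗ e_j as the rank-one matrix e_j e_j^T
    return [[u[i] * u[j] for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# T = lam1 * e1⊗e1 + lam2 * e2⊗e2
T = add(scale(lam[0], outer(e1)), scale(lam[1], outer(e2)))

# T^3 computed by matrix multiplication vs. sum_j lam_j^3 * e_j⊗e_j
T3 = matmul(matmul(T, T), T)
expected = add(scale(lam[0]**3, outer(e1)), scale(lam[1]**3, outer(e2)))
```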
|functional-analysis|
1
Help Antie evaluate Gauss curvature of a smooth surface using ruler and a protractor
Antie, a smart ant living on a smooth surface $S$ of $\mathbb{R}^3$ , would like to evaluate the Gauss curvature $K$ at a certain point $P\in S$ . Antie is aware of Gauss Theorema Egregium, according to which Gauss curvature may be evaluated using the first fundamental form of the surface. So, Antie grabs a ruler and a protractor, determines a neighbourhood around $P$ and is ready to start calculating, since it has heard that the first fundamental form is related to lengths and angles around $P$ (words like tangent space do not really make sense to Antie). Antie knows that if it knew some quantities, called $E$ , $F$ , $G$ , then it would be able to evaluate $K$ . But of course Antie is not aware of the way the surface is embedded in space $\mathbb{R}^3$ and its local parametrization $\sigma(u,v)=( \sigma_1(u,v), \sigma_2(u,v),\sigma_3(u,v))$ around $P$ (in order to evaluate $E=\sigma_u\cdot \sigma_u$ etc around $P$ ). How would we help Antie calculate all the quantities needed for Gauss curvature?
Assumption: the “ruler and protractor” has the following functions: Ruler: draw a geodesic between two points, and measure the length of that geodesic. Protractor: given two geodesics intersecting at a point P, measure the angle they make at point P. At least on the plane, these are the correct properties of the ruler and the protractor. I don’t know if they are the correct properties of these instruments on the surface. I don't know a lot of differential geometry, but here might be a feasible approach. Draw a sufficiently small geodesic right triangle $ABC$ around $P$ , say with angle $B$ being 90 degrees and $AB = BC$ , and (very accurately) measure the angles $\angle ABC, \angle ACB, \angle BAC$ . By Gauss–Bonnet, we have $$\int_{\triangle ABC} K dA = \angle ABC + \angle ACB + \angle BAC - \pi.$$ Now, if $K_P$ is the curvature at point $P$ , we have $$\int_{\triangle ABC} K dA = (1 + o(1)) \text{Area}(\triangle ABC) \cdot K_P$$ and $$\text{Area}(\triangle ABC) = \frac{1 + o(1)}{2} AB^2,$$ so $K_P$ can be estimated as the measured angle excess divided by the measured area of the triangle.
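The angle-excess estimate can be sanity-checked on a surface where the answer is known. On the unit sphere ($K = 1$ everywhere) the octant triangle is a geodesic triangle with three right angles, and the excess over area recovers the curvature exactly (a sketch of mine, not part of the original answer):

```python
import math

# Geodesic triangle bounded by two meridians and the equator on the unit sphere:
# one octant, all three angles equal to pi/2.
angles = [math.pi / 2, math.pi / 2, math.pi / 2]

excess = sum(angles) - math.pi      # angle excess = pi/2
area = 4 * math.pi / 8              # one eighth of the sphere's surface = pi/2

K_estimate = excess / area
print(K_estimate)  # 1.0, the true Gauss curvature of the unit sphere
```

Here the $o(1)$ errors vanish because the sphere has constant curvature; on a general surface the triangle must shrink toward $P$.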
|differential-geometry|surfaces|curvature|
1
minimum of $-(\cos k_1 + \cos k_2 + \cos k_3)$
I want the minimum value of the function $$- (\cos k_1 + \cos k_2 + \cos k_3)$$ under the constraint of $$ k_1 + k_2 +k_3 = K , $$ where the constant $K \in [-\pi, \pi]$ . I guess the minimum is achieved at $k_1 = k_2 = k_3 = K/3$ , but find it difficult to prove it. The problem is that $-\cos x$ is not convex on the entire real axis. Here we have three terms. It is conjectured that if we have $N\geq 3$ terms, the minimum is always achieved by $k_1 = k_2 = \ldots = k_N = K/N$ . This problem is relevant to solid state physics. I want to minimize the kinetic energy of a few electrons in a lattice.
Some thoughts. Fact 1. Let $k_1, k_2, k_3 \in \mathbb{R}$ with $k_1 + k_2 + k_3 \in [-\pi, \pi]$ . Then $-\cos k_1 - \cos k_2 - \cos k_3 \ge -3\cos \frac{k_1 + k_2 + k_3}{3}$ . Fact 2. Let $u, v \in [-\pi, \pi]$ and $n\ge 3$ . Then $-n\cos \frac{u}{n} - \cos (u - v) + (n + 1)\cos \frac{v}{n + 1}\ge 0$ . Fact 3. Let $k_1, k_2, \cdots, k_n \in \mathbb{R}$ ( $n \ge 3$ ) with $k_1 + k_2 + \cdots + k_n \in [-\pi, \pi]$ . Then $$-\cos k_1 - \cos k_2 - \cdots - \cos k_n \ge - n\cos \frac{k_1 + k_2 + \cdots + k_n}{n}.$$ Proof of Fact 3. We use mathematical induction. When $n= 3$ , by Fact 1, the statement is true. Assume that the statement is true for $n$ ( $n\ge 3$ ). For $n + 1$ , WLOG, assume that $k_1 + k_2 + \cdots + k_n \in [-\pi, \pi]$ . ( Note : Otherwise, there exists $m\in \mathbb{Z}$ such that $k_1 + k_2 + \cdots + k_n + 2m\pi \in [-\pi, \pi]$ . Let $k_1' = k_1 + 2m\pi, k_{n+1}' = k_{n+1} - 2m\pi$ . Then $-\cos k_1 - \cos k_2 - \cdots - \cos k_{n+1} = -\cos k_1' - \cos k_2 - \cdots - \cos k_n - \cos k_{n+1}'$ , with $k_1' + k_2 + \cdots + k_n \in [-\pi, \pi]$ and the total sum unchanged.)
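Fact 1 can be checked numerically by brute force. The sketch below (my own, not part of the original answer) grids over $(k_1, k_2)$ with $k_3 = K - k_1 - k_2$ and confirms that the grid minimum never beats $-3\cos(K/3)$:

```python
import math

# Brute-force sanity check of Fact 1 for one sample value of K.
K = 2.0
target = -3 * math.cos(K / 3)   # conjectured minimum at k1 = k2 = k3 = K/3

best = float("inf")
N = 200
for i in range(N + 1):
    for j in range(N + 1):
        k1 = -math.pi + 2 * math.pi * i / N
        k2 = -math.pi + 2 * math.pi * j / N
        k3 = K - k1 - k2          # constraint k1 + k2 + k3 = K holds exactly
        best = min(best, -(math.cos(k1) + math.cos(k2) + math.cos(k3)))
```

If Fact 1 holds, `best` can never drop below `target`, and the grid point nearest $(K/3, K/3)$ should bring it close to `target` from above.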
|optimization|maxima-minima|
0
Pedersen's Analysis Now, Proposition 4.5.10: Why is this operator equal to $0$?
Here is an image of the proposition: And highlighted below is the part of the proof I'm having trouble with. Why is the equation true for every $n$? $f_n(T) = \int_{\sigma(T)} f_n \: dE$ , but I don't see how if you take some $x \in \ker(\lambda I-f(T))$ that then $f_n(T)x=0$ . I tried using the fact that: $$\lVert f_n(T)x \rVert^2 = \int_{\sigma(T)} f^2_n \: d\mu_{x,x}$$ but I'm failing to see how this integral is $0$ , given that all I know about it is that $\mu_{x,x}(Y) = \langle E(Y)x,x \rangle$ on measurable subsets of the spectrum.
Note that $f_n = g_n(\lambda - f)$ where $g_n(t) = (|t| \wedge 1)^{1/n}$ . $g_n$ is a continuous function with $g_n(0) = 0$ , so it can be uniformly approximated by a sequence of polynomials $(g_{nm})_{m \in \mathbb{N}}$ with zero constant term. Clearly, when $x \in \mathrm{ker}(\lambda I - f(T))$ , $[g_{nm}(\lambda I - f(T))](x) = 0$ , since $g_{nm}$ is a polynomial with no constant term and you can directly evaluate $g_{nm}(\lambda I - f(T))$ to see that this is the case. As $g_{nm} \to g_n$ uniformly, we see that $g_{nm}(\lambda I - f(T)) \to g_n(\lambda I - f(T)) = f_n(T)$ , so $[f_n(T)](x) = 0$ .
|functional-analysis|analysis|spectral-theory|functional-calculus|
0
Conditions on formula of expectation of sum of infinitely random variables
My book states that it's not always true that: $$E(\sum_{i=1}^{\infty} X_i) = \sum_{i=1}^{\infty} E(X_i)$$ and what makes it not true in general is this equality: $$E(\sum_{i=1}^{\infty} X_i) = E(\lim_{n\to\infty}(\sum_{i=1}^{n}X_i)) \stackrel{?}{=} \lim_{n\to\infty}E(\sum_{i=1}^{n}X_i)$$ Two special cases that this can be true are: $X_i$ are all nonnegative $\sum_{i=1}^{\infty}E(|X_i|)$ is unbounded I have no idea how to prove the relation in these two conditions. What am I missing here ?
This has to do with the ability to swap an integral with an infinite sum, which can happen under certain conditions. I think the main theorem you are missing is: If $X_n:\Omega\to[0,\infty]$ are measurable functions for all $n\in \mathbb{N}$ and $X\left(x\right)=\sum_{n=1}^{\infty}X_{n}\left(x\right)$ for all $x\in \Omega$ then: $$\intop_{\Omega}Xd\mu=\sum_{n=1}^{\infty}\intop_{\Omega}X_{n}d\mu$$ When you look at the definition of expectation, you get the integral. As to the special cases you note: If $X_i$ are all non-negative, then the theorem holds. If the sum of positive elements is unbounded then when we swap order it is still unbounded, meaning $\infty=\infty$ . EDIT For the theorem I'm not sure if it has a name, but it appears in Rudin "Real and complex analysis" as Theorem 1.27. I can show the derivation from the theorem: $$ E(\sum_{n=1}^{\infty}X_n)=\intop_{-\infty}^{\infty}t\left ( \sum_{n=1}^{\infty} X_n(t)\right) dt=\intop_{-\infty}^{\infty} \sum_{n=1}^{\infty}t\cdot X_n(t) dt\overs
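The interchange really can fail when signs are mixed. Here is a classic double-array illustration (my own addition, phrased with plain sums rather than expectations, but the mechanism is the same one that breaks $E\sum = \sum E$):

```python
# Classic example of two infinite sums that disagree when the order is swapped:
#   a(n, k) = +1 if k == n,  -1 if k == n + 1,  0 otherwise.
# Every row sums to 0, yet column 0 sums to 1 and every other column to 0.
def a(n, k):
    if k == n:
        return 1
    if k == n + 1:
        return -1
    return 0

def row_sum(n):
    # exact: each row has only two nonzero entries
    return a(n, n) + a(n, n + 1)

def col_sum(k):
    # exact: each column has at most two nonzero entries
    return a(k, k) + (a(k - 1, k) if k >= 1 else 0)

rows_then_total = sum(row_sum(n) for n in range(1000))   # rows first: 0
cols_then_total = sum(col_sum(k) for k in range(1000))   # columns first: 1
```

With all terms nonnegative (Tonelli) or with $\sum_{n,k}|a(n,k)| < \infty$ (Fubini) this discrepancy cannot happen; here both hypotheses fail.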
|probability|sequences-and-series|probability-theory|multivariate-statistical-analysis|
0
What is gradient? What's the difference between gradient and divergence?
I don't really get what's the difference between them. What does each thing physically and mathematically signify? Aren't both things just a dot product with the del operator?
Mathematically, the gradient is a property of a scalar function $f:\mathbb R^n\to\mathbb R$ , found as $$\mathrm{grad} (f)=\nabla f=\begin{bmatrix}\frac{\partial f}{\partial x^1}\\\frac{\partial f}{\partial x^2}\\ \frac{\partial f}{\partial x^3}\\\vdots\end{bmatrix}.$$ In physical terms you can think of it as the equivalent of the derivative of a function of one variable. It is "the derivative" or "the slope" in higher dimensions, so to speak. For instance, for a function of two variables $f:\mathbb R^2\to\mathbb R$ , which represents a surface when plotted, the gradient is a vector arrow that always points in the steepest direction from any point. The divergence on the other hand is a property of a vector function $V:\mathbb R^n\to\mathbb R^n$ , which more specifically is called a vector field, and is found as $$\mathrm{Div}(\mathbf V)=\nabla \cdot \mathbf V=\frac{\partial V^1}{\partial x^1}+\frac{\partial V^2}{\partial x^2}+\frac{\partial V^3}{\partial x^3}+\cdots$$ Physically, if yo
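A tiny numeric illustration (my own toy example) of the type signatures: the gradient turns a scalar field into a vector field, while the divergence turns a vector field into a scalar field.

```python
# Central finite differences in R^2 for a toy scalar field and vector field.
h = 1e-6

def f(x, y):
    # scalar field f: R^2 -> R
    return x**2 + 3 * y**2

def grad_f(x, y):
    # vector-valued result: points in the steepest-ascent direction
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

def V(x, y):
    # vector field V: R^2 -> R^2
    return (x, y)

def div_V(x, y):
    # scalar-valued result: net "outflow" per unit area
    return ((V(x + h, y)[0] - V(x - h, y)[0]) / (2 * h) +
            (V(x, y + h)[1] - V(x, y - h)[1]) / (2 * h))

print(grad_f(1.0, 2.0))   # ≈ (2, 12), i.e. (2x, 6y) at (1, 2)
print(div_V(1.0, 2.0))    # ≈ 2, since div(x, y) = 1 + 1
```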
|definition|derivatives|vector-fields|calculus|
0
Is solution's group property is a characteristic property of autonomous differential dynamic systems?
That is: suppose a one-parameter differentiable group $\Phi(x,t):\mathbb{R}^d\times \mathbb{R}\to\mathbb{R}^d$ satisfies $$\Phi(\Phi(x_0,t_1),t_2)=\Phi(x_0,t_1+t_2),\qquad \Phi(x_0,0)=x_0$$ for all $x_0\in\mathbb{R}^d$ and $t_1,t_2\in\mathbb{R}$. Does there then exist a suitable function $F$ on $\mathbb{R}^d$ s.t. $\dot{\Phi}(x(0),t)=F(\Phi(x(0),t)),\ x(0)=x$ ?
The notation may become tricky, that is why we will lighten it by remarking that $x(t) = \Phi(x_0,t)$ , where $x_0 = x(0)$ plays the role of an initial condition. Moreover, it allows us to eliminate the time-dependence of the flow $\Phi$ with respect to its first variable, since $x_0$ is a constant vector. In consequence ${}^1$ , we can write $\dot{x}(t) = \dot{\Phi}(x_0,t) = F(\Phi(x_0,t)) = F(x(t))$ . Then, the following "trick" allows us to "linearize" this differential equation : $\dot{x} = F(x) = (F(x)\cdot\nabla_x)x$ , which is solved formally ${}^2$ by $x(t) = \left.e^{tF(x)\cdot\nabla_x}x\right|_{x=x_0}$ . But, $x(t) = \Phi(x_0,t)$ at the same time, from which one concludes $F(x) = \dot{x}(0) = \left.\dot{\Phi}(x,t)\right|_{t=0}$ in the end. ${}^1$ N.B. : otherwise, without the aforementioned time-independence in the first variable, we would have $\dot{x}(t) = \frac{\mathrm{d}}{\mathrm{d}t}\Phi(x(t),t) = \nabla_x\Phi(x(t),t)\dot{x}(t) + \partial_t\Phi(x(t),t)$ , which would be a more complicated equation to handle.
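A concrete sketch with a known flow may help; here $\Phi(x,t)=xe^t$ is the flow of $\dot x = x$, and the recipe $F(x)=\left.\dot\Phi(x,t)\right|_{t=0}$ recovers $F(x)=x$ numerically (my own example, not part of the answer above):

```python
import math

def Phi(x, t):
    # flow of the ODE xdot = x
    return x * math.exp(t)

# group property: Phi(Phi(x, t1), t2) == Phi(x, t1 + t2)
x0, t1, t2 = 1.7, 0.3, -0.8
assert abs(Phi(Phi(x0, t1), t2) - Phi(x0, t1 + t2)) < 1e-12

# numeric time-derivative at t = 0 recovers the vector field F(x) = x
h = 1e-6
def F(x):
    return (Phi(x, h) - Phi(x, -h)) / (2 * h)

print(F(1.7))  # ≈ 1.7
```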
|real-analysis|calculus|ordinary-differential-equations|dynamical-systems|
0
defective renewal equation
I am reading this paper of Lin and Willmot. I don't understand how they come up with formula $2.7$ and why $\tilde{\phi}(s)=\frac{\tilde{H}(s)}{1+\beta-\tilde{g}(s)}$ . Can someone help me? So I want to show that $$\phi(u)=\frac{1}{1+\beta}\int_0^u\phi(u-x)dG(x)+\frac{1}{1+\beta}H(u)$$ is the same as $$\phi(u)=\frac{1}{\beta}\int_0^uH(u-x)dK(x)+\frac{1}{1+\beta}H(u)$$ where $\beta>0, G(x)=1-\overline{G}(x)$ is a distribution function with $G(0)=0$ and $H(u)$ continuous for $u\geq 0$ and $K(u)=1-\overline{K}(u)$ with $\overline{K}(u)=\sum_{n=1}^\infty\frac{\beta}{1+\beta}\left(\frac{1}{1+\beta}\right)^n\overline{G}^{*n}(u)$ . $f^{*n}(x)$ is the $n$-fold convolution of $f$ . Can someone explain to me why they are the same?
The Laplace transform of a convolution is the product of its individual Laplace transforms. Therefore $$\mathcal{L}f^{\ast n}=(\mathcal{L}f)^n$$ Especially if $F$ is differentiable with $F'=f$ , you find $$\mathcal{L}f(s)=\int_0^\infty e^{-su}f(u)du=\int_0^\infty e^{-su} dF(u).$$ Recall, that $\bar G^{\ast n}=1-G^{\ast n}$ for $n\in\mathbb{N}$ , since it is defined as the tail of the $n$ -convolution of $G$ . Moreover, it holds $G^{\ast n}(0)=0$ for $n\in\mathbb{N}$ . (The special case $n=1$ is covered by $G(0)=0$ .) It holds $$ \begin{aligned} \frac{\beta}{1+\beta}+\bar K(0) &=\frac{\beta}{1+\beta}+\frac{\beta}{1+\beta}\sum_{n=1}^\infty\big(\frac{1}{1+\beta}\big)^n(1-G^{\ast n}(0))\\ &=\frac{\beta}{1+\beta}+\frac{\beta}{1+\beta}\sum_{n=1}^\infty\big(\frac{1}{1+\beta}\big)^n\\ &=\frac{\beta}{1+\beta}\sum_{n=0}^\infty\big(\frac{1}{1+\beta}\big)^n\\ &=\frac{\beta}{1+\beta}\frac{1}{1-\frac{1}{1+\beta}}\\ &=1, \end{aligned} $$ such that $$K(0)=1-\bar K(0)=\frac{\beta}{1+\beta}.$$ Moreover,
|integration|probability-theory|laplace-transform|convolution|
1
Is L-Smoothness Equivalent to Globally Lipschitz Gradient without Convexity
The problem I am having: Suppose $f:\mathbb E\to \mathbb{\bar R}$ , mapping from some Euclidean space to the extended reals, with a well-defined gradient everywhere, and it has the smoothness property: $$ \exists L > 0 \text{ s.t: } |f(y) - f(x) - \langle \nabla f(x), y - x\rangle| \le \frac{L}{2}\Vert x - y\Vert^2 \quad \forall x, y\in \mathbb E. $$ Does it then follow that the function's gradient is globally Lipschitz? For generality, we may assume $\Vert \cdot \Vert$ is any norm, but proving it only for the Euclidean norm suffices for the argument. For context, I am just curious because the converse of the statement has a simple proof, but this statement does not; I assume smoothness and then get a globally Lipschitz gradient without a convexity assumption, and that makes me uneasy. What I have tried: simple algebraic manipulations, thinking that taking a supremum or the Cauchy–Schwarz inequality might help; Googling, where some UCLA lecture slides are what I found.
I encountered a very similar and related question, namely whether the following conditions are equivalent to each other: (1) $f$ is $L$-smooth ( $\|\nabla f(x) - \nabla f(y)\| \leq L \| x-y \|$ ) (2) $|D_f(x,y)| \leq \frac{L}{2}\|x-y\|^2$ (3) $\|\nabla^2 f\| \leq L$ . Unless otherwise stated, we consider the Euclidean norm for both vectors and matrices. Here we define the Bregman distance $D_f(x,y) = f(x) - f(y) - \langle \nabla f(y), x-y\rangle$ . (1) $\iff$ (3) can be found in Lemma 1.2.2 in (Nesterov 2018). (1) $\implies$ (2) can be found in Lemma 1.2.3 in (Nesterov 2018). Then I found that Lemma 8 of smooth convex generalizations shows that $|D_f (x,y)| \leq \frac{L}{2}\|x-y\|^2 \iff |z^\mathsf{T} \nabla^2 f(x) z| \leq L \|z\|^2$ (for any vector $z$ ), which means (2) $\iff$ (3). So now, I wonder whether it holds that (1) $\iff$ (2) $\iff$ (3)? At last, I think this question is asking whether (1) $\iff$ (2). Nesterov, Y. (2018). Lectures on convex optimization (Vol. 137, pp. 5-9). Berlin: Springer.
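For what it's worth, the direction (1) $\implies$ (2) can be sanity-checked numerically. The sketch below (my own) uses $f(x)=\sin x$, which is $L$-smooth with $L=1$ since $|f''|\le 1$, and verifies the Bregman bound at random pairs of points:

```python
import math
import random

# Check |f(y) - f(x) - f'(x)(y - x)| <= (L/2)(y - x)^2 for f = sin, L = 1.
random.seed(0)
L = 1.0
for _ in range(1000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    lhs = abs(math.sin(y) - math.sin(x) - math.cos(x) * (y - x))
    assert lhs <= (L / 2) * (y - x) ** 2 + 1e-12
```

This is exactly the Taylor-remainder bound: $|f(y)-f(x)-f'(x)(y-x)| = \frac{|f''(\xi)|}{2}(y-x)^2 \le \frac{L}{2}(y-x)^2$ for some $\xi$ between $x$ and $y$.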
|real-analysis|optimization|convex-optimization|lipschitz-functions|gradient-descent|
0
Solve $\sqrt{\dfrac{a}{x}}-\sqrt{\dfrac{x}{a}}=\dfrac{a^2-1}{a}$
Solve $\sqrt{\dfrac{a}{x}}-\sqrt{\dfrac{x}{a}}=\dfrac{a^2-1}{a}$ Let $u^2=\dfrac{a}{x}$ : $\Rightarrow \sqrt{u^2}-\sqrt{\dfrac{1}{u^2}}=\dfrac{a^2-1}{a} \tag{1}$ $\Rightarrow u-\dfrac{1}{u}=\dfrac{a^2-1}{a} \tag{2}$ $\Rightarrow a(u^2-1)=u(a^2-1) \tag{3}$ $\Rightarrow au^2-a^2u=a-u \tag{4}$ $\Rightarrow au(u-a)=a-u \tag{5}$ $\Rightarrow -au(a-u)=a-u \tag{6}$ $\Rightarrow -au=1 \tag{7}$ $\Rightarrow u=\dfrac{-1}{a} \tag{8}$ Therefore $\dfrac{1}{a^2}=\dfrac{a}{x} \Rightarrow x=a^3$ . However $x=\dfrac{1}{a}$ is given as another answer, but I don't see how it's possible to derive this solution from my calculations.
Is the $u$ substitution needed? Consider the following: $\sqrt{\frac{a}{x}}-\sqrt{\frac{x}{a}}=\frac{a^{2}-1}{a}$ Now, re-write the RHS as $a-\frac{1}{a}$ and square both sides (this is the part that I'm not totally sure is valid) to give: $\frac{a}{x}+\frac{x}{a}-2=a^{2}+\frac{1}{a^{2}}-2$ $\frac{a^{2}+x^{2}}{x}=\frac{a^{4}+1}{a}$ $a x^{2}-(a^{4}+1)x+a^{3}=0$ $(x-a^{3})(ax-1)=0$ $x=a^{3}$ or $x=\frac{1}{a}$
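Squaring can indeed introduce an extraneous root, and the original equation's domain ($u=\sqrt{a/x}>0$) decides which of the two candidates survives for a given sign of $a$. A quick numeric check (my own addition):

```python
import math

# Original equation: sqrt(a/x) - sqrt(x/a) = (a^2 - 1)/a, requires a/x > 0.
def lhs(a, x):
    return math.sqrt(a / x) - math.sqrt(x / a)

def rhs(a):
    return (a * a - 1) / a

a = 2.0
assert abs(lhs(a, 1 / a) - rhs(a)) < 1e-12   # x = 1/a satisfies it for a = 2
assert abs(lhs(a, a**3) + rhs(a)) < 1e-12    # x = a^3 gives the NEGATIVE of the RHS

a = -2.0
assert abs(lhs(a, a**3) - rhs(a)) < 1e-12    # for a = -2, x = a^3 is the one that works
```

So the squared equation keeps both roots, and which one solves the unsquared equation depends on the sign of $a$; this is the step where a sign check is needed.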
|algebra-precalculus|
0