title: string
question_body: string
answer_body: string
tags: string
accepted: int64
Exact vs. approximated form of dipole susceptibility for stimulated emission in laser
Stimulated emission in a laser can be described by classical oscillator theory (A. E. Siegman, Lasers, ch. 2, University Science Books, 1986). According to Siegman, the overall susceptibility is $$\tilde{\chi}={Ne^2\over m\epsilon}{1\over \omega_a^2-\omega^2+j\omega\Delta\omega_a}$$ The exact form of the imaginary part: $${\chi}_e={-Ne^2\over m\epsilon}{j\omega\Delta\omega_a\over (\omega_a^2-\omega^2)^2+\omega^2{\Delta\omega_a}^2}$$ When $\omega\approx \omega_a$, we have $\omega_a^2-\omega^2 \approx 2\omega_a(\omega_a-\omega)$, so the exact form of the imaginary part can be approximated by: $${\chi}_a={-Ne^2\over m\epsilon}{j\omega_a\Delta\omega_a\over 4\omega_a^2(\omega_a-\omega)^2+\omega_a^2{\Delta\omega_a}^2}$$ In exercise 5 of Section 2.4, the author asks how far $\omega$ can deviate from $\omega_a$ before $${\left\vert {\chi}_e - {\chi}_a\right\vert\over {\chi}_e}=0.1$$ If I do it in a straightforward way, I get a fourth-order polynomial in $\omega$, which doesn't seem
Well, expanding in a one-term Taylor series in $\omega$ centered at $\omega_a$, we have (the cases make the absolute value pick a sign) \begin{align*} \omega &> \omega_a &&: & \frac{|\chi_e - \chi_a|}{\chi_e} &\approx \frac{\omega - \omega_a}{\omega_a} \\ \omega &< \omega_a &&: & \frac{|\chi_e - \chi_a|}{\chi_e} &\approx -\frac{\omega - \omega_a}{\omega_a} \end{align*} From these, the range of $\omega$ is $[0.9, 1.1]\omega_a$. (The sign cases arise because $\chi_e$ in the denominator is negative.) It is far more common to write the relative error as $$ \left| \frac{\chi_e - \chi_a}{\chi_e} \right| = \left| 1 - \frac{\chi_a}{\chi_e} \right| $$ to correctly deal with negative and nonzero exact values. See the brief discussion of "relative error" at https://en.wikipedia.org/wiki/Approximation_error#Formal_definition. Expanding to two terms, \begin{align*} \omega &> \omega_a &&: & \frac{|\chi_e - \chi_a|}{\chi_e} &\approx -\frac{\omega - \omega_a}{\omega_a} + \frac{4 (\omega - \omega_a)^3}{\omega_a \Delta\omega_a^2} \\ \omega &< \omega_a &&: & \frac{|\chi_e - \chi_a|}{\chi_e} &\approx \frac{\omega - \omega_a}{\omega_a} - \frac{4 (\omega - \omega_a)^3}{\omega_a \Delta\omega_a^2} \end{align*} We could solve this cubic, but determining which root is
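As a numerical sanity check of the leading-order behaviour, one can evaluate the exact and approximate imaginary parts directly. This is only a sketch: the values of $\omega_a$ and $\Delta\omega_a$ are arbitrary illustrative choices, and the common prefactor $Ne^2/m\epsilon$ is dropped since it cancels in the ratio.

```python
# imaginary parts of the exact and approximate susceptibilities, with the
# common prefactor N e^2 / (m eps) dropped since it cancels in the ratio
def chi_exact(w, wa, dw):
    return -w * dw / ((wa ** 2 - w ** 2) ** 2 + w ** 2 * dw ** 2)

def chi_approx(w, wa, dw):
    # uses wa^2 - w^2 ~ 2 wa (wa - w) near resonance
    return -wa * dw / (4 * wa ** 2 * (wa - w) ** 2 + wa ** 2 * dw ** 2)

def rel_err(w, wa, dw):
    return abs(1 - chi_approx(w, wa, dw) / chi_exact(w, wa, dw))

wa, dw = 1.0, 0.5  # illustrative values
for delta in (1e-3, 1e-2):
    # to leading order, rel_err should be close to |w - wa| / wa = delta
    print(delta, rel_err(wa * (1 + delta), wa, dw))
```

For small detuning the printed relative error tracks $|\omega-\omega_a|/\omega_a$, consistent with the one-term expansion above.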
|polynomials|physics|
0
Bijection and outer measure of two sets
Suppose I have two sets with the same cardinality, which I believe means that there exists a bijection between them (kind of like an isomorphism, though I don't know if I can call it that, since sets need not be groups). E.g., consider the set $(0,1)$ and the set of irrationals in $(0,1)$: since they have the same cardinality, will they have the same outer measure? (I know they do, but is this a coincidence, or actually due to the bijection?)
In general, two sets with the same cardinality do not need to have the same outer measure. For example, all non-empty open intervals in $\mathbb{R}$ have the same cardinality, but intervals $(a, b)$ and $(c, d)$ have the same outer measure if and only if $b-a = d-c$ . Another important example is that the Cantor set has the same cardinality as $\mathbb{R}$ , but has (outer) measure $0$ . In your specific example of $(0, 1)$ and $(0, 1) \setminus \mathbb{Q}$ , these do have the same outer measure, but it isn't just because there is a bijection between them.
|abstract-algebra|measure-theory|lebesgue-measure|
1
Question on an example in Milne's ANT about factorisation & ramification
At the end of page 65 of Milne's ANT notes ( https://www.jmilne.org/math/CourseNotes/ANT.pdf ), he claims $X^4+X^3+X^2+X+1\equiv (X+4)^4$ (mod 5) to be obvious. I understand that the LHS is Eisenstein at 5 after substituting $X=Y-1$, and that if $f$ is Eisenstein at $p$, then $p$ ramifies totally in $\mathbb{Q}[\alpha]$ for $\alpha$ a root of $f$. However, this proposition appears in the section after this example, so I think there is another way of seeing this equality quickly? Also, why should the factor be $X+4$? I would imagine he treated $X+4$ as $X-1$ and tried to connect with roots of unity.
$$\frac{X^5-1}{X-1}\equiv_5 \frac{(X-1)^5}{X-1}=(X-1)^4\equiv_5(X+4)^4,$$ since $-1\equiv_5 4$. The equality $(X-1)^5\equiv_5 X^5+(-1)^5\equiv_5 X^5-1$ follows from the binomial theorem: $$(a+b)^p=\sum_{i=0}^{p}\binom{p}{i}a^ib^{p-i}\equiv_p a^p+b^p,$$ since $\binom{p}{i}\equiv_p 0$ for $i\neq 0, p$.
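The claimed congruence is also easy to check by direct expansion; here is a small sketch (polynomial coefficients are listed lowest degree first):

```python
def polymul(p, q, m):
    # multiply polynomials with coefficients reduced mod m (lowest degree first)
    out = [0] * (len(p) + len(q) - 1)
    for i, pa in enumerate(p):
        for j, qb in enumerate(q):
            out[i + j] = (out[i + j] + pa * qb) % m
    return out

r = [1]
for _ in range(4):           # compute (X + 4)^4 mod 5
    r = polymul(r, [4, 1], 5)
print(r)  # [1, 1, 1, 1, 1], i.e. X^4 + X^3 + X^2 + X + 1
```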
|algebraic-number-theory|ramification|
1
Solve a linear system with the element-wise constraint $|a|=1$
I have the following equation $$a=a_0+Mx,$$ where $a$, $a_0$, and $x$ are $N\times 1$ complex vectors, and $M$ is a (non-invertible) $N\times N$ complex matrix. For a given $a_0$ and $M$, I want to find a vector $x$ such that the absolute value of each element of $a$ is equal to 1. Is there a nice algebraic way to find such a solution, or is brute force the only way? This comes from a wave propagation problem, where some output $b$ is related to some input $a$ via a linear relation $b=ma$, with $m$ a wide (dimension-reducing) matrix. I want to find an input $a$ that gives me some target output $b_0$. The solutions to this are $a=m^{-1}b_0+(I-m^{-1}m)x$ for any $x$ ($m^{-1}$ is the pseudoinverse). However, in my application I do not have control over the amplitude of the input, only the phase. Therefore I want to find a solution $a$ whose elements all have absolute value 1. I stated the problem above in the simpler form, with $a_0=m^{-1}b_0$ and $M=I-m^{-1}m$.
This can be expressed as an instance of quadratically constrained quadratic programming (QCQP), so one approach would be to try to solve it with an off-the-shelf QCQP solver. In particular, let $x=t+iu$ and $a=b+ic$, where $t,u,b,c$ are $N \times 1$ real vectors. Then you can express $b,c$ as a linear function of $t,u$. Now your equation amounts to the requirement that $b_j^2 + c_j^2 = 1$ for all $j$. These are quadratic constraints on the unknowns $t,u$. Therefore, you can use an off-the-shelf QCQP solver to search for a feasible solution to these constraints. (For instance, define an objective function $\Phi$ that is always zero, and ask the QCQP solver to minimize $\Phi$ subject to these constraints.) Of course, QCQP is NP-hard, so solvers have running time that is at least exponential in the worst case. So there is probably only hope for success if $N$ is not too large. I believe your problem is NP-hard even when all of $a_0,M,x$ are real-valued. Therefore, you shouldn't expect a so
|linear-algebra|matrices|systems-of-equations|matrix-equations|
1
Arithmetic sequence in coefficients of $(x+y)^n$
For which $n$ do the coefficients of the second, third and fourth terms of the binomial expansion of $(x+y)^n$ form an arithmetic sequence?
From $2\dbinom n2=\dbinom n1+\dbinom n3$ we get $$n(n-1)=n+\frac{n(n-1)(n-2)}6$$ Since $n\neq 0$ , $$6n-6=6+n^2-3n+2$$ $$n^2-9n+14=0$$ so $n=2$ or $n=7$ . For $n=2$ we don't have a 'fourth term'. Thus, $n=7$ .
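A brute-force search over small $n$ confirms that $n=7$ is the only admissible value; the search starts at $n=3$ so that the fourth term actually exists:

```python
from math import comb

# require 2*C(n,2) == C(n,1) + C(n,3); start at n = 3 so a fourth term exists
hits = [n for n in range(3, 50) if 2 * comb(n, 2) == comb(n, 1) + comb(n, 3)]
print(hits)  # [7]
```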
|sequences-and-series|combinatorics|binomial-theorem|
0
index of the subfactor
If $N\subset M$ is an inclusion of type II$_1$ factors, Jones defined the index $[M:N]$. If $N$ is a type II subfactor of a type III factor $M$, how does one define $[M:N]$?
In general, the index of an inclusion of factors depends on the conditional expectation (if one exists). There is the notion of minimal index, but this does not necessarily coincide with the Jones index for $\mathrm{II}_1$ factor inclusions. In your case, if $E\colon M\to N$ is a normal conditional expectation, then $$ \mathrm{Ind}(E)=\inf\{\lambda\geq 0\mid \lambda E(x)\geq x\text{ for all }x\in M_+\}. $$ This version of the index is called the Pimsner-Popa index. There is another definition that relies on spatial derivatives and operator-valued weights, which coincides with the Pimsner-Popa index in this case; however, it is more complicated to spell out. A detailed account of index theory beyond the type $\mathrm{II}_1$ case is given in these lecture notes by Kosaki: https://pages.uoregon.edu/njp/lec-f.pdf .
|operator-algebras|von-neumann-algebras|
1
Prove that there exists at least one $x_0\in\mathbb{R}$, such that $f(x_0)+f''(x_0)=0$
Let $f:\mathbb{R}\to\mathbb{R}$ be a twice-differentiable function with $\left|f(x)\right|\leq1$ for all $x\in\mathbb{R}$ and $f^2(0)+\left(f'(0)\right)^2=4$. Prove that there exists at least one $x_0\in\mathbb{R}$ such that $f(x_0)+f''(x_0)=0$. I have tried the usual Rolle-type approach with $e^x$, but didn't get far... Any help available?
Consider the interval $[-2,0]$. By the LMVT, $$|f'(c)|=\left|\frac{f(0)-f(-2)}{0-(-2)}\right|\leq\frac{|f(0)|+|f(-2)|}{2}\leq 1$$ for some $c\in(-2,0)$, so $$g(c)=(f(c))^2+(f'(c))^2\leq 1+1,$$ i.e. $g(c)\leq 2$. Similarly, on the interval $[0,2]$ we have $g(d)\leq 2$ for some $d\in (0,2)$. Since $g(0)=4 > 2 \geq \max(g(c),g(d))$, the maximum of $g$ on $[c,d]$ is attained at an interior point $x_0\in(c,d)$. So $g(x_0)\geq 4$, and since $g$ is differentiable we must have $g'(x_0)=0$: $$g'(x_0)=2f'(x_0)(f(x_0)+f''(x_0))=0.$$ We can't have $f'(x_0)=0$, because in that case $g(x_0)=(f(x_0))^2+(f'(x_0))^2\leq 1+0=1$, but $g(x_0)\geq 4$. So the only option is $f(x_0)+f''(x_0)=0$ for some $x_0\in(-2,2)$.
|calculus|functions|inequality|
0
Trying to simplify the following boolean expression
Simplify: $y \times [x + (x' \times y)]$ My attempt: $y \times [(x + x') \times (x + y)] \quad \textit{First Distributive Axiom}$ $y \times [1 \times (x + y)] \quad \textit{First Inverse Axiom}$ For the next steps I am unfortunately stuck again. Axioms available:
Solution: $y \times [(x + x') \times (x + y)] \quad \textit{First Distributive Axiom} $ $y \times [1 \times (x + y)] \quad \textit{First Inverse Axiom} $ $y(x+y) \quad \textit{Identity Axiom} $ $yx + yy \quad \textit{Distributive Axiom} $ $(yx) + y \quad \textit{Idempotent Axiom} $ $y + (yx) \quad \textit{Commutative Axiom} $ $y \quad \textit{Absorption Axiom} $
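One can also verify the final result $y \times [x + (x'y)] = y$ exhaustively over the two-element Boolean algebra; a small truth-table sketch:

```python
from itertools import product

def lhs(x, y):
    # y * [x + (x' * y)]
    return y and (x or ((not x) and y))

# check against the claimed simplification y on all four input combinations
assert all(lhs(x, y) == y for x, y in product([False, True], repeat=2))
print("y * [x + (x'y)] simplifies to y on all inputs")
```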
|logic|boolean-algebra|boolean|
0
Global minimization of finite Laurent series (rational function) in $\mathbb{R}^+$
Context: I am creating a special ballistics simulation with nonstandard physics, and I have reached a step where I need to find the global minimum of a rational function very fast to be able to run the simulation interactively in real-time. I have documented my research and implementation in C#: it can be found on GitHub . Goal: Find the global minimum of a univariate, nonnegative rational function of the form $$\left\| v(T)\right\|^2:= \left\| \frac{x(T)}{T} \right\|^2=\frac{x(T)\cdot x(T)}{T^2}$$ for some Taylor polynomial with vector "coefficients" $$x(T):=\sum_{k=0}^{n} \frac{T^k}{k!}\vec{a_k}$$ with $x(T)$ and therefore $a_k$ of arbitrary dimension. I am interested in the domain of $\mathbb{R}_{>0}$ . As you notice, this function is strictly nonnegative on this interval. My first thought, which is a standard approach for optimization, is to use calculus to take the derivative and set it to $0$ to find critical points and test for the minimum (the function is differentiable on $\ma
There are many methods for numerical optimization, and I would suggest you experiment with them. One standard approach is gradient descent; another is Newton's method. Let $$f(T) = \left\lVert{x(T) \over T}\right\rVert^2.$$ Gradient descent requires the ability to evaluate $f(T)$ and the first derivative $f'(T)$. Notice that we can obtain an algebraic expression for $f'(T)$: $$f(T) = \sum_j \left({x_j(T) \over T}\right)^2,$$ so $$\begin{align*}f'(T) &= \sum_j 2 {x_j(T) \over T} \left({x'_j(T) \over T} - {x_j(T) \over T^2}\right)\\ &= 2 {x(T) \cdot x'(T) \over T^2} - 2 {\lVert x(T)\rVert^2 \over T^3}, \end{align*}$$ where $$x'(T) = \sum_{k=1}^n {T^{k-1} \over (k-1)!} \vec{a}_k.$$ Therefore, you can evaluate $f(T)$ and $f'(T)$ using $2n$ vector-sums and $3$ dot-products. This is enough to use gradient descent. If you expand the calculation a bit further, you can compute the second derivative $f''(T)$ and then use Newton's method. The advantage of Newton's method is that
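The formulas above can be sketched in Python. This is purely illustrative: the coefficient vectors $a_k$ are made up, and the plain fixed-step gradient descent would in practice be replaced by a line search or Newton's method as discussed.

```python
import math

# illustrative Taylor "coefficient" vectors a_k (2-D here; any dimension works)
a = [(1.0, -2.0), (0.5, 1.0), (-0.3, 0.2), (0.1, 0.05)]

def x_of(T):
    # x(T) = sum_k T^k / k! * a_k
    out = [0.0] * len(a[0])
    for k, ak in enumerate(a):
        c = T ** k / math.factorial(k)
        out = [o + c * v for o, v in zip(out, ak)]
    return out

def xp_of(T):
    # x'(T) = sum_{k>=1} T^(k-1) / (k-1)! * a_k
    out = [0.0] * len(a[0])
    for k, ak in enumerate(a):
        if k >= 1:
            c = T ** (k - 1) / math.factorial(k - 1)
            out = [o + c * v for o, v in zip(out, ak)]
    return out

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def f(T):
    x = x_of(T)
    return dot(x, x) / T ** 2

def fprime(T):
    # f'(T) = 2 x.x' / T^2 - 2 |x|^2 / T^3, as derived above
    x, xp = x_of(T), xp_of(T)
    return 2 * dot(x, xp) / T ** 2 - 2 * dot(x, x) / T ** 3

# plain fixed-step gradient descent on (0, inf)
T, step = 1.0, 0.05
for _ in range(2000):
    T = max(1e-6, T - step * fprime(T))

print(T, f(T))  # T is now near a stationary point of f for these coefficients
```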
|optimization|rational-functions|
0
Trying to simplify the following boolean expression but stuck, and new to this
Expression: $[x + (yz)](x' + z)$ My attempt: $[x + (yz)](x' + z)$ $(x + y)(x + z)(x' + z) \quad \textit{Distributive law}$ The answer is supposed to be: $(x + y) \times z$ N.B. I am still new to this and learning, I am using the following book: Discrete Mathematics for Computing / Edition 3 by Peter Grossman
$ [x+(yz)](x′+z) $ distributive law => $ (x+y)(x+z)(x′+z) $ commutative law => $ (x+y)(z+x)(z+x') $ distributive law => $ (x+y)(z+(xx')) $ complement law => $ (x+y)(z+0) $ identity law => $ (x+y)z $ (Laws from: Laws of Boolean Algebra (electronics-tutorials.ws))
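The chain of laws above can be double-checked by exhausting the truth table, e.g.:

```python
from itertools import product

for x, y, z in product([False, True], repeat=3):
    lhs = (x or (y and z)) and ((not x) or z)   # [x + (yz)](x' + z)
    rhs = (x or y) and z                        # (x + y) z
    assert lhs == rhs
print("simplification verified on all 8 inputs")
```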
|logic|boolean-algebra|boolean|
1
The expected max inner product between a random vector and the sum of random vectors
Given: Fix $n, k \in \mathbb{N}$; we generate $n$ random vectors $X_i \in \mathbb{R}^k$ with $\|X_i\|_2 = 1$. We want to compute the expected max Euclidean inner product between a random vector and the sum of the random vectors, i.e., $$ \mathbb E \max_{i \in [n]} \langle X_i, \sum_{j \in [n]} X_j \rangle. $$ What I have tried: When $n = 2$, $\langle X_1, X_1 + X_2 \rangle = \langle X_2, X_1 + X_2 \rangle = 1 + \langle X_1, X_2 \rangle$; $\mathbb E \langle X_1, X_2 \rangle = 0$ by symmetry, and thus $\mathbb E \max_{i \in [n]} \langle X_i, \sum_{j \in [n]} X_j \rangle = 1$. When $n \geq 3$, it becomes unclear to me. Intuitively it might get larger as $n$ increases, since we have more options, and would approach some limit, possibly related to $\| \sum_{j \in [n]} X_j \|$. Simulations (1,000,000 trials) with $n = 3$ and $k = 2$ give $1.5319441791547532 \pm 0.7104635978561801$. Simulations (1,000,000 trials) with $n = 4$ and $k = 2$ give $1.6632600316282025 \pm 0.9042611238
Large $n$ asymptotics: Let $S$ be the position of a random walk whose steps $X_j$ are uniformly distributed unit vectors, $$\tag{1} S=\sum\limits_{j=1}^nX_j $$ Then by the multivariate CLT the distribution of $S$ as $n\to\infty$ is multivariate Gaussian, $$\tag{2} f_S(s)=\frac{\exp\left(-s^{\top}M^{-1}s/2 \right)}{\sqrt{(2\pi)^k|M|}} $$ where $M$ is the covariance matrix, $|M|$ is its determinant, and $s \in \mathbb{R}^k$. By symmetry we have for the components $X_j^m$ of each vector $X_j$ $$\tag{3} \mathbb{E}(X_j^mX_j^{m'})=\alpha\delta_{mm'} $$ where $\delta$ is the Kronecker delta and $\alpha$ is a constant. Using (3) we have $M=\alpha\mathbb{I}$. To find the radial distribution of $r=|s|$ we integrate (2) over the hypersphere $S_{k-1}(r)$, yielding $$\tag{4} f_R(r)=\frac{2\pi^{k/2}r^{k-1}}{\Gamma(k/2)}\frac{\exp\left(-r^2/2\alpha \right)}{\sqrt{(2\pi\alpha)^k}} $$ To determine $\alpha$ we use $\mathbb{E}(X_i\cdot X_j)=\delta_{ij}$ with (1) to show $\mathbb{E}(S^2)=n$, then compare to $\m
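The question's Monte Carlo figures are easy to reproduce; this sketch assumes, as above, that the $X_i$ are independent and uniform on the unit circle ($k=2$), and uses fewer trials than the question for speed:

```python
import math
import random

def max_inner(n, rng):
    # draw n independent uniform unit vectors in R^2 and return
    # max_i <X_i, S>, where S is their sum
    vs = []
    for _ in range(n):
        t = rng.uniform(0.0, 2.0 * math.pi)
        vs.append((math.cos(t), math.sin(t)))
    sx = sum(v[0] for v in vs)
    sy = sum(v[1] for v in vs)
    return max(v[0] * sx + v[1] * sy for v in vs)

rng = random.Random(0)
trials = 100_000
est = sum(max_inner(3, rng) for _ in range(trials)) / trials
print(est)  # close to the question's 1.53 for n = 3, k = 2
```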
|probability|random-variables|expected-value|
1
Left adjoint of an additive functor between triangulated categories that commute with shift
Let $\mathcal S, \mathcal T$ be triangulated categories and $R: \mathcal S \to \mathcal T$ be an additive functor that commutes with the shift. If $L:\mathcal T \to \mathcal S$ is a left adjoint to $R$, must $L$ also commute with the shift? (Note that $L$ must be an additive functor; see "Are adjoint functors between additive categories additive?")
Let $X$ be an object of $\mathcal{T}$ . There is a sequence of isomorphisms of functors \begin{align} \operatorname{Hom}_{\mathcal{S}}(L(X[n]),?) &\cong\operatorname{Hom}_{\mathcal{T}}(X[n],R?)\\ &\cong\operatorname{Hom}_{\mathcal{T}}(X,(R?)[-n])\\ &\cong\operatorname{Hom}_{\mathcal{T}}(X,R(?[-n]))\\ &\cong\operatorname{Hom}_{\mathcal{S}}(LX,?[-n])\\ &\cong\operatorname{Hom}_{\mathcal{S}}((LX)[n],?). \end{align} So by Yoneda's Lemma, $L(X[n])\cong (LX)[n]$.
|adjoint-functors|triangulated-categories|additive-categories|
1
An exact definition of multiplication
I am looking into repeated operations, and it seems really hard to precisely define multiplication. Of course, for integer $b$ and real $a$, we use the grade-school definition we all know: $$ab = \underbrace{a + a + a + \cdots + a}_{b\text{ times}}$$ but what about for real numbers $a$ and $b$? For exponentiation (for integer exponents: repeated multiplication), we have a precise formula, which is easy to derive: $$a^x = \sum_{n=0}^{\infty} \frac{x^n \left(\ln(a)\right)^n}{n!}$$ which is nice because we only have integer powers in the sum, which we already know how to define: $$x^n = \underbrace{x \times x\times x \times \cdots \times x}_{n\text{ times}}$$ But this just raises the question of how we define $x \times x$ precisely. Is there an analogous formula for multiplication? How does a calculator compute multiplication of reals? Note: According to sources, just approximating multiplication for real numbers uses calculus or numerical methods. I cannot grasp wh
You can define multiplication as a binary operation of a particular algebra, for example via the definition of a ring. From A. H. Lightstone, Symbolic Logic and the Real Number System: DEFINITION 3.6.1. An algebraic system, say $(R,+,\cdot,0)$, where $+$ and $\cdot$ are binary operations on $R$ and $0\in R$, is said to be a ring iff (i) $(R,+,0)$ is an abelian group, (ii) $(R,\cdot)$ is a semigroup, (iii) $\forall x \forall y \forall z [x \cdot (y+z) = x \cdot y + x \cdot z]$, (iv) $\forall x \forall y \forall z [(y+z) \cdot x = y \cdot x + z \cdot x]$. The binary operation $\cdot$ in the definition of a ring represents multiplication. For multiplication of real numbers the definition is DEFINITION 5.2.4. $(a_n) \cdot (b_n) = ND(a_n \cdot b_n)$ whenever $(a_n)$ and $(b_n)$ are real numbers. Here $N$ and $D$ are unary operations on infinite sequences $(a_n)$ and $(b_n)$ of decimal rationals that represent a real number: $D$ rounds down certain digits and $N$ rounds off a block of
|real-analysis|definition|arithmetic|real-numbers|
0
If $f(a)=b$ and $f(b)=a$, prove that there exists at least one $c$ such that $|f'(c)|<1$, and some $d$ such that $|f'(d)|>1$
Let $f:[a,b]\rightarrow[a,b]$, where $a<b$, be a non-linear differentiable function such that $f(a)=b$ and $f(b)=a$. Prove that there exists at least one $c$ such that $|f'(c)|<1$. Also prove that there exists at least one $d$ such that $|f'(d)|>1$. My Attempt: By LMVT we have $f'(e)=\frac{f(b)-f(a)}{b-a}=\frac{a-b}{b-a}=-1\Rightarrow |f'(e)|=1$. Since $f(x)$ is non-linear we can't have $f'(x)=-1$ for all $x$. So we may have $f'(c)<-1$ or $f'(c)>-1$ at some points. But I was wondering if there is a more conclusive argument.
Derivatives have the Intermediate Value Property. So if $|f'(x)| \ge 1$ for all $x$, then either $f'(x) \ge 1$ for all $x$ or $f'(x) \le -1$ for all $x$. Suppose $f'(x) \ge 1$ for all $x$. Then $f(b)-f(a)=(b-a)f'(c)\ge b-a$ for some $c$. But then $a-b \ge b-a$, a contradiction. Suppose $f'(x) \le -1$ for all $x$. Let $x \le y$. Then $f(y)-f(x)=(y-x)f'(c)\le -(y-x)$ for some $c$. This gives $f(x)+x\ge f(y)+y$. In particular, $f(x)+x \ge f(b)+b=a+b$. On the other hand, $f(y)+y \le f(x)+x$, and this gives $f(y)+y \le f(a)+a=a+b$. Putting these together we get $f(t)+t=a+b$ for all $t$. But $f$ is given to be non-linear. This proves the existence of $c$. Existence of $d$: Suppose $|f'(x)|\le 1$ for all $x$. Let $g(x)=f(x)+x$. Then $g'(x)=f'(x)+1\ge 0$. Thus $g$ is increasing. This gives $g(x) \le g(b)=f(b)+b=a+b$ and $g(x) \ge g(a)=f(a)+a=b+a$. Hence $g(x)=a+b$ for all $x$, which makes $f$ linear.
|real-analysis|calculus|derivatives|rolles-theorem|
1
Are two subgroups of prime index with a large intersection necessarily conjugate?
Assume that $G$ is a finite group, $p$ a prime number, and $H_1,H_2\le G$ such that $[G:H_1]=p=[G:H_2]$. Assume further that $[G:H_1\cap H_2]<p^2$. Does it follow that $H_2=gH_1g^{-1}$ for some $g\in G$? As an alternative goal: if the main "guess" fails, will it still follow that $H_1$ and $H_2$ share the same core in $G$ (in other words: $\bigcap_{g\in G}gH_1g^{-1}=\bigcap_{g\in G}gH_2g^{-1}$)? Either a proof or a counterexample is welcome! My preliminary thoughts: It is impossible to have $H_1\unlhd G$, as then we would have $G=H_1H_2$, and the parallelogram law would imply $[G:H_1]=[H_2:H_1\cap H_2]$. Obviously the same holds for $H_2$ as well, so the two subgroups are self-normalizing in $G$. The examples I can think of support this. For example, $H_1$ and $H_2$ can be point stabilizers of (the natural action of) $S_p$, and as the action of $G$ is transitive, they are always conjugate. The intersection $H_1\cap H_2$ has index $p(p-1)<p^2$. The exact same thing happens when $G$ is
${\rm PSL}(2,7)\ (\cong {\rm PSL}(3,2))$ is a counterexample, with two non-conjugate subgroups of index $7$ whose intersection has index $28$. Other examples are ${\rm PSL}(2,11)$ with $p=11$ and ${\rm PSL}(3,3)$ with $p=13$. More generally, for a prime power $q$, if $(q^n-1)/(q-1)$ is prime with $n \ge 3$, then ${\rm PSL}(n,q)$ is a counterexample. These examples are all simple, so the two subgroups both have trivial core. In fact, under your hypotheses $H_1$ and $H_2$ always have the same core. To see this, let $C_2$ be the core of $H_2$ in $G$. If $C_2$ is not contained in $H_1$ then, since $H_1$ is maximal in $G$, we must have $G=C_2H_1$, so $|C_2H_1:H_1| = |C_2:C_2 \cap H_1| = p$. But $C_2 \le H_2$, so $|C_2:C_2 \cap H_1| \le |H_2:H_2 \cap H_1|$, which contradicts $|G:H_1 \cap H_2| < p^2$.
|group-theory|finite-groups|
1
If $f(a)=b$ and $f(b)=a$, prove that there exists at least one $c$ such that $|f'(c)|<1$, and some $d$ such that $|f'(d)|>1$
Let $f:[a,b]\rightarrow[a,b]$, where $a<b$, be a non-linear differentiable function such that $f(a)=b$ and $f(b)=a$. Prove that there exists at least one $c$ such that $|f'(c)|<1$. Also prove that there exists at least one $d$ such that $|f'(d)|>1$. My Attempt: By LMVT we have $f'(e)=\frac{f(b)-f(a)}{b-a}=\frac{a-b}{b-a}=-1\Rightarrow |f'(e)|=1$. Since $f(x)$ is non-linear we can't have $f'(x)=-1$ for all $x$. So we may have $f'(c)<-1$ or $f'(c)>-1$ at some points. But I was wondering if there is a more conclusive argument.
Suppose $f'(t)\geq -1$ for every $t\in [a, b]$. Then, if $a\leq x<y\leq b$, there exists $c\in (x,y)$ such that $$ \frac{f(y)-f(x)}{y-x}=f'(c)\geq -1. $$ Therefore, $$ f(y)-f(x)\geq x-y $$ for $x<y$, which implies that the function $g(t)=t+f(t)$ is non-decreasing. However, since $g(a)=a+b=g(b)$, $g$ is constant, which implies $f$ is linear. Therefore, if $f$ is non-linear, there is $d\in [a, b]$ such that $f'(d)<-1$. Now proceed in an analogous way assuming $f'\leq -1$, get a contradiction, and find $e$ such that $f'(e)>-1$. Since derivatives satisfy the Intermediate Value Property, you are done.
|real-analysis|calculus|derivatives|rolles-theorem|
0
Hartshorne Example II.7.8.6
I have a question about Example II.7.8.6 in Hartshorne's Algebraic Geometry. Let $k$ be an algebraically closed field, and let $V=(t^4,t^3u,tu^3,u^4)$ and $V'=(t^4,t^3u+at^2u^2,tu^3,u^4)$ ($a\in k^*$) be two quartic curves in $\mathbb{P}^3_k$. How can I show these are not equivalent under an automorphism of $\mathbb{P}^3_k$? In Joe Harris's Algebraic Geometry, he defines rational quartic curves as curves given by the parameterization $[X_0^4-\beta X_0^3X_1,X_0^3X_1-\beta X_0^2X_2^2,\alpha X_0^2X_1^2-X_0X_1^3,\alpha X_0X_1^3-X_1^4]$. So it is slightly different from the one in Hartshorne's book, and it doesn't help. Also, Macaulay2 showed the homogeneous ideal of $V'$ is $(z^4-xw^3,4y^3z^2-y^4w+4xyz^2w-6xy^2w^2-x^2w^3+x^2s^2,y^5+20xy^2z^2-10xy^3w+4x^2z^2w-15x^2yw^2-4x^2z^2-x^2yw)$, which is so complicated that it doesn't help.
The homogeneous ideal of $V$ contains a quadratic polynomial whereas the homogeneous ideal of $V'$ does not. This property would be preserved by an automorphism of $\Bbb{P}^3$, since an automorphism is given by linear forms. Edit: This argument is flawed. As KReiser pointed out in the comments, the homogeneous ideal of any quartic rational curve in $\Bbb{P}^3$ always contains a quadratic polynomial.
|algebraic-geometry|schemes|projective-geometry|
0
The minimum value of the sum $O_1O_2+…+O_{n-1}O_n$
A cube $C$ of side $n > 2$, $n \in \mathbb{N}$, is divided into $n^3$ cubes of side 1 with pairwise disjoint interiors. We say that two of the cubes of side 1 are Olympic if any plane parallel to one of the faces of the cube $C$ intersects the interior of at most one of them. We choose cubes $C_1, C_2,\dots, C_n$, pairwise Olympic, and denote by $O_1,\dots, O_n$ their centers. Determine the minimum value of the sum $O_1O_2+\dots+O_{n-1}O_n$ and describe the configurations of $n$ cubes of side 1 for which this minimum is attained. I think the minimum is $(n-1)\sqrt 3$ but I don't know how to prove it.
Following my hint in the comments, let's first solve the analogous problem for $n \times n$ unit squares, replacing planes by lines. In an $n \times n$ grid, if no line parallel to the gridlines intersects the interior of more than one square, all squares must lie in different rows and columns. (This is equivalent to placing $n$ mutually non-attacking rooks on an $n \times n$ chessboard.) Now we show $O_iO_{i+1} \ge \sqrt 2$. By Pythagoras, $O_{i}O_{i+1} = \sqrt{d_c^2 + d_r^2}$, where $d_c$ is the difference in their column numbers and $d_r$ is the difference in their row numbers. By the hypothesis, $d_c,d_r \ge 1$. So $$O_{i}O_{i+1} \ge \sqrt{1^2+1^2} = \sqrt 2$$ Analogously, in the 3D version, show that $O_{i}O_{i+1} \ge \sqrt 3$ by noting that $$O_{i}O_{i+1} = \sqrt{d_x^2+d_y^2+d_z^2}$$ where $d_x$ is the difference in their $x$ coordinates, etc. Summing over the $n-1$ consecutive pairs gives $O_1O_2+\dots+O_{n-1}O_n \ge (n-1)\sqrt 3$. Since you already have a construction for the equality case, the claim is proven.
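For small $n$ the 3-D claim can also be brute-forced. This sketch encodes the Olympic condition as "the $x$-, $y$- and $z$-coordinates of the centers each form a permutation of $\{0,\dots,n-1\}$" and minimizes over all labelings:

```python
import math
from itertools import permutations, product

def min_olympic_path(n):
    # Olympic condition: the x-, y- and z-coordinates of the n centers are
    # each a permutation of 0..n-1; the path visits the cubes in label order
    best = float("inf")
    for px, py, pz in product(permutations(range(n)), repeat=3):
        total = sum(
            math.dist((px[i], py[i], pz[i]), (px[i + 1], py[i + 1], pz[i + 1]))
            for i in range(n - 1)
        )
        best = min(best, total)
    return best

print(min_olympic_path(3), 2 * math.sqrt(3))  # both close to 3.464
```

The minimum is attained by the main-diagonal configuration, matching $(n-1)\sqrt 3$.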
|combinatorics|geometry|3d|
1
Find $k$ using Chebyshev's inequality such that $E(X)=100,\sigma(X)=10$ and $P(X\le 10k)\ge 4/5$
Given $E(X)=100$, $\sigma(X)=10$ and $P(X\le 10k)\ge 4/5$, how do we find a lower bound on $k$ using Chebyshev's inequality? Let $X$ be any r.v. with finite mean $\mu$ and finite variance. Then for all $a > 0$, $$ P\bigg(|X-\mu|\ge a\bigg)\le\frac{\sigma^2(X)}{a^2}, $$ which is Chebyshev's inequality. $$ P(X\le 10k)\ge 4/5\implies P(X> 10k)\le 1/5\implies P(X-\mu>10k-\mu)\le 1/5\\ $$ And from Chebyshev's inequality $$ P(|X-\mu|\ge 10k-\mu)=P\Big(X-\mu\ge 10k-\mu\text{ or }X-\mu\le -(10k-\mu)\Big)\le \frac{100}{(10k-\mu)^2}\\ P(X-\mu\ge 10k-\mu)+P(X-\mu\le -(10k-\mu))\le \frac{100}{(10k-\mu)^2} $$ Is there a way to proceed further?
Note that as long as $10k-\mu > 0$ , $$P(X-\mu > 10k-\mu) \leq P(|X-\mu| \geq 10k-\mu) \leq \frac{\sigma^2(X)}{(10k-\mu)^2}$$ so we'll have the left-most quantity $\leq 1/5$ if the right-most quantity is $ \leq 1/5$ . Using the given values, $$\frac{100}{(10k-100)^2} \leq \frac15 \\ (k-10)^2 \geq 5$$ This last inequality will hold if $k \geq 10 + \sqrt{5}$ , and for any such $k$ , we can check that $10k-\mu > 0$ , so $$k \geq 10+\sqrt{5}$$ is sufficient for $P(X\leq 10k) \geq \frac45$ .
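A quick numerical check that the threshold $k = 10+\sqrt 5$ makes the Chebyshev bound exactly $1/5$:

```python
import math

mu, sigma = 100.0, 10.0
k = 10.0 + math.sqrt(5.0)   # the threshold derived above

# at this k, the Chebyshev tail bound sigma^2 / (10k - mu)^2 equals 1/5
bound = sigma ** 2 / (10.0 * k - mu) ** 2
print(bound)  # ~ 0.2
```

For any larger $k$ the bound only decreases, so $P(X \le 10k) \ge 4/5$ still holds.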
|probability|statistics|inequality|variance|means|
1
Some estimate in a growing domain
Consider a growing family of domains $\Omega_t$ in $\mathbb{R}^n$ with boundary $\Gamma_t$, for $t \in [0,1]$. Examples that I have in mind are $\Omega_t = \{ x = (x_1,x_2) \in \mathbb{R}^2: x_1 < t\}$ or $\Omega_t = B_t(0)$, a ball of radius $t$. An advantage of the latter case is that for $0 \leq s < t \leq 1$ one has $|\Omega_t \setminus \Omega_s| \leq C |t-s|$ (recall for later). Also, write $-\Delta_s$ for the Dirichlet Laplacian over $L^2(\Omega_s)$ and $C^\infty_0(\Omega_t)$ for the test functions in $\Omega_t$. For $0\leq s < t \leq 1$, $u\in D(-\Delta_s) =H^1_0(\Omega_s) \cap H^2(\Omega_s)$ and $v\in C^\infty_0(\Omega_t)$, I want to show the following estimate: $$\int_{\Gamma_s} \partial_\nu u \cdot v d \sigma \leq C \omega(t-s) \| \Delta_s u \|_{L^2(\Omega_s)} \| v \|_{H^1(\Omega_t)},$$ where $\omega(t-s)$ is a modulus of continuity that should be at least as good as $1/2 + \varepsilon$ Hölder, and $\partial_\nu$ is the co-normal derivative with respect to $\Gamma_s$. The question comes from some appli
I do not think this is possible without assuming more regularity of $v_t$ with respect to $t$ . Take $\Omega_t = (-t,+t)$ . Construct $v_t$ to be piecewise linear with $v_t(\pm t)=0$ and $v_t(\pm s)=1$ . Then $\|v_t'\|_{L^2} =c (t-s)^{-1/2}$ , and $v_t'$ is concentrated on $\Omega_t \setminus \Omega_s$ . Set $u_s(x) := 1-x^2/s^2$ . Then $\partial_\nu u =c s^{-1}$ , $\|\Delta u\|_{L^2} = c s^{-3/2}$ . This implies $$ \frac{ \int_{\partial \Omega_s} \partial_\nu u v_t }{ \|v_t'\|_{L^2} \|\Delta u\|_{L^2} } = c \frac{s^{-1}}{ (t-s)^{-1/2} s^{-3/2}} = c s^{1/2}(t-s)^{1/2}. $$
|integration|partial-differential-equations|sobolev-spaces|
1
Evaluate integral with high power under square root
How to solve this case? If only it had been $x^2$ instead of $x^4$, I would set $x=\sinh t$... $$ \int \frac{x}{\sqrt{1+x^{4}}}\,dx $$
Let $x^2=\sinh \theta$ , then $$ \begin{aligned} \int \frac{x}{\sqrt{1+x^4}} d x & =\frac{1}{2} \int \frac{\cosh \theta d \theta}{\cosh \theta} \\ & =\frac{1}{2} \theta+C \\ & =\frac{1}{2} \sinh ^{-1}\left(x^2\right)+C . \end{aligned} $$
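As a sanity check, one can differentiate the claimed antiderivative numerically and compare it with the integrand (a small sketch using central differences):

```python
import math

def F(x):
    # claimed antiderivative of x / sqrt(1 + x^4)
    return 0.5 * math.asinh(x * x)

def integrand(x):
    return x / math.sqrt(1.0 + x ** 4)

# central-difference check that F'(x) matches the integrand at a few points
h = 1e-6
for x in (0.3, 1.0, 2.5):
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-6
print("antiderivative verified")
```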
|integration|
0
Distribution of the sample mean of an exponential
Could someone please check whether my calculations are right. I have $X_1, ..., X_n$ from a $\mathcal{E}(\lambda): f(x, \lambda) = \lambda e^{-\lambda x}$. I have to find the $k$ such that $P(\bar{X} \le k) = \alpha$, where $\bar{X}$ is the sample mean; I did: $$Y=\sum_{i = 1}^{n} X_i$$ $$Y \sim \Gamma (n, \lambda)$$ $$\bar{X} = \frac{1}{n} Y \sim \Gamma(n, \frac{\lambda}{n})$$ $$T = 2\frac{n}{\lambda} \bar{X} \sim \Gamma(\frac{2n}{2}, 2) \stackrel{d}{=}\chi^2 (2n) $$ $$P(\bar{X} \le k) = P(T \le k' = 2\frac{n}{\lambda} k) = \alpha $$ Then I can find the value of $k'$ from the table, and finally find $k$. Am I missing something? (I can't reach the result stated by the book.) Thank you very much.
Considering these properties: $c \times \text{Gamma} (\alpha, \lambda) \sim \text{Gamma} (\alpha, \lambda/c)$ for $c>0$; $X+Y \sim \text{Gamma} (\alpha+\beta, \lambda)$ when $X \sim \text{Gamma} (\alpha, \lambda)$ and $Y \sim \text{Gamma} (\beta, \lambda)$ are independent; $\chi^2_m \sim \text{Gamma} (\frac{m}{2}, \frac{1}{2})$; we have the following: $$ \color{blue}{2 n\lambda \bar{X} \sim \chi^2_{2n}}$$ $$ \color{blue}{n\bar{X} \sim \text{Gamma} (n, \lambda)}$$ $$ \color{blue}{\bar{X} \sim \text{Gamma} (n, n\lambda)}.$$
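A simulation sketch (rate parametrization; the values of $n$ and $\lambda$ are illustrative) supporting $2n\lambda\bar X \sim \chi^2_{2n}$: the sample mean of $T = 2n\lambda\bar X$ should be close to $2n$, the mean of a $\chi^2_{2n}$ distribution.

```python
import random

rng = random.Random(1)
lam, n, trials = 2.0, 5, 100_000

# T = 2 * n * lam * Xbar should be chi-square with 2n degrees of freedom,
# whose mean is 2n
total = 0.0
for _ in range(trials):
    xbar = sum(rng.expovariate(lam) for _ in range(n)) / n
    total += 2 * n * lam * xbar
mean = total / trials
print(mean)  # close to 2n = 10
```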
|statistics|
0
how to calculate the automorphisms of a group that fix a subgroup
I have a finite (polycyclic) group $G$ and a subgroup $H < G$. How do I calculate the subgroup of $\operatorname{Aut}(G)$ that fixes $H$ pointwise: $S = \{ a \in \operatorname{Aut}(G) : \forall h \in H,\ a(h)=h\}$? It would be nice if this can be done in a straightforward way in GAP, but a general approach is welcome.
Pick a generating set $\{h_1, \ldots, h_n\}$ of $H$. Note that $\alpha \in \operatorname{Aut}(G)$ fixes $H$ pointwise if and only if it fixes all of the $h_i$. As $\operatorname{Aut}(G)$ is finite, you can check for every $\alpha \in \operatorname{Aut}(G)$ whether or not $\alpha(h_i) = h_i$ for all $i \in \{1,\ldots,n\}$. This is of course very inefficient, but it works. A faster way would probably be to use existing orbit-stabiliser algorithms. For each $h_i$, calculate its stabiliser under the natural action of $\operatorname{Aut}(G)$ on $G$, and then take the intersection of these stabilisers. A further improvement, suggested by @ahulpke in the comments, is to calculate the stabilisers iteratively. An implementation in GAP (with some minor optimisations) could be

pointwiseFixed := function( G, H )
    local S, h;
    S := AutomorphismGroup( G );
    for h in SmallGeneratingSet( H ) do
        S := Stabiliser( S, h, \^ );
    od;
    return S;
end;

which can then be used like this: gap> G := SmallGroup( 1
|finite-groups|automorphism-group|gap|
1
History of $p$-adic numbers
I'm interested in learning about the historical motivation and development of $p$-adic numbers. I haven't been able to find any books on the topic. I'd appreciate any references, including to more general history books which include coverage of the $p$-adics. Alternatively, if anyone has any knowledge about the history of $p$-adic numbers, feel free to post a summary here, particularly if you can highlight any names, papers and keywords that I could use to do more research on my own. I'm not looking for a simplified overview, I want to really dig into the details, but any amount of information that could get me started is appreciated.
[This post is an attempt to inform others of an interesting finding in Gauss's Nachlass; I know the math stack exchange format is intended primarily for more "closed" answers, but I felt it was my duty to point interested readers to this finding. The question seeks a historical survey of the development of p-adic numbers, while my answer focuses on its prehistory.] In a recently scanned notebook of Gauss (Schaede 5), one can find a note titled "Congruentia infinita", which includes the following results (a picture of the relevant page is shown at the end of this answer): Congruentia infinita $$x^5-20x^4-86x^3-98x^2+80x+3\equiv 0 \pmod {241^{\infty}}$$ has roots: $$(1) = 2+191.r+$$ $$(2) = 3+$$ $$(3) = 4+$$ $$(4) = 5+$$ $$(5) = 6+$$ and at the bottom of this page Gauss also writes: $$\sqrt{5}\pmod{11^{\infty}}= 9.0.4.10.4.4$$ The last result gives the last digits of the square root of 5 calculated 11-adically, and according to this post , this result is actually co
|number-theory|reference-request|math-history|p-adic-number-theory|
0
Standard symbol to denote a circular sector
To denote a triangle we write $\triangle ABC$ . How about a circular sector with $O$ at the center of the circle and with the arc $AB$ . Is there a standard way such as $\lhd OAB$ where $\lhd$ has a round end?
I use as in OAB. I don't know if LaTeX has a command for it - I just made it as an image and scale it to whatever size is needed for text in a given document.
|geometry|notation|euclidean-geometry|
0
Finding the count of triangles in a 4 vertices complete graph after coin toss/erasure
Consider a complete graph with 4 vertices (i.e., every two vertices are connected by an edge). For each of the 6 edges we toss a coin, and if heads occur, then we erase the edge. Let X be the number of triangles in the graph. Find E(X). How should I even approach this? Thanks!
You can use Mathematica to demonstrate the problem. Clear["Global`*"]; (*Define vertices*)vertices = {1, 2, 3, 4}; (*Generate all possible edges*) possibleEdges = Subsets[UndirectedEdge @@@ Subsets[vertices, {2}], {0, 6}]; (*Generate all graphs*) graphs = Graph[vertices, #, VertexLabels -> "Name", ImagePadding -> 10] & /@ possibleEdges; 2^Binomial[4, 2] == Length[graphs] (* 64 *) (*Display graphs*) Grid[Partition[graphs, 8], Frame -> All]; count3Cycles[g_Graph] := Tr[MatrixPower[AdjacencyMatrix[g], 3]]/6; counter = count3Cycles /@ graphs; Tally[counter] (* {{0, 41}, {1, 16}, {2, 6}, {4, 1}} *) Mean[counter] (*1/2*) Variance[counter] (* 40/63 *) FYI: Let's denote $c_k$ as the count of $k$ -cycles in one graph. According to this Wolfram MathWorld Page , Harary and Manvel (1972) gave the following closed forms for small $k$ : $$ c_3= \frac{\operatorname{Tr}(\mathbf{A}^3)}{6} \\ \cdots $$ where $\mathbf{A}$ is the adjacency matrix of the graph.
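By linearity of expectation, each of the $\binom43 = 4$ triangles of $K_4$ survives with probability $(1/2)^3 = 1/8$, so $E(X) = 4/8 = 1/2$. The same exhaustive enumeration over all $2^6 = 64$ outcomes can be reproduced in plain Python (a brute-force sketch, independent of the Mathematica session above):

```python
from itertools import combinations
from fractions import Fraction

vertices = range(4)
all_edges = list(combinations(vertices, 2))   # the 6 edges of K4
triangles = list(combinations(vertices, 3))   # the 4 potential triangles

# enumerate all 2^6 = 64 outcomes of the coin tosses
counts = []
for mask in range(1 << len(all_edges)):
    kept = {e for i, e in enumerate(all_edges) if mask >> i & 1}
    counts.append(sum(all(p in kept for p in combinations(t, 2))
                      for t in triangles))

tally = {v: counts.count(v) for v in sorted(set(counts))}
expectation = Fraction(sum(counts), len(counts))
print(tally, expectation)   # {0: 41, 1: 16, 2: 6, 4: 1} 1/2
```

This reproduces both the tally and the mean from the Mathematica output (note that exactly 3 triangles is impossible).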
|probability|
0
Correlation of sum
What is Corr(X+Y, Z) in terms of Corr(X, Z) and Corr(Y, Z)? If there is no direct formula, is there any relationship in terms of $>$ or $<$?
Note $\textrm{Cov}(X+Y,Z)=\textrm{Cov}(X,Z)+\textrm{Cov}(Y,Z)$ using the (bi)linearity of covariance. This does not translate into correlations without a bit of a mess. You can say: $$\textrm{Cor}(X+Y,Z)\sqrt{\textrm{Var}(X+Y)}=\textrm{Cor}(X,Z)\sqrt{\textrm{Var}(X)}+\textrm{Cor}(Y,Z)\sqrt{\textrm{Var}(Y)}$$ By manipulating the variances, particularly of $X+Y$ , this provides a way of illustrating possible relationships: for example with $(X,Y,Z)=(0,0,0)$ or $(50,50,50)$ or $(100,100,100)$ with equal probability, then $$\textrm{Cor}(X+Y,Z)=1 < 2 = \textrm{Cor}(X,Z)+\textrm{Cor}(Y,Z),$$ while with $(X,Y,Z)=(0,100,0)$ or $(100,0,0)$ or $(51,51,100)$ with equal probability, you get $$\textrm{Cor}(X+Y,Z)=1 > 0.023 \approx \textrm{Cor}(X,Z)+\textrm{Cor}(Y,Z)$$
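The bilinearity of covariance and the second three-point example can be checked numerically (a sketch; the population-moment helpers `mean`, `cov`, `corr` below are hand-rolled, not from any library):

```python
from math import sqrt

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    # population covariance
    ma, mb = mean(a), mean(b)
    return mean([(x - ma) * (y - mb) for x, y in zip(a, b)])

def corr(a, b):
    return cov(a, b) / sqrt(cov(a, a) * cov(b, b))

# (X, Y, Z) = (0,100,0), (100,0,0), (51,51,100) with equal probability
X, Y, Z = [0, 100, 51], [100, 0, 51], [0, 0, 100]
XY = [x + y for x, y in zip(X, Y)]

# bilinearity of covariance holds exactly...
assert abs(cov(XY, Z) - (cov(X, Z) + cov(Y, Z))) < 1e-9
# ...but correlations behave very differently
print(corr(XY, Z), corr(X, Z) + corr(Y, Z))   # ≈ 1.0 vs ≈ 0.023
```

Here $X+Y = (100, 100, 102)$ is a linear function of $Z = (0, 0, 100)$, which is why the correlation of the sum is exactly $1$ even though the individual correlations are tiny.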
|statistics|correlation|
0
Long exact sequence of $\ell$-adic cohomology with compact support for $X = X_1 \cup X_2$ union of two irreducible components
Let $X$ be a separated scheme of finite type over an algebraically closed field $k$ of characteristic $p>0$ , possibly not equidimensional. Assume that $X$ has only two irreducible components $X_1$ and $X_2$ , and that each $X_i$ is a nonsingular complete variety. Assume further that $X_1$ and $X_2$ meet transversally at $Y := X_1 \cap X_2$ . In particular, $Y$ is nonsingular, and the singular locus of $X$ consists precisely of $Y$ . Let $U_i := X \setminus X_i$ denote the open complement of $X_i$ . Finally, fix $\ell$ a prime different from $p$ . We will write $H^i(\,\cdot\,) := H^i(\,\cdot\,,\mathbb Q_{\ell})$ for the $\ell$ -adic cohomology groups of the various varieties above. Recall the existence of the long exact sequence of cohomology with compact support given a closed subvariety of $X$ and its open complement. Let me form the following diagram. $$\begin{array}{ccc} \vdots & & \vdots \\ \downarrow & & \uparrow \\ H_c^i(U_1) & \rightarrow & H^i(X_2) \\ \downarrow & & \uparrow \
For any $k$ -variety $\pi_Z \colon Z\longrightarrow \operatorname{Spec}(k)$ , let's write $D(Z)$ for $D^b_c(Z,\overline{\mathbb{Q}}_l)$ whose unit object is denoted by $\mathbb{1}_Z$ , then $H^i(Z) = \operatorname{Hom}_{D(k)}(\mathbb{1}_k,(\pi_Z)_*\mathbb{1}_Z[i])$ and $H^i_c(Z) = \operatorname{Hom}_{D(k)}(\mathbb{1}_k,(\pi_Z)_!\mathbb{1}_Z[i])$ . Your long exact sequences should involve cohomology with compact support. By this I mean it should be $H^i_c(U_1) \longrightarrow H^i_c(X) \longrightarrow H^i_c(X_1)$ and by properness $H^i_c(X_1) = H^i(X_1)$ . Let $v_1 \colon U_1 = X_2 \setminus Y \hookrightarrow X_2,t_2 \colon X_2 \hookrightarrow X$ be inclusions. Then the (co)unit morphisms $(t_2 \circ v_1)_!\mathbb{1}_{U_1}\to \mathbb{1}_X$ , $\mathbb{1}_X \to (t_2)_!\mathbb{1}_{X_2}$ and $(v_1)_!\mathbb{1}_{U_1} \to \mathbb{1}_{X_2}$ , respectively induce morphisms (simply by taking composition) $$H^i_c(U_1) = \operatorname{Hom}_{D(k)}(\mathbb{1}_k,(\pi_{U_1})_!\mathbb{1}_{U_1}[i]) \to H
|algebraic-geometry|etale-cohomology|
1
how to determine amplitude and frequency of Oscillator with random fluctuations?
I have the following system $$\ddot{x}+w^2 x=0,$$ with the following initial conditions: $\dot{x}(0)=0$ and $x(0)=x_o$ , the solution reads: $$x(t)=x_o \cos(wt).$$ Now I want to include thermal fluctuations (in a frictionless system), so the equation of motion is modified as $$\ddot{x}+w^2 x=\lambda \zeta(t),$$ with $\dot{x}(0)=0$ and $x(0)=0$ the solution reads: $$x(t)=\lambda \int_0^{t}\cos(t-t')\zeta(t')dt'$$ 1- I would like to know how to include the initial conditions $\dot{x}(0)=0$ and $x(0)=x_o$ in the solution? 2- how to obtain the frequency spectrum due to these fluctuations?
You have to give the correlation function for the noise. I guess it is Gaussian being thermal noise and set $$ \langle\zeta(t)\zeta(t')\rangle=\sigma^2\delta(t-t') $$ where $\sigma$ is a constant. The general solution of the forced harmonic oscillator can be written down, with the given initial conditions, $$ x(t)=x_0\cos(\omega_0 t)+\frac{\lambda}{\omega_0}\int_0^t\sin(\omega_0(t-t'))\zeta(t')dt', $$ where I have slightly changed the OP notation by setting $w\rightarrow\omega_0$ . This will yield for the correlation function $$ \langle x(t)x(t')\rangle=x_0^2\cos^2(\omega_0t)+\frac{\lambda^2}{\omega_0^2} \int_0^t dt'\int_0^t dt''\sin(\omega_0(t-t'))\sin(\omega_0(t-t'')) \langle\zeta(t')\zeta(t'')\rangle, $$ that is $$ \langle x(t)x(t')\rangle=x_0^2\cos^2(\omega_0t)+\frac{\lambda^2}{\omega_0^2}\sigma^2 \int_0^t dt'\sin^2(\omega_0(t-t')). $$ I think you can go on from here.
|ordinary-differential-equations|random|stochastic-differential-equations|noise|
1
Theater Seating Combinatorics Problem: 6 friends go to the movies and sit in adjoining seats in one row
The theater is configured so that in any row there is a wall, 6 seats, an aisle, 6 more seats, and then a wall. In total there are $2 \cdot 6! = 1440$ arrangements. "How many ways can they sit if Jane and Mary refuse to sit next to each other?" I have tried to approach this in 3 different ways. First, that the number of arrangements is the number of total arrangements without restriction minus the configurations violating the restriction, such that 6! - 5! = 1200 potential arrangements. But this is the difference in the outcomes, so I'm not sure how this would represent the separate seating; I think it would represent the together seating. I have also approached it by drawing a grid to represent Mary and Jane's seating. mary and jane seating This chart indicates where the two can sit without sitting next to each other, and these outcomes can be multiplied by 2 to account for both sides of the rows. But here I don't understand how to translate the chart into possible outcomes. If we block Mary and Jane together
We shall bring in probability to simplify the problem. For the moment, consider Mary and Jane as blue balls, John as a yellow ball, and the other $3$ as red balls. P(blue balls together) $=P(A)= \frac{5}{\binom62} = \frac13$ P(yellow ball at an end) $= P(B) = \frac26 = \frac13$ $P(A\cap B) = \frac13\frac{4}{\binom52} = \frac2{15}$ P(desired conditions are met) $= 1 - P(A\cup B)$ $= 1 - [P(A) +P(B) - P(A\cap B)] = 1-(\frac13+\frac13 - \frac2{15}) = \frac7{15}$ Now unrestricted permutation of three types of colored balls $= \frac{6!}{1!2!3!} = 60$ Thus valid arrangements of balls $= \frac7{15}\cdot60 = 28$ Can you now convert this into valid arrangements of people , and multiply by $2$ for the two parts of the row ?
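The answer's two conditions (Mary and Jane apart, and John away from the ends of his 6-seat block — the John condition is apparently part of the full question, which is truncated above) can be verified by brute force over one 6-seat block:

```python
from itertools import permutations

people = ["Mary", "Jane", "John", "A", "B", "C"]
count = 0
for seating in permutations(people):
    mary, jane, john = (seating.index(p) for p in ("Mary", "Jane", "John"))
    # Mary and Jane not adjacent, John not in an end seat of the block
    if abs(mary - jane) != 1 and john not in (0, 5):
        count += 1
print(count, 2 * count)   # 336 per block, 672 for the whole row
```

This matches the answer: $\frac7{15}\cdot 6! = 336$ arrangements per 6-seat block, doubled for the two sides of the aisle.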
|combinatorics|permutations|factorial|groups-enumeration|
1
$F_1\subset F_2\implies [F_2(\alpha):F_2]\leq [F_1(\alpha):F_1]$
Given the tower of fields $F_1\subset F_2\subset K$ , and $\alpha\in K$ algebraic over $F_1$ , can we deduce that (this is not an exercise, and thus more or less a two-fold question: 1. is the statement correct and 2. is the proof correct) $$ [F_2(\alpha):F_2]\leq [F_1(\alpha):F_1] $$ ? My attempted proof uses the fact that if $1,\alpha,\alpha^2,\dots,\alpha^k$ are linearly dependent in $F_1(\alpha)$ over $F_1$ , then they are also linearly dependent in $F_2(\alpha)$ over $F_2$ . Note that we assume that all extensions are finite.
The statement is correct, and your idea is fine. Here is another proof, using the fact that $[F_i(\alpha):F_i]$ is the degree of the minimal polynomial $f_i$ of $\alpha$ over $F_i$ . It is quite clear that $f_2$ divides $f_1$ : indeed $f_1\in F_2[x]$ and $f_1(\alpha)=0$ , so the minimal polynomial $f_2$ of $\alpha$ over $F_2$ divides $f_1$ (which is irreducible over $F_1$ but may factor over $F_2$ ). Thus, $\deg(f_2) \leq \deg(f_1)$ .
|abstract-algebra|extension-field|
1
Even function given a condition
I would like to understand the rationale behind the solutions I found for the problem below. Show that every function $f: \mathbb{R}^* \rightarrow \mathbb{R}$ , satisfying the condition $f(xy)=f(x)+f(y)$ , is even. The solutions I've seen go in the following direction: $\forall x \in \text{D}(f)$ , $f([(-x)(-x)]) = f(-x) + f(-x) \Rightarrow 2f(-x) = f(x^2) = f(x*x) = f(x) + f(x)$ $2f(-x) = 2f(x) \Rightarrow f(-x) = f(x)$ I understand that the condition for the function to be even is $f(-x) = f(x)$ . But I didn't understand why we can set both $x$ and $y$ equal to $-x$ at the beginning of the solution.
The condition $$f(xy)=f(x)+f(y)$$ is satisfied for all $x,y\in \mathbb{R}^*$ . In particular, if we fix $x\in\mathbb{R}^*$ , we also have $-x\in\mathbb{R}^*$ , and we may take $y=-x$ in the condition, from where we deduce $f$ must be even.
|algebra-precalculus|even-and-odd-functions|
1
In a CCC with the initial object, can I have a morphism A -> 0?
I'm in a cartesian-closed category with the initial object $0$ . What are the consequences of having a morphism from some object $A$ to $0$ ? Does it mean that there is also a global element of $0$ , that is $1\to 0$ ? Does it mean that we have a zero object, $1 \cong 0$ ? Does it mean that all objects are isomorphic? Assume we don't have coproducts.
Here is a simpler version of Qiaochu's argument that does not mention $\mathrm{Hom}$ functors: let $0$ be initial and $f : A \to 0$ . Since the category is cartesian closed, products distribute over colimits and so $0 \times A$ is initial. We will exhibit $f$ as an inverse to the unique morphism $\mathop{!} : 0 \to A$ . We clearly have $f \circ \mathop{!} = \mathrm{id}_0$ , so it suffices to check that $\mathop{!} \circ f = \mathrm{id}_A$ : The right triangle commutes by initiality of $0 \times A$ . Thus $A$ is also initial.
|category-theory|
0
Solve $x^2-2x+1=\log_2( \frac{x+1}{x^2+1})$
Solve in $\mathbb R$ the following equation $$x^2-2x+1=\log_2 (\frac{x+1}{x^2+1})$$ First of all from the existence conditions of the logarithm, we have $x > -1$ . Analyzing $x^2 - 2x - 1$ , we get that for $x \in (-1,1]$ , the function $ x \mapsto x^2 - 2x - 1$ is strictly decreasing and $ x^2 - 2x - 1 \in [-2,2)$ and for $x \in [1,\infty)$ the function $ x \mapsto x^2 - 2x - 1$ is strictly increasing and $ x^2 - 2x - 1 \in [-2, \infty)$ . I don't know if it helps, but at least we know the monotonicity of the left-hand side. Now it seems quite complicated what I have to do next. I don't know how many solutions there are, but I assume there are 2. We'll probably have to rely on the convexity/concavity of the functions ( IDEA : as the coefficient of $x^2$ is strictly positive, and the discriminant of the degree-2 equation is greater than 0, the function $ x \mapsto x^2 - 2x - 1$ is strictly convex , maybe we can show that the right-hand side is strictly concave so we have at most 2 solutions), b
Let $x\in\Bbb{R}$ be a real number satisfying $$x^2-2x+1=\log_2\left(\frac{x+1}{x^2+1}\right).\tag{1}$$ The left hand side equals $(x-1)^2$ and so it is nonnegative. Then the right hand side is nonnegative, and so $$\frac{x+1}{x^2+1}\geq1,$$ from which it follows that $x\geq x^2$ , or equivalently, that $0\leq x\leq1$ . On this interval the left hand side of $(1)$ is strictly decreasing and convex. For the right hand side, note that $$\frac{d}{dx}\log_2\left(\frac{x+1}{x^2+1}\right)=-\frac{1}{\ln{2}}\frac{x^2+2x-1}{(x+1)(x^2+1)}.$$ Clearly its only zero in the interval $[0,1]$ is at $x=-1+\sqrt{2}$ , and so the right hand side of $(1)$ is strictly increasing for $x\leq-1+\sqrt{2}$ , and strictly decreasing for $x\geq-1+\sqrt{2}$ . Moreover $$\frac{d^2}{dx^2}\log_2\left(\frac{x+1}{x^2+1}\right)=\frac{1}{\ln{2}}\frac{x^4+4x^3-2x^2-4x-3}{(x+1)^2(x^2+1)^2},$$ which is easily verified to be strictly negative on the interval $[0,1]$ , meaning that the right hand side of $(1)$ is concave. Thi
|algebra-precalculus|logarithms|exponentiation|
0
Structure of Closed Semialgebraic set
I am trying to prove the following, from Benedetti and Risler's book: The "above proposition" is: It seems an easy proposition that boils down to taking the complement of the complement of F, since the complement of $\{x \in \mathbb{R}^{n}: P(x)>0\}$ is $\{x \in \mathbb{R}^{n}: P(x) \leq 0\}$ . But I end up with a finite intersection of unions of sets of this form, and I can't rewrite it as the intended form. Any help is appreciated.
A bit late, but I hope my answer is still helpful. In fact, a finite intersection of unions of sets can be rewritten as a finite union of intersections of sets because: $$\bigcap_{i \in I}\bigcup_{j \in J} \{x\mid P_{ij}(x) \geq 0\} = \bigcup_{(j_1,\ldots,j_{|I|}) \in J^I}\bigcap_{i \in I} \{x\mid P_{i,j_i}(x) \geq 0\}.$$
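The distributive law behind this rewriting — an intersection of unions equals a union of intersections, indexed by choice functions $f \in J^I$ — can be sanity-checked on random finite sets (a small sketch with arbitrary index and ground sets):

```python
import itertools
import random

random.seed(0)
I, J = range(3), range(4)
# a random family of sets S[i, j] over a small ground set
S = {(i, j): frozenset(random.sample(range(10), 4)) for i in I for j in J}

# intersection of unions
lhs = frozenset.intersection(
    *(frozenset().union(*(S[i, j] for j in J)) for i in I))
# union over all choice functions f: I -> J of intersections
rhs = frozenset().union(
    *(frozenset.intersection(*(S[i, f[i]] for i in I))
      for f in itertools.product(J, repeat=len(I))))
assert lhs == rhs
```

The same identity, with the sets $\{x \mid P_{ij}(x) \geq 0\}$ in place of the random sets, is exactly what puts the closed semialgebraic set in the intended normal form.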
|real-algebraic-geometry|semialgebraic-geometry|
0
Does the range of a one-one function need necessarily to be equal to its co-domain?
I came across a question which says: "If a set A contains 7 elements and the set B contains 8 elements, then the number of one to one mappings from A to B is:" The answer is given as zero. The explanation for the answer says that since n(A) is not equal to n(B), where n denotes the cardinality of the set, the number of one-one mappings is zero. But I don't think this should be the answer, as it is not mentioned anywhere in the definition of a one-one function that its range must be equal to the co-domain ...
The answer given is wrong. As @Sisanta Chhatoi pointed out, the question was likely referring to the number of onto functions from A to B, which would be $0$ . The correct answer would be $ ^{8}C_{7} \times 7! $ , as you are picking 7 out of 8 elements from B, and permuting the 7 elements of A into those 7 elements of B. Little trick: If you want to find the number of one-one functions from A to B, then the condition $n(A) \leq n(B)$ has to be satisfied, and the number of such one-one functions is $^nP_r$ or $ ^nC_r \times r!$ , where $n$ is $n(B)$ and $r$ is $n(A)$ . Hope this helped.
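The count $^8C_7 \times 7! = {}^8P_7$ can be confirmed both by formula and by brute-force enumeration of injective maps (a quick sketch):

```python
from itertools import permutations
from math import comb, factorial, perm

# number of one-one (injective) maps from a 7-element set into an 8-element set
by_formula = comb(8, 7) * factorial(7)       # 8C7 * 7!
by_perm = perm(8, 7)                         # 8P7
# each injective map corresponds to an ordered choice of 7 distinct images
by_enumeration = sum(1 for _ in permutations(range(8), 7))
print(by_formula)   # 40320
```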
|relations|
1
Finding the number of grams of gold in a solid sphere?
I am trying to solve a question from Leaving Cert. Ordinary Level Maths 2018 Paper 2. A solid sphere is made of gold. It has a volume of $0.113$ cm $^3$ . Each cm $^3$ of pure gold weighs $19.3$ grams. I am trying to find the number of grams of pure gold in the sphere. The marking scheme says I am to multiply $0.113$ cm $^3$ by $19.3$ grams, but I don't understand why. The answer is " $2.18$ ". I am assuming this is $2.18$ grams / cm $^3$ . Does this mean that there are $2.18$ grams of gold per cm $^3$ of the sphere? Any feedback would be greatly appreciated. Thanks, Alana
The question says: "Each cm $^3$ of pure gold weighs $19.3$ grams." This is just another way of saying that the density of gold is $19.3$ g / cm $^3$ . The question wants you to find the mass of pure gold in the sphere ( $0.113$ cm $^3$ ) in grams. So you multiply the mass per unit volume ( $19.3$ g / cm $^3$ ) and the volume ( $0.113$ cm $^3$ ) together. Your answer would be $$\frac{19.3 \text{ g}}{\text{cm}^3} \times 0.113 \text{ cm}^3 \approx 2.18 \text{ g}$$ As you can see the two instances of "cm $^3$ " cancel out and we are left with grams only.
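The computation is just density times volume; as a one-line sketch:

```python
density_g_per_cm3 = 19.3   # grams of gold per cubic centimetre
volume_cm3 = 0.113         # volume of the sphere

mass_g = density_g_per_cm3 * volume_cm3   # cm^3 cancels, leaving grams
print(round(mass_g, 2))   # 2.18
```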
|volume|
0
Using the estimation lemma on semicircle
I need to prove, using the estimation lemma, that $$\left| \int_{C_R} \frac{1}{z^2 + 2} dz \right| \le \frac{\pi R}{R^2 - 2},$$ where $C_R$ is the semicircular curve in the upper half-plane from $R$ to $−R$ . I know that $l(C_R) = \pi R$ , but I am unsure of how to evaluate $\max |f(z)|$ along the semicircle.
By the reverse triangle inequality, $|z^2 + 2| \geq |z^2| - 2 = R^2 - 2$ for $|z| = R$ (and $R > \sqrt{2}$ ), so $\max_{z \in C_R} |f(z)| \leq \frac{1}{R^2-2}$ , and the estimation lemma gives $\left| \int_{C_R} \frac{dz}{z^2+2} \right| \leq \frac{\pi R}{R^2-2}$ .
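As a numerical sanity check (a sketch with the arbitrary choice $R = 5$), a midpoint-rule approximation of the contour integral indeed stays below the bound $\frac{\pi R}{R^2-2}$:

```python
import cmath
import math

R, N = 5.0, 20000
# parametrize C_R as z = R e^{it}, t in [0, pi], so dz = i R e^{it} dt
integral = sum(
    1 / ((R * cmath.exp(1j * t)) ** 2 + 2) * 1j * R * cmath.exp(1j * t)
    for t in (math.pi * (k + 0.5) / N for k in range(N))
) * math.pi / N

bound = math.pi * R / (R ** 2 - 2)
assert abs(integral) <= bound
```

The actual magnitude of the integral is noticeably smaller than the bound, as expected: the estimation lemma is an upper estimate, not an equality.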
|complex-analysis|contour-integration|
0
On an algorithm for counting triangles
This is regarding the complexity of an algorithm for counting triangles in an undirected graph which was suggested in a document I came across. (Link - https://www.cs.cmu.edu/~15750/notes/lec1.pdf ) Suppose the graph has $m$ edges and $n$ vertices. The algorithm goes as follows: sort the vertices in increasing order of degree ( $v_1 \cdots v_n$ ) and scan them beginning at the lowest degree vertex. When you are at vertex $v_i$ , only scan pairs of edges (which could potentially be triangles) of the form $(\{v_i,v_j\},\{v_i, v_k\})$ where $j,k>i$ . The document says that the time complexity of this algorithm is $O(m^{3/2})$ . There’s no proof in the document and I’ve been trying to prove it but have not been able to. How do you show this?
Consider this simpler algorithm: Pick an $\alpha \in \mathbb{R}_{>0}$ and partition the vertices of $G$ in two sets $A = \{u: d_u \leq \alpha\}$ and $B=\{u: d_u > \alpha\}$ . Perform the following two subroutines: Iterate through all triplets $u, v, w \in B$ and check if they form a triangle For each $u\in A$ check for all pairs of neighbors $v, w$ if $\{u,v,w\}$ forms a triangle The time complexity will be $$|B|^3 + \sum_{u\in A}\binom{d_u}{2} \leq |B|^3+\sum_{u\in A}d_u^2\leq |B|^3+\alpha\sum_{u\in V}d_u=|B|^3+2\alpha m$$ Furthermore, notice that $|B|\cdot \alpha \leq \sum_{u\in B}d_u\leq 2m$ and so $|B|\leq \frac{2m}{\alpha}$ . Choosing $\alpha = O(\sqrt{m})$ gives the desired complexity. This logic is easily incorporated in the modified algorithm at hand. It is worth mentioning though that for this algorithm to work you require access both to the adjacency list of the graph as well as an oracle that tells you whether any edge $\{u,v\}$ exists in $O(1)$ time.
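The degree-ordered scan from the original algorithm can be implemented directly and cross-checked against brute force (a sketch; the function names are mine, and ties in degree are broken by vertex label). Each triangle is counted exactly once, at its vertex of minimum rank:

```python
import itertools
import random

def count_triangles_ordered(n, edges):
    # rank vertices by increasing degree (ties broken by label)
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    rank = {v: r for r, v in enumerate(sorted(range(n), key=lambda v: len(adj[v])))}
    edge_set = {frozenset(e) for e in edges}   # O(1) edge oracle
    count = 0
    for u in range(n):
        higher = [w for w in adj[u] if rank[w] > rank[u]]
        for v, w in itertools.combinations(higher, 2):
            if frozenset((v, w)) in edge_set:
                count += 1
    return count

def count_triangles_brute(n, edges):
    es = {frozenset(e) for e in edges}
    return sum(1 for t in itertools.combinations(range(n), 3)
               if all(frozenset(p) in es for p in itertools.combinations(t, 2)))

random.seed(1)
for _ in range(20):
    n = 8
    edges = [e for e in itertools.combinations(range(n), 2) if random.random() < 0.4]
    assert count_triangles_ordered(n, edges) == count_triangles_brute(n, edges)
```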
|combinatorics|graph-theory|algorithms|
1
Finding the general solution to $2u_x-3u_y+(u-x)=0$
The PDE I'm working on is: $$2u_x-3u_y+(u-x)=0.$$ Using the method of characteristics I obtained $c_1=2x+3y.$ Where I am stuck is on $c_2$; currently I'm exploring $$\frac{dx}{2}=\frac{du}{u-x}.$$ My first instinct was to integrate as is and get the following: $$\frac{x}{2}+c_2=\log(u-x)$$ or $$x+c_2=2\log(u-x)$$ which becomes $$e^xe^{c_2}=(u-x)^2$$ Solving for $c_2$ we get $(u-x)^2e^{-x}=c_2$ Hence $F(2x+3y)=(u-x)^2e^{-x}.$ Am I on the right track for a general solution?
You made three mistakes: The solution to $\frac{dx}{2}=-\frac{dy}{3}$ is not $2x+3y=c_1$ , but $$ 3x+2y=c_1; \tag{1} $$ There is a sign error in your second equation; the correct equation to be solved is $$ \frac{dx}{2}=\frac{du}{x-u}; \tag{2} $$ Your solution (of the incorrect ODE) also is incorrect. To solve $(2)$ , let's rewrite it as $$ \frac{du}{dx}=\frac{x-u}{2} \implies \frac{du}{dx}+\frac{u}{2}=\frac{x}{2}. \tag{3} $$ Multiplying both sides of $(3)$ by $e^{x/2}$ , we get $$ e^{x/2}\left(\frac{du}{dx}+\frac{u}{2}\right)=\frac{d}{dx}\left(e^{x/2}u\right)=\frac{x}{2}e^{x/2} $$ $$ \implies u =e^{-x/2}\int\frac{x}{2}e^{x/2}\,dx = x-2+c_2e^{-x/2}. \tag{4} $$ It follows from $(1)$ and $(4)$ that the general solution to the PDE is $$ u=x-2+e^{-x/2}f(3x+2y), \tag{5} $$ where $f$ is an arbitrary differentiable function.
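The general solution $(5)$ can be spot-checked numerically with a concrete choice $f = \sin$ (a sketch using central differences; the sample point is arbitrary):

```python
import math

def u(x, y):
    # general solution u = x - 2 + e^{-x/2} f(3x + 2y), here with f = sin
    return x - 2 + math.exp(-x / 2) * math.sin(3 * x + 2 * y)

# approximate u_x and u_y by central differences at an arbitrary point
h = 1e-6
x, y = 0.7, -1.2
u_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
u_y = (u(x, y + h) - u(x, y - h)) / (2 * h)

# plug into the PDE: 2 u_x - 3 u_y + (u - x) should vanish
residual = 2 * u_x - 3 * u_y + (u(x, y) - x)
assert abs(residual) < 1e-6
```

The $f'$ terms cancel because $2 \cdot 3 - 3 \cdot 2 = 0$, which is exactly why $3x+2y$ (and not $2x+3y$) is the correct characteristic variable.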
|partial-differential-equations|
0
Solve $x^2-2x+1=\log_2( \frac{x+1}{x^2+1})$
Solve in $\mathbb R$ the following equation $$x^2-2x+1=\log_2 (\frac{x+1}{x^2+1})$$ First of all from the existence conditions of the logarithm, we have $x > -1$ . Analyzing $x^2 - 2x - 1$ , we get that for $x \in (-1,1]$ , the function $ x \mapsto x^2 - 2x - 1$ is strictly decreasing and $ x^2 - 2x - 1 \in [-2,2)$ and for $x \in [1,\infty)$ the function $ x \mapsto x^2 - 2x - 1$ is strictly increasing and $ x^2 - 2x - 1 \in [-2, \infty)$ . I don't know if it helps, but at least we know the monotonicity of the left-hand side. Now it seems quite complicated what I have to do next. I don't know how many solutions there are, but I assume there are 2. We'll probably have to rely on the convexity/concavity of the functions ( IDEA : as the coefficient of $x^2$ is strictly positive, and the discriminant of the degree-2 equation is greater than 0, the function $ x \mapsto x^2 - 2x - 1$ is strictly convex , maybe we can show that the right-hand side is strictly concave so we have at most 2 solutions), b
These are GeoGebra graphs of the respective expressions. The points of intersection are the solutions.
|algebra-precalculus|logarithms|exponentiation|
0
Even function given a condition
I would like to understand the rationale behind the solutions I found for the problem below. Show that every function $f: \mathbb{R}^* \rightarrow \mathbb{R}$ , satisfying the condition $f(xy)=f(x)+f(y)$ , is even. The solutions I've seen go in the following direction: $\forall x \in \text{D}(f)$ , $f([(-x)(-x)]) = f(-x) + f(-x) \Rightarrow 2f(-x) = f(x^2) = f(x*x) = f(x) + f(x)$ $2f(-x) = 2f(x) \Rightarrow f(-x) = f(x)$ I understand that the condition for the function to be even is $f(-x) = f(x)$ . But I didn't understand why we can set both $x$ and $y$ equal to $-x$ at the beginning of the solution.
The answer you found is a pretty natural answer. Looking at the proof of the contrapositive statement may also help show why we tend to start with an $x=y$ case in the forward direction, because it highlights the role of $y=x=-x$ . Using E to mean the constraints you wrote above, if the proposition is stated as: (Function $f$ has form E) $\rightarrow$ ( $f$ is even) It suffices to prove the contrapositive: $\neg$ (f is even) $\rightarrow$ $\neg$ (Function f has form E) So $f(x) \neq f(-x)$ by hypothesis. Considering form E: Suppose, for a contradiction, that $f$ has form E. $f(x^2)=f((x)(x))=f(x)+f(x) = 2 f(x)$ $f(x^2)=f((-x)(-x))=f(-x)+f(-x) = 2 f(-x)$ So $f(-x)=f(x)=\frac{1}{2}f(x^2)$ But by hypothesis $f(-x) \neq f(x)$ . So the supposition is false, making the contrapositive statement true, and the original statement true.
|algebra-precalculus|even-and-odd-functions|
0
Understand why every Lipschitz domain is an $(\varepsilon, \delta)$-domain
A domain $\Omega \subset \mathbb{R}^n$ is called an $(\varepsilon, \delta)$ -domain (or locally uniform domain ) if there exist $\varepsilon, \delta > 0$ such that for all $x, y \in \Omega$ with $\lvert{x - y}\rvert < \delta$ there is a rectifiable curve $\gamma : [0, l(\gamma)] \to \Omega$ joining $x$ to $y$ and satisfying \begin{align} l(\gamma) &\leq \frac{1}{\varepsilon} \lvert{x - y}\rvert,\text{ and} \label{eq:1}\tag{1}\\ \operatorname{dist}(z, \partial{\Omega}) &\geq \frac{\varepsilon \lvert{x - z}\rvert \lvert{y - z}\rvert}{\lvert{x - y}\rvert} \quad \forall z \in \gamma,\label{eq:2}\tag{2} \end{align} where $l(\gamma)$ denotes the arclength of $\gamma$ . These domains were introduced by Jones where he states (in the above notation) Condition \eqref{eq:1} says that $\Omega$ is locally connected in some quantitative sense. Condition \eqref{eq:2} says there is a "tube" $T, \gamma \subset T \subset \Omega$ ; the width of $T$ at a point $z$ is on the order of $\min (\lvert{x-z}\rvert, \lvert{y-
Not a complete proof, but maybe a first step: I would use the definition of Lipschitz boundary, i.e. that for any $p\in\partial\Omega$ exists a radius $\delta>0$ and a bilipschitz function (i.e. bijective Lipschitz function, such that the inverse is Lipschitz as well) $\psi: B_\delta(p)\rightarrow B_1(0)$ such that $$\psi|_{\partial \Omega\cap B_\delta(p)} :\partial \Omega\cap B_\delta(p)\rightarrow B_1(0)\cap (\mathbb{R}^{n-1}\times\{0\}) $$ is a bilipschitz map. Furthermore $$\psi(\Omega\cap B_\delta(p)) = B_1(0)\cap \{(x_1,\ldots, x_n)|\ x_n>0\}.$$ Then for $x,y\in \Omega\cap B_\delta(p)$ we can connect $\psi(x)$ with $\psi(y)$ with a straight line $L:[0,1]\rightarrow B_1(0)$ . Then the curve $$\gamma (t) := \psi^{-1}(L (t))$$ is rectifiable and after reparametrisation probably satisfies your requirements. But this is something you should check for yourself. One other point: For this to work, $\overline{\Omega}$ probably has to be compact.
|real-analysis|geometry|partial-differential-equations|euclidean-geometry|sobolev-spaces|
0
Norm of $(0,2)$ tensor
If we have a $(0,2)$ tensor $A$ (in my situation, it is actually the second fundamental form), I am confused by this notation $|A|$. If we think of it as the Frobenius norm, write $A$ in local coordinates, i.e., $A=A_{ij}dx^i\otimes dx^j$ where $A_{ij}=A(\partial_i,\partial_j)$, then $$|A|=(\sum A^2_{ij})^{1/2}.$$ But if we think of it as a norm induced by the metric, then $$|A|=\langle A,A\rangle^{1/2}=(g^{im}g^{jn}A_{ij}A_{mn})^{1/2}.$$ It seems that if I choose a local orthonormal frame, they are the same since $g^{im}=\delta^i_m$. Or is the truth that they are the same all the time? Any explanation will be helpful.
As mentioned by Andreas the expression $$|A|=\left(\sum_{i,j=1}^n A_{ij}^2\right)^{1/2}$$ does not make sense, in general. Now for a Riemannian manifold $(M,g)$ , the Frobenius norm $|A|$ of a $(0,2)$ -tensor $A$ is a function from $M$ to $[0,\infty)$ , which is given in a coordinate system $p=(x_1,\ldots,x_n)$ by: $$|A|(p)=\left(\sum_{i_1,i_2,j_1,j_2=1}^n g^{i_1i_2}(p)g^{j_1j_2}(p)A_{i_1j_1}(p)A_{i_2j_2}(p)\right)^{1/2}.$$ Given $p\in M$ fixed, you can find a coordinate system $(x_1,\ldots,x_n)$ around $p$ such that $g^{ij}(p)=\delta_{ij}$ , namely any normal coordinates centered at $p$ . Observe that for $q\neq p$ in the domain of the normal coordinates, we have $g^{ij}(q)\neq \delta_{ij}$ in general. Nonetheless, in these normal coordinates we obtain the following expression for the Frobenius norm of $A$ at $p$ : $$|A|(p) = \left(\sum_{i,j=1}^n A_{ij}^2(p)\right)^{1/2}.$$ With this observation, the first expression makes sense pointwise , i.e. you can assume that $|A|=(\sum A^2_{ij})^{1/2}$ as functio
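For a symmetric $(0,2)$-tensor the metric norm satisfies $|A|^2 = g^{im}g^{jn}A_{ij}A_{mn} = \operatorname{tr}(g^{-1}Ag^{-1}A)$, which is invariant under a change of coordinates $X \mapsto P^{T}XP$ on $(0,2)$-tensors, while the plain sum of squares is not. A hand-rolled $2\times 2$ sketch (all numeric values are arbitrary sample data):

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def T(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def norm_sq(g, A):
    # |A|^2 = g^{im} g^{jn} A_{ij} A_{mn} = tr(g^{-1} A g^{-1} A) for symmetric A
    M = mul(mul(inv(g), A), mul(inv(g), A))
    return M[0][0] + M[1][1]

def frob_sq(A):
    # plain coordinate sum of squares, no metric involved
    return sum(A[i][j] ** 2 for i in range(2) for j in range(2))

g = [[2.0, 0.5], [0.5, 1.0]]    # a metric at p (symmetric positive definite)
A = [[1.0, 3.0], [3.0, -2.0]]   # a symmetric (0,2)-tensor
P = [[1.0, 1.0], [0.0, 2.0]]    # Jacobian of a change of coordinates

# (0,2)-tensors transform as X -> P^T X P
g2, A2 = mul(mul(T(P), g), P), mul(mul(T(P), A), P)

assert abs(norm_sq(g, A) - norm_sq(g2, A2)) < 1e-9   # metric norm is invariant
assert abs(frob_sq(A) - frob_sq(A2)) > 1             # plain sum of squares is not
```

This illustrates the point of the answer: the coordinate expression $(\sum A_{ij}^2)^{1/2}$ only agrees with $|A|$ in coordinates where $g^{ij}(p)=\delta_{ij}$, such as normal coordinates at $p$.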
|differential-geometry|riemannian-geometry|
1
Why is (y) a uniformizer for $y^2 = x^3 + x$ at (0,0) and why is its order equal to 1?
I'm currently trying to understand Silverman's example for the valuation on curves discussed in the answer to this post: Definition and example of "order of a function at a point of a curve" Silverman has already shown that $M_P / M_P^2$ is generated by $(y)$, but why do $M_P = (y)$ and then $ord_P(y) = 1$ follow from this fact? This probably boils down to a deeper problem: Why are the elements of the function field with order one exactly the generators of $M_P$ ? I can see why $M_P = (t) \implies ord_P(t) = 1$ but I'm not really sure about the other implication. And how do the elements / generators of $M_P$ and $M_P^2$ relate to each other?
If $(R, \mathfrak{m}, k)$ is a noetherian local ring and $M$ is a finite $R$ -module, a special case of Nakayama's lemma states that if $\mathfrak{m}M = M$ then $M = 0$ . In particular if $[m_1], \dots, [m_n]$ is a $k$ -basis for $M/\mathfrak{m}M$ , we can choose representatives $m_1, \dots, m_n \in M$ . Consider the submodule $N \subset M$ generated by these representatives. Then, $\mathfrak{m}(M/N) = M/N$ , since $M = N + \mathfrak{m}M$ , so by Nakayama's lemma $M = N$ and $M$ is generated by $m_1, \dots, m_n$ . Now, in your situation, $M_p/M_p^2$ is generated by $[y]$ , so $y$ lifts to a generator of $M_p$ , and since $M_p \neq M_p^2$ , we must have $y \notin M_p^2$ . Hence $\operatorname{ord}_p(y) = 1$ . Similarly if $\operatorname{ord}_p(t) = 1$ for some rational function $t$ then $t \in M_p \setminus M_p^2$ , so $[t] \neq 0$ in $M_p/M_p^2$ . As long as $\dim_k M_p/M_p^2 = 1$ (ie. the curve is nonsingular at $p$ ) it follows that $t$ lifts to a uniformizing parameter.
|algebraic-geometry|valuation-theory|function-fields|
1
Solving the differential equation $m\frac{d^2x}{dt^2}=Ae^{-\gamma t}\sin(\omega t).$
A particle of mass $m$ moving along the $x$ -axis, is acted upon by a force: $$F\left(t\right)=\begin{cases}A\mathrm{e}^{-\gamma t}\sin\left(\omega t\right),&t\ge0;\\0,&t<0.\end{cases}$$ Initial conditions are: $x(0)=x_{0},\dot{x}(0)=\dot{x}_{0}.$ Here $\dot{x}$ is $\frac{dx}{dt}$ . Find the equation of motion $x(t)$ for the particle. My attempt: In both cases we want to solve the differential equation $m\frac{d^2x}{dt^2}=F(t)$ . For the second case, the equation becomes trivial $\frac{d^2x}{dt^2}=0$ with the solution $x(t)=\dot{x}_{0}t+x_0$ . However, I am having trouble solving the differential equation $$m\frac{d^2x}{dt^2}=Ae^{-\gamma t}\sin(\omega t).$$ My approach was just integrating both sides, but I am stuck in the seemingly never ending integration by parts loop. Could someone show how to solve this for $x(t)$ ?
We could try Laplace transform of your differential equation: $$\mathcal{L}_t\left[m x''(t)=A \exp (-\gamma t) \sin (\omega t)\right](s)$$ Results in s-domain equation $$m \left(s^2 X(s)-s x(0)-x'(0)\right)=\frac{A \omega }{(s+\gamma )^2+\omega ^2}$$ Then $$X(s)=\frac{A \omega }{m s^2 \left((s+\gamma)^2+\omega ^2\right)}+\frac{x(0)}{s}+\frac{x'(0)}{s^2}$$ Now we go back to time domain with inverse Laplace transform: With $\mathcal{L}_s^{-1}\left[\frac{x(0)}{s}\right](t)=x(0)=x_0$ and $\mathcal{L}_s^{-1}\left[\frac{x'(0)}{s^2}\right](t)=t x'(0)=t \dot{x}_0$ we finally get $$x(t)=\frac{A \omega}{m} \left(-\frac{2 \gamma }{\left(\gamma ^2+\omega ^2\right)^2}+\frac{t}{\gamma ^2+\omega ^2}+\frac{e^{-\gamma t} \left(\left(\gamma ^2-\omega ^2\right) \sin (\omega t)+2 \gamma \omega \cos (\omega t)\right)}{\omega \left(\gamma ^2+\omega ^2\right)^2}\right)+x_0+t\dot{x}_0$$
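The closed form can be verified numerically against the ODE and the initial conditions (a sketch with arbitrary sample parameters; the variable names are mine):

```python
import math

# assumed sample values for A, m, gamma, omega, x(0), x'(0)
A, m, gam, w, x0, v0 = 1.5, 2.0, 0.3, 2.0, 0.7, -0.2
D = gam ** 2 + w ** 2

def x(t):
    # the closed-form solution obtained via the Laplace transform
    osc = math.exp(-gam * t) * ((gam ** 2 - w ** 2) * math.sin(w * t)
                                + 2 * gam * w * math.cos(w * t))
    return A * w / m * (-2 * gam / D ** 2 + t / D + osc / (w * D ** 2)) + x0 + t * v0

# verify m x'' = A e^{-gamma t} sin(omega t) by a central-difference second derivative
h, t = 1e-4, 1.3
xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
assert abs(m * xpp - A * math.exp(-gam * t) * math.sin(w * t)) < 1e-5

# verify the initial conditions x(0) = x0 and x'(0) = v0
assert abs(x(0) - x0) < 1e-12
assert abs((x(h) - x(-h)) / (2 * h) - v0) < 1e-6
```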
|calculus|ordinary-differential-equations|physics|classical-mechanics|
1
Is L'Hospital applicable when the limit of derivative quotient does not exist?
According to some sources, e.g. Wikipedia, the application of L'Hospital's Rule requires four conditions to be satisfied. One of them is the existence of the limit of the quotient of the derivatives. We have the following limit: $$ \lim_{x\to \pi}\frac{x-\pi}{1+\cos x}=\left[\frac{0}{0}\right]=-\lim_{x\to \pi}\frac{1}{\sin x} $$ The last limit does not exist, but the one-sided limits are $\pm \infty$ . Should we say that the rule is not applicable in this particular case? Or the rule is applicable but not helpful? Or it's applicable and helpful? Note that the original limit also does not exist.
You can use L'Hôpital's rule for one-sided limits. Here you can use it twice, once on each side, and it says the one-sided limits are the same in the first limit as in the second limit (which they are). See Does L'hopital work for one sided limits?
|real-analysis|
0
Is L'Hospital applicable when the limit of derivative quotient does not exist?
According to some sources, e.g. Wikipedia, the application of L'Hospital's Rule requires four conditions to be satisfied. One of them is the existence of the limit of the quotient of the derivatives. We have the following limit: $$ \lim_{x\to \pi}\frac{x-\pi}{1+\cos x}=\left[\frac{0}{0}\right]=-\lim_{x\to \pi}\frac{1}{\sin x} $$ The last limit does not exist, but the one-sided limits are $\pm \infty$ . Should we say that the rule is not applicable in this particular case? Or the rule is applicable but not helpful? Or it's applicable and helpful? Note that the original limit also does not exist.
Using $1+\cos x= \frac{\sin x}{\tan \frac{x}{2}}$ we can reduce the initial limit to the tangent at $\frac{\pi}{2}$ , which exists one-sided, so L'Hospital gives the correct answer and can be called applicable here. In the book mentioned in my comment, the limit is considered one-sided.
|real-analysis|
1
Projective transformation that fixes a circle and sends an interior point to the center
The question is related to this question : In this handout theorem 2.3(Homographies that exist) item two claims For some circle ω and interior points P, Q, we can send ω to itself and send P to Q. (Note that Q is usually taken to be the center of ω.). I attempt to prove this. It suffices to prove that every interior point can be mapped to center (so the composition of a transform with an inverse transform will send the first interior point to the second interior point). Let A be the center of ω. Take an interior point B of ω. Let CD be the diameter through B. Let EF be the chord perpendicular to CD at A. Let GH be the chord perpendicular to CD at B. By theorem 2.3 item one, there exists a projective transformation $T$ on $\Bbb{RP}^2$ that maps C,D,G,H to C,D,E,F respectively. Then $T$ sends $B$ to $A$ . It remains to prove $T$ sends the circle ω to itself. But I don't know how to prove $T$ sends the circle ω to itself.
The unit circle has parametrization $\left(\frac{1-t^2}{t^2+1},\frac{2 t}{t^2+1}\right)$ In this parametrization, $C,D,E,F,G,H$ correspond to $1,-1,\infty,0,\frac1x,x$ respectively. The projective transformation $t\mapsto\frac{x-t}{xt-1}$ maps $1,-1,\frac1x,x$ to $1,-1,\infty,0$ . So there is a projective transformation $T$ on the unit circle that maps $C,D,G,H$ to $C,D,E,F$ respectively. Another way: The projective transformation $t\mapsto\frac{t-x}{xt-1}$ maps $1,-1,\frac1x,x$ to $-1,1,\infty,0$ . So there is a projective transformation $T$ on the unit circle that maps $C,D,G,H$ to $D,C,E,F$ respectively. Moreover, $\left(\begin{smallmatrix}1&-x\\x&-1\end{smallmatrix}\right)$ has trace $0$ , so $T$ is an involution. The involution center $S$ is the intersection of lines $FG,EH$ .
|projective-geometry|
1
Calculating radius given the arc length and distance
I wanted a formula that gives the radius of a circle from the arc length and the shortest distance between the arc's endpoints (the chord). I have derived it to this form: $d^2 = 2r^2(1-\cos{\frac{l}{r}})$ , where $d$ is the shortest distance, $r$ is the radius of the circle, and $l$ is the arc length. I understand that it cannot be solved for $r$ algebraically in exact form. Can someone give me a neat and very good approximation equation for $r$ ?
A simpler relation between $l$ , $d$ , and $r$ is $d = 2r\cdot\sin \frac{l}{2r}$ . This is equivalent to yours by applying the half angle or double angle formula. Note that for small angles, $\sin{x}\approx x$ , which gives $d=l$ and $r=\infty$ . This is obviously not good enough for your purposes. Using the approximation $\sin{x}\approx x-\frac{x^3}{3!}$ , the first two terms of the power series expansion, you do get something useful, namely something that simplifies to: $$r = \frac{l}{\sqrt{ 24\left(1-\frac{d}{l}\right) }}$$ That should be reasonably accurate until the arc spans about 60 or 70 degrees.
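A quick numerical check of this approximation (my sketch; the values of $r$ and $l$ are arbitrary): compute $d$ exactly from a known radius, then recover the radius from the series formula.

```python
import math

def chord_from_arc(r, l):
    # exact relation: d = 2 r sin(l / (2r))
    return 2 * r * math.sin(l / (2 * r))

def radius_estimate(l, d):
    # two-term series approximation: r ~ l / sqrt(24 (1 - d/l))
    return l / math.sqrt(24 * (1 - d / l))

r_true, l = 10.0, 5.0              # arc spans l/r = 0.5 rad, about 29 degrees
d = chord_from_arc(r_true, l)
r_est = radius_estimate(l, d)
assert abs(r_est - r_true) / r_true < 0.005   # well under 0.5% error here
```

The relative error grows with the subtended angle, consistent with the 60–70 degree rule of thumb in the answer.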
|trigonometry|approximation|transcendental-equations|
0
If $a+b+c+d+e=0$ so $180(a^6+b^6+c^6+d^6+e^6)\geq11(a^2+b^2+c^2+d^2+e^2)^3$
Let $a$, $b$, $c$, $d$ and $e$ be real numbers such that $a+b+c+d+e=0$. Prove that: $$180(a^6+b^6+c^6+d^6+e^6)\geq11(a^2+b^2+c^2+d^2+e^2)^3$$ I tried Holder, uvw and more, but without success.
WLOG, assume that $a, b, c \ge 0$ . If $d = e = 0$ , then $a = b = c = 0$ , and the inequality is true; Otherwise, we have $\min(d, e) < 0$ . WLOG, assume that $e = -1$ . Using $d = 1 - a - b - c$ , the desired inequality is written as \begin{align*} &180(a^6 + b^6 + c^6 + (1 - a - b - c)^6 + 1) \\ \ge{}& 11(a^2 + b^2 + c^2 + (1 - a - b - c)^2 + 1)^3. \tag{1} \end{align*} Approach 1. We can prove (1) using the pqr method + Buffalo Way. It is complicated. Approach 2. Using Vasc's Equal Variable Theorem (Case 3, Corollary 1.9, [1]), it suffices to prove (1) for the case either $a = 0$ or $a\le b = c$ . Reference [1] Vasile Cirtoaje, “The Equal Variable Method”, J. Inequal. Pure and Appl. Math., 8(1), 2007. Online: https://www.emis.de/journals/JIPAM/images/059_06_JIPAM/059_06.pdf
|inequality|contest-math|
0
Simplifying the Boolean expression $x'y + x(x+y')$
Expression: $$x'y + x(x+y')$$ My attempt: $x'y + x(x+y')$ $x'y + xx + xy' \quad \textit{After applying second Distributive law.}$ $x'y + x + xy' \quad \textit{After applying second Idempotent law.}$ $x'y + x(1 + y') \quad \textit{Breaking out x}$ $x'y + x(1) \quad \textit{After applying Annihilation law}$ $x'y + x$ The answer is: $x + y$ . N.B. I am still new to this and learning, I am using the following book: Discrete Mathematics for Computing / Edition 3 by Peter Grossman
Simplest solution: you can use a Venn diagram. Draw two overlapping circles, one for $x$ and one for $y$ . The term $x'y$ covers the region $y \setminus x$ , and $x(x+y') = x$ covers the region $x$ . The union of the two regions is everything covered by $x$ or by $y$ , i.e. $x+y$ . That is why the answer is $x+y$ .
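Since there are only two variables, the identity can also be confirmed by brute force over the four truth assignments (a quick sketch):

```python
def lhs(x, y):
    # x'y + x(x + y')
    return ((not x) and y) or (x and (x or (not y)))

def rhs(x, y):
    # x + y
    return x or y

# exhaustive truth-table check over all four assignments
for x in (False, True):
    for y in (False, True):
        assert lhs(x, y) == rhs(x, y)
```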
|logic|boolean-algebra|boolean|
0
Extracting Specific Component Functions from a Composite Numerical Function
I have a function $f(x)$ defined below as, $f(x) = \sum_{n=0}^N c_n(g_n(x + n) + g_n(x - n))$ I have no knowledge of what $c_n$ is, but I do have access to $f(x)$ numerically. Is there a way to apply an operator that would cancel all terms except for the one associated with a particular $n$ ? My goal is to extract $g_n(x)$ . A few notes: $g_n(0) = 1$ , and $g_n(x) \to 0$ as $|x|$ becomes large. I have tried a handful of things, such as moving to the Fourier domain, where a term such as $g_n(x+n) + g_n(x-n)$ becomes the Fourier-domain representation $2\hat{g}_n(k)\cos(nk)$ . But I am struggling to develop a filter, numerical or analytical, that is able to attenuate or filter out $g_m(x)$ where $n\neq m$ .
If we let $\bar{\mathbf{g}}(x) = \mathbf{g}(-x)$ , then I think what you have written is equal to $$ \mathbf{f} = \mathbf{c} \ast \mathbf{g} + \mathbf{c} \ast \bar{\mathbf{g}} = \mathbf{c} \ast ( \mathbf{g} + \bar{\mathbf{g}}). $$ Knowing neither $\mathbf{c}$ nor $\mathbf{g}$ , this is a blind deconvolution problem, which may or may not be solvable depending on the characteristics of both $\mathbf{c}$ and $\mathbf{g}$ .
|real-analysis|numerical-methods|fourier-analysis|signal-processing|
0
Confused regarding CMI question
I think there might be an error in the official solution to the following question: CMI May 23, 2022 BSc entrance exam , Question A2(7): You are asked to take three distinct points $1, \omega_1, \omega_2$ in the complex plane such that $|\omega_1|=|\omega_2|=1$ . Consider the triangle $T$ formed by the complex numbers $1, \omega_1, \omega_2$ : (7) If $\omega_1+\omega_2$ is real, the triangle $T$ must be isosceles. The official solution tells that the statement is true. But I think I might have a counterexample: What if we choose $\omega_1=\omega$ and $\omega_2=-\omega$ ? Doesn’t that satisfy the criteria without the triangle being isosceles? Perhaps ‘nonzero real’ might have been a more appropriate choice in the question statement.
Yes, you are correct. Since $w_1$ and $w_2$ both have modulus $1$ and their sum is real, we get $\operatorname{Im}(w_1) = -\operatorname{Im}(w_2) = m$ . We also know $$\operatorname{Re}(w_1)^2 + \operatorname{Im}(w_1)^2 = 1, \qquad \operatorname{Re}(w_2)^2 + \operatorname{Im}(w_2)^2 = 1$$ from the modulus conditions. Subtracting these gives $\operatorname{Re}(w_1)^2 = \operatorname{Re}(w_2)^2$ , so $\operatorname{Re}(w_1) = \pm\operatorname{Re}(w_2) = k$ . Hence $w_1 = k + im$ , $w_2 = \pm k - im$ , and the third vertex is $1$ . Drawing these, you can see that only the case $w_2 = k - im$ (i.e. $w_2 = \overline{w_1}$ ) forces the triangle to be isosceles; the other case $w_2 = -k - im$ gives $w_1 + w_2 = 0$ , which is exactly your counterexample. So yes, you are correct.
|complex-numbers|contest-math|examples-counterexamples|
0
Optimizing sum of a quadratic function and $l_1$ norm on a sphere
I am currently attempting to derive the dual and KKT conditions for the following optimization problem: \begin{equation} \min_{x\in \mathbb R^n} \quad x^TMx+a ||x||_1^2 \quad \text{subject to} \quad ||x||_2=1. \end{equation} Regrettably, I haven't made any significant progress thus far. I am open to making assumptions on matrix $M$ and $a>0$ if it helps in deriving meaningful conditions. Any guidance or insights would be greatly appreciated. The case without $l_1$ norm is a well-studied problem (e.g., see 1 and 2 ).
To derive the dual and KKT conditions, I use $\|x\|^2_2=1$ instead of $\|x\|_2=1$ for computational simplicity. There is no restriction on $a$ and $M$ in the following, so the objective can be non-convex. Also, recall the following facts: $$x^TMx= x^T\left(\frac{M^T+M}{2} \right)x, \tag{1}$$ $$\|x\|_2 \le \|x\|_1 \le \sqrt {n} \|x\|_2 \tag{2}$$ and the notation $[a]^-=\min(a,0)$ and $[a]^+=\max(a,0).$ From (1), the problem can be written as \begin{equation} \min_{x\in \mathbb R^n} \quad x^T\left(\frac{M^T+M}{2} \right)x+a ||x||_1^2 \quad \text{subject to} \quad ||x||^2_2=1. \end{equation} Dual problem From (2), we have $$x^T\left(\frac{M^T+M}{2} \right)x+a \|x\|_1^2+\mu\|x\|^2 \ge \\ x^T \left (\frac{M^T+M}{2}+\left ([a]^- n+[a]^++\mu \right )I \right)x.$$ If $\frac{M^T+M}{2}+\left ([a]^- n+[a]^++\mu \right )I \succeq 0,$ the minimum of the RHS is zero, which is attained at $x=0$ in both sides. If $\frac{M^T+M}{2}+\left ([a]^- n+[a]^++\mu \right )I \nsucceq 0$ , for some $i$ there exis
|linear-algebra|optimization|convex-analysis|convex-optimization|non-convex-optimization|
1
Is an extension of an algebraic group by a multiplicative group a semidirect product?
This is probably a very simple question with a negative answer, but I somehow cannot find a counterexample. Let $X$ be a smooth algebraic variety over an algebraically closed field $k$ . Assume that $X$ contains an embedded curve $C$ isomorphic to $\mathbb{A}^1_*$ , such that for every $\varphi\in \operatorname{Aut}(X)$ we have $\varphi(C)=C$ ; and the induced action of $\operatorname{Aut}(X)$ on $C$ is transitive. Is it true that $\operatorname{Aut}(X)$ contains a subgroup isomorphic to $\mathbb{G}_{m}$ , acting transitively on $C$ ? Equivalently, let $\operatorname{Aut}(X)^{\circ}$ be the identity component of $\operatorname{Aut}(X)$ , and let $\operatorname{Stab}(C)^{\circ}$ be the subgroup of the connected component of $\operatorname{Aut}(X)^{\circ}$ consisting of those automorphisms which fix $C$ pointwise. I assume that the restriction map $\operatorname{Aut}(X)^{\circ}\to \operatorname{Aut}(C)^{\circ}\cong \mathbb{G}_{m}$ is surjective, and I ask whether the exact sequence $1\to
I am not going to think too hard on the geometric problem with the curve, but as for the question about algebraic groups, the answer is negative. Example: $1\to \mu_n\to \mathbb{G}_m \xrightarrow{x\mapsto x^n} \mathbb{G}_m\to 1$ is not split.
|algebraic-geometry|algebraic-groups|automorphism-group|toric-geometry|
0
Can $\csc(x)\csc(x + y)\sin(y)+\cot(y)$ be rewritten to make its symmetry in $x$ and $y$ more obvious?
I have a function $f(x, y) = \csc(x) \csc(x + y) \sin(y) + \cot(y)$ , and I know from graphing it that the function is symmetric in $x$ and $y$ such that $f(x, y) = f(y, x)$ . However, that fact isn't obvious just by looking at the equation, since the two variables don't seem to be treated the same. Can the expression be written in such a way that its symmetry is obvious from the expression itself, without needing to graph it? I've tried using trig identities to replace the cosecants and cotangent with sines and cosines, using the angle addition identities to isolate one-variable-per-trig-function, and rearranging and regrouping terms, but without success.
Expanding my comment ... We have $$ f(x,y) = \frac{\sin y}{\sin x\sin(x+y)}+\frac{\cos y}{\sin y} = \frac{\sin^2y+\sin x\cos y\sin(x+y)}{\sin x\sin y\sin(x+y)} \tag1$$ The denominator is symmetric in $x$ and $y$ , so let's focus on the numerator ... $$\begin{align} &\sin^2y+\sin x\cos y(\sin x\cos y+\cos x\sin y) \tag2\\[4pt] \to\quad &(1-\cos^2y)+\sin^2x\cos^2y+\sin x\cos x\sin y\cos y \tag3\\[4pt] \to\quad & 1-(1-\sin^2x)\cos^2y+\sin x\cos x\sin y\cos y \tag4\\[4pt] \to\quad & 1 - \cos^2x\cos^2y+\sin x\cos x\sin y\cos y \tag5\\[4pt] \to\quad & 1 - \cos x\cos y(\cos x\cos y-\sin x\sin y) \tag6\\[4pt] \to\quad & 1 - \cos x\cos y \cos(x+y) \tag7 \end{align}$$ Therefore, $$f(x,y) \;=\; \frac{1-\cos x\cos y\cos(x+y)}{\sin x\sin y\sin(x+y)} \;=\; f(y,x) \tag{$\star$}$$
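The simplified symmetric form can be spot-checked against the original expression numerically (my sketch; the sample range keeps all the sines away from zero):

```python
import math, random

def f(x, y):
    # original form: csc(x) csc(x+y) sin(y) + cot(y)
    return math.sin(y) / (math.sin(x) * math.sin(x + y)) + math.cos(y) / math.sin(y)

def f_sym(x, y):
    # symmetric form: (1 - cos x cos y cos(x+y)) / (sin x sin y sin(x+y))
    return (1 - math.cos(x) * math.cos(y) * math.cos(x + y)) / (
        math.sin(x) * math.sin(y) * math.sin(x + y))

random.seed(0)
for _ in range(1000):
    x = random.uniform(0.1, 1.4)
    y = random.uniform(0.1, 1.4)
    assert abs(f(x, y) - f_sym(x, y)) < 1e-9   # the two forms agree
    assert abs(f(x, y) - f(y, x)) < 1e-9       # and f is symmetric
```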
|algebra-precalculus|functions|trigonometry|symmetric-functions|
1
Turn two ODEs first degree into a Hamiltonian with code
This is more of a coding question, but I thought I'd post it here because of its mathematical foundation. Two first-order ODEs can describe a Hamiltonian system. The connection between the ODEs and the Hamiltonian system is given by the canonical equations, which are: $ \dot{x}=\frac{\partial H}{\partial y} $ $ \dot{y}=-\frac{\partial H}{\partial x} $ For example, the two ODEs $ \dot{x}= 2y $ $ \dot{y}= -2x $ result in the following Hamiltonian: $ H(x,y)=x^2+y^2 $ Is there a way to create a code/algorithm that can calculate the Hamiltonian from these ODEs?
You probably want a subroutine that can take a function as an argument. You might also want a variable step size. If the step size is $\Delta x>0$ and $dy/dx=f(x,y)$ , then $y_{i+1}\approx y_i + \Delta x f(x,y), x_{n+1}=x_n + \Delta x$ . Then iterate. That's not the most stable or accurate way of performing an integration; RK4 is much better in both regards. As to taking derivatives, $df/dx \approx \frac{f(x+\Delta x)-f(x)}{\Delta x}$ . You can just use the definition with a small step size. Too big, and the calculation obviously isn't an accurate approximation; too small a step size, and you get a similar problem from round-off. To reduce that issue, you want to use the symmetric derivative $f'(x) \approx \frac{f(x+\Delta x/2)-f(x-\Delta x/2)}{\Delta x}$ . This will give you a fairly accurate approximation to the derivative. One caveat among possibly many: it will give you a derivative where the derivative does not exist, cf. $|x|$ at $x=0$ . As to your specific problem. $H=x^2+y^2\implies \partial H/\partial x =
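For the concrete example in the question, the consistency check can be written with a symmetric-difference derivative (a sketch; note the canonical equations pair $\dot{x}$ with $\partial H/\partial y$ and $\dot{y}$ with $-\partial H/\partial x$):

```python
def H(x, y):
    # candidate Hamiltonian for xdot = 2y, ydot = -2x
    return x**2 + y**2

def partial(f, x, y, var, h=1e-6):
    # symmetric-difference approximation of a partial derivative
    if var == "x":
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# check the canonical equations against xdot = 2y, ydot = -2x
for x, y in [(0.5, 1.2), (-1.0, 0.3), (2.0, -0.7)]:
    assert abs(partial(H, x, y, "y") - 2 * y) < 1e-6    # xdot = dH/dy
    assert abs(-partial(H, x, y, "x") + 2 * x) < 1e-6   # ydot = -dH/dx
```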
|algorithms|python|hamilton-equations|canonical-transformation|
0
Fourier Curve Fitting
I'm having a rough time understanding the world of the Fourier series. I've read an awful lot, but am not mathematically inclined, so most of what I have read is not in terms my brain understands. Here's what I am trying to do: I have a bunch of measured data points at equal time spacings. They look like a sine wave, so I want to figure out a function that I can use to approximate the data (so that I can pick any time value and get an approximate data value). I don't want to use matlab (because I don't have it, and because I need to implement this in code with different measured datasets). I can run the data through a FFT, but I'm having trouble interpreting the output of the FFT and constructing a usable function of sines and cosines. None of what I've read makes sense to me for this step (as a programmer, and not a math guy). I've watched videos of determining the Fourier coefficients for a square wave, but am lost for how to do this for arbitrary measured data. Is there a relationsh
Here's an example of computing the DFT and inverse DFT of $\sin(2 \pi t)$ over the interval $0\le t\le 1$ with a step size of $\frac{1}{M-1}=\frac{1}{100}$ using the formulas at Discrete Fourier transform - Definition . The Mathematica functions Fourier and InverseFourier perform these operations much more efficiently, but coding the sums directly provides more insight into the underlying formulas. One detail to account for is the Mathematica convention that the first element of a list uses an index value of $1$ instead of an index value of $0$ .
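A definition-level implementation of the same computation can be sketched in Python (my sketch, not the original Mathematica listing): a direct $O(N^2)$ evaluation of the DFT and inverse-DFT sums, with a round-trip check that the samples of $\sin(2\pi t)$ are recovered.

```python
import cmath, math

M = 101   # samples of sin(2*pi*t) on 0 <= t <= 1, step 1/(M-1) = 1/100
x = [math.sin(2 * math.pi * n / (M - 1)) for n in range(M)]

def dft(x):
    # X_k = sum_n x_n exp(-2*pi*i*k*n/N), straight from the definition
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # x_n = (1/N) sum_k X_k exp(+2*pi*i*k*n/N)
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

X = dft(x)
x_back = idft(X)
assert all(abs(a - b.real) < 1e-9 and abs(b.imag) < 1e-9
           for a, b in zip(x, x_back))   # round trip recovers the samples
```

For a real signal, the magnitudes `abs(X[k])` show which sinusoidal components are present; a production implementation would use an FFT routine instead of these quadratic loops.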
|fourier-analysis|fourier-series|fast-fourier-transform|
0
Laplace transform of a resonant second-order ODE
I'm trying to solve the following ODE with the Laplace transform $y'' + y = \cos \omega t$ with initial conditions $y(0)=y'(0)=0$ . I get $Y(s) = \frac{s}{(s^2 + 1)^2}$ but then, I don't know how to compute the inverse transformation. Decomposing this fraction into partial fractions does not help in this specific case, which seems to involve resonance. How would you use the inverse Laplace transform to get the final solution? Thank you for your help!
Source: Inverse Laplace Transform partial fraction $\frac{\omega ^{2}}{\left ( s^{2}+\omega ^{2} \right )( s^{2}+\omega ^{2} )}$ By taking the derivative $\frac{d}{d \omega} \mathcal{L}(\cos \omega t) = \mathcal{L}(- t \sin \omega t) = \frac{d}{d \omega} \frac{s}{s^2 + \omega^2} = - \frac{2 \omega s}{(s^2 + \omega^2)^2}$ . The derivative commutes with the Laplace transform in this case since we derive w.r.t. $\omega$ while the Laplace transform integrates w.r.t. $t$ . Finally, fix $\omega=1$ , so we obtain $\mathcal{L}(t \sin t) = \frac{2s}{(s^2+1)^2}$ and then $y(t) = \frac{1}{2} t \sin t$ .
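The resulting solution can be verified directly against the ODE (my sketch): $y(t) = \frac{1}{2} t \sin t$ should satisfy $y'' + y = \cos t$ with $y(0) = 0$.

```python
import math

def y(t):
    # candidate solution from the inverse Laplace transform
    return 0.5 * t * math.sin(t)

def second_derivative(f, t, h=1e-4):
    # central-difference approximation of f''(t)
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

assert abs(y(0.0)) < 1e-12
for t in [0.3, 1.0, 2.5, 4.0]:
    residual = second_derivative(y, t) + y(t) - math.cos(t)
    assert abs(residual) < 1e-6   # y'' + y - cos t should vanish
```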
|ordinary-differential-equations|
0
In exponential growth, what does it mean for the derivative to be proportional to the quantity itself?
I'm largely curious as to what that specifically means: the equation usually associated with this would be like $y(t) = y(0)\, e^{kt}$ , but I don't see the proportionality of its derivative to itself. I've tried calculating values from different $e$ -exponential charts, but I have a hard time seeing what is meant by proportional. If somebody could point it out, that'd be sweet.
I assume you mean something like \begin{equation} \frac{dy}{dx} = ky(x), \end{equation} where the derivative of $y$ with respect to $x$ is proportional to $y$ itself (which here is a function of $x$ ). Your question is then (I think) why are exponential functions described in this way? If so, the answer is because the equation above is the differential equation for an exponential function. To see this, we can solve it using the separation of variables method. First, we move the $y$ over to the other side of the equation. \begin{equation} \frac{1}{y}\frac{dy}{dx} = k \end{equation} Then we integrate both sides with respect to $x$ . \begin{equation} \int\frac{1}{y}\frac{dy}{dx}dx = \int k dx \end{equation} On the left-hand-side we 'separate' the variables, putting $(dy/dx)dx = dy$ , so the integration is now over $y$ . Meanwhile, on the right-hand-side, the integration is straight-forward. \begin{equation} \int\frac{dy}{y} = kx + C. \end{equation} Here, $C$ is the constant of integration
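Numerically, "proportional" means the ratio of slope to value is the same constant $k$ at every point; a quick sketch (the values of $k$ and $y(0)$ are arbitrary):

```python
import math

k, y0 = 0.7, 2.0

def y(x):
    # exponential growth y(x) = y(0) e^{kx}
    return y0 * math.exp(k * x)

def slope(x, h=1e-6):
    # symmetric-difference approximation of dy/dx
    return (y(x + h) - y(x - h)) / (2 * h)

# slope / value is the constant k everywhere on the curve
for x in [0.0, 1.0, 2.0, 3.0]:
    assert abs(slope(x) / y(x) - k) < 1e-8
```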
|calculus|
1
Average item cost
So let's say I can open up crates that have balls in them, but the amount of balls received from each crate has a different percentage probability. As shown below: 10x balls = 40% chance, 20x balls = 30% chance, 50x balls = 20% chance, 100x balls = 7% chance, 200x balls = 2% chance, 1000x balls = 1% chance. How can I figure out the average ball amount received per crate opened over an infinite amount of openings, or at least a large number opened, so I can get a closely accurate answer?
What you are asking for is the expected value of the number of balls in a crate. It's essentially an average of all the possibilities weighted by their probabilities. The expected value is calculated by multiplying the outcomes by the probability of each outcome. For example, consider finding the expected value when you roll a normal 6-sided die. There are 6 outcomes: 1,2,3,4,5, or 6. The probabilities for each of those outcomes are all 1/6, so we can find the expected value in the following way: $$\frac16*1+\frac16*2+\frac16*3+\frac16*4+\frac16*5+\frac16*6=\boxed{3.5}.$$ So this means that if we rolled a die an infinite number of times, and took the average value of the results of those rolls, the average would be 3.5. Can you figure out how to apply this to your problem?
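Spelling out the weighted sum for the crate probabilities in the question (a sketch; this is exactly the die calculation above applied to the crate outcomes):

```python
# outcome (number of balls) -> probability, from the question
outcomes = {10: 0.40, 20: 0.30, 50: 0.20, 100: 0.07, 200: 0.02, 1000: 0.01}

# the probabilities must total 1 for a valid distribution
assert abs(sum(outcomes.values()) - 1.0) < 1e-12

# expected value: multiply each outcome by its probability and sum
expected = sum(balls * p for balls, p in outcomes.items())
assert abs(expected - 41.0) < 1e-9   # 4 + 6 + 10 + 7 + 4 + 10 = 41 balls per crate
```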
|probability|average|percentages|
1
If $f : [0,1] \to \mathbb R$ is continuous with $\int f =1$, then $f(a)f(b)=1$ for some $a<b$
If I am given a function $$ f: [0,1] \rightarrow \mathbb{R} $$ which is continuous, and for which $ \int_{0}^{1} f(x) \, dx = 1$ , how do I prove the existence of $a,b\in (0,1)$ with $a<b$ such that $f(a)f(b)=1$ ? Also, how could I prove that there are infinitely many such pairs? I have tried various things, such as working with the integral and trying to find some form of it to which I could apply MVT for definite integrals on an interval $[0,1/2]$ , so that I could get an ' $a$ ' in said interval and $b=1-a$ , though I've had no success yet. I'm thinking the solution wouldn't even use said theorem, as the values HAVE to be inside $[0,1]$ and NOT $0$ or $1$ .
One has to look at the function $g(x,y) = f(x)\cdot f(y)$ and integrate it over $[0,1]\times[0,1]$ . Clearly, this integral equals $\left(\int_0^1 f(x)\,dx\right)^2 = 1$ , so the rest follows from the mean value theorem for integrals.
|real-analysis|definite-integrals|continuity|
0
Is it true if a C*algebra A is a primitive, and it contains a simple ideal F, then F is contained in every non zero closed ideal of A?
I was wondering whether the following statement is true: If a $C^*$ algebra $A$ is primitive, and it contains a simple ideal $F$ , then $F$ is contained in every non-zero closed ideal of $A$ . We say a $C^*$ algebra is primitive if it has a faithful irreducible representation on a Hilbert space. We say a (closed) ideal is simple if it doesn't contain any non-zero proper ideal. Basically, we want to show that any closed ideal $I$ of $A$ must have a non-trivial intersection with $F$ . If this holds, then since $F$ is simple, it must be the case that $I$ contains $F$ . But why is this true?
This is true. If $F = \{0\}$ there is nothing to prove. So assume $F$ is nontrivial. Recall that in a $C^\ast$ -algebra, if $I$ and $J$ are closed ideals, then $I \cap J = IJ$ (see Murphy’s $C^\ast$ -algebras and operator theory , page 82). Recall that a primitive ideal is always prime in a $C^\ast$ -algebra, i.e., if $I$ is primitive, and $J_1J_2 \subset I$ , then it must be the case that either $J_1 \subset I$ or $J_2 \subset I$ (see Theorem 5.4.5 of the same book). Combining the above, we see that in a primitive $C^\ast$ -algebra, since $\{0\}$ is a primitive ideal, any two nonzero closed ideals must intersect nontrivially. But $F$ , per your assumption, is simple. So for any nonzero $I$ , its intersection with $F$ must be nontrivial, but its intersection with $F$ is an ideal contained in $F$ , whence it must be $F$ itself, i.e., $F \subset I$ .
|functional-analysis|operator-theory|operator-algebras|
1
Problem understanding pigeonhole principle
In the café, 4 people are having lunch, whose names are $v_1$ , $v_2$ , $v_3$ , and $v_4$ . Some of them know each other. The number of acquaintances of person $v_i$ (who are in the café) is denoted by $d(v_i)$ . Prove that among the numbers $d(v_1)$ , $d(v_2)$ , $d(v_3)$ , and $d(v_4)$ , there are equal ones. I guess I have to apply the pigeonhole principle here. The possible values are $d(v_i) \in \{0, 1, 2, 3\}$ (it is a possibility that some have zero acquaintances, and you can't be an acquaintance of yourself). Therefore, there are 4 possible values and 4 people. And to apply the pigeonhole principle we must have $n$ items put into $m$ containers, with $n > m$ . How do I work it out?
Clearly, $d(v_i) \in \{0,1,2,3\}$ . Note first that the number of people with $d(v_i) = 0$ is $0$ , $1$ , or $2$ : if three or more people had $0$ acquaintances, two of the four numbers would already be equal. Case 1: nobody has $d(v_i) = 0$ . Then $d(v_1),d(v_2),d(v_3),d(v_4) \in \{1,2,3\}$ : four numbers with only three possible values, so two are equal. Case 2: exactly one person has degree $0$ ; without loss of generality, $d(v_4) = 0$ . Then nobody knows $v_4$ , so $d(v_1),d(v_2),d(v_3) \in \{1,2\}$ : three numbers, two possible values. Case 3: exactly two people have degree $0$ ; without loss of generality, $d(v_3) = d(v_4) = 0$ . Then $v_1$ and $v_2$ can only know each other, and since their degrees are nonzero, $d(v_1) = d(v_2) = 1$ . Thus, in every case two of the numbers are equal.
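With only four people there are just $2^6 = 64$ possible acquaintance graphs, so the claim can also be verified exhaustively (a sketch):

```python
from itertools import combinations, product

def degrees(edges, pairs, n=4):
    # degree sequence of the graph whose edge set is flagged by `edges`
    deg = [0] * n
    for (u, v), present in zip(pairs, edges):
        if present:
            deg[u] += 1
            deg[v] += 1
    return deg

pairs = list(combinations(range(4), 2))     # the 6 possible acquaintance pairs
for edges in product([0, 1], repeat=6):     # all 64 graphs on 4 vertices
    # pigeonhole conclusion: some two of d(v1)..d(v4) must coincide
    assert len(set(degrees(edges, pairs))) < 4
```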
|pigeonhole-principle|
1
Simplifying the Boolean expression $x'y + x(x+y')$
Expression: $$x'y + x(x+y')$$ My attempt: $x'y + x(x+y')$ $x'y + xx + xy' \quad \textit{After applying second Distributive law.}$ $x'y + x + xy' \quad \textit{After applying second Idempotent law.}$ $x'y + x(1 + y') \quad \textit{Breaking out x}$ $x'y + x(1) \quad \textit{After applying Annihilation law}$ $x'y + x$ The answer is: $x + y$ . N.B. I am still new to this and learning, I am using the following book: Discrete Mathematics for Computing / Edition 3 by Peter Grossman
For short, I'll abbreviate below Absorption, Associativity, Commutativity, Distributivity and Idempotence as abs, assoc, comm, distr and idemp, respectively. Lemma. In a Boolean algebra, if $a+b=ab$ , then $a=b$ . Proof. If $a+b = ab$ then \begin{align} a &= a (a + b) \tag{abs}\\ &= a (ab) \tag{hypothesis}\\ &= ab \tag{assoc, idemp}\\ &= b (ab) \tag{assoc, comm, idemp}\\ &= b (a + b) \tag{hypothesis}\\ &= b. \tag{abs} \end{align} Now, \begin{align} (x + x'y) + (x + y) &= x + y + x'y \tag{assoc, comm, idemp}\\ &= x + (y + x'y) \tag{assoc}\\ &= x + y, \tag{abs} \end{align} and \begin{align} (x + x'y)(x+y) &= x + xy + x'yx + x'y \tag{distr}\\ &= x + (xy + x'y) \tag{assoc, $xx'=0$}\\ &= x + (x + x')y \tag{distr}\\ &= x + y. \tag{$x + x' = 1$} \end{align} Hence, letting $a = x + x'y$ and $b = x + y$ , we have $a + b = b = ab$ , and so $a=b$ , by the Lemma. So $x + x'y = x + y$ . Shorter version: \begin{align} x + x'y &= (x + xy) + x'y \tag{abs}\\ &= x + (xy + x'y) \tag{assoc}\\ &= x + (x +
|logic|boolean-algebra|boolean|
1
What Toric Variety does this fan correspond to?
Let $\Sigma$ be the fan defined by $\{\sigma_1,\sigma_2,\sigma_3,\sigma_4,\star\}$ , where $\sigma_1=\operatorname{cone}(e_1)$ , $\sigma_2=\operatorname{cone}(e_2)$ , $\sigma_3=-\sigma_1$ , $\sigma_4=-\sigma_2$ , and $\star=\operatorname{cone}((0,0))$ . We are working on a two dimensional lattice $N$ extended to the real vector space $N_{\mathbb R}\cong \mathbb R^2$ , and $e_i$ are the standard basis vectors. Now my question is, how do I recognize what toric variety this fan corresponds to? Importantly, I can't quite figure out how things should glue. Here's my process, since the dual cones are given by: $$\sigma_i^*=\{m\in M_{\mathbb R}:m(u)\geq 0,\forall u\in \sigma_i\}$$ where $M$ is dual lattice, we have that: \begin{alignat}{3} \sigma_1^*=&\operatorname{cone}(-e^2,e^2,e^1)\qquad&\sigma_2^*=&\operatorname{cone}(-e^1,e^1,e^2)\\ \sigma_3^*=&\operatorname{cone}(-e^2,e^2,-e^1)\qquad&\sigma_4^*=&\operatorname{cone}(-e^1,e^1,-e^2) \end{alignat} So we have that: \begin{alignat}{3} U_{\sig
The variety is $\mathbb{P}^1 \times \mathbb{P}^1$ minus the four torus-fixed points. Ripping out the big (two-dimensional) cones from the fan corresponds to removing the vertices in the polytopal picture.
|algebraic-geometry|schemes|integer-lattices|dual-cone|toric-varieties|
0
Find the center of all circles that touch the $x$-axis and a circle centered at the origin
Given a circle $C$ of radius $1$ centered at the origin, I want to determine the locus of the centers of all circles that touch $C$ and the $x$ -axis. This is the red curve in the following Desmos plot , where the blue circle touches $C$ and the $x$ -axis: Let $P=(\sin\alpha,\cos\alpha)$ be the point where the blue circle touches $C$ . Moving $\ell$ units towards the center of $C$ (the origin) gives the point $Q=(1-\ell)(\sin\alpha,\cos\alpha)$ . Now a circle around $Q$ touches the $x$ -axis when the $y$ -coordinate of $Q$ equals $\ell$ , so that $Q$ has the same distance to $C$ and to $y=0$ : $$ Q_y=(1-\ell)\cos\alpha \stackrel.= \ell \tag1 $$ This equation is solved by $$Q_y=\frac{\cos\alpha}{1+\cos\alpha} \tag2$$ It's also easy to compute the $x$ -ccordinate of $Q$ , which yields $Q$ depending on $\alpha$ : $$ Q=Q(\alpha)=\left(\frac{\sin\alpha}{1+\cos\alpha}, \frac{\cos\alpha}{1+\cos\alpha}\right) \tag3 $$ Where I am stuck is to compute $Q_y$ as a function of $Q_x$ , that is find $
Let the centre of the circle be $C = (h,r).$ since, this circle touches $x^2 + y^2 = 1$ internally, $|OC| = 1 - r.$ $\implies h^2+r^2 = (1-r)^2.$ $\implies h^2 = 1 - 2r.$ $\implies h = \sqrt{1-2r}$ or $ r = \displaystyle\frac{1-h^2}{2}.$
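The relation $h^2 = 1 - 2r$ , i.e. $r = (1-h^2)/2$ , can be checked against the parametrization $Q(\alpha)$ from equation (3) in the question (a numeric sketch):

```python
import math

def Q(alpha):
    # Q(alpha) from equation (3) in the question
    return (math.sin(alpha) / (1 + math.cos(alpha)),
            math.cos(alpha) / (1 + math.cos(alpha)))

# the locus: center (h, r) satisfies r = (1 - h^2) / 2
for i in range(1, 100):
    alpha = i * math.pi / 100     # avoid alpha = pi, where 1 + cos(alpha) = 0
    qx, qy = Q(alpha)
    assert abs(qy - (1 - qx * qx) / 2) < 1e-6
```

This also answers where the question got stuck: $Q_x = \tan(\alpha/2)$, and $Q_y = (1 - Q_x^2)/2$ expresses $Q_y$ as a function of $Q_x$.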
|geometry|inverse-function|locus|moduli-space|
0
Find the center of all circles that touch the $x$-axis and a circle centered at the origin
Given a circle $C$ of radius $1$ centered at the origin, I want to determine the locus of the centers of all circles that touch $C$ and the $x$ -axis. This is the red curve in the following Desmos plot , where the blue circle touches $C$ and the $x$ -axis: Let $P=(\sin\alpha,\cos\alpha)$ be the point where the blue circle touches $C$ . Moving $\ell$ units towards the center of $C$ (the origin) gives the point $Q=(1-\ell)(\sin\alpha,\cos\alpha)$ . Now a circle around $Q$ touches the $x$ -axis when the $y$ -coordinate of $Q$ equals $\ell$ , so that $Q$ has the same distance to $C$ and to $y=0$ : $$ Q_y=(1-\ell)\cos\alpha \stackrel.= \ell \tag1 $$ This equation is solved by $$Q_y=\frac{\cos\alpha}{1+\cos\alpha} \tag2$$ It's also easy to compute the $x$ -ccordinate of $Q$ , which yields $Q$ depending on $\alpha$ : $$ Q=Q(\alpha)=\left(\frac{\sin\alpha}{1+\cos\alpha}, \frac{\cos\alpha}{1+\cos\alpha}\right) \tag3 $$ Where I am stuck is to compute $Q_y$ as a function of $Q_x$ , that is find $
The radius of a blue circle is $y$ . The distance between the centers is $1-y$ (do you see why?). Therefore, $x^2 + y^2 = (1-y)^2$ .
|geometry|inverse-function|locus|moduli-space|
0
Proving $\frac{s^2}{c}\left[\tan\frac\alpha2+\tan\frac\beta2\right]\left[\tan\frac\alpha2\tan\frac\beta2\right]=(s−c)\cot\frac\gamma2$ in any triangle
The question below was part of my board exam from FBISE. The key provided by the board says that this statement should be proved using the law of cosines, the law of tangents, or the law of sines. I'm confused; furthermore, I have searched for this question on the internet and haven't found anything related to it, so any help would be appreciated. In triangle $ABC$ (with usual notations) such that $$ \alpha+\beta+\gamma = \pi $$ prove that $$ \frac{s^2}{c}\left[\tan\left(\frac{\alpha}{2}\right)+ \tan \left(\frac{\beta}{2}\right)\right]\left[\tan \left(\frac{\alpha}{2}\right) \tan \left(\frac{\beta}{2}\right)\right] = (s − c) \cot \left(\frac{\gamma}{2}\right) $$ where $ s=\frac{a+b+c}{2} $ and $a$ , $b$ and $c$ are the sides of the triangle. Furthermore, $\alpha$ , $\beta$ and $\gamma$ are the angles of the triangle facing sides $a$ , $b$ , and $c$ respectively.
Using $\tan(\frac{\alpha}{2}) = \sqrt{\frac{(s-b)(s-c)}{s(s-a)}}$ and $\tan(\frac{\beta}{2}) = \sqrt{\frac{(s-c)(s-a)}{s(s-b)}}$ , and putting $\tan(\frac{\alpha}{2})+ \tan(\frac{\beta}{2}) = \cot(\frac{\gamma}{2})\left(1-\tan(\frac{\alpha}{2})\tan(\frac{\beta}{2})\right)$ , then simplifying the LHS, we get the result easily.
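A quick numerical spot-check of the identity on a concrete triangle (my sketch; the side lengths 7, 8, 5 are arbitrary, and the angles come from the law of cosines):

```python
import math

a, b, c = 7.0, 8.0, 5.0          # any valid triangle side lengths
s = (a + b + c) / 2              # semiperimeter

# angles opposite each side, via the law of cosines
alpha = math.acos((b * b + c * c - a * a) / (2 * b * c))
beta  = math.acos((a * a + c * c - b * b) / (2 * a * c))
gamma = math.acos((a * a + b * b - c * c) / (2 * a * b))

ta, tb = math.tan(alpha / 2), math.tan(beta / 2)
lhs = (s * s / c) * (ta + tb) * (ta * tb)
rhs = (s - c) / math.tan(gamma / 2)   # (s - c) cot(gamma / 2)
assert abs(lhs - rhs) < 1e-9
```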
|trigonometry|
1
Solving inequality of multiple variables
What should be a general approach towards solving these kinds of questions algebraically? I tried to code this up and find a solution, but I don't know whether it's correct or not, as I've used a scipy package and don't know the theory of this problem that well.

```python
from scipy.optimize import linprog

# linprog minimizes, so flip the sign of the objective to maximize
c = [-1, -1, -1, -1, -1, -1, -1, -1, -1]  # objective function

# inequality constraints A @ x <= b
A = [
    [-1, 2, -3, 4, -5, 6, -7, 8, -9],
    [-9, 8, -7, 6, -5, 4, -3, 2, -1],
    [1, -1, 1, -1, 1, -1, 1, -1, 1],
]
b = [-1, -2, 0.2]  # RHS

result = linprog(c, A_ub=A, b_ub=b, method='highs')
print(result.success, result.x)
```
You can just use Fourier–Motzkin elimination to deal with such systems systematically.
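A minimal sketch of one Fourier–Motzkin elimination step (my illustration, not tied to the specific system in the question): every pair of an upper bound and a lower bound on the eliminated variable combines into one constraint without it.

```python
def eliminate(rows, j):
    """One Fourier-Motzkin step: eliminate variable j from rows (coeffs, b),
    each row meaning sum_i coeffs[i] * x[i] <= b."""
    pos, neg, keep = [], [], []
    for coeffs, b in rows:
        if coeffs[j] > 0:    # normalize to  x_j <= ...  (coefficient +1)
            pos.append(([ci / coeffs[j] for ci in coeffs], b / coeffs[j]))
        elif coeffs[j] < 0:  # normalize to -x_j <= ...  (coefficient -1)
            neg.append(([ci / -coeffs[j] for ci in coeffs], b / -coeffs[j]))
        else:
            keep.append((coeffs, b))
    # summing a (+1)-row and a (-1)-row cancels x_j
    for pc, pb in pos:
        for nc, nb in neg:
            keep.append(([p + n for p, n in zip(pc, nc)], pb + nb))
    return keep

# example: x + y <= 4 and x - y <= 2 imply 2x <= 6, i.e. x <= 3
rows = [([1.0, 1.0], 4.0), ([1.0, -1.0], 2.0)]
out = eliminate(rows, 1)
assert len(out) == 1
coeffs, rhs = out[0]
assert abs(coeffs[1]) < 1e-12 and abs(rhs / coeffs[0] - 3.0) < 1e-12
```

Repeating the step for each variable in turn reduces the system to bounds on a single variable, at the cost of a potentially exponential blow-up in the number of constraints.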
|linear-algebra|systems-of-equations|convex-optimization|numerical-linear-algebra|nonlinear-system|
0
Binomial Theorem Coefficients derivation
By the Binomial Theorem, when $y : $(1+x)^y=1+yx+\dfrac{y(y-1)}{2!}x^2+\dfrac{y(y-1)(y-2)}{3!}x^3+\cdots$ The coefficient of $y$ is: $x+\dfrac{(-1)}{2!}x^2+\dfrac{(-1)(-2)}{3!}x^3+\dfrac{(-1)(-2)(-3)}{4!}x^4+\cdots$ I don't understand how the coefficient of $y$ came about, except for the second term, which is obviously $x$ by inspection. For example, in the third term, how is it possible to say the coefficient of $y$ in $y(y-1)$ is $-1$ ?
This is because in $y(y-1)$ , as you are looking for the coefficient of $y$ , you have to keep in mind which products give a term in $y$ . For example, expanding $y(y-1)$ , you will realise that when you multiply $y$ with $y$ you get $y^2$ , which clearly won't contribute to the coefficient of $y$ ; but when we multiply $y$ with $-1$ , the power of $y$ remains as it is, so it contributes $-1$ to the coefficient. Similarly, in the next term $y(y-1)(y-2)$ , there is only one product that is linear in $y$ , namely $y\cdot(-1)\cdot(-2)$ . We don't care about the other products, as they give higher powers of $y$ which don't contribute to the coefficient of $y$ .
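The pattern can be checked mechanically: expand $y(y-1)\cdots(y-k+1)$ as a polynomial in $y$ and read off the linear coefficient (a small pure-Python sketch; the helper names are mine):

```python
from fractions import Fraction
from functools import reduce
from math import factorial

def polymul(p, q):
    """Multiply polynomials given as coefficient lists (index = power of y)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def linear_coeff(k):
    """Coefficient of y in y(y-1)...(y-k+1)/k!."""
    factors = [[-j, 1] for j in range(k)]   # each (y - j) as [constant, linear]
    expanded = reduce(polymul, factors)
    return Fraction(expanded[1], factorial(k))

# Third term: coefficient of y in y(y-1)/2! is (-1)/2!
assert linear_coeff(2) == Fraction(-1, 2)
# Fourth term: coefficient of y in y(y-1)(y-2)/3! is (-1)(-2)/3!
assert linear_coeff(3) == Fraction((-1) * (-2), factorial(3))
```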
|binomial-theorem|
1
Question about the exponential Martingale
Question: In general, $Z_t := \exp(\int_0^t \phi_s dB_s - \frac{1}{2}\int_0^t\phi_s^2ds)$ is the exponential martingale. Is $Z_t := \exp(-\int_0^t \phi_s dB_s - \frac{1}{2}\int_0^t\phi_s^2ds)$ also an exponential martingale? Origin: Let $(B_t)_{t \geq 0}$ be a Brownian motion starting at $x>0$ . Let $T_a := \inf\{t :B_t =a\}$ . Compute $$ E\left[\exp\left(-\int_0^{T_a} \frac{1}{B_s^2}ds\right) \,\Big|\, B_0 = x\right]$$ where $0 . The idea is to rewrite $\exp(-\int_0^{T_a} \frac{1}{B_s^2}ds)$ into a martingale and use the optional stopping theorem. Using Itô on $\ln(B_t)$ (and assuming $B_0 \neq 0$ ) yields $$ \frac{B_0}{B_t}e^{-\int_0^{t} \frac{1}{B_s^2}ds} = e^{-\int_0^t \frac{1}{B_s}dB_s -\frac{1}{2} \int_0^{t} \frac{1}{B_s^2}ds}$$ Now it would be very convenient if the right hand side were indeed the exponential martingale.
I would write the Itô decomposition of the process: let $$X_t=-\int_0^t \phi_s dB_s - \frac{1}{2}\int_0^t\phi_s^2ds, \quad Y_t := \exp\left(X_t\right)$$ then $dX_t=-\frac{1}{2}\phi^2_t\,dt-\phi_t\,dB_t$ . Using Itô's formula we get that \begin{align} dY_t&=Y_t\,dX_t+\frac{1}{2}Y_t\,d[X]_t\\ &=Y_t\left(-\frac{1}{2}\phi^2_t\,dt-\phi_t\,dB_t+\frac{1}{2}\phi^2_t\,dt\right)\\ &= -\phi_t\,Y_t\,dB_t \end{align} which is the decomposition of an exponential martingale, since $-\phi\in M^2_{loc}$
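For a constant $\phi$ this can be sanity-checked by Monte Carlo: if $Y_t$ is a martingale with $Y_0=1$, then $E[Y_t]=1$ for every $t$. A rough simulation sketch (the parameter values and function name are arbitrary choices of mine):

```python
import math
import random

def mc_exp_martingale(phi=0.7, t=1.0, n=200_000, seed=42):
    """Estimate E[exp(-phi * B_t - phi^2 * t / 2)] for Brownian motion B."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        b_t = rng.gauss(0.0, math.sqrt(t))        # B_t ~ N(0, t)
        total += math.exp(-phi * b_t - 0.5 * phi**2 * t)
    return total / n

# Should be close to 1 (it is exactly 1 in expectation).
estimate = mc_exp_martingale()
```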
|stochastic-calculus|
0
Variational formulation for the heat equation
Let $J = (0,T)$ , $T > 0$ , $G = (a,b) \subset \mathbb{R}$ (finite interval), and $f \in C(J;L^2(G))$ . I consider the heat equation with zero Dirichlet boundary conditions and with initial value $u_0 \in L^2(G)$ , \begin{align} \begin{cases} \frac{\partial u(t,x)}{\partial t} - \frac{\partial^2 u(t,x)}{\partial x^2} &= f(t,x) \quad \text{in } J \times G, \\ u(t,x) &= 0 \quad \text{on } J \times \partial G, \\ u(0,x) &= u_0(x) \quad \text{in } G. \end{cases} \end{align} I am given the following problem: Show that if $u$ is a smooth solution of this problem, then for all $v \in H^1_0(G)$ , there holds $$ \frac{d}{dt} ( u(t),v )_{L^2(G)} + a(u(t), v) = (f(t), v)_{L^2(G)}, \quad \forall t \in J, $$ where $$ a(\phi, \psi) = \int_G \frac{\partial \phi(x)}{\partial x} \frac{\partial \psi(x)}{\partial x} \, dx, \quad (\phi, \psi)_{L^2(G)} = \int_G \phi(x) \psi(x) \, dx. $$ My attempt : Let $u$ be a smooth solution and let $v \in H^1_0(G)$ (this is the Sobolev space for $p = 2$ such that the c
Actually it suffices to use integration by parts together with the fact that $v\in H^1_0(G)=\{u\in L^2(G):\exists g\in L^2(G),\int_Gu\phi'=\int_G-g\phi,\;\forall\phi\in C^1_0(G),\exists\tilde{u}\in C^0(G),\,u=\tilde{u}\, \text{a.e. on }G,\, \,\tilde{u}\vert_{\partial G}=0 \}$ , where $\tilde{u}$ is the unique continuous modification of $u$ . Then $-\int_G\partial_{xx}u\,v\,dx=-\partial_xu\,v\vert_{\partial G}+\int_G\partial_xu\,\partial_xv\,dx$ , and the boundary term is zero due to the boundary conditions imposed on $v$ .
|partial-differential-equations|heat-equation|weak-derivatives|
1
Compactness of $C[J,X]$?
Let $J=[0,T]\subset \mathbb{R}$ and let $X$ be a Banach space. Is $C[J,X]$ (the Banach space of all continuous functions from $J$ to $X$ ) a compact metric space with respect to the sup norm, defined as $\|x\|_{\infty}=\sup \{\|x(t)\|_X: t\in[0,T]\}$ ?
The only normed space that is compact is $\{0\}$ . Any other Banach space is unbounded (as soon as $x\ne0$ , $\|nx\|$ can be made arbitrarily large for $n\in\mathbb N$ ), so not compact.
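Even the closed unit ball fails to be compact in these spaces. A quick numerical illustration with $X=\mathbb{R}$, $J=[0,1]$ (my own example, not from the answer): the functions $f_n(t)=t^n$ all lie in the unit ball, yet $\|f_n-f_{2n}\|_\infty = 1/4$ for every $n$, so no subsequence can be uniformly Cauchy.

```python
def sup_dist(n, m, samples=20001):
    """Approximate the sup-norm distance between t^n and t^m on [0, 1]."""
    return max(abs((i / (samples - 1))**n - (i / (samples - 1))**m)
               for i in range(samples))

# max over t of (t^n - t^(2n)) equals 1/4, attained where t^n = 1/2
d = sup_dist(4, 8)
```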
|functional-analysis|metric-spaces|normed-spaces|compactness|compact-operators|
0
Let $m; n \in N$ with $m > n$. Then there does not exist an injection from $N_m$ into $N_n$.
Let $m; n \in N$ with $m > n$ . Then there does not exist an injection from $N_m$ into $N_n$ . We will do this by induction on $n$ . Let's formulate the inductive hypothesis $P(k)$ is if $f : N_m \to N_k$ and $m > k$ then $f$ is not an injection. Base case: $k = 1$ : then $f(1)=f(2)=...=f(m)=1$ . So, $f$ is not an injection. Inductive step: Suppose, $P(k)$ is true. Let's show that $P(k+1)$ is true. Let $h: N_m \to N_{k+1}$ . We can have $2$ cases: case 1: $h(N_m) \subseteq N_k \subseteq N_{k+1}$ , then $h$ is not an injection to $N_k$ , by our inductive hypothesis. Hence, it is not an injection to $N_{k+1}$ . case 2: $h(N_m) \not\subseteq N_{k}$ . If there are more than one elements in $N_{m}$ that map to $k+1$ then $h$ is not an injection. So, suppose there is only one $p \in N_m$ such that $h(p) = k+1$ . We now defined $h_1(i) = h(i)$ if $i = 1, 2, ... p-1$ and $h_1(i) = h(i+1)$ if $i = p, ..., m - 1$ . So, that $h_1 : N_{m-1} \to N_k$ . Now it is argued that $h_1$ is not an injectio
We just need to prove: for $k = 1,2,3,\ldots,m-1$ , the proposition is true. So $k+1 \leq m-1$ , and that means $k \leq m-2$ . (If $k+1 > m-1$ , then $k+1 \geq m$ , which contradicts the precondition $m > k+1$ .) And the inductive step should be modified as below: Suppose $P(j)$ is true for all $j \leq k$ . Let's show that $P(k+1)$ is true. This is because in case 1, $h(N_m) \subseteq N_k$ , so for any subset of $N_k$ , say $N_j$ ( $j \leq k$ ), there should not exist an injection from $N_m$ to $N_j$ .
|real-analysis|induction|
0
$x^n = y^n \Leftrightarrow x = y$ for all real numbers?
I have to prove a simple property, but I have doubts about a property used in the proof. I have to prove that $\lVert\vec{u} \times \vec{v}\rVert^{2} =\lVert \vec{u}\rVert^{2}\lVert \vec{v}\rVert^{2} - (\mathrm{\vec{u}}\cdot\mathrm{\vec{v}})^{2}$ , where $\times$ is the cross product. I decided to use a formula given to me as a definition and to go from there: $\lVert\vec{u} \times \vec{v}\rVert =\lVert \vec{u}\rVert\lVert \vec{v}\rVert \sin(\theta)$ . Since $\theta \in [0, \pi]$ , both sides are positive, so I square them both, obtaining $\lVert\vec{u} \times \vec{v}\rVert^{2}=\lVert \vec{u}\rVert^{2}\lVert \vec{v}\rVert^{2} \sin^{2}(\theta)$ . Then, since $\sin^{2}(\theta) = 1 - \cos^{2}(\theta)$ , I can replace the $\sin^{2}({\theta})$ : $\lVert\vec{u} \times \vec{v}\rVert^{2}=\lVert \vec{u}\rVert^{2}\lVert \vec{v}\rVert^{2} (1 - \cos^{2}(\theta))$ becomes $\lVert\vec{u} \times \vec{v}\rVert^{2}=\lVert \vec{u}\rVert^{2}\lVert \vec{v}\rVert^{2} - \lVert \vec{u}\rVert^{2}\lVert \vec{v}\rVert^{2}\cos^2(\theta)$
You need to realize that [assuming $X,Y$ are complex numbers and $n$ is an integer] the statements (a) $X=Y \implies X^n=Y^n$ and (b) $X^n=Y^n \implies X=Y$ are two different statements. The former statement (a) is always true; the latter statement (b) [which is the converse of statement (a)] is not always true. In fact, let $f$ be any function whose domain and range are in $\mathbb{C}$ , and let $X$ and $Y$ be in the domain of $f$ . Then $X=Y$ $\implies$ $f(X)=f(Y)$ is always true. However, $f(X)=f(Y)$ does not guarantee $X=Y$ , for general functions $f$ . To add something else to this, however: $f(X)=f(Y)$ does imply $X=Y$ if $f$ is an invertible function. The function $f: \mathbb{R} \rightarrow \mathbb{R}$ ; $f(X)=X^n$ ; $n$ an odd integer, qualifies. So getting back to your example: It is true that $X=Y \implies X^2 = Y^2$ . So setting $X=\mathrm{\vec{u}}\cdot\mathrm{\vec{v}}$ and $Y=\lVert \vec{u}\rVert\lVert \vec{v}\rVert \cos(\theta)$ , it follows that $X=Y$ $\implies$ $X^2=Y^2$ .
|linear-algebra|algebra-precalculus|
1
Why is the partial derivative of the generalized momentum with respect to generalized position equal to zero?
Let $q$ denote the generalized position vector, $v$ the generalized velocity vector, and $p$ the generalized momentum vector. The Lagrangian of a mechanical system is a function of $q$ , $v$ , and possibly the time $t$ . The Legendre transform of the Lagrangian outputs a Hamiltonian function, which is a function of $q$ and $p$ . The partial derivative of $L$ with respect to $v$ equals $p$ ; therefore $p$ is a function of $q$ , $v$ and possibly time... And yet when taking the partial derivative of the Hamiltonian with respect to $q$ , we always say that the partial derivative of $p$ with respect to $q$ equals zero. This seems like a contradiction to me, based on how we defined $p$ . Thanks in advance!
I remember feeling that the proof was cutting corners when it suddenly just ignored one of the variables, so this is roughly what I came up with to reassure myself of the result. To summarize, the Lagrangian approach finds a path $q(t)$ which minimizes the action $S[q]=\int_{t_0}^{t_1} L_t(q(t),\dot q(t))\,dt$ where $L(q,v)$ is the Lagrangian that describes the mechanics of the system and $\dot q=dq/dt$ . A solution that minimizes $S[q]$ requires $$ \frac{\partial L_t}{\partial q}(q(t),\dot q(t)) = \frac{d}{dt}\frac{\partial L_t}{\partial v}(q(t),\dot q(t)). $$ The Hamiltonian is typically defined as $H_t(q,p)=p\cdot\dot q-L_t(q,\dot q)$ where $p=\partial L_t/\partial v$ , and the solution that minimizes $S[q]$ must satisfy the Hamiltonian equations $$ \dot q = \frac{\partial H}{\partial p} \quad\text{and}\quad \dot p = -\frac{\partial H}{\partial q}. $$ But the definition of the Hamiltonian also contains $\dot q$ , which is ignored. Let's instead use $F_t(q,v,p)=p\cdot v-L_t(q,v)$ . T
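The cancellation can also be seen numerically. Below is a sketch for a 1-D harmonic oscillator (my own toy example, with arbitrary $m$, $k$): differentiating $H(q,p)=p\,v(q,p)-L(q,v(q,p))$ at fixed $p$ gives the same value as $-\partial L/\partial q$ at fixed $v$, because the $\partial v/\partial q$ terms cancel thanks to $p=\partial L/\partial v$.

```python
m, k = 2.0, 3.0

def L(q, v):
    """Lagrangian of a harmonic oscillator."""
    return 0.5 * m * v**2 - 0.5 * k * q**2

def v_of(q, p):
    """Invert p = dL/dv = m v for the velocity."""
    return p / m

def H(q, p):
    """Legendre transform, with v eliminated in favour of p."""
    v = v_of(q, p)
    return p * v - L(q, v)

q, p, h = 0.7, 1.3, 1e-6
# dH/dq at fixed p (v is re-solved from p at each shifted q)
dHdq = (H(q + h, p) - H(q - h, p)) / (2 * h)
# dL/dq at fixed v
v = v_of(q, p)
dLdq = (L(q + h, v) - L(q - h, v)) / (2 * h)

# Hamilton's equation dH/dq = -dL/dq holds even though v depends on q
assert abs(dHdq + dLdq) < 1e-6
```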
|partial-derivative|classical-mechanics|
0
Spivak, Ch. 22, "Infinite Sequences", Problem 1(iii): How do we show $\lim\limits_{n\to \infty} \left [\sqrt[8]{n^2+1}-\sqrt[4]{n+1}\right ]=0$?
The following is a problem from Chapter 22 "Infinite Sequences" from Spivak's Calculus Verify the following limits (iii) $\lim\limits_{n\to \infty} \left [\sqrt[8]{n^2+1}-\sqrt[4]{n+1}\right ]=0$ The solution manual says $$\lim\limits_{n\to \infty} \left [\sqrt[8]{n^2+1}-\sqrt[4]{n+1}\right ]$$ $$=\lim\limits_{n\to \infty} \left [\left (\sqrt[8]{n^2+1}-\sqrt[8]{n^2}\right )+\left (\sqrt[4]{n}-\sqrt[4]{n+1}\right )\right ]$$ $$=0+0=0$$ (Each of these two limits can be proved in the same way that $\lim\limits_{n\to \infty} (\sqrt{n+1}-\sqrt{n})=0$ was proved in the text) How do we show $\lim\limits_{n\to \infty} \left [\sqrt[8]{n^2+1}-\sqrt[8]{n^2}\right ]=0$ ? Note that in the main text, $\lim\limits_{n\to \infty} (\sqrt{n+1}-\sqrt{n})=0$ was solved by multiplying and dividing by $(\sqrt{n+1}+\sqrt{n})$ to reach $$0 < \sqrt{n+1}-\sqrt{n} = \frac{1}{\sqrt{n+1}+\sqrt{n}} < \frac{1}{2\sqrt{n}} < \epsilon$$ $$\implies n>\frac{1}{4\epsilon^2}$$
Although heropup has a pretty solution, given the author's comments, I am pretty certain that Spivak intended the reader to proceed as follows: As the solution manual states: $$\lim_{n\to \infty} \sqrt[8]{n^2+1}-\sqrt[4]{n+1}= \lim_{n \to \infty}\sqrt[8]{n^2+1}-\sqrt[8]{n^2}+\sqrt[4]{n}-\sqrt[4]{n+1}$$ To show that $\displaystyle\lim_{n \to \infty}\sqrt[8]{n^2+1}-\sqrt[8]{n^2}=0$ , we can take Spivak's suggestion... but we need to apply it multiple times . Observe: \begin{align}&\left(\sqrt[8]{n^2+1}-\sqrt[8]{n^2}\right)\times\left(\frac{\sqrt[8]{n^2+1}+\sqrt[8]{n^2}}{\sqrt[8]{n^2+1}+\sqrt[8]{n^2}}\right)=\frac{\sqrt[4]{n^2+1}-\sqrt[4]{n^2}}{\sqrt[8]{n^2+1}+\sqrt[8]{n^2}}\implies \\ &\frac{\sqrt[4]{n^2+1}-\sqrt[4]{n^2}}{\sqrt[8]{n^2+1}+\sqrt[8]{n^2}}\times\left(\frac{\sqrt[4]{n^2+1}+\sqrt[4]{n^2}}{\sqrt[4]{n^2+1}+\sqrt[4]{n^2}}\right)=\frac{\sqrt{n^2+1}-\sqrt{n^2}}{\left(\sqrt[8]{n^2+1}+\sqrt[8]{n^2}\right)\times\left(\sqrt[4]{n^2+1}+\sqrt[4]{n^2}\right)} \implies\\ &\frac{\sqrt{n^2+1}-\sqrt{n
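As a quick numerical sanity check (not a proof, and my own addition), the difference does shrink toward $0$; asymptotically it behaves like $-\tfrac{1}{4}n^{-3/4}$:

```python
def diff(n):
    """The sequence a_n = (n^2 + 1)^(1/8) - (n + 1)^(1/4)."""
    return (n**2 + 1) ** 0.125 - (n + 1) ** 0.25

# Terms shrink in magnitude (and stay negative) as n grows
values = [diff(10**k) for k in (2, 4, 6)]
```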
|calculus|sequences-and-series|limits|
0
Finding the matrix of a linear operator with respect to the standard basis
Let $f \in End (\mathbb{R}^{3})$ such that $f$ is diagonalizable and it has only two distinct eigenvalues. $f(U) = V$ , where $U = \{(x,y,z): x-2y-z = 0\}$ and $V = span ((1,0,1), (-1,1,1))$ 1 is an eigenvalue of $f$ and there is a vector in $U$ associated to that eigenvalue. (1,0,-1) is an eigenvector associated to a simple eigenvalue. Find the matrix of $f$ with respect to the standard basis in terms of as many parameters as you may need. My attempt: We first note that $1-2 \cdot 0 -1 = 0$ , so $(1,0,1) \in U$ . By (3) we can infer that $f(1, 0, 1) = (1, 0, 1)$ . Moreover, $1-2 \cdot 0 -(-1) = 2 \neq 0$ , so $(1, 0, -1) \notin U$ , therefore $1$ cannot be an eigenvalue associated to $(1,0,-1)$ . Let $\lambda_{1} = 1,\ \lambda_{2} = a$ , $a \neq 1$ , be the eigenvalues of $f$ associated to $(1, 0, 1)$ and $(1, 0 ,-1)$ respectively. Since $a$ is simple, and $f$ diagonalizable, there must be a second eigenvector associated to $1$ . Hence, the characteristic polynomial of $f$ is of the f
Your reasoning is correct: you have $U \cap V = \operatorname{span}((1,0,1))$ so $v_1 = (1,0,1)$ is an eigenvector of eigenvalue 1. Now you know that $v_2$ is an eigenvector for the simple eigenvalue $\lambda$ , which is thus different from $1$ . There is a last eigenvector which is not in $V$ (let us call it $v_3$ ) with eigenvalue $1$ . But let us get back to it later. We can easily see that $e_1 = \frac{v_1 + v_2}{2}$ and $e_3 = \frac{v_1-v_2}{2}$ so $f(e_1) = (\frac{1+\lambda}{2},0,\frac{1-\lambda}{2})$ and $f(e_3) = (\frac{1-\lambda}{2},0,\frac{1+\lambda}{2})$ . Now let us find a second vector in $U$ , for example $u=(1,1,-1)$ ; we have $$f(u) = a v_1 + b(-1,1,1) = a v_1 - b v_2 + be_2$$ but $u = e_2 + v_2$ , so $f(u) = f(e_2) + \lambda v_2$ , which means $$f(e_2) = av_1 -(\lambda+b)v_2 + b e_2$$ So in the basis $(v_1,e_2,v_2)$ the matrix of $f$ would be $$A=\begin{pmatrix} 1 &a & 0\\ 0 & b & 0\\ 0 & -\lambda - b & \lambda \end{pmatrix}$$ which has characteristic polynomial $(X-1)(X-b)(X-\lambda)$ , but the ch
|linear-algebra|eigenvalues-eigenvectors|
0
Calculate the limit $\lim_{n \to \infty} \int_1^e x^m e^x (\log x)^n dx$ with limsup and liminf
I have a question about how to prove a limit using $\limsup$ . The other day, I was asked the following problem in an exam: Let $m$ and $n$ be positive integers. Calculate the following limit, where $m$ is fixed. $$ \lim_{n \to \infty} \int_1^e x^m e^x (\log x)^n dx $$ To this problem, I answered in the manner shown below: (My solution) Take any $\varepsilon \in (0, 1)$ . Now $f_n(x)$ denotes the integrand of the integral in question ( $f_n(x)$ depends on $m$ of course, but $m$ is fixed in this problem, so no problems may occur even if $m$ does not appear in the notation). First, split the integral into two pieces, $$ \int_1^e f_n(x) dx = \int_1^{e - \varepsilon} f_n(x) dx + \int_{e - \varepsilon}^e f_n(x) dx, $$ and call the first term $I_1$ and the second $I_2$ . Since all of $x^m$ , $e^x$ and $(\log x)^n$ are increasing functions on $[1, e]$ , so is $f_n(x)$ . Then, $I_1$ is evaluated as below: $$ I_1 \le \int_1^{e - \varepsilon} f_n(e - \varepsilon) dx = f_n(e - \varepsilon)(e -
Your proof is valid, and very well-written. Eventually you will learn about Measure Theory and Dominated Convergence , and then this one is a one-liner!
|real-analysis|integration|limits|solution-verification|limsup-and-liminf|
1
Newman's Short Proof of Prime Number Theorem
I'm going through the paper of D. Zagier on Newman's Short Proof of the Prime Number Theorem. There it says in V that $\Phi(s)=\int_1^\infty \frac{d\vartheta(x)}{x^s}$ . Can someone please explain in detail why that is the case? And also how the Analytic Theorem is being applied here to the functions defined as $f(t)$ and $g(z)$ ? MSE question link: Newman's "Natural proof"(Analytic) of Prime Number Theorem (1980) Edit: I have understood how the Analytic Theorem is being applied here, but I am still unable to figure out that integral. I saw a post mentioning that it is because $\vartheta(x)$ changes by $\log(p)$ at primes only. I got that point, but is there a way to formally prove this? Any relevant reference is appreciated. Thank you.
This is a form of integration called Riemann-Stieltjes integration . In short, we define $$ \int_a^b f(x) d g(x) = \sum f(c_j) \big( g(x_{j + 1}) - g(x_j) \big) $$ (in the limit over refinements) for partitions $a = x_0 < x_1 < \cdots < x_n = b$ and points $c_j \in [x_j, x_{j + 1}]$ . This is a weighted generalization of typical Riemann integration. Concretely, we can think of this sort of integral as adding $f(x) \Delta(g(x))$ whenever $g(x)$ changes. Here, $\vartheta(x) = \sum_{p \leq x} \log p$ is the first Chebyshev function. Observe that this changes only on intervals $(p - \epsilon, p + \epsilon)$ when $p$ is prime (for sufficiently small $\epsilon$ ). Thus a Riemann-Stieltjes integral of the form $$ \int_a^b f(x) d \vartheta(x)$$ for a continuous function $f$ on an interval $[a, b]$ (and where $p_1, \ldots, p_k$ are the primes in the interval $[a, b]$ ) will be the limit of partition terms that look like $$ \sum_{j = 1}^k f(\alpha_j) \big( \vartheta(p_j + \epsilon) - \vartheta(p_j - \epsilon) \big) \to \sum_{j = 1}^k f(p_j) \log p_j.$$
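To make the "jumps only at primes" point concrete, here is a small numerical sketch (my own illustration; the function names are mine): a Riemann–Stieltjes sum against $\vartheta$ collapses to a sum of $f(p)\log p$ over the primes in the interval.

```python
import math

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def theta(x, primes):
    """First Chebyshev function: sum of log p over primes p <= x."""
    return sum(math.log(p) for p in primes if p <= x)

def stieltjes_sum(f, a, b, primes, steps=20000):
    """Riemann-Stieltjes sum of f against theta over [a, b]."""
    total, prev = 0.0, theta(a, primes)
    for j in range(1, steps + 1):
        x = a + (b - a) * j / steps
        cur = theta(x, primes)
        total += f(x) * (cur - prev)   # nonzero only where theta jumps
        prev = cur
    return total

primes = primes_upto(30)
approx = stieltjes_sum(lambda x: x**-2, 1.5, 30.5, primes)
exact = sum(math.log(p) / p**2 for p in primes)
```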
|integration|number-theory|prime-numbers|analytic-number-theory|riemann-zeta|
1
Question about definition of Sequences in Analysis I by Tao.
Here's the definition of a sequence as laid out in the text: Let $m$ be an integer. A sequence $(a_n)_{n=m}^\infty$ of rational numbers is any function from the set $\{n \in \mathbf{Z} : n \geq m\}$ to $\mathbf{Q}$ . I can make sense out of this definition, but I was under the impression that a sequence has an ordering to it. I do not see any order implied on the "outputs" in the definition. Am I missing something?
As $\mathbb{Z}$ is ordered, the set $\{ n \in \mathbb{Z} : n \geq m \}$ is ordered. Hence the sequence $(a_n)_{n \geq m}$ is ordered (by the subscript $n$ ). For a concrete example, take the set $S = \{2, 3, 4, 5, \ldots \}$ and the function $$ \begin{align*} f: \{ n \in \mathbb{Z} : n \geq 2 \} &\longrightarrow \mathbb{Q} \\ n &\mapsto n^2 \end{align*} $$ that squares inputs. Then the sequence is $$ (a_2, a_3, a_4, \ldots) = (4, 9, 16, \ldots), $$ and the ordering of the sequence is inherited from the ordering of the integers.
|real-analysis|sequences-and-series|analysis|functions|definition|
0
Finding the matrix of a linear operator with respect to the standard basis
Let $f \in End (\mathbb{R}^{3})$ such that $f$ is diagonalizable and it has only two distinct eigenvalues. $f(U) = V$ , where $U = \{(x,y,z): x-2y-z = 0\}$ and $V = span ((1,0,1), (-1,1,1))$ 1 is an eigenvalue of $f$ and there is a vector in $U$ associated to that eigenvalue. (1,0,-1) is an eigenvector associated to a simple eigenvalue. Find the matrix of $f$ with respect to the standard basis in terms of as many parameters as you may need. My attempt: We first note that $1-2 \cdot 0 -1 = 0$ , so $(1,0,1) \in U$ . By (3) we can infer that $f(1, 0, 1) = (1, 0, 1)$ . Moreover, $1-2 \cdot 0 -(-1) = 2 \neq 0$ , so $(1, 0, -1) \notin U$ , therefore $1$ cannot be an eigenvalue associated to $(1,0,-1)$ . Let $\lambda_{1} = 1,\ \lambda_{2} = a$ , $a \neq 1$ , be the eigenvalues of $f$ associated to $(1, 0, 1)$ and $(1, 0 ,-1)$ respectively. Since $a$ is simple, and $f$ diagonalizable, there must be a second eigenvector associated to $1$ . Hence, the characteristic polynomial of $f$ is of the f
Your reasoning is correct except your final choice of a seemingly random vector $(-1,1,1)\in V$ for the image of $$w:=(1,1,-1)$$ (actually, we shall see that this guess is correct, but it needs a proof. And we shall see that the matrix $M$ does have the expected characteristic polynomial, contrarily to your report. Hence your computation of $M$ , which you don't describe, must be mistaken). You were invited to introduce "as many parameters as you may need". Let $u:=(1,0,-1)$ and $v:=(1,0,1)$ . You already found that $f(u)=au$ for some $a\ne1$ and $f(v)=v$ . A priori, $$f(w)=b(1,0,1)+c(-1,1,1).$$ Since the canonical basis $(e_1,e_2,e_3)$ is given by $$e_1=\frac{u+v}2,\quad e_2=w-u,\quad e_3=\frac{v-u}2,$$ the columns of the matrix $M$ of $f$ in this basis will be $$\frac{a(1,0,-1)+(1,0,1)}2,\quad b(1,0,1)+c(-1,1,1)-a(1,0,-1),\quad\frac{(1,0,1)-a(1,0,-1)}2,$$ i.e. $$M=\begin{pmatrix}\frac{a+1}2&b-c-a&\frac{1-a}2\\0&c&0\\\frac{1-a}2&b+c+a&\frac{a+1}2\end{pmatrix}.$$ The equations of eigen
|linear-algebra|eigenvalues-eigenvectors|
0
Question about definition of Sequences in Analysis I by Tao.
Here's the definition of a sequence as laid out in the text: Let $m$ be an integer. A sequence $(a_n)_{n=m}^\infty$ of rational numbers is any function from the set $\{n \in \mathbf{Z} : n \geq m\}$ to $\mathbf{Q}$ . I can make sense out of this definition, but I was under the impression that a sequence has an ordering to it. I do not see any order implied on the "outputs" in the definition. Am I missing something?
Informally, a sequence is an ordered list of things. The ordering is the order in which the things appear, not an order among the things (the outputs of the function in Tao's definition). So $$ 1, 0, 1, 0, \ldots $$ is (the start of) a sequence of integers. Tao orders the positions of the elements of the sequence by labeling them with integers starting at some $m$ . (The usual starting points are $m=0$ or $m=1$ . Tao may have some use later for the extra generality.)
|real-analysis|sequences-and-series|analysis|functions|definition|
0
Isn't this proof of what this professor calls the "Finite Models Theorem" flawed?
Starting at 7:52 of this video , the professor presents a proof of the following theorem: for any infinite set of well-formed formulas $\Sigma$ such that $\Sigma \vDash \alpha$ , where $\alpha$ is a w.f.f., there exists a finite subset $\Sigma_0 \subset \Sigma$ such that $\Sigma_0 \vDash \alpha$ . This is how I understood his proof (with comments of mine in bold): Since $\alpha$ is a w.f.f., it involves only finitely many propositional variables, so there are only finitely many truth assignments in its truth table. Isn't this already an issue? There are only finitely many truth assignments whose domains are the set of just those propositional variables that appear in $\alpha$ . But in the larger context of an arbitrary infinite $\Sigma$ , whose formulas could collectively involve infinitely many propositional variables, there could be infinitely many truth assignments for each combination of truth values of those variables that appear in $\alpha$ . Let $V_i$ be a truth assignment for w
Here is a slightly easier way of thinking about this, in which we rely on the fact that propositional logic is deductively sound and complete. Suppose $\Sigma$ is an infinite set of well-formed formulas of propositional logic such that $\Sigma \models \alpha$ for some propositional formula $\alpha$ . By the completeness theorem for propositional logic, $\Sigma \vdash \alpha$ . This is witnessed by a formal deduction $D$ in whatever formal system you've proved the completeness theorem for (say, a Hilbert system or a natural deduction system). Crucially, $D$ will be a finite object: a sequence or tree of formulas. Let $\Sigma_0$ be the set of all $\sigma \in \Sigma$ which appear as axioms in $D$ . $\Sigma_0$ is a finite subset of $\Sigma$ , and clearly $\Sigma_0 \vdash \alpha$ . Finally, by the soundness theorem for propositional logic, $\Sigma_0 \models \alpha$ .
|logic|solution-verification|propositional-calculus|
0
Isn't this proof of what this professor calls the "Finite Models Theorem" flawed?
Starting at 7:52 of this video , the professor presents a proof of the following theorem: for any infinite set of well-formed formulas $\Sigma$ such that $\Sigma \vDash \alpha$ , where $\alpha$ is a w.f.f., there exists a finite subset $\Sigma_0 \subset \Sigma$ such that $\Sigma_0 \vDash \alpha$ . This is how I understood his proof (with comments of mine in bold): Since $\alpha$ is a w.f.f., it involves only finitely many propositional variables, so there are only finitely many truth assignments in its truth table. Isn't this already an issue? There are only finitely many truth assignments whose domains are the set of just those propositional variables that appear in $\alpha$ . But in the larger context of an arbitrary infinite $\Sigma$ , whose formulas could collectively involve infinitely many propositional variables, there could be infinitely many truth assignments for each combination of truth values of those variables that appear in $\alpha$ . Let $V_i$ be a truth assignment for w
I think you are correct, and the proof works only when $\Sigma$ involves finitely many variables (but then it's not very interesting, because there are only finitely many non-equivalent formulas in $\Sigma$ ). To see this, consider this variant: allow only assignments which assign T to finitely many variables. The compactness theorem fails in this case: for $\Sigma = \{x_0 \vee x_1, x_0 \vee x_2, \ldots\}$ and $\alpha = x_0$ we have that any allowed assignment that satisfies $\Sigma$ also satisfies $\alpha$ (because any such assignment assigns F to some $x_n$ , and as it has to satisfy $x_0 \vee x_n$ , it has to assign T to $x_0$ ). But for any finite subset of $\Sigma$ there is an allowed assignment that satisfies this subset but not $\alpha$ . However, I don't see any part of the proof that is affected by what the allowed set of assignments is. The strange part is that the author talks about $V_i$ as an assignment for only the variables of $\alpha$ , but to apply it to a formula from $\Sigma$ , it
|logic|solution-verification|propositional-calculus|
0
Question about definition of Sequences in Analysis I by Tao.
Here's the definition of a sequence as laid out in the text: Let $m$ be an integer. A sequence $(a_n)_{n=m}^\infty$ of rational numbers is any function from the set $\{n \in \mathbf{Z} : n \geq m\}$ to $\mathbf{Q}$ . I can make sense out of this definition, but I was under the impression that a sequence has an ordering to it. I do not see any order implied on the "outputs" in the definition. Am I missing something?
Yup, if you look only at the "outputs" there is no sequential order. That's why the sequence is not the same thing as its "outputs". The sequence is the mapping, and the domain/index set is a subset of the naturals, which does have the standard order on it. If you want to get poetical about it, "elements of the sequence remember where they came from". So, for instance, suppose we have a sequence that starts at index $i=1$ , and the sequence is $1, 1, 2, 3, 5, ...$ , then $a_1$ and $a_2$ are different elements of the sequence, even though they both have the value $1$ . Added in response to questions in the comment section of the answer by davidlowryduda: An ordered set is a different thing from just a plain set. Even if you write $\{3,2,5,4,\ldots \}$ instead of $\{1,2,3,4,\ldots \}$ you still have the same set. The thing that can be confusing for people learning about formal mathematical structures is that many sets come with "default" structures: the usual " $+$ " addition operation o
|real-analysis|sequences-and-series|analysis|functions|definition|
1
Calculate $E(Y)$ when $Y$ is the chosen integer out of $1,..,X$
The Question: In the first step we randomly choose an integer out of the values $\{1,2,\ldots,10\}$ . Let $X$ be the chosen integer. In the second step we randomly choose an integer out of the values $\{1,2,\ldots,X\}$ . Let $Y$ be the chosen integer. In the third step we randomly choose an integer out of the values $\{1,2,\ldots,Y\}$ . Let $Z$ be the chosen integer. (a) Calculate $P(Y\geq 7)$ . (b) Calculate $P(Z\geq 8)$ . (c) Calculate $E(X),E(Y),E(Z),V(X),V(Y)$ . I managed to solve (a) and (b) and most of (c). I know it's a fairly simple question, but I don't understand why the official solution handles $E(Y)$ as $E(Y) = E(\frac {X+1} {2})$ . I just did it the long way, using ${\displaystyle \operatorname {E} (Y)=\sum _{i}x_{i}P(Y=x_{i})}$ , which can easily lead to mistakes, because I had to calculate each $P(Y=x_{i})$ for every $1\leq i\leq 10$ . I know that they do it due to the Law of total expectation, but I think I have some confusion about the formula and I would like a clarification about it. In our case we have : ${\displays
The law of total expectation states $$\operatorname{E}[Y] = \operatorname{E}[\operatorname{E}[Y \mid X]]$$ but students at first tend to find this notation confusing or opaque. What is really happening is that the inner conditional expectation $\operatorname{E}[Y \mid X]$ regards $X$ as fixed, and calculates the expectation with respect to $Y$ . The result is a function of the random variable $X$ , and then the outer expectation is taken with respect to $X$ . In the sampling process described, what we have is a hierarchical model: $$X \sim \operatorname{DiscreteUniform}(1,10), \\ Y \mid X \sim \operatorname{DiscreteUniform}(1,X), \\ Z \mid Y \sim \operatorname{DiscreteUniform}(1,Y).$$ Note that it is not the case that $Y$ itself is discrete uniform, because it is only after we sample from $X$ that the conditional distribution of $Y$ is discrete uniform. An intuitive way to see this is to observe that the only way $Y = 10$ is if $X = 10$ , but $Y = 1$ can occur for any $X$ . Indeed, we
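The two routes to $E(Y)$ can be compared exactly by enumerating the hierarchy with rational arithmetic (a small sketch of mine, not from the answer):

```python
from fractions import Fraction

# P(X = x) = 1/10; given X = x, Y is uniform on {1, ..., x}
pX = {x: Fraction(1, 10) for x in range(1, 11)}

# Direct route: build the marginal pmf of Y, then E[Y] = sum y * P(Y = y)
pY = {}
for x, px in pX.items():
    for y in range(1, x + 1):
        pY[y] = pY.get(y, Fraction(0)) + px * Fraction(1, x)
direct = sum(y * p for y, p in pY.items())

# Tower rule: E[Y] = E[E[Y | X]] = E[(X + 1) / 2]
tower = sum(Fraction(x + 1, 2) * px for x, px in pX.items())

assert direct == tower == Fraction(13, 4)
```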
|probability|expected-value|
1
For a smooth manifold $M$, what is the relation between $\mathfrak X(M)$ and $C^{\infty}(M) $
Let $M$ be a smooth manifold. We denote by $C^\infty(M)$ the space of all smooth functions $f:\ M\longrightarrow\mathbb R$ and by $\mathfrak X(M)$ the space of all smooth vector fields $X:\ M\longrightarrow T(M)$ . It is known that if $X$ is a smooth vector field, then for any local chart $x:\ U\subset M\to\mathbb R^n$ , there exist smooth functions $X_1,\dots,X_n:\ U\to\mathbb R$ such that \begin{align} X_p\,=\,\sum_{i=1}^nX_i(p)\,\frac{\partial}{\partial x_i}\bigg|_p \end{align} or we can write $X=\sum X_i\,\displaystyle\frac{\partial}{\partial x_i} $ as an operator $C^\infty(M)\to C^\infty(M) $ such that \begin{align} X(f)\,=\,\sum X_i\,\displaystyle\frac{\partial f}{\partial x_i}. \end{align} From here, I wonder: do the spaces $\mathfrak X(M)$ and $C^\infty(M)$ have any relationship?
If we identify $X \in \mathfrak X(M)$ with the map $$C^{\infty}(M) \to C^{\infty}(M), \qquad f \mapsto X(f),$$ as in the question statement, checking directly shows that $\mathfrak X(M)$ is a module over $C^{\infty}(M)$ . The product rule $X(fg) = X(f) g + f X(g)$ precisely makes the elements of $\mathfrak X(M)$ derivations , and all derivations of $C^\infty(M)$ turn out to arise this way, giving a (natural) isomorphism $$\mathfrak X(M) \cong \operatorname{Der}(C^\infty(M)) .$$ Indeed, sometimes we define a vector field on $M$ to be a derivation of $C^\infty(M)$ .
|differential-topology|smooth-manifolds|
1
Invariance in outer measures
Recently I was trying to solve this problem. Suppose that $X$ is a set and $\mu^{\star}$ is an outer measure on $2^X$ . Let $A \subseteq X$ be a set such that $\mu^{\star}(A)<\infty$ and suppose that $C \in \Sigma_{\mu^{\star}}$ satisfies $A \subseteq C$ and $\mu^{\star}(C)=\mu^{\star}(A)$ . Show that $$ \mu^{\star}(A \cap E)=\mu^{\star}(C \cap E) \quad \forall E \in \Sigma_{\mu^{\star}} . $$ The $\Sigma_{\mu^{\star}}$ is just shorthand for the set of $\mu^\star$ -measurable sets as in Carathéodory. Now, I did something similar for measures, not outer measures. I looked at the solutions of the author, but they are wrong. Has anyone seen this result before?
Since $A \subseteq C$ , from the monotonicity of $\mu^{\star}$ , we have $$ \mu^{\star}(A \cap E) \leqslant \mu^{\star}(C \cap E) $$ and $$ \mu^{\star}\left(A \cap E^c\right) \leqslant \mu^{\star}\left(C \cap E^c\right) $$ (here $E^c=X \backslash E$ ). Also, since $E \in \Sigma_{\mu^{\star}}$ , we have $$ \mu^{\star}(A)=\mu^{\star}(A \cap E)+\mu^{\star}\left(A \cap E^c\right) \leqslant \mu^{\star}(C \cap E)+\mu^{\star}\left(C \cap E^c\right)=\mu^{\star}(C) $$ Because by hypothesis $\mu^{\star}(A)=\mu^{\star}(C)$ , we conclude that $$ \mu^{\star}(A \cap E)=\mu^{\star}(C \cap E) . $$ I would like to remark that going from the inequalities to the final equality took me a while to get convinced of, but it is just algebra in the end.
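The last step deserves spelling out (this is my own elaboration of the "just algebra" remark); it is exactly here that the finiteness hypothesis $\mu^{\star}(A)<\infty$ is used:

```latex
% Suppose, for contradiction, that the first inequality were strict:
% \mu^{\star}(A \cap E) < \mu^{\star}(C \cap E). Adding the second
% (non-strict) inequality term by term gives
\begin{align*}
\mu^{\star}(A)
  &= \mu^{\star}(A \cap E) + \mu^{\star}(A \cap E^{c}) \\
  &< \mu^{\star}(C \cap E) + \mu^{\star}(C \cap E^{c})
   = \mu^{\star}(C) = \mu^{\star}(A),
\end{align*}
% a contradiction. The strict inequality survives the addition only
% because all four terms are at most \mu^{\star}(C) = \mu^{\star}(A),
% which is finite; with infinite values, "a < b" plus "c \le d" need
% not give "a + c < b + d".
```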
|measure-theory|outer-measure|
0
How to simplify the boolean expression $(x\times y)'+(y\times z)$?
Expression: $$(x\times y)'+(y\times z)$$ My attempt: $(xy)' + (yz)$ $(x'+y') + (yz) \quad \textit{After applying de Morgan's Axiom}$ $x' + (y' + yz) \quad \textit{After applying 1st Distributive Axiom}$ $x'+ (y' + y) + (y'z) \quad \textit{Rewriting}$ $x' + 1 + (y'z) \quad \textit{After applying Inverse Axiom}$ $x' + y' + z \quad \textit{After applying second Identity Axiom}$ The answer is supposed to be: $(x'+y')+z$ N.B. I am still new to this and learning, I am using the following book: Discrete Mathematics for Computing / Edition 3 by Peter Grossman Question: What am I missing or have I done wrong?
There is a mistake on line 4. Applying Distribution to $y' + (yz)$ gives you $(y'+y)(y'+z)$ , so line 4 should be $x' + ((y'+ y)(y'+z))$ . Accordingly, line 5 should be $x'+ 1\cdot(y'+z)$ . In fact, if line 5 were what you have, $x'+1+y'z$ , then your line 6 should be just $1$ , because $1+ [\text{anything}]$ is always just $1$ . (I don't know if your book has an equivalence principle for that, but many books call it Annihilation.) I also note that between your lines 5 and 6, the $y'z$ term suddenly changes to $y'+z$ , which isn't right either, so you made two mistakes in going from line 5 to line 6. Despite these mistakes, you somehow managed to reach the correct answer. As indicated, the correct line 5 should have been $x'+1\cdot(y'+z)$ , which by Identity becomes $x'+y'+z$ . So you happened to get the right answer, but made several mistakes getting there. Here is how you would do this correctly: $(xy)'+yz$ ; $(x' +y')+yz$ (De Morgan) ; $x' + (y' +yz)$ (Association) ; $x'+ (y'+y)(y'+z)$ (Distribution) ; $x'+ 1\cdot(y'+z)$ (Inverse) ; $x'+y'+z$ (Identity).
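Since the expression only involves three variables, an exhaustive truth-table check is cheap. A Python sketch (function names are mine) confirming that the original expression and the book's answer agree on all eight assignments:

```python
from itertools import product

def original(x, y, z):        # (x.y)' + y.z
    return (not (x and y)) or (y and z)

def simplified(x, y, z):      # (x' + y') + z
    return (not x) or (not y) or z

# exhaustive check over all 2^3 truth assignments
assert all(original(x, y, z) == simplified(x, y, z)
           for x, y, z in product([False, True], repeat=3))
```

This kind of brute-force check is a handy way to catch exactly the sort of slip that occurred between lines 5 and 6 of the attempt.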
|logic|boolean-algebra|boolean|
1
Find the smallest hypersphere passing through $n$ points in $d$-dimensional space (n <= d)
It is obvious that $d+1$ points uniquely determine a circumscribed hypersphere in $d$ -dimensional space. If the number of points $n \le d$ , the circumscribed hypersphere is not unique, but the smallest one exists and must have its center lying inside the space spanned by these $n$ points. For example, when the hypersphere is unique and $n=4, d=3$ (i.e. $n=d+1$ ), and these four points are denoted as $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d}$ , respectively, it is clear that the unique circumscribed equation is given by $\textrm{InSphere}(\textbf{e})=0$ where $\textrm{InSphere}(\textbf{e})$ is As we can see, the hypersphere equation is given by a determinant for cases where $n=d+1$ . We can expect the hypersphere where $n\le d$ might be similarly derived and it should be a simple determinant equation as well. Do you have any ideas on how to derive such a hypersphere (the smallest one) in the form of determinant equation where $n\le d$ ?
Isn't this a trivial problem? If you have $n$ points in general position (no $3$ on a line, no $4$ on a plane, etc.), then they lie on an $(n-2)$-sphere inside their affine hull, and the smallest circumscribed spheres in higher dimensions are those with the same center and radius as this $(n-2)$-sphere.
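For the first nontrivial case, the center can be computed concretely. A pure-Python sketch (function name is mine) finds the circumcenter of three affinely independent points in $\mathbb R^d$: writing the center as $a + s\,u + t\,v$ with $u=b-a$, $v=c-a$, equidistance from the three points gives the $2\times 2$ system $\begin{pmatrix}u\cdot u & u\cdot v\\ u\cdot v & v\cdot v\end{pmatrix}\binom{s}{t}=\binom{u\cdot u/2}{v\cdot v/2}$, solved here by Cramer's rule:

```python
import math

def circumcenter3(a, b, c):
    """Circumcenter of three affinely independent points in R^d:
    the center of the unique circle through them, lying in their plane."""
    u = [bi - ai for ai, bi in zip(a, b)]
    v = [ci - ai for ai, ci in zip(a, c)]
    uu = sum(x * x for x in u)
    vv = sum(x * x for x in v)
    uv = sum(x * y for x, y in zip(u, v))
    det = uu * vv - uv * uv          # nonzero iff the points are not collinear
    s = 0.5 * vv * (uu - uv) / det   # Cramer's rule on the 2x2 normal equations
    t = 0.5 * uu * (vv - uv) / det
    return [ai + s * ui + t * vi for ai, ui, vi in zip(a, u, v)]

# the circle through (1,0,0), (0,1,0), (0,0,1) is centered at the centroid
center = circumcenter3([1, 0, 0], [0, 1, 0], [0, 0, 1])
radius = math.dist(center, [1, 0, 0])   # sqrt(6)/3
```

The same least-squares formulation extends to $n$ points (an $(n-1)\times(n-1)$ system) and, per the answer above, the resulting center and radius already give the smallest circumscribed sphere in any ambient dimension $d \ge n-1$.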
|linear-algebra|geometry|algebraic-geometry|determinant|
0
calculate definite integrals in terms of area
(a) Graph the function $$ f(x) = \begin{cases} 2 - \sqrt{4 - x^2}, & -2 \leq x \leq 2 \\ |x - 5| - 1, & x > 2 \end{cases} $$ on the interval $[-2, 6]$ and use it to calculate the definite integral $$ \int_{-2}^{6} f(x) \, dx $$ in terms of area. I graphed the function: it is a semicircle resting on the x-axis plus two line segments. Then I calculated it by area: over $[-2,2]$ it is a rectangle minus a semicircle, which is $8-2\pi$ , and the other two triangles have areas $1$ and $2$ , so the total area is $11-2\pi$ . But if I calculate the definite integral over $[-2,6]$ the answer is $9-2\pi$ . I am confused: should I count the area below the x-axis as negative in this question?
Here is a plot of the function (Matlab code and figure omitted here). For computing the integral in terms of area, split it into two parts: over $[-2,2]$ a rectangle minus a semicircle, and over $[2,6]$ the sum of two triangles, remembering that the triangle lying below the $x$-axis contributes negatively to the integral.
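A quick numerical check (Python sketch) confirms that the signed integral is $9-2\pi$, i.e. yes, the triangle below the axis counts negatively:

```python
import math

def f(x):
    if -2 <= x <= 2:
        return 2 - math.sqrt(4 - x * x)   # semicircle piece
    return abs(x - 5) - 1                 # piecewise-linear piece for x > 2

# midpoint Riemann sum over [-2, 6]
n = 200_000
a, b = -2.0, 6.0
h = (b - a) / n
signed_area = sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(signed_area, 9 - 2 * math.pi)   # both approximately 2.7168
```

Geometrically: $8 - 2\pi$ (rectangle minus semicircle) plus $2$ (triangle above the axis) minus $1$ (triangle below the axis) $= 9 - 2\pi$, not $11 - 2\pi$.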
|calculus|definite-integrals|
0
Can any characterized property give a solid account of why multiplication is a harder computation operation than addition?
For humans, in general, it's far easier to do an addition than a multiplication. One might argue this is just an effect of the particular way brains are wired. But from what I find, multiplication is also more expensive on computers, where one could certainly arrange chips in an arbitrary way, or at least without any pre-established biological constraint. But from a group theory point of view, addition is not more fundamental than multiplication, is it? Are there some characteristics, like associativity or commutativity that explain that difference in terms of computational complexity? The answer might of course rely on other mathematical fields, like topology, or even be related to some physical constraints that hold for both brains and common CPUs but are not strictly constrainted by mathematical characteristics. Related resources: https://stackoverflow.com/questions/21819682/is-integer-multiplication-really-done-at-the-same-speed-as-addition-on-a-modern How is addition different tha
Since you mentioned computers: computers work "natively" with integers as $n$-bit binary words, such as a 64-bit value for a 64-bit processor. While for an addition the result fits in $n+1$ bits (one word plus one carry bit, like a carry flag), multiplication results require $2n$ bits, or 2 words to store. Binary digit arithmetic is simply what is easiest to implement in silicon, settled on after engineers experimented with decimal digits (what we normally use), binary-coded decimal, and even ternary. Old processors didn't have the transistors for a hardware multiplier, but nowadays we dedicate a large area of silicon to fast multiplication because it is so important. If you work with factoring large numbers, such as in the quadratic sieve , it is worth storing a number $x$ as a vector $[e_1, e_2, \cdots]$ where $x = 2^{e_1} 3^{e_2} \cdots p_k ^{e_k} $ is the prime factorization over primes up to $p_k$ ; the vector can be arbitrarily long for arbitrarily large $x$ . Then multiplication of two such numbers is just componentwise addition of their exponent vectors, while addition of the underlying numbers becomes the hard operation. So which of the two operations is "cheap" is a property of the chosen representation, not of the operations themselves.
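The asymmetry already shows up in the schoolbook algorithm: adding two $n$-bit numbers costs $O(n)$ bit operations, while multiplication built out of additions performs up to $n$ of them, i.e. $O(n^2)$ bit operations. A minimal Python sketch (function name is mine) of shift-and-add binary multiplication for nonnegative integers:

```python
def shift_add_mul(a, b):
    """Schoolbook binary multiplication of nonnegative integers:
    one addition per set bit of b, so multiplying two n-bit numbers
    costs O(n) n-bit additions, versus a single addition's O(n) bit ops."""
    result = 0
    while b:
        if b & 1:
            result += a    # addition is the only arithmetic primitive used
        a <<= 1            # shift a left by one bit (multiply by 2)
        b >>= 1            # consume the lowest bit of b
    return result

assert shift_add_mul(123, 456) == 123 * 456
```

Hardware multipliers parallelize and pipeline these partial-product additions, which is why the gap on modern CPUs is far smaller than the naive $O(n)$-vs-$O(n^2)$ count suggests, at the cost of much more silicon.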
|group-theory|arithmetic|computational-complexity|
0
How many solutions can $x^4=-1$ have on a field of characteristic $0$?
I am looking at the equation $x^4=-1$ over a generic field $k$ of characteristic $0$ . If $k$ is algebraically closed, it will certainly have $4$ roots. I think that they should all be different, but I am not completly sure about this. Could someone tell me if this is the case, and explain why? If we are looking at the complex numbers for example, it is clear to me why they are all different. But since $k$ is unknown, I don't really know how to deal with this. Now let's look at the case where $k$ is not algebraically closed. In this case, it could be that the equation had no solutions in the ground field (for example if $k=\mathbb{Q}$ ). But could it be that it had only two solutions? Again, I think that the answer is no, but wouldn't be able to argue it. In both cases, my intuition comes from this answer, and also from the fact that I know that every field of characteristic $0$ must contain a copy of $\mathbb{Q}$ . So it cannot do very weird things with the solutions of this equation.
For the first question: if $\operatorname{char} k = 0$ , a polynomial $p$ has no repeated root iff $\gcd (p,p') = 1$ . In our case, with $p = x^4 + 1$ and $p' = 4x^3$ , Euclid's algorithm gives $$\begin{align*} x^4 + 1 &= \frac{x}{4} \cdot 4x^3 + 1 \\ 4x^3 &= 4x^3 \cdot 1 + 0 \end{align*}$$ so $\gcd (p,p') = 1$ up to multiplication by a scalar. Hence all roots are distinct, and this remains true in any extension because the gcd is invariant under field extension. (Concretely: the only root of $4x^3$ is $0$ , which is not a root of $x^4+1$ , so no root of $p$ is a root of $p'$ , i.e. every root is simple.) You don't strictly need this, since we explicitly produce $4$ distinct roots from one below. Now suppose you have a root $j \in k$ . Then $i=j^2$ is a root of $x^2+1$ , and $(j^3)^4 = (-1)^3 = -1$ so $j^3$ is also a root. Then $(-j)^4 = j^4$ so $-j$ is also a root, as is $-j^3$ . Let us check that they are all different: $j-(-j) = 2j \neq 0$ since $\operatorname{char} k =0$ and $j \neq 0$ ( $0$ is not a root of $x^4+1$ ). Next, $j^3 \neq \pm j$ : otherwise $j^2 = \pm 1$ , which gives $j^4 = 1 \neq -1$ . Similarly $-j^3 \neq \pm j$ , and $j^3 \neq -j^3$ since $2j^3 \neq 0$ . So $j, -j, j^3, -j^3$ are four distinct roots.
|polynomials|field-theory|roots|extension-field|
0
Example implementing the Chain Rule in a textbook by Charles Chapman Pugh
I am asking for help interpreting an example in a textbook. The author gives two functions from different dimensions of Euclidean space, and he precisely describes the image of arbitrary elements under these functions, but he does not state the arbitrary elements in the domains. Here is the example from Chapter 5. Maybe someone has a different edition of the textbook. Let $f: \mathbb{R}^{2} \to \mathbb{R}^{3}$ and $g: \mathbb{R}^{3} \to \mathbb{R}$ be defined by $f = (x,y,z)$ and $g = w$ where $$w = w(x,y,z) = xy + yz + xz$$ and $$x = x(s,t) = st, \quad y = y(s,t) = s\cos{t} \quad z = z(s,t) = s\sin{t}.$$ a.) Find the matrices that represent the linear transformations $(\mathrm{D}f)_{p}$ and $(\mathrm{D}g)_{q}$ where $p = (s_{\circ}, t_{\circ}) = (0,1)$ and $q = f(p)$ . b.) Use the Chain Rule to calculate the $1 \times 2$ matrix $[\partial{w}/\partial{s}, \partial{w}/\partial{t}]$ that represents $(\mathrm{D}(g\circ{f}))_{p}$ . c.) Substitute the functions $x = x(s,t)$ , $y = y(s,t)$ ,
This gets a bit messy but we have the following... $$f:\mathbb{R}^2\rightarrow\mathbb{R}^3$$ $$f(s,t)=(x,y,z)=(st, s\cos(t), s\sin(t))$$ So the matrix representing the total derivative of our map $f$ at the point $p=(s,t)$ is $$(Df)_{(s,t)}=\begin{bmatrix} t &s \\ \cos(t)& -s\sin(t) \\ \sin(t)& s\cos(t) \end{bmatrix}$$ Additionally, we have the map $$w:\mathbb{R}^3\rightarrow\mathbb{R}$$ $$w(x,y,z)=xy+yz+xz$$ which yields $$(Dw)_{f(p)}=\begin{bmatrix} (y+z)& (x+z)& (y+x) \end{bmatrix}$$ So, by the chain rule, our map $D(w\circ f)_p$ is $$D(w\circ f)_p=D(w)_{f(p)}\circ D(f)_p=\begin{bmatrix} (y+z)& (x+z)& (y+x) \end{bmatrix}\begin{bmatrix} t &s \\ \cos(t)& -s\sin(t) \\ \sin(t)& s\cos(t) \end{bmatrix}$$ Letting $(s,t)=(1,0)$ (note: the problem statement writes $p=(s_{\circ}, t_{\circ})=(0,1)$ , but with $s=0$ every entry of $(Dw)_{f(p)}$ vanishes; the numbers below correspond to $s=1$ , $t=0$ ) gives $$\begin{bmatrix} 1& 0& 1 \end{bmatrix}\begin{bmatrix} 0 &1 \\ 1& 0 \\ 0& 1 \end{bmatrix} = \begin{bmatrix} 0& 2 \end{bmatrix} $$ I'll leave it to you to confirm that $$\begin{bmatrix} \frac{\partial w}{\partial s}& \frac{\partial w}{\partial t} \end{bmatrix} = \begin{bmatrix} 0& 2 \end{bmatrix}$$ by substituting $x(s,t)$ , $y(s,t)$ , $z(s,t)$ into $w$ and differentiating directly.
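The matrix $\begin{bmatrix}0 & 2\end{bmatrix}$ can be sanity-checked without any symbolic work: a central-difference approximation of $\partial w/\partial s$ and $\partial w/\partial t$ at $(s,t)=(1,0)$ should come out to approximately $0$ and $2$. A Python sketch:

```python
import math

def w(s, t):
    # substitute x(s,t), y(s,t), z(s,t) into w = xy + yz + xz
    x, y, z = s * t, s * math.cos(t), s * math.sin(t)
    return x * y + y * z + x * z

h = 1e-6
dw_ds = (w(1 + h, 0) - w(1 - h, 0)) / (2 * h)   # partial w / partial s at (1,0)
dw_dt = (w(1, h) - w(1, -h)) / (2 * h)          # partial w / partial t at (1,0)
```

Here `dw_ds` is exactly $0$ (since $w(s,0)=0$ for every $s$) and `dw_dt` matches $2$ to within the finite-difference error, agreeing with the chain-rule computation.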
|real-analysis|multivariable-calculus|chain-rule|pushforward|
1
Integrating using Laplace Transforms
$$\int_{0}^\infty {\cos(xt)\over 1+t^2}dt $$ I'm supposed to solve this using Laplace Transformations. I've been trying this since this morning but I haven't figured it out. Any pointers to push me in the right direction?
Let's continue the Laplace approach and start from Amir Alizadeh's ... \begin{align} I&=\int\limits_{0}^\infty {\cos(xt)\over 1+t^2}\,\mathrm dt\\ &\stackrel{*}{=}\int\limits_{0}^\infty {\mathcal{L}\left[\cos(xt)\right](\xi)\,\mathcal{L^{-1}}\left[\frac{1}{1+t^2}\right](\xi)}\,\mathrm d\xi\\ &=\int\limits_{0}^\infty{\frac{\xi}{\xi^2+x^2}\sin\xi}\,\mathrm d\xi \\ &=\int_0^\infty\frac{t\sin(t)}{t^2+x^2}\,\mathrm dt\\ &\stackrel{\%}{=}\int_0^\infty\frac{|x|\cos(t)}{t^2+x^2}\,\mathrm dt\\ &=\frac{\pi}{2} e^{-|x|} \end{align} where $(*)$ uses $\int_0^\infty f(x)g(x)\,dx = \int_0^\infty(\mathcal{L} f)(s)\cdot(\mathcal{L}^{-1}g)(s)\,ds$ (see Laplace transform of the product of two functions for a good generalization). P.S. Step $(\%)$ is a byproduct of expanding $\sin t=\frac{e^{it}-e^{-it}}{2i}$ and applying the formula above twice, which lands on the next line, an integral of the same family. From there (assuming positive $x$ ; $I$ is even in $x$ ), one can obtain $I'=-I$ , which is solved by $I=Ce^{-x}$ ; the constant $C=\frac{\pi}{2}$ follows from $I(0)=\int_0^\infty\frac{\mathrm dt}{1+t^2}=\frac{\pi}{2}$ .
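The closed form $I(x) = \frac{\pi}{2}e^{-|x|}$ can be sanity-checked numerically. A Python sketch using a plain midpoint rule, with the cutoff $T$ chosen so that $xT$ is a multiple of $\pi$ (making the truncated oscillatory tail $O(1/T^2)$):

```python
import math

def I_num(x, T=100 * math.pi, n=200_000):
    """Midpoint-rule approximation of the integral of cos(x t)/(1+t^2)
    over [0, T]; the tail beyond T is negligible for the x values used here."""
    h = T / n
    return sum(math.cos(x * (i + 0.5) * h) / (1 + ((i + 0.5) * h) ** 2)
               for i in range(n)) * h

for x in (0.5, 1.0, 2.0):
    exact = math.pi / 2 * math.exp(-abs(x))
    print(x, I_num(x), exact)
```

The agreement to several digits across different $x$ also confirms the exponential decay predicted by the ODE argument.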
|calculus|improper-integrals|laplace-transform|
0
How many solutions can $x^4=-1$ have on a field of characteristic $0$?
I am looking at the equation $x^4=-1$ over a generic field $k$ of characteristic $0$ . If $k$ is algebraically closed, it will certainly have $4$ roots. I think that they should all be different, but I am not completly sure about this. Could someone tell me if this is the case, and explain why? If we are looking at the complex numbers for example, it is clear to me why they are all different. But since $k$ is unknown, I don't really know how to deal with this. Now let's look at the case where $k$ is not algebraically closed. In this case, it could be that the equation had no solutions in the ground field (for example if $k=\mathbb{Q}$ ). But could it be that it had only two solutions? Again, I think that the answer is no, but wouldn't be able to argue it. In both cases, my intuition comes from this answer, and also from the fact that I know that every field of characteristic $0$ must contain a copy of $\mathbb{Q}$ . So it cannot do very weird things with the solutions of this equation.
Let $F$ be a field of characteristic not $2$ . All the roots of $x^4+1=0$ are distinct, and if $\alpha$ is one such root, the set of the $4$ distinct roots is $\{\alpha, -\alpha, \alpha^3, -\alpha^3\}$ . We next establish this, by first noting that if $\alpha$ is a root of $x^4+1$ , then so are both $-\alpha$ and $\alpha^3$ . Indeed, that $-\alpha$ is also a root is easy to check. To see that $\alpha^3$ is also a root, note that $$\alpha^4=-1 \implies (\alpha^4)^3 = (-1)^3$$ $$\implies (\alpha^3)^4 = (-1)^3 = -1.$$ Next note that $\alpha^3$ is not in the set $\{-\alpha, \alpha\}$ . Indeed, if on the one hand the equation $\alpha^3=-\alpha$ is true, then [multiplying both sides by $\alpha$ and noting $\alpha^4=-1$ ] the string of equations $-1=\alpha^4 = -\alpha^2$ must also be true, giving $\alpha^2=1$ which [in a field of characteristic not $2$ ] implies $\alpha^4=1 \not = -1$ , a contradiction. If on the other hand the equation $\alpha^3=\alpha$ is true, then [multiplying both sides by $\alpha$ again] we get $-1 = \alpha^4 = \alpha^2$ , so $\alpha^4 = (\alpha^2)^2 = 1 \neq -1$ , again a contradiction. The same reasoning shows $-\alpha^3 \notin \{\alpha, -\alpha\}$ , and $\alpha^3 \neq -\alpha^3$ because $2\alpha^3 \neq 0$ in characteristic not $2$ . Hence $\{\alpha, -\alpha, \alpha^3, -\alpha^3\}$ consists of $4$ distinct roots, and since $x^4+1$ has degree $4$ , these are all of them.
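Over $\mathbb C$, where one root is $\alpha = e^{i\pi/4}$, the claimed root set $\{\alpha, -\alpha, \alpha^3, -\alpha^3\}$ can be verified directly (Python sketch):

```python
import cmath

alpha = cmath.exp(1j * cmath.pi / 4)      # one root of x^4 + 1 in C
roots = [alpha, -alpha, alpha ** 3, -alpha ** 3]

# each element really is a root of x^4 + 1 ...
assert all(abs(r ** 4 + 1) < 1e-12 for r in roots)

# ... and the four are pairwise distinct
for i in range(4):
    for j in range(i + 1, 4):
        assert abs(roots[i] - roots[j]) > 1e-6
```

These are exactly the four primitive $8$th roots of unity, matching the abstract argument that the roots come in the pairs $\pm\alpha$, $\pm\alpha^3$.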
|polynomials|field-theory|roots|extension-field|
1
If each connected component of a submanifold of Euclidean space embeds does the entire manifold embed?
I have been working on a problem where I want to prove an injective immersion $f$ from a submanifold $M \subset R^n$ (actually, a level set) to $R^k,\; k \geq \dim M$ is a topological embedding. If I knew $M$ was connected I could do it (I believe) and so I have been twisting myself into a pretzel trying to come up with a proof. Then I thought: Question : Does $M$ have to be connected? What if I could show that $f$ restricted to each connected component of $M$ was a topological embedding to its image? Would that suffice to show that the image of $M$ is an embedded submanifold of $R^k$ ? Since immersions are inherently local and topological embeddings don't seem to depend on connectivity, I can't immediately see why that wouldn't be true. Any pointers towards a way of proving/disproving this would be appreciated. Edit To correct inequality. Edit I see where the title of my post is more general than I intended. I have changed the title to accurately reflect the actual question I'm asking
The answer is no. Even if the restrictions of $f$ to the components of $M$ are already embeddings, $f$ need not be. I submit this poorly drawn sketch, derived from Example 4.19 in Lee's Introduction to Smooth Manifolds , as a counterexample: the maps taking the blue interval $(0, 2)$ to the blue curve above and the red interval $(-2, 0)$ to the red curve below can both be chosen to be embeddings (I'll leave coming up with parametrizations and verifying this to you as an exercise :), but the combined map is not an embedding: its image is compact while its domain is not.
|general-topology|smooth-manifolds|
1