| title | question_body | answer_body | tags | accepted |
|---|---|---|---|---|
| string | string | string | string | int64 |
Are solutions to polynomial equations subideals in a polynomial ring?
|
I'm wondering how I can make sense of a partial solution to a system of two polynomial equations. Let $\{f,g\}\subseteq K[x,y]$ be polynomials in a polynomial ring over the field $K$ . Given the equations: $$f(x,y)=a$$ $$g(x,y)=b$$ every solution is contained in the affine variety $\mathcal{V}(I_0)\neq\emptyset$ of the ideal $$I_0=\langle f(x,y)-a,g(x,y)-b\rangle$$ Suppose I have a function $t\in K[x,y]$ with $f(t,y)=a$ , so that: $$I_1=\langle f(t,y)-a,g(t,y)-b\rangle=$$ $$\langle a-a,g(t,y)-b\rangle=\langle g(t,y)-b\rangle$$ Is there a relationship between $I_0$ and $I_1$ ? Given these conditions, I thought about the possibility of the ideals satisfying $I_1\subseteq I_0$ , as in the following example: let $$I_0=\langle x+y-a,xy-b\rangle$$ and let $t=a-y$ ; then $$I_1=\langle ay-y^2-b\rangle$$ We can show that $I_1\subseteq I_0$ by exhibiting an $r\in I_0$ that satisfies: $$r=-y(x+y-a)+xy-b=$$ $$ay-y^2-b$$ So $$I_1=\langle r\rangle\subseteq I_0$$ But is this generally the case? How can I prove it? Maybe the
|
Actually, I found something interesting... We can extend the polynomial ideal $I_0\subseteq K[x,y]$ to the ring of rational functions in two variables by noting: $$I_0\subseteq A[x,y]=\left\{\frac{n(x,y)}{d(x,y)}\;\middle|\;n\in K[x,y],\ d\in K[x,y]\setminus\{0\}\right\}$$ By performing this extension, we can define a function $\eta\in A[x,y]$ as the solution to the equation: $$\eta(f(x,y)-a)+g(x,y)-b=g(t,y)-b$$ which then shows: $$\eta=\frac{g(t,y)-g(x,y)}{f(x,y)-a}$$ (remember, $t\in K[x,y]$ ). This is utterly unexpected for me, because it shows that the ideal inclusion $I_1\subseteq I_0$ depends explicitly on the base ring of the ideals, and we have the implication: $$I_0\subseteq A[x,y]\Rightarrow I_1\subseteq I_0$$ But the same statement for ideals in the ring $K[x,y]$ might not be true (i.e. when $\eta\notin K[x,y]$ )
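As a numeric sanity check of the worked example (a sketch; it hard-codes the question's example $f=x+y$ , $g=xy$ , $t=a-y$ , for which $\eta$ simplifies to the polynomial $-y$ ):

```python
import random

random.seed(0)
for _ in range(100):
    a, b, x, y = (random.uniform(-5, 5) for _ in range(4))
    f = x + y          # f(x, y)
    g = x * y          # g(x, y)
    t = a - y          # the substitution t = a - y
    # r = -y*(f - a) + (g - b) should equal a*y - y**2 - b identically
    r = -y * (f - a) + (g - b)
    assert abs(r - (a * y - y ** 2 - b)) < 1e-9
    # eta = (g(t, y) - g(x, y)) / (f(x, y) - a); here it simplifies to -y,
    # so eta lies in K[x, y] and I_1 ⊆ I_0 already holds in the ring itself
    if abs(f - a) > 1e-6:
        eta = (t * y - g) / (f - a)
        assert abs(eta - (-y)) < 1e-6
```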
|
|polynomials|ideals|polynomial-rings|
| 0
|
Laurent series expansion of $f(z) = \cos(z/(1-z))$ around $z = 1$
|
I am tasked with finding whether the function $f(z) = \cos(z/(1-z))$ can be developed into a Laurent series around $z = 1$ , and if so, what the radius of convergence and the residue are. So far what I have is: the function has an isolated singularity at $z = 1$ , so indeed it can be developed into a Laurent series at that point. There are two different Laurent series expansions around $z = 1$ , one for $|z| < 1$ (with radius of convergence 1, I suppose?) and one for $|z| > 1$ (with infinite radius of convergence). Is this right? If the above is correct, how would I go about developing the Laurent series from here for these two different domains? I know I can write $z/(1-z)$ as $-\frac{1}{z-1} - 1$ and can then sub that into the formula for the $\cos(z)$ Taylor series, but I am not sure how that interacts with the different domains or how to simplify from there?
|
Don't overthink it; since cosine is even, $\cos\frac{z}{1-z}=\cos\frac{z}{z-1}$ , and you can write straightforwardly: $$ \begin{align} f(z) &= \cos\left(\frac{z}{z-1}\right) \\ &= \cos\left(1+\frac{1}{z-1}\right) \\ &= \cos(1)\cos\left(\frac{1}{z-1}\right) - \sin(1)\sin\left(\frac{1}{z-1}\right) \\ &= \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!} \frac{\cos(1)}{(z-1)^{2k}} - \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!} \frac{\sin(1)}{(z-1)^{2k+1}} \end{align} $$ where we used the Taylor series of the trigonometric functions. Alternatively, you could use the identity $\cos t = \frac{e^{it}+e^{-it}}{2}$ and gather the terms of the exponential series according to the powers of $i$ . Consequently, the residue associated to this essential singularity, i.e. the coefficient of $(z-1)^{-1}$ , turns out to be $\mathrm{Res}_{z=1} f(z) = -\sin(1)$ . As for the radii of (the annulus of) convergence, they are given respectively by $r = \displaystyle \limsup_{k\to\infty} |a_{-k}|^{1/k} = 0$ and $\frac{1}{R} = \displaystyle \limsup_{k\to\infty} |a_k|^{1/k} = 0$ , hence $R = \infty$ : the series converges for all $z \neq 1$ .
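As a quick numerical check of the expansion (a sketch; the evaluation point and the number of terms are arbitrary choices, and the terms decay factorially):

```python
import math, cmath

def f(z):
    return cmath.cos(z / (1 - z))

def laurent_partial(z, terms=20):
    # partial sum of the Laurent series around z = 1 derived above
    w = z - 1
    s = 0j
    for k in range(terms):
        s += (-1) ** k / math.factorial(2 * k) * math.cos(1) / w ** (2 * k)
        s -= (-1) ** k / math.factorial(2 * k + 1) * math.sin(1) / w ** (2 * k + 1)
    return s

z = 3 + 0.5j  # any point with z != 1 works
assert abs(f(z) - laurent_partial(z)) < 1e-9
```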
|
|complex-analysis|laurent-series|
| 1
|
Proof of Lemma 2.5.22, Liu's Alg. Geometry
|
I'm struggling to understand the last statement below from Qing Liu's "Algebraic geometry and arithmetic curves". Lemma 2.5.22 (pg. 73): Let $A$ be a finitely generated integral domain over $k$ , and let $p$ be a prime ideal of $A$ of height $1$ . Then $\dim A/p=\dim A-1.$ Proof. Let $f\in p\setminus \{0\}$ . Then $\sqrt{fA_p}=pA_p.$ As $A$ is Noetherian, there exists an $h\in A\setminus p$ such that $$\sqrt{fA_h}=pA_h.$$ My question: why does such an $h$ exist? Thanks!
|
The ideal $p$ is finitely generated, say $p=(y_1,...,y_k)$ . Since $\sqrt{fA_p}=pA_p$ , there exists some $n\geq 1$ such that $\frac{y_i^n}{1}\in fA_p$ for all $1\leq i\leq k$ . This means for each $i$ there exists some $u_i\notin p$ such that $u_iy_i^n\in fA$ . And now let $h=u_1u_2...u_k$ . Note that for all $i$ we have $hy_i^n\in fA$ , and so $\frac{y_i}{1}\in\sqrt{fA_h}$ in $A_h$ . Since the elements $\frac{y_1}{1},...,\frac{y_k}{1}$ generate $pA_h$ over $A_h$ , it follows that $pA_h\subseteq\sqrt{fA_h}$ . The other inclusion is easy.
|
|algebraic-geometry|commutative-algebra|
| 1
|
How are $\epsilon$-nets (as in centers of $\epsilon$-ball covers) related to nets (as in topology)?
|
Here are two definitions that I have encountered: The first, corresponding to this Wikipedia page , is the following. Definition. $\ $ Let $(X,d)$ be a metric space. Let $\epsilon\in\mathbb{R}^{>0}$ . An $\epsilon$ -net for $X$ is a subset $S\subseteq X$ of points such that the collection of $\epsilon$ -balls centered at those points forms a cover for $X$ , i.e., $$S\subseteq X \ \text{ is an }\epsilon\text{-net}\ \ \iff\ \ \bigcup_{x\in S}B_X(x,\epsilon)=X\ \ ,$$ or equivalently $$S\subseteq X \ \text{ is an }\epsilon\text{-net}\ \ \iff\ \ \forall x\in X,\ \ \operatorname{dist}(x,S)<\epsilon\ .$$ The second, corresponding to this Wikipedia page , is the following. Definition. $\ $ Let $(X,\tau)$ be a topological space. Let $(P,\preccurlyeq)$ be a partially ordered set with the additional property that $\forall a,b\in P,\ \ \exists c\in P\ :\ a\preccurlyeq c\ \land\ b\preccurlyeq c\ $ . $\,$ A net $x$ is a function $x\colon P\to X$ and is also denoted as $(x_\alpha)_{\alpha\in P}$ . How are these two concepts related?
|
No relation. Words from English (and other languages) are frequently taken for technical use in mathematics, and sometimes (as here) in two (or more) unrelated ways.
|
|general-topology|metric-spaces|nets|
| 1
|
Counting arrangements of seven people standing in a row of $3$ and a row of $4$, with $A$ and $B$ together, and with $A$ and $C$ separated
|
Seven people are standing in two rows, with three people in the front row and four in the back row. Among them, A and B must stand next to each other, while A and C must stand separately. How many different arrangements are there? My thought process is like this, but the result is incorrect. First, we consider the condition that A and B must stand next to each other. We can treat A and B as a single unit since they have to stand together. Now we have 6 "entities" to arrange (5 people plus the A-B unit), with 3 spots in the front row and 4 in the back row. Step 1: Choose 3 entities from the 6 to stand in the front row. There are $6 \choose 3$ ways to do this. Step 2: The 3 entities in the front row can be arranged in $3!$ ways. Step 3: The remaining 3 entities (including the A-B unit) in the back row can also be arranged in $3!$ ways. However, since A and B form a unit, there are actually only 2 arrangements (A on the left or A on the right). Step 4: We need to subtract the arrangements
|
$\textbf{Another Approach}$ The ordered block $AB$ can be placed in $5$ positions; this leaves $5$ valid spots for $C$ if the block starts either row, and $4$ valid spots otherwise. Ditto (mirror image) if we start with $BA$ . Thus the number of valid ways of placing $A,B,C$ is $2(2\cdot5+3\cdot4) = 44$ , and permuting the remaining $4$ people in the spots left gives Total permutations $= 44\cdot 4! = 1056$
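The count can be confirmed by brute force (a sketch; it assumes, as this answer does, that "standing separately" means "not adjacent within a row"):

```python
from itertools import permutations

# Positions 0-2 form the front row, 3-6 the back row; adjacency is within a row.
adjacent = {frozenset(p) for p in [(0, 1), (1, 2), (3, 4), (4, 5), (5, 6)]}
people = "ABCDEFG"  # A, B, C as in the problem; D-G are the other four

count = 0
for perm in permutations(range(7)):          # perm[i] = position of people[i]
    pos = dict(zip(people, perm))
    ab_together = frozenset((pos["A"], pos["B"])) in adjacent
    ac_apart = frozenset((pos["A"], pos["C"])) not in adjacent
    if ab_together and ac_apart:
        count += 1

assert count == 1056
```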
|
|combinatorics|permutations|combinations|
| 0
|
Double parenthesis confusion (Discrete Mathematics)
|
1- ((¬P ∧ ¬Q) ∨ (P ∨ R) ∨ (Q ∨ R)) ∨ R 2- (¬P ∧ ¬Q) ∨ (P ∨ R) ∨ (Q ∨ R) ∨ R 3- (¬P ∧ ¬Q) ∨ (P ∨ R) ∨ (Q ∨ R) ∨ (R ∨ R) Q: Why did we remove the double parentheses after step number 1? Also, my thinking was that I could distribute the R outside the double parentheses over all the sub-groups inside (distribute it over the whole thing), and not only over (Q ∨ R). *Note: I am new to discrete mathematics and to this site. I would really appreciate some tips on what to improve and learn, but be gentle with me since I am still a newbie.
|
Something isn't right: (1) $\neg ((\neg P \land \neg Q) \lor (P \lor R) \lor (Q \lor R)) \lor R$ is not equivalent to (2) $\neg (\neg P \land \neg Q) \lor (P \lor R) \lor (Q \lor R) \lor R$ Rather, it is equivalent to: (2') $(\neg (\neg P \land \neg Q) \land \neg (P \lor R) \land \neg (Q \lor R)) \lor R$ And notice that with that, you cannot drop the parentheses.
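The (non-)equivalences can be checked over all eight truth assignments (a sketch following the formulas in this answer, with the outer negation included):

```python
from itertools import product

def expr1(P, Q, R):
    return (not ((not P and not Q) or (P or R) or (Q or R))) or R

def expr2(P, Q, R):  # parentheses dropped: negation hits the first term only
    return (not (not P and not Q)) or (P or R) or (Q or R) or R

def expr3(P, Q, R):  # De Morgan applied correctly to the whole disjunction
    return ((not (not P and not Q)) and not (P or R) and not (Q or R)) or R

table = list(product([False, True], repeat=3))
assert all(expr1(*row) == expr3(*row) for row in table)   # equivalent
assert any(expr1(*row) != expr2(*row) for row in table)   # NOT equivalent
```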
|
|discrete-mathematics|propositional-calculus|
| 0
|
Given two well-orders $\langle A,R \rangle$ and $\langle B,S \rangle$, one of the following holds.
|
Let $\langle A,R \rangle$ and $\langle B,S \rangle$ be two well-orders, and let $\text{pred}(A,x,R) := \{y \in A \;|\; yRx\}$ and similarly for $\text{pred}(B,z,S)$ . It is claimed that one of the following must hold: $1$ . $\langle A,R \rangle \cong \langle B,S \rangle$ . $2$ . $\exists y \in B\left(\langle A,R \rangle \cong \langle \text{pred}(B,y,S),S\rangle\right)$ . $3$ . $\exists x \in A\left(\langle \text{pred}(A,x,R), R \rangle \cong \langle B,S \rangle \right)$ . The proof of this seems like a sketch, giving us $f := \{\langle v,w \rangle\;:\: v \in A \land w \in B \land \langle \text{pred}(A,v,R), R \rangle \cong \langle \text{pred}(B,w,S),S\rangle\}$ and it is claimed that it cannot be the case that both initial segments are proper, which I interpret as $\neg (\text{pred}(A,v,R) \subsetneqq A \land \text{pred}(B,w,S) \subsetneqq B)$ . First: Is my understanding of what is claimed correct? Second: Given that my understanding of what is claimed is correct, I fail to see why this can't be the case.
|
What you need to show is: $f$ is a function on its domain; in other words, for all $v\in A$ , there is at most one $w\in B$ such that $\langle v,w\rangle\in f$ . Use Lemma 6.1 for this. $f$ is order-preserving: if $v_1,v_2\in\text{dom}(f)$ and $v_1\,R\,v_2$ in $A$ , then $f(v_1)\,S\,f(v_2)$ in $B$ . $\text{dom}(f)$ is an initial segment of $\langle A,R\rangle$ and $\text{ran}(f)$ is an initial segment of $\langle B,S\rangle$ . $\text{dom}(f)$ and $\text{ran}(f)$ cannot both be proper initial segments (of $A, B$ respectively), because then you could extend $f$ by one more mapping, so $f$ would not be maximal.
|
|set-theory|well-orders|
| 0
|
Relations Symmetry and Transitivity
|
Given the following relations over the set $M := \{α, β, γ\}$ : $R_1 := \{(α, α), (α, β), (β, α), (β, β), (γ, γ)\}$ How is $R_1$ transitive? The condition for transitivity is $(a,y)\in R_1 \text{ and }(y,x) \in R_1 \implies (a,x) \in R_1$ . And how is $R_2$ not a partial order? A relation is a partial order if it is antisymmetric, reflexive and transitive.
|
Here is an illustration of $R_1$ (with the usual conventions): In this illustration, we can clearly see that $R_1$ is reflexive and symmetric. We have to check that $R_1$ is transitive: for all $(a,b)\in M^2$ , $((a,b)\in R_1\text{ and }(b,b)\in R_1)$ obviously implies that $(a,b)\in R_1$ , and likewise for $((a,a)\in R_1\text{ and }(a,b)\in R_1)$ . It therefore remains to verify that $((\alpha, \beta) \in R_1 \land (\beta,\alpha)\in R_1)$ yields $(\alpha,\alpha)\in R_1$ and $(\beta,\beta) \in R_1$ , which is TRUE. Probably the whole exercise was to check that $R_1$ is an equivalence relation on $M$ .
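The three properties can also be checked mechanically (a sketch; `a`, `b`, `c` stand in for $\alpha,\beta,\gamma$ ):

```python
from itertools import product

M = {"a", "b", "c"}  # stand-ins for alpha, beta, gamma
R1 = {("a", "a"), ("a", "b"), ("b", "a"), ("b", "b"), ("c", "c")}

reflexive = all((x, x) in R1 for x in M)
symmetric = all((y, x) in R1 for (x, y) in R1)
transitive = all(
    (x, z) in R1
    for (x, y), (y2, z) in product(R1, R1)
    if y == y2
)

assert reflexive and symmetric and transitive  # R1 is an equivalence relation
```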
|
|relations|order-theory|symmetry|
| 0
|
Confusion on finding if a limit exists or not.
|
I have started the introduction of calculus in high school, and I am confused about this problem. Define $f(x) = -x$ for $x \le 0$ . Then find out if $\lim_{x \to 0} f(x)$ exists or not. If it does, then state the value. Here is my approach: the limit exists if $\lim_{x \to 0^{-}} f(x) = \lim_{x \to 0^{+}} f(x)$ . Then is it appropriate to state that $\lim_{x \to 0^{-}} f(x) = 0$ , because $0^{-}$ is still in the interval $x \le 0$ ? It probably is. But is it appropriate for $\lim_{x \to 0^{+}} f(x) = 0$ ? I think not, because $0^{+}$ is NOT in the interval. Is my reasoning correct?
|
In general, when the domain of a function covers all real numbers, your approach of comparing the left and right limits is valid. In this case however, the function $f(x)$ stops at $x=0$ , so there is no need to calculate the limit approaching from the right side of $x=0$ . All you need is the left limit, since that’s the only possible direction you can approach $x=0$ from.
|
|calculus|limits|
| 0
|
Derivative of 2-norm of a matrix.
|
I have a function: $$f(A)=\operatorname{cond}(AB)=\left \|{AB}\right \|_2 \left \| {(AB)^{-1}} \right \|_2 $$ We know that: $$\left \| X \right \|_2= \sqrt{\lambda_{max}(X^TX)}$$ So: $$f(A)=\sqrt{\dfrac {\lambda_{max}((AB)^T(AB))} {\lambda_{min}((AB)^T(AB))}}$$ How should I take the derivative of $f(A)$ with respect to the matrix $A$ ? $$\dfrac {df(A)} {dA}$$
|
$ \def\k{\kappa} \def\s{\sigma} \def\e{\varepsilon} \def\o{{\tt1}} \def\so{\s_\o} \def\sr{\s_r} \def\uo{u_\o} \def\ur{u_r} \def\vo{v_\o} \def\vr{v_r} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\rank#1{\op{rank}\LR{#1}} \def\cond#1{\op{cond}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\c#1{\color{red}{#1}} \def\CLR#1{\c{\LR{#1}}} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} $ Let $C=AB$ and calculate its SVD and condition number $$\eqalign{ r &= \rank C,\qquad C &= \sum_{i=\o}^r \s_i u_i v_i^T \qquad \k = \cond C &= \fracLR{\so}{\sr} \\ }$$ From this post , the gradient of the $k^{th}$ singular value is $\;{\large\grad{\s_k}C} = u_kv_k^T\;\;$ (if it exists) The gradient of the condition number (assuming it's finite) is therefore $$\eqalign{ \grad{\k}C &= \fracLR{\sr\uo\vo^T-\so\ur\vr^T}{\sr^2} &= \frac{\uo\vo^T-\k\ur\vr^T}{\sr} \\ }$$ Rearrange this to recover the gra
|
|derivatives|partial-derivative|matrix-calculus|condition-number|
| 0
|
What is an intuitive explanation for $\operatorname{div} \operatorname{curl} F = 0$?
|
I am aware of an intuitive explanation for $\operatorname{curl} \operatorname{grad} F = 0$ (a block placed on a mountainous frictionless surface will slide to lower ground without spinning), and was wondering if there were a similar explanation for $\operatorname{div} \operatorname{curl} F = 0$.
|
Below is the entry I put in my RemNote. I hope it is helpful. It is not formal, but it is intuitive.
|
|multivariable-calculus|intuition|
| 0
|
Showing divergence of $\sum\limits_{k=1}^{\infty} \log\left(1+\frac{(-1)^{k+1}}{k^\alpha}\right)$ where $0<\alpha<\frac{1}{2}$
|
I am trying to prove that $$\sum\limits_{k=1}^{\infty} \log\left(1+\frac{(-1)^{k+1}}{k^\alpha}\right)$$ diverges, for any $0 < \alpha < \frac{1}{2}$ . I tried showing this by taking the Taylor expansion of $\log(1+\epsilon)$ around $0$ up to order $N$ , where $N$ is the minimal integer such that $\alpha N >1$ , then substituting $\epsilon = \frac{(-1)^{k+1}}{k^\alpha}$ . This resulted in the sum of the following Leibniz series, plus some convergent series (convergent because $\alpha N >1$ ), plus the remainder, and they all converge: \begin{align*} &\sum\limits_{k=1}^{\infty} \log\left(1+\frac{(-1)^{k+1}}{k^\alpha}\right)\\ =&\sum\frac{(-1)^{k+1}}{k^\alpha}-\sum\frac{1}{2k^{2\alpha}}+\dots+\sum(-1)^{N+1}\frac{\left((-1)^{k+1}\right)^N}{N\,k^{\alpha N}}+\sum R_N\left(\frac{(-1)^{k+1}}{k^\alpha}\right) \end{align*} which is clearly in contradiction with the sum of the logs diverging
|
Using $$ \log(1+x)=x-\frac{x^2}{2}+O(x^3) $$ one has $$\sum\limits_{k=1}^{\infty} \log\left(1+\frac{(-1)^{k+1}}{k^\alpha}\right)=\sum\limits_{k=1}^{\infty}\left[\frac{(-1)^{k+1}}{k^\alpha}-\frac{1}{2k^{2\alpha}}+O\left(\frac{1}{k^{3\alpha}}\right)\right]. $$ Noting that $\sum\limits_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^\alpha}$ converges for $\alpha>0$ (alternating series) and that $\sum k^{-2\alpha}$ converges if $\alpha>\frac12$ and diverges if $\alpha\le\frac12$ , one concludes that the series converges if $\alpha>\frac12$ and diverges if $\frac13<\alpha\le\frac12$ . (For smaller $\alpha$ , keep more terms of the expansion: the odd-order sums are alternating and converge, while the negative even-order term $-\sum\frac{1}{2k^{2\alpha}}$ still diverges and dominates, so the conclusion extends to all $0<\alpha\le\frac12$ .)
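The divergence for $\alpha\le\frac12$ is visible numerically (a sketch; $\alpha=0.4$ and the cutoffs are arbitrary choices; the partial sums drift to $-\infty$ roughly like $-N^{1-2\alpha}/(2(1-2\alpha))$ ):

```python
import math

def partial_sum(N, alpha):
    return sum(math.log(1 + (-1) ** (k + 1) / k ** alpha)
               for k in range(1, N + 1))

alpha = 0.4                          # a value in (0, 1/2)
s_small = partial_sum(1_000, alpha)
s_large = partial_sum(100_000, alpha)

# the partial sums keep decreasing without bound
assert s_large < s_small < 0
```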
|
|calculus|convergence-divergence|taylor-expansion|divergent-series|
| 0
|
What is the maximum and minimum value of the following function?
|
I came across a question in the book Calculus for the Practical Man. The question asks to find the maximum or minimum value of the following function: $$f(x) = \frac{1}{4}\cos^2x - \sin 2x$$ I first tried to find the critical values by differentiating this function to get: $$f'(x) = \frac{-\sin2x}{4}-2\cos2x$$ However, I am unable to solve this further to get the value of $x$ at the maximum or minimum. Can someone please help me out? Thanks for helping.
|
Write $\cos^2 x$ in terms of $\cos 2x$ . Then you will get $f(x)$ in the form $k + a \sin2x + b \cos2x$ , where $a,b,k$ are constants. Then you can use the fact that the maximum value of $a \sin\alpha + b \cos\alpha$ is $\sqrt{a^2 + b^2}$ (and the minimum value is $-\sqrt{a^2 + b^2}$ ). So the maximum value becomes $k + \sqrt{a^2 + b^2}$ and the minimum becomes $k - \sqrt{a^2 + b^2}$ .
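A numerical check of the resulting closed form (a sketch; the grid resolution is an arbitrary choice):

```python
import math

def f(x):
    return 0.25 * math.cos(x) ** 2 - math.sin(2 * x)

# cos^2 x = (1 + cos 2x)/2 gives f(x) = 1/8 + (1/8) cos 2x - sin 2x,
# i.e. k = 1/8, a = -1, b = 1/8
k, a, b = 1 / 8, -1.0, 1 / 8
predicted_max = k + math.sqrt(a * a + b * b)   # = (1 + sqrt(65)) / 8
predicted_min = k - math.sqrt(a * a + b * b)

xs = [i * 2 * math.pi / 200_000 for i in range(200_000)]
values = [f(x) for x in xs]
assert abs(max(values) - predicted_max) < 1e-6
assert abs(min(values) - predicted_min) < 1e-6
```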
|
|calculus|maxima-minima|
| 0
|
Reference Request: Quadratic Optimization with affine constraints
|
I was wondering what is a standard textbook/source that I can reference for this fact: $\min_y \frac{1}{2} y^T C^{-1} y - b^T y$ such that $A^T y = f,$ where $C^{-1}$ is an $m \times m$ symmetric positive definite matrix, and $A$ is an $m \times n$ matrix of rank $n \le m$ . This quadratic constrained minimization problem has a unique solution given by the system $$\begin{bmatrix} C^{-1} & A \\ A^T & 0 \end{bmatrix} \begin{bmatrix} y \\ \lambda \end{bmatrix} = \begin{bmatrix} b \\ f\end{bmatrix}.$$ I'm currently looking at these course notes (Proposition 14.3) but would like a more established reference: https://www.cis.upenn.edu/~cis5150/cis515-12-sl14.pdf
|
Found in Section 10.1.1 of "Convex Optimization" by Boyd and Vandenberghe
|
|reference-request|quadratic-programming|
| 1
|
How to prove that $S = \left\{ \alpha \in \mathbb{R}^3 : \alpha_1 + \alpha_2 e^{-t} + \alpha_3 e^{-2t} \le 1.1 \right\} $ is convex but not affine?
|
So, I'm trying to see if my approach can show that the set $S$ is not affine and is convex, with a similar argument for both cases: $S = \left\{ \alpha \in \mathbb{R}^3 : \alpha_1 + \alpha_2 e^{-t} + \alpha_3 e^{-2t}\le 1.1 \right\} \\ \text{where } t \ge 1 $ My first observation is that, given $x \in S$ , I can write the constraint as: $$ x \cdot \gamma(t) \le 1.1 $$ where $ \gamma(t)^T = \left[ 1, e^{-t} , e^{-2t} \right]$ . With that, I can write an arbitrary pair $x$ and $y$ from $S$ as the combination: $$ \left[ \theta x + (1-\theta)y \right]\cdot \gamma(t) $$ If $\theta \in \mathbb{R} $ the combination is affine, and if $0 \le \theta \le 1 $ the combination is convex. In my understanding, since $\theta x + (1-\theta)y$ is real and $x,y \in S$ , I can bound them by 1.1. How can I refine my argument to prove that: $S$ is convex when I construct the vector $\theta x + (1-\theta)y$ with $0 \le \theta \le 1 $ ; $S$ is not affine when $\theta \in \mathbb{R} $ . The reason for my thinking is that the difference of
|
Your thoughts so far are good. In general, the set $A = \{x \in \Bbb{R}^n : u \cdot x \le c\}$ , where $u \in \Bbb{R}^n$ is non-zero and $c \in \Bbb{R}$ , is a closed halfspace. Such sets are never affine; take any two vectors whose difference is not perpendicular to $u$ , and the line connecting them cannot be contained in $A$ . More specifically, consider the points $x = \frac{cu}{\|u\|^2}$ and $y = \frac{(c-1)u}{\|u\|^2}$ . Note that $$u \cdot x = \frac{c}{\|u\|^2}(u \cdot u) = c,$$ and similarly, $$u \cdot y = c - 1 \le c.$$ That is, $x, y \in A$ . Now, let $$z = 2x - y = \frac{2cu}{\|u\|^2} - \frac{(c-1)u}{\|u\|^2} = \frac{(c+1)u}{\|u\|^2}.$$ Note that $z$ is an affine combination of $x$ and $y$ , but $$z \cdot u = c + 1 > c,$$ hence $z \notin A$ . Thus, $A$ is not affine. On the other hand, halfspaces are convex. If we assume $x, y \in A$ are arbitrary, and $\theta \in [0, 1]$ , then $$u \cdot (\theta x + (1 - \theta)y) = \theta(u \cdot x) + (1 - \theta)(u \cdot y) \le \theta c + (1 - \theta)c = c,$$ so $\theta x + (1-\theta)y \in A$ , and $A$ is convex.
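A numeric illustration of this construction (a sketch; the vector $u$ is an arbitrary choice, and $c = 1.1$ echoes the question):

```python
# Halfspace A = {x : u·x <= c}, with sample values u = (1, 2, 3), c = 1.1.
u = (1.0, 2.0, 3.0)
c = 1.1

def dot(p, q):
    return sum(a * b for a, b in zip(p, q))

n2 = dot(u, u)                                  # ||u||^2
x = tuple(c * ui / n2 for ui in u)              # u·x = c, so x in A
y = tuple((c - 1) * ui / n2 for ui in u)        # u·y = c - 1, so y in A
z = tuple(2 * xi - yi for xi, yi in zip(x, y))  # affine combination 2x - y

assert abs(dot(u, x) - c) < 1e-12
assert dot(u, y) <= c
assert dot(u, z) > c                            # z = (c+1)u/||u||^2 leaves A
# convex combinations, by contrast, stay in A:
for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    w = tuple(theta * xi + (1 - theta) * yi for xi, yi in zip(x, y))
    assert dot(u, w) <= c + 1e-12
```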
|
|convex-analysis|
| 1
|
Only $x_{6n}, y_{6n}$ doesn't have the same simplest denominator
|
I'm studying the sequence \begin{align*} a_1=\mathrm{i}, \quad a_{n+1}=\mathrm{i} + \frac{\mathrm{i}}{a_n}, \end{align*} where $\mathrm{i}$ is the imaginary unit. In order to see the structure of this sequence clearly, I separate the real part and the imaginary part, as follows: \begin{align*} x_{n+1} = \frac{y_n}{x_n^2+y_n^2}&, \quad y_{n+1}=\frac{x_n}{x_n^2+y_n^2}+1, \\ &a_n = x_n + y_n\mathrm{i}, \end{align*} with $x_1=0, y_1=1$ . When I computed the first few terms to look for patterns, I found that only $x_{6n}, y_{6n}$ do not have the same denominator in lowest terms. The following are $x_n$ , $y_n$ for the first 20 terms ( $x_n$ is on the left and $y_n$ is on the right); this has also undergone preliminary programming verification on a larger scale. But I don't really know how to prove this. Can anyone give me an idea, or prove this wrong? Additional information: if we consider a more complex situation, given two nonnegative integers X and Y, make the following sequence: \begin{align*} x_{n+1} =
|
In this answer I shall endeavor to explain how it comes to pass that $a_{6n}$ can be expressed as rational numbers with different denominators, while others cannot. I find that there are two conditions that are necessary for the rational numbers of $(X+iY)/D$ to have different denominators. The first is that $X,Y,D$ are all even numbers. The second is that the prime factors of $X,Y,D$ have disparate values of repeating factors such as 2 and 3. This will be shown in examples below. A short table of $X,Y,D$ values was shown in one of my previous answers on this page. Here is what I did: First of all, $X$ and $D$ are known sequences (thank you user @Somos for pointing this out). $X$ is given exactly by OEIS A092886. You may also notice that $Y(n)=X(n+1)$ . $D$ is given exactly by OEIS A105309. But you can find that A105309 (n) = A092886(n+1) - A092886(n-1). Or, $D(n)=X(n+1)-X(n-1)=Y(n)-X(n-1)$ . So everything comes down to $X$ . At this point I regenerated the table of $X,Y,D$ using the O
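The observed pattern can be reproduced with exact rational arithmetic (a sketch; `Fraction` keeps each term in lowest terms automatically, and the range matches the question's 20-term table):

```python
from fractions import Fraction

x, y = Fraction(0), Fraction(1)          # x_1 = 0, y_1 = 1
mismatches = []                          # indices n where denominators differ
for n in range(2, 21):
    d = x * x + y * y
    x, y = y / d, x / d + 1              # the recurrence from the question
    if x.denominator != y.denominator:
        mismatches.append(n)

assert mismatches == [6, 12, 18]         # only multiples of 6, as observed
```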
|
|sequences-and-series|algebra-precalculus|discrete-mathematics|recurrence-relations|
| 0
|
Error in Gaussian Quadrature when $f \in C[a,b]$ and $f \notin C^{2n}[a,b]$
|
Suppose that $x_1,x_2,\ldots x_n$ , are the roots of a degree $n$ orthogonal polynomial $Q_n(x)$ on $[a,b]$ . Consider the corresponding Gaussian Quadrature $$\int_a^b f(x) dx \approx \sum_{j=1}^nA_jf(x_j)$$ where $A_j=\int_a^b l_j(x) dx$ for Lagrange basis polynomials $l_j$ associated with $x_1,x_2,\ldots x_n$ . Show that $|\int_a^b f(x) dx-\sum_{j=1}^nA_jf(x_j)| \leq 2(b-a)\min_{p \in P_{2n-1}}||f-p||_{C[a,b]}$ I know how to derive the error estimates when $f \in C^{2n}[a,b]$ but without that assumption, I don't have any estimates on $f$ to begin with. I suppose we have to use the fact that Gaussian Quadrature is exact for all $p \in P_{2n-1}$ but I don't know where to begin.
|
How about taking any polynomial $p\in P_{2n-1}$ ; then $$\int_a^b f(x) dx - \sum_{j=1}^n A_j f(x_j) = \int_a^b ( f(x) - p(x) )dx + \int_a^b p(x) dx - \sum_{j=1}^n A_j f(x_j) $$ Since the quadrature rule is exact for $p$ , this becomes $$\int_a^b ( f(x) - p(x) )dx + \sum_{j=1}^n A_j (p(x_j) - f(x_j) ),$$ which in absolute value is at most $$ (b-a)\|f-p\|_{\infty} + \sum_{j} A_j \|p-f\|_{\infty} = 2(b-a)\|f-p\|_{\infty}, $$ where the last equality uses $\sum_j A_j = \int_a^b 1\,dx = b-a$ (exactness on constants). Of course $A_j > 0$ is needed; this is a standard property of Gaussian quadrature.
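The two facts used here (exactness on $P_{2n-1}$ , and $A_j>0$ with $\sum_j A_j=b-a$ ) can be illustrated with the two-point Gauss-Legendre rule on $[-1,1]$ (a sketch; the nodes and weights are the standard ones for $n=2$ ):

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1]: nodes ±1/sqrt(3), weights 1, 1.
# It is exact for all polynomials of degree <= 2n - 1 = 3.
nodes = (-1 / math.sqrt(3), 1 / math.sqrt(3))
weights = (1.0, 1.0)

def quad(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

# exact integrals over [-1, 1]: x^k integrates to 0 (k odd) or 2/(k+1) (k even)
for k, exact in [(0, 2.0), (1, 0.0), (2, 2 / 3), (3, 0.0)]:
    assert abs(quad(lambda x: x ** k) - exact) < 1e-12

# weights are positive and sum to the interval length b - a = 2
assert all(w > 0 for w in weights) and abs(sum(weights) - 2.0) < 1e-12
```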
|
|numerical-methods|approximation|
| 1
|
$f(n)= \left\lfloor \frac{an+b}{cn+d} \right\rfloor , \forall n \in \mathbb N$ is surjective
|
Let $a,b,c,d \in \mathbb N$ , $d \ne 0$ , and consider the function $f: \mathbb N \rightarrow \mathbb N $ such that: $$f(n)=\left \lfloor \frac{an+b}{cn+d}\right \rfloor , \forall n \in \mathbb N$$ Prove that $(c=0 , b<d)$ implies that $f$ is a surjective function. I tried to solve this Olympiad problem by myself. My solution is totally different from the one presented by the author, so it is difficult for me to self-evaluate. Can you please tell me if the solution below is complete and correct? And can you give it a score from 0 to 7? Thanks! My attempt: consider $c=0, b<d$ , so the function becomes $f(n)=\left \lfloor \frac{an+b}{d}\right \rfloor , \forall n \in \mathbb N$ . Let $k \in \mathbb N$ such that $ f(n)=k \implies$ $$ \implies k = \left\lfloor \frac{an+b}{d}\right \rfloor \implies $$ $$\implies k \le \frac{an+b}{d} \implies kd \le an+b $$ (we can divide by $a$ since $a > 0$ ) $$\implies \frac{kd-b}{a} \le n$$ Now I will prove that $n = \lfloor \frac{kd-b}{a} \rfloor + 1 $ checks the condition
|
A clearer, but non-constructive, proof is to show: (1) $f(0)=0;$ (2) $f(n+1)\leq f(n)+1;$ and (3) $f(n)$ is unbounded above. All of these are easy to prove. Then we can show any function $f:\mathbb N\to\mathbb N$ which satisfies (1), (2), and (3) is surjective. Assume $m\in\mathbb N.$ If $m=0$ then $f(0)=0.$ If $m>0,$ let $S=\{n\in\mathbb N\mid f(n)\geq m\}.$ By (3), $f$ is not bounded above, so $S$ is not empty. Let $n$ be the least element of $S.$ Then, since $m>0,$ we have $n>0,$ and $n-1\notin S,$ since $n$ is the least element of $S.$ So $f(n-1)<m,$ and by (2), $m\leq f(n)\leq f(n-1)+1\leq m.$ So $f(n)=m.$ We can make this constructive in your case, because we can construct $n_0$ such that $f(n_0)\geq m$ and then just check the values $f(n)$ for all $n\leq n_0.$ Here, take $n_0=dm;$ then $f(n_0)=\left\lfloor ma+\frac{b}d\right\rfloor =ma\geq m.$ So this proof is constructive if property (3) is constructive - that is, if there is a constructive function $g$ such that $f(g(m))\geq m$ for all $m.$ Overkill We get a
|
|functions|solution-verification|ceiling-and-floor-functions|
| 0
|
Generalization of Geometric Mean, Standard Deviation, etc.
|
The geometric mean can be thought of as the exponential of the arithmetic mean of the logarithms of your dataset $\{a_i\}_{i=1}^n$ : $$GM = \exp\left(\dfrac{1}{n}\sum_{i=1}^{n}\ln(a_i)\right)$$ Similarly, the standard deviation of a dataset is the square root of the arithmetic mean of the squares of your errors $\{e_i\}_{i=1}^{n}$ : $$SD = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}(e_i)^2}$$ I find it interesting that both of these ideas seem to follow a more general pattern: $$f^{-1}\left( \dfrac{1}{n} \sum_{i=1}^{n} f(x_i) \right),$$ where the function in question is $f(x) = x^2$ for the standard deviation, and $f(x) = \ln x$ for the geometric mean. Even the arithmetic mean is trivially of this form, just with $f(x)= x$ as the identity function. Are there other widely-used variations of these kinds of "functional" averages? And is there anything we can say, more universally, about these kinds of averages as a whole? In particular, I find it interesting that all three of the averages I mentio
|
This type of average is called a quasi-arithmetic mean when $f$ is continuous. The midpoint property is then guaranteed because $f$ must be strictly monotonic if it is continuous and has a left inverse $f^{-1}$ . As you observed, choosing $f(x)=\ln(x)$ (or $f(x)=\log_a(x)$ for any positive $a \neq 1$ ) results in the geometric mean. Another notable example is that $f(x)=\frac{1}{x}$ corresponds to the harmonic mean. For any such $f$ , the $f$ -mean $M_f(\vec{x})=f^{-1}\left(\frac{1}{n} \sum_{i=1}^n f(x_i)\right)$ enjoys many other properties that would be expected of an averaging function (see the Wikipedia page). It is easy to show, for instance, that the $f$ -mean is idempotent: for all $x$ , $M_f(x,\cdots,x)=x$ .
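A small implementation of the $f$ -mean (a sketch; the function and dataset names are arbitrary):

```python
import math

def f_mean(f, f_inv, xs):
    # quasi-arithmetic mean: f^{-1}( (1/n) * sum f(x_i) )
    return f_inv(sum(f(x) for x in xs) / len(xs))

data = [1.0, 2.0, 4.0, 8.0]

geometric = f_mean(math.log, math.exp, data)
harmonic = f_mean(lambda x: 1 / x, lambda y: 1 / y, data)
arithmetic = f_mean(lambda x: x, lambda y: y, data)

assert abs(geometric - math.prod(data) ** (1 / len(data))) < 1e-12
assert abs(harmonic - len(data) / sum(1 / x for x in data)) < 1e-12
assert abs(arithmetic - sum(data) / len(data)) < 1e-12
# idempotency: the f-mean of identical values is that value
assert abs(f_mean(math.log, math.exp, [3.0] * 5) - 3.0) < 1e-12
```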
|
|statistics|functions|average|standard-deviation|
| 1
|
Can I avoid differentiation to find the remainder when $x^{73}-2x^{15}+3x-1$ is divided by $(x-1)^2$
|
The problem states: Find the remainder when $x^{73}-2x^{15}+3x-1$ is divided by $(x-1)^2$ . So, what I did is: assume the remainder to be linear, i.e. $r(x) = ax+b$ . By Euclidean division, $x^{73}-2x^{15}+3x-1 = q(x)(x-1)^2+r(x)$ . Putting $x = 1$ , we get $a+b = 1$ . Then I differentiated the identity, and the rest is trivial. How can I skip differentiation and get the answer by some other method/way? I was thinking perhaps something involving number theory? I want to know: is it possible to skip calculus in such questions?
|
COMMENT.- We know $$P(x)=x^{73}-2x^{15}+3x-1=q_1(x)(x-1)+P(1)=q_1(x)(x-1)+1$$ and because $q_1(x)=q_2(x)(x-1)+q_1(1)$ we have $$P(x)=q_2(x)(x-1)^2+\color{red}{q_1(1)(x-1)+1}$$ If $P(x)=q_2(x)(x-1)^2+ax+b$ then $P(1)=a+b=1$ , and the values of $a$ and $b$ are given by $a=q_1(1)$ and $b=-(q_1(1)-1)$ respectively. The value $q_1(1)$ can be obtained using the binomial theorem, and it is $46$ (look at Theo Bendit's answer above).
|
|calculus|algebra-precalculus|derivatives|polynomials|
| 0
|
Knuth's arrow notation
|
I have looked at Knuth's arrow notation and am wondering if multiplication can be represented by $n\uparrow^0m$ , and also whether addition can be represented as $n\uparrow^{-1}m$ .
|
From https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation , we have the following facts: $a\uparrow^nb=H_{n+2}(a,b)$ $H_1(a,b)=a+b$ $H_2(a,b)=a\times b$ Knuth's arrow notation is usually considered for $n\geq 1$ , but we can extend it to $n\geq-2$ to have the identities, $a\uparrow^{-1}b=H_1(a,b)=a+b$ $a\uparrow^0b=H_2(a,b)=a\times b$
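These identities can be checked with a direct implementation of the hyperoperation recursion (a sketch; `H` and `up_arrow` are hypothetical helper names):

```python
def H(n, a, b):
    """Hyperoperation H_n(a, b): H_1 = addition, H_2 = multiplication,
    H_3 = exponentiation, H_4 = tetration, ..."""
    if n == 0:
        return b + 1                  # successor
    if n == 1 and b == 0:
        return a
    if n == 2 and b == 0:
        return 0
    if n >= 3 and b == 0:
        return 1
    return H(n - 1, a, H(n, a, b - 1))

def up_arrow(a, k, b):
    # a ↑^k b = H_{k+2}(a, b), valid for k >= -2 under this extension
    return H(k + 2, a, b)

assert up_arrow(2, -1, 3) == 2 + 3    # ↑^{-1} is addition
assert up_arrow(2, 0, 3) == 2 * 3     # ↑^{0} is multiplication
assert up_arrow(2, 1, 3) == 2 ** 3    # ↑^{1} is exponentiation
assert up_arrow(2, 2, 3) == 16        # 2↑↑3 = 2^(2^2)
```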
|
|functions|
| 1
|
Euler product proof by the fundamental theorem of arithmetic
|
I was looking at the proof of the Euler product at MathWorld: https://mathworld.wolfram.com/EulerProduct.html First we expand the product (this I understand); then "we write each term as a geometric series." But I see that we don't write "each term" as a geometric series. The whole term is $\frac{1}{1-\frac{1}{p_k^s}}$ , but we take only the part $1/p_k^s$ . Why don't we take the whole term? I'm not understanding this line either: $$ 1 + \sum_{1\leq i} \frac{1}{p_i^s} + \sum_{1\leq i \leq j} \frac{1}{p_i^s p_j^s} + \sum_{1\leq i \leq j \leq k} \frac{1}{p_i^s p_j^s p_k^s} + \ldots $$ How does $\sum_{1\leq i} \frac{1}{p_i^s}$ become $\frac{1}{2^s}$ ? Edit (Ioveri's answer): As per Ioveri's answer, Equation (4) below, written in general form, contains the reciprocals of all natural numbers written as prime factors. The zeta sum is the sum of the reciprocals of all natural numbers. So both sides state the same thing and they are equal. So, even more explicitly, the first sum is the sum of the rec
|
How does $\sum_{1\leq i} \frac{1}{p_i^s}$ become $\frac{1}{2^s}$ ? It simply doesn't. The series $\frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\dots$ isn't obtained by taking those sums individually but by rearranging their terms. First I'll explain what those sums mean: $$\sum_{1\leq i} p^{-s}_i \text{ is the sum over the primes, each raised to the power of } -s$$ $$\sum_{1\leq i \leq j} p^{-s}_i p^{-s}_j = \sum_{1\leq i \leq j} (p_ip_j)^{-s}\text{ is the same kind of sum, but each number has 2 prime factors} \\ \text{(repetition also counts, e.g. } 4^{-s} = 2^{-s}2^{-s} \text{ is also a term})$$ $$\sum_{1\leq i \leq j \leq k} p^{-s}_i p^{-s}_j p^{-s}_k = \sum_{1\leq i \leq j \leq k} (p_ip_jp_k)^{-s} \text{ is the same sum, but each number has 3 prime factors, ...}$$ And so on. This works because each natural number larger than 1 has a unique prime factorization. And so when we add all these sums together, it's the same as "the sum over all numbers that have an arbitrary number of prime factors, each raised to the power of $-s$ ", or in other words
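The rearrangement can be seen numerically for $s=2$ , where the full sum is $\zeta(2)=\pi^2/6$ (a sketch; truncating the product at primes up to 100 is an arbitrary cutoff, so only approximate agreement is expected):

```python
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p in range(2, n + 1) if sieve[p]]

s = 2.0
euler_product = 1.0
for p in primes_up_to(100):
    euler_product *= 1 / (1 - p ** -s)     # geometric series 1 + p^-s + p^-2s + ...

zeta_2 = math.pi ** 2 / 6                  # known value of sum 1/n^2
assert abs(euler_product - zeta_2) < 0.01  # truncated product, so only close
assert euler_product < zeta_2              # omitted primes only add factors > 1
```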
|
|euler-product|
| 1
|
ODE and semi flow
|
I am new to ODEs and semi-flows, so this is a beginner question. If an ODE $x^{\prime} = h(x)$ admits a unique solution, is it right to say that the flow generated by this equation, i.e. $\Phi_t(x) = x + \int_0^t h(s) \,\mathrm{d}s$ , satisfies these properties: (a) $\Phi_0 = Id$ ; (b) $\Phi_t$ is continuous $\forall t$ ; (c) $\Phi_{t+s} = \Phi_t \circ \Phi_s$ $\forall s,t \geq 0$ ? I can prove the first two points, but I fail to prove the third one. I try to prove this point in the particular case where $T$ is a contraction on $\mathbb{R}^d$ and $h = T - I_d$ . Thanks in advance
|
(Too long for a comment) I guess you are stuck because of the notation (which may become quite confusing when dealing with flows). In fact, the flow $\Phi_t$ needs an initial state, because it acts by making the solution evolve by a duration $t$ . In the formula you provided, the initial state is given implicitly by the initial condition $x = x(0)$ . This formula can be extended to other initial states by "reconstructing" the flow $\Phi_t$ from the original differential equation. Indeed, the latter is integrated as $$ x(t) - x(s) = \int_{x(s)}^{x(t)} \mathrm{d}x = \int_s^t \dot{x} \,\mathrm{d}\tau = \int_s^t h(x(\tau)) \,\mathrm{d}\tau, $$ hence $$ x(t_0+t) = \Phi_t(x(t_0)) := x(t_0) + \int_{t_0}^{t_0+t} h(x(\tau)) \,\mathrm{d}\tau. $$ As is, you should be able to prove the third property more easily.
|
|ordinary-differential-equations|
| 0
|
Can I avoid differentiation to find the remainder when $x^{73}-2x^{15}+3x-1$ is divided by $(x-1)^2$
|
The problem states: Find the remainder when $x^{73}-2x^{15}+3x-1$ is divided by $(x-1)^2$ . So what I did is: assume the remainder to be linear, i.e. $r(x) = ax+b$ . By Euclid's division, $x^{73}-2x^{15}+3x-1 = q(x)(x-1)^2+r(x)$ . Putting $x = 1$ , we get $a+b = 1$ . Then I differentiated it, and the rest is trivial. How can I skip differentiation and get the answer by some other method/way? I was thinking perhaps something involving number theory? I want to know: is it possible to skip calculus in such questions?
|
Claim: $$x^n\equiv nx-(n-1)\pmod{(x-1)^2}$$ Proof by induction: It is true for $n=2$ . Let it be true for $n$ as above. For $n+1$ , $$x^n\equiv nx-(n-1)\implies\\ x^{n+1}\equiv nx^2-(n-1)x\equiv n(2x-1)-(n-1)x\equiv (n+1)x-n\pmod{(x-1)^2}$$ Using the claim for the problem, we have $$x^{73}-2x^{15}+3x-1\equiv 73x-72-2(15x-14)+3x-1\\ \equiv 46x-45\pmod{(x-1)^2}$$
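The claimed remainder $46x-45$ can also be checked mechanically with ordinary polynomial long division, no calculus involved. A minimal Python sketch (the helper `poly_rem` is mine, not from the answer):

```python
def poly_rem(num, den):
    """Remainder of num / den; coefficient lists are low-degree-first, den monic."""
    num = num[:]
    while len(num) >= len(den):
        c = num[-1]                       # leading coefficient to cancel
        shift = len(num) - len(den)
        for i, d in enumerate(den):
            num[shift + i] -= c * d
        num.pop()                         # leading term is now exactly zero
    return num

# x^73 - 2x^15 + 3x - 1, coefficients listed from the constant term up
p = [0] * 74
p[73], p[15], p[1], p[0] = 1, -2, 3, -1

r = poly_rem(p, [1, -2, 1])               # divide by (x-1)^2 = x^2 - 2x + 1
print(r)  # [-45, 46], i.e. remainder 46x - 45
```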
|
|calculus|algebra-precalculus|derivatives|polynomials|
| 0
|
Density argument in Normed spaces
|
Assume $(X,\|\cdot\|_X)$ is a normed space and $Y$ is a dense linear subspace of $X$ . How does one prove that for each $x \in X$ there exists a sequence $(y_j)_j \subset Y$ such that $$\sum_{j=1}^\infty y_j = x$$ in $X$ and $$\sum_{j=1}^\infty\|y_j\|_X \leq 2\|x\|_X$$
|
Take $y_1$ within $||x||/2$ of $x$ , with $||y_1||\leq ||x||$ . Next, $x-y_1$ has norm $\leq ||x||/2$ , so take $y_2$ within $||x||/4$ of $x-y_1$ , with $||y_2||\le ||x||/2$ . Then $x-y_1-y_2$ has norm $\leq ||x||/4$ . Keep going, so we get a sequence of $y_i$ such that $||y_i||\le 2^{-i+1}||x||$ , while $||x-\sum_{i=1}^ny_i||\leq 2^{-n}||x||$ . So this is the sequence you were searching for. Indeed, the partial sums are clearly Cauchy, hence the infinite sum exists and is equal to $x$ , and the infinite sum has the wanted bound: $$\sum_{i=1}^\infty ||y_i||\le \sum_{i=1}^\infty 2^{-i+1}||x||=2\cdot ||x||$$
|
|sequences-and-series|functional-analysis|analysis|dense-subspaces|
| 1
|
Sieving the range $[a,b]$
|
In the Sieve of Eratosthenes we sieve the range $[1,N]$ by crossing out 1, then crossing out 2 and multiples of 2, then taking 3 and crossing out 3 and multiples of 3, and so on... picking the next uncrossed number as the next prime. Instead of the range $[1,N]$ , if we start with a range $[a,b]$ , what is the best approach? We can start with the first even number in the range, which is $a$ or $a+1$ . But if $a$ is odd, how do we know whether it is prime or composite (without doing a primality test of course, because we are sieving)? I am guessing we still have to start the sieving from $2$ to collect primes less than $a$ because we need them to sieve the range $[a,b]$ to eliminate multiples of those primes, as this MSE question/answer seems to suggest. Keep in mind that we are ultimately using the range sieving to see if it contains a factor of $N$ . So, the modified method could use GCD computation to throw away $a$ after the GCD returns 1. If not, we have a factor of $N$ . Is there any ot
|
To sieve the range $[a,b]$ you need all primes up to $\sqrt b$ . To get these, start by sieving the range $[1, \sqrt b]$ . After that you can sieve the range $[a,b]$ : for each prime $p_i$ , cross out its multiples starting with the least multiple $m$ of $p_i$ that is at least $a$ : $m = a + p_i - 1 - ((a + p_i - 1) \operatorname{mod} p_i)$ . Alternatively, cross out even numbers first and use the odd primes for sieving in steps of $2p_i$ .
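A short Python sketch of this segmented sieve (the function names are mine; the start-of-segment formula is the one above):

```python
def primes_up_to(n):
    """Ordinary Sieve of Eratosthenes for the base primes."""
    flags = [True] * (n + 1)
    flags[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            for m in range(p * p, n + 1, p):
                flags[m] = False
    return [i for i, f in enumerate(flags) if f]

def segmented_sieve(a, b):
    """Primes in [a, b], using only the primes up to sqrt(b)."""
    a = max(a, 2)
    flags = [True] * (b - a + 1)
    for p in primes_up_to(int(b ** 0.5)):
        # least multiple of p not below a (the formula from the answer above)
        m = a + p - 1 - (a + p - 1) % p
        # start at p*p so p itself survives if it lies inside [a, b]
        for x in range(max(m, p * p), b + 1, p):
            flags[x - a] = False
    return [a + i for i, f in enumerate(flags) if f]

print(segmented_sieve(90, 120))  # [97, 101, 103, 107, 109, 113]
```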
|
|elementary-number-theory|factoring|primality-test|sieve-theory|
| 0
|
relation of probability and conditional expectation
|
Let $X$ and $Y$ be jointly normally distributed $(X,Y) \in \mathcal{N}(\mu, \Sigma)$ . Is the following always correct $$ P(X \leq a, X + Y \leq b) = E(P(X \leq a, X + Y \leq b|X)) $$
|
Yes, here are the details $$\begin{align} \mathbb{P}(X \leq a, X + Y \leq b) &=\mathbb{E}(\mathbf{1}_{\{X \leq a, X + Y \leq b \}}) \\ &=\mathbb{E}(\mathbb{E}(\mathbf{1}_{\{X \leq a, X + Y \leq b \}}|X)) \\ &= \mathbb{E}(\mathbb{P}(X \leq a, X + Y \leq b|X)) \end{align}$$ And the statement holds true for all random variables $X,Y$ (not necessarily jointly normal).
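Since the tower property is purely algebraic, it can be checked exactly on a small discrete example (a sketch with an invented joint distribution; exact rationals via `fractions`):

```python
from fractions import Fraction

# A small, invented discrete joint distribution for (X, Y); the tower
# property E[P(A | X)] = P(A) holds for any distribution, so exact
# rational arithmetic on a finite sample space is enough to illustrate it.
pmf = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 8),
    (1, 0): Fraction(1, 4), (1, 1): Fraction(1, 8),
    (2, 0): Fraction(1, 8), (2, 1): Fraction(1, 4),
}
a, b = 1, 2

def event(x, y):
    return x <= a and x + y <= b

# Direct probability of the event
direct = sum(p for (x, y), p in pmf.items() if event(x, y))

# E[P(event | X)]: condition on each value of X, then average over X
tower = Fraction(0)
for x0 in {x for x, _ in pmf}:
    px = sum(p for (x, _), p in pmf.items() if x == x0)
    cond = sum(p for (x, y), p in pmf.items() if x == x0 and event(x, y)) / px
    tower += cond * px

print(direct, tower)  # 5/8 5/8
```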
|
|probability|probability-distributions|conditional-probability|
| 1
|
How to prove $\sum_{i=1}^n\frac{(1-a_i)^n}{a_i\prod_{j\neq i}(a_j-a_i)}=\frac{1}{a_1\cdots a_n}-1$?
|
Prove that $\displaystyle\sum_{i=1}^n\frac{(1-a_i)^n}{a_i\prod_{j\neq i}(a_j-a_i)}=\frac{1}{a_1\cdots a_n}-1$ for distinct $a_1,\cdots,a_n\in\mathbb{R}\backslash\{0\}$ . I noticed that it is equivalent to $\displaystyle\sum_{i=1}^nx_i^n\prod_{j\neq i}\frac{1-x_j}{x_i-x_j}=1-\prod_{k=1}^n(1-x_k)$ where $x_i=1-a_i$ , then the LHS can be regarded as a Lagrange interpolation formula: the value at $x=1$ of the polynomial of degree $n-1$ going through $(x_1,x_1^n),\cdots,(x_n,x_n^n)$ . But I don't know if it is useful for the proof.
|
Another approach, using partial fractions. Solve by partial fractions: $$\frac{(x-1)^n}{(x-a_1)(x-a_2)\cdots (x-a_n)}=1+\sum_{i=1}^n \frac{b_i}{x-a_i}\tag1$$ Multiplying by $x-a_i$ and evaluating both sides at $x=a_i,$ you get: $$b_i=\frac{(a_i-1)^n}{\prod_{j\neq i} (a_i-a_j)}=-\frac{(1-a_i)^n}{\prod_{j\neq i}(a_j-a_i)} $$ Evaluating $(1)$ at $x=0,$ you get: $$\frac{1}{a_1\dots a_n}=1+\sum_{i=1}^n \frac{(1-a_i)^n}{a_i\prod_{j\neq i}(a_j-a_i)}$$ Which easily gives your result. More generally, if $p(x)$ is a polynomial of degree less than or equal to $n$ with coefficient $c$ at $x^n,$ we get: $$\frac{(-1)^np(0)}{a_1\dots a_n}=c-\sum_i\frac{p(a_i)}{a_i\prod_{j\neq i}(a_i-a_j)}$$ If $p(x)=(x-k)^n,$ then you get: $$\frac{k^n}{a_1\dots a_n}-1=\sum_{i=1}^n\frac{(k-a_i)^n}{a_i\prod_{j\neq i}(a_j-a_i)}$$ If $p(x)=(x-k)^m$ for $m<n,$ then $c=0$ and: $$\frac{k^m}{\prod a_i}=\sum_{i=1}^n \frac{(k-a_i)^m}{a_i\prod_{j\neq i}(a_j-a_i)}$$ If $1\leq m\leq n$ then you get, by subtracting $k^m/\prod a_i$ from $k\cdot k^{m-1}/\
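The identity obtained at $x=0$ can be spot-checked exactly in Python with rational arithmetic (a sketch; the test values for the $a_i$ are arbitrary distinct nonzero numbers):

```python
from fractions import Fraction
from math import prod

def lhs(a):
    """Left-hand side of the identity, computed exactly with rationals."""
    n = len(a)
    total = Fraction(0)
    for i, ai in enumerate(a):
        total += (1 - ai) ** n / (ai * prod(aj - ai for j, aj in enumerate(a) if j != i))
    return total

a = [Fraction(2), Fraction(3), Fraction(-1), Fraction(5)]
print(lhs(a), 1 / prod(a) - 1)  # -31/30 -31/30
```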
|
|combinatorics|analysis|inequality|polynomials|symmetric-polynomials|
| 0
|
How to prove $\sum_{i=1}^n\frac{(1-a_i)^n}{a_i\prod_{j\neq i}(a_j-a_i)}=\frac{1}{a_1\cdots a_n}-1$?
|
Prove that $\displaystyle\sum_{i=1}^n\frac{(1-a_i)^n}{a_i\prod_{j\neq i}(a_j-a_i)}=\frac{1}{a_1\cdots a_n}-1$ for distinct $a_1,\cdots,a_n\in\mathbb{R}\backslash\{0\}$ . I noticed that it is equivalent to $\displaystyle\sum_{i=1}^nx_i^n\prod_{j\neq i}\frac{1-x_j}{x_i-x_j}=1-\prod_{k=1}^n(1-x_k)$ where $x_i=1-a_i$ , then the LHS can be regarded as a Lagrange interpolation formula: the value at $x=1$ of the polynomial of degree $n-1$ going through $(x_1,x_1^n),\cdots,(x_n,x_n^n)$ . But I don't know if it is useful for the proof.
|
You are almost done using your approach. You have an $n-1$ degree polynomial $p(x)$ such that $p(a_i) = a_i^n$ . The $a_i$ are the roots of $x^n - p(x)$ and since this is monic, it follows that $x^n - p(x) = (x-a_1)...(x-a_n)$ . Then, plugging in 1 gives $p(1) = 1 - (1-a_1)...(1-a_n)$ which is what you wanted.
|
|combinatorics|analysis|inequality|polynomials|symmetric-polynomials|
| 0
|
Faster methods for finding binary reciprocals by hand
|
Are there any faster methods than long division for finding the binary expansion of a reciprocal by hand? I've noticed some patterns, such as $\frac{1}{2^n-1} = 0.\overline{[n - 1\text{ 0s}]1}$ and $\frac{1}{2^n+1} = 0.\overline{[n\text{ 0s}][n\text{ 1s}]}$ , which speed up certain reciprocals, but they only apply to a small number of them. I've been unable to find patterns that apply to other reciprocals. I'm attempting to calculate Pi to high precision by hand, which with the series I'm using requires calculating binary reciprocals for many numbers. Any speedup in the process would greatly decrease the time it takes. I'm not super knowledgeable in math, so apologies if this is a bad question.
|
This answer ended up being kind of long. So if you want to see the process, jump to the summary at the end. Denominators of the form $2^n - 1$ When we write a real number $x \in [0, 1]$ as a sequence of bits $(b_1, b_2, \dots)$ , where each $b_i \in \{0, 1\}$ , what we are really doing is expressing it as the sum of the series $$ x = \sum_{i=1}^{\infty} \frac{b_i}{2^i}. $$ As I'm sure you know, this sequence is eventually periodic if and only if the number is rational. For instance, you've observed that in binary, with $n = 3$ , \begin{align} 0.\overline{001} &= 0.001001001001\dots \\ &= 0.001 + \frac{0.001}{1000} + \frac{0.001}{1000^2} + \frac{0.001}{1000^3} + \cdots \\ &= 0.001 \cdot \biggl(1 + \frac{1}{1000} + \frac{1}{1000^2} + \frac{1}{1000^3} + \cdots \biggr) \end{align} And in decimal, this looks like $$ \frac18 \cdot \biggl( 1 + \frac{1}{8} + \frac{1}{8^2} + \frac{1}{8^3} + \cdots \biggr) = \frac{1}{8} \cdot \frac{1}{1 - \frac{1}{8}} = \frac{1}{8 - 1} = \frac{1}{7}, $$ which fi
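The digit-by-digit long-division process itself is easy to mechanize, which is handy for checking the periodic patterns above (a Python sketch; `binary_digits` is an invented helper):

```python
def binary_digits(q, n):
    """First n binary digits of 1/q after the point, via base-2 long division."""
    digits, r = [], 1
    for _ in range(n):
        r *= 2                 # shift the remainder one binary place
        digits.append(r // q)  # next digit is the integer quotient
        r %= q
    return digits

print(binary_digits(7, 9))  # [0, 0, 1, 0, 0, 1, 0, 0, 1]: 1/7 = 0.(001)
print(binary_digits(5, 8))  # [0, 0, 1, 1, 0, 0, 1, 1]: 1/5 = 0.(0011)
```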
|
|algorithms|binary|
| 0
|
Sequence of sequences $\{a^{(n)}\}_n \subseteq \ell^2$ with bounded members $|a^{(n)}_k| \leq 1$ has converging subseq.
|
I'm struggling to understand a partial step in the solution to an exercise: Given a seq. of seq. $\{a^{(n)}\}_{n \in \mathbb N} \subseteq \ell^2$ such that $|a^{(n)}_k| \leq 1 \ \forall n,k \in \mathbb N$ , there exists a subseq. $n_j$ such that $a^{(n_j)}_k \to a_k$ converges pointwise $\forall k \in \mathbb N$ . The solution brushes over this by stating that for each $k \in \mathbb{N}$ the seq. $\{a^{(n)}_k\}_{n \in \mathbb{N}}$ is a bounded seq. in $\mathbb R$ and hence has a converging subseq., and hence there is a reordering of $n$ such that $a^{(n)}_k \to a_k$ converges pointwise $\forall k \in \mathbb N$ . I understand that we can find a reordering for a particular $k$ , but I do not understand how we can just conclude that there is one that works for all $k$ .
|
If you take a reordering for a particular $k$ , then, throwing out all the other elements, you can now refine this new subsequence for the next $k$ (say $k+1$ ). On the other hand, there is no straightforward guarantee that after infinitely many steps an infinite subsequence will remain. But if at every step you keep one element (this is the usual diagonal argument: at step $k$ , keep the $k$ -th element of the current subsequence before refining further), then the algorithm still works, while keeping infinitely many elements.
|
|real-analysis|sequences-and-series|functional-analysis|analysis|lp-spaces|
| 1
|
Is there a polynomial $P(x)$ with integer coefficients that satisfies $P(2)=4$ and $P(6)=6$?
|
What I did so far: I imagined a polynomial $P(x)$ of degree $n$ . $P(6)=a_n6^n + \dots + a_26^2 + a_16 + a_0$ $P(2)=a_n2^n +\dots+ a_22^2 + a_12 + a_0$ $P(6) - P(2) =a_n(6^n - 2^n) +\dots + a_2(6^2 -2^2) + a_1(6 - 2) +0$ I have found out that $a^n - b^n$ is always divisible by $a-b$ . If I can prove that, then I know that $P(6) - P(2)$ has to be divisible by $6-2=4$ . But I don't know how to easily prove that.
|
You are just around the corner from the answer. What you showed is that $4 = (6-2)$ divides $P(6) - P(2) = 6 - 4 = 2$ , which is a contradiction. So no such polynomial exists. (For the divisibility fact you wanted: $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\dots+b^{n-1})$ .)
|
|algebra-precalculus|polynomials|divisibility|
| 0
|
On a property for normed spaces
|
I came upon the following specific property for a normed space $X$ , and I am looking for a characterization of the normed spaces where it holds true: If a sequence $x_n$ in $X$ satisfies $\displaystyle \lim_{n\to\infty}(\|x_n+y\|-\|x_n\|)=\|y\|$ for all $y\in X$ , then $\displaystyle \lim_{n\to\infty}x_n=0$ . This is not true in $l_1$ ; take $x_n=e_n$ , the unit vector basis. The same counterexample doesn't seem to work in $c_0$ , $l_\infty$ , and $l_p$ for $p\neq 1$ . Is this actually true in these spaces, or is this property, in fact, never satisfied?
|
Let me give you a full, but not satisfactory, general characterization of your property, and then a much nicer characterization of your property among separable spaces. First we say that a sequence $(x_n)$ is $L$ -orthogonal if $\|x_n + x \| \rightarrow 1 + \|x\|$ for every $x \in X$ ; and we say that an element $x^{**} \in X^{**}$ is $L$ -orthogonal if $\|x^{**} + x\| = \|x^{**}\| + \|x\|$ for every $x \in X$ . Proposition. A Banach space $X$ has your property iff it does not contain an $L$ -orthogonal sequence. Proof. First note that any $L$ -orthogonal sequence $(x_n)$ satisfies the assumption of your property, but $\|x_n\| \rightarrow 1$ , so spaces containing $L$ -orthogonal sequences cannot have your property. On the other hand, suppose that $X$ does not satisfy your property, witnessed by a sequence $(x_n)$ . As $(x_n)$ is bounded and not converging to zero, there is $c > 0$ and a subsequence $(y_n)$ of the sequence $(x_n)$ such that $\|y_n\| \rightarrow c$ . But then $z_n = c^{-1}
|
|functional-analysis|normed-spaces|banach-spaces|
| 1
|
Is my interpretation of this notation correct ? (G.E. Sacks book)
|
I am trying to read G.E. Sacks's book on higher recursion theory and he refers to the notation $\{ e \}^{f}$ as the $e$ th element in an enumeration of partial recursive functions in $f$ , where $f$ is a total function. Am I right to interpret that $f$ is actually an oracle ? (as commonly referred to in other recursion theory texts)..
|
Yes, this is correct, although I don't think he explicitly mentions the notion of oracle anywhere; Sacks does everything formally. Towards the beginning of the book, in Section 1.1, he defines: $$\{e\}^f(b) \simeq c\;\text{ iff }\;(\mathrm E y)[T(\bar f(y),e,b,y)\;\;\&\;\;U(y)=c],$$ where $\bar f(y)$ encodes $f$ restricted to the domain $\{i\mid i< y\}$ . $T$ and $U$ are as defined by Kleene in his presentation of recursion theory, so the definition of $\{e\}^f(b)\simeq c$ means that there is some natural number $y$ such that the computation of $\{e\}^f(b)$ halts within $y$ steps using only the first $y$ values of $f,$ and the value produced is $c$ (and if there is no such $y,$ then the computation diverges). Viewing the computation as using an oracle for $f$ is the natural way of thinking about this. Sacks's definition is just a formalization of that intuitive notion. I would guess that the motivation behind this formal approach (rather than using an informal notion of oracle) is probably so th
|
|logic|notation|computability|
| 1
|
Distribution of Increments of a fractional Brownian Motion
|
According to my textbook, a fBm process $B_t$ has the same distribution as its increments: $(B_{t+h}^H-B_t^H)_{t\geq 0} \stackrel{d}{=}(B_t)_{t\geq 0}$ To prove this, note that both sides are centered Gaussian processes. Hence it is enough to show that they have the same covariance, i.e.: $Cov(B_{t+h}^H-B_t^H,B_{s+h}^H-B_s^H) = \frac{1}{2}(t^{2H}+s^{2H}-|t-s|^{2H})$ But I am not able to prove this. So far I have: $Cov(B_{t+h}^H-B_t^H,B_{s+h}^H-B_s^H) \\ =E((B_{t+h}^H-B_t^H)(B_{s+h}^H-B_s^H)) \\ =\frac{1}{2}\left(|(t+h)-s|^{2H}+|t-(s+h)|^{2H}-|(t+h)-(s+h)|^{2H}-|t-s|^{2H}\right) \\ =\frac{1}{2}\left(|(t+h)-s|^{2H}+|t-(s+h)|^{2H}-2|t-s|^{2H}\right)$ How do I continue from this, or did I already make a mistake on the way?
|
I messed up the indices. It should be: $(B_{t+h}^H-B_{\mathbf{h}}^H)_{t\geq 0} \stackrel{d}{=}(B_t)_{t\geq 0}$ Now we can derive the desired result: $Cov(B_{t+h}^H-B_h^H,B_{s+h}^H-B_h^H) \\ =E((B_{t+h}^H-B_h^H)(B_{s+h}^H-B_h^H)) \\ =\frac{1}{2}\left( |t+h|^{2H}+|s+h|^{2H}-|t-s|^{2H} \\ -|t+h|^{2H}-|h|^{2H}+|t|^{2H} \\ -|h|^{2H}-|s+h|^{2H}+|s|^{2H} \\ +|h|^{2H}+|h|^{2H}-|h-h|^{2H}\right) \\ = \frac{1}{2}\left(|t|^{2H}+|s|^{2H}-|t-s|^{2H} \right) $
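The cancellation above is routine but fiddly, so here is a numerical spot check of the same algebra (a sketch; `R` is just the fBm covariance written as a plain function, and the random test values are arbitrary):

```python
import random

def R(u, v, H):
    """fBm covariance: E[B_u B_v] = (u^{2H} + v^{2H} - |u - v|^{2H}) / 2."""
    return (u ** (2 * H) + v ** (2 * H) - abs(u - v) ** (2 * H)) / 2

random.seed(1)
for _ in range(100):
    H = random.uniform(0.1, 0.9)
    t, s, h = (random.uniform(0.1, 5.0) for _ in range(3))
    # Cov(B_{t+h} - B_h, B_{s+h} - B_h), expanded by bilinearity
    cov = R(t + h, s + h, H) - R(t + h, h, H) - R(h, s + h, H) + R(h, h, H)
    assert abs(cov - R(t, s, H)) < 1e-9
print("stationary-increment covariance identity holds on random inputs")
```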
|
|probability-theory|stochastic-processes|random-variables|brownian-motion|
| 0
|
Number of zeros at the end of $5^5 \times 10^{10} \times 15^{15} \times \dots \times 120^{120} \times 125^{125} $?
|
I have to find the number of zeros at the end of the number $$5^5 \times 10^{10} \times 15^{15} \times \dots \times 120^{120} \times 125^{125} $$ First, I know that the zeros are formed by multiplying 2 with 5, and since 2 appears fewer times than 5 here, we should count the number of factors of 2. I can count the factors of 2 manually by selecting each even term in the product, calculating the power of 2 in it, and adding all these up. But what if the product were so large that doing it this way would consume too much time? So I searched and found that it is solved as follows: Highest power of 2 in the given product = (Number of multiples of 2) + (Number of multiples of 4) + (Number of multiples of 8) + (Number of multiples of 16) = (10+20+30+...+120) + (20+40+60+...+120) + (40+80+120) + 80 = 1520 How did they calculate it like that?
|
"How did they calculate it like that?" For $~p~$ prime, and $~n \in \Bbb{Z^+},~$ let $~V_p(n)~$ denote the largest integer $~\alpha~$ such that $~p^\alpha ~| ~n.~$ This implies that $~p^{\alpha + 1}~$ does not divide $~n.$ Now, let $~S = \{x_1, x_2, \cdots, x_k\}$ be any set of $~k~$ positive integers, and suppose that you wanted to compute $~V_p\left[\prod_{i=1}^k x_i\right].$ The basic approach would be, that for any $~r \in \Bbb{Z^+},~$ you would let $~S_r~$ denote the subset of $~S~$ that contains all elements $~x_i~$ (and only those elements) such that $~V_p(x_i) = r.~$ Let $~|S_r|~$ denote the number of elements in $~S_r.$ Then, $$V_p\left[\prod_{i=1}^k x_i\right] \\ = [1 \times |S_1|] + [2 \times |S_2|] + [3 \times |S_3|] + \cdots \\ = \sum_{r=1}^\infty \left[ ~r \times |S_r| \right]. \tag1 $$ The computation in (1) above can be re-expressed, reading column by column, as the tableau $$\begin{array}{ccccc} |S_1| & |S_2| & |S_3| & |S_4| & \cdots \\ & |S_2| & |S_3| & |S_4| & \cdots \\ & & |S_3| & |S_4| & \cdots \\ & & & |S_4| & \cdots \end{array}$$ In the above tableau, the 1st column represents all
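The two ways of counting, summing $r\times$(weight of the terms with $V_2 = r$) directly versus summing over the powers $2^c$ as in the quoted solution, can be compared on the original product $5^5\times 10^{10}\times\cdots\times 125^{125}$ (a Python sketch; helper names are mine):

```python
def v2(n):
    """Exponent of 2 in the prime factorization of n."""
    e = 0
    while n % 2 == 0:
        n //= 2
        e += 1
    return e

terms = [5 * k for k in range(1, 26)]     # the bases 5, 10, ..., 125

# Direct count: base m appears with exponent m, so it contributes m * v2(m).
direct = sum(m * v2(m) for m in terms)

# The quoted shortcut: for each power 2^c, add up the exponents (= the bases
# themselves) of all terms divisible by 2^c.
shortcut = sum(sum(m for m in terms if m % 2 ** c == 0) for c in range(1, 8))

print(direct, shortcut)  # 1520 1520
```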
|
|elementary-number-theory|
| 1
|
Multidimensional Mean Value Theorem with arbitrary norm
|
In the question Multivariate Mean Value Theorem Reference, the following statement was written for $x,y\in \mathbb{R}^{n}$ \begin{equation} ||f(x) - f(y)||_q \leq \sup_{z\in[x,y]}||f'(z)||_{(q,p)}||x-y||_p, \end{equation} where $z∈[x,y]$ denotes a vector $z$ contained in the set of points between $x,y\in \mathbb{R}^{n}$ , and $||f′(z)||_{(q,p)}$ is the $L(p,q)$ norm of the derivative matrix of $f:\mathbb{R}^{n}→\mathbb{R}^{m}$ evaluated at $z$ . I have not found a proof of this statement anywhere, including the link provided in the question above. But I've found the following proof of the classical Mean Value Theorem ( $p=q=2$ ) here, Theorem 5.4. My question is: can this proof be used for arbitrary norms? For the cases $p=q=1$ or $p=q=\infty$ it is obvious that the same proof can be used, since the following estimates for integrable vector-valued functions are easily obtained: $$ \|\int_0^1f(t)dt\|_1\le\int_0^1\|f(t)\|_1\,dt, $$ $$ \|\int_0^1f(t)dt\|_\infty\le\int_0^1\|f(t)\|_
|
At @Deane's suggestion I've posted the proof of the statement above here. In essence, this proof simply repeats the one given in the link above with the norm $l_2$ replaced by a more general one. Let $f:\mathbb{R}^{n}→\mathbb{R}^{m}$ be continuously differentiable on a domain containing the two points $x,y\in \mathbb{R}^{n}$ with a segment connecting them. Let $p,q\in [1,\infty]$ . By Theorem 5.3 in the link above we have the following representation for $f(x) - f(y)$ $$f(x) - f(y) = \begin{pmatrix}\int_0^1\nabla f_1(x+t(y-x))^T(y-x)dt\\...\\\int_0^1\nabla f_m(x+t(y-x))^T(y-x)dt\end{pmatrix}=\int_0^1\nabla f(x+t(y-x))^T(y-x)dt,$$ where $\nabla f_k$ is the gradient of the $k$ th component of $f$ . Then $$||f(x) - f(y)||_q = ||\int_0^1\nabla f(x+t(y-x))^T(y-x)dt||_q=\otimes$$ To estimate $\otimes$ we can apply $\|\int_0^1f(t)dt\|_p\le\int_0^1\|f(t)\|_p\,dt$ (to prove this inequality, it suffices to apply Hölder's inequality for finite sums to $||\sigma_f||^p_p,$ where $\sigma_f$ is the (finit
|
|integration|derivatives|vector-analysis|matrix-norms|mean-value-theorem|
| 0
|
Extrema of derivate are where tangent crosses the curve.
|
In this article https://www.jstor.org/stable/2310782 I found this proposition: Let $f$ be a differentiable function defined on an open interval $(a, b)$ containing the point $x_0$ . Let: (B) There exists an open interval $I\subset (a, b)$ , $x_0\in I$ , such that on $I$ the function $f'$ attains a (local) maximum or minimum at $x_0$ . (C) There exists an open interval $I\subset (a, b)$ , $x_0\in I$ , such that on $I$ we have $T\ge f$ on one side of $x_0$ and $T\le f$ on the other side. Here $T$ is the tangent to the graph of $f$ at the point $x_0$ . Then (B) implies (C). I am asking for help proving that if $x_0$ is a local minimum of $f'$ (i.e. $\exists \delta>0$ : $f'(x)\ge f'(x_0)$ for $x_0-\delta < x < x_0+\delta$ ), then $f\le T$ in $(x_0-\delta,x_0)$ and $f\ge T$ in $(x_0,x_0+\delta)$ . $[T(x)=f(x_0)+f'(x_0)(x-x_0)]$ . P.S. On Wikipedia ( https://en.wikipedia.org/wiki/Inflection_point ) I found that 'If all extrema of $f'$ are isolated (that is, in some neighborhood, x is the one and only point at which f' has
|
By $T$ , the author is referring to the linearization of $f$ about $x=x_0$ : $$ T(x) = f(x_0) + f'(x_0)(x-x_0) $$ By the Mean Value Theorem, for all $x \in (a,b)$ there exists a point $\xi$ between $x$ and $x_0$ , such that $$ f(x) = f(x_0) + f'(\xi)(x-x_0) $$ Now suppose that $f'(x_0)$ is the minimum value of $f'$ on $I$ , and $x$ is a point in $I$ less than $x_0$ . Then $$\begin{aligned} f'(\xi) &\geq f'(x_0) \\\implies f'(\xi)(x-x_0) &\leq f'(x_0)(x-x_0) \\\implies f(x_0) + f'(\xi)(x-x_0) &\leq f(x_0) + f'(x_0)(x-x_0) \\\implies f(x) &\leq T(x) \end{aligned}$$
|
|real-analysis|derivatives|maxima-minima|tangent-line|inflection-point|
| 0
|
How many ways can four planes be arranged in space?
|
It is very surprising to me that although the picture of the five ways that three planes can be arranged in 3-dimensional space is in many textbooks (college algebra, linear algebra) the analogous picture with four planes in space is so hard to find. How many arrangements are there? Is there a reference? I did ask a more general question in MO, where you can read that what we want to count are combinatorial equivalence classes of affine hyperplane arrangements--a very hard problem. Here I'm focusing on the simple question of "are there any drawings out there of all the ways to arrange just 4 planes in 3 affine dimensions." https://mathoverflow.net/questions/307862/counting-hyperplane-arrangements-up-to-combinatorial-equivalence-simple-example So far no real leads! There is of course lots of great literature on hyperplane arrangements, especially R. Stanley's lectures, the chapter in Grunbaum's book on convex polytopes, Zaslavsky's ground-breaking 1975 work on Facing up to Arrangements,
|
Combinatorial types of hyperplane arrangements are called dissection types in the dissertation of L. Finschi: https://finschi.com/math/publ/2001-08-31_Finschi_A-Graph-Theoretical-Approach-for-Reconstruction-and-Generation-of-Oriented-Matroids.pdf This thesis is a great source for the connection between oriented matroids and the known answers to this question. For instance, the first few terms in the sequence of numbers of arrangements of 2D planes in 3 dimensions, up to combinatorial equivalence, can be found from Table 8.1 on page 165. Sum the first three rows to get the sequence 1, 2, 5, 14, 74, 1854, ... [edit] Note however that this table (probably for higher $n$ ) counts pseudo-hyperplane arrangements, so the question is still open how many of each are realizable as hyperplane arrangements.
|
|combinatorics|geometry|
| 1
|
Do the tangent points of the circles that are tangent to the sides of a triangle lie on a conic section?
|
I came up with this feature about 3 months ago while using GeoGebra If we have a triangle, and we draw the three circles that touch its sides from the outside, then the six points of contact lie on one conic section. Is this property already known? How do we prove it anyway?
|
Now when I looked at the Carnot's Theorem page on the Cutting the Knot website and looked at the related topics, I found that my theorem had already been discovered. https://www.cut-the-knot.org/m/Geometry/InExConics.shtml
|
|geometry|euclidean-geometry|conic-sections|
| 0
|
Is there a conventional term for the $e^{i\theta}$ thing in $z=\rho e^{i\theta}$?
|
In the polar representation of a complex number, is there a conventional term for the $e^{i\theta}$ thing in $z=\rho e^{i\theta}$ ? I know $|z|=\rho$ is typically called the modulus , and $\theta$ is typically called the argument . $|z|\cos\theta=\Re z$ is called the real part, and $|z|\sin\theta=\Im z$ is called the imaginary part. But I don't recall there being any standard term for the expression $e^{i\theta}$ as a whole.
|
From linear algebra, I think it would be reasonable to call that factor the "direction", but I don't think that's common when working with the complex plane like this.
|
|complex-numbers|terminology|definition|
| 0
|
Is there a conventional term for the $e^{i\theta}$ thing in $z=\rho e^{i\theta}$?
|
In the polar representation of a complex number, is there a conventional term for the $e^{i\theta}$ thing in $z=\rho e^{i\theta}$ ? I know $|z|=\rho$ is typically called the modulus , and $\theta$ is typically called the argument . $|z|\cos\theta=\Re z$ is called the real part, and $|z|\sin\theta=\Im z$ is called the imaginary part. But I don't recall there being any standard term for the expression $e^{i\theta}$ as a whole.
|
It has a bit of a physics ring to it, but, at least in mixed company (mingling with physics majors), I often call it the phase factor . Wikipedia seems to give me some support. Maybe I got used to it when working in telecom? They do waves! I think I used it before my stint already :-)
|
|complex-numbers|terminology|definition|
| 0
|
Let $n$ be a prime number. Consider $\mathbb U_n$ which is the set of roots of unity.
|
Let $n$ be a prime number. Consider $\mathbb U_n$ , the set of roots of unity, i.e. the solutions of the equation $z^n=1$ ; more precisely $\mathbb U_n = \{1,\epsilon , \epsilon^2 , ... , \epsilon^{n-1}\}$ , where $\epsilon^k = \cos \frac{2k \pi}{n}+i\sin \frac{2k \pi}{n}$ , $k=0,1,2,...,n-1$ . My question is: if $n$ is prime, then how many subsets of $\mathbb U_n$ have the property that the sum of all the elements of the subset is 0? I think the answer would be 1 (the whole set), but I'm not sure and there might be more. I need someone's confirmation/contradiction. Thank you all!
|
If $n$ is prime, then each of $\epsilon^k$ , $1\leq k\leq n-1$ , is a root of $$p(z)=z^{n-1}+z^{n-2}+\dots+z+1.$$ This polynomial is irreducible over $\mathbb{Q}$ and hence it is the minimal polynomial of $\epsilon$ over $\mathbb{Q}$ . Suppose for some elements of the set $A=\{\epsilon, \epsilon^2, \dots \epsilon^{n-1}, 1\}$ we get a sum equal to zero, say $$\epsilon^{k_1}+\epsilon^{k_2}+\dots+\epsilon^{k_r}=0,$$ where all $k_i$ are distinct and $0\leq k_i\leq n-1$ . Assuming without loss of generality that $k_1$ is the highest value, this implies that $\epsilon$ satisfies another polynomial of degree $k_1 \leq n-1$ . Hence, this new polynomial must be $p(z)$ itself, as this new polynomial is monic and the minimal polynomial is always unique, and hence the above sum must involve all the elements of the set $A$ .
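For a small prime this can be verified by brute force over all nonempty subsets (a sketch for $n=5$; floating point with a tolerance is safe here, since the smallest nonzero subset sum is bounded well away from zero):

```python
from cmath import exp, pi
from itertools import combinations

n = 5  # a prime
roots = [exp(2j * pi * k / n) for k in range(n)]

# Collect every nonempty subset whose element sum is (numerically) zero.
vanishing = []
for r in range(1, n + 1):
    for subset in combinations(range(n), r):
        if abs(sum(roots[k] for k in subset)) < 1e-9:
            vanishing.append(subset)

print(vanishing)  # [(0, 1, 2, 3, 4)]: only the full set sums to zero
```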
|
|complex-numbers|roots-of-unity|
| 1
|
Calculating $\partial H_j\cdot H^{-1}_j$ where $H_j$ is the hermitian structure
|
Let $(E,h)$ be a hermitian holomorphic vector bundle and suppose we have local trivializations $U_i$ and $U_j$ with transition maps $g_{ij}$ . Let $H_i$ and $H_j$ represent the matrices of $h$ with respect to the trivializations $U_i$ and $U_j$ . We have that $H_j=g^T_{ij}H_i\bar{g}_{ij}$ . I'm trying to prove the following equation $$ \partial H_j\cdot H^{-1}_j=g^{-1}_{ij}(\partial H_i\cdot H_i)g_{ij} + g^{-1}_{ij}\partial g_{ij}, $$ but I've ran into some issues. What I currently have is the following. We write $H_j=g^T_{ij}H_i\bar{g}_{ij}$ and then $$ \begin{align*} \partial H_j=\partial(g^T_{ij}H_i\bar{g}_{ij})=(\partial g^T_{ij})H_i\bar{g}_{ij} + g^T_{ij}(\partial H_i)\bar{g}_{ij}+g^T_{ij}H_i(\partial \bar{g}_{ij}). \end{align*} $$ Now if I'm not mistaken $\partial \bar{g}_{ij} = 0$ so this reduces to $$ \begin{align*} \partial(g^T_{ij}H_i\bar{g}_{ij})=(\partial g^T_{ij})H_i\bar{g}_{ij} + g^T_{ij}(\partial H_i)\bar{g}_{ij}. \end{align*} $$ Multiplying this thing from the right by
|
If $H_i = g_{ij}H_jg_{ij}^*$ , then, writing $g=g_{ij}$ for ease of notation, we calculate as follows. Note that, as you said, $\partial g^* = (\partial\bar g)^\top = 0$ , since $\bar\partial g = 0$ (so in particular $\partial g = dg$ ). \begin{align*} \partial H_i\cdot H_i^{-1} &= \partial (gH_jg^*)(gH_jg^*)^{-1} = \partial g(H_jg^*(g^*){}^{-1}H_j^{-1})g^{-1} + g(\partial H_j)(g^*(g^*){}^{-1})H_j^{-1}g^{-1} \\ &= \partial g\cdot g^{-1} + g(\partial H_j\cdot H_j^{-1})g^{-1}, \end{align*} as desired.
|
|differential-geometry|complex-geometry|vector-bundles|
| 0
|
How to compute integral $\int_0^\pi \ln\left(2\sin\frac x2\right)dx$
|
I would like to compute $$\int_0^\pi \ln\left(2\sin\frac x2\right)dx$$ The primitive of the integrand does not seem to have a closed form. So I guess one would need to use symmetry or trig identities in this case. For some context, this is the first coefficient of a Fourier series, but I would like to compute the integral using only real techniques, no complex analysis or so.
|
\begin{align} I= &\int_0^\pi \ln(2\sin\frac x2)dx\overset{x\to \pi -x}=\int_0^\pi \ln(2\cos\frac x2)dx\\ =& \ \frac12 \int_0^\pi \ln(2\sin\frac x2) +\ln(2\cos\frac x2)\ dx\\ = & \ \frac12 \int_0^\pi \ln(2\sin x)dx= \int_0 ^{\pi/2} \ln(2\sin x)\,dx\\ \overset{x\to \frac x2}=& \ \frac12 \int_0^\pi \ln(2\sin\frac x2)dx=\frac12I \end{align} so $I=\frac12 I$ , hence $I=0$ .
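A quick numerical check that the integral really vanishes (a sketch, not part of the original answer; the midpoint rule is used so the integrable logarithmic singularity at $x=0$ is never evaluated exactly at the endpoint):

```python
from math import log, pi, sin

# Midpoint rule for I = int_0^pi ln(2 sin(x/2)) dx; midpoints avoid x = 0.
N = 200000
h = pi / N
total = h * sum(log(2 * sin((i + 0.5) * h / 2)) for i in range(N))
print(total)  # close to 0
```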
|
|real-analysis|integration|definite-integrals|trigonometric-integrals|
| 0
|
Why is the exponential function the most important?
|
In the prologue to his Real and Complex Analysis, Walter Rudin states matter-of-factly that $\exp$ is the most important function in mathematics (take note, not merely in analysis). This looked so incredible to me, and too much to claim (out of the infinite variety of conceivable mathematical functions extending beyond mapping subsets of $\mathbb{C}$ to other such subsets) that I began to wonder if this sentiment is a common one (I heard a math prof saying it's the second most important function -- he said this only as an afterthought; he had indeed claimed that it was the most important) and if so, what could so distinguish this function like this. So, my question is twofold: Do you think, or is it common knowledge to mathematicians, that the exponential function is indeed the most important in all of mathematics? If so, why is this the case? What could make it more important than all the conceivable and inconceivable functions (linking arbitrary pairs of sets) out there? Thank you.
|
It is an interesting question I have asked myself many times. Here is a (to-be-finished) list of reasons why the exponential function is important, which I hope most can agree with (everyone is welcome to correct and improve it): The functions $e^{inx}$ form an orthonormal (Hilbert) basis of the space of square-integrable functions; It has true "integrity", in the sense that it never hides how it changes: its rate of growth or decay is just itself as time goes by, since $(e^t)'=e^t$ ; The plane waves $e^{ik\cdot x}$ embrace all directions in space, so that you can scan functions against them (as with delta functions); It builds a nice bridge for extending number systems and fields, e.g. Euler's formula $e^{i\theta}=\cos\theta+i\sin\theta$ ; It underlies the standard functional representations (Fourier, Laplace, wave equations...); ...
|
|soft-question|exponential-function|
| 0
|
Ito's lemma and Stratonovich calculus
|
I tried converting Ito's lemma to Stratonovich form, but I got inconsistent results. Consider a 1-D SDE in both Ito and Stratonovich sense: $$ dX_t = \mu(X_t) dt + \sigma(X_t) dW_t = \bar{\mu}(X_t) dt + \sigma(X_t) \circ dW_t, $$ where $$ \bar{\mu} = \mu - \frac{1}{2}\sigma \sigma'. \qquad(1) $$ Let $f=f(x)$ be a smooth function of $x$ . For simplicity, $f, \mu$ , and $\sigma$ have no time dependence. By Ito's lemma, we get $$ df = \left( \mu f' + \frac{1}{2} \sigma^2 f'' \right) dt + \sigma f' dW_t. $$ Now, I want to transform the above equation into Stratonovich form, but I got inconsistent results. Here are my derivations. 1. Using transformation formula From Eq.(1), we have: $$ \begin{align} df &= \left[ \mu f' + \frac{1}{2} \sigma^2 f'' - \frac{1}{2} \sigma f' \frac{\partial (\sigma f')}{\partial X} \right] dt + \sigma f' \circ dW_t\\ &= \left[ \mu f' + \frac{1}{2} \sigma^2 f'' - \frac{1}{2} \sigma \sigma' (f')^2 - \frac{1}{2} \sigma^2 f' f'' \right] dt + \sigma f' \circ dW_t. \qq
|
It is not very clear from Kurt's answer to Fred's answer why $\frac{\partial}{\partial f}$ is legitimate. I write a detailed explanation here. The Ito $\Leftrightarrow$ Stratonovich conversion is only valid when $dX_t$ is an expression of the following form $$dX_t=\mu(X_t,t)dt+\sigma(X_t,t) dW_t\Leftrightarrow dX_t=\bar{\mu}(X_t,t)dt+\sigma(X_t,t)\circ dW_t$$ Clearly $$df(X_t)=\left(\mu(X_t,t)f'(X_t)+\frac12\sigma^2(X_t,t)f''(X_t)\right)dt+\sigma(X_t,t)f'(X_t)dW_t$$ is not in this form. So the conversion formula does not apply. If we let $Y_t=f(X_t)$ , the coefficients of the righthand side should involve $Y_t$ only, and then we can apply the conversion formula. To see this, assume $f$ is invertible. Then we have $$x=f^{-1}(y):=s(y)$$ $$f'(x)=\frac{1}{(f^{-1})'(y)}:=g(y)$$ $$f''(x)=-\frac{(f^{-1})''(y)}{((f^{-1})'(y))^3}:=h(y)$$ Via these conversions, we can rewrite the righthand side of the expression involving $Y_t$ only as $$dY_t=\left(\mu(s(Y_t),t)g(Y_t)+\frac12\sigma^2(s(Y_t),t)h(Y
|
|stochastic-calculus|stochastic-differential-equations|
| 0
|
Can you give me examples of infinite-rank partial isometries over B(H)?
|
I'm currently working with partial isometries over $B(H)$ (the set of all bounded linear operators over some Hilbert space $H$ ) whose range is $\infty$ -dim , and naturally require examples to test my thoughts and results. I'm mostly interested in operators of this type which are not projections or isometries (over all of $H$ ). So far, the only examples I've come up with which do not fall under these two subclasses are based on (powers of) the left and right-shift operators on $\ell_2$ . Definition A partial isometry is an operator $T \in B(H)$ such that $\lVert Tx \rVert = \lVert x \rVert$ for all $x \in \ker{T}^\perp$ (for a regular isometry, $\ker{T} = \{0\}$ , so the norm of all elements of $H$ is preserved). Whenever an operator is a partial isometry, its adjoint also is. Additionally, $T$ being a partial isometry is equivalent to $TT^*$ being a projection (same goes for $T^*T$ ) and it is also equivalent to $T^*TT^* = T^*$ (or $TT^*T=T$ ) being true.
|
Let $T$ be a partial isometry of $H$ , with kernel $K^\perp$ and range $L$ . The restriction $U$ of $T$ to $K$ is then an isometry onto $L$ , and $T=U\circ P$ , where $P$ is the orthogonal projection of $H$ onto $K$ . Conversely, given two closed subspaces $L,K\subseteq H$ and an isometry $U$ of $K$ onto $L$ , such a composition $T:=U\circ P$ is a partial isometry, with range $L$ . Equivalently, every partial isometry $T:H\to H$ is given by $$T(e_i)=\begin{cases}f_i&\text{if }i\in I\\0&\text{if }i\in J\end{cases}$$ where $(e_i)_{i\in I\sqcup J},(f_i)_{i\in I\sqcup K}$ are two Hilbert bases of $H$ .
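In finite dimensions the algebraic characterizations can at least be sanity-checked; here is a small NumPy illustration using a truncated right shift (of course this has finite rank, so it is only a toy model of the shift examples in the question):

```python
import numpy as np

n = 6
T = np.zeros((n, n))
for i in range(n - 1):
    T[i + 1, i] = 1.0        # truncated right shift: e_i -> e_{i+1}, e_n -> 0

P = T.T @ T                  # should be the projection onto ker(T)^perp
assert np.allclose(T @ T.T @ T, T)   # T T* T = T, so T is a partial isometry
assert np.allclose(P @ P, P)         # T* T is a projection
print("partial isometry checks passed")
```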
|
|functional-analysis|operator-theory|hilbert-spaces|
| 1
|
Probability System Definition and Random Variable
|
Setup I'm taking a course on probability and our professor has defined random variables in a way that I'm not so used to. I'm used to the definition of a random variable X as a function from the sample space into the real numbers. Instead we are given this: And then we get random variables: Usage A sample definition using this notation is: What I don't understand is what a system actually is. From what I can gather, a system is like some sort of function which maps into our sample space. But if that's the case, then what would the domain of our system W be? Another idea I had was that a system is just like a random variable, but it need not map out to the reals, instead just some arbitrary set, and then the random variable maps from that set to the reals. My hunch is that they are trying to decouple the idea of the random event, and then a random variable which produces a real number from that event, but I'm not entirely sure. If anyone has seen this before and knows formally what a
|
At least the class still uses a sample space $\Omega$ and still talks about functions $g:\Omega\rightarrow\mathbb{R}$ (putting details about sigma algebras aside, these functions $g$ are what everyone else would call random variables). I think your class wants to view $W$ as a "factory" (or piece of software) that, when repeatedly run, produces (independent) outcomes. It wants to use $W$ as the name of the software, and then use $w_1, w_2, w_3$ as "realizations" from $W$ . It defines $X=g(W)$ as a new piece of software that puts the output of the previous software through $g$ . So then "realizations" of $X$ are $g(w_1), g(w_2), g(w_3)$ . This view is a significant departure from standard probability theory where there is a single sample space $\Omega$ , and the outcome $\omega \in \Omega$ determines the values $X_1(\omega), X_2(\omega), X_3(\omega), ...$ for all random variables $\{X_1, X_2, X_3, \ldots\}$ . One way to make this consistent is if you consider the product space $$\tilde{
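The "factory" view described above can be sketched in a few lines of Python (the names `W` and `g` are illustrative, not taken from the class notes):

```python
import random

def W():
    """The 'system': each call produces one realization w."""
    return random.random()

def g(w):
    """A numeric aspect of the outcome; what everyone else calls a random variable."""
    return w ** 2

random.seed(0)
x_realizations = [g(W()) for _ in range(3)]  # realizations of X = g(W)
print(x_realizations)
```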
|
|probability|random-variables|definition|
| 1
|
Classification of compact surfaces and $\Sigma_g\# S_n=S_{n+2g}$
|
In a question, I am asked to compute the homology of $\Sigma_g\#S_n$ (via Mayer-Vietoris), and deduce that this space is in fact $S_{n+2g}$ . I did so by assuming the classification of compact surfaces, but now I am wondering whether this is all circular: how is the classification of compact (or even triangulable) surfaces proved and does it not use the result $\Sigma_g\# S_n=S_{n+2g}$ in its proof? I saw here that $\Sigma_1\#S_1=S_3$ can be showed directly. Is the general case done in a similar fashion? Or is there another argument? Edit: $\Sigma_g$ is the connected sum of g tori, while $S_n$ is the connected sum of n $\mathbb{RP}^2$ 's.
|
When classifying any kind of mathematical structure (up to equivalence), there are two steps: Make a list which includes representatives of every equivalence class (and try to winnow your list down by eliminating duplicates) Use invariants to prove that no two members of your list are equivalent. In the case of surfaces, a direct argument that $\Sigma_1 \#\, S_1 = S_3$ can be used to eliminate $\Sigma_1 \#\, S_1$ from your list. Using similar such arguments you can winnow your list down to $$\Sigma_0, S_1, \Sigma_1, S_2, \Sigma_2, S_3, \Sigma_3,... $$ And now you have to prove that no two members of this list are homeomorphic. That's where the homology calculations come in.
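One quick invariant check (not a proof of homeomorphism, only consistency): the Euler characteristic drops by $2$ under connected sum, and a short script confirms $\chi(\Sigma_g\# S_n)=\chi(S_{n+2g})$ for small $g,n$:

```python
def chi_orientable(g):
    # Euler characteristic of the genus-g orientable surface
    return 2 - 2 * g

def chi_nonorientable(n):
    # Euler characteristic of the connected sum of n projective planes
    return 2 - n

def chi_connected_sum(c1, c2):
    # removing a disc from each summand and gluing lowers chi by 2
    return c1 + c2 - 2

ok = all(
    chi_connected_sum(chi_orientable(g), chi_nonorientable(n)) == chi_nonorientable(n + 2 * g)
    for g in range(10)
    for n in range(1, 10)
)
print(ok)  # True
```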
|
|algebraic-topology|homology-cohomology|surfaces|
| 1
|
Probability System Definition and Random Variable
|
Setup I'm taking a course on probability and our professor has defined random variables in a way that I'm not so used to. I'm used to the definition of a random variable X as a function from the sample space into the real numbers. Instead we are given this: And then we get random variables: Usage A sample definition using this notation is: What I don't understand is what a system actually is. From what I can gather, a system is like some sort of function which maps into our sample space. But if that's the case, then what would the domain of our system W be? Another idea I had was that a system is just like a random variable, but it need not map out to the reals, instead just some arbitrary set, and then the random variable maps from that set to the reals. My hunch is that they are trying to decouple the idea of the random event, and then a random variable which produces a real number from that event, but I'm not entirely sure. If anyone has seen this before and knows formally what a
|
I think your ideas are correct. As I understand it, it is just a philosophical distinction that makes no difference. When you consider a random variable $g\colon \Omega\to\mathbb{R}$ as a (measurable) function, you realize that the only thing "random" about values of $g$ is really the values of the inputs $\omega\in\Omega$ . Mathematically, that is the end of the story; and many people are content to think that the randomness is in the nature of that abstract probability space $(\Omega,\mathcal{M},\mathbb{P})$ . But you can consider the source of randomness to be something else. So instead of accepting that the input values to $g$ are random in nature, your professor thinks of a "mechanism" $W$ that produces random outcomes, and then the function $g$ takes that random outcome and looks at a particular numeric aspect of the outcome. For example, consider choosing a random person from your math class uniformly randomly. This sample (of size $1$ ) is going to be different every time. The
|
|probability|random-variables|definition|
| 0
|
How to understand logical implication in Set theory?
|
I am reading the wiki page on the material conditional (a.k.a. logical implication). The diagram on that page (which is also pasted here) draws a Venn diagram of the truth function of $ A \implies B $ In the text explanation (in the 2nd paragraph), it says $\ P \implies Q $ can be symbolized using set theory: $ P \supset Q $ Question: The diagram explanation and the text explanation are conflicting. It seems that in the diagram, A is not a superset of B. What is the correct way to understand it?
|
Suppose $\Omega$ is a set from which we get arguments for the predicates $P(\cdot), Q(\cdot)$ . Let $x\in \Omega$ and consider the implication $P(x)\implies Q(x)$ . This implication is true only when $Q(x)$ is true or $P(x)$ is false. Thus, collecting the points of $\Omega$ for which the implication holds, we have $\{x\in \Omega | P(x)\implies Q(x)\}=\{x\in \Omega| Q(x) \vee (\sim P(x)) \}=\{x\in \Omega|Q(x)\}\cup \{x\in \Omega | \sim P(x)\}=\{x\in \Omega|Q(x)\}\cup (\Omega \setminus \{x\in \Omega | P(x)\}).~ (1)$ In the figure above, the set $\{x\in \Omega|Q(x)\}$ can be seen as the right-hand set and $\{x\in \Omega | P(x)\}$ as the left-hand one, so the red region is the one described by expression $(1)$ .
|
|logic|propositional-calculus|
| 0
|
Number of zeros at the end of $5^5 \times 10^{10} \times 15^{15} \times \dots \times 120^{120} \times 125^{125} $?
|
I have to find the number of zeros at the end of the number $$5^5 \times 10^{10} \times 15^{15} \times \dots \times 120^{120} \times 125^{125} $$ First, I know that the zeros are formed by the multiplication of 2 with 5, and since 2 appears fewer times than 5, we should count the number of 2s. I can count the number of 2s manually by selecting each even term in the product, calculating the power of 2 in it, and adding all of these up. But what if the product were so large that doing it this way would consume too much time? So I searched and found that it is solved as follows: Highest power of 2 in the given product = (Number of multiples of 2) + (Number of multiples of 4) + (Number of multiples of 8) + (Number of multiples of 16) = (10+20+30+...+120) + (20+40+60+...+120) + (40+80+120) + 80 = 1520 How did they calculate it like that?
|
Let's rewrite the number as a product to try to understand this better: $\prod_{x=1}^{25} (5x)^{5x}$ As you already stated, zeros are formed when $5$ and $2$ are multiplied. Since every base is a multiple of $5$ , there are more 5's than 2's, so we only need to find the power of 2. Each even base $5x$ contributes its exponent $5x$ once for every power of $2$ dividing it, which is exactly the grouped sum $(10+20+\dots+120)+(20+40+\dots+120)+(40+80+120)+80 = 1520$ from your question. Watch this for more. (Hindi)
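A short brute-force check of both prime counts (Python, using the product $\prod_{x=1}^{25}(5x)^{5x}$ above):

```python
def prime_exponent(p, m):
    # exponent of the prime p in the integer m
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

# the exponent of p in prod_{x=1..25} (5x)^(5x) is sum of 5x * v_p(5x)
pow2 = sum(5 * x * prime_exponent(2, 5 * x) for x in range(1, 26))
pow5 = sum(5 * x * prime_exponent(5, 5 * x) for x in range(1, 26))
print(pow2, pow5, min(pow2, pow5))  # 1520 2125 1520
```

So the number of trailing zeros is min(1520, 2125) = 1520, matching the grouped-sum computation.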
|
|elementary-number-theory|
| 0
|
A gap in the "sliding hump" technique
|
I have been stuck on a seemingly trivial proof for a long time. In general, given a family of bounded linear operators $(T_i)_{i\in I}$ from a Banach space $X$ to a normed space $Y$ , if $\Vert T_n \Vert $ is not uniformly bounded, I wish to find a sequence $(x_n)$ converging to a point $x$ at which $(T_n)$ is not pointwise bounded. Applying the lemma in this question, Linear operators that doesn't take their maximal over the "boundary" , to the current problem, one can firstly naively pick a Cauchy sequence $x_n$ such that $\Vert x_n - x_{n-1} \Vert \le \frac{1}{3^n}$ and $\Vert T_n x_n \Vert \ge \Vert T_n \Vert \frac{1}{3^n}$ . By completeness, this sequence converges to some $x$ , and the only problem is that I can't find a proper nice bound for $\Vert x-x_n \Vert $ . $\Vert T_n x \Vert = \Vert T_n(x-x_n) + T_n x_n \Vert \ge \Vert T_n x_n \Vert - \Vert T_n \Vert \Vert x-x_n \Vert (*) $ . To amend such a situation I have to pick $y_n$ , so that $y_n:= x_N \text{ such that $N$ is the smallest integer satisfies } \Vert x
|
Note that the bound $\|x - x_{n} \| \leq \frac{1}{2} \frac{1}{3^{n}}$ given $n\in\mathbb{N}$ can be obtained from the following argument. Fix $n\in\mathbb{N}$ . Let $m\in\mathbb{N}$ with $n<m$ . Then, since $\|x_{k+1}-x_{k}\|\leq \frac{1}{3^{k+1}}$ , $$\|x_{m} - x_{n}\| = \left\| \sum_{k=n}^{m-1} (x_{k+1} - x_{k}) \right\| \leq \sum_{k=n}^{m-1}\|x_{k+1} - x_{k}\| \leq \sum_{k=n}^{m-1}\frac{1}{3^{k+1}} \leq \frac{1}{2} \frac{1}{3^{n}}.$$ It follows that $$\|x - x_{n}\| = \lim_{m\to\infty} \|x_{m} - x_{n}\| \leq \frac{1}{2}\frac{1}{3^{n}}.$$ For your estimate, you appear to have used that given $n\in\mathbb{N}$ , $\|T_{n}(x_{n})\| \geq \frac{1}{2}\frac{1}{3^{n-1}} \|T_{n}\|$ . But I cannot tell where you obtained that estimate from.
|
|functional-analysis|
| 1
|
I need help with solving one Omar Al-Khayyam's distinct cubic equation, $x^3+ax^2=c$.
|
The cubic equation of interest is $x^3+2x^2=6$ . I found the two conics. The parabola is $y= x^2$ and the rectangular hyperbola is $(x+2)y=6$ . I am familiar with the geometry of the parabola and can graph the rectangular hyperbola with its center at $(-2,0)$ . I have been studying the works of Apollonius, in particular the use of equal-area rectangles. I am graphing it geometrically and trying to see the relationship.
|
As Dietrich Burde commented, there is a formula for solving cubics. See here and here . A suggestion is rearranging to $x^3+2x^2-6=0$ . Using Descartes' rule of signs, we also know there is 1 positive root, which is $x \approx 1.34$ . The other roots are complex, as shown in the image below. Hope this helps; try using the formula. If you have trouble with that, then I suggest using the Newton-Raphson method or substitution (harder).
|
|geometry|conic-sections|
| 0
|
Recurrence relationship where the difference of two consecutive terms is polynomial. Solution by generating functions.
|
I have this recurrence relation $$h_{n} = h_{n-1} + 4n(n-1) \ \text{for} \ n \geq 1 \ \text{with} \ h_{0} = 0, h_{1} = 0$$ I am asked to find $h_{9}$ . I have attempted to solve this by generating functions and ended up with a closed form solution as: $g(x) = \frac{4x}{(1-x)^3} - \frac{4x}{(1-x)^2}$ I feel like this is correct, though I'm not totally sure. I then rewrote these as infinite sums so I can find the coefficient of $x^n$ more easily: $4x \Sigma^{\infty}_{n=0}\binom{n+2}{2}x^n - 4x\Sigma^{\infty}_{n=0}nx^n$ for every value of $n$ the coefficient is $+4$ more than the actual answer, I've looked at my working for a while but not sure how the answer is so close to the solution? Thanks
|
You don't need the generating-function method, which is often used for much harder problems. For this one, observe that: $h_n = (h_n - h_{n-1})+(h_{n-1} - h_{n-2})+...+(h_1 - h_0) + h_0 = 4n^2 - 4n + 4(n-1)^2 - 4(n-1) +...+ 4\cdot 2^2 - 4\cdot 2+4\cdot 1^2 - 4\cdot 1+0= 4(1^2+2^2+...+n^2) -4(1+2+...+n)= 4\cdot \dfrac{n(n+1)(2n+1)}{6}-4\cdot \dfrac{n(n+1)}{2}$ . From this you can find any term; just plug $n = 9$ into the formula to get $h_9$ .
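Both the raw recurrence and the closed form above can be checked in a couple of lines of Python:

```python
# unroll the recurrence h_n = h_{n-1} + 4n(n-1) with h_0 = 0
h = 0
for n in range(1, 10):
    h += 4 * n * (n - 1)

# closed form from the telescoping sum
n = 9
closed = 4 * n * (n + 1) * (2 * n + 1) // 6 - 4 * n * (n + 1) // 2
print(h, closed)  # 960 960
```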
|
|recurrence-relations|generating-functions|
| 0
|
Does every set with a supremum contain a monotone net converging to that supremum?
|
It's well known that if $U \subset \mathbb{R}$ is nonempty and bounded above, then there exists a monotone increasing sequence $(x_{n})^{\infty}_{n=1}$ in $U$ converging to $\sup(U)$ . My question is: Let $X$ be a lattice, and let $U \subseteq X$ be a totally ordered subset (chain) whose supremum exists. Does there exist a monotone increasing net $(x_{\alpha})_{\alpha \in D}$ such that $\sup_{\alpha}x_{\alpha} = \sup(U)$ ?
|
Sure. Take $D:=U$ and define $x_\alpha:=\alpha$ .
|
|real-analysis|sequences-and-series|order-theory|lattice-orders|
| 0
|
Find all Sylow 3-subgroups of $S_3\times S_3$
|
Find all Sylow 3-subgroups of $S_3\times S_3$ ? This is what I already found: Since $o(S_3\times S_3)=36=2^2 3^2$ Sylow- $3$ subgroups have order $9$ . If $n_3$ is the no. of Sylow- $3$ subgroups, Then $n_3|4$ and $3|(n_3 - 1)$ . Hence $n_3$ should be $1$ or $4$ . Now how can I find at least one subgroup of order $9$ ?
|
There's a normal one (as pointed out by Nicky Hekster), namely $A_3\times A_3$ . Once we have that, we know that there's only $1$ , because the Sylow $3$ -subgroups are all conjugate, by Sylow. It's (isomorphic to) $\Bbb Z_3^2.$
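A brute-force check (Python, representing $S_3$ as permutation tuples): the elements of $S_3\times S_3$ of order dividing $3$ number exactly $9$, so they must all lie in, and fill up, the unique Sylow $3$-subgroup.

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def order(p):
    e, r, k = tuple(range(3)), p, 1
    while r != e:
        r, k = compose(p, r), k + 1
    return k

# in S3 x S3 the order of (p, q) is lcm(order(p), order(q)),
# which divides 3 exactly when both orders divide 3
elems = [(p, q) for p, q in product(S3, S3)
         if order(p) in (1, 3) and order(q) in (1, 3)]
print(len(elems))  # 9
```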
|
|group-theory|finite-groups|sylow-theory|
| 0
|
Recurrence relationship where the difference of two consecutive terms is polynomial. Solution by generating functions.
|
I have this recurrence relation $$h_{n} = h_{n-1} + 4n(n-1) \ \text{for} \ n \geq 1 \ \text{with} \ h_{0} = 0, h_{1} = 0$$ I am asked to find $h_{9}$ . I have attempted to solve this by generating functions and ended up with a closed form solution as: $g(x) = \frac{4x}{(1-x)^3} - \frac{4x}{(1-x)^2}$ I feel like this is correct, though I'm not totally sure. I then rewrote these as infinite sums so I can find the coefficient of $x^n$ more easily: $4x \Sigma^{\infty}_{n=0}\binom{n+2}{2}x^n - 4x\Sigma^{\infty}_{n=0}nx^n$ for every value of $n$ the coefficient is $+4$ more than the actual answer, I've looked at my working for a while but not sure how the answer is so close to the solution? Thanks
|
I realised my mistake, it was in the second series, the coefficient of $x^n$ is $n+1$ not $n$ !
|
|recurrence-relations|generating-functions|
| 1
|
Expectation of maximum number of non-overlapping random intervals inside $[0,1]$ out of $n$ intervals
|
Choose two random numbers from $[0,1]$ and let them be the endpoints of a random interval. Repeat this independently for $n$ times to get $n$ random subintervals (denoted as $U_1,\dots,U_n$ ). Let $T_n = \max\{|S|: S\subset\{U_1,\dots,U_n\}, U_i\cap U_j=\emptyset \mbox{ for any }U_i\not =U_j\in S\}$ , where $|S|$ denote the number of intervals in $S$ . I'm wondering what is the expectation of $T_n$ ? Or how fast does the expectation of $T_n$ increase with $n$ ? ( is it $O(n)$ , $O(\sqrt{n})$ or $O(\log n)$ ?)
|
Simulation suggests the expectation grows slightly faster than $\sqrt{n}$ , though perhaps not enough faster to avoid being $O(\sqrt{n})$ . Using R and looking at $n=1,2,\ldots,100$ with $10\,000$ simulations each, greedily taking the interval with the lowest upper endpoint and discarding everything overlapping it:

numbernonoverlapping <- function(n){
  intervals <- matrix(runif(2*n), ncol=2)
  intervals <- cbind(pmin(intervals[,1], intervals[,2]), pmax(intervals[,1], intervals[,2]))
  count <- 0
  remaining <- intervals
  while(nrow(remaining) > 0){
    whichlowestupper <- which.min(remaining[,2])
    lowestupper <- remaining[whichlowestupper, 2]
    count <- count + 1
    remaining <- matrix(remaining[remaining[,1] > lowestupper, ], ncol=2)
  }
  count
}

you get the experimental means (including simulation noise) slightly above $\sqrt{n}$ when $n>6$ . Dividing the experimental means by $\sqrt{n}$ gives the following chart (with the simulation noise more obvious); reassuringly, the value of about $0.94$ when $n=2$ corresponds to the theoretical $\frac{4/3}{\sqrt{2}}$ since the probability of two intervals not overlapping is $\frac13$ . plot(n, experimental/sqrt(n)) With some further thought, I think I can provide a recursion for $\mathbb E[T_n]$ , starting with the trivial $T_0=0$ and $T_1=1$ : $$\mathbb E[T_n]=\dfrac{\sum\limits_{k=1}^n \frac{k\, (2n-k-1)!}{2^{n-k}\,(n-k)!}(1+\mathbb E[T_{n-k}])}{\sum\limits_{k=1}^n \frac{k\, (2n-k-1
|
|probability|statistics|
| 0
|
Show that there exists $x \in \mathbb R$ such that $\mathbb P (Y = x ) = 1.$
|
Let $\{x_n \}_{n \geq 1}$ be a sequence of real numbers and $\{X_n \}_{n \geq 1}$ be a sequence of random variables such that $\mathbb P (X_n = x_n) = 1,\ n \geq 1.$ Let $Y$ be a random variable such that $X_n \stackrel{d} \to Y.$ Then there exists $x \in \mathbb R$ such that $\mathbb P (Y = x) = 1.$ My Attempt $:$ Let $y$ be a continuity point of $F_Y.$ Then $F_{X_n} (y) \xrightarrow {n \to \infty} F_{Y} (y).$ Since $F_{X_n} (z) \in \{0,1\}$ it follows that $F_Y (z) \in \{0,1\}.$ But since $F_Y (z) \to 1$ as $z \to \infty$ and $F_Y (z) \to 0$ as $z \to -\infty$ it follows that $F_Y (z) = 1$ for sufficiently large $z$ and $F_Y (z) = 0$ for sufficiently small $z$ and hence the set $A : = \{z \in \mathbb R\ |\ F_Y (z) = 1 \}$ is non-empty and bounded below. Let $x : = \inf A.$ Then $F_Y (z) = 0$ for all $z \lt x.$ Choose a sequence $\{y_n \}_{n \geq 1}$ in $A$ such that $y_n \downarrow x$ as $n \to \infty.$ Then for any $z \gt x$ there exists $N \in \mathbb N$ such that $y_n \lt z$ for a
|
The following general statement holds: let $(x_n)_{n \in \mathbb{N}}$ be a sequence of elements of $\mathbb{R}^d$ ; if $(f(x_n))_{n \in \mathbb{N}}$ converges for any continuous and bounded $f:\mathbb{R}^d\to \mathbb{R}$ , then $(x_n)_{n \in \mathbb{N}}$ converges (*). Therefore, if $P(X_n=x_n)=1$ then $E[f(X_n)]=f(x_n)$ and so you just need to assert that $(E[f(X_n)])_{n \in \mathbb{N}}$ is convergent for any continuous and bounded $f$ to have $x_n\to x$ for some $x \in \mathbb{R}^d$ . It then follows that $E[e^{i\xi\cdot X_n}]=e^{i\xi\cdot x_n}\to e^{i\xi \cdot x}$ for any $\xi \in \mathbb{R}^d$ and the latter characterises the law of $Y$ s.t. $P(Y=x)=1$ . (*) This is a special case of Exercise 8.10.67 in Bogachev, Vol II .
|
|real-analysis|probability|probability-theory|measure-theory|weak-convergence|
| 0
|
Confused about notations used in a Computer Vision paper
|
I was reading a computer vision paper for 3D reconstruction and was a bit confused by two notations: https://www.researchgate.net/publication/327805907_3D_Image_Reconstruction_from_Multi-Focus_Microscope_Axial_Super-Resolution_and_Multiple-Frame_Processing $f \in R^{{N_x}\times{N_y}\times{N_z}}$ , which denotes a 3D object $ h(x,y;z) $ What is meant by the $N_x \times N_y \times N_z$ term and the ; symbol in h? Sorry if this seems ignorant, I am not a math major :')
|
This is a notation common in ML papers; what they mean by $\mathbb{R}^{N_x \times N_y \times N_z}$ is the set of three-dimensional real arrays of shape $N_x \times N_y \times N_z$ (just as $\mathbb{R}^{m\times n}$ denotes $m\times n$ matrices), or in simpler terms, $f$ is a three-dimensional array with dimensions of length $N_x, N_y$ and $N_z$ . The semicolon is sometimes used to emphasize a semantic difference between the function arguments (here, that $z$ plays the role of a parameter rather than a variable), but in this case, after a quick look through the paper, it seems unnecessary.
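In NumPy terms (the sizes below are made up purely for illustration):

```python
import numpy as np

Nx, Ny, Nz = 4, 5, 6          # hypothetical grid sizes
f = np.zeros((Nx, Ny, Nz))    # an element of R^{Nx x Ny x Nz}
print(f.ndim, f.shape, f.size)  # 3 (4, 5, 6) 120
```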
|
|notation|computer-vision|
| 0
|
What is the relationship between the Laplace equation and the Wave equation?
|
What is the relationship between the Laplace equation: $$ (\delta^2_x + \delta^2_y)\phi = 0 $$ and the Wave equation: $$ (\delta^2_x - \delta^2_y)\phi = 0 $$ What is the relationship between the Laplace equation: $$ (\delta^2_{x0} + \delta^2_{x1} + \delta^2_{x2} + \delta^2_{x3})\phi = 0 $$ and the Wave equation: $$ (\delta^2_{x0} - \delta^2_{x1} - \delta^2_{x2} - \delta^2_{x3})\phi = 0 $$
|
Some interesting physical relationships can be obtained by naively playing around with Maxwell's equations; $$\tag{1}\space\space\space\nabla \times \vec{B}=\mu_{0}\varepsilon_0\partial_t\vec{E}+\mu_0\vec{J}$$ $$\tag{2}\space\space\space\nabla\times\vec{E}=-\partial_t\vec{B}$$ Letting $\vec{J}=0$ , we can use the Levi-Civita symbol, $\epsilon_{ijk}$ , to write $(1)$ as; $$\mu_{0}\varepsilon_0\partial_tE^i=\epsilon_{ijk}\partial_jB^k$$ Now, using $(2)$ we can find $B^k$ in terms of the components of $\vec{E}$ ; $$-\partial_tB^k=\epsilon_{kmn}\partial_mE^n\implies B^k=-\epsilon_{kmn}\int\partial_mE^ndt$$ So, we will have; $$\mu_{0}\varepsilon_0\partial_tE^i=-\epsilon_{ijk}\epsilon_{kmn}\int\partial_{mj}E^ndt$$ $$\therefore\mu_{0}\varepsilon_0\partial^2_tE^i=-\epsilon_{ijk}\epsilon_{kmn}\partial_{mj}E^n$$ Utilizing the identity $\epsilon_{ijk}\epsilon_{kmn}=\delta_{im}\delta_{jn}-\delta_{in}\delta_{jm}$ yields; $$\mu_{0}\varepsilon_0\partial^2_tE^i=-\big[\partial_{ij}E^j-\partial^2_{j}E^i
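The contraction identity used above can be verified directly by exhausting the indices (pure Python, indices $0..2$):

```python
def eps(i, j, k):
    # Levi-Civita symbol on {0,1,2}: sign of the permutation, 0 on repeats
    return (j - i) * (k - j) * (k - i) // 2

def delta(a, b):
    return 1 if a == b else 0

ok = all(
    sum(eps(i, j, k) * eps(k, m, n) for k in range(3))
    == delta(i, m) * delta(j, n) - delta(i, n) * delta(j, m)
    for i in range(3) for j in range(3) for m in range(3) for n in range(3)
)
print(ok)  # True
```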
|
|partial-differential-equations|complex-numbers|mathematical-physics|quaternions|
| 0
|
Geometry Insight
|
Hello, I have a question for which I don't understand how the solution came about. In the above triangle, AC = 8, and BE = 6. AD and DE are equal. Also CN and NB are equal, i.e. N is the midpoint of BC. The question is: what is DN? The solution starts by stating: create a line NF, where F is between E and C, parallel to BE. From here triangle ECB is similar to triangle FCN with a ratio of 2:1, so FN is 3. Then DF is equal to DE + EF. DE = 1/2 AE and EF = 1/2 EC, so DE + EF = 1/2(AE+EC) = 1/2(AC) = 4. FN is 1/2 BE = 3. So DFN is a 3-4-5 right triangle, and DN is 5. I have a 2-part question. First, it didn't occur to me to draw that parallel line, but what in the question would suggest that this is a good idea? I tried putting the triangle in a co-ordinate system but didn't get anywhere. I don't understand the thinking needed to reach the point where this seems like a good idea. Second, is there a different solution that would seem more straightforward? Thanks,
|
Here is a variation of the proof, using the midline theorem extensively. Let M be the midpoint of AB. Then, by the midline theorem: MN is parallel to AC, and MN = AC/ $2=8/2=4$ ; DM is parallel to EB, and DM = EB/ $2=6/2=3$ . Since MN $\parallel$ AC $\bot$ EB $\parallel$ DM, we have MN $\bot$ DM. Now, from the right triangle MND we get DN $=5$ .
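Regarding the coordinate approach from the question: it does work if you use the condition BE ⟂ AC (which the 3-4-5 conclusion presupposes; I am assuming it from the figure). Placing AC on the x-axis, the answer comes out independent of where E sits:

```python
# assumes (from the figure) that BE is perpendicular to AC
for e in (1.0, 3.0, 5.5, 7.0):      # trial positions of E along AC
    A, C = (0.0, 0.0), (8.0, 0.0)   # AC = 8 on the x-axis
    B = (e, 6.0)                    # BE = 6, perpendicular to AC at E = (e, 0)
    D = (e / 2, 0.0)                # AD = DE, so D is the midpoint of AE
    N = ((e + 8.0) / 2, 3.0)        # midpoint of BC
    DN = ((N[0] - D[0]) ** 2 + (N[1] - D[1]) ** 2) ** 0.5
    print(DN)  # 5.0 every time
```

Algebraically, N − D = ((e+8)/2 − e/2, 3) = (4, 3) for every e, which is why DN = 5 regardless of the shape of the triangle.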
|
|geometry|triangles|
| 0
|
About the boundary of the $x$-section of a bounded domain
|
Decompose $\mathbb{R}^N = \mathbb{R}^{N_1} \times \mathbb{R}^{N_2}$ , let $\Omega \subset \mathbb{R}^N$ be a bounded domain and take $x$ in the set $$\Pi_1(\Omega) = \{ x \in \mathbb{R}^{N_1} : (x,y) \in \Omega \text{ for some } y \in \mathbb{R}^{N_2}\}. $$ Define the set $$ A_x := \{y \in \mathbb{R}^{N_2} : (x,y) \in \Omega\}. $$ I am trying to discover what the boundary of $A_x$ should be. My first attempt was $$ (1) \,\,\,\,\,\,\,\,\,\,\,\,\,\,\partial A_x = \{y \in \mathbb{R}^{N_2} : (x,y) \in \partial \Omega\}, $$ which, I think, is not true. Edit: For me it is enough to prove: If $y \in \partial A_x$ then $(x,y) \in \partial \Omega$ . Is it true ? Here is a drawing which shows that (1) may not be true: Notice that $x$ in the picture is in $\Pi_1(\Omega)$ and $y_0$ in the picture is in $\{y \in \mathbb{R}^{N_2} : (x,y) \in \partial \Omega\}$ . However $y_0 \not\in \partial A_x$ .
|
For all $x\in \mathbb R^{N_1}$ , the inclusion $$ \{x\} \times \partial A_x \subset \partial \Omega\quad (*) $$ (that you are asking about at the end of the edit) is true. At the same time, as your example shows, the opposite inclusion is false even if you assume that $\Omega$ is open with smooth boundary and $x\in \Pi_1(\Omega)$ . Let's prove the inclusion (*). (I do not need any assumptions on $\Omega$ and $x$ .) Take $p=(x,y)$ such that $y\in \partial A_x$ . Let $U=U_x\times U_y$ be a product neighborhood of $p$ . Since product neighborhoods form a basis of the topology at $p$ , it suffices to prove that every such $U$ has nonempty intersection with both $\Omega$ and its complement $\Omega^c$ in $\mathbb R^N$ . Since $y\in \partial A_x$ , there exists $z\in A_x\cap U_y$ . The point $(x,z)\in \Omega$ lies in $U$ since $x\in U_x$ . Thus, $U\cap \Omega\ne \emptyset$ . Furthermore, since $y\in \partial A_x$ , there exists $w\in U_y\setminus A_x$ . Then $(x,w)\notin \Omega$ either. Hence, $U\cap \Omega^c\ne\emptyset$ as well. Thus, $p\
|
|general-topology|projection|
| 1
|
Covariant Derivative of Matrix Times Vector
|
Good evening! I was wondering if one could prove some sort of "product rule" for the covariant derivative by using parallel transport: Let $(M,g)$ be a Riemannian manifold, then it is known that a connection $\nabla$ is uniquely characterized by its parallel transport via $$(\nabla_\xi X)_x = \left.\frac{d}{d t}\right|_{t=0} P^\gamma_{t,0}(X(\gamma(t))) = \lim\limits_{t\rightarrow 0} \frac{P_{t,0}^\gamma (X(\gamma(t))) - X(x)}{t},$$ where $P_{t,0}^\gamma$ denotes the (reverse) parallel transport of a vector field $X\in \Gamma TM$ along the geodesic $\gamma:\mathbb{R}\longrightarrow M$ with $\gamma(0)=x$ and $\dot\gamma(0)=\xi$ for a $\xi\in T_x M$ , $x\in M$ . Now, for a $J\in \Gamma\operatorname{End}(TM)$ , my idea was to insert a useful zero like in the proof of the regular product rule: \begin{align*}(\nabla_\xi JX)_x &= \lim\limits_{t\rightarrow 0} \frac{P_{t,0}^\gamma (J(\gamma(t))X(\gamma(t))) - J(x)X(x)}{t}\\ &= \lim\limits_{t\rightarrow 0} \frac{P_{t,0}^\gamma (J(\gamma(t))X(\g
|
In the case someone else might have the same question: The product rule I was looking for holds and takes the form $$\nabla (JX) = J(\nabla X) + (\nabla J)X.$$ It follows immediately from the Leibniz formula for tensor products in the case of vector valued $1$ -forms, or can be calculated within the "usual" setup using local bases: Proof Let $J\in\Gamma\operatorname{End}(TM)$ be a linear map, as well as $(e_1,...,e_n)$ a local basis and $(dx_1,....,dx_n)$ the corresponding dual basis, i.e. $dx_i(e_j)=\delta_{ij}$ . Then firstly, $J$ can be seen as the $(1,1)$ -tensor $J=\sum\limits_{i=1}^n dx_i\otimes Je_i$ . By the Leibniz rule for tensor products, this means \begin{equation}\tag{1} \nabla J =\sum\limits_{i=1}^n \nabla dx_i \otimes J e_i + dx_i\otimes \nabla Je_i.\end{equation} Any $X\in\Gamma TM$ can be locally written as $X=\sum\limits_{i=1}^n dx_i(X) e_i$ , so $(1)$ can be rewritten as \begin{equation}\tag{2} (\nabla J)X =\sum\limits_{i=1}^n (\nabla dx_i)(X) J e_i + dx_i(X) \nabla
|
|differential-geometry|riemannian-geometry|
| 1
|
Solve a simple exponential recursive equation?
|
I want to fill a range between two values, not linearly but exponentially. For this I use a starting value L, take exponent R, and repeat for N number of steps.

L = 1e-7
R = 0.9997  # needs solving
N = 1000
for _ in range(N):
    L = L ** R

>>> L
6.523e-06

Need to solve for R such that the end value of L after N repeats is 1e-05 . Edit: This is not the same question as suggested. In the linked question the formula is $ f(x) = a \times r^s.$ For my question, the formula is different: $L[t] = L[t-1]^R$ where t is a recursion step with maximum steps N.
|
Let $c=10^{-7}$ , $N=1000$ and $C=10^{-5}$ be the constants from the question. Recall that $(c^R)^R=c^{R\cdot R}=c^{R^2}$ , further $$((c^R)^R)^R=(c^{R^2})^R=c^{R^2\cdot R}=c^{R^3}$$ and so on. Thus, the program computes $c^{R^N}$ and the task is to solve $$c^{R^N}=C$$ for $R$ . Taking the logarithm on both sides yields $$R^N\log c=\log C.$$ With basic transformations we get $$R=\left(\frac{\log(C)}{\log(c)}\right)^{1/N}.$$ This is the general solution. For the given numbers this yields $R=(5/7)^{0.001}$ .
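Checking the formula against the original loop (Python, with the constants from the question):

```python
import math

c, C, N = 1e-7, 1e-5, 1000
R = (math.log(C) / math.log(c)) ** (1 / N)   # = (5/7) ** 0.001

L = c
for _ in range(N):
    L = L ** R
print(math.isclose(L, C, rel_tol=1e-9))  # True
```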
|
|recursion|
| 1
|
Exploring the Graphs of a Prime Number Integer Sequence: Seeking Insights
|
I am an engineering student, and while doing some work on data visualisation I stumbled across an integer sequence after watching a video about sequences that produce interesting graphs. I submitted the sequence to the On-Line Encyclopedia of Integer Sequences A328225 . The sequence is as follows: a(1) = 1, a(n) = a(n-1) - prime(prime(n)) - prime(n-1) if this produces a positive integer not yet in the sequence, otherwise a(n) = a(n-1) + prime(prime(n)) - prime(n-1). This sequence produces the following graph (up to n = 1000, n = 10000 and n = 6e6 respectively) Graph of sequence up to n = 1000 Graph of sequence up to n = 10000 Graph of sequence up to n = 6e6 I would love to get an understanding of the underlying principles that give rise to the curves observed in the graphs. Additionally, I'm intrigued by the apparent upper limit observed along the y-axis. I would greatly appreciate any insights, explanations, or conjectures regarding the origins of these curves and the factors con
|
I have another comment for you that requires images. I'm afraid you'll spend a lot of time and energy on the impossible. I'm going to demonstrate dust spirals, that is, a collection of dots generated by a sequence that the eye perceives to have patterns, but which are essentially random. (Reference: Spiral: from Theodorus to Chaos , Philip J. Davis, A.K. Peters, Ltd., 1993.) The point is that you may be searching in your recurrence for something that is not there. Consider the sequence $$ z_{n+1}=az_n+b\frac{z_n}{|z_n|},\quad z_1=1 $$ where $a,b$ are random complex numbers on the unit circle, e.g. $a=e^{i2\pi\cdot \text{rand}}$ . The first figure below shows the dust spiral for the first 5000 iterations of the recurrence for the $a,b$ values shown in the title. The sweeping curves and spirals suggest a highly ordered result. But, in fact, it's not. A continuous plot of $z$ tells an entirely different story. This should warn you that such patterns are well known but may be beyond understanding.
|
|sequences-and-series|prime-numbers|
| 0
|
Without choice, what can be the (finite) automorphism groups of $\mathbb F_2$-vector spaces?
|
My motivation for this question is similar to the one in this question . However, that question only asks about the possibility of an infinite $\mathbb F_2$ -vector space having trivial automorphism group. What other finite automorphism groups can occur? In particular, can $\mathbb Z/(3)$ occur? (see also this question for extra motivation).
|
The question about $\mathbb{Z}/3\mathbb{Z}$ is answered (positively) by Andreas Blass in this MathOverflow answer .
|
|linear-algebra|group-theory|set-theory|axiom-of-choice|automorphism-group|
| 0
|
On the proof of the Hölder inequality.
|
In my attempts at finding different proofs of the Hölder inequality, I have found none like this one, which I derived for myself using the Young inequality: if $p\in]1;\infty[$ and $\frac{1}{p}+\frac{1}{q}=1$ , then $\forall(x,y)\in(\mathbb{R}^{+*})^2$ : \begin{equation} \frac{x^p}{p}+\frac{y^q}{q}\geq xy \end{equation} Naturally I would like to know whether my proof is indeed valid. The proof goes as follows: let $f,g:[a,b]\to\mathbb{R}$ with $a<b$ . We can state with confidence that: $$|f(t)g(t)|=(\lambda |f(t)|)\left(\frac{1}{\lambda}|g(t)|\right)\leq \frac{\lambda^p}{p}|f(t)|^p+\frac{1}{q\lambda^q}|g(t)|^q$$ $\forall\lambda>0$ . Integrating on both sides and applying the triangle inequality for integrals leaves us with: $$\left|\int_a^b f(t)g(t)\ dt\right|\leq \frac{\lambda^p}{p}\int_a^b|f(t)|^p\ dt+\frac{1}{q\lambda^q}\int_a^b|g(t)|^q\ dt$$ Now consider the function: $$\psi(\lambda)=\frac{\lambda^p}{p}\int_a^b|f(t)|^p\ dt+\frac{1}{q\lambda^q}\int_a^b|g(t)|^q\ dt$$ And let us denote $F=\int_a^b|f(t)|^p\ dt$ and $G=\int_a^b|g(t)|^q\ dt$ ; minimising $\psi$ over $\lambda>0$ then yields the bound $F^{1/p}G^{1/q}$ .
|
Simplifying the proof of @Diego Marcon (and fixing an error), let: $$ F := \int_{a}^{b} |f(t)|^{p} \, \mathrm{d} t \quad\quad G := \int_{a}^{b} |g(t)|^{q} \, \mathrm{d} t, $$ and (normalised) $\tilde{f} := F^{-1/p}f$ , $\tilde{g} := G^{-1/q}g$ , so that $\int_{a}^{b} |\tilde{f}(t)|^{p} \, \mathrm{d} t = \int_{a}^{b} |\tilde{g}(t)|^{q} \, \mathrm{d} t = 1$ . The inequality is trivial if $f = 0$ or $g = 0$ , so assume $f$ , $g \ne 0$ and bounded. By Young (*), $$ \tfrac{1}{F^{1/p}\,G^{1/q}}\!\int_{a}^{b}\!\! |f(t)g(t)| \,\mathrm{d}t \ =\, \int_{a}^{b}\!\! |\tilde{f}(t)\tilde{g}(t)| \, \mathrm{d}t \ \overset{(*)}{\le}\, \tfrac{1}{p}\!\int_{a}^{b}\!\! |\tilde{f}(t)|^{p} \, \mathrm{d} t + \tfrac{1}{q} \!\int_{a}^{b}\!\! |\tilde{g}(t)|^{q} \, \mathrm{d}t \ =\, \tfrac{1}{p} \!+\! \tfrac{1}{q} \ =\, 1$$ Hence $$\int_{a}^{b}\! |f(t)g(t)| \, \mathrm{d} t \ \leq\ F^{1/p} G^{1/q} \ =\ \Big(\int_{a}^{b}\! |f(t)|^{p}\, \mathrm{d} t\Big)^{1/p} \Big(\int_{a}^{b}\! |g(t)|^{q}\, \mathrm{d} t\Big)^{1/q}.$$
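A numerical sanity check of the inequality (a sketch; the particular `f`, `g`, `p` are arbitrary choices, and the integrals are midpoint Riemann sums):

```python
import math

p = 3.0
q = p / (p - 1)                      # conjugate exponent, 1/p + 1/q = 1
a, b, n = 0.0, 1.0, 100_000
w = (b - a) / n
xs = [a + (k + 0.5) * w for k in range(n)]

f = lambda t: math.sin(3 * t) + 0.5
g = lambda t: math.exp(-t)

int_fg = sum(abs(f(t) * g(t)) for t in xs) * w
F = sum(abs(f(t)) ** p for t in xs) * w
G = sum(abs(g(t)) ** q for t in xs) * w

# Hölder: integral of |fg| is at most F^(1/p) * G^(1/q)
assert int_fg <= F ** (1 / p) * G ** (1 / q)
```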
|
|calculus|solution-verification|integral-inequality|
| 0
|
If $Ext(P,\_) = 0,$ then $P$ is projective
|
Trying to understand Bredon's proof (below). Please note that $G$ is an Abelian group. So my confusion starts with $H_1(Hom(R,G_*)) \to H_0(Hom(A,G_*)) \to H_0(Hom(F,G_*))$ . I understand that he is using Zig-Zag lemma: $H_2(Hom(R,G_*)) \to H_2(Hom(A,G_*)) \to H_2(Hom(F,G_*)) \to H_1(Hom(R,G_*)) \to H_1(Hom(A,G_*)) \to H_1(Hom(F,G_*)) \to H_0(Hom(R,G_*)) \to H_0(Hom(A,G_*)) \to H_0(Hom(F,G_*))$ . Now because $F$ is projective $Hom(F,G_*)$ is exact, so $H_i(Hom(F,G_*))=0$ . $Hom(R,G_*)$ is only left exact, so can we find $H_i(Hom(R,G_*))$ ? I thought maybe $H_2(Hom(R,G_*))=0=Ext^2(R,G)$ , since G is abelian and similar for $H_2(Hom(A,G_*))=0=Ext^2(A,G)$ but is this correct and then what happens to $H_0(Hom(R,G_*))$ ?
|
I think there's some confusion indeed: $\newcommand{\Hom}{\operatorname{Hom}}\newcommand{\Ext}{\operatorname{Ext}} \Ext^2(X, G)$ is indeed trivial for all $X$ , but this has nothing to do with $H_2(\Hom(X, G_*))$ ( $X = R, A$ ): For one, $G_*$ and $G$ are completely unrelated; there is no group by the name of $G$ in that specific part of the argument. Moreover, $G_*$ is just some short exact sequence and you're not taking homology at the right index, so the group does not really connect to $\Ext^2$ in any meaningful way. As for how to determine these groups, there's two ways to go about it: Use the long exact sequence. Note that the homology sequence you wrote down actually extends by 0 to the left and right. Let us then see what terms we can eliminate using Bredon's arguments: All $H_i(\Hom(F, G_*))$ are trivial by projectivity of $F$ , and $H_1(\Hom(R, G_*)) = 0$ as mentioned as well as $H_2(\Hom(R, G_*))$ by the same argument. Now all remaining terms are wedged between two zeroes each, and therefore vanish.
|
|homological-algebra|projective-module|
| 1
|
Every metric space can be embedded isometrically into the Banach space
|
I've been trying to construct my own proof of this, but the only reference I've been able to find that I could (somewhat) understand was the following. In particular, I'm confused at the use of $f_{a}(a)$ and $f_{b}(a)$ in proving that $\|f_{a} - f_{b}\| = d(a, b)$ . I realize there's an almost identical question, but it's ~4 years old and received one response which didn't help my understanding. The proof goes as follows: Let $X$ be a set and $B(X, \mathbb{R})$ the set of bounded functions $f: X \rightarrow \mathbb{R}$ with norm $\|f\| = \sup_{x \in X}\{|f(x)|\}$ . Then every metric space $(X, d)$ can be embedded isometrically into the Banach space $E = \text{Bou}(X, \mathbb{R})$ . Proof. One can assume $X \neq \emptyset$ . Fix a point $a_{0} \in X$ and for every $a \in X$ define a function $f_{a}: X \rightarrow \mathbb{R}$ by $$ f_{a}(x) = d(x, a) - d(x, a_{0}). $$ Then $|f_{a}(x)| \leq d(a, a_{0})$ for every $x \in X$ so $f_{a}$ is bounded. Set $g: X \rightarrow E$ , $g(a) = f_{a}$
|
To summarise the discussion in the comments, fix $a,b\in X$ . By definition, $$\|g(a) - g(b)\| = \sup_{x\in X}|[g(a) - g(b)](x)| = \sup_{x\in X}|(f_{a} - f_{b})(x)| = \sup_{x\in X}|f_{a}(x) - f_{b}(x)|.$$ So the result follows by showing that $$\sup_{x\in X}|f_{a}(x) - f_{b}(x)| = d(a,b).$$ Showing that $|f_{a}(x) - f_{b}(x)| \leq d(a,b)$ for all $x\in X$ allows you to conclude that $d(a,b)$ is an upper bound for $\{|f_{a}(x) - f_{b}(x)| : x\in X \}$ , from which it follows that $$\sup_{x\in X}|f_{a}(x) - f_{b}(x)| \leq d(a,b).$$ On the other hand, showing that $|f_{a}(a) - f_{b}(a)| = d(a,b)$ allows you to conclude that $$d(a,b) = |f_{a}(a) - f_{b}(a)| \leq \sup_{x\in X}|f_{a}(x) - f_{b}(x)|.$$ This obtains the desired equality.
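A finite sanity check of the embedding (a sketch; the points and metric below are made up, and for a finite space the sup is attained, so the equality is exact):

```python
import itertools

# A toy metric space: four points in the plane with the Euclidean metric.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 4.0)]

def d(u, v):
    return ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5

a0 = pts[0]                      # the fixed base point

def f(a, x):                     # f_a(x) = d(x, a) - d(x, a0)
    return d(x, a) - d(x, a0)

# sup_x |f_a(x) - f_b(x)| equals d(a, b): <= by the triangle inequality,
# >= by evaluating at x = a.
for a, b in itertools.combinations(pts, 2):
    sup = max(abs(f(a, x) - f(b, x)) for x in pts)
    assert abs(sup - d(a, b)) < 1e-12
```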
|
|functional-analysis|metric-spaces|banach-spaces|
| 1
|
Value of derivative of complex function
|
Let $f:\mathbb{C} \to \mathbb{C} \\ \,\ \ \ \ \ z \to (|z|^2-2)\overline{z}$ Determine the points where f is differentiable, presenting the due expression $f'(z)$ . $u(x,y)=Re((x^2+y^2-2)(x-yi))=x^3+xy^2-2x; \\ v(x,y)=Im((x^2+y^2-2)(x-yi))=-yx^2-y^3+2y $ $u_x=3x^2+y^2-2; \,u_y=2xy \\ v_x= \ \ -2xy;\qquad \ \ v_y=-x^2-3y^2+2$ The CR equality happens when: $$4(x^2+y^2)=4$$ And as all partial derivatives are continuous, f is differentiable at $\partial{D(0,1)}$ And now, for $z_{0} \in \partial{D(0,1)}$ and knowing $\textit{'a priori'}$ that the limit exists: $$ lim_{z \to z_0; z \in \partial{D(0,1)}} \frac{f(z)-f(z_0)}{z-z_0} = lim_{z \to z_0; z \in \partial{D(0,1)}}\frac{(|z|^2-2)\overline{z}+\overline{z_0}}{z-z_0}=lim_{z \to z_0; z \in \partial{D(0,1)}}-\frac{z-z_0}{z-z_0}=-1$$ So $f'(z_0)=-1$ for $z_0 \in \partial{D(0,1)}$ Is this correct? Any further suggestions?
|
You are right about the points at which $f\require{cancel}$ is differentiable, but not about the computation of $f'$ . If $x^2+y^2=1$ , then \begin{align}f'(x+yi)&=\frac{\partial u}{\partial x}(x,y)+\frac{\partial v}{\partial x}(x,y)i\\&=3x^2+y^2-2-2xyi\\&=\cancel{2x^2+2y^2-2}+x^2-y^2-2xyi\\&=(x-yi)^2.\end{align} In other words, if $|z|=1$ , then $f'(z)=\overline z^2$ .
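A numerical cross-check of $f'(z_0)=\overline{z_0}^2$ (a sketch; the point $z_0$ is arbitrary on the unit circle, and the difference quotient is evaluated along two different directions, tangential and radial):

```python
import cmath

f = lambda z: (abs(z) ** 2 - 2) * z.conjugate()

z0 = cmath.exp(0.7j)                 # a point with |z0| = 1
target = z0.conjugate() ** 2

h = 1e-6
# difference quotient along the circle and along the radius
z_circ = z0 * cmath.exp(1j * h)
along_circle = (f(z_circ) - f(z0)) / (z_circ - z0)
along_radius = (f(z0 * (1 + h)) - f(z0)) / (z0 * h)

print(along_circle, along_radius, target)   # all three nearly equal
```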
|
|complex-analysis|
| 1
|
Does this series violate the integral test condition for convergence/divergence?
|
My professor is asking me to determine if the following series is convergent or divergent using the integral test specifically. $$ \sum^{\infty}_{n=1}n^2e^{-n} $$ However, this seems to violate the requirement that $f(x)$ should be decreasing over $[1,\infty)$ since $f(1) < f(2)$ . The rest of the function is decreasing over $(2,\infty)$ which makes me think that I'm missing some special trick. I can't tell if I'm missing something or if my professor just did this on purpose to make sure we check our conditions...
|
Convergence depends only on the tail of the series so you can start the integral test at any convenient point and ignore the finite number of terms before that.
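The tail comparison in practice (a sketch; $f(x)=x^2e^{-x}$ is decreasing for $x\ge 2$, and the antiderivative $-(x^2+2x+2)e^{-x}$ gives the integral from 3 exactly):

```python
import math

f = lambda x: x * x * math.exp(-x)

# integral of x^2 e^{-x} from 3 to infinity, via the antiderivative
integral_from_3 = (9 + 6 + 2) * math.exp(-3)

# the tail sum from n = 4 on (terms beyond n ~ 200 are negligible)
tail = sum(f(n) for n in range(4, 200))

assert f(1) < f(2)                 # the first terms do increase ...
assert tail <= integral_from_3     # ... but the tail is still dominated
```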
|
|calculus|sequences-and-series|
| 1
|
How can you prove that $1.05^{50} < 100$ without a calculator?
|
Is there a way to prove that $1.05^{50}<100$ without a calculator? I have tried this... $$1.05^{50}<100$$ $$\left(\frac{105}{100}\right)^{50}<100$$ $$\left(\frac{21}{20}\right)^{50}<100$$ $$\frac{21^{50}}{20^{50}}<100$$ $$\frac{21^{50}}{2^{50}\cdot 10^{50}}<100$$ $$\frac{21^{50}}{2^{50}}<10^{52}$$ $$10.5^{50}<10^{52}$$ ...but I don't know where to go. Can someone assist me (alternate methods are fine)? EDIT: Can anyone also help me bound $2^{1000}$ without a calculator?
|
Use $$\left(1+\frac1n\right)^n<e<\left(1+\frac1n\right)^{n+1}$$ (the left inequality because $\{(1+\frac1n)^n\}$ increases to $e$, the right because $\{(1+\frac1n)^{n+1}\}$ is a decreasing sequence with limit $e$). We then have $$1.05^{50}=\left(\left(1+\frac1{20}\right)^{20}\right)^{\frac52}<e^{\frac52}<3^{\frac52}=9\sqrt3<16<100.$$
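The claim itself can be confirmed in exact rational arithmetic, with no floating point at all (a sketch):

```python
from fractions import Fraction

# (21/20)^50 computed exactly as a rational number
x = Fraction(21, 20) ** 50
assert x < 100        # float(x) ≈ 11.47, comfortably below 100
```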
|
|inequality|proof-writing|number-comparison|
| 0
|
Showing that $i:\ell^1 \to S_C$ has dense range.
|
Consider the sequence space $$S_C=\left\{(x_n)_{n\in\mathbb{N}}:x_n\in\mathbb{F}, \left(\sum_{k=1}^n x_k\right)_{n\in\mathbb{N}} \text{ is convergent}\right\}$$ of summable sequences. We consider $\lVert(x_n)_{n\in\mathbb{N}}\rVert =\sup_n | \sum_{k=1}^n x_k |$ to be the norm on $S_C$ . In particular $(S_C, \lVert \cdot \rVert)$ is a Banach space. Furthermore consider the Banach space $\ell^1$ of absolutely summable $\mathbb{F}$ -sequences. (This is a continuation of a post that I made about the norm of this space: Showing that a norm in the space of summable series is well defined ). Prove that the inclusion $i:\ell^1 \to S_C$ has dense range. I already showed that $i$ is well defined, that is: $(x_n)\in\ell^1$ implies $(x_n)\in S_C$ . We can also see that $\lVert i \rVert = 1$ . Now to show that $i$ has dense range I think that we need to show that $\overline{\text{Im}(i)}=S_C$ . In particular (and this is where I might be wrong) for all $\varepsilon>0$ and for any $(x_n)\in S_C$ there exists $(y_n)\in\ell^1$ such that $\lVert (x_n)-(y_n)\rVert<\varepsilon$ .
|
As suggested in the comments, it is convenient to work with sequences that have finite support, which means sequences where only finitely many terms in the sequence are non-zero. Let $(x_{n})_{n\in\mathbb{N}}\in S_{C}$ . Let $\varepsilon \in (0, \infty )$ . As $(\sum_{k=1}^{n} x_{k})_{n\in\mathbb{N}}$ is convergent, it is a Cauchy sequence. So there exists some $N\in\mathbb{N}$ such that if $m,n\in\mathbb{N}$ with $N\leq m$ , $N\leq n$ and $m<n$ , it follows that $|\sum_{k=m+1}^{n}x_{k}| = | \sum_{k=1}^{n}x_{k} - \sum_{k=1}^{m}x_{k}| < \varepsilon$ . In particular, it follows that $|\sum_{k=N+1}^{n}x_{k}| < \varepsilon$ for all $n\in\mathbb{N}$ with $N < n$ . For each $n\in\mathbb{N}$ , define $y_{n} := \begin{cases} x_{n} & \text{ if } n\in \{1, \ldots , N\}, \\ 0 & \text{ if } n\in \mathbb{N} \setminus \{1, \ldots , N\}. \end{cases}$ Consider the sequence $(y_{n})_{n\in\mathbb{N}}$ . It is clear that $(y_{n})_{n\in\mathbb{N}} \in \ell^{1}$ . Given $n\in\mathbb{N}$ , observe that $$\left| \sum_{k=1}^{n}x_{k} - \sum_{k=1}^{n}y_{k} \right| = \left| \sum_{k=N+1}^{n}x_{k} \right| < \varepsilon \quad\text{for } n > N,$$ while the left-hand side is $0$ for $n \leq N$ . Hence $\lVert (x_{n}) - (y_{n}) \rVert \leq \varepsilon$ , which proves density.
|
|real-analysis|functional-analysis|banach-spaces|
| 0
|
Value of derivative of complex function
|
Let $f:\mathbb{C} \to \mathbb{C} \\ \,\ \ \ \ \ z \to (|z|^2-2)\overline{z}$ Determine the points where f is differentiable, presenting the due expression $f'(z)$ . $u(x,y)=Re((x^2+y^2-2)(x-yi))=x^3+xy^2-2x; \\ v(x,y)=Im((x^2+y^2-2)(x-yi))=-yx^2-y^3+2y $ $u_x=3x^2+y^2-2; \,u_y=2xy \\ v_x= \ \ -2xy;\qquad \ \ v_y=-x^2-3y^2+2$ The CR equality happens when: $$4(x^2+y^2)=4$$ And as all partial derivatives are continuous, f is differentiable at $\partial{D(0,1)}$ And now, for $z_{0} \in \partial{D(0,1)}$ and knowing $\textit{'a priori'}$ that the limit exists: $$ lim_{z \to z_0; z \in \partial{D(0,1)}} \frac{f(z)-f(z_0)}{z-z_0} = lim_{z \to z_0; z \in \partial{D(0,1)}}\frac{(|z|^2-2)\overline{z}+\overline{z_0}}{z-z_0}=lim_{z \to z_0; z \in \partial{D(0,1)}}-\frac{z-z_0}{z-z_0}=-1$$ So $f'(z_0)=-1$ for $z_0 \in \partial{D(0,1)}$ Is this correct? Any further suggestions?
|
Following an interesting exchange with @Torsten Schoeneberg who as pointed a fundamental misunderstanding of mine, the present state of this text is now a "side remark" that can still have an interest to show how complex differentiability of $f:\mathbb{C} \to \mathbb{C}$ is much more coercitive than $\mathbb{R^2}$ differentiability ( $f$ considered as a function $\mathbb{R^2} \to \mathbb{C}$ ). Here is an experimental approach using graphical tools that we have at our disposal. Have a look at fig. 1 featuring the absolute value map i.e., the graphical representation of function $$\varphi(x,y)=|(x^2+y^2-2)(x-iy)|$$ Fig. 1 : Modulus information. The black circle corresponds to the unit circle, the locus of points where $f$ is $\mathbb{C}$ -differentiable. "Visibly", function $f$ is $\mathbb{R^2}$ differentiable except : along the circle with equation $x^2+y^2=2$ and in the origin. It is also interesting to understand how the "argument" (aka "phase", or even "angle" in software like Matla
|
|complex-analysis|
| 0
|
Is there a better way to solve this problem of winning and losing probability in sports?
|
Question: A, B, and C will participate in a badminton competition, with the agreed format as follows: anyone who has accumulated two losses is eliminated; lots are drawn before the competition to determine the first two contestants, with the third person sitting out; the winner of each game plays the next game against the player who sat out, while the loser of each game sits out the next game, until one player is eliminated; after one person is eliminated, the remaining two people continue to compete until a second player is eliminated and the other player ultimately wins, ending the competition. After drawing lots, A and B compete first, while C sits out. Assume each player wins each game with probability 1/2. (1) Calculate the probability that A wins four consecutive games; (2) Find the probability that a fifth game needs to be played.
|
For the first part, the chance $A$ wins four games in a row is $(1/2)^4=1/16$ . If $A$ loses any of the first four there will be at least two losses by the time he comes back in and three wins will find a winner. For the second, the only way four games is enough is if $A$ or $B$ wins four games in a row or if $C$ comes in and wins three in a row. The probability of this is $1/4$ . For the third, note that $A$ and $B$ are in symmetric positions so we can assume $A$ wins the first game, then double $C$ 's winning probability and split the rest between $A$ and $B$ . You have enough information in the tree you have drawn. For each leaf at which $C$ wins, count the number of games; this gives the probability that you follow that path. You could simplify the tree by noting that if $C$ wins his first game he is in the equivalent position as if one of the others had won the first two: no losses for $C$ , and each of the others has one. I think incorporating this observation is harder to do accurately than just working out the tree.
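Parts (1) and (2) can be verified by brute force: there are only $2^4$ equally likely outcomes for the first four games, so enumerating them under the stated rotation/elimination rules settles both probabilities (a sketch; `play` is a name invented here):

```python
from itertools import product

def play(results):
    """Run up to four games; results[i] picks the winner of game i
    (0 = first listed player, 1 = second). Returns (champion_or_None, winners)."""
    losses = {"A": 0, "B": 0, "C": 0}
    pair, sitter = ("A", "B"), "C"
    winners = []
    for r in results:
        w, l = pair[r], pair[1 - r]
        winners.append(w)
        losses[l] += 1
        alive = [q for q in ("A", "B", "C") if losses[q] < 2]
        if len(alive) == 1:
            return alive[0], winners           # tournament decided
        if sitter is not None:
            if l in alive:
                pair, sitter = (w, sitter), l  # loser sits out next game
            else:
                pair, sitter = (w, sitter), None  # loser eliminated
        # with no sitter left, the same two keep playing
    return None, winners

outcomes = list(product((0, 1), repeat=4))
four_game_finishes = sum(play(rs)[0] is not None for rs in outcomes)
a_sweeps = sum(play(rs)[1] == ["A"] * 4 for rs in outcomes)

print(a_sweeps / 16, four_game_finishes / 16)   # 0.0625 and 0.25
```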
|
|probability|
| 0
|
Under what conditions is the sum of two non-convex functions convex?
|
I am working on a problem, where I have to show that the sum of functions of the form: $$ f_i(x)=\left( \left\lVert x-a_i \right\rVert - l_i \right)^2, a_i\in\mathbb{R^3}, l_i\in\mathbb{R} $$ is not convex. I have shown that a single $f$ is not convex, but how do I show that some sum of them is also not convex? Note, when I say sum I mean adding only, so no $f-f=0$ . It's also the euclidean norm. Since it is not differentiable at $x=a$ I can't use any of the convex things that rely on differentiability, so I don't know what to look for.
|
I don't believe there are any general conditions to tell if the sum of $n$ non-convex functions is convex outside of using the definitions of convexity to show it directly. It's very dependent on your functions. Fortunately, you're just trying to show that the sum of $f$ isn't necessarily always convex (assuming you wrote out your problem correctly, since $f_i$ is convex if you let $l_i \leq 0$ ). You said you were able to prove for a single component $f_i$ that it's not necessarily convex. Under what specific conditions is $f_i$ non-convex? You should be able to extract those specific conditions that make it non-convex and construct some example $f_2$ to fit those same conditions. I'd recommend you graph a few examples in the $\mathbb{R^2}$ case to see how the function looks if you perturb various inputs $\boldsymbol{a}_i$ and $l_i$ . $\textbf{Big Hint/Answer:}$ $f_i$ is non-convex if $l_i > 0$ . If you pick $\boldsymbol{x} = \boldsymbol{a}_i$ , it turns out to be a local maximum with value $l_i^2$ , which rules out convexity.
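The hint can be made concrete on a one-dimensional slice through $a_i$ (a sketch; the values of $a_i=0$ and $l=1$ are arbitrary choices): midpoint convexity fails at $x=a_i$.

```python
# f(x) = (||x - a|| - l)^2 restricted to a line through a = 0, with l > 0
l = 1.0
f = lambda x: (abs(x) - l) ** 2

x1, x2 = -l, l
mid = 0.5 * (x1 + x2)                    # the midpoint is x = a

# convexity would require f(mid) <= (f(x1) + f(x2)) / 2, but here
# f(mid) = l^2 = 1 while both endpoint values are 0
assert f(mid) > 0.5 * (f(x1) + f(x2))
```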
|
|convex-analysis|convex-optimization|
| 0
|
What is the subset of {{Ø}}
|
My textbook says: {Ø} is not a subset of {{Ø}} since the only element of {Ø} is Ø and the only element of {{Ø}} is {Ø}, so the element of {Ø} is not an element of {{Ø}}. I think it might be {{Ø}}
|
First, let me say that asking for "the subset" is usually a wrong question to ask, because sets usually have more than one subset. A set that includes only one element is a singleton . Singletons always have 2 subsets -- the empty set $\emptyset$ and the entire set itself. For example the set $\{a\}$ has two subsets: $\emptyset$ and $\{a\}$ . Your example is a singleton too: the only element of the set is $\{\emptyset\}$ . Do not mind the fact that this element itself is a set with another element in it. Just pretend it is $a$ as in the example above. Can you now list all the subsets of $\{\{\emptyset\}\}$ ?
|
|elementary-set-theory|
| 0
|
Does an invertible real matrix have a real cubic root
|
$A$ is an invertible real matrix, is there a real matrix $B$ such that $A=B^3$ ? For the complex case, it is not difficult. Over complex numbers, every invertible matrix has a cubic root. In fact, over $\mathbb C$ , since every invertible matrix has a logarithm , we can take a family of matrices $e^{t\log A}$ , and taking $t=1/3$ yields a cube root of $A$ . The condition "invertible" is necessary for the real case. In fact, here's a counterexample link . How about general cases? $A$ is an invertible real matrix, is there a real matrix $B$ such that $A=B^n$ , where integer $n\geq 2$ ?
|
For square roots of real $A$ to exist, it is enough that all of the real eigenvalues of $A$ are $>0$ . Cubic roots, and more generally roots of odd order, exist for every real invertible $A$ . The proof can be done like in the solution of @user1551. A similar approach is using polynomials that do Hermite interpolation of the function $t\mapsto\sqrt[3]{t}$ on the spectrum of $A$ (considered with multiplicities). I tested an approach for cubic roots using the Newton method. That is, start with some $x_0$ (perhaps $x_0 = I$ , if that causes no problem), and use the recurrence $$x_{n+1} = x_n - \frac{x_n^3-a}{3 x_n^2}$$ (the quotient makes sense since all of the $x_n$ commute with $a$ ). It seems that the convergence is quite good. This could work with other polynomial equations $P(B)= A$ . Another approach works if $A$ is not too far from $I$ , that is $A= I+\delta$ , with $\delta$ small (the spectral radius $\rho(\delta)<1$ ). Then we can take $$\sqrt[n]{A} = \sqrt[n]{1+\delta}= (1+\delta)^{\frac{1}{n}} = \sum_{k\geq 0}\binom{1/n}{k}\delta^{k}.$$
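The Newton recurrence can be tried directly (a minimal $2\times 2$ sketch with hand-rolled matrix arithmetic; the test matrix $A$ is an arbitrary choice, and the iteration starts at the identity):

```python
# Newton iteration x_{n+1} = x_n - (x_n^3 - A)(3 x_n^2)^{-1} for 2x2 matrices.
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def inv(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

A = [[2.0, 1.0], [0.0, 2.0]]          # a Jordan block, eigenvalue 2
X = [[1.0, 0.0], [0.0, 1.0]]          # start at the identity
for _ in range(60):
    X2 = mul(X, X)
    X3 = mul(X2, X)
    three_X2 = [[3 * v for v in row] for row in X2]
    X = sub(X, mul(sub(X3, A), inv(three_X2)))

# residual of X^3 - A
err = max(abs(mul(mul(X, X), X)[i][j] - A[i][j])
          for i in range(2) for j in range(2))
assert err < 1e-9
```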
|
|linear-algebra|matrices|matrix-equations|cubics|
| 0
|
For each positive integer $n:$ $a_n=\frac{n^2}{n^2-45n+675}.$ Evaluate $a_1+2a_2+3a_3+\cdots+44a_{44}$
|
For each positive integer $n:$ $a_n=\frac{n^2}{n^2-45n+675}.$ Evaluate $a_1+2a_2+3a_3+\cdots+44a_{44}$ What I have tried: I have taken $a_1,a_2,\cdots,a_{44}$ and put these values into $a_1+2a_2+3a_3+\cdots+44a_{44}.$ Is there any other way to solve this problem? Any guidance will be highly appreciated. Thank you!
|
$a_{45-n}=\frac{(45-n)^2}{(45-n)^2-45(45-n)+675}= \frac{(45-n)^2}{n^2-45n+675}$ The denominator is the same! So $na_n+(45-n)a_{45-n}=\frac{n^3}{n^2-45n+675}+ \frac{(45-n)^3}{n^2-45n+675}$ . Now we use the fact that $x^3+y^3=(x+y)(x^2-xy+y^2)$ : $$ \begin{align} na_n+(45-n)a_{45-n}&=\frac{(n+(45-n))(n^2-n(45-n)+(45-n)^2)}{n^2-45n+675}\\ &=\frac{45(n^2-45n+n^2+45^2-90n+n^2)}{n^2-45n+675}\\ &=\frac{45(3n^2-135n+45^2)}{n^2-45n+675}\\ &=135 \end{align} $$ So each pair works out to a constant value, $135$ and there are $22$ pairs total. The sum is $2970$
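The pairing identity and the final value check out in exact rational arithmetic (a sketch):

```python
from fractions import Fraction

a = lambda n: Fraction(n * n, n * n - 45 * n + 675)

# each pair n, 45 - n contributes exactly 135
for n in range(1, 23):
    assert n * a(n) + (45 - n) * a(45 - n) == 135

# the full sum is 22 * 135 = 2970
assert sum(n * a(n) for n in range(1, 45)) == 2970
```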
|
|discrete-mathematics|recurrence-relations|
| 0
|
Is a totally ordered set consisting of discrete point sets containing point p a well-ordered set?
|
I have encountered a totally ordered set and want to determine whether it is well ordered. Each element B of this totally ordered set A is itself a set and contains the point p (the original post included a picture illustrating how such elements may nest). My current attempt: to verify that A is well ordered, I need to discuss the existence of a minimum element for any non-empty subset S. If there is a set B in S that contains only finitely many points, it can easily be verified that $\bigcap _{B\in S} B\in S$ by proof by contradiction. But if the elements of S are infinite (at least countable) point sets, I don't know how to proceed. Thanks for all your help.
|
Your set $A$ need not be well ordered. Let $A= \{B((0, 0), r) \mid r \in \Bbb R^+ \}$ , where $B((0, 0), r)$ is the ball of radius $r$ centered at $(0, 0)$ . Then $A$ is order-isomorphic to $\Bbb R^+$ , which is not well ordered.
|
|elementary-set-theory|descriptive-set-theory|transfinite-recursion|
| 1
|
Axiom of Regularity applied to the form $A=\{\{A, B\}, C\}$, not the usual $A=\{A,B\}$
|
The Axiom of Regularity tells us the following is disallowed: $$ A = \{ A, B \} $$ where A and B are either sets, or fundamental objects ("lowest hierarchy" as described by Terence Tao's Analysis I). Clearly, here $A$ is a set. Now, as I was doing an exercise I came to a point in the proof resulting in the following two statements: $$a = \{x, y\}$$ $$x = \{a, b\}$$ Here we have the following form, which is different from the above used to illustrate the Axiom of Regularity: $$ A = \{\{A, B\}, C\} $$ Intuitively, this cannot be true, because this is a recursive definition of an infinitely deep set. Question: Does the Axiom of Regularity still apply? If so, how can I apply it to the form $A = \{\{A, B\}, C\}$ when it is traditionally applied to the form $A=\{A, B\}$ . Request: I am self-teaching and a beginner in this subject. It is likely I have not fully understood the axiom of regularity, so gentle guidance is appreciated.
|
The axiom of regularity implies there is no infinite descending sequence of sets. This is explained on the Axiom of Regularity Wikipedia page , but I'll give a simpler explanation applied to your scenario of $A=\{\{A,B\},C\}$ . To do this, we won't apply axiom of regularity to $A$ directly, but we will apply axiom of regularity to a set constructed from $A$ . Suppose, for purpose of contradiction, there exists sets $A,B,C$ such that $A=\{\{A,B\},C\}$ . Then, we can construct a set $S=\{A,\{A,B\}\}$ . However, notice that: $A$ is not disjoint from $S$ , because $\{A,B\}\in A$ and $\{A,B\}\in S$ $\{A,B\}$ is not disjoint from $S$ , because $A\in \{A,B\}$ and $A\in S$ Therefore, all elements in $S$ are not disjoint from $S$ , which contradicts the axiom of regularity. Thus, by contradiction, there exists no sets $A,B,C$ such that $A=\{\{A,B\},C\}$ .
|
|elementary-set-theory|
| 0
|
Prove that $P(x)=(1- x+ x^2 -x^3... - x^{99} +x^{100})$ $\times$ $(1 + x +x^2 + x^3... + x^{99} + x^{100})$ has no odd-degree terms
|
You can't use the fact that $P(x)=P(-x)$ . I have noticed two geometric series. When I multiply each bracket I get $ P(x)=((x^{101}+1)/(x+1))*((x^{101}+1)/(x-1)) $ or $(x^{101}+1)^2/[(x+1)(x-1)]$ I don't know how to proceed from here.
|
You made a typo when converting the geometric series using the sums formula. You converted the first series correctly: $$1-x+x^2-x^3...-x^{99}+x^{100}=\frac{x^{101}+1}{x+1}$$ However, the second series is not right, it should be: $$1+x+x^2+x^3...+x^{99}+x^{100}=\frac{x^{101}-1}{x-1}$$ Notice how there is $-1$ in the numerator, not $+1$ . This is where your typo was. Then, when we multiply them, we get: $$\frac{x^{101}+1}{x+1}\cdot \frac{x^{101}-1}{x-1}=\frac{x^{202}-1}{x^2-1}$$ Finally, notice that the right-hand side is also in the form of a geometric series sum formula: $$\frac{x^{202}-1}{x^2-1}=\frac{(x^2)^{101}-1}{x^2-1}=1+x^2+(x^2)^2+...+(x^2)^{100}=1+x^2+x^4+...+x^{200}$$ and there are no odd-degree terms in this polynomial.
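The conclusion can also be confirmed by a direct coefficient computation, convolving the two coefficient lists (a sketch):

```python
# coefficient lists, degree 0 .. 100
p = [(-1) ** k for k in range(101)]   # 1 - x + x^2 - ... + x^100
q = [1] * 101                         # 1 + x + x^2 + ... + x^100

# polynomial product via convolution; degrees 0 .. 200
prod = [0] * 201
for i, pi in enumerate(p):
    for j, qj in enumerate(q):
        prod[i + j] += pi * qj

assert all(prod[k] == 0 for k in range(1, 201, 2))   # odd degrees vanish
assert all(prod[k] == 1 for k in range(0, 201, 2))   # even degrees are 1
```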
|
|linear-algebra|functions|polynomials|
| 0
|
Explicit approximation of $C_0(\mathbf{N})$ using elementary functions
|
Let $\mathbb{N} = \{1,2,\dots\}$ be the set of natural numbers, and let $C_0$ the the collection of functions on $\mathbb{N}$ that vanishes at infinity. Let ${\cal C}\subset C_0$ be the collection of functions that is finite linear combinations of $f_s(n) = s^n$ where $0\le s . More explicitly, $$ {\cal C} = \left\{\sum_{j = 1}^k a_if_{s_i},\, k\ge 1,\, s_i \in [0,1),\,a_i \in \mathbb{R}\right\}. $$ Here, it is closed under multiplications and is a linear subspace, so it is an algebra, it separates points and vanishes nowhere. Hence by Stone-Weierstrass it is dense in $C_0$ in uniform norm. Now, I want to get an explicit sequence of functions in ${\cal C}$ so that it converges to the indicator function at n = 2, call it $1_{2}$ , that is, $$ 1_2(n) = \begin{cases}1& \text{if n = 2}\\ 0&\text{else}.\end{cases} $$ Or if it is easier, find an explicit sequence of functions in ${\cal C}$ so that it converges to $1_{1,2}$ , that is $$ 1_{1,2}(n) = \begin{cases}1& \text{if } n \in \{1, 2\}\\
|
For $j=1$ , as you stated, we have $$s^{-1}f_s\to 1_1.$$ For $j=2$ , we get $$s^{-2}f_s-s^{-3}f_{s^2}\to 1_2.$$ This is because $$s^{-2}f_s=\Bigl(s^{-1},1,s,s^2,\ldots\Bigr),$$ and $$s^{-3}f_{s^2}=\Bigl(s^{-1},s,s^3,s^5,\ldots\Bigr).$$ The difference is $$s^{-2}f_s-s^{-3}f_{s^2}=\Bigl(0,1-s,s-s^3,s^2-s^5,\ldots\Bigr).$$ Taking $s\downarrow 0$ gives convergence to $1_2$ .
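A numerical illustration (a sketch; componentwise, $g_s(n)=s^{n-2}-s^{2n-3}$ is the sequence above, checked in sup norm over the first 50 coordinates for one small $s$):

```python
s = 1e-3
g = lambda n: s ** (n - 2) - s ** (2 * n - 3)

assert g(1) == 0                       # exact cancellation at n = 1
assert abs(g(2) - 1) < 0.01            # g_s(2) = 1 - s
assert max(abs(g(n)) for n in range(3, 51)) < 0.01   # tail is uniformly small
```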
|
|sequences-and-series|weierstrass-approximation|
| 1
|
Is the bound $\Gamma(\alpha, x) \leq \tfrac{3}{2} e^{-x} (x + \alpha )^{\alpha - 1}$ valid for all $x > 0$ and $\alpha \geq 0$?
|
Recall the incomplete Gamma function, which is defined as $$ \Gamma(\alpha, x) = \int_x^\infty t^{\alpha - 1} e^{-t} \, dt. $$ My question is whether or not $$ \Gamma(\alpha, x) \leq \frac{3}{2}(x + \alpha)^{\alpha - 1} e^{-x} $$ holds for all $x > 0$ and any $\alpha \geq 0 $ . Below, I prove it holds for all integers $\alpha \geq 0$ . Indeed, integration by parts gives $$ \Gamma(\alpha, x) = x^{\alpha - 1} e^{-x} + (\alpha - 1)\Gamma(\alpha- 1, x). $$ In particular, using the fact that $\Gamma(1, x) = \int_x^\infty e^{-t} \, dt = e^{-x}$ , it can be verified for all integers $k \geq 0$ that $$ e^x \Gamma(k + 1, x) = \sum_{j=0}^k x^j (k-j)! \binom{k}{j} \leq \sum_{j=0}^k x^j k^{k-j} \binom{k}{j} = (x + k)^k. $$ Or stated otherwise, for all $x > 0$ , $$ \Gamma(\alpha, x) \leq e^{-x} (x + \alpha - 1)^{\alpha - 1} \leq \frac{3}{2} e^{-x} (x + \alpha)^{\alpha - 1} \quad \mbox{for any}~\alpha \in \{1, 2, 3, \dots\}. $$ Moreover, the bound above can also trivially be verified at $\alpha = 0$ , since $\Gamma(0, x) = \int_x^\infty t^{-1} e^{-t}\, dt \leq e^{-x}/x$ .
|
Some thoughts. (A rigorous proof for (1) and (2) is required.) Let $$F(x) := \frac{3}{2}(x + \alpha)^{\alpha - 1} \mathrm{e}^{-x} - \int_x^\infty t^{\alpha - 1} \mathrm{e}^{-t} \, \mathrm{d} t.$$ We have \begin{align*} F'(x) &= -\frac32(x + \alpha)^{\alpha - 2}\mathrm{e}^{-x}(1 + x) + x^{\alpha - 1} \mathrm{e}^{- x}\\ &= - \frac{\mathrm{e}^{-x} x^{\alpha}(1 + x)}{(x + \alpha)^2}\left( \frac32\left(1 + \frac{\alpha}{x}\right)^{\alpha} - \frac{(x+\alpha)^2}{x(1 + x)}\right). \end{align*} We split into three cases. Case 1. $\alpha \ge 1$ and $x > 0$ By the Bernoulli inequality, we have $$\frac32\left(1 + \frac{\alpha}{x}\right)^{\alpha} - \frac{(x+\alpha)^2}{x(1 + x)} \ge 1\cdot \left(1 + \frac{\alpha}{x}\cdot \alpha\right) - \frac{(x+\alpha)^2}{x(1 + x)} = \frac{(\alpha - 1)^2}{1 + x}\ge 0.$$ Thus, $F'(x) \le 0$ on $(0, \infty)$ . Also, we have $F(\infty) = 0$ . Thus, $F(x) \ge 0$ for all $x > 0$ . Case 2. $0 \le \alpha < 1$ and $x > 1$ By the Bernoulli inequality, we have $$\left(1 + \frac{\alpha}{x
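A numerical spot-check of the conjectured bound at a few non-integer and integer parameters (a sketch; the integral is approximated by composite Simpson's rule on $[x, x+60]$, beyond which the integrand is negligible):

```python
import math

def upper_gamma(alpha, x, m=20_000):
    # Composite Simpson's rule for the upper incomplete gamma function.
    a, b = x, x + 60.0
    h = (b - a) / m
    f = lambda t: t ** (alpha - 1) * math.exp(-t)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return s * h / 3

for alpha in (0.5, 1.0, 2.0, 3.7):
    for x in (0.5, 2.0, 10.0):
        bound = 1.5 * math.exp(-x) * (x + alpha) ** (alpha - 1)
        assert upper_gamma(alpha, x) <= bound
```

A passing spot-check is of course no proof, but it makes the conjecture more plausible on the non-integer range.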
|
|real-analysis|inequality|gamma-function|
| 0
|
Triangle Equality in a normed linear space
|
The following statement is true or false: If $x, y$ are elements of a normed linear space, then $$\|x + y\| = \|x\| + \|y\| \iff x = 0\ \text{or}\ y = tx$$ for some $t ≥ 0$. What I have tried is as follows: It is clear that if $x = 0\ \text{or}\ y = tx $ then the equality will hold. But for the converse part, let \begin{align*} \|x+y\|& =\|x\|+\|y\|\\ \implies \|x+y\|^2 & =(\|x\|+\|y\|)^2\\ \end{align*} After that I stuck. Also in the following article Characterization of the norm triangle equality , I have read that in the case of a strictly convex normed linear space $V$, the equality $\|x + y\| = \|x\| + \|y\|$ holds for nonzero vectors $x,y ∈ V$ if and only if $\frac{x}{\|x\|} = \frac{y}{\|y\|} $. Thank you for the help.
|
The example of the $\ell^1$ -norm on $\mathbb R^2$ illustrating the failure of the proposition in general by user @copper.hat is very nice. In this answer I'll give a substitute that is true in inner product spaces. Let $(X,\langle\cdot,\cdot\rangle)$ be a real or complex inner product space. (We use $\Re z$ to denote the real part of $z\in\mathbb C$ .) Proposition. Suppose $e_1,\dots,e_n$ in $X$ are unit vectors and $|e_1 + \dots + e_n| = n$ . Then $e_1=e_2=\dots=e_n$ . Proof. We use induction on $n$ . The case $n = 2$ is the most important, the case $n = 1$ being trivial. In case $n = 2$ , consider that $$ 4 = \langle e_1+e_2,e_1+e_2\rangle = 2 + 2\Re\langle e_1,e_2\rangle. $$ Therefore, $\langle e_1-e_2,e_1-e_2\rangle$ is equal to $$ 2 - 2\Re\langle e_1,e_2\rangle = 0. $$ Suppose the claim is true for some $n\ge 2$ , and consider the case $n+1$ . By assumption and the triangle inequality, $$ n+1=|e_1+\dots+e_{n+1}|\le |e_1+\dots+e_n|+1\le n+1, $$ so all inequalities are equalities,
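The $\ell^1$ failure mentioned at the start can be checked in a couple of lines (a sketch): equality in the triangle inequality without the vectors being parallel.

```python
def norm1(v):
    # the l^1 norm on R^2
    return sum(abs(t) for t in v)

x, y = (1.0, 0.0), (0.0, 1.0)
s = tuple(a + b for a, b in zip(x, y))

assert norm1(s) == norm1(x) + norm1(y)   # 2 == 1 + 1, equality holds
# yet y = t*x is impossible: it would force y[1] = t * x[1] = 0
assert x[1] == 0.0 and y[1] == 1.0
```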
|
|functional-analysis|normed-spaces|
| 0
|
Regularity and B-S equation
|
Let $t_{0}>0,$ $S_{t_{0}}=S_{0}$ and $\forall t>0\quad \frac{dS_t}{S_t}=\mu dt+\sigma dW_t$ . Are there any regularity results for $(\mu, \sigma)\mapsto S(\mu,\sigma)$ ?
|
As mentioned in https://en.wikipedia.org/wiki/Geometric_Brownian_motion we have $$S_{t}=S_{0}\exp \left(\left(\mu -{\frac {\sigma ^{2}}{2}}\right)t+\sigma W_{t}\right).$$ So the mapping $(\mu, \sigma)\mapsto S(\mu,\sigma)$ is smooth.
|
|probability|continuity|finance|stochastic-differential-equations|regularity-theory-of-pdes|
| 1
|
Integrating a product of Gaussians by hand
|
At the end of section 2 of this paper , the authors mention the integral: $$\frac{\alpha}{Z}\int N(x_i | \mu, \sigma I) N(\mu | 0, \rho I) d \mu$$ (Note: $x_i$ and $\mu$ are $d$ -dimensional.) In the first part of section 3.1, they claim that "by a straightforward calculation," that integral evaluates to $$\frac{\alpha}{Z} (2\pi(\rho + \sigma))^{-d/2}\exp\biggr(-\frac{1}{2(\rho + \sigma)}||x_i||^2\biggr).$$ I have tried (a few times) to obtain this result by actually evaluating the integral and completing the square by hand, but I've obtained $$\frac{\alpha}{Z} \biggr(2\pi\frac{\sigma \rho}{\rho + \sigma} \biggr)^{d/2}\exp\biggr(-\frac{1}{2(\rho + \sigma)}||x_i||^2\biggr),$$ and I've not been able to find my mistake. I trust the paper over myself, so I must have done something incorrectly. Could someone show how to properly evaluate the integral step-by-step to obtain the paper's solution?
|
Let $y_i=x_i-\mu$ ; then $y_i \sim N(0,\sigma I)$ . The distribution of $x_i$ is the same as the distribution of $y_i+\mu$ , where $y_i$ and $\mu$ are independent. The distribution of the sum of two independent normal random variables is a normal distribution . We also have that $\sigma^2(X)+\sigma^2(Y)=\sigma^2(X+Y)$ for two independent random variables $X$ , $Y$ . Therefore $x_i = y_i+\mu \sim N(0,(\sigma +\rho) I)$ , hence the formula. That's why it's "straightforward".
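A quick Monte Carlo check (my own sketch; here $\sigma$ and $\rho$ are read as variances, matching the convolution argument above) that the variances add:

```python
import numpy as np

# Independent y ~ N(0, sigma) and mu ~ N(0, rho): their sum should be
# N(0, sigma + rho), which is the fact used in the answer.
rng = np.random.default_rng(1)
sigma, rho, n = 0.7, 1.3, 2_000_000
y = rng.normal(0.0, np.sqrt(sigma), n)   # rng.normal takes a std dev, hence sqrt
mu = rng.normal(0.0, np.sqrt(rho), n)
x = y + mu
var_x = x.var()
print(var_x)  # close to sigma + rho = 2.0
```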
|
|calculus|probability|definite-integrals|gaussian-integral|
| 1
|
If $f$ maps any measurable set to a measurable set, then is $f$ a measurable map?
|
Let $(X,\sigma)$ and $(Y,\nu)$ be measurable spaces and let $f:\ X\longrightarrow Y$ be a map that takes each measurable set of $\sigma$ to a measurable set of $\nu$ . Must $f$ then be measurable, i.e. must the preimage of every measurable set in $Y$ be a measurable set in $X$ ? In general topology, an open map - a map $f:\ X\longrightarrow Y$ that sends every open set in $X$ to an open set in $Y$ - need not be continuous. However, a sigma-algebra is in general larger than a topology, so is there any chance that a map that sends measurable sets to measurable sets is a measurable map?
|
The answer is no. Here is a simple counter-example. Let $X=Y=[0,1]$ . Let $\sigma =\{\emptyset, [0,1]\}$ and $\nu=\mathcal{P}([0,1])$ . It is easy to see that $\sigma$ and $\nu$ are $\sigma$ -algebras. Let $f:\ X \rightarrow Y$ be the map defined as $f(x) =x$ , for all $x \in [0,1]$ . Clearly, $f$ takes each measurable set of $\sigma$ to a measurable set of $\nu$ . However $f$ is not measurable, since, for instance, $\{0\} \in \nu$ , but $f^{-1}(\{0\})= \{0\} \notin \sigma$ . Remark : In fact, in the counter-example $[0,1]$ can be replaced by any set having two or more elements.
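Following the remark, the counterexample can even be machine-checked on a two-element set (my sketch, with $X=Y=\{0,1\}$ in place of $[0,1]$ ):

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))}

X = {0, 1}
sigma = {frozenset(), frozenset(X)}   # trivial sigma-algebra on X
nu = powerset(X)                      # full power set on Y = X
f = lambda x: x                       # identity map

# f maps every sigma-measurable set into nu ...
forward_ok = all(frozenset(f(x) for x in A) in nu for A in sigma)
# ... yet the preimage of {0} (which lies in nu) is not in sigma.
preimage = frozenset(x for x in X if f(x) in {0})
measurable = preimage in sigma
print(forward_ok, measurable)  # True False
```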
|
|measure-theory|elementary-set-theory|
| 0
|
Value of derivative of complex function
|
Let $f:\mathbb{C} \to \mathbb{C} \\ \,\ \ \ \ \ z \to (|z|^2-2)\overline{z}$ Determine the points where f is differentiable, presenting the due expression $f'(z)$ . $u(x,y)=Re((x^2+y^2-2)(x-yi))=x^3+xy^2-2x; \\ v(x,y)=Im((x^2+y^2-2)(x-yi))=-yx^2-y^3+2y $ $u_x=3x^2+y^2-2; \,u_y=2xy \\ v_x= \ \ -2xy;\qquad \ \ v_y=-x^2-3y^2+2$ The CR equality happens when: $$4(x^2+y^2)=4$$ And as all partial derivatives are continuous, f is differentiable at $\partial{D(0,1)}$ And now, for $z_{0} \in \partial{D(0,1)}$ and knowing $\textit{'a priori'}$ that the limit exists: $$ \lim_{z \to z_0; z \in \partial{D(0,1)}} \frac{f(z)-f(z_0)}{z-z_0} = \lim_{z \to z_0; z \in \partial{D(0,1)}}\frac{(|z|^2-2)\overline{z}+\overline{z_0}}{z-z_0}=\lim_{z \to z_0; z \in \partial{D(0,1)}}-\frac{z-z_0}{z-z_0}=-1$$ So $f'(z_0)=-1$ for $z_0 \in \partial{D(0,1)}$ Is this correct? Any further suggestions?
|
Use the complex form of the Cauchy-Riemann equations: $$\frac{\partial f}{\partial \overline z}=0,$$ and $f(z)=(|z|^2-2)\overline{z}=z\overline z^2-2\overline z$ , then $$\frac{\partial f}{\partial \overline z} =2z\overline z-2=0\iff|z|=1,$$ and $$f'(z)=\frac{\partial f}{\partial z}=\overline z^2,\quad \forall z \in \partial{D(0,1)}.$$
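A numerical spot check (my own sketch): on the unit circle, difference quotients along two different directions should both approach $\overline{z_0}^2$ , while at points off the circle they would disagree:

```python
import cmath

def f(z):
    # f(z) = (|z|^2 - 2) * conj(z)
    return (abs(z) ** 2 - 2) * z.conjugate()

z0 = cmath.exp(0.7j)                          # a point with |z0| = 1
expected = z0.conjugate() ** 2                # claimed derivative on the circle
h = 1e-6
q_real = (f(z0 + h) - f(z0)) / h              # approach along the real axis
q_imag = (f(z0 + 1j * h) - f(z0)) / (1j * h)  # approach along the imaginary axis
print(abs(q_real - expected), abs(q_imag - expected))  # both tiny
```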
|
|complex-analysis|
| 0
|
Proving uniqueness of antipodes in Hopf algebras
|
Let $(H,\mu,\nu,\Delta,\epsilon)$ be a bialgebra, where $H$ is the vector space, $\mu, \nu$ are the product and unit whilst $\Delta, \epsilon$ are the coproduct and counit. Now, for $f,g \in \operatorname{End}(H)$ define $f@g \in \operatorname{End}(H)$ by $(f@g)(x)=\mu(f \otimes g)\Delta(x)=\Sigma_{(x)}f(x')g(x'')$ (via Sweedler notation). An element $S \in \operatorname{End}(H)$ is called an antipode if $S@id_H=id_H@S=\nu\circ\epsilon$ If a bialgebra has an antipode, then it is unique. To see this, suppose $S,T$ are antipodes for the bialgebra $H$ . Then we have: $S = S@(\nu\epsilon)=S@(id_H@T)=(S@id_H)@T=(\nu\epsilon)@T=T$ Can somebody explain to me the first equality? Why do we get $S = S@(\nu\epsilon)$ ?
|
We can in fact prove this using a commutative diagram, so that the result generalizes for all Hopf monoids in symmetric monoidal categories. Here, $I$ denotes the monoidal unit (that is, the base field). The two top quadrilaterals commute by the properties of a monoidal category; the two triangles commute by the properties of the unit $\eta$ and counit $\varepsilon$ ; the bottom quadrilateral commutes because $T$ is an antipode. Therefore, we have proven that: $$S=(\mu\circ(1_H\otimes\mu))\circ(S\otimes1_H\otimes T)\circ((1_H\otimes\Delta)\circ\Delta).$$ In a very similar manner we can also prove that $$T=(\mu\circ(\mu\otimes1_H))\circ(S\otimes1_H\otimes T)\circ((\Delta\otimes1_H)\circ\Delta).$$ But by associativity and co-associativity, this means $S=T$ .
|
|abstract-algebra|hopf-algebras|coalgebras|
| 0
|
Can the integral be found without Feynman’s trick?
|
When I came across the integral $$J=\int_0^{\infty} \frac{\operatorname{artanh}\left(\frac{1}{\sqrt{1+x^2}}\right)}{\sqrt{1+x^2}} d x=\frac{\pi^2}{4} $$ whose answer is surprisingly decent, I, as usual, put $x=\tan \theta$ and transform the integral into $$ \begin{aligned} J & =\int_0^{\frac{\pi}{2}} \frac{\operatorname{artanh}\left(\frac{1}{\sec \theta}\right)}{\sec \theta} \sec ^2 \theta d \theta \\ & =\int_0^{\frac{\pi}{2}} \sec \theta \operatorname{artanh}\left(\frac{1}{\sec \theta}\right) d \theta \end{aligned} $$ Feynman’s trick reminds me to deal with its parametrized integral $$ J(a)= \int_0^{\infty} \frac{\operatorname{artanh}\left(\frac{a}{\sqrt{1+x^2}}\right)}{\sqrt{1+x^2}} d x =\int_0^{\frac{\pi}{2}} \sec \theta \operatorname{artanh} \left(\frac{a}{\sec \theta}\right) d \theta $$ with $|a|\le 1$ and $J(0)=0$ . Consequently, differentiation under the integral makes our life easier, as $$ J^{\prime}(a)=\int_0^{\frac{\pi}{2}} \frac{1}{1-\frac{a^2}{\sec ^2 \theta}} d \theta $$
|
Using the integral $$ \int \frac{1}{x^2-k^2} d x=-\frac{\tanh ^{-1}\left(\frac{x}{k}\right)}{k}+\text { constant } $$ to transform the integral into a double integral $$ \int_0^{\infty} \frac{\operatorname{artanh}\left(\frac{a}{\sqrt{b^2+x^2}}\right)}{\sqrt{b^2+x^2}}dx=-\int_0^{\infty} \int_0^a \frac{1}{y^2-\left(b^2+x^2\right)} d y d x $$ Interchanging the order of integration yields $$ \begin{aligned} \int_0^{\infty} \frac{\operatorname{artanh}\left(\frac{a}{\sqrt{b^2+x^2}}\right)}{\sqrt{b^2+x^2}}dx & =\int_0^a \int_0^{\infty} \frac{1}{x^2+\left(b^2-y^2\right)} d x d y \\ & =\int_0^a\left[\frac{1}{\sqrt{b^2-y^2}} \tan ^{-1} \frac{x}{\sqrt{b^2-y^2}}\right]_0^{\infty} d y \\ & =\frac{\pi}{2} \int_0^a \frac{1}{\sqrt{b^2-y^2}} d y \\ & =\frac{\pi}{2} \arcsin \left(\frac{a}{b}\right) \end{aligned} $$ Setting $a=b=1$ gives $J=\frac{\pi}{2}\arcsin(1)=\frac{\pi^2}{4}$ .
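As a numerical sanity check (my own sketch, not in the original answer), the case $a=b=1$ , i.e. $J=\frac{\pi}{2}\arcsin(1)=\frac{\pi^2}{4}$ , can be verified with a midpoint rule, which avoids the integrable logarithmic singularity of $\sec\theta\,\operatorname{artanh}(\cos\theta)$ at $\theta=0$ :

```python
import numpy as np

# Midpoint rule for J = int_0^{pi/2} artanh(cos t) / cos t dt, expected pi^2/4.
N = 2_000_000
t = (np.arange(N) + 0.5) * (np.pi / 2) / N   # midpoints avoid both endpoints
J = (np.arctanh(np.cos(t)) / np.cos(t)).sum() * (np.pi / 2) / N
print(J, np.pi ** 2 / 4)  # both about 2.4674
```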
|
|calculus|integration|definite-integrals|trigonometric-integrals|hyperbolic-functions|
| 0
|
Is my trigonometric proof of the Pythagorean theorem correct?
|
Actually, this is more of a proof for the trigonometric identity, $$\sin^2\theta + \cos^2\theta = 1.$$ First use right triangle, $ABC$ with angle $C$ being the right angle. Drop an altitude from point $C$ to line $AB$ making the point $D$ . Let's name the segments: $AB = c$ , $AC = b$ , and $BC = a$ . Also, $AD = c_1$ , $BD = c_2$ , so $c_1 + c_2 = c$ . Then, $b\cos A = c_1$ , $a\cos B = c_2$ , so $b\cos A + a\cos B = c$ . Since we know that $\angle A + \angle B = 90^\circ$ , we also know that $\cos B = \sin A$ . Thus, $b\cos A + a\sin A = c$ , or $$ \frac{b}{c}\,\cos A + \frac{a}{c}\, \sin A = 1. $$ Since $\frac{b}{c} = \cos A$ and $\frac{a}{c} = \sin A$ , we have $$ \cos^2 A + \sin^2 A = 1. $$
|
The identity $\sin^2\theta + \cos^2\theta = 1$ is essentially the Pythagorean Theorem in the form of trigonometry. In your proof you use the fact that triangles with congruent angles are similar. This is an essential fact that enables us to define the sine and cosine functions for acute angles using the three sides of a right triangle, from which you get the formulas $\frac bc = \cos A$ and so forth. It is not necessary to use the Pythagorean Theorem to prove that triangles with congruent angles are similar. Then, as detailed in On the Possibility of Trigonometric Proofs of the Pythagorean Theorem by Jason Zimba, the sine and cosine functions (at least, the versions of those functions that you used) are well-defined. There is a related discussion at Cut-the-Knot. While I found the proof adequate and non-circular, I had to rely on knowledge that the sine and cosine functions can be well defined using the high-school "triangle" definition without depending on the Pythagorean Theorem.
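The arithmetic behind the key step $b\cos A + a\sin A = c$ can be spot-checked numerically, e.g. on the 3-4-5 right triangle (my illustration, not part of the argument above):

```python
import math

# 3-4-5 right triangle: legs a = 3, b = 4, hypotenuse c = 5.
a, b, c = 3.0, 4.0, 5.0
A = math.atan2(a, b)              # angle at A, so cos A = b/c and sin A = a/c
lhs = b * math.cos(A) + a * math.sin(A)
identity = math.cos(A) ** 2 + math.sin(A) ** 2
print(lhs, identity)  # about 5.0 and 1.0
```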
|
|geometry|algebra-precalculus|trigonometry|solution-verification|euclidean-geometry|
| 0
|
Regularity of $H \cap X$ in Bertini's Theorem (Hartshorne CH. 2, Theorem 8.18)
|
In Hartshorne's proof of Bertini's theorem (CH. 2, Theorem 8.18), I am having difficulty in the first part of the proof, which tells us that the scheme $H \cap X$ is regular at every point. I could follow up to the statement that $x \in H \cap X \iff \varphi_x(f) \in \mathfrak{m}_x$ . (Q-1) I just want to confirm here that there is an abuse of notation and the statement means $x \in H \cap X \iff \varphi_x(f) \in \mathfrak{m}_x/ \mathfrak{m}_x^2 = \overline{\mathfrak{m}_x} $ . I am not able to make any progress on the next statement that says " $x$ is non-regular on $H \cap X \iff \varphi_x(f) \in \mathfrak{m}_x^2$ because in that case the local ring $\mathcal{O}_x/ (\varphi(f))$ will not be regular." (Q-2) I hope that RHS is 0, i.e. $\varphi_x(f) \in \mathfrak{m}_x^2$ is the same as saying that $\varphi_x(f)=0$ . (Q-3) What will be the local ring of $H \cap X$ at $x$ ? Why does Hartshorne say that it will be $\mathcal{O}_x/(\varphi_x(f))$ ? (hoping that there is a typo: intended $\varphi_x(f)$ )
|
It seems like you're getting really hung up on the notation while Hartshorne is being pretty free about identifying things with their image under quotient maps and the like. I think you're sophisticated enough to figure out what it should be if you insist on being really exacting like this - you're completely correct in your post when you're making statements about what the notation should be. In detail: Correct, $x\in H\cap X$ iff $\varphi_x(f)\in\mathfrak{m}_x/\mathfrak{m}_x^2 \subset \mathcal{O}_{X,x}/\mathfrak{m}_x^2$ . Yes, that's correct, we are taking the quotient by $\mathfrak{m}_x^2$ . The local ring of $H\cap X$ at $x$ is the quotient of $\mathcal{O}_{X,x}$ by any local equation for $H$ . Importantly, $f$ does not immediately restrict to an element of $\mathcal{O}_{X,x}$ without doing something to it, because it's homogeneous and therefore not a global regular function on $\Bbb P^n$ - instead, it's a section of $\mathcal{O}_{\Bbb P^n}(1)$ . In order to get a local equation for $H$ at $x$ , one first has to trivialize $\mathcal{O}_{\Bbb P^n}(1)$ in a neighborhood of $x$ .
|
|algebraic-geometry|commutative-algebra|schemes|projective-varieties|
| 0
|
How do we know the space of distributions is "big"?
|
Consider the set of compactly supported smooth functions $C^\infty_c(\mathbb{R})$ . The dual of this space, $(C^\infty_c(\mathbb{R}))'$ is often referred to as a very large space. Since every $f \in C^\infty_c(\mathbb{R})$ can be mapped to an element in the dual space by integration we know that $C^\infty_c(\mathbb{R}) \subset (C^\infty_c(\mathbb{R}))'$ and that this inclusion is strict since non-regular distributions are known to exist. But how do we know that the dual space is much larger than the original space? That is, how do we know the non-regular distributions are not rare? Is there a way to quantify how much bigger the dual space is compared to the original space?
|
Given the inclusion you noted, the Hahn-Banach Theorem allows for the extension of linear functionals defined on a subspace of $C_c^\infty(\mathbb{R})$ to the entire space in multiple ways, producing many distinct elements of $(C_c^\infty(\mathbb{R}))'$ . This theorem underpins the ability to create many distinct distributions from a single functional.
|
|functional-analysis|distribution-theory|dual-spaces|
| 1
|
Is there a nice closed form of the following function: $f(k) = \lim_{n \rightarrow \infty}\prod^{kn}_{i = 0}\frac{n-2i+1}{n-2i}$
|
I am trying to find the closed form of the following function: $$f(k) = \lim_{n \rightarrow \infty}\prod^{kn}_{i = 0}\frac{n-2i+3}{n-2i+2},$$ where $k \in [0,\frac{1}{2}]$ . If the numerator is $n-2i+4$ instead of $n-2i+3$ , the products can be nicely canceled to $\lim_{n \rightarrow \infty} \frac{n+3}{n-2kn+2} = \frac{1}{1-2k}$ . But the relaxation is rather big and I don't really know where to look for dealing with this product.
|
An "automatic" approach is to use the gamma function limit $\lim\limits_{x\to+\infty}\frac{\Gamma(x+a)}{x^a\ \Gamma(x)}=1$ : since $$\prod_{i=0}^m\frac{n-2i+3}{n-2i+2}=\frac{\Gamma\left(\frac{n+5}2\right)\Gamma\left(\frac{n+2}2-m\right)}{\Gamma\left(\frac{n+4}2\right)\Gamma\left(\frac{n+3}2-m\right)},$$ we see that $$f(k)=\lim_{n\to\infty}\frac{(n/2)^{5/2}(n/2-kn)^{2/2}}{(n/2)^{4/2}(n/2-kn)^{3/2}}=(1-2k)^{-1/2}.$$
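One can corroborate the limit numerically (my own sketch): evaluate the finite product for a large $n$ and compare with $(1-2k)^{-1/2}$ :

```python
# Finite product prod_{i=0}^{kn} (n-2i+3)/(n-2i+2) versus (1-2k)^(-1/2).
def partial_product(n, k):
    m = int(k * n)
    p = 1.0
    for i in range(m + 1):
        p *= (n - 2 * i + 3) / (n - 2 * i + 2)
    return p

n, k = 400_000, 0.25
approx = partial_product(n, k)
exact = (1 - 2 * k) ** -0.5
print(approx, exact)  # both about sqrt(2) = 1.41421...
```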
|
|calculus|riemann-sum|
| 1
|
Axiomatizing a QE theory by $\forall \bar{x}\exists{y}\phi(\bar{x},y)$
|
As a part of a model theory assignment I am asked to prove that if a theory T has quantifier elimination, then it can be axiomatized by sentences of the form $\forall\bar{x}\exists{y}\phi(\bar{x},y)$ , where $\phi$ is a quantifier-free formula. I have a sketch for the solution, but I am not too sure about it. I would appreciate some feedback. For each $\psi\in{T}$ : If $\psi$ is quantifier free, it is trivial, as for example $T\vdash\psi\iff \forall{x}\exists{y}\psi$ . For a sentence of the form $Q_1x_1...Q_nx_n\phi(x_1,...x_n)$ where $\phi$ is quantifier free and $Q_i\in$ { $\forall, \exists$ } for each $i\leq{n}$ , I use the fact that if a theory has QE then it is model-complete, which implies each formula is T-equivalent to an existential formula. Therefore, by induction on n: if n=1, i.e $\phi = \forall{x}\psi(x)$ , there is some q.f $\psi'(x)$ such that $T\vdash\forall{x} (\psi(x)\iff\exists{y}\psi'(x,y))$ , and so $\forall{x}\exists{y}\psi'(x,y)$ is a sentence of the desired form
|
I'm not sure I understand your argument... what does it mean for "the claim to be true for sentences of the form..."? "The claim" is not a statement about all sentences, it is a statement about an equivalence of theories. You seem to be showing each sentence in $T$ is equivalent under $T$ to a quantifier-free sentence, but that's clear from the get-go (we have quantifier elimination). And that doesn't work because just because a $\varphi\in T$ is equivalent mod $T$ to some $\psi$ doesn't mean we can replace $\varphi$ with $\psi$ in $T$ and get an equivalent theory... $T$ implies any consequence of $T$ is equivalent with $\varphi,$ no matter how weak. However, observe that we have, for every quantifier-free formula $\varphi(\vec x,y)$ , $$ T\vdash \forall \vec x(\psi_\varphi(\vec x)\leftrightarrow \exists y\varphi(\vec x, y))$$ for some quantifier-free $\psi_\varphi$ (this is just a special case of quantifier elimination). Furthermore, this statement of equivalence is itself logically
|
|model-theory|quantifier-elimination|
| 0
|
Number of Solutions to $y^2 \equiv x^2 + 1 \pmod{p}$
|
Suppose $p$ is an odd prime. I wish to show that the number of solutions to $y^2 \equiv x^2 + 1 \pmod{p}$ is $$p + \sum_{k=0}^{p-1}\left(\frac{k^2 +1}{p}\right).$$ I know that, for any $a$ , $y^2 \equiv a \pmod{p}$ admits $1 + \left(\frac{a}{p}\right)$ solutions (mod $p$ ). It is thus 'intuitively' true that $y^2 \equiv x^2 + 1 \pmod{p}$ should admit $$\sum_{x=0}^{p-1}\left(1 + \left(\frac{x^2 +1}{p}\right)\right)$$ solutions, where we run over the distinct residue classes (mod $p$ ) that $x$ can assume. Is this a sufficient argument? Or is there a way by which to formalise the proof?
|
What you did is fine, but this can be made more explicit. Rewrite the equation as $(y+x)(y-x)\equiv 1\mod p$ . Since $p\not=2$ , the following variable change is a bijection: $$\begin{cases} u = y+x \\ v = y-x \end{cases}\Leftrightarrow \begin{cases} x = \frac{u-v}{2} \\ y = \frac{u+v}{2} \end{cases}$$ So the equation is equivalent to $uv\equiv 1 \mod p$ which has exactly $p-1$ solutions.
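Both counts can be brute-force checked for small primes (my own sketch): the raw solution count, the character-sum formula from the question, and the $p-1$ from this answer all agree:

```python
def legendre(a, p):
    # Legendre symbol via Euler's criterion; p an odd prime.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

results = {}
for p in [3, 5, 7, 11, 13, 101]:
    count = sum(1 for x in range(p) for y in range(p)
                if (y * y - x * x - 1) % p == 0)
    formula = p + sum(legendre(k * k + 1, p) for k in range(p))
    results[p] = (count, formula)
print(results)  # every prime gives (p-1, p-1)
```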
|
|elementary-number-theory|quadratic-residues|
| 1
|