The Binomial Theorem

Consider the expansion of the binomial $(1 + x)^n$ for $n \in \{ 0, 1, 2, ... \}$. When $n = 0$ we have that:

\begin{align} (1 + x)^0 = 1 \end{align}

When $n = 1$, $n = 2$, and $n = 3$ we get:

\begin{align} (1 + x)^1 &= 1 + x \\ (1 + x)^2 &= 1 + 2x + x^2 \\ (1 + x)^3 &= 1 + 3x + 3x^2 + x^3 \end{align}

Notice that if we list the terms of the expansion of $(1 + x)^n$ in ascending order then the coefficients of these terms match the numbers in row $n$ of Pascal's Triangle. More generally, this property is also apparent in the expansion of the binomial $(x + y)^n$. For $n = 0$, $n = 1$, $n = 2$, and $n = 3$ we have that:

\begin{align} (x + y)^0 &= 1 \\ (x + y)^1 &= x + y \\ (x + y)^2 &= x^2 + 2xy + y^2 \\ (x + y)^3 &= x^3 + 3x^2y + 3xy^2 + y^3 \end{align}

The following theorem, known as the Binomial Theorem, states this result. Theorem 1 (The Binomial Theorem): For all $x, y \in \mathbb{R}$ and $n \in \{ 0, 1, 2, ... \}$ we have that $\displaystyle{(x + y)^n = \binom{n}{0} x^ny^0 + \binom{n}{1} x^{n-1}y^1 + ... + \binom{n}{n-1} x^1y^{n-1} + \binom{n}{n} x^0y^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k}$. Proof: Let $n \in \{ 0, 1, 2, ... \}$ and consider the simplified expanded product of:

\begin{align} (x + y)^n = \underbrace{(x + y)(x + y) \cdots (x + y)}_{n \: \mathrm{factors}} \end{align}

From each factor $(x + y)$ we obtain a term by distributing either the $x$ or the $y$ across the remaining factors $(x + y)$. There are $\binom{n}{0} = 1$ ways to get the term $x^ny^0$ (by multiplying all of the $x$'s, one from each of the $(x + y)$ factors), there are $\binom{n}{1} = n$ ways to get the term $x^{n-1}y^1$ (by multiplying the $x$'s from any $n-1$ of the $n$ factors and the $y$ from the remaining factor), and so forth. By continuing in this fashion, we eventually obtain all terms of the expansion of $(x + y)^n$ and get:

\begin{align} (x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k \quad \blacksquare \end{align}

The argument made in Theorem 1 is a bit subtle but nevertheless important. For a simpler example, consider the following expansion:

\begin{align} (x + y)^3 = (x + y)(x + y)(x + y) \end{align}

Let $A = \{ x, y \}$ be the set of terms in the factors $(x + y)$. From each of the $3$ factors we choose one of the elements in $A$ and multiply the choices together; that is, we get a finite sequence $(a, b, c)$ where $a, b, c \in A$ and such that $abc$ is a term in the expansion of $(x + y)^3$.
We thus choose to multiply either $0$, $1$, $2$, or $3$ of the $y$'s, and correspondingly, choose to multiply either $3$, $2$, $1$, or $0$ of the $x$'s. Thus the term $x^{3-k}y^k$ appears precisely $\binom{3}{k}$ times, and the full expansion of $(x + y)^3$ is:

\begin{align} (x + y)^3 = \binom{3}{0} x^3 + \binom{3}{1} x^2y + \binom{3}{2} xy^2 + \binom{3}{3} y^3 = x^3 + 3x^2y + 3xy^2 + y^3 \end{align}

Before we end this page, recall that the binomial coefficients and Pascal's triangle are very much related. We have already noted and proven that the sequence of binomial coefficients in any row of Pascal's triangle is symmetric and unimodal. We will now further show that, for any row $n \geq 1$, the sum of all binomial coefficients $\binom{n}{k}$ where $k$ is odd is equal to the sum of all binomial coefficients where $k$ is even. Corollary 1: If $n$ is a positive integer then $\displaystyle{\sum_{k \: \mathrm{is \: odd}} \binom{n}{k} = \sum_{k \: \mathrm{is \: even}} \binom{n}{k}}$. Proof: Consider the binomial $(x + y)^n$. By Theorem 1 we have that the expansion of $(x + y)^n$ is given by:

\begin{align} (x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k \end{align}

Setting $x = 1$ and $y = -1$ gives us:

\begin{align} 0 = (1 + (-1))^n = \sum_{k=0}^{n} \binom{n}{k} (-1)^k = \sum_{k \: \mathrm{is \: even}} \binom{n}{k} - \sum_{k \: \mathrm{is \: odd}} \binom{n}{k} \end{align}

Therefore the two sums are equal. $\blacksquare$
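Both the theorem and the corollary are easy to check numerically; here is a short Python sketch using `math.comb` (the example values are illustrative):

```python
from math import comb

def binomial_expand(n):
    """Coefficients of (x + y)^n in ascending powers of y (row n of Pascal's triangle)."""
    return [comb(n, k) for k in range(n + 1)]

# Row 3 of Pascal's triangle matches the expansion x^3 + 3x^2y + 3xy^2 + y^3
assert binomial_expand(3) == [1, 3, 3, 1]

# Corollary 1: for n >= 1 the odd-k and even-k coefficients have equal sums
n = 7
odd = sum(comb(n, k) for k in range(n + 1) if k % 2 == 1)
even = sum(comb(n, k) for k in range(n + 1) if k % 2 == 0)
assert odd == even == 2 ** (n - 1)
```

Note that each of the two equal sums is $2^{n-1}$, since together they total $2^n$.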
Find the remainder when $\sum_{k=1}^{100} k(k!)$ is divided by 11. Details: $k!$ stands for the factorial of $k$, that is $k! = k \times (k-1) \times (k-2) \times ... \times 2 \times 1$. This is a part of the set 11≡ awesome (mod remainders).
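A short Python check of this problem, using the telescoping identity $k \cdot k! = (k+1)! - k!$, which collapses the sum to $101! - 1$:

```python
from math import factorial

# Telescoping: k * k! = (k+1)! - k!, so the sum collapses to 101! - 1!
total = sum(k * factorial(k) for k in range(1, 101))
assert total == factorial(101) - 1

# 101! contains the factor 11, so it is ≡ 0 (mod 11); the remainder is -1 ≡ 10.
print(total % 11)  # → 10
```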
Suppose that we have right-angled triangles $ABC$ and $A_1B_1C_1$ with $\angle C=\angle C_1=90^\circ$ and $\angle A=\angle A_1$. The two triangles are similar. So we have $\displaystyle \frac{BC}{AB}=\frac{B_1C_1}{A_1B_1}$, $\displaystyle \frac{AC}{AB}=\frac{A_1C_1}{A_1B_1}$ and $\displaystyle \frac{BC}{AC}=\frac{B_1C_1}{A_1C_1}$. Say I draw a $20^\circ$-$70^\circ$-$90^\circ$ triangle, then measure and calculate the ratio of the side opposite the $20^\circ$ angle to the hypotenuse. This ratio is the same no matter how big or small my triangle is. We can define it as $\sin20^\circ$. If $\angle A=\theta$ and $\angle C=90^\circ$, we can define $\displaystyle \sin\theta=\frac{BC}{AB}$, $\displaystyle \cos\theta=\frac{AC}{AB}$ and $\displaystyle \tan\theta=\frac{BC}{AC}$. The size of the triangle does not matter. We can actually construct a table for the trigonometric ratios by drawing and measuring right-angled triangles of different angles.
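As a quick illustration (Python's `math` module standing in for drawing and measuring), one can tabulate the ratios and confirm that they do not depend on the triangle's size. The angles chosen are illustrative:

```python
import math

# A small table of trigonometric ratios, as one could compile by drawing and
# measuring right-angled triangles of different angles.
table = {
    deg: (math.sin(math.radians(deg)),
          math.cos(math.radians(deg)),
          math.tan(math.radians(deg)))
    for deg in (20, 30, 45, 60, 70)
}

# Scale invariance: opposite/hypotenuse for the 20° angle is the same ratio
# whatever the hypotenuse length (the sides here are generated from the ratio
# itself, so this is only an illustration of the definition).
sin20 = table[20][0]
for hyp in (1.0, 5.0, 100.0):
    opposite = hyp * sin20          # side opposite the 20° angle
    assert abs(opposite / hyp - sin20) < 1e-12
```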
Let $G=(V,E)$ be a simple, undirected and connected graph. We say that $S\subseteq V$ is a cutting set if $S\neq V$ and the induced subgraph on $V\setminus S$ is no longer connected. If $S \subseteq V$ is a cutting set of $G$, is there a cutting set $S_0\subseteq S$ of $G$ such that for all $x\in S_0$ the set $S_0\setminus \{x\}$ is no longer a cutting set? (This question has an easy positive answer for finite graphs, so it is only interesting for infinite graphs.)
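For finite graphs the positive answer can be made constructive: starting from any cutting set, repeatedly discard vertices whose removal from the set still leaves a cutting set, until no vertex can be dropped. A minimal sketch in plain Python (dict-of-sets adjacency; the function names are my own, not from the question):

```python
def is_connected(adj, verts):
    """Is the induced subgraph on `verts` connected (vacuously True if empty)?"""
    verts = set(verts)
    if not verts:
        return True
    seen, stack = set(), [next(iter(verts))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] & verts)
    return seen == verts

def cuts(adj, S):
    """Is S a cutting set, i.e. S != V and G - S disconnected?"""
    rest = set(adj) - set(S)
    return bool(rest) and not is_connected(adj, rest)

def minimize(adj, S):
    """Shrink a cutting set S to an S0 from which no single vertex can be dropped."""
    S0, changed = set(S), True
    while changed:
        changed = False
        for x in list(S0):
            if cuts(adj, S0 - {x}):
                S0.remove(x)
                changed = True
    return S0

# Example: path a-b-c-d; {b, c} cuts it, and minimization keeps one cut vertex.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
S_min = minimize(adj, {"b", "c"})
assert len(S_min) == 1 and cuts(adj, S_min)
```

The loop terminates because $S_0$ strictly shrinks; for infinite graphs this argument fails, which is what makes the question interesting.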
When the stock price follows a GBM, the arbitrage-free value of a European call is given by the Black-Scholes model: \begin{align} C(S, t) &= N(d_1)S_0 - N(d_2) Ke^{-r(T - t)} \\ d_1 &= \frac{1}{\sigma\sqrt{T - t}}\left[\ln\left(\frac{S_0}{K}\right) + \left(r + \frac{\sigma^2}{2}\right)(T - t)\right] \\ d_2 &= d_1 - \sigma\sqrt{T - t}\end{align} The stock price is given by$$S_t=S_0e^{(r-\sigma^2/2)t+\sigma W_t},\quad W_t\sim N(0,t)$$ $S_t$ is the only random term and it is log-normally distributed. The quantile of its distribution needed for the VaR can be calculated numerically, e.g. by the MATLAB logninv function (assuming 250 trading days and a 3-day horizon $t = 3/250$):$$VaR_\alpha^S=\text{logninv}\left(\alpha,\;\mu_S=\ln S_0 + (r-\sigma^2/2)\,t,\;\sigma_S=\sigma\sqrt{t}\right)$$ As we know, the call option delta is positive, so the option value falls and rises with the stock price. Hence the option VaR follows as: $$VaR_\alpha^C=C(S_0,0)-C(VaR_\alpha^S, 3/250)$$ which corresponds to the loss at the $\alpha$ quantile. The option value also falls deterministically with decreasing time to maturity, as represented by the theta Greek:$$\Theta(t)=\frac{\partial C}{\partial t}= -\frac{S_0 N'(d_1) \sigma}{2 \sqrt{T - t}} - rKe^{-r(T - t)}N(d_2)$$That loss is, however, already included in calculating the difference $C(S_0,0)-C(VaR_\alpha^S, 3/250)$.
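A minimal Python sketch of this calculation (using `statistics.NormalDist` in place of MATLAB's `logninv`; all parameter values are illustrative assumptions, not taken from the text):

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call with time to maturity tau."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return N(d1) * S - N(d2) * K * exp(-r * tau)

# Illustrative parameters (assumptions): 3-day horizon on a 1-year ATM call
S0, K, r, sigma, T, t, alpha = 100.0, 100.0, 0.05, 0.2, 1.0, 3 / 250, 0.01

# alpha-quantile of the log-normal stock price (inverse CDF of ln S_t)
z = NormalDist().inv_cdf(alpha)
S_q = S0 * exp((r - sigma**2 / 2) * t + sigma * sqrt(t) * z)

# Option VaR: today's price minus the revalued option at the stock quantile,
# 3 days later (time decay is therefore included in the difference).
var_call = bs_call(S0, K, r, sigma, T) - bs_call(S_q, K, r, sigma, T - t)
```

Because the call delta is positive, a low stock quantile ($\alpha$ small) produces a positive `var_call`.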
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...

Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...

Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...

Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...

Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...

Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...

Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...

Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02 TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...

Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...

Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...

$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...

Longitudinal asymmetry and its effect on pseudorapidity distributions in Pb–Pb collisions at √sNN = 2.76 TeV (Elsevier, 2018-03-22) First results on the longitudinal asymmetry and its effect on the pseudorapidity distributions in Pb–Pb collisions at √sNN = 2.76 TeV at the Large Hadron Collider are obtained with the ALICE detector. The longitudinal ...

Production of deuterons, tritons, 3He nuclei, and their antinuclei in pp collisions at √s=0.9, 2.76, and 7 TeV (American Physical Society, 2018-02) Invariant differential yields of deuterons and antideuterons in pp collisions at √s = 0.9, 2.76 and 7 TeV and the yields of tritons, 3He nuclei, and their antinuclei at √s = 7 TeV have been measured with the ALICE ...
Since $f:G\to \Z$ is surjective, there exists an element $a\in G$ such that\[f(a)=1.\] Let $H=\langle a \rangle$ be the subgroup of $G$ generated by the element $a$. We show that $G\cong \ker(f)\times H$. To prove this isomorphism, it suffices to prove the following three conditions. 1. The subgroups $\ker(f)$ and $H$ are normal in $G$. 2. The intersection is trivial: $\ker(f) \cap H=\{e\}$, where $e$ is the identity element of $G$. 3. Every element of $G$ is a product of elements of $\ker(f)$ and $H$. That is, $G=\ker(f)H$. The first condition follows immediately since the group $G$ is abelian, hence all the subgroups of $G$ are normal. To check condition 2, let $x\in \ker(f) \cap H$. Then $x=a^n$ for some $n\in \Z$ and we have\begin{align*}0&=f(x) && \text{since $x \in \ker(f)$}\\&=f(a^n)\\&=nf(a) && \text{since $f$ is a homomorphism}\\&=n &&\text{since $f(a)=1$}.\end{align*} Thus $x=a^0=e$, and hence $\ker(f) \cap H=\{e\}$, so condition 2 is met. To prove condition 3, let $b$ be an arbitrary element in $G$ and let $n=f(b) \in \Z$. Then we have\[f(b)=n=f(a^n),\] and thus\[f(ba^{-n})=0.\] It follows that $ba^{-n}\in \ker(f)$, so there exists $z\in \ker(f)$ such that $ba^{-n}=z$. Therefore we have\begin{align*}b=za^n\in \ker(f)H.\end{align*} This implies that $G=\ker(f)H$. We have proved all the conditions, hence we obtain\[G\cong \ker(f)\times H.\] Since $H$ is a cyclic group of infinite order, it is isomorphic to $\Z$. (If $H$ had finite order, then there would exist a positive integer $n$ such that $a^n=e$. Then we would have\begin{align*}0=f(e)=f(a^n)=nf(a)=n,\end{align*} contradicting the positivity of $n$.) Combining these isomorphisms, we have\[G\cong \ker(f)\times \Z,\]as required.
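The decomposition used in the proof of condition 3 ($b = z a^n$ with $z \in \ker(f)$ and $n = f(b)$) can be checked on a concrete example. A small sketch, under the assumed example $G = \Z \times \Z/2\Z$ written additively, with $f$ the first projection and $a = (1, 0)$:

```python
def f(g):
    """Surjective homomorphism G = Z x Z/2Z -> Z, the first projection."""
    return g[0]

def add(g, h):
    """Group operation of G = Z x Z/2Z (written additively)."""
    return (g[0] + h[0], (g[1] + h[1]) % 2)

def decompose(b):
    """Write b = z + n*a with z in ker(f) and n = f(b), where a = (1, 0)."""
    n = f(b)
    z = add(b, (-n, 0))  # b - n*a
    return z, n

z, n = decompose((5, 1))
assert f(z) == 0                   # z lies in the kernel
assert add(z, (n, 0)) == (5, 1)    # z + n*a recovers b
```

Here $\ker(f) = \{0\} \times \Z/2\Z \cong \Z/2\Z$, matching the conclusion $G \cong \ker(f) \times \Z$.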
Commutative Laws of Sets

We will now look at the commutative laws between two sets. These proofs are relatively straightforward. Theorem 1 (Commutative Law for the Union of Two Sets): If $A$ and $B$ are sets then $A \cup B = B \cup A$. Proof: Suppose that $x \in A \cup B$. Then $x \in A$ or $x \in B$, which is the same as saying that $x \in B$ or $x \in A$, so $x \in B \cup A$. Hence $A \cup B \subseteq B \cup A$, and by the same argument with $A$ and $B$ interchanged, $B \cup A \subseteq A \cup B$. Therefore $A \cup B = B \cup A$. $\blacksquare$ Theorem 2 (Commutative Law for the Intersection of Two Sets): If $A$ and $B$ are sets then $A \cap B = B \cap A$. Proof: Suppose that $x \in A \cap B$. Then $x$ is in both $A$ and $B$, so $x$ is in both $B$ and $A$, that is, $x \in B \cap A$. The reverse inclusion follows by interchanging $A$ and $B$. Therefore $A \cap B = B \cap A$. $\blacksquare$
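These laws are easy to see with Python's built-in sets (an illustrative example, not from the text):

```python
# Commutativity of union and intersection on concrete sets
A = {1, 2, 3}
B = {3, 4}

assert A | B == B | A == {1, 2, 3, 4}   # A ∪ B = B ∪ A
assert A & B == B & A == {3}            # A ∩ B = B ∩ A
```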
Comparability and permutation graphs¶

The following methods are implemented in this module: is_comparability_MILP() Tests whether the graph is a comparability graph (MILP) greedy_is_comparability() Tests whether the graph is a comparability graph (greedy algorithm) greedy_is_comparability_with_certificate() Tests whether the graph is a comparability graph and returns certificates (greedy algorithm) is_comparability() Tests whether the graph is a comparability graph is_permutation() Tests whether the graph is a permutation graph. is_transitive() Tests whether the digraph is transitive. Author: Nathann Cohen 2012-04

Graph classes¶ Comparability graphs A graph is a comparability graph if it can be obtained from a poset by adding an edge between any two elements that are comparable. Co-comparability graphs are complements of such graphs, i.e. graphs built from a poset by adding an edge between any two incomparable elements. For more information on comparability graphs, see the Wikipedia article Comparability_graph. Permutation graphs Definitions: A permutation \(\pi = \pi_1\pi_2\dots\pi_n\) defines a graph on \(n\) vertices such that \(i\sim j\) when \(\pi\) reverses \(i\) and \(j\) (i.e. when \(i<j\) and \(\pi_j < \pi_i\)). A graph is a permutation graph whenever it can be built through this construction. A graph is a permutation graph if it can be built from two parallel lines as the intersection graph of segments intersecting both lines. A graph is a permutation graph if it is both a comparability graph and a co-comparability graph. For more information on permutation graphs, see the Wikipedia article Permutation_graph.

Recognition algorithm for comparability graphs¶ Greedy algorithm This algorithm attempts to build a transitive orientation of a given graph \(G\), that is, an orientation \(D\) such that for any directed \(uv\)-path of \(D\) there exists in \(D\) an edge \(uv\).
This already determines a notion of equivalence between some edges of \(G\): in \(G\), two edges \(uv\) and \(uv'\) (incident to a common vertex \(u\)) such that \(vv'\not\in G\) must necessarily be oriented the same way (that is, they should either both leave or both enter \(u\)). Indeed, if one enters \(u\) while the other leaves it, these two edges form a path of length two, which is not possible in any transitive orientation of \(G\) as \(vv'\not\in G\). Hence, we can say that in this case a directed edge \(uv\) is equivalent to a directed edge \(uv'\) (meaning that if one belongs to the transitive orientation, the other one must be present too), in the same way that \(vu\) is equivalent to \(v'u\). We can thus define equivalence classes on oriented edges, to represent sets of edges that imply each other, and define \(C^G_{uv}\) to be the equivalence class in \(G\) of the oriented edge \(uv\). Of course, if there exists a transitive orientation of a graph \(G\), then no edge \(uv\) implies its contrary \(vu\), i.e. it is necessary to ensure that \(\forall uv\in G, vu\not\in C^G_{uv}\). The key result on which the greedy algorithm is built is the following (see [Cleanup]): Theorem – The following statements are equivalent: \(G\) is a comparability graph; \(\forall uv\in G, vu\not\in C^G_{uv}\); the edges of \(G\) can be partitioned into \(B_1,...,B_k\) where \(B_i\) is the equivalence class of some oriented edge in \(G-B_1-\dots-B_{i-1}\). Hence, ensuring that a graph is a comparability graph can be done by checking that no equivalence class is contradictory. Building the orientation, however, requires building equivalence classes step by step until an orientation has been found for all of them. Mixed Integer Linear Program A MILP formulation is available to check the other methods for correctness. It is easily built: to each edge are associated two binary variables (one for each possible direction).
We then ensure that each triangle is transitively oriented, and that each pair of incident edges \(uv, uv'\) such that \(vv'\not\in G\) does not create a 2-path. Here is the formulation: Note The MILP formulation is usually much slower than the greedy algorithm. This MILP has been implemented to check the results of the greedy algorithm, which has itself been implemented to check the results of a faster algorithm which has not been implemented yet.

Certificates¶ Comparability graphs The yes-certificates that a graph is a comparability graph are transitive orientations of it. The no-certificates, on the other hand, are odd cycles of such a graph. These odd cycles have the property that around each vertex \(v\) of the cycle its two incident edges must have the same orientation (toward \(v\), or outward \(v\)) in any transitive orientation of the graph. This is impossible whenever the cycle has odd length. Explanations are given in the "Greedy algorithm" part of the previous section. Permutation graphs Permutation graphs are precisely the intersection of comparability graphs and co-comparability graphs. Hence, negative certificates are precisely negative certificates of comparability or co-comparability. Positive certificates are a pair of permutations that can be used through PermutationGraph() (whose documentation says more about what these permutations represent).

Implementation details¶ Test that the equivalence classes are not self-contradictory This is done by a call to Graph.is_bipartite(), and here is how: Around a vertex \(u\), any two edges \(uv, uv'\) such that \(vv'\not\in G\) are equivalent. Hence, the equivalence classes of edges around a vertex are precisely the connected components of the complement of the graph induced by the neighbors of \(u\). In each equivalence class (around a given vertex \(u\)), the edges should all have the same orientation, i.e. all should go toward \(u\) at the same time, or leave it at the same time.
To represent this, we create a graph with vertices for all equivalence classes around all vertices of \(G\), and link \((v, C)\) to \((u, C')\) if \(u\in C\) and \(v\in C'\). A bipartite coloring of this graph with colors 0 and 1 tells us that the edges of an equivalence class \(C\) around \(u\) should be directed toward \(u\) if \((u, C)\) is colored with \(0\), and outward if \((u, C)\) is colored with \(1\). If the graph is not bipartite, this is the proof that some equivalence class is self-contradictory! Note The greedy algorithm implemented here is just there to check the correctness of more complicated ones, and it is reaaaaaaaaaaaalllly bad whenever you look at it with performance in mind.

References¶ [ATGA] Advanced Topics in Graph Algorithms, Ron Shamir, http://www.cs.tau.ac.il/~rshamir/atga/atga.html [Cleanup] (1, 2) A cleanup on transitive orientation, Orders, Algorithms, and Applications, 1994, Simon, K. and Trunz, P., ftp://ftp.inf.ethz.ch/doc/papers/ti/ga/ST94.ps.gz

Methods¶ sage.graphs.comparability.greedy_is_comparability( g, no_certificate=False, equivalence_class=False)¶ Tests whether the graph is a comparability graph (greedy algorithm). This method only returns no-certificates. To understand how this method works, please consult the documentation of the comparability module. INPUT: no_certificate – whether to return a no-certificate when the graph is not a comparability graph. This certificate is an odd cycle of edges, each of which implies the next. It is set to False by default. equivalence_class – whether to return an equivalence class if the graph is a comparability graph. OUTPUT: If the graph is a comparability graph and no_certificate = False, this method returns True or (True, an_equivalence_class) according to the value of equivalence_class. If the graph is not a comparability graph, this method returns False or (False, odd_cycle) according to the value of no_certificate.
EXAMPLES: The Petersen Graph is not transitively orientable: sage: from sage.graphs.comparability import greedy_is_comparability as is_comparability sage: g = graphs.PetersenGraph() sage: is_comparability(g) False sage: is_comparability(g, no_certificate=True) # py2 (False, [0, 4, 9, 6, 1, 0]) sage: is_comparability(g, no_certificate=True) # py3 (False, [2, 1, 0, 4, 3, 2]) But the Bull graph is: sage: g = graphs.BullGraph() sage: is_comparability(g) True sage.graphs.comparability.greedy_is_comparability_with_certificate( g, certificate=False)¶ Tests whether the graph is a comparability graph and returns certificates (greedy algorithm). This method can return certificates of both yes and no answers. To understand how this method works, please consult the documentation of the comparability module. INPUT: certificate (boolean) – whether to return a certificate. For yes-answers the certificate is a transitive orientation of \(G\), and for no-answers it is an odd cycle of sequentially forcing edges. EXAMPLES: The 5-cycle or the Petersen Graph are not transitively orientable: sage: from sage.graphs.comparability import greedy_is_comparability_with_certificate as is_comparability sage: is_comparability(graphs.CycleGraph(5), certificate=True) # py2 (False, [1, 2, 3, 4, 0, 1]) sage: is_comparability(graphs.CycleGraph(5), certificate=True) # py3 (False, [2, 1, 0, 4, 3, 2]) sage: g = graphs.PetersenGraph() sage: is_comparability(g) False sage: is_comparability(g, certificate=True) # py2 (False, [0, 4, 9, 6, 1, 0]) sage: is_comparability(g, certificate=True) # py3 (False, [2, 1, 0, 4, 3, 2]) But the Bull graph is: sage: g = graphs.BullGraph() sage: is_comparability(g) True sage: is_comparability(g, certificate = True) (True, Digraph on 5 vertices) sage: is_comparability(g, certificate = True)[1].is_transitive() True sage.graphs.comparability.
is_comparability( g, algorithm='greedy', certificate=False, check=True, solver=None, verbose=0)¶ Tests whether the graph is a comparability graph. INPUT: algorithm – choose the implementation used to do the test. "greedy" – a greedy algorithm (see the documentation of the comparability module). "MILP" – a Mixed Integer Linear Program formulation of the problem. Beware: this implementation is unable to return negative certificates! When certificate = True, negative certificates are always equal to None. True certificates are valid, though. certificate (boolean) – whether to return a certificate. For yes-answers the certificate is a transitive orientation of \(G\), and for no-answers it is an odd cycle of sequentially forcing edges. check (boolean) – whether to check that the yes-certificates are indeed transitive. As it is very quick compared to the rest of the operation, it is enabled by default. solver – (default: None); Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve() of the class MixedIntegerLinearProgram. verbose – integer (default: 0); sets the level of verbosity. Set to 0 by default, which means quiet. EXAMPLES: sage: from sage.graphs.comparability import is_comparability sage: g = graphs.PetersenGraph() sage: is_comparability(g) False sage: is_comparability(graphs.CompleteGraph(5), certificate=True) (True, Digraph on 5 vertices) sage.graphs.comparability.is_comparability_MILP( g, certificate=False, solver=None, verbose=0)¶ Tests whether the graph is a comparability graph (MILP). INPUT: certificate (boolean) – whether to return a certificate for yes instances. This method cannot return negative certificates. solver – (default: None); Specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve() of the class MixedIntegerLinearProgram. verbose – integer (default: 0); sets the level of verbosity. Set to 0 by default, which means quiet. EXAMPLES: The 5-cycle or the Petersen Graph are not transitively orientable: sage: from sage.graphs.comparability import is_comparability_MILP as is_comparability sage: is_comparability(graphs.CycleGraph(5), certificate = True) (False, None) sage: g = graphs.PetersenGraph() sage: is_comparability(g, certificate = True) (False, None) But the Bull graph is: sage: g = graphs.BullGraph() sage: is_comparability(g) True sage: is_comparability(g, certificate = True) (True, Digraph on 5 vertices) sage: is_comparability(g, certificate = True)[1].is_transitive() True sage.graphs.comparability.is_permutation( g, algorithm='greedy', certificate=False, check=True, solver=None, verbose=0)¶ Tests whether the graph is a permutation graph. For more information on permutation graphs, refer to the documentation of the comparability module. INPUT: algorithm – choose the implementation used for the subcalls to is_comparability(). "greedy" – a greedy algorithm (see the documentation of the comparability module). "MILP" – a Mixed Integer Linear Program formulation of the problem. Beware: this implementation is unable to return negative certificates! When certificate = True, negative certificates are always equal to None. True certificates are valid, though. certificate (boolean) – whether to return a certificate for the answer given. For True answers the certificate is a permutation, for False answers it is a no-certificate for the test of comparability or co-comparability. check (boolean) – whether to check that the permutations returned indeed create the expected Permutation graph. Pretty cheap compared to the rest, hence a good investment. It is enabled by default. solver – (default: None); Specify a Linear Program (LP) solver to be used.
If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve() of the class MixedIntegerLinearProgram. verbose – integer (default: 0); sets the level of verbosity. Set to 0 by default, which means quiet. EXAMPLES: A permutation realizing the bull graph: sage: from sage.graphs.comparability import is_permutation sage: g = graphs.BullGraph() sage: _ , certif = is_permutation(g, certificate=True) sage: h = graphs.PermutationGraph(*certif) sage: h.is_isomorphic(g) True Plotting the realization as an intersection graph of segments: sage: true, perm = is_permutation(g, certificate=True) sage: p1 = Permutation([nn+1 for nn in perm[0]]) sage: p2 = Permutation([nn+1 for nn in perm[1]]) sage: p = p2 * p1.inverse() sage: p.show(representation = "braid") sage.graphs.comparability.is_transitive( g, certificate=False)¶ Tests whether the digraph is transitive. A digraph is transitive if for any pair of vertices \(u,v\in G\) linked by a \(uv\)-path the edge \(uv\) belongs to \(G\). INPUT: certificate – whether to return a certificate for negative answers. If certificate = False (default), this method returns True or False according to the graph. If certificate = True, this method either returns True or yields a pair of vertices \(uv\) such that there exists a \(uv\)-path in \(G\) but \(uv\not\in G\). EXAMPLES: sage: digraphs.Circuit(4).is_transitive() False sage: digraphs.Circuit(4).is_transitive(certificate=True) (0, 2) sage: digraphs.RandomDirectedGNP(30,.2).is_transitive() False sage: D = digraphs.DeBruijn(5, 2) sage: D.is_transitive() False sage: cert = D.is_transitive(certificate=True) sage: D.has_edge(*cert) False sage: D.shortest_path(*cert) != [] True sage: digraphs.RandomDirectedGNP(20,.2).transitive_closure().is_transitive() True
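As a rough illustration of the "Implementation details" section above, the equivalence classes of edges around a vertex \(u\) are the connected components of the complement of the graph induced on \(u\)'s neighbors. A minimal plain-Python sketch (dict-of-sets adjacency, an assumed representation, not Sage's API):

```python
def classes_around(adj, u):
    """Equivalence classes of edges incident to u, computed as the connected
    components of the complement of the graph induced on u's neighbors."""
    neigh = set(adj[u])
    seen, classes = set(), []
    for start in neigh:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            # v' is equivalent to v when vv' is NOT an edge of the graph
            stack.extend(w for w in neigh - comp if w not in adj[v])
        seen |= comp
        classes.append(comp)
    return classes

# Path a-u-b (edge ab missing): the two edges around u form a single class,
# so they must be oriented the same way in any transitive orientation.
assert classes_around({"u": {"a", "b"}, "a": {"u"}, "b": {"u"}}, "u") == [{"a", "b"}]
# Triangle: a and b are adjacent, so each edge around u is its own class.
assert sorted(map(sorted, classes_around({"u": {"a", "b"}, "a": {"u", "b"}, "b": {"u", "a"}}, "u"))) == [["a"], ["b"]]
```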
Localization of blow-up points for a nonlinear nonlocal porous medium equation

1. Department of Mathematics, Sun Yat-sen University, Guangzhou 510275, China

$u_t=\Delta u^m + au^p\int_\Omega u^q dx,\quad x\in \Omega,\ t>0$

subject to homogeneous Dirichlet boundary conditions. We investigate the influence of the nonlocal source and the local term on the blow-up properties of this system. It is proved that: (i) when $p\leq 1$, the nonlocal source plays the dominating role, i.e. the system has global blow-up, and the blow-up profile, which is uniformly away from the boundary, is obtained at either polynomial or logarithmic scale; (ii) when $p > m$, the system presents a single blow-up pattern. In other words, the local term dominates the nonlocal term in the blow-up profile. This extends the work of Li and Xie in Appl. Math. Lett., 16 (2003), 185-192.

Mathematics Subject Classification: 35B40, 35K6.

Citation: Lili Du, Zheng-An Yao. Localization of blow-up points for a nonlinear nonlocal porous medium equation. Communications on Pure & Applied Analysis, 2007, 6 (1): 183-190. doi: 10.3934/cpaa.2007.6.183
Large time behavior of ODE type solutions to nonlinear diffusion equations

1. Mathematical Institute, Tohoku University, Aoba, Sendai 980-8578, Japan
2. Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan

$ \begin{equation} \left\{ \begin{array}{ll} \partial_t u = \Delta u^m+u^\alpha & \quad\mbox{in}\quad{\bf R}^N\times(0,\infty),\\ u(x,0) = \lambda+\varphi(x)>0 & \quad\mbox{in}\quad{\bf R}^N, \end{array} \right. \end{equation} $

where $m>0$, $\alpha\in(-\infty,1)$, $\lambda>0$ and $\varphi\in BC({\bf R}^N)\,\cap\, L^r({\bf R}^N)$ with $1\le r<\infty$ and $\inf_{x\in{\bf R}^N}\varphi(x)>-\lambda$. The solution behaves for large times like the solution of the ODE $\zeta' = \zeta^\alpha$ on $(0,\infty)$, which tends to $+\infty$ as $t\to\infty$.

Keywords: ODE type solutions, nonlinear diffusion equation, large time behavior, higher order asymptotic expansions, Gauss kernel.

Mathematics Subject Classification: Primary: 35B40, 35K55.

Citation: Junyong Eom, Kazuhiro Ishige. Large time behavior of ODE type solutions to nonlinear diffusion equations. Discrete & Continuous Dynamical Systems - A. doi: 10.3934/dcds.2019229
Note first that those $X_i = 0$ contribute nothing to the sum. We can exclude them by recalling that if we generate a Poisson variate $x$ with mean $\lambda$ and then a Binomial $z$ with probability parameter $p$ and size parameter equal to $x$, then $z \sim \text{Poisson}(p\lambda)$; consequently the number of nonzero elements of our sum, call it $M$, is distributed $\text{Poisson}((1-p_1)\lambda)$. Now we can see that we have a problem: we can't distinguish between different $(p_1, \lambda)$ pairs, because we don't know how many of the $X_i = 0$ in each sample, and we have no way of finding out. $\lambda = 1{,}000$ and $p_1 = 0.99$ are indistinguishable in terms of the final distribution from $\lambda = 10$ and $p_1 = 0$; the distribution of $M$, and therefore the distribution of the sum itself, is the same in either case. Thus, the parameters are not identifiable on the basis of the collected sample statistics (although if we also collected the number of zero observations in each sample, they would be.) If we alter the problem to the tractable one of $X_i \in \{1,-1\}$ with probabilities $(p, 1-p)$ instead of the tri-valued $X_i$ in the original problem, then we can apply method of moments estimation based upon the first two moments, or maximum likelihood, or something else. The MOM estimator is based on equating the sample mean and variance to the population moments; in this case $\mathbb{E}(x) = \lambda(2p-1)$, and $\sigma^2(x)$ can be found by noting that the variance of a single observation of $X$ equals $4p(1-p)$, so the variance of the sum of $M$ observations given $M$ is $4p(1-p)M$, and a little more math gets us to $\sigma^2(x) = 4p(1-p)\lambda + (2p-1)^2\lambda = \lambda$.
This latter result is surprising, at least it surprised me, so after triple-checking the math ($p(\text{correct})$ still $< 1$, of course), I simulated it:

x <- rep(0, 1000000)
p <- 0.3
lambda <- 5
for (j in 1:length(x)) {
  M <- rpois(1, lambda)
  z <- rbinom(1, M, p)
  x[j] <- z - (M - z)
}

> mean(x)
[1] -1.997802
> var(x)
[1] 5.008178

At this point how to calculate the MOM estimate should be clear. Note that $\lambda$ in this problem is equivalent to $(1-p_1)\lambda$ in the original problem; we can estimate the original problem's $(1-p_1)\lambda$, but not the individual components thereof.
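For concreteness, here is one way the simulation and the resulting MOM estimates could look in Python/NumPy rather than the R above (the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate x = sum of M signed +/-1 terms: x = z - (M - z) = 2z - M,
# with M ~ Poisson(lambda) and z ~ Binomial(M, p).
p, lam, n = 0.3, 5.0, 200_000
M = rng.poisson(lam, size=n)
z = rng.binomial(M, p)
x = 2 * z - M

# Method of moments: equate the sample moments to
# E[x] = lambda*(2p - 1) and Var(x) = lambda.
lam_hat = x.var()
p_hat = (x.mean() / lam_hat + 1) / 2
```

With the parameters above, `lam_hat` should land near 5 and `p_hat` near 0.3.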
The Riemann Sphere

Definition: The Riemann Sphere denoted $\mathbb{C}_{\infty} = \mathbb{C} \cup \{ \infty \}$ is the topological space obtained by adjoining the single point $\infty$ to $\mathbb{C}$.

We can readily define a very simple two chart atlas on $\mathbb{C}_{\infty}$, call it $\mathcal M = \{ (U_0, \phi_0), (U_{\infty}, \phi_{\infty}) \}$ where:(1)

\begin{align} U_0 = \mathbb{C}, \quad \phi_0(z) = z \quad \mathrm{and} \quad U_{\infty} = \mathbb{C}_{\infty} \setminus \{ 0 \}, \quad \phi_{\infty}(z) = \begin{cases} 1/z & \mathrm{if} \: z \neq \infty \\ 0 & \mathrm{if} \: z = \infty \end{cases} \end{align}

Clearly $\phi_0$ and $\phi_{\infty}$ are homeomorphisms onto their images. Now, the Riemann sphere can be modelled via stereographic projection. Consider the unit sphere centered at the origin in $\mathbb{R}^3$ with equation $x_1^2 + x_2^2 + x_3^2 = 1$ with north pole $(x_1, x_2, x_3) = (0, 0, 1)$ and identify the $x_1x_2$-plane embedded in $\mathbb{R}^3$ with the complex plane $\mathbb{C}$. For every complex number $z = x + iy$ there is a corresponding point $(x_1, x_2, x_3)$ on the sphere, and this point is given by the equations:(2)

\begin{align} x_1 = \frac{2x}{x^2 + y^2 + 1}, \quad x_2 = \frac{2y}{x^2 + y^2 + 1}, \quad x_3 = \frac{x^2 + y^2 - 1}{x^2 + y^2 + 1} \end{align}
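As a quick sanity check of these projection equations, a small Python sketch (the function name is mine, assuming NumPy is available):

```python
import numpy as np

def to_sphere(z: complex) -> np.ndarray:
    """Map z = x + iy to the corresponding point (x1, x2, x3) on the
    unit sphere via inverse stereographic projection from (0, 0, 1)."""
    x, y = z.real, z.imag
    d = x * x + y * y + 1.0
    return np.array([2 * x / d, 2 * y / d, (x * x + y * y - 1.0) / d])

print(to_sphere(0j))      # the origin maps to the south pole (0, 0, -1)
print(to_sphere(1 + 1j))  # some other point on the unit sphere
```

Points of large modulus map close to the north pole, matching the identification of $\infty$ with $(0, 0, 1)$.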
Linearly Independent Vectors and the Vector Space Spanned By Them

Problem 141

Let $V$ be a vector space over a field $K$. Let $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$ be linearly independent vectors in $V$. Let $U$ be the subspace of $V$ spanned by these vectors, that is, $U=\Span \{\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n\}$. Let $\mathbf{u}_{n+1}\in V$. Show that $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly independent if and only if $\mathbf{u}_{n+1} \not \in U$.

$(\implies)$ Suppose that the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly independent. If $\mathbf{u}_{n+1}\in U$, then $\mathbf{u}_{n+1}$ is a linear combination of $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$. Thus, we have\[\mathbf{u}_{n+1}=c_1\mathbf{u}_1+c_2\mathbf{u}_2+\cdots+c_n \mathbf{u}_n\]for some scalars $c_1, c_2, \dots, c_n \in K$. However, this implies that we have a nontrivial linear combination\[c_1\mathbf{u}_1+c_2\mathbf{u}_2+\cdots+c_n \mathbf{u}_n-\mathbf{u}_{n+1}=\mathbf{0}.\]This contradicts that $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly independent. Hence $\mathbf{u}_{n+1} \not \in U$.

$(\impliedby)$ Suppose now that $\mathbf{u}_{n+1} \not \in U$. If the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly dependent, then there exist $c_1, c_2, \dots, c_n, c_{n+1}\in K$, not all zero, such that\[c_1\mathbf{u}_1+c_2\mathbf{u}_2+\cdots+c_n \mathbf{u}_n+c_{n+1}\mathbf{u}_{n+1}=\mathbf{0}.\]

We claim that $c_{n+1} \neq 0$. If $c_{n+1}=0$, then we have\[c_1\mathbf{u}_1+c_2\mathbf{u}_2+\cdots+c_n \mathbf{u}_n=\mathbf{0}\]and since $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$ are linearly independent, we must have $c_1=c_2=\cdots=c_n=0$. This means that all $c_i$ are zero, but this contradicts our choice of $c_i$. Thus $c_{n+1} \neq 0$.
Then we have\[\mathbf{u}_{n+1}=\frac{-c_1}{c_{n+1}}\mathbf{u}_1+\frac{-c_2}{c_{n+1}}\mathbf{u}_2+\cdots+\frac{-c_n}{c_{n+1}}\mathbf{u}_n.\](Note: we needed to check $c_{n+1} \neq 0$ to divide by it.) This implies that $\mathbf{u}_{n+1}$ is a linear combination of the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n$, and thus $\mathbf{u}_{n+1} \in U$, a contradiction. Therefore, the vectors $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n, \mathbf{u}_{n+1}$ are linearly independent. $\blacksquare$
Associativity and Commutativity of Binary Operations

Recall from the Unary and Binary Operations on Sets page that a binary operation on a set $S$ is a function $f : S \times S \to S$ that takes every pair of elements $(x, y) \in S \times S$ (for $x, y \in S$) and maps it to an element in $S$. Sometimes these operations, which we will now denote by $*$ (as opposed to $f$), satisfy some useful properties which we define below.

Definition: An operation $*$ on a set $S$ is said to be Associative or satisfy the Associativity Property if for all $a, b, c \in S$ we have that $a * (b * c) = (a * b) * c$, and otherwise, $*$ is said to be Nonassociative.

By definition, a binary operation can be applied to only two elements in $S$ at once. Therefore, an operation is said to be associative if the order in which we choose to first apply the operation amongst $3$ elements in $S$ does not affect the outcome of the operation. For example, if we consider the set $\mathbb{R}$ then standard addition is associative since for all $a, b, c \in \mathbb{R}$ we have that:(1)

\begin{align} a + (b + c) = (a + b) + c \end{align}

Similarly, standard multiplication is associative on $\mathbb{R}$ because the order of operations is not strict when it comes to multiplying out an expression that is solely multiplication, i.e.,:(2)

\begin{align} a \cdot (b \cdot c) = (a \cdot b) \cdot c \end{align}

For an example of a nonassociative operation, consider the operation $*$ defined by $* : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ and given for all $a, b \in \mathbb{R}$ as:(3)

\begin{align} a * b = (a + b)^2 \end{align}

Consider the elements $1, 2, 3 \in \mathbb{R}$. Then we have that:(4)

\begin{align} 1 * (2 * 3) = 1 * (2 + 3)^2 = 1 * 25 = (1 + 25)^2 = 676 \end{align}

We also have that:(5)

\begin{align} (1 * 2) * 3 = (1 + 2)^2 * 3 = 9 * 3 = (9 + 3)^2 = 144 \end{align}

Clearly $676 \neq 144$ and so $*$ is nonassociative on $\mathbb{R}$ since $a * (b * c) \neq (a * b) * c$ for $1, 2, 3 \in \mathbb{R}$.

Definition: An operation $*$ on a set $S$ is said to be Commutative or satisfy the Commutativity Property if for all $a, b \in S$ we have that $a * b = b * a$, and otherwise, $*$ is said to be Noncommutative.
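As a quick check of the nonassociativity computation above, a short Python sketch (the helper name `star` is mine):

```python
# Sanity check of the nonassociative operation a * b = (a + b)^2.
def star(a, b):
    return (a + b) ** 2

left = star(1, star(2, 3))   # 1 * (2 * 3) = (1 + 25)^2
right = star(star(1, 2), 3)  # (1 * 2) * 3 = (9 + 3)^2
print(left, right)  # 676 144
```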
Once again, standard addition on $\mathbb{R}$ is commutative since for all $a, b \in \mathbb{R}$ we have that:(6)

\begin{align} a + b = b + a \end{align}

And similarly, standard multiplication on $\mathbb{R}$ is commutative since:(7)

\begin{align} a \cdot b = b \cdot a \end{align}

Consider the example $* : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ given above as $a * b = (a + b)^2$. We saw that this operation was nonassociative, but it is also commutative since for all $a, b \in \mathbb{R}$ we have that:(8)

\begin{align} a * b = (a + b)^2 = (b + a)^2 = b * a \end{align}

A classic example of a noncommutative operation is matrix multiplication on $2 \times 2$ matrices. Let $* : M_{22} \times M_{22} \to M_{22}$ be the operation of standard matrix multiplication, which we've already defined for all matrices $A, B \in M_{22}$ as:(9)

\begin{align} A * B = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix} \end{align}

Now consider the matrices $A = \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix}$. We have that:(10)

\begin{align} A * B = \begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix} \end{align}

And also:(11)

\begin{align} B * A = \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix} \end{align}

Clearly $A * B \neq B * A$, and so matrix multiplication on $2 \times 2$ matrices is noncommutative.
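The same counterexample can be checked numerically; a minimal NumPy sketch:

```python
import numpy as np

# The matrices from the counterexample above.
A = np.array([[1, 0],
              [0, 0]])
B = np.array([[0, 1],
              [0, 0]])

print(A @ B)  # [[0 1], [0 0]]
print(B @ A)  # [[0 0], [0 0]]
```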
Zero Matrices

Definition: An $m \times n$ matrix $A$ is a Zero Matrix if all entries in the matrix are $0$, that is, $a_{ij} = 0$ for all $1 \le i \le m$ and $1 \le j \le n$, $i, j \in \mathbb{N}$.

The definition of a zero matrix is pretty self-explanatory. For example, a $2 \times 3$ zero matrix looks like this: $A = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$. Oftentimes zero matrices of size $m \times n$ are denoted simply by $0_{m \times n}$. We will now look at a rather simple theorem regarding various operations with the zero matrix.

Theorem 1: Let $A$ be an $m \times n$ matrix and let $0$ be the $m \times n$ zero matrix. Then:
a) $A + 0 = 0 + A = A$.
b) $0 - A = -A$.
c) $A - A = 0$.
d) $A \cdot 0 = 0$.

Proof of (a): Suppose that $A$ and $0$ are matrices of size $m \times n$. Then for all $i$ and $j$ we have $(A + 0)_{ij} = a_{ij} + 0 = a_{ij} = 0 + a_{ij} = (0 + A)_{ij}$, and thus $A + 0 = 0 + A = A$. $\blacksquare$

Proof of (b): We know by (a) that $A + 0 = 0 + A = A$. It follows that $0 - A = 0 + (-A) = -A$. $\blacksquare$

Proof of (c): Suppose $A$ is a matrix of size $m \times n$. Then $(A - A)_{ij} = a_{ij} - a_{ij} = 0$ for all $i$ and $j$, and thus $A - A = 0$. $\blacksquare$

Proof of (d): Recall that any entry of a product of two matrices can be determined by the formula $(AB)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + ... + a_{ir}b_{rj}$. Let $A$ be an $m \times r$ matrix and let $0$ be the $r \times n$ zero matrix, so that $b_{ij} = 0$ for all $i$ and $j$. Then $(A \cdot 0)_{ij} = a_{i1}\cdot 0 + a_{i2}\cdot 0 + ... + a_{ir} \cdot 0 = 0$ for all $i$ and $j$, and thus $A \cdot 0 = 0_{m \times n}$. $\blacksquare$
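Theorem 1 can be spot-checked numerically; a minimal NumPy sketch (the particular matrix entries are arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0, 3.0],
              [4.0,  0.0, 5.0]])
Z = np.zeros((2, 3))  # the 2 x 3 zero matrix

# (a) A + 0 = 0 + A = A, (b) 0 - A = -A, (c) A - A = 0
assert np.array_equal(A + Z, A) and np.array_equal(Z + A, A)
assert np.array_equal(Z - A, -A)
assert np.array_equal(A - A, Z)

# (d) A (m x r) times the r x n zero matrix gives the m x n zero matrix
assert np.array_equal(A @ np.zeros((3, 4)), np.zeros((2, 4)))
```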
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there my maturity levels are extremely variant in time, that's just way too much rope to give me considering it's only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them but still, the first one from well, almost a decade ago shows up as the default content in the search window 1,2,3,6,11,23,47,106,235 well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope you can't just let a lunatic like me start inventing terminology as I go oh well "what would cotton mathers do?" the chat room unanimously ponders lol i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway? or at least inform the room as to who is the big brother doing the censoring? No?
just suggestions trying to improve site functionality good sir relax im calm we are all calm A104101 is a hilarious entry as a side note, I love that Neil had to chime in for the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned just to save face for the website's integrity after plugging a tv series with a reference But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please very general advice for any number of topics for someone like yourself sir assuming gender because you should hate text based adam long ago if you were female or etc if its false then I apologise for the statistical approach to human interaction So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$ ? @MatheinBoulomenos So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field? (I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
(which is just the product of the integer and its conjugate) Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$ You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings (Plus I'm at work and am pretending I'm doing my job) Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example, $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit. @Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = \Delta(\mathcal O_K)\,[\mathcal O_K:\Bbb Z[\alpha]]^2$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$ this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$ the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$ (just as a quotient of additive groups, that quotient group is finite) in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$ there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well.
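A quick Python check of the norm facts above, using built-in complex numbers (the helper name is mine):

```python
# N(a + bi) = a^2 + b^2, the product of a Gaussian integer and its conjugate.
def norm(z: complex) -> int:
    return round(z.real ** 2 + z.imag ** 2)

alpha, beta = 1 + 2j, 3 + 1j
assert norm(alpha * beta) == norm(alpha) * norm(beta)  # N is totally multiplicative
assert (1 + 2j) * (1 - 2j) == 5 + 0j  # 5 = N(1 + 2i), so 5 is not prime in Z[i]
```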
Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus) @MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively. $\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first: By the Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$. The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$. @Secret that's a very lazy habit you should create a chat room for every purpose you can imagine take full advantage of the website's functionality as I do and leave the general purpose room for recommending art related to mathematics @MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Provided no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists... As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities. Eventually, a social heat death occurred, where no groups will generate creativity and other activity anymore Had this kind of thought when I noticed how many forums etc. have a golden age, and then died away, and at the more personal level, all people who first knew me generate a lot of activity, and then are destined to die away and grow distant roughly every 3 years Well i guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour Or more likely, we will need to start recognising machines as a new species and interact with them accordingly so covert operations AI may still exist, even as domestic AIs continue to become widespread It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other that is, until their processing power becomes so strong that they can outdo human thinking But, I am not
worried about that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners That is, we have become over-reliant on AI, and are not paying enough attention to whether they have interpreted the instructions correctly That's an extraordinary amount of unreferenced rhetorical statements i could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy. I was just genuinely curious How does a message like this come from someone who isn't trolling: "for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ...
with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise" Anyway feel free to continue, it just seems strange @Adam I'm genuinely curious what makes you annoyed or confused yes I was joking in the line that you referenced but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! so there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character wouldn't you agree? So, there are some weakened forms of associativity. Such as flexibility ($(xy)x=x(yx)$) or "alternativity" ($(xy)x=x(yy)$, iirc). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave. I.e. $a(bc)=(ab)c\iff a=c$) @RyanUnger You're the guy to ask for this sort of thing I think: If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use the Koszul formula? Or is there a smarter way? I realized today that the possible $x$ inputs to $\operatorname{Round}(x^{1/2})$ cover $x^{1/2+\epsilon}$. In other words, we can always find an $\epsilon$ (small enough) such that $x^{1/2} \neq x^{1/2+\epsilon}$ but at the same time have $\operatorname{Round}(x^{1/2})=\operatorname{Round}(x^{1/2+\epsilon})$. Am I right? We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), \quad n=0, \ldots , N-2, \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method. How can we show that the method is implicit?
Do we have to try to solve for $y^{n+2}$ as a function of $y^{n+1}$? @anakhro an energy function of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the corresponding eigenvalues of the matrix and then sum the absolute values of the eigenvalues. The energy function of the graph is defined for simple graphs by this summation of the absolute values of the eigenvalues
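That definition translates directly into code; a minimal NumPy sketch (the function name is mine):

```python
import numpy as np

def graph_energy(adj: np.ndarray) -> float:
    """Graph energy: the sum of the absolute values of the
    eigenvalues of the (symmetric) adjacency matrix."""
    eigvals = np.linalg.eigvalsh(adj)
    return float(np.abs(eigvals).sum())

# Complete graph K_3: adjacency eigenvalues are 2, -1, -1, so energy = 4.
K3 = np.ones((3, 3)) - np.eye(3)
print(graph_energy(K3))
```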
The Annals of Statistics, Volume 21, Number 4 (1993), 1663-1691. Incidental Versus Random Nuisance Parameters. Abstract: Let $\{P_{\vartheta,\eta}:(\vartheta, \eta) \in \Theta \times H\}$, with $\Theta \subset \mathbb{R}$ and $H$ arbitrary, be a family of mutually absolutely continuous probability measures on a measurable space $(X, \mathscr{A})$. The problem is to estimate $\vartheta$, based on a sample $(x_1, \cdots, x_n)$ from $\times^n_1 P_{\vartheta,\eta_\nu}$. If $(\eta_1, \cdots, \eta_n)$ are independently distributed according to some unknown prior distribution $\Gamma$, then the distribution of $n^{1/2}(\vartheta^{(n)} - \vartheta)$ under $P^n_{\vartheta,\Gamma}$ ($P_{\vartheta, \Gamma}$ being the $\Gamma$-mixture of $P_{\vartheta,\eta}$, $\eta \in H$) cannot be more concentrated asymptotically than a certain normal distribution with mean 0, say $N_{(0, \sigma^2_0(\vartheta,\Gamma))}$. Folklore says that such a bound is also valid if $(\eta_1, \cdots, \eta_n)$ are just unknown values of the nuisance parameter: in this case, the distribution cannot be more concentrated asymptotically than $N_{(0, \sigma^2_0(\vartheta,E^{(n)}_{(\eta_1, \cdots, \eta_n)}))}$, where $E^{(n)}_{(\eta_1, \cdots, \eta_n)}$ is the empirical distribution of $(\eta_1,\cdots, \eta_n)$. The purpose of the present paper is to discuss to what extent this conjecture is true. The results are summarized at the end of Sections 1 and 3. First available in Project Euclid: 12 April 2007. Citation: Pfanzagl, J. Incidental Versus Random Nuisance Parameters. Ann. Statist. 21 (1993), no. 4, 1663-1691. doi:10.1214/aos/1176349392. MR1245763. Zbl 0795.62029.
https://projecteuclid.org/euclid.aos/1176349392
Enhancement of the HWZ vertex in the three scalar doublet model. Presented by Ms. Diana Rojas. Content: We compute one-loop induced trilinear vertices with physical charged Higgs bosons $H^\pm$ and ordinary neutral gauge bosons, i.e., $H^\pm W^\mp Z$ and $H^\pm W^\mp \gamma$, in the model with two active plus one inert scalar doublet fields under a $Z_2(\text{unbroken})\times \tilde{Z}_2(\text{softly-broken})$ symmetry. The $Z_2$ and $\tilde{Z}_2$ symmetries are introduced to guarantee the stability of a dark matter candidate and to forbid flavour-changing neutral currents at the tree level, respectively. The dominant form factor $F_Z$ of the $H^\pm W^\mp Z$ vertex can be enhanced by non-decoupling effects of extra scalar boson loop contributions. We find that, in such a model, $|F_Z|^2$ can be one order of magnitude larger than that predicted in two Higgs doublet models under the constraints from vacuum stability, perturbative unitarity and the electroweak precision observables. In addition, the branching fraction of the $H^\pm \to W^\pm Z$ $(H^\pm \to W^\pm \gamma)$ mode can be at the 10 (1)% level when the mass of the $H^\pm$ is below the top quark mass. Such a light $H^\pm$ is allowed by the so-called Type-I and Type-X Yukawa interactions, which appear under the classification of the $\tilde{Z}_2$ charge assignment of the quarks and leptons. We also calculate the cross sections for the processes $H^\pm \to W^\pm Z$ and $H^\pm \to W^\pm \gamma$ initiated by the top quark decay $t\to H^\pm b$ and by electroweak $H^\pm$ production at the LHC.
The Derived Set of a Set in a Metric Space Recall from the Adherent, Accumulation and Isolated Points in Metric Spaces page that if $(M,d)$ is a metric space and $S \subseteq M$ then a point $x \in M$ is said to be an accumulation point of $S$ if for all $r > 0$ we have that $B(x, r) \cap (S \setminus \{ x \}) \neq \emptyset$. In other words, every ball centered at $x$ contains a point of $S$ different from $x$. We will now look at an important definition: the set of all accumulation points of a set in a metric space. Definition: Let $(M, d)$ be a metric space and let $S \subseteq M$. Then the Derived Set of $S$, denoted $S'$, is the set of all accumulation points of $S$. For example, consider the metric space $(\mathbb{R}, d)$ where $d$ is the usual Euclidean metric on $\mathbb{R}$ defined for all $x, y \in \mathbb{R}$ by $d(x, y) = \mid x - y \mid$, and consider the set $S = (0, 1) \cup \{ 2 \}$. Clearly every $x \in (0, 1)$ is an accumulation point of $S$ since for all $r > 0$, $B(x, r) \cap ((0, 1) \setminus \{ x \}) \neq \emptyset$. Furthermore, the points $0$ and $1$ are also accumulation points of $S$. However, $2$ is not an accumulation point of $S$ since $B \left (2, \frac{1}{2}\right ) \cap (S \setminus \{ 2 \}) = \emptyset$. Therefore $S' = [0, 1]$. The following theorem gives us a connection between the closure and derived set of a set $S$ in a metric space. Theorem 1: Let $(M, d)$ be a metric space and let $S \subseteq M$. Then $\bar{S} = S \cup S'$. Proof: Let $x \in \bar{S}$. Then $x$ is an adherent point of $S$, so $x$ is either an accumulation point or an isolated point. If $x$ is an accumulation point then $x \in S'$, and if $x$ is an isolated point then $x \in S$, so in either case, $x \in S \cup S'$ so $\bar{S} \subseteq S \cup S'$. Now let $x \in S \cup S'$. Then $x \in S$ or $x \in S'$. If $x \in S'$ then $x$ is an accumulation point of $S$ and so $x$ is an adherent point of $S$ too, so $x \in \bar{S}$.
If $x \in S \setminus S'$ then $x$ must be an isolated point, and so $x$ is also an adherent point of $S$, so $x \in \bar{S}$. In both cases, $x \in \bar{S}$ so $\bar{S} \supseteq S \cup S'$. Hence we conclude that $\bar{S} = S \cup S'$. $\blacksquare$ On The Closure of a Set in a Metric Space in Terms of the Boundary of a Set page we also saw that $\bar{S} = S \cup \partial S$.
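The example above can also be probed numerically. This is a heuristic sketch only (a finite grid can witness accumulation only down to its own resolution, and the names are mine, not from the page): approximate $S = (0,1) \cup \{2\}$ by a fine grid and test the ball condition for a few shrinking radii.

```python
# Heuristic check of the condition B(x, r) ∩ (S \ {x}) ≠ ∅ on a fine finite
# approximation of S = (0, 1) ∪ {2}.
sample = [k / 10_000 for k in range(1, 10_000)] + [2.0]

def looks_like_accumulation_point(x, radii=(0.1, 0.01, 0.001)):
    """True if every tested ball around x meets the sample minus {x}."""
    return all(any(0 < abs(s - x) < r for s in sample) for r in radii)

# 0, 1/2 and 1 behave like accumulation points of S, while 2 does not,
# matching S' = [0, 1].
```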
I learned here that I can use \\[<len>] to explicitly set the vertical skip space between lines, e.g., <len> set to 3ex. I'd like to set <len> to, say, 2-times the normal length in this environment (e.g., align or dcases) but I don't know the length parameter that determines this. Another example: I'd like to change the explicit 3ex below to some multiple of the vertical line-skip space in the align environment. Does anyone know what that is?

\documentclass{beamer}
\usepackage{amsmath}
\usepackage{mathtools}
\begin{document}
\begin{frame}
Some sample text
\begin{spreadlines}{3ex}
\begin{align*}
\pi(\mu \mid x) &= \frac{\pi(\mu)\,\mathcal{L}(x \mid \mu)}{p(x)} \\
&= \begin{dcases*}
\frac{(1-w)\phi(x)}{p(x)} & for \(\mu=0\) \\
\frac{w\gamma(\mu)\phi(x-\mu)}{p(x)} & for \(\mu \neq 0\).
\end{dcases*}
\end{align*}
\end{spreadlines}
\end{frame}
\end{document}
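For what it's worth, in amsmath display environments the extra space inserted between rows is the length \jot (the spreadlines environment from mathtools just sets \jot locally), so a multiple of the normal extra space can be requested like this. A sketch, assuming mathtools is loaded:

```latex
% Double the normal extra inter-row space (\jot is 3pt by default):
\begin{spreadlines}{2\jot}
  \begin{align*}
    a &= b \\
    c &= d
  \end{align*}
\end{spreadlines}
% Per line:  \\[2\jot]
% Globally:  \setlength{\jot}{2\jot}
```

Note that the total row separation is \baselineskip plus \jot, so doubling \jot doubles only the extra part, not the whole visual gap.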
FDP agenda. Analysis Seminar, Mondays at 10:30 - Room 1180 (Building E2) (Tours). Organizer: Extinction in a finite time for solutions of a class of Parabolic Equations involving the $p$-Laplacian. Yves Belaud, Thursday 17 October 2019 - 10:30 - Room 1180 (Building E2) (Tours). Abstract: We study the property of extinction in a finite time for nonnegative solutions of $\displaystyle \frac{1}{q} \frac{\partial}{\partial t}(u^q) - \nabla \cdot (|\nabla u|^{p-2} \nabla u) + a(x) u^\lambda = 0$ with Dirichlet boundary conditions when $q > \lambda > 0$, $p \geq 1+q$, $p \geq 2$, $a(x) \geq 0$ and $\Omega$ is a bounded domain of $\mathbb{R}^N$ ($N \geq 1$). We give conditions for extinction of solutions in terms of the character of degeneracy of $a(x)$, i.e., in terms of the asymptotics of the absorption potential near the set where $a(x)=0$. When $p>1+q$, the threshold occurs for power-function potentials, but for $p=1+q$ extinction in a finite time occurs even for very flat absorption potentials $a(x)$. The first part of the talk presents abstract results on Hilbert spaces: we give a sufficient condition, in integral form, for solutions to vanish in a finite time. Then we give a necessary condition involving a limit. The second part tackles applications of these results to second-order parabolic equations and leads to sharp results.
The definition of the SI base unit "metre" [1] doesn't seem to rule out explicitly that a certain value of "length, in metres" could be attributed to a pair of ends which are rigid to each other, but not at rest with respect to each other. Consider, therefore, two such ends, $A$ and $B$, which both find constant but unequal ping durations between each other, i.e. in the notation of [2]$\! {\,}^{(\ast)}$: $[ \, A \, B \, A \, ] \ne [ \, B \, A \, B \, ]$. Is there a value of "length, in metres" attributable to this pair of ends, $A$ and $B$? If so, what is that value? I.e., if the SI definition allowed one to express the value of "the length $AB$" as "$x \, \text{m}$", for some positive real number $x$, then how should $x$ be expressed in terms of the two (given) unequal ping duration values $[ \, A \, B \, A \, ]$ and $[ \, B \, A \, B \, ]$, and the SI base unit "second" ("$ \text{s}$")? (Is perhaps: "$x := \left( \frac{[ \, A \, B \, A \, ]}{2 \, \text{s}} + \frac{[ \, B \, A \, B \, ]}{2 \, \text{s}} \right) \times \frac{299 \, 792 \, 458}{2}$"? Or perhaps: "$x := \sqrt{ \frac{[ \, A \, B \, A \, ]}{2 \, \text{s}} \times \frac{[ \, B \, A \, B \, ]}{2 \, \text{s}} } \times 299 \, 792 \, 458$"? ...) References: [1] SI brochure (8th edition, 2006), Section 2.1.1.1; http://www.bipm.org/en/si/base_units/metre.html ("The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second."). Together with "the mise en pratique of the definition of the metre"; http://www.bipm.org/en/publications/mep.html [2] J.L.Synge, "Relativity. The general Theory", North-Holland, 1960; p.409: "[...] light signals passing between a source $0$ and mirrors $1$, $2$, [...]" Trip-times such as $[ \, 0 \, 1 \, 0 \, ]$ [...] are measurable [...] $(\ast$: Suggestions for more standard and/or expressive notation for ping durations are welcome.$)$ This post imported from StackExchange Physics at 2014-04-24 07:32 (UCT), posted by SE-user user12262
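For concreteness, the two candidate formulas quoted in the question can be written out numerically; this sketch is mine (the function names and sample durations are illustrative, not from the SI brochure):

```python
# Two candidate "length in metres" values from unequal ping durations
# [ABA] and [BAB], both given in seconds; C is the SI defining constant.
C = 299_792_458  # metres per second

def length_arithmetic(aba, bab):
    """Arithmetic-mean candidate: x = ([ABA]/2s + [BAB]/2s) * c/2."""
    return (aba / 2 + bab / 2) / 2 * C

def length_geometric(aba, bab):
    """Geometric-mean candidate: x = sqrt([ABA]/2s * [BAB]/2s) * c."""
    return ((aba / 2) * (bab / 2)) ** 0.5 * C

# With equal ping durations both candidates reduce to the usual radar length:
equal = length_arithmetic(2e-8, 2e-8)  # c * 1e-8 metres
```

By the AM-GM inequality the arithmetic-mean candidate is always at least as large as the geometric-mean one, with equality exactly when the two ping durations agree.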
Compact Sets in a Metric Space Recall from the Coverings of a Set in a Metric Space page that if $(M, d)$ is a metric space and $S \subseteq M$ then a cover or covering of $S$ is a collection of subsets $\mathcal F$ of $M$ such that $\displaystyle{S \subseteq \bigcup_{A \in \mathcal F} A}$. Furthermore, we said that an open cover (or open covering) is simply a cover that contains only open sets. We also said that a subset $\mathcal S \subseteq \mathcal F$ is a subcover/subcovering (or open subcover/subcovering if $\mathcal F$ is an open covering) if $\mathcal S$ is also a cover of $S$, that is, $\displaystyle{S \subseteq \bigcup_{A \in \mathcal S} A}$. We can now define the concept of a compact set using the definitions above. Definition: Let $(M, d)$ be a metric space. The subset $S \subseteq M$ is said to be Compact if every open covering $\mathcal F$ of $S$ has a finite subcovering of $S$. In general, it may be more difficult to show that a subset of a metric space is compact than to show a subset of a metric space is not compact. So, let's look at an example of a subset of a metric space that is not compact. Consider the metric space $(\mathbb{R}, d)$ where $d$ is the Euclidean metric and consider the set $S = (0, 1) \subseteq \mathbb{R}$. We claim that this set is not compact. To show that $S$ is not compact, we need to find an open covering $\mathcal F$ of $S$ that does not have a finite subcovering. Consider the following open covering: $\displaystyle{\mathcal F = \left \{ \left ( 0, 1 - \frac{1}{n} \right ) : n \in \mathbb{N} \right \}}$. Clearly $\mathcal F$ is an infinite open covering of $(0, 1)$ and furthermore $\displaystyle{(0, 1) = \bigcup_{n=1}^{\infty} \left ( 0, 1 - \frac{1}{n} \right )}$. Let $\mathcal F^*$ be a finite subset of $\mathcal F$ containing $p$ elements. Then $\displaystyle{\mathcal F^* = \left \{ \left ( 0, 1 - \frac{1}{n_1} \right ), \left ( 0, 1 - \frac{1}{n_2} \right ), ..., \left ( 0, 1 - \frac{1}{n_p} \right ) \right \}}$. Let $n^* = \max \{ n_1, n_2, ..., n_p \}$. Then due to the nesting of the open covering $\mathcal F$, we see that $\displaystyle{\bigcup_{k=1}^{p} \left ( 0, 1 - \frac{1}{n_k} \right ) = \left ( 0, 1 - \frac{1}{n^*} \right )}$. But for $(0, 1) \subseteq \left ( 0, 1 - \frac{1}{n^*} \right )$ we would need $1 \leq 1 - \frac{1}{n^*}$. But $n^* \in \mathbb{N}$, so $n^* > 0$ and $\frac{1}{n^*} > 0$, so $1 - \frac{1}{n^*} < 1$. Therefore no finite subset $\mathcal F^*$ of $\mathcal F$ can cover $S = (0, 1)$. Hence, $(0, 1)$ is not compact.
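The counting argument above is easy to illustrate with a little script (a sketch of mine, not part of the page): for any finite choice of sets from $\mathcal F$, a point strictly between the largest right endpoint and $1$ is left uncovered.

```python
# For any finite subfamily of F = { (0, 1 - 1/n) : n in N }, exhibit a point
# of (0, 1) that it misses.
def uncovered_point(ns):
    """Return a point of (0,1) missed by the sets (0, 1 - 1/n), n in ns."""
    n_star = max(ns)
    right = 1 - 1 / n_star       # largest right endpoint in the subfamily
    return (right + 1) / 2       # strictly between 1 - 1/n* and 1

def covered(x, ns):
    """Is x inside at least one of the chosen sets?"""
    return any(0 < x < 1 - 1 / n for n in ns)

ns = [2, 5, 17, 100]
p = uncovered_point(ns)          # lies in (0, 1) yet escapes the subfamily
```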
Does the existence of a holomorphic square root for the identity function in a region $\Omega$ in $\mathbb C$ imply the existence of a holomorphic logarithm for the same function? I have no idea how to prove this. Answer to the question as clarified in a comment: If $z$ has a holomorphic square root in $G$ does $z$ have a holomorphic logarithm? The answer is yes. We use the following intuitively clear statement: Lemma. Suppose $G\subset\Bbb C$ is open, $0\notin G$, and some closed curve in $G$ has non-zero index (winding number) about the origin. Then some closed curve in $G$ has index $1$ about the origin. For an informal proof see here. Assuming that, suppose $g^2=z$ (which of course implies $0\notin G$). Then $2gg'=1$, so $$2\frac{g'}g= 2\frac{g'g}{g^2}=\frac 1z.$$ Then for every closed curve $C$ we have $$\frac1{2\pi i}\int_C\frac 1z =2\frac{1}{2\pi i}\int_C\frac{g'}g.$$ Since $\frac{1}{2\pi i}\int_C\frac{g'}g$ is just the index of $g\circ C$ about the origin, it is an integer; hence $\frac1{2\pi i}\int_C\frac 1z$ is an even integer for every $C$. The Lemma now implies that $\frac1{2\pi i}\int_C\frac 1z=0$ for every $C$, so that $1/z$ has a primitive.
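The index $\frac{1}{2\pi i}\int_C \frac{dz}{z}$ that drives the argument can be checked numerically; the following discretized contour integral is my own illustration, not part of the proof:

```python
# Approximate the winding number (1/2πi) ∮ dz/z over a circular contour.
import cmath

def winding_about_origin(center, radius, steps=20_000):
    """Index about 0 of the circle |z - center| = radius, by discretization."""
    total = 0.0 + 0.0j
    for k in range(steps):
        t0 = 2 * cmath.pi * k / steps
        t1 = 2 * cmath.pi * (k + 1) / steps
        z0 = center + radius * cmath.exp(1j * t0)
        z1 = center + radius * cmath.exp(1j * t1)
        total += (z1 - z0) / ((z0 + z1) / 2)   # midpoint rule for dz/z
    return total / (2j * cmath.pi)

# A circle enclosing 0 has index 1; one that misses 0 has index 0.
```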
Prove that the matrix \[A=\begin{bmatrix}0 & 1\\-1& 0\end{bmatrix}\] is diagonalizable. Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix. That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.

Let \[A=\begin{bmatrix}2 & -1 & -1 \\-1 &2 &-1 \\-1 & -1 & 2\end{bmatrix}.\] Determine whether the matrix $A$ is diagonalizable. If it is diagonalizable, then diagonalize $A$. That is, find a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.

A complex square ($n\times n$) matrix $A$ is called normal if \[A^* A=A A^*,\] where $A^*$ denotes the conjugate transpose of $A$, that is, $A^*=\bar{A}^{\mathsf{T}}$. A matrix $A$ is said to be nilpotent if there exists a positive integer $k$ such that $A^k$ is the zero matrix. (a) Prove that if $A$ is both normal and nilpotent, then $A$ is the zero matrix. You may use the fact that every normal matrix is diagonalizable. (b) Give a proof of (a) without referring to eigenvalues and diagonalization. (c) Let $A, B$ be $n\times n$ complex matrices. Prove that if $A$ is normal and $B$ is nilpotent such that $A+B=I$, then $A=I$, where $I$ is the $n\times n$ identity matrix.

Let \[A=\begin{bmatrix}1 & 3 & 3 \\-3 &-5 &-3 \\3 & 3 & 1\end{bmatrix} \text{ and } B=\begin{bmatrix}2 & 4 & 3 \\-4 &-6 &-3 \\3 & 3 & 1\end{bmatrix}.\] For this problem, you may use the fact that both matrices have the same characteristic polynomial: \[p_A(\lambda)=p_B(\lambda)=-(\lambda-1)(\lambda+2)^2.\] (a) Find all eigenvectors of $A$. (b) Find all eigenvectors of $B$. (c) Which matrix $A$ or $B$ is diagonalizable? (d) Diagonalize the matrix stated in (c), i.e., find an invertible matrix $P$ and a diagonal matrix $D$ such that $A=PDP^{-1}$ or $B=PDP^{-1}$.
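For the first problem, a quick numerical sanity check (a sketch using numpy; not a proof, since diagonalizability over $\mathbb{R}$ versus $\mathbb{C}$ is the whole point of the exercise):

```python
# A = [[0, 1], [-1, 0]] has eigenvalues ±i: it diagonalizes over C, which is
# exactly why no real nonsingular S can diagonalize it.
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
eigenvalues, S = np.linalg.eig(A)   # complex eigenpairs
D = np.linalg.inv(S) @ A @ S        # should be diag(i, -i) up to ordering

off_diagonal = float(abs(D[0, 1]) + abs(D[1, 0]))
```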
(Stanford University Linear Algebra Final Exam Problem) Suppose the following information is known about a $3\times 3$ matrix $A$. \[A\begin{bmatrix}1 \\2 \\1\end{bmatrix}=6\begin{bmatrix}1 \\2 \\1\end{bmatrix},\quad A\begin{bmatrix}1 \\-1 \\1\end{bmatrix}=3\begin{bmatrix}1 \\-1 \\1\end{bmatrix}, \quad A\begin{bmatrix}2 \\-1 \\0\end{bmatrix}=3\begin{bmatrix}1 \\-1 \\1\end{bmatrix}.\] (a) Find the eigenvalues of $A$. (b) Find the corresponding eigenspaces. (c) In each of the following questions, you must give a correct reason (based on the theory of eigenvalues and eigenvectors) to get full credit. Is $A$ a diagonalizable matrix? Is $A$ an invertible matrix? Is $A$ an idempotent matrix?

Show that the matrix $A=\begin{bmatrix}1 & \alpha\\0& 1\end{bmatrix}$, where $\alpha$ is an element of a field $F$ of characteristic $p>0$, satisfies $A^p=I$ and that the matrix is not diagonalizable over $F$ if $\alpha \neq 0$.
We owe Paul Dirac two excellent mathematical jokes. I have amended them with a few lesser known variations. A. Square root of the Laplacian: we want $\Delta$ to be $D^2$ for some first order differential operator (for example, because it is easier to solve first order partial differential equations than second order PDEs). Writing it out, $$\sum_{k=1}^n \frac{\partial^2}{\partial x_k^2}=\left(\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\right)\left(\sum_{j=1}^n \gamma_j \frac{\partial}{\partial x_j}\right) = \sum_{i,j}\gamma_i\gamma_j \frac{\partial^2}{\partial x_i \partial x_j},$$ and equating the coefficients, we get that this is indeed true if $$D=\sum_{i=1}^n \gamma_i \frac{\partial}{\partial x_i}\quad\text{and}\quad \gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij}.$$ It remains to come up with the right $\gamma_i$'s. Dirac realized how to accomplish it with $4\times 4$ matrices when $n=4$; but a neat follow-up joke is to simply define them to be the elements $\gamma_1,\ldots,\gamma_n$ of $$\mathbb{R}\langle\gamma_1,\ldots,\gamma_n\rangle/(\gamma_i\gamma_j+\gamma_j\gamma_i - 2\delta_{ij}).$$ Using symmetry considerations, it is easy to conclude that the commutator of the $n$-dimensional Laplace operator $\Delta$ and the multiplication by $r^2=x_1^2+\cdots+x_n^2$ is equal to $aE+b$, where $$E=x_1\frac{\partial}{\partial x_1}+\cdots+x_n\frac{\partial}{\partial x_n}$$ is the Euler vector field. A boring way to confirm this and to determine the coefficients $a$ and $b$ is to expand $[\Delta,r^2]$ and simplify using the commutation relations between $x$'s and $\partial$'s. A more exciting way is to act on $x_1^\lambda$, where $\lambda$ is a formal variable: $$[\Delta,r^2]x_1^{\lambda}=((\lambda+2)(\lambda+1)+2(n-1)-\lambda(\lambda-1))x_1^{\lambda}=(4\lambda+2n)x_1^{\lambda}.$$ Since $x_1^{\lambda}$ is an eigenvector of the Euler operator $E$ with eigenvalue $\lambda$, we conclude that $$[\Delta,r^2]=4E+2n.$$ B.
Dirac delta function: if we can write $$g(x)=\int g(y)\delta(x-y)dy$$ then instead of solving an inhomogeneous linear differential equation $Lf=g$ for each $g$, we can solve the equations $Lf=\delta(x-y)$ for each real $y$, where a linear differential operator $L$ acts on the variable $x,$ and combine the answers with different $y$ weighted by $g(y)$. Clearly, there are fewer real numbers than functions, and if $L$ has constant coefficients, using translation invariance the set of right hand sides is further reduced to just one, $\delta(x)$. In this form, the joke goes back to Laplace and Poisson. What happens if instead of the ordinary geometric series we consider a doubly infinite one? Since $$z(\cdots + z^{-n-1} + z^{-n} + \cdots + 1 + \cdots + z^n + \cdots)= \cdots + z^{-n} + z^{-n+1} + \cdots + z + \cdots + z^{n+1} + \cdots,$$ the expression in the parenthesis is annihilated by the multiplication by $z-1$, hence it is equal to $\delta(z-1)$. Homogenizing, we get $$\sum_{n\in\mathbb{Z}}\left(\frac{z}{w}\right)^n=\delta(z-w)$$ This identity plays an important role in conformal field theory and the theory of vertex operator algebras. Pushing infinite geometric series in a different direction, $$\cdots + z^{-n-1} + z^{-n} + \cdots + 1=-\frac{z}{1-z} \quad\text{and}\quad 1 + z + \cdots + z^n + \cdots = \frac{1}{1-z},$$ which add up to $1$. 
This time, the sum of the doubly infinite geometric series is zero! (The two half-line series overlap in the single term $z^0 = 1$, so $\sum_{n\in\mathbb{Z}} z^n = 1 - 1 = 0$ in this formal sense.) Thus the point $0\in\mathbb{Z}$ is the sum of all lattice points on the non-positive half-line and all points on the non-negative half-line: $$z^0=[\ldots,-2,-1,0] + [0,1,2,\ldots] $$ A vast generalization is given by Brion's formula for the generating function for the lattice points in a convex lattice polytope $\Delta\subset\mathbb{R}^N$ with vertices $v\in{\mathbb{Z}}^N$ and closed inner vertex cones $C_v\subset\mathbb{R}^N$: $$\sum_{P\in \Delta\cap{\mathbb{Z}}^N} z^P = \sum_v\left(\sum_{Q\in C_v\cap{\mathbb{Z}}^N} z^Q\right),$$ where the inner sums in the right hand side need to be interpreted as rational functions in $z_1,\ldots,z_N$. Another great joke based on infinite series is the Eilenberg swindle, but I am too exhausted by fighting the math preview to do it justice.
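Back in part A, the defining relation $\gamma_i\gamma_j+\gamma_j\gamma_i=2\delta_{ij}$ is concretely realized (for $n\le 3$) by the Pauli matrices, which is exactly what makes $D^2=\Delta$ work. A quick plain-Python check of mine:

```python
# Verify σiσj + σjσi = 2δij·I for the three Pauli matrices, using bare
# 2x2 complex arithmetic (no external libraries).
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

sigma = [
    [[0, 1], [1, 0]],        # σ1
    [[0, -1j], [1j, 0]],     # σ2
    [[1, 0], [0, -1]],       # σ3
]

def anticommutator(i, j):
    return madd(matmul(sigma[i], sigma[j]), matmul(sigma[j], sigma[i]))

clifford_ok = all(
    anticommutator(i, j) == ([[2, 0], [0, 2]] if i == j else [[0, 0], [0, 0]])
    for i in range(3) for j in range(3)
)
```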
The term "distributed ray tracing" was originally coined by Robert Cook in this 1984 paper. His observation was that in order to perform anti-aliasing in a ray-tracer, the renderer needs to perform spatial upsampling - that is, to take more samples (i.e. shoot more rays) than the number of pixels in the image and combine their results. One way to do this is ... In order to understand Russian Roulette, let's look at a very basic backward path tracer:

void RenderPixel(uint x, uint y, UniformSampler *sampler) {
    Ray ray = m_scene->Camera.CalculateRayFromPixel(x, y, sampler);
    float3 color(0.0f);
    float3 throughput(1.0f);

    // Bounce the ray around the scene
    for (uint bounces = 0; bounces < 10; +...

The next step up from a pinhole camera model is a thin lens model, where we model the lens as being an infinitely thin disc. This is still an idealization that is pretty far from modeling a real camera, but it will give you basic depth of field effects. The image above, from panohelp.com, shows the basic idea. For each point on the image, there are multiple ... You always need to multiply by the cosine term indeed (that's part of the rendering equation). Though when you do indirect diffuse using ray-tracing and thus Monte Carlo integration (which is the most common technique in this case), you have to divide the contribution of each sample by your PDF. This is well explained here. Note also that in the mentioned ... Generally speaking, path tracing removes a number of assumptions that ray tracing makes. Ray tracing usually assumes that there is no indirect lighting (or that indirect lighting can be approximated by a constant function), because handling indirect lighting would require casting many additional rays whenever you shade an intersection point. Ray tracing ... The Russian roulette technique itself is a way of terminating paths without introducing systematic bias.
The principle is fairly straightforward: if at a particular vertex you have a 10% chance of arbitrarily replacing the energy with 0, and if you do that an infinite number of times, you will see 10% less energy. The energy boost just compensates for that. If ... There is one important distinction to make. Markov Chain Monte Carlo (such as Metropolis Light Transport) methods fully acknowledge the fact that they produce lots of highly correlated samples; it is actually the backbone of the algorithm. On the other hand there are algorithms such as Bidirectional Path Tracing, the Many Light Method, and Photon Mapping where the crucial role ... First of all, a good reference for Monte Carlo path tracing in participating media is these course notes from Steve Marschner. The way I like to think about volume scattering is that a photon traveling through a medium has a certain probability per unit length of interacting (getting scattered or absorbed). As long as it doesn't interact, it just goes in a ... Just to expand on some of the other answers, the proof that Russian Roulette does not give a biased result is very simple. Suppose that you have some random variable $F$ which is the sum of several terms: $$F = F_1 + \cdots + F_N$$ Replace each term with: $$F'_i = \left\{\begin{array}{ll}\frac{1}{p_i} F_i & \hbox{with probability } p_i \\... Sample locations with a uniform pattern will create aliasing in the output, whenever there are geometric features of size comparable to or smaller than the sampling grid. That's the reason why "jaggies" exist: because images are made of a uniform square pixel grid, and when you render (for example) an angled line without antialiasing, it crosses rows/columns ... Overview: Here is a short overview of the most used space representations, MLT variants and mutation strategies for these MLT variants.
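The unbiasedness argument above ($F'_i = F_i/p_i$ with probability $p_i$, else $0$) can be demonstrated with a toy Monte Carlo experiment; this setup is mine, with arbitrary terms and a 10% survival probability:

```python
# Russian roulette keeps each term with probability p and boosts it by 1/p;
# the expectation is unchanged, only the variance grows.
import random

def roulette_estimate(terms, p, rng):
    """One stochastic estimate of sum(terms) via Russian roulette."""
    total = 0.0
    for f in terms:
        if rng.random() < p:
            total += f / p      # boost compensates for the killed samples
    return total

rng = random.Random(12345)
terms = [0.3, 1.7, 0.9, 2.2]
exact = sum(terms)              # 5.1
trials = 200_000
estimate = sum(roulette_estimate(terms, 0.1, rng) for _ in range(trials)) / trials
```

Averaged over many trials, `estimate` hovers around the exact sum, illustrating that the termination scheme is unbiased even though any single trial is very noisy at p = 0.1.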
As you can see, there are quite a few papers dating back to 2017 (e.g., three papers explore combining the Path Space and the Primary Sample Space by jumping back and forth between the two). Path Space (PS) representation, ... Monte Carlo methods rely on the law of large numbers, which states that the average of a random event repeated a large number of times converges toward the expected value (if you flip a coin a gazillion times, on average you will obtain each side half the time). Monte Carlo integration uses that law to evaluate an integral by averaging a large number of ... In distributed ray tracing, you stochastically sample many rays in directions which may or may not be preferred by the BRDF. Whereas, in Monte Carlo ray tracing or simply path tracing, you sample only one ray in a direction preferred by the BRDF. So, there are two obvious advantages Path Tracing would have: computationally less expensive, which means with ... This is a very good question. There is a common misconception that Monte Carlo integration is applied "recursively" on the rendering equation. That is not what's happening. Numerical integration methods are tailored to problems of the form: $$I = \int_{\Omega}{f(x)d\mu(x)} \approx \sum_{k=0}^{N-1}w(x_k)f(x_k)$$ Note that this is not the case for the ... I've seen both, unfortunately. I'm not fond of rays per second as meaning exclusively primary rays and I'd suggest "paths per second" or better yet "samples per second" instead. "Complete ray" is not a common term: a ray is a (potentially unbounded) line segment and a sequence of rays is a path. Rays per second in your second sense of total ray casts is not ... See Kolb, et al., A Realistic Camera Model for Computer Graphics, SIGGRAPH 95. However, do bear in mind that camera models which mimic real-world cameras aren't necessarily what you want for the rendering phase.
In a visual effects/post-production scenario, the more blur/vignetting/distortion that the camera model introduces, the worse it is for the ... The hemispherical intensity function, i.e. the hemispherical function of incident light multiplied by the BRDF, correlates to the number of samples required per solid angle. Take the sample distribution of any method and compare it to that hemispherical function. The more similar they are, the better the method is in that particular case. Note that since ... I don't have that book to check the context of this, but from the equations you posted, yes, it looks like you're right. The $1/N$ factor should be applied to both terms. That agrees with the formula for variance from statistics, which is $E[X^2] - E[X]^2$. I think you're right and the subtraction is a mistake. The code should rather be multiplying the fraction of photons not absorbed into the weight. Something like:

float fraction_absorbed = sigma_a / sigma_t;
absorption += w * fraction_absorbed;
w *= (1.0f - fraction_absorbed);

This makes absorption the total fraction of photons absorbed so far, and w the ... In Monte Carlo integration, the samples $x_1, x_2, \ldots x_N$ are independent, identically-distributed random variables. This implies they all have the same expectation value. The derived quantities $f(x_i)/p(x_i)$ also all have the same expectation value. So the expectation of a sum of $N$ of these is the same as $N$ times the expectation of any one of ... The classic method is to uniformly sample the disc at the base of your hemisphere and to project your samples upwards onto the hemisphere (e.g. compute z from x and y). This yields a cosine-weighted distribution. As the projection preserves stratification, you need only use stratified sampling of the disc to get a stratified cosine distribution. I can find two possible reasons for the image not converging. #1: Every sample is the same. For every sample, you generate random rays.
You do that when you shoot the ray through a pixel (for anti-aliasing and DoF) and when you sample the hemisphere (for a new indirect bounce). The problem would be if for every sample, it would generate the same direction, ... The general idea for sampling half vector based distributions is that you generate $H$ and then compute $w_i$ by reflecting $w_o$ about $H$. This is so $H$ will be the half vector of your $w_i$ and $w_o$ pair. It is standard reflection: $$w_i = -w_o + 2(w_o\cdot H)H $$ How you generate $H$ depends on the specific distribution. Generally, it is done in polar ... It's not that hard. If you have just planar or angular light sources, you can think of them as one light source split into multiple chunks and the only thing to deal with is how to sample this multi-light and how to compute the PDF of the resulting samples. Picking probability: First, you need to set up the picking probability $P(l)$ for each light source $l$ ... Throughout my answer I'll sometimes refer to some results in https://sites.fas.harvard.edu/~cs278/papers/veach.pdf by using [MIS, section_number]. You can skip the following derivation if you don't care about the mathematical explanation of why using MIS to combine estimators is valid. I'll have to start with what the purpose of MIS is. The general idea is ... If you have a deterministic mapping function which transforms uniformly distributed samples into the desired PDF (cosine shaped in your case), just feed it directly with stratified uniformly distributed samples. The mapping will keep the strata separated. Usually one sample per stratum is used and the number of strata is set according to the total amount of ... PSSMLT operates directly on the space of random numbers that generate valid light paths. As such, mutations in the unit hypercube lose their physical interpretation since they do not have direct knowledge of the actual light path constructed.
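Picking up the disc-projection method described a little earlier (uniformly sample the base disc, then lift with $z=\sqrt{1-x^2-y^2}$), here is a small sketch of mine. For a cosine-weighted density $\cos\theta/\pi$, the mean of $z$ should come out near $E[\cos\theta]=2/3$:

```python
# Cosine-weighted hemisphere sampling by projecting uniform disc samples up.
import math
import random

def cosine_weighted_sample(rng):
    """Return a direction on the unit hemisphere with pdf cos(theta)/pi."""
    r = math.sqrt(rng.random())          # uniform disc: r = sqrt(u)
    phi = 2 * math.pi * rng.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1 - x * x - y * y))
    return x, y, z

rng = random.Random(7)
n = 200_000
mean_z = sum(cosine_weighted_sample(rng)[2] for _ in range(n)) / n   # near 2/3
```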
Recent research in rendering has shown that it is possible to bridge the gap between path space (that acts directly on ... Specular surfaces which use MIS are not perfectly specular like a mirror. They have a small amount of blur, otherwise there is indeed no point in sampling the light, as all the samples will evaluate the BRDF as 0. In fact, you would only need to trace a single reflection ray. A small amount of blur means that a given camera ray will see a small area of the ... This approximation is typically done by running a bidirectional path tracer with a modest number of samples per pixel. This means multiple paths per pixel to approximate the integral of $f_j$, the measurement contribution function. Veach's original MLT paper explains how it can be done in a way that eliminates start-up bias (see Section 5.1). Yes, in the simple case, primary rays conform to the frustum. If you're doing depth-of-field optically, then the rays don't quite conform to the frustum, because you need to vary the ray origins slightly as well as the directions. How exactly the variation works depends on how closely you're simulating the lens system and aperture. You can picture it as ...
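The half-vector reflection $w_i = -w_o + 2(w_o\cdot H)H$ quoted earlier can be exercised standalone (a sketch of mine; vectors are plain tuples, no renderer types):

```python
# Reflect w_o about the half vector H and confirm that normalizing w_i + w_o
# recovers H, i.e. that H really is the half vector of the pair.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def reflect(w_o, h):
    """Standard reflection: w_i = -w_o + 2 (w_o . h) h, with h unit length."""
    d = dot(w_o, h)
    return tuple(-o + 2 * d * hh for o, hh in zip(w_o, h))

w_o = normalize((0.3, 0.2, 0.9))
h = normalize((0.1, -0.1, 1.0))
w_i = reflect(w_o, h)
half = normalize(tuple(a + b for a, b in zip(w_i, w_o)))  # should equal h
```

Since $w_i + w_o = 2(w_o\cdot H)H$, normalizing the sum gives back $H$ whenever $w_o\cdot H > 0$; reflection also preserves the length of $w_o$.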
Suppose that a random variable takes values in a bounded interval, say $[0,1]$. How large can the variance of such a variable be? You can prove Popoviciu's inequality as follows. Use the notation $m=\inf X$ and $M=\sup X$. Define a function $g$ by $$ g(t)=\mathbb{E}\left[\left(X-t\right)^2\right] \, . $$ Computing the derivative $g'$ and solving $$ g'(t) = -2\mathbb{E}[X] +2t=0 \, , $$ we find that $g$ achieves its minimum at $t=\mathbb{E}[X]$ (note that $g''>0$). Now, consider the value of the function $g$ at the special point $t=\frac{M+m}{2}$. It must be the case that $$ \mathbb{Var}[X]=g(\mathbb{E}[X])\leq g\left(\frac{M+m}{2}\right) \, . $$ But $$ g\left(\frac{M+m}{2}\right) = \mathbb{E}\left[\left(X - \frac{M+m}{2}\right)^2 \right] = \frac{1}{4}\mathbb{E}\left[\left((X-m) + (X-M)\right)^2 \right] \, . $$ Since $X-m\geq 0$ and $X-M\leq 0$, we have $$ \left((X-m)+(X-M)\right)^2\leq\left((X-m)-(X-M)\right)^2=\left(M-m\right)^2 \, , $$ implying that $$ \frac{1}{4}\mathbb{E}\left[\left((X-m) + (X-M)\right)^2 \right] \leq \frac{1}{4}\mathbb{E}\left[\left((X-m) - (X-M)\right)^2 \right] = \frac{(M-m)^2}{4} \, . $$ Therefore, we proved Popoviciu's inequality $$ \mathbb{Var}[X]\leq \frac{(M-m)^2}{4} \, . $$ Let $F$ be a distribution on $[0,1]$. We will show that if the variance of $F$ is maximal, then $F$ can have no support in the interior, from which it follows that $F$ is Bernoulli and the rest is trivial. As a matter of notation, let $\mu_k = \int_0^1 x^k dF(x)$ be the $k$th raw moment of $F$ (and, as usual, we write $\mu = \mu_1$ and $\sigma^2 = \mu_2 - \mu^2$ for the variance). We know $F$ does not have all its support at one point (the variance is minimal in that case). Among other things, this implies $\mu$ lies strictly between $0$ and $1$. In order to argue by contradiction, suppose there is some measurable subset $I$ in the interior $(0,1)$ for which $F(I)\gt 0$.
Without any loss of generality we may assume (by changing $X$ to $1-X$ if need be) that $F(J = I \cap (0, \mu]) \gt 0$: in other words, $J$ is obtained by cutting off any part of $I$ above the mean, and $J$ has positive probability. Let us alter $F$ to $F'$ by taking all the probability out of $J$ and placing it at $0$. In so doing, $\mu_k$ changes to $$\mu'_k = \mu_k - \int_J x^k dF(x).$$ As a matter of notation, let us write $[g(x)] = \int_J g(x) dF(x)$ for such integrals, whence $$\mu'_2 = \mu_2 - [x^2], \quad \mu' = \mu - [x].$$ Calculate $$\sigma'^2 = \mu'_2 - \mu'^2 = \mu_2 - [x^2] - (\mu - [x])^2 = \sigma^2 + \left((\mu[x] - [x^2]) + (\mu[x] - [x]^2)\right).$$ The first term on the right, $\mu[x] - [x^2]$, is non-negative because it can be rewritten as $[(\mu-x)(x)]$, and this integrand is non-negative from the assumptions $\mu \ge x$ on $J$ and $0 \le x \le 1$. The second term on the right can be rewritten $$\mu[x] - [x]^2 = [x]\left(\mu - [x]\right) = \mu(1 - [1])[x] + [x]\,[\mu - x],$$ using $[\mu - x] = \mu[1] - [x]$. The first piece is strictly positive because (a) $\mu \gt 0$, (b) $[1] = F(J) \lt 1$ because we assumed $F$ is not concentrated at a point, and (c) $[x] \gt 0$ because $J \subseteq (0, \mu]$ has positive probability. The second piece is non-negative because $\mu \ge x$ on $J$. It follows that $\sigma'^2 - \sigma^2 \gt 0$. We have just shown that under our assumptions, changing $F$ to $F'$ strictly increases its variance. The only way this cannot happen, then, is when all the probability of $F$ is concentrated at the endpoints $0$ and $1$, with (say) values $1-p$ and $p$, respectively. Its variance is easily calculated to equal $p(1-p)$, which is maximal when $p=1/2$ and equals $1/4$ there. Now when $F$ is a distribution on $[a,b]$, we recenter and rescale it to a distribution on $[0,1]$. The recentering does not change the variance, whereas the rescaling divides it by $(b-a)^2$.
Thus an $F$ with maximal variance on $[a,b]$ corresponds to the distribution with maximal variance on $[0,1]$: it therefore is a Bernoulli$(1/2)$ distribution rescaled and translated to $[a,b]$ having variance $(b-a)^2/4$, QED. If the random variable is restricted to $[a,b]$ and we know the mean $\mu=E[X]$, the variance is bounded by $(b-\mu)(\mu-a)$. Let us first consider the case $a=0, b=1$. Note that for all $x\in [0,1]$, $x^2\leq x$, wherefore also $E[X^2]\leq E[X]$. Using this result, \begin{equation} \sigma^2 = E[X^2] - (E[X])^2 = E[X^2] - \mu^2 \leq \mu - \mu^2 = \mu(1-\mu). \end{equation} To generalize to intervals $[a,b]$ with $b>a$, consider $Y$ restricted to $[a,b]$. Define $X=\frac{Y-a}{b-a}$, which is restricted to $[0,1]$. Equivalently, $Y = (b-a)X + a$, and thus \begin{equation} Var[Y] = (b-a)^2Var[X] \leq (b-a)^2\mu_X (1-\mu_X), \end{equation} where the inequality is based on the first result. Now, by substituting $\mu_X = \frac{\mu_Y - a}{b-a}$, the bound equals \begin{equation} (b-a)^2\, \frac{\mu_Y - a}{b-a}\,\left(1- \frac{\mu_Y - a}{b-a}\right) = (b-a)^2 \frac{\mu_Y -a}{b-a}\,\frac{b - \mu_Y}{b-a} = (\mu_Y - a)(b- \mu_Y), \end{equation} which is the desired result. At @user603's request.... A useful upper bound on the variance $\sigma^2$ of a random variable that takes on values in $[a,b]$ with probability $1$ is $\sigma^2 \leq \frac{(b-a)^2}{4}$. A proof for the special case $a=0, b=1$ (which is what the OP asked about) can be found here on math.SE, and it is easily adapted to the more general case. As noted in my comment above and also in the answer referenced herein, a discrete random variable that takes on values $a$ and $b$ with equal probability $\frac{1}{2}$ has variance $\frac{(b-a)^2}{4}$, and thus no tighter general bound can be found. Another point to keep in mind is that a bounded random variable has finite variance, whereas for an unbounded random variable, the variance might not be finite, and in some cases might not even be definable.
For example, the mean cannot be defined for Cauchy random variables, and so one cannot define the variance (as the expectation of the squared deviation from the mean). Are you sure that this is true in general - for continuous as well as discrete distributions? Can you provide a link to the other pages? For a general distribution on $[a,b]$ it is trivial to show that $$ Var(X) = E[(X-E[X])^2] \le E[(b-a)^2] = (b-a)^2. $$ I can imagine that sharper inequalities exist ... Do you need the factor $1/4$ for your result? On the other hand, one can find it with the factor $1/4$ under the name Popoviciu's inequality on Wikipedia. This article looks better than the Wikipedia article ... For a uniform distribution it holds that $$ Var(X) = \frac{(b-a)^2}{12}. $$
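Both bounds discussed above, Popoviciu's $(M-m)^2/4$ and the mean-dependent Bhatia-Davis bound $(b-\mu)(\mu-a)$, are easy to sanity-check numerically. A sketch using Beta-distributed samples on $[0,1]$ (the choice of Beta$(2,5)$ is arbitrary):

```python
import random

random.seed(0)

def sample_mean_and_variance(xs):
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var

# Samples of a Beta(2, 5) variable, bounded in [a, b] = [0, 1].
a, b = 0.0, 1.0
xs = [random.betavariate(2, 5) for _ in range(10_000)]
mu, var = sample_mean_and_variance(xs)

popoviciu = (b - a) ** 2 / 4        # Var[X] <= (M - m)^2 / 4
bhatia_davis = (b - mu) * (mu - a)  # Var[X] <= (b - mu)(mu - a)

# Both bounds hold, and Bhatia-Davis is the tighter of the two.
print(var, bhatia_davis, popoviciu)
```

Note that the empirical variance with the empirical mean satisfies both inequalities deterministically, by the same $x^2 \le x$ argument used in the proof above.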
Gevrey regularity and existence of Navier-Stokes-Nernst-Planck-Poisson system in critical Besov spaces

1. School of Information Technology, Jiangxi University of Finance and Economics, Nanchang 330032, China
2. Department of Mathematics, Northwest Normal University, Lanzhou 730070, China

Using the Gevrey-class approach of [J. Funct. Anal., 87 (1989), 359-369], we prove that the solutions are analytic in a Gevrey class of functions. As a consequence of the Gevrey estimates, we particularly obtain higher-order derivatives of solutions in Besov and Lebesgue spaces. Finally, we prove that there exists a positive constant $\mathbb{C}$ such that if the initial data $(u_{0}, n_{0}, c_{0})=(u_{0}^{h}, u_{0}^{3}, n_{0}, c_{0})$ satisfies
$$\|(n_{0}, c_{0}, u_{0}^{h})\|_{\dot{B}^{-2+3/q}_{q, 1}\times \dot{B}^{-2+3/q}_{q, 1}\times\dot{B}^{-1+3/p}_{p, 1}}+\|u_{0}^{h}\|_{\dot{B}^{-1+3/p}_{p, 1}}^{\alpha}\|u_{0}^{3}\|_{\dot{B}^{-1+3/p}_{p, 1}}^{1-\alpha}\le 1/\mathbb{C}$$
for $p, q, \alpha$ with $1<p<q\le 2p<\infty$, $\frac{1}{p}+\frac{1}{q}>\frac{1}{3}$, $1<q<6$, $\frac{1}{p}-\frac{1}{q}\le\frac{1}{3}$, then the system admits a unique global solution.

Keywords: Nernst-Planck-Poisson system, Navier-Stokes system, Gevrey regularity, global solutions, Besov spaces.
Mathematics Subject Classification: Primary: 35Q30; Secondary: 76D03, 35E15.
Citation: Minghua Yang, Jinyi Sun. Gevrey regularity and existence of Navier-Stokes-Nernst-Planck-Poisson system in critical Besov spaces. Communications on Pure & Applied Analysis, 2017, 16 (5): 1617-1639. doi: 10.3934/cpaa.2017078
The aim of this test case is to validate the following functions: The simulation results of SimScale were compared to the analytical results derived from [Roark]. The mesh used was created locally, consisting of quadratic hexahedral elements, and uploaded to the SimScale platform. The bimetallic strip has a length of l = 10 m, a width of w = 1 m, and a total height of h = 0.1 m, with each strip thickness ta = tb = 0.05 m.

Tool Type: CalculiX
Analysis Type: Thermomechanical
Mesh and Element types:

Mesh type            | Number of nodes | Number of 3D elements | Element type
quadratic hexahedral | 3652            | 600                   | 3D isoparametric

Material: Upper strip: Lower strip: Initial Conditions: Constraints: Temperature: Contact:

(1)\[K_1 = 4 + 6 \frac {t_a}{t_b} + 4 \left(\frac {t_a}{t_b}\right)^2 + \frac {E_a}{E_b} \left(\frac {t_a}{t_b}\right)^3 + \frac {E_a}{E_b} \frac {t_a}{t_b} = 16\]

(2)\[d_x = \frac {6 l (\gamma_b - \gamma_a) (T - T_o) (t_a + t_b)}{(t_b)^2 K_1} = 0.0015 \ m\]

(3)\[d_z = \frac {3 (l)^2 (\gamma_b - \gamma_a) (T - T_o) (t_a + t_b)}{(t_b)^2 K_1} = 0.075 \ m\]

(4)\[\sigma = \frac {(\gamma_b - \gamma_a) (T - T_o) E_a}{K_1} \left[3 \frac {t_a}{t_b} + 2 - \frac {E_a}{E_b} \left(\frac {t_a}{t_b}\right)^3\right] = 50 \ MPa\]

Equations (1), (2), (3) and (4), used to solve the problem, are derived in [Roark]. Equations (2) and (3) give the displacements of the bimetallic strip in the x and z directions respectively, whereas equation (4) gives the normal stress in the x direction at the bottom surface. Comparison of the x and z displacements computed on node N3 and σxx computed on node N2 with the [Roark] formulations:

Quantity | [Roark] | SimScale | Error
dx (m)   | 0.0015  | 0.0015   | 0%
dz (m)   | 0.075   | 0.074975 | 0.03%
σ (MPa)  | 50      | 48.79    | 2.42%

[Roark] (1, 2, 3, 4) (2011) "Roark's Formulas for Stress and Strain", Eighth Edition, W. C. Young, R. G. Budynas, A. M. Sadegh
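The closed-form expressions above are straightforward to script. Only the geometry and the form of $K_1$ follow from the text; the material values below (equal Young's moduli and a placeholder thermal mismatch $(\gamma_b-\gamma_a)(T-T_o)$) are illustrative assumptions, not the ones used in the SimScale setup:

```python
# Geometry from the test case (metres)
l = 10.0
t_a = t_b = 0.05

# Assumed material data (illustrative placeholders):
Ea_over_Eb = 1.0    # equal Young's moduli for the two strips
mismatch = 1.0e-4   # hypothetical (gamma_b - gamma_a) * (T - T_o)

r = t_a / t_b  # thickness ratio

# Equation (1): with t_a = t_b and E_a = E_b this evaluates to 16.
K1 = 4 + 6 * r + 4 * r**2 + Ea_over_Eb * r**3 + Ea_over_Eb * r

# Equations (2) and (3): tip displacements for the assumed mismatch.
d_x = 6 * l * mismatch * (t_a + t_b) / (t_b**2 * K1)
d_z = 3 * l**2 * mismatch * (t_a + t_b) / (t_b**2 * K1)

print(K1, d_x, d_z)
```

With the actual material properties of the validation case substituted for the placeholders, the same few lines reproduce the analytical reference values in the table above.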
Note that the zero vector in the vector space $C[-\pi, \pi]$ is the zero function\[\theta(x):=0.\] Let us consider a linear combination\[a_1\cos(x)+a_2\sin(x)=\theta(x)=0 \tag{*}.\] If this linear combination admits only the zero solution $a_1=a_2=0$, then the set $\{\cos(x), \sin(x)\}$ is linearly independent. The equality (*) must hold for all values of $x\in [-\pi, \pi]$. Setting $x=0$, we obtain from (*) that\[a_1=0\]since $\cos(0)=1, \sin(0)=0$. Setting $x=\pi/2$, we likewise obtain\[a_2=0\]since $\cos(\pi/2)=0, \sin(\pi/2)=1$. Therefore, we have $a_1=a_2=0$ and we conclude that the set $\{\cos(x), \sin(x)\}$ is linearly independent.
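The evaluation argument amounts to a small linear system: evaluating $a_1\cos(x)+a_2\sin(x)=0$ at $x=0$ and $x=\pi/2$ gives an invertible $2\times 2$ system, so only $a_1=a_2=0$ works. A quick numerical illustration:

```python
import math

# Evaluate the functions cos(x), sin(x) at the sample points x = 0, pi/2.
points = [0.0, math.pi / 2]
M = [[math.cos(x), math.sin(x)] for x in points]  # one row per sample point

# det(M) != 0 means the only solution of M (a1, a2)^T = 0 is a1 = a2 = 0,
# so {cos, sin} is linearly independent in C[-pi, pi].
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det)
```

Here the matrix is exactly the identity, so the determinant is $1$ and the system forces both coefficients to vanish.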
There are two outgoing links in this question of mine here, but only one incoming link is shown in the "Linked" box. EDIT: An incoming link from this question, Number of ways, powers of $2$ sum up specific values, also doesn't appear in the "Linked" section of this question: What's the non-trivial root of $\lim \limits_{n\to \infty}\left(\sum_{k=0}^n x^{2^k}\right)^n$? EDIT 2: After I posted this, I received an Announcer badge for one of the "Summing over..." questions. Does this mean that the links are somehow counted as coming from outside the site?
Yes, you should take into account the scale of the output $y$, and you should also take into account the scale of the covariates in $X$. Let $X \in \mathbb{R}^{n \times p}$ be the design matrix, whose rows are vectors with each entry being a covariate, which together seek to explain the response $y \in \mathbb{R}^n$. Each entry of the response $y_i = f(e_i^T X) + \epsilon_i$ (for $i = 1, \dots, n$) is additively composed of a signal that depends on the covariates and an iid mean-zero noise. Choosing to model the signal $f$ as being approximately linear leads us to the LASSO estimate $$\hat \beta_\lambda = \arg\min_\beta \frac{1}{2n} \|y-X\beta\|_2^2 + \lambda \|\beta\|_1.$$ By first-order conditions, we know that $\frac{-1}{n} X^T (y - X \hat \beta_\lambda) = \lambda \hat{z}_\lambda$, where $\hat{z}_\lambda$ is the dual variable satisfying $\hat{z}_{\lambda,j} = \operatorname{sgn}(\hat{\beta}_{\lambda, j})$ if $\hat{\beta}_{\lambda, j} \neq 0$ and $\hat{z}_{\lambda, j} \in [-1,1]$ if $\hat{\beta}_{\lambda, j} = 0$. Plugging $\hat{\beta}_\lambda = 0$ into this equation, we see that $\frac{-1}{n} X^T y = \lambda \hat{z}_\lambda$, making $$\frac{1}{n} \|X^T y \|_\infty = \lambda \|\hat{z}_{\lambda}\|_\infty.$$ If $\|\hat{z}_\lambda\|_\infty \neq 1$, then $\lambda$ could decrease (with $\|\hat{z}_\lambda\|_\infty$ increased to maintain equality) and the LASSO estimate would still be $\hat{\beta}_\lambda = 0$. Therefore, at $\lambda_\mathrm{max}$, the smallest value of $\lambda$ that produces $\hat{\beta}_{\lambda}=0$, we get that $$\frac{1}{n} \|X^T y\|_\infty = \lambda_\mathrm{max} \cdot 1.$$ This tells us that there's no need to consider $\lambda > \lambda_\mathrm{max}$ when tuning the LASSO. Now, in practice, most solvers standardize the columns of $X$, so the covariate scale won't need to be directly taken into account. (Note that it's reasonable to standardize the covariates since the units of measurement shouldn't affect the estimated coefficient.)
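This characterization of $\lambda_\mathrm{max}$ is easy to verify with a small coordinate-descent implementation. A sketch in plain NumPy, for the same objective $\frac{1}{2n}\|y-X\beta\|_2^2+\lambda\|\beta\|_1$ as above, with standardized columns and synthetic data:

```python
import numpy as np

def soft_threshold(rho, lam):
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]  # partial residual
            rho = X[:, j] @ resid / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the columns
y = X[:, 0] * 2.0 + rng.standard_normal(n)

lam_max = np.max(np.abs(X.T @ y)) / n      # smallest lambda giving beta = 0

beta_at_max = lasso_cd(X, y, lam_max)      # every coefficient stays at zero
beta_below = lasso_cd(X, y, 0.5 * lam_max) # some coefficient activates
print(beta_at_max, beta_below)
```

At $\lambda = \lambda_\mathrm{max}$ the soft-threshold exactly kills every update, while any smaller penalty lets the most correlated covariate enter the model.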
The ridge case is discussed well here: Maximum penalty for ridge regression
Production of $K^*(892)^0$ and $\phi(1020)$ in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10) The production of $K^*(892)^0$ and $\phi(1020)$ in pp collisions at $\sqrt{s}$ = 7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_\mathrm{int}$ = 5.6 nb$^{-1}$. The fraction ... Suppression of high transverse momentum D mesons in central Pb-Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/$\psi$ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/$\psi$ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The $p_T$-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-$p_T$ trigger particles (8 < $p_T$ < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of $^{208}$Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Properties of Vector Spaces We will now look at some important properties of vector spaces and provide what may seem like trivial proofs. Please review the Vector Spaces page first nevertheless. Theorem 1: If $V$ is a vector space, then there exists only one additive identity $0 \in V$ such that $x + 0 = 0 + x = x$ $\forall x \in V$. Proof: Suppose that there exist two additive identities, that is, $0$ and $0'$ are both additive identities for all $x \in V$. Then it follows that: \begin{align} 0 = 0 + 0' = 0' \implies 0 = 0' \end{align} We note that these equalities are true since if $0$ and $0'$ are both additive identities, then $0 = 0 + 0'$ and $0' = 0' + 0$. $\blacksquare$ Theorem 2: If $V$ is a vector space, then every $x \in V$ has a unique additive inverse $(-x) \in V$. Proof: Suppose that $x \in V$ and that both $-x$ and $-x'$ are additive inverses to $x$. Then it follows that: \begin{align} \quad -x = (-x) + 0 = (-x) + (x + (-x')) = ((-x) + x) + (-x') = 0 + (-x') = -x' \implies -x = -x' \end{align} Note that we made two substitutions in this proof: $0 = (x + (-x'))$, which is true since $-x'$ is an additive inverse to $x$, and $((-x) + x) = 0$, which is true since $-x$ is an additive inverse to $x$. $\blacksquare$ Theorem 3: The product of the scalar $0$ and a vector $x \in V$ is equal to the zero vector, that is, $0x = 0$. Proof: Let $x \in V$. Then: \begin{align} \quad 0x = (0 + 0)x = 0x + 0x \implies 0x = 0x + 0x \implies 0x + (-0x) = 0x + 0x + (-0x) \implies 0 = 0x \quad \blacksquare \end{align} Theorem 4: The product of any scalar $k$ and the zero vector $0 \in V$ is equal to the zero vector, that is, $k0 = 0$. Proof: Let $k$ be a scalar and let $0 \in V$ be the zero vector.
Therefore: \begin{align} \quad k0 = k(0 + 0) = k0 + k0 \implies k0 = k0 + k0 \implies k0 + (-k0) = k0 + k0 + (-k0) \implies 0 = k0 \quad \blacksquare \end{align} Theorem 5: The scalar $-1$ multiplied by the vector $x \in V$ produces the additive inverse of $x$, that is, $(-1)x = (-x)$. Proof: Consider the vectors $x$ and $(-1)x$, and add them together as follows: \begin{align} \quad x + (-1)x = 1x + (-1)x = (1 + (-1))x = 0x = 0 \implies x + (-1)x = 0 \end{align} Therefore $(-1)x$ is the additive inverse of $x$. $\blacksquare$ Theorem 6: Let $a, b \in \mathbb{F}$. Then the equation $a + x = b$ has a unique solution $x \in \mathbb{F}$. Proof: Suppose that both $x$ and $y$ are solutions to the equation $a + x = b$. Then $a + x = b$ and $a + y = b$, and adding $-a$ to both sides of each equation gives $x = b + (-a)$ and $y = b + (-a)$, which implies that $x = y$, and so the solution is unique. $\blacksquare$ Theorem 7: Let $a, b \in \mathbb{F}$ with $a \neq 0$. Then the equation $ax = b$ has a unique solution $x \in \mathbb{F}$. Proof: Suppose that both $x$ and $y$ are solutions to the equation $ax = b$. Then $ax = b$ and $ay = b$, and multiplying both sides of each equation by $a^{-1}$ (which exists since $a \neq 0$) gives $x = a^{-1}b$ and $y = a^{-1}b$, which implies that $x = y$, and so the solution is unique. $\blacksquare$
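For the concrete vector space $\mathbb{R}^3$, Theorems 3, 4, and 5 can be checked directly. This is an illustration, not a proof; the theorems hold in any vector space:

```python
# Illustrating Theorems 3, 4, and 5 in the concrete vector space R^3.
x = [2.0, -1.0, 5.0]
zero = [0.0, 0.0, 0.0]

def scale(k, v):
    return [k * vi for vi in v]

def add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

assert scale(0, x) == zero           # Theorem 3: 0x = 0
assert scale(7, zero) == zero        # Theorem 4: k0 = 0
assert add(x, scale(-1, x)) == zero  # Theorem 5: (-1)x is the additive inverse
print("all identities hold")
```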
Put $\alpha=\sqrt{2+\sqrt{2}}$. Then we have $\alpha^2=2+\sqrt{2}$. Squaring $\alpha^2-2=\sqrt{2}$, we obtain $\alpha^4-4\alpha^2+4=2$. Hence $\alpha$ is a root of the polynomial\[f(x)=x^4-4x^2+2.\]By Eisenstein's criterion (with prime $p=2$), $f(x)$ is an irreducible polynomial over $\Q$. There are four roots of $f(x)$:\[\pm \sqrt{2 \pm \sqrt{2}}.\]Note that we have the relation\[(\sqrt{2+\sqrt{2}})(\sqrt{2-\sqrt{2}})=\sqrt{2}.\]Thus we have\[\sqrt{2-\sqrt{2}}=\frac{\sqrt{2}}{\sqrt{2+\sqrt{2}}} \in \Q(\sqrt{2+\sqrt{2}}).\] Hence all the roots of $f(x)$ lie in the field $\Q(\sqrt{2+\sqrt{2}})$, so $\Q(\sqrt{2+\sqrt{2}})$ is the splitting field of the separable polynomial $f(x)=x^4-4x^2+2$. Thus the field $\Q(\sqrt{2+\sqrt{2}})$ is Galois over $\Q$ of degree $4$. Let $\sigma \in \Gal(\Q(\sqrt{2+\sqrt{2}})/ \Q)$ be the automorphism sending\[\sqrt{2+\sqrt{2}} \mapsto \sqrt{2-\sqrt{2}}.\]Then we have\begin{align*}2+\sigma(\sqrt{2})&=\sigma(2+\sqrt{2})\\&=\sigma\left((\sqrt{2+\sqrt{2}}) ^2 \right)\\&=\sigma \left(\sqrt{2+\sqrt{2}} \right) ^2\\&= \left(\sqrt{2-\sqrt{2}} \right)^2=2-\sqrt{2}.\end{align*}Thus we obtain $\sigma(\sqrt{2})=-\sqrt{2}$. Using this, we have\begin{align*}\sigma^2(\sqrt{2+\sqrt{2}})&=\sigma(\sqrt{2-\sqrt{2}})\\&=\sigma \left(\frac{\sqrt{2}}{\sqrt{2+\sqrt{2}}} \right)\\&=\frac{\sigma(\sqrt{2})}{\sigma(\sqrt{2+\sqrt{2}})} \\&=\frac{-\sqrt{2}}{\sqrt{2-\sqrt{2}}} \\&=-\sqrt{2-\sqrt{2}}.\end{align*}Therefore $\sigma^2$ is not the identity automorphism, so $\sigma$ has order $4$. Since the Galois group $\Gal(\Q(\sqrt{2+\sqrt{2}})/ \Q)$ has order $4$, it is generated by $\sigma$; that is, the Galois group is a cyclic group of order $4$.
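The algebraic relations used in this argument are easy to confirm numerically (a sanity check, not part of the proof):

```python
import math

alpha = math.sqrt(2 + math.sqrt(2))  # root of f(x) = x^4 - 4x^2 + 2
beta = math.sqrt(2 - math.sqrt(2))   # the conjugate root sigma(alpha)

def f(x):
    return x**4 - 4 * x**2 + 2

# alpha and beta are roots of f, and their product is sqrt(2),
# so beta = sqrt(2)/alpha indeed lies in Q(alpha).
print(f(alpha), f(beta), alpha * beta - math.sqrt(2))
```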
I asked this on MathStackExchange and was instructed it would be better here. I've recently been learning about moduli spaces of instantons on $\mathbb{C}^{2}=\mathbb{R}^{4}$. From what I can gather, one can consider the framed moduli space of torsion-free sheaves on $\mathbb{P}^{2}$ of rank $N$ and second Chern class $k$, which we denote $\mathcal{M}(k,N)$. I believe in the rank one case, we can identify this moduli space with the symmetric product of $\mathbb{C}^{2}$, which of course can be crepantly resolved to the Hilbert scheme. In other words, $$\text{Hilb}^{k}(\mathbb{C}^{2}) \to \text{Sym}^{k}(\mathbb{C}^{2}) = \mathcal{M}(k,1)$$ From what I can gather, this is what's known as the "instanton moduli space on $\mathbb{C}^{2}$." There is then this whole "geometric engineering" story by Vafa, Hollowood, et al. where they consider either the $\chi_{y}$ genus of these moduli spaces or the elliptic genus $\text{Ell}_{y,q}$ and construct the instanton partition function: $$ \sum_{k} p^{k} \chi_{y} (\text{Hilb}^{k}(\mathbb{C}^{2})) \,\,\,\,\,\, \text{or} \,\,\,\,\,\, \sum_{k} p^{k} \text{Ell}_{y,q} (\text{Hilb}^{k}(\mathbb{C}^{2}))$$ One can then show that these partition functions are very remarkably equal to partition functions in topological string theory on certain Calabi-Yau varieties. So really I'm curious about replacing $\mathbb{C}^{2}$ with the ALE spaces, specifically the $A_{N}$ resolutions of the singularities $\mathbb{C}^{2}/\Gamma$ where $\Gamma$ is a finite subgroup of $SU(2)$. The above story with the Hilbert schemes was only for the rank one case, $N=1$, so it's very tempting to hope that maybe the higher rank moduli spaces $\mathcal{M}(k,N)$ might be related to the $A_{N}$ resolutions somehow? I was hoping someone could help me understand what the moduli space of instantons on ALE spaces looks like, and whether there are nice partition functions like the ones above arising from such a space.
I know there is physics literature here (like the Vafa-Witten https://arxiv.org/pdf/hep-th/9408074.pdf) but I'm having serious issues understanding the physics! Does considering the Hilbert scheme of points on the $A_{N}$ resolutions provide anything of physical relevance, or do we need something more complicated perhaps? This post imported from StackExchange MathOverflow at 2017-03-03 23:28 (UTC), posted by SE-user spietro
The Matrix Form of the Chain Rule for Compositions of Differentiable Functions from Rn to Rm Recall from The Chain Rule for Compositions of Differentiable Functions from Rn to Rm page that if $S \subseteq \mathbb{R}^n$ is open, $\mathbf{a} \in S$, $\mathbf{g} : S \to \mathbb{R}^p$, and if $\mathbf{f}$ is another function such that the composition $\mathbf{h} = \mathbf{f} \circ \mathbf{g}$ is well defined, then if $\mathbf{g}$ is differentiable at $\mathbf{a}$ with total derivative $\mathbf{g}'(\mathbf{a})$ and $\mathbf{f}$ is differentiable at $\mathbf{b} = \mathbf{g}(\mathbf{a})$ with total derivative $\mathbf{f}'(\mathbf{b}) = \mathbf{f}'(\mathbf{g}(\mathbf{a}))$, then $\mathbf{h}$ is differentiable at $\mathbf{a}$ and:

(1) $\quad \mathbf{h}'(\mathbf{a}) = \mathbf{f}'(\mathbf{g}(\mathbf{a})) \circ \mathbf{g}'(\mathbf{a})$

Also recall from earlier on The Jacobian Matrix of Differentiable Functions from Rn to Rm page that if a function is differentiable at a point then the total derivative of that function at that point is the Jacobian matrix of that function at that point. Therefore, if the composition $\mathbf{h} = \mathbf{f} \circ \mathbf{g}$ is well defined, $\mathbf{g}$ is differentiable at $\mathbf{a}$ with total derivative $\mathbf{g}'(\mathbf{a}) = \mathbf{D} \mathbf{g}(\mathbf{a})$, and $\mathbf{f}$ is differentiable at $\mathbf{b} = \mathbf{g}(\mathbf{a})$ with total derivative $\mathbf{f}'(\mathbf{b}) = \mathbf{D} \mathbf{f} (\mathbf{b})$ (i.e., $\mathbf{f}'(\mathbf{g}(\mathbf{a})) = \mathbf{D} \mathbf{f} (\mathbf{g}(\mathbf{a}))$), then, since from linear algebra the matrix of a composition of two linear maps is equal to the product of the matrices of those linear maps:

(2) $\quad \mathbf{D} \mathbf{h} (\mathbf{a}) = \mathbf{D} \mathbf{f} (\mathbf{g}(\mathbf{a})) \, \mathbf{D} \mathbf{g} (\mathbf{a})$

Furthermore, if $S \subseteq \mathbb{R}^n$ is open, $\mathbf{g} : S \to \mathbb{R}^m$ and $\mathbf{f} : R(\mathbf{g}) \to \mathbb{R}^p$, i.e.:

(3) $\quad \mathbb{R}^n \supseteq S \xrightarrow{\;\mathbf{g}\;} \mathbb{R}^m \xrightarrow{\;\mathbf{f}\;} \mathbb{R}^p$

Then for all $k \in \{ 1, 2, ..., p \}$ and for all $j \in \{ 1, 2, ..., n \}$ we have that:

(4) $\quad \displaystyle{\frac{\partial h_k}{\partial x_j} (\mathbf{a}) = \sum_{i=1}^{m} \frac{\partial f_k}{\partial y_i} (\mathbf{g}(\mathbf{a})) \frac{\partial g_i}{\partial x_j} (\mathbf{a})}$
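The matrix identity $\mathbf{D}\mathbf{h}(\mathbf{a}) = \mathbf{D}\mathbf{f}(\mathbf{g}(\mathbf{a}))\,\mathbf{D}\mathbf{g}(\mathbf{a})$ can be confirmed numerically against finite differences; the particular $\mathbf{g}$ and $\mathbf{f}$ below are arbitrary smooth examples chosen for illustration:

```python
import numpy as np

# g : R^2 -> R^3 and f : R^3 -> R^2, so h = f o g : R^2 -> R^2.
def g(x):
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def f(y):
    return np.array([y[0] + y[1] * y[2], np.exp(y[0])])

def jacobian(func, x, h=1e-6):
    """Central finite-difference Jacobian of func at x."""
    x = np.asarray(x, dtype=float)
    cols = []
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        cols.append((func(x + e) - func(x - e)) / (2 * h))
    return np.column_stack(cols)

a = np.array([0.7, -0.3])
Dh_direct = jacobian(lambda x: f(g(x)), a)     # D h(a), computed directly
Dh_chain = jacobian(f, g(a)) @ jacobian(g, a)  # D f(g(a)) D g(a)

print(np.max(np.abs(Dh_direct - Dh_chain)))    # agreement up to O(h^2)
```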
Almost six years ago, Michael Hardy raised the issue of the "partitions" tag being used for some very different concepts, and subsequently edited its tag wiki excerpt to at least be clear about what the various concepts covered were. Two months ago the tag wiki was changed to be solely about in... I have seen some posts which mentioned that they want only a hint. They want to think about their problem and solve it themselves. Nevertheless, some people give an answer, good or poor, even when the OP insists they just want a hint, and we post an answer starting with "It might be useful . . . " or something ... If $A$ and $B$ are matrices such that $AB^2=BA$ and $A^4=I$, then find $B^{16}$ My Method: Given $$AB^2=BA \tag{1}$$ Post-multiplying by $B^2$ we get $$AB^4=BAB^2=B^2A$$ Hence $$AB^4=B^2A$$ Pre-multiplying by $A$ and using $(1)$ we get $$A^2B^4=(AB^2)A=BA^2$$ hence $$A^2B^4=BA^2 \t... I'm having trouble solving the following PDE problem. We're in the open unit ball in the plane, centered at the origin, $$B=\{(x,y)\in \mathbb{R}^{2},\ \ x^2+y^2<1\}$$ The following boundary problem is given $$\Delta u(x,y)=0,\ \ \text{in}\ B,\ \ \ u(x,y)=\sin(x)\ \ \text{on}\ \partial B$$ ... Let $u:\Omega\rightarrow \mathbb{R}$ be a harmonic function (this is a smooth function) such that $$\Delta u =0 \quad \mathrm{ in }\quad \Omega,$$ where $\Omega\subseteq \mathbb{R}^{2}$ is an open set. Suppose that $0\in \Omega$ and $\rho>0$ such that $\mathcal{B}_{\rho}(0)\subset \Omega$, whe... I'm having a bit of a problem proving the equality: $$u(x) = \frac{1}{\omega_n r^{n-1}}\int_{\partial B(x,r)} u\, d\sigma = \frac{n}{\omega_n r^n}\int_{B (x,r)} u\, dV$$ Which is the mean value theorem for harmonic functions, where $\omega_n$ is the area of $S^n$ and $B(x,r)$ is the ball in $\...
A well-known feature of harmonic functions on (domains of) $\mathbb{R}^n$ is the mean-value property: that is, if $\Delta u = 0$, then $$ u(x_0) = \frac{1}{\text{Vol}(\partial B_r(x_0))}\int_{\partial B_r(x_0)}{u\,dS} = \frac{1}{\text{Vol}(B_r(x_0))}\int_{B_r(x_0)}{u\,dV}. $$ Is the same true on ... Theorem If $u\in C(\overline{B_R(x_0)})$ and is harmonic in $B_R(x_0)$, then $$|D^mu(x_0)|\leq\frac{n^m\exp(m-1)m!}{R^m}\max_\limits{\overline{B_R~(x_0)}}|u|$$ We can prove the theorem by induction, but I am stuck at the first step of the proof. When $m=1$, we have $$\triangledown u(x_0)=\frac{n... Let $A \subset \mathbb{R}^2$ be open and connected and let $u \in C^2(A)$ be harmonic. Then $u$ satisfies $$u(x)=\frac{1}{2\pi}\int_0^{2\pi}u(x+r\hat n(\theta)) \, d\theta$$ I'm given the following proof: Let $x \in A$ and take $r>0$ so that $B_r(x) \subset A$ . Since $u$ is harmonic it satisfi... A professor I talked to showed me a proof of the mean value property. (He actually showed it for functions solving the heat equation instead of Laplace's equation, but it seems like the argument is the same.) The proof involves distributions, which I am not very familiar with, so there is a step ... From PDE Evans, 2nd edition, pages 25-26. THEOREM 2 (Mean Value Formulas for Laplace's equation). If $u \in C^2(U)$ is harmonic, then $$u(x)=\def\avint{\mathop{\,\rlap{-}\!\!\int}\nolimits} \avint_{\partial B(x,r)}u \, dS=\def\avint{\mathop{\,\rlap{-}\!\!\int}\nolimits} \avint_{B(x,r)} u \, d... A search result for Mean Value Theorem gives us 2715 results, and results on the page are like ones I think we can include in the tag. The theorem is an important result in calculus, and questions relating to its applications, proofs. I think it would be useful if could have the tag, as it can gr... A tag named mean-value-theorem has been created recently. A tag with the same name was discussed before on meta and rejected: Tag proposal: mean-value-theorem. 
However, looking at the questions where the tag-creator added this tag, it seems that the intention was to create a tag for the mean val...
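Several of the excerpts above concern the mean value property of harmonic functions. As a quick numerical illustration (my own sketch, not part of any of the posts): the average of the harmonic function $u(x,y) = x^2 - y^2$ over a circle equals its value at the center.

```python
import numpy as np

# Numerical check of the mean value property for the harmonic function
# u(x, y) = x^2 - y^2 (Δu = 2 - 2 = 0): the average of u over a circle
# of radius r about (x0, y0) should equal u(x0, y0).

def u(x, y):
    return x ** 2 - y ** 2

x0, y0, r = 0.7, -0.3, 0.5
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
circle_avg = np.mean(u(x0 + r * np.cos(theta), y0 + r * np.sin(theta)))
print(circle_avg, u(x0, y0))  # the two values agree to high precision
```

For this particular $u$ the agreement is exact up to rounding, since the $r^2/2$ contributions from the $\cos^2$ and $\sin^2$ terms cancel.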
I derived demand, given a Cobb-Douglas utility function, but I am not really sure if I did it correctly. I am especially struggling with the sum signs and the subscripts $i$ and $j$. It would be really great if someone could check. I want to maximize utility over $n$ goods, indexed by $i$ and $j$. $\ u(x)=\prod_{i=1}^n x_i^{a_i} $ $\ s.t.:M=\sum_{j=1}^n p_jx_j $ $\ L= \sum_{i=1}^n a_i\log x_i+\lambda(M-\sum_{j=1}^np_jx_j) $ $ (1)\frac{\partial L}{\partial x_i} = \frac{a_i}{ x_i}-\lambda p_i=0$ $ (2)\frac{\partial L}{\partial x_j} = \frac{a_j}{ x_j}-\lambda p_j=0$ $ (3)\frac{\partial L}{\partial \lambda} = M-\sum_{j=1}^np_jx_j=0$ Dividing (1) by (2) it follows: $ \frac{p_i}{p_j} = \frac{a_i/x_i}{a_j/x_j}=\frac{a_ix_j}{a_jx_i}$, so $ x_j = \frac{p_ia_jx_i}{p_ja_i}$. Substituting $x_j$ into (3): $ M = \sum_{j=1}^np_j\frac{p_ia_jx_i}{p_ja_i}=\frac{p_ix_i}{a_i}\sum_{j=1}^na_j$, hence $ x_i= \frac{a_i}{\sum_{j=1}^na_j}\cdot\frac{M}{p_i}$. Furthermore, I would like to interpret what happens if we have an efficiency shock for good $i$, meaning good $i$ becomes cheaper. This leads to an increase in real income, which leads to an increased demand for good $x_i$, everything else held constant. Is that correct?
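As a sanity check on derivations like this one, the closed form can be compared against brute force. The sketch below (my own, with made-up numbers) uses the standard Cobb-Douglas demand $x_i^* = \frac{a_i}{\sum_j a_j}\frac{M}{p_i}$ and verifies that it dominates randomly drawn budget-exhausting bundles.

```python
import numpy as np

# Sanity check for Cobb-Douglas demand: maximizing sum_i a_i * log(x_i)
# subject to sum_j p_j * x_j = M gives the standard closed form
#   x_i* = (a_i / sum_j a_j) * M / p_i.
# We verify that it beats randomly drawn feasible bundles.

rng = np.random.default_rng(0)
a = np.array([0.3, 0.7])   # made-up preference weights
p = np.array([2.0, 5.0])   # made-up prices
M = 100.0                  # made-up income

def utility(x):
    return np.sum(a * np.log(x))

x_star = (a / a.sum()) * M / p
assert np.isclose(p @ x_star, M)  # the optimum exhausts the budget

# Random feasible bundles (spend all income according to random shares):
for _ in range(1000):
    w = rng.dirichlet(np.ones(2))   # random budget shares, sum to 1
    x = w * M / p                   # feasible by construction: p @ x = M
    assert utility(x) <= utility(x_star) + 1e-9
print("x* =", x_star)
```

With these numbers $x^* = (15, 14)$, and no random feasible bundle attains higher utility, as expected since the log-utility in budget shares is maximized exactly at $w = a/\sum_j a_j$.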
Let $C$ be the event that a randomly chosen person has lung cancer. Let $S$ be the event of a person being a smoker. Suppose that 10% of the population has lung cancer and 20% of the population are smokers. Also, suppose that we know that 70% of all people who have lung cancer are smokers. Then determine the probability of a person having lung cancer given that the person is a smoker. Let $P(E \mid F)$ be the probability that $E$ occurs given that $F$ occurs. This is called the conditional probability of $E$ given $F$. Suppose that we know $P(E)$, $P(F)$, and $P(F \mid E)$. Then $P(E \mid F)$ can be computed by Bayes' theorem (alternatively, Bayes' rule):\[ P(E \mid F) = \frac{P(E) \cdot P(F \mid E)}{P(F)}.\] Solution The given information can be formulated as\[P(C) = 0.1, P(S) = 0.2, \text{ and } P(S \mid C) = 0.7.\] The required probability is $P(C \mid S)$. Using Bayes' rule, we can compute it as follows.\begin{align*}P(C \mid S) &= \frac{P(C) \cdot P(S \mid C)}{P(S)}\\[6pt]&= \frac{(0.1)(0.7)}{0.2}\\[6pt]&= 0.35\end{align*} Remark The data given here is artificial for educational purposes and is not based on scientific fact.
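The computation in the solution can be restated in a few lines (a trivial sketch, using the numbers from the problem):

```python
# The lung-cancer example via Bayes' rule: P(C|S) = P(C) * P(S|C) / P(S).

p_C = 0.1          # P(lung cancer)
p_S = 0.2          # P(smoker)
p_S_given_C = 0.7  # P(smoker | lung cancer)

p_C_given_S = p_C * p_S_given_C / p_S
print(p_C_given_S)  # ≈ 0.35, matching the solution above
```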
Invariant Subspaces Definition: Let $V$ be a vector space over the field $\mathbb{F}$, and let $T$ be a linear operator from $V$ to $V$, that is $T \in \mathcal L (V)$. A subspace $U$ of $V$ is said to be Invariant Under $T$ if for all $u \in U$ we have that $T(u) \in U$. Alternatively, we can say that the subspace $U$ is invariant under $T$ if the operator $T$ restricted to the domain $U$, denoted $T \mid_U$, is an operator on $U$. Thus every element $u$ in the subspace $U$ gets mapped to an element $T(u)$ which is also in $U$. Before we look at some examples of invariant subspaces, we will first acknowledge the following theorem, which guarantees the existence of a linear operator under which a given nontrivial subspace $U$ of $V$ is not invariant. Theorem 1: If $V$ is a finite-dimensional vector space over the field $\mathbb{F}$ and $U$ is a nontrivial subspace of $V$, then there exists a linear operator $T \in \mathcal L (V)$ such that $U$ is not invariant under $T$. Proof: Suppose that $U$ is a nontrivial subspace of $V$. Then $U \neq \{ 0 \}$ and $U \neq V$. We will now construct a linear operator $T$ for which $U$ is not invariant under $T$. Let $u \in U$ be such that $u \neq 0$, and let $w \in V \setminus U$. Both of these vectors $u$ and $w$ exist since $U \neq \{ 0 \}$ and $U \neq V$. Now since $V$ is finite-dimensional and $U$ is a subspace of $V$, $U$ is also finite-dimensional. Extend the linearly independent set $\{ u \}$ to a basis $\{ u, v_1, v_2, ..., v_n \}$ of $V$. If we define $T \in \mathcal L (V)$ by $T(a_0 u + a_1 v_1 + ... + a_n v_n) = a_0 w$, then $T(u) = w$. However, $T(u) = w \not \in U$, and so $U$ is not invariant under $T$. $\blacksquare$ Note also that if $U = \{ 0 \}$ or if $U = V$, then $U$ is invariant under every linear operator $T \in \mathcal L (V)$.
Let's now look at some examples of invariant subspaces. Example 1: The Zero and Whole Space as Invariant Recall that if $V$ is a vector space then both the zero space $\{ 0 \}$ and the whole space $V$ are subspaces of $V$. Let $T \in \mathcal L (V)$. The zero subspace $\{ 0 \}$ is invariant under $T$. This can easily be seen since $T(0) = 0 \in \{ 0 \}$. Furthermore, the whole space $V$ is also trivially invariant under $T$ since for all elements $v \in V$ we have that $T(v) \in V$. Example 2: The Null Space as Invariant If $T \in \mathcal L (V)$, then the null space $\mathrm{null} (T) = \{ v \in V : T(v) = 0 \}$ is invariant under $T$. We note that $0 \in \mathrm{null} (T)$ since $T(0) = 0$. Since every element $v \in \mathrm{null}(T)$ is such that $T(v) = 0$, the image of $\mathrm{null}(T)$ under $T$ contains only $0$ which, once again, is in $\mathrm{null}(T)$. Example 3: The Range Space as Invariant If $T \in \mathcal L (V)$ then the range $\mathrm{range} (T) = \{ T(v) : v \in V \}$ is invariant under $T$. This can easily be seen since if $u \in \mathrm{range}(T)$, then $T(u) \in \mathrm{range}(T)$ by the definition of the subspace $\mathrm{range}(T)$. Example 4: The Differentiation of Polynomials Operator Let $T \in \mathcal L ( \wp_5 (\mathbb{R}), \wp_5 (\mathbb{R}))$ be defined by $T(p(x)) = p'(x)$ for all $p(x) \in \wp_5 (\mathbb{R})$. Consider the subspace $\wp_4 (\mathbb{R})$ of $\wp_5 (\mathbb{R})$. If we have a polynomial $p(x) \in \wp_4 (\mathbb{R})$, then $T(p(x)) = p'(x)$ will be a polynomial of degree less than or equal to $3$, and so $T(p(x)) \in \wp_4 (\mathbb{R})$, so $\wp_4 (\mathbb{R})$ is invariant under $T$.
More generally, if $T \in \mathcal L ( \wp_5 (\mathbb{R}), \wp_5 (\mathbb{R}))$ is defined by $T(p(x)) = p'(x)$, then the subspaces $\wp_{4} (\mathbb{R})$, $\wp_{3} (\mathbb{R})$, …, $\wp_{1} (\mathbb{R})$, $\wp_{0} (\mathbb{R})$ are all invariant under $T$, since any polynomial in any of these subspaces is mapped to a polynomial of lesser degree, which is still contained in the chosen subspace. Example 5 Let $T \in \mathcal L(V)$ and suppose that $U_1$ and $U_2$ are subspaces of $V$ and are both invariant under $T$. Prove that $U_1 \cap U_2$ is also invariant under $T$. Suppose that $U_1$ and $U_2$ are both invariant under $T$, and suppose that $u \in U_1 \cap U_2$. Then $u \in U_1$ and $u \in U_2$. Since $U_1$ is invariant under $T$, we have $T(u) \in U_1$, and since $U_2$ is invariant under $T$, we have $T(u) \in U_2$, and so $T(u) \in U_1 \cap U_2$. Hence $U_1 \cap U_2$ is invariant under $T$. Example 6 Let $T \in \mathcal L(V)$ and suppose that $U_1$, $U_2$, …, $U_m$ are subspaces of $V$, all of which are invariant under $T$. Prove that $U_1 + U_2 + ... + U_m$ is also invariant under $T$. Suppose that $u \in U_1 + U_2 + ... + U_m$. Then $u = u_1 + u_2 + ... + u_m$ where $u_j \in U_j$ for $j = 1, 2, ..., m$. Applying the linear operator $T$ to both sides of the equation above, we have that:(2) Since $U_1$, $U_2$, …, $U_m$ are all invariant subspaces under $T$, and since $u_j \in U_j$, we have that $T(u_j) \in U_j$ for $j = 1, 2, ..., m$. Hence $T(u) \in U_1 + U_2 + ... + U_m$ and so $U_1 + U_2 + ... + U_m$ is invariant under $T$.
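Example 4 can also be checked numerically by representing the differentiation operator on $\wp_5(\mathbb{R})$ as a matrix acting on coefficient vectors (a small sketch of my own):

```python
import numpy as np

# Represent p(x) in P_5(R) by its coefficient vector (c0, c1, ..., c5) and
# differentiation T(p) = p' as a 6x6 matrix D. The subspace P_4 (vectors
# with c5 = 0) should be invariant under D.

n = 6
D = np.zeros((n, n))
for k in range(1, n):
    D[k - 1, k] = k   # d/dx of c_k x^k contributes k*c_k to the x^(k-1) slot

rng = np.random.default_rng(1)
for _ in range(100):
    c = rng.standard_normal(n)
    c[5] = 0.0               # a random element of P_4
    image = D @ c            # coefficients of p'
    assert image[5] == 0.0   # p' stays in P_4
    assert image[4] == 0.0   # in fact the degree drops: p' lies in P_3
print("P_4 is invariant under differentiation")
```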
Dear All In the classical refutation method, one searches for a proof of $\Gamma, \lnot A \vdash \bot$ instead of $\Gamma \vdash A$. The method works, i.e. is complete and correct, since it is, for example, easily seen that both sequents are interderivable (*). In a Robinson resolution method based on the refutation method, we also see to it that $\Gamma$ and $A$ are in skolemized conjunctive normal form, and we only use unification and a simple inference rule guided by some control strategies. This is the background for why I am interested in the question. Now there are a couple of proposals that give Robinson resolution refutation for intuitionistic logic. I first want to understand the idea of refutation in intuitionistic logic. If the refutation method is applicable in intuitionistic logic, we would have interadmissibility of the following derivations: $\Gamma, \lnot A \vdash \bot$ $\Gamma \vdash A$ We cannot show this via interderivability as in the classical case. The first direction would not work, since it makes use of double negation elimination. But the second direction, for example, easily works in a Gentzen system (**). I have the feeling the first direction could now be a result of a permutation lemma. In a Gentzen system, when we have a derivation that ends in $\Gamma, \lnot A \vdash \bot$, we don't know whether the last rule application concerned $\lnot A$ or some formula among $\Gamma$. If we can show that for any derivation there is another, accordingly permuted derivation, we would be done. Does such a permutation lemma hold for intuitionistic logic? Or can the refutation method be validated by other means, without referring to this permutation? Or is interadmissibility only guaranteed for some special clausal forms?
Best Regards (*) Here are some derivations that show classical interderivability; I use $ \lnot A = A \rightarrow \bot$: The first direction: $${{\Gamma, \lnot A \vdash \bot \over \Gamma \vdash \lnot \lnot A}{(\rightarrow R)} \qquad {\over \lnot \lnot A \rightarrow A}{(DNE)} \over \Gamma \vdash A}{(MP)}$$ The second direction: $${\Gamma \vdash A \qquad {\over \lnot A \vdash \lnot A}{(ID)} \over \Gamma, \lnot A \vdash \bot}{(MP)}$$ (**) The second direction can be shown in the intuitionistic case, when making use of a Gentzen system, by directly applying the left implication rule: $${\Gamma \vdash A \qquad {\over \bot \vdash \bot}{(ID)} \over \Gamma, \lnot A \vdash \bot}{(\rightarrow L)}$$
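One concrete way to see why the first direction fails intuitionistically (my own addition, not part of the question): in the three-element Heyting chain $0 < \tfrac{1}{2} < 1$, double negation elimination is not valid. A few lines of code check this.

```python
# In the three-element Heyting chain 0 < 1/2 < 1, implication is
#   a -> b  =  1 if a <= b else b,   and   ¬a = a -> 0.
# Double negation elimination ¬¬a -> a fails at a = 1/2, which is one way
# to see that the classical refutation argument does not transfer.

from fractions import Fraction

TOP = Fraction(1)

def imp(a, b):
    return TOP if a <= b else b

def neg(a):
    return imp(a, Fraction(0))

a = Fraction(1, 2)
dne = imp(neg(neg(a)), a)
print(dne)  # 1/2, not 1: DNE is not intuitionistically valid
```

The converse implication $a \rightarrow \lnot\lnot a$ does evaluate to the top element, matching the fact that double negation introduction is intuitionistically fine.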
When I first encountered the definition of integrals with respect to Ito processes (Shreve's Stochastic Calculus for Finance Vol II), I didn't think twice. However, I wanted to see if the definition could be derived. In the rest of this post $\bar{f}$ is such that $\bar{f}'=f$ and $t_{j}^{f}$ is such that $t_{j}\leq t_{j}^{f}\leq t_{j+1}$ according to the mean-value theorem, for some process $f$. Consider the process $$X(t)=X(0)+\int_{0}^{t}\Delta(u)\;dW(u)+\int_{0}^{t}\Theta(u)\;du$$ where the processes $\Theta_{t}(\omega)$ and $\Delta_{t}(\omega)$ are adapted to the Brownian motion $W_{t}(\omega)$. Then if $\Gamma_{t}(\omega)$ is adapted to $X_{t}(\omega)$, we define $$\int_{0}^{t}\Gamma(u)\;dX(u)=\int_{0}^{t}\Gamma(u)\Delta(u)\;dW(u)+\int_{0}^{t}\Gamma(u)\Theta(u)\;du,$$ which is of course what we would expect by writing (formally) $$\int_{0}^{t}\Gamma(u)(\Delta(u)\;dW(u)+\Theta(u)\;du)=\int_{0}^{t}\Gamma(u)\Delta(u)\;dW(u)+\int_{0}^{t}\Gamma(u)\Theta(u)\;du.$$ It seems this should be derivable (rather easily) from the original definition of the Ito stochastic integral. 
However, when I tried to do this, I failed: $$\begin{align*} \int_{0}^{t}\Gamma(u)\;dX(u)&=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\left(\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\int_{t_{j}}^{t_{j+1}}\Theta(u)\;du\right)\\ &=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Theta(u)\;du\\ &=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})(\bar{\Theta}(t_{j+1})-\bar{\Theta}(t_{j}))\\ &=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\Theta(t_{j}^{\Theta})(t_{j+1}-t_{j})\\ &=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\cdot\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)+\int_{0}^{t}\Gamma(u)\Theta(u)\;du. \end{align*}$$ where $t_{j}\leq t_{j}^{\Theta}\leq t_{j+1}$ and $\bar{\Theta}'=\Theta.$ At this point it would appear nothing can be done with the first sum, since we don't have a mean-value theorem for Ito stochastic integrals (as quantified by Ito's lemma); i.e., there does not in general exist a $t_{j}^{\Delta}\in[t_{j},t_{j+1}]$ such that $$\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)=\Delta(t_{j}^{\Delta})(W(t_{j+1})-W(t_{j})).$$ And in any event it doesn't really matter, since even if we had this result, it would not be known a priori whether the resulting sum converges (or whether it converges to the correct stochastic integral) due to the sensitivity of the limiting sums with respect to the sampling point used. To get a more concrete sense of the difficulty, consider the special case $\Delta(u)=f(W(u))$.
Then $$\int_{t_{j}}^{t_{j+1}}\Delta(u)\;dW(u)=\bar{f}(W(t_{j+1}))-\bar{f}(W(t_{j}))-\frac{1}{2}\int_{t_{j}}^{t_{j+1}}f'(W(u))\;du,$$ and the first sum becomes $$ \begin{array}{l} \lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\left(\bar{f}(W(t_{j+1}))-\bar{f}(W(t_{j}))-\frac{1}{2}\int_{t_{j}}^{t_{j+1}}f'(W(u))\;du\right)\\ \;\;\;\;=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\left(\bar{f}(W(t_{j+1}))-\bar{f}(W(t_{j}))\right)-\lim_{n\to\infty}\frac{1}{2}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\left(\int_{t_{j}}^{t_{j+1}}f'(W(u))\;du\right)\\ \;\;\;\;=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})f(W(t_{j}^{f}))(W(t_{j+1})-W(t_{j}))-\lim_{n\to\infty}\frac{1}{2}\sum_{j\in\Pi_{n}}\Gamma(t_{j})f'(W(t_{j}^{f'}))(t_{j+1}-t_{j})\\ \;\;\;\;=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\Delta(t_{j}^{f})(W(t_{j+1})-W(t_{j}))-\frac{1}{2}\int_{0}^{t}\Gamma(u)f'(W(u))\;du.\end{array} $$ Putting this together yields $$\int_{0}^{t}\Gamma(u)\;dX(u)=\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\Delta(t_{j}^{f})(W(t_{j+1})-W(t_{j}))-\frac{1}{2}\int_{0}^{t}\Gamma(u)f'(W(u))\;du+\int_{0}^{t}\Gamma(u)\Theta(u)\;du,$$ and it's hard to see how this ends up being equal to the original definition. One would need to somehow show that the mean-value sampling $\Delta(t_{j}^{f})$ in the first sum results in $$\lim_{n\to\infty}\sum_{j\in\Pi_{n}}\Gamma(t_{j})\Delta(t_{j}^{f})(W(t_{j+1})-W(t_{j}))=\int_{0}^{t}\Gamma(u)\Delta(u)\;dW(u)+\frac{1}{2}\int_{0}^{t}\Gamma(u)f'(W(u))\;du.$$ And in any event, this is just a special case of the process $\Delta_{t}(\omega)$. I have no doubt the issue can be resolved analytically; however, while the difficulties remain unresolved, it tends to make (in my mind) the definition somewhat artificial.
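A small simulation can make the role of the sampling point and the quadratic-variation correction concrete. The sketch below (my own, not from the post) checks the exact telescoping identity behind the left-endpoint (Ito) sum for the special case $\Gamma = \Delta = W$, $\Theta = 0$.

```python
import numpy as np

# A discrete-time illustration of why the sampling point matters for Ito sums.
# For the left-endpoint sum with integrand W, telescoping gives the EXACT
# identity
#   sum_j W_j (W_{j+1} - W_j) = (W_T^2 - sum_j (ΔW_j)^2) / 2,
# and the quadratic variation sum (ΔW_j)^2 ≈ T is precisely what produces
# the Ito correction that a naive mean-value argument misses.

rng = np.random.default_rng(42)
T, n = 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))  # Brownian path, W(0) = 0

left_sum = np.sum(W[:-1] * dW)              # Ito (left-endpoint) sum
quad_var = np.sum(dW ** 2)                  # ≈ T for large n
identity = 0.5 * (W[-1] ** 2 - quad_var)    # exact, by telescoping
print(left_sum, identity, quad_var)
```

Sampling at a different point in each interval (e.g. the midpoint, which yields the Stratonovich integral) shifts the limit by exactly the quadratic-variation term, which is the sensitivity the post describes.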
Let $1\leq p\leq +\infty$, $0<s<1$ and $\Omega\subseteq \mathbb{R}^n$ an open set. The fractional Sobolev space $W^{s,p}(\Omega)$ is defined to be $$ W^{s,p}(\Omega) = \left\{ u\in L^p(\Omega) : \frac{|u(x)-u(y)|}{|x-y|^{\frac{n}{p} + s}} \in L^p(\Omega\times\Omega) \right\} $$ equipped with the norm $$ \|u\|_{W^{s,p}(\Omega)} = \left( \int_\Omega |u|^p \; dx + \int_\Omega\int_\Omega \frac{|u(x)-u(y)|^p}{|x-y|^{n+ sp}} \; dx dy \right)^{1/p}. $$ It is a well-known result that this is a Banach space, but every reference I read says that this is true without giving a proof. Adams' Sobolev Spaces uses techniques from interpolation theory to prove this result, but I'm not familiar with the theory, so I'm asking for an "elementary" proof, that is, as usual, proving that every Cauchy sequence has a limit in the space. Here is my attempt: Let $(u_n)$ be a Cauchy sequence in $W^{s,p}(\Omega)$; then $(u_n)$ is a Cauchy sequence in $L^p(\Omega)$, so there exists $u\in L^p(\Omega)$ such that $u_n\to u$ in $L^p(\Omega)$. I would like to prove that $u\in W^{s,p}(\Omega)$ and that $u_n\to u$ in $W^{s,p}(\Omega)$. But I don't know how to proceed here. Thanks for the help!
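One standard elementary route from this point (a sketch I am adding, not part of the post) combines $L^p$ convergence with Fatou's lemma applied to the Gagliardo seminorm:

```latex
% Sketch of the Fatou argument. Since u_n -> u in L^p, pass to a
% subsequence u_{n_k} -> u a.e. in Omega. Fatou's lemma on Omega x Omega,
% together with boundedness of the Cauchy sequence, then gives
\begin{align*}
\int_\Omega\int_\Omega \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}\,dx\,dy
  \;\le\; \liminf_{k\to\infty} \int_\Omega\int_\Omega
       \frac{|u_{n_k}(x)-u_{n_k}(y)|^p}{|x-y|^{n+sp}}\,dx\,dy
  \;<\; \infty,
\end{align*}
% so u lies in W^{s,p}(Omega). Applying the same Fatou estimate to the
% differences u_m - u_{n_k} (with k -> infinity, m fixed) yields
% [u_m - u]_{W^{s,p}} <= liminf_k [u_m - u_{n_k}]_{W^{s,p}},
% which is small for large m by the Cauchy property; hence u_m -> u
% in W^{s,p}(Omega).
```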
There is a vast range of problems that fall under the broad umbrella of making sequential decisions under uncertainty. While there is widespread acceptance of basic modeling frameworks for deterministic versions of these problems from the fields of math programming and optimal control, sequential stochastic problems are another matter. Motivated by a wide range of applications, entire fields have emerged with names such as dynamic programming (Markov decision processes, approximate/adaptive dynamic programming, reinforcement learning), stochastic optimal control, stochastic programming, model predictive control, decision trees, robust optimization, simulation optimization, stochastic search, and online computation. Problems may be solved offline (requiring computer simulation) or online in the field, which opens the door to the communities working on multi-armed bandit problems. Each of these fields has developed its own style of modeling, often with different notation (\(x\) or \(S\) for state, \(x/a/u\) for decision/action/control), and different objectives (minimizing expectations, risk, or stability). Perhaps most difficult is appreciating the differences in the underlying application. A matrix \(K\) could be \(5 \times 5\) in one class of problems, or \(50,000 \times 50,000\) in another (it still looks like \(K\) on paper). But what really stands out is how each community makes a decision. Despite these differences, it is possible to pull them together in a common framework that recognizes that the most important (albeit not the only) difference is the nature of the policy being used to make decisions over time (we emphasize that we are only talking about sequential problems, consisting of decision, information, decision, information, …).
We start by writing the most basic canonical form as \[\begin{align} & {\min\nolimits_{\pi \in \Pi} \mathbb{E}^{\pi} \sum_{t=0}^T C(S_t, U_t^{\pi}(S_t))} \: \: \: \: \: \: \: \: \: (1)\\ & \text{where }S_{t+1} = S^M(S_t, u_t, W_{t+1}). \nonumber \end{align}\] Here we have adopted the notational system where \(S_t\) is the state (physical state, as well as the state of information, and state of knowledge), and \(u_t\) is a decision/action/control (alternatives are \(x_t\), popular in operations research, or \(a_t\), popular in operations research as well as computer science). We let \(U_t^{\pi}(S_t)\) be the decision function, or policy, which is one member in a set \(\Pi\) where \(\pi\) specifies both the type of function, as well as any tunable parameters \(\theta \in \Theta^\pi\). The function \(S^M(S_t, u_t, W_{t+1})\) is known as the transition function (or system model, state model, plant model, or simply “model”). Finally, we let \(W_{t+1}\) be the information that first becomes known at time \(t+1\) (control theorists would call this \(W_t\), which is random at time \(t\)). Important problem variations include different operators to handle uncertainty; we can use an expectation in \((1)\), a risk measure, or worst case (robust optimization), as well as a metric capturing system stability. We can assume we know the probability law behind \(W_t\), or we may just observe \(S_{t+1}\) given \(S_t\) and \(u_t\) (model-free dynamic programming). While equation \((1)\) is well-recognized in certain communities (some will describe it as “obvious”), it is actually quite rare to see \((1)\) stated as the objective function with anything close to the automatic writing of objective functions for deterministic problems in math programming or optimal control. We would argue that the reason is that there is no clear path to computation.
While we have powerful algorithms to solve over real-valued vector spaces (as required in deterministic optimization), equation \((1)\) requires that we search over spaces of functions (policies). Lacking tools for performing this search, we make the argument that all the different fields of stochastic optimization can actually be described in terms of different classes of policies. In fact, we have identified four fundamental (meta) classes, which are the following: 1. Policy function approximations (PFAs). These are analytical functions that map states to actions. PFAs may come in the form of lookup tables, parametric, or non-parametric functions. A simple example might be \[\begin{equation} U^{\pi}(S_t\,|\,\theta) = \sum_{f\in F} \theta_f \phi_f(S_t) \end{equation} \: \: \: \: \: \: \: \: \: (2)\] where \(F\) is a set of features, and \(\bigl(\phi_f(S_t)\bigr), f \in F\) are sometimes called basis functions. 2. Cost function approximations (CFAs). Here we are going to design a parametric cost function, or parametrically modified constraints, producing a policy that we might write as \[\begin{equation} U^{\pi}(S_t\,|\,\theta) = \arg \min\nolimits_{u \in \mathrm{U}_t^{\pi}(\theta)} C_t^{\pi} (S_t, u\,|\,\theta) \end{equation}\: \: \: \: \: \: \: \: \: (3)\] where \(C_t^{\pi} (S_t, u\,|\,\theta)\) is a parametrically modified set of costs (think of including bonuses and penalties to handle uncertainty), while \(U_t^{\pi}(\theta)\) might be a parametrically modified set of constraints (think of including schedule slack in an airline schedule, or a buffer stock). 3. Policies based on value function approximations (VFAs). These are the policies most familiar under the umbrella of dynamic programming and reinforcement learning.
These might be written as \[\begin{align} & U^{\pi}_t(S_t\,|\,\theta) = \arg \min\nolimits_{u \in \mathrm{U}_t^{\pi}(\theta)} C(S_t, u)+{} \nonumber\\ & \quad \mathbb{E}\bigl\{\overline{V}_{t+1}^{\pi}(S^M(S_t, u, W_{t+1})\,|\,\theta)\,|\,S_t\bigr\} \end{align} \: \: \: \: \: \: \: \: \: (4)\] where \(\overline{V}_{t+1}^{\pi}(S_{t+1})\) is an approximation of the value of being in state \(S_{t+1} = S^M(S_t, u, W_{t+1})\), where \(\pi\) captures the structure of the approximation and \(\theta \in \Theta^\pi\) represents any tunable parameters. 4. Lookahead policies. Lookahead policies start with the basic observation that we can write an optimal policy using \[\begin{align} & U^{\pi}_t(S_t\,|\,\theta) = \arg \min\nolimits_u\Biggl(C(S_t, u) + \min\nolimits_{\pi \in \Pi} \nonumber \\ & \qquad \mathbb{E}^{\pi}\Biggl\{\sum_{t' = t + 1}^T C(S_{t'}, U_{t'}^{\pi}(S_{t'}))\,|\,S_t, u\Biggr\}\Biggr) \end{align}\: \: \: \: \: \: \: \: \: (5)\] The problem is that the second term in \((5)\) is not computable (if this were not the case, we could have solved the objective function in \((1)\) directly). For this reason, we create a lookahead model which is an approximation of the real problem. Common approximations are to limit the horizon (e.g. from \(T\), which might be quite long, to \(t+H\) for some appropriately chosen horizon \(H\)), and (most important) to replace the original stochastic information process with something simpler. 
The most obvious is a deterministic approximation, which we can write as \[U_t^{\pi}(S_t|\theta)=\arg\min\nolimits_{u_t, \tilde{u}_{t,t+1} ,\ldots,\tilde{u}_{t,t+H}}\ \Biggl(C(S_t,u_t)+\sum^{t+H}_{t'=t+1}C(\tilde{S}_{tt'},\tilde{u}_{tt'})\Biggr).\: \: \: \: \: \: \: \: \: (6)\] To make the distinction from our original base model in \((1)\), we put tildes on all our variables (other than those at time \(t\)), and we also index the variables by \(t\) (to indicate that we are solving a problem at time \(t\)) and \(t'\) (which is the point in time within the lookahead model). A widely-used approach in industry is to start with \((6)\) and then introduce modifications (often to the constraints) so that the decisions made now are more robust to uncertain outcomes that occur later. This would be a form of (hybrid) cost function approximation. We may instead use a stochastic lookahead model. For example, the stochastic programming community most often uses \[{U}_t^{\pi}(S_t|\theta)=\arg\min\nolimits_{u_t, \tilde{u}_{t,t+1} ,\ldots,\tilde{u}_{t,t+H}}\\ \Biggl(C(S_t,u_t)+\sum_{\omega\in\tilde{\Omega}_t}p(\omega)\sum^{t+H}_{t'=t+1}C(\tilde{S}_{tt'}(\omega),\tilde{u}_{tt'}(\omega))\Biggr).\: \: \: \: \: \: \: \: \: (7)\] Here, we would let \(\theta\) capture parameters such as the planning horizon and the logic for constructing \(\tilde{\Omega}_t\). Other variations include a robust objective (which minimizes over the worst outcome rather than the expected outcome), or a chance-constrained formulation, which approximates the costs over all the uncertain outcomes using simple penalties for violating constraints. All of these policies involve tunable parameters, given by \(\theta\). We would represent the policy \(\pi\) as the policy class \(f\in F\), and the parameters \(\theta \in \Theta^f\). Thus, the search over policies \(\pi\) in equation \((1)\) can now be thought of as the search over policy classes \(f\in F\), and then over the tunable parameters \(\theta \in \Theta^f\).
No, this is not easy. But with this simple bit of notation, all of the different communities working on sequential stochastic optimization problems can be represented in a common framework. Why is this useful? First, a common vocabulary facilitates communication and the sharing of ideas. Second, it is possible to show that each of the four classes of policies can work best on the same problem, if we are allowed to tweak the data. And finally, it is possible to combine the classes into hybrids that work even better than a pure class. And maybe some day, mathematicians will figure out how to search over function spaces, just as Dantzig taught us to search over vector spaces.
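As a toy illustration of this framework (my own sketch, not from the article): take a single-product inventory problem, fix the policy class to the order-up-to rule $U^\pi(S\,|\,\theta) = \max(0, \theta - S)$, a one-parameter PFA, and perform the search over $\theta \in \Theta^\pi$ by simulation. The cost constants and demand distribution below are invented for illustration.

```python
import numpy as np

# Tuning a one-parameter PFA by simulation: an order-up-to inventory policy
# U(S|θ) = max(0, θ - S). We estimate the average cost of each θ by
# simulation and pick the best — a search over Θ^π for a fixed policy class.

rng = np.random.default_rng(7)
ORDER_COST, HOLD_COST, PENALTY = 1.0, 0.1, 4.0  # made-up cost data

def simulate_cost(theta, T=100, n_reps=100):
    """Average per-period cost of the order-up-to-theta policy."""
    total = 0.0
    for _ in range(n_reps):
        S = 0.0
        for _ in range(T):
            u = max(0.0, theta - S)        # the policy U(S|θ)
            S += u
            demand = rng.poisson(5.0)      # made-up demand distribution
            served = min(S, demand)
            S -= served
            total += (ORDER_COST * u + HOLD_COST * S
                      + PENALTY * (demand - served))
    return total / (T * n_reps)

thetas = np.arange(0.0, 15.0, 1.0)
costs = [simulate_cost(t) for t in thetas]
best = thetas[int(np.argmin(costs))]
print("best order-up-to level:", best)
```

The grid search over $\theta$ here stands in for the outer minimization over $\pi$ in equation (1), restricted to one policy class; nothing about the simulator changes if the policy class is swapped for a CFA, VFA, or lookahead policy.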
Just two simple questions I'm struggling with. Hope you can help. Suppose that the model $y=X\beta +\epsilon$ with $\epsilon \sim \text{ Normal}(0,\sigma^2I_n)$ has a prior $\beta \sim \text{ Normal}( \beta_0, k(X^TX)^{-1})$. I want two things: 1.) I want to show that the density of $\beta$ satisfies $$p(\beta) \propto \exp \{-\frac{1}{2} k^{-1} (\beta^TX^TX\beta-2\beta^TX^TX\beta_0) \} $$ 2.) I want to show that $$p(\beta | y,X,\sigma^2) \propto p(y|X,\beta, \sigma^2)\cdot p(\beta) $$ It would be very nice if somebody could say anything about this. I would also like to know why $$p(\beta|y,X,\sigma^2) \propto \exp\{-\frac{1}{2}[(k^{-1}+\sigma^{-2})\beta^TX^TX\beta -2\beta^TX^TX\beta_0] \}$$
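For intuition on question 2.): with this conjugate prior, completing the square shows the posterior mean is a precision-weighted average of the prior mean $\beta_0$ and the OLS estimate $\hat\beta$, namely $(k^{-1}\beta_0 + \sigma^{-2}\hat\beta)/(k^{-1}+\sigma^{-2})$. The sketch below (synthetic data, my own illustration) verifies this identity numerically.

```python
import numpy as np

# Conjugate prior β ~ N(β0, k (X'X)^{-1}): the prior precision is k^{-1} X'X
# and the likelihood contributes σ^{-2} X'X, so the posterior mode solves
#   (k^{-1} + σ^{-2}) X'X β = k^{-1} X'X β0 + σ^{-2} X'y,
# giving the weighted-average form  (β0/k + β_hat/σ²) / (1/k + 1/σ²).

rng = np.random.default_rng(3)
n, d = 50, 3
X = rng.standard_normal((n, d))
beta_true = np.array([1.0, -2.0, 0.5])     # synthetic ground truth
sigma2, k = 0.5, 2.0
y = X @ beta_true + rng.normal(0.0, np.sqrt(sigma2), n)
beta0 = np.zeros(d)

XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ y)   # OLS estimate

# Weighted-average form of the posterior mean:
w = 1.0 / k + 1.0 / sigma2
post_mean = (beta0 / k + beta_hat / sigma2) / w

# Direct form: solve the normal equations of the negative log posterior.
lhs = w * XtX
rhs = XtX @ beta0 / k + X.T @ y / sigma2
post_mode = np.linalg.solve(lhs, rhs)
print(np.max(np.abs(post_mean - post_mode)))  # agrees up to rounding
```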
Oscillation and Continuity of a Bounded Function at a Point Recall from the Oscillation of a Bounded Function at a Point page that if $f$ is a bounded function on $[a, b]$ and $c \in [a, b]$ then the oscillation of $f$ at $c$ is defined to be:(1) Earlier, on the Oscillation of a Bounded Function on a Set page, for $T \subseteq [a, b]$ we defined $\Omega_f (T) = \sup \{ f(x) - f(y) : x, y \in T \}$. We will now look at an extremely important result which allows us to determine whether a bounded function $f$ is continuous at a point $c \in [a, b]$ based on the value of the oscillation of $f$ at $c$. Theorem 1: Let $f$ be a bounded function on $[a, b]$ and let $c \in [a, b]$. Then $f$ is continuous at $c$ if and only if $\omega_f (c) = 0$. Proof:$\Rightarrow$ Let $\epsilon > 0$ be given. Suppose that $f$ is continuous at $c$. Then for $\epsilon_1 = \frac{\epsilon}{2}$ there exists a $\delta_1 > 0$ such that if $\mid x - c \mid < \delta_1$, i.e., for $x \in B(c, \delta_1)$, then: We want to show that $\displaystyle{\omega_f (c) = \lim_{h \to 0} \Omega_f ((c - h, c + h) \cap [a, b]) = 0}$. Note that $B(c, h) = (c - h, c + h)$. Let $\delta = \delta_1$. Then if $\mid h \mid < \delta$ we have that $(c - h, c + h) \subset (c - \delta, c + \delta)$. Recall that if $T_1, T_2 \subseteq [a, b]$ and $T_1 \subseteq T_2$ then $\Omega_f (T_1) \leq \Omega_f(T_2)$. So: So for all $x, y \in B(c, h) \cap [a, b] \subset B(c, \delta) \cap [a, b]$ we have that $(*)$ holds and: And by the triangle inequality we see that for all $x, y \in B(c, h) \cap [a, b] \subset B(c, \delta) \cap [a, b]$ that: Therefore $\Omega_f ((c - h, c + h) \cap [a, b]) = \sup \{ f(x) - f(y) : x, y \in (c - h, c + h) \cap [a, b] \} < \epsilon$. So for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $\mid h \mid < \delta$ then $\Omega_f ((c - h, c + h) \cap [a, b]) < \epsilon$ and so: $\Leftarrow$ Let $\epsilon > 0$ be given.
Suppose that $\omega_f(c) = \lim_{h \to 0} \Omega_f((c - h, c + h) \cap [a, b]) = 0$. Then we have that for $\epsilon_1 = \epsilon > 0$ there exists a $\delta_1 > 0$ such that if $\mid h \mid < \delta_1$ then: Let $\delta = \delta_1$. Then for all $x \in B(c, h) = (c - h, c + h) \subset B(c, \delta) = (c - \delta, c + \delta)$ ($\mid h \mid < \delta$) we have that $(**)$ holds and that: So for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $x \in B(c, \delta) = (c - \delta, c + \delta)$ (i.e., $\mid x - c \mid < \delta$) then $\mid f(x) - f(c) \mid < \epsilon$, so $f$ is continuous at $c$. $\blacksquare$
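Theorem 1 can also be illustrated numerically. In the sketch below (the function names and the sampling grid are my own, and the grid only approximates the supremum), $\Omega_f$ is evaluated on shrinking intervals $(c - h, c + h)$: at a jump discontinuity the oscillation stays bounded away from $0$, while at a point of continuity it shrinks to $0$.

```python
# Numerical sketch: approximate Omega_f((c-h, c+h) ∩ [a, b]) = sup f - inf f
# over the interval by sampling f on a fine grid.

def oscillation(f, c, h, samples=10001, a=-1.0, b=1.0):
    """Approximate the oscillation of f on (c - h, c + h) ∩ [a, b]."""
    lo, hi = max(a, c - h), min(b, c + h)
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    vals = [f(x) for x in xs]
    return max(vals) - min(vals)

step = lambda x: 0.0 if x < 0 else 1.0   # jump discontinuity at 0

for h in (0.1, 0.01, 0.001):
    print(oscillation(step, 0.0, h))      # stays at 1.0, so omega_step(0) = 1
    print(oscillation(abs, 0.5, h))       # shrinks to 0: |x| is continuous at 0.5
```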
I'm trying to implement threshold RSA operations, starting with decryption based on Peeters, R., Nikova, S., & Preneel, B. (2008). Practical RSA Threshold Decryption for Things That Think. Retrieved from http://www.cosic.esat.kuleuven.be/publications/article-1178.pdf and running into problems where it seems I would have to raise by a negative exponent. The formula is on page 9: $$ w = \prod_{j\in S} x_j^{\lambda_{0,j}^S} $$ Peeters et al. reference Shoup, V. (2000). Practical threshold signatures. Advances in Cryptology—EUROCRYPT 2000, 207–220. Retrieved from http://link.springer.com/content/pdf/10.1007/3-540-45539-6_15.pdf for definitions, which to me seems to have the same issue. I'll quote Shoup page 212: Let $\Delta=l!$. For any subset $S$ of $k$ points in $\{0, \ldots, l\}$, and for any $i \in \{0, \ldots, l\} \setminus S$, and $j \in S$, we can define $$ \lambda_{i,j}^S = \Delta\frac{\prod_{j^\prime \in S\setminus\{j\} }(i-j^\prime)}{\prod_{j^\prime \in S\setminus\{j\} }(j-j^\prime)} \in \mathbf{Z} $$ (which Shoup defines as equation (2), but I couldn't get the embedded LaTeX to do that) Shoup uses $\lambda$ on page 214: $$w = x_{i_1}^{2\lambda_{0,i_1}^S} \dots x_{i_k}^{2\lambda_{0,i_k}^S}$$ (where the $x_i$ were previously calculated to be in $Q_n$, i.e., the subgroup of squares in $\mathbf{Z}_n^*$, $n$ being the RSA modulus, though I'm a little bit out of my depth as to what consequences this has. In any case, Peeters et al. don't use the subgroup of squares.) My problem is: $\lambda_{0,j}^S$ is negative quite often. Simple example: $l=3, k=2, S=\{1,2\}, \Delta = l! = 6$. For $j=1$: $\lambda_{0,1}^S = \Delta \frac{ (0-2) }{ (1-2) } = 6 \cdot 2$ For $j=2$: $\lambda_{0,2}^S = \Delta \frac{ (0-1) }{ (2-1) } = 6 \cdot (-1)$ How am I supposed to use that in an exponent?
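A common resolution (this is my reading; neither paper spells it out) is that a negative exponent $\lambda$ just means exponentiating the inverse: the $x_j$ are invertible modulo $n$, so $x_j^{\lambda} \equiv (x_j^{-1})^{|\lambda|} \pmod n$. A sketch in Python, using a tiny hypothetical modulus for illustration:

```python
# Lagrange coefficients as in Shoup eq. (2), plus modular exponentiation
# that tolerates a negative exponent via the modular inverse (Python 3.8+
# pow(x, -1, n) computes the inverse when gcd(x, n) = 1).

from math import factorial, prod

def lam(i, j, S, l):
    """Integer Lagrange coefficient Delta * prod((i-j')/(j-j')), j' in S\\{j}."""
    delta = factorial(l)
    num = prod(i - jp for jp in S if jp != j)
    den = prod(j - jp for jp in S if jp != j)
    assert (delta * num) % den == 0          # Delta clears the denominator
    return delta * num // den

def pow_signed(x, e, n):
    """x**e mod n, allowing negative e by inverting x in Z_n^* first."""
    if e < 0:
        return pow(pow(x, -1, n), -e, n)
    return pow(x, e, n)

n = 3233                                     # toy modulus 61 * 53 (hypothetical)
S, l = [1, 2], 3
print([lam(0, j, S, l) for j in S])          # -> [12, -6], matching the example
print(pow_signed(5, lam(0, 2, S, l), n))     # same as pow(pow(5, -1, n), 6, n)
```

This works whenever $\gcd(x_j, n) = 1$, which fails only with negligible probability for a real RSA modulus.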
I'm assuming here that the $x_{\lambda}$ are real numbers. (Complex numbers would be fine too -- that doesn't matter. This was written before the edit mentioning topological vector spaces in general, and I haven't thought about it at that level of generality.) In the first part of this answer, we'll see how to define convergence of a sum indexed by a countable ordinal. In the second part of this answer, we'll see that there is no way to define the convergence of any well-ordered sum with uncountably many non-zero terms (it's easy to eliminate the case where all the non-zero terms are positive reals, but in fact we'll rule out any non-zero values at all). This will be done via an axiomatization of the notion of convergence of a well-ordered sum. COUNTABLE WELL-ORDERED SUMS In this part, we'll handle well-ordered sums where all but countably many of the terms are $0.$ For countable ordinals $\mu,$ the sum can be defined by transfinite induction, as follows: $\sum_{\alpha<0}x_{\alpha}=0,$ $\sum_{\alpha<\mu}x_{\alpha}=(\sum_{\alpha<\beta}x_{\alpha})+x_\beta,$ if $\mu=\beta+1,$ and, for $\mu$ a countable limit ordinal, $\sum_{\alpha<\mu}x_{\alpha}=z$ iff for every increasing function $f\colon\omega\to\mu$ which is cofinal in $\mu,$ $z=\lim_{n\to\infty}\sum_{\alpha<f(n)}x_{\alpha}.$ Here's how to extend this to uncountable ordinals $\mu\!:$ $\sum_{\alpha<\mu}x_{\alpha}=z$ iff the set $A=\lbrace \alpha < \mu \mid x_\alpha \ne 0\rbrace$ is countable and $z=\sum_{\alpha\in A}x_\alpha,$ where that last sum means $\sum_{\alpha < (\text{order type of }A)}x_{\text{the }\alpha^{\text{th}}\text{ member of }A}.$ By the way, just taking the supremum of all finite sums is the same thing for absolutely convergent series, but not for conditionally convergent series. UNCOUNTABLE WELL-ORDERED SUMS We'll see that there is no way to extend this usefully to any sequence with uncountably many non-zero terms. 
This goes beyond the easily observed fact that there's no way to assign a finite sum to any uncountable series of positive numbers. We'll eliminate the possibility even of conditionally convergent series with uncountably many non-zero terms; so we can't have any convergent series with uncountably many non-zero terms, even if some are negative or have a non-zero imaginary part. Let $S$ be either $\mathbb{R}$ or $\mathbb{C},$ and write ${}^\mu S$ for the set of all functions from $\mu$ to $S.$ We'll make only the following assumptions about what "convergence" means: For each ordinal $\mu,$ we have a subset $E_\mu$ of ${}^\mu S$ and a function $e_\mu \colon E_\mu \to S.$ We say that the sum $\sum_{\alpha\lt\mu}x_\alpha$ converges iff $(x_\alpha)_{\alpha\lt\mu}\in E_\mu.$ If $\sum_{\alpha\lt\mu}x_\alpha$ converges, then we'll call $e_\mu((x_\alpha)_{\alpha\lt\mu})$ the value that the sum converges to, and we'll write $\sum_{\alpha\lt\mu} x_\alpha$ to mean $e_\mu((x_\alpha)_{\alpha\lt\mu}).$ The sum of the empty sequence is $0.$ For every $\mu,$ $\sum_{\alpha<\mu+1}x_\alpha$ converges iff $\sum_{\alpha<\mu}x_\alpha$ converges, and $\sum_{\alpha<\mu+1}x_\alpha=(\sum_{\alpha<\mu}x_\alpha)+x_\mu.$ If $\mu$ is a limit ordinal, then $\sum_{\alpha\lt\mu}x_\alpha$ converges to a value $L$ iff for every $\varepsilon\gt 0,$ there exists an ordinal $\gamma\lt\mu$ such that for all $\beta,$ if $\gamma\lt\beta\lt\mu,$ then $\lvert L-\sum_{\alpha\lt\beta}x_\alpha \rvert \lt \varepsilon.$ If a sum $\sum_{\alpha\lt\mu} x_\alpha$ converges, then $\sum_{\alpha\lt\gamma} x_\alpha$ converges for every $\gamma\lt\mu.$ If a sum $\sum_{\alpha\lt\mu} x_\alpha$ converges and if $A=\lbrace \alpha < \mu \mid x_\alpha \ne 0\rbrace,$ then $\sum_{\alpha\in A} x_\alpha$ converges to the same value as $\sum_{\alpha\lt\mu} x_\alpha.$ Our definition of convergence for countable series satisfies the above properties, and in fact is the only definition of convergence for countable series satisfying those
properties. Suppose we have an uncountable sequence $\sum_{\alpha\lt\mu} x_\alpha$ whose sum converges (where "converges" has some meaning satisfying conditions 1-6 above). We'll show that all but countably many $x_\alpha$ must equal $0.$ Assume that, to the contrary, uncountably many $x_\alpha$ are non-zero. Let $y_\alpha$ be the $\alpha^{\text{th}}$ non-zero element in the sequence $\langle x_\alpha \mid \alpha \lt \mu \rangle.$ Then, by conditions 5 and 6, $\sum_{\alpha\lt\omega_1} y_\alpha$ converges; let its value be $L.$ By condition 4, for every positive rational $q,$ there exists $\gamma_q\lt\omega_1$ such that for all countable $\beta\gt\gamma_q,$ $\lvert L-\sum_{\alpha\lt\beta}y_\alpha \rvert \lt q.$ Since the set of rationals is countable, there is a countable ordinal $\gamma$ greater than all the $\gamma_q$ for $q$ rational and positive. But then any countable $\beta \ge \gamma$ satisfies $\sum_{\alpha\lt\beta}y_\alpha=L,$ since $\lvert L-\sum_{\alpha\lt\beta}y_\alpha \rvert$ is less than every positive rational number. Condition 3 then implies that $$y_\gamma=(\sum_{\alpha\lt\gamma+1}y_\alpha)-(\sum_{\alpha\lt\gamma}y_\alpha)=L-L=0,$$ contradicting the fact that all the $y_\alpha$ are non-zero.
In this paper, Peter Acquaah asserts that an important difference between odd perfect and even perfect numbers is that: (A) The greatest component of an odd perfect number $N$ is less than $\sqrt{N}$. (B) The greatest component of an even perfect number $M$ is greater than $\sqrt{M}$. ATTEMPT TO PROVE STATEMENT (B) Let $M = {2^{p-1}}(2^p - 1)$ be an even perfect number. Since $2^{p-1} < 2^p - 1$ and $2^p - 1$ is prime, the largest component of $M$ is the Mersenne prime $2^p - 1$. We want to show that $$2^p - 1 > \sqrt{M} = 2^{\frac{p-1}{2}}\sqrt{2^p - 1}.$$ Assume to the contrary that $$2^p - 1 \leq 2^{\frac{p-1}{2}}\sqrt{2^p - 1}.$$ This implies that $$\sqrt{2^p - 1} \leq 2^{\frac{p-1}{2}}$$ which means $$2^p - 1 \leq 2^{p-1}.$$ This contradicts $2^{p-1} < 2^p - 1$. Now here is my question: How do you prove Statement (A)? I only know that an odd perfect number (if it exists) must take the form $N = q^k n^2$ where $\gcd(q,n)=1$ and $q$ is prime (called the Euler prime) satisfying $q \equiv k \equiv 1 \pmod 4$. I also know that $$q^k < \frac{2}{3}n^2.$$ (See this paper for a proof.) Any hints will be appreciated.
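The inequality in the attempted proof of (B) is easy to sanity-check numerically; this small script is my own check, not from the paper, and runs over the first few Mersenne-prime exponents:

```python
# Check Statement (B): for M = 2^(p-1) (2^p - 1) with 2^p - 1 prime,
# the largest component 2^p - 1 exceeds sqrt(M).  Comparing squares
# avoids floating-point square roots entirely.

for p in (2, 3, 5, 7, 13, 17, 19, 31):   # exponents of known Mersenne primes
    m = 2 ** (p - 1) * (2 ** p - 1)       # the even perfect number
    largest = 2 ** p - 1                   # its largest component
    assert largest * largest > m           # equivalent to largest > sqrt(M)

print("Statement (B) holds for the sampled exponents")
```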
Equations of Planes in Three Dimensional Space We will now look at equations of planes in $\mathbb{R}^3$. There are three forms of planes that we will look at. Definition: An equation in the form $Ax + By + Cz + D = 0$ represents the Standard Form Equation of a plane in $\mathbb{R}^3$. For example, the equation $2x + 3y + z - 1 = 0$ represents a plane in $\mathbb{R}^3$. The point $(1, 1, -4)$ lies on this plane because $(1, 1, -4) \in S = \{ (x, y, z) \in \mathbb{R}^3 : 2x + 3y + z - 1 = 0 \}$ where $S$ is the set of points from $\mathbb{R}^3$ that are contained on this plane. The standard form equation is the simplest form for planes. We will now look at two other forms a plane can be in, but before we do, we will need the following definition. Definition: If $\Pi$ is a plane in $\mathbb{R}^3$ and a vector $\vec{n} = (a, b, c)$ is such that $\vec{n} \perp \Pi$, then $\vec{n}$ is said to be a Normal Vector of the Plane $\Pi$. We are now ready to look at the second form a plane can be in. Definition: Let $\Pi$ be a plane, let $P_0(x_0, y_0, z_0)$ be any point on $\Pi$, and let $\vec{n} = (a, b, c)$ be a normal vector to $\Pi$. Then the Point-Normal Form Equation of $\Pi$ is $\vec{n} \cdot \vec{P_0P} = 0$. We note that the point-normal form equation of the plane consists of all vectors $\vec{P_0P}$ that are perpendicular to the normal $\vec{n}$. Since the point $P_0(x_0, y_0, z_0)$ is on the plane to begin with, the vector $\vec{P_0P}$ lies on the plane $\Pi$ only if $P(x, y, z)$ is also on the plane (because if not, then $\vec{n} \cdot \vec{P_0P} \neq 0$). There is one more form of a plane that we will look at that is similar to the point-normal form. Definition: Let $\Pi$ be a plane and let $P_0(x_0, y_0, z_0)$ and $P(x, y, z)$ be points on the plane. Let $\vec{r_0}$ be the position vector with terminal point $P_0$ and let $\vec{r}$ be the position vector with terminal point $P$.
Then the vector $\vec{r} - \vec{r_0}$ is parallel to $\Pi$. Let $\vec{n} = (a, b, c)$ be a normal vector to $\Pi$. The Vector Form Equation of $\Pi$ is $\vec{n} \cdot (\vec{r} - \vec{r_0}) = 0$. We will now look at some examples regarding equations of planes in $\mathbb{R}^3$. Example 1 Determine the equation of the plane that passes through $(1, 1, 1)$ and has the normal vector $\vec{n} = (1, 2, 3)$. The most convenient form to write this plane in is point-normal form as $(1, 2, 3) \cdot (x - 1, y - 1, z - 1) = 0$. We can expand this equation to get the standard form of this plane as follows:(1) Example 2 Write the point-normal form equation of the plane $2x + 4y + 7z - 2 = 0$. To write $2x + 4y + 7z - 2 = 0$ in point-normal form we must have a point on the plane and a normal vector to this plane. We can immediately pick up the normal vector for this plane to be $\vec{n} = (2, 4, 7)$. Now we just need a point on the plane. The point $(1, 0, 0)$ is on this plane, and so $(2, 4, 7) \cdot (x - 1, y, z) = 0$ represents the point-normal form equation of $2x + 4y + 7z - 2 = 0$. To verify this, all we need to do is compute the dot product $(2, 4, 7) \cdot (x - 1, y, z)$ as follows:(2) Example 3 Determine an equation for the plane that passes through $P(6, 2, 3)$, $Q(4, 5, 6)$, and $R(1, 8, 9)$. We need to find a point on this plane and a normal to this plane. We've been given three points, so all we actually need is a normal vector. To construct a normal vector, consider the vectors $\vec{PQ} = (-2, 3, 3)$ and $\vec{PR} = (-5, 6, 6)$. These vectors both lie on the plane because their initial and terminal points lie on the plane. If we take the cross product of these vectors, we will obtain a vector that is perpendicular to both $\vec{PQ}$ and $\vec{PR}$ and hence a vector that is perpendicular to the plane.(3) Therefore $\vec{n} = (0, -3, 3)$ will do. So $(0, -3, 3) \cdot (x - 6, y - 2, z - 3) = 0$ represents the plane that passes through $P$, $Q$, and $R$.
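Example 3 can be checked computationally; the helper functions below are my own sketch, not part of the page:

```python
# Build the normal n = PQ x PR and confirm all three points satisfy
# the point-normal form equation n . (point - P) = 0.

def sub(a, b):
    """Componentwise difference a - b of two 3-tuples."""
    return tuple(x - y for x, y in zip(a, b))

def cross(u, v):
    """Cross product of two 3-tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    """Dot product."""
    return sum(x * y for x, y in zip(u, v))

P, Q, R = (6, 2, 3), (4, 5, 6), (1, 8, 9)
n = cross(sub(Q, P), sub(R, P))
print(n)                                   # -> (0, -3, 3), as in the example
for point in (P, Q, R):
    assert dot(n, sub(point, P)) == 0      # every given point lies on the plane
```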
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes, with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? Do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory was developed based on limits. In modern times, using some quite deep ideas from logic, a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value. I have a problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$is equal to $1$ as $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow) Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0 but if he first differentiates and then integrates it's not 0. Does anyone know?
I have a question about a proof in Rosenberg and Schochet's paper "the Künneth theorem and the Universal Coefficient Theorem for Kasparov's generalized K-functor", proposition 2.6. First of all, the setting: Def.: Let $N$ be the bootstrap class of $C^*$-algebras; it's the smallest full subcategory of the separable nuclear $C^*$-algebras which contains the separable Type I $C^*$-algebras and is closed under strong Morita equivalence, inductive limits, extensions, and crossed products by $\mathbb{R}$ and by $\mathbb{Z}$. And if $J$ is an ideal in $A$ and $J$ and $A$ are in $N$, then so is $A/J$. And if $A$ and $A/J$ are in $N$ then so is $J$. Theorem (2.1): Let $A\in N$ and let $B$ be a $\sigma$-unital $C^*$-algebra such that $K_*(B)$ is an injective $\mathbb{Z}$-module. Then the map $$\gamma(A,B):KK_*(A,B)\to Hom(K_*(A),K_*(B))$$ is an isomorphism. One step to prove this theorem is the following Proposition (2.6): If $K_*(B)$ is injective and if $\gamma(A,B)$ is an isomorphism, then $\gamma(A\rtimes_{\rho} \mathbb{R},B)$ is an isomorphism for any continuous action of $\mathbb{R}$ on $A$. Proof: The Thom isomorphism theorems of Connes yield natural isomorphisms $$Hom(K_i(A\rtimes_{\rho} \mathbb{R}),K_j(B))\cong Hom(K_{i+1}(A),K_j(B))$$ and $$KK_i(A\rtimes_{\rho} \mathbb{R},B)\cong KK_{i+1}(A,B).$$The proposition follows immediately. $\Box$ The argument above is the following: $$\require{AMScd}\begin{CD} KK_i(A\rtimes_{\rho} \mathbb{R},B) @>\gamma(A\rtimes_{\rho} \mathbb{R},B)>> Hom(K_i(A\rtimes_{\rho} \mathbb{R}),K_j(B)) \\ @VV \eta V @VV \sigma V \\ KK_{i+1}(A,B) @>\gamma(A,B)>> Hom(K_{i+1}(A),K_j(B)) \\ \end{CD}$$ is a commutative diagram, where $\eta , \sigma$ and $\gamma(A,B)$ are isomorphisms, therefore $\gamma(A\rtimes_{\rho} \mathbb{R},B)$ is an isomorphism. My question is: why is this diagram commutative? For this, I tried to figure out how to write down the maps explicitly.
The map $\gamma(A,B)$ comes from the Kasparov-product $$KK(\mathbb{C},A)\times KK(A,B)\to KK(\mathbb{C},B)\; (\epsilon_1 , \epsilon_2)\mapsto \epsilon_1 \otimes \epsilon_2 ,$$ because $KK_i(\mathbb{C},A)\cong K_i(A)$ and $KK_i(\mathbb{C},B)\cong K_i(B)$. Hence the Kasparov-product induces a homomorphism (which should be $\gamma(A,B)$, I think) $$KK(A,B)\to Hom(K(A), K(B)),$$ $$\epsilon_2 \mapsto (\epsilon_1 \mapsto \epsilon_1 \otimes \epsilon_2).$$ The map $\sigma$ is the contravariant $Hom(-,K_j(B))$-functor applied to the (Connes-Thom) isomorphism $K_{i+1}(A)\to K_i(A\rtimes_{\rho} \mathbb{R})$. But I don't know how to write down the isomorphisms $\sigma$ and $K_{i+1}(A)\to K_i(A\rtimes_{\rho} \mathbb{R})$ explicitly, so I'm stuck trying to prove that the diagram commutes. Can you help me prove that the diagram commutes, or do you know how to write down the maps $\sigma$ and $K_{i+1}(A)\to K_i(A\rtimes_{\rho} \mathbb{R})$ explicitly? Best
This is related to Dirac's theorem. For any finite, simple, undirected graph $G=(V,E)$ let $\delta(G)$ denote the minimal degree of all vertices. Are there positive integers $n,c\in\mathbb{N}$ with the following property? Whenever $G=(V,E)$ is connected and $\delta(G)\geq n$, there is a matching $M\subseteq E$ such that $$|V\setminus \bigcup M|\leq c.$$
How do I estimate the residual $\varepsilon_{t}$ of a seasonal ARIMA model $\hat{Y}_t=\hat{\phi}Y_{t-1}+\hat{\Phi}Y_{t-12}$? If the MSE is 0.114, what does it mean? You can calculate $\varepsilon_{t}$ as follows: $$\varepsilon_{t} = {Y}_t - \hat{Y}_t $$ The mean squared error can be calculated as $$MSE = \frac{1} {n} \sum\limits_{t=1}^n ({Y}_t - \hat{Y}_t)^2$$ $$OR$$ $$MSE = \frac{1} {n} \sum\limits_{t=1}^n (\varepsilon_{t} )^2$$ where $\hat{Y}_t$ is the predicted value from your model and ${Y}_t$ is the actual value. An MSE of $0$ would indicate that the model fits your data perfectly, which happens very rarely in practice. You have a value of 0.114, which is close to $0$. I would say your model fits your data very well. I would consult any basic statistics book if you want to learn more about MSE.
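The two formulas above translate directly into code; the data below is made up purely for illustration:

```python
# Residuals are actual minus predicted; MSE is the mean of their squares.

def residuals(actual, predicted):
    """epsilon_t = Y_t - Yhat_t for each t."""
    return [y - yhat for y, yhat in zip(actual, predicted)]

def mse(actual, predicted):
    """Mean squared error: average of the squared residuals."""
    e = residuals(actual, predicted)
    return sum(r * r for r in e) / len(e)

y = [10.2, 11.0, 9.8, 10.5]       # hypothetical observed series
y_hat = [10.0, 11.3, 9.9, 10.4]   # hypothetical fitted values
print(residuals(y, y_hat))
print(mse(y, y_hat))
```

In practice the fitted values $\hat{Y}_t$ would come from the estimated SARIMA model rather than being typed in by hand.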
Continuity of Functions of Several Variables Recall that a function of a single variable $y = f(x)$ is continuous at $c \in D(f)$ if $\lim_{x \to c} f(x) = f(c)$. We will now extend the concept of continuity of a function of a single variable to a function of several variables. Definition: A two variable real-valued function $z = f(x, y)$ is said to be Continuous at $(a, b) \in D(f)$ if $\lim_{(x,y) \to (a,b)} f(x,y) = f(a,b)$. If $f$ is not continuous at $(a,b)$, then we say that $f$ is Discontinuous at $(a, b)$. We say that $f$ is a Continuous Two Variable Function if $f$ is continuous for all $(a, b) \in D(f)$. Geometrically, a two variable function $z = f(x, y)$ is continuous if the graph of $f$ does not contain any holes, breaks, jumps, or asymptotes. In determining discontinuities of a two variable real-valued function, we need only consider points of the function that would normally cause discontinuities in single variable real-valued functions. For example, consider the following two variable function:(1) We note that both the numerator and denominator are continuous for all $x, y \in \mathbb{R}$ as they're both polynomial functions. However, notice that if $x = -1$ then $f$ is not defined. Therefore, the points $(-1, y) \in \mathbb{R}^2$ where $y \in \mathbb{R}$ are the only discontinuities of $f$, and so $f$ is continuous elsewhere. Of course, we can extend the concept of continuity to functions of three or more variables. Definition: A three variable real-valued function $w = f(x, y, z)$ is said to be Continuous at $(a, b, c) \in D(f)$ if $\lim_{(x,y,z) \to (a,b,c)} f(x,y,z) = f(a,b,c)$. An $n$ variable real-valued function $z = f(x_1, x_2, ..., x_n)$ is said to be Continuous at $(a_1, a_2, ..., a_n) \in D(f)$ if $\lim_{(x_1, x_2, ..., x_n) \to (a_1, a_2, ..., a_n)} f(x_1, x_2, ..., x_n) = f(a_1, a_2, ..., a_n)$.
For example, consider the following three variable real-valued function:(2) We see that both the numerator and denominator are continuous on their respective domains, so we will look at where $f$ is undefined. If $x + y < 0$, then $f$ is not defined, and so $f$ is discontinuous for $y < -x$. Let's look at some more examples. Example 1 Determine any points of discontinuity for the function $f(x, y) = x^2y + 2xy + y^2 - 4y + 3$. We note that $f$ represents a polynomial, and we know that polynomials are defined for all real numbers. Therefore, $f$ is discontinuous nowhere, i.e., $f$ is continuous on all of $\mathbb{R}^2$. To show this, let $(a, b) \in \mathbb{R}^2$. Then:(3) So $f$ is continuous for all $(a, b) \in \mathbb{R}^2$. Example 2 Determine any points of discontinuity for the function $f(x, y) = \tan x + \sin y$. We note that $\tan x$ is undefined if $x = (2k-1)\frac{\pi}{2}$ where $k \in \mathbb{Z}$, and so $f$ is discontinuous for $(x,y) = \left ([2k-1]\frac{\pi}{2}, y \right )$ where $k \in \mathbb{Z}$. The graph of $f(x,y) = \tan x + \sin y$ is given below. Example 3 Determine any points of discontinuity for the function $f(x,y) = \left\{\begin{matrix} x^2 + y^2& \mathrm{if} \: (x,y) \neq (1, 1) \\ 3 & \mathrm{if} \: (x, y) = (1, 1) \end{matrix}\right.$. Clearly $f$ is defined for all $(x, y) \in \mathbb{R}^2$; however, $f$ is not continuous at $(1, 1)$. Notice that $\lim_{(x, y) \to (1,1)} f(x,y) = 2 \neq 3 = f(1, 1)$. Example 4 Determine any points of discontinuity for the function $f(x, y, z) = \sqrt{e^x + \frac{2e^y}{\sin x} + \frac{3}{z}}$. Notice that if $\sin x = 0$ or $z = 0$ then $f$ is undefined. We note that $\sin x = 0$ if $x = k\pi$ where $k \in \mathbb{Z}$, and so $\{ (x, y, z) : x = k\pi \: \mathrm{or} \: z = 0 \}$ is the set of points of discontinuity of $f$.
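Example 3 can be illustrated numerically (the script is my own): approaching $(1, 1)$, the function values tend to the limit $2$ rather than to the defined value $f(1, 1) = 3$.

```python
# The piecewise function from Example 3: x^2 + y^2 everywhere except at
# the single redefined point (1, 1), where it is forced to equal 3.

def f(x, y):
    if (x, y) == (1, 1):
        return 3
    return x * x + y * y

for t in (0.1, 0.01, 0.001):
    print(f(1 + t, 1 + t))   # values approach 2 as t -> 0

print(f(1, 1))               # the defined value 3, which differs from the limit
```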
Proofs Regarding The Supremum or Infimum of a Bounded Set We will now look at some proofs regarding the supremum/infimum of a bounded set. Before we do though, let's first recall that for a bounded set $A$, to prove that $\sup A = u$ for some $u \in \mathbb{R}$ we must show that: 1) $u$ is an upper bound to the set $A$. That is, $\forall a \in A$, $a ≤ u$. 2) $u$ is the least upper bound to the set $A$. That is, if $b < u$ then $\exists a \in A$ such that $b < a$. Similarly, to show that $\inf A = w$ for some $w \in \mathbb{R}$ we must show that: 1) $w$ is a lower bound to the set $A$. That is, $\forall a \in A$, $w ≤ a$. 2) $w$ is the greatest lower bound to the set $A$. That is, if $w < b$ then $\exists a \in A$ such that $a < b$. We will now look at some proofs regarding the supremum and infimum of a bounded set. Example 1 Prove that for the set $A := [m, n] = \{ x \in \mathbb{R} : m ≤ x ≤ n \}$, that $\sup(A) = n$. We first show that $n$ is an upper bound to the set $A$. Since $A$ is defined such that $m ≤ x ≤ n$, clearly $x ≤ n$ for all $x \in A$. We must now proceed to show that $n$ is the least upper bound to the set $A$, that is, if $b < n$, then there exists an $a \in A$ such that $b < a$. There are two cases to consider. First, consider the case where $m ≤ b < n$. Then choose $a = \frac{b + n}{2}$ and so $b < a$ and $a \in A$. The diagram below illustrates this argument geometrically. Now consider the case where $b < m$. Then choose $a = \frac{m + n}{2}$ and so once again $b < a$ and $a \in A$. The diagram below illustrates this. Therefore $\sup A = n$. Example 2 Prove that for the set $A := (m, n) = \{ x \in \mathbb{R} : m < x < n \}$ that $\inf (A) = m$. This example is very similar to Example 1. We first show that $m$ is a lower bound to the set $A$. Since $A$ is defined such that $m < x < n$, clearly $m ≤ x$ for all $x \in A$.
We must now proceed to show that $m$ is the greatest lower bound to the set $A$, that is, if $m < b$ then there exists an $a \in A$ such that $a < b$. Once again there are two cases to consider. First consider the case where $m < b < n$. Then choose $a = \frac{m + b}{2}$ and so $a < b$ and $a \in A$. Now consider the case where $n ≤ b$. Then choose $a = \frac{m + n}{2}$ and once again $a < b$ and $a \in A$. Therefore $\inf A = m$. Example 3 Prove that every finite nonempty set has a supremum. We will do this proof by the principle of mathematical induction. Let $A$ be a finite set such that $A \neq \emptyset$. Since $A$ is finite, there exists a natural number $n$ such that $\mid A \mid = n$. Let $P(n)$ be the statement that $\mid A_n \mid = n$ and $A_n$ has a supremum. Our base step is to check $P(1)$. Clearly, any set $A_1$ containing only one element has a supremum, namely that single element, and so $P(1)$ is true. Now suppose for some $k \in \mathbb{N}$ that $P(k)$ is true, that is, $\mid A_k \mid = k$ and $A_k$ has a supremum. We want to show that the statement $P(k+1)$ is true, that is, $\mid A_{k+1} \mid = k + 1$ and $A_{k+1}$ has a supremum. Removing one element from $A_{k+1}$, call it $b$, produces a set $A_k$ with $k$ elements, which by the induction hypothesis has a supremum, say $\sup A_k = a_j$. Then $\sup A_{k+1} = \max \{ a_j, b \}$, and so $P(k+1)$ is true. By the principle of mathematical induction $P(n)$ is true for all $n \in \mathbb{N}$, that is, if $A$ is a finite nonempty set containing $n$ elements then $A$ has a supremum. Example 4 Let $S = \{ x \in \mathbb{R} : 2 < x < 3 \}$. Prove that $\sup S = 3$ and that $\inf S = 2$. We will first prove that $\sup S = 3$. To do so, we need to show that $3$ is an upper bound of the set $S$ and that $3$ is the least upper bound for $S$.
From the definition of $S$ we have that $\forall x \in S$, $x < 3$, and so $3$ is an upper bound of $S$, that is $\sup S ≤ 3$. Suppose that $u < 3$. To show that $3$ is the least upper bound, we must find an element $s \in S$ such that $u < s$. We need to consider two cases. The first case is when $u ≤ 2$. Then choose $s = 2.5$ and so $u < 2.5 \in S$. The second case is when $2 < u < 3$. In such a case choose $s = \frac{u + 3}{2}$ and so $u < s \in S$. Therefore $\sup S = 3$. We now want to prove that $\inf S = 2$. To do so, we need to show that $2$ is a lower bound of the set $S$ and that $2$ is the greatest lower bound for $S$. From the definition of $S$ we have that $\forall x \in S$, $2 < x$ and so $2$ is a lower bound of $S$, that is $2 ≤ \inf S$. Now suppose that $2 < w$. To show that $2$ is the greatest lower bound, we must find an element $s \in S$ such that $s < w$. We need to consider two cases. The first case is when $3 ≤ w$. Then choose $s = 2.5$ and so $S \ni 2.5 < w$. The second case is when $2 < w < 3$. In such a case choose $s = \frac{2 + w}{2}$ and so $S \ni s < w$. Therefore $\inf S = 2$.
The Lp(E) Normed Linear Space The Lp(E) Normed Linear Space Definition: Let $E$ be a Lebesgue measurable set and let $1 \leq p < \infty$. Then the $L^p(E)$ Space is the set $L^p(E) = \{ f \: \mathrm{measurable} : \int_E |f|^p < \infty \}$ with the norm $\| \cdot \|_p : L^p(E) \to [0, \infty)$ defined for each $f \in L^p(E)$ by $\displaystyle{\| f \|_p = \left ( \int_E |f|^p \right )^{1/p}}$. Observe that $L^p(E)$ is the set of measurable functions such that $|f|^p$ is Lebesgue integrable on $E$, and for every $f \in L^p(E)$, the $p$-norm of $f$ is simply the $p^{\mathrm{th}}$ root of the integral of $|f|^p$. Sometimes, a measurable function $f$ is said to be $p$-integrable if $|f|^p$ is integrable, and hence $L^p(E)$ is the set of $p$-integrable functions on $E$. Proposition 1: $(L^p(E), \| \cdot \|_p)$ is a normed space. Partial Proof: Again, since $L^p(E)$ is a subset of the set of measurable functions (which is a linear space), all we need to show is that $L^p(E)$ is closed under addition, closed under scalar multiplication, and contains the zero function $0$ to show that it is a linear space. (1) Let $f, g \in L^p(E)$. Then $f$ and $g$ are measurable and so $f + g$ is measurable. Hence: \begin{align} \: \: \: \int_E |f + g|^p \leq \int_E [|f| + |g|]^p \leq \int_E [2 \max \{ |f|, |g| \}]^p = \int_E 2^p \max \{ |f|, |g| \}^p = 2^p \int_E \max \{ |f|^p, |g|^p \} \leq 2^p \int_E (|f|^p + |g|^p) = 2^p \int_E |f|^p + 2^p \int_E |g|^p < \infty \end{align} Therefore $(f+g) \in L^p(E)$. (2) Let $\alpha \in \mathbb{R}$ and let $f \in L^p(E)$. Then $f$ is measurable and so therefore $\alpha f$ is measurable, and: \begin{align} \quad \int_E |\alpha f|^p = \int_E |\alpha|^p |f|^p = |\alpha|^p \int_E |f|^p < \infty \end{align} Therefore $\alpha f \in L^p(E)$. Lastly, $0 \in L^p(E)$ since $\int_E 0^p = \int_E 0 = 0 < \infty$. Therefore, $L^p(E)$ is a linear space. All that remains to show is that $\| \cdot \|_p$ is a norm on $L^p(E)$.
Showing that $\| f \|_p = 0$ if and only if $f = 0$ a.e. on $E$: Suppose that $\| f \|_p = 0$. Then $\left ( \int_E |f|^p \right )^{1/p} = 0$, and so $\int_E |f|^p = 0$. Since $|f|^p \geq 0$, this implies that $|f|^p = 0$ a.e. on $E$, which in turn implies that $f = 0$ a.e. on $E$. Conversely, suppose that $f = 0$ a.e. on $E$. Then $|f|^p = 0$ a.e. on $E$, so $\| f \|_p = \left ( \int_E |f|^p \right )^{1/p} = 0^{1/p} = 0$. Showing that $\| \alpha f \|_p = |\alpha| \| f \|_p$: Let $\alpha \in \mathbb{R}$ and let $f \in L^p(E)$. Then: \begin{align} \quad \| \alpha f \|_p = \left ( \int_E |\alpha f|^p \right )^{1/p} = \left ( \int_E |\alpha|^p |f|^p \right )^{1/p} = \left ( |\alpha|^p \int_E |f|^p \right )^{1/p} = |\alpha| \left ( \int_E |f|^p \right )^{1/p} = |\alpha| \| f \|_p \end{align} Showing that $\| f + g \|_p \leq \| f \|_p + \| g \|_p$: Unfortunately, it is at this final step of the proof that we run into a problem. The proof of the triangle inequality for $\| \cdot \|_p$ is known as Minkowski's Inequality and requires a fair amount of preparation, so we postpone the proof until later. $\blacksquare$
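The homogeneity property just shown can be illustrated numerically. The sketch below approximates the Lebesgue integral on $E = [0, 1]$ by an average over a uniform grid (a crude stand-in for illustration only; the grid size and test function are arbitrary choices):

```python
import math

# Approximate p-norm on E = [0, 1] via a uniform grid (illustration only).
def p_norm(f, p, n=100_000):
    return (sum(abs(f(i / n)) ** p for i in range(n)) / n) ** (1 / p)

f = lambda x: x ** 2 - 0.5
alpha, p = -3.0, 2.5

# Homogeneity: ||alpha f||_p = |alpha| ||f||_p (exact up to float rounding).
assert math.isclose(p_norm(lambda x: alpha * f(x), p), abs(alpha) * p_norm(f, p))
```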
We study the relations between the Adams operations on a lambda-ring and the power structure on it, introduced by S. Gusein-Zade, I. Luengo and A. Melle-Hernandez. We give explicit equations expressing each in terms of the other. An interpretation of the formula of E. Getzler for the equivariant Euler characteristics of configuration spaces is also given. This note is a proof of the fact that a lagrangian torus on an irreducible hyperkähler fourfold is always a fiber of an almost holomorphic lagrangian fibration. A three-parameter family of ODEs on a torus arises from a model of the Josephson effect in the resistive case, when a Josephson junction is biased by a sinusoidal microwave current. We study the asymptotics of the Arnold tongues of this family on the parametric plane (the third parameter is fixed) and prove that the boundaries of the tongues are asymptotically close to Bessel functions. This issue is dedicated to the 60th birthday of Borya Feigin. The classical Brauer-Siegel theorem states that if $k$ runs through the sequence of normal extensions of $\mathbb{Q}$ such that $n_k/\log|D_k|\to 0,$ then $\log h_k R_k/\log \sqrt{|D_k|}\to 1.$ First, in this paper we obtain the generalization of the Brauer-Siegel and Tsfasman-Vl\u{a}du\c{t} theorems to the case of almost normal number fields. Second, using the approach of Hajir and Maire, we construct several new examples concerning the Brauer-Siegel ratio in asymptotically good towers of number fields. These examples give smaller values of the Brauer-Siegel ratio than those given by Tsfasman and Vladut. Vassiliev (finite type) invariants of knots can be described in terms of weight systems. These are functions on chord diagrams satisfying so-called 4-term relations.
The goal of the present paper is to show that one can define both the first and the second Vassiliev moves for binary delta-matroids and introduce a 4-term relation for them in such a way that the mapping taking a chord diagram to its delta-matroid respects the corresponding 4-term relations. Understanding how the 4-term relation can be written out for arbitrary binary delta-matroids motivates introduction of the graded Hopf algebra of binary delta-matroids modulo the 4-term relations so that the mapping taking a chord diagram to its delta-matroid extends to a morphism of Hopf algebras. One can hope that studying this Hopf algebra will allow one to clarify the structure of the Hopf algebra of weight systems, in particular, to find reasonable new estimates for the dimensions of the spaces of weight systems of given degree. For a finite group $G$, the so-called $G$-Mackey functors form an abelian category $M(G)$ that has many applications in the study of $G$-equivariant stable homotopy. One would expect that the derived category $D(M(G))$ would be similarly important as the "homological" counterpart of the $G$-equivariant stable homotopy category. It turns out that this is not so -- $D(M(G))$ is pathological in many respects. We propose and study a replacement for $D(M(G))$, a certain triangulated category $DM(G)$ of "derived Mackey functors" that contains $M(G)$ but is different from $D(M(G))$. We show that standard features of the $G$-equivariant stable homotopy category such as the fixed points functors of two types have exact analogs for the category $DM(G)$. We consider the union of two pants decompositions of the same orientable 2-dimensional surface of any genus g. Each pants decomposition corresponds to a handlebody bounded by this surface, so two pants decompositions correspond to a Heegaard splitting of a 3-manifold. We introduce a groupoid acting on double pants decompositions. 
This groupoid is generated by two simple transformations (called flips and handle twists), each transformation changing only one curve of the double pants decomposition. We prove that the groupoid acts transitively on all double pants decompositions corresponding to Heegaard splittings of a 3-dimensional sphere. As a corollary, we prove that the mapping class group of the surface is contained in the groupoid. Double pants decompositions were introduced in [FN] together with a flip-twist groupoid acting on these decompositions. It was shown that the flip-twist groupoid acts transitively on a certain topological class of the decompositions; however, recently Randich discovered a serious mistake in the proof. In this note we present a new proof of the result, accessible without reading the initial paper. Exponential generating functions for the Dyck and Motzkin triangles are constructed for various assignments of multiplicities to the arrows of these triangles. The possibility of constructing such a function when the generating function for paths that end on the axis is a priori unknown is analyzed. Asymptotic estimates for the number of paths are obtained for large values of the path length. An asymptotic expansion for ergodic integrals and limit theorems are obtained for translation flows along stable foliations of pseudo-Anosov automorphisms. Global bifurcations in the generic one-parameter families that unfold a vector field with a separatrix loop on the two-sphere are described. The sequence of bifurcations that occurs is, in a sense, in one-to-one correspondence with finite sets on a circle having some additional structure on them. The families under study appear to be structurally stable. The main tool is the Leontovich-Mayer-Fedorov (LMF) graph, an analog of the separatrix skeleton, an invariant of the orbital topological classification of vector fields on the two-sphere. Its properties and applications are described.
The Alternating Series Test So far we have looked at the following tests to determine if a series is convergent or divergent: The Integral Test for Positive Series The p-Series Test The Comparison Test for Positive Series The Limit Comparison Test for Positive Series The Ratio Test for Positive Series The Root Test for Positive Series None of these tests, however, can determine whether a series with negative or mixed-sign terms is convergent or divergent. The following test will allow us to do so. Theorem (The Alternating Series Test): Let $\{ a_n \}$ be a sequence. If for $n$ sufficiently large, $a_na_{n+1} < 0$, $\mid a_{n+1} \mid \leq \mid a_n \mid$, and $\lim_{n \to \infty} a_n = 0$, then the series $\sum_{n=1}^{\infty} a_n$ is convergent. We note that the alternating series test has three requirements for $n$ sufficiently large. First, consecutive terms must alternate in sign. Second, the terms must be decreasing in absolute value. And lastly, the sequence of terms must approach $0$. Under these conditions we can conclude that the series $\sum_{n=1}^{\infty} a_n$ is convergent. Proof of Theorem: Let $a_1 > 0$. Since $a_na_{n+1} < 0$ we get that $a_{2n+1} > 0$ and $a_{2n} < 0$ $\forall n \in \mathbb{N}$. Now let $s_n = a_1 + a_2 + ... + a_n$ denote the $n^{\mathrm{th}}$ partial sum of the series. Since the terms are decreasing in absolute value, it follows that $a_{2n+1} \geq -a_{2n+2}$, and so $s_{2n+2} = s_{2n} + a_{2n+1} + a_{2n+2} \geq s_{2n}$. So the even partial sums $\{ s_{2n} \}$ form an increasing sequence.
Similarly, since the terms are decreasing in absolute value it follows that $-a_{2n} \geq a_{2n+1}$, and so $s_{2n+1} = s_{2n-1} + a_{2n} + a_{2n+1} \leq s_{2n-1}$, so the odd partial sums $\{ s_{2n-1} \}$ form a decreasing sequence. Moreover, $s_{2n} < s_{2n} + a_{2n+1} = s_{2n+1}$, and so: $s_2 \leq s_4 \leq s_6 \leq \cdots \leq s_5 \leq s_3 \leq s_1$. So $s_2$ is a lower bound for the sequence $\{ s_{2n-1} \}$ and $s_1$ is an upper bound for the sequence $\{ s_{2n} \}$, and by the monotonic sequence theorem both of these monotone sequences converge, say $\lim_{n \to \infty} s_{2n-1} = L_1$ and $\lim_{n \to \infty} s_{2n} = L_2$. Now since we were given that $\lim_{n \to \infty} a_n = 0$ and we know that $a_{2n} = s_{2n} - s_{2n-1}$, we have $0 = \lim_{n \to \infty} a_{2n} = \lim_{n \to \infty} s_{2n} - \lim_{n \to \infty} s_{2n-1} = L_2 - L_1$, which implies $L_1 = L_2$. So let $L = L_1 = L_2$. Since both the even-indexed and odd-indexed partial sums converge to $L$, it follows that $\lim_{n \to \infty} s_n = L$, and therefore $\sum_{n=1}^{\infty} a_n$ converges to $L$. $\blacksquare$ We note that a similar proof works if the first term of the series is negative, that is $a_1 < 0$. We will now look at some examples applying the alternating series test. Example 1 Using the alternating series test determine if $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$ is convergent or divergent. We must first check that all of the conditions for the alternating series test are met before applying it. We note that $a_na_{n+1} < 0$ since the terms alternate in sign. We need to check that $\mid a_{n+1} \mid \leq \mid a_n \mid$ for $n$ sufficiently large. We note that $\mid a_{n+1} \mid = \biggr \rvert \frac{(-1)^{n+1}}{n+1} \biggr \rvert = \frac{1}{n+1}$ and that $\mid a_n \mid = \biggr \rvert \frac{(-1)^{n}}{n} \biggr \rvert = \frac{1}{n}$, so $\mid a_{n+1} \mid = \frac{1}{n+1} \leq \frac{1}{n} = \mid a_n \mid$ and the terms are decreasing in absolute value. Lastly, we note that $\lim_{n \to \infty} \frac{(-1)^n}{n} = 0$. So by the alternating series test, $\sum_{n=1}^{\infty} \frac{(-1)^n}{n}$ is convergent.
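Example 1 can be illustrated numerically: for $a_n = \frac{(-1)^n}{n}$ the first term is negative, so (mirroring the proof with the signs flipped) the odd partial sums increase, the even partial sums decrease, and both converge to the sum $-\ln 2$. A short sketch, using the standard alternating series remainder bound $|s_N - L| \leq |a_{N+1}|$:

```python
import math

# Partial sums of sum_{n>=1} (-1)^n / n, which converges to -ln 2.
def partial_sums(N):
    s, out = 0.0, []
    for n in range(1, N + 1):
        s += (-1) ** n / n
        out.append(s)
    return out

s = partial_sums(10_000)
odds, evens = s[0::2], s[1::2]  # s_1, s_3, ...  and  s_2, s_4, ...

assert all(b > a for a, b in zip(odds, odds[1:]))    # odd partial sums increase
assert all(b < a for a, b in zip(evens, evens[1:]))  # even partial sums decrease
assert abs(s[-1] - (-math.log(2))) < 1 / 10_000      # |s_N - L| <= |a_{N+1}|
```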
I am interested in finding a formula for the inertia matrix of a rigid body about its center of mass. This particular rigid body is composed of other rigid bodies with known inertia matrices about their centers of mass. An example is several cubes and spheres welded together into some shape. Supposing that the mass and center of mass of each component rigid body are known, I find the following result: Theorem: Let $B$ be a rigid body made out of $N$ distinct rigid bodies $\{B_1, ..., B_N\}$. Let $\overrightarrow{OC_i}$ denote the center of mass in world frame $\{O,\vec{i},\vec{j}, \vec{k}\}$ of the rigid body $B_i$ and $m_i$ denote its mass. Let also $I_i$ denote the inertia matrix of rigid body $B_i$ about its center of mass $C_i$. Then the center of mass of the rigid body $B$ is $$ \overrightarrow{OC_B} = \frac{m_1 \overrightarrow{OC_1} + ... + m_N\overrightarrow{OC_N}}{m_1 + ... + m_N}$$ and its inertia matrix $I_B$ about its center of mass $C_B$ is $$ I_B = I_1 + ... + I_N + I_{points}$$ where $I_{points} = -\sum_{i=1}^N m_i \left[\overrightarrow{OC_i} - \overrightarrow{OC_B}\right]^2$ with $[v]$ denoting the skew-symmetric matrix constructed from the vector $v$. Proof: The formula for the center of mass is well known. For the inertia matrix, suppose each rigid body $B_i$ is composed of $N_i$ particles of mass $m_{ij}$ at position $\overrightarrow{Or_{ij}}$.
Then by definition $$ I_B = -\sum_{i=1}^N \sum_{j=1}^{N_i} m_{ij} \left[ \overrightarrow{Or_{ij}} - \overrightarrow{OC_B}\right]^2 = -\sum_{i=1}^N \sum_{j=1}^{N_i} m_{ij} \left[ \overrightarrow{Or_{ij}} - \overrightarrow{OC_i} + \overrightarrow{OC_i} - \overrightarrow{OC_B}\right]^2$$ hence $$ I_B = -\sum_{i=1}^N \left( \sum_{j=1}^{N_i} m_{ij} \left[ \overrightarrow{Or_{ij}} - \overrightarrow{OC_i}\right]^2 +\sum_{j=1}^{N_i} m_{ij} \left[ \overrightarrow{Or_{ij}} - \overrightarrow{OC_i}\right]\left[ \overrightarrow{OC_i} - \overrightarrow{OC_B}\right] + \sum_{j=1}^{N_i} m_{ij} \left[ \overrightarrow{OC_i} - \overrightarrow{OC_B}\right] \left[ \overrightarrow{Or_{ij}} - \overrightarrow{OC_i}\right] + \sum_{j=1}^{N_i} m_{ij} \left[ \overrightarrow{OC_i} - \overrightarrow{OC_B}\right]^2 \right) = \sum_{i=1}^N I_i + I_{points}$$ where the two cross-term sums vanish because $[\cdot]$ is linear in its argument and $\sum_{j=1}^{N_i} m_{ij} \left( \overrightarrow{Or_{ij}} - \overrightarrow{OC_i} \right) = \vec{0}$ by the definition of the center of mass $C_i$. Is this correct? Is this property known and used? Can someone give me a reference for this? I want to use it in a paper.
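As a quick numerical sanity check (not a proof, and not a substitute for a reference), one can compare the composition formula against the inertia matrix computed directly from the constituent point masses. The random test data and helper names below are illustrative only:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v], so that [v] @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def inertia_about(points, masses, c):
    """I = -sum_j m_j [r_j - c]^2 about the point c."""
    return -sum(m * skew(r - c) @ skew(r - c) for m, r in zip(masses, points))

rng = np.random.default_rng(0)
# Four "bodies", each a cloud of 5 point masses: (positions, masses).
bodies = [(rng.random((5, 3)), rng.random(5) + 0.1) for _ in range(4)]

ms = [w.sum() for _, w in bodies]                    # body masses m_i
cs = [(w @ r) / w.sum() for r, w in bodies]          # body centers C_i
c_B = sum(m * c for m, c in zip(ms, cs)) / sum(ms)   # composite center C_B

# Direct computation from all points vs. the theorem's composition formula.
I_direct = sum(inertia_about(r, w, c_B) for r, w in bodies)
I_formula = sum(inertia_about(r, w, c) for (r, w), c in zip(bodies, cs)) \
    - sum(m * skew(c - c_B) @ skew(c - c_B) for m, c in zip(ms, cs))

assert np.allclose(I_direct, I_formula)
```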
De Bruijn-Newman constant
Latest revision as of 17:37, 30 April 2019 For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula [math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math] where [math]\Phi[/math] is the super-exponentially decaying function [math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math] It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as [math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math] or [math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math] In the notation of [KKL2009], one has [math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math] De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]). The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients: Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-vanishing whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math]. Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math]. [math]t=0[/math] When [math]t=0[/math], one has [math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math] where [math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math] is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives [math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math] for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T. The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real. [math]t\gt0[/math] For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2]. It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]).
Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math]. Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis. Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have [math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math] for any [math]t[/math]. The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODEs [math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math] where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as [math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] where the dependence on [math]t[/math] has been omitted for brevity. In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic [math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math] as [math]T \to \infty[/math] (caution: the error term here is not uniform in t).
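Restricted to the real axis, the ODE system above reduces to [math]\partial_t x_j = \sum_{k \neq j} 2/(x_j - x_k)[/math], which makes the mutual repulsion of real zeroes easy to see. A toy forward-Euler integration (purely illustrative; the initial configuration and step size are arbitrary):

```python
# Toy forward-Euler integration (illustration only) of the zero dynamics
# restricted to the real axis: dx_j/dt = sum_{k != j} 2 / (x_j - x_k).
# Real zeroes repel one another, so the gaps between them grow with t.

def step(xs, dt):
    return [x + dt * sum(2.0 / (x - y) for y in xs if y != x) for x in xs]

xs = [-2.0, -0.5, 0.5, 2.0]          # a symmetric initial configuration
gap0 = xs[2] - xs[1]
for _ in range(1000):                # integrate to t = 1 with dt = 1e-3
    xs = step(xs, 1e-3)

assert xs[2] - xs[1] > gap0          # the central gap has widened
assert abs(xs[1] + xs[2]) < 1e-9     # symmetry about 0 is preserved
```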
Also, the zeroes behave like an arithmetic progression in the sense that [math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k|(t)} = (1+o(1)) \frac{4\pi}{\log k} [/math] as [math]k \to +\infty[/math].

Threads

* Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
* Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
* Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
* Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
* Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018.
* Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018.
* Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
* Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018.
* Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018.
* Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018.
* Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018.
* Polymath15, eleventh thread: Writing up the results, and exploring negative t, Terence Tao, Dec 28, 2018.
* Effective approximation of heat flow evolution of the Riemann xi function, and a new upper bound for the de Bruijn-Newman constant, Terence Tao, Apr 30, 2019.

Other blog posts and online discussion

* Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
* The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
* Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
* A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.

Code and data

* Writeup
* Here are the Polymath15 grant acknowledgments.

Test problem

See Polymath15 test problem.

Zero-free regions

See Zero-free regions.

Wikipedia and other references

Bibliography

* [A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995–1009.
* [B1994] W. G. C. Boyd, Gamma Function Asymptotics by an Extension of the Method of Steepest Descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994), pp. 609-630.
* [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
* [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
* [G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height.
* [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. Citeseer
* [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
* [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449-2467.
* [P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
* [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914
* [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986. pdf
Since then I introduced constrained CSMC, which is like vanilla CSMC but with some of the classes forbidden as part of the instance specification. Constrained CSMC was designed with the goal of simplifying the CSBM reduction, but it does slightly more. It actually allows me to define a reduction from average constrained CSBM to average constrained CSMC, and since CSBM is a special case of average constrained CSBM, this accomplishes my original goal plus a bonus. The average constrained CSBM definition is as follows. There is a distribution $D = D_x \times D_{\omega|x} \times D_{c|\omega,x}$, where $c: K \to \mathbf{R}$ takes values in the extended reals $\mathbf{R} = \mathbb{R} \cup \{ \infty \}$, and the components of $c$ which are $\infty$-valued for a particular instance are revealed as part of the problem instance via $\omega \in \mathcal{P} (K)$ (i.e., $\omega$ is a subset of $K$). Allowed outputs in response to a problem instance are subsets of $K$ of size $m$, denoted \[ S_m = \{ S | S \subseteq K, |S| = m \}.\] The regret of a particular classifier $h: X \times \mathcal{P} (K) \to S_m$ is given by \[ r_{csbm} (h) = E_{(x, \omega) \sim D_x \times D_{\omega|x}} \left[ E_{c \sim D_{c|\omega,x}} \left[ \sum_{k \in h (x, \omega)} c (k) \right] - \min_{h \in S_m}\, E_{c \sim D_{c|\omega,x}} \left[ \sum_{k \in h} c (k) \right] \right]. \] Note when $|K \setminus \omega| < m$, any strategy achieves zero regret (via $\infty$ cost); therefore the ``interesting'' parts of the problem space are when $|K \setminus \omega| \geq m$. The reduction takes an average constrained CSBM problem and converts it into a set of average constrained CSMC problems. The average constrained CSMC definition is as follows. 
There is a distribution $D = D_x \times D_{\omega|x} \times D_{c|\omega,x}$, where $c: K \to \mathbf{R}$ takes values in the extended reals $\mathbf{R} = \mathbb{R} \cup \{ \infty \}$, and the components of $c$ which are $\infty$-valued for a particular instance are revealed as part of the problem instance via $\omega \in \mathcal{P} (K)$ (i.e., $\omega$ is a subset of $K$). The regret of a particular classifier $h: X \times \mathcal{P} (K) \to K$ is given by \[ r_{av} (h) = E_{(x, \omega) \sim D_x \times D_{\omega|x}} \left[ E_{c \sim D_{c|\omega,x}} \left[ c (h (x, \omega)) \right] - \min_{k \in K}\, E_{c \sim D_{c|\omega,x}} \left[ c (k) \right] \right]. \] Average constrained CSMC can be attacked using variants of reductions designed for unconstrained CSMC, such as argmax regression or the filter tree. These reductions have the property that they always achieve finite regret, i.e., they choose a feasible class whenever possible. In this context, it means the subproblems will never create duplicates. The reduction works as follows: first the lowest cost choice is chosen, then its cost is adjusted to $\infty$, and the process is repeated until a set of size $m$ has been achieved. It is basically the same reduction as presented in a previous post, but reducing to average constrained CSMC instead of unconstrained CSMC leads to some advantages: costs need not be bounded above and feasibility is naturally enforced.

Algorithm: Set Select Train
Input: Class labels $K$, (maximum) size of set to select $m \leq |K| / 2$.
Input: Average constrained CSMC classifier $\mbox{Learn}$.
Data: Training data set $S$.
Result: Trained classifiers $\{\Psi_n | n \in [1, m] \}$.

Define $\gamma_0 (\cdot, \cdot) = \emptyset$.
For each $n$ from 1 to $m$:
  $S_n = \emptyset$.
  For each example $(x, \omega, c) \in S$ such that $|K \setminus \omega| \geq m$:
    Let $\gamma_{n-1} (x, \omega)$ be the predicted best set from the previous iteration.
    For each class $k$: if $k \in \gamma_{n-1} (x, \omega)$, $c (n, k) = \infty$; else $c (n, k) = c (k)$.
    $S_n \leftarrow S_n \cup \left\{\bigl( x, \omega \cup \gamma_{n-1} (x, \omega), c (n, \cdot) \bigr) \right\}$.
  Let $\Psi_n = \mbox{Learn} (S_n)$.
  Let $\gamma_n (x, \omega) = \gamma_{n-1} (x, \omega) \cup \Psi_n (x, \omega \cup \gamma_{n-1} (x, \omega))$.
Return $\{ \Psi_n | n \in [1, m] \}$.
Comment: If $m > |K|/2$, negate all finite costs and choose the complement of size $|K| - m$.

Algorithm: Set Select Test
Data: Class labels $K$, number of positions to populate $l \leq m \leq |K|/2$.
Data: Instance feature realization $(x, \omega)$.
Data: Trained classifiers $\{\Psi_n | n \in [1, m] \}$.
Result: Set-valued prediction $h^\Psi: X \times \mathcal{P} (K) \to S_l$.

$\gamma_0^\Psi (x, \omega) = \emptyset$.
For $n$ from 1 to $l$: $\gamma_n^\Psi (x, \omega) = \gamma_{n-1}^\Psi (x, \omega) \cup \Psi_n (x, \omega \cup \gamma_{n-1}^\Psi (x, \omega))$.
If $|\gamma_l^\Psi (x, \omega)| = l$, set $h^\Psi (x, \omega) = \gamma_l^\Psi (x, \omega)$; else set $h^\Psi (x, \omega)$ to an arbitrary element of $S_l$.
Comment: If $l \geq m > |K|/2$, negate all finite costs and choose the complement of size $|K| - l$.

The Set Select Train algorithm ignores training data where $|K \setminus \omega| < m$, but for such an input any strategy achieves infinite cost and zero regret, so learning is pointless. Similarly, the Set Select Test algorithm is not defined when $|K \setminus \omega| < l \leq m$, but for such an input any strategy achieves infinite cost and zero regret, so for the purposes of subsequent analysis I'll suppose that we pick an arbitrary element of $S_l$. My goal is to bound the average constrained CSBM regret \[ r_{csbm} (h) = E_{(x, \omega) \sim D_x \times D_{\omega|x}} \left[ E_{c \sim D_{c|\omega,x}} \left[ \sum_{k \in h (x, \omega)} c (k) \right] - \min_{h \in S_m}\, E_{c \sim D_{c|\omega,x}} \left[ \sum_{k \in h} c (k) \right] \right] \] in terms of the average constrained CSMC regret on the induced subproblems.
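The greedy mechanics of Set Select Train/Test can be sketched in a few lines. The snippet below uses a hypothetical oracle CSMC "classifier" that, given per-class expected costs, returns the cheapest feasible class (names like `oracle_policy` are illustrative, not from the post; a learned $\Psi_n$ would replace the oracle in practice):

```python
INF = float("inf")

def oracle_policy(costs):
    """A stand-in 'average constrained CSMC classifier': given expected
    per-class costs, return a policy that picks a finite-cost argmin
    among classes not in the constraint set omega."""
    def psi(omega):
        allowed = {k: c for k, c in costs.items() if k not in omega and c < INF}
        return min(allowed, key=allowed.get)
    return psi

def set_select(costs, omega, m):
    """Greedy reduction: repeatedly pick the best class, then forbid it
    (i.e., give it infinite cost) for the next round."""
    psi = oracle_policy(costs)
    chosen = set()
    for _ in range(m):
        chosen.add(psi(omega | chosen))
    return chosen

costs = {"a": 3.0, "b": 1.0, "c": 2.0, "d": 0.5, "e": INF}
# With "c" forbidden by omega, the two cheapest remaining classes are chosen.
assert set_select(costs, omega={"c"}, m=2) == {"b", "d"}
```

Because each round's chosen class is added to the constraint set, feasibility (no duplicates) is enforced by construction, which is the advantage of reducing to average constrained CSMC noted above.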
Once again I'll leverage a trick from the filter tree derivation and collapse the multiple subproblems into a single subproblem by defining an induced distribution. Let $D$ be the distribution of average constrained CSBM instances $(x, \omega, c)$. Define the induced distribution $D^\prime (\Psi, l)$, where $l \leq m$, of average constrained CSMC instances $(x^\prime, \omega^\prime, c^\prime)$ as follows:

Draw $(x, \omega, c)$ from $D$.
Draw $n$ uniform on $[1, l]$.
Let $x^\prime = (x, n)$.
Let $\omega^\prime = \omega \cup \gamma_{n-1} (x, \omega)$.
For each class $k$: if $k \in \gamma_{n-1} (x, \omega)$, $c^\prime (k) = \infty$; else $c^\prime (k) = c (k)$.
Create the cost-sensitive example $(x^\prime, \omega^\prime, c^\prime)$.

Theorem: Regret Bound
For all average constrained CSBM distributions $D$ and average constrained CSMC classifiers $\Psi$, \[ r_{csbm} (h^\Psi) \leq l\, q (\Psi, l), \] where $q (\Psi, l)$ denotes the average constrained CSMC regret of $\Psi$ on the induced distribution $D^\prime (\Psi, l)$.
Proof: See appendix.

Some of the remarks from the previous version of this reduction still apply, and others do not. The reduction still seems inefficient when comparing reduction to regression directly ($\sqrt{m} \sqrt{|K|} \sqrt{\epsilon_{L^2}}$) versus reduction to regression via CSMC ($m \sqrt{|K|} \sqrt{\epsilon_{L^2}}$). This suggests there is a way to reduce this problem which only leverages $\sqrt{m}$ CSMC subproblems. One possible source of inefficiency: the reduction retrieves the elements in order, whereas the objective function is indifferent to order. The regret bound indicates the following property: once I have trained to select sets of size $m$, I can get a regret bound for selecting sets of size $l$ for any $l \leq m$. This suggests a variant with $m = |K|$ could be used to reduce minimax constrained CSMC to average constrained CSMC. I'll explore that in a future blog post.

Appendix
This is the proof of the regret bound. If $\Psi$ achieves infinite regret on the induced subproblem, the bound holds trivially. Thus consider a $\Psi$ that achieves finite regret.
If $|K \setminus \omega| < l$, then $r_{csbm} = 0$ for any choice in $S_l$, and the bound holds trivially for such instances. Thus consider $|K \setminus \omega| \geq l$: since $\Psi$ achieves finite regret, no duplicates are generated by any sub-classifier, and $h^\Psi (x, \omega) = \gamma^\Psi_l (x, \omega)$. Consider a fixed $(x, \omega)$ with $|K \setminus \omega| \geq l$. It is convenient to talk about \[ r_{csbm} (h^\Psi | x, \omega, n) = E_{c \sim D_{c|\omega,x}} \left[ \sum_{k \in \gamma^\Psi_n (x, \omega)} c (k) \right] - \min_{h \in S_n}\, E_{c \sim D_{c|\omega,x}} \left[ \sum_{k \in h} c (k) \right], \] the conditional regret on this instance at the $n^\mathrm{th}$ step in Set Select Test. Let \[ h^* (x, \omega, n) = \underset{h \in S_n}{\operatorname{arg\,min\,}} E_{c \sim D_{c|\omega,x}} \left[ \sum_{k \in h} c (k) \right] \] be any minimizer of the second term (unique up to ties); note any $h^* (x, \omega, n)$ selects $n$ classes with the smallest conditional expected costs. The proof proceeds by demonstrating the property $r_{csbm} (h^\Psi | x, \omega, n) \leq \sum_{r=1}^n q_r (\Psi, l)$. The property holds with equality for $n = 1$.
For $n > 1$ note \[ \begin{aligned} r_{csbm} (h^\Psi | x, \omega, n) - r_{csbm} (h^\Psi | x, \omega, n - 1) &= E_{c \sim D_{c|\omega,x}} \left[ c (\Psi_n (x, \omega \cup \gamma^\Psi_{n-1} (x, \omega))) \right] \\ &\quad - \min_{k \in K \setminus h^* (x, \omega, n - 1)} E_{c \sim D_{c|\omega,x}} \left[ c (k) \right] \\ &\leq E_{c \sim D_{c|\omega,x}} \left[ c (\Psi_n (x, \omega \cup \gamma^\Psi_{n-1} (x, \omega))) \right] \\ &\quad - \min_{k \in K \setminus \gamma^\Psi_{n-1} (x, \omega)} E_{c \sim D_{c|\omega,x}} \left[ c (k) \right] \\ &\leq E_{c \sim D_{c|\omega,x}} \left[ \tilde c_n (\Psi_n (x, \omega \cup \gamma^\Psi_{n-1} (x, \omega))) \right] \\ &\quad - \min_{k \in K \setminus \gamma^\Psi_{n-1} (x, \omega)} E_{c \sim D_{c|\omega,x}} \left[ \tilde c_n (k) \right] \\ &= q_n (\Psi, l | x, \omega), \end{aligned} \] where the second inequality is due to the optimality of $h^* (x, \omega, n - 1)$ and the third inequality is because $\tilde c_n (k) \geq c (k)$ with equality if $k \not \in \gamma^\Psi_{n-1} (x, \omega)$. Summing the telescoping series establishes \[ r_{csbm} (h^\Psi | x, \omega) = r_{csbm} (h^\Psi | x, \omega, l) \leq \sum_{r=1}^l q_r (\Psi, l | x, \omega) = l\, q (\Psi, l | x, \omega). \] Taking the expectation with respect to $D_{x, \omega}$ completes the proof.
The Set of Real-Valued Continuous Functions on a Compact Metric Space X, C(X) We will soon look at a very important theorem known as The Arzelà–Ascoli Theorem, but we will first need to define an important type of metric space. We first define the set on which our metric will be placed. Definition: Let $(X, d)$ be a compact metric space. The Set of Real-Valued Continuous Functions on the Compact Metric Space $X$ is denoted $C(X) = \{ f : X \to \mathbb{R} : f \: \mathrm{is \: continuous} \}$. For example, consider the compact metric space $([0, 1], d)$ where $d$ is the usual Euclidean metric defined for all $x, y \in \mathbb{R}$ by $d(x, y) = \mid x - y \mid$. Then: $$C[0, 1] = \{ f : [0, 1] \to \mathbb{R} : f \: \mathrm{is \: continuous} \}.$$ For example, the functions $f_1(x) = x$, $f_2(x) = x^2$, $f_3(x) = \cos x$ all belong to $C[0, 1]$. However, the function $f_4(x) = \frac{1}{x - \frac{1}{2}}$ does not belong to $C[0, 1]$ since $f_4$ is discontinuous (indeed undefined) at $\frac{1}{2} \in [0, 1]$. We will now define an important metric on $C(X)$. Definition: Let $(X, d)$ be a compact metric space. Define $\rho : C(X) \times C(X) \to [0, \infty)$ for all $f, g \in C(X)$ by $\displaystyle{\rho (f, g) = \max_{x \in X} \{ \mid f(x) - g(x) \mid \}}$, so that $(C(X), \rho)$ is a metric space. Note that this maximum exists: since $f$ and $g$ are continuous, $\mid f - g \mid$ is a continuous real-valued function on the compact space $X$, and so it attains its maximum by the extreme value theorem. We will now verify that $\rho$ is indeed a metric. Let $f, g, h \in C(X)$. Suppose that $\rho(f, g) = 0$. Then $\max_{x \in X} \{ \mid f(x) - g(x) \mid \} = 0$. So $\mid f(x) - g(x) \mid = 0$ for all $x \in X$, which implies that $f(x) - g(x) = 0$ and $f(x) = g(x)$ for all $x \in X$. Conversely, $\rho (f, f) = \max_{x \in X} \{ \mid f(x) - f(x) \mid \} = 0$. Therefore $\rho(f, g) = 0$ if and only if $f(x) = g(x)$ for all $x \in X$. We now show that symmetry holds for $\rho$. Note that: $$\rho(f, g) = \max_{x \in X} \{ \mid f(x) - g(x) \mid \} = \max_{x \in X} \{ \mid g(x) - f(x) \mid \} = \rho(g, f).$$ We lastly show that the triangle inequality holds. For every $x \in X$ we have that $\mid f(x) - h(x) \mid \leq \mid f(x) - g(x) \mid + \mid g(x) - h(x) \mid \leq \rho(f, g) + \rho(g, h)$, and taking the maximum over $x \in X$ gives: $$\rho(f, h) \leq \rho(f, g) + \rho(g, h).$$ Therefore $\rho$ is indeed a metric and so $(C(X), \rho)$ is a metric space.
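As a quick numerical illustration of the uniform metric, one can approximate the maximum over a finite grid (a sketch only; a grid maximum approximates, and here happens to hit exactly, the true maximum):

```python
# Numerical sketch of the uniform metric rho(f, g) = max_{x in [0,1]} |f(x) - g(x)|,
# approximated by taking the maximum over a finite grid of points.

def rho(f, g, n=10001):
    xs = (i / (n - 1) for i in range(n))
    return max(abs(f(x) - g(x)) for x in xs)

f1 = lambda x: x
f2 = lambda x: x ** 2
# |x - x^2| is maximized on [0,1] at x = 1/2, where it equals 1/4
```

With `f1` and `f2` as above, `rho(f1, f2)` recovers the exact value $1/4$ because the grid contains $x = 1/2$; the symmetry and triangle-inequality properties verified above can also be checked numerically.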
@mickep I'm pretty sure that malicious actors knew about this long before I checked it. My own server gets scanned by about 200 different people for vulnerabilities every day and I'm not even running anything with a lot of traffic. @JosephWright @barbarabeeton @PauloCereda I thought we could create a golfing TeX extension, it would basically be a TeX format, just the first byte of the file would be an indicator of how to treat input and output or what to load by default. I thought of the name: Golf of TeX, shortened as GoT :-) @PauloCereda Well, it has to be clever. You for instance need quick access to defining new cs, something like (I know this won't work, but you get the idea) \catcode`\@=13\def@{\def@##1\bgroup} so that when you use @Hello #1} it expands to \def@#1{Hello #1} If you use the d'Alembert operator as well, you might find it pretty to use the symbol \bigtriangleup for your Laplace operator, in order to get a similar look to the \Box symbol that is being used for the d'Alembertian. In the following, a tricky construction with \mathop and \mathbin is used to get the... LaTeX exports. I am looking for a hint on this. I've tried everything I could find but no solution yet. I read equations from files generated by CAS programs. I can't edit these or modify them in any way. Some of these are too long; some are not. To make them fit in the page width, I tried \resizebox. The problem is that this will resize the small equations as well as the long ones to fit the page width, which is not what I want. I want to resize only the ones that are longer than the page width and keep the others as is. Is there a way in LaTeX to do this? Again, I do not know beforehand the size of…

\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\begin{document}
\begin{equation*}
\resizebox{\textwidth}{!}{$\begin{split}
y &= \sin^2 x + \cos^2 x\\
x &= 5
\end{split}$}
\end{equation*}
\end{document}

The above will resize the small equation, which I do not want.
But since I do not know beforehand how long the equation is, I apply the resize to every one. Is there a way in LaTeX, using some command, to find whether an equation "will fit" the page width, or how long it is? If so, I can add logic to resize only when needed. What I mean is, I want to resize DOWN only if needed, and not resize UP. Also, if you think I should ask this on the main board, I can. But I thought to check here first. @egreg what other options do I have? Sometimes the CAS generates an equation which does not fit the page. Then it overflows the page and one can't see the rest of it at all. Since in PDF one can zoom in a little, at least one can see it if needed. It is impossible to edit or modify these by hand, as this is all done using a program. @UlrikeFischer I do not generate unreadable equations. These are solutions of ODEs. The LaTeX is generated by Maple. Some of them are longer than the page width. That is all. So what is your suggestion? Let the long solutions flow out of the page? I can't edit these by hand. This is all generated by a program. I can add LaTeX code around them, that is all. But editing them is out of the question. I tried the breqn package, but that did not work. It broke many things as well. @egreg That was just an example, something I added by hand to make up a long equation for illustration. That was not a real solution to an ODE. Again, thanks for the effort, but I can't edit the generated LaTeX at all by hand. It would take me a year to do. And I run the program many times each day; each time, all the LaTeX files are overwritten again anyway. CAS providers do not generate good LaTeX either. That is why breqn did not work: many times they add {} around large expressions, which made breqn unable to break them. Also breqn has many other problems, so I no longer use it at all.
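One common approach to the "resize down only if needed" question is to typeset the equation into a box, measure it, and scale only when it exceeds \textwidth. The sketch below uses only standard LaTeX plus graphicx; the macro name \fitbox is made up here, and this only handles single-line equations (split/multline material needs different treatment):

```latex
\documentclass[12pt]{article}
\usepackage{graphicx}

% Sketch: typeset the math into a box, compare its width to \textwidth,
% and scale down only when it is too wide. \fitbox is an illustrative name.
\newsavebox{\eqnbox}
\newcommand{\fitbox}[1]{%
  \sbox{\eqnbox}{$\displaystyle #1$}%
  \ifdim\wd\eqnbox>\textwidth
    \resizebox{\textwidth}{!}{\usebox{\eqnbox}}%
  \else
    \usebox{\eqnbox}%
  \fi}

\begin{document}
% short equation: left at natural size
\[ \fitbox{y = \sin^2 x + \cos^2 x} \]
% long equation: scaled down to \textwidth
\[ \fitbox{y = (x+1)(x+2)(x+3)(x+4)(x+5)(x+6)(x+7)(x+8)(x+9)(x+10)(x+11)(x+12)} \]
\end{document}
```

Since the equations come from a CAS, wrapping each one in such a macro can be done by the surrounding script without editing the generated math itself.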
It is not that $h(S_1)=h(S_2)$ is true; in fact $h(S_1)=a$ and $h(S_2)=c$, as explained in the example on page 80, where $h$ is the minhash function associated with the permutation $\{abcde\}\mapsto \{beadc\}$ and $S_1$ resp. $S_2$ are given in figure 3.3. What is true is that the probability of having $h(S_1)=h(S_2)$, for generic $S_1$ and $S_2$ processed by a random permutation and minhash, is equal to $$\frac{x}{x+y},$$ which is also the Jaccard similarity of the two column sets. This is the meaning of the paragraph in the OP. To show this, the author imagines moving from the top of the column sets downward and uses the definitions of the $X$, $Y$ and $Z$ classes of rows for both $S_1$ and $S_2$. Let us discuss the statement "the probability of having $h(S_1)=h(S_2)$, for generic $S_1$ and $S_2$ processed by a random permutation and minhash, is equal to $\frac{x}{x+y}$". In order to compute $h(S_1)$ and $h(S_2)$ for any two column sets $S_1$ and $S_2$, we need to identify the position (= row number) of the first $1$, starting from the top of the column vectors, for both $S_1$ and $S_2$: this is the definition of the minhash function $h(\cdot)$. To simplify the discussion the author introduces three classes of rows in $S_1$ and $S_2$: $X$, $Y$ and $Z$. The classes containing 1's are $X$ (a 1 is present in the given row in both $S_1$ and $S_2$) and $Y$ (a 1 is present in the given row in exactly one of $S_1$ and $S_2$). Moving from top to bottom along both $S_1$ and $S_2$ we skip all rows of class $Z$: they contain no 1; we stop the search when we meet a row of class $X$ or class $Y$. We distinguish two cases: case 1: we meet a row of class $X$: as a 1 appears in both $S_1$ and $S_2$ in the given row, we can compute $h(S_1)$ and $h(S_2)$; they are equal, i.e. $h(S_1)=h(S_2)$, as the row under examination is the same. The question is: what is the probability of meeting an $X$ row before a $Y$ row while moving from top to bottom along $S_1$ and $S_2$?
This probability is $\frac{x}{x+y}$: you should convince yourself of it by working through some explicit examples. An idea would be to write down a simple case, like $S_1 = (1,0,1)$ and $S_2 = (0,1,1)$. For this pair, $x=1$ and $y=2$. In fact, there is a $1/3 \approx 33.33\%$ probability that the unique row of class $X$, i.e. $(1,1)$, appears before the two rows of class $Y$, $(0,1)$ and $(1,0)$. Remember that $S_1$ and $S_2$ are subsets whose representations as 0-1 vectors are given up to permutation of the rows. case 2: we meet a row of class $Y$: as a 1 appears in only one of $S_1$ and $S_2$ in the given row, this row determines $h(S_1)$ or $h(S_2)$ but not both; the two values cannot be equal, by construction of the class $Y$.
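The claim can also be checked by simulation. The sketch below estimates $P(h(S_1)=h(S_2))$ over random row permutations for the example columns above and compares it with the Jaccard similarity $x/(x+y) = 1/3$:

```python
import random

# Sketch: estimate P(h(S1) = h(S2)) for minhash over random row permutations.
# Columns are the example from the text, as 0-1 characteristic vectors:
# S1 = (1, 0, 1), S2 = (0, 1, 1); one class-X row and two class-Y rows.

def minhash(column, perm):
    # position (under the permutation) of the first row containing a 1
    return min(perm[i] for i, bit in enumerate(column) if bit)

def agreement_rate(s1, s2, trials=200_000, seed=0):
    rng = random.Random(seed)
    rows = list(range(len(s1)))
    hits = 0
    for _ in range(trials):
        perm = rows[:]
        rng.shuffle(perm)
        if minhash(s1, perm) == minhash(s2, perm):
            hits += 1
    return hits / trials

s1, s2 = (1, 0, 1), (0, 1, 1)
```

Running `agreement_rate(s1, s2)` gives a value close to $1/3$, matching the Jaccard similarity of the two sets.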
Traveling wave solutions of a generalized curvature flow equation in the plane 1. Department of Mathematics, Tongji University, Shanghai 200092, China We study the curvature flow equation $V = F(k, \mathbf{n}, x)$, where, for a simple plane curve $\Gamma$ and any $P \in \Gamma$, $k$ denotes the curvature of $\Gamma$ at $P$, $\mathbf{n}$ denotes the unit normal vector at $P$, and $V$ denotes the velocity in the direction $\mathbf{n}$; $F$ is a smooth function which is 1-periodic in $x$. For any given $\alpha \in (-\pi/2, \pi/2)$, we prove the existence and uniqueness of a planar-like traveling wave solution of $V = F(k, \mathbf{n}, x)$, that is, a curve $y = v^*(x) + c^* t$ traveling in the $y$-direction at speed $c^*$, where the graph of $v^*(x)$ lies in a bounded neighborhood of the line $y = x \tan\alpha$. Also, we show that the graph of $v^*(x)$ is periodic in the direction $(\cos\alpha, \sin\alpha)$. Mathematics Subject Classification: Primary: 35K55; Secondary: 35B27, 35B1. Citation: Bendong Lou. Traveling wave solutions of a generalized curvature flow equation in the plane. Conference Publications, 2007, 2007 (Special): 687-693. doi: 10.3934/proc.2007.2007.687
$Z_1, Z_2, \ldots, Z_{100}$ are independent identically distributed random variables with expected value $E(Z_i)=0$ and variance $Var(Z_i)=1$. Calculate approximately the probability of the event $\sum_{i=1}^{100}Z_{i} \in \left(-10,10\right)$. Hint: We have that $\Phi(1) = 0.8413$, where $\Phi$ is the cumulative distribution function of a standard normally distributed random variable. I don't know how to solve this well. As another hint it is given that $$P(|Z_i| \geq 2) \leq \frac{1}{4}.$$ I think from this I need to take an integral with limits $-10$ and $10$; then we have the probability of the event. Is this correct? But I need to get a function, and I have no idea what to do with the cumulative distribution function, because there is no function given, just a value of the function. I need a function to form the integral, but where is it?
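For reference, the normalization step the hint points at can be sketched numerically (the standard normal CDF is computed here from the error function; this is an illustration of the central limit theorem route, not the only way to answer):

```python
import math

# Sketch of the central-limit-theorem approximation: S = sum of 100 iid Z_i
# with mean 0 and variance 1 has standard deviation sqrt(100) = 10, so S/10
# is approximately standard normal and P(-10 < S < 10) ~ Phi(1) - Phi(-1).

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n = 100
sd_of_sum = math.sqrt(n)          # sqrt(n * Var(Z_i)) = 10
z = 10 / sd_of_sum                # the interval endpoint in standard units
approx = 2 * phi(z) - 1           # = Phi(1) - Phi(-1), roughly 0.6826
```

With the hinted value $\Phi(1) = 0.8413$, this gives $2 \cdot 0.8413 - 1 = 0.6826$.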
Stripes, spots, or a mix of both appear on the skin of many animals — from tigers to beetles to whale sharks. These patterns are typically unique to individual creatures, and biologists often use them for identification. While distinct patterns may seem random, they obey certain rules that suggest a common underlying description. Striping and spotting occur in many unrelated species, implying that both evolutionary advantages and simple biochemical mechanisms drive such patterns. Figure 1. Pillars of vibrating copper beads are much higher than the surrounding material and appear to stand independently of each other. Image courtesy of [4]. As Björn Sandstede of Brown University noted during his invited address at the 2018 SIAM Annual Meeting, held in Portland, Ore., this July, similar patterns appear in certain chemical reactions and granular material under vibration. Nonlinear reaction and diffusion processes describe both the biological and non-biological patterns, producing stable spatial patterns of concentration. Alan Turing—best known for his work in computer science and cryptography—first made the mathematical connection between nonlinear diffusion processes and animal stripes in the 1950s. Many researchers have applied the resulting model to demonstrate how various species get their spots and describe nonlinear waves in chemical reactions. Sandstede and his colleagues study the mathematical stability of these nonlinear waves and the means by which they might interact with one another. This work necessitates an understanding of the wave spectrum, which describes the nonlinear behavior of many systems quite well. Sandstede’s talk focused on stable spatial peaks and spiral waves, which have clearly-defined crests that propagate outward from a source. These systems display interesting behavior even when limited to one spatial dimension. How the Leopard Kept its Spots Figure 2. Some chemical reactions propagate in spiral waves that expand outward in space and time from a source.
Image courtesy of [2]. Laboratory research on embryos shows that spots and stripes on skin originate very early in development. The stripes of a danio fish arise within three weeks of fertilization, and a leopard’s spots develop while the embryo is still hairless. Even melanistic leopards (called black panthers) carry traces of spots despite the blackness of their coats. What processes yield these patterns to begin with? Stripes and spots have distinct boundaries and do not continuously shade into each other. This arrangement indicates the presence of chemical concentrations—which produce discrete wave peaks separated by low concentration regions—during development. The one-dimensional reaction-diffusion equation describes a wide variety of stable spatial structures: \[\frac{\partial u}{\partial t}=D\frac{\partial^2u}{\partial x^2}+f(u), \:\:\textrm{where} \:\: u \in \mathbb{R}^n.\] The vector-valued function \(u\) represents the relevant physical quantity: the concentration of chemicals or displacement of materials. The diffusion coefficient \(D\) and reaction function \(f\)—the sources of the system’s nonlinearity—control the dynamics specific to each system. In linear systems, interactions and perturbations obey the superposition principle: if \(a\) and \(b\) are both solutions to the equation, then \(a+b\) is as well. For example, two interfering linear waves create a new waveform, and traveling waves pass through one another. Nonlinear waves, however, can collide or produce other effects that are not simply additive combinations of the two original waves. Figure 3. Calcium waves in the oocytes (reproductive cells) of African clawed frogs. Image courtesy of [1]. It is the reaction-diffusion equation’s nonlinearity that generates stripes and spots in the first place. Nonetheless, the spots’ distinctness means that one can treat them as independent objects to a certain level of approximation.
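The reaction-diffusion equation can be explored with a minimal finite-difference sketch. The bistable reaction term $f(u) = u - u^3$ and all parameters below are illustrative choices, not taken from the talk; they suffice to show a small perturbation sharpening into distinct plateaus with a sharp boundary:

```python
# Minimal finite-difference sketch of the 1D reaction-diffusion equation
# du/dt = D d^2u/dx^2 + f(u) with the bistable reaction f(u) = u - u^3.
# Parameters and initial data are illustrative.

def step(u, D=0.1, dx=0.1, dt=0.001):
    n = len(u)
    new = u[:]
    for i in range(n):
        # second difference with no-flux (reflecting) boundaries
        left = u[i - 1] if i > 0 else u[1]
        right = u[i + 1] if i < n - 1 else u[n - 2]
        lap = (left - 2 * u[i] + right) / dx ** 2
        new[i] = u[i] + dt * (D * lap + u[i] - u[i] ** 3)
    return new

# a small perturbation around zero sharpens into plateaus near +1 and -1,
# separated by a distinct boundary, much like a stripe edge
u = [0.1 if i < 50 else -0.1 for i in range(100)]
for _ in range(4000):
    u = step(u)
```

The explicit scheme is stable here because $D\,\Delta t/\Delta x^2 = 0.01 \ll 1/2$; after the loop, the profile sits near $+1$ on the left half and $-1$ on the right.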
Another of Sandstede’s examples is particularly useful for visualization: pillars of copper beads produced by vibrations (see Figure 1). These pillars are much higher than the surrounding material and appear to stand independently of each other. Sandstede simplified the system by beginning with known steady-state (time-independent) concentrations \(q\) and finding solutions to the reaction-diffusion equation of the form \[u(x,t)=q(x)+e^{\lambda t}v_0(x),\] where \(|v_0|\) is a small perturbation. This transforms the reaction-diffusion equation into an ordinary differential equation in \(x\), with the eigenvalue \(\lambda\) characterizing the system’s spectrum. These eigenvalues come in two classes: zero (or very close to zero), or complex with negative real part. The spectrum describes decaying and oscillatory perturbations, signifying that the steady-state solutions are largely stable under perturbation. Once the embryonic leopard has its spots, the spots stay. Figure 4. Wave peaks travel outward from the source at varying rates in one-dimensional spiral waves. Image courtesy of [3]. Similarly, interactions between neighbors along the line are treated perturbatively. Overlap occurs at the tails of the mathematical peaks. As with the stability analysis, the spectrum of the operator governing the perturbations completely describes the interactions. As a result, the spots can exchange concentrations in an oscillatory fashion and even attract each other. However, the interaction’s strength decays exponentially with distance, meaning that steady-state spots interact less when they are further apart. Of Frogs and Spiral Waves Though the copper pillars and leopard spots do not move in time, the math that describes them can also describe some nonlinear waves. For instance, certain chemical reactions propagate in spiral waves (see Figure 2), expanding outward in space and time from a source.
Each wave peak resembles a closely-packed, moving version of the concentrations in the steady-state example. One particularly striking example is calcium transport in the oocytes (reproductive cells) of African clawed frogs. Fertilization releases a wave of calcium into the cell, which forms a clear spiral pattern within the surrounding material (see Figure 3). Nonlinear waves differ from their linear versions in important ways. A nonlinear wave clearly does not add linearly, but the wave’s frequency also varies nonlinearly with the wavenumber (which is inversely proportional to the wavelength). This means that the velocity of the peaks varies; in contrast, a linear wave’s velocity is fixed. Sandstede and his collaborators studied one-dimensional spiral waves, which are basically cross-sections of the spiral. In Figure 4, the wave peaks travel outward from the source at varying rates. Sandstede treats the waves one-dimensionally using the same mathematical tools as in the steady-state spot model. The team found that perturbing a spiral wave changes its pattern, thus shifting the source and affecting the wave’s peak velocity. Sandstede described the disturbance’s propagation as a “shock” that travels along the spiral wave at the speed of the wave itself (see Figure 5). Unlike with the steady-state perturbations, a small disturbance at the wave’s source can therefore affect the entire wave train. Figure 5. Perturbing a spiral wave changes its pattern, shifts the source, and affects the wave’s peak velocity. One can describe the disturbance’s propagation as a “shock” that travels along the spiral wave at the speed of the wave itself. Image credit: Björn Sandstede. For spiral waves, as with spotting and striping, Sandstede and his colleagues found that the spectrum of the operator defining the system generally described the system’s nonlinear dynamics — at least in one dimension.
Real-world spots, stripes, and spiral waves are at minimum two-dimensional phenomena on surfaces, and the second spatial dimension complicates matters. Nevertheless, researchers continue to study reaction-diffusion processes in higher dimensions, so the one-dimensional case’s tractability is cause for hope. After all, we know that real-world stripes and spots are stable. The spectral description of these two-dimensional phenomena may follow as well. Sandstede’s presentation is available from SIAM either as slides with synchronized audio or a PDF of slides only. References [1] Lechleiter, J., & Clapham, D. (1992). Molecular Mechanisms of Intracellular Calcium Excitability in X. laevis Oocytes. Cell, 69, 283-294. [2] Sandstede, B., & Scheel, A. (2001). Superspiral Structures of Meandering and Drifting Spiral Waves. Phys. Rev. Lett., 86(1), 171-174. [3] Sandstede, B., & Scheel, A. (2007). Period-Doubling of Spiral Waves and Defects. SIAM J. Appl. Dynam. Syst., 6(2), 494-547. [4] Umbanhowar, P.B., Melo, F., & Swinney, H.L. (1996). Localized excitations in a vertically vibrated granular layer. Nature, 382, 793-796.
Astrophysics > Astrophysics of Galaxies Title: Dynamical Histories of the Crater II and Hercules Dwarf Galaxies (Submitted on 3 Jan 2019) Abstract: We investigate the possibility that the dwarf galaxies Crater II and Hercules have previously been tidally stripped by the Milky Way. We present Magellan/IMACS spectra of candidate member stars in both objects. We identify 37 members of Crater II, 25 of which have velocity measurements in the literature, and we classify 3 stars within that subset as possible binaries. We find that including or removing these binary candidates does not change the derived velocity dispersion of Crater II. Excluding the binary candidates, we measure a velocity dispersion of $\sigma_{V_{los}} = 2.7^{+0.5}_{-0.4}$ km s$^{-1}$, corresponding to $M/L = 47^{+17}_{-13}$ M$_{\odot}/$L$_{\odot}$. We measure a mean metallicity of [Fe/H] = $-1.95^{+0.06}_{-0.05}$, with a dispersion of $\sigma_{\mbox{[Fe/H]}} = 0.18^{+0.06}_{-0.08}$. Our velocity dispersion and metallicity measurements agree with previous measurements for Crater II, and confirm that the galaxy resides in a kinematically cold dark matter halo. We also search for spectroscopic members stripped from Hercules in the possible extratidal stellar overdensities surrounding the dwarf. For both galaxies, we calculate proper motions using \textit{Gaia} DR2 astrometry, and use their full 6D phase space information to evaluate the probability that their orbits approach sufficiently close to the Milky Way to experience tidal stripping. Given the available kinematic data, we find a probability of $\sim$40% that Hercules has suffered tidal stripping. The proper motion of Crater II makes it almost certain to be stripped. Submission history From: Sal Fu [view email] [v1] Thu, 3 Jan 2019 02:58:02 GMT (4325kb,D)
The symbol $\sigma_{\widehat p}$ is used to signify the standard deviation of the distribution of sample proportions, also called the standard error (SE) of the proportion. Standard deviation refers to the variability of the original 0-1 observations; the standard error refers to the variability of the sample proportion across repeated samples of size $n$.

A proportion is the mean of a 0-1 variable: code each success as 1 and each failure as 0 (coin tosses, or yes/no responses), and the sample proportion $\widehat p$ is simply the sample mean $\bar X$. That makes the math a lot simpler, and the same formulas work. The standard error of the sample proportion is $$SE(\widehat p) = \sqrt{\frac{p(1-p)}{n}},$$ where $p$ is the probability of success and $n$ is the number of observations in the sample. Since the population proportion $p$ is unknown, we cannot calculate this SE exactly; the conventional estimator plugs in the sample proportion $\widehat p$. The resulting quantity is properly called the estimated standard error of the sample proportion, but in practice the word "estimated" is dropped. If you have few samples, the value given by this expression might have large error.

For weighted data with weights $\omega_i$ normalized so they sum to unity, $$SE(\bar X) = \sqrt{\bar X (1 - \bar X) \sum_{i=1}^n \omega_i^2}.$$ For unweighted data, $\omega_i = 1/n$, giving $\sum_{i=1}^n \omega_i^2 = 1/n$, which recovers the formula above.

Note the implications of the formula: the sample size $n$ appears (i) in the denominator, and (ii) inside a square root. Multiplying the sample size by a factor of 9 (say from 40 to 360) therefore makes the SE decrease by a factor of 3. Every sample mean will be a little different, but they will all be dancing around the true proportion in the population, with this SE as the "average" deviation.

To build a confidence interval for a population proportion, the normal approximation requires at least 10 successes and 10 failures in the sample. The critical value for a 95% confidence level is $Z_{.95} = 1.96$, computed with the normal calculator; for a 99% confidence level it is 2.58. The margin of error is the critical value times the standard error. Keep this in mind when you hear poll reports: the margin of error for the difference between two candidates' percentages is twice the margin of error for an individual percentage.

Example: of 40 students sampled from the current graduating class, 6 are planning to go to graduate school. The proportion of all students who plan to go to graduate school is estimated as $\widehat p = 6/40 = 0.15$, with estimated standard error $\sqrt{0.15 \times 0.85 / 40} \approx 0.056$. Although this point estimate of the proportion is informative, it is important to also compute a confidence interval. (If a proportion is very small, a larger sample would probably be needed to produce at least 10 successes and 10 failures.)
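The computation is easy to carry out directly; the sketch below reproduces the graduate-school numbers (6 successes out of 40) with a 95% normal-approximation interval:

```python
import math

# Sketch: standard error of a sample proportion and a normal-approximation
# confidence interval. The 1.96 critical value corresponds to a 95% level.

def proportion_se(successes, n):
    p_hat = successes / n
    return p_hat, math.sqrt(p_hat * (1 - p_hat) / n)

def proportion_ci(successes, n, z=1.96):
    p_hat, se = proportion_se(successes, n)
    return p_hat - z * se, p_hat + z * se

p_hat, se = proportion_se(6, 40)   # p_hat = 0.15, se roughly 0.056
lo, hi = proportion_ci(6, 40)
```

Note the normal approximation is shaky here, since the sample has only 6 successes; with fewer than 10 successes an exact or adjusted interval would be preferable.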
(Note: this post focuses on a single simple example; however, I'm asking about the error in my logic in general.) Consider the infinite potential well "particle in a box" system described by $$V(x)=\begin{cases}0&\text{if }0<x<L\\\infty&\text{otherwise}\end{cases}.$$ It's fairly easy to find the wavefunctions $\psi_n(x)=\langle x\vert E_n\rangle$ by solving the time independent Schroedinger equation: $$\psi_n(x)=\sqrt\frac{2}{L}\sin\left(\frac{n\pi}{L}x\right)$$ Now, since $\mathcal{\hat H}$ is Hermitian we know there is a complete set of eigenstates $\vert E_n\rangle$ such that, for any initial state $\vert\psi,0\rangle$, we can write $$\vert\psi,0\rangle = \sum_k a_k\vert E_k\rangle$$ The problem of evolving the state $\vert\psi,0\rangle$ in time is easily reduced to $$\vert\psi,t\rangle = \sum_k a_k e^{-iE_k t/\hbar}\vert E_k\rangle$$ But the wavefunction of this state is given by $$\Psi(x,t) =\sum_k a_k e^{-iE_k t/\hbar}\psi_k(x) = \sum_k a_k\sqrt{\frac 2 L}e^{-iE_k t/\hbar}\sin\left(\frac{k\pi}{L}x\right)$$ and taking $\vert\cdot\vert^2$ to obtain the probability distribution yields a time-independent function. Hence the time evolution of the probability distribution of this system is apparently trivial for any initial state, but I have heard from multiple sources and a demonstration applet that even for a superposition of two stationary states the particle oscillates throughout the box. What have I done wrong here?
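For reference, a direct numerical check of an equal two-state superposition (in illustrative units with $\hbar = m = 1$ and $L = 1$, so $E_n = n^2\pi^2/2$; this is an addendum to the question, not part of the original) confirms the applet's behavior, i.e. the density at a fixed point does depend on time:

```python
import cmath, math

# Numerical check of a two-state superposition in the infinite well,
# in units hbar = m = 1 with box length L = 1, so E_n = n^2 pi^2 / 2.
# The probability density at a fixed point is NOT constant in time,
# because of the cross term between the two stationary states.

L = 1.0

def E(n):
    return (n * math.pi / L) ** 2 / 2

def psi(x, t):
    # equal superposition of the n = 1 and n = 2 stationary states
    phi = lambda n: math.sqrt(2 / L) * math.sin(n * math.pi * x / L)
    a = 1 / math.sqrt(2)
    return a * phi(1) * cmath.exp(-1j * E(1) * t) + a * phi(2) * cmath.exp(-1j * E(2) * t)

def density(x, t):
    return abs(psi(x, t)) ** 2

# half a beat period: the relative phase (E_2 - E_1) t advances by pi,
# flipping the sign of the interference term
t_half = math.pi / (E(2) - E(1))
```

Comparing `density(0.25, 0)` with `density(0.25, t_half)` shows a large difference, so the claimed time-independence must fail somewhere in the $\vert\cdot\vert^2$ step.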
I'd like to quote, with kind permission of the original author, the text of an article about LaTeX and MathML from access2science. The aim of the website is to provide "articles and links on accessibility of science, technology, engineering, and math (STEM). Its purpose is to provide practical information to people with print disabilities and to their friends, parents, and teachers." The representation of mathematics in a consistent and standardised way is very important to the community that this website serves. The article originated as a post to the blindmath mailing list in response to a similar query about the roles of LaTeX and MathML. There seems to be some confusion regarding LaTeX and MathML here. I'd like to help straighten that out, if I may. The confusion is regarding their roles. LaTeX is an input format. It is how we mathematicians write our articles, books, webpages, and anything else where mathematics is involved. (And often anything where mathematics isn't involved.) It is not designed to be read as-is. It is intended to be processed into a suitable output format and then read. If anyone thinks that they can read LaTeX and understand what is going on, then I have a few documents I can post samples from which will soon disabuse you of that notion. Of course, very simple LaTeX can be read. Something like x^2 + y^2 = z^2 is fairly easy to understand, but try something more complicated like \sum_{\substack{m = 2 \\ m\ \text{prime}}}^{\infty} \frac{1}{m^s} and you'll see what I mean. And that's fairly simple compared to what can be written. When you realize that LaTeX (or rather, TeX) is completely programmable, then you'll see that you can find absolutely anything in a LaTeX document. MathML is an output format. It is not designed to be written directly, but it is designed to be read.
Of course, one needs a suitable renderer: a browser for the sighted and something like MathPlayer for those who want their mathematics read aloud, but then the same is true of any output format. As it is an open standard, it is a reasonable task to design a program to render MathML into any desired medium. It is possible, though not always straightforward, to convert LaTeX to MathML. One reason why it is not always straightforward is that TeX (the program underlying LaTeX) often needs to know things about its output. When run normally, TeX has complete control over the process and so can know exactly how the output will be seen. When producing MathML (or XHTML), it can't know exactly how the output will be seen. But those are technical difficulties that can usually be avoided. The main difficulty is that most websites don't bother with this route. They convert the LaTeX mathematics to a graphic which is then displayed, with the original LaTeX as the alt text. Because of how it is produced, the LaTeX is usually very simple (no complicated macros), and so it may be possible to get by with reading the alt text. So if you want to read mathematics, look for MathML. If you want to write mathematics, learn LaTeX (or another TeX variant). Now, to (some of) your questions. Is MathML obviously going to replace TeX in the near future? No, because they fulfil different roles. I use LaTeX to produce MathML documents. I couldn't do without either LaTeX or MathML in my workflow. Is TeX in theory any more powerful than MathML? I mean more complete in terms of underlying markup capabilities, of the class of things it can represent? Yes. TeX is a programming language. One of its strengths is the extent to which it can be customised and extended. MathML is a markup language. It is thus rather restricted when it comes to extending and customising it.
Meta: I understand that this question might give offense only because whenever there are two different technologies that have the same aim, they are sometimes thought of as "rivals" or "competitors" and have their respective "camps". Please know I am a total newbie in this area and in no way am I making a judgement about the worthiness of either technology. I am really just trying to understand how practitioners understand their own world. The point of the article I quoted is that these technologies are in no way rivals. If you wanted to invent a rivalry here, it would be better to play off PDF and MathML, or TeX and ... well, there isn't really anything like TeX. To answer your real question of which format you should use: the answer depends on what you are going to do with it. What is the eventual output of the editor? Are you going to run the output through TeX to produce nice documents, or will it end up on a webpage? Is the user ever going to see the stored document? Once a document has been written, how much flexibility are you going to allow the user to have? My instinct would be that if you don't know, you should pick MathML. My reasoning is that, as a markup format and a web standard, it will be easier to work with in the program and easier to ensure that you know exactly what the editor will produce for the given input.
We "know" heavier objects fall faster when dropped from a certain height. I was wondering: if I am going downhill on my mountain bike without any pedalling, will I travel faster or slower because I am fat? Heavier objects do not fall faster per se. But for heavy objects the influence of air resistance will be smaller, if they have a similar surface area compared to light objects. The answer depends on the properties of your tyres and the road. But on an even road the air resistance will typically dominate once you reach a certain speed (the friction of the wheels $F_W$ will be more or less independent of speed, but not of weight, as a heavier person deforms the tyres more, generating more friction; since it is not the dominant part we will ignore it for now). The air resistance in turbulent flow is given by $F_R = \frac 1 2 \rho c_D A v^2$, where $\rho$ is the density of the fluid, $c_D$ is the dimensionless drag coefficient depending on the form, $A$ is the area of the object perpendicular to the flow and $v$ the velocity relative to the fluid. Your mass scales like $L^3$, so your area scales like $L^2 = m^{2/3}$ assuming isotropic growth (the drag coefficient $c_D$ will be roughly independent of your weight but highly dependent on your position and clothing, which also influence your surface area). The drag force therefore scales like $m^{2/3}$, and the deceleration it causes scales like $m^{-1/3}$. Ignoring the rolling friction as discussed, your acceleration will be given by: $$ a = g \sin(\theta) - \frac 1 2 c_D \rho v^2 \frac{A}{m} = \text{const} - O(m^{-1/3}). $$ This means you are at an advantage if you are heavier (or rather: larger and therefore heavier), as the influence of the drag scales like $m^{-1/3}$. If your weight is not distributed equally in all directions you gain even more.
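A rough numerical illustration of this scaling (my own sketch; the slope, drag coefficient, and reference frontal area are made-up but plausible values, with $A \propto m^{2/3}$ as above): setting $a = 0$ gives the coasting terminal speed $v_t = \sqrt{2mg\sin\theta/(\rho c_D A)}$, which grows slowly with $m$.

```python
# Terminal coasting speed on a slope: g*sin(theta) = (1/2) c_D rho v^2 A / m.
# Frontal area is scaled as A ~ m^(2/3), per the isotropic-growth argument.
import math

RHO = 1.2                 # air density (kg/m^3)
CD = 1.0                  # drag coefficient, upright rider (assumed)
THETA = math.radians(5)   # slope angle (assumed)
G = 9.81                  # gravitational acceleration (m/s^2)

def terminal_speed(m, a_ref=0.5, m_ref=70.0):
    """Terminal speed (m/s) for rider+bike mass m (kg); a_ref is an
    assumed frontal area (m^2) at the reference mass m_ref (kg)."""
    area = a_ref * (m / m_ref) ** (2.0 / 3.0)
    return math.sqrt(2.0 * m * G * math.sin(THETA) / (RHO * CD * area))

light, heavy = terminal_speed(60.0), terminal_speed(100.0)
print(round(light, 1), round(heavy, 1))   # the heavier rider coasts faster
```

Since $A \propto m^{2/3}$, we get $v_t \propto m^{1/6}$: a 100 kg rider gains only about $(100/60)^{1/6} \approx 9\%$ over a 60 kg rider, all else equal, which is why position and clothing can easily swamp the weight advantage.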
But still, as the range of typical human weights a bike can support is only about $50\,\mathrm{kg}$ to $150\,\mathrm{kg}$, a light person in an aerodynamic position with tight clothes will probably still be faster than a heavy person sitting upright (as they will reduce their area to a fraction and lower their $c_D$). No and yes. First, your assumption is not quite correct. In vacuum, all masses fall at the same rate. The reason is that the mass cancels in the equations of motion: $$ma = mg \implies a = \ddot{x} = g.$$ To be more precise: the inertial mass and the gravitational mass are the same, therefore they cancel. However, things change when you take air resistance into account. Of course, a feather has much more air resistance relative to its weight than a stone, so the feather falls more slowly. In the equations of motion, the air resistance is proportional to the square of the velocity and a geometric factor that describes the shape of your object/body, but not to the mass. More specific to your question: although you have a higher mass, the air resistance of your body may be quite similar to that of other people, especially if you make yourself as small as possible. This means that you will have an advantage compared to lighter people, as long as you manage to not have much more air resistance than they do.
Convexity properties of graphs

This class gathers the algorithms related to convexity in a graph. It implements the following methods:

ConvexityProperties.hull() -- Return the convex hull of a set of vertices
ConvexityProperties.hull_number() -- Compute the hull number of a graph and a corresponding generating set

AUTHORS: Nathann Cohen

Methods

class sage.graphs.convexity_properties.ConvexityProperties

Bases: object

This class gathers the algorithms related to convexity in a graph.

Definitions

A set \(S \subseteq V(G)\) of vertices is said to be convex if for all \(u,v\in S\) the set \(S\) contains all the vertices located on a shortest path between \(u\) and \(v\). Alternatively, a set \(S\) is said to be convex if the distances satisfy \(\forall u,v\in S, \forall w\in V\backslash S : d_{G}(u,w) + d_{G}(w,v) > d_{G}(u,v)\).

The convex hull \(h(S)\) of a set \(S\) of vertices is defined as the smallest convex set containing \(S\). It is a closure operator, as trivially \(S\subseteq h(S)\) and \(h(h(S)) = h(S)\).

What this class contains

As operations on convex sets generally involve the computation of distances between vertices, this class' purpose is to cache that information so that computing the convex hulls of several different sets of vertices does not imply recomputing several times the distances between the vertices. In order to compute the convex hull of a set \(S\) it is possible to write the following algorithm: for any pair \(u,v\) of elements in the set \(S\), and for any vertex \(w\) outside of it, add \(w\) to \(S\) if \(d_{G}(u,w) + d_{G}(w,v) = d_{G}(u,v)\). When no vertex can be added anymore, the set \(S\) is convex.

The distances are not actually that relevant. The same algorithm can be implemented by remembering, for each pair \(u, v\) of vertices, the list of elements \(w\) satisfying the condition, and this is precisely what this class remembers, encoded as bitsets to make storage and union operations more efficient.
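The algorithm described above can be sketched in plain Python (an illustrative reimplementation for an unweighted graph given as an adjacency dict, not the bitset-based Sage code):

```python
# Convex hull of a vertex set: repeatedly add any vertex w lying on a
# shortest u-v path (d(u,w) + d(w,v) = d(u,v)) between hull members.
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source (BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def convex_hull(adj, vertices):
    # Cache all pairwise distances once, as the class above does.
    dist = {v: bfs_distances(adj, v) for v in adj}
    hull = set(vertices)
    changed = True
    while changed:
        changed = False
        for u in list(hull):
            for v in list(hull):
                for w in adj:
                    if w not in hull and dist[u][w] + dist[w][v] == dist[u][v]:
                        hull.add(w)
                        changed = True
    return sorted(hull)

# On the 4-cycle 0-1-2-3, two opposite vertices generate the whole cycle.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(convex_hull(C4, [0, 2]))  # → [0, 1, 2, 3]
```

This naive version assumes a connected graph, re-enumerates pairs on every pass, and runs in polynomial but unoptimized time; the Sage class avoids recomputing distances and stores the "between" sets as bitsets instead.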
Note: This class is useful if you compute the convex hulls of many sets in the same graph, or if you want to compute the hull number itself, as it involves many calls to hull(). Using this class on non-connected graphs is a waste of space and efficiency! If your graph is disconnected, the best for you is to deal independently with each connected component, whatever you are doing.

Possible improvements

When computing a convex set, all the pairs of elements belonging to the set \(S\) are enumerated several times. There should be a smart way to avoid enumerating pairs of vertices which have already been tested. The cost of each test is not very high, though, so keeping track of those which have already been tested may cost more than it saves. The ordering in which they are visited is currently purely lexicographic, while there is a poset structure to exploit. In particular, when two vertices \(u, v\) are far apart and generate a set \(h(\{u,v\})\) of vertices, all the pairs of vertices \(u', v'\in h(\{u,v\})\) satisfy \(h(\{u',v'\}) \subseteq h(\{u,v\})\), and so it is useless to test the pair \(u', v'\) when both \(u\) and \(v\) were present. The information cached is, for any pair \(u,v\) of vertices, the list of elements \(w\) with \(d_{G}(u,w) + d_{G}(w,v) = d_{G}(u,v)\). This is not in general equal to \(h(\{u,v\})\)! Nothing says these recommendations will actually lead to any actual improvements. These are just some ideas remembered while writing this code. Trying to optimize may well lead to a loss of efficiency on many instances.

EXAMPLES:

sage: from sage.graphs.convexity_properties import ConvexityProperties
sage: g = graphs.PetersenGraph()
sage: CP = ConvexityProperties(g)
sage: CP.hull([1, 3])
[1, 2, 3]
sage: CP.hull_number()
3

hull(vertices)

Return the convex hull of a set of vertices.

INPUT:

vertices -- a list of vertices
EXAMPLES:

sage: from sage.graphs.convexity_properties import ConvexityProperties
sage: g = graphs.PetersenGraph()
sage: CP = ConvexityProperties(g)
sage: CP.hull([1, 3])
[1, 2, 3]

hull_number(value_only=True, verbose=False)

Compute the hull number and a corresponding generating set.

The hull number \(hn(G)\) of a graph \(G\) is the cardinality of a smallest set of vertices \(S\) such that \(h(S)=V(G)\).

INPUT:

value_only -- boolean (default: True); whether to return only the hull number (default) or a minimum set whose convex hull is the whole graph

verbose -- boolean (default: False); whether to display information on the LP

COMPLEXITY:

This problem is NP-Hard [HLT1993], but seems to be of the "nice" kind. Update this comment if you fall on hard instances \(:-)\)

ALGORITHM:

This is solved by linear programming. As the function \(h(S)\) associating to each set \(S\) its convex hull is a closure operator, it is clear that any set \(S_G\) of vertices such that \(h(S_G)=V(G)\) must satisfy \(S_G \not \subseteq C\) for any proper convex set \(C \subsetneq V(G)\). The following formulation is hence correct:

\[\begin{split}\text{Minimize :}& \sum_{v\in G}b_v\\ \text{Such that :}&\\ &\forall C\subsetneq V(G)\text{ a proper convex set }\\ &\sum_{v\in V(G)\backslash C} b_v \geq 1\end{split}\]

Of course, the number of convex sets -- and so the number of constraints -- can be huge and hard to enumerate, so at first an incomplete formulation is solved (it is missing some constraints). If the answer returned by the LP solver is a set \(S\) generating the whole graph, then it is optimal and so is returned. Otherwise, the constraint corresponding to the set \(h(S)\) can be added to the LP, which makes the answer \(S\) infeasible, and another solution is computed. This being said, simply adding the constraint corresponding to \(h(S)\) is a bit slow, as these sets can be large (and the corresponding constraint a bit weak).
To improve it a bit, before being added, the set \(h(S)\) is "greedily enriched" with vertices to a set \(S'\) for as long as \(h(S')\neq V(G)\). This way, we obtain a set \(S'\) with \(h(S)\subseteq h(S')\subsetneq V(G)\), and the constraint corresponding to \(h(S')\) -- which is stronger than the one corresponding to \(h(S)\) -- is added. This can actually be seen as a hitting set problem on the complement of convex sets.

EXAMPLES:

The hull number of Petersen's graph:

sage: from sage.graphs.convexity_properties import ConvexityProperties
sage: g = graphs.PetersenGraph()
sage: CP = ConvexityProperties(g)
sage: CP.hull_number()
3
sage: generating_set = CP.hull_number(value_only=False)
sage: CP.hull(generating_set)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
The condition implies that $BA$ commutes with $AB$ and hence they are simultaneously triangularisable over $\mathbb C$. Let $AB$ and $BA$ be simultaneously triangularised. Since $AB$ and $BA$ in general have identical spectra, if $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $BA$ along its diagonal, then the entries along the diagonal of $AB$ are $\lambda_{\sigma(1)},\ldots,\lambda_{\sigma(n)}$ for some permutation $\sigma$. So, the given condition implies that $f(\lambda_i)=\lambda_{\sigma(i)}$ where $f:z\mapsto (z^2+1)/2$. In other words, $f$ is a bijection among the eigenvalues of $BA$. As $f$ maps real numbers to real numbers, it must also be a bijection among the real eigenvalues of $BA$. Since $BA$ is a real matrix of odd dimension, real eigenvalues do exist. Now, as $f(x)\ge x$ on $\mathbb R$, $f$ must map the largest real eigenvalue of $BA$ to itself. Solving $f(x)=x$, we see that this eigenvalue is $1$. Edit. As the OP points out in his comment, actually we only need to consider the largest real eigenvalue. Let $(\lambda,x)$ be an eigenpair of $BA$, where $\lambda$ is the largest real eigenvalue of $BA$. Then $(\frac{\lambda^2+1}2,x)$ is an eigenpair of $AB$. However, since $AB$ and $BA$ have identical spectra and $\frac{\lambda^2+1}2\ge \lambda$, we must have $\frac{\lambda^2+1}2=\lambda$ and hence $\lambda=1$.
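The fact used above, that $AB$ and $BA$ always have identical spectra, can be sanity-checked numerically. For $2\times 2$ matrices the characteristic polynomial is determined by the trace and determinant, and both agree for $AB$ and $BA$ (a plain-Python check with arbitrarily chosen matrices):

```python
# Sanity check (pure Python, 2x2): AB and BA share trace and determinant,
# hence the same characteristic polynomial and the same spectrum.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[1, 2], [3, 4]]      # arbitrary example matrices
B = [[0, 5], [6, 7]]
AB, BA = matmul(A, B), matmul(B, A)

assert trace(AB) == trace(BA) and det(AB) == det(BA)
print(trace(AB), det(AB))
```

In general dimension the statement follows from $\det(\lambda I - AB) = \det(\lambda I - BA)$ for square $A$, $B$.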
We show that for self-adjoint Jacobi matrices and Schrödinger operators, perturbed by dissipative potentials in $\ell^1(\mathbb{N})$ and $L^1(0,\infty)$ respectively, the finite section method does not omit any points of the spectrum. In the Schrödinger case two different approaches are presented. Many aspects of the proofs can be expected to carry over to higher dimensions, particularly for absolutely continuous spectrum. We prove some new results which justify the use of interval truncation as a means of regularising a singular fourth-order Sturm–Liouville problem near a singular endpoint. Of particular interest are the results in the so-called lim-3 case, which has no analogue in second-order singular problems. The paper presents a study of three polyimide-oxide interfaces. The Cr-oxide-PI interface seems to undergo a drastic topographical structural rearrangement.
The case of the Ni surface is indicative of heavy decomposition of the Ni oxide which induces the decomposition of the polymer film. On the contrary, the stabilized Ni surface behaves in a normal manner, indicating that the driving force for the decomposition is the instability of the oxide. In this paper we present computer-assisted proofs of a number of results in theoretical fluid dynamics and in quantum mechanics. An algorithm based on interval arithmetic yields provably correct eigenvalue enclosures and exclosures for non-self-adjoint boundary eigenvalue problems, the eigenvalues of which are highly sensitive to perturbations. We apply the algorithm to: the Orr–Sommerfeld equation with Poiseuille profile to prove the existence of an eigenvalue in the classically unstable region for Reynolds number R=5772.221818; the Orr–Sommerfeld equation with Couette profile to prove upper bounds for the imaginary parts of all eigenvalues for fixed R and wave number α; the problem of natural oscillations of an incompressible inviscid fluid in the neighbourhood of an elliptical flow to obtain information about the unstable part of the spectrum off the imaginary axis; Squire's problem from hydrodynamics; and resonances of one-dimensional Schrödinger operators. A new approach is presented for the solution of spectral problems on infinite domains with regular ends, which avoids the need to solve boundary-value problems for many trial values of the spectral parameter.
We present numerical results both for eigenvalues and for resonances, comparing with results reported by Aslanyan, Parnovski and Vassiliev. We consider singular block operator problems of the type arising in the study of stability of the Ekman boundary layer. The essential spectrum is located, and an analysis of the $L^2$ solutions of a related first order system of differential equations allows the development of a Titchmarsh–Weyl coefficient $M(\lambda)$. This, in turn, permits a rigorous analysis of the convergence of approximations to the spectrum arising from regular problems. Numerical results illustrate the theory. The paper deals with linear pencils N − λP of ordinary differential operators on a finite interval with λ-dependent boundary conditions. Three different problems of this form arising in elasticity and hydrodynamics are considered. So-called linearization pairs (W, T) are constructed for the problems in question. More precisely, functional spaces W densely embedded in L2 and linear operators T acting in W are constructed such that the eigenvalues and the eigen- and associated functions of T coincide with those of the original problems. The spectral properties of the linearized operators T are studied. In particular, it is proved that the eigen- and associated functions of all linearizations (and hence of the corresponding original problems) form Riesz bases in the spaces W and in other spaces which are obtained by interpolation between D(T) and W. We consider a parabolic matrix–vector system in which the diffusion matrix may be time dependent. For the time-independent case we construct approximate solutions with guaranteed error bounds using spectral information from certain matrix–vector Sturm–Liouville problems. For the time-dependent case we employ an approximation procedure which reduces the problem, on each time-step, to the time-independent case. 
We give an algorithm which may be used a priori at each time-step in the time-dependent case to guarantee accuracy to a specified tolerance. Ultra-thin films of polyamic acid (BTDA-ODA type) are prepared by spin coating a very dilute solution onto bare or metallized silicon wafers (estimated thickness 25 ± 5 Å). The XPS analysis of the various polymer/metal interfaces suggests the occurrence of acid-base type interaction between the carboxylic groups of the polymer and the oxides and hydroxides that cover the Ni and Cr surfaces. When these systems are in situ cured, the XPS analysis shows the occurrence of a variety of chemical interfacial reactions. In particular: (i) when the substrate is a naturally passivated Ni layer, the complete destruction of the polymer is observed; (ii) when the substrate is a naturally passivated Si wafer, no relevant interaction occurs; (iii) for a naturally passivated Cr and an oxidized Ni surface, partial decomposition of the polymer is observed. The above effects are explained in terms of the acid or basic properties of the oxidized layers that cover the metal surfaces, and in terms of their stability toward heating.
BARC talk by Till Miltzow Tuesday, 18 June, Till Miltzow, assistant professor at Utrecht, will give the talk "Smoothed Analysis of the Art Gallery Problem". Title: Smoothed Analysis of the Art Gallery Problem Abstract: In the Art Gallery Problem we are given a polygon $P \subset [0,L]^2$ on $n$ vertices and a number $k$. We want to find a guard set $G$ of size $k$, such that each point in $P$ is seen by a guard in $G$. Formally, a guard $g$ sees a point $p \in P$ if the line segment $pg$ is fully contained inside the polygon $P$. History and practical findings indicate that irrational guard coordinates are a "very rare" phenomenon; we give a theoretical explanation. Next to worst-case analysis, smoothed analysis has gained popularity as a way to explain the practical performance of algorithms, even if they perform badly in the worst case. The idea is to study the expected performance on small perturbations of the worst input. The performance is measured in terms of the magnitude $\delta$ of the perturbation and the input size. We consider four different models of perturbation. We show that the expected number of bits needed to describe optimal guard positions per guard is logarithmic in the input and the magnitude of the perturbation. This shows from a theoretical perspective that rational guards with small bit-complexity are typical. Note that describing the guard positions is the bottleneck in showing NP-membership. The significance of our results is that algebraic methods are not needed to solve the Art Gallery Problem in typical instances. This is the first time an ER-complete problem has been analyzed by smoothed analysis. https://www.youtube.com/watch?v=Axs7k-qL2zY (7 minutes) Joint work with Michael Dobbins and Andreas Holmsen. Bio: Till Miltzow did his PhD with Günter Rote in Berlin, working mainly in computational geometry. Thereafter, he did a postdoc with Daniel Marx in Budapest to work on parameterized complexity.
After spending some time in Brussels, he moved to Utrecht, where he received a Veni to work independently.
I am trying to follow a derivation in this paper from Wald. Specifically, at the end of the paper, just under Eq. 4.7, there is given the equation $$ Q/m \leq 2 B_0 m$$ where $Q$ is the system charge, $m$ the mass and $B_0$ the magnetic field. This equation is in geometrized units such that $c=G=1$ (as defined at the start of the paper). Now, it is given as an example that if $B_0 \sim 10^{-4}$ gauss and $m = M_{\odot} \sim 2 \times 10^{30}$ kg, then $Q/m \sim 10^{-24}$, but I am struggling to show this. Specifically, I feel like if I convert both $B$ and $m$ from $[SI]\rightarrow [Geo]$ and then plug the values into the top equation, then I should get the answer $10^{-24}$. I am following the conversion as outlined in Section 4 of these notes, such that $$B [SI] = G^{-1} c^{-4} \times B[Geo]$$ $$m [SI] = G^{-1} c^{2} \times m[Geo]$$ but this does not seem to produce the correct answer. Any guidance?
Diagonal Matrices of Linear Operators Examples 1 Recall from the Diagonal Matrices of Linear Operators page that if $V$ is a finite-dimensional vector space and $T \in \mathcal L (V)$, then $T$ is said to be diagonalizable if there exists a basis $B_V$ such that $\mathcal M (T, B_V)$ is a diagonal matrix. We saw that if $T$ has $\mathrm{dim} V$ distinct eigenvalues then there exists a basis $B_V$ of $V$ of eigenvectors corresponding to these $\mathrm{dim} V$ eigenvalues. We also saw a chain of equivalent statements regarding $T$ being diagonalizable. We will now look at some problems regarding diagonal matrices of linear operators. Example 1 Reprove that if $T \in \mathcal L(V)$ is such that $T$ has $\mathrm{dim} (V)$ distinct eigenvalues then there exists a basis $B_V$ of $V$ such that $\mathcal M (T, B_V)$ is diagonal. Suppose that $T$ has $\mathrm{dim} (V) = n$ distinct eigenvalues. Let $\lambda_1, \lambda_2, ..., \lambda_n \in \mathbb{F}$ be these eigenvalues. Let $v_1, v_2, ..., v_n$ be corresponding nonzero eigenvectors to these eigenvalues. Then the set $\{ v_1, v_2, ..., v_n \}$ is linearly independent. Furthermore, this set has $\mathrm{dim} (V) = n$ vectors in it, and so this set is the "right size", which implies that $\{ v_1, v_2, ..., v_n \}$ is a basis of $V$. The columns of $\mathcal M (T, B_V)$ are determined by $T$ applied to each basis vector in $\{ v_1, v_2, ..., v_n \}$. We have for each $j = 1, 2, ..., n$ that:(1) Therefore, all entries off of the main diagonal are zero. Furthermore, the entry in row $j$, column $j$ is the eigenvalue $\lambda_j$ for each $j = 1, 2, ..., n$. Example 2 Consider the linear map $T \in \mathcal L (\mathbb{R}^2)$ defined by $T(x, y) = (3x + 4y, 2y)$ for every $(x, y) \in \mathbb{R}^2$. Find a basis of $\mathbb{R}^2$ for which $T$ has a diagonal matrix.
If we use the standard basis $\{ (1, 0), (0, 1) \}$ of $\mathbb{R}^2$, then we have that:(2) Therefore we have that the matrix of $T$ with respect to the standard basis on $\mathbb{R}^2$ is:(3) We note that this matrix is already upper triangular, and so the eigenvalues of $T$ are $\lambda_1 = 3$ and $\lambda_2 = 2$. Note that $T$ therefore has $\mathrm{dim} (\mathbb{R}^2)$ distinct eigenvalues, $\lambda_1$ and $\lambda_2$, so there exists a basis $B$ of $\mathbb{R}^2$ for which $\mathcal M (T, B)$ is diagonal. We can find such a basis by finding corresponding nonzero eigenvectors to the eigenvalues that we have already found. We note that if $u = (x, y) \in \mathbb{R}^2$ then:(4) From above we have that:(5) Therefore for $\lambda_1 = 3$ we have that the system above reduces to:(6) Therefore the corresponding set of eigenvectors to the eigenvalue $\lambda_1 = 3$ is $\{ (x, 0) \in \mathbb{R}^2 : x \in \mathbb{R}, \: x \neq 0 \}$. Choose the vector $(1, 0)$. For the eigenvalue $\lambda_2 = 2$ we have that the earlier system from above reduces to:(7) The first equation tells us that $x = -4y$. Therefore the corresponding set of eigenvectors to the eigenvalue $\lambda_2 = 2$ is $\{ (-4y, y) \in \mathbb{R}^2 : y \in \mathbb{R}, \: y \neq 0 \}$. Choose the vector $(-4, 1)$. Therefore $\{ (1, 0), (-4, 1) \}$ should be a basis $B$ for which $\mathcal M (T, B)$ is diagonal. Let's check this:(8) Therefore we have that the matrix of $T$ with respect to the basis $B = \{ (1, 0), (-4, 1) \}$ is:(10) Thus, $\mathcal M (T, B)$ is indeed a diagonal matrix.
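A quick check (plain Python) that the basis found above really diagonalizes $T$: each basis vector should be mapped to a scalar multiple of itself.

```python
# T(x, y) = (3x + 4y, 2y); the basis B = {(1, 0), (-4, 1)} should give
# M(T, B) = diag(3, 2), i.e. each basis vector is an eigenvector.
def T(v):
    x, y = v
    return (3 * x + 4 * y, 2 * y)

assert T((1, 0)) == (3, 0)     # T(1, 0) = 3 * (1, 0):   eigenvalue 3
assert T((-4, 1)) == (-8, 2)   # T(-4, 1) = 2 * (-4, 1): eigenvalue 2
print("B diagonalizes T")
```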
Table of Contents Lebesgue Integrability of the Absolute Value of a Function Recall from the Lebesgue Integrability of the Positive and Negative Parts of a Function page that if $f$ is a Lebesgue integrable function on $I$ then the positive and negative parts of $f$, $f^+$ and $f^-$, are both Lebesgue integrable on $I$. We will now use this result to prove a useful theorem which shows that $\mid f \mid$ is Lebesgue integrable as well, and that the absolute value of the Lebesgue integral of $f$ on $I$ is always less than or equal to the Lebesgue integral of the absolute value of $f$ on $I$. Theorem 1: Let $f$ be Lebesgue integrable on $I$. Then $\mid f \mid$ is Lebesgue integrable on $I$ and $\displaystyle{\biggr \lvert \int_I f(x) \: dx \biggr \rvert \leq \int_I \mid f(x) \mid \: dx}$. Proof: Let $f$ be Lebesgue integrable on $I$. Then from the theorem on the Lebesgue Integrability of the Positive and Negative Parts of a Function page we know that the positive and negative parts of $f$ are Lebesgue integrable on $I$, i.e., $f^+$ and $f^-$ are Lebesgue integrable on $I$. But $\mid f \mid = f^+ + f^-$, and so by the Linearity of Lebesgue Integrals we have that $\mid f \mid$ is Lebesgue integrable on $I$. Furthermore, for all $x \in I$ we know that $- \mid f(x) \mid \leq f(x) \leq \mid f(x) \mid$. Since $- \mid f \mid$, $f$, and $\mid f \mid$ are all Lebesgue integrable on $I$, by one of the theorems on the Comparison Theorems for Lebesgue Integrals page we have that $\displaystyle{- \int_I \mid f(x) \mid \: dx \leq \int_I f(x) \: dx \leq \int_I \mid f(x) \mid \: dx}$. Therefore $\displaystyle{\biggr \lvert \int_I f(x) \: dx \biggr \rvert \leq \int_I \mid f(x) \mid \: dx}$. $\blacksquare$
It looks like your program is using an approximation based on $q \approx w = w_s \cdot RH$ with an approximation of Clausius–Clapeyron to find $w_s$. Looking at a few values of RH, T and P, your approximation is pretty close (within about 5%) to an analytic answer. Based on the output you quoted, it looks like you are providing incorrect values of RH. Note in the comments to your routine it says: @param rh relative humidity (proportion, not %) This means you need to provide the RH proportion, not the percentage. E.g. divide by 100 -- RH=1 for 100%, RH=0.5 for 50%, etc. If you adjust your input data you should be able to use your code as-is. If you wish to compare it to something, you can reference the solution below. If you are given $RH$ (in the range [0,1]), $T$ (K) and $p$ (Pa) you can proceed as follows. Knowing that $$RH = \dfrac{e}{e_s},$$ $$w = \dfrac{e\ R_d}{R_v(p-e)},$$ and $$q = \dfrac{w}{w+1},$$ we can solve for specific humidity $q$. Rather than combining this into a single formula and solving, it is more straightforward to proceed incrementally. First, find $e_s(T)$ where $$e_s(T) = e_{s0}\exp\left[\left(\dfrac{L_v(T)}{R_v}\right)\left(\dfrac{1}{T_0}-\dfrac{1}{T}\right)\right]$$ and then find $e$ from the first formula ($e = RH \cdot e_s$). Then plug $e$ into the formula for $w$, and that result into the formula for $q$. Variables used:
$q$ specific humidity or the mass mixing ratio of water vapor to total air (dimensionless)
$w$ mass mixing ratio of water vapor to dry air (dimensionless)
$e_s(T)$ saturation vapor pressure (Pa)
$e_{s0}$ saturation vapor pressure at $T_0$ (Pa)
$R_d$ specific gas constant for dry air (J kg$^{-1}$ K$^{-1}$)
$R_v$ specific gas constant for water vapor (J kg$^{-1}$ K$^{-1}$)
$p$ pressure (Pa)
$L_v(T)$ specific enthalpy of vaporization (J kg$^{-1}$)
$T$ temperature (K)
$T_0$ reference temperature (typically 273.16 K)
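Putting the steps above together (a minimal sketch; the constants $T_0$, $e_{s0}$, $L_v$, $R_d$, $R_v$ are typical textbook values I've assumed, not taken from your code):

```python
# RH (proportion), T (K), p (Pa)  ->  specific humidity q (dimensionless).
import math

T0  = 273.16    # reference temperature (K)
ES0 = 611.2     # saturation vapor pressure at T0 (Pa), assumed value
LV  = 2.5e6     # enthalpy of vaporization (J/kg), treated as constant
RD  = 287.0     # specific gas constant, dry air (J kg^-1 K^-1)
RV  = 461.5     # specific gas constant, water vapor (J kg^-1 K^-1)

def specific_humidity(rh, T, p):
    """rh as a proportion in [0, 1], T in K, p in Pa."""
    es = ES0 * math.exp((LV / RV) * (1.0 / T0 - 1.0 / T))  # Clausius-Clapeyron
    e = rh * es                        # actual vapor pressure
    w = e * RD / (RV * (p - e))        # mixing ratio, vapor : dry air
    return w / (w + 1.0)               # specific humidity q

q = specific_humidity(0.5, 293.15, 101325.0)  # 50% RH, 20 C, sea level
print(round(q, 4))
```

For example, at 50% RH, 20 °C and sea-level pressure this gives $q \approx 0.0073$, i.e. roughly 7 g of water vapor per kg of air, which is in the expected range.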
Why negative interest rates might not work Matthew Martin, 9/04/2014 01:58:00 PM Miles Kimball has proposed that we eliminate paper currency[1] so that we can eliminate liquidity traps by implementing negative interest rates. Here's why I'm uncertain whether negative interest rate policy is expansionary. To start, let's go through the standard logic: the Fisher equation relates nominal interest, real interest, and inflation: [$$]r_t=\tilde{r}_t+\pi_t[$$] where [$]r_t[$] is the nominal interest rate controlled by the Fed, [$]\tilde{r}_t[$] is the real interest rate, and [$]\pi_t[$] is the inflation rate, in period [$]t[$]. The consumption/savings tradeoff depends on the returns to saving rather than consuming, so that consumption, and therefore output and employment, is decreasing in [$]\tilde{r}_t.[$] If prices are sticky, then [$]\pi[$] will respond to policy relatively slowly, so that in the short run reducing the nominal interest rate [$]r_t[$] leads to a reduction in [$]\tilde{r}_t[$] and thus an increase in consumption and reduction in saving. This is the liquidity effect, which is expansionary. What this analysis has left out, however, are the feedback effects of inflation on output. This happens through two additional mechanisms: the New Keynesian Phillips Curve and the Euler consumption equation. We won't worry about exact analytical specifications for each; just let [$]f_t[$] describe how the household choice of [$]C_t[$] depends on the real interest rate and inflation (consumption Euler equation), and [$]g_t[$] describe the New Keynesian Phillips curve relationship between current and expected inflation, so that we have: [update] \begin{align} r_t&=\tilde{r}_t+\pi_t\\ C_t&=f_t\left(\tilde{r}_t,\pi_{t+1}\right) \\ \pi_t&=g_t\left(\pi_{t+1}\right) \end{align} where [$]g_t[$] is an increasing function and [$]f_t[$] is decreasing in [$]\tilde{r}_t[$] but increasing in [$]\pi_{t+1}[$].[2] It is now apparent that the liquidity analysis in the preceding paragraph was incomplete--inflation is actually a free variable!
Conventional wisdom says that lowering the nominal interest rate causes inflation to increase, because we typically think of lowering the interest rate as being achieved by increasing the supply of money. But strictly from the mathematics, this is actually ambiguous--there are two paths to lower nominal interest rates: a monetary expansion that lowers real interest rates via sticky prices, or a monetary contraction that lowers expected inflation. Suppose we have monetary expansion; then expected inflation [$]\pi_{t+1}[$] rises via the money market (as I showed here), which induces more consumption via [$]f_t[$] and raises current inflation via the Phillips curve [$]g_t[$], which further reinforces the liquidity effect by lowering the real interest rate [$]\tilde{r}_t[$] for the given nominal rate target via the Fisher equation. However, when we instead assume that lower nominal rates are achieved via monetary contraction, all of these reinforcing effects reverse signs: expected inflation falls, which reduces current inflation via [$]g_t[$], reduces current consumption via [$]f_t[$], and puts reverse pressure on the real interest rate in the Fisher equation since, holding [$]r_t[$] at the target, a decrease in [$]\pi_t[$] implies higher, not lower, real interest [$]\tilde{r}_t[$]. So despite the simplistic logic of the first paragraph, lowering the Fed funds rate can potentially be contractionary if it is associated with a decrease in monetary aggregates. For positive rates, the empirical evidence overwhelmingly suggests that lowering the Fed funds rate increases the money supply. This is not obvious from theory alone--lowering nominal interest rates does reduce "money printing" in some respects, for example by reducing interest on reserves. But the net effect is pretty unambiguous because the Fed typically engages in ample Open Market Operations that increase base money by far more than the reduction in interest payments.
In this respect, what passes for conventional wisdom is quite wrong in saying that the Fed balance sheet doesn't matter--this neutrality is an illusion that arises from taking too many modelling shortcuts (like most New Keynesian papers, Christiano, Eichenbaum, and Rebelo (2011), for example, do not explicitly model the money market that drives their key assumptions; see footnote). But under Miles Kimball's proposal, the Fed would lower interest rates to below zero by taxing away balances of e-currency. This is a reduction in the monetary base, just like the case of interest on reserves, and by itself would be contractionary, not expansionary. The expansionary effects of Kimball's policy depend on the assumption that households will increase consumption in response to the taxing of their cash savings, rather than letting their savings depreciate. That needn't be the case--it depends on the relative magnitudes of the income and substitution effects for real money balances. The substitution effect is what Kimball has in mind--raising the price of real money balances will induce substitution out of money and into consumption. But there's also an income effect, whereby the loss of wealth induces less consumption and more savings. Thus, negative interest rate policy can be contractionary even though positive interest rate policy is expansionary. Indeed, what Kimball has proposed amounts to a reverse Bernanke Helicopter--imagine a giant vacuum flying around the country sucking money out of people's pockets. Why would we assume that this would be inflationary? [1] To be clear, this is something we should do regardless of whether we also enact Kimball's negative interest rate policy. Any business with an internet connection already has everything it needs to conduct payments electronically. Paper is costly and inefficient and should be killed. [2] See Christiano, Eichenbaum, and Rebelo (2011). The consumption Euler is equation (11), while the New Keynesian Phillips Curve is equation (9).
While Christiano et al. do not explicitly model the money market, the NK model is equivalent to a money-in-utility model with nominal rigidities (as in this post but with monopolistic competition and Calvo pricing), where interest rate policy is enacted by targeting the money supply. This equivalence is invoked when Christiano et al. assume the direction of causality from inflation to policy rate in equation (6). [update] What follows here is meant to provide intuition, not a formal proof. An earlier version omitted subscripts from [$]g_t, f_t[$], which was a bit misleading--these functions do have other time-dependent arguments that have been suppressed here for simplicity. For a proof that reversing the causal assumption embedded in the NK Taylor rule implies that lowering rates can be contractionary, see Schmitt-Grohé and Uribe (2012), which has also been covered by David Andolfatto here.
In recent years, processing and exploration of time series has experienced a noticeable interest. Growing volumes of data and needs of efficient processing pushed the research in new directions, including hardware based ... The aim of this dissertation is to investigate the geometry of resolutions of quotient singularities Cn/G for G ⊂ SLn(C) with use of an associated algebraic object – the Cox ring. We are interested in construction of ... In this dissertation we take an algorithmic view on resource allocation problems in distributed systems. We present a comprehensive perspective by studying a variety of distributed systems---from abstract models of generic ... Most statistical analyses or modelling studies must deal with the discrepancy between the measured aspects of analysed phenomena and their true nature. Hence, they are often preceded by a step of altering the data ... In this dissertation we set out to study a simplified model of activation flow in artificial neural networks with geometrical embedding. The model provides a mathematical description of abstract neural activation transfer ... This doctoral dissertation introduces a new model of cryptographic computation, defined by the author and called the SBA model. The characteristic features of this model are bounded memory, leakage, and the use of a random oracle. In ... Illative systems of combinatory logic consist of combinatory logic extended with additional constants intended to represent logical notions. We introduce some strong systems of illative combinatory logic, extending earlier ... The Shapley value is one of the most important solution concepts in coalitional game theory. It was originally defined for the classical model of a coalitional game, which is relevant to a wide range of economic and social ... The rapid growth of the computer industry requires creating large, highly complicated and sophisticated software. This implies increasing probability for errors, bugs and failures.
Various software verification techniques are ... This thesis covers three models of theoretical biology, each one treated mathematically in a rigorous manner, using different mathematical approaches. Their common core is that they are all modelling transmission of signals ... Formal methods promise the ultimate quality of software artifacts with mathematical proof of their correctness. Algebraic specification is one of such methods, providing formal specifications of system components suitable ... In this dissertation we study finite-time global stability for certain classes of Hopfield-type neural networks. The networks we consider can be described both by a system of differential equations and by an inclusion ... The thesis contains several results concerning the quantitative aspects of Poincar\'{e} recurrence. In particular, bounds on the limit $\liminf_{n\to +\infty} n^\beta d(T^n(x),x)$ (and similar expressions) are obtained in ... This thesis falls within the field of Graph Theory. A central theme is the study of exclusion theorems and their uses in related topics. One of them is well-quasi-ordering: we identify well-quasi-ordered subclasses for ... In this thesis a model of the dynamics of a size-structured population subject to selective predation is built and analyzed. The study is motivated by biological phenomena concerning limnology and oceanography, and in ... The main goal of this thesis is the analysis of a wide class of structured population models in the space of finite, nonnegative Radon measures equipped with the flat metric. This framework allows a unified approach to a ... The subject of this dissertation is the Gysin homomorphism in equivariant cohomology for spaces with torus action. We consider spaces which are quotients of classical semisimple complex linear algebraic groups by a parabolic ...
ASU Electronic Theses and Dissertations This collection includes most of the ASU Theses and Dissertations from 2011 to present. ASU Theses and Dissertations are available in downloadable PDF format; however, a small percentage of items are under embargo. Information about the dissertations/theses includes degree information, committee members, an abstract, and supporting data or media. In addition to the electronic theses found in the ASU Digital Repository, ASU Theses and Dissertations can be found in the ASU Library Catalog. Dissertations and Theses granted by Arizona State University are archived and made available through a joint effort of the ASU Graduate College and the ASU Libraries. For more information or questions about this collection, visit the Digital Repository ETD Library Guide or contact the ASU Graduate College at gradformat@asu.edu. This work presents analysis and results for the NPDGamma experiment, measuring the spin-correlated photon directional asymmetry in the $\vec{n}p\rightarrow d\gamma$ radiative capture of polarized, cold neutrons on a parahydrogen target. The parity-violating (PV) component of this asymmetry $A_{\gamma,PV}$ is unambiguously related to the $\Delta I = 1$ component of the hadronic weak interaction due to pion exchange. Measurements in the second phase of NPDGamma were taken at the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source (SNS) from late 2012 to early 2014, and then again in the first half of 2016 for an unprecedented level of statistics in order … Contributors Blyth, David Cooper, Alarcon, Ricardo O, Ritchie, Barry G, et al. Created Date 2017 The OLYMPUS experiment measured the two-photon exchange contribution to elastic electron-proton scattering, over a range of four-momentum transfer from \(0.6 < Q^2 < 2.2\) \((\mathrm{GeV/c})^2\).
The motivation for the experiment stemmed from measurements of the electric-to-magnetic form factor ratio of the proton \(\mu G_E/G_M\) extracted from polarization observables in polarized electron-proton scattering. Polarized electron-proton scattering experiments have revealed a significant decrease in \(\mu G_E/G_M\) at large \(Q^2\), in contrast to previous measurements from unpolarized electron-proton scattering. The commonly accepted hypothesis is that the discrepancy in the form factor ratio is due to neglected higher-order terms in the elastic electron-proton scattering … Contributors Ice, Lauren Diane, Alarcon, Ricardo O, Dugger, Michael, et al. Created Date 2016
The Pythagorean Theorem for Inner Product Spaces Examples 1 Recall from The Pythagorean Theorem for Inner Product Spaces page that if $V$ is an inner product space and if $u, v \in V$ are such that $u$ and $v$ are orthogonal to each other, that is, $<u, v> = 0$, then:(1) We will now look at some examples regarding the Pythagorean theorem for inner product spaces. Example 1 Let $V$ be an inner product space. Let $u, v \in V$. Prove that $u$ and $v$ are orthogonal if and only if $\| u \| ≤ \| u + c v \|$ for every $c \in \mathbb{F}$. $\Rightarrow$ Suppose that $u$ and $v$ are orthogonal to each other. Then $<u, cv> = 0$ for every $c \in \mathbb{F}$, and therefore by the Pythagorean theorem we have that:(2) Clearly $\| u \|^2 ≤ \| u \|^2 + \| cv \|^2 = \| u + cv \|^2$, and by taking square roots of both sides we get that $\| u \| ≤ \| u + cv \|$. $\Leftarrow$ Suppose that $\| u \| ≤ \| u + cv \|$ for every $c \in \mathbb{F}$. If we square both sides of this inequality then we have that:(3) Now let $c = -b<u, v>$ where $b > 0$. Then we have that:(4) If $v = 0$ then we automatically have that $<u, v> = 0$. If $v \neq 0$, then let $b = \frac{1}{\| v \|^2}$ to get that:(5) Therefore $\mid <u, v> \mid^2 = 0$, so $<u, v> = 0$.
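A small numerical illustration of Example 1 (a sketch assuming numpy, in $\mathbb{R}^3$ with the standard dot product, not part of the proof):

```python
import numpy as np

# For orthogonal u, v: ||u + v||^2 = ||u||^2 + ||v||^2 (Pythagorean theorem),
# and ||u|| <= ||u + c v|| for every scalar c.
rng = np.random.default_rng(0)
u = np.array([3.0, 0.0, 4.0])
v = np.array([0.0, 2.0, 0.0])          # <u, v> = 0 by construction
assert np.dot(u, v) == 0.0

lhs = np.dot(u + v, u + v)             # ||u + v||^2
rhs = np.dot(u, u) + np.dot(v, v)      # ||u||^2 + ||v||^2
assert np.isclose(lhs, rhs)

for c in rng.normal(size=100):         # spot-check ||u|| <= ||u + c v||
    assert np.linalg.norm(u) <= np.linalg.norm(u + c * v) + 1e-12
```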
Let's consider in dimension $d\geq 3$ the Newton/Riesz potential $f=I_2[g]$ $$ f(x)=\int_{R^d}\frac{1}{|x-y|^{d-2}}g(y)dy, $$ which solves $-\Delta f=g$ (up to positive normalizing constants, which I shall ignore), and assume that $g\in L^q$ for all $q\in[1,2d/(d+2)]$. By the Hardy-Littlewood-Sobolev inequality (or any other variation from [Stein, Singular Integrals], [Lieb-Loss, Analysis], etc...) we know that $f\in L^{2d/(d-2)}$, and also $\nabla f\in L^2$ (this is one way to prove the Sobolev embedding $H^1\subset L^{2d/(d-2)}$). If $p=2d/(d+2)$ the conjugate Hölder exponent is exactly $p'=\frac{2d}{d-2}$, thus $g\in L^p$, $f\in L^{p'}$ and $fg\in L^1$. Since by definition $-\Delta f=g$ we would expect that $$ \int \underbrace{f}_{\in L^{p'}}\underbrace{g}_{\in L^p}=\int f(-\Delta f)\overset{?}{=}\int |\underbrace{\nabla f}_{\in L^2}|^2. $$ When is the last integration by parts legitimate? With my hypotheses all the above terms are well defined, but is this enough? I seem to remember that there are "exotic" counterexamples... It seems to me that an approximation argument works fine: if $g_n$ is a sequence of smooth compactly supported functions such that $g_n\to g$ in $L^{2d/(d+2)}$ then by continuity (HLS inequality) we have that $f_n\to f$ in $L^{2d/(d-2)}$ and $\nabla f_n\to \nabla f$ in $L^2$. Since the integration by parts is legitimate for smooth decaying functions, it should pass to the limit... I don't think anything is wrong here, did I miss something or is it really just that easy? Edit: I just added the approximation argument
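As a sanity check of the identity itself (not of the limiting argument, which is the real question here), its discrete analogue can be verified exactly: on a grid with zero boundary values, summation by parts gives $\sum f\,(-\Delta f) = \sum |\nabla f|^2$ with no error term. A sketch assuming numpy:

```python
import numpy as np

# Discrete 1D analogue: for a grid function vanishing at both endpoints,
# sum f * (-lap f) equals sum |grad f|^2 exactly (summation by parts).
n = 200
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x) * np.exp(-x)           # arbitrary; forced to 0 at ends
f[0] = f[-1] = 0.0

lap = np.zeros_like(f)
lap[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]   # discrete Laplacian (interior)
grad = np.diff(f)                            # forward differences

lhs = np.sum(f * (-lap))
rhs = np.sum(grad ** 2)
assert np.isclose(lhs, rhs)
```

The boundary terms are what vanish here by construction; in the continuum problem they are exactly what the approximation argument has to control.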
Basic Theorems Regarding Connected and Disconnected Metric Spaces Recall from the Connected and Disconnected Metric Spaces page that a metric space $(M, d)$ is said to be disconnected if there exists $A, B \subseteq M$, $A, B \neq \emptyset$ where $A \cap B = \emptyset$ and:(1) We say that $(M, d)$ is connected if it is not disconnected. Furthermore, we say that $S \subseteq M$ is connected/disconnected if the metric subspace $(S, d)$ is connected/disconnected. We will now look at some important theorems regarding connected and disconnected metric spaces. Theorem 1: A metric space $(M, d)$ is disconnected if and only if there exists a proper nonempty subset $A \subset M$ such that $A$ is both open and closed. $\Rightarrow$ Suppose that $(M, d)$ is disconnected. Then there exists open $A, B \subset M$, $A, B \neq \emptyset$, where $A \cap B = \emptyset$ and $M = A \cup B$. Since $A$ is open in $M$ we have that $A^c = B$ is closed in $M$. But $B$ is also open. Similarly, since $B$ is open in $M$, $B^c = A$ is closed in $M$. So in fact $A$ and $B$ are both nonempty proper subsets of $M$ that are both open and closed. $\Leftarrow$ Suppose that there exists a proper nonempty subset $A \subset M$ such that $A$ is both open and closed. Let $B = A^c$. Then $B$ is also both open and closed. Furthermore, $B \neq \emptyset$ (since $A$ is a proper subset of $M$) and $A \cap B = \emptyset$. Additionally, $M = A \cup B$, so $M$ is disconnected. $\blacksquare$ Theorem 2: If $(M, d)$ is a connected unbounded metric space, then for every $a \in M$ and for all $r > 0$, $\{ x \in M : d(x, a) = r \}$ is nonempty. Proof: Let $(M, d)$ be a connected unbounded metric space and suppose that there exists an $a \in M$ and an $r_0 > 0$ such that $\{ x \in M : d(x, a) = r_0 \} = \emptyset$. We will show that a contradiction arises. Let $A = \{ x \in M : d(x, a) < r_0 \}$ and let $B = \{ x \in M : d(x, a) > r_0 \}$. Then $A$ is open since it is simply an open ball centered at $a$.
Furthermore, $B$ is open since $B^c$ is a closed ball centered at $a$. $A$ is nonempty since $a \in A$, and $B$ is nonempty since $(M, d)$ is unbounded (if $B$ were empty then $(M, d)$ would be bounded). Clearly $A \cap B = \emptyset$ and $M = A \cup B$. So $(M, d)$ is a disconnected metric space. But this is a contradiction. Therefore the assumption that there exists an $a \in M$ and an $r_0 > 0$ such that $\{ x \in M : d(x, a) = r_0 \} = \emptyset$ was false. So for all $a \in M$ and for all $r > 0$ the set $\{ x \in M : d(x, a) = r \}$ is nonempty. $\blacksquare$
Preprints (rote Reihe) des Fachbereich Mathematik 306 In this paper we study the space-time asymptotic behavior of the solutions and derivatives of the incompressible Navier-Stokes equations. Using moment estimates we obtain that strong solutions to the Navier-Stokes equations which decay in \(L^2\) at the rate of \(||u(t)||_2 \leq C(t+1)^{-\mu}\) will have the following pointwise space-time decay \[|D^{\alpha}u(x,t)| \leq C_{k,m} \frac{1}{(t+1)^{ \rho_o}(1+|x|^2)^{k/2}} \] where \( \rho_o = (1-2k/n)( m/2 + \mu) + 3/4(1-2k/n)\), and \(|\alpha| = m\). The dimension n is \(2 \leq n \leq 5\) and \(0\leq k\leq n\) and \(\mu \geq n/4\). 333 Geometrical Properties of Sections of Buchsbaum-Rim Sheaves or How to Construct Gorenstein Schemes of Higher Codimension with SINGULAR (2003) 305 In this paper we show that for each prime p=7 there exists a translation plane of order p^2 of Mason-Ostrom type. These planes occur as 6-dimensional ovoids being projections of the 8-dimensional binary ovoids of Conway, Kleidman and Wilson. In order to verify the existence of such projections we prove certain properties of two particular quadratic forms using classical methods from number theory. 301 We extend the methods of geometric invariant theory to actions of non-reductive groups in the case of homomorphisms between decomposable sheaves whose automorphism groups are non-reductive. Given a linearization of the natural action of the group Aut(E)xAut(F) on Hom(E,F), a homomorphism is called stable if its orbit with respect to the unipotent radical is contained in the stable locus with respect to the natural reductive subgroup of the automorphism group.
We encounter effective numerical conditions for a linearization such that the corresponding open set of semi-stable homomorphisms admits a good and projective quotient in the sense of geometric invariant theory, and that this quotient is in addition a geometric quotient on the set of stable homomorphisms.
Without loss of generality, all the $a_i$'s and $b_i$'s are nonzero. Let $\tilde d$ denote the difference between the left- and right-hand sides of the conjectured inequality $(*)$, which then of course can be rewritten as $\tilde d\ge0$. In the previous version of my answer, I rewrote $\tilde d$ in new variables, $x_i$ and $y_i$, after which the inequality $\tilde d\ge0$ could be (rigorously) verified with Mathematica (in about 22 min). Here that expression for $\tilde d$ is further rewritten -- in new, "more-macro", variables -- so that the resulting expression can be rather easily analyzed, to prove the inequality $(*)$. Indeed, let $p_i:=(x_i-y_i)y_i$, $x_i:=a_1 a_2 a_3/a_i$, $y_i:=b_1 b_2 b_3/b_i$, \begin{equation}c_1:=p_2^2 + p_2 p_3 + p_3^2\ge0,\quad c_2:=p_1^2 + p_1 p_3 + p_3^2\ge0,\quad c_3:=p_2^2 + p_2 p_1 + p_1^2\ge0, \tag{0} \end{equation}and $z_i:=y_i^2\ge0$. Note that $x_1 x_2 x_3=(a_1 a_2 a_3)^2>0$ and $y_1 y_2 y_3=(b_1 b_2 b_3)^2>0$; moreover, \begin{equation}(p_1+z_1)(p_2+z_2)(p_3+z_3)\ge0. \tag{1} \end{equation} The crucial identity is $$ \tilde d\,y_1 y_2 y_3=d:= p_1 p_2 p_3+c_1 z_1+c_2 z_2+c_3 z_3. $$Since $y_1 y_2 y_3>0$, $\tilde d$ equals $d$ in sign. So, it suffices to show that $d\ge0$ -- for any real $p_i$'s, the $c_i$'s as in $(0)$, and any nonnegative $z_i$'s satisfying $(1)$. Note here that without loss of generality $p_1 p_2 p_3<0$ -- otherwise, $d\ge0$ immediately follows because the $c_i$'s and $z_i$'s are nonnegative. So, we may assume that the $p_i$'s are all nonzero and hence the $c_i$'s are all strictly positive. Take any nonzero real $p_i$'s and any nonnegative $z_i$'s such that $(1)$ holds. Let us then fix those $z_1$ and $z_2$, and let $z_3$ be decreasing as long as $z_3$ remains nonnegative and $(1)$ holds; clearly, this process can stop only when the value of $z_3$ becomes either $0$ or $-p_3$, and in the latter case we must have $-p_3>0$.
Moreover, since $c_i>0$ for all $i$, the value of $d$ will not increase after this process is complete. We can then proceed similarly by decreasing $z_2$ (instead of $z_3$), and then by decreasing $z_1$. Let now $(z_1,z_2,z_3)$ be any minimizer of $d$. Then it follows from the above reasoning that $z_i\in\{0,-p_i\}$ for each $i=1,2,3$; moreover, if $z_i=-p_i$ for some $i$, then we must have $-p_i>0$. So, by the symmetry with respect to permutations of the indices, it is enough to consider the following four cases: (i) $z_1=-p_1>0$, $z_2=-p_2>0$, $z_3=-p_3>0$; (ii) $z_1=-p_1>0$, $z_2=-p_2>0$, $z_3=0$; (iii) $z_1=-p_1>0$, $z_2=0$, $z_3=0$; (iv) $z_1=0$, $z_2=0$, $z_3=0$, so that $(1)$ becomes $p_1 p_2 p_3\ge0$. In case (i), $\min_{z_1,z_2,z_3}d=-(p_1 + p_2) (p_1 + p_3) (p_2 + p_3)>0$. In case (ii), $\min_{z_1,z_2,z_3}d=-p_1 p_2 (p_1 + p_2) - p_1 p_2 p_3 + (-p_1 - p_2) p_3^2$, which is a convex quadratic polynomial in $p_3$, with discriminant $-p_1 p_2 (4 p_1^2 + 7 p_1 p_2 + 4 p_2^2)<0$, whence again $\min_{z_1,z_2,z_3}d>0$. In case (iii), $\min_{z_1,z_2,z_3}d=-p_1 (p_2^2 + p_3^2)>0$. In case (iv), $\min_{z_1,z_2,z_3}d=p_1 p_2 p_3\ge0$. Thus, $\min_{z_1,z_2,z_3}d\ge0$ in all cases, and the inequality in question is proved.
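The symbolic evaluations in cases (i) and (ii) are easy to double-check with a computer algebra system; here is a sketch assuming sympy, working directly from $d = p_1 p_2 p_3 + c_1 z_1 + c_2 z_2 + c_3 z_3$ with the $c_i$ as in $(0)$:

```python
import sympy as sp

p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
c1 = p2**2 + p2*p3 + p3**2
c2 = p1**2 + p1*p3 + p3**2
c3 = p2**2 + p2*p1 + p1**2

# Case (i): z_i = -p_i gives d = -(p1 + p2)(p1 + p3)(p2 + p3).
d1 = p1*p2*p3 - c1*p1 - c2*p2 - c3*p3
assert sp.expand(d1 + (p1 + p2)*(p1 + p3)*(p2 + p3)) == 0

# Case (ii): z1 = -p1, z2 = -p2, z3 = 0; the discriminant of d as a
# quadratic in p3 should be -p1 p2 (4 p1^2 + 7 p1 p2 + 4 p2^2).
d2 = p1*p2*p3 - c1*p1 - c2*p2
disc = sp.discriminant(d2, p3)
assert sp.expand(disc + p1*p2*(4*p1**2 + 7*p1*p2 + 4*p2**2)) == 0
```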
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions. Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not. Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get? (a) $A$ (b) $C^{-1}A^{-1}BC^{-1}AC^2$ (c) $B$ (d) $C^2$ (e) $C^{-1}BC$ (f) $C$ Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*} (a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$. (b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$. (The Ohio State University, Linear Algebra Midterm) Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$. After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\] (a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$? (The Ohio State University, Linear Algebra Midterm) Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\] (a) Show that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Find the dimension of $W$. (The Ohio State University, Linear Algebra Midterm) The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems. Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$. (a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$. (b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$. Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. Problem 9. Determine whether each of the following sentences is true or false. (a) There is a $3\times 3$ homogeneous system that has exactly three solutions. (b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric. (c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$. (d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems. Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\] Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of the vectors $\mathbf{a}_1$ and $\mathbf{a}_2$. Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason. Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*} (a) Find the coefficient matrix $A$ of the system. (b) Find the inverse matrix of the coefficient matrix $A$. (c) Using the inverse matrix of $A$, find the solution of the system. (Linear Algebra Midterm Exam 1, the Ohio State University)
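Problems 5 and 6 are easy to verify numerically; here is a sketch assuming numpy (a check of the answers, not a substitute for the hand calculation the exam expects):

```python
import numpy as np

# Problem 5: the 4x4 matrix is invertible; confirm A A^{-1} = I.
A5 = np.array([[0, 0, 2, 0],
               [0, 1, 0, 0],
               [1, 0, 0, 0],
               [1, 0, 0, 1]], dtype=float)
A5_inv = np.linalg.inv(A5)
assert np.allclose(A5 @ A5_inv, np.eye(4))

# Problem 6: solve the 2x2 system via the inverse of its coefficient matrix.
A6 = np.array([[3.0, 2.0],
               [5.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.inv(A6) @ b          # -> [1., -1.]
assert np.allclose(A6 @ x, b)
```

Note $\det A_6 = 9 - 10 = -1 \neq 0$, so the inverse in part (b) exists and the solution $x_1 = 1$, $x_2 = -1$ is unique.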
Consider the homogeneous system of linear equations whose coefficient matrix is given by the following matrix $A$. Find the vector form for the general solution of the system.\[A=\begin{bmatrix}1 & 0 & -1 & -2 \\2 &1 & -2 & -7 \\3 & 0 & -3 & -6 \\0 & 1 & 0 & -3\end{bmatrix}.\] Problem 3. Let $A$ be the following invertible matrix.\[A=\begin{bmatrix}-1 & 2 & 3 & 4 & 5\\6 & -7 & 8& 9& 10\\11 & 12 & -13 & 14 & 15\\16 & 17 & 18& -19 & 20\\21 & 22 & 23 & 24 & -25\end{bmatrix}\]Let $I$ be the $5\times 5$ identity matrix and let $B$ be a $5\times 5$ matrix.Suppose that $ABA^{-1}=I$.Then determine the matrix $B$. (Linear Algebra Midterm Exam 1, the Ohio State University)
The Ratio Test for Positive Series of Real Numbers We will now develop yet another important test for determining the convergence or divergence of a series. This test is known as the ratio test for positive series. Theorem 1: Let $(a_n)_{n=1}^{\infty}$ be a positive sequence of real numbers and let $\displaystyle{\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho}$. a) If $0 \leq \rho < 1$ then $\displaystyle{\sum_{n=1}^{\infty} a_n}$ converges. b) If $1 < \rho \leq \infty$ then $\displaystyle{\sum_{n=1}^{\infty} a_n}$ diverges. If $\rho = 1$ then this test is inconclusive. Proof of a): Suppose that $0 \leq \rho < 1$. Since $\displaystyle{\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho}$, for any $r$ with $\rho < r < 1$ there exists an $N \in \mathbb{N}$ such that if $n \geq N$ then $\displaystyle{\frac{a_{n+1}}{a_n} < r}$. So $a_{n+1} \leq ra_n$ for $n \geq N$, and inductively $a_{N+k} \leq r^k a_N$ for all $k \in \mathbb{N}$. So $\displaystyle{\sum_{n=N+1}^{\infty} a_n = \sum_{k=1}^{\infty} a_{N+k} \leq \sum_{k=1}^{\infty} r^k a_N}$. But the series $\displaystyle{\sum_{k=1}^{\infty} r^k a_N}$ converges as a geometric series since $0 \leq \rho < r < 1$, and by the comparison test we have that the tail $\displaystyle{\sum_{n=N+1}^{\infty} a_n}$ converges also, which implies that the whole series $\displaystyle{\sum_{n=1}^{\infty} a_n}$ converges. Proof of b): Suppose that $1 < \rho \leq \infty$. Since $\displaystyle{\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho}$, for any $r$ with $1 < r < \rho$ there exists an $N \in \mathbb{N}$ such that if $n \geq N$ then $\displaystyle{\frac{a_{n+1}}{a_n} > r}$. So $ra_n \leq a_{n+1}$ for $n \geq N$, and inductively $r^k a_N \leq a_{N+k}$ for all $k \in \mathbb{N}$. So $\displaystyle{\sum_{k=1}^{\infty} r^k a_N \leq \sum_{k=1}^{\infty} a_{N+k} = \sum_{n=N+1}^{\infty} a_n}$. Since $1 < r$ we have that the series $\displaystyle{\sum_{k=1}^{\infty} r^k a_N}$ diverges as a geometric series, and by comparison the tail $\displaystyle{\sum_{n=N+1}^{\infty} a_n}$ diverges, so the whole series $\displaystyle{\sum_{n=1}^{\infty} a_n}$ diverges. $\blacksquare$
If $\rho = 1$ then the series $\displaystyle{\sum_{n=1}^{\infty} a_n}$ may converge or diverge. For example, consider the series $\displaystyle{\sum_{n=1}^{\infty} \frac{1}{n^2}}$. We know this series converges. Using the ratio test we see that: \begin{align} \quad \rho = \lim_{n \to \infty} \frac{1/(n+1)^2}{1/n^2} = \lim_{n \to \infty} \frac{n^2}{(n+1)^2} = 1 \end{align} We also know that the series $\displaystyle{\sum_{n=1}^{\infty} \frac{1}{n}}$ diverges, and using the ratio test we see that: \begin{align} \quad \rho = \lim_{n \to \infty} \frac{1/(n+1)}{1/n} = \lim_{n \to \infty} \frac{n}{n+1} = 1 \end{align} So as you can see, if $\rho = 1$ then the ratio test gives us no information on the convergence/divergence of a series.
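The two ratios can be checked with exact rational arithmetic at a large fixed $n$ (a minimal sketch): both are arbitrarily close to $1$, even though one series converges and the other diverges.

```python
from fractions import Fraction

n = 10**6
r_p2 = Fraction(n**2, (n + 1)**2)   # a_{n+1}/a_n for a_n = 1/n^2 (series converges)
r_p1 = Fraction(n, n + 1)           # a_{n+1}/a_n for a_n = 1/n   (series diverges)

# Both ratios are within 10^-5 of 1, so the ratio test cannot separate them:
assert abs(r_p2 - 1) < Fraction(1, 10**5)
assert abs(r_p1 - 1) < Fraction(1, 10**5)
```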
Question #032b6 1 Answer Answer: The net force is the vector sum of all the forces acting on an object. Explanation: Whenever a number of forces act on an object and the vector sum of all the forces is not balanced, we have a resultant force. This is called the net force. A net force is capable of accelerating a mass. The acceleration could be linear, angular, or both. In the equilibrium state the net force acting on an object is zero, and the object does not accelerate. The net force causes changes in the motion of the object described by the following expressions. Linear acceleration of the center of mass: #vec a = vec F/ m#, where #vecF# is the net force and #m# is the mass of the object. Angular acceleration of the body: #vec alpha = vec tau / I#, where #vectau# is the resultant torque and #I# is the moment of inertia of the body. Torque, a vector quantity, is caused by a net force #vec F# defined with respect to some reference point #vecr# as below: #\vec \tau = \vec r \times \vec F#, or in magnitude, #|\vec \tau| = |\vec r| |\vec F| sin theta#.
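These formulas can be sketched in a few lines of Python (the forces, mass, and lever arm below are hypothetical values chosen for illustration):

```python
# Hypothetical forces (in newtons) acting on a 2 kg object, as 2-D vectors:
forces = [(3.0, 0.0), (0.0, 4.0), (-1.0, -1.0)]
m = 2.0

# Net force = vector sum of all forces:
F_net = (sum(f[0] for f in forces), sum(f[1] for f in forces))
assert F_net == (2.0, 3.0)

# Linear acceleration a = F_net / m:
a = (F_net[0] / m, F_net[1] / m)
assert a == (1.0, 1.5)

# Torque about the origin for F_net applied at r = (1, 0):
# in 2-D, tau is the z-component of r x F.
r = (1.0, 0.0)
tau = r[0] * F_net[1] - r[1] * F_net[0]
assert tau == 3.0
```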
You should be able to find a proof of this fact in any undergraduate stochastic processes book. Durrett's book Essentials of Stochastic Processes has a good proof of this. I'll give an outline of how to prove it. Suppose that the Markov chain starts at $X_0=x$. Let $0 = R_0 < R_1 < R_2 < \ldots$ be the sequence of return times to the site $x$. Since the Markov chain is positive recurrent, $E[R_n - R_{n-1}] = E[R_1] < \infty$. Next, let $N_n$ be the number of returns that have occurred by time $n$ (that is, $R_{N_n} \leq n < R_{N_n+1}$). Finally, let $Y_k = \sum_{i=R_{k-1}+1}^{R_k} X_i$. With this notation we have that $$ \frac{1}{n} \sum_{k=1}^{N_n} Y_k \leq \frac{1}{n} \sum_{i=1}^n X_i \leq \frac{1}{n} \sum_{k=1}^{N_n} Y_k + \frac{Y_{N_n+1}}{n}. $$ Next, note that the renewal theorem implies that $$ \lim_{n\rightarrow\infty} \frac{N_n}{n} = \frac{1}{E[R_1]},$$and so$$ \lim_{n\rightarrow \infty} \frac{1}{n} \sum_{k=1}^{N_n} Y_k = \lim_{n\rightarrow \infty} \frac{N_n}{n} \frac{1}{N_n} \sum_{k=1}^{N_n} Y_k = \frac{E[Y_1]}{E[R_1]}, $$where the last equality also follows from the fact that the $Y_k$ are i.i.d. Now, it can also be shown that $Y_{N_n+1}/n \rightarrow 0$ and so the upper and lower bounds on $n^{-1} \sum_{i=1}^n X_i$ given above imply that $$\lim_{n\rightarrow \infty} \frac{1}{n} \sum_{i=1}^n X_i = \frac{E[Y_1]}{E[R_1]}.$$ The last step of the proof is to show that $ \frac{E[Y_1]}{E[R_1]} = E^\pi[X_0]$, where $\pi$ is the unique stationary distribution. This can be shown by noting that the stationary distribution $\pi$ has the formula$$\pi(y) = \frac{1}{E[R_1]} E\left[ \sum_{i=1}^{R_1} \mathbf{1}_{X_i = y} \right].$$
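The conclusion can be sanity-checked by simulation (a sketch on a hypothetical 2-state positive-recurrent chain; the transition matrix and seed are illustrative): the time average of $\mathbf{1}_{X_i = 0}$ should approach $\pi(0)$.

```python
import random
import numpy as np

# Hypothetical 2-state chain; both states communicate, so it is positive recurrent.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
# Its stationary distribution solves pi P = pi; here pi = (5/6, 1/6).
pi = np.array([5 / 6, 1 / 6])
assert np.allclose(pi @ P, pi)

# Time average of f(X_i) = 1_{X_i = 0} over a long seeded run:
rng = random.Random(0)
x, hits, n = 0, 0, 200_000
for _ in range(n):
    x = 0 if rng.random() < P[x][0] else 1
    hits += (x == 0)

# The ergodic theorem says this fraction converges to pi(0) = 5/6:
assert abs(hits / n - pi[0]) < 0.01
```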
Basic Theorems Regarding Compact Sets in a Metric Space Recall from the Compact Sets in a Metric Space page that if $(M, d)$ is a metric space then a set $S \subseteq M$ is said to be compact in $M$ if for every open covering of $S$ there exists a finite subcovering of $S$. We will now look at some theorems regarding compact sets in a metric space. Theorem 1: Let $(M, d)$ be a metric space and let $S, T \subseteq M$. If $S$ is closed and $T$ is compact in $M$ then $S \cap T$ is compact in $M$. (1) Proof: Let $S$ be closed and let $T$ be compact in $M$. Notice that: \begin{align} \quad S \cap T \subseteq T \end{align} Furthermore, $S \cap T$ is closed. This is because $S$ is given as closed, and since $T$ is compact we know that $T$ is closed (and bounded); the intersection of two closed sets is closed. But any closed subset of a compact set is also compact, as we proved on the Closed Subsets of Compact Sets in Metric Spaces page, so $S \cap T$ is compact in $M$. $\blacksquare$ Theorem 2: Let $(M, d)$ be a metric space and let $S_1, S_2, ..., S_n \subseteq M$ be a finite collection of compact sets in $M$. Then $\displaystyle{\bigcup_{i=1}^{n} S_i}$ is also compact in $M$. (2) Proof: Let $S_1, S_2, ..., S_n \subseteq M$ be a finite collection of compact sets in $M$. Consider the union $S = \bigcup_{i=1}^{n} S_i$ and let $\mathcal F$ be any open covering of $S$, that is: \begin{align} \quad S \subseteq \bigcup_{A \in \mathcal F} A \end{align} (3) Now since $S_i \subseteq S$ for all $i \in \{1, 2, ..., n \}$, we see that $\mathcal F$ is also an open covering of each $S_i$, and since each $S_i$ is compact there exists a finite subcollection $\mathcal F_i \subseteq \mathcal F$ that also covers $S_i$, i.e.: \begin{align} \quad S_i \subseteq \bigcup_{A \in \mathcal F_i} A \end{align} (4) Let $\mathcal F^* = \bigcup_{i=1}^{n} \mathcal F_i$. Then $\mathcal F^*$ is finite since it is a finite union of finite sets.
Furthermore: \begin{align} \quad S = \bigcup_{i=1}^{n} S_i \subseteq \bigcup_{i=1}^{n} \left ( \bigcup_{A \in \mathcal F_i} A \right ) = \bigcup_{A \in \mathcal F^*} A \end{align} So $\mathcal F^* \subseteq \mathcal F$ is a finite open subcovering of $S$. Thus every open covering $\mathcal F$ of $S$ has a finite subcovering, so $\displaystyle{S = \bigcup_{i=1}^{n} S_i}$ is compact in $M$. $\blacksquare$ Theorem 3: Let $(M, d)$ be a metric space and let $\mathcal C$ be an arbitrary nonempty collection of compact sets in $M$. Then $\displaystyle{\bigcap_{C \in \mathcal C} C}$ is also compact in $M$. (5) Proof: Let $\mathcal C$ be an arbitrary nonempty collection of compact sets in $M$. Notice that for all $C \in \mathcal C$: \begin{align} \quad \bigcap_{C \in \mathcal C} C \subseteq C \end{align} Furthermore, since each $C \in \mathcal C$ is compact, each $C$ is closed (and bounded). An arbitrary intersection of closed sets is closed, and so $\displaystyle{\bigcap_{C \in \mathcal C} C}$ is a closed subset of the compact set $C$. Therefore, by the theorem referenced earlier, $\displaystyle{\bigcap_{C \in \mathcal C} C}$ is compact in $M$. $\blacksquare$
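For a concrete illustration of Theorem 3 (a sketch in $\mathbb{R}$ with the usual metric, where the compact sets are the closed and bounded sets): take the compact intervals $C_n = \left [ 0, 1 + \frac{1}{n} \right ]$ for $n \in \mathbb{N}$. Then: \begin{align} \quad \bigcap_{n=1}^{\infty} C_n = \bigcap_{n=1}^{\infty} \left [ 0, 1 + \frac{1}{n} \right ] = [0, 1] \end{align} Indeed, every $x > 1$ is excluded from $C_n$ once $\frac{1}{n} < x - 1$, and $[0, 1] \subseteq C_n$ for every $n$. The intersection $[0, 1]$ is again compact, as the theorem guarantees.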
My understanding of Lecture #33, 34: The Characteristic Function for a Diffusion: As an alternative to directly computing the characteristic function of a random variable $X_T$ in a stochastic process $\{X_t\}_{t \in [0,T]}$, we can solve a (boundary?) value problem, whose PDE's coefficients are given by the dynamics of the stochastic process, and then conclude by Feynman-Kac that it is the characteristic function of said random variable. An example is Arithmetic Brownian Motion: If we solve $$ \frac{\partial f}{\partial t} + \mu \frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2 \frac{\partial^2 f}{\partial x^2} = 0, x \in \mathbb R, t \in [0,T]$$ $$f(T,x) = e^{i \theta x} \tag{1}$$ then we get a function $f(t,x)$ s.t. $f(0,x)$ is the characteristic function of $X_T$ where $$dX_t = \sigma dW_t + \mu dt$$ So what does this mean for the (boundary?) value problem $$ \frac{\partial f}{\partial t} + \mu x \frac{\partial f}{\partial x} + \frac{1}{2}\sigma^2 x^2\frac{\partial^2 f}{\partial x^2} = 0, x \in \mathbb R, t \in [0,T]$$ $$f(T,x) = e^{i \theta x} \tag{2}$$ ? My guess is that the solution of $(2)$, $f(t,x)$, will be s.t. $f(0,x)$ is the characteristic function of $X_T$ where $$dX_t = \sigma X_t dW_t + \mu X_t dt$$ i.e. $X_t$ is Geometric Brownian Motion and hence is lognormal, the distribution of which doesn't have a characteristic function. So $(2)$ has no solution then? I'm looking for an answer like 'We don't expect $(2)$ to have a solution. This can be proven through (some PDE things).' or 'While we don't expect $(2)$ to have a solution, it actually does because (some PDE things), but then (some PDE things).' So the (some PDE things) may or may not prove the lognormal distribution doesn't have a characteristic function, but I'm looking more for consistency, e.g.
'Because the lognormal distribution doesn't have a characteristic function, and because for any random variable its characteristic function is supposed to be computable by solving a Feynman-Kac PDE, said PDE must have no solution.'
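As a consistency check on case $(1)$: the standard ABM characteristic function $f(t,x) = \exp\left(i\theta x + i\theta\mu(T-t) - \tfrac{1}{2}\sigma^2\theta^2(T-t)\right)$ does satisfy that PDE and terminal condition, which can be verified symbolically (a sketch using SymPy):

```python
import sympy as sp

t, x, T, mu, sigma, theta = sp.symbols('t x T mu sigma theta', real=True)

# Candidate solution of (1): the ABM characteristic function of X_T given X_t = x.
f = sp.exp(sp.I * theta * x
           + sp.I * theta * mu * (T - t)
           - sp.Rational(1, 2) * sigma**2 * theta**2 * (T - t))

# Plug f into the PDE of (1):
pde = (sp.diff(f, t)
       + mu * sp.diff(f, x)
       + sp.Rational(1, 2) * sigma**2 * sp.diff(f, x, 2))

assert sp.simplify(pde) == 0                                        # PDE holds
assert sp.simplify(f.subs(t, T) - sp.exp(sp.I * theta * x)) == 0    # f(T,x) = e^{i theta x}
```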
I am a bit unsure if my calculations are correct, but on my scribbling paper it seemed to work out. P1 and P2 lie on a circle around M. This allows us to measure the distance between the two points by taking the radius of the circle (half the line length) and constructing two right triangles. The accepted answer here provides a sketch for this construction. So given that your rotation angle is $\gamma$ and the line is $l$ long, it follows that: $x = 2 \cdot \sin(\frac{\gamma}{2}) \cdot \frac{l}{2}$ The lines M to P1 and M to P2 form an isosceles triangle with the third side being the line segment between P1 and P2. The angle at point M is known to be the rotation angle $\gamma$, so the remaining two angles in this triangle are given by $\alpha = \frac{180^\circ - \gamma}{2}$ And therefore the angle between the wrong line and the needed translation vector is just $\beta = \alpha + \gamma$ So now you rotate a unit vector that starts along the wrong line: rotate it around P1 by $\beta$ and scale it to length $x$. This yields the translation vector from P1 to P2. The rotation of the line should indeed be correct.
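The formulas can be sanity-checked numerically (a sketch with a hypothetical concrete placement: M at the origin and the line along the x-axis, so P1 is an endpoint before the rotation and P2 the same endpoint after it):

```python
import math

def chord_translation(gamma_deg, l):
    """Chord length x and construction angle beta from the formulas above."""
    x = 2 * math.sin(math.radians(gamma_deg) / 2) * (l / 2)
    alpha = (180 - gamma_deg) / 2      # base angles of the isosceles triangle at M
    beta = alpha + gamma_deg           # angle between the wrong line and the translation
    return x, beta

gamma, l = 40.0, 2.0
P1 = (l / 2, 0.0)                      # endpoint before rotating about M = (0, 0)
g = math.radians(gamma)
P2 = (P1[0] * math.cos(g), P1[0] * math.sin(g))   # endpoint after the rotation

x, beta = chord_translation(gamma, l)
d = (P2[0] - P1[0], P2[1] - P1[1])     # true translation vector from P1 to P2

# Its length matches the chord formula x = 2 sin(gamma/2) * l/2:
assert abs(math.hypot(*d) - x) < 1e-12

# Angle between the outward line direction (1, 0) at P1 and the translation vector:
angle = math.degrees(math.acos(d[0] / math.hypot(*d)))
assert abs(angle - beta) < 1e-9
```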