An Australian robot known as the “stabbing machine” is helping forensic experts to study the different factors and variables of violent knife crime. “When a person gets stabbed, rips in the victim’s clothing may contain clues to help catch the attacker,” explained Popular Science last week. “Forensic scientists are trying to understand what tears and distortions in the fabric around a stab wound can say about the knife type, angle of attack, and stabbing technique that caused the wound.” The machine boasts an “interchangeable knife holder,” “simulation of stab events through pneumatic system,” “60 stabbing positions via an Arduino microcontroller and knife holder,” and a “robust and highly reproducible positioning system,” allowing the robot to recreate various knife crime scenarios featuring different knives and variables with more accuracy and precision than a human. “Various types of knives make various types of cuts, as you might expect, and the shapes of holes left in clothes can indicate whether the weapon was serrated, dull, curved and so on,” wrote TechCrunch. “Ordinarily a human stabber is employed in recreating these holes in test fabric — for comparison, you understand — but people are notoriously un-robotic in their execution of this type of task, and, as in other things, small deviations in force and angle creep in where unvarying exactitude is needed.” Despite the potential benefits, Popular Science reports that the machine still needs significant improvement, as it currently only “jabs” with the force of a human bite. “Future versions of the machine will probably need to work on accuracy, consistency, and power—the device currently jabs at a pressure of 1 megapascal, which is about the force of a human bite,” the magazine declared. “But eventually, a device like this could help to turn the analysis of textile damage into a science.” Charlie Nash is a reporter for Breitbart Tech. You can follow him on Twitter @MrNashington or like his page at Facebook.
Hummingbirds (book) Hummingbirds is a large-format, fine-art coffee table book about hummingbirds written by John C. Arvin, with 212 illustrations of hummingbirds in their habitat, published in 2016. The book is published by the Gorgas Science Foundation in the United States of America and by Felis Creations in India. The illustrations were painted by three wildlife artists: Sangeetha Kadur, Raul Andrade, and Vydhehi Kadur. Sangeetha Kadur, from India, is the sister of the wildlife filmmaker Sandesh Kadur. The first volume showcases the 127 species of hummingbirds found throughout North America, Central America, and the Caribbean Islands. References Category:Hummingbirds Category:2016 non-fiction books Category:Coffee table books
*Int. J. Bifurcation and Chaos* **13**, 3147-3233 (2003). Tutorial and Review paper.

**TOWARD A THEORY OF CHAOS**

A. Sengupta\
Department of Mechanical Engineering\
Indian Institute of Technology, Kanpur\
E-Mail: osegu@iitk.ac.in

**ABSTRACT** This paper formulates a new approach to the study of chaos in discrete dynamical systems based on the notions of inverse ill-posed problems, set-valued mappings, generalized and multivalued inverses, graphical convergence of a net of functions in an extended multifunction space [@Sengupta2000], and the topological theory of convergence. Order, chaos, and complexity are described as distinct components of this unified mathematical structure, which can be viewed as an application of the theory of convergence in topological spaces to increasingly nonlinear mappings, with the boundary between order and complexity in the topology of graphical convergence being the region in $\textrm{Multi}(X)$ that is susceptible to chaos. The paper uses results from the discretized spectral approximation in neutron transport theory [@Sengupta1988; @Sengupta1995] and concludes that the numerically exact results obtained by this approximation of the Case singular eigenfunction solution are due to the graphical convergence of the Poisson and conjugate Poisson kernels to the Dirac delta and the principal value multifunctions respectively. In $\textrm{Multi}(X)$, the continuous spectrum is shown to reduce to a point spectrum, and we introduce a notion of *latent chaotic states* to interpret superposition over generalized eigenfunctions. Along with these latent states, spectral theory of nonlinear operators is used to conclude that nature supports complexity in order to attain efficiently a multiplicity of states that would otherwise remain unavailable to it.

*Keywords:* chaos, complexity, ill-posed problems, graphical convergence, topology, multifunctions.

**Prologue**

**1.** @Peitgen1992 **2.** Mitchell Feigenbaum’s *Foreword* (pp 1-7) in @Peitgen1992 **3.** [^1] Opening address of Heitor Gurgulino de Souza, Rector, United Nations University, Tokyo @Grebogi1997 **4.** [^2] @Gallagher1999 **5.** @Goldenfeld1999 **6.** @Gleick1987 **7.** @Waldrop1992 **8.** @Brown1996 **9.** @Falconer1990 **10.** @Robinson1999

**1. Introduction**

The purpose of this paper is to present a unified, self-contained mathematical structure and physical understanding of the nature of chaos in a discrete dynamical system and to suggest a plausible explanation of *why* natural systems tend to be chaotic. The somewhat extensive quotations with which we begin above bear testimony both to the increasingly significant, and perhaps all-pervasive, role of nonlinearity in the world today and to our imperfect state of understanding of its manifestations. The list of papers at both the UN Conference [@Grebogi1997] and in *Science* [@Gallagher1999] is noteworthy if only to justify the observation of @Gleick1987 that “chaos seems to be everywhere”. Even as everybody appears to be finding chaos and complexity in all likely and unlikely places, and possibly because of it, it is necessary that we have a clear mathematical and physical understanding of these notions that are supposedly reshaping our view of nature. This paper is an attempt to contribute to this goal.
To make this account essentially self-contained we include here, as far as is practicable, the basics of the background material needed to understand the paper, in the form of *Tutorials* and an extended *Appendix.* The kneading of the dough is considered to provide an intuitive paradigm for the mathematics of chaos [@Peitgen1992], and one of our fundamental objectives here is to recount the mathematical framework of this process in terms of the theory of ill-posed problems arising from non-injectivity [@Sengupta1997], *maximal ill-posedness,* and *graphical convergence* of functions [@Sengupta2000]. A natural mathematical formulation of the kneading of the dough, in the form of *stretch-cut-and-paste* and *stretch-cut-and-fold* operations, is the ill-posed problem arising from the increasing non-injectivity of the function $f$ modeling the kneading operation. ***Begin Tutorial1: Functions and Multifunctions*** A *relation,* or *correspondence,* between two sets $X$ and $Y$, written $\mathscr{M}\!:X\rightrightarrows Y$, is basically a rule that associates subsets of $X$ to subsets of $Y$; this is often expressed as $(A,B)\in\mathscr{M}$ where $A\subset X$ and $B\subset Y$ and $(A,B)$ is an ordered pair of sets. The domain $$\mathcal{D}(\mathscr{M})\overset{\textrm{def}}=\{ A\subset X\!:(\!\exists Z\in\mathscr{M})(\pi_{X}(Z)=A)\}$$ and range $$\mathcal{R}(\mathscr{M})\overset{\textrm{def}}=\{ B\subset Y\!:(\!\exists Z\in\mathscr{M})(\pi_{Y}(Z)=B)\}$$ of $\mathscr{M}$ are, respectively, the subsets of $X$ and of $Y$ that correspond to each other under $\mathscr{M}$; here $\pi_{X}$ and $\pi_{Y}$ are the projections of $Z$ on $X$ and $Y$ respectively. Equivalently, $\mathcal{D}(\mathscr{M})=\{ x\in X\!:\mathscr{M}(x)\neq\emptyset\}$ and $\mathcal{R}(\mathscr{M})=\bigcup_{x\in\mathcal{D}(\mathscr{M})}\mathscr{M}(x)$.
The *inverse* $\mathscr M^{-}$ of $\mathscr{M}$ is the relation $$\mathscr M^{-}=\{(B,A)\!:(A,B)\!\in\mathscr{M}\}$$ so that $\mathscr M^{-}$ assigns $A$ to $B$ iff $\mathscr{M}$ assigns $B$ to $A$. In general, a relation may assign many elements in its range to a single element from its domain; of especial significance are *functional relations* $f$[^3] that assign only a unique element in $\mathcal{R}(f)$ to any element in $\mathcal{D}(f)$. Fig. \[Fig: functions\] illustrates the distinction between arbitrary and functional relations $\mathscr{M}$ and $f$. This difference between functions (or maps) and multifunctions is basic to our development and should be fully understood. Functions can again be classified as injections (or $1:1$) and surjections (or onto). $f\!:X\rightarrow Y$ is said to be *injective* (or *one-to-one*) if $x_{1}\neq x_{2}\Rightarrow f(x_{1})\neq f(x_{2})$ for all $x_{1},x_{2}\in X$, while it is *surjective* (or *onto*) if $Y=f(X)$. $f$ is *bijective* if it is both $1:1$ and onto. Associated with a function $f\!:X\rightarrow Y$ is its inverse $f^{-1}\!:Y\supseteq\mathcal{R}(f)\rightarrow X$ that exists on $\mathcal{R}(f)$ iff $f$ is injective. Thus when $f$ is bijective, $f^{-1}(y):=\{ x\in X\!:y=f(x)\}$ exists for every $y\in Y$; in fact $f$ is bijective iff $f^{-1}(\{ y\})$ is a singleton for each $y\in Y$. Non-injective functions are not at all rare; if anything, they are very common even for linear maps, and it would perhaps be safe to conjecture that they are overwhelmingly predominant in the nonlinear world of nature. Thus, for example, the simple linear homogeneous differential equation with constant coefficients of order $n>1$ has $n$ linearly independent solutions, so that the operator $D^{n}$ of $D^{n}(y)=0$ has an $n$-dimensional null space. Inverses of non-injective, and in general non-bijective, functions will be denoted by $f^{-}$.
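To make the distinction concrete, here is a minimal sketch (a finite example of our own, not from the paper) that computes the inverse $f^{-}$ of a non-injective map as a multifunction: each value in the range is sent to its entire set of preimages.

```python
# Minimal sketch (assumed finite example): for a non-injective f, the inverse
# f^- is a multifunction sending each y in f(X) to its whole preimage set.

def inverse(f, X):
    """f^- as a dict: y -> {x in X : f(x) = y}."""
    f_inv = {}
    for x in X:
        f_inv.setdefault(f(x), set()).add(x)
    return f_inv

f = lambda x: x * x                # non-injective on a symmetric domain
X = [-2, -1, 0, 1, 2]
f_inv = inverse(f, X)

assert f_inv[4] == {-2, 2}         # f^-({4}) is not a singleton: f is not 1:1
assert f_inv[0] == {0}
# f is injective exactly when every preimage set is a singleton
assert not all(len(s) == 1 for s in f_inv.values())
```

The non-singleton preimage sets are precisely what force the inverse out of the space of functions and into the space of multifunctions.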
If $f$ is not injective then $$A\subset f^{-}f(A)\overset{\textrm{def}}=\textrm{sat}(A)$$ where $\textrm{sat}(A)$ is the *saturation of* $A\subseteq X$ *induced by* $f$; if $f$ is not surjective then $${\textstyle ff^{-}(B):=B\bigcap f(X)\subseteq B.}$$ If $A=\textrm{sat}(A)$, then $A$ is said to be *saturated,* and $B\subseteq\mathcal{R}(f)$ whenever $ff^{-}(B)=B$. Thus for non-injective $f$, $f^{-}f$ is not an identity on $X$, just as $ff^{-}$ is not $\mathbf{1}_{Y}$ if $f$ is not surjective. However, the set of relations $$ff^{-}f=f,\qquad f^{-}ff^{-}=f^{-}\label{Eqn: f_inv_f}$$ that is always true will be of basic significance in this work. The following are some equivalent statements on the injectivity and surjectivity of functions $f\!:X\rightarrow Y$. (Injec) $f$ is $1:1$ $\Leftrightarrow$ there is a function $f_{\textrm{L}}\!:Y\rightarrow X$, called the left inverse of $f$, such that $f_{\textrm{L}}f=\mathbf{1}_{X}$ $\Leftrightarrow$ $A=f^{-}f(A)$ for all subsets $A$ of $X$ $\Leftrightarrow$ $f(\bigcap A_{i})=\bigcap f(A_{i})$. (Surjec) $f$ is onto $\Leftrightarrow$ there is a function $f_{\textrm{R}}\!:Y\rightarrow X$, called the right inverse of $f$, such that $ff_{\textrm{R}}=\mathbf{1}_{Y}$ $\Leftrightarrow$ $B=ff^{-}(B)$ for all subsets $B$ of $Y$. As we are primarily concerned with non-injectivity of functions, saturated sets generated by equivalence classes of $f$ will play a significant role in our discussions. A relation $\mathcal{E}$ on a set $X$ is said to be an *equivalence relation* if it is[^4] (ER1) Reflexive: $(\forall x\in X)(x\mathcal{E}x)$. (ER2) Symmetric: $(\forall x,y\in X)(x\mathcal{E}y\Longrightarrow y\mathcal{E}x)$. (ER3) Transitive: $(\forall x,y,z\in X)(x\mathcal{E}y\wedge y\mathcal{E}z\Longrightarrow x\mathcal{E}z)$. Equivalence relations group together unequal elements $x_{1}\neq x_{2}$ of a set as equivalent according to the requirements of the relation.
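The grouping of unequal elements can be checked on a finite set; the following sketch (assumed data, not from the paper) builds the classes of the relation that identifies two points whenever they have the same image under a non-injective map, and verifies that they form a disjoint cover.

```python
# Finite sketch (assumed data): the classes of the relation "f(x1) = f(x2)"
# partition X into a pairwise-disjoint cover, one class per value of f.

from itertools import groupby

def classes_induced_by(f, X):
    """Equivalence classes {x' : f(x') = f(x)} of the relation induced by f."""
    return [frozenset(g) for _, g in groupby(sorted(X, key=f), key=f)]

X = range(8)
f = lambda x: x % 3                       # non-injective map inducing the relation
parts = classes_induced_by(f, X)          # [{0,3,6}, {1,4,7}, {2,5}]

# the classes are a disjoint cover of X
assert set().union(*parts) == set(X)
assert sum(len(p) for p in parts) == len(set(X))
# two elements share a class iff they have the same image under f
same = lambda a, b: any(a in p and b in p for p in parts)
assert same(1, 4) and not same(1, 2)
```

Here `same(a, b)` is the finite-set counterpart of the statement that $a$ and $b$ are grouped together as equivalent by the relation.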
This is expressed as $x_{1}\sim x_{2}\textrm{ }(\textrm{mod }\mathcal{E})$ and will be represented here by the shorthand notation $x_{1}\sim_{\mathcal{E}}x_{2}$, or even simply as $x_{1}\sim x_{2}$ if the specification of $\mathcal{E}$ is not essential. Thus for a noninjective map, if $f(x_{1})=f(x_{2})$ for $x_{1}\neq x_{2}$, then $x_{1}$ and $x_{2}$ can be considered to be equivalent to each other since they map onto the same point under $f$; thus $x_{1}\sim_{f}x_{2}\Leftrightarrow f(x_{1})=f(x_{2})$ defines the equivalence relation $\sim_{f}$ induced by the map $f$. Given an equivalence relation $\sim$ on a set $X$ and an element $x\in X$, the subset $$[x]\overset{\textrm{def}}=\{ y\in X\!:y\sim x\}$$ is called the *equivalence class of $x$;* thus $x\sim y\Leftrightarrow[x]=[y]$*.* In particular, equivalence classes generated by $f\!:X\rightarrow Y$, $[x]_{f}=\{ x_{\alpha}\in X\!:f(x_{\alpha})=f(x)\}$, will be a cornerstone of our analysis of chaos generated by the iterates of non-injective maps, and the equivalence relation $\sim_{f}:=\{(x,y)\!:f(x)=f(y)\}$ generated by $f$ is uniquely defined by the partition that $f$ induces on $X$. Of course, as $x\sim x$, $x\in[x]$. It is a simple matter to see that any two equivalence classes are either disjoint or equal, so that the equivalence classes generated by an equivalence relation on $X$ form a disjoint cover of $X.$ The *quotient set of $X$ under $\sim$,* denoted by $X/\sim\;:=\{[x]\!:x\in X\}$, has the equivalence classes $[x]$ as its elements; thus $[x]$ plays a dual role, either as a subset of $X$ or as an element of $X/\sim$. The rule $x\mapsto[x]$ defines a surjective function $Q\!:X\rightarrow X/\sim$ known as the *quotient map.* **Example 1.1.** Let $$S^{1}=\{(x,y)\in\mathbb{R}^{2}\!:x^{2}+y^{2}=1\}$$ be the unit circle in $\mathbb{R}^{2}$.
Consider $X=[0,1]$ as a subspace of $\mathbb{R}$, define a map $$q\!:X\rightarrow S^{1},\qquad s\longmapsto(\cos2\pi s,\sin2\pi s),\,\, s\in X,$$ from $\mathbb{R}$ to $\mathbb{R}^{2}$, and let $\sim$ be the equivalence relation on $X$ $$s\sim t\Longleftrightarrow(s=t)\vee(s=0,t=1)\vee(s=1,t=0).$$ If we bend $X$ around till its ends touch, the resulting circle represents the quotient set $Y=X/\sim$ whose points are equivalent under $\sim$ as follows $$[0]=\{0,1\}=[1],\qquad[s]=\{ s\}\,\textrm{for all }s\in(0,1).$$ Thus $q$ is bijective for $s\in(0,1)$ but two-to-one for the special values $s=0\textrm{ and }1$, so that for $s,t\in X$,$$s\sim t\Longleftrightarrow q(s)=q(t).$$ This yields a bijection $h\!:X/\sim\:\rightarrow S^{1}$ such that $$q=h\circ Q$$ defines the quotient map $Q\!:X\rightarrow X/\sim$ by $h([s])=q(s)$ for all $s\in[0,1]$. The situation is illustrated by the commutative diagram of Fig. \[Fig: quotient\] that appears as an integral component in a different and more general context in Sec. 2. It is to be noted that commutativity of the diagram implies that if a given equivalence relation $\sim$ on $X$ is completely determined by $q$ that associates the partitioning equivalence classes in $X$ to unique points in $S^{1}$, then $\sim$ is identical to the equivalence relation that is induced by $Q$ on $X$. Note that a larger size of the equivalence classes can be obtained by considering $X=\mathbb{R}_{+}$ for which $s\sim t\Leftrightarrow|s-t|\in\mathbb{Z}_{+}$.$\qquad\blacksquare$ ***End Tutorial1*** One of the central concepts that we consider and employ in this work is the inverse $f^{-}$ of a nonlinear, non-injective, function $f$; here the equivalence classes $[x]_{f}=f^{-}f(x)$ of $x\in X$ are the saturated subsets of $X$ that partition $X$. While a detailed treatment of this question in the form of the non-linear ill-posed problem and its solution is given in Sec. 2 [@Sengupta1997], it is sufficient to point out here from Figs. 
\[Fig: functions\](c) and \[Fig: functions\](d), that the inverse of a noninjective function is not a function but a multifunction, while the inverse of a multifunction is a noninjective function. Hence one has the general result that$$\begin{aligned} f\textrm{ is a non-injective function} & \Longleftrightarrow & f^{-}\textrm{ is a multifunction}.\label{Eqn: func-multi}\\ f\textrm{ is a multifunction} & \Longleftrightarrow & f^{-}\textrm{ is a non-injective function}\nonumber \end{aligned}$$ The inverse of a multifunction $\mathscr{M}\!:X\rightrightarrows Y$ is a generalization of the corresponding notion for a function $f\!:X\rightarrow Y$ such that $$\mathscr M^{-}(y)\overset{\textrm{def}}=\{ x\in X\!:y\in\mathscr{M}(x)\}$$ leads to $${\textstyle \mathscr M^{-}(B)=\{ x\in X\!:\mathscr{M}(x)\bigcap B\neq\emptyset\}}$$ for any $B\subseteq Y$, while a more restricted inverse that we shall not be concerned with is given as $\mathscr M^{+}(B)=\{ x\in X\!:\mathscr{M}(x)\subseteq B\}$. Obviously, $\mathscr M^{+}(B)\subseteq\mathscr M^{-}(B)$. A multifunction is injective if $x_{1}\neq x_{2}\Rightarrow\mathscr{M}(x_{1})\bigcap\mathscr{M}(x_{2})=\emptyset$, and in common with functions it is true that $$\begin{aligned} \mathscr{M}\left(\bigcup_{\alpha\in\mathbb{D}}A_{\alpha}\right)= & \bigcup_{\alpha\in\mathbb{D}}\mathscr{M}(A_{\alpha})\\ \mathscr{M}\left(\bigcap_{\alpha\in\mathbb{D}}A_{\alpha}\right)\subseteq & \bigcap_{\alpha\in\mathbb{D}}\mathscr{M}(A_{\alpha})\end{aligned}$$ where $\mathbb{D}$ is an index set. The following illustrates the difference between the two inverses of $\mathscr{M}$. Let $X$ be a set that is partitioned into two disjoint $\mathscr{M}$-invariant subsets $X_{1}$ and $X_{2}$.
If $x\in X_{1}$ (or $x\in X_{2}$) then $\mathscr{M}(x)$ represents that part of $X_{1}$ (or of $X_{2}$ ) that is realized immediately after one application of $\mathscr{M}$, while $\mathscr M^{-}(x)$ denotes the possible precursors of $x$ in $X_{1}$ (or of $X_{2}$) and $\mathscr M^{+}(B)$ is that subset of $X$ whose image lies in $B$ for any subset $B\subset X$. In this work the multifunctions we are explicitly concerned with arise as the inverses of non-injective functions. The second major component of our theory is the *graphical convergence of a net of functions to a multifunction.* In Tutorial2 below, we replace for the sake of simplicity and without loss of generality, the net (which is basically a sequence where the index set is not necessarily the positive integers; thus every sequence is a net but the family[^5] indexed, for example, by $\mathbb{Z}$, the set of *all* integers, is a net and not a sequence) with a sequence and provide the necessary background and motivation for the concept of graphical convergence. ***Begin Tutorial2: Convergence of Functions*** This Tutorial reviews the inadequacy of the usual notions of convergence of functions either to limit functions or to distributions and suggests the motivation and need for introduction of the notion of graphical convergence of functions to multifunctions. Here, we follow closely the exposition of @Korevaar1968, and use the notation $(f_{k})_{k=1}^{\infty}$ to denote real or complex valued functions on a bounded or unbounded interval $J$. A sequence of piecewise continuous functions $(f_{k})_{k=1}^{\infty}$ is said to converge to the function $f$, notation $f_{k}\rightarrow f$, on a bounded or unbounded interval $J$[^6] \(1) *Pointwise* if$$f_{k}(x)\longrightarrow f(x)\qquad\textrm{for all }x\in J,$$ that is: Given any arbitrary real number $\varepsilon>0$ there exists a $K\in\mathbb{N}$ that may depend on $x$, such that $|f_{k}(x)-f(x)|<\varepsilon$ for all $k\geq K$. 
\(2) *Uniformly* if $$\sup_{x\in J}|f(x)-f_{k}(x)|\longrightarrow0\qquad\textrm{as }k\longrightarrow\infty,$$ that is: Given any arbitrary real number $\varepsilon>0$ there exists a $K\in\mathbb{N}$, such that $\sup_{x\in J}|f_{k}(x)-f(x)|<\varepsilon$ for all $k\geq K$. \(3) *In the mean of order $p\geq1$* if $|f(x)-f_{k}(x)|^{p}$ is integrable over $J$ for each $k$ and $$\int_{J}|f(x)-f_{k}(x)|^{p}\longrightarrow0\qquad\textrm{as }k\rightarrow\infty.$$ For $p=1$, this is the simple case of *convergence in the mean.* \(4) *In the mean $m$-integrally* if it is possible to select indefinite integrals $$f_{k}^{(-m)}(x)=\pi_{k}(x)+\int_{c}^{x}dx_{1}\int_{c}^{x_{1}}dx_{2}\cdots\int_{c}^{x_{m-1}}dx_{m}f_{k}(x_{m})$$ and $$f^{(-m)}(x)=\pi(x)+\int_{c}^{x}dx_{1}\int_{c}^{x_{1}}dx_{2}\cdots\int_{c}^{x_{m-1}}dx_{m}f(x_{m})$$ such that for some arbitrary real $p\geq1$, $$\int_{J}|f^{(-m)}-f_{k}^{(-m)}|^{p}\longrightarrow0\qquad\textrm{as }k\rightarrow\infty,$$ where the polynomials $\pi_{k}(x)$ and $\pi(x)$ are of degree $<m$, and $c$ is a constant to be chosen appropriately. \(5) *Relative to test functions $\varphi$* if $f\varphi$ and $f_{k}\varphi$ are integrable over $J$ and $$\int_{J}(f_{k}-f)\varphi\longrightarrow0\qquad\textrm{for every }\varphi\in\mathcal{C}_{0}^{\infty}(J)\textrm{ as }k\longrightarrow\infty,$$ where $\mathcal{C}_{0}^{\infty}(J)$ is the class of infinitely differentiable functions that vanish throughout some neighbourhood of each of the end points of $J$. For an unbounded $J$, a function is said to vanish in some neighbourhood of $+\infty$ if it vanishes on some ray $(r,\infty)$. While pointwise convergence does not imply any other type of convergence, uniform convergence on a bounded interval implies all the other convergences.
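The separation between these modes can be seen numerically. The following sketch uses the standard textbook sequence $f_{k}(x)=x^{k}$ on $[0,1]$ (our own example, not from the paper): it converges pointwise to a discontinuous limit and in the mean, but not uniformly.

```python
# Standard example (not from the paper): f_k(x) = x^k on [0,1] converges
# pointwise to a discontinuous f, fails to converge uniformly (sup error
# stays near 1), yet converges in the mean since ∫_0^1 x^k dx = 1/(k+1).

f = lambda x: 1.0 if x == 1.0 else 0.0      # the pointwise limit

def sup_error(k, grid=[i / 1000 for i in range(1001)]):
    return max(abs(x ** k - f(x)) for x in grid)

def mean_error(k, n=100000):                # midpoint rule for ∫_0^1 |x^k - f| dx
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** k * h for i in range(n))

assert sup_error(50) > 0.9                  # no uniform convergence on [0,1]
assert abs(mean_error(50) - 1 / 51) < 1e-4  # in the mean: error = 1/(k+1) -> 0
```

The sup error is trapped near $1$ because points just below $x=1$ lag the limit for every $k$, while the mean error decays like $1/(k+1)$.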
It is to be observed that apart from pointwise and uniform convergences, all the other modes listed above represent some sort of an averaged contribution of the entire interval $J$ and are therefore not of much use when pointwise behaviour of the limit $f$ is necessary. Thus while limits in the mean are not unique, oscillating functions are tamed by $m$-integral convergence for adequately large values of $m$, and convergence relative to test functions, as we see below, can be essentially reduced to $m$-integral convergence. On the contrary, our graphical convergence — which may be considered as a pointwise biconvergence with respect to both the direct and inverse images of $f$ just as usual pointwise convergence is with respect to its direct image only — allows a sequence (in fact, a net) of functions to converge to an arbitrary relation, unhindered by external influences such as the effects of integrations and test functions. To see how this can indeed matter, consider the following **Example 1.2.** Let $f_{k}(x)=\sin kx,\, k=1,2,\cdots$ and let $J$ be any bounded interval of the real line. Then $1$-integrally we have$$f_{k}^{(-1)}(x)=-\frac{1}{k}\cos kx=-\frac{1}{k}+\int_{0}^{x}\sin kx_{1}dx_{1},$$ which obviously converges to $0$ uniformly (and therefore in the mean) as $k\rightarrow\infty$. And herein lies the point: even though we cannot conclude about the exact nature of $\sin kx$ as $k$ increases indefinitely (except that its oscillations become more and more pronounced), we may very definitely state that $\lim_{k\rightarrow\infty}(\cos kx)/k=0$ uniformly. Hence from$$f_{k}^{(-1)}(x)\longrightarrow0=0+\int_{0}^{x}\lim_{k\rightarrow\infty}\sin kx_{1}dx_{1}$$ it follows that $$\lim_{k\rightarrow\infty}\sin kx=0\label{Eqn: intsin}$$ $1$-integrally. Continuing with the same sequence of functions, we now examine its test-functional convergence with respect to $\varphi\in\mathcal{C}_{0}^{1}(-\infty,\infty)$ that vanishes for all $x\notin(\alpha,\beta)$. 
Integrating by parts, $$\begin{aligned} {\displaystyle {\displaystyle \int_{-\infty}^{\infty}f_{k}\varphi}}= & {\displaystyle \int_{\alpha}^{\beta}\varphi(x_{1})\sin kx_{1}dx_{1}}\\ = & -\frac{1}{k}\left[\varphi(x_{1})\cos kx_{1}\right]_{\alpha}^{\beta}+\frac{1}{k}\int_{\alpha}^{\beta}\varphi^{\prime}(x_{1})\cos kx_{1}dx_{1}\end{aligned}$$ The first integrated term is $0$ due to the conditions on $\varphi$, while the second vanishes as $k\rightarrow\infty$ because $\varphi^{\prime}$ is bounded for $\varphi\in\mathcal{C}_{0}^{1}(-\infty,\infty)$, so that the $1/k$ factor forces it to $0$. Hence $$\int_{-\infty}^{\infty}f_{k}\varphi\longrightarrow0=\int_{\alpha}^{\beta}\lim_{k\rightarrow\infty}\varphi(x_{1})\sin kx_{1}dx_{1}$$ for all $\varphi$, leading to the conclusion that $$\lim_{k\rightarrow\infty}\sin kx=0\label{Eqn: testsin}$$ test-functionally.$\qquad\blacksquare$ This example illustrates the fact that if $\textrm{Supp}(\varphi)=[\alpha,\beta]\subseteq J$[^7], integrating by parts a sufficiently large number of times so as to wipe out the pathological behaviour of $(f_{k})$ gives $$\begin{aligned} \int_{J}f_{k}\varphi= & \int_{\alpha}^{\beta}f_{k}\varphi\\ = & -\int_{\alpha}^{\beta}f_{k}^{(-1)}\varphi^{\prime}=\cdots=(-1)^{m}\int_{\alpha}^{\beta}f_{k}^{(-m)}\varphi^{(m)}\end{aligned}$$ where $f_{k}^{(-m)}(x)=\pi_{k}(x)+\int_{c}^{x}dx_{1}\int_{c}^{x_{1}}dx_{2}\cdots\int_{c}^{x_{m-1}}dx_{m}f_{k}(x_{m})$ is an $m$-times arbitrary indefinite integral of $f_{k}$. If now it is true that $\int_{\alpha}^{\beta}f_{k}^{(-m)}\rightarrow\int_{\alpha}^{\beta}f^{(-m)}$, then it must also be true that $f_{k}^{(-m)}\varphi^{(m)}$ converges in the mean to $f^{(-m)}\varphi^{(m)}$ so that $$\int_{\alpha}^{\beta}f_{k}\varphi=(-1)^{m}\int_{\alpha}^{\beta}f_{k}^{(-m)}\varphi^{(m)}\longrightarrow(-1)^{m}\int_{\alpha}^{\beta}f^{(-m)}\varphi^{(m)}=\int_{\alpha}^{\beta}f\varphi.$$ In fact the converse also holds, leading to the following equivalences between $m$-convergence in the mean and convergence with respect to test functions [@Korevaar1968].
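The cancellation exploited in the integration by parts can be observed directly. A minimal numerical sketch (with an assumed smooth bump function of our own, not from the paper):

```python
# Numerical illustration (assumed bump function): ∫ φ(x) sin(kx) dx -> 0 as
# k grows for smooth compactly supported φ, even though sin(kx) itself has
# no pointwise limit.

import math

def phi(x):                        # smooth, compactly supported on [0, 1]
    return math.exp(-1.0 / (x * (1.0 - x))) if 0.0 < x < 1.0 else 0.0

def pairing(k, n=20000):           # midpoint rule for ∫_0^1 φ(x) sin(kx) dx
    h = 1.0 / n
    return sum(phi((i + 0.5) * h) * math.sin(k * (i + 0.5) * h) * h
               for i in range(n))

vals = [abs(pairing(k)) for k in (1, 10, 100)]
assert vals[2] < vals[0]           # the oscillations increasingly cancel
assert vals[2] < 1e-3
```

The faster $\sin kx$ oscillates over the fixed support of $\varphi$, the more the positive and negative lobes cancel, which is exactly the $1/k$ decay produced by the integration by parts above.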
**Type 1 Equivalence.** If $f$ and $(f_{k})$ are functions on $J$ that are integrable on every interior subinterval, then the following are equivalent statements. \(a) For every interior subinterval $I$ of $J$ there is an integer $m_{I}\geq0$, and hence a smallest integer $m\geq0$, such that certain indefinite integrals $f_{k}^{(-m)}$ of the functions $f_{k}$ converge in the mean on $I$ to an indefinite integral $f^{(-m)}$; thus $\int_{I}|f_{k}^{(-m)}-f^{(-m)}|\rightarrow0.$ \(b) $\int_{J}(f_{k}-f)\varphi\rightarrow0$ for every $\varphi\in\mathcal{C}_{0}^{\infty}(J)$. A significant generalization of this Equivalence is obtained by dropping the restriction that the limit object $f$ be a function. The need for this generalization arises because metric function spaces are known not to be complete: Consider the sequence of functions (Fig. \[Fig: FuncSpace\](a)) $$\begin{aligned} f_{k}(x)= & \left\{ \begin{array}{lcl} 0 & \textrm{} & \textrm{if }a\leq x\leq0\\ kx & \textrm{} & \textrm{if }0\leq x\leq1/k\\ 1 & \textrm{} & \textrm{if }1/k\leq x\leq b\end{array}\right.\label{Eqn: Lp[a,b]}\end{aligned}$$ which is not Cauchy in the uniform metric $\rho(f_{j},f_{k})=\sup_{a\leq x\leq b}|f_{j}(x)-f_{k}(x)|$ but is Cauchy in the mean $\rho(f_{j},f_{k})=\int_{a}^{b}|f_{j}(x)-f_{k}(x)|dx$, or even pointwise. However in either case, $(f_{k})$ cannot converge in the respective metrics to a *continuous function* and the limit is a discontinuous unit step function $$\Theta(x)=\left\{ \begin{array}{lcl} 0 & & \textrm{if }a\leq x\leq0\\ 1 & & \textrm{if }0<x\leq b\end{array}\right.$$ with graph $([a,0],0)\bigcup((0,b],1)$, which is also integrable on $[a,b]$. Thus even if the limit of the sequence of continuous functions is not continuous, both the limit and the members of the sequence are integrable functions. 
This Riemann integration is not sufficiently general, however, and this type of integrability needs to be replaced by a much weaker condition resulting in the larger class of the Lebesgue integrable complete space of functions $L[a,b]$.[^8] The functions in Fig \[Fig: FuncSpace\](b1), $$\delta_{k}(x)=\left\{ \begin{array}{ccl} k & & \textrm{if }0<x<1/k\\ 0 & & x\in[a,b]-(0,1/k),\end{array}\right.$$ can be associated with the arbitrary indefinite integrals $$\Theta_{k}(x)\overset{\textrm{def}}=\delta_{k}^{(-1)}(x)=\left\{ \begin{array}{lcl} 0 & & a\leq x\leq0\\ kx & & 0<x<1/k\\ 1 & & 1/k\leq x\leq b\end{array}\right.$$ of Fig. \[Fig: FuncSpace\](b2), which, as noted above, converge in the mean to the unit step function $\Theta(x)$; hence $\int_{-\infty}^{\infty}\delta_{k}\varphi\equiv\int_{\alpha}^{\beta}\delta_{k}\varphi=-\int_{\alpha}^{\beta}\delta_{k}^{(-1)}\varphi^{\prime}\rightarrow-\int_{0}^{\beta}\varphi^{\prime}(x)dx=\varphi(0)$. But there can be no *functional relation $\delta(x)$* for which $\int_{\alpha}^{\beta}\delta(x)\varphi(x)dx=\varphi(0)$ for *all* $\varphi\in C_{0}^{1}[\alpha,\beta]$, so that unlike in the case in Type 1 Equivalence, the limit in the mean $\Theta(x)$ of the indefinite integrals $\delta_{k}^{(-1)}(x)$ *cannot be expressed as the indefinite integral $\delta^{(-1)}(x)$ of some function $\delta(x)$ on any interval containing the origin.* This leads to the second more general type of equivalence **Type 2 Equivalence.** If $(f_{k})$ are functions on $J$ that are integrable on every interior subinterval, then the following are equivalent statements. \(a) For every interior subinterval $I$ of $J$ there is an integer $m_{I}\geq0$, and hence a smallest integer $m\geq0$, such that certain indefinite integrals $f_{k}^{(-m)}$ of the functions $f_{k}$ converge in the mean on $I$ to an integrable function $\Theta$ which, unlike in Type 1 Equivalence, need not itself be an indefinite integral of some function $f$. 
\(b) $c_{k}(\varphi)=\int_{J}f_{k}\varphi\rightarrow c(\varphi)$ for every $\varphi\in\mathcal{C}_{0}^{\infty}(J)$. Since we are now given that $\int_{I}f_{k}^{(-m)}(x)dx\rightarrow\int_{I}\Theta(x)dx$, it must also be true that $f_{k}^{(-m)}\varphi^{(m)}$ converges in the mean to $\Theta\varphi^{(m)}$, whence $$\int_{J}f_{k}\varphi=(-1)^{m}\int_{I}f_{k}^{(-m)}\varphi^{(m)}\longrightarrow(-1)^{m}\int_{I}\Theta\varphi^{(m)}\left(\neq(-1)^{m}\int_{I}f^{(-m)}\varphi^{(m)}\right).$$ The natural question that arises at this stage is then: What is the nature of the relation (no longer a function) $\Theta(x)$? For this it is now stipulated, despite the non-equality in the equation above, that as in the mean $m$-integral convergence of $(f_{k})$ to a *function* $f$, $$\Theta(x):=\lim_{k\rightarrow\infty}\delta_{k}^{(-1)}(x)\overset{\textrm{def}}=\int_{-\infty}^{x}\delta(x^{\prime})dx^{\prime}\label{Eqn: delta1}$$ *defines* the non-functional relation (“generalized function”) $\delta(x)$ integrally as a solution of the integral equation (\[Eqn: delta1\]) of the first kind; hence formally[^9] $$\delta(x)=\frac{d\Theta}{dx}\label{Eqn: delta2}$$ ***End Tutorial2*** The above tells us that the “delta function” is not a function, but its indefinite integral is the piecewise continuous *function* $\Theta$ obtained as the mean (or pointwise) limit of a sequence of non-differentiable functions, with the integral of $d\Theta_{k}(x)/dx$ being preserved for all $k\in\mathbb{Z}_{+}$. What, then, is the delta (and not its integral)? The answer to this question is contained in our multifunctional extension $\textrm{Multi}(X,Y)$ of the function space $\textrm{Map}(X,Y)$ considered in Sec. 3. Our treatment of ill-posed problems is used to obtain an understanding and interpretation of the numerical results of the discretized spectral approximation in neutron transport theory [@Sengupta1988; @Sengupta1995].
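The defining property of the sequence $(\delta_{k})$ can be verified at finite $k$. A short sketch (with an assumed $\mathcal{C}^{1}$ test function of our own choosing):

```python
# Finite-k sketch (assumed test function): for δ_k = k on (0, 1/k) and 0
# elsewhere, the pairing ∫ δ_k φ = k ∫_0^{1/k} φ tends to φ(0) as k grows.

import math

phi = math.cos                     # any C^1 test function near 0; φ(0) = 1

def delta_pairing(k, n=10000):
    h = 1.0 / (k * n)              # integrate δ_k φ over its support (0, 1/k)
    return sum(k * phi((i + 0.5) * h) * h for i in range(n))

errs = [abs(delta_pairing(k) - phi(0.0)) for k in (1, 10, 1000)]
assert errs[2] < errs[1] < errs[0]   # the pairing approaches φ(0) = 1
assert errs[2] < 1e-6
```

The pairing is simply the average of $\varphi$ over the shrinking support $(0,1/k)$, which converges to $\varphi(0)$; no function $\delta$ reproduces this limit, in agreement with the text.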
The main conclusions are the following: In a one-dimensional discrete system that is governed by the iterates of a nonlinear map, the dynamics is chaotic if and only if the system evolves to a state of *maximal ill-posedness.* The analysis is based on the non-injectivity, and hence ill-posedness, of the map; this may be viewed as a mathematical formulation of the *stretch-and-fold* and *stretch-cut-and-paste* kneading operations of the dough that are well-established artifacts in the theory of chaos and the concept of maximal ill-posedness helps in obtaining a *physical understanding* of the nature of chaos. We do this through the fundamental concept of the *graphical convergence* of a sequence (generally a net) of functions [@Sengupta2000] that is allowed to converge graphically, when the conditions are right, to a set-valued map or multifunction. Since ill-posed problems naturally lead to multifunctional inverses through functional generalized inverses [@Sengupta1997], it is natural to seek solutions of ill-posed problems in multifunctional space $\textrm{Multi}(X,Y)$ rather than in spaces of functions $\textrm{Map}(X,Y)$; here $\textrm{Multi}(X,Y)$ is an extension of $\textrm{Map}(X,Y)$ that is generally larger than the smallest dense extension $\textrm{Multi}_{\mid}(X,Y)$. Feedback and iteration are natural processes by which nature evolves itself. Thus almost every process of evolution is a self-correction process by which the system proceeds from the present to the future through a controlled mechanism of input and evaluation of the past. Evolution laws are inherently nonlinear and complex; here *complexity* is to be understood as the natural manifestation of the nonlinear laws that govern the evolution of the system. This work presents a mathematical description of complexity based on [@Sengupta1997] and [@Sengupta2000] and is organized as follows. In Sec. 
1, we follow [@Sengupta1997] to give an overview of ill-posed problems and their solution, which forms the foundation of our approach. Secs. 2 to 4 apply these ideas by defining a chaotic dynamical system as a *maximally ill-posed problem;* by doing this we are able to overcome the limitations of the three Devaney characterizations of chaos [@Devaney1989] that apply to the specific case of iteration of transformations in a metric space, and the resulting graphical convergence of functions to multifunctions is the basic tool of our approach. Sec. 5 analyzes graphical convergence in $\textrm{Multi}(X)$ for the discretized spectral approximation of neutron transport theory, which suggests a natural link between ill-posed problems and the spectral theory of nonlinear operators. This seems to offer an answer to the question of *why* a natural system should increase its complexity, and eventually tend toward chaoticity, by becoming increasingly nonlinear. **2. Ill-Posed Problem and its solution** This section, based on @Sengupta1997, presents a formulation and solution of ill-posed problems arising out of the non-injectivity of a function $f\!:X\rightarrow Y$ between topological spaces $X$ and $Y$. A working knowledge of this approach is necessary as our theory of chaos, leading to the characterization of chaotic systems as a *maximally ill-posed* state of a dynamical system, is a direct application of these ideas and can be taken to constitute a mathematical representation of the familiar *stretch-cut-and-paste* and *stretch-and-fold* paradigms of chaos. The problem of finding an $x\in X$ for a given $y\in Y$ from the functional relation $f(x)=y$ is an inverse problem that is *ill-posed* (or, the equation $f(x)=y$ is ill-posed) if any one or more of the following conditions are satisfied. (IP1) $f$ *is not injective.* This *non-uniqueness* problem of the solution for a given $y$ is the single most significant criterion of ill-posedness used in this work.
(IP2) *$f$ is not surjective.* For a $y\in Y$, this is the *existence* problem of the given equation. (IP3) When $f$ *is bijective,* the inverse *$f^{-1}$* is not continuous, which means that small changes in $y$ may lead to large changes in $x$. A problem $f(x)=y$ for which a solution exists, is unique, and small changes in the data $y$ lead to only small changes in the solution $x$ is said to be *well-posed* or *properly posed.* This means that $f(x)=y$ is well-posed if $f$ is bijective and the inverse $f^{-1}\!:Y\rightarrow X$ is continuous; otherwise the equation is *ill-posed* or *improperly posed.* It is to be noted that the three criteria are not, in general, independent of each other. Thus if $f$ represents a bijective, bounded linear operator between Banach spaces $X$ and $Y$, then the inverse mapping theorem guarantees that the inverse $f^{-1}$ is continuous. Hence ill-posedness depends not only on the algebraic structures of $X$, $Y$, $f$ but also on the topologies of $X$ and $Y$. **Example 2.1.** As a non-trivial example of an inverse problem, consider the heat equation$$\frac{\partial\theta(x,t)}{\partial t}=c^{2}\frac{\partial^{2}\theta(x,t)}{\partial x^{2}}$$ for the temperature distribution $\theta(x,t)$ of a one-dimensional homogeneous rod of length $L$ satisfying the initial condition $\theta(x,0)=\theta_{0}(x),\textrm{ }0\leq x\leq L$, and boundary conditions $\theta(0,t)=0=\theta(L,t),\,0\leq t\leq T$, having the Fourier sine-series solution $$\theta(x,t)=\sum_{n=1}^{\infty}A_{n}\sin\left(\frac{n\pi}{L}x\right)e^{-\lambda_{n}^{2}t}\label{Eqn: heat1}$$ where $\lambda_{n}=(c\pi/L)n$ and $$A_{n}=\frac{2}{L}\int_{0}^{L}\theta_{0}(x^{\prime})\sin\left(\frac{n\pi}{L}x^{\prime}\right)dx^{\prime}$$ are the Fourier expansion coefficients.
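Before passing to the inverse problem, it is worth seeing numerically how severely the factor $e^{-\lambda_{n}^{2}t}$ in Eq. (\[Eqn: heat1\]) damps the higher modes. The sketch below is purely illustrative (the parameter values are our assumptions, not taken from the text): it prints the damping factor at $t=T$ together with its reciprocal, the amplification suffered by any error in the $n$-th coefficient when the time direction is reversed.

```python
import math

# Illustrative parameters (assumptions, not from the text): unit rod,
# unit diffusivity, final time T.
L, c, T = 1.0, 1.0, 0.1

for n in (1, 2, 5, 10, 20):
    lam = c * math.pi * n / L            # lambda_n = (c*pi/L) * n
    damping = math.exp(-lam ** 2 * T)    # factor multiplying A_n at t = T
    # Reversing time divides by this factor: errors in the n-th coefficient
    # of theta_T are amplified by exp(+lambda_n^2 * T).
    print(f"n={n:2d}  damping={damping:.3e}  amplification={1 / damping:.3e}")
```

Already at $n=20$ the amplification exceeds $10^{100}$, which is the concrete content of the unboundedness that makes the inversion ill-posed.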
While the direct problem evaluates $\theta(x,t)$ from the differential equation and the initial temperature distribution $\theta_{0}(x)$, the inverse problem calculates $\theta_{0}(x)$ from the integral equation $$\theta_{T}(x)=\frac{2}{L}\int_{0}^{L}k(x,x^{\prime})\theta_{0}(x^{\prime})dx^{\prime},\qquad0\leq x\leq L,$$ when the final temperature $\theta_{T}$ is known, and $$k(x,x^{\prime})=\sum_{n=1}^{\infty}\sin\left(\frac{n\pi}{L}x\right)\sin\left(\frac{n\pi}{L}x^{\prime}\right)e^{-\lambda_{n}^{2}T}$$ is the kernel of the integral equation. In terms of the final temperature the distribution becomes $$\theta(x,t)=\sum_{n=1}^{\infty}B_{n}\sin\left(\frac{n\pi}{L}x\right)e^{-\lambda_{n}^{2}(t-T)}\label{Eqn: heat2}$$ with Fourier coefficients $$B_{n}=\frac{2}{L}\int_{0}^{L}\theta_{T}(x^{\prime})\sin\left(\frac{n\pi}{L}x^{\prime}\right)dx^{\prime}.$$ In $L^{2}[0,L]$, Eqs. (\[Eqn: heat1\]) and (\[Eqn: heat2\]) at $t=T$ and $t=0$ yield respectively $$\Vert\theta_{T}(x)\Vert^{2}=\frac{L}{2}\sum_{n=1}^{\infty}A_{n}^{2}e^{-2\lambda_{n}^{2}T}\leq e^{-2\lambda_{1}^{2}T}\Vert\theta_{0}\Vert^{2}\label{Eqn: heat3}$$ $$\Vert\theta_{0}\Vert^{2}=\frac{L}{2}\sum_{n=1}^{\infty}B_{n}^{2}e^{2\lambda_{n}^{2}T}.\label{Eqn: heat4}$$ The last two equations differ from each other in the significant respect that whereas Eq. (\[Eqn: heat3\]) shows that the direct problem is well-posed according to (IP3), Eq. (\[Eqn: heat4\]) means that in the absence of similar bounds the inverse problem is ill-posed.[^10]$\qquad\blacksquare$ **Example 2.2.** Consider the Volterra integral equation of the first kind $$y(x)=\int_{0}^{x}r(x^{\prime})dx^{\prime}=Kr$$ where $y,r\in C[0,1]$ and $K\!:C[0,1]\rightarrow C[0,1]$ is the corresponding integral operator. Since the differential operator $D=d/dx$ under the sup-norm $\Vert r\Vert=\sup_{0\leq x\leq1}|r(x)|$ is unbounded, the inverse problem $r=Dy$ for a differentiable function $y$ on $[0,1]$ is ill-posed, see Example 6.1.
However, $y=Kr$ becomes well-posed if $y$ is considered to be in $C^{1}[0,1]$ with norm $\Vert y\Vert=\sup_{0\leq x\leq1}|Dy|$. This illustrates the importance of the topologies of $X$ and $Y$ in determining the ill-posed nature of the problem when this is due to (IP3).$\qquad\blacksquare$ Ill-posed problems in nonlinear mathematics of type (IP1), arising from the non-injectivity of $f$, can be considered to be a generalization of non-uniqueness of solutions of linear equations as, for example, in eigenvalue problems or in the solution of a system of linear algebraic equations with a larger number of unknowns than the number of equations. In both cases, for a given $y\in Y$, the solution set of the equation $f(x)=y$ is given by $$f^{-}(y)=[x]_{f}=\{ x^{\prime}\in X:f(x^{\prime})=f(x)=y\}.$$ A significant point of difference between linear and nonlinear problems is that unlike the special importance of 0 in linear mathematics, there are no preferred elements in nonlinear problems; this leads to a shift of emphasis from the null space of linear problems to equivalence classes for nonlinear equations. To motivate the role of equivalence classes, let us consider the null spaces in the following linear problems. \(a) Let $f:\mathbb{R}^{2}\rightarrow\mathbb{R}$ be defined by $f(x,y)=x+y$, $(x,y)\in\mathbb{R}^{2}$. The null space of $f$ is generated by the equation $y=-x$ on the $x$-$y$ plane, and the graph of $f$ is the plane passing through the lines $\rho=x$ and $\rho=y.$ For each $\rho\in\mathbb{R}$ the equivalence classes $f^{-}(\rho)=\{(x,y)\in\mathbb{R}^{2}\!:x+y=\rho\}$ are lines on the graph parallel to the null set. \(b) For a linear operator $A\!:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$, $m<n$, satisfying (IP1) and (IP2), the problem $Ax=y$ reduces $A$ to echelon form with rank $r$ less than $\min\{ m,n\}$, when the given equations are consistent.
The solution, however, produces a generalized inverse leading to a set-valued inverse $A^{-}$ of $A$ for which the inverse images of $y\in\mathcal{R}(A)$ are multivalued because of the non-trivial null space of $A$ introduced by assumption (IP1). Specifically, a null space of dimension $n-r$ is generated by the free variables $\{ x_{j}\}_{j=r+1}^{n}$ which are arbitrary: this is ill-posedness of type (IP1). In addition, $m-r$ rows of the row-reduced echelon form of $A$ have all 0 entries, which introduces restrictions on the $m-r$ coordinates $\{ y_{i}\}_{i=r+1}^{m}$ of $y$ that are now related to $\{ y_{i}\}_{i=1}^{r}$: this illustrates ill-posedness of type (IP2). Inverse ill-posed problems therefore generate multivalued solutions through a generalized inverse of the mapping. \(c) The eigenvalue problem $$\left(\frac{d^{2}}{dx^{2}}+\lambda^{2}\right)y=0\qquad y(0)=0=y(1)$$ has the following equivalence class of 0 $$[0]_{D^{2}}=\{\sin(\pi mx)\}_{m=0}^{\infty},\qquad D^{2}=\left(d^{2}/dx^{2}+\lambda^{2}\right),$$ as its eigenfunctions corresponding to the eigenvalues $\lambda_{m}=\pi m$. Ill-posed problems are of interest to us primarily as explicitly noninjective maps $f$, that is under condition (IP1). The two other conditions (IP2) and (IP3) are not as significant and play only an implicit role in the theory. In its application to iterative systems, the degree of non-injectivity of $f$, defined as the number of its injective branches, increases with iteration of the map. A necessary (but not sufficient) condition for chaos to occur is the increasing non-injectivity of $f$ that is expressed descriptively in the chaos literature as *stretch-and-fold* or *stretch-cut-and-paste* operations. This increasing noninjectivity, which we discuss in the following sections, is what causes a dynamical system to tend toward chaoticity.
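The growth of the number of injective branches under iteration is easy to observe numerically. The sketch below is purely illustrative (the tent map and the grid-based branch counter are our assumptions, not constructions from the text): the number of injective branches of the $n$-th iterate doubles with each iteration, the numerical face of the stretch-and-fold operation.

```python
def tent(x):
    # Tent map on [0,1]: two injective (monotone) branches.
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def iterate(f, n):
    # The n-th iterate f^n of a map f.
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

def injective_branches(f, samples=10001):
    # Count maximal monotone pieces of f on [0,1] by watching the sign
    # of successive differences on a fine grid.
    xs = [i / (samples - 1) for i in range(samples)]
    ys = [f(x) for x in xs]
    diffs = [b - a for a, b in zip(ys, ys[1:])]
    branches, prev = 1, diffs[0]
    for d in diffs[1:]:
        if d * prev < 0:      # monotonicity flips: a new injective branch
            branches += 1
        if d != 0:
            prev = d
    return branches

for n in (1, 2, 3, 4):
    print(n, injective_branches(iterate(tent, n)))   # 2, 4, 8, 16
```

Each application of the map folds the interval once more, so restricting to any one of the $2^{n}$ branches is what a functional (single-valued) inverse of $f^{n}$ must do.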
Ill-posedness arising from the non-surjectivity of (injective) $f$, treated in the form of *regularization* [@Tikhonov1977], has received wide attention in the literature of ill-posed problems; this however is not of much significance in our work. ***Begin Tutorial3: Generalized Inverse*** In this Tutorial, we take a quick look at the equation $a(x)=y$, where $a\!:X\rightarrow Y$ is a linear map that need not be either one-one or onto. Specifically, we will take $X$ and $Y$ to be the Euclidean spaces $\mathbb{R}^{n}$ and $\mathbb{R}^{m}$ so that $a$ has a matrix representation $A\in\mathbb{R}^{m\times n}$, where $\mathbb{R}^{m\times n}$ is the collection of $m\times n$ matrices with real entries. The inverse $A^{-1}$ exists and is unique iff $m=n$ and $\textrm{rank}(A)=n$; this is the situation depicted in Fig. \[Fig: functions\](a). If $A$ is neither one-one nor onto, then we need to consider the multifunction $A^{-}$, a functional choice of which is known as the *generalized inverse* $G$ of $A$. A good introductory text for generalized inverses is @Campbell1979. Figure \[Fig: MP\_Inverse\](a) introduces the following definition of the *Moore-Penrose* generalized inverse $G_{\textrm{MP}}$.
**Definition 2.1.** ***Moore-Penrose Inverse.*** *If $a\!:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ is a linear transformation with matrix representation $A\in\mathbb{R}^{m\times n}$ then the* Moore-Penrose inverse $G_{\textrm{MP}}\in\mathbb{R}^{n\times m}$ of $A$ *(we will use the same notation* $G_{\textrm{MP}}\!:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}$ *for the inverse of the map $a$) is the noninjective map defined in terms of the row and column spaces of $A$,* $\textrm{row}(A)=\mathcal{R}(A^{\textrm{T}})$, $\textrm{col}(A)=\mathcal{R}(A)$*, as* $$G_{\textrm{MP}}(y)\overset{\textrm{def}}=\left\{ \begin{array}{lcl} (a|_{\textrm{row}(A)})^{-1}(y), & & \textrm{if }y\in\textrm{col}(A)\\ 0 & & \textrm{if }y\in\mathcal{N}(A^{\textrm{T}}).\end{array}\right.\qquad\square\label{Eqn: Def: Moore-Penrose}$$ Note that the restriction $a|_{\textrm{row}(A)}$ of $a$ to $\mathcal{R}(A^{\textrm{T}})$ is bijective so that the inverse $(a|_{\textrm{row}(A)})^{-1}$ is well-defined. The role of the transpose matrix appears naturally, and the $G_{\textrm{MP}}$ of Eq. (\[Eqn: Def: Moore-Penrose\]) is the unique matrix that satisfies the conditions $$\begin{array}{c} AG_{\textrm{MP}}A=A,\quad G_{\textrm{MP}}AG_{\textrm{MP}}=G_{\textrm{MP}},\\ (G_{\textrm{MP}}A)^{\textrm{T}}=G_{\textrm{MP}}A,\quad(AG_{\textrm{MP}})^{\textrm{T}}=AG_{\textrm{MP}}\end{array}\label{Eqn: MPInverse}$$ that follow immediately from the definition (\[Eqn: Def: Moore-Penrose\]); hence $G_{\textrm{MP}}A$ and $AG_{\textrm{MP}}$ are orthogonal projections[^11] onto the subspaces $\mathcal{R}(A^{\textrm{T}})=\mathcal{R}(G_{\textrm{MP}})$ and $\mathcal{R}(A)$ respectively. Recall that the range space $\mathcal{R}(A^{\textrm{T}})$ of $A^{\textrm{T}}$ is the same as the *row space* $\textrm{row}(A)$ of $A$, and $\mathcal{R}(A)$ is also known as the *column space* of $A$, $\textrm{col}(A)$. 
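The four conditions of Eqs. (\[Eqn: MPInverse\]) are easy to verify numerically. The sketch below is illustrative only: it uses NumPy's `numpy.linalg.pinv` (our choice of tool, not anything prescribed by the text) on the matrix $A$ of Example 2.3 that follows.

```python
import numpy as np

# The matrix A of Example 2.3 below (any real m x n matrix would do).
A = np.array([[1, -3,  2, 1, 2],
              [3, -9, 10, 2, 9],
              [2, -6,  4, 2, 4],
              [2, -6,  8, 1, 7]], dtype=float)

G = np.linalg.pinv(A)  # numerical Moore-Penrose inverse G_MP

# The four Penrose conditions of Eq. (MPInverse):
assert np.allclose(A @ G @ A, A)
assert np.allclose(G @ A @ G, G)
assert np.allclose((G @ A).T, G @ A)   # G_MP A: orthogonal projector onto row(A)
assert np.allclose((A @ G).T, A @ G)   # A G_MP: orthogonal projector onto col(A)

print(np.linalg.matrix_rank(A))        # 2: the rank preserved by G_MP A and A G_MP
```

The two symmetry conditions are exactly the orthogonality of the projections $G_{\textrm{MP}}A$ and $AG_{\textrm{MP}}$ noted above.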
**Example 2.3.** For $a\!:\mathbb{R}^{5}\rightarrow\mathbb{R}^{4}$, let $$A=\left(\begin{array}{rrrrr} 1 & -3 & 2 & 1 & 2\\ 3 & -9 & 10 & 2 & 9\\ 2 & -6 & 4 & 2 & 4\\ 2 & -6 & 8 & 1 & 7\end{array}\right)$$ By reducing the augmented matrix $\left(A|y\right)$ to row-reduced echelon form, it can be verified that the null and range spaces of $A$ are $3$- and $2$-dimensional respectively. Bases for the null space of $A^{\textrm{T}}$, and for the row and column spaces of $A$, obtained from the echelon form are respectively $$\left(\begin{array}{r} -2\\ 0\\ 1\\ 0\end{array}\right),\textrm{ }\left(\begin{array}{r} 1\\ -1\\ 0\\ 1\end{array}\right);\quad\textrm{and }\left(\begin{array}{r} 1\\ -3\\ 0\\ 3/2\\ 1/2\end{array}\right),\textrm{ }\left(\begin{array}{r} 0\\ 0\\ 1\\ -1/4\\ 3/4\end{array}\right);\textrm{ }\left(\begin{array}{r} 1\\ 0\\ 2\\ -1\end{array}\right),\textrm{ }\left(\begin{array}{r} 0\\ 1\\ 0\\ 1\end{array}\right).$$ According to its definition Eq. (\[Eqn: Def: Moore-Penrose\]), the Moore-Penrose inverse maps the first two of the above set to $(0,0,0,0,0)^{\textrm{T}}$, and the $A$-image of the middle two (which are respectively $(19,70,38,51)^{\textrm{T}}$ and $(70,275,140,205)^{\textrm{T}}$ lying, as they must, in the span of the last two) to the span of $(1,-3,2,1,2)^{\textrm{T}}$ and $(3,-9,10,2,9)^{\textrm{T}}$, because $a$ restricted to this subspace of $\mathbb{R}^{5}$ is bijective. Hence $$G_{\textrm{MP}}\left(A\left(\begin{array}{r} 1\\ -3\\ 0\\ 3/2\\ 1/2\end{array}\right)\quad A\left(\begin{array}{r} 0\\ 0\\ 1\\ -1/4\\ 3/4\end{array}\right)\quad\begin{array}{rr} -2 & 1\\ 0 & -1\\ 1 & 0\\ 0 & 1\end{array}\right)=\left(\begin{array}{rrrr} 1 & 0 & 0 & 0\\ -3 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 3/2 & -1/4 & 0 & 0\\ 1/2 & 3/4 & 0 & 0\end{array}\right).$$ The $4\times4$ matrix on the left is invertible as its rank is $4$.
This gives $${\displaystyle G_{\textrm{MP}}=\left(\begin{array}{rrrr} 9/275 & -1/275 & 18/275 & -2/55\\ -27/275 & 3/275 & -54/275 & 6/55\\ -10/143 & 6/143 & -20/143 & 16/143\\ 238/3575 & -57/3575 & 476/3575 & -59/715\\ -129/3575 & 106/3575 & -258/3575 & 47/715\end{array}\right)}\label{Eqn: MPEx5}$$ as the Moore-Penrose inverse of $A$ that readily verifies all four conditions of Eqs. (\[Eqn: MPInverse\]). The basic point here is that, as in the case of a bijective map, $G_{\textrm{MP}}A$ and $AG_{\textrm{MP}}$ are identities on the row and column spaces of $A$ that define its rank. For later use — when we return to this example for a simpler inverse $G$ — given below are the orthonormal bases of the four fundamental subspaces with respect to which $G_{\textrm{MP}}$ is a representation of the generalized inverse of $A$; these calculations were done in MATLAB. The basis for \(a) the column space of $A$ consists of the first $2$ columns of the eigenvectors of $AA^{\textrm{T}}$: $$\begin{array}{c} (-1633/2585,-363/892,\textrm{ }3317/6387,\textrm{ }363/892)^{\textrm{T}}\\ (-929/1435,\textrm{ }709/1319,\textrm{ }346/6299,-709/1319)^{\textrm{T}}\end{array}$$ \(b) the null space of $A^{\textrm{T}}$ consists of the last $2$ columns of the eigenvectors of $AA^{\textrm{T}}$:$$\begin{array}{c} (-3185/8306,\textrm{ }293/2493,-3185/4153,\textrm{ }1777/3547)^{\textrm{T}}\\ (323/1732,\textrm{ }533/731,\textrm{ }323/866,\textrm{ }1037/1911)^{\textrm{T}}\end{array}$$ \(c) the row space of $A$ consists of the first $2$ columns of the eigenvectors of $A^{\textrm{T}}A$: $$\begin{array}{c} (421/13823,\textrm{ }44/14895,-569/918,-659/2526,\textrm{ }1036/1401)\\ (661/690,\textrm{ }412/1775,\textrm{ }59/2960,-1523/10221,-303/3974)\end{array}$$ \(d) the null space of $A$ consists of the last $3$ columns of the eigenvectors of $A^{\textrm{T}}A$:$$\begin{array}{c} (-571/15469,-369/776,\textrm{ }149/25344,-291/350,-389/1365)\\ (-281/1313,\textrm{ }956/1489,\textrm{ }875/1706,-1279/2847,\textrm{
}409/1473)\\ (292/1579,-876/1579,\textrm{ }203/342,\textrm{ }621/4814,\textrm{ }1157/2152)\end{array}$$ The matrices $Q_{1}$ and $Q_{2}$ with these eigenvectors $(x_{i})$ satisfying $\Vert x_{i}\Vert=1$ and $(x_{i},x_{j})=0$ for $i\neq j$ as their columns are *orthogonal matrices* with the simple inverse criterion $Q^{-1}=Q^{\textrm{T}}$.$\qquad\blacksquare$ ***End Tutorial3*** The basic issue in the solution of the inverse ill-posed problem is its reduction to a well-posed one when restricted to suitable subspaces of the domain and range of $A$. Considerations of geometry leading to their decomposition into orthogonal subspaces are only an additional feature that is not central to the problem: recall from Eq. (\[Eqn: f\_inv\_f\]) that any function $f$ must necessarily satisfy the more general set-theoretic relations $ff^{-}f=f$ and $f^{-}ff^{-}=f^{-}$ of Eq. (\[Eqn: MPInverse\]) for the multiinverse $f^{-}$ of $f\!:X\rightarrow Y$. The second distinguishing feature of the MP-inverse is that it is defined, by a suitable extension, on all of $Y$ and not just on $f(X)$, which would perhaps be more natural. The availability of orthogonality in inner-product spaces allows this extension to be made in an almost normal fashion. As we shall see below, the additional geometric restriction of Eq. (\[Eqn: MPInverse\]) is not essential to the solution process and, in fact, only results in a less canonical form of the inverse. ***Begin Tutorial4: Topological Spaces*** This Tutorial is meant to familiarize the reader with the basic principles of a topological space. A topological space $(X,\mathcal{U})$ is a set $X$ with a class[^12] $\mathcal{U}$ of distinguished subsets, called *open sets of $X$,* that satisfy (T1) The empty set $\emptyset$ and the whole $X$ belong to $\mathcal{U}$ (T2) Finite intersections of members of $\mathcal{U}$ belong to $\mathcal{U}$ (T3) Arbitrary unions of members of $\mathcal{U}$ belong to $\mathcal{U}$.
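On a finite set, conditions (T1)-(T3) can be checked mechanically, since arbitrary unions reduce to finite ones. The following sketch is entirely illustrative (nothing in it comes from the text beyond the three axioms themselves):

```python
from itertools import combinations

def is_topology(X, opens):
    """Check (T1)-(T3) for a family `opens` of subsets of a finite set X.
    Pairwise closure suffices: finite/arbitrary unions and finite
    intersections follow by induction on a finite set."""
    opens = {frozenset(s) for s in opens}
    if frozenset() not in opens or frozenset(X) not in opens:
        return False                       # (T1)
    for a, b in combinations(opens, 2):
        if a & b not in opens:             # (T2)
            return False
        if a | b not in opens:             # (T3)
            return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True: a nested chain
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1, 2} is missing
```

The second family fails because the union $\{1\}\bigcup\{2\}$ is not in it, the kind of closure defect that Example 2.4 never exhibits.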
**Example 2.4.** (1) The smallest topology possible on a set $X$ is its *indiscrete topology* when the only open sets are $\emptyset$ and $X$; the largest is the *discrete topology* where every subset of $X$ is open (and hence also closed). \(2) In a metric space $(X,d)$, let $B_{\varepsilon}(x,d)=\{ y\in X\!:d(x,y)<\varepsilon\}$ be an open ball at $x$. Any subset $U$ of $X$ such that for each $x\in U$ there is a $d$-ball $B_{\varepsilon}(x,d)\subseteq U$ in $U$, is said to be an open set of $(X,d)$. The collection of all these sets is the topology induced by $d$. The topological space $(X,\mathcal{U})$ is then said to be *associated with (induced by)* $(X,d)$. \(3) If $\sim$ is an equivalence relation on a set $X$, the set of all saturated sets $[x]_{\sim}=\{ y\in X\!:y\sim x\}$ is a topology on $X;$ this topology is called the *topology of saturated sets.* We argue in Sec. 4.2 that this constitutes the defining topology of a chaotic system. \(4) For any subset $A$ of the set $X$, the $A$-*inclusion topology on $X$* consists of $\emptyset$ and every superset of $A$, while the $A$-*exclusion topology on* $X$ consists of all subsets of $X-A$. Thus $A$ is open in the inclusion topology and closed in the exclusion, and in general every open set of one is closed in the other. The special cases of the *$a$-inclusion* and *$a$-exclusion* topologies for $A=\{ a\}$ are defined in a similar fashion. \(5) The *cofinite* and *cocountable topologies* in which the open sets of an infinite (resp. uncountable) set $X$ are respectively the complements of finite and countable subsets, are examples of topologies with some unusual properties that are covered in Appendix A1. If $X$ is itself finite (respectively, countable), then its cofinite (respectively, cocountable) topology is the discrete topology consisting of all its subsets. 
It is therefore useful to adopt the convention, unless stated to the contrary, that cofinite and cocountable spaces are respectively infinite and uncountable.$\qquad\blacksquare$ In the space $(X,\mathcal{U})$, a *neighbourhood of a point* $x\in X$ is a nonempty subset $N$ of $X$ that contains an open set $U$ containing $x$; thus $N\subseteq X$ is a neighbourhood of $x$ iff $$x\in U\subseteq N\label{Eqn: Def: nbd1}$$ for some $U\in\mathcal{U}$. The largest open set that can be used here is $\textrm{Int}(N)$ (where, by definition, $\textrm{Int}(A)$ is the largest open set that is contained in $A$) so that the above neighbourhood criterion for a subset $N$ of $X$ can be expressed in the equivalent form $$N\subseteq X\textrm{ is a }\mathcal{U}-\textrm{neighbourhood of }x\textrm{ iff }x\in\textrm{Int}_{\mathcal{U}}(N)\label{Eqn: Def: nbd2}$$ implying that a subset of $(X,\mathcal{U})$ is a neighbourhood of all its interior points, so that $N\in\mathcal{N}_{x}\Rightarrow N\in\mathcal{N}_{y}$ for all $y\in\textrm{Int}(N)$. The collection of all neighbourhoods of $x$ $$\mathcal{N}_{x}\overset{\textrm{def}}=\{ N\subseteq X\!:x\in U\subseteq N\textrm{ for some }U\in\mathcal{U}\}\label{Eqn: Def: nbd system}$$ is the *neighbourhood system* at $x$, and the subcollection $U$ of the topology used in this equation constitutes a *neighbourhood* (*local*) *base*, or *basic neighbourhood system*, at $x$; see Def. A1.1 of Appendix A1. The properties (N1) $x$ belongs to every member $N$ of $\mathcal{N}_{x}$, (N2) The intersection of any two neighbourhoods of $x$ is another neighbourhood of $x$: $N,M\in\mathcal{N}_{x}\Rightarrow N\bigcap M\in\mathcal{N}_{x}$, (N3) Every superset of any neighbourhood of $x$ is a neighbourhood of $x$: $(M\in\mathcal{N}_{x})\wedge(M\subseteq N)\Rightarrow N\in\mathcal{N}_{x}$.
that characterize *$\mathcal{N}_{x}$* completely are a direct consequence of the definitions (\[Eqn: Def: nbd1\]) and (\[Eqn: Def: nbd2\]), which may also be stated as (N0) Any neighbourhood $N\in\mathcal{N}_{x}$ contains another neighbourhood $U$ of $x$ that is a *neighbourhood of each of its points*: $((\forall N\in\mathcal{N}_{x})(\exists U\in\mathcal{N}_{x})(U\subseteq N))\!:(\forall y\in U\Rightarrow U\in\mathcal{N}_{y})$. Property (N0) in fact serves as the defining characteristic of an open set, and $U$ can be identified with the largest open set $\textrm{Int}(N)$ contained in $N$; hence *a set $G$ in a topological space is open iff it is a neighbourhood of each of its points.* Accordingly if *$\mathcal{N}_{x}$* is a given class of subsets of $X$ associated with each $x\in X$ satisfying $(\textrm{N}1)-(\textrm{N}3)$, then (N0) defines the special class of neighbourhoods $G$ $$\mathcal{U}=\{ G\in\mathcal{N}_{x}\!:x\in B\subseteq G\textrm{ for all }x\in G\textrm{ and a basic nbd }B\in\mathcal{N}_{x}\}\label{Eqn: nbd-topology}$$ as the unique topology on $X$ that contains a basic neighbourhood of each of its points, for which the neighbourhood system at $x$ coincides exactly with the assigned collection *$\mathcal{N}_{x}$*; compare Def. A1.1. Neighbourhoods in topological spaces are a generalization of the familiar notion of distances of metric spaces that quantifies “closeness” of points of $X$. A *neighbourhood of a nonempty subset* $A$ of $X$ that will be needed later on is defined in a similar manner: $N$ is a neighbourhood of $A$ iff $A\subseteq\textrm{Int}(N)$, that is $A\subseteq U\subseteq N$; thus the neighbourhood system at $A$, given by $\mathcal{N}_{A}=\bigcap_{a\in A}\mathcal{N}_{a}:=\{ G\subseteq X\!:G\in\mathcal{N}_{a}\textrm{ for every }a\in A\}$, is the class of common neighbourhoods of each point of $A$.
Some examples of neighbourhood systems at a point $x$ in $X$ are the following: \(1) In an indiscrete space $(X,\mathcal{U})$, $X$ is the only neighbourhood of every point of the space; in a discrete space any set containing $x$ is a neighbourhood of the point. \(2) In an infinite cofinite (or uncountable cocountable) space, every neighbourhood of a point is an open neighbourhood of that point. \(3) In the topology of saturated sets under the equivalence relation $\sim$, the neighbourhood system at $x$ consists of all supersets of the equivalence class $[x]_{\sim}$. \(4) Let $x\in X$. In the $x$-inclusion topology, $\mathcal{N}_{x}$ consists of all the non-empty open sets of $X$, which are the supersets of $\{ x\}$. For a point $y\neq x$ of $X$, $\mathcal{N}_{y}$ consists of the supersets of $\{ x,y\}$. For any given class $_{\textrm{T}}\mathcal{S}$ of subsets of $X$, a unique topology $\mathcal{U}(_{\textrm{T}}\mathcal{S})$ can always be constructed on $X$ by taking all *finite intersections* $_{\textrm{T}}\mathcal{S}_{\wedge}$ of members of $_{\textrm{T}}\mathcal{S}$ followed by *arbitrary unions* $_{\textrm{T}}\mathcal{S}_{\wedge\vee}$ of these finite intersections. $\mathcal{U}(_{\textrm{T}}\mathcal{S}):=\,_{\textrm{T}}\mathcal{S}_{\wedge\vee}$ is the smallest topology on $X$ that contains $_{\textrm{T}}\mathcal{S}$ and is said to be *generated by* $_{\textrm{T}}\mathcal{S}$. For a given topology $\mathcal{U}$ on $X$ satisfying $\mathcal{U}=\mathcal{U}(_{\textrm{T}}\mathcal{S})$, $_{\textrm{T}}\mathcal{S}$ is a *subbasis,* and $_{\textrm{T}}\mathcal{S}_{\wedge}:=\,_{\textrm{T}}\mathcal{B}$ a *basis, for the topology* $\mathcal{U}$; for more on topological bases, see Appendix A1.
The topology generated by a subbase essentially builds not from the collection $_{\textrm{T}}\mathcal{S}$ itself but from the finite intersections $_{\textrm{T}}\mathcal{S}_{\wedge}$ of its subsets; in comparison the base generates a topology directly from a collection $_{\textrm{T}}\mathcal{S}$ of subsets by forming their unions. Thus whereas *any* class of subsets can be used as a subbasis, a given collection must meet certain qualifications to pass the test of a base for a topology: these and related topics are covered in Appendix A1. Different subbases, therefore, can be used to generate different topologies on the same set $X$, as the following examples for the case of $X=\mathbb{R}$ demonstrate; here $(a,b)$, $[a,b)$, $(a,b]$ and $[a,b]$, for $a\leq b\in\mathbb{R}$, are the usual open, half-open, and closed intervals in $\mathbb{R}$[^13]. The subbases $_{\textrm{T}}\mathcal{S}_{1}=\{(a,\infty),(-\infty,b)\}$, $_{\textrm{T}}\mathcal{S}_{2}=\{[a,\infty),(-\infty,b)\}$, $_{\textrm{T}}\mathcal{S}_{3}=\{(a,\infty),(-\infty,b]\}$ and $_{\textrm{T}}\mathcal{S}_{4}=\{[a,\infty),(-\infty,b]\}$ give the respective bases $_{\textrm{T}}\mathcal{B}_{1}=\{(a,b)\}$, $_{\textrm{T}}\mathcal{B}_{2}=\{[a,b)\}$, $_{\textrm{T}}\mathcal{B}_{3}=\{(a,b]\}$ and $_{\textrm{T}}\mathcal{B}_{4}=\{[a,b]\}$, $a\leq b\in\mathbb{R}$, leading to the *standard* (*usual*), *lower limit* (*Sorgenfrey*), *upper limit,* and *discrete* (take $a=b$) topologies on $\mathbb{R}$. Bases of the type $(a,\infty)$ and $(-\infty,b)$ provide the *right* and *left ray* topologies on $\mathbb{R}$. *This feasibility of generating different topologies on a set can be of great practical significance because open sets determine convergence characteristics of nets and continuity characteristics of functions, thereby making it possible for nature to play around with the structure of its working space in its kitchen to its best possible advantage.*[^14] Here are a few essential concepts and terminology for topological spaces.
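The two-step construction of $\mathcal{U}(_{\textrm{T}}\mathcal{S})$, finite intersections followed by arbitrary unions, can be carried out verbatim on a finite set, where arbitrary unions reduce to finite ones. The following sketch is purely an illustration of ours, not a construction from the text:

```python
from itertools import chain, combinations

def subsets_of(family):
    # All finite sub-collections of a finite family of sets.
    family = list(family)
    return chain.from_iterable(
        combinations(family, r) for r in range(len(family) + 1))

def generated_topology(X, subbase):
    X = frozenset(X)
    subbase = [frozenset(s) for s in subbase]
    base = set()                 # step 1: finite intersections, S_wedge
    for combo in subsets_of(subbase):
        inter = X                # the empty intersection is X itself
        for s in combo:
            inter &= s
        base.add(inter)
    topology = set()             # step 2: unions, S_wedge_vee
    for combo in subsets_of(base):
        union = frozenset()      # the empty union is the empty set
        for b in combo:
            union |= b
        topology.add(union)
    return topology

X = {1, 2, 3}
U = generated_topology(X, [{1, 2}, {2, 3}])
print(sorted(sorted(s) for s in U))   # [[], [1, 2], [1, 2, 3], [2], [2, 3]]
```

The intersection $\{2\}$, absent from the subbase itself, is what step 1 contributes; a different subbase on the same $X$ would generally produce a different topology.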
**Definition 2.2.** ***Boundary, Closure, Interior.*** *The boundary of $A$ in $X$ is the set of points $x\in X$ such that every neighbourhood $N$ of $x$ intersects both $A$ and $X-A$:* $${\textstyle \textrm{Bdy}(A)\overset{\textrm{def}}=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})((N\bigcap A\neq\emptyset)\wedge(N\bigcap(X-A)\neq\emptyset))\}}\label{Eqn: Def: Boundary}$$ *where $\mathcal{N}_{x}$ is the neighbourhood system of Eq. (\[Eqn: Def: nbd system\]) at $x$. The closure of $A$ is the set of all points $x\in X$ such that each neighbourhood of $x$ contains at least one point of $A$, which may be $x$ itself. Thus the set* $${\textstyle \textrm{Cl}(A)\overset{\textrm{def}}=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})\textrm{ }(N\bigcap A\neq\emptyset)\}}\label{Eqn: Def: Closure}$$ *of all points in $X$ adherent to $A$ is the union of $A$ with its boundary. The interior of $A$* $$\textrm{Int}(A)\overset{\textrm{def}}=\{ x\in X\!:(\exists N\in\mathcal{N}_{x})\textrm{ }(N\subseteq A)\}\label{Eqn: Def: Interior}$$ *consisting of those points of $X$ that are in $A$ but not in its boundary,* $\textrm{Int}(A)=A-\textrm{Bdy}(A)$*, is the largest open subset of $X$ that is contained in $A$. Hence it follows that* $\textrm{Int}(\textrm{Bdy}(A))=\emptyset$, *the boundary of $A$ is the intersection of the closures of $A$ and $X-A$, and a subset $N$ of $X$ is a neighbourhood of $x$ iff* $x\in\textrm{Int}(N)$*.$\qquad\square$* The three subsets $\textrm{Int}(A)$, $\textrm{Bdy}(A)$ and the *exterior* of $A$, defined as $\textrm{Ext}(A):=\textrm{Int}(X-A)=X-\textrm{Cl}(A)$, are pairwise disjoint and have the full space $X$ as their union. **Definition 2.3.** ***Derived and Isolated sets.*** *Let $A$ be a subset of $X$. A point $x\in X$ (which may or may not be a point of $A$) is a* *cluster point of* $A$ *if every neighbourhood $N\in\mathcal{N}_{x}$ contains at least one point of $A$ different from* *$\mathbf{x}$.
The* *derived set of $A$* $${\textstyle \textrm{Der}(A)\overset{\textrm{def}}=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})\textrm{ }(N\bigcap(A-\{ x\})\neq\emptyset)\}}\label{Eqn: Def: Derived}$$ *is the set of all cluster points of $A$. The complement of* $\textrm{Der}(A)$ in $A$ $$\textrm{Iso}(A)\overset{\textrm{def}}=A-\textrm{Der}(A)=\textrm{Cl}(A)-\textrm{Der}(A)\label{Eqn: Def: Isolated}$$ *is the set of isolated points of $A$, to which no proper sequence in $A$ converges; that is, there exists a neighbourhood of any such point that contains no other point of $A$, so that the only sequence that converges to* $a\in\textrm{Iso}(A)$ *is the constant sequence $(a,a,a,\cdots)$.$\qquad\square$* Clearly, $$\begin{array}{ccl} {\textstyle \textrm{Cl}(A)} & = & A\bigcup\textrm{Der}(A)=A\bigcup\textrm{Bdy}(A)\\ & = & \textrm{Iso}(A)\bigcup\textrm{Der}(A)=\textrm{Int}(A)\bigcup\textrm{Bdy}(A)\end{array}$$ with the last two being disjoint unions, and $A$ is closed iff $A$ contains all its cluster points, $\textrm{Der}(A)\subseteq A$, iff $A$ contains its closure. Hence $$\begin{gathered} A=\textrm{Cl}(A)\Longleftrightarrow\textrm{Cl}(A)=\{ x\in A\!:((\exists N\in\mathcal{N}_{x})(N\subseteq A))\vee((\forall N\in\mathcal{N}_{x})(N\bigcap(X-A)\neq\emptyset))\}\end{gathered}$$ Comparison of Eqs. (\[Eqn: Def: Boundary\]) and (\[Eqn: Def: Derived\]) also makes it clear that $\textrm{Bdy}(A)\subseteq\textrm{Der}(A)$. The special case of $A=\textrm{Iso}(A)$ with $\textrm{Der}(A)\subseteq X-A$ is important enough to deserve a special mention: **Definition 2.4.** ***Donor set.*** *A proper, nonempty subset $A$ of $X$ such that* $\textrm{Iso}(A)=A$ *with* $\textrm{Der}(A)\subseteq X-A$ *will be called* *self-isolated* *or* *donor.* *Thus sequences eventually in a donor set converge only in its complement; this is the opposite of the characteristic of a closed set, where all converging sequences eventually in the set must necessarily converge in it.
A closed-donor set with a closed neighbour has no derived or boundary sets, and will be said to be* *isolated in $X$.*$\qquad\square$ **Example 2.5.** In an isolated set sequences converge, if they have to, simultaneously in the complement (because it is donor) and in it (because it is closed). Convergent sequences in such a set can only be constant sequences. Physically, if we consider adherents to be contributions made by the dynamics of the corresponding sequences, then an isolated set is secluded from its neighbour in the sense that it neither receives any contributions from its surroundings, nor does it give away any. In this light and terminology, a closed set is a *selfish* set (recall that a set $A$ is closed in $X$ iff every convergent net of $X$ that is eventually in $A$ converges in $A$; conversely a set is open in $X$ iff the only nets that converge in $A$ are eventually in it), whereas a set whose derived set intersects both the set itself and its complement may be considered to be *neutral.* Appendix A3 shows the various possibilities for the derived set and boundary of a subset $A$ of $X$.$\qquad\blacksquare$ Some useful properties of these concepts for a subset $A$ of a topological space $X$ are the following.
\(a) $\textrm{Bdy}_{X}(X)=\emptyset$, \(b) $\textrm{Bdy}(A)=\textrm{Cl}(A)\bigcap\textrm{Cl}(X-A)$, \(c) $\textrm{Int}(A)=X-\textrm{Cl}(X-A)=A-\textrm{Bdy}(A)=\textrm{Cl}(A)-\textrm{Bdy}(A)$, \(d) $\textrm{Int}(A)\bigcap\textrm{Bdy}(A)=\emptyset$, \(e) $X=\textrm{Int}(A)\bigcup\textrm{Bdy}(A)\bigcup\textrm{Int}(X-A)$, \(f) $${\textstyle \textrm{Int}(A)=\bigcup\{ G\subseteq X\!:G\textrm{ is an open set of }X\textrm{ contained in }A\}}\label{Eqn: interior}$$ \(g) $${\textstyle \textrm{Cl}(A)=\bigcap\{ F\subseteq X\!:F\textrm{ is a closed set of }X\textrm{ containing }A\}}\label{Eqn: closure}$$ A straightforward consequence of property (b) is that the boundary of any subset $A$ of a topological space $X$ is closed in $X$; this significant result may also be demonstrated as follows. If $x\in X$ is not in the boundary of $A$ there is some neighbourhood $N$ of $x$ that does not intersect both $A$ and $X-A$. For each point $y\in N$, $N$ is a neighbourhood of that point that does not meet $A$ and $X-A$ simultaneously, so that $N$ is contained wholly in $X-\textrm{Bdy}(A)$. We may now take $N$ to be open without any loss of generality, implying thereby that $X-\textrm{Bdy}(A)$ is an open set of $X$ from which it follows that $\textrm{Bdy}(A)$ is closed in $X$. Further material on topological spaces relevant to our work can be found in Appendix A3. ***End Tutorial4*** Working in a general topological space, we now recall the solution of an ill-posed problem $f(x)=y$ [@Sengupta1997] that leads to a multifunctional inverse $f^{-}$ through the generalized inverse $G$. Let $f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ be a (nonlinear) function between two topological spaces $(X,\mathcal{U})$ and $(Y,\mathcal{V})$ that is neither one-one nor onto. Since $f$ is not one-one, $X$ can be partitioned into disjoint equivalence classes with respect to the equivalence relation $x_{1}\sim x_{2}\Leftrightarrow f(x_{1})=f(x_{2})$. 
Picking a representative member from each of the classes (this is possible by the Axiom of Choice; see the following Tutorial) produces a *basic set* $X_{\textrm{B}}$ of $X$; it is basic as it corresponds to the row space in the linear matrix example, which is all that is needed for taking an inverse. $X_{\textrm{B}}$ is the counterpart of the quotient set $X/\sim$ of Sec. 1, with the important difference that whereas the points of the quotient set are the equivalence classes of $X$, $X_{\textrm{B}}$ *is a subset of* $X$ with each of the classes contributing a point to $X_{\textrm{B}}$. It then follows that $f_{\textrm{B}}\!:X_{\textrm{B}}\rightarrow f(X)$ is the bijective restriction $a|_{\textrm{row}(A)}$ that reduces the original ill-posed problem to a well-posed one with $X_{\textrm{B}}$ and $f(X)$ corresponding respectively to the row and column spaces of $A$, and $f_{\textrm{B}}^{-1}\!:f(X)\rightarrow X_{\textrm{B}}$ is the basic inverse from which the multiinverse $f^{-}$ is obtained through $G$, which in turn corresponds to the Moore-Penrose inverse $G_{\textrm{MP}}$. The topological considerations (obviously not needed for the inner-product spaces to which the Moore-Penrose inverse applies) required to complete the solution are discussed below and in Appendix A1. ***Begin Tutorial5: Axiom of Choice and Zorn’s Lemma*** Since some of our basic arguments depend on it, this Tutorial contains a short description of the Axiom of Choice that has been described as “one of the most important, and at the same time one of the most controversial, principles of mathematics”. What this axiom states is this: For any set $X$ there exists a function $f_{\textrm{C}}\!:\mathcal{P}_{0}(X)\rightarrow X$ such that $f_{\textrm{C}}(A_{\alpha})\in A_{\alpha}$ for every non-empty subset $A_{\alpha}$ of $X$; here $\mathcal{P}_{0}(X)$ is the class of all subsets of $X$ except $\emptyset$. 
Thus, if $X=\{ x_{1},x_{2},x_{3}\}$ is a three element set, a possible choice function is given by $$\begin{array}{c} f_{\textrm{C}}(\{ x_{1},x_{2},x_{3}\})=x_{3},\quad f_{\textrm{C}}(\{ x_{1},x_{2}\})=x_{1},\quad f_{\textrm{C}}(\{ x_{2},x_{3}\})=x_{3},\quad f_{\textrm{C}}(\{ x_{3},x_{1}\})=x_{3},\\ f_{\textrm{C}}(\{ x_{1}\})=x_{1},\quad f_{\textrm{C}}(\{ x_{2}\})=x_{2},\quad f_{\textrm{C}}(\{ x_{3}\})=x_{3}.\end{array}$$ It must be appreciated that the axiom is only an existence result that asserts *every set* to have a choice function, even when nobody knows how to construct one in a specific case. Thus, for example, how does one pick out the isolated irrationals $\sqrt{2}$ or $\pi$ from the uncountable reals? There is no doubt that they do exist, for we can construct a right angled triangle with sides of length $1$ or a circle of radius $1$. The axiom tells us that these choices are possible even though we do not know how exactly to do it; all that can be stated with confidence is that we can actually pick up rationals arbitrarily close to these irrationals. The axiom of choice is essentially meaningful when $X$ is infinite as illustrated in the last two examples. This is so because even when $X$ is denumerable, it would be physically impossible to make an infinite number of selections either all at a time or sequentially: the Axiom of Choice nevertheless tells us that this is possible. The real strength and utility of the Axiom however is when $X$ and some or all of its subsets are uncountable as in the case of the choice of the *single element* $\pi$ from the reals. To see this more closely in the context of maps that we are concerned with, let $f\!:X\rightarrow Y$ be a non-injective, onto map. 
To construct a functional right inverse $f_{r}\!:Y\rightarrow X$ of $f$, we must choose, for each $y\in Y$, one *representative* element $x_{\textrm{rep}}$ from the set $f^{-}(y)$ and define $f_{r}(y)$ to be that element according to $f\circ f_{r}(y)=f(x_{\textrm{rep}})=y$. If there is no preferred or natural way to make this choice, the axiom of choice allows us to make an arbitrary selection from the infinitely many that may be possible from $f^{-}(y)$. When a natural choice is indeed available, as for example in the case of the initial value problem $y^{\prime}(x)=x;\, y(0)=\alpha_{0}$ on $[0,a]$, the definite solution $\alpha_{0}+x^{2}/2$ may be selected from the infinitely many solutions $y(x)=\alpha+x^{2}/2,\textrm{ }0\leq x\leq a,$ that are permissible, and the axiom of choice sanctions this selection. In addition, each $y\in Y$ gives rise to the solution set $A_{y}=f^{-}(y)$, and the real power of the axiom is its assertion that it is possible to make a choice $f_{\textrm{C}}(A_{y})\in A_{y}$ on every $A_{y}$ of the collection $\{ A_{y}\!:y\in Y\}$ simultaneously, that is at the same time. ***Pause Tutorial5*** Figure \[Fig: GenInv\] shows our [@Sengupta1997] formulation and solution of the inverse ill-posed problem $f(x)=y$. In sub-diagram $X-X_{\textrm{B}}-f(X)$, the surjection $p\!:X\rightarrow X_{\textrm{B}}$ is the counterpart of the quotient map $Q$ of Fig. 
\[Fig: quotient\] that is known in the present context as the *identification* of $X$ with $X_{\mathrm{B}}$ (as it *identifies* each saturated subset of $X$ with its representative point in $X_{\textrm{B}}$), with the space $(X_{\textrm{B}},\textrm{FT}\{\mathcal{U};p\})$ carrying the *identification topology* $\textrm{FT}\{\mathcal{U};p\}$ being known as an *identification space.* By sub-diagram $Y-X_{\textrm{B}}-f(X)$, the image $f(X)$ of $f$ gets the *subspace topology*[^15] $\textrm{IT}\{ j;\mathcal{V}\}$ from $(Y,\mathcal{V})$ by the inclusion $j\!:f(X)\rightarrow Y$ when its open sets are generated as, and only as, $j^{-1}(V)=V\bigcap f(X)$ for $V\in\mathcal{V}$. Furthermore if the bijection $f_{\textrm{B}}$ connecting $X_{\textrm{B}}$ and $f(X)$ (which therefore acts as a $1:1$ correspondence between their points, implying that these sets are set-theoretically identical except for their names) is image continuous, then by Theorem A2.1 of Appendix 2, so is the *association* $q=f_{\textrm{B}}\circ p\!:X\rightarrow f(X)$ that associates saturated sets of $X$ with elements of $f(X)$; this makes $f(X)$ look like an identification space of $X$ by assigning to it the topology $\textrm{FT}\{\mathcal{U};q\}$. On the other hand if $f_{\textrm{B}}$ happens to be preimage continuous, then $X_{\textrm{B}}$ acquires, by Theorem A2.2, the initial topology $\textrm{IT}\{ e;\mathcal{V}\}$ by the *embedding* $e\!:X_{\textrm{B}}\rightarrow Y$ that embeds $X_{\textrm{B}}$ into $Y$ through $j\circ f_{\textrm{B}}$, making it look like a subspace of $Y$[^16]. 
In this dual situation, $f_{\textrm{B}}$ has the highly interesting topological property of being simultaneously image and preimage continuous when the open sets of $X_{\textrm{B}}$ and $f(X)$ — which are simply the $f_{\textrm{B}}^{-1}$-images of the open sets of $f(X)$ which, in turn, are the $f_{\textrm{B}}$-images of these saturated open sets — can be considered to have been generated by $f_{\textrm{B}}$, and are respectively the smallest and largest collection of subsets of $X$ and $Y$ that makes $f_{\textrm{B}}$ *ini*(tial-fi)*nal continuous* [@Sengupta1997]*.* A bijective ininal function such as $f_{\textrm{B}}$ is known as a *homeomorphism* and ininality for functions that are neither $1:1$ nor onto is a generalization of homeomorphism for bijections; refer Eqs. (\[Eqn: INI\]) and (\[Eqn: HOM\]) for a set-theoretic formulation of this distinction. A homeomorphism $f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ renders the homeomorphic spaces $(X,\mathcal{U})$ and $(Y,\mathcal{V})$ topologically indistinguishable, and they may be considered identical as far as their topological properties are concerned. **Remark.** It may be of some interest here to speculate on the significance of *ininality* in our work. Physically, a map $f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ between two spaces can be taken to represent an interaction between them and the algebraic and topological characters of $f$ determine the nature of this interaction. A simple bijection merely sets up a correspondence, that is an interaction, between every member of $X$ and some member of $Y$, whereas a continuous map establishes the correspondence among the special category of “open” sets. 
Open sets, as we see in Appendix A1, are the basic ingredients in the theory of convergence of sequences, nets and filters, and the characterization of open sets in terms of convergence, namely that *a set $G$ in $X$ is open in it if every net or sequence that converges in $X$ to a point in $G$ is eventually in $G$*, may be interpreted to mean that such sets represent groupings of elements that require membership of the group before permitting an element to belong to it; an open set, unlike its complement the closed or *selfish* set, does not forbid a net that has been eventually in it to settle down in its selfish neighbour, who nonetheless will never allow such a situation to develop in its own territory. An ininal map forces these well-defined and definite groups in $(X,\mathcal{U})$ and $(Y,\mathcal{V})$ to interact with each other through $f$; this is not possible with simple continuity as there may be open sets in $X$ that are not derived from those of $Y$ and non-open sets in $Y$ whose inverse images are open in $X$. *It is our hypothesis that the driving force behind the evolution of a system represented by the input-output relation $f(x)=y$ is the attainment of the ininal triple state $(X,f,Y)$ for the system.* A preliminary analysis of this hypothesis is to be found in Sec. 4.2. For ininality of the interaction, it is therefore necessary to have$$\begin{aligned} \textrm{FT}\{\mathcal{U};f_{<}\} & = & \textrm{IT}\{ j;\mathcal{V}\}\label{Eqn: ininal}\\ \textrm{IT}\{\,_{<}f;\mathcal{V}\} & = & \textrm{FT}\{\mathcal{U};p\};\nonumber \end{aligned}$$ in what follows we will refer to the injective and surjective restrictions of $f$ by their generic topological symbols of embedding $e$ and association $q$ respectively. What are the topological characteristics of $f$ in order that the requirements of Eq. (\[Eqn: ininal\]) be met? From Appendix A1, it should be clear by superposing the two parts of Fig. 
\[Fig: Initial-Final\] over each other that given $q\!:(X,\mathcal{U})\rightarrow(f(X),\textrm{FT}\{\mathcal{U};q\})$ in the first of these equations, $\textrm{IT}\{ j;\mathcal{V}\}$ will equal $\textrm{FT}\{\mathcal{U};q\}$ iff $j$ is an ininal open inclusion and $Y$ receives $\textrm{FT}\{\mathcal{U};f\}$. In a similar manner, preimage continuity of $e$ requires $p$ to be open ininal and $f$ to be preimage continuous if the second of Eq. (\[Eqn: ininal\]) is to be satisfied. Thus under the restrictions imposed by Eq. (\[Eqn: ininal\]), the interaction $f$ between $X$ and $Y$ must be such as to give $X$ the smallest possible topology of $f$-saturated sets and $Y$ the largest possible topology of images of all these sets: $f$, under these conditions, is an ininal transformation. Observe that a direct application of parts (b) of Theorems A2.1 and A2.2 to Fig. \[Fig: GenInv\] implies that Eq. (\[Eqn: ininal\]) is satisfied iff $f_{\textrm{B}}$ is ininal, that is iff it is a homeomorphism. Ininality of $f$ is simply a reflection of this as it is neither $1:1$ nor onto. The $f$- and $p$-images of each saturated set of $X$ are singletons in $Y$ (these saturated sets in $X$ arose, in the first place, as $f^{-}(\{ y\})$ for $y\in Y$) and in $X_{\textrm{B}}$ respectively. This permits the embedding $e=j\circ f_{\textrm{B}}$ to give $X_{\textrm{B}}$ the character of a virtual subspace of $Y$ just as $j$ makes $f(X)$ a real subspace. Hence the inverse images $p^{-}(x_{r})=f^{-}(e(x_{r}))$ with $x_{r}\in X_{\textrm{B}}$, and $q^{-}(y)=f^{-}(j(y))$ with $y=f_{\textrm{B}}(x_{r})\in f(X)$ are the same, and are just the corresponding $f^{-}$ images via the injections $e$ and $j$ respectively. $G$, a left inverse of $e$, is a generalized inverse of $f$. $G$ is a generalized inverse because the two set-theoretic defining requirements of $fGf=f$ and $GfG=G$ for the generalized inverse are satisfied, as Fig. 
\[Fig: GenInv\] shows, in the following forms $$jf_{\textrm{B}}Gf=f\qquad Gjf_{\textrm{B}}G=G.$$ In fact the commutativity embodied in these equalities is self-evident from the fact that $G$ is a left inverse of $e=jf_{\textrm{B}}$, that is $Ge=\bold1_{X_{\textrm{B}}}$. Putting $X_{\textrm{B}}$ back into $X$ by identifying each point of $X_{\textrm{B}}$ with the set it came from yields the required set-valued inverse $f^{-}$, and $G$ may be viewed as a functional selection of the multiinverse $f^{-}$. An *injective branch* of a function $f$ in this work refers to the restriction $f_{\mathrm{B}}$ and its associated inverse $f_{\mathrm{B}}^{-1}$. The following example of an inverse ill-posed problem will be useful in fixing the notations introduced above. Let $f$ on $[0,1]$ be the function of Fig. \[Fig: gen-inv\]. Then $f(x)=y$ is well-posed for $[0,1/4)$, and ill-posed in $[1/4,1]$. There are two injective branches of $f$ in $[1/4,3/8)\bigcup(5/8,1]$, and $f$ is constant, hence ill-posed, in $[3/8,5/8]$. Hence the basic component $f_{\textrm{B}}$ of $f$ can be taken to be $f_{\textrm{B}}(x)=2x$ for $x\in[0,3/8]$ having the inverse $f_{\textrm{B}}^{-1}(y)=y/2$ with $y\in[0,3/4]$. The generalized inverse is obtained by taking $[0,3/4]$ as a subspace of $[0,1]$, while the multiinverse $f^{-}$ follows by associating with every point of the basic domain $[0,1]_{\textrm{B}}=[0,3/8]$, the respective equivalent points $[3/8]_{f}=[3/8,5/8]$ and $[x]_{f}=\{ x,7/4-3x\}\textrm{ for }x\in[1/4,3/8)$. Thus the inverses $G$ and $f^{-}$ of $f$ are[^17] $$G(y)=\left\{ \begin{array}{ccl} y/2, & & y\in[0,3/4]\\ 0, & & y\in(3/4,1]\end{array}\right.,\quad f^{-}(y)=\left\{ \begin{array}{ccl} y/2, & & y\in[0,1/2)\\ \{ y/2,7/4-3y/2\}, & & y\in[1/2,3/4)\\ {}[3/8,5/8], & & y=3/4\\ 0, & & y\in(3/4,1],\end{array}\right.$$ which shows that $f^{-}$ is multivalued. 
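These inverses can be checked pointwise. The following Python sketch uses exact rational arithmetic; since Fig. \[Fig: gen-inv\] is not reproduced here, the decreasing branch of $f$ on $(5/8,1]$ is inferred from the stated multiinverse as $(7-4x)/6$, and should be treated as an assumption.

```python
from fractions import Fraction as F

def f(x):
    # A realization of the function of Fig. [Fig: gen-inv]: 2x on [0, 3/8],
    # constant 3/4 on [3/8, 5/8], and a decreasing branch on (5/8, 1].
    # The formula (7 - 4x)/6 for the last branch is inferred (an assumption)
    # from the stated inverse branch 7/4 - 3y/2.
    if x <= F(3, 8):
        return 2 * x
    if x <= F(5, 8):
        return F(3, 4)
    return (7 - 4 * x) / 6

def G(y):
    # generalized inverse: the basic inverse y/2 on [0, 3/4], 0 elsewhere
    return y / 2 if y <= F(3, 4) else F(0)

def f_inv(y):
    # the multiinverse f^- as a set-valued map; for y = 3/4 only the
    # endpoints of the full preimage interval [3/8, 5/8] are listed
    if y < F(1, 2):
        return {y / 2}
    if y < F(3, 4):
        return {y / 2, F(7, 4) - 3 * y / 2}
    if y == F(3, 4):
        return {F(3, 8), F(5, 8)}
    return {F(0)}

# the defining identities of a generalized inverse: f G f = f, G f G = G
pts = [F(k, 16) for k in range(17)]
assert all(f(G(f(x))) == f(x) for x in pts)
assert all(G(f(G(y))) == G(y) for y in pts)
# every value of the multiinverse is a genuine preimage
assert all(f(z) == y for y in pts if y <= F(3, 4) for z in f_inv(y))
```

The identities hold on the whole interval; in particular $f^{-}(3/4)$ contains both endpoints $3/8$ and $5/8$ of the constant segment, which is precisely where $f^{-}$ fails to be a function.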
In order to avoid cumbersome notations, an injective branch of $f$ will always refer to a representative basic branch $f_{\textrm{B}}$, and its “inverse” will mean either $f_{\textrm{B}}^{-1}$ or $G$. **Example 2.3, Revisited.** The row reduced echelon form of the augmented matrix $(A|b)$ of Example 2.3 is $${\displaystyle (A|b)\longrightarrow\left(\begin{array}{rrrrrcl} 1 & -3 & 0 & 3/2 & 1/2 & & 5b_{1}/2-b_{2}/2\\ 0 & 0 & 1 & -1/4 & 3/4 & & -3b_{1}/4+b_{2}/4\\ 0 & 0 & 0 & 0 & 0 & & -2b_{1}+b_{3}\\ 0 & 0 & 0 & 0 & 0 & & b_{1}-b_{2}+b_{4}\end{array}\right)}\label{Eqn: RowReduce}$$ The multifunctional solution $x=A^{-}b$, with $b$ any element of $Y=\mathbb{R}^{4}$ not necessarily in the image of $a$, is$$x=A^{-}b=Gb+x_{2}\left(\begin{array}{c} 3\\ 1\\ 0\\ 0\\ 0\end{array}\right)+x_{4}\left(\begin{array}{r} -3/2\\ 0\\ 1/4\\ 1\\ 0\end{array}\right)+x_{5}\left(\begin{array}{r} -1/2\\ 0\\ -3/4\\ 0\\ 1\end{array}\right),$$ with its multifunctional character arising from the arbitrariness of the coefficients $x_{2}$, $x_{4},$ and $x_{5}$. 
The generalized inverse $$G=\left(\begin{array}{rrrr} 5/2 & -1/2 & 0 & 0\\ 0 & 0 & 0 & 0\\ -3/4 & 1/4 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right)\!:Y\rightarrow X_{\textrm{B}}\label{Eqn: GenInvEx5}$$ is the unique matrix representation of the functional inverse $a_{\textrm{B}}^{-1}\!:a(\mathbb{R}^{5})\rightarrow X_{\textrm{B}}$ extended to $Y$ defined according to[^18] $$g(b)=\left\{ \begin{array}{ccl} a_{\textrm{B}}^{-1}(b), & & \textrm{ if }b\in\mathcal{R}(a)\\ 0, & & \textrm{ if }b\in Y-\mathcal{R}(a),\end{array}\right.\label{Eqn: Def: GenInv}$$ that bears comparison with the basic inverse $$A_{\textrm{B}}^{-1}(b^{*})=\left(\begin{array}{rrrr} 5/2 & -1/2 & 0 & 0\\ 0 & 0 & 0 & 0\\ -3/4 & 1/4 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right)\left(\begin{array}{c} b_{1}\\ b_{2}\\ 2b_{1}\\ b_{2}-b_{1}\end{array}\right)\!:a(\mathbb{R}^{5})\rightarrow X_{\textrm{B}}$$ between the $2$-dimensional column and row spaces of $A$ which is responsible for the particular solution of $Ax=b$. Thus $G$ is simply $A_{\textrm{B}}^{-1}$ acting on its domain $a(X)$ considered a subspace of $Y$, suitably extended to the whole of $Y$. That it is indeed a generalized inverse is readily seen through the matrix multiplications $GAG$ and $AGA$ that can be verified to reproduce $G$ and $A$ respectively. Comparison of Eqs. (\[Eqn: Def: Moore-Penrose\]) and (\[Eqn: Def: GenInv\]) shows that the Moore-Penrose inverse differs from ours through the geometrical constraints imposed in its definition, Eqs. (\[Eqn: MPInverse\]). 
Of course, this results in a more complex inverse (\[Eqn: MPEx5\]) as compared to our very simple (\[Eqn: GenInvEx5\]); nevertheless it is true that both the inverses satisfy $$\begin{aligned} E((E(G_{\textrm{MP}}))^{\textrm{T}}) & = & \left(\begin{array}{ccccc} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\end{array}\right)\\ \\ & = & E((E(G))^{\textrm{T}})\end{aligned}$$ where $E(A)$ is the row-reduced echelon form of $A$. The canonical simplicity of Eq. (\[Eqn: GenInvEx5\]) as compared to Eq. (\[Eqn: MPEx5\]) is a general feature that suggests a more natural choice of bases by the map $a$ than the orthogonal set imposed by Moore and Penrose. This is to be expected since the MP inverse, governed by Eq. (\[Eqn: MPInverse\]), is a special case of our less restricted inverse described by only the first two of (\[Eqn: MPInverse\]); more specifically the difference is made clear in Fig. \[Fig: MP\_Inverse\](a) which shows that for any $b\notin\mathcal{R}(A)$, only $G_{\textrm{MP}}(b_{\bot})=0$ as compared to $G(b)=0$. This seems to imply that introducing extraneous topological considerations into the purely set theoretic inversion process may not be a recommended way of inverting, and the simple bases comprising the row and null spaces of $A$ and $A^{\textrm{T}}$ — that are mutually orthogonal just as those of the Moore-Penrose — are a better choice for the particular problem $Ax=b$ than the general orthonormal bases that the MP inverse introduces. These “good” bases, with respect to which the generalized inverse $G$ has a considerably simpler representation, are obtained in a straightforward manner from the row-reduced forms of $A$ and $A^{\textrm{T}}$. 
These bases are \(a) The column space of $A$ is spanned by the columns $(1,\textrm{ }3,\textrm{ }2,\textrm{ }2)^{\textrm{T}}$ and $(1,\textrm{ }5,\textrm{ }2,\textrm{ }4)^{\textrm{T}}$ of $A$ that correspond to the basic columns containing the leading $1$’s in the row-reduced form of $A$, \(b) The null space of $A^{\textrm{T}}$ is spanned by the solutions $(-2,\textrm{ }0,\textrm{ }1,\textrm{ }0)^{\textrm{T}}$ and $(1,-1,\textrm{ }0,\textrm{ }1)^{\textrm{T}}$ of the equation $A^{\textrm{T}}b=0$, \(c) The row space of $A$ is spanned by the rows $(1,-3,\textrm{ }2,\textrm{ }1,\textrm{ }2)$ and $(3,-9,\textrm{ }10,\textrm{ }2,\textrm{ }9)$ of $A$ corresponding to the non-zero rows in the row-reduced form of $A$, \(d) The null space of $A$ is spanned by the solutions $(3,\textrm{ }1,\textrm{ }0,\textrm{ }0,\textrm{ }0)$, $(-6,\textrm{ }0,\textrm{ }1,\textrm{ }4,\textrm{ }0)$, and $(-2,\textrm{ }0,-3,\textrm{ }0,\textrm{ }4)$ of the equation $Ax=0$.$\qquad\blacksquare$ The main differences between the natural “good” bases and the MP-bases that are responsible for the difference in form of the inverses are that the latter have the additional restrictions of being orthogonal to each other (recall the orthogonality property of the $Q$-matrices), and the more severe one of basis vectors mapping onto basis vectors according to $Ax_{i}=\sigma_{i}b_{i}$, $i=1,\cdots,r$, where the $\{ x_{i}\}_{i=1}^{n}$ and $\{ b_{j}\}_{j=1}^{m}$ are the eigenvectors of $A^{\textrm{T}}A$ and $AA^{\textrm{T}}$ respectively and $(\sigma_{i})_{i=1}^{r}$ are the positive square roots of the non-zero eigenvalues of $A^{\textrm{T}}A$ (or of $AA^{\textrm{T}}$), with $r$ denoting the dimension of the row or column space. This is a serious restriction: were it absent, $Ax_{i}$ would be a general linear combination of the basis $\{ b_{j}\}$, allowing a greater flexibility in the matrix representation of the inverse that shows up in the structure of $G$. 
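The identities $AGA=A$ and $GAG=G$, as well as the null-space basis of item (d), can be verified by direct computation. The matrix $A$ itself is not displayed in this excerpt; in the sketch below it is reconstructed from the stated row-space basis together with the dependencies $b_{3}=2b_{1}$, $b_{4}=b_{2}-b_{1}$ (so rows 3 and 4 are $2\cdot$row 1 and row 2 $-$ row 1), and this reconstruction is an assumption.

```python
from fractions import Fraction as F

def matmul(P, Q):
    # plain triple-loop matrix product over exact rationals
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

# A reconstructed from items (c) and (b): rows 1, 2 are the row-space
# basis; row 3 = 2*row1 and row 4 = row2 - row1 (an assumption)
A = [[F(v) for v in row] for row in
     [[1, -3, 2, 1, 2],
      [3, -9, 10, 2, 9],
      [2, -6, 4, 2, 4],
      [2, -6, 8, 1, 7]]]

# the generalized inverse of Eq. (GenInvEx5)
G = [[F(5, 2), F(-1, 2), 0, 0],
     [0, 0, 0, 0],
     [F(-3, 4), F(1, 4), 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]

print(matmul(matmul(A, G), A) == A,   # A G A = A
      matmul(matmul(G, A), G) == G)   # G A G = G  -> True True

# the stated null-space basis of A (item (d)) is annihilated by A
for v in ([3, 1, 0, 0, 0], [-6, 0, 1, 4, 0], [-2, 0, -3, 0, 4]):
    assert all(sum(A[i][k] * v[k] for k in range(5)) == 0 for i in range(4))
```

With this reconstruction, $GA$ reproduces the two non-zero rows of the echelon form (\[Eqn: RowReduce\]), which is why both defining identities come out exactly.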
These are, in fact, quite general considerations in the matrix representation of linear operators; thus the basis that diagonalizes an $n\times n$ matrix (when this is possible) is not the standard “diagonal” orthonormal basis of $\mathbb{R}^{n}$, but a problem-dependent, less canonical, basis consisting of the $n$ eigenvectors of the matrix. The $0$-rows of the inverse of Eq. (\[Eqn: GenInvEx5\]) result from the $3$-dimensional null-space variables $x_{2}$, $x_{4}$, and $x_{5}$, while the $0$-columns come from the $2$-dimensional image-space dependency of $b_{3}$, $b_{4}$ on $b_{1}$ and $b_{2}$, that is from the last two zero rows of the reduced echelon form (\[Eqn: RowReduce\]) of the augmented matrix. We will return to this theme of the generation of a most appropriate problem-dependent topology for a given space in the more general context of chaos in Sec. 4.2. In concluding this introduction to generalized inverses we note that the inverse $G$ of $f$ comes very close to being a right inverse: thus even though $AG\neq\bold1_{4}$ its row-reduced form $$\left(\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right)$$ is to be compared with the corresponding less satisfactory $$\left(\begin{array}{cccr} 1 & 0 & 2 & -1\\ 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{array}\right)$$ representation of $AG_{\textrm{MP}}$. **3. Multifunctional extension of function spaces** The previous section has considered the solution of ill-posed problems as multifunctions and has shown how this solution may be constructed. Here we introduce the multifunction space $\textrm{Multi}_{\mid}(X)$ as the first step toward obtaining a smallest dense extension $\textrm{Multi}(X)$ of the function space $\textrm{Map}(X)$. $\textrm{Multi}_{\mid}(X)$ is basic to our theory of chaos [@Sengupta2000] in the sense that a chaotic state of a system can be fully described by such an indeterminate multifunctional state. 
In fact, multifunctions also enter in a natural way in describing the spectrum of nonlinear functions that we consider in Section 6; this is required to complete the construction of the smallest extension $\textrm{Multi}(X)$ of the function space $\textrm{Map}(X)$. The main tool in obtaining the space $\textrm{Multi}_{\mid}(X)$ from $\textrm{Map}(X)$ is a generalization of the technique of pointwise convergence of continuous functions to (discontinuous) functions. In the analysis below, we consider nets instead of sequences as the spaces concerned, like the topology of pointwise convergence, may not be first countable, Appendix A1. ***3.1. Graphical convergence of a net of functions*** Let $(X,\mathcal{U})$ and $(Y,\mathcal{V})$ be Hausdorff spaces and $(f_{\alpha})_{\alpha\in\mathbb{D}}:X\rightarrow Y$ be a net of piecewise continuous functions, not necessarily with the same domain or range, and suppose that for each $\alpha\in\mathbb{D}$ there is a finite set $I_{\alpha}=\{1,2,\cdots P_{\alpha}\}$ such that $f_{\alpha}^{-}$ has $P_{\alpha}$ functional branches possibly with different domains; obviously $I_{\alpha}$ is a singleton iff $f_{\alpha}$ is injective. For each $\alpha\in\mathbb{D}$, define functions $(g_{\alpha i})_{i\in I_{\alpha}}\!:Y\rightarrow X$ such that $$f_{\alpha}g_{\alpha i}f_{\alpha}=f_{\alpha i}^{I}\qquad i=1,2,\cdots,P_{\alpha},$$ where $f_{\alpha i}^{I}$ is a basic injective branch of $f_{\alpha}$ on some subset of its domain: $g_{\alpha i}f_{\alpha i}^{I}=1_{X}$ on $\mathcal{D}(f_{\alpha i}^{I})$, $f_{\alpha i}^{I}g_{\alpha i}=1_{Y}$ on $\mathcal{D}(g_{\alpha i})$ for each $i\in I_{\alpha}$. The use of nets and filters is dictated by the fact that we do not assume $X$ and $Y$ to be first countable. In the application to the theory of dynamical systems that follows, $X$ and $Y$ are compact subsets of $\mathbb{R}$ when the use of sequences suffices. 
In terms of the residual and cofinal subsets $\textrm{Res}(\mathbb{D})$ and $\textrm{Cof}(\mathbb{D})$ of a directed set $\mathbb{D}$ (Def. A1.7), with $x$ and $y$ in the equations below being taken to belong to the required domains, define subsets $\mathcal{D}_{-}$ of $X$ and $\mathcal{R}_{-}$ of $Y$ as $$\mathcal{D}_{-}=\{ x\in X\!:((f_{\nu}(x))_{\nu\in\mathbb{D}}\textrm{ converges in }(Y,\mathcal{V}))\}\label{Eqn: D-}$$ $$\mathcal{R}_{-}=\{ y\in Y\!:\textrm{ }(\exists i\in I_{\nu})((g_{\nu i}(y))_{\nu\in\mathbb{D}}\textrm{ converges in }(X,\mathcal{U}))\}\label{Eqn: R-}$$ Thus: $\mathcal{D}_{-}$ is the set of points of $X$ on which the values of a given net of functions $(f_{\alpha})_{\alpha\in\mathbb{D}}$ converge pointwise in $Y$. Explicitly, this is the subset of $X$ on which subnets[^19] in $\textrm{Map}(X,Y)$ combine to form a net of functions that converge pointwise to a limit function $F:\mathcal{D}_{-}\rightarrow Y$. $\mathcal{R}_{-}$ is the set of points of $Y$ on which the values of the nets in $X$ generated by the injective branches of $(f_{\alpha})_{\alpha\in\mathbb{D}}$ converge pointwise in $Y$. Explicitly, this is the subset of $Y$ on which subnets of injective branches of $(f_{\alpha})_{\alpha\in\mathbb{D}}$ in $\textrm{Map}(Y,X)$ combine to form a net of functions that converge pointwise to a family of limit functions $G:\mathcal{R}_{-}\rightarrow X$. Depending on the nature of $(f_{\alpha})_{\alpha\in\mathbb{D}}$, there may be more than one $\mathcal{R}_{-}$ with a corresponding family of limit functions on each of them. To simplify the notation, we will usually let $G:\mathcal{R}_{-}\rightarrow X$ denote all the limit functions on all the sets $\mathcal{R}_{-}$. 
If we consider cofinal rather than residual subsets of $\mathbb{D}$ then the corresponding $\mathcal{D}_{+}$ and $\mathcal{R}_{+}$ can be expressed as $$\mathcal{D}_{+}=\{ x\in X\!:((f_{\nu}(x))_{\nu\in\textrm{Cof}(\mathbb{D})}\textrm{ converges in }(Y,\mathcal{V}))\}\label{Eqn: D+}$$ $$\mathcal{R}_{+}=\{ y\in Y\!:(\exists i\in I_{\nu})((g_{\nu i}(y))_{\nu\in\textrm{Cof}(\mathbb{D})}\textrm{ converges in }(X,\mathcal{U}))\}.\label{Eqn: R+}$$ It is to be noted that the conditions $\mathcal{D}_{+}=\mathcal{D}_{-}$ and $\mathcal{R}_{+}=\mathcal{R}_{-}$ are necessary and sufficient for the Kuratowski convergence to exist. Since $\mathcal{D}_{+}$ and $\mathcal{R}_{+}$ differ from $\mathcal{D}_{-}$ and $\mathcal{R}_{-}$ only in having cofinal subsets of $\mathbb{D}$ replaced by residual ones, and since residual sets are also cofinal, it follows that $\mathcal{D}_{-}\subseteq\mathcal{D}_{+}$ and $\mathcal{R}_{-}\subseteq\mathcal{R}_{+}$. The sets $\mathcal{D}_{-}$ and $\mathcal{R}_{-}$ serve for the convergence of a net of functions just as $\mathcal{D}_{+}$ and $\mathcal{R}_{+}$ do for the convergence of subnets of the nets (*adherence*). The latter sets are needed when subsequences are to be considered as sequences in their own right as, for example, in dynamical systems theory in the case of $\omega$-limit sets. As an illustration of these definitions, consider the sequence of injective functions on the interval $[0,1]$ given by $f_{n}(x)=2^{n}x$ for $x\in\left[0,1/2^{n}\right],\textrm{ }n=0,1,2\cdots$. Then $\mathbb{D}_{0.2}$, the set of indices $\nu$ for which $f_{\nu}$ is defined at $x=0.2$, is the set $\{0,1,2\}$ and only $\mathbb{D}_{0}$ is eventual in $\mathbb{D}$. Hence $\mathcal{D}_{-}$ is the single point set $\{0\}$. On the other hand $\mathbb{D}_{y}$ is eventual in $\mathbb{D}$ for all $y$ and $\mathcal{R}_{-}$ is $[0,1]$. 
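The $f_{n}(x)=2^{n}x$ illustration can be checked numerically; the sketch below truncates the directed set $\mathbb{D}=\{0,1,2,\cdots\}$ at a finite $N$, which is only an approximation to the limit process.

```python
N = 60  # truncation of the directed set D = {0, 1, 2, ...} (an approximation)

def D_x(x):
    # the indices nu for which f_nu(x) = 2**nu * x is defined, i.e. x <= 1/2**nu
    return [n for n in range(N) if x <= 1 / 2 ** n]

# D_{0.2} is the finite set {0, 1, 2}: not residual in D, so 0.2 is not in D_-
assert D_x(0.2) == [0, 1, 2]
# x = 0 lies in every domain and f_n(0) = 0 for all n, so D_- = {0}
assert len(D_x(0.0)) == N

# the injective-branch inverses g_n(y) = y / 2**n are defined for every
# y in [0, 1] and converge to 0, so R_- = [0, 1] with limit G(y) = 0
assert all(y / 2 ** (N - 1) < 1e-12 for y in (0.0, 0.5, 1.0))
```

The asymmetry between the two computations is the point of the example: the domains of $f_{n}$ shrink to $\{0\}$ while the domains of the inverses $g_{n}$ stay fixed at $[0,1]$.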
**Definition 3.1.** ***Graphical Convergence of a net of functions.*** *A net of functions $(f_{\alpha})_{\alpha\in\mathbb{D}}\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ is said to* *converge graphically* *if either $\mathcal{D}_{-}\neq\emptyset$ or $\mathcal{R}_{-}\neq\emptyset$; in this case let $F\!:\mathcal{D}_{-}\rightarrow Y$ and $G:\mathcal{R}_{-}\rightarrow X$ be the entire collection of limit functions. Because of the assumed Hausdorffness of $X$ and $Y$, these limits are well defined.* *The graph of the* *graphical limit* $\mathscr{M}$ *of the net* $(f_{\alpha})\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$*, denoted by* $f_{\alpha}\overset{\mathbf{G}}\longrightarrow\mathscr{M}$, *is the subset of $\mathcal{D}_{-}\times\mathcal{R}_{-}$ that is the union of the graphs of the function $F$ and the multifunction $G^{-}$ $$\mathbf{G}_{\mathscr{M}}=\mathbf{G}_{F}\bigcup\mathbf{G}_{G^{-}}$$* *where $$\mathbf{G}_{G^{-}}=\{(x,y)\in X\times Y\!:(y,x)\in\mathbf{G}_{G}\subseteq Y\times X\}.\qquad\square$$* ***Begin Tutorial6: Graphical Convergence*** The following two examples are basic to the understanding of the graphical convergence of functions to multifunctions and were the examples that motivated our search for an acceptable technique that did not require vertical portions of limit relations to disappear simply because they were non-functions: the disturbing question that needed an answer was how not to mathematically sacrifice these extremely significant physical components of the limiting correspondences. Furthermore, it appears to be quite plausible to expect a physical interaction between two spaces $X$ and $Y$ to be a consequence of both the direct interaction represented by $f\!:X\rightarrow Y$ and also the inverse interaction $f^{-}\!:Y\rightarrow X$, and our formulation of pointwise biconvergence is a formalization of this idea. 
Thus the basic examples (1) and (2) below produce multifunctions instead of discontinuous functions that would be obtained by the usual pointwise limit. **Example 3.1.** (1) $$f_{n}(x)=\left\{ \begin{array}{lc} 0 & -1\leq x\leq0\\ nx & 0\leq x\leq1/n\\ 1 & 1/n\leq x\leq1\end{array}\right.:\quad[-1,1]\rightarrow[0,1]$$ $$g_{n}(y)=y/n:\quad[0,1]\rightarrow[0,1/n]$$ Then $$F(x)=\left\{ \begin{array}{cc} 0 & -1\leq x\leq0\\ 1 & 0<x\leq1\end{array}\right.\qquad\mathrm{on}\qquad\mathcal{D}_{-}=\mathcal{D}_{+}=[-1,0]\bigcup(0,1]$$ $$G(y)=0\quad\mathrm{on}\quad\mathcal{R}_{-}=[0,1]=\mathcal{R}_{+}.$$ The graphical limit is $([-1,0],0)\bigcup(0,[0,1])\bigcup((0,1],1)$. \(2) $f_{n}(x)=nx$ for $x\in[0,1/n]$ gives $g_{n}(y)=y/n:[0,1]\rightarrow[0,1/n].$ Then $$F(x)=0\quad\mathrm{on}\quad\mathcal{D}_{-}=\{0\}=\mathcal{D}_{+},\qquad G(y)=0\quad\mathrm{on}\quad\mathcal{R}_{-}=[0,1]=\mathcal{R}_{+}.$$ The graphical limit is $(0,[0,1])$.$\qquad\blacksquare$ In these examples that we consider to be the prototypes of graphical convergence of functions to multifunctions, $G(y)=0$ on $\mathcal{R}_{-}$ because $g_{n}(y)\rightarrow0$ for all $y\in\mathcal{R}_{-}$. Compare the graphical multifunctional limits with the corresponding usual pointwise functional limits characterized by discontinuity at $x=0$. Two more examples from @Sengupta2000 that illustrate this new convergence principle tailored specifically to capture one-to-many relations are shown in Fig. \[Fig: Example2\_1\] which also provides an example in Fig. \[Fig: Example2\_1\](c) of a function whose iterates do not converge graphically because in this case both the sets $\mathcal{D}_{-}$ and $\mathcal{R}_{-}$ are empty. The power of graphical convergence in capturing multifunctional limits is further demonstrated by the example of the sequence $(\sin n\pi x)_{n=1}^{\infty}$ that converges to $0$ both $1$-integrally and test-functionally, Eqs. (\[Eqn: intsin\]) and (\[Eqn: testsin\]). 
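Example 3.1(1) can be probed numerically: a large index stands in for $n\rightarrow\infty$ in the sketch below, which recovers the step function $F$ on $\mathcal{D}_{-}$ and the zero limit $G$ on $\mathcal{R}_{-}$ whose inverse contributes the vertical piece $(0,[0,1])$ of the graphical limit.

```python
def f(n, x):
    # Example 3.1(1): f_n(x) = 0 on [-1,0], nx on [0,1/n], 1 on [1/n,1]
    return 0.0 if x <= 0 else min(n * x, 1.0)

def g(n, y):
    # the injective-branch inverses g_n(y) = y/n on [0, 1]
    return y / n

N = 10 ** 6  # a large index standing in for n -> infinity

# pointwise limit F on D_- = [-1,0] U (0,1]: the two-level step
assert f(N, -0.5) == 0.0 and f(N, 0.0) == 0.0   # F = 0 on [-1, 0]
assert f(N, 0.3) == 1.0 and f(N, 1.0) == 1.0    # F = 1 on (0, 1]

# pointwise limit G = 0 on R_- = [0, 1]; G^- gives the vertical
# segment (0, [0,1]) that the usual pointwise limit discards
assert all(abs(g(N, y)) <= 1e-6 for y in (0.0, 0.5, 1.0))
```

The pointwise limit alone would declare a discontinuous function with a jump at $x=0$; keeping the inverse branches is what turns the jump into the multifunction value $[0,1]$ at $0$.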
$\quad$(b) $F(x)=1$ on $\mathcal{D}_{-}=\{0\}$ and $G(y)=0$ on $\mathcal{R}_{-}=\{1\}$. Also $F(x)=-1/2,\textrm{ }0,\textrm{ }1,\textrm{ }3/2$ respectively on $\mathcal{D}_{+}=(0,3],\textrm{ }\{2\},\textrm{ }\{0\},\textrm{ }(0,2)$ and $G(y)=0,\textrm{ }0,\textrm{ }2,\textrm{ }3$ respectively on $\mathcal{R}_{+}=(-1/2,1],\textrm{ }[1,3/2),\textrm{ }[0,3/2),\textrm{ }[-1/2,0)$.

$\quad$(c) For $f(x)=-0.05+x-x^{2}$, there is no graphical limit as $\mathcal{D}_{-}=\emptyset=\mathcal{R}_{-}$.

$\quad$(d) For $f(x)=0.7+x-x^{2}$, $F(x)=\alpha$ on $\mathcal{D}_{-}=[a,c]$, $G_{1}(y)=a$ and $G_{2}(y)=c$ on $\mathcal{R}_{-}=(-\infty,\alpha]$. Notice how the two fixed points and their equivalent images define the converged limit rectangular multi. As in example (1), one has $\mathcal{D}_{-}=\mathcal{D}_{+}$; also $\mathcal{R}_{-}=\mathcal{R}_{+}$.

It is necessary to understand how the concepts of *eventually in* and *frequently in* of Appendix A2 apply in examples (a) and (b) of Fig. \[Fig: Example2\_1\]. In these two examples we have two subsequences, one for the even indices and the other for the odd. For a point-to-point functional relation, this would mean that the sequence frequents the adherence set $\textrm{adh}(x)$ of the sequence $(x_{n})$ but does not converge anywhere, as it is not eventually in every neighbourhood of any point.
For a multifunctional limit, however, it is possible, as demonstrated by these examples, for the subsequences to be eventually in every neighbourhood of certain *subsets* common to the eventual limiting sets of the subsequences; this intersection of the subsequential limits is now *defined to be the limit of the original sequence.* A similar situation obtains, for example, in the solution of simultaneous equations: the solution of the equation $a_{11}x_{1}+a_{12}x_{2}=b_{1}$ for one of the variables, $x_{2}$ say with $a_{12}\neq0$, is the set represented by the straight line $x_{2}=m_{1}x_{1}+c_{1}$ for all $x_{1}$ in its domain, while for a different set of constants $a_{21}$, $a_{22}$ and $b_{2}$ the solution is the entirely different set $x_{2}=m_{2}x_{1}+c_{2}$, under the assumption that $m_{1}\neq m_{2}$ and $c_{1}\neq c_{2}$. Thus even though the individual equations (subsequences) of the simultaneous set of equations (sequence) may have distinct solutions (limits), the solution of the equations is their common point of intersection. Considered as sets in $X\times Y$, the discussion of convergence of a sequence of graphs $f_{n}\!:X\rightarrow Y$ would be incomplete without a mention of the convergence of a sequence of sets under the Hausdorff metric, which is so basic in the study of fractals. In this case one talks about the convergence of a sequence of compact subsets of the metric space $\mathbb{R}^{n}$, so that the sequences, as also the limit points that are the fractals, are compact subsets of $\mathbb{R}^{n}$. Let $\mathcal{K}$ denote the collection of all nonempty compact subsets of $\mathbb{R}^{n}$. Then the *Hausdorff metric* $d_{\textrm{H}}$ between two sets of $\mathcal{K}$ is defined to be $$d_{\textrm{H}}(E,F)=\max\{\delta(E,F),\delta(F,E)\}\qquad E,F\in\mathcal{K},$$ where $$\delta(E,F)=\max_{x\in E}\textrm{ }\min_{y\in F}\Vert\mathbf{x-y}\Vert_{2}$$ is the non-symmetric separation of $E$ from $F$ measured in the Euclidean $2$-norm on $\mathbb{R}^{n}$.
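For finite subsets of $\mathbb{R}^{n}$ the two formulas above can be evaluated directly; a minimal sketch (Python, with the helper names `delta` and `d_H` ours) illustrates the asymmetry of $\delta$ and how taking the maximum symmetrizes it:

```python
import math

def delta(E, F):
    """Non-symmetric separation: max over x in E of the distance from x to F."""
    return max(min(math.dist(x, y) for y in F) for x in E)

def d_H(E, F):
    """Hausdorff metric d_H(E,F) = max{delta(E,F), delta(F,E)}."""
    return max(delta(E, F), delta(F, E))

E = [(0.0, 0.0), (1.0, 0.0)]
F = [(0.0, 0.0)]
assert delta(E, F) == 1.0      # the point (1,0) of E is at distance 1 from F
assert delta(F, E) == 0.0      # F sits inside E, so its separation vanishes
assert d_H(E, F) == 1.0        # the asymmetry is removed by taking the max
```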
The power and utility of the Hausdorff distance is best understood in terms of the dilations $E+\varepsilon:=\bigcup_{x\in E}D_{\varepsilon}(x)$ of a subset $E$ of $\mathbb{R}^{n}$ by $\varepsilon$, where $D_{\varepsilon}(x)$ is a closed ball of radius $\varepsilon$ at $x$; physically, a dilation of $E$ by $\varepsilon$ is a closed $\varepsilon$-neighbourhood of $E$. Then a fundamental property of $d_{\textrm{H}}$ is that $d_{\textrm{H}}(E,F)\leq\varepsilon$ iff both $E\subseteq F+\varepsilon$ and $F\subseteq E+\varepsilon$ hold simultaneously, which leads [@Falconer1990] to the interesting consequence that *If $(F_{n})_{n=1}^{\infty}$ and $F$ are nonempty compact sets, then $\lim_{n\rightarrow\infty}F_{n}=F$ in the Hausdorff metric iff $F_{n}\subseteq F+\varepsilon$ and $F\subseteq F_{n}+\varepsilon$ eventually. Furthermore if $(F_{n})_{n=1}^{\infty}$ is a decreasing sequence of elements of a filter-base in $\mathbb{R}^{n}$, then the nonempty and compact limit set $F$ is given by $$\lim_{n\rightarrow\infty}F_{n}=F=\bigcap_{n=1}^{\infty}F_{n}.$$* Note that since $\mathbb{R}^{n}$ is Hausdorff, the assumed compactness of $F_{n}$ ensures that they are also closed in $\mathbb{R}^{n}$; $F$, therefore, is just the adherent set of the filter-base. In the deterministic algorithm for the generation of fractals by the so-called iterated function system (IFS) approach, $F_{n}$ is the inverse image by the $n^{\textrm{th}}$ iterate of a non-injective function $f$ having a finite number of injective branches and converging graphically to a multifunction. Under the conditions stated above, the Hausdorff metric ensures convergence of any class of compact subsets in $\mathbb{R}^{n}$. It appears eminently plausible that our multifunctional graphical convergence on $\textrm{Map}(\mathbb{R}^{n})$ implies Hausdorff convergence on $\mathbb{R}^{n}$: in fact, pointwise biconvergence involves simultaneous convergence of image and preimage nets on $Y$ and $X$ respectively.
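The intersection formula for a decreasing sequence can be illustrated on a toy family: taking (our choice) finite truncations of $F_{n}=\{0\}\bigcup\{1/k\!:k\geq n\}$, one has $F=\bigcap F_{n}=\{0\}$ and $d_{\textrm{H}}(F_{n},F)=1/n\rightarrow0$, which the following sketch (all names ours) confirms on the line:

```python
def d_H(E, F):
    """Hausdorff distance between finite subsets of the real line."""
    sep = lambda A, B: max(min(abs(a - b) for b in B) for a in A)
    return max(sep(E, F), sep(F, E))

def F_n(n, K=1000):
    """Finite stand-in (our truncation) for {0} U {1/k : k >= n}."""
    return [0.0] + [1.0 / k for k in range(n, K)]

limit = [0.0]                        # F = intersection of the decreasing F_n
dists = [d_H(F_n(n), limit) for n in (1, 10, 100)]
assert dists == [1.0, 0.1, 0.01]     # d_H(F_n, F) = 1/n -> 0
assert dists == sorted(dists, reverse=True)   # monotone, as the F_n shrink
```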
Thus, confining ourselves to the simpler case of pointwise convergence, if $(f_{\alpha})_{\alpha\in\mathbb{D}}$ is a net of functions in $\textrm{Map}(X,Y)$, then the following theorem expresses the link between convergence in $\textrm{Map}(X,Y)$ and in $Y$. **Theorem 3.1.** *A net of functions* $(f_{\alpha})_{\alpha\in\mathbb{D}}$ *converges to a function* $f$ *in* $(\textrm{Map}(X,Y),\mathcal{T})$ *in the topology of pointwise convergence iff* $(f_{\alpha})$ *converges pointwise to $f$ in the sense that $f_{\alpha}(x)\rightarrow f(x)$ in $Y$ for every $x$ in $X$.$\qquad\square$* **Proof.** *Necessity.* First consider $f_{\alpha}\rightarrow f$ in $(\textrm{Map}(X,Y),\mathcal{T})$. For an open neighbourhood $V$ of $f(x)$ in $Y$ with $x\in X$, let $B(x;V)$ be a local neighbourhood of $f$ in $(\textrm{Map}(X,Y),\mathcal{T})$; see Eq. (\[Eqn: point\]) in Appendix A1. By the assumption of convergence, $(f_{\alpha})$ must eventually be in $B(x;V)$, implying that $f_{\alpha}(x)$ is eventually in $V$. Hence $f_{\alpha}(x)\rightarrow f(x)$ in $Y$. *Sufficiency.* Conversely, if $f_{\alpha}(x)\rightarrow f(x)$ in $Y$ for every $x\in X$, then for a *finite* collection of points $(x_{i})_{i=1}^{I}$ of $X$ ($X$ may itself be uncountable) and corresponding open sets $(V_{i})_{i=1}^{I}$ in $Y$ with $f(x_{i})\in V_{i}$, let $B((x_{i})_{i=1}^{I};(V_{i})_{i=1}^{I})$ be an open neighbourhood of $f$. From the assumed pointwise convergence $f_{\alpha}(x_{i})\rightarrow f(x_{i})$ in $Y$ for $i=1,2,\cdots,I$, it follows that $(f_{\alpha}(x_{i}))$ is eventually in $V_{i}$ for every $(x_{i})_{i=1}^{I}$. Because $\mathbb{D}$ is a directed set, the existence of a residual applicable globally for all $i=1,2,\cdots,I$ is assured, leading to the conclusion that $f_{\alpha}(x_{i})\in V_{i}$ eventually for every $i=1,2,\cdots,I$.
Hence $f_{\alpha}\in B((x_{i})_{i=1}^{I};(V_{i})_{i=1}^{I})$ eventually; this completes the demonstration that $f_{\alpha}\rightarrow f$ in $(\textrm{Map}(X,Y),\mathcal{T})$, and thus the proof.$\qquad\blacksquare$ ***End Tutorial6*** ***3.2. The Extension*** **Multi$_{\mid}$(*X,Y*)** ***of*** **Map(*X,Y*)** In this Section we show how the topological treatment of pointwise convergence of functions to functions given in Example A1.1 of Appendix 1 can be generalized to generate the boundary $\textrm{Multi}_{\mid}(X,Y)$ between $\textrm{Map}(X,Y)$ and $\textrm{Multi}(X,Y)$; here $X$ and $Y$ are Hausdorff spaces and $\textrm{Map}(X,Y)$ and $\textrm{Multi}(X,Y)$ are respectively the sets of all functional and non-functional relations between $X$ and $Y$. The generalization we seek defines neighbourhoods of $f\in\textrm{Map}(X,Y)$ to consist of those functional relations in $\textrm{Map}(X,Y)$ whose images at any point $x\in X$ lie not only arbitrarily close to $f(x)$ (this generates the usual topology of pointwise convergence $\mathcal{T}_{Y}$ of Example A1.1) but whose inverse images at $y=f(x)\in Y$ contain points arbitrarily close to $x$. Thus the graph of any such neighbouring function $g$ must not only lie close enough to $f(x)$ at $x$ in $V$, but must additionally be such that $g^{-}(y)$ has at least one branch in $U$ about $x$; thus $g$ is constrained to cling to $f$ as the number of points on the graph increases with convergence and, unlike in the situation of simple pointwise convergence, no gaps in the graph of the limit object are permitted, not only on the domain of $f$ as in Example A1.1, but simultaneously on its range too. We call the resulting generated topology the *topology of pointwise biconvergence on* $\textrm{Map}(X,Y)$, to be denoted by $\mathcal{T}$. Thus for any given integer $I\geq1$, the generalization of Eq.
(\[Eqn: point\]) gives for $i=1,2,\cdots,I$, the open sets of $(\textrm{Map}(X,Y),\mathcal{T})$ to be $$\begin{gathered} B((x_{i}),(V_{i});(y_{i}),(U_{i}))=\{ g\in\mathrm{Map}(X,Y)\!:\\ (g(x_{i})\in V_{i})\wedge(g^{-}(y_{i})\bigcap U_{i}\neq\emptyset)\textrm{ },i=1,2,\cdots,I\},\label{Eqn: func_bi}\end{gathered}$$ where $(x_{i})_{i=1}^{I},(V_{i})_{i=1}^{I}$ are as in that example, $(y_{i})_{i=1}^{I}\in Y$, and the corresponding open sets $(U_{i})_{i=1}^{I}$ in $X$ are chosen arbitrarily[^20]. A local base at $f$, for $(x_{i},y_{i})\in\mathbf{G}_{f}$, is the set of functions of (\[Eqn: func\_bi\]) with $y_{i}=f(x_{i})$, and the collection of all local bases $$B_{\alpha}=B((x_{i})_{i=1}^{I_{\alpha}},(V_{i})_{i=1}^{I_{\alpha}};(y_{i})_{i=1}^{I_{\alpha}},(U_{i})_{i=1}^{I_{\alpha}}),\label{Eqn: local_base}$$ for every choice of $\alpha\in\mathbb{D}$, is a base $_{\textrm{T}}\mathcal{B}$ of $(\textrm{Map}(X,Y),\mathcal{T})$. Here the directed set $\mathbb{D}$ is used as an indexing tool because, as pointed out in Example A1.1, the topology of pointwise convergence is not first countable. In a manner similar to Eq. (\[Eqn: func\_bi\]), the open sets of $(\mathrm{Multi}(X,Y),\widehat{\mathcal{T}})$, where $\textrm{Multi}(X,Y)$ consists of multifunctions with only countably many values in $Y$ for every point of $X$ (so that we exclude continuous regions from our discussion except for the “vertical lines” of $\textrm{Multi}_{\mid}(X,Y)$), can be defined by $$\begin{gathered} \widehat{B}((x_{i}),(V_{i});(y_{i}),(U_{i}))=\{\mathscr{G}\in\mathrm{Multi}(X,Y)\!:(\mathscr{G}(x_{i})\bigcap V_{i}\neq\emptyset)\wedge(\mathscr G^{-}(y_{i})\bigcap U_{i}\neq\emptyset)\}\label{Eqn: multi_bi}\end{gathered}$$ where $$\mathscr G^{-}(y)=\{ x\in X\!:y\in\mathscr{G}(x)\}$$ and $(x_{i})_{i=1}^{I}\in\mathcal{D}(\mathscr{G}),(V_{i})_{i=1}^{I};(y_{i})_{i=1}^{I}\in\mathcal{R}(\mathscr{G}),(U_{i})_{i=1}^{I}$ are chosen as above.
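Membership in a basic open set of Eq. (\[Eqn: func\_bi\]) can be tested approximately on a sampled grid; in the sketch below (all names ours, and the preimage condition checked only up to a grid tolerance) the two defining conditions $g(x_{i})\in V_{i}$ and $g^{-}(y_{i})\bigcap U_{i}\neq\emptyset$ are verified for a sample function:

```python
def in_basic_open(g, data, grid):
    """data: list of (x_i, V_i, y_i, U_i), with V_i and U_i open intervals (lo, hi).
    Grid-based approximation of membership in B((x_i),(V_i);(y_i),(U_i))."""
    for x_i, V_i, y_i, U_i in data:
        if not (V_i[0] < g(x_i) < V_i[1]):      # image condition: g(x_i) in V_i
            return False
        # preimage condition: g^-(y_i) meets U_i, tested on the grid
        if not any(U_i[0] < x < U_i[1] and abs(g(x) - y_i) < 1e-3 for x in grid):
            return False
    return True

grid = [i / 10000 for i in range(10001)]        # sample of X = [0,1]
g = lambda x: x * x
# near the graph point (0.5, 0.25) of g both conditions hold
assert in_basic_open(g, [(0.5, (0.2, 0.3), 0.25, (0.4, 0.6))], grid)
# y = 2 is outside the range of g, so the preimage condition fails
assert not in_basic_open(g, [(0.5, (0.2, 0.3), 2.0, (0.4, 0.6))], grid)
```

The second assertion shows how the inverse-image clause is an extra constraint beyond ordinary pointwise closeness.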
The topology $\widehat{\mathcal{T}}$ of $\textrm{Multi}(X,Y)$ is generated by the collection of all local bases $\widehat{B_{\alpha}}$ for every choice of $\alpha\in\mathbb{D}$, and it is not difficult to see from Eqs. (\[Eqn: func\_bi\]) and (\[Eqn: multi\_bi\]) that the restriction of $\widehat{\mathcal{T}}$ to $\textrm{Map}(X,Y)$ is just $\mathcal{T}$. Henceforth $\widehat{\mathcal{T}}$ and $\mathcal{T}$ will be denoted by the same symbol $\mathcal{T}$, and convergence in the topology of pointwise biconvergence in $(\textrm{Multi}(X,Y),\mathcal{T})$ will be denoted by $\rightrightarrows$, with the notation being derived from Theorem 3.1. **Definition 3.2.** ***Functionization of a multifunction.*** *A net of functions* $(f_{\alpha})_{\alpha\in\mathbb{D}}$ *in* $\textrm{Map}(X,Y)$ *converges in* $(\textrm{Multi}(X,Y),\mathcal{T})$, $f_{\alpha}\rightrightarrows\mathscr{M}$, *if it biconverges pointwise in* $(\textrm{Map}(X,Y),\mathcal{T})$. *Such a net of functions will be said to be a* *functionization of* $\mathscr{M}$*.$\qquad\square$* **Theorem 3.2.** *Let $(f_{\alpha})_{\alpha\in\mathbb{D}}$ be a net of functions in $\textrm{Map}(X,Y)$. Then $$f_{\alpha}\overset{\mathbf{G}}\longrightarrow\mathscr{M}\Longleftrightarrow f_{\alpha}\rightrightarrows\mathscr{M}.\qquad\square$$* **Proof.** If $(f_{\alpha})$ converges graphically to $\mathscr{M}$ then either $\mathcal{D}_{-}$ or $\mathcal{R}_{-}$ is nonempty; let us assume both of them to be so. Then the net of functions $(f_{\alpha})$ converges pointwise to a function $F$ on $\mathcal{D}_{-}$ and to a function $G$ on $\mathcal{R}_{-}$, and the local basic neighbourhoods of $F$ and $G$ generate the topology of pointwise biconvergence. Conversely, for pointwise biconvergence on $X$ and $Y$, $\mathcal{R}_{-}$ and $\mathcal{D}_{-}$ must be non-empty.$\qquad\blacksquare$ Observe that the boundary of $\textrm{Map}(X,Y)$ in the topology of pointwise biconvergence is a “line parallel to the $Y$-axis”.
We denote this closure of $\textrm{Map}(X,Y)$ as follows. **Definition 3.3.** $\textrm{Multi}_{\mid}((X,Y),\mathcal{T})=\mathrm{Cl}(\mathrm{Map}((X,Y),\mathcal{T})).$$\qquad\square$ The sense in which $\textrm{Multi}_{\mid}(X,Y)$ is the smallest closed topological extension of $M=\textrm{Map}(X,Y)$ is the following; refer to Thm. A1.4 and its proof. Let $(M,\mathcal{T}_{0})$ be a topological space and suppose that $${\textstyle \widehat{M}=M\bigcup\{\widehat{m}\}}$$ is obtained by adjoining an extra point to $M$; here $M=\textrm{Map}(X,Y)$ and $\widehat{m}\in\textrm{Cl}(M)$ is the multifunctional limit in $\widehat{M}=\textrm{Multi}_{\mid}(X,Y)$. Treat all open sets of $M$ generated by local bases of the type (\[Eqn: local\_base\]) with the finite intersection property as a filter-base $_{\textrm{F}}\mathcal{B}$ on $M$ that induces a filter $\mathcal{F}$ on $M$ (by forming supersets of all elements of $_{\textrm{F}}\mathcal{B}$; see Appendix A1) and thereby the filter-base $${\textstyle \widehat{_{\textrm{F}}\mathcal{B}}=\{\widehat{B}=B\bigcup\{\widehat{m}\}\!:B\in\,_{\textrm{F}}\mathcal{B}\}}$$ on $\widehat{M}$; this filter-base at $\widehat{m}$ can also be obtained independently from Eq. (\[Eqn: multi\_bi\]). Obviously $\widehat{_{\textrm{F}}\mathcal{B}}$ is an extension of $_{\textrm{F}}\mathcal{B}$ on $\widehat{M}$ and $_{\textrm{F}}\mathcal{B}$ is the filter-base induced on $M$ by $\widehat{_{\textrm{F}}\mathcal{B}}$. We may also consider the filter-base to be a topological base on $M$ that defines a coarser topology $\mathcal{T}$ on $M$ (through all unions of members of $_{\textrm{F}}\mathcal{B}$) and hence the topology $${\textstyle \widehat{\mathcal{T}}=\{\widehat{G}=G\bigcup\{\widehat{m}\}\!:G\in\mathcal{T}\}}$$ on $\widehat{M}$ to be the topology associated with $\widehat{\mathcal{F}}$. A finer topology on $\widehat{M}$ may be obtained by adding to $\widehat{\mathcal{T}}$ all the discarded elements of $\mathcal{T}_{0}$ that do not satisfy the finite intersection property.
It is clear that $\widehat{m}$ is on the boundary of $M$ because every neighbourhood of $\widehat{m}$ intersects $M$ by construction; thus $(M,\mathcal{T})$ is dense in $(\widehat{M},\widehat{\mathcal{T}})$, which is the required topological extension of $(M,\mathcal{T})$. In the present case, a filter-base at $f\in\mathrm{Map}(X,Y)$ is the neighbourhood system $_{\textrm{F}}\mathcal{B}_{f}$ at $f$ given by decreasing sequences of neighbourhoods $(V_{k})$ and $(U_{k})$ of $f(x)$ and $x$ respectively, and the filter $\widehat{\mathcal{F}}$ is the neighbourhood filter $\mathcal{N}_{f}\bigcup G$ where $G\in\textrm{Multi}_{\mid}(X,Y)$. We shall present an alternate, and perhaps more intuitively appealing, description of graphical convergence based on the adherence set of a filter in Sec. 4.1. As more serious examples of the graphical convergence of a net of functions to a multifunction than those considered above, Fig. \[Fig: tent4\] shows the first four iterates of the tent map $$t(x)=\left\{ \begin{array}{lc} 2x & 0\leq x<1/2\\ 2(1-x) & 1/2\leq x\leq1\end{array}\right.\qquad(t^{1}=t)$$ defined on $[0,1]$, and the sine map $f_{n}=|\sin(2^{n-1}\pi x)|,\; n=1,\cdots,4$ with domain $[0,1]$. These examples illustrate the important generalization that *periodic points may be replaced by the more general equivalence classes* where a sequence of functions converges graphically; this generalization, based on the ill-posed interpretation of dynamical systems, is significant for non-iterative systems as in the second example above. The equivalence classes of the tent map for its two fixed points $0$ and $2/3$ generated by the first 4 iterates are $$[0]_{4}=\left\{ 0,\frac{1}{8},\frac{1}{4},\frac{3}{8},\frac{1}{2},\frac{5}{8},\frac{3}{4},\frac{7}{8},1\right\}$$ $$\left[\frac{2}{3}\right]_{4}=\left\{ c,\frac{1}{8}\mp c,\frac{1}{4}\mp c,\frac{3}{8}\mp c,\frac{1}{2}\mp c,\frac{5}{8}\mp c,\frac{3}{4}\mp c,\frac{7}{8}\mp c,1-c\right\}$$ where $c=1/24$.
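The two equivalence classes listed above can be generated mechanically by pulling the fixed points $0$ and $2/3$ back through four iterations of $t$; the sketch below (our implementation, using exact rational arithmetic to avoid round-off) reproduces both sets:

```python
from fractions import Fraction as Fr

def preimages(y):
    """t^{-1}(y) for the tent map t(x) = 2x on [0,1/2], 2(1-x) on [1/2,1]."""
    out = set()
    if 0 <= y / 2 <= Fr(1, 2):
        out.add(y / 2)              # branch 2x
    if Fr(1, 2) <= 1 - y / 2 <= 1:
        out.add(1 - y / 2)          # branch 2(1-x)
    return out

def cls(fixed_pt, k):
    """Equivalence class [fixed_pt]_k: the k-fold preimage under t."""
    s = {fixed_pt}
    for _ in range(k):
        s = set().union(*(preimages(y) for y in s))
    return s

assert cls(Fr(0), 4) == {Fr(i, 8) for i in range(9)}        # 9 = 2^3 + 1 points
c = Fr(1, 24)
expected = {c, 1 - c} | {Fr(j, 8) + s * c for j in range(1, 8) for s in (1, -1)}
assert cls(Fr(2, 3), 4) == expected                          # 16 points, c = 1/24
```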
If the moduli of the slopes of the graphs passing through these equivalent fixed points are greater than $1$, then the graphs converge to multifunctions, and when these slopes are less than $1$ the corresponding graphs converge to constant functions. It is to be noted that the number of equivalent fixed points in a class increases with the number of iterations $k$ as $2^{k-1}+1$; this *increase in the degree of ill-posedness is typical of discrete chaotic systems and can be regarded as a paradigm of chaos generated by* *the convergence of a family of functions.* The $m^{\textrm{th}}$ iterate $t^{m}$ of the tent map has $2^{m}$ fixed points corresponding to the $2^{m}$ injective branches of $t^{m}$ $$x_{mj}=\left\{ \begin{array}{ll} {\displaystyle \frac{j-1}{2^{m}-1}}, & j=1,3,\cdots,(2^{m}-1),\\ {\displaystyle \frac{j}{2^{m}+1}}, & j=2,4,\cdots,2^{m},\end{array}\right.\qquad t^{m}(x_{mj})=x_{mj},\textrm{ }j=1,2,\cdots,2^{m}.$$ Let $X_{m}$ be the collection of these $2^{m}$ fixed points (thus $X_{1}=\{0,2/3\}$), and denote by $[X_{m}]$ the set of the equivalent points, one coming from each of the injective branches, for each of the fixed points: thus $$\begin{array}{crcl} \mathcal{D}_{-}= & [X_{1}] & = & \{[0],[2/3]\}\\ & [X_{2}] & = & \{[0],[2/5],[2/3],[4/5]\}\end{array}$$ and $\mathcal{D}_{+}=\bigcup_{m=1}^{\infty}[X_{m}]$ is a nonempty countable set dense in $X$ at each point of which the graphs of the sequence $(t^{m})$ converge to a multifunction. New sets $[X_{n}]$ will be formed by subsequences of the higher iterates $t^{n}$ for $m=in$ with $i=1,2,\cdots$, for which these points remain fixed.
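The closed form for the $2^{m}$ fixed points $x_{mj}$ can be verified directly; a sketch with exact rationals (helper names ours) checks both the fixed-point property and the count:

```python
from fractions import Fraction as Fr

def t(x):
    """Tent map on [0,1]."""
    return 2 * x if x < Fr(1, 2) else 2 * (1 - x)

def t_iter(m, x):
    for _ in range(m):
        x = t(x)
    return x

def fixed_points(m):
    """x_mj = (j-1)/(2^m - 1) for odd j, j/(2^m + 1) for even j."""
    return [Fr(j - 1, 2**m - 1) if j % 2 else Fr(j, 2**m + 1)
            for j in range(1, 2**m + 1)]

for m in (1, 2, 3, 4):
    pts = fixed_points(m)
    assert len(set(pts)) == 2**m                     # 2^m distinct fixed points
    assert all(t_iter(m, x) == x for x in pts)       # t^m(x_mj) = x_mj
assert fixed_points(1) == [Fr(0), Fr(2, 3)]          # X_1 = {0, 2/3}
```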
For example, the fixed points $2/5$ and $4/5$, produced respectively by the second and fourth injective branches of $t^{2}$, are also fixed for the seventh and thirteenth branches of $t^{4}$. For the shift map $2x\;\textrm{mod}(1)$ on $[0,1]$, $\mathcal{D}_{-}=\{[0],[1]\}$ where $[0]=\bigcap_{m=1}^{\infty}\{(i-1)/2^{m}\!:i=1,2,\cdots,2^{m}\}$ and $[1]=\bigcap_{m=1}^{\infty}\{ i/2^{m}\!:i=1,2,\cdots,2^{m}\}$. It is useful to compare the graphical convergence of $(\sin(\pi nx))_{n=1}^{\infty}$ to $[0,1]$ at $0$ and to $0$ at $1$ with the usual integral and test-functional convergences to $0$; note that the point $1/2$, for example, belongs to $\mathcal{D}_{+}$ and not to $\mathcal{D}_{-}=\{0,1\}$ because it is frequented by even $n$ only. However for the subsequence $(f_{2^{m-1}})_{m\in\mathbb{Z}_{+}}$, $1/2$ is in $\mathcal{D}_{-}$ because if the graph of $f_{2^{m-1}}$ passes through $(1/2,0)$ for some $m$, then so do the graphs for all higher values of $m$. Therefore $[0]=\bigcap_{m=1}^{\infty}\{ i/2^{m-1}\!:i=0,1,\cdots,2^{m-1}\}$ is the equivalence class of $(f_{2^{m-1}})_{m=1}^{\infty}$ and this sequence converges to $[-1,1]$ on this set. Thus our extension $\textrm{Multi}(X)$ is distinct from the distributional extension of function spaces with respect to test functions, and is able to correctly generate the pathological behaviour of the limits that are so crucially vital in producing chaos. **4. Discrete chaotic systems are maximally ill-posed** The above ideas apply to the development of a criterion for chaos in discrete dynamical systems that is based on the limiting behaviour of the graphs of a sequence of functions $(f_{n})$ on $X$, rather than on the values that the sequence generates, as is customary. For the development of the maximality-of-ill-posedness criterion of chaos, we need to refresh ourselves with the following preliminaries.
***Resume Tutorial5: Axiom of Choice and Zorn’s Lemma*** Let us recall from the first part of this Tutorial that for nonempty subsets $(A_{\alpha})_{\alpha\in\mathbb{D}}$ of a nonempty set $X$, the Axiom of Choice ensures the existence of a set $A$ such that $A\bigcap A_{\alpha}$ consists of a single element for every $\alpha$. The choice axiom has far-reaching consequences and a few equivalent statements, one of which, Zorn’s lemma, to be used immediately in the following, is the topic of this resumed Tutorial. The beauty of the Axiom, and of its equivalents, is that they assert the existence of mathematical objects that, in general, cannot be explicitly demonstrated, and it is often believed that Zorn’s lemma is one of the most powerful tools that a mathematician has available, being “almost indispensable in many parts of modern pure mathematics”, with significant applications in nearly all branches of contemporary mathematics. This “lemma” talks about maximal (as distinct from “maximum”) elements of a partially ordered set, a set in which some notion of $x_{1}$ “preceding” $x_{2}$ for two elements of the set has been defined. A relation $\preceq$ on a set $X$ is said to be a *partial order* (or simply an *order*) if it is (compare with the properties (ER1)–(ER3) of an equivalence relation, Tutorial1)

(OR1) Reflexive, that is $(\forall x\in X)(x\preceq x)$.

(OR2) Antisymmetric: $(\forall x,y\in X)(x\preceq y\wedge y\preceq x\Longrightarrow x=y)$.

(OR3) Transitive, that is $(\forall x,y,z\in X)(x\preceq y\wedge y\preceq z\Longrightarrow x\preceq z)$.

Any notion of order on a set $X$ in the sense of one element of $X$ preceding another should possess at least this property. The relation is a *preorder* $\precsim$ if it is only reflexive and transitive, that is if only (OR1) and (OR3) are true.
If the hypothesis of (OR2) is also satisfied by a preorder, then this $\precsim$ induces an equivalence relation $\sim$ on $X$ according to $(x\precsim y)\wedge(y\precsim x)\Leftrightarrow x\sim y$, which is actually a partial order iff $x\sim y\Leftrightarrow x=y$. For any element $[x]\in X/\sim$ of the induced quotient space, let $\leq$ denote the generated order in $X/\sim$ so that $$x\precsim y\Longleftrightarrow[x]\leq[y];$$ then $\leq$ is a partial order on $X/\sim$. If every two elements of $X$ are *comparable*, in the sense that either $x_{1}\preceq x_{2}$ or $x_{2}\preceq x_{1}$ for all $x_{1},x_{2}\in X$, then $X$ is said to be a *totally ordered set* or a *chain.* A totally ordered subset $(C,\preceq)$ of a partially ordered set $(X,\preceq)$, with the ordering induced from $X$, is known as a *chain in $X$* if $$C=\{ x\in X\!:(\forall c\in X)(c\preceq x\vee x\preceq c)\}.\label{Eqn: chain}$$ The most important class of chains that we are concerned with in this work is that on the subsets $\mathcal{P}(X)$ of a set $X$ under the inclusion order; Eq. (\[Eqn: chain\]), as we shall see in what follows, defines a family of chains of nested subsets in $\mathcal{P}(X)$. Thus while the relation $\precsim$ in $\mathbb{Z}$ defined by $n_{1}\precsim n_{2}\Leftrightarrow\mid n_{1}\mid\,\leq\,\mid n_{2}\mid$ with $n_{1},n_{2}\in\mathbb{Z}$ preorders $\mathbb{Z}$, it is not a partial order because although $-n\precsim n\textrm{ and }n\precsim-n$ for any $n\in\mathbb{Z}$, it does not follow that $-n=n$.
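The distinction between a preorder and a partial order is easily exhibited on a finite sample; the sketch below (helper names ours) checks (OR1)–(OR3) for the relation $n_{1}\precsim n_{2}\Leftrightarrow\mid n_{1}\mid\,\leq\,\mid n_{2}\mid$ on a slice of $\mathbb{Z}$:

```python
def is_reflexive(X, rel):
    return all(rel(x, x) for x in X)

def is_antisymmetric(X, rel):
    return all(not (rel(x, y) and rel(y, x)) or x == y for x in X for y in X)

def is_transitive(X, rel):
    return all(not (rel(x, y) and rel(y, z)) or rel(x, z)
               for x in X for y in X for z in X)

Z = list(range(-3, 4))
mod_leq = lambda a, b: abs(a) <= abs(b)        # |n1| <= |n2|

assert is_reflexive(Z, mod_leq) and is_transitive(Z, mod_leq)   # a preorder...
assert not is_antisymmetric(Z, mod_leq)    # ...but -n and n violate (OR2)
# ordinary <= on the same slice is a genuine (total) order
assert is_antisymmetric(Z, lambda a, b: a <= b)
```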
A common example of partial order on a set of sets, for example on the power set $\mathcal{P}(X)$ of a set $X$ (see footnote \[Foot: notation\]), is the inclusion relation $\subseteq$: the ordered set $\mathcal{X}=(\mathcal{P}(\{ x,y,z\}),\subseteq)$ is partially ordered but not totally ordered because, for example, neither of $\{ x,y\}$ and $\{ y,z\}$ contains the other, and $\{ x\}$ is not comparable to $\{ y\}$ unless $x=y$; however $C=\{\emptyset,\{ x\},\{ x,y\}\}$ does represent one of the many possible chains of $\mathcal{X}$. Another useful example of partial order is the following: Let $X$ and $(Y,\leq)$ be sets with $\leq$ ordering $Y$, and consider $f,g\in\textrm{Map}(X,Y)$ with $\mathcal{D}(f),\mathcal{D}(g)\subseteq X$. Then $$\begin{aligned} (\mathcal{D}(f)\subseteq\mathcal{D}(g))(f=g|_{\mathcal{D}(f)}) & \Longleftrightarrow & f\preceq g\nonumber \\ (\mathcal{D}(f)=\mathcal{D}(g))(\mathcal{R}(f)\subseteq\mathcal{R}(g)) & \Longleftrightarrow & f\preceq g\label{Eqn: FunctionOrder}\\ (\forall x\in\mathcal{D}(f)=\mathcal{D}(g))\textrm{ }(f(x)\leq g(x)) & \Longleftrightarrow & f\preceq g\nonumber \end{aligned}$$ define partial orders on $\textrm{Map}(X,Y)$. In the last case, the order is not total because any two functions whose graphs cross at some point in their common domain cannot be ordered by the given relation, while in the first any $f$ whose graph does not coincide with that of $g$ on the common domain is not comparable to it by this relation. Let $(X,\preceq)$ be a partially ordered set and let $A$ be a subset of $X$. An element $a_{+}\in(A,\preceq)$ is said to be a *maximal* element of $A$ with respect to $\preceq$ if $$(\forall a\in(A,\preceq))(a_{+}\preceq a)\Longrightarrow\textrm{ }a=a_{+},\label{Eqn: maximal}$$ that is iff there is no $a\in A$ with $a\neq a_{+}$ and $a\succ a_{+}$[^21].
Expressed otherwise, this implies that an element $a_{+}$ of a subset $A\subseteq(X,\preceq)$ is maximal in $(A,\preceq)$ iff it is true that $$(a\preceq a_{+}\in A)\textrm{ }(\textrm{for every }a\in(A,\preceq)\textrm{ comparable to }a_{+});\label{Eqn: maximal1}$$ thus $a_{+}$ in $A$ is a maximal element of $A$ iff it is strictly greater than every *other comparable* element of $A$. This of course does not mean that each element $a$ of $A$ satisfies $a\preceq a_{+}$ because every pair of elements of a partially ordered set need not be comparable: in a totally ordered set there can be at most one maximal element. In comparison, an element $a_{\infty}$ of a subset $A\subseteq(X,\preceq)$ is *the* unique *maximum* (*largest, greatest, last*) element of $A$ iff $$(a\preceq a_{\infty}\in A)\textrm{ }(\textrm{for every }a\in(A,\preceq)),\label{Eqn: maximum}$$ implying that $a_{\infty}$ is *the* element of $A$ that is strictly larger than every other element of $A$. As in the case of the maximal, although this also does not require all elements of $A$ to be comparable to each other, it does require $a_{\infty}$ to be larger than every element of $A$. The dual concepts of minimal and minimum can be similarly defined by essentially reversing the roles of $a$ and $b$ in relational expressions like $a\preceq b$. The last concept needed to formalize Zorn’s lemma is that of an upper bound: For a subset $(A,\preceq)$ of a partially ordered set $(X,\preceq)$, an element $u$ of $X$ is an *upper bound of* $A$ *in* $X$ iff $$(a\preceq u\in(X,\preceq))\textrm{ }(\textrm{for every }a\in(A,\preceq))\label{Eqn: upper bound}$$ which requires the upper bound $u$ to be larger than all members of $A$, with the corresponding lower bounds of $A$ being defined in a similar manner. Of course, it is again not necessary that the elements of $A$ be comparable to each other, and it should be clear from Eqs. 
(\[Eqn: maximum\]) and (\[Eqn: upper bound\]) that when an upper bound of a set is in the set itself, then it is the maximum element of the set. If the upper (lower) bounds of a subset $(A,\preceq)$ of a set $(X,\preceq)$ have a least (greatest) element, then this smallest upper bound (largest lower bound) is called *the* *least upper bound* (*greatest lower* *bound*) or *supremum* (*infimum*) *of $A$ in $X$*. Combining Eqs. (\[Eqn: maximum\]) and (\[Eqn: upper bound\]) then yields $$\begin{array}{rcl} {\displaystyle \sup_{X}A} & = & \{ a_{\leftarrow}\in\Omega_{A}\!:a_{\leftarrow}\preceq u\textrm{ }\forall\textrm{ }u\in(\Omega_{A},\preceq)\}\\ {\displaystyle \inf_{X}A} & = & \{_{\rightarrow}a\in\Lambda_{A}\!:l\preceq\,_{\rightarrow}a\textrm{ }\forall\textrm{ }l\in(\Lambda_{A},\preceq)\}\end{array}\label{Eqn: supinf1}$$ where $\Omega_{A}=\{\textrm{ }u\in X\!:(\forall\textrm{ }a\in A)(a\preceq u)\}$ and $\Lambda_{A}=\{ l\in X\!:(\forall\textrm{ }a\in A)(l\preceq a)\}$ are the sets of all upper and lower bounds of $A$ in $X$. Equation (\[Eqn: supinf1\]) may be expressed in the equivalent but more transparent form $$\begin{array}{c} {\displaystyle a_{\leftarrow}={\displaystyle \sup_{X}A}\Longleftrightarrow(a\in A\Rightarrow a\preceq a_{\leftarrow})\wedge(a_{0}\prec a_{\leftarrow}\Rightarrow a_{0}\prec b\preceq a_{\leftarrow}\textrm{ for some }b\in A)}\\ _{\rightarrow}a={\displaystyle \inf_{X}A}\Longleftrightarrow(a\in A\Rightarrow\,_{\rightarrow}a\preceq a)\wedge(_{\rightarrow}a\prec a_{1}\Rightarrow\,_{\rightarrow}a\preceq b\prec a_{1}\textrm{ for some }b\in A)\end{array}\label{Eqn: supinf2}$$ to imply that $a_{\leftarrow}$ ($_{\rightarrow}a$) is *the* upper (lower) bound of $A$ in $X$ which precedes (succeeds) every other upper (lower) bound of $A$ in $X$. Notice that uniqueness in the definitions above is a direct consequence of the uniqueness of greatest and least elements of a set.
It must be noted that whereas maximal and maximum are properties of the particular subset and have nothing to do with anything outside it, upper and lower bounds of a set are defined only with respect to a superset that may contain it. The following example, besides being useful in Zorn’s lemma, is also of great significance in fixing some of the basic ideas needed in our future arguments involving classes of sets ordered by the inclusion relation. **Example 4.1.** Let $\mathcal{X}=\mathcal{P}(\{ a,b,c\})$ be ordered by the inclusion relation $\subseteq$. The subset $\mathcal{A}=\mathcal{P}(\{ a,b,c\})-\{\{ a,b,c\}\}$ has three maximals $\{ a,b\}$, $\{ b,c\}$ and $\{ c,a\}$ but no maximum, as there is no $A_{\infty}\in\mathcal{A}$ satisfying $A\preceq A_{\infty}$ for every $A\in\mathcal{A}$, while $\mathcal{P}(\{ a,b,c\})-\{\emptyset\}$ has the three minimals $\{ a\}$, $\{ b\}$, and $\{ c\}$ but no minimum. This shows that a subset of a partially ordered set may have many maximals (minimals) without possessing a maximum (minimum), but a subset has a maximum (minimum) iff this is its unique maximal (minimal). If $\mathcal{A}=\{\{ a,b\},\{ a,c\}\}$, then both subsets of the intersection of the elements of $\mathcal{A}$, namely $\{ a\}$ and $\emptyset$, are lower bounds of $\mathcal{A}$, and all supersets in $\mathcal{X}$ of the union of its elements (which in this case is just $\{ a,b,c\}$) are its upper bounds. Notice that while the maximals (minimals) and maximum (minimum) are elements of $\mathcal{A}$, upper and lower bounds need not be contained in their sets. In this class $(\mathcal{X},\subseteq)$ of subsets of a set $X$, $X_{+}$ is a maximal element of $\mathcal{X}$ iff $X_{+}$ is not contained in any other subset of $X$, while $X_{\infty}$ is a maximum of $\mathcal{X}$ iff $X_{\infty}$ contains every other subset of $X$.
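The claims of Example 4.1 can be verified mechanically; in the sketch below (our encoding of $(\mathcal{X},\subseteq)$ by frozensets) the maximals, minimals and bounds are enumerated by brute force:

```python
from itertools import combinations

def powerset(S):
    S = sorted(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

X = powerset({"a", "b", "c"})
A = [s for s in X if s != frozenset({"a", "b", "c"})]       # P({a,b,c}) less {a,b,c}

# three maximals (the 2-element sets) but no maximum
maximals = [m for m in A if not any(m < s for s in A)]
assert sorted(map(sorted, maximals)) == [["a", "b"], ["a", "c"], ["b", "c"]]
assert not any(all(s <= m for s in A) for m in A)           # no maximum

# P({a,b,c}) less the empty set: three minimals, no minimum
B = [s for s in X if s != frozenset()]
minimals = [m for m in B if not any(s < m for s in B)]
assert sorted(map(sorted, minimals)) == [["a"], ["b"], ["c"]]
assert not any(all(m <= s for s in B) for m in B)           # no minimum

# lower and upper bounds of {{a,b},{a,c}} in (X, inclusion order)
C = [frozenset({"a", "b"}), frozenset({"a", "c"})]
lower = [l for l in X if all(l <= c for c in C)]
upper = [u for u in X if all(c <= u for c in C)]
assert sorted(map(sorted, lower)) == [[], ["a"]]            # subsets of {a}
assert sorted(map(sorted, upper)) == [["a", "b", "c"]]      # supersets of the union
```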
Let $\mathcal{A}:=\{ A_{\alpha}\in\mathcal{X}\}_{\alpha\in\mathbb{D}}$ be a nonempty subclass of $(\mathcal{X},\subseteq)$, and suppose that both $\bigcup A_{\alpha}$ and $\bigcap A_{\alpha}$ are elements of $\mathcal{X}$. Since each $A_{\alpha}$ is $\subseteq$-less than $\bigcup A_{\alpha}$, it follows that $\bigcup A_{\alpha}$ is an upper bound of $\mathcal{A}$; this is also the smallest of all such bounds because if $U$ is any other upper bound then every $A_{\alpha}$ must precede $U$ by Eq. (\[Eqn: upper bound\]) and therefore so must $\bigcup A_{\alpha}$ (because the union of a class of subsets of a set is the smallest set that contains each member of the class: $A_{\alpha}\subseteq U\Rightarrow\bigcup A_{\alpha}\subseteq U$ for subsets $(A_{\alpha})$ and $U$ of $X$). Analogously, since $\bigcap A_{\alpha}$ is $\subseteq$-less than each $A_{\alpha}$, it is a lower bound of $\mathcal{A}$; that it is the greatest of all the lower bounds $L$ in $\mathcal{X}$ follows because the intersection of a class of subsets is the largest set that is contained in each of the subsets: $L\subseteq A_{\alpha}\Rightarrow L\subseteq\bigcap A_{\alpha}$ for subsets $L$ and $(A_{\alpha})$ of $X$. Hence the supremum and infimum of $\mathcal{A}$ in $(\mathcal{X},\subseteq)$, given by $$A_{\leftarrow}=\sup_{(\mathcal{X},\subseteq)}\mathcal{A}=\bigcup_{A\in\mathcal{A}}A\qquad\textrm{and}\qquad_{\rightarrow}A=\inf_{(\mathcal{X},\subseteq)}\mathcal{A}=\bigcap_{A\in\mathcal{A}}A,\label{Eqn: supinf3}$$ are both elements of $(\mathcal{X},\subseteq)$. Intuitively, an upper (respectively, lower) bound of $\mathcal{A}$ in $\mathcal{X}$ is any member of $\mathcal{X}$ that contains (respectively, is contained in) every member of $\mathcal{A}$.$\qquad\blacksquare$ The statement of Zorn’s lemma and its proof can now be completed in three stages as follows.
For Theorem 4.1 below that constitutes the most significant technical first stage, let $g$ be a function on $(X,\preceq)$ that assigns to every $x\in X$ an *immediate successor* $y\in X$ such that $${\textstyle \mathscr{M}(x)=\{\textrm{ }y\succ x\!:\not\exists\textrm{ }x_{*}\in X\textrm{ satisfying }x\prec x_{*}\prec y\}}$$ is the set of all successors of $x$ in $X$ with no element of $X$ lying strictly between $x$ and $y$. Select a representative of $\mathscr{M}(x)$ by a choice function $f_{\textrm{C}}$ such that $$g(x)=f_{\textrm{C}}(\mathscr{M}(x))\in\mathscr{M}(x)$$ is an immediate successor of $x$ chosen from the many possible in the set $\mathscr{M}(x)$. The basic idea in the proof of the first of the three parts is to express the existence of a maximal element of a partially ordered set $X$ in terms of the existence of a fixed point in the set, which follows as a contradiction of the assumed hypothesis that every point in $X$ has an immediate successor. Our basic application of immediate successors in the following will be to classes $\mathcal{X}\subseteq(\mathcal{P}(X),\subseteq)$ of subsets of a set $X$ ordered by inclusion. In this case for any $A\in\mathcal{X}$, the function $g$ can be taken to be the superset $${\textstyle g(A)=A\bigcup\{ f_{\textrm{C}}(\mathscr{G}(A))\},\quad\textrm{where }\mathscr{G}(A)=\{ x\in X-A\!:A\bigcup\{ x\}\in\mathcal{X}\}}\label{Eqn: FilterTower}$$ of $A$. Repeated application of $g$ to $A$ then generates a principal filter, and hence an associated sequence, based at $A$. **Theorem 4.1.** *Let $(X,\preceq)$ be a partially ordered set that satisfies* (ST1) *There is a smallest element $x_{0}$ of $X$ which has no immediate predecessor in $X$.* (ST2) *If $C\subseteq X$ is a totally ordered subset of $X$, then $c_{*}=\sup_{X}C$ is in $X$.* *Then there exists a maximal element $x_{+}$ of $X$ which has no immediate successor in $X$.*$\qquad\square$ **Proof.** Let $T\subseteq(X,\preceq)$ be a subset of $X$. 
If the conclusion of the theorem is false then the alternative (ST3) *Every element $x\in T$ has an immediate successor $g(x)$ in $T$*[^22] leads, as shown below, to a contradiction that can be resolved only by the conclusion of the theorem. A subset $T$ of $(X,\preceq)$ satisfying conditions (ST1)$-$(ST3) is sometimes known as a $g$*-tower* or a $g$*-sequence:* an obvious example of a tower is $(X,\preceq)$ itself. If $${\textstyle _{\rightarrow}T=\bigcap\{ T\in\mathcal{T}\!:T\textrm{ is an }x_{0}-\textrm{tower}\}}$$ is the $(\mathcal{P}(X),\subseteq)$-infimum of the class $\mathcal{T}$ of all sequential towers of $(X,\preceq)$, we show that this smallest sequential tower is in fact a *sequential totally ordered chain* in $(X,\preceq)$ built from $x_{0}$ by the $g$-function. Let the subset $$C_{\textrm{T}}=\{ c\in X\!:(\forall t\in\,_{\rightarrow}T)(t\preceq c\vee c\preceq t)\}\subseteq X\label{Eqn: tower-chain}$$ of $X$ be a $g$-chain in $_{\rightarrow}T$ in the sense that (cf. Eq. (\[Eqn: chain\])) it is that subset of $X$ each of whose elements is comparable with every element of $_{\rightarrow}T$. The conditions (ST1)$-$(ST3) for $C_{\textrm{T}}$ can be verified as follows to demonstrate that $C_{\textrm{T}}$ is a $g$-tower. \(1) $x_{0}\in C_{\textrm{T}}$, because it is less than each $x\in\,_{\rightarrow}T$. \(2) Let $c_{\leftarrow}=\sup_{X}C_{\textrm{T}}$ be the supremum of the chain $C_{\textrm{T}}$ in $X$ so that by (ST2), $c_{\leftarrow}\in X$. Let $t\in\,_{\rightarrow}T$. If there is *some* $c\in C_{\textrm{T}}$ such that $t\preceq c$, then surely $t\preceq c_{\leftarrow}$. Else, $c\preceq t$ for *every* $c\in C_{\textrm{T}}$ makes $t$ an upper bound of $C_{\textrm{T}}$, so that $c_{\leftarrow}\preceq t$ because $c_{\leftarrow}$ is the smallest of all such upper bounds. Therefore $c_{\leftarrow}\in C_{\textrm{T}}$. 
\(3) In order to show that $g(c)\in C_{\textrm{T}}$ whenever $c\in C_{\textrm{T}}$ it needs to be verified that for all $t\in\,_{\rightarrow}T$, either $t\preceq c\Rightarrow t\preceq g(c)$ or $c\preceq t\Rightarrow g(c)\preceq t$. As the former is clearly obvious, we investigate the latter as follows; note that $g(t)\in\,_{\rightarrow}T$ by (ST3). The first step is to show that the subset $$C_{g}=\{ t\in\,_{\rightarrow}T\!:(\forall c\in C_{\textrm{T}})(t\preceq c\vee g(c)\preceq t)\}\label{Eqn: chain_g}$$ of $_{\rightarrow}T$, which is a chain in $X$ (observe the inverse roles of $t$ and $c$ here as compared to that in Eq. (\[Eqn: tower-chain\])), is a tower: Let $t_{\leftarrow}$ be the supremum of $C_{g}$ and take $c\in C_{\textrm{T}}$. If there is *some* $t\in C_{g}$ for which $g(c)\preceq t$, then clearly $g(c)\preceq t_{\leftarrow}$. Else, $t\preceq c$ for *each* $t\in C_{g}$ makes $c$ an upper bound of $C_{g}$, so that $t_{\leftarrow}\preceq c$ because $t_{\leftarrow}$ is the smallest of all such upper bounds. Hence $t_{\leftarrow}\in C_{g}$. Property (ST3) for $C_{g}$ follows from a small yet significant modification of the above arguments in which the immediate successor $g(t)$ of $t\in C_{g}$ formally replaces the supremum $t_{\leftarrow}$ of $C_{g}$. Thus given a $c\in C_{\textrm{T}}$, if there is *some* $t\in C_{g}$ for which $g(c)\preceq t$ then $g(c)\prec g(t)$; this combined with $(c=t)\Rightarrow(g(c)=g(t))$ yields $g(c)\preceq g(t)$. On the other hand, $t\prec c$ for *every* $t\in C_{g}$ requires $g(t)\preceq c$ as otherwise $(t\prec c)\Rightarrow(c\prec g(t))$ would, from the resulting consequence $t\prec c\prec g(t)$, contradict the assumed hypothesis that $g(t)$ is the immediate successor of $t$. Hence, $C_{g}$ is a $g$-tower in $X$. To complete the proof that $g(c)\in C_{\textrm{T}}$, and thereby the argument that $C_{\textrm{T}}$ is a tower, we first note that as $_{\rightarrow}T$ is the smallest tower and $C_{g}\subseteq\,_{\rightarrow}T$ is itself a tower, $C_{g}$ must in fact be $_{\rightarrow}T$ itself. From Eq. 
(\[Eqn: chain\_g\]) therefore, for every $t\in\,_{\rightarrow}T$ either $t\preceq g(c)$ or $g(c)\preceq t$, so that $g(c)\in C_{\textrm{T}}$ whenever $c\in C_{\textrm{T}}$. This concludes the proof that $C_{\textrm{T}}$ is actually the tower $_{\rightarrow}T$ in $X$. From (ST2), the implication of the chain $C_{\textrm{T}}$ $$C_{\textrm{T}}=\,_{\rightarrow}T=C_{g}\label{Eqn: ChainedTower}$$ being the minimal tower $_{\rightarrow}T$ is that the supremum $t_{\leftarrow}$ of the totally ordered $_{\rightarrow}T$ *in its own tower* (as distinct from in the tower $X$: recall that $_{\rightarrow}T$ is a subset of $X$) must be contained in itself, that is $$\sup_{C_{\textrm{T}}}(C_{\textrm{T}})=t_{\leftarrow}\in\,_{\rightarrow}T\subseteq X.\label{Eqn: sup chain}$$ This however leads to the contradiction from (ST3) that $g(t_{\leftarrow})$ be an element of $_{\rightarrow}T$, unless of course $$g(t_{\leftarrow})=t_{\leftarrow},\label{Eqn: fixed point}$$ which because of (\[Eqn: ChainedTower\]) may also be expressed equivalently as $g(c_{\leftarrow})=c_{\leftarrow}\in C_{\textrm{T}}$. As the sequential totally ordered set $_{\rightarrow}T$ is a subset of $X$, Eq. (\[Eqn: fixed point\]) implies that $t_{\leftarrow}$ is a maximal element of $X$, which allows (ST3) to be replaced by the remarkable inverse criterion $(\textrm{ST}3^{\prime})$ If $x\in T$ and $w$ precedes $x$, $w\prec x$, then $w\in T$, a criterion that is obviously false for a general tower $T$. In fact, it follows directly from Eq. (\[Eqn: maximal\]) that under $(\textrm{ST}3^{\prime})$ *any $x_{+}\in X$ is a maximal element of $X$ iff it is a fixed point of $g$* as given by Eq. (\[Eqn: fixed point\]). 
This proves the theorem and also demonstrates how, starting from a minimum element of a partially ordered set $X$, (ST3) can be used to generate inductively a totally ordered sequential subset of $X$ *leading to a maximal $x_{+}=c_{\leftarrow}\in(X,\preceq)$ that is a fixed point of the generating function $g$* *whenever the supremum* $t_{\leftarrow}$ *of the chain $_{\rightarrow}T$ is in* $X$.$\qquad\blacksquare$ **Remarks.** The proof of this theorem, despite its apparent length and technically involved character, carries the highly significant underlying message that *Any inductive sequential $g$-construction of an infinite chained tower* $C_{\textrm{T}}$ *starting with a smallest element $x_{0}\in(X,\preceq)$, such that a supremum $c_{\leftarrow}$ of the $g$-generated sequential chain* $C_{\textrm{T}}$ *in its own tower is contained in itself, must necessarily terminate with a fixed-point relation of the type* (\[Eqn: fixed point\]) *with respect to the supremum. Note from Eqs. (\[Eqn: sup chain\]) and (\[Eqn: fixed point\]) that the role of* (ST2) *applied to a fully ordered tower is the identification of the maximal of the tower (which depends only on the tower and has nothing to do with anything outside it) with its supremum, which depends both on the tower and its complement.* Thus although purely set-theoretic in nature, the filter-base associated with a sequentially totally ordered set may be interpreted to lead to the usual notions of adherence and convergence of filters and thereby of a generated topology for $(X,\preceq)$; see Appendix A1 and Example A1.3. This very significant apparent interrelation between topologies, filters and orderings will form the basis of our approach to the condition of maximal ill-posedness for chaos. 
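The fixed-point termination (\[Eqn: fixed point\]) that drives the proof can be made concrete for the inclusion-ordered classes to which the theorem will be applied: iterating the successor function $g$ of Eq. (\[Eqn: FilterTower\]) from the smallest element generates a totally ordered tower that stops exactly when $g(A)=A$. The sketch below assumes the simplest case in which the class $\mathcal{X}$ is all of $\mathcal{P}(X)$, so that $\mathscr{G}(A)=X-A$, and takes the choice function $f_{\textrm{C}}$ to be `min`; the helper names are ours:

```python
def g(A, X, choice=min):
    """Immediate successor of A in (P(X), ⊆): adjoin one chosen point of X - A.
    When X - A is empty there is no successor, and g(A) = A is a fixed point."""
    G = X - A              # the set G(A) of candidate points
    return A if not G else A | {choice(G)}

X = frozenset({'a', 'b', 'c'})
tower = [frozenset()]                    # start from the smallest element ∅
while g(tower[-1], X) != tower[-1]:
    tower.append(g(tower[-1], X))

# ∅ ⊂ {a} ⊂ {a,b} ⊂ {a,b,c}: a totally ordered chain capped by the fixed point
print([sorted(A) for A in tower])
assert g(tower[-1], X) == tower[-1] and tower[-1] == X
```

The iteration terminates at the maximal element $X$ of $(\mathcal{P}(X),\subseteq)$, the only set at which $g$ has a fixed point, mirroring Eq. (\[Eqn: fixed point\]).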
In the second stage of the three-stage programme leading to Zorn’s lemma, the tower Theorem 4.1 and the comments of the preceding paragraph are applied at one higher level to a very special class of the power set of a set, the class of all the chains of a partially ordered set, to lead directly to the physically significant **Theorem 4.2.** **Hausdorff Maximal Principle.** *Every partially ordered set $(X,\preceq)$ has a maximal totally ordered subset*.[^23]$\qquad\square$ **Proof.** Here the base level $$\mathcal{X}=\{ C\in\mathcal{P}(X)\!:C\textrm{ is a chain in }(X,\preceq)\}\subseteq\mathcal{P}(X)\label{Eqn: Hausdorff}$$ is the set of all the totally ordered subsets of $(X,\preceq)$. Since $\mathcal{X}$ is a collection of (sub)sets of $X$, we order it by the inclusion relation on $\mathcal{X}$ and use the tower Theorem to demonstrate that $(\mathcal{X},\subseteq)$ has a maximal element $C_{\leftarrow}$, which by the definition of $\mathcal{X}$, is the required maximal chain in $(X,\preceq)$. Let $\mathcal{C}$ be a chain in $\mathcal{X}$ of the chains in $(X,\preceq)$. In order to apply the tower Theorem to $(\mathcal{X},\subseteq)$ we need to verify hypothesis (ST2) that the smallest $$C_{*}=\sup_{\mathcal{X}}\mathcal{C}=\bigcup_{C\in\mathcal{C}}C\label{Eqn: HausdorffChain}$$ of the possible upper bounds of $\mathcal{C}$ (see Eq. (\[Eqn: supinf3\])) is a chain of $(X,\preceq)$. Indeed, if $x_{1},x_{2}\in X$ are two points of $C_{*}$ with $x_{1}\in C_{1}$ and $x_{2}\in C_{2}$, then from the $\subseteq$-comparability of $C_{1}$ and $C_{2}$ we may choose $x_{1},x_{2}\in C_{1}\supseteq C_{2}$, say. Thus $x_{1}$ and $x_{2}$ are $\preceq$-comparable as $C_{1}$ is a chain in $(X,\preceq)$; $C_{*}\in\mathcal{X}$ is therefore a chain in $(X,\preceq)$, which establishes that the supremum of a chain of $(\mathcal{X},\subseteq)$ is a chain in $(X,\preceq)$. 
The tower Theorem 4.1 can now be applied to $(\mathcal{X},\subseteq)$ with $C_{0}$ as its smallest element to construct a $g$-sequentially towered fully ordered subset of $\mathcal{X}$ consisting of chains in $X$ $$\mathcal{C}_{\textrm{T}}=\{ C_{i}\in\mathcal{P}(X)\!:C_{i}\subseteq C_{j}\textrm{ for }i\leq j\in\mathbb{N}\}=\,_{\rightarrow}\mathcal{T}\subseteq\mathcal{P}(X)$$ of $(\mathcal{X},\subseteq)$ (consisting of the common elements of all $g$-sequential towers $\mathcal{T}\in\mathfrak{T}$ of $(\mathcal{X},\subseteq)$) that in fact is a principal filter base of chained subsets of $(X,\preceq)$ at $C_{0}$. The supremum (chain in $X$) $C_{\leftarrow}$ of $\mathcal{C}_{\textrm{T}}$ in $\mathcal{C}_{\textrm{T}}$ must now satisfy, by Thm. 4.1, the fixed-point $g$-chain relation of $X$ $$\sup_{\mathcal{C}_{\textrm{T}}}(\mathcal{C}_{\textrm{T}})=C_{\leftarrow}=g(C_{\leftarrow})\in\mathcal{C}_{\textrm{T}}\subseteq\mathcal{P}(X),$$ where the chain $g(C)=C\bigcup f_{\textrm{C}}(\mathscr{G}(C)-C)$ with $\mathscr{G}(C)=\{ x\in X-C\!:C\bigcup\{ x\}\in\mathcal{X}\}$, is an immediate successor of $C$ obtained by choosing one point $x=f_{\textrm{C}}(\mathscr{G}(C)-C)$ from the many possible in $\mathscr{G}(C)-C$ such that the resulting $g(C)=C\bigcup\{ x\}$ is a strict successor of the chain $C$ with no others lying between it and $C$. Note that $C_{\leftarrow}\in(\mathcal{X},\subseteq)$ is only one of the many maximal fully ordered subsets possible in $(X,\preceq)$.$\qquad\blacksquare$ With the assurance of the existence of a maximal chain $C_{\leftarrow}$ among all fully ordered subsets of a partially ordered set $(X,\preceq)$, the arguments are completed by returning to the basic level of $X$. **Theorem 4.3. Zorn’s Lemma.** *Let $(X,\preceq)$ be a partially ordered set such that every totally ordered subset of $X$ has an upper bound in $X$. 
Then $X$ has at least one maximal element with respect to its order.$\qquad\square$* **Proof.** The proof of this final part is a mere application of the Hausdorff Maximal Principle: the maximal chain $C_{\leftarrow}$ in $X$ has, by the hypothesis of this theorem, an upper bound $u$ in $X$, and this bound is readily identified as a maximal element $x_{+}$ of $X$. Indeed, if there is an element $v\in X$ that is comparable to $u$ and $v\not\preceq u$, then $v$ cannot be in $C_{\leftarrow}$, as it is necessary for every $x\in C_{\leftarrow}$ to satisfy $x\preceq u$. Clearly then $C_{\leftarrow}\bigcup\{ v\}$ is a chain in $(X,\preceq)$ bigger than $C_{\leftarrow}$, which contradicts the assumed maximality of $C_{\leftarrow}$ among the chains of $X$.$\qquad\blacksquare$ The sequence of steps leading to Zorn’s Lemma, and thence to the maximal of a partially ordered set, is summarised in Fig. \[Fig: Zorn\]. [(a) The one-level higher subset $\mathcal{X}=\{ C\in\mathcal{P}(X)\!:C\textrm{ is a chain in }(X,\preceq)\}$ of $\mathcal{P}(X)$ consisting of all the totally ordered subsets of $(X,\preceq)$, ]{} [(b) The smallest common $g$-sequential totally ordered towered chain $\mathcal{C}_{\textrm{T}}=\{ C_{i}\in\mathcal{P}(X)\!:C_{i}\subseteq C_{j}\textrm{ for }i\leq j\}\subseteq\mathcal{P}(X)$ of all sequential $g$-towers of $\mathcal{X}$ by Thm. 4.1, which in fact is a principal filter base of totally ordered subsets of $(X,\preceq)$ at the smallest element $C_{0}$. ]{} [(c) Apply the Hausdorff Maximal Principle to $(\mathcal{X},\subseteq)$ to get the subset $\sup_{\mathcal{C}_{\textrm{T}}}(\mathcal{C}_{\textrm{T}})=C_{\leftarrow}=g(C_{\leftarrow})\in\mathcal{C}_{\textrm{T}}\subseteq\mathcal{P}(X)$ of $(X,\preceq)$ as the supremum of $(\mathcal{X},\subseteq)$ in $\mathcal{C}_{\textrm{T}}$. The identification of this supremum as a maximal element of $(\mathcal{X},\subseteq)$ is a consequence of (ST2) and Eqs. 
(\[Eqn: sup chain\]), (\[Eqn: fixed point\]) that actually puts the supremum into $\mathcal{X}$ itself. ]{} [By returning to the original level $(X,\preceq)$ ]{} [(d) Zorn’s Lemma finally yields the required maximal element $u\in X$ as an upper bound of the maximal totally ordered subset $(C_{\leftarrow},\preceq)$ of $(X,\preceq)$. ]{} [The dashed segment denotes the higher Hausdorff $(\mathcal{X},\subseteq)$ level leading to the base $(X,\preceq)$ Zorn level. ]{} The three examples below of the application of Zorn’s Lemma clearly reflect the increasing complexity of the problem considered, with the maximals a point, a subset, and a set of subsets of $X$, so that these are elements of $X$, $\mathcal{P}(X)$, and $\mathcal{P}^{2}(X)$ respectively. **Example 4.2.** (1) Let $X=(\{ a,b,c\},\preceq)$ be a three-point base-level ground set ordered lexicographically, that is $a\prec b\prec c$. A chain $\mathcal{C}$ of the partially ordered Hausdorff-level set $\mathcal{X}$ consisting of subsets of $X$ given by Eq. (\[Eqn: Hausdorff\]) is, for example, $\{\{ a\},\{ a,b\}\}$ and the six $g$-sequential chained towers $$\begin{array}{c} \mathcal{C}_{1}=\{\emptyset,\{ a\},\{ a,b\},\{ a,b,c\}\},\qquad\mathcal{C}_{2}=\{\emptyset,\{ a\},\{ a,c\},\{ a,b,c\}\}\\ \mathcal{C}_{3}=\{\emptyset,\{ b\},\{ a,b\},\{ a,b,c\}\},\qquad\mathcal{C}_{4}=\{\emptyset,\{ b\},\{ b,c\},\{ a,b,c\}\}\\ \mathcal{C}_{5}=\{\emptyset,\{ c\},\{ a,c\},\{ a,b,c\}\},\qquad\mathcal{C}_{6}=\{\emptyset,\{ c\},\{ b,c\},\{ a,b,c\}\}\end{array}$$ built from the smallest element $\emptyset$ corresponding to the six distinct ways of reaching $\{ a,b,c\}$ from $\emptyset$ along the sides of the cube marked on the figure with solid lines, all belong to $\mathcal{X}$; see Fig. \[Fig: order\](b). 
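The six chained towers $\mathcal{C}_{1},\ldots,\mathcal{C}_{6}$ are precisely the $g$-towers obtained by adjoining the points of $\{a,b,c\}$ one at a time in each of the $3!$ possible orders; a short illustrative enumeration (helper names ours):

```python
from itertools import permutations

X = ('a', 'b', 'c')

# Each ordering of the points of X generates one g-sequential chained tower
# ∅ ⊂ {x1} ⊂ {x1,x2} ⊂ {x1,x2,x3}, one point being adjoined at each step.
towers = [[frozenset(p[:k]) for k in range(len(X) + 1)]
          for p in permutations(X)]

assert len(towers) == 6                        # the chains C_1, ..., C_6
for t in towers:                               # each is totally ordered by ⊂
    assert all(a < b for a, b in zip(t, t[1:]))
print(len(towers), "chained towers from the empty set up to", set(X))
```

Every such chain starts at the smallest element $\emptyset$ and terminates at the common supremum $\{a,b,c\}$, the six routes along the edges of the cube of Fig. \[Fig: order\](b).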
An example of a tower in $(\mathcal{X},\subseteq)$ which is not a chain is $$\mathcal{T}=\{\emptyset,\{ a\},\{ b\},\{ c\},\{ a,b\},\{ a,c\},\{ b,c\},\{ a,b,c\}\}.$$ Hence the common infimum towered chained subset $$\mathcal{C}_{\textrm{T}}=\{\emptyset,\{ a,b,c\}\}=\,_{\rightarrow}\mathcal{T}\subseteq\mathcal{P}(X)$$ of $\mathcal{X}$, with $$\sup_{\mathcal{C}_{\textrm{T}}}(\mathcal{C}_{\textrm{T}})=C_{\leftarrow}=\{ a,b,c\}=g(C_{\leftarrow})\in\mathcal{C}_{\textrm{T}}\subseteq\mathcal{P}(X)$$ the only maximal element of $\mathcal{P}(X)$. Zorn’s Lemma now assures the existence of a maximal element $c\in X$. Observe how the maximal element of $(X,\preceq)$ is obtained by going one level higher to $\mathcal{X}$ at the Hausdorff stage and returning to the base level $X$ at Zorn; see Fig. \[Fig: Zorn\] for a schematic summary of this sequence of steps. \(2) *Basis of a vector space.* A linearly independent set of vectors in a vector space $X$ that spans the space is known as a Hamel basis of $X$. To prove the existence of a Hamel basis in a vector space, Zorn’s lemma is invoked as follows. The ground base level of the linearly independent subsets of $X$ $$\mathcal{X}=\{\{ x_{i_{j}}\}_{j=1}^{J}\in\mathcal{P}(X)\!:\textrm{Span}(\{ x_{i_{j}}\}_{j=1}^{J})=0\Rightarrow(\alpha_{j})_{j=1}^{J}=0\,\forall J\geq1\}\subseteq\mathcal{P}(X),$$ with $\textrm{Span}(\{ x_{i_{j}}\}_{j=1}^{J}):=\sum_{j=1}^{J}\alpha_{j}x_{i_{j}}$, is such that no $x\in\mathcal{X}$ can be expressed as a linear combination of the elements of $\mathcal{X}-\{ x\}$. $\mathcal{X}$ clearly has a smallest element, say $\{ x_{i_{1}}\}$, for some non-zero $x_{i_{1}}\in X$. 
Let the higher Hausdorff-level collection $$\mathfrak{X}=\{\mathcal{C}\in\mathcal{P}^{2}(X)\!:\mathcal{C}\textrm{ is a chain in }(\mathcal{X},\subseteq)\}\subseteq\mathcal{P}^{2}(X)$$ of the chains $$\mathcal{C}_{i_{K}}=\{\{ x_{i_{1}}\},\{ x_{i_{1}},x_{i_{2}}\},\cdots,\{ x_{i_{1}},x_{i_{2}},\cdots,x_{i_{K}}\}\}\textrm{ }\in\mathcal{P}^{2}(X)$$ of $\mathcal{X}$, comprising linearly independent subsets of $X$, be $g$-built from the smallest element $\{ x_{i_{1}}\}$. Any chain $\mathfrak{C}$ of $\mathfrak{X}$ is bounded above by the union $\mathcal{C}_{*}=\sup_{\mathfrak{X}}\mathfrak{C}=\bigcup_{\mathcal{C}\in\mathfrak{C}}\mathcal{C}$ which is a chain in $\mathcal{X}$ containing $\{ x_{i_{1}}\}$, thereby verifying (ST2) for $\mathfrak{X}$. Application of the tower theorem to $\mathfrak{X}$ implies that the chain $$\mathfrak{C}_{\textrm{T}}=\{\mathcal{C}_{i_{1}},\mathcal{C}_{i_{2}},\cdots,\mathcal{C}_{i_{n}},\cdots\}=\,_{\rightarrow}\mathfrak{T}\subseteq\mathcal{P}^{2}(X)$$ in $\mathfrak{X}$ of chains of $\mathcal{X}$ is a $g$-sequential fully ordered towered subset of $(\mathfrak{X},\subseteq)$ consisting of the common elements of all $g$-sequential towers of $(\mathfrak{X},\subseteq)$, that in fact is a *chained* *principal ultrafilter on $(\mathcal{P}(X),\subseteq)$ generated by the filter-base $\{\{\{ x_{i_{1}}\}\}\}$* *at $\{ x_{i_{1}}\}$*, where $$\mathfrak{T}=\{\mathcal{C}_{i_{1}},\mathcal{C}_{i_{2}},\cdots,\mathcal{C}_{j_{n}},\mathcal{C}_{j_{n+1}},\cdots\}$$ for some $n\in\mathbb{N}$ is an example of a non-chained $g$-tower whenever $(\mathcal{C}_{j_{k}})_{k=n}^{\infty}$ is neither contained in nor contains any member of the $(\mathcal{C}_{i_{k}})_{k=1}^{\infty}$ chain. 
Hausdorff’s chain theorem now yields the fixed-point $g$-chain $\mathcal{C}_{\leftarrow}\,\in\mathfrak{X}$ of $\mathcal{X}$ $$\sup_{\mathfrak{C}_{\textrm{T}}}(\mathfrak{C}_{\textrm{T}})=\mathcal{C}_{\leftarrow}=\{\{ x_{i_{1}}\},\{ x_{i_{1}},x_{i_{2}}\},\{ x_{i_{1}},x_{i_{2}},x_{i_{3}}\},\cdots\}=g(\mathcal{C}_{\leftarrow})\in\mathfrak{C}_{\textrm{T}}\subseteq\mathcal{P}^{2}(X)$$ as a maximal *totally ordered* *principal filter on $X$ that is generated by the filter-base $\{\{ x_{i_{1}}\}\}$* *at $x_{i_{1}}$*, whose supremum $B=\{ x_{i_{1}},x_{i_{2}},\cdots\}\in\mathcal{P}(X)$ is, by Zorn’s lemma, a maximal element of the base level $\mathcal{X}$. This maximal linearly independent subset of $X$ is the required Hamel basis for $X$: Indeed, if the span of $B$ were not the whole of $X$, then $B\bigcup\{ x\}$ with $x\notin\textrm{Span}(B)$ would, by definition, be a linearly independent subset of $X$ strictly larger than $B$, contradicting the assumed maximality of the latter. It needs to be understood that since the infinite basis cannot be classified as being linearly independent, we have here an important example of the supremum of the maximal chained set not belonging to the set even though this criterion was explicitly used in the construction process according to (ST2) and (ST3). In contrast to this purely algebraic concept of a basis in a vector space, the Schauder basis in a normed space combines topological structure with the linear in the form of convergence: If a normed vector space contains a sequence $(e_{i})_{i\in\mathbb{Z}_{+}}$ with the property that for every $x\in X$ there is a unique sequence of scalars $(\alpha_{i})_{i\in\mathbb{Z}_{+}}$ such that the remainder $\parallel x-(\alpha_{1}e_{1}+\alpha_{2}e_{2}+\cdots+\alpha_{I}e_{I})\parallel$ approaches $0$ as $I\rightarrow\infty$, then the collection $(e_{i})$ is known as a Schauder basis for $X$. \(3) *Ultrafilter.* Let $X$ be a set. 
The set $${\textstyle _{\textrm{F}}\mathcal{S}=\{ S_{\alpha}\in\mathcal{P}(X)\!:S_{\alpha}\bigcap S_{\beta}\neq\emptyset,\textrm{ }\forall\alpha\neq\beta\}\subseteq\mathcal{P}(X)}$$ of all nonempty subsets of $X$ with the finite intersection property (FIP) is known as a *filter subbase on* $X$ and $_{\textrm{F}}\mathcal{B}=\{ B\subseteq X\!:B=\bigcap_{i\in I\subset\mathbb{D}}S_{i}\}$, for $I\subset\mathbb{D}$ a finite subset of a directed set $\mathbb{D}$, is a *filter-base on $X$* *associated with the subbase* $_{\textrm{F}}\mathcal{S}$; cf. Appendix A1. Then the *filter generated by* $_{\textrm{F}}\mathcal{S}$, consisting of every superset of the finite intersections $B\in\,_{\textrm{F}}\mathcal{B}$ of sets of $_{\textrm{F}}\mathcal{S}$, is the smallest filter that contains the subbase $_{\textrm{F}}\mathcal{S}$ and base $_{\textrm{F}}\mathcal{B}$. For notational simplicity, we will denote the subbase $_{\textrm{F}}\mathcal{S}$ in the rest of this example simply by $\mathcal{S}$. Consider the base-level ground set of all filter subbases on $X$ $$\mathfrak{S}=\{\mathcal{S}\in\mathcal{P}^{2}(X)\!:\bigcap\mathcal{R}\neq\emptyset\textrm{ for every finite }\emptyset\neq\mathcal{R}\subseteq\mathcal{S}\}\subseteq\mathcal{P}^{2}(X),$$ ordered by inclusion in the sense that $\mathcal{S}_{\alpha}\subseteq\mathcal{S}_{\beta}\textrm{ for all }\alpha\preceq\beta\in\mathbb{D}$, and let the higher Hausdorff-level collection $$\widetilde{\mathfrak{X}}=\{\mathfrak{C}\in\mathcal{P}^{3}(X)\!:\mathfrak{C}\textrm{ is a chain in }(\mathfrak{S},\subseteq)\}\subseteq\mathcal{P}^{3}(X)$$ of the totally ordered chains $$\mathfrak{C}_{\kappa}=\{\{ S_{\alpha}\},\{ S_{\alpha},S_{\beta}\},\cdots,\{ S_{\alpha},S_{\beta},\cdots,S_{\kappa}\}\}\in\mathcal{P}^{3}(X)$$ of $\mathfrak{S}$ be $g$-built from the smallest element $\{ S_{\alpha}\}$. An *ultrafilter* on $X$ is then a maximal member $\mathcal{S}_{+}$ of $(\mathfrak{S},\subseteq)$ in the usual sense that any subbase 
$\mathcal{S}$ on $X$ containing $\mathcal{S}_{+}$ must coincide with it, so that $\mathcal{S}_{+}\subseteq\mathcal{S}\Rightarrow\mathcal{S}=\mathcal{S}_{+}$ for any $\mathcal{S}\subseteq\mathcal{P}(X)$ with FIP. The tower theorem now implies that the element $$\widetilde{\mathfrak{C}_{\textrm{T}}}=\{\mathfrak{C}_{\alpha},\mathfrak{C}_{\beta},\cdots,\mathfrak{C}_{\nu},\cdots\}=\,\widetilde{_{\rightarrow}\mathfrak{T}}\subseteq\mathcal{P}^{3}(X)$$ of $\mathcal{P}^{4}(X)$, which is a chain in $\widetilde{\mathfrak{X}}$ of the chains of $\mathfrak{S}$, is a $g$-sequential fully ordered towered subset consisting of the common elements of all sequential towers of $(\widetilde{\mathfrak{X}},\subseteq)$ and a *chained* *principal ultrafilter on $(\mathcal{P}^{2}(X),\subseteq)$ generated by the filter-base $\{\{\{ S_{\alpha}\}\}\}$* *at* $\{ S_{\alpha}\}$; here $$\widetilde{\mathfrak{T}}=\{\mathfrak{C}_{\alpha},\mathfrak{C}_{\beta},\cdots,\mathfrak{C}_{\sigma},\mathfrak{C}_{\varsigma},\cdots\}$$ is an obvious example of a non-chained $g$-tower whenever $(\mathfrak{C}_{\sigma})$ is neither contained in, nor contains, any member of the $\mathfrak{C}_{\alpha}$-chain. Hausdorff’s chain theorem now yields the fixed point $\widetilde{\mathfrak{C}_{\leftarrow}}\,\in\widetilde{\mathfrak{X}}$ $$\sup_{\widetilde{\mathfrak{C}_{\textrm{T}}}}(\widetilde{\mathfrak{C}_{\textrm{T}}})=\widetilde{\mathfrak{C}_{\leftarrow}}=\{\{ S_{\alpha}\},\{ S_{\alpha},S_{\beta}\},\{ S_{\alpha},S_{\beta},S_{\gamma}\},\cdots\}=g(\widetilde{\mathfrak{C}_{\leftarrow}})\in\widetilde{\mathfrak{C}_{\textrm{T}}}\subseteq\mathcal{P}^{3}(X)$$ as a maximal *totally ordered* $g$-chained towered subset of $X$ that is, by Zorn’s lemma, a maximal element of the base level subset $\mathfrak{S}$ of $\mathcal{P}^{2}(X)$. 
$\widetilde{\mathfrak{C}_{\leftarrow}}$ is a *chained principal ultrafilter on* $(\mathcal{P}(X),\subseteq)$ *generated by the filter-base $\{\{ S_{\alpha}\}\}$* *at $S_{\alpha}$*, while $\mathcal{S}_{+}=\{ S_{\alpha},S_{\beta},S_{\gamma},\cdots\}\in\mathcal{P}^{2}(X)$ is a (non-principal) *ultrafilter on* $X$ (characterized by the property that any collection of subsets of $X$ with FIP, that is any filter subbase on $X$, must be contained in the maximal set $\mathcal{S}_{+}$ having FIP) that is not a principal filter unless $S_{\alpha}$ is a singleton set $\{ x_{\alpha}\}$. $\qquad\blacksquare$ What emerges from these applications of Zorn’s Lemma is the remarkable fact that *infinities (the dot-dot-dots) can be formally introduced as “limiting cases” of finite systems in a purely set-theoretic context* *without the need for topologies, metrics or convergences.* The significance of this observation will become clear from our discussions on filters and topology leading to Sec. 4.2 below. Also, the observation on the successive iterates of the power sets $\mathcal{P}(X)$ in the examples above serves to suggest their anticipated role in the complex evolution of a dynamical system that is expected to play a significant part in our future interpretation and understanding of this adaptive and self-organizing phenomenon of nature. ***End Tutorial5*** From the examples in Tutorial5, it should be clear that the sequential steps summarized in Fig. \[Fig: Zorn\] are involved in an application of Zorn’s lemma to show that a partially ordered set has a maximal element with respect to its order. Thus for a partially ordered set $(X,\preceq)$, form the set $\mathcal{X}$ of all chains $C$ in $X$. If $C_{+}$ is a maximal chain of $X$ obtained by the Hausdorff Maximal Principle from the collection $\mathcal{X}$ of all chains of $X$, then its upper bound $u$, which exists by the hypothesis of Zorn’s lemma, is a maximal element of $(X,\preceq)$. 
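On a finite set the Zorn-type extension of Example (3) can be carried out greedily, and every ultrafilter obtained is then principal. The following sketch uses a hypothetical subbase $\{\{1,2\},\{2,3\}\}$ on $X=\{1,2,3,4\}$ (these names are ours, for illustration only); since its sets meet only in $\{2\}$, the maximal family with FIP is the principal ultrafilter at $2$:

```python
from itertools import combinations
from functools import reduce

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def has_fip(family):
    """Finite intersection property: every finite subfamily meets."""
    family = list(family)
    return all(reduce(frozenset.intersection, combo)
               for r in range(1, len(family) + 1)
               for combo in combinations(family, r))

X = frozenset({1, 2, 3, 4})
subbase = {frozenset({1, 2}), frozenset({2, 3})}   # a filter subbase: FIP holds

# Greedy Zorn-style extension: adjoin any nonempty set that preserves FIP.
ultra = set(subbase)
for A in powerset(X):
    if A and has_fip(ultra | {A}):
        ultra.add(A)

# Maximality: for every A ⊆ X exactly one of A, X − A belongs to the family.
assert all((A in ultra) != ((X - A) in ultra) for A in powerset(X))
```

On an infinite set the same extension requires the full strength of Zorn’s lemma and is where the non-principal ultrafilters described above arise.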
This sequence is now applied, paralleling Example 4.2(1), to the set of arbitrary relations $\textrm{Multi}(X)$ on an infinite set $X$ in order to formulate our definition of chaos that follows. Let $f$ be a *noninjective map* in $\textrm{Multi}(X)$ and $P(f)$ the number of injective branches of $f$. Denote by $$F=\{ f\in\textrm{Multi}(X)\!:f\textrm{ is a noninjective function on }X\}\subseteq\textrm{Multi}(X)$$ the resulting basic collection of noninjective functions in $\textrm{Multi}(X)$. \(i) For every $\alpha$ in some directed set $\mathbb{D}$, let $F$ have the extension property $$(\forall f_{\alpha}\in F)(\exists f_{\beta}\in F)\!:P(f_{\alpha})\leq P(f_{\beta}).$$ \(ii) Let a partial order $\preceq$ on $\textrm{Multi}(X)$ be defined, for $f_{\alpha},f_{\beta}\in\textrm{Map}(X)\subseteq\textrm{Multi}(X)$, by $$P(f_{\alpha})\leq P(f_{\beta})\Longleftrightarrow f_{\alpha}\preceq f_{\beta},\label{Eqn: chaos1}$$ with $P(f):=1$ for the smallest $f$, to define a partially ordered subset $(F,\preceq)$ of $\textrm{Multi}(X)$. This is actually a preorder on $\textrm{Multi}(X)$ in which functions with the same number of injective branches are equivalent to each other. \(iii) Let $$C_{\nu}=\{ f_{\alpha}\in\textrm{Multi}(X)\!:f_{\alpha}\preceq f_{\nu}\}\in\mathcal{P}(F),\qquad\nu\in\mathbb{D},$$ be the $g$-chains of non-injective functions of $\textrm{Multi}(X)$ and $$\mathcal{X}=\{ C\in\mathcal{P}(F)\!:C\textrm{ is a chain in }(F,\preceq)\}\subseteq\mathcal{P}(F)$$ denote the corresponding Hausdorff level of the chains of $F$, with $$\mathcal{C}_{\textrm{T}}=\{ C_{\alpha},C_{\beta},\cdots,C_{\nu},\cdots\}=\,_{\rightarrow}\mathcal{T}\subseteq\mathcal{P}(F)$$ being a $g$-sequential chain in $\mathcal{X}$. 
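The order (\[Eqn: chaos1\]) can be illustrated on the iterates of the tent map: each iteration doubles the number $P(f^{n})$ of injective (monotone) branches, so the iterates form a $\preceq$-increasing chain $f\preceq f^{2}\preceq f^{3}\preceq\cdots$ in $F$. A sketch in exact rational arithmetic (the dyadic sampling grid and the helper names are ours):

```python
from fractions import Fraction

def tent(x):
    """Tent map on [0,1], evaluated exactly over the rationals."""
    return 2 * x if x <= Fraction(1, 2) else 2 - 2 * x

def iterate(f, n, x):
    for _ in range(n):
        x = f(x)
    return x

def injective_branches(f, n):
    """P(f^n): count the monotone laps of the n-th iterate on a dyadic grid
    fine enough to resolve every turning point (these lie at multiples of 2^-n)."""
    N = 2 ** (n + 2)
    ys = [iterate(f, n, Fraction(k, N)) for k in range(N + 1)]
    laps, rising = 1, ys[1] > ys[0]
    for a, b in zip(ys[1:], ys[2:]):
        if b != a and (b > a) != rising:
            rising, laps = not rising, laps + 1
    return laps

# P(f^n) = 2^n: the number of injective branches doubles with each iteration
print([injective_branches(tent, n) for n in (1, 2, 3, 4)])
```

The growth $P(f^{n})=2^{n}$ is exactly the increasing non-injectivity that the chains $C_{\nu}$ above organise.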
By the Hausdorff Maximal Principle, there is a maximal fixed-point $g$-towered chain $C_{\leftarrow}\in\mathcal{X}$ of $F$ $$\sup_{\mathcal{C}_{\textrm{T}}}(\mathcal{C}_{\textrm{T}})=C_{\leftarrow}=\{ f_{\alpha},f_{\beta},f_{\gamma},\cdots\}=g(C_{\leftarrow})\in\mathcal{C}_{\textrm{T}}\subseteq\mathcal{P}(F).$$ Zorn’s Lemma now applied to this maximal chain yields its supremum as the maximal element of $C_{\leftarrow}$, and thereby of $F$. It needs to be appreciated, as in the case of the algebraic Hamel basis, that the existence of this maximal non-functional element was obtained purely set-theoretically as the “limit” of a net of functions with increasing non-linearity, without resorting to any topological arguments. Because it is not a function, this supremum does not belong to the functional $g$-towered chain having it as a fixed point, and this maximal chain does not possess a largest, or even a maximal, element, although it does have a supremum.[^24] The supremum is a contribution of the inverse functional relations $(f_{\alpha}^{-})$ in the following sense. From Eq. (\[Eqn: func-multi\]), the net of increasingly non-injective functions of Eq. (\[Eqn: chaos1\]) implies a corresponding net of increasingly multivalued functions ordered inversely by the inverse relation $f_{\alpha}\preceq f_{\beta}\Leftrightarrow f_{\beta}^{-}\preceq f_{\alpha}^{-}$. Thus the inverse relations, which are as much an integral part of graphical convergence as are the direct relations, have a smallest element belonging to the multifunctional class. Clearly, this smallest element, as the required supremum of the increasingly non-injective tower of functions defined by Eq. (\[Eqn: chaos1\]), serves to complete the significance of the tower by capping it with a “boundary” element that can be taken to bridge the classes of functional and non-functional relations on $X$. 
We are now ready to define a *maximally ill-posed problem $f(x)=y$* for *$x,y\in X$* in terms of a *maximally non-injective map $f$* as follows. **Definition 4.1.** ***Chaotic map.*** *Let $A$ be a non-empty closed set of a compact Hausdorff space $X$. A function $f\in\textrm{Multi}(X)$ (equivalently, the sequence of functions $(f_{i})$) is maximally non-injective or chaotic on $A$ with respect to the order relation* (\[Eqn: chaos1\]) *if* *(a) for any $f_{i}$ on $A$ there exists an $f_{j}$ on $A$ satisfying $f_{i}\preceq f_{j}$ for every $j>i\in\mathbb{N}$;* *(b) the set $\mathcal{D}_{+}$ consists of a countable collection of isolated singletons.$\qquad\square$* **Definition 4.2.** ***Maximally ill-posed problem.*** *Let $A$ be a non-empty closed set of a compact Hausdorff space $X$ and let $f$ be a functional relation in $\textrm{Multi}(X)$. The problem $f(x)=y$ is maximally ill-posed at $y$ if $f$ is chaotic on $A$*.$\qquad\square$ As an example of the application of these definitions, on the dense set $\mathcal{D}_{+}$ the tent map satisfies both the conditions of sensitive dependence on initial conditions and topological transitivity [@Devaney1989] and is also maximally non-injective; the tent map is therefore chaotic on $\mathcal{D}_{+}$. In contrast, the examples of Secs. 1 and 2 are not chaotic as the maps are not topologically transitive, although the Liapunov exponents, as in the case of the tent map, are positive. Here the $(f_{n})$ are identified with the iterates of $f$, and the “fixed point” as one through which the graphs of all the functions on residual index subsets pass. 
When the set of points $\mathcal{D}_{+}$ is dense in $[0,1]$ and both $\mathcal{D}_{+}$ and $[0,1]-\mathcal{D}_{+}=[0,1]-\bigcup_{i=0}^{\infty}f^{-i}(\textrm{Per}(f))$ (where $\textrm{Per}(f)$ denotes the set of periodic points of $f$) are totally disconnected, it is expected that at any point on this complement the behaviour of the limit will be similar to that on $\mathcal{D}_{+}$: these points are special as they tie up the iterates on $\textrm{Per}(f)$ to yield the multifunctions. Therefore in any neighbourhood $U$ of a $\mathcal{D}_{+}$-point, there is an $x_{0}$ at which the *forward orbit $\{ f^{i}(x_{0})\}_{i\geq0}$ is chaotic* in the sense that \(a) the sequence neither diverges nor does it converge in the image space of $f$ to a periodic orbit of any period, and \(b) the Liapunov exponent given by $$\begin{aligned} \lambda(x_{0}) & = & \lim_{n\rightarrow\infty}\ln\left|\frac{df^{n}(x_{0})}{dx}\right|^{1/n}\\ & = & {\displaystyle \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\ln\left|\frac{df(x_{i})}{dx}\right|,\, x_{i}=f^{i}(x_{0}),}\end{aligned}$$ which is a measure of the average slope of an orbit at $x_{0}$ or equivalently of the average loss of information of the position of a point after one iteration, is positive. Thus *an orbit with positive Liapunov exponent is chaotic if it is not* *asymptotic* (that is neither convergent nor adherent, having no convergent suborbit in the sense of Appendix A1) *to an unstable periodic orbit* *or to any other limit set on which the dynamics is simple.* A basic example of a chaotic orbit is that of an irrational in $[0,1]$ under the shift map and that of the chaotic set its closure, the full unit interval. 
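The Liapunov sum above is directly computable. As a minimal numerical sketch (the helper names are ours, not from the text): for the tent map the slope has magnitude $2$ everywhere, so $\lambda=\ln 2>0$ exactly, and the logistic map $4x(1-x)$ gives the same value for almost every $x_{0}$.

```python
import math

def liapunov(f, df, x0, n=100_000, burn=1_000):
    # lambda(x0) = lim (1/n) * sum_{i<n} ln|f'(x_i)|, with x_i = f^i(x0)
    x = x0
    for _ in range(burn):          # discard the transient part of the orbit
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(df(x)))
        x = f(x)
    return total / n

tent  = lambda x: 2*x if x < 0.5 else 2*(1 - x)
dtent = lambda x: 2.0 if x < 0.5 else -2.0

print(liapunov(tent, dtent, 0.2345))                 # ln 2 ~ 0.6931 exactly
print(liapunov(lambda x: 4*x*(1 - x),
               lambda x: 4 - 8*x, 0.1234))           # also ~ ln 2
```

A positive value signals the average exponential divergence of nearby orbits that condition (b) requires.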
Let $f\in\textrm{Map}((X,\mathcal{U}))$ and suppose that $A=\{ f^{j}(x_{0})\}_{j\in\mathbb{N}}$ is a sequential set corresponding to the orbit $\textrm{Orb}(x_{0})=(f^{j}(x_{0}))_{j\in\mathbb{N}}$, and let $f_{\mathbb{R}_{i}}(x_{0})=\bigcup_{j\geq i}f^{j}(x_{0})$ be the $i$-residual of the sequence $(f^{j}(x_{0}))_{j\in\mathbb{N}}$, with $_{\textrm{F}}\mathcal{B}_{x_{0}}=\{ f_{\mathbb{R}_{i}}(x_{0})\!:\textrm{Res}(\mathbb{N})\rightarrow X\textrm{ for all }i\in\mathbb{N}\}$ being the decreasingly nested filter-base associated with $\textrm{Orb}(x_{0})$. The so-called *$\omega$-limit set of* $x_{0}$ given by $$\begin{array}{ccl} \omega(x_{0}) & \overset{\textrm{def}}= & \{ x\in X\!:(\exists n_{k}\in\mathbb{N})(n_{k}\rightarrow\infty)\textrm{ }(f^{n_{k}}(x_{0})\rightarrow x)\}\\ & = & \{ x\in X\!:(\forall N\in\mathcal{N}_{x})(\forall f_{\mathbb{R}_{i}}\in\,_{\textrm{F}}\mathcal{B}_{x_{0}})\textrm{ }(f_{\mathbb{R}_{i}}(x_{0})\bigcap N\neq\emptyset)\}\end{array}\label{Eqn: Def: omega(x)}$$ is simply the adherence set $\textrm{adh}(f^{j}(x_{0}))$ of the sequence $(f^{j}(x_{0}))_{j\in\mathbb{N}}$, see Eq. (\[Eqn: net adh\]); hence Def. A1.11 of the filter-base associated with a sequence and Eqs. (\[Eqn: adh net2\]), (\[Eqn: adh filter\]), (\[Eqn: filter adh\*\]) and (\[Eqn: net-fil\]) allow us to express $\omega(x_{0})$ more meaningfully as $$\omega(x_{0})=\bigcap_{i\in\mathbb{N}}\textrm{Cl}(f_{\mathbb{R}_{i}}(x_{0})).\label{Eqn: adh_omega_x}$$ It is clear from the second of Eqs. (\[Eqn: Def: omega(x)\]) that for a continuous $f$ and any $x\in X$, $x\in\omega(x_{0})$ implies $f(x)\in\omega(x_{0})$ so that the entire orbit of $x$ lies in $\omega(x_{0})$ whenever $x$ does implying that the $\omega$-limit set is positively invariant; it is also closed because the adherent set is a closed set according to Theorem A1.3. 
Hence $x_{0}\in\omega(x_{0})\Rightarrow A\subseteq\omega(x_{0})$ reduces the $\omega$-limit set to the closure of $A$ without any isolated points, $A\subseteq\textrm{Der}(A)$. In terms of Eq. (\[Eqn: PrinFil\_Cl(A)\]) involving principal filters, Eq. (\[Eqn: adh\_omega\_x\]) in this case may be expressed in the more transparent form $\omega(x_{0})=\bigcap\textrm{Cl}(\,_{\textrm{F}}\mathcal{P}(\{ f^{j}(x_{0})\}_{j=0}^{\infty}))$ where the principal filter $_{\textrm{F}}\mathcal{P}(\{ f^{j}(x_{0})\}_{j=0}^{\infty})$ at $A$ consists of all supersets of $A=\{ f^{j}(x_{0})\}_{j=0}^{\infty}$, and $\omega(x_{0})$ represents the adherence set of the principal filter at $A$, see the discussion following Theorem A1.3. If $A$ represents a chaotic orbit under this condition, then $\omega(x_{0})$ is sometimes known as a *chaotic set* [@Alligood1997]; thus the chaotic orbit infinitely often visits every member of its chaotic set[^25] which is simply the $\omega$-limit set of a chaotic orbit that is itself contained in its own limit set. Clearly the chaotic set is positively invariant, and from Thm. A1.3 and its corollary it is also compact. Furthermore, if all (sub)sequences emanating from points $x_{0}$ in some neighbourhood of the set converge to it, then $\omega(x_{0})$ is called a *chaotic attractor,* see @Alligood1997. As common examples of chaotic sets that are not attractors, mention may be made of the tent map with a peak value larger than $1$ at $0.5$, and the logistic map with $\lambda\geq4$, again with a peak value at $0.5$ exceeding $1$. Figure \[Fig: logcob357\], contd: Multifunctional and cobweb plots of $\lambda_{*}x(1-x)$ where $\lambda_{*}=3.5699456$. It is important that the difference in the dynamical behaviour of the system on $\mathcal{D}_{+}$ and its complement be appreciated.
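Eq. (\[Eqn: adh\_omega\_x\]) suggests a direct numerical approximation of $\omega(x_{0})$: take a long tail of the orbit, that is a residual $f_{\mathbb{R}_{i}}(x_{0})$, and record its cluster points. A hedged sketch (helper names and tolerances are our own; the logistic parameter $\lambda=3.5$ lies in the period-4 window, so the $\omega$-limit set should consist of the four points of the attracting cycle):

```python
def omega_limit(f, x0, burn=10_000, tail=2_000, tol=1e-6):
    # cluster points of a tail f_{R_i}(x0) of the orbit: a finite stand-in
    # for the intersection of the closures Cl(f_{R_i}(x0)) over all i
    x = x0
    for _ in range(burn):
        x = f(x)
    pts = []
    for _ in range(tail):
        x = f(x)
        if not any(abs(x - p) < tol for p in pts):
            pts.append(x)
    return sorted(pts)

cycle = omega_limit(lambda x: 3.5 * x * (1 - x), 0.2)
print(len(cycle), [round(p, 4) for p in cycle])   # the 4 points of the cycle
```

For a chaotic orbit the same tail does not settle onto finitely many cluster points but frequents a whole interval or Cantor set, in line with the discussion above.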
At any fixed point $x$ of $f^{i}$ in $\mathcal{D}_{+}$ (or at its equivalent images in $[x]$) the dynamics eventually gets attached to the (equivalent) fixed point, and the sequence of iterates converges graphically in $\textrm{Multi}(X)$ to $x$ (or its equivalent points). Figure \[Fig: logcob357\], contd: Multifunctional and cobweb plots of $3.57x(1-x)$. When $x\notin\mathcal{D}_{+}$, however, the orbit $A=\{ f^{i}(x)\}_{i\in\mathbb{N}}$ is chaotic in the sense that $(f^{i}(x))$ is not asymptotically periodic and, not being attached to any particular point, the iterates wander about in the closed chaotic set $\omega(x)=\textrm{Der}(A)$ containing $A$ such that for any given point in the set, some subsequence of the chaotic orbit gets arbitrarily close to it. Such sequences do not converge anywhere but only frequent every point of $\textrm{Der}(A)$. Thus, although in the limit of progressively larger iterations there is complete uncertainty about the outcome of an experiment conducted at either of these two categories of initial points, on $\mathcal{D}_{+}$ this is due to a random choice from a multifunctional set of equally probable outputs as dictated by the specific conditions under which the experiment was conducted at that instant, whereas on its complement the uncertainty is due to the chaotic behaviour of the functional iterates themselves. Nevertheless it must be clearly understood *that this latter behaviour is* *entirely due to the multifunctional limits at the $\mathcal{D}_{+}$ points which completely determine the behaviour of the system on its complement.* As an explicit illustration of this situation, recall that for the shift map $2x\textrm{ mod}(1)$ the $\mathcal{D}_{+}$ points are the rationals on $[0,1]$, and any irrational is represented by a non-terminating and non-repeating decimal, so that almost all decimals in $[0,1]$ in any base contain all possible sequences of any number of digits. For the logistic map, the situation is more complex, however.
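For the shift map $2x\,\textrm{mod}(1)$ this distinction is easy to check in exact arithmetic. A small sketch (helper names are ours): a rational such as $3/7$ is of the $\mathcal{D}_{+}$ type, its orbit being eventually periodic, while a binary floating-point seed, having only finitely many binary digits, collapses to $0$ after at most about $56$ doublings, since each iteration discards exactly one bit of the expansion.

```python
from fractions import Fraction

def shift_orbit(x, n):
    # orbit of x under the doubling (shift) map 2x mod 1
    orbit = [x]
    for _ in range(n):
        x = (2 * x) % 1
        orbit.append(x)
    return orbit

orb = shift_orbit(Fraction(3, 7), 10)   # exact rational arithmetic
print(orb[:4])                          # 3/7 -> 6/7 -> 5/7 -> 3/7: period 3

x = 0.123456789                         # a double: finitely many binary digits
for _ in range(60):
    x = (2 * x) % 1.0
print(x)                                # 0.0, the mantissa is exhausted
```

An irrational seed, which no finite-precision type can represent, is exactly the case whose non-repeating binary expansion yields the chaotic orbit described above.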
Here the onset of chaos marking the end of the period doubling sequence at $\lambda_{*}=3.5699456$ is signaled by the disappearance of all stable fixed points, Fig. \[Fig: logcob357\](c), with Fig. \[Fig: logcob357\](a) being a demonstration of the stable limits for $\lambda=3.569$ that show up as convergence of the iterates to constant valued functions (rather than as constant valued inverse functions) at stable fixed points, shown more emphatically in Fig. \[Fig: log357\](a). What actually happens at $\lambda_{*}$ is shown in Fig. \[Fig: attractor\](a) in the next subsection: the almost vertical lines produced at large but finite iterations $i$ (the multifunctions are generated only in the limiting sense of $i\rightarrow\infty$ and represent a boundary between functional and non-functional relations on a set) decrease in magnitude with increasing iterations until they reduce to points. This gives rise to a (totally disconnected) Cantor set on the $y$-axis, in contrast with the connected intervals that the multifunctional limits at $\lambda>\lambda_{*}$ of Figs. \[Fig: attractor\](b)–(d) produce. By our characterization Definition 4.1 of chaos therefore, $\lambda x(1-x)$ is chaotic for the values of $\lambda>\lambda_{*}$ that are shown in Fig. \[Fig: attractor\]. We return to this case in the following subsection. Figure \[Fig: log357\], contd: Isolated fixed points of logistic map. The sequences of points generated by the iterates of the map are marked on the $y$-axis of (a)–(c) in *italics*. The singletons $\{ x\}$ are $\omega$-limit sets of the respective fixed points $x$ and are generated by the constant sequence $(x,x,\cdots)$. Whereas in (a) this is the limit of every point in $(0,1)$, in the other cases these fixed points are isolated in the sense of Def. 2.3. The isolated points, however, give rise to sequences that converge to more than one point in the form of limit cycles as shown in figures (b)–(d).
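These regimes of the logistic map can be sampled numerically in the manner of the standard bifurcation diagram: discard a transient and record the distinct values among the next several hundred iterates of $x_{0}=1/2$. A minimal sketch (the helper name and the merging tolerance are our own choices; $\lambda=3.2$ and $3.5$ lie in the period-2 and period-4 windows, while $\lambda=3.8>\lambda_{*}$ lies in a chaotic band):

```python
def attractor_sample(lam, x0=0.5, burn=500, keep=500, tol=1e-4):
    # values of logistic iterates burn+1 .. burn+keep, merged within tol
    x = x0
    for _ in range(burn):
        x = lam * x * (1 - x)
    pts = []
    for _ in range(keep):
        x = lam * x * (1 - x)
        if not any(abs(x - p) < tol for p in pts):
            pts.append(x)
    return pts

for lam in (3.2, 3.5, 3.8):
    print(lam, len(attractor_sample(lam)))   # 2- and 4-cycles, then a band
```

Below $\lambda_{*}$ the count settles at the period of the attracting cycle; beyond it the samples spread over whole intervals, as the multifunctional limits of Figs. \[Fig: attractor\](b)–(d) indicate.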
As an example of chaos *in a noniterative system*, we investigate the following question: While maximality of non-injectiveness produced by an increasing number of injective branches is necessary for a family of functions to be chaotic, is this also sufficient for the system to be chaotic? This is an important question especially in the context of a non-iterative family of functions where fixed points are no longer relevant. Consider the sequence of functions $|\sin(\pi nx)|_{n=1}^{\infty}.$ The graphs of the subsequence $|\sin(2^{n-1}\pi x)|$ and of the sequence $(t^{n}(x))$ on $[0,1]$ are qualitatively similar in that they both contain $2^{n-1}$ of their functional graphs each on a base of $1/2^{n-1}.$ Thus both $|\sin(2^{n-1}\pi x)|_{n=1}^{\infty}$ and $(t^{n}(x))_{n=1}^{\infty}$ converge graphically to the multifunction $[0,1]$ on the same set of points equivalent to 0. This is sufficient for us to conclude that $|\sin(2^{n-1}\pi x)|_{n=1}^{\infty}$, and hence $|\sin(\pi nx)|_{n=1}^{\infty}$, is chaotic on the infinite equivalent set $[0]$. While Fig. \[Fig: tent4\] was a comparison of the first four iterates of the tent and absolute sine maps, Fig. \[Fig: tent17\] shows the “converged” graphical limits after 17 iterations. ***4.1. The chaotic attractor*** One of the most fascinating characteristics of chaos in dynamical systems is the appearance of attractors the dynamics on which is chaotic.
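The claimed structural similarity of $t^{n}$ and $|\sin(2^{n-1}\pi x)|$ can be verified at the dyadic points: both families vanish at $k/2^{n-1}$ and attain the peak value $1$ at odd multiples of $1/2^{n}$, giving $2^{n-1}$ congruent humps each. A small check (a sketch with our own helper names), using the fact that dyadic rationals are exact in binary floating point:

```python
import math

def tent(x):
    return 2*x if x <= 0.5 else 2*(1 - x)

def tent_iter(x, n):
    # n-th iterate t^n(x) of the tent map
    for _ in range(n):
        x = tent(x)
    return x

n = 4
for k in range(2**(n-1) + 1):          # shared zeros at k/2^(n-1)
    x = k / 2**(n-1)
    assert tent_iter(x, n) == 0.0
    assert abs(math.sin(2**(n-1) * math.pi * x)) < 1e-9
for k in range(2**(n-1)):              # shared peaks at odd k/2^n
    x = (2*k + 1) / 2**n
    assert tent_iter(x, n) == 1.0
    assert abs(abs(math.sin(2**(n-1) * math.pi * x)) - 1.0) < 1e-9
print("t^4 and |sin(8 pi x)| share all zeros and peaks")
```

The common zero set $\{k/2^{n-1}\}$ is precisely the set of points equivalent to $0$ on which both families converge graphically to the multifunction $[0,1]$.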
For a subset $A$ of a topological space $(X,\mathcal{U})$ such that $\mathcal{R}(f(A))$ is contained in $A$ — in this section, unless otherwise stated to the contrary, $f(A)$ *will* *denote the* *graph and not the range (image)* *of* $f$ — which ensures that the iteration process can be carried out in $A$, let $$\begin{array}{ccl} {\displaystyle f_{\mathbb{R}_{i}}(A)} & = & {\displaystyle \bigcup_{j\geq i\in\mathbb{N}}f^{j}(A)}\\ & = & {\displaystyle \bigcup_{j\geq i\in\mathbb{N}}\left(\bigcup_{x\in A}f^{j}(x)\right)}\end{array}\label{Eqn: absorbing set}$$ generate the filter-base $_{\textrm{F}}\mathcal{B}$ with $A_{i}:=f_{\mathbb{R}_{i}}(A)\in\,_{\textrm{F}}\mathcal{B}$ being decreasingly nested, $A_{i+1}\subseteq A_{i}$ for all $i\in\mathbb{N}$, in accordance with Def. A1.1. The existence of a maximal chain with a corresponding maximal element, as assured by the Hausdorff Maximal Principle and Zorn’s Lemma respectively, implies a nonempty core of $_{\textrm{F}}\mathcal{B}$. As in Sec. 3 following Def. 3.3, we now identify the filter-base with the neighbourhood base at $f^{\infty}$ which allows us to define $$\begin{array}{ccl} {\displaystyle \textrm{Atr}(A_{1})} & \overset{\textrm{def}}= & \textrm{adh}(\,_{\textrm{F}}\mathcal{B})\\ & = & {\displaystyle \bigcap_{A_{i}\in\,_{\textrm{F}}\mathcal{B}}\textrm{Cl}(A_{i})}\end{array}\label{Eqn: attractor_adherence}$$ as the attractor of the set $A_{1}$, where the last equality follows from Eqs. (\[Eqn: Def: omega(A)\]) and (\[Eqn: Def: Closure\]) and the closure is with respect to the topology induced by the neighbourhood filter-base $_{\textrm{F}}\mathcal{B}$. Clearly the attractor as defined here is the graphical limit of the sequence of functions $(f^{i})_{i\in\mathbb{N}}$, which may be verified by reference to Def. A1.8, Thm. A1.3 and the proofs of Thms. A1.4 and A1.5, together with the directed set Eq. (\[Eqn: DirectedIndexed\]) with direction (\[Eqn: DirectionIndexed\]).
The *basin of attraction* of the attractor is $A_{1}$ because the graphical limit $(\mathcal{D}_{+},F(\mathcal{D}_{+}))\bigcup(G(\mathcal{R}_{+}),\mathcal{R}_{+})$ of Def. 3.1 may be obtained, as indicated above, by a proper choice of sequences associated with $\mathcal{A}$. Note that in the context of iterations of functions, the graphical limit $(\mathcal{D}_{+},y_{0})$ of the sequence $(f^{n}(x))$ denotes a stable fixed point $x_{*}$ with image $x_{*}=f(x_{*})=y_{0}$ to which iterations starting at any point $x\in\mathcal{D}_{+}$ converge. The graphical limits $(x_{i0},\mathcal{R}_{+})$ are generated with respect to the class $\{ x_{i*}\}$ of points satisfying $f(x_{i0})=x_{i*}$, $i=0,1,2,\cdots$, equivalent to the unstable fixed point $x_{*}:=x_{0*}$ to which inverse iterations starting at any initial point in $\mathcal{R}_{+}$ must converge. Even though only $x_{*}$ is inverse stable, an equivalent class of graphically converged limit multis is produced at every member of the class $x_{i*}\in[x_{*}]$, resulting in the far-reaching consequence *that every member of the class is as significant as the parent fixed point $x_{*}$ from which they were born in determining the dynamics of the evolving system.* The point to remember about infinite intersections of a collection of sets having the finite intersection property, as in Eq. (\[Eqn: attractor\_adherence\]), is that this may very well be empty; recall, however, that in a compact space this is guaranteed not to be so. In the general case, if $\textrm{core}(\mathcal{A})\neq\emptyset$ then $\mathcal{A}$ is the principal filter at this core, and $\textrm{Atr}(A_{1})$ by Eqs. (\[Eqn: attractor\_adherence\]) and (\[Eqn: PrinFil\_Cl(A)\]) is the closure of this core, which in this case of the topology being induced by the filter-base, is just the core itself.
$A_{1}$, by its very definition, is a positively invariant set as any sequence of graphs converging to $\textrm{Atr}(A_{1})$ must be eventually in $A_{1}$: the entire sequence therefore lies in $A_{1}$. Clearly, from Thm. A3.1 and its corollary, the attractor is a positively invariant compact set. A typical attractor is illustrated by the derived sets in the second column of Fig. \[Fig: DerSets\] which also illustrates that the set of functional relations is open in $\textrm{Multi}(X)$; specifically functional-nonfunctional correspondences are neutral-selfish related as in Fig. \[Fig: DerSets\], 3-2, with the attracting graphical limit of Eq. (\[Eqn: attractor\_adherence\]) forming the boundary of (finitely)many-to-one functions and the one-to-(finitely)many multifunctions. Equation (\[Eqn: attractor\_adherence\]) is to be compared with the *image definition of an attractor* [@Stuart1996] where $f(A)$ denotes the range and not the graph of $f$. Then Eq. (\[Eqn: attractor\_adherence\]) can be used to define a sequence of points $x_{k}\in A_{n_{k}}$ and hence the subset $$\begin{aligned} \omega(A) & \overset{\textrm{def}}= & \{ x\in X\!:(\exists n_{k}\in\mathbb{N})(n_{k}\rightarrow\infty)(\exists x_{k}\in A_{n_{k}})\textrm{ }(f^{n_{k}}(x_{k})\rightarrow x)\}\nonumber \\ & = & \{ x\in X\!:(\forall N\in\mathcal{N}_{x})(\forall A_{i}\in\mathcal{A})(N\bigcap A_{i}\neq\emptyset)\}\label{Eqn: Def: omega(A)}\end{aligned}$$ as the corresponding attractor of $A$ that satisfies an equation formally similar to (\[Eqn: attractor\_adherence\]) with the difference that the filter-base $\mathcal{A}$ is now in terms of the image $f(A)$ of $A$, which allows the adherence expression to take the particularly simple form $$\omega(A)=\bigcap_{i\in\mathbb{N}}\textrm{Cl}(f^{i}(A)).\label{Eqn: omega(A)_intersect}$$ The complementary subset excluded from this definition of $\omega(A)$, as compared to $\textrm{Atr}(A_{1})$, that is required to complete the formalism is given by Eq.
(\[Eqn: basin\]) below. Observe that the equation for $\omega(A)$ is essentially Eq. (\[Eqn: adh net1\]), even though we prefer to use the alternate form of Eq. (\[Eqn: adh net2\]) as this brings out more clearly the frequenting nature of the sequence. The basin of attraction $$\begin{array}{ccl} B_{f}(A) & = & \{ x\in A\!:\omega(x)\subseteq\textrm{Atr}(A)\}\\ & = & \{ x\in A\!:(\exists n_{k}\in\mathbb{N})(n_{k}\rightarrow\infty)\textrm{ }(f^{n_{k}}(x)\rightarrow x^{*}\in\omega(A))\}\end{array}\label{Eqn: basin}$$ of the attractor is the smallest subset of $X$ in which sequences generated by $f$ must eventually lie in order to adhere at $\omega(A)$. Comparison of Eqs. (\[Eqn: Attractor\_R+\]) with (\[Eqn: R+\]) and (\[Eqn: basin\]) with (\[Eqn: D+\]) shows that $\omega(A)$ can be identified with the subset $\mathcal{R}_{+}$ on the $y$-axis on which the multifunctional limits $G\!:\mathcal{R}_{+}\rightarrow X$ of graphical convergence are generated, with its basin of attraction being contained in the $\mathcal{D}_{+}$ associated with the injective branch of $f$ that generates $\mathcal{R}_{+}$. In summary it may be concluded that since definitions (\[Eqn: Def: omega(A)\]) and (\[Eqn: basin\]) involve both the domain and range of $f$, a description of the attractor in terms of the graph of $f$, like that of Eq. (\[Eqn: attractor\_adherence\]), is more pertinent and meaningful as it combines the requirements of both these equations. Thus, for example, as $\omega(A)$ is not the function $G(\mathcal{R}_{+})$, this attractor does not include the equivalence class of inverse stable points that may be associated with $x_{*}$, see for example Fig. \[Fig: omega\]. From Eq.
(\[Eqn: Def: omega(A)\]), we may make the particularly simple choice of $(x_{k})$ to satisfy $f^{n_{k}}(x_{-k})=x$ so that $x_{-k}=f_{\textrm{B}}^{-n_{k}}(x)$, where $x_{-k}\in[x_{-k}]:=f^{-n_{k}}(x)$ is the element of the equivalence class of the inverse image of $x$ corresponding to the injective branch $f_{\textrm{B}}$. This choice is of special interest to us as it is the class that generates the $G$-function on $\mathcal{R}_{+}$ in graphical convergence. This allows us to express $\omega(A)$ as $$\omega(A)=\{ x\in X\!:(\exists n_{k}\in\mathbb{N})(n_{k}\rightarrow\infty)(f_{\textrm{B}}^{-n_{k}}(x)=x_{-k}\textrm{ converges in }(X,\mathcal{U}))\};\label{Eqn: Attractor_R+}$$ note that the $x_{-k}$ of this equation and the $x_{k}$ of Eq. (\[Eqn: Def: omega(A)\]) are, in general, quite different points. A simple illustrative example of the construction of $\omega(A)$ for the positive injective branch of the homeomorphism $(4x^{2}-1)/3$, $-1\leq x\leq1$, is shown in Fig. \[Fig: omega\], where the arrow-heads denote the converging sequences $f^{n_{i}}(x_{i})\rightarrow x$ and $f^{n_{i}-m}(x_{i})\rightarrow x_{-m}$, which proves invariance of $\omega(A)$ for a homeomorphic $f$; here continuity of the function and its inverse is explicitly required for invariance. Positive invariance of a subset $A$ of $X$ implies that for any $n\in\mathbb{N}$ and $x\in A$, $f^{n}(x)=y_{n}\in A$, while negative invariance assures that for any $y\in A$, $f^{-n}(y)=x_{-n}\in A$. Invariance of $A$ in both the forward and backward directions therefore means that for any $y\in A$ and $n\in\mathbb{N}$, there exists an $x\in A$ such that $f^{n}(x)=y$. In interpreting this figure, it may be useful to recall from Def. 4.1 that an increasing number of injective branches of $f$ is a necessary, but not sufficient, condition for the occurrence of chaos; thus in Figs.
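For the positive injective branch of $(4x^{2}-1)/3$ the backward iterates $f_{\textrm{B}}^{-n_{k}}$ can be followed explicitly. A numerical sketch (the names are ours; the branch inverse is $f_{\textrm{B}}^{-}(y)=\sqrt{(3y+1)/4}$): starting anywhere in the range, the inverse iterates converge to the inverse-stable fixed point $x_{*}=1$, since $|(f_{\textrm{B}}^{-})'(1)|=3/8<1$.

```python
import math

def f(x):
    return (4 * x * x - 1) / 3       # the map of Fig. [Fig: omega], on [-1, 1]

def f_inv_pos(y):
    # inverse of the positive injective branch f_B; defined for y >= -1/3
    return math.sqrt((3 * y + 1) / 4)

y = 0.0
for _ in range(60):
    y = f_inv_pos(y)                 # the backward orbit f_B^{-n}(0)
print(y)                             # converges to the fixed point x* = 1
assert abs(f(y) - y) < 1e-12         # y is, numerically, a fixed point of f
```

The other fixed point of the branch equation, $x=-1/4$, is repelling for the inverse iteration and is never reached from the range, consistent with only $x_{*}=1$ being inverse stable here.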
\[Fig: log357\](a) and \[Fig: omega\], increasing noninjectivity of $f$ leads to constant valued limit functions over a connected $\mathcal{D}_{+}$ in a manner similar to that associated with the classical Gibbs phenomenon in the theory of Fourier series. Graphical convergence of an increasingly nonlinear family of functions implied by its increasing non-injectivity may now be combined with the requirements of an attractor to lead to the concept of a chaotic attractor as that on which the dynamics is chaotic in the sense of Defs. 4.1 and 4.2. Hence **Definition 4.3.** ***Chaotic Attractor.*** *Let $A$ be a positively invariant subset of $X$. The attractor* $\textrm{Atr}(A)$ *is chaotic on $A$ if there is sensitive dependence on initial conditions for* all *$x\in A$. The sensitive dependence manifests itself as multifunctional graphical limits for all $x\in\mathcal{D}_{+}$ and as chaotic orbits when* $x\not\in\mathcal{D}_{+}$*.*$\qquad\square$ $$f_{\textrm{f}}(x)=\left\{ \begin{array}{ccl} 2(1+x)/3, & & 0\leq x<1/2\\ 2(1-x), & & 1/2\leq x\leq1\end{array}\right.$$ The picture of chaotic attractors that emerges from the foregoing discussions and our characterization of chaos in Def. 4.1 is that of a subset of $X$ that is simultaneously “spiked” multifunctional on the $y$-axis and consists of a dense collection of singleton domains of attraction on the $x$-axis. This is illustrated in Figure \[Fig: attractor\] which shows some typical chaotic attractors. The first four diagrams (a)$-$(d) are for the logistic map, with (b)$-$(d) showing the 4-, 2- and 1-piece attractors for $\lambda=3.575,\textrm{ }3.66,\textrm{ and }3.8$ respectively that are in qualitative agreement with the standard bifurcation diagram reproduced in (e). Figs. (b)$-$(d) have the advantage of clearly demonstrating how the attractors are formed by considering the graphically converged limit as the object of study, unlike in Fig.
(e) which shows the values of the 501st–1001st iterates of $x_{0}=1/2$ as a function of $\lambda$. The difference in Figs. (a) and (b) for a change of $\lambda$ from $\lambda_{*}=3.5699456$ to $3.575$ is significant, as $\lambda=\lambda_{*}$ marks the boundary between the nonchaotic region for $\lambda<\lambda_{*}$ and the chaotic for $\lambda>\lambda_{*}$ (this is to be understood as being suitably modified by the appearance of the nonchaotic windows for some specific intervals in $\lambda>\lambda_{*}$). At $\lambda_{*}$ the generated fractal Cantor set $\Lambda$ is an attractor as it attracts almost every initial point $x_{0}$, so that the successive images $x^{n}=f^{n}(x_{0})$ converge toward the Cantor set $\Lambda$. In Fig. (f) the chaotic attractor for the piecewise continuous function on $[0,1]$ $$f_{\textrm{f}}(x)=\left\{ \begin{array}{ccc} 2(1+x)/3, & & 0\leq x<1/2\\ 2(1-x), & & 1/2\leq x\leq1,\end{array}\right.$$ is $[0,1]$, where the dotted lines represent odd iterates and the full lines even iterates of $f$; here the attractor disappears if the function is reflected about the $x$-axis. Figure \[Fig: attractor\], contd.: Chaotic attractors for $\lambda=3.66$ and $\lambda=3.8$. ***4.2. Why Chaos? A Preliminary Inquiry*** The question as to why a natural system should evolve chaotically is both interesting and relevant, and this section attempts to advance a plausible answer to this inquiry that is based on the connection between topology and convergence contained in the Corollary to Theorem A1.5. Open sets are groupings of elements that govern convergence of nets and filters, because the required property of being either eventually or frequently in (open) neighbourhoods of a point determines the eventual behaviour of the net; recall in this connection the unusual convergence characteristics in cofinite and cocountable spaces.
Conversely, for a given convergence characteristic of a class of nets it is possible to infer the topology of the space that is responsible for this convergence, and it is this point of view that we adopt here to investigate the question of this subsection: recall that our Definitions 4.1 and 4.2 were based on purely algebraic, set-theoretic arguments on ordered sets, just as the role of the choice of an appropriate problem-dependent basis was highlighted at the end of Sec. 2. Figure \[Fig: attractor\], contd.: Bifurcation diagram and attractors for $f_{\textrm{f}}(x)$. Chaos as manifest in its attractors is a direct consequence of the increasing nonlinearity of the map with increasing iteration; we reemphasize that this is only a necessary condition, so that the increasing nonlinearities of Figs. \[Fig: log357\] and \[Fig: omega\] eventually lead to stable states and not to chaotic instability. Under the right conditions, as enunciated following Fig. \[Fig: Zorn\], chaos appears to be the natural outcome of the difference in the behaviour of a function $f$ and its inverse $f^{-}$ under their successive applications. Thus $f=ff^{-}f$ allows $f$ to take advantage of its multi inverse to generate all possible equivalence classes that are available to it, a feature not accessible to $f^{-}=f^{-}ff^{-}$. As we have seen in the foregoing, equivalence classes of fixed points, stable and unstable, are of defining significance in determining the ultimate behaviour of an evolving dynamical system, and as the eventual (as also frequent) character of a filter or net in a set is dictated by open neighbourhoods of points of the set, *it is postulated that chaoticity on a set $X$ leads to a reformulation of the open sets of $X$ to equivalence classes generated by the evolving map $f$,* see Example 2.4(3).
Such a redefinition of open sets as equivalence classes allows the evolving system to temporally access an ever increasing number of states, even though the equivalent fixed points are not fixed under iterations of $f$ except for the parent of the class, and can be considered to be the governing criterion for the cooperative or collective behaviour of the system. The predominance of the role of $f^{-}$ in $f=ff^{-}f$ in generating the equivalence classes (that is, exploiting the many-to-one character) of $f$ is reflected as limit multis for $f$ (that is, constant $f^{-}$ on $\mathcal{R}_{+}$) in $f^{-}=f^{-}ff^{-}$; this interpretation of the dynamics of chaos is meaningful as graphical convergence leading to chaos is a result of pointwise biconvergence of the sequence of iterates of the functions generated by $f$. But as $f$ is a noninjective function *on* $X$ *possessing the property of increasing nonlinearity in the form of increasing noninjectivity with iteration,* various cycles of disjoint equivalence classes are generated under iteration, see for example Fig. \[Fig: tent4\](a) for the tent map. A reference to Fig. \[Fig: GenInv\] shows that the basic set $X_{\textrm{B}}$, for a finite number $n$ of iterations of $f$, contains the parent of each of these open equivalent sets in the domain of $f$, with the topology on $X_{\textrm{B}}$ being the corresponding $p$-images of these disjoint saturated open sets of the domain. In the limit of infinite iterations of $f$ leading to the multifunction $\mathcal{M}$ (this is the $f^{\infty}$ of Sec. 4.1), the generated open sets constitute a basis for a topology on $\mathcal{D}(f)$, and the basis for the topology of $\mathcal{R}(f)$ consists of the corresponding $\mathcal{M}$-images of these equivalence classes. *It is our contention that the motive force behind evolution toward chaos, as defined by Def.
4.1, is the drive toward a state of the dynamical system that supports ininality of the limit multi* $\mathcal{M}$*;* see Appendix A2 with the discussions on Fig. \[Fig: GenInv\] and Eq. (\[Eqn: ininal\]) in Sec. 2. In the limit of infinite iterations therefore, the open sets of the range $\mathcal{R}(f)\subseteq X$ are the multi images that graphical convergence generates at each of these inverse-stable fixed points. $X$ therefore has two topologies imposed on it by the dynamics of $f$: the first of equivalence classes generated by the limit multi $\mathcal{M}$ in the domain of $f$, and the second as $\mathcal{M}$-images of these classes in the range of $f$. Quite clearly these two topologies need not be the same; their intersection can therefore be defined to be the *chaotic topology* *on* $X$ *associated with the chaotic map* $f$ on $X$. Neighbourhoods of points in this topology cannot be arbitrarily small as they consist of all members of the equivalence class to which any element belongs; hence a sequence converging to any of these elements necessarily converges to all of them, and the eventual objective of chaotic dynamics is to generate a topology in $X$ with respect to which elements of the set can be grouped together in as large equivalence classes as possible, in the sense that if a net converges simultaneously to points $x\neq y\in X$ then $x\sim y$: $x$ is of course equivalent to itself, while $x,y,z$ are equivalent to each other iff they are simultaneously in every open set in which the net may eventually belong. This hallmark of chaos can be appreciated in terms of a necessary obliteration of any separation property that the space might have originally possessed, see property (H3) in Appendix A3. We reemphasize that a set in this chaotic context is required to act in a dual capacity depending on whether it carries the initial or final topology under $\mathcal{M}$.
This preliminary inquiry into the nature of chaos is concluded in the final section of this work. **5. Graphical convergence works** We present in this section some real evidence in support of our hypothesis of graphical convergence of functions in $\textrm{Multi}(X,Y)$. The example is taken from neutron transport theory, and concerns the discretized spectral approximation [@Sengupta1988; @Sengupta1995] of Case’s singular eigenfunction solution of the monoenergetic neutron transport equation [@Case1967]. The neutron transport equation is a linear form of the Boltzmann equation that is obtained as follows. Consider the neutron-moderator system as a mixture of two species of gases, each of which satisfies a Boltzmann equation of the type $$\begin{gathered} \left(\frac{\partial}{\partial t}+v_{i}\cdot\nabla\right)f_{i}(r,v,t)=\\ \int dv^{\prime}\int dv_{1}\int dv_{1}^{\prime}\sum_{j}W_{ij}(v_{i}\rightarrow v^{\prime};v_{1}\rightarrow v_{1}^{\prime})\{ f_{i}(r,v^{\prime},t)f_{j}(r,v_{1}^{\prime},t)-f_{i}(r,v,t)f_{j}(r,v_{1},t)\}\end{gathered}$$ where $$W_{ij}(v_{i}\rightarrow v^{\prime};v_{1}\rightarrow v_{1}^{\prime})=\mid v-v_{1}\mid\sigma_{ij}(v-v^{\prime},v_{1}-v_{1}^{\prime}),$$ $\sigma_{ij}$ being the cross-section of interaction between species $i$ and $j$. Denote neutrons by subscript 1 and the background moderator with which the neutrons interact by 2, and make the assumptions that \(i) The neutron density $f_{1}$ is much smaller than that of the moderator $f_{2}$, so that the terms $f_{1}f_{1}$ and $f_{1}f_{2}$ may be neglected in the neutron and moderator equations respectively. \(ii) The moderator distribution $f_{2}$ is not affected by the neutrons.
This decouples the neutron and moderator equations and leads to an equilibrium Maxwellian $f_{\textrm{M}}$ for the moderator, while the neutrons are described by the linear equation $$\begin{gathered} \left(\frac{\partial}{\partial t}+v\cdot\nabla\right)f(r,v,t)=\\ \int dv^{\prime}\int dv_{1}\int dv_{1}^{\prime}W_{12}(v\rightarrow v^{\prime};v_{1}\rightarrow v_{1}^{\prime})\{ f(r,v^{\prime},t)f_{\textrm{M}}(v_{1}^{\prime})-f(r,v,t)f_{\textrm{M}}(v_{1})\}.\end{gathered}$$ This is now put in the standard form of the neutron transport equation [@Williams1967] $$\begin{gathered} \left(\frac{1}{v}\frac{\partial}{\partial t}+\widehat{\Omega}\cdot\nabla+\mathcal{S}(E)\right)\Phi(r,E,\widehat{\Omega},t)=\int d\Omega^{\prime}\int dE^{\prime}\mathcal{S}(r,E^{\prime}\rightarrow E;\widehat{\Omega}^{\prime}\cdot\widehat{\Omega})\textrm{ }\Phi(r,E^{\prime},\widehat{\Omega}^{\prime},t),\end{gathered}$$ where $E=mv^{2}/2$ is the energy and $\widehat{\Omega}$ the direction of motion of the neutrons. The steady state, monoenergetic form of this equation is Eq. (\[Eqn: NeutronTransport\]) $$\mu\frac{\partial\Phi(x,\mu)}{\partial x}+\Phi(x,\mu)=\frac{c}{2}\int_{-1}^{1}\Phi(x,\mu^{\prime})d\mu^{\prime},\qquad0<c<1,\,-1\leq\mu\leq1,$$ and its singular eigenfunction solution for $x\in(-\infty,\infty)$ is given by Eq. (\[Eqn: CaseSolution\_FR\]) $$\begin{gathered} \Phi(x,\mu)=a(\nu_{0})e^{-x/\nu_{0}}\phi(\mu,\nu_{0})+a(-\nu_{0})e^{x/\nu_{0}}\phi(\mu,-\nu_{0})+\int_{-1}^{1}a(\nu)e^{-x/\nu}\phi(\mu,\nu)d\nu;\end{gathered}$$ see Appendix A4 for an introductory review of Case’s solution of the one-speed neutron transport equation.
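The discrete eigenvalues $\pm\nu_{0}$ appearing in Eq. (\[Eqn: CaseSolution\_FR\]) are, for $0<c<1$, the roots with $|\nu_{0}|>1$ of the standard dispersion relation of transport theory, $\Lambda(\nu)=1-\tfrac{c\nu}{2}\ln\tfrac{\nu+1}{\nu-1}=0$. As a numerical sketch (the dispersion relation is standard; the code, bracket, and tolerances are our own), $\nu_{0}$ is easily found by bisection, since $\Lambda\rightarrow-\infty$ as $\nu\rightarrow1^{+}$ while $\Lambda\rightarrow1-c>0$ as $\nu\rightarrow\infty$:

```python
import math

def Lambda(nu, c):
    # Case dispersion function: Lambda(nu) = 1 - (c*nu/2) ln((nu+1)/(nu-1))
    return 1.0 - 0.5 * c * nu * math.log((nu + 1.0) / (nu - 1.0))

def nu0(c, lo=1.0 + 1e-12, hi=1e6):
    # unique root nu_0 > 1 of Lambda for 0 < c < 1, by bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Lambda(mid, c) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(nu0(0.9))   # ~ 1.903 for c = 0.9
```

As $c\rightarrow1^{-}$ the root moves off to infinity, which is the familiar diffusion limit of a weakly absorbing medium.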
|                | $\mathcal{R}=X$        | $\textrm{Cl}(\mathcal{R})=X$ | $\textrm{Cl}(\mathcal{R})\neq X$ |
|----------------|------------------------|------------------------------|----------------------------------|
| Not injective  | $P\sigma(\mathscr{L})$ | $P\sigma(\mathscr{L})$       | $P\sigma(\mathscr{L})$           |
| Not continuous | $C\sigma(\mathscr{L})$ | $C\sigma(\mathscr{L})$       | $R\sigma(\mathscr{L})$           |
| Continuous     | $\rho(\mathscr{L})$    | $\rho(\mathscr{L})$          | $R\sigma(\mathscr{L})$           |

: \[Table: spectrum\] Spectrum of a linear operator $\mathscr{L}\in\textrm{Map}(X)$. Here $\mathscr L_{\lambda}:=\mathscr{L}-\lambda$ satisfies the equation $\mathscr L_{\lambda}(x)=0$, with the resolvent set $\rho(\mathscr{L})$ of $\mathscr{L}$ consisting of all those complex numbers $\lambda$ for which $\mathscr L_{\lambda}^{-1}$ exists as a continuous operator with dense domain. Any value of $\lambda$ for which this is not true is in the spectrum $\sigma(\mathscr{L})$ of $\mathscr{L}$, which is further subdivided into the three disjoint components of the point, continuous, and residual spectra according to the criteria shown in the table.

The term "eigenfunction" is motivated by the following considerations. Consider the eigenvalue equation $$(\mu-\nu)\mathscr F_{\nu}(\mu)=0,\qquad\mu\in V(\mu),\textrm{ }\nu\in\mathbb{R}\label{Eqn: eigen}$$ in the space of multifunctions $\textrm{Multi}(V(\mu),(-\infty,\infty))$, where $\mu$ is in either of the intervals $[-1,1]$ or $[0,1]$ depending on whether the given boundary conditions for Eq. (\[Eqn: NeutronTransport\]) are full-range or half-range. If we are looking only for functional solutions of Eq. 
(\[Eqn: eigen\]), then the unique function $\mathscr F$ that satisfies this equation for all possible $\mu\in V(\mu)$ and $\nu\in\mathbb{R}-V(\mu)$ is $\mathscr F_{\nu}(\mu)=0$, which means, according to Table \[Table: spectrum\], that the point spectrum of $\mu$ is empty and $(\mu-\nu)^{-1}$ exists for all such $\nu$. When $\nu\in V(\mu)$, however, this inverse is not continuous, and we show below that in $\textrm{Map}(V(\mu),\mathbb{R})$, $\nu\in V(\mu)$ belongs to the continuous spectrum of $\mu$. This distinction between the nature of the inverses depending on the relative values of $\mu$ and $\nu$ suggests a wider "non-function" space in which to look for the solutions of operator equations, and in keeping with the philosophy embodied in Fig. \[Fig: GenInv\] of treating inverse problems in the space of multifunctions, we consider all $\mathscr F_{\nu}\in\textrm{Multi}(V(\mu),\mathbb{R})$ satisfying Eq. (\[Eqn: eigen\]) to be eigenfunctions of $\mu$ for the corresponding eigenvalue $\nu$, leading to the following multifunctional solution of (\[Eqn: eigen\])$$\begin{aligned} \mathscr F_{\nu}(\mu) & = & \left\{ \begin{array}{ccl} (V(\mu),0), & & \textrm{if }\nu\notin V(\mu)\\ (V(\mu)-\nu,0)\bigcup(\nu,\mathbb{R}), & & \textrm{if }\nu\in V(\mu),\end{array}\right.\end{aligned}$$ where $V(\mu)-\nu$ is used as a shorthand for the interval $V(\mu)$ with $\nu$ deleted. Rewriting the eigenvalue equation (\[Eqn: eigen\]) as $\mu_{\nu}(\mathscr F_{\nu}(\mu))=0$ and comparing this with Fig. 
\[Fig: GenInv\] allows us to draw the correspondences $$\begin{aligned} f & \Longleftrightarrow & \mu_{\nu}\nonumber \\ X\textrm{ and }Y & \Longleftrightarrow & \{\mathscr F_{\nu}\in\textrm{Multi}(V(\mu),\mathbb{R})\!:\mathscr F_{\nu}\in\mathcal{D}(\mu_{\nu})\}\nonumber \\ f(X) & \Longleftrightarrow & \{0\!:0\in Y\}\label{Eqn: GenInv_Spectrum}\\ X_{\textrm{B}} & \Longleftrightarrow & \{0\!:0\in X\}\nonumber \\ f^{-} & \Longleftrightarrow & \mu_{\nu}^{-}.\nonumber \end{aligned}$$ Thus a multifunction in $X$ is equivalent to $0$ in $X_{\textrm{B}}$ under the linear map $\mu_{\nu}$, and we show below that this multifunction is in fact the Dirac delta "function" $\delta_{\nu}(\mu)$, usually written as $\delta(\mu-\nu)$. This suggests that in $\textrm{Multi}(V(\mu),\mathbb{R})$*, every $\nu\in V(\mu)$ is in the point spectrum of $\mu$*, so that *discontinuous functions that are pointwise limits of functions in function space can be replaced by graphically converged multifunctions in the space of multifunctions*. Completing the equivalence class of $0$ in Fig. \[Fig: GenInv\] gives the multifunctional solution of Eq. (\[Eqn: eigen\]). From a comparison of the definition of ill-posedness (Sec. 2) and the spectrum (Table \[Table: spectrum\]), it is clear that $\mathscr L_{\lambda}(x)=y$ is ill-posed iff \(1) $\mathscr L_{\lambda}$ is not injective $\Leftrightarrow$ $\lambda\in P\sigma(\mathscr L_{\lambda})$, which corresponds to the first row of Table \[Table: spectrum\]; \(2) $\mathscr L_{\lambda}$ is not surjective $\Leftrightarrow$ the values of $\lambda$ correspond to the second and third columns of Table \[Table: spectrum\]; \(3) $\mathscr L_{\lambda}$ is bijective but not open $\Leftrightarrow$ $\lambda\textrm{ is either in }C\sigma(\mathscr L_{\lambda})\textrm{ or }R\sigma(\mathscr L_{\lambda})$, corresponding to the second row of Table \[Table: spectrum\]. 
We verify in the three steps below that, in the space $X=L_{1}[-1,1]$ of integrable functions, $\nu\in V(\mu)=[-1,1]$ belongs to the continuous spectrum of $\mu$. \(a) *$\mathcal{R}(\mu_{\nu})$ is dense, but not equal to $L_{1}$*. The set of functions $g(\mu)\in L_{1}$ such that $\mu_{\nu}^{-1}g\in L_{1}$ cannot be the whole of $L_{1}$. Thus, for example, the piecewise constant function $g=\textrm{const}\neq0$ on $\mid\mu-\nu\mid\leq\delta>0$ and $0$ otherwise is in $L_{1}$ but not in *$\mathcal{R}(\mu_{\nu})$*, as $\mu_{\nu}^{-1}g\not\in L_{1}$. Nevertheless, for any $g\in L_{1}$ we may choose the sequence of functions $$g_{n}(\mu)=\left\{ \begin{array}{ccl} 0, & & \textrm{if }\mid\mu-\nu\mid\leq1/n\\ g(\mu), & & \textrm{otherwise}\end{array}\right.$$ in $\mathcal{R}(\mu_{\nu})$ to be eventually in every neighbourhood of $g$, in the sense that $\lim_{n\rightarrow\infty}\int_{-1}^{1}\mid g-g_{n}\mid=0$. \(b) *The inverse $(\mu-\nu)^{-1}$ exists but is not continuous.* The inverse exists because, as noted earlier, $0$ is the only functional solution of Eq. (\[Eqn: eigen\]). Nevertheless, although the net of functions $$\delta_{\nu\varepsilon}(\mu)=\frac{1}{\tan^{-1}((1+\nu)/\varepsilon)+\tan^{-1}((1-\nu)/\varepsilon)}\left(\frac{\varepsilon}{(\mu-\nu)^{2}+\varepsilon^{2}}\right),\qquad\varepsilon>0,$$ is in the domain of $\mu_{\nu}$ with $\int_{-1}^{1}\delta_{\nu\varepsilon}(\mu)d\mu=1$ for all $\varepsilon>0$, we have $$\lim_{\varepsilon\rightarrow0}\int_{-1}^{1}\mid\mu-\nu\mid\delta_{\nu\varepsilon}(\mu)d\mu=0,$$ implying that $(\mu-\nu)^{-1}$ is unbounded. Taken together, (a) and (b) show that functional solutions of Eq. (\[Eqn: eigen\]) lead to state 2-2 (second row, second column) of Table \[Table: spectrum\]; hence $\nu\in[-1,1]=C\sigma(\mu)$. 
\(c) The two integral constraints in (b) also mean that $\nu\in C\sigma(\mu)$ is a *generalized eigenvalue* of $\mu$, which justifies calling the graphical limit $\delta_{\nu\varepsilon}(\mu)\overset{\mathbf{G}}\rightarrow\delta_{\nu}(\mu)$ a *generalized,* or singular, *eigenfunction*; see Fig. \[Fig: Poison\], which clearly indicates the convergence of the net of functions[^26]. From the fact that the solution Eq. (\[Eqn: CaseSolution\_FR\]) of the transport equation contains an integral involving the multifunction $\phi(\mu,\nu)$, we may draw an interesting physical interpretation. As the multi appears *everywhere* on $V(\mu)$ (that is, there are no chaotic orbits, only the multifunctions that produce them), we have here a situation typical of the *maximal ill-posedness* characteristic of chaos: note that both the functions comprising $\phi_{\varepsilon}(\mu,\nu)$ are non-injective. As the solution (\[Eqn: CaseSolution\_FR\]) involves an integral over all $\nu\in V(\mu)$, the singular eigenfunctions — that collectively may be regarded as representing a *chaotic substate* of the system represented by the solution of the neutron transport equation — combine with the functional components $\phi(\pm\nu_{0},\mu)$ to produce the well-defined, non-chaotic, experimental end result of the neutron flux $\Phi(x,\mu)$. The solution (\[Eqn: CaseSolution\_FR\]) is obtained by assuming $\Phi(x,\mu)=e^{-x/\nu}\phi(\mu,\nu)$, which gives the equation for $\phi(\mu,\nu)$ as $(\mu-\nu)\phi(\mu,\nu)=-c\nu/2$ with the normalization $\int_{-1}^{1}\phi(\mu,\nu)d\mu=1$. 
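The two defining properties of the net $\delta_{\nu\varepsilon}$ in step (b) are easy to check numerically. The following Python sketch (the grid resolution, the choice $\nu=0.4$, and the tolerances are illustrative choices, not taken from the source) confirms that the integral of $\delta_{\nu\varepsilon}$ over $[-1,1]$ stays at $1$ while the $L_{1}$ norm of its image $\mid\mu-\nu\mid\delta_{\nu\varepsilon}$ under $\mu_{\nu}$ shrinks as $\varepsilon\rightarrow0$, which is precisely the unboundedness of $(\mu-\nu)^{-1}$:

```python
import numpy as np

def integrate(f, x):
    # trapezoidal rule on a uniform grid
    h = x[1] - x[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

def delta_eps(mu, nu, eps):
    # the net delta_{nu,eps}: a Cauchy kernel normalized so that its
    # integral over [-1, 1] is exactly 1 for every eps > 0
    norm = np.arctan((1.0 + nu) / eps) + np.arctan((1.0 - nu) / eps)
    return (1.0 / norm) * eps / ((mu - nu) ** 2 + eps ** 2)

nu = 0.4                                   # any interior point of V(mu)
mu = np.linspace(-1.0, 1.0, 400001)        # grid spacing 5e-6

totals, moments = [], []
for eps in (1e-1, 1e-2, 1e-3):
    d = delta_eps(mu, nu, eps)
    totals.append(integrate(d, mu))                     # stays ~ 1
    moments.append(integrate(np.abs(mu - nu) * d, mu))  # image under mu_nu
```

The moments decrease steadily toward zero while the totals remain at unity, so the preimages of a shrinking neighbourhood of $0$ do not shrink in norm.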
As $\mu_{\nu}$ is not invertible in $\textrm{Multi}(V(\mu),\mathbb{R})$ and $\mu_{\nu\textrm{B}}\!:X_{\textrm{B}}\rightarrow f(X)$ does not exist, the alternative approach of regularization was adopted in [@Sengupta1988; @Sengupta1995] to rewrite $\mu_{\nu}\phi(\mu,\nu)=-c\nu/2$ as $\mu_{\nu\varepsilon}\phi_{\varepsilon}(\mu,\nu)=-c\nu/2$, with $\mu_{\nu\varepsilon}:=\mu-(\nu+i\varepsilon)$ a net of bijective functions for $\varepsilon>0$; this is a consequence of the fact that for the multiplication operator every nonreal $\lambda$ belongs to the resolvent set. The family of solutions of the latter equation is given by [@Sengupta1988; @Sengupta1995] $$\phi_{\varepsilon}(\nu,\mu)=\frac{c\nu}{2}\frac{\nu-\mu}{(\mu-\nu)^{2}+\varepsilon^{2}}+\frac{\lambda_{\varepsilon}(\nu)}{\pi_{\varepsilon}}\frac{\varepsilon}{(\mu-\nu)^{2}+\varepsilon^{2}}\label{Eqn: phieps}$$ where the required normalization $\int_{-1}^{1}\phi_{\varepsilon}(\nu,\mu)d\mu=1$ gives $$\begin{array}{ccl} {\displaystyle \lambda_{\varepsilon}(\nu)} & = & {\displaystyle \frac{\pi_{\varepsilon}}{\tan^{-1}((1+\nu)/\varepsilon)+\tan^{-1}((1-\nu)/\varepsilon)}\left(1-\frac{c\nu}{4}\ln\frac{(1+\nu)^{2}+\varepsilon^{2}}{(1-\nu)^{2}+\varepsilon^{2}}\right)}\\ & \overset{\varepsilon\rightarrow0}\longrightarrow & \pi\lambda(\nu)\end{array}$$ with $$\pi_{\varepsilon}=\varepsilon\int_{-1}^{1}\frac{d\mu}{\mu^{2}+\varepsilon^{2}}=2\tan^{-1}\left(\frac{1}{\varepsilon}\right)\overset{\varepsilon\rightarrow0}\longrightarrow\pi.$$ These discretized equations should be compared with the corresponding exact ones of Appendix A4. We shall see that the net of functions (\[Eqn: phieps\]) converges graphically to the multifunction Eq. (\[Eqn: singular\_eigen\]) as $\varepsilon\rightarrow0$. In the discretized spectral approximation, the singular eigenfunction $\phi(\mu,\nu)$ is replaced by $\phi_{\varepsilon}(\mu,\nu)$, $\varepsilon\rightarrow0$, with the integral in $\nu$ being replaced by an appropriate sum. 
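The normalization that fixes $\lambda_{\varepsilon}(\nu)$ can be verified directly. The Python sketch below evaluates Eq. (\[Eqn: phieps\]) on a fine grid and checks $\int_{-1}^{1}\phi_{\varepsilon}(\nu,\mu)\,d\mu=1$; the parameter values $c=0.9$, $\nu=0.3$, $\varepsilon=0.05$ are arbitrary illustrations, not benchmark settings from the source:

```python
import numpy as np

def lambda_eps(nu, c, eps):
    # lambda_eps(nu), chosen so that phi_eps integrates to 1 over [-1, 1]
    pi_eps = 2.0 * np.arctan(1.0 / eps)
    denom = np.arctan((1.0 + nu) / eps) + np.arctan((1.0 - nu) / eps)
    log_term = np.log(((1.0 + nu) ** 2 + eps ** 2) / ((1.0 - nu) ** 2 + eps ** 2))
    return (pi_eps / denom) * (1.0 - (c * nu / 4.0) * log_term)

def phi_eps(mu, nu, c, eps):
    # regularized singular eigenfunction of Eq. (phieps): a principal-value
    # part plus a normalized Cauchy (delta-like) part
    pi_eps = 2.0 * np.arctan(1.0 / eps)
    d = (mu - nu) ** 2 + eps ** 2
    return (c * nu / 2.0) * (nu - mu) / d + (lambda_eps(nu, c, eps) / pi_eps) * eps / d

c, nu, eps = 0.9, 0.3, 0.05               # illustrative parameters only
mu = np.linspace(-1.0, 1.0, 200001)
vals = phi_eps(mu, nu, c, eps)
h = mu[1] - mu[0]
norm = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))   # trapezoidal rule
```

The computed `norm` equals $1$ to within the quadrature error, for any admissible choice of $c$, $\nu$ and $\varepsilon$.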
The solution Eq. (\[Eqn: CaseSolution\_HR\]) of the physically interesting half-space $x\geq0$ problem then reduces to [@Sengupta1988; @Sengupta1995] $$\Phi_{\varepsilon}(x,\mu)=a(\nu_{0})e^{-x/\nu_{0}}\phi(\mu,\nu_{0})+\sum_{i=1}^{N}a(\nu_{i})e^{-x/\nu_{i}}\phi_{\varepsilon}(\mu,\nu_{i})\qquad\mu\in[0,1]\label{Eqn: DiscSpect_HR}$$ where the nodes $\{\nu_{i}\}_{i=1}^{N}$ are chosen suitably. This discretized spectral approximation to Case's solution has given surprisingly accurate numerical results for a set of properly chosen nodes when compared with exact calculations. Because of the involved nature of the exact theory [@Case1967], the exact calculations are essentially numerical, leading to nonlinear integral equations as part of the solution procedure. To appreciate the enormous complexity of the exact treatment of the half-space problem, we recall that the complete set of eigenfunctions $\{\phi(\mu,\nu_{0}),\{\phi(\mu,\nu)\}_{\nu\in[0,1]}\}$ is orthogonal with respect to the half-range weight function $W(\mu)$ of half-range theory, Eq. (\[Eqn: W(mu)\]), which is expressed only in terms of the solution of the nonlinear integral equation Eq. (\[Eqn: Omega(-mu)\]). The solution of a half-space problem then evaluates the coefficients $\{ a(\nu_{0}),\{a(\nu)\}_{\nu\in[0,1]}\}$ from the appropriate half-range (that is, $0\leq\mu\leq1$) orthogonality integrals satisfied by the eigenfunctions $\{\phi(\mu,\nu_{0}),\{\phi(\mu,\nu)\}_{\nu\in[0,1]}\}$ with respect to the weight $W(\mu)$; see Appendix A4 for the necessary details of the half-space problem in neutron transport theory. As may be appreciated from this brief introduction, solutions to half-space problems are not simple, and actual numerical computations must rely a great deal on tabulated values of the $X$-function. 
Self-consistent calculations of sample benchmark problems, performed by the discretized spectral approximation in a full-range adaptation of the half-range problem described below that generates all necessary data independently of numerical tables, with the quadrature nodes $\{\nu_{i}\}_{i=1}^{N}$ taken at the zeros of Legendre polynomials, show that the full-range formulation of this approximation [@Sengupta1988; @Sengupta1995] can give very accurate results not only for integrated quantities like the flux $\Phi$ and the leakage of particles out of the half-space, but also for basic "raw" data like the extrapolated end point $$z_{0}=\frac{c\nu_{0}}{4}\int_{0}^{1}\frac{\nu}{N(\nu)}\left(1+\frac{c\nu^{2}}{1-\nu^{2}}\right)\ln\left(\frac{\nu_{0}+\nu}{\nu_{0}-\nu}\right)d\nu\label{Eqn: extrapolated}$$ and the $X$-function itself. Given the involved nature of the exact theory, it is our contention that the remarkable accuracy of these basic data, some of which is reproduced in Table \[Table: extrapolated\], is due to the graphical convergence of the net of functions $$\phi_{\varepsilon}(\mu,\nu)\overset{\mathbf{G}}\longrightarrow\phi(\mu,\nu)$$ shown in Fig. \[Fig: Case\]; here $\varepsilon=1/\pi N$, so that $\varepsilon\rightarrow0$ as $N\rightarrow\infty$. By this convergence, the delta function and the principal value in $[-1,1]$ are the multifunctions $([-1,0),0)\bigcup(0,[0,\infty))\bigcup((0,1],0)$ and $\{1/x\}_{x\in[-1,0)}\bigcup(0,(-\infty,\infty))\bigcup\{1/x\}_{x\in(0,1]}$ respectively. 
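Reproducing such benchmark data begins with the discrete eigenvalue $\nu_{0}$, which in one-speed theory is the root $\nu_{0}>1$ of the dispersion relation $1=c\nu_{0}\tanh^{-1}(1/\nu_{0})$ (a standard result of the Case theory reviewed in Appendix A4; the bisection bracket and tolerances below are our own choices, not the source's). A minimal Python sketch:

```python
import numpy as np

def dispersion(nu0, c):
    # one-speed dispersion relation: Lambda(nu0) = 0 reads
    # 1 = c * nu0 * artanh(1/nu0) for the discrete eigenvalue nu0 > 1
    return c * nu0 * np.arctanh(1.0 / nu0) - 1.0

def discrete_eigenvalue(c, lo=1.0 + 1e-12, hi=1.0e6, iters=200):
    # bisection: the left-hand side is +inf as nu0 -> 1+ and tends to
    # c - 1 < 0 as nu0 -> inf, so a root is always bracketed for 0 < c < 1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dispersion(mid, c) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu0_05 = discrete_eigenvalue(0.5)   # ~ 1.0444
nu0_09 = discrete_eigenvalue(0.9)   # ~ 1.9032
```

As $c\rightarrow0$ the root approaches $1$, and it grows without bound as $c\rightarrow1$, consistent with the role of $\nu_{0}$ in the asymptotic flux $e^{-x/\nu_{0}}\phi(\mu,\nu_{0})$.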
Tables \[Table: extrapolated\] and \[Table: X-function\], taken from @Sengupta1988 and @Sengupta1995, show respectively the extrapolated end point and the $X$-function obtained by the full-range adaptation of the discretized spectral approximation for two different half-range problems, denoted as Problems A and B and defined as $$\begin{aligned} Problem\textrm{ }A\quad & \textrm{Equation}\!:\textrm{ }{\textstyle {\mu\Phi_{x}+\Phi=(c/2)\int_{-1}^{1}\Phi(x,\mu^{\prime})d\mu^{\prime},\; x\geq0}}\\ & \textrm{Boundary condition}:\textrm{ }\Phi(0,\mu)=0,\;\mu\geq0\\ & \textrm{Asymptotic condition}:\textrm{ }\Phi\rightarrow e^{-x/\nu_{0}}\phi(\mu,\nu_{0}),\; x\rightarrow\infty.\\ Problem\textrm{ }B\quad & \textrm{Equation}\!:\textrm{ }{\textstyle {\mu\Phi_{x}+\Phi=(c/2)\int_{-1}^{1}\Phi(x,\mu^{\prime})d\mu^{\prime},\; x\geq0}}\\ & \textrm{Boundary condition}:\textrm{ }\Phi(0,\mu)=1,\;\mu\geq0\\ & \textrm{Asymptotic condition}:\textrm{ }\Phi\rightarrow0,\; x\rightarrow\infty.\end{aligned}$$ The full-range $-1\leq\mu\leq1$ form of the half-range $0\leq\mu\leq1$ discretized spectral approximation replaces the exact integral boundary condition at $x=0$ by a suitable quadrature sum over the values of $\nu$ taken at the zeros of Legendre polynomials; thus the condition at $x=0$ can be expressed as $$\psi(\mu)=a(\nu_{0})\phi(\mu,\nu_{0})+\sum_{i=1}^{N}a(\nu_{i})\phi_{\varepsilon}(\mu,\nu_{i}),\qquad\mu\in[0,1],\label{Eqn: BC}$$ where $\psi(\mu)=\Phi(0,\mu)$ is the specified incoming radiation incident on the boundary from the left, and the half-range coefficients $a(\nu_{0})$, $\{ a(\nu)\}_{\nu\in[0,1]}$ are to be evaluated using the $W$-function of Appendix A4. We now exploit the relative simplicity of the full-range calculations by replacing Eq. (\[Eqn: BC\]) by the following Eq. (\[Eqn: HRFR\_Discrete\]), where the coefficients $\{ b(\nu_{i})\}_{i=0}^{N}$ are used to distinguish the full-range coefficients from the half-range ones. 
The significance of this change lies in the overwhelming simplicity of the full-range weight function $\mu$ as compared to the half-range function $W(\mu)$, and the resulting simplicity of the orthogonality relations that follow; see Appendix A4. The basic data of $z_{0}$ and $X(-\nu)$ are then completely generated self-consistently [@Sengupta1988; @Sengupta1995] by the discretized spectral approximation from the full-range adaptation $$\sum_{i=0}^{N}b_{i}\phi_{\varepsilon}(\mu,\nu_{i})=\psi_{+}(\mu)+\psi_{-}(\mu),\qquad\mu\in[-1,1],\textrm{ }\nu_{i}\geq0\label{Eqn: HRFR_Discrete}$$ of the discretized boundary condition Eq. (\[Eqn: BC\]), where $\psi_{+}(\mu)$ is by definition the incident flux $\psi(\mu)$ for $\mu\in[0,1]$ and $0$ if $\mu\in[-1,0]$, while $$\psi_{-}(\mu)=\left\{ \begin{array}{ccl} {\displaystyle \sum_{i=0}^{N}b_{i}^{-}\phi_{\varepsilon}(\mu,\nu_{i})} & & \textrm{if }\mu\in[-1,0],\textrm{ }\nu_{i}\geq0\textrm{ }\\ 0 & & \textrm{if }\mu\in[0,1]\end{array}\right.$$ is the emergent angular distribution out of the medium. Equation (\[Eqn: HRFR\_Discrete\]) corresponds to the full-range $\mu\in[-1,1],\textrm{ }\nu_{i}\geq0$ form $$b(\nu_{0})\phi(\mu,\nu_{0})+\int_{0}^{1}b(\nu)\phi(\mu,\nu)d\nu=\psi_{+}(\mu)+\left(b^{-}(\nu_{0})\phi(\mu,\nu_{0})+\int_{0}^{1}b^{-}(\nu)\phi(\mu,\nu)d\nu\right)\label{Eqn: HRFR}$$ of boundary condition (\[Eqn: BC\_HR\]), with the first and second terms on the right having the same interpretation as for Eq. (\[Eqn: HRFR\_Discrete\]). This full-range simulation merely states that the solution (\[Eqn: CaseSolution\_HR\]) of Eq. (\[Eqn: NeutronTransport\]) holds for all $\mu\in[-1,1]$, $x\geq0$, although it was obtained, unlike in the regular full-range case, from the given radiation $\psi(\mu)$ incident on the boundary at $x=0$ over only half the interval, $\mu\in[0,1]$. 
To obtain the simulated full-range coefficients $\{ b_{i}\}$ and $\{ b_{i}^{-}\}$ of the half-range problem, we observe that there are effectively only half the number of coefficients as compared to a normal full-range problem, because $\nu$ now ranges over only half the full interval. This allows us to generate two sets of equations from (\[Eqn: HRFR\]) by integrating with respect to $\mu\in[-1,1]$ with $\nu$ in the half intervals $[-1,0]$ and $[0,1]$, to obtain the two sets of coefficients $b^{-}$ and $b$ respectively. Accordingly we get from Eq. (\[Eqn: HRFR\_Discrete\]) with $\textrm{ }j=0,1,\cdots,N$ the sets of equations $$\begin{array}{c} {\displaystyle {\displaystyle (\psi,\phi_{j-})_{\mu}^{(+)}=-\sum_{i=0}^{N}b_{i}^{-}(\phi_{i+},\phi_{j-})_{\mu}^{(-)}}}\\ b_{j}={\displaystyle \left((\psi,\phi_{j+})_{\mu}^{(+)}+\sum_{i=0}^{N}b_{i}^{-}(\phi_{i+},\phi_{j+})_{\mu}^{(-)}\right)}\end{array}\label{Eqn: FRBC1}$$ where $(\phi_{j\pm})_{j=1}^{N}$ represents $(\phi_{\varepsilon}(\mu,\pm\nu_{j}))_{j=1}^{N}$, $\phi_{0\pm}=\phi(\mu,\pm\nu_{0})$, the $(+)$ and $(-)$ superscripts denote integrations with respect to $\mu\in[0,1]$ and $\mu\in[-1,0]$ respectively, and $(f,g)_{\mu}$ denotes the usual inner product in $[-1,1]$ with respect to the full-range weight $\mu$. While the first set of $N+1$ equations gives $b_{i}^{-}$, the second set produces the required $b_{j}$ from these "negative" coefficients. By equating these calculated $b_{i}$ with the exact half-range expressions for $a(\nu)$ with respect to $W(\mu)$ as outlined in Appendix A4, it is possible to find numerical values of $z_{0}$ and $X(-\nu)$. Thus from the second of Eq. (\[Eqn: Constant\_Coeff\]), $\{ X(-\nu_{i})\}_{i=1}^{N}$ is obtained with $b_{i\textrm{B}}\textrm{ }=a_{i\textrm{B}}$, $i=1,\cdots,N$, which is then substituted in the second of Eq. 
(\[Eqn: Milne\_Coeff\]) with $X(-\nu_{0})$ obtained from $a_{\textrm{A}}(\nu_{0})$ according to Appendix A4, to compare the respective $a_{i\textrm{A}}$ with the calculated $b_{i\textrm{A}}$ from (\[Eqn: FRBC1\]). Finally the full-range coefficients of Problem A can be used to obtain the $X(-\nu)$ values from the second of Eqs. (\[Eqn: Milne\_Coeff\]) and compared with the exact tabulated values, as in Table \[Table: X-function\]. The tabulated values of $cz_{0}$ from Eq. (\[Eqn: extrapolated\]) show a consistent deviation from our calculations of Problem A according to $a_{\textrm{A}}(\nu_{0})=-\exp(-2z_{0}/\nu_{0})$. Since the $X(-\nu)$ values of Problem A in Table \[Table: X-function\] also need the same $b_{0\textrm{A}}$ as input that was used in obtaining $z_{0}$, it is reasonable to conclude that the "exact" numerical integration of $z_{0}$ is inaccurate to the extent displayed in Table \[Table: extrapolated\].

| $c$ | $N=2$   | $N=6$   | $N=10$  | Exact  |
|-----|---------|---------|---------|--------|
| 0.2 | 0.78478 | 0.78478 | 0.78478 | 0.7851 |
| 0.4 | 0.72996 | 0.72996 | 0.72996 | 0.7305 |
| 0.6 | 0.71535 | 0.71536 | 0.71536 | 0.7155 |
| 0.8 | 0.71124 | 0.71124 | 0.71124 | 0.7113 |
| 0.9 | 0.71060 | 0.71060 | 0.71061 | 0.7106 |

: \[Table: extrapolated\] Extrapolated end-point $z_{0}$.

| $N$ | $\nu_{i}$ | Problem A | Problem B | Exact    |
|-----|-----------|-----------|-----------|----------|
| 2   | 0.2133    | 0.8873091 | 0.8873091 | 0.887308 |
|     | 0.7887    | 0.5826001 | 0.5826001 | 0.582500 |
| 6   | 0.0338    | 1.3370163 | 1.3370163 | 1.337015 |
|     | 0.1694    | 1.0999831 | 1.0999831 | 1.099983 |
|     | 0.3807    | 0.8792321 | 0.8792321 | 0.879232 |
|     | 0.6193    | 0.7215240 | 0.7215240 | 0.721524 |
|     | 0.8306    | 0.6239109 | 0.6239109 | 0.623911 |
|     | 0.9662    | 0.5743556 | 0.5743556 | 0.574355 |
| 10  | 0.0130    | 1.5971784 | 1.5971784 | 1.597163 |
|     | 0.0674    | 1.4245314 | 1.4245314 | 1.424532 |
|     | 0.1603    | 1.2289940 | 1.2289940 | 1.228995 |
|     | 0.2833    | 1.0513750 | 1.0513750 | 1.051376 |
|     | 0.4255    | 0.9058140 | 0.9058410 | 0.905842 |
|     | 0.5744    | 0.7934295 | 0.7934295 | 0.793430 |
|     | 0.7167    | 0.7102823 | 0.7102823 | 0.710283 |
|     | 0.8397    | 0.6516836 | 0.6516836 | 0.651683 |
|     | 0.9325    | 0.6136514 | 0.6136514 | 0.613653 |
|     | 0.9870    | 0.5933988 | 0.5933988 | 0.593399 |

: \[Table: X-function\] $X(-\nu)$ by the full-range method.

From these numerical experiments and Fig. \[Fig: Case\] we may conclude that the continuous spectrum $[-1,1]$ of the position operator $\mu$ acts as the $\mathcal{D}_{+}$ points in generating the multifunctional Case singular eigenfunction $\phi(\mu,\nu)$. Its rational approximation $\phi_{\varepsilon}(\mu,\nu)$, in the context of the simple simulated full-range computations of the complex half-range exact theory of Appendix A4, clearly demonstrates the utility of graphical convergence of sequences of functions to multifunctions. The totality of the multifunctions $\phi(\mu,\nu)$ for all $\nu$ in Fig. \[Fig: Case\](c) and (d) endows the problem with the character of maximal ill-posedness that is characteristic of chaos. This chaotic signature of the transport equation is however latent, as the experimental output $\Phi(x,\mu)$ is well-behaved and regular. This important example shows how nature can use hidden and complex chaotic substates to generate order through a process of superposition. **6. Does Nature support complexity?** The question of this section is basic in the light of the theory of chaos presented above, as it may be reformulated as the inquiry into what makes nature support chaoticity in the form of increasing non-injectivity of an input-output system. It is the purpose of this section to exploit the connection between spectral theory and the dynamics of chaos that has been presented in the previous section. Since linear operators on finite-dimensional spaces do not possess continuous or residual spectra, spectral theory on infinite-dimensional spaces essentially involves the limiting behaviour, as the dimension becomes infinite, of the familiar matrix eigenvalue-eigenvector problem. 
As always this means extensions, dense embeddings and completions of the finite dimensional problem that show up as generalized eigenvalues and eigenvectors. In its usual form, the goal of nonlinear spectral theory consists [@Appel2000] in the study of $T_{\lambda}^{-1}$ for nonlinear operators $T_{\lambda}$ that satisfy more general continuity conditions, like differentiability and Lipschitz continuity, than simple boundedness that is enough for linear operators. The following generalization of the concept of the spectrum of a linear operator to the nonlinear case is suggestive. For a nonlinear map, $\lambda$ need not appear only in a multiplying role, so that an eigenvalue equation can be written more generally as a fixed-point equation $$f(\lambda;x)=x$$ with a fixed point corresponding to the eigenfunction of a linear operator and an “eigenvalue” being the value of $\lambda$ for which this fixed point appears. The correspondence of the residual and continuous parts of the spectrum are, however, less trivial than for the point spectrum. This is seen from the following two examples, [@Roman1975]. Let $Ae_{k}=\lambda_{k}e_{k},\textrm{ }k=1,2,\cdots$ be an eigenvalue equation with $e_{j}$ being the $j^{\textrm{th}}$ unit vector. Then $(A-\lambda)e_{k}:=(\lambda_{k}-\lambda)e_{k}=0$ iff $\lambda=\lambda_{k}$ so that $\{\lambda_{k}\}_{k=1}^{\infty}\in P\sigma(A)$ are the only eigenvalues of $A$. Consider now $(\lambda_{k})_{k=1}^{\infty}$ to be a sequence of real numbers that tends to a finite $\lambda^{*}$; for example let $A$ be a diagonal matrix having $1/k$ as its diagonal entries. Then $\lambda^{*}$ belongs to the continuous spectrum of $A$ because $(A-\lambda^{*})e_{k}=(\lambda_{k}-\lambda^{*})e_{k}$ with $\lambda_{k}\rightarrow\lambda^{*}$ implies that $(A-\lambda^{*})^{-1}$ is an unbounded linear operator and $\lambda^{*}$ a generalized eigenvalue of $A$. 
In the second example, $Ae_{k}=e_{k+1}/(k+1)$, it is not difficult to verify that: (a) the point spectrum of $A$ is empty; (b) the range of $A$ is not dense, because it does not contain $e_{1}$; and (c) $A^{-1}$ is unbounded, because $Ae_{k}\rightarrow0$. Thus the generalized eigenvalue $\lambda^{*}=0$ in this case belongs to the residual spectrum of $A$. In either case, $\lim_{j\rightarrow\infty}e_{j}$ is the corresponding generalized eigenvector that enlarges the trivial null space $\mathcal{N}(\mathscr L_{\lambda^{*}})$ of the generalized eigenvalue $\lambda^{*}$. In fact, in these two examples and in the Dirac delta example of Sec. 5 of continuous and residual spectra, the generalized eigenfunctions arise as the limits of a sequence of functions whose images under the respective $\mathscr L_{\lambda}$ converge to $0$; recall the definition of footnote \[Foot: gen\_eigen\]. This observation generalizes to the dense extension $\textrm{Multi}_{|}(X,Y)$ of $\textrm{Map}(X,Y)$ as follows. If $x\in\mathcal{D}_{+}$ is not a fixed point of $f(\lambda;x)=x$, but there is some $n\in\mathbb{N}$ such that $f^{n}(\lambda;x)=x$, then the limit $n\rightarrow\infty$ generates a multifunction at $x$, as was the case with the delta function in the previous section and the various other examples that we have seen so far in the earlier sections. One of the main goals of investigations on the spectrum of nonlinear operators is to find a set in the complex plane that has the usual desirable properties of the spectrum of a linear operator, @Appel2000. In this case, the focus has been to find a suitable class of operators $\mathcal{C}(X)$ with $T\in\mathcal{C}(X)$, such that the resolvent set is expressed as$$\rho(T)=\{\lambda\in\mathbb{C}\!:(T_{\lambda}\textrm{ is }1:1)\textrm{, }(\textrm{Cl}(\mathcal{R}(T_{\lambda}))=X)\textrm{ and }(T_{\lambda}^{-1}\in\mathcal{C}(X)\textrm{ on }\mathcal{R}(T_{\lambda}))\}$$ with the spectrum $\sigma(T)$ being defined as the complement of this set. 
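For finite truncations the two sequence-space examples above can be played out numerically. In the NumPy sketch below (the truncation size $n=500$ and the sampled indices are arbitrary choices), the diagonal operator exhibits $\Vert(A-\lambda^{*})e_{k}\Vert\rightarrow0$ at the generalized eigenvalue $\lambda^{*}=0$, while for the weighted shift the first row vanishes identically, so $e_{1}$ is not in the range, and $\Vert Ae_{k}\Vert\rightarrow0$ makes the inverse unbounded on the range:

```python
import numpy as np

n = 500   # truncation size (arbitrary)

# Example 1: A = diag(1, 1/2, 1/3, ...).  At lambda* = 0 the unit vectors
# satisfy ||(A - 0) e_k|| = 1/k -> 0 while ||e_k|| = 1, so the inverse is
# unbounded and lambda* = 0 is a generalized eigenvalue.
k = np.arange(1, n + 1)
A = np.diag(1.0 / k)
res_diag = [np.linalg.norm(A @ np.eye(n)[:, j]) for j in (0, 9, 99, 499)]

# Example 2: the weighted shift A e_k = e_{k+1} / (k+1).  The first row of
# the truncation is identically zero, so e_1 is not in the range; moreover
# ||A e_k|| = 1/(k+1) -> 0.  (The finite matrix is nilpotent, an artifact
# of truncation: on the sequence space itself the operator is injective.)
B = np.zeros((n, n))
for j in range(n - 1):
    B[j + 1, j] = 1.0 / (j + 2)
res_shift = [np.linalg.norm(B[:, j]) for j in (0, 9, 99, 498)]
```

In both cases the residual norms decay like $1/k$, the finite-dimensional shadow of the generalized eigenvector $\lim_{j\rightarrow\infty}e_{j}$.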
Among the classes $\mathcal{C}(X)$ that have been considered, besides spaces of continuous functions $C(X)$, are linear boundedness $B(X)$, Fréchet differentiability $C^{1}(X)$, Lipschitz continuity $\textrm{Lip}(X)$, and Granas quasiboundedness $Q(X)$, where $\textrm{Lip}(X)$ specifically takes into account the nonlinearity of $T$ to define $$\Vert T\Vert_{\textrm{Lip}}=\sup_{x\neq y}\frac{\Vert T(x)-T(y)\Vert}{\Vert x-y\Vert},\qquad|T|_{\textrm{lip}}=\inf_{x\neq y}\frac{\Vert T(x)-T(y)\Vert}{\Vert x-y\Vert}\label{Eqn: LipNorm}$$ that are plainly generalizations of the corresponding norms of linear operators. Plots of $f_{\lambda}^{-}(y)=\{ x\in\mathcal{D}(f-\lambda)\!:(f-\lambda)x=y\}$ for the functions $f\!:\mathbb{R}\rightarrow\mathbb{R}$$$\begin{array}{rcl} f_{\lambda\textrm{a}}(x) & = & \left\{ \begin{array}{cll} -1-\lambda x, & & x<-1\\ (1-\lambda)x, & & -1\leq x\leq1\\ 1-\lambda x, & & 1<x,\end{array}\right.\\ \\f_{\lambda\textrm{b}}(x) & = & \left\{ \begin{array}{cll} -\lambda x, & & x<1\\ (1-\lambda)x-1, & & 1\leq x\leq2\\ 1-\lambda x, & & 2<x\end{array}\right.\\ \\f_{\lambda\textrm{c}}(x) & = & \left\{ \begin{array}{cll} -\lambda x & & x<1\\ \sqrt{x-1}-\lambda x & & 1\leq x,\end{array}\right.\\ \\f_{\lambda\textrm{d}}(x) & = & \left\{ \begin{array}{cll} (x-1)^{2}+1-\lambda x & & 1\leq x\\ (1-\lambda)x & & \textrm{otherwise}\end{array}\right.\\ \\f_{\lambda\textrm{e}}(x) & = & \tan^{-1}(x)-\lambda x,\\ \\f_{\lambda\textrm{f}}(x) & = & \left\{ \begin{array}{cll} 1-2\sqrt{-x}-\lambda x, & & x<-1\\ (1-\lambda)x, & & -1\leq x\leq1\\ 2\sqrt{x}-1-\lambda x, & & 1<x\end{array}\right.\end{array}$$ taken from @Appel2000 are shown in Fig. \[Fig: Appel\]. It is easy to verify that the Lipschitz and linear upper and lower bounds of these maps are as in Table \[Table: Appel\_bnds\]. The point spectrum defined by $$P\sigma(f)=\{\lambda\in\mathbb{C}\!:(f-\lambda)x=0\textrm{ for some }x\neq0\}$$ is the simplest to calculate. 
Because of the special role played by the zero element $0$ in generating the point spectrum in the linear case, the bounds $m\Vert x\Vert\leq\Vert\mathscr{L}x\Vert\leq M\Vert x\Vert$ together with $\mathscr{L}x=\lambda x$ imply $\textrm{Cl}(P\sigma(\mathscr{L}))=[\Vert\mathscr{L}\Vert_{\textrm{b}},\Vert\mathscr{L}\Vert_{\textrm{B}}]$ — where the subscripts denote the lower and upper bounds in Eq. (\[Eqn: LipNorm\]) and which is sometimes taken to be a descriptor of the point spectrum of a nonlinear operator — as can be seen in Table \[Table: Appel\_spectra\] and verified from Fig. \[Fig: Appel\]. The remainder of the spectrum, as the complement of the resolvent set, is more difficult to find. Here the convenient characterization of the resolvent of a continuous linear operator as the set of all sufficiently large $\lambda$ satisfying $|\lambda|>M$ is of little significance since, unlike for a linear operator, the non-existence of an inverse is not just due to the set $\{ f^{-1}(0)\}$, which happens to be the only way a linear map can fail to be injective. Thus the map defined piecewise as $\alpha+2(1-\alpha)x$ for $0\leq x<1/2$ and $2(1-x)$ for $1/2\leq x\leq1$, with $0<\alpha<1$, is not invertible on its range although $f^{-}(0)=\{1\}$. Comparing Fig. \[Fig: Appel\] and Table \[Table: Appel\_bnds\], it is seen that in cases (b), (c) and (d) the intervals $[|f|_{\textrm{b}},\Vert f\Vert_{\textrm{B}}]$ are subsets of the $\lambda$-values for which the respective maps are not injective; this is to be compared with (a), (e) and (f), where the two sets are the same. Thus the linear bounds are not good indicators of the uniqueness properties of solutions of nonlinear equations, for which the Lipschitzian bounds are seen to be more appropriate. 
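This comparison is easy to reproduce numerically. The Python sketch below (the sampling grid and tolerances are illustrative choices) estimates the linear bounds and the Lipschitz bounds of Eq. (\[Eqn: LipNorm\]) for $f_{\textrm{a}}$ at $\lambda=0$, recovering the row $(|f|_{\textrm{b}},\Vert f\Vert_{\textrm{B}},|f|_{\textrm{lip}},\Vert f\Vert_{\textrm{Lip}})=(0,1,0,1)$ of Table \[Table: Appel\_bnds\] up to the finite sampling window:

```python
import numpy as np

def f_a(x):
    # the lambda = 0 member of the family f_{lambda a}: identity on [-1, 1],
    # saturating at -1 and +1 outside that interval
    return np.clip(x, -1.0, 1.0)

x = np.linspace(-50.0, 50.0, 1001)
x = x[np.abs(x) > 1e-9]                 # drop x = 0 for the linear quotients

# linear bounds: inf and sup of |f(x)| / |x|
lin_q = np.abs(f_a(x)) / np.abs(x)
lin_lower, lin_upper = lin_q.min(), lin_q.max()

# Lipschitz bounds of Eq. (LipNorm): quotients over all sampled pairs
dx = x[:, None] - x[None, :]
df = f_a(x)[:, None] - f_a(x)[None, :]
mask = np.abs(dx) > 1e-9
lip_q = np.abs(df[mask]) / np.abs(dx[mask])
lip_lower, lip_upper = lip_q.min(), lip_q.max()
```

On the finite window the linear lower bound is only approximately zero ($1/50$ here), tending to $0$ as the window grows, while the Lipschitz lower bound is exactly $0$ already, realized by any pair of points in a saturated tail.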
| Function         | $|f|_{\textrm{b}}$ | $\Vert f\Vert_{\textrm{B}}$ | $|f|_{\textrm{lip}}$ | $\Vert f\Vert_{\textrm{Lip}}$ |
|------------------|--------------------|-----------------------------|----------------------|-------------------------------|
| $f_{\textrm{a}}$ | $0$                | $1$                         | $0$                  | $1$                           |
| $f_{\textrm{b}}$ | $0$                | $1/2$                       | $0$                  | $1$                           |
| $f_{\textrm{c}}$ | $0$                | $1/2$                       | $0$                  | $\infty$                      |
| $f_{\textrm{d}}$ | $2(\sqrt{2}-1)$    | $\infty$                    | $0$                  | $2$                           |
| $f_{\textrm{e}}$ | $0$                | $1$                         | $0$                  | $1$                           |
| $f_{\textrm{f}}$ | $0$                | $1$                         | $0$                  | $1$                           |

: \[Table: Appel\_bnds\] Linear and Lipschitz bounds of the functions of Fig. \[Fig: Appel\].

| Function         | $\sigma_{\textrm{Lip}}(f)$ | $P\sigma(f)$        |
|------------------|----------------------------|---------------------|
| $f_{\textrm{a}}$ | $[0,1]$                    | $(0,1]$             |
| $f_{\textrm{b}}$ | $[0,1]$                    | $[0,1/2]$           |
| $f_{\textrm{c}}$ | $[0,\infty)$               | $[0,1/2]$           |
| $f_{\textrm{d}}$ | $[0,2]$                    | $[2(\sqrt{2}-1),1]$ |
| $f_{\textrm{e}}$ | $[0,1]$                    | $(0,1)$             |
| $f_{\textrm{f}}$ | $[0,1]$                    | $(0,1)$             |

: \[Table: Appel\_spectra\] Lipschitz and point spectra of the functions of Fig. \[Fig: Appel\].

In view of the above, we may draw the following conclusions. If we choose to work in the space of multifunctions $\textrm{Multi}(X,\mathcal{T})$, with $\mathcal{T}$ the topology of pointwise biconvergence, when all functional relations are (multi) invertible on their ranges, we may make the following definition for the net of functions $f(\lambda;x)$ satisfying $f(\lambda;x)=x$. **Definition 6.1.** *Let* $f(\lambda;\cdot)\in\textrm{Multi}(X,\mathcal{T})$ *be a function. The resolvent set of $f$ is given by* $$\rho(f)=\{\lambda\!:(f(\lambda;\cdot)^{-1}\in\textrm{Map}(X,\mathcal{T}))\wedge(\textrm{Cl}(\mathcal{R}(f(\lambda;\cdot)))=X)\},$$ *and any $\lambda$ not in $\rho$ is in the spectrum of $f$.$\qquad\square$* Thus apart from multifunctions, $\lambda\in\sigma(f)$ also generates functions on the boundary of functional and non-functional relations in $\textrm{Multi}(X,\mathcal{T})$. 
While it is possible to classify the spectrum into point, continuous and residual subsets, as in the linear case, it is more meaningful for nonlinear operators to consider $\lambda$ as being either in the *boundary spectrum* $\textrm{Bdy}(\sigma(f))$ or in the *interior spectrum* $\textrm{Int}(\sigma(f))$, depending on whether or not the multifunction $f(\lambda;\cdot)^{-}$ arises as the graphical limit of a net of functions in either $\rho(f)$ or $R\sigma(f)$. This is suggested by the spectrum arising from the second row of Table \[Table: spectrum\] (injective $\mathcal{L}_{\lambda}$ and discontinuous $\mathcal{L}_{\lambda}^{-1}$), which lies sandwiched in the $\lambda$-plane between the two components arising from the first and third rows; see @Naylor1971 Sec. 6.6, for example. According to this simple scheme, the spectral set is a closed set with its boundary and interior belonging to $\textrm{Bdy}(\sigma(f))$ and $\textrm{Int}(\sigma(f))$ respectively. Table \[Table: Appel\_multi\] shows this division for the examples in Fig. \[Fig: Appel\]. Because $0$ is no more significant than any other point in the domain of a nonlinear map in inducing non-injectivity, the division of the spectrum into the traditional sets would be as shown in Table \[Table: Appel\_multi\]; compare also with the conventional linear point spectrum of Table \[Table: Appel\_spectra\]. In this nonlinear classification, the point spectrum consists of any $\lambda$ for which the inverse $f(\lambda;\cdot)^{-}$ is set-valued, irrespective of whether this is produced at $0$ or not, while the continuous and residual spectra together comprise the boundary spectrum. Thus a $\lambda$ can lie simultaneously in the point spectrum and in the continuous or residual spectra: these sets need not be disjoint, and the boundary spectrum, besides including the continuous and residual spectra, may also contain parts of the point spectrum.
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Function & $\textrm{Int}(\sigma(f))$ & $\textrm{Bdy}(\sigma(f))$ & $P\sigma(f)$ & $C\sigma(f)$ & $R\sigma(f)$\\
\hline
$f_{\textrm{a}}$ & $(0,1)$ & $\{0,1\}$ & $[0,1]$ & $\{1\}$ & $\{0\}$\\
$f_{\textrm{b}}$ & $(0,1)$ & $\{0,1\}$ & $[0,1]$ & $\{1\}$ & $\{0\}$\\
$f_{\textrm{c}}$ & $(0,\infty)$ & $\{0\}$ & $[0,\infty)$ & $\{0\}$ & $\emptyset$\\
$f_{\textrm{d}}$ & $(0,2)$ & $\{0,2\}$ & $(0,2)$ & $\{0,2\}$ & $\emptyset$\\
$f_{\textrm{e}}$ & $(0,1)$ & $\{0,1\}$ & $(0,1)$ & $\{1\}$ & $\{0\}$\\
$f_{\textrm{f}}$ & $(0,1)$ & $\{0,1\}$ & $(0,1)$ & $\{0,1\}$ & $\emptyset$\\
\hline
\end{tabular}

: \[Table: Appel\_multi\][Nonlinear spectra of functions of Fig. \[Fig: Appel\]. Compare the present point spectra with the usual linear spectra of Table \[Table: Appel\_spectra\].]{}

**Example 6.1.** To see how these concepts apply to linear mappings, consider the equation $(D-\lambda)y(x)=r(x)$, where $D=d/dx$ is the differential operator on $L^{2}[0,\infty)$, and let $\lambda$ be real. For $\lambda\neq0$, the unique solution of this equation in $L^{2}[0,\infty)$ is $$\begin{aligned} y(x)= & \left\{ \begin{array}{ll} {\displaystyle e^{\lambda x}\left(y(0)+\int_{0}^{x}e^{-\lambda x^{\prime}}r(x^{\prime})dx^{\prime}\right)}, & \lambda<0\\ {\displaystyle e^{\lambda x}\left(y(0)-\int_{x}^{\infty}e^{-\lambda x^{\prime}}r(x^{\prime})dx^{\prime}\right),} & \lambda>0\end{array}\right.\end{aligned}$$ showing that for $\lambda>0$ the inverse is functional, so that $\lambda\in(0,\infty)$ belongs to the resolvent set of $D$. However, when $\lambda<0$, apart from the $y=0$ solution (since we are dealing with a linear problem, only $r=0$ need be considered), $e^{\lambda x}$ is also in $L^{2}[0,\infty)$, so that all such $\lambda$ are in the point spectrum of $D$. For $\lambda=0$ and $r\neq0$, the two solutions are not necessarily equal unless $\int_{0}^{\infty}r(x)\, dx=0$, so that the range $\mathcal{R}(D)$ is a proper subspace of $L^{2}[0,\infty)$.
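The $\lambda>0$ branch of this formula admits a quick numerical check. In the hedged sketch below, the choices $\lambda=1$ and $r(x)=e^{-x}$ are assumptions of the example, not taken from the text; for them the decaying solution collapses in closed form to $y(x)=-e^{-x}/2$, which we verify against $(D-\lambda)y=r$ by central differences.

```python
import math

lam = 1.0                    # sample lambda > 0 (illustrative assumption)
r = lambda x: math.exp(-x)   # sample forcing term in L^2[0, infinity)

def y(x):
    # lambda > 0 branch: y(x) = -e^{lam x} * integral_x^inf e^{-lam x'} r(x') dx',
    # which for r(x') = e^{-x'} evaluates in closed form to -e^{-x}/(lam + 1)
    return -math.exp(-x) / (lam + 1.0)

# verify (D - lam) y = r, i.e. y'(x) - lam*y(x) = r(x), at a few sample points
for x0 in [0.5, 1.0, 2.0]:
    h = 1e-5
    dy = (y(x0 + h) - y(x0 - h)) / (2 * h)   # central difference for y'(x0)
    assert abs((dy - lam * y(x0)) - r(x0)) < 1e-8
```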
To complete the problem, it is possible to show [@Naylor1971] that $0\in C\sigma(D)$, see Ex. 2.2; hence the continuous spectrum forms at the boundary between the functional solutions for the resolvent $\lambda$ and the multifunctional solutions for the point spectrum. With a slight variation of the problem to $y(0)=0$, all $\lambda<0$ are in the resolvent set, while for $\lambda>0$ the inverse is bounded but must satisfy $y(0)=\int_{0}^{\infty}e^{-\lambda x}r(x)dx=0$, so that $\textrm{Cl}(\mathcal{R}(D-\lambda))\neq L^{2}[0,\infty)$. Hence all $\lambda>0$ belong to the residual spectrum. The decomposition of the complex $\lambda$-plane for these and some other linear spectral problems taken from @Naylor1971 is shown in Fig. \[Fig: spectrum\]. In all cases, the spectrum due to the second row of Table \[Table: spectrum\] acts as a boundary between the spectra arising from the first and third rows, which justifies our division of the spectrum of a nonlinear operator into interior and boundary components. Compare Example 2.2.$\qquad\blacksquare$ From the basic representation of the resolvent operator $(\mathbf{1}-f)^{-1}$ as $$\mathbf{1}+f+f^{2}+\cdots+f^{i}+\cdots$$ in $\textrm{Multi}(X)$, if the iterates of $f$ converge to a multifunction for some $\lambda$, then that $\lambda$ must be in the spectrum of $f$, which means that the control parameter of a chaotic dynamical system is in its spectrum. Of course, the series can sum to a multifunction even otherwise: take $f_{\lambda}(x)$ to be identically $x$ with $\lambda=1$, for example, to get $1\in P\sigma(f)$. A comparison of Tables \[Table: spectrum\] and \[Table: Appel\_spectra\] reveals that in case (d), for example, $0$ and $2$ belong to the Lipschitz spectrum because, although $f_{\textrm{d}}^{-1}$ is not Lipschitz continuous, $\Vert f\Vert_{\textrm{Lip}}=2$.
It should also be noted that the boundary between the functional resolvent set and the multifunctional spectral set is formed by the graphical convergence of a net of resolvent functions, while the multifunctions in the interior of the spectral set evolve graphically independently of the functions in the resolvent. The chaotic states forming the boundary of the functional and multifunctional subsets of $\textrm{Multi}(X)$ mark the transition from the less efficient functional state to the more efficient multifunctional one. These arguments also suggest the following. The countably many outputs arising from the non-injectivity of $f(\lambda;\cdot)$ corresponding to a given input can be interpreted to define *complexity*, because *in a nonlinear system each of these possibilities constitutes an experimental result in itself that may not be combined in any definite predetermined manner.* This is in sharp contrast to linear systems, where a linear combination, governed by the initial conditions, always generates a unique end result; recall also the combination offered by the singular generalized eigenfunctions of neutron transport theory. This multiplicity of possibilities having no definite combinatorial property is the basis of the diversity of nature, and is possibly responsible for Feigenbaum's "historical prejudice" [@Feigenbaum1992]; see Prelude, 2. Thus *order*, represented by the functional resolvent, passes over to the *complexity* of the countably multifunctional interior spectrum via the uncountably multifunctional boundary that is a prerequisite for *chaos*. We may now strengthen, in terms of the examples of Figs. \[Fig: Appel\] and \[Fig: spectrum\], the hypothesis offered at the end of the previous section: nature uses chaoticity as an intermediate step to the attainment of states that would otherwise be inaccessible to it.
Well-posedness of a system is an extremely inefficient way of expressing a multitude of possibilities, as it requires a different input for every possible output. Nature chooses to express its myriad manifestations through the multifunctional route, leading either to averaging, as in the delta function case, or to a countable set of well-defined states, as in the examples of Fig. \[Fig: Appel\] corresponding to the interior spectrum. Of course it is no distraction that the multifunctional states arise respectively from $f_{\lambda}$ and $f_{\lambda}^{-}$ in these examples, as $f$ is a function on $X$ that is under the influence of both $f$ and its inverse. The functional resolvent is, for all practical purposes, only a tool in this structure of nature. The equation $f(x)=y$ is typically an input-output system in which the inverse images at a functional value $y_{0}$ represent a set of input parameters leading to the same experimental output $y_{0}$; this is stability, characterized by a complete insensitivity of the output to changes in the input. On the other hand, a continuous multifunction at $x_{0}$ is a signal for hypersensitivity to the input, because the output, which is a definite experimental quantity, is a choice from the possibly infinite set $\{ f(x_{0})\}$ made by a choice function that represents the experiment at that particular point in time. Since there will always be finite differences in the experimental parameters when an experiment is repeated, the choice function (that is, the experimental output) will select a point from $\{ f(x_{0})\}$ that is representative of that experiment and need not bear any definite relation to the previous values; this is instability, and it signals sensitivity to initial conditions.
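The sensitivity just described can be illustrated with a standard chaotic map; the logistic map at parameter $4$ is an assumption of this sketch, chosen only because its sensitive dependence on initial conditions is well known, not because it appears in the text.

```python
def logistic(x):
    """The logistic map at parameter 4, a standard example of a chaotic map."""
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9   # two experimentally indistinguishable initial conditions
gaps = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gaps.append(abs(x - y))

# the initial difference of 1e-9 is amplified to order one within a few dozen steps
assert gaps[0] < 1e-8 and max(gaps[40:]) > 0.1
```

Repeating the "experiment" with an imperceptibly different input thus yields outputs bearing no definite relation to the previous ones, which is the instability described above.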
Such a state is of high entropy, as the number of available states $f_{\textrm{C}}(\{ f(x_{0})\})$ — where $f_{\textrm{C}}$ is the choice function — is larger than for a functional state represented by the singleton $\{ f(x_{0})\}.$ **Epilogue** @Gleick1987 **Appendix** This Appendix gives a brief overview of some aspects of topology that are necessary for a proper understanding of the concepts introduced in this work. **A1. Convergence in Topological Spaces: Sequence, Net and Filter.** In the theory of convergence in topological spaces, *countability* plays an important role. To understand the significance of this concept, some preliminaries are needed. The notion of a basis, or base, is a familiar one in analysis: a base is a subcollection of a set which may be used to construct, in a specified manner, any element of the set. This simplifies the statement of a problem, since the smaller number of elements of the base can be used to generate the larger class of all elements of the set. This philosophy finds application in topological spaces as follows. Among the three properties $(\textrm{N}1)-(\textrm{N}3)$ of the neighbourhood system $\mathcal{N}_{x}$ of Tutorial 4, (N1) and (N2) are basic in the sense that the resulting subcollection of $\mathcal{N}_{x}$ can be used to generate the full system by applying $(\textrm{N}3)$; this *basic neighbourhood system*, or *neighbourhood (local) base* $\mathcal{B}_{x}$ *at* $x$, is characterized by (NB1) $x$ belongs to each member $B$ of $\mathcal{B}_{x}$. (NB2) The intersection of any two members of $\mathcal{B}_{x}$ contains another member of $\mathcal{B}_{x}$: $B_{1},B_{2}\in\mathcal{B}_{x}\Rightarrow(\exists B\in\mathcal{B}_{x}\!:B\subseteq B_{1}\bigcap B_{2})$. Formally, compare Eq.
(\[Eqn: nbd-topology\]), **Definition A1.1.** *A neighbourhood (local) base* $\mathcal{B}_{x}$ *at $x$ in a topological space $(X,\mathcal{U})$ is a subcollection of the neighbourhood system $\mathcal{N}_{x}$ having the property that each $N\in\mathcal{N}_{x}$ contains some member of* $\mathcal{B}_{x}$*.* *Thus* $$\mathcal{B}_{x}\overset{\textrm{def}}=\{ B\in\mathcal{N}_{x}\!:x\in B\subseteq N\textrm{ for each }N\in\mathcal{N}_{x}\}\label{Eqn: TBx}$$ *determines the full neighbourhood system* $$\mathcal{N}_{x}=\{ N\subseteq X\!:x\in B\subseteq N\textrm{ for some }B\textrm{ }\in\,\mathcal{B}_{x}\}\label{Eqn: TBx_nbd}$$ *reciprocally as all supersets of the basic elements.$\qquad\square$* The entire neighbourhood system $\mathcal{N}_{x}$, which is recovered from the base by forming all supersets of the basic neighbourhoods, is trivially a local base at $x$; non-trivial examples are given below. The second example of a base, consisting as usual of a subcollection of a given collection, is the topological base $_{\textrm{T}}\mathcal{B}$ that allows the specification of the topology on a set $X$ in terms of a smaller collection of open sets. **Definition A1.2.** *A base* $_{\textrm{T}}\mathcal{B}$ *in a topological space $(X,\mathcal{U})$ is a subcollection of the topology $\mathcal{U}$ having the property that each $U\in\mathcal{U}$ contains some member of* $_{\textrm{T}}\mathcal{B}$*.* *Thus* $$_{\textrm{T}}\mathcal{B}\overset{\textrm{def}}=\{ B\in\mathcal{U}\!:B\subseteq U\textrm{ for each }U\in\mathcal{U}\}\label{Eqn: TB}$$ *determines reciprocally the topology $\mathcal{U}$ as* $$\mathcal{U}=\left\{ U\subseteq X\!:U=\bigcup_{B\in\mathcal{B}^{\prime}}B\textrm{ for some }\mathcal{B}^{\prime}\subseteq\,_{\textrm{T}}\mathcal{B}\right\} \qquad\square\label{Eqn: TB_topo}$$ This means that the topology on $X$ can be reconstructed from the base by taking all possible unions of members of the base, and a collection of subsets of a set $X$ is a topological base iff Eq.
(\[Eqn: TB\_topo\]), of arbitrary unions of elements of $_{\textrm{T}}\mathcal{B}$, generates a topology on $X$. This topology, which is the coarsest (that is, the smallest) topology containing $_{\textrm{T}}\mathcal{B}$, is obviously closed under finite intersections. Since the open set $\textrm{Int}(N)$ is a neighbourhood of $x$ whenever $N$ is, Eq. (\[Eqn: TBx\_nbd\]) and the definition Eq. (\[Eqn: Def: nbd system\]) of $\mathcal{N}_{x}$ imply that *the open neighbourhood system of any point in a topological space is an example of a neighbourhood base at that point,* an observation that has often led, together with Eq. (\[Eqn: TB\]), to the use of the term “neighbourhood” as a synonym for “non-empty open set”. The distinction between the two is significant, however, as neighbourhoods need not be open sets; thus while not necessary, it is clearly sufficient for the local basic sets $B$ to be open in Eqs. (\[Eqn: TBx\]) and (\[Eqn: TBx\_nbd\]). If Eq. (\[Eqn: TBx\_nbd\]) holds for every $x\in N$, then the resulting $\mathcal{N}_{x}$ reduces to the topology induced by the open basic neighbourhood system $\mathcal{B}_{x}$ as given by Eq. (\[Eqn: nbd-topology\]). In order to check whether a collection of subsets $_{\textrm{T}}\mathcal{B}$ of $X$ qualifies as a basis, it is not necessary to verify properties $(\textrm{T}1)-(\textrm{T}3)$ of Tutorial 4 for the class (\[Eqn: TB\_topo\]) generated by it, because of the properties (TB1) and (TB2) below, whose strong affinity to (NB1) and (NB2) is formalized in Theorem A1.1. **Theorem A1.1.** *A collection* $_{\textrm{T}}\mathcal{B}$ *of subsets of $X$ is a* *topological basis on* $X$ *iff* (TB1) *$X=\bigcup_{B\in\,_{\textrm{T}}\mathcal{B}}B$.
Thus each $x\in X$ must belong to some $B\in\,_{\textrm{T}}\mathcal{B}$, which implies the existence of a local base at each point $x\in X$.* (TB2) *The intersection of any two members $B_{1}$ and $B_{2}$ of* $_{\textrm{T}}\mathcal{B}$ *with $x\in B_{1}\bigcap B_{2}$ contains another member of* $_{\textrm{T}}\mathcal{B}$: $(B_{1},B_{2}\in\,_{\textrm{T}}\mathcal{B})\wedge(x\in B_{1}\bigcap B_{2})\Rightarrow(\exists B\in\,_{\textrm{T}}\mathcal{B}\!:x\in B\subseteq B_{1}\bigcap B_{2})$.$\qquad\square$ This theorem, together with Eq. (\[Eqn: TB\_topo\]), ensures that a given collection of subsets of a set $X$ satisfying (TB1) and (TB2) induces *some* topology on $X$; compare this with the result that *any* collection of subsets of a set $X$ is a *subbasis* for some topology on $X$. If $X$, however, already has a topology $\mathcal{U}$ imposed on it, then Eq. (\[Eqn: TB\]) must also be satisfied in order that the topology generated by $_{\textrm{T}}\mathcal{B}$ is indeed $\mathcal{U}$. The next theorem connects the two types of bases of Defs. A1.1 and A1.2 by asserting that, although a local base need not consist of open sets and a topological base makes no reference to any point of $X$, the subcollection of basic sets containing a given point is a local base at that point. **Theorem A1.2.** *A collection of open sets* $_{\textrm{T}}\mathcal{B}$ *is a base for a topological space $(X,\mathcal{U})$ iff for each $x\in X$, the subcollection* $$\mathcal{B}_{x}=\{ B\in\mathcal{U}\!:x\in B\in\!\,_{\textrm{T}}\mathcal{B}\}\label{Eqn: base_local base}$$ *of basic sets containing $x$ is a local base at* $x$.$\qquad\square$ **Proof.** *Necessity.* Let $_{\textrm{T}}\mathcal{B}$ be a base of *$(X,\mathcal{U})$* and let $N$ be a neighbourhood of $x$, so that $x\in U\subseteq N$ for some open set $U$ which is a union of basic open sets $B$. Hence $x\in B\subseteq N$ for some such $B$, which shows, from Eq.
(\[Eqn: TBx\]), that $B\in\mathcal{B}_{x}$ is a local basic set at $x$. *Sufficiency.* If $U$ is an open set of $X$ containing $x$, then the definition of a local base, Eq. (\[Eqn: TBx\]), requires $x\in B_{x}\subseteq U$ for some basic set $B_{x}$ in $\mathcal{B}_{x}$; hence $U=\bigcup_{x\in U}B_{x}$. By Eq. (\[Eqn: TB\_topo\]) therefore, $_{\textrm{T}}\mathcal{B}$ is a topological base for $X$.$\qquad\blacksquare$ Because the basic sets are open, (TB2) of Theorem A1.1 leads to the following physically appealing paraphrase of Thm. A1.2. **Corollary.** *A collection* $_{\textrm{T}}\mathcal{B}$ *of open sets of* $(X,\mathcal{U})$ *is a topological base that generates* $\mathcal{U}$ *iff for each open set $U$ of $X$ and each $x\in U$ there is an open set* $B\in\!\,_{\textrm{T}}\mathcal{B}$ *such that $x\in B\subseteq U$*; *that is, iff* $$x\in U\in\mathcal{U}\Longrightarrow(\exists B\in\,_{\textrm{T}}\mathcal{B}\!:x\in B\subseteq U).\qquad\square$$ **Example A1.1.** Some examples of local bases at $x$ in $\mathbb{R}$ are the intervals $(x-\varepsilon,x+\varepsilon)$ and $[x-\varepsilon,x+\varepsilon]$ for real $\varepsilon>0$, $(x-q,x+q)$ for rational $q>0$, and $(x-1/n,x+1/n)$ for $n\in\mathbb{Z}_{+}$, while for a metrizable space with the topology induced by a metric $d$, each of the following is a local base at $x\in X$: $B_{\varepsilon}(x;d):=\{ y\in X:d(x,y)<\varepsilon\}$ and $D_{\varepsilon}(x;d):=\{ y\in X:d(x,y)\leq\varepsilon\}$ for $\varepsilon>0$, $B_{q}(x;d)$ for $\mathbb{Q}\ni q>0$, and $B_{1/n}(x;d)$ for $n\in\mathbb{Z}_{+}$. In $\mathbb{R}^{2}$, two neighbourhood bases at any $x\in\mathbb{R}^{2}$ are the set of all disks centered at $x$ and the set of all squares centered at $x$ with sides parallel to the axes. Although these bases have no elements in common, they are nevertheless equivalent in the sense that they both generate the same (usual) topology in $\mathbb{R}^{2}$.
Of course, the entire neighbourhood system at any point of a topological space is itself a (less useful) local base at that point. By Theorem A1.2, $B_{\varepsilon}(x;d)$, $D_{\varepsilon}(x;d)$, $\varepsilon>0$, $B_{q}(x;d)$, $\mathbb{Q}\ni q>0$ and $B_{1/n}(x;d)$, $n\in\mathbb{Z}_{+}$, for all $x\in X$, are examples of bases in a metrizable space with topology induced by a metric $d$.$\qquad\square$ In terms of local bases and bases, it is now possible to formulate the notions of first and second countability as follows. **Definition A1.3.** *A topological space is* *first countable* *if each $x\in X$ has some countable neighbourhood base, and is* *second countable* *if it has a countable base.* $\qquad\square$ Every metrizable space $(X,d)$ is first countable, as both $\{ B(x,q)\}_{\mathbb{Q}\ni q>0}$ and $\{ B(x,1/n)\}_{n\in\mathbb{Z}_{+}}$ are countable neighbourhood bases at any $x\in(X,d)$; hence $\mathbb{R}^{n}$ is first countable. It should be clear that every second countable space is first countable but not conversely; a common example of an uncountable first countable space that is nevertheless second countable is provided by $\mathbb{R}^{n}$. Metrizable spaces need not be second countable: any uncountable set having the discrete topology is an example. **Example A1.2.** The following is an important example of a space that is not first countable, as it is needed for our pointwise biconvergence of Section 3. Let $\textrm{Map}(X,Y)$ be the set of all functions between the uncountable spaces $(X,\mathcal{U})$ and $(Y,\mathcal{V})$.
Given any integer $I\geq1$, and any *finite* collection of points $(x_{i})_{i=1}^{I}$ of $X$ and of open sets $(V_{i})_{i=1}^{I}$ in $Y$, let $$B((x_{i})_{i=1}^{I};(V_{i})_{i=1}^{I})=\{ g\in\textrm{Map}(X,Y)\!:(g(x_{i})\in V_{i})(i=1,2,\cdots,I)\}\label{Eqn: point}$$ be the set of functions in $\textrm{Map}(X,Y)$ whose graphs pass through each of the sets $(V_{i})_{i=1}^{I}$ at $(x_{i})_{i=1}^{I}$, and let $_{\textrm{T}}\mathcal{B}$ be the collection of all such subsets of $\textrm{Map}(X,Y)$ for every choice of $I$, $(x_{i})_{i=1}^{I}$, and $(V_{i})_{i=1}^{I}$. The existence of a unique topology $\mathcal{T}$ — the *topology of pointwise convergence* on $\textrm{Map}(X,Y)$ — generated by the open sets $B$ of the collection $_{\textrm{T}}\mathcal{B}$ now follows: (TB1) is satisfied because for any $f\in\textrm{Map}(X,Y)$ there must be some $x\in X$ and a corresponding open $V\subseteq Y$ such that $f(x)\in V$, and (TB2) is satisfied because $$B((s_{i})_{i=1}^{I};(V_{i})_{i=1}^{I})\bigcap B((t_{j})_{j=1}^{J};(W_{j})_{j=1}^{J})=B((s_{i})_{i=1}^{I},(t_{j})_{j=1}^{J};(V_{i})_{i=1}^{I},(W_{j})_{j=1}^{J})$$ implies that a function simultaneously belonging to the two open sets on the left must pass through each of the points defining the open set on the right. We now demonstrate that $(\textrm{Map}(X,Y),\mathcal{T})$ is not first countable by verifying that it is not possible to have a countable local base at any $f\in\textrm{Map}(X,Y)$. Suppose, to the contrary, that the sets $B_{f}^{I}((x_{i})_{i=1}^{I};(V_{i})_{i=1}^{I})=\{ g\in\textrm{Map}(X,Y)\!:(g(x_{i})\in V_{i})_{i=1}^{I}\}$, denoting those members of $_{\textrm{T}}\mathcal{B}$ that contain $f$ with $V_{i}$ an open neighbourhood of $f(x_{i})$ in $Y$, form a countable local base at $f$; see Thm. A1.2.
Since $X$ is uncountable, it is now possible to choose some $x^{*}\in X$ different from all of the $(x_{i})_{i=1}^{I}$ (for example, let $x^{*}\in\mathbb{R}$ be an irrational for rational $(x_{i})_{i=1}^{I}$), and let $f(x^{*})\in V^{*}$, where $V^{*}$ is an open neighbourhood of $f(x^{*})$. Then $B(x^{*};V^{*})$ is an open set in $\textrm{Map}(X,Y)$ containing $f$; hence from the definition of the local base, Eq. (\[Eqn: TBx\]), or equivalently from the Corollary to Theorem A1.2, there exists some member $B^{I}$ of the countable base such that $f\in B^{I}\subseteq B(x^{*};V^{*})$. However, $$\begin{array}{ccc} f^{*}(x) & = & \begin{cases} y_{i}\in V_{i}, & \textrm{if }x=x_{i},\textrm{ and }1\leq i\leq I\\ y^{*}\notin V^{*}, & \textrm{if }x=x^{*}\\ \textrm{arbitrary}, & \textrm{otherwise}\end{cases}\end{array}$$ is a simple example of a function on $X$ that is in $B^{I}$ (as it is immaterial what values the function takes at points other than those defining $B^{I}$), but not in $B(x^{*};V^{*})$. From this it follows that *the topology of pointwise convergence on $\textrm{Map}(X,Y)$ cannot be first countable unless $X$ is countable.*$\qquad\blacksquare$ Even though it is not first countable, $(\textrm{Map}(X,Y),\mathcal{T})$ is a Hausdorff space when $Y$ is Hausdorff. Indeed, if $f,g\in(\textrm{Map}(X,Y),\mathcal{T})$ with $f\neq g$, then $f(x)\neq g(x)$ for some $x\in X$. But then, as $Y$ is Hausdorff, it is possible to choose disjoint open neighbourhoods $V_{f}$ and $V_{g}$ of $f(x)$ and $g(x)$ respectively, and $B(x;V_{f})$ and $B(x;V_{g})$ are then disjoint open sets containing $f$ and $g$. With this background on first and second countability, it is now possible to return to the question of nets, filters and sequences. Technically, a sequence on a set $X$ is a map $x\!:\mathbb{N}\rightarrow X$ from the set of natural numbers to $X$; instead of denoting this in the usual functional manner of $x(i)\textrm{ with }i\in\mathbb{N}$, it is standard practice to use the notation $(x_{i})_{i\in\mathbb{N}}$ for the terms of a sequence.
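Before passing to nets, it may help to see Theorem A1.1 in action on a finite set, where conditions (TB1) and (TB2) and the generated topology of Eq. (\[Eqn: TB\_topo\]) can be checked mechanically. The three-point space and the candidate base in this sketch are invented for illustration.

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})                                    # a hypothetical finite space
base = [frozenset({1}), frozenset({3}), frozenset({2, 3})]  # invented candidate base

# (TB1): the basic sets cover X
assert frozenset().union(*base) == X

# (TB2): whenever x lies in B1 ∩ B2, some basic B satisfies x ∈ B ⊆ B1 ∩ B2
for B1, B2 in combinations(base, 2):
    for x in B1 & B2:
        assert any(x in B and B <= (B1 & B2) for B in base)

# Eq. (TB_topo): the generated topology is the family of all unions of basic sets
def subcollections(coll):
    return chain.from_iterable(combinations(coll, r) for r in range(len(coll) + 1))

topology = {frozenset().union(*sub) for sub in subcollections(base)}
assert frozenset() in topology and X in topology   # empty union and the whole cover
```

Here the generated topology has six members, and it is closed under finite intersections without that ever having been imposed, exactly as remarked after Eq. (\[Eqn: TB\_topo\]).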
However, if the space $(X,\mathcal{U})$ is not first countable (and, as seen above, this is not a rare situation), it is not difficult to realize that sequences are inadequate to describe convergence in $X$, simply because a sequence can take only countably many values whereas the space may require uncountably many neighbourhoods to completely define the neighbourhood system at a point. The required uncountable generalizations of a sequence, in the form of *nets* and *filters*, are achieved through a corresponding generalization of the index set $\mathbb{N}$ to a directed set $\mathbb{D}$. **Definition A1.4.** *A* *directed set* *$\mathbb{D}$ is a preordered set for which the order $\preceq$, known as a* *direction of* $\mathbb{D}$, *satisfies* \(a) *$\alpha\in\mathbb{D}$ $\Rightarrow$ $\alpha\preceq\alpha$* (that is, $\preceq$ is reflexive)*.* \(b) $\alpha,\beta,\gamma\in\mathbb{D}\textrm{ such that }(\alpha\preceq\beta\wedge\beta\preceq\gamma)$ $\Rightarrow$ $\alpha\preceq\gamma$ (that is, $\preceq$ is transitive). \(c) $\alpha,\beta\in\mathbb{D}$ $\Rightarrow$ $\exists\gamma\in\mathbb{D}\textrm{ such that }(\alpha\preceq\gamma)\wedge(\beta\preceq\gamma)$*.$\qquad\square$* While the first two properties are obvious enough and constitute the preordering of $\mathbb{D}$, the third, which replaces antisymmetry, ensures that any finite number of elements of the directed set (recall that a preordered set need not be fully ordered) always has a successor. Examples of directed sets can be straightforward: any totally ordered set like $\mathbb{N}$, $\mathbb{R}$, $\mathbb{Q}$, or $\mathbb{Z}$, or the collection of all subsets of a set $X$ directed by the superset or subset relation (that is, $(\mathcal{P}(X),\supseteq)$ or $(\mathcal{P}(X),\subseteq)$). They can also be not quite so obvious, as the following examples, which are significantly useful in dealing with convergence questions in topological spaces, amply illustrate.
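The three axioms of Definition A1.4 can be checked mechanically for the reverse-inclusion direction that follows. In this hedged sketch, a finite fragment of the neighbourhood system of $0$ in $\mathbb{R}$ is modelled by symmetric intervals $(-r,r)$, encoded by their radii (the specific radii are assumptions of the example).

```python
from fractions import Fraction
from itertools import product

# radii of the intervals (-r, r); N ⊆ M exactly when radius(N) <= radius(M)
radii = [Fraction(1, k) for k in range(1, 6)]

def preceq(m, n):
    """Reverse inclusion: M ⪯ N iff N ⊆ M, i.e. radius(n) <= radius(m)."""
    return n <= m

# (a) reflexivity and (b) transitivity of Definition A1.4
assert all(preceq(m, m) for m in radii)
assert all(preceq(m, p) for m, n, p in product(radii, repeat=3)
           if preceq(m, n) and preceq(n, p))

# (c) every pair has a successor: the intersection interval, of radius min(m, n)
for m, n in product(radii, repeat=2):
    assert preceq(m, min(m, n)) and preceq(n, min(m, n))
```

The successor in (c) is supplied by the intersection, mirroring the argument $M\preceq M\bigcap N$ and $N\preceq M\bigcap N$ given in the text.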
The neighbourhood system $$_{\mathbb{D}}N=\{ N\!:N\in\mathcal{N}_{x}\}$$ at a point $x\in X$, directed by the reverse inclusion direction $\preceq$ defined as $$M\preceq N\Longleftrightarrow N\subseteq M\qquad\textrm{for }M,N\in\mathcal{N}_{x},\label{Eqn: Direction1}$$ is a fundamental example of a *natural direction of $\mathcal{N}_{x}$*. In fact, while reflexivity and transitivity are clearly obvious, (c) follows because for any $M,N\in\mathcal{N}_{x}$, $M\preceq M\bigcap N$ and $N\preceq M\bigcap N$. Of course, this direction is not a total ordering on $\mathcal{N}_{x}$. A more naturally useful directed set in convergence theory is $$_{\mathbb{D}}N_{t}=\{(N,t)\!:(N\in\mathcal{N}_{x})(t\in N)\}\label{Eqn: Directed}$$ under its *natural direction* $$(M,s)\preceq(N,t)\Longleftrightarrow N\subseteq M\qquad\textrm{for }M,N\in\mathcal{N}_{x};\label{Eqn: Direction2}$$ $_{\mathbb{D}}N_{t}$ is more useful than $_{\mathbb{D}}N$ because, unlike the latter, $_{\mathbb{D}}N_{t}$ does not require a simultaneous choice of points from every $N\in\mathcal{N}_{x}$, which would implicitly involve a simultaneous application of the Axiom of Choice; see Examples A1.3(2) and (3) below. The general indexed variation $$_{\mathbb{D}}N_{\beta}=\{(N,\beta)\!:(N\in\mathcal{N}_{x})(\beta\in\mathbb{D})(x_{\beta}\in N)\}\label{Eqn: DirectedIndexed}$$ of Eq. (\[Eqn: Directed\]), with natural direction $$(M,\alpha)\leq(N,\beta)\Longleftrightarrow(\alpha\preceq\beta)\wedge(N\subseteq M),\label{Eqn: DirectionIndexed}$$ often proves useful in applications, as will be clear from the proofs of Theorems A1.3 and A1.4. **Definition A1.5.** ***Net.*** *Let $X$ be any set and $\mathbb{D}$ a directed set. A net $\chi\!:\mathbb{D}\rightarrow X$* *in $X$* *is a function* *on the directed set $\mathbb{D}$ with values in $X$.$\qquad\square$* A net, to be denoted as $\chi(\alpha)$, $\alpha\in\mathbb{D}$, is therefore a function indexed by a directed set.
We adopt the convention of denoting nets in the manner of functions and do not use the sequential notation $\chi_{\alpha}$ that can also be found in the literature. Thus, while every sequence is a special type of net, $\chi\!:\mathbb{Z}\rightarrow X$ is an example of a net that is not a sequence. Convergence of sequences and nets is described most conveniently in terms of the notions of being *eventually in* and *frequently in* every neighbourhood of points. We describe these concepts in terms of nets; they apply to sequences with obvious modifications. **Definition A1.6.** *A net* $\chi\!:\mathbb{D}\rightarrow X$ *is said to be* \(a) *Eventually in* *a subset $A$* *of* *$X$ if some tail of the net lies in $A$*: *$(\exists\beta\in\mathbb{D})\!:(\forall\gamma\succeq\beta)(\chi(\gamma)\in A).$* \(b) *Frequently in* *a subset $A$* *of* *$X$ if for any index $\beta\in\mathbb{D}$ there is a successor index $\gamma\in\mathbb{D}$ such that $\chi(\gamma)$* is in $A$: *$(\forall\beta\in\mathbb{D})(\exists\gamma\succeq\beta)\!:(\chi(\gamma)\in A).\qquad\square$* It is not difficult to appreciate that \(i) A net eventually in a subset is also frequently in it, but not conversely, \(ii) A net eventually (respectively, frequently) in a subset cannot be frequently (respectively, eventually) in its complement. With these notions of eventually in and frequently in, the convergence characteristics of a net may be expressed as follows.
**Definition A1.7.** *A net* *$\chi\!:\mathbb{D}\rightarrow X$ converges to $x\in X$ if it is eventually in every neighbourhood of $x$, that is* $$(\forall N\in\mathcal{N}_{x})(\exists\mu\in\mathbb{D})(\forall\nu\succeq\mu)(\chi(\nu)\in N).$$ *The point $x$ is known as the* *limit* *of $\chi$, and the collection of all limits of a net is the* *limit set* $$\textrm{lim}(\chi)=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})(\exists\mathbb{R}_{\beta}\in\textrm{Res}(\mathbb{D}))(\chi(\mathbb{R}_{\beta})\subseteq N)\}\label{Eqn: lim net}$$ *of $\chi$, with the set of* *residuals* $\textrm{Res}(\mathbb{D})$ *in $\mathbb{D}$ given by* $$\textrm{Res}(\mathbb{D})=\{\mathbb{R}_{\alpha}\in\mathcal{P}(\mathbb{D})\!:\mathbb{R}_{\alpha}=\{\beta\in\mathbb{D}\!:\beta\succeq\alpha\},\:\alpha\in\mathbb{D}\}.\label{Eqn: residual}$$ *The net* *adheres at* *$x\in X$*[^27] *if it is frequently in every neighbourhood of $x$, that is* $$((\forall N\in\mathcal{N}_{x})(\forall\mu\in\mathbb{D}))((\exists\nu\succeq\mu)\!:\chi(\nu)\in N).$$ *The point $x$ is known as an* *adherent* *of $\chi$, and the collection of all adherents of $\chi$ is the* *adherent set of the net, which* *may be expressed in terms of the* *cofinal subsets* *of $\mathbb{D}$,* $$\textrm{Cof}(\mathbb{D})=\{\mathbb{C}\in\mathcal{P}(\mathbb{D})\!:(\forall\alpha\in\mathbb{D})(\exists\beta\in\mathbb{C})(\beta\succeq\alpha)\}\label{Eqn: cofinal}$$ (thus $\mathbb{C}$ is cofinal in $\mathbb{D}$ iff it intersects every residual in $\mathbb{D}$), *as* $$\textrm{adh}(\chi)=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})(\exists\mathbb{C}\in\textrm{Cof}(\mathbb{D}))(\chi(\mathbb{C})\subseteq N)\}.\label{Eqn: adh net1}$$ *This recognizes, in keeping with the limit set, each subnet of a net as a net in its own right, and is equivalent to* $${\textstyle \textrm{adh}(\chi)=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})(\forall\mathbb{R}_{\alpha}\in\textrm{Res}(\mathbb{D}))(\chi(\mathbb{R}_{\alpha})\bigcap N\neq\emptyset)\}.\qquad\square}\label{Eqn: adh net2}$$ Intuitively, a sequence is eventually in a set $A$ if it is always in it after a finite number of terms (of course, the concept of a *finite number of terms* is unavailable for nets; in that case the situation may be described by saying that a net is eventually in $A$ if its *tail is in* $A$), and it is frequently in $A$ if it always returns to $A$ after leaving it. It can be shown that a net is eventually (resp. frequently) in a set iff it is not frequently (resp. eventually) in its complement. The following examples illustrate graphically the role of a proper choice of the index set $\mathbb{D}$ in the description of convergence. **Example A1.3.** (1) Let $\gamma\in\mathbb{D}$. The eventually constant net $\chi(\delta)=x$ for $\delta\succeq\gamma$ converges to $x$. \(2) Let $\mathcal{N}_{x}$ be a neighbourhood system at a point $x$ in $X$ and suppose that the net $(\chi(N))_{N\in\mathcal{N}_{x}}$ is defined by $$\chi(N)\overset{\textrm{def}}=t\in N;\label{Eqn: Def: Net1}$$ here the directed index set $_{\mathbb{D}}N$ is ordered by the natural direction (\[Eqn: Direction1\]) of $\mathcal{N}_{x}$. Then $\chi(N)\rightarrow x$ because, given any $x$-neighbourhood $M\in\!\:_{\mathbb{D}}N$, it follows from $$M\preceq N\in\,{}_{\mathbb{D}}N\Longrightarrow\chi(N)=t\in N\subseteq M\label{Eqn: DirectedNet1}$$ that a point in any subset of $M$ is also in $M$; $\chi(N)$ is therefore eventually in every neighbourhood of $x$. \(3) This slightly more general form of the previous example provides a link between the complementary concepts of nets and filters that is considered below. For a point $x\in X$ and $M,N\in\mathcal{N}_{x}$, with the corresponding directed set $_{\mathbb{D}}N_{t}$ of Eq.
(\[Eqn: Directed\]) ordered by its natural order (\[Eqn: Direction2\]), the net $$\chi(M,s)\overset{\textrm{def}}=s\label{Eqn: Def: Net2}$$ converges to $x$ because, as in the previous example, for any given $(M,s)\in\:\!_{\mathbb{D}}M_{s}$, it follows from $$(M,s)\preceq(N,t)\in\!\:_{\mathbb{D}}M_{s}\Longrightarrow\chi(N,t)=t\in N\subseteq M\label{Eqn: DirectedNet2}$$ that $\chi(N,t)$ is eventually in every neighbourhood $M$ of $x$. The significance of the directed set $_{\mathbb{D}}N_{t}$ of Eq. (\[Eqn: Directed\]), as compared to $_{\mathbb{D}}N$, is evident from the net that it induces *without using the Axiom of Choice*: for a subset $A$ of $X$, the net $\chi(N,t)=t\in A$ indexed by the directed set $${\textstyle _{\mathbb{D}}N_{t}=\{(N,t)\!:(N\in\mathcal{N}_{x})(t\in N\bigcap A)\}}\label{Eqn: Closure_Directed}$$ under the direction of Eq. (\[Eqn: Direction2\]), converges to $x\in X$, with all such $x$ defining the closure $\textrm{Cl}(A)$ of $A$. Furthermore, the directed set $${\textstyle _{\mathbb{D}}N_{t}=\{(N,t)\!:(N\in\mathcal{N}_{x})(t\in N\bigcap A-\{ x\})\}}\label{Eqn: Der_Directed}$$ which, unlike Eq. (\[Eqn: Closure\_Directed\]), excludes the point $x$ that may or may not be in the subset $A$ of $X$, induces the net $\chi(N,t)=t\in A-\{ x\}$ converging to $x\in X$, with the set of all such $x$ yielding the derived set $\textrm{Der}(A)$ of $A$. In contrast, Eq. (\[Eqn: Closure\_Directed\]) also admits the isolated points $t=x$ of $A$ and thereby generates its closure. Observe how neighbourhoods of a point, which define convergence of nets and filters in a topological space $X$, double up here as index sets to yield a self-consistent tool for the description of convergence. As compared with sequences, where the index set is restricted to the positive integers, the considerable freedom in the choice of directed sets, abundantly borne out by the two preceding examples, is not without its drawbacks.
Thus, as a trade-off, the wide range of choice of the directed sets may imply that induction methods, so common in the analysis of sequences, need no longer apply to arbitrary nets. \(4) The non-convergent nets (actually these are sequences) \(a) $(1,-1,1,-1,\cdots)$, which adheres at $1$ and $-1$, and \(b) $\begin{array}{ccl} x_{n} & = & {\displaystyle \left\{ \begin{array}{lcl} n & & \textrm{if }n\textrm{ is odd}\\ 1-1/(1+n) & & \textrm{if }n\textrm{ is even}\end{array},\right.}\end{array}$ which adheres at $1$ through its even terms but is unbounded in the odd terms.$\qquad\blacksquare$ A converging sequence or net is also adhering but, as the examples in (4) show, the converse is false. Nevertheless it is true, as again is evident from the examples in (4), that in a first countable space, where sequences suffice, a sequence $(x_{n})$ adheres at $x$ iff some subsequence $(x_{n_{m}})_{m\in\mathbb{N}}$ of $(x_{n})$ converges to $x$. If the space is not first countable, this has a corresponding equivalent formulation for nets, with subnets replacing subsequences, as follows. Let $(\chi(\alpha))_{\alpha\in\mathbb{D}}$ be a net. A *subnet* of $\chi(\alpha)$ is the net $\zeta(\beta)=\chi(\sigma(\beta))$, $\beta\in\mathbb{E}$, where $\sigma\!:(\mathbb{E},\leq)\rightarrow(\mathbb{D},\preceq)$ is a function that captures the essence of the subsequential mapping $n\mapsto n_{m}$ in $\mathbb{N}$ by satisfying (SN1) $\sigma$ is order-preserving: it respects the order of $\mathbb{E}$: $\sigma(\beta)\preceq\sigma(\beta^{\prime})$ for every $\beta\leq\beta^{\prime}\in\mathbb{E}$, and (SN2) For every $\alpha\in\mathbb{D}$ there exists a $\beta\in\mathbb{E}$ such that $\alpha\preceq\sigma(\beta)$.
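On finite, totally ordered index sets the two conditions (SN1) and (SN2) can be checked mechanically. The following Python sketch is purely illustrative (the name `is_subnet_map` and the toy index sets are our own choices, and on finite directed sets convergence itself is trivial); it encodes (SN1) as order preservation and (SN2) as cofinality of the image of $\sigma$:

```python
from itertools import product

def is_subnet_map(sigma, E, D):
    """Check (SN1) and (SN2) for sigma: E -> D between finite,
    totally ordered index sets (the order being the usual <=)."""
    sn1 = all(sigma(b1) <= sigma(b2)          # (SN1): order-preserving
              for b1, b2 in product(E, E) if b1 <= b2)
    sn2 = all(any(a <= sigma(b) for b in E)   # (SN2): sigma(E) is cofinal in D
              for a in D)
    return sn1 and sn2

D = range(9)                # index set of the net chi
E = range(5)                # index set of the candidate subnet zeta
double = lambda b: 2 * b    # the subnet function beta -> 2*beta

print(is_subnet_map(double, E, D))        # True: order-preserving and cofinal
print(is_subnet_map(lambda b: 0, E, D))   # False: a constant map fails (SN2)
```

The second call shows why (SN2) is needed: an order-preserving map whose image is bounded in $\mathbb{D}$ picks out only an initial part of the net and cannot track its eventual behaviour.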
These generalize the essential properties of a subsequence in the sense that (1) Even though the index sets $\mathbb{D}$ and $\mathbb{E}$ may be different, it is necessary that the values of $\sigma$ be contained in $\mathbb{D}$, and (2) There are arbitrarily large $\alpha\in\mathbb{D}$ such that $\chi(\alpha)$, with $\alpha=\sigma(\beta)$, is a value of the subnet $\zeta(\beta)$ for some $\beta\in\mathbb{E}$. Recalling the first of the order relations Eq. (\[Eqn: FunctionOrder\]) on $\textrm{Map}(X,Y)$, we will denote a subnet $\zeta$ of $\chi$ by $\zeta\preceq\chi$. We now consider the concept of a filter on a set $X$, which is very useful in visualizing the behaviour of sequences and nets; in fact, filters constitute an alternate way of looking at convergence questions in topological spaces. A filter $\mathcal{F}$ on a set $X$ is a collection of *nonempty* subsets of $X$ satisfying properties $(\textrm{F}1)-(\textrm{F}3)$ below, which are simply those of a neighbourhood system $\mathcal{N}_{x}$ without specification of the reference point $x$. (F1) The empty set $\emptyset$ does not belong to $\mathcal{F}$, (F2) The intersection of any two members of a filter is another member of the filter: $F_{1},F_{2}\in\mathcal{F}\Rightarrow F_{1}\bigcap F_{2}\in\mathcal{F}$, (F3) Every superset of a member of a filter belongs to the filter: $(F\in\mathcal{F})\wedge(F\subseteq G)\Rightarrow G\in\mathcal{F}$; in particular $X\in\mathcal{F}$. **Example A1.4.** (1) The *indiscrete filter* $\{X\}$ is the smallest filter on $X$. \(2) The neighbourhood system $\mathcal{N}_{x}$ is the important *neighbourhood filter at $x$ on $X$,* and any local base at $x$ is also a filter-base for $\mathcal{N}_{x}$. In general, for any subset $A$ of $X$, $\{ N\subseteq X\!:A\subseteq\textrm{Int}(N)\}$ is a filter on $X$ at $A$.
\(3) All subsets of $X$ containing a point $x\in X$ constitute the *principal filter* $_{\textrm{F}}\mathcal{P}(x)$ *on $X$ at $x$.* More generally, if $\mathcal{F}$ consists of all supersets of a *nonempty* subset $A$ of $X$, then $\mathcal{F}$ is the *principal filter* $_{\textrm{F}}\mathcal{P}(A)=\{ N\subseteq X\!:A\subseteq N\}$ *at $A$. Adjoining the empty set to these filters gives the $x$-inclusion and $A$-inclusion topologies on $X$ respectively.* The single-element sets $\{\{ x\}\}$ and $\{ A\}$ are particularly simple examples of filter-bases that generate the principal filters at $x$ and $A$. \(4) For an uncountable (resp. infinite) set $X$, all cocountable (resp. cofinite) subsets of $X$ constitute the *cocountable* (resp. *cofinite* or *Frechet*) filter on $X$. Again, adding the empty set to these filters gives the respective topologies.$\qquad\blacksquare$ Like the topological and local bases $_{\textrm{T}}\mathcal{B}$ and $\mathcal{B}_{x}$ respectively, a subclass of $\mathcal{F}$ may be used to define a filter-base $_{\textrm{F}}\mathcal{B}$ that in turn generates $\mathcal{F}$ on $X$, just as it is possible to define the concepts of limit and adherence sets for a filter, paralleling those for nets, that follow straightforwardly from Def. A1.7 taken with Def. A1.11. **Definition A1.8.** *Let $(X,\mathcal{T})$ be a topological space and $\mathcal{F}$ a filter on $X$. Then*$$\textrm{lim}(\mathcal{F})=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})(\exists F\in\mathcal{F})(F\subseteq N)\}\label{Eqn: lim filter}$$ and $${\textstyle \textrm{adh}(\mathcal{F})=\{ x\in X\!:(\forall N\in\mathcal{N}_{x})(\forall F\in\mathcal{F})(F\bigcap N\neq\emptyset)\}}\label{Eqn: adh filter}$$ *are respectively the sets of* *limit points* *and* *adherent points* *of $\mathcal{F}$*[^28]*.$\qquad\square$* A comparison of Eqs. (\[Eqn: lim net\]) and (\[Eqn: adh net2\]) with Eqs.
(\[Eqn: lim filter\]) and (\[Eqn: adh filter\]) respectively demonstrates their formal similarity; this inter-relation between filters and nets will be made precise in Definitions A1.10 and A1.11 below. It should be clear from the preceding two equations that $$\textrm{lim}(\mathcal{F})\subseteq\textrm{adh}(\mathcal{F}),\label{Eqn: lim/adh(fil)}$$ with a similar result $$\textrm{lim}(\chi)\subseteq\textrm{adh}(\chi)\label{Eqn: lim/adh(net)}$$ holding for nets because of the duality between nets and filters displayed by Defs. A1.10 and A1.11 below, with the equality in Eqs. (\[Eqn: lim/adh(fil)\]) and (\[Eqn: lim/adh(net)\]) being true (but not characterizing) for ultrafilters and ultranets respectively; see Example 4.2(3) for an account of this notion. It should be clear from the equations of Definition A1.8 that $$\textrm{adh}(\mathcal{F})=\{ x\in X\!:(\exists\textrm{ a finer filter }\mathcal{G}\supseteq\mathcal{F}\textrm{ on }X)\textrm{ }(\mathcal{G}\rightarrow x)\}\label{Eqn: filter adh}$$ consists of all the points of $X$ to which some finer filter $\mathcal{G}$ (finer in the sense that every element of $\mathcal{F}$ is also in $\mathcal{G}$) converges in $X$; thus $${\textstyle \textrm{adh}(\mathcal{F})=\bigcup\{\lim(\mathcal{G})\!:\mathcal{G}\supseteq\mathcal{F}\},}$$ which corresponds to the net-result of Theorem A1.5 below, that a net *$\chi$* adheres at *$x$* iff there is some subnet of *$\chi$* that converges to *$x$* in *$X$*. Thus if $\zeta\preceq\chi$ is a subnet of $\chi$ and $\mathcal{F}$ is a filter coarser than $\mathcal{G}$, then $$\begin{aligned} \lim(\chi)\subseteq\lim(\zeta) & & \lim(\mathcal{F})\subseteq\lim(\mathcal{G})\\ \textrm{adh}(\zeta)\subseteq\textrm{adh}(\chi) & & \textrm{adh}(\mathcal{G})\subseteq\textrm{adh}(\mathcal{F});\end{aligned}$$ a filter $\mathcal{G}$ finer than a given filter $\mathcal{F}$ corresponds to a subnet $\zeta$ of a given net $\chi$.
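On a finite model, the sets of Definition A1.8 can be computed directly. The sketch below is an illustration only (the three-point topology, the helper names, and the use of principal filters are our own choices): it evaluates $\textrm{lim}(\mathcal{F})$ and $\textrm{adh}(\mathcal{F})$ for filters on a non-Hausdorff space and exhibits both $\textrm{lim}(\mathcal{F})\subseteq\textrm{adh}(\mathcal{F})$ and the monotonicity under refinement noted above:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
# A non-Hausdorff topology on X: the open sets are {}, {0}, {0,1}, X.
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def nbhds(x):
    """Neighbourhood filter N_x: sets containing an open set that contains x."""
    return [N for N in powerset(X) if any(x in U and U <= N for U in opens)]

def principal(A):
    """Principal filter at A: all supersets of A."""
    return [F for F in powerset(X) if A <= F]

def lim(filt):
    """Eq. (lim filter): every N in N_x contains some member of the filter."""
    return {x for x in X if all(any(F <= N for F in filt) for N in nbhds(x))}

def adh(filt):
    """Eq. (adh filter): every N in N_x meets every member of the filter."""
    return {x for x in X if all(F & N for F in filt for N in nbhds(x))}

F = principal(frozenset({0, 1}))
G = principal(frozenset({0}))   # finer than F: every member of F lies in G

print(sorted(lim(F)), sorted(adh(F)))       # lim(F) is contained in adh(F)
print(lim(F) <= lim(G), adh(G) <= adh(F))   # monotonicity under refinement
```

Note that in this non-Hausdorff space limits are not unique: the finer filter $\mathcal{G}$ converges to every point of $X$.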
The implication of this correspondence should be clear from the association between nets and filters contained in Definitions A1.10 and A1.11. A filter-base in $X$ is a *nonempty* family $(B_{\alpha})_{\alpha\in\mathbb{D}}=\!\,_{\textrm{F}}\mathcal{B}$ of subsets of $X$ characterized by (FB1) There are no empty sets in the collection $_{\textrm{F}}\mathcal{B}$: $(\forall\alpha\in\mathbb{D})(B_{\alpha}\neq\emptyset)$ (FB2) The intersection of any two members of $_{\textrm{F}}\mathcal{B}$ contains another member of $_{\textrm{F}}\mathcal{B}$: $B_{\alpha},B_{\beta}\in\,_{\textrm{F}}\mathcal{B}\Rightarrow(\exists B\in\,_{\textrm{F}}\mathcal{B}\!:B\subseteq B_{\alpha}\bigcap B_{\beta})$; hence any class of subsets of $X$ that does not contain the empty set and is closed under finite intersections is a base for a unique filter on $X$; compare the properties (NB1) and (NB2) of a local basis given at the beginning of this Appendix. Similar to Def. A1.1 for the local base, it is possible to define **Definition A1.9.** *A filter-base* $_{\textrm{F}}\mathcal{B}$ *in a set $X$ is a subcollection of the filter* $\mathcal{F}$ *on $X$ having the property that each $F\in\mathcal{F}$ contains some member of* $_{\textrm{F}}\mathcal{B}$*, that is* $$(\forall F\in\mathcal{F})(\exists B\in\,_{\textrm{F}}\mathcal{B})(B\subseteq F).\label{Eqn: FB}$$ *It determines the filter* $$\mathcal{F}=\{ F\subseteq X\!:B\subseteq F\textrm{ for some }B\textrm{ }\in\!\,_{\textrm{F}}\mathcal{B}\}\label{Eqn: filter_base}$$ *reciprocally as the collection of all supersets of the basic elements.$\qquad\square$* This is the smallest filter on $X$ that contains $_{\textrm{F}}\mathcal{B}$ and is said to be *the filter generated by its filter-base* $_{\textrm{F}}\mathcal{B}$; alternatively, $_{\textrm{F}}\mathcal{B}$ is a filter-base of $\mathcal{F}$.
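The passage from a filter-base to the filter it generates is a finite computation on a finite set. A hedged sketch (the helper names `is_filter_base` and `generate` are our own) checks (FB1) and (FB2), forms the filter of Eq. (\[Eqn: filter\_base\]) as all supersets of basic elements, and then verifies that the result satisfies (F1)-(F3):

```python
from itertools import combinations

X = frozenset(range(4))

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_filter_base(base):
    fb1 = all(B for B in base)                    # (FB1): no empty member
    fb2 = all(any(C <= B1 & B2 for C in base)     # (FB2): each pairwise meet
              for B1 in base for B2 in base)      #        contains a member
    return fb1 and fb2

def generate(base):
    """Eq. (filter_base): the filter consists of all supersets of basic elements."""
    return {F for F in powerset(X) if any(B <= F for B in base)}

base = [frozenset({0, 1}), frozenset({1, 2}), frozenset({1})]
assert is_filter_base(base)
filt = generate(base)

# The generated filter satisfies (F1)-(F3): no empty set, closed under
# pairwise intersections, and closed under supersets.
assert frozenset() not in filt
assert all(F1 & F2 in filt for F1 in filt for F2 in filt)
assert all(G in filt for F in filt for G in powerset(X) if F <= G)
print(len(filt))   # 8: the supersets of the smallest basic element {1}
```

Here the base is redundant in the sense that $\{1\}$ alone already generates the same filter, illustrating that distinct filter-bases may generate one filter.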
The entire neighbourhood system $\mathcal{N}_{x}$, the local base $\mathcal{B}_{x}$, the traces $\mathcal{N}_{x}\bigcap A$ for $x\in\textrm{Cl}(A)$, and the set of all residuals of a directed set $\mathbb{D}$ are among the most useful examples of filter-bases on $X$, $A$ and $\mathbb{D}$ respectively. Of course, every filter is trivially a filter-base of itself, and *the singletons $\{\{ x\}\}$, $\{ A\}$ are filter-bases that generate the principal filters $_{\textrm{F}}\mathcal{P}(x)$ and $_{\textrm{F}}\mathcal{P}(A)$ at $x$ and $A$ respectively*. Paralleling the case of the topological subbase $_{\textrm{T}}\mathcal{S}$, a filter subbase $_{\textrm{F}}\mathcal{S}$ can be defined on $X$ to be any collection of subsets of $X$ *with the finite intersection property* (as compared with $_{\textrm{T}}\mathcal{S}$, where no such condition was necessary, this represents the fundamental point of departure between topology and filter), and it is not difficult to deduce that the filter generated by $_{\textrm{F}}\mathcal{S}$ on $X$ is obtained by taking all finite intersections $_{\textrm{F}}\mathcal{S}_{\wedge}$ of members of $_{\textrm{F}}\mathcal{S}$ followed by their supersets $_{\textrm{F}}\mathcal{S}_{\Sigma\wedge}$. $\mathcal{F}(_{\textrm{F}}\mathcal{S}):=\,_{\textrm{F}}\mathcal{S}_{\Sigma\wedge}$ is the smallest filter on $X$ that contains $_{\textrm{F}}\mathcal{S}$ and is the filter *generated by* $_{\textrm{F}}\mathcal{S}$.
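The two-step generation from a subbase, finite intersections $_{\textrm{F}}\mathcal{S}_{\wedge}$ followed by their supersets $_{\textrm{F}}\mathcal{S}_{\Sigma\wedge}$, can likewise be sketched on a finite set (illustrative code only; the names `has_fip` and `filter_from_subbase` are ours):

```python
from itertools import combinations

X = frozenset(range(4))

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def has_fip(subbase):
    """Finite intersection property: no finite meet is empty."""
    sets = list(subbase)
    return all(frozenset.intersection(*combo)
               for r in range(1, len(sets) + 1)
               for combo in combinations(sets, r))

def filter_from_subbase(subbase):
    sets = list(subbase)
    meets = {frozenset.intersection(*combo)       # step 1: finite meets S_wedge
             for r in range(1, len(sets) + 1)
             for combo in combinations(sets, r)}
    return {F for F in powerset(X)                # step 2: all their supersets
            if any(M <= F for M in meets)}

S = [frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({0, 2, 3})]
assert has_fip(S)          # every finite meet contains the point 2
filt = filter_from_subbase(S)
print(sorted(min(filt, key=len)))   # [2]: the smallest member is the triple meet
```

The finite intersection property is exactly what step 2 needs: if some finite meet were empty, the "filter" would contain $\emptyset$, violating (F1).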
Equation (\[Eqn: adh filter\]) can be put in the more useful and transparent form given by **Theorem A1.3.** *For a filter $\mathcal{F}$ in a space $(X,\mathcal{T})$* $$\begin{aligned} {\displaystyle \textrm{adh}(\mathcal{F})} & = & {\displaystyle \bigcap_{F\in\mathcal{F}}\textrm{Cl}(F)}\label{Eqn: filter adh*}\\ & = & {\displaystyle \bigcap_{B\in\,_{\textrm{F}}\mathcal{B}}\textrm{Cl}(B)},\nonumber \end{aligned}$$ *and dually* $\textrm{adh}(\chi)$, *are closed sets.$\qquad\square$* **Proof.** Follows immediately from the definitions of the closure of a set, Eq. (\[Eqn: Def: Closure\]), and the adherence of a filter, Eq. (\[Eqn: adh filter\]). As always, it is a matter of convenience whether the filter-base $_{\textrm{F}}\mathcal{B}$ is used instead of $\mathcal{F}$ to generate the adherence set.$\qquad\blacksquare$ It is in fact true that the limit sets $\lim(\mathcal{F})$ and $\lim(\chi)$ are also closed subsets of $X$; the arguments, which involve ultrafilters, are omitted. Similar to the notion of the adherence set of a filter is its *core* — a concept that, unlike the adherence, is purely set-theoretic, being the infimum of the filter, and is not linked to any topological structure of the underlying (infinite) set $X$ — defined as $${\displaystyle \textrm{core}(\mathcal{F})=\bigcap_{F\in\mathcal{F}}F.}\label{Eqn: core}$$ From Theorem A1.3 and the fact that the closure of a set $A$ is the smallest closed set that contains $A$, see Eq.
(\[Eqn: closure\]) at the end of Tutorial 4, it is clear that in terms of filters $$\begin{aligned} A & = & \textrm{core}(\,_{\textrm{F}}\mathcal{P}(A))\nonumber \\ \textrm{Cl}(A) & = & \textrm{adh}(\,_{\textrm{F}}\mathcal{P}(A))\label{Eqn: PrinFil_Cl(A)}\\ & = & \textrm{core}(\textrm{Cl}(\,_{\textrm{F}}\mathcal{P}(A)))\nonumber \end{aligned}$$ where $_{\textrm{F}}\mathcal{P}(A)$ is the principal filter at $A$; thus *the core and adherence sets of the principal filter at $A$ are equal respectively to $A$ and* $\textrm{Cl}(A)$ — a classic example of equality in the general relation $\textrm{Cl}(\bigcap A_{\alpha})\subseteq\bigcap\textrm{Cl}(A_{\alpha})$ — but both are empty, for example, in the case of an infinitely decreasing family of rationals centered at any irrational (leading to a filter-base of rationals at the chosen irrational). This is an important example demonstrating that *the infinite intersection of a non-empty family of (closed) sets with the finite intersection property may be empty,* *a situation that cannot arise on a finite set or an infinite compact set*. Filters on $X$ with an empty core are said to be *free,* and are *fixed* otherwise: notice that by its very definition a filter cannot be free on a finite set, and a free filter represents an additional feature that may arise in passing from finite to infinite sets. Clearly $(\textrm{adh}(\mathcal{F})=\emptyset)\Rightarrow(\textrm{core}(\mathcal{F})=\emptyset)$ but, as the important example of the rational space in the reals illustrates, the converse need not be true. Another example of a free filter of the same type is provided by the filter-base $\{[a,\infty)\!:a\in\mathbb{R}\}$ in $\mathbb{R}$. Both these examples illustrate the important property that *a filter is free iff it contains the cofinite filter,* and the cofinite filter is the smallest possible free filter on an infinite set. The free cofinite filter, as these examples illustrate, may typically be generated as follows.
Let $A$ be a subset of $X$, $x\in\textrm{Bdy}_{X-A}(A)$, and consider the directed set Eq. (\[Eqn: Closure\_Directed\]) to generate the corresponding net in $A$ given by $\chi(N\in\mathcal{N}_{x},t)=t\in A$. Quite clearly, the core of any Frechet filter based on this net must be empty, as the point $x$ does not lie in $A$. In general, the intersection is empty because if it were not so, then the complement of the intersection — which is an element of the filter — would be infinite, in contravention of the hypothesis that the filter is Frechet. It should be clear that every filter finer than a free filter is also free, and any filter coarser than a fixed filter is fixed. Nets and filters are complementary concepts, and one may switch from one to the other as follows. **Definition A1.10.** *Let $\mathcal{F}$ be a filter on $X$ and let $_{\mathbb{D}}F_{x}=\{(F,x)\!:(F\in\mathcal{F})(x\in F)\}$ be a directed set with its natural direction $(F,x)\preceq(G,y)\Leftrightarrow G\subseteq F$. The net $\chi_{\mathcal{F}}\,\!:\,_{\mathbb{D}}F_{x}\rightarrow X$ defined by* $$\chi_{\mathcal{F}}(F,x)=x$$ *is said to be* *associated with* *the filter* *$\mathcal{F}$; see Eq. (\[Eqn: DirectedNet2\]).$\qquad\square$* **Definition A1.11.** *Let $\chi\!:\mathbb{D}\rightarrow X$ be a net and $\mathbb{R}_{\alpha}=\{\beta\in\mathbb{D}\!:\beta\succeq\alpha\in\mathbb{D}\}$ a residual in $\mathbb{D}$.
Then* $$_{\textrm{F}}\mathcal{B}_{\chi}\overset{\textrm{def}}=\{\chi(\mathbb{R}_{\alpha})\subseteq X\!:\mathbb{R}_{\alpha}\in\textrm{Res}(\mathbb{D})\}$$ *is the* *filter-base associated with* *$\chi$, and the corresponding filter $\mathcal{F}_{\chi}$ obtained by taking all supersets of the elements of* $_{\textrm{F}}\mathcal{B}_{\chi}$ *is the* *filter* *associated with* *$\chi$.$\qquad\square$* $_{\textrm{F}}\mathcal{B}_{\chi}$ is a filter-base in $X$ because the inclusion $\chi(\bigcap\mathbb{R}_{\alpha})\subseteq\bigcap\chi(\mathbb{R}_{\alpha})$, which holds for any function, proves (FB2). It is not difficult to verify that \(i) $\chi$ is eventually in $A\Longrightarrow A\in\mathcal{F}_{\chi}$, and \(ii) $\chi$ is frequently in $A\Longrightarrow(\forall\mathbb{R}_{\alpha}\in\textrm{Res}(\mathbb{D}))(A\bigcap\chi(\mathbb{R}_{\alpha})\neq\emptyset)$ $\Longrightarrow(\forall F\in\mathcal{F}_{\chi})(A\bigcap F\neq\emptyset)$. Limits and adherences are obviously preserved in switching between nets (respectively, filters) and the filters (respectively, nets) that they generate: $$\begin{aligned} \lim(\chi)=\lim(\mathcal{F}_{\chi}), & & \textrm{adh}(\chi)=\textrm{adh}(\mathcal{F}_{\chi})\label{Eqn: net-fil}\\ \lim(\mathcal{F})=\lim(\chi_{\mathcal{F}}), & & \textrm{adh}(\mathcal{F})=\textrm{adh}(\chi_{\mathcal{F}}).\label{Eqn: fil-net}\end{aligned}$$ The proofs of the two parts of Eq. (\[Eqn: net-fil\]), for example, go respectively as follows.
$x\in\lim(\chi)\Leftrightarrow\chi\textrm{ is eventually in every }N\in\mathcal{N}_{x}\Leftrightarrow(\forall N\in\mathcal{N}_{x})(\exists F\in\mathcal{F}_{\chi})\textrm{ such that }(F\subseteq N)\Leftrightarrow x\in\lim(\mathcal{F}_{\chi})$, and $x\in\textrm{adh}(\chi)\Leftrightarrow\chi\textrm{ is frequently in every }N\in\mathcal{N}_{x}\Leftrightarrow(\forall N\in\mathcal{N}_{x})(\forall F\in\mathcal{F}_{\chi})\textrm{ }(N\bigcap F\neq\emptyset)\Leftrightarrow x\in\textrm{adh}(\mathcal{F}_{\chi})$; here $F$ is a superset of some $\chi(\mathbb{R}_{\alpha})$. Some examples of convergence of filters are \(1) Any filter on an indiscrete space $X$ converges to every point of $X$. \(2) Any filter that coincides with the topology of the space (minus the empty set, of course) converges to every point of the space. \(3) For each $x\in X$, the neighbourhood filter $\mathcal{N}_{x}$ converges to $x$; this is the smallest filter on $X$ that converges to $x$. \(4) The *indiscrete* filter $\mathcal{F}=\{ X\}$ converges to no point in the space $(X,\{\emptyset,A,X-A,X\})$, but converges to every point of $X-A$ if $X$ has the topology $\{\emptyset,A,X\}$, because the only neighbourhood of any point in $X-A$ is $X$, which is contained in the filter. One of the most significant consequences of the convergence theory of sequences and nets, as shown by the two theorems and the corollary following, is that it can be used to describe the topology of a set. The proofs of the theorems also illustrate the close inter-relationship between nets and filters. **Theorem A1.4.** *For a subset $A$ of a topological space $X$,* $$\textrm{Cl}(A)=\{ x\in X\!:(\exists\textrm{ a net }\chi\textrm{ in }A)\textrm{ }(\chi\rightarrow x)\}.\qquad\square\label{Eqn: net closure}$$ **Proof.** *Necessity.* For $x\in\textrm{Cl}(A)$, construct a net $\chi\rightarrow x$ in $A$ as follows. Let $\mathcal{B}_{x}$ be a topological local base at $x$, which we may take to be the collection of all open sets of $X$ containing $x$.
For each $\beta\in\mathbb{D}$, the sets $$N_{\beta}=\bigcap_{\alpha\preceq\beta}\{ B_{\alpha}\!:B_{\alpha}\in\mathcal{B}_{x}\}$$ form a nested, decreasing local neighbourhood filter-base at $x$. With respect to the directed set $_{\mathbb{D}}N_{\beta}=\{(N_{\beta},\beta)\!:(\beta\in\mathbb{D})(x_{\beta}\in N_{\beta})\}$ of Eq. (\[Eqn: DirectedIndexed\]), define the desired net in $A$ by $${\textstyle \chi(N_{\beta},\beta)=x_{\beta}\in N_{\beta}\bigcap A}$$ where the family of nonempty decreasing subsets $N_{\beta}\bigcap A$ of $X$ constitutes the filter-base in $A$ required by the directed set $_{\mathbb{D}}N_{\beta}$. It now follows from Eq. (\[Eqn: DirectionIndexed\]) and the arguments in Example A1.3(3) that $x_{\beta}\rightarrow x$; compare the directed set of Eq. (\[Eqn: Closure\_Directed\]) for a more compact, yet essentially identical, argument. Carefully observe the dual roles of $\mathcal{N}_{x}$ as a neighbourhood filter-base at $x$ and as an index set. *Sufficiency.* Let $\chi$ be a net in $A$ that converges to $x\in X$. For any $N_{\alpha}\in\mathcal{N}_{x}$, there is an $\mathbb{R}_{\alpha}\in\textrm{Res}(\mathbb{D})$ of Eq. (\[Eqn: residual\]) such that $\chi(\mathbb{R}_{\alpha})\subseteq N_{\alpha}$. Hence the point $\chi(\alpha)=x_{\alpha}$ of $A$ belongs to $N_{\alpha}$, so that $A\bigcap N_{\alpha}\neq\emptyset$, which means, from Eq. (\[Eqn: Def: Closure\]), that $x\in\textrm{Cl}(A)$.$\qquad\blacksquare$ **Corollary.** Together with Eqs. (\[Eqn: Def: Closure\]) and (\[Eqn: Def: Derived\]), it follows that $$\textrm{Der}(A)=\{ x\in X\!:(\exists\textrm{ a net }\zeta\textrm{ in }A-\{ x\})(\zeta\rightarrow x)\}\qquad\square\label{Eqn: net derived}$$ The filter forms of Eqs.
(\[Eqn: net closure\]) and (\[Eqn: net derived\]) $$\begin{aligned} \textrm{Cl}(A) & = & \{ x\in X\!:(\exists\textrm{ a filter }\mathcal{F}\textrm{ on }X)(A\in\mathcal{F})(\mathcal{F}\rightarrow x)\}\label{Eqn: filter cls_der}\\ \textrm{Der}(A) & = & \{ x\in X\!:(\exists\textrm{ a filter }\mathcal{F}\textrm{ on }X)(A-\{ x\}\in\mathcal{F})(\mathcal{F}\rightarrow x)\}\nonumber \end{aligned}$$ then follow from Eq. (\[Eqn: Def: LimFilter\]) and the finite intersection property (F2) of $\mathcal{F}$, so that every neighbourhood of $x$ must intersect $A$ (respectively $A-\{ x\}$) in Eq. (\[Eqn: filter cls\_der\]) to produce the converging net needed in the proof of Theorem A1.4. We end this discussion of convergence in topological spaces with a proof of the following theorem, which demonstrates the relationship that “eventually in” and “frequently in” bear to each other; Eq. (\[Eqn: net adh\]) below is the net-counterpart of the filter equation (\[Eqn: filter adh\]). **Theorem A1.5.** *If $\chi$ is a net in a topological space $X$, then* $x\in\textrm{adh}(\chi)$ *iff some subnet $\zeta(\beta)=\chi(\sigma(\beta))$ of $\chi(\alpha)$, with $\alpha\in\mathbb{D}$ and $\beta\in\mathbb{E}$, converges in $X$ to $x$; thus* $$\textrm{adh}(\chi)=\{ x\in X\!:(\exists\textrm{ a subnet }\zeta\preceq\chi\textrm{ in }X)(\zeta\rightarrow x)\}.\qquad\square\label{Eqn: net adh}$$ **Proof.** *Necessity.* Let $x\in\textrm{adh}(\chi)$. Define a subnet function $\sigma\!:\,_{\mathbb{D}}N_{\alpha}\rightarrow\mathbb{D}$ by $\sigma(N_{\alpha},\alpha)=\alpha$, where $_{\mathbb{D}}N_{\alpha}$ is the directed set of Eq. (\[Eqn: DirectedIndexed\]): (SN1) and (SN2) are quite evidently satisfied according to Eq. (\[Eqn: DirectionIndexed\]). Proceeding as in the proof of the preceding theorem, it follows that $x_{\beta}=\chi(\sigma(N_{\alpha},\alpha))=\zeta(N_{\alpha},\alpha)\rightarrow x$ is the required converging subnet that exists from Eq.
(\[Eqn: adh net1\]) and the fact that $\chi(\mathbb{R}_{\alpha})\bigcap N_{\alpha}\neq\emptyset$ for every $N_{\alpha}\in\mathcal{N}_{x}$, by hypothesis. *Sufficiency.* Assume now that $\chi$ has a subnet $\zeta(N_{\alpha},\alpha)$ that converges to $x$. If $\chi$ does not adhere at $x$, there is a neighbourhood $N_{\alpha}$ of $x$ not frequented by it, in which case $\chi$ must be eventually in $X-N_{\alpha}$. Then $\zeta(N_{\alpha},\alpha)$ is also eventually in $X-N_{\alpha}$, so that $\zeta$ cannot be eventually in $N_{\alpha}$, a contradiction of the hypothesis that $\zeta(N_{\alpha},\alpha)\rightarrow x$.[^29]$\qquad\blacksquare$ Eqs. (\[Eqn: net closure\]) and (\[Eqn: net adh\]) imply that the closure of a subset $A$ of $X$ is the class of $X$-adherences of all the (sub)nets of $X$ that are eventually in $A$. This includes both the constant nets yielding the isolated points of $A$ and the non-constant nets leading to the cluster points of $A$, and it implies the following physically useful relationship between convergence and topology, which can be used as a defining criterion for open and closed sets with a more appealing physical significance than the original definitions of these terms. Clearly, the term “net” is justifiably used here to include subnets too. The following corollary of Theorem A1.5 summarizes the basic topological properties of sets in terms of nets (respectively, filters). **Corollary.** Let $A$ be a subset of a topological space $X$. Then \(1) $A$ is closed in $X$ iff every convergent net of $X$ that is eventually in $A$ actually converges to a point in $A$ (respectively, iff the adherent points of each filter-base on $A$ all belong to $A$). Thus no $X$-convergent net in a closed subset may converge to a point outside it. \(2) $A$ is open in $X$ iff every convergent net of $X$ that converges to a point in $A$ is eventually in $A$. Thus no $X$-convergent net outside an open subset may converge to a point in the set.
\(3) $A$ is closed-and-open (clopen) in $X$ iff every convergent net of $X$ that converges in $A$ is eventually in $A$, and conversely. \(4) $x\in\textrm{Der}(A)$ iff some net (respectively, filter-base) in $A-\{ x\}$ converges to $x$; this clearly eliminates the isolated points of $A$. And $x\in\textrm{Cl}(A)$ iff some net (respectively, filter-base) in $A$ converges to $x$.$\qquad\square$ **Remark.** The differences in these characterizations should be fully appreciated: if we consider the cluster points $\textrm{Der}(A)$ of a net $\chi$ in $A$ as the *resource generated by* $\chi$, then a closed subset of $X$ can be considered to be *selfish*, as it keeps all its resources to itself: $\textrm{Der}(A)\cap A=\textrm{Der}(A)$. The opposite of this is a *donor* set, which donates all its generated resources to its neighbour: $\textrm{Der}(A)\cap(X-A)=\textrm{Der}(A)$, while for a *neutral* set both $\textrm{Der}(A)\cap A\neq\emptyset$ and $\textrm{Der}(A)\cap(X-A)\neq\emptyset$, so that the convergence resources generated in $A$ are deposited partly in $A$ and partly in $X-A$. The clopen sets (see diagram 2-2 of Fig. \[Fig: DerSets\]) are of some special interest as they are boundaryless, so that no net-resources can be generated in this case, since any such limits would be required to be simultaneously in the set and in its complement. **Example A1.2, Continued.** This continuation of Example A1.2 illustrates how sequential convergence is inadequate in spaces that are not first countable, like the uncountable set with cocountable topology. In this topology a sequence can converge to a point $x$ in the space iff it has only a finite number of distinct terms, and is therefore eventually constant. Indeed, let the complement $$G\overset{\textrm{def}}=X-F,\qquad F=\{ x_{i}\!:x_{i}\neq x,\textrm{ }i\in\mathbb{N}\}$$ of the countably closed sequential set $F$ be an open neighbourhood of $x\in X$.
Because a sequence $(x_{i})_{i\in\mathbb{N}}$ in $X$ converges to a point $x\in X$ iff it is eventually in *every* neighbourhood (including $G$) of $x$, the sequence represented by the set $F$ cannot converge to $x$ unless it is eventually constant, of the type[^30]$$(x_{0},x_{1},\cdots,x_{I},x_{I+1},x_{I+1},\cdots)\label{Eqn: cocount}$$ with only a finite number $I$ of distinct terms actually belonging to the closed sequential set $F=X-G$, and $x_{I+1}=x$. Note that, as we are concerned only with the eventual behaviour of the sequence, we may discard all distinct terms from $G$ by considering them to be in $F$, and retain only the constant sequence $(x,x,\cdots)$ in $G$. In comparison with the cofinite case that was considered in Sec. 4, the entire countably infinite sequence can now lie outside a neighbourhood of $x$, thereby enforcing the eventual constancy of the sequence. This leads to a generalization of our earlier cofinite result in the sense that a cocountable filter on a cocountable space converges to every point in the space. It is now straightforward to verify that for a point $x_{0}$ in an uncountable cocountable space $X$ \(a) Although no sequence in the open set $G=X-\{ x_{0}\}$ can converge to $x_{0}$, yet $x_{0}\in\textrm{Cl}(G)$, since the intersection of any (uncountable) open neighbourhood $U$ of $x_{0}$ with $G$, being an uncountable set, is not empty. \(b) By corollary (1) of Theorem A1.5 applied to sequences, the uncountable open set $G=X-\{ x_{0}\}$ would also be closed in $X$, because if any sequence $(x_{1},x_{2},\cdots)$ in $G$ converges to some $x\in X$, then $x$ must be in $G$, as the sequence must be eventually constant in order for it to converge. But this is a contradiction, as $G$ cannot be closed since it is not countable.[^31] By the same reckoning, although $\{ x_{0}\}$ is not an open set because its complement is not countable, nevertheless it follows from Eq.
(\[Eqn: cocount\]) that should any sequence converge to the only point $x_{0}$ of this set, then it must eventually be in $\{ x_{0}\}$, so that by corollary (2) of the same theorem $\{ x_{0}\}$ would be an open set. \(c) The identity map $\mathbf{1}\!:X\rightarrow X_{\textrm{d}}$, where $X_{\textrm{d}}$ is $X$ with the discrete topology, is not continuous because the inverse image of any singleton of $X_{\textrm{d}}$ is not open in $X$. Yet if a sequence converges in $X$ to $x$, then its image $(\mathbf{1}(x))=(x)$ must actually converge to $x$ in $X_{\textrm{d}}$, because a sequence converges in a discrete space, as in the cofinite or cocountable spaces, iff it is eventually constant; this is so because each singleton of a discrete space, being clopen, is boundaryless. This pathological behaviour of sequences in a non-Hausdorff, non-first-countable space does not arise if the discrete indexing set of sequences is replaced by a continuous, uncountable directed set like $\mathbb{R}$, for example, leading to nets in place of sequences. In this case the net can be in an open set without having to be constant-valued in order to converge to a point in it, as the open set can be defined as the complement of a closed countable part of the uncountable net. The careful reader cannot have failed to notice that the burden of the above arguments, as also of that in the example following Theorem 4.6, is to formalize the fact that since *a closed set in these topologies is already defined as a countable (respectively finite) set,* the closure operation cannot add further points to it from its complement, and any sequence that converges in an open set in these topologies must necessarily be eventually constant at its point of convergence, a restriction that no longer applies to a net.
The cocountable topology thus has the very interesting property of filtering out a countable part from an uncountable set, as for example the rationals in $\mathbb{R}$.$\qquad\blacksquare$ This example serves to illustrate the hard truth that in a space that is not first countable, the simplicity of sequences is not enough to describe its topological character, and in fact “sequential convergence will be able to describe only those topologies in which the number of (basic) neighbourhoods around each point is no greater than the number of terms in the sequences”, @Willard1970. It is important to appreciate the significance of this interplay between the convergence of sequences and nets (and the continuity of functions of Appendix A1) and the topology of the underlying spaces. A comparison of the defining properties (T1), (T2), (T3) of a topology $\mathcal{T}$ with (F1), (F2), (F3) of a filter $\mathcal{F}$ shows that a filter is very close to a topology, the main difference being with regard to the empty set, which must always be in $\mathcal{T}$ but never in $\mathcal{F}$. Addition of the empty set to a filter yields a topology, but removal of the empty set from a topology need not produce the corresponding filter as the topology may contain nonintersecting sets. The distinction between topological and filter-bases should be carefully noted. Thus \(a) While a topological base may contain the empty set, a filter-base cannot. \(b) From a given topology, a common base may be formed by dropping all basic open sets that do not intersect each other. A (coarser) topology can then be generated from this base by taking all unions, and a filter by taking all supersets according to Eq. (\[Eqn: filter\_base\]). For any given filter this expression may be used to extract a subclass $_{\textrm{F}}\mathcal{B}$ as a base for $\mathcal{F}$. **A2. Initial and Final Topology** The commutative diagram of Fig.
\[Fig: GenInv\] contains four sub-diagrams $X-X_{\textrm{B}}-f(X)$, $Y-X_{\textrm{B}}-f(X)$, $X-X_{\textrm{B}}-Y$ and $X-f(X)-Y$. Of these, the first two are especially significant as they can be used to conveniently define the topologies on $X_{\textrm{B}}$ and $f(X)$ from those of $X$ and $Y$, so that $f_{\textrm{B}}$, $f_{\textrm{B}}^{-1}$ and $G$ have some desirable continuity properties; we recall that a function $f\!:X\rightarrow Y$ is continuous if inverse images of open sets of $Y$ are open in $X$. This simple notion of continuity needs refinement in order that topologies on $X_{\textrm{B}}$ and $f(X)$ be unambiguously defined from those of $X$ and $Y$, a requirement that leads to the concepts of the so-called *final* and *initial topologies.* To appreciate the significance of these new constructs, note that if $f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ is a continuous function, there may be open sets in $X$ that are not inverse images of open — or for that matter of any — subsets of $Y$, just as it is possible for non-open subsets of $Y$ to contribute to $\mathcal{U}$. When the triple $\{\mathcal{U},f,\mathcal{V}\}$ is tuned in such a manner that neither of these is possible, the topologies so generated on $X$ and $Y$ are the initial and final topologies respectively; they are the smallest (coarsest) and largest (finest) topologies on $X$ and $Y$ that make $f\!:X\rightarrow Y$ continuous. It should be clear that every image and preimage continuous function is continuous, but the converse is not true. Let $\textrm{sat}(U):=f^{-}f(U)\subseteq X$ be the saturation of an open set $U$ of $X$ and $\textrm{comp}(V):=ff^{-}(V)=V\bigcap f(X)\subseteq Y$ be the component of an open set $V$ of $Y$ on the range $f(X)$ of $f$.
Let $\mathcal{U}_{\textrm{sat}}$, $\mathcal{V}_{\textrm{comp}}$ denote respectively the saturations $\mathcal{U}_{\textrm{sat}}=\{\textrm{sat}(U)\!:U\in\mathcal{U}\}$ of the open sets of $X$ and the components $\mathcal{V}_{\textrm{comp}}=\{\textrm{comp}(V)\!:V\in\mathcal{V}\}$ of the open sets of $Y$ whenever these are also open in $X$ and $Y$ respectively. Plainly, $\mathcal{U}_{\textrm{sat}}\subseteq\mathcal{U}$ and $\mathcal{V}_{\textrm{comp}}\subseteq\mathcal{V}$. **Definition A2.1.** *For a function* $e\!:X\rightarrow(Y,\mathcal{V})$, *the* *preimage* *or* *initial topology of* $X$ *based on (generated by)* **$e$** *and $\mathcal{V}$* *is* $$\textrm{IT}\{ e;\mathcal{V}\}\overset{\textrm{def}}=\{ U\subseteq X\!:U=e^{-}(V)\textrm{ if }V\in\mathcal{V}_{\textrm{comp}}\},\label{Eqn: IT}$$ *while for $q\!:(X,\mathcal{U})\rightarrow Y$, the* *image* *or* *final topology of* $Y$ *based on (generated by) $\mathcal{U}$ and* **$q$** *is* $$\textrm{FT}\{\mathcal{U};q\}\overset{\textrm{def}}=\{ V\subseteq Y\!:q^{-}(V)=U\textrm{ if }U\in\mathcal{U}_{\textrm{sat}}\}.\qquad\square\label{Eqn: FT'}$$ Thus, the topology of $(X,\textrm{IT}\{ e;\mathcal{V}\})$ consists of, and only of, the $e$-saturations of all the open sets of $e(X)$, while the open sets of $(Y,\textrm{FT}\{\mathcal{U};q\})$ are the $q$-images *in* $Y$ (and not just in $q(X)$) of all the $q$-saturated open sets of $X$.[^32] The need for defining (\[Eqn: IT\]) in terms of $\mathcal{V}_{\textrm{comp}}$ rather than $\mathcal{V}$ will become clear in the following. The subspace topology $\textrm{IT}\{ i;\mathcal{U}\}$ of a subset $A\subseteq(X,\mathcal{U})$ is a basic example of the initial topology, generated by the inclusion map $i\!:X\supseteq A\rightarrow(X,\mathcal{U})$, and we take its generalization $e\!:(A,\textrm{IT}\{ e;\mathcal{V}\})\rightarrow(Y,\mathcal{V})$ that embeds a subset $A$ of $X$ into $Y$ as the prototype of a preimage continuous map.
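On finite spaces Eq. (\[Eqn: IT\]) can be computed directly. A minimal sketch follows; the Sierpinski-type example and all names are our own illustrative assumptions, not constructions from the text.

```python
def preimage(f, X, V):
    """e^-(V): the e-saturation of V pulled back to X."""
    return frozenset(x for x in X if f[x] in V)

def initial_topology(X, f, Y_open):
    """IT{e; V}: exactly the e-preimages of the open sets of Y."""
    return {preimage(f, X, V) for V in Y_open}

# Y = {a, b} with opens {emptyset, {a}, Y};  e : {1, 2, 3} -> Y with
# e(1) = e(2) = a and e(3) = b, so that e is onto and comp(V) = V.
Y_open = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]
e = {1: "a", 2: "a", 3: "b"}

T = initial_topology({1, 2, 3}, e, Y_open)
# T consists of the e-saturations of the opens of Y: the empty set,
# {1, 2} (the saturation of {a}) and {1, 2, 3}.
```

Note how $\{1\}$ alone can never be open in $\textrm{IT}\{ e;\mathcal{V}\}$: it is not saturated, since $e^{-}e(\{1\})=\{1,2\}$.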
Clearly the topology of $Y$ may also contain open sets not in $e(X)$, and any subset in $Y-e(X)$ may be added to the topology of $Y$ without altering the preimage topology of $X$: *open sets of $Y$ not in $e(X)$ may be neglected in obtaining the preimage topology* as $e^{-}(Y-e(X))=\emptyset$. The final topology on a quotient set by the quotient map $Q\!:(X,\mathcal{U})\rightarrow X/\sim$, which is just the collection of $Q$-images of the $Q$-saturated open sets of $X$, is known as the *quotient topology of $X/\sim$;* it is the basic example of the image topology, and the resulting space $(X/\sim,\textrm{FT}\{\mathcal{U};Q\})$ is called the *quotient space.* We take the generalization $q\!:(X,\mathcal{U})\rightarrow(Y,\textrm{FT}\{\mathcal{U};q\})$ of $Q$ as the prototype of an image continuous function. The following results are specifically useful in dealing with initial and final topologies; compare the corresponding results for open maps given later. **Theorem A2.1.** *Let $(X,\mathcal{U})$ and $(Y_{1},\mathcal{V}_{1})$ be topological spaces and let $X_{1}$ be a set. If $f\!:X_{1}\rightarrow(Y_{1},\mathcal{V}_{1})$, $q\!:(X,\mathcal{U})\rightarrow X_{1}$, and $h=f\circ q\!:(X,\mathcal{U})\rightarrow(Y_{1},\mathcal{V}_{1})$ are functions with the topology $\mathcal{U}_{1}$ of $X_{1}$ given by* $\textrm{FT}\{\mathcal{U};q\}$, *then* \(a) *$f$ is continuous iff $h$ is continuous.* \(b) *$f$ is image continuous iff* $\mathcal{V}_{1}=\textrm{FT}\{\mathcal{U};h\}$.$\qquad\square$ **Theorem A2.2.** *Let $(Y,\mathcal{V})$ and $(X_{1},\mathcal{U}_{1})$ be topological spaces and let $Y_{1}$ be a set.
If $f\!:(X_{1},\mathcal{U}_{1})\rightarrow Y_{1}$, $e\!:Y_{1}\rightarrow(Y,\mathcal{V})$ and $g=e\circ f\!:(X_{1},\mathcal{U}_{1})\rightarrow(Y,\mathcal{V})$ are functions with the topology $\mathcal{V}_{1}$ of $Y_{1}$ given by* $\textrm{IT}\{ e;\mathcal{V}\}$*, then* \(a) *$f$ is continuous iff $g$ is continuous.* \(b) *$f$ is preimage continuous iff* $\mathcal{U}_{1}=\textrm{IT}\{ g;\mathcal{V}\}$.$\qquad\square$ As we need the second parts of these theorems in our applications, their proofs are indicated below. The special significance of the first parts is that they ensure the converse of the usual result that the composition of two continuous functions is continuous, namely that one of the components of a composition is continuous whenever the composition is so. **Proof of Theorem A2.1.** If $f$ be image continuous, $\mathcal{V}_{1}=\{ V_{1}\subseteq Y_{1}\!:f^{-}(V_{1})\in\mathcal{U}_{1}\}$ and $\mathcal{U}_{1}=\{ U_{1}\subseteq X_{1}\!:q^{-}(U_{1})\in\mathcal{U}\}$ are the final topologies of $Y_{1}$ and $X_{1}$ based on the topologies of $X_{1}$ and $X$ respectively. Then $\mathcal{V}_{1}=\{ V_{1}\subseteq Y_{1}\!:q^{-}f^{-}(V_{1})\in\mathcal{U}\}$ shows that $h$ is image continuous. Conversely, when $h$ is image continuous, $\mathcal{V}_{1}=\{ V_{1}\subseteq Y_{1}\!:h^{-}(V_{1})\in\mathcal{U}\}=\{ V_{1}\subseteq Y_{1}\!:q^{-}f^{-}(V_{1})\in\mathcal{U}\}$, with $\mathcal{U}_{1}=\{ U_{1}\subseteq X_{1}\!:q^{-}(U_{1})\in\mathcal{U}\}$, proves $f^{-}(V_{1})$ to be open in $X_{1}$ and thereby $f$ to be image continuous. **Proof of Theorem A2.2.** If $f$ be preimage continuous, $\mathcal{V}_{1}=\{ V_{1}\subseteq Y_{1}\!:V_{1}=e^{-}(V)\textrm{ if }V\in\mathcal{V}\}$ and $\mathcal{U}_{1}=\{ U_{1}\subseteq X_{1}\!:U_{1}=f^{-}(V_{1})\textrm{ if }V_{1}\in\mathcal{V}_{1}\}$ are the initial topologies of $Y_{1}$ and $X_{1}$ respectively.
Hence from $\mathcal{U}_{1}=\{ U_{1}\subseteq X_{1}\!:U_{1}=f^{-}e^{-}(V)\textrm{ if }V\in\mathcal{V}\}$ it follows that $g$ is preimage continuous. Conversely, when $g$ is preimage continuous, $\mathcal{U}_{1}=\{ U_{1}\subseteq X_{1}\!:U_{1}=g^{-}(V)\textrm{ if }V\in\mathcal{V}\}=\{ U_{1}\subseteq X_{1}\!:U_{1}=f^{-}e^{-}(V)\textrm{ if }V\in\mathcal{V}\}$ and $\mathcal{V}_{1}=\{ V_{1}\subseteq Y_{1}\!:V_{1}=e^{-}(V)\textrm{ if }V\in\mathcal{V}\}$ show that $f$ is preimage continuous.$\qquad\blacksquare$ Since both Eqs. (\[Eqn: IT\]) and (\[Eqn: FT'\]) are in terms of inverse images (the first of which constitutes a direct, and the second an inverse, problem), the image $f(U)=\textrm{comp}(V)$ for $V\in\mathcal{V}$ is of interest as it indicates the relationship of the openness of $f$ with its continuity. This and other related concepts are examined below, where the range space $f(X)$ is always taken to be a subspace of $Y$. Openness of a function *$f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$* is the “inverse” of continuity, when images of open sets of $X$ are required to be open in $Y$; such a function is said to be *open.* Following are two of the important properties of open functions. \(1) *If $f\!:(X,\mathcal{U})\rightarrow(Y,f(\mathcal{U}))$ is an open function, then so is* $f_{<}\!:(X,\mathcal{U})\rightarrow(f(X),\textrm{IT}\{ i;f(\mathcal{U})\})$*. The converse is true if $f(X)$ is an open set of $Y$; thus openness of* $f_{<}\!:(X,\mathcal{U})\rightarrow(f(X),f_{<}(\mathcal{U}))$ *implies that of $f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ whenever $f(X)$ is open in $Y$ such that $f_{<}(U)\in\mathcal{V}$ for $U\in\mathcal{U}$.* The truth of this last assertion follows easily from the fact that if $f_{<}(U)$ is an open set of $f(X)\subset Y$, then necessarily $f_{<}(U)=V\bigcap f(X)$ for some $V\in\mathcal{V}$, and the intersection of two open sets of $Y$ is again an open set of $Y$.
\(2) *If $f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ and $g\!:(Y,\mathcal{V})\rightarrow(Z,\mathcal{W})$ are open functions then $g\circ f\!:(X,\mathcal{U})\rightarrow(Z,\mathcal{W})$* *is also open.* It follows that the condition in (1) on $f(X)$ can be replaced by the requirement that the inclusion $i\!:(f(X),\textrm{IT}\{ i;\mathcal{V}\})\rightarrow(Y,\mathcal{V})$ be an open map. This interchange of $f(X)$ with its inclusion $i\!:f(X)\rightarrow Y$ into $Y$ is a basic result that finds application in many situations. Collected below are some useful properties of the initial and final topologies that we need in this work. ***Initial Topology.*** In Fig. \[Fig: Initial-Final\](b), consider $Y_{1}=h(X_{1})$, $e\rightarrow i$ and $f\rightarrow h_{<}\!:X_{1}\rightarrow(h(X_{1}),\textrm{IT}\{ i;\mathcal{V}\})$. From $h^{-}(B)=h^{-}(B\bigcap h(X_{1}))$ for any $B\subseteq Y$, it follows that for an open set $V$ of $Y$, $h^{-}(V_{\textrm{comp}})=h^{-}(V)$ is an open set of $X_{1}$; if the topology of $X_{1}$ is $\textrm{IT}\{ h;\mathcal{V}\}$, these are the only open sets of $X_{1}$. Because $V_{\textrm{comp}}$ is an open set of $h(X_{1})$ in its subspace topology, this implies that *the preimage topologies* $\textrm{IT}\{ h;\mathcal{V}\}$ and $\textrm{IT}\{ h_{<};\textrm{IT}\{ i;\mathcal{V}\}\}$ *of $X_{1}$ generated by $h$ and* $h_{<}$ *are the same.* Thus the preimage topology of $X_{1}$ is not affected if $Y$ is replaced by the subspace $h(X_{1})$, the part $Y-h(X_{1})$ contributing nothing to $\textrm{IT}\{ h;\mathcal{V}\}$.
*A preimage continuous function* $e\!:X\rightarrow(Y,\mathcal{V})$ *is not necessarily an open function.* Indeed, if $U=e^{-}(V)\in\textrm{IT}\{ e;\mathcal{V}\}$, it is almost trivial to verify, along the lines of the restriction of open maps to their ranges, that $e(U)=ee^{-}(V)=e(X)\bigcap V$, $V\in\mathcal{V}$, is open in $Y$ (implying that $e$ is an open map) iff $e(X)$ is an open subset of $Y$ (because finite intersections of open sets are open). A special case of this is the important consequence that *the restriction* $e_{<}\!:(X,\textrm{IT}\{ e;\mathcal{V}\})\rightarrow(e(X),\textrm{IT}\{ i;\mathcal{V}\})$ *of* $e\!:(X,\textrm{IT}\{ e;\mathcal{V}\})\rightarrow(Y,\mathcal{V})$ *to its range is an open map.* Even though a preimage continuous map need not be open, it is true that *an injective, continuous and open map $f\!:X\rightarrow(Y,\mathcal{V})$ is preimage continuous.* Indeed, from its injectivity and continuity, inverse images of all open subsets of $Y$ are saturated-open in $X$, and openness of $f$ ensures that these are the only open sets of $X$, the condition of injectivity being required to exclude non-saturated sets from the preimage topology. It is therefore possible to rewrite Eq. (\[Eqn: IT\]) as $$U\in\textrm{IT}\{ e;\mathcal{V}\}\Longleftrightarrow e(U)=V\textrm{ if }V\in\mathcal{V}_{\textrm{comp}},\label{Eqn: IT'}$$ and to compare it with the following criterion for an *injective, open-continuous* *map* $f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ that necessarily satisfies $\textrm{sat}(A)=A$ for all $A\subseteq X$ $$U\in\mathcal{U}\Longleftrightarrow(\{ f(U)\}_{U\in\mathcal{U}}=\mathcal{V}_{\textrm{comp}})\wedge(f^{-1}(V)|_{V\in\mathcal{V}}\in\mathcal{U}).\label{Eqn: OCINJ}$$ ***Final Topology.*** Since it is necessarily produced on the range $\mathcal{R}(q)$ of $q$, the final topology is often considered in terms of a surjection.
This however is not necessary as, much in the spirit of the initial topology, $Y-q(X)\neq\emptyset$ inherits the discrete topology without altering anything, thereby allowing condition (\[Eqn: FT'\]) to be restated in the following more transparent form $$V\in\textrm{FT}\{\mathcal{U};q\}\Longleftrightarrow V=q(U)\textrm{ if }U\in\mathcal{U}_{\textrm{sat}},\label{Eqn: FT}$$ and to compare it with the following criterion for a *surjective, open-continuous* *map* $f\!:(X,\mathcal{U})\rightarrow(Y,\mathcal{V})$ that necessarily satisfies $_{f}B=B$ for all $B\subseteq Y$ $$V\in\mathcal{V}\Longleftrightarrow(\mathcal{U}_{\textrm{sat}}=\{ f^{-}(V)\}_{V\in\mathcal{V}})\wedge(f(U)|_{U\in\mathcal{U}}\in\mathcal{V}).\label{Eqn: OCSUR}$$ As may be anticipated from Fig. \[Fig: Initial-Final\], the final topology does not behave as well for subspaces as the initial topology does. This is so because in Fig. \[Fig: Initial-Final\](a) the two image continuous functions $h$ and $q$ are connected by a preimage continuous inclusion $f$, whereas in Fig. \[Fig: Initial-Final\](b) all the three functions are preimage continuous. Thus quite like open functions, although image continuity of $h\!:(X,\mathcal{U})\rightarrow(Y_{1},\textrm{FT}\{\mathcal{U};h\})$ implies that of $h_{<}\!:(X,\mathcal{U})\rightarrow(h(X),\textrm{IT}\{ i;\textrm{FT}\{\mathcal{U};h\}\})$ for a subspace $h(X)$ of $Y_{1}$, the converse need not be true unless — entirely like open functions again — either $h(X)$ is an open set of $Y_{1}$ or $i\!:(h(X),\textrm{IT}\{ i;\textrm{FT}\{\mathcal{U};h\}\})\rightarrow(Y_{1},\textrm{FT}\{\mathcal{U};h\})$ is an open map. Since an open preimage continuous map is image continuous, this makes $i\!:h(X)\rightarrow Y_{1}$ an ininal function and hence all the three legs of the commutative diagram image continuous.
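Dually, Eq. (\[Eqn: FT\]) admits a direct finite computation: $V\subseteq Y$ belongs to the final topology iff $q^{-}(V)$ is open in $X$. A small sketch with an assumed quotient-style map follows; the particular space and all names are illustrative, not taken from the text.

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def final_topology(X_open, q, Y):
    """FT{U; q}: the subsets of Y whose q-preimages are open in X."""
    return {V for V in powerset(Y)
            if frozenset(x for x in q if q[x] in V) in X_open}

# X = {1, 2, 3, 4} with opens: empty, {1,2}, {3}, {1,2,3}, X;  q identifies
# 1 ~ 2 and maps X onto Y = {p, r, s}.
X_open = {frozenset(), frozenset({1, 2}), frozenset({3}),
          frozenset({1, 2, 3}), frozenset({1, 2, 3, 4})}
q = {1: "p", 2: "p", 3: "r", 4: "s"}

T = final_topology(X_open, q, {"p", "r", "s"})
# {s} is not open in T, because q^-({s}) = {4} is not open in X; the opens
# are precisely the q-images of the q-saturated opens of X.
```

Here every open set of $X$ happens to be $q$-saturated, so the final topology is exactly the quotient topology of the identification $1\sim2$.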
Like preimage continuity, *an image continuous function $q\!:(X,\mathcal{U})\rightarrow Y$ need not be open.* However, although *the restriction of an image continuous function to the saturated open sets of its domain is an open function*, $q$ is unrestrictedly open iff the saturation of every open set of $X$ is also open in $X$. In fact it can be verified without much effort that a continuous, open surjection is image continuous. Combining Eqs. (\[Eqn: IT'\]) and (\[Eqn: FT\]) gives the following criterion for ininality $$U\textrm{ and }V\in\textrm{IFT}\{\mathcal{U}_{\textrm{sat}};f;\mathcal{V}\}\Longleftrightarrow(\{ f(U)\}_{U\in\mathcal{U}_{\textrm{sat}}}=\mathcal{V})\wedge(\mathcal{U}_{\textrm{sat}}=\{ f^{-}(V)\}_{V\in\mathcal{V}}),\label{Eqn: INI}$$ which reduces to the following for a homeomorphism $f$ that satisfies both $\textrm{sat}(A)=A$ for $A\subseteq X$ and $_{f}B=B$ for $B\subseteq Y$ $$U\textrm{ and }V\in\textrm{HOM}\{\mathcal{U};f;\mathcal{V}\}\Longleftrightarrow(\mathcal{U}=\{ f^{-1}(V)\}_{V\in\mathcal{V}})\wedge(\{ f(U)\}_{U\in\mathcal{U}}=\mathcal{V})\label{Eqn: HOM}$$ and compares with $$\begin{gathered} U\textrm{ and }V\in\textrm{OC}\{\mathcal{U};f;\mathcal{V}\}\Longleftrightarrow(\textrm{sat}(U)\in\mathcal{U}\!:\{ f(U)\}_{U\in\mathcal{U}}=\mathcal{V}_{\textrm{comp}})\wedge\\ \wedge(\textrm{comp}(V)\in\mathcal{V}\!:\{ f^{-}(V)\}_{V\in\mathcal{V}}=\mathcal{U}_{\textrm{sat}})\label{Eqn: OC}\end{gathered}$$ for an open-continuous $f$. The following is a slightly more general form of the restriction on the inclusion that is needed for image continuity to behave well for subspaces of $Y$. **Theorem A2.3.** *Let* $q\!:(X,\mathcal{U})\rightarrow(Y,\textrm{FT}\{\mathcal{U};q\})$ *be an image continuous* *function.
For a subspace* $B$ of $(Y,\textrm{FT}\{\mathcal{U};q\})$,$$\textrm{FT}\{\textrm{IT}\{ j;\mathcal{U}\};q_{<}\}=\textrm{IT}\{ i;\textrm{FT}\{\mathcal{U};q\}\}$$ *where* $q_{<}\!:(q^{-}(B),\textrm{IT}\{ j;\mathcal{U}\})\rightarrow(B,\textrm{FT}\{\textrm{IT}\{ j;\mathcal{U}\};q_{<}\})$, *if either $q$ is an* *open map or $B$ is an open set of* $Y$.$\qquad\square$ In summary we have the useful result that an open preimage continuous function is image continuous and an open image continuous function is preimage continuous, where the second assertion follows on neglecting non-saturated open sets in $X$; this is permitted insofar as the generation of the final topology is concerned, as these sets produce the same images as their saturations. Hence *an image continuous function* $q\!:X\rightarrow Y$ *is preimage continuous iff every open set in $X$ is saturated with respect to $q$,* and *a preimage continuous function* $e\!:X\rightarrow Y$ *is image continuous iff the $e$-image of every open set of $X$ is open in $Y$.* **A3. More on Topological Spaces** This Appendix — which completes the review of those concepts of topological spaces begun in Tutorial 4 that are needed for a proper understanding of this work — begins with the following summary of the different possibilities in the distribution of $\textrm{Der}(A)$ and $\textrm{Bdy}(A)$ between a set $A\subseteq X$ and its complement $X-A$, and follows it up with a few other important topological concepts that have been used, explicitly or otherwise, in this work. **Definition A3.1.** ***Separation, Connected Space*.** *A* *separation* *(disconnection)* *of $X$ is a pair of mutually disjoint nonempty open (and therefore closed) subsets $H_{1}$ and $H_{2}$ such that $X=H_{1}\cup H_{2}$.* *A space $X$ is said to be* *connected* *if it has no separation, that is if it cannot be partitioned into two open or two closed nonempty subsets.
$X$ is* *separated (disconnected)* *if it is not connected.$\qquad\square$* It follows from the definition that for a disconnected space $X$ the following are equivalent statements. \(a) There exists a pair of disjoint nonempty open subsets of $X$ that cover $X$. \(b) There exists a pair of disjoint nonempty closed subsets of $X$ that cover $X.$ \(c) There exists a pair of disjoint nonempty clopen subsets of $X$ that cover $X.$ \(d) There exists a nonempty, proper, clopen subset of $X$. By a *connected subset* is meant a subset of $X$ that is connected *when provided with its relative topology making it a subspace of $X$.* Thus any connected subset of a topological space must necessarily be contained in any clopen set that might intersect it: if $C$ and $H$ are respectively connected and clopen subsets of $X$ such that $C\bigcap H\neq\emptyset$, then $C\subseteq H$ because $C\bigcap H$ is a nonempty clopen set in $C$, which must equal $C$ because $C$ is connected. For testing whether a subset of a topological space is connected, the following relativized form of (a)$-$(d) is often useful. **Lemma A3.1.** *A subset $A$ of $X$ is disconnected iff there are open sets $U$ and $V$ of $X$ satisfying* $${\textstyle U\bigcap A\neq\emptyset\neq V\bigcap A\textrm{ such that }A\subseteq U\bigcup V,\;\textrm{with }U\bigcap V\bigcap A=\emptyset}\label{Eqn: SubDisconnect1}$$ *or there are closed sets $E$ and $F$ of $X$ satisfying* $${\textstyle E\bigcap A\neq\emptyset\neq F\bigcap A\textrm{ such that }A\subseteq E\bigcup F,\;\textrm{with }E\bigcap F\bigcap A=\emptyset.}\label{Eqn: SybDisconnect2}$$ *Thus $A$ is disconnected iff there are disjoint clopen subsets in the relative topology of $A$ that cover $A$.$\qquad\square$* **Lemma A3.2.** *If $A$ is a subspace of $X$, a* *separation of* *$A$ is a pair of disjoint nonempty subsets $H_{1}$ and $H_{2}$ of $A$ whose union is $A$, neither of which contains a cluster point of the other.
$A$ is connected iff there is no separation of $A.$* *$\qquad\square$* **Proof.** Let $H_{1}$ and $H_{2}$ be a separation of $A$ so that they are clopen subsets of $A$ whose union is $A$. As $H_{1}$ is a closed subset of $A$ it follows that $H_{1}=\textrm{Cl}_{X}(H_{1})\bigcap A$, where $\textrm{Cl}_{X}(H_{1})\bigcap A$ is the closure of $H_{1}$ in $A$; hence $\textrm{Cl}_{X}(H_{1})\bigcap H_{2}=\emptyset$. But as the closure of a subset is the union of the set and its adherents, an empty intersection signifies that $H_{2}$ cannot contain any of the cluster points of $H_{1}$. A similar argument shows that $H_{1}$ does not contain any adherent of $H_{2}$. Conversely suppose that neither $H_{1}$ nor $H_{2}$ contains an adherent of the other: $\textrm{Cl}_{X}(H_{1})\bigcap H_{2}=\emptyset$ and $\textrm{Cl}_{X}(H_{2})\bigcap H_{1}=\emptyset$. Hence $\textrm{Cl}_{X}(H_{1})\bigcap A=H_{1}$ and $\textrm{Cl}_{X}(H_{2})\bigcap A=H_{2}$ so that both $H_{1}$ and $H_{2}$ are closed in $A.$ But since $H_{1}=A-H_{2}$ and $H_{2}=A-H_{1}$, they must also be open in the relative topology of $A$. *$\qquad\blacksquare$* Following are some useful properties of connected spaces. (c1) The closure of any connected subspace of a space is connected. More generally, if $A$ is connected then every $B$ satisfying $$A\subseteq B\subseteq\textrm{Cl}(A)$$ is connected. Thus any subset of $X$ formed from $A$ by adjoining to it some or all of its adherents is connected, so that *a topological space with a dense connected subset is connected.* (c2) The union of any class of connected subspaces of $X$ with nonempty intersection is a connected subspace of $X$. (c3) A topological space is connected iff there is a covering of the space consisting of connected sets with nonempty intersection. Connectedness is a topological property: any space homeomorphic to a connected space is itself connected.
\(c4) If $H_{1}$ and $H_{2}$ form a separation of $X$ and $A$ is any connected subset of $X$, then either $A\subseteq H_{1}$ or $A\subseteq H_{2}$*.* While the real line $\mathbb{R}$ is connected, a subspace of $\mathbb{R}$ is connected iff it is an interval in $\mathbb{R}$. The important concept of total disconnectedness introduced below needs the following **Definition A3.2.** ***Component*.** *A* *component $C^{*}$* *of a space $X$ is a maximally* (with respect to inclusion) *connected subset of $X$.* *$\qquad\square$* Thus a component is a connected subspace which is not properly contained in any larger connected subspace of $X$. The maximal element need not be unique as there can be more than one component of a given space, and a “maximal” rather than a “maximum” criterion is used because a component need not contain every connected subset of $X$; it simply must not be properly contained in any other connected subset of $X$. Components can be constructively defined as follows: Let $x\in X$ be any point. Consider the collection of all connected subsets of $X$ to which $x$ belongs. Since $\{ x\}$ is one such set, the collection is nonempty. As the intersection of the collection is nonempty (it contains $x$), its union is a nonempty connected set $C$ by (c2). This is the largest connected set containing $x$ and is therefore a component containing $x$, and we have (C1) Let $x\in X$. The unique component of $X$ containing $x$ is the union of all the connected subsets of $X$ that contain *$x$.* Conversely, *any nonempty connected subset $A$ of $X$ is contained in that unique component of $X$ to which each of the points of $A$ belong.* Hence *a* *topological space is connected iff it is the unique component of itself.* (C2) Each component $C^{*}$ of $X$ is a closed set of $X$: By property (c1) above, $\textrm{Cl}(C^{*})$ is also connected and from $C^{*}\subseteq\textrm{Cl}(C^{*})$ it follows that $C^{*}=\textrm{Cl}(C^{*})$.
Components need not be open sets of $X$: an example is the space of rationals $\mathbb{Q}$ in the reals, in which the components are the individual points, which are not open in $\mathbb{Q}$; see Example (2) below. (C3) Components of $X$ are equivalence classes of $(X,\sim)$ with $x\sim y$ iff they are in the same component: while reflexivity and symmetry are obvious enough, transitivity follows because if $x,y\in C_{1}$ and $y,z\in C_{2}$ with $C_{1}$, $C_{2}$ connected subsets of $X$, then $x$ and $z$ are in the set $C_{1}\bigcup C_{2}$ which is connected by property (c2) above as they have the point $y$ in common. Components are connected disjoint subsets of $X$ whose union is $X$ (that is, they form a partition of $X$ with each point of $X$ contained in exactly one component of $X$) such that any connected subset of $X$ can be contained in only one of them. Because any clopen subset of $X$ that intersects a connected subspace must contain it, *every clopen connected subspace must be a component of $X$.* Even when a space is disconnected, it is always possible to decompose it into pairwise disjoint connected subsets. If $X$ is a discrete space this is the only way in which $X$ may be decomposed into connected pieces. If $X$ is not discrete, there may be other ways of doing this. For example, the space $$X=\{ x\in\mathbb{R}\!:(0\leq x\leq1)\vee(2<x<3)\}$$ has the following three distinct decompositions into connected subsets: $$\begin{array}{rcl} {\displaystyle X} & = & [0,1/2)\bigcup[1/2,1]\bigcup(2,7/3]\bigcup(7/3,3)\\ X & = & \{0\}\bigcup{\displaystyle \left(\bigcup_{n=1}^{\infty}\left(\frac{1}{n+1},\frac{1}{n}\right]\right)}\bigcup(2,3)\\ X & = & [0,1]\bigcup(2,3).\end{array}$$ Intuition tells us that only in the third of these decompositions have we really broken up $X$ into its connected pieces. What distinguishes the third from the other two is that neither of the pieces $[0,1]$ or $(2,3)$ can be enlarged into bigger connected subsets of $X$.
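Criterion (d) above and the construction (C1) of components can both be made mechanical for finite spaces. The sketch below uses names of our own choosing, and relies on the standard fact, assumed here, that in a finite space two points lie in the same component iff they are linked by a chain of specialization relations $x\in\textrm{Cl}(\{y\})$.

```python
def is_connected(X, topology):
    """Criterion (d): connected iff no nonempty proper clopen subset exists."""
    X = frozenset(X)
    return not any(U and U != X and (X - U) in topology for U in topology)

def closure_pt(y, X, topology):
    """Cl({y}): the points each of whose open neighbourhoods meets {y}."""
    return {x for x in X if all(y in U for U in topology if x in U)}

def components(X, topology):
    """Partition X into components by chaining specialization relations."""
    comps, seen = [], set()
    for x in X:
        if x in seen:
            continue
        comp, stack = set(), [x]
        while stack:
            z = stack.pop()
            if z in comp:
                continue
            comp.add(z)
            stack.extend(w for w in X if w not in comp and
                         (z in closure_pt(w, X, topology) or
                          w in closure_pt(z, X, topology)))
        comps.append(comp)
        seen |= comp
    return comps

# {1, 2} and {3, 4} are clopen, so the space is disconnected and these two
# clopen connected pieces are precisely its components.
top = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset({1, 2, 3, 4})]
print(is_connected({1, 2, 3, 4}, top))                        # False
print(sorted(sorted(c) for c in components({1, 2, 3, 4}, top)))  # [[1, 2], [3, 4]]
```

The example confirms the remark above that every clopen connected subspace is a component.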
As connected spaces, the empty set and the singleton are considered to be *degenerate*, and any connected subspace with more than one point is *nondegenerate.* At the opposite extreme of the largest possible component of a space $X$, which is $X$ itself, are the singletons $\{ x\}$ for every $x\in X$. This leads to the extremely important notion of a **Definition A3.3.** ***Totally disconnected space.*** *A space $X$ is* *totally disconnected* *if every pair of distinct points in it can be separated by a disconnection of $X$.$\qquad\square$* $X$ is totally disconnected iff the components of $X$ are single points, the only nonempty connected subsets of $X$ being the one-point sets: if $x\neq y$ are distinct points of a subset $A\subseteq X$, let $X=H_{1}\bigcup H_{2}$ with $x\in H_{1}$ and $y\in H_{2}$ be a disconnection of $X$ (it is possible to choose $H_{1}$ and $H_{2}$ in this manner because $X$ is assumed to be totally disconnected); then $A=(A\bigcap H_{1})\bigcup(A\bigcap H_{2})$ is a separation of $A$ that demonstrates that any subspace of a totally disconnected space with more than one point is disconnected. A totally disconnected space has interesting, physically appealing separation properties in terms of the (separated) Hausdorff spaces; here a topological space $X$ is *Hausdorff, or $T_{2}$,* iff any two distinct points of $X$ can be *separated* by disjoint neighbourhoods, so that for every $x\neq y\in X$, there are neighbourhoods $M\in\mathcal{N}_{x}$ and $N\in\mathcal{N}_{y}$ such that $M\bigcap N=\emptyset$. This means that for any two distinct points $x\neq y\in X$, it is impossible to find points that are arbitrarily close to both of them. Among the properties of Hausdorff spaces, the following need to be mentioned. (H1) $X$ is Hausdorff iff for each $x\in X$ and any point $y\neq x$, there is a neighbourhood $N$ of $x$ such that $y\not\in\textrm{Cl}(N)$.
This leads to the significant result that for any $x\in X$ the closed singleton $$\{ x\}=\bigcap_{N\in\mathcal{N}_{x}}\textrm{Cl}(N)$$ *is the intersection of the closures of any local base at that point,* which in the language of nets and filters (Appendix A1) means that a net in a Hausdorff space cannot converge to more than one point in the space and the adherent set $\textrm{adh}(\mathcal{N}_{x})$ of the neighbourhood filter at $x$ is the singleton $\{ x\}$. (H2) Since each singleton is a closed set, each finite set in a Hausdorff space is also closed in $X$. Unlike a cofinite space, however, there can clearly be infinite closed sets in a Hausdorff space. (H3) Any point $x$ in a Hausdorff space $X$ is a cluster point of $A\subseteq X$ iff every neighbourhood of $x$ contains infinitely many points of $A$, a fact that has led to our mental conditioning of the points of a (Cauchy) sequence piling up in neighbourhoods of the limit. Thus suppose, for the sake of argument, that although some neighbourhood of $x$ contains only a finite number of points of $A$, $x$ is nonetheless a cluster point of $A$. Then there is an open neighbourhood $U$ of $x$ such that $U\bigcap(A-\{ x\})=\{ x_{1},\cdots,x_{n}\}$ is a finite closed set of $X$ not containing $x$, and $U\bigcap(X-\{ x_{1},\cdots,x_{n}\})$, being the intersection of two open sets, is an open neighbourhood of $x$ not intersecting $A-\{ x\}$, implying thereby that $x\not\in\textrm{Der}(A)$; in fact $U\bigcap(X-\{ x_{1},\cdots,x_{n}\})$ is simply $\{ x\}$ if $x\in A$ or belongs to $\textrm{Bdy}_{X-A}(A)$ when $x\in X-A$. Conversely if every neighbourhood of a point of $X$ intersects $A$ in infinitely many points, that point must belong to $\textrm{Der}(A)$ by definition.
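For finite examples the Hausdorff condition, and the weaker separation axioms introduced below, can be checked by brute force. The following sketch (names and the Sierpinski-type example are our own illustrative assumptions) encodes the disjoint-neighbourhood requirement directly; note that for $T_{2}$ it suffices to search among opens that already miss the other point, since disjoint opens automatically do so.

```python
def separated(x, y, topology, axiom):
    """Test whether x and y are separated at strength T0, T1 or T2."""
    Ux = [U for U in topology if x in U and y not in U]
    Uy = [V for V in topology if y in V and x not in V]
    if axiom == "T0":          # at least one point has such a neighbourhood
        return bool(Ux or Uy)
    if axiom == "T1":          # each point has one
        return bool(Ux and Uy)
    return any(not (U & V) for U in Ux for V in Uy)   # T2: disjoint opens

def satisfies(X, topology, axiom):
    return all(separated(x, y, topology, axiom)
               for x in X for y in X if x != y)

# Sierpinski-type space {a, b} with opens {empty, {a}, {a, b}}: it is T0
# (the open {a} misses b) but neither T1 nor T2 (no open contains b alone).
sierpinski = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]
print([satisfies({"a", "b"}, sierpinski, t) for t in ("T0", "T1", "T2")])
# -> [True, False, False]
```

A discrete space passes all three tests, in agreement with the first row of the table of separation properties.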
Weaker separation axioms than Hausdorffness are those of $T_{0}$, respectively $T_{1}$, spaces in which for every pair of distinct points *at least one,* respectively *each one,* has some neighbourhood not containing the other; the following table is a listing of the separation properties of some useful spaces.

  Space                        $T_{0}$        $T_{1}$        $T_{2}$
  ------------------------- -------------- -------------- --------------
  Discrete                   $\checkmark$   $\checkmark$   $\checkmark$
  Indiscrete                   $\times$       $\times$       $\times$
  $\mathbb{R}$, standard     $\checkmark$   $\checkmark$   $\checkmark$
  left/right ray             $\checkmark$     $\times$       $\times$
  Infinite cofinite          $\checkmark$   $\checkmark$     $\times$
  Uncountable cocountable    $\checkmark$   $\checkmark$     $\times$
  $x$-inclusion/exclusion    $\checkmark$     $\times$       $\times$
  $A$-inclusion/exclusion      $\times$       $\times$       $\times$

  : \[Table: separation\][Separation properties of some useful spaces.]{}

It should be noted that as none of the properties (H1)–(H3) needs neighbourhoods of both the points simultaneously, it is sufficient for $X$ to be $T_{1}$ for the conclusions to remain valid. From its definition it follows that any totally disconnected space is a Hausdorff space and is therefore both $T_{1}$ and $T_{0}$ as well. However, if a Hausdorff space has a base of clopen sets then it is totally disconnected; this is so because if $x$ and $y$ are distinct points of $X$, then the assumed property that for every $M\in\mathcal{N}_{x}$ there is some clopen set $H$ with $x\in H\subseteq M$ yields $X=H\bigcup(X-H)$ as a disconnection of $X$ that separates $x$ and $y\in X-H$; note that the assumed Hausdorffness of $X$ allows $M$ to be chosen so as not to contain $y$. **Example A3.1.** (1) Every indiscrete space is connected; every subset of an indiscrete space is connected. Hence if $X$ is empty or a singleton, it is connected. A discrete space is connected iff it is either empty or is a singleton; the only connected subsets in a discrete space are the degenerate ones.
This is an extreme case of lack of connectedness, and a discrete space is the simplest example of a totally disconnected space. \(2) $\mathbb{Q}$, the set of rationals considered as a subspace of the real line, is (totally) disconnected because the set of all rationals larger than a given irrational $r$ is a clopen set in $\mathbb{Q}$, and $${\textstyle \mathbb{Q}=((-\infty,r)\bigcap\mathbb{Q})\bigcup(\mathbb{Q}\bigcap(r,\infty)),\qquad r\textrm{ an irrational},}$$ is the union of two disjoint clopen sets in the relative topology of $\mathbb{Q}$. The sets $(-\infty,r)\cap\mathbb{Q}$ and $\mathbb{Q}\cap(r,\infty)$ are clopen in $\mathbb{Q}$ because neither contains a cluster point of the other: any neighbourhood of a point of the second must contain the irrational $r$ in order to cut the first, which means that a neighbourhood of a point in either of the relatively open sets cannot be wholly contained in the other. The only connected sets of $\mathbb{Q}$ are the one-point subsets consisting of the individual rationals. In fact, a connected piece of $\mathbb{Q}$, being a connected subset of $\mathbb{R}$, is an interval in $\mathbb{R}$, and a nondegenerate interval cannot be contained in $\mathbb{Q}$. It needs to be noted that the individual points of the rational line are not (cl)open because any open subset of $\mathbb{R}$ that contains a rational must also contain others different from it. This example shows that a space need not be discrete for each of its points to be a component and thereby for the space to be totally disconnected. In a similar fashion, the set of irrationals is (totally) disconnected because the set of all irrationals larger than a given rational is a clopen set in $\mathbb{R}-\mathbb{Q}$. \(3) The $p$-inclusion ($A$-inclusion) topology is connected; a subset in this topology is connected iff it is degenerate or contains $p$.
For, a subset inherits the discrete topology if it does not contain $p$, and the $p$-inclusion topology if it contains $p$. \(4) The cofinite (cocountable) topology on an infinite (uncountable) space is connected; a subset in a cofinite (cocountable) space is connected iff it is degenerate or infinite (countable). \(5) Removal of a single point may render a connected space disconnected and even totally disconnected. In the former case the point removed is called a *cut point* and in the second it is a *dispersion point.* Every real number is a cut point of $\mathbb{R}$, but $\mathbb{R}$ has no dispersion point. \(6) Let $X$ be a topological space. Considering the components of $X$ as the equivalence classes of an equivalence relation $\sim$, with $Q\!:X\rightarrow X/\sim$ denoting the quotient map, $X/\sim$ is totally disconnected: as $Q^{-}([x])$ is the connected component of $X$ represented by $[x]\in X/\sim$, and as a subset $A\subseteq X/\sim$ is open or closed iff $Q^{-}(A)$ is open or closed, it follows that a connected $A$ can only be a singleton.$\qquad\blacksquare$ The next notion of compactness in topological spaces provides an insight into the role of nonempty adherent sets of filters that lead in a natural fashion to the concept of attractors in the theory of dynamical systems that we take up next. **Definition A3.4.** ***Compactness.*** *A topological space $X$ is* *compact* *iff every open cover of $X$ contains a finite subcover of $X$*. *$\qquad\square$* This definition of compactness has a useful equivalent contrapositive reformulation: *For any given collection of open sets of $X$, if none of its finite subcollections covers $X$, then the entire collection also cannot cover $X$.* The following theorem is a statement of the fundamental property of compact spaces in terms of adherences of filters in such spaces, the proof of which uses this contrapositive characterization of compactness.
**Theorem A3.1.** *A topological space $X$ is compact iff each class of closed subsets of $X$ with the finite intersection property (FIP) has nonempty intersection.* *$\qquad\square$* **Proof.** *Necessity.* Let $X$ be a compact space. Let $\mathcal{F}=\{ F_{\alpha}\}_{\alpha\in\mathbb{D}}$ be a collection of closed subsets of $X$ with the FIP, and let $\mathcal{G}=\{ X-F_{\alpha}\}_{\alpha\in\mathbb{D}}$ be the corresponding open sets of $X$. If $\{ G_{i}\}_{i=1}^{N}$ is a nonempty finite subcollection from $\mathcal{G}$, then $\{ X-G_{i}\}_{i=1}^{N}$ is the corresponding nonempty finite subcollection of $\mathcal{F}$. Hence from the assumed finite intersection property of $\mathcal{F}$, it must be true that $$\begin{array}{ccl} {\displaystyle X-\bigcup_{i=1}^{N}G_{i}} & = & {\displaystyle \bigcap_{i=1}^{N}(X-G_{i})}\qquad(\textrm{DeMorgan}'\textrm{s Law})\\ & \neq & \emptyset,\end{array}$$ so that no finite subcollection of $\mathcal{G}$ can cover $X$. Compactness of $X$ now implies that $\mathcal{G}$ too cannot cover $X$ and therefore $$\bigcap_{\alpha}F_{\alpha}=\bigcap_{\alpha}(X-G_{\alpha})=X-\bigcup_{\alpha}G_{\alpha}\neq\emptyset.$$ The proof of the converse is a simple exercise of reversing the arguments involving the two equations in the proof above.$\qquad\blacksquare$ Our interest in this theorem and its proof lies in the following corollary — *which essentially means that for every filter $\mathcal{F}$ on a compact space the adherent set* $\textrm{adh}(\mathcal{F})$ *is not empty —* from which it follows that every net in a compact space must have a convergent subnet.
**Corollary.** *A space $X$ is compact iff for every class $\mathcal{A}=(A_{\alpha})$ of nonempty subsets of $X$ with* FIP*,* $\textrm{adh}(\mathcal{A})=\bigcap_{A_{\alpha}\in\mathcal{A}}\textrm{Cl}(A_{\alpha})\neq\emptyset$*.*$\qquad\square$ The proof of this result for nets given by the next theorem illustrates the general approach in such cases, which is all that is basically needed in dealing with attractors of dynamical systems; compare Theorem A1.3. **Theorem A3.2.** *A topological space $X$ is compact iff each net in $X$ adheres in $X$*.$\qquad\square$ **Proof.** *Necessity.* Let $X$ be a compact space, $\chi\!:\mathbb{D}\rightarrow X$ a net in $X$, and $\mathbb{R}_{\alpha}$ the residual of $\alpha$ in the directed set $\mathbb{D}$. For the filter-base $(_{\textrm{F}}\mathcal{B}_{\chi(\mathbb{R}_{\alpha})})_{\alpha\in\mathbb{D}}$ of nonempty, decreasing, nested subsets of $X$ associated with the net $\chi$, the inclusions $\bigcap_{\alpha\preceq\delta}\textrm{Cl}(\chi(\mathbb{R}_{\alpha}))\supseteq\chi(\mathbb{R}_{\delta})\neq\emptyset$ show that the closures $\textrm{Cl}(\chi(\mathbb{R}_{\alpha}))$ have the finite intersection property, so that compactness of $X$ requires the subset $$\textrm{adh}(_{\textrm{F}}\mathcal{B}_{\chi}):=\bigcap_{\alpha\in\mathbb{D}}\textrm{Cl}(\chi(\mathbb{R}_{\alpha}))$$ of $X$ to be non-empty. If $x\in\textrm{adh}(_{\textrm{F}}\mathcal{B}_{\chi})$ then because $x$ is in the closure of $\chi(\mathbb{R}_{\beta})$, it follows from Eq. (\[Eqn: Def: Closure\]) that $N\bigcap\chi(\mathbb{R}_{\beta})\neq\emptyset$[^33] for every $N\in\mathcal{N}_{x}$, $\beta\in\mathbb{D}$. Hence $\chi(\gamma)\in N$ for some $\gamma\succeq\beta$ so that $x\in\textrm{adh}(\chi)$; see Eq. (\[Eqn: adh net2\]). *Sufficiency.* Assume now that every net in $X$ adheres in $X$. From any class $\mathcal{F}$ of closed subsets of $X$ with FIP, construct as in the proof of Thm.
A1.4, a decreasing nested sequence of closed subsets $C_{\beta}=\bigcap_{\alpha\preceq\beta\in\mathbb{D}}\{ F_{\alpha}\!:F_{\alpha}\in\mathcal{F}\}$ and consider the directed set $_{\mathbb{D}}C_{\beta}=\{(C_{\beta},\beta)\!:(\beta\in\mathbb{D})(x_{\beta}\in C_{\beta})\}$ with its natural direction (\[Eqn: DirectionIndexed\]) to define the net $\chi(C_{\beta},\beta)=x_{\beta}$ in $X$; see Def. A1.10. From the assumed adherence of $\chi$ at some $x\in X$, it follows that $N\bigcap F\neq\emptyset$ for every $N\in\mathcal{N}_{x}$ and $F\in\mathcal{F}$. Hence $x$ belongs to each closed set $F$ so that $x\in\textrm{adh}(\mathcal{F})$; see Eq. (\[Eqn: adh filter\]). Hence $X$ is compact.$\qquad\blacksquare$ Using Theorem A1.5, which specifies a definite criterion for the adherence of a net, this theorem reduces to the useful formulation that *a space is compact iff each net in it has some convergent subnet.* An important application is the following: Since every decreasing sequence $(F_{m})$ of nonempty sets has FIP (because $\bigcap_{m=1}^{M}F_{m}=F_{M}$ for every finite $M$), *every decreasing sequence of nonempty closed subsets of a compact space has nonempty intersection.* For a complete metric space this is known as the *Nested Set Theorem,* and for $[0,1]$ and other compact subspaces of $\mathbb{R}$ as the *Cantor Intersection Theorem.*[^34] For subspaces $A$ of $X$, it is the relative topology that determines, as usual, compactness of $A$; however, the following criterion renders this test in terms of the relative topology unnecessary and shows that the topology of $X$ itself is sufficient to determine compactness of subspaces: *A subspace $K$ of a topological space $X$ is compact iff each open cover of $K$ in $X$ contains a finite cover of $K$.* A proper understanding of the distinction between compactness and closedness of subspaces — which often causes much confusion to the non-specialist — is expressed in the next two theorems.
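Before turning to these theorems, the content of the Nested Set and Cantor Intersection statements can be made concrete by contrasting the compact $[0,1]$ with the non-compact $(0,1]$. In the sketch below (interval families chosen purely for illustration) the closed sets $[0,1/n]$ have the FIP and the common point $0$, while the sets $(0,1/n]$, closed in the subspace $(0,1]$, have the FIP but empty total intersection:

```python
from fractions import Fraction

def C(n):                       # the closed interval [0, 1/n] as an endpoint pair
    return (Fraction(0), Fraction(1, n))

def finite_intersection(ns):    # intersection of finitely many C(n)
    lo = max(C(n)[0] for n in ns)
    hi = min(C(n)[1] for n in ns)
    return (lo, hi) if lo <= hi else None

# in the compact [0,1]: every finite subfamily intersects, and 0 is common to all
assert all(finite_intersection(range(1, N + 1)) is not None for N in range(1, 50))
assert all(C(n)[0] <= 0 <= C(n)[1] for n in range(1, 50))

# in the non-compact (0,1]: F_n = (0, 1/n] is closed in the subspace and the
# family has the FIP, yet every point x > 0 escapes F_n as soon as n > 1/x
def in_F(x, n):
    return 0 < x <= Fraction(1, n)

for x in [Fraction(1, k) for k in range(1, 20)]:
    assert any(not in_F(x, n) for n in range(1, 100))
```

Exact rational arithmetic is used only so that the endpoint comparisons are free of rounding.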
As a motivation for the first, which establishes that not every subset of a compact space need be compact, mention may be made of the subset $(a,b)$ of the compact closed interval $[a,b]$ in $\mathbb{R}$. **Theorem A3.3.** *A closed subset $F$ of a compact space $X$ is compact.* *$\qquad\square$* **Proof.** Let $\mathcal{G}$ be an open cover of $F$ so that an open cover of $X$ is $\mathcal{G}\bigcup(X-F)$, which because of compactness of $X$ contains a finite subcover $\mathcal{U}$. Then $\mathcal{U}-\{X-F\}$ is a finite subcollection of $\mathcal{G}$ that covers $F$.*$\qquad\blacksquare$* It is not true in general that a compact subset of a space is necessarily closed. For example, in an infinite set $X$ with the cofinite topology, let $F$ be an infinite subset of $X$ with $X-F$ also infinite. Then although $F$ is not closed in $X$, it is nevertheless compact because $X$ is compact. Indeed, let $\mathcal{G}$ be an open cover of $X$ and choose any nonempty $G_{0}\in\mathcal{G}$. If $G_{0}=X$ then $\{ G_{0}\}$ is the required finite cover of $X$. If this is not the case, then because $X-G_{0}=\{ x_{i}\}_{i=1}^{n}$ is a finite set, there is a $G_{i}\in\mathcal{G}$ with $x_{i}\in G_{i}$ for each $1\leq i\leq n$, and therefore $\{ G_{i}\}_{i=0}^{n}$ is the finite cover that demonstrates the compactness of the cofinite space $X$. Compactness of $F$ now follows because the subspace topology on $F$ is the cofinite topology induced from $X$. The distinguishing feature of this topology is that it, like the cocountable, is not Hausdorff: if $U$ and $V$ are any two nonempty open sets of $X$, then they cannot be disjoint as the complements of the open sets can only be finite, and if $U\bigcap V$ were indeed empty, then $${\textstyle X=X-\emptyset=X-(U\bigcap V)=(X-U)\bigcup(X-V)}$$ would be a finite set, which is impossible as $X$ is infinite.
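Both the finite-subcover construction just given and its non-Hausdorff fallout are transparent in a finite model of the cofinite topology; in the Python sketch below the ground set, the cover and the excluded sets are all arbitrary illustrative choices:

```python
def finite_subcover(X, cover):
    """Extract a finite subcover from a cover of X by cofinite sets, exactly
    as in the compactness argument: pick any nonempty G0, then cover the
    finite remainder X - G0 pointwise."""
    G0 = next(U for U in cover if U)
    sub = [G0]
    for x in X - G0:                      # X - G0 is finite
        sub.append(next(U for U in cover if x in U))
    return sub

X = frozenset(range(1000))                # model ground set (illustrative)
cover = [X - frozenset(range(k, k + 10)) for k in range(0, 1000, 5)]
sub = finite_subcover(X, cover)
assert frozenset().union(*sub) == X and len(sub) <= 11

# non-Hausdorff fallout: the injective sequence x_i = i is eventually in
# every cofinite neighbourhood U of every point, since X - U is finite
seq = list(range(1000))
for excluded in [frozenset({3, 7, 500}), frozenset(range(20))]:
    U = X - excluded
    tail = max(excluded) + 1              # here x_i = i, so read off the index
    assert all(x in U for x in seq[tail:])
```

In this model the escape index can be read off because $x_{i}=i$; in general one only knows that the excluded set, being finite, is eventually left behind.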
An immediate fallout of this is that in an infinite cofinite space, a sequence $(x_{i})_{i\in\mathbb{N}}$ (and even a net) with $x_{i}\neq x_{j}$ for $i\neq j$ behaves in an extremely unusual way: *it converges,* as in the indiscrete space, *to every point of the space.* Indeed if $x\in X$, where $X$ is an infinite set provided with its cofinite topology, and $U$ is any neighbourhood of $x$, any such sequence $(x_{i})_{i\in\mathbb{N}}$ in $X$ must be eventually in $U$ because $X-U$ is finite, and ignoring the initial set of its values lying in $X-U$ in no way alters the ultimate behaviour of the sequence (note that this implies that the filter induced on $X$ by the sequence agrees with its topology). Thus $x_{i}\rightarrow x$ for any $x\in X$ is a reflection of the fact that there are no small neighbourhoods of any point of $X$, every neighbourhood being almost the whole of $X$ except for a null set consisting of only a finite number of points. This is in sharp contrast with Hausdorff spaces where, although every finite set is also closed, every point has arbitrarily small neighbourhoods that lead to unique limits of sequences. A corresponding result for cocountable spaces can be found in Example A1.2 Continued. This example of the cofinite topology motivates the following “converse” of the previous theorem. **Theorem A3.4.** *Every compact subspace of a Hausdorff space is closed.$\qquad\square$* **Proof.** Let $K$ be a nonempty compact subset of $X$, Fig. \[Fig: cmpct\_clsd\], and let $x\in X-K$. Because of the separation of $X$, for every $y\in K$ there are disjoint open subsets $U_{y}$ and $V_{y}$ of $X$ with $y\in U_{y}$ and $x\in V_{y}$.
Hence $\{ U_{y}\}_{y\in K}$ is an open cover for $K$, and from its compactness there is a finite subset $A$ of $K$ such that $K\subseteq\bigcup_{y\in A}U_{y}$ with $V=\bigcap_{y\in A}V_{y}$ an open neighbourhood of $x$; $V$ is open because each $V_{y}$ is a neighbourhood of $x$ and the intersection is over finitely many points $y$ of $A$. To prove that $K$ is closed in $X$ it is enough to show that $V$ is disjoint from $K$: if there is indeed some $z\in V\bigcap K$ then $z$ must be in some $U_{y}$ for $y\in A$. But as $z\in V$ it is also in $V_{y}$, which is impossible as $U_{y}$ and $V_{y}$ are disjoint. This last part of the argument in fact shows that *if $K$ is a compact subspace of a Hausdorff space $X$ and $x\notin K$, then there are disjoint open sets $U$ and $V$ of $X$ containing $x$ and $K$ respectively.$\qquad\blacksquare$* The last two theorems may be combined to give the obviously important **Corollary.** *In a compact Hausdorff space, closedness and compactness of its subsets are equivalent concepts.$\qquad\square$* In the absence of Hausdorffness, it is not possible to conclude from the assumed compactness of the space that every point to which a net in a subspace may converge actually belongs to the subspace. **Definition A3.5.** *A subset $D$ of a topological space* *$(X,\mathcal{U})$* *is* *dense in $X$ if* $\textrm{Cl}(D)=X$*. Thus the only closed superset of $D$ is $X$ itself, and every neighbourhood of any point of $X$ contains a point of $D$ not necessarily distinct from it; refer to the distinction between Eqs. (\[Eqn: Def: Closure\]) and (\[Eqn: Def: Derived\]).$\qquad\square$* Loosely, $D$ is dense in $X$ iff every point of $X$ has points of $D$ arbitrarily close to it. A *self-dense* (*dense in itself*) set is a set without any isolated points; hence $A$ is self-dense iff $A\subseteq\textrm{Der}(A)$. A closed self-dense set is called a *perfect set*, so that a closed set $A$ is perfect iff it has no isolated points.
Accordingly $$A\textrm{ is perfect}\Longleftrightarrow A=\textrm{Der}(A),$$ from which it also follows that the closure of a set without any isolated points is a perfect set. **Theorem A3.5.** *The following are equivalent statements.* \(1) *$D$ is dense in $X$*. \(2) *If $F$ is any closed set of $X$ with $D\subseteq F$, then $F=X$*; *thus the only closed superset of $D$ is $X$.* \(3) *Every nonempty (basic) open set of $X$ cuts $D;$ thus the only open set disjoint from $D$ is the empty set $\emptyset$.* \(4) *The exterior of $D$ is empty.$\qquad\square$* **Proof.** (3) If $U$ indeed is a nonempty open set of $X$ with $U\bigcap D=\emptyset$, then $D\subseteq X-U\neq X$ leads to the contradiction $X=\textrm{Cl}(D)\subseteq\textrm{Cl}(X-U)=X-U\neq X$, which also incidentally proves (2). From (3) it follows that for any open set $U$ of $X$, $\textrm{Cl}(U)=\textrm{Cl}(U\bigcap D)$: if $V$ is any open neighbourhood of $x\in\textrm{Cl}(U)$ then $V\bigcap U$ is a nonempty open set of $X$ that must cut $D$, so that $V\bigcap(U\bigcap D)\neq\emptyset$ implies $x\in\textrm{Cl}(U\bigcap D)$.
Finally, $\textrm{Cl}(U\bigcap D)\subseteq\textrm{Cl}(U)$ completes the proof.$\qquad\blacksquare$ **Definition A3.6.** (a) *A set $A\subseteq X$ is said to be* *nowhere dense* *in $X$ if* $\textrm{Int}(\textrm{Cl}(A))=\emptyset$ *and* *residual* *in $X$ if* $\textrm{Int}(A)=\emptyset$*.$\qquad\square$* $A$ is nowhere dense in $X$ iff $$\textrm{Bdy}(X-\textrm{Cl}(A))=\textrm{Bdy}(\textrm{Cl}(A))=\textrm{Cl}(A)$$ so that $${\textstyle \textrm{Cl}(X-\textrm{Cl}(A))={(X-\textrm{Cl}(A))\bigcup\textrm{Cl}(A)=X}}$$ from which it follows that $$A\textrm{ is nwd in }X\Longleftrightarrow X-\textrm{Cl}(A)\textrm{ is dense in }X$$ and $$A\textrm{ is residual in }X\Longleftrightarrow X-A\textrm{ is dense in }X.$$ Thus $A$ is nowhere dense iff $\textrm{Ext}(A):=X-\textrm{Cl}(A)$ is dense in *$X$,* and in particular a closed set is nowhere dense in $X$ iff its complement is open dense in $X$, open-denseness being complementarily dual to closed-nowhere-denseness. The rationals in the reals are an example of a set that is residual but not nowhere dense. The following are readily verifiable properties of subsets of $X$. \(1) A set $A\subseteq X$ is nowhere dense in $X$ iff it is contained in the closure of the complement of its closure, that is $A\subseteq\textrm{Cl}(X-\textrm{Cl}(A))$. In particular a closed subset $A$ is nowhere dense in $X$ iff it is contained in its own boundary, that is iff $A=\textrm{Bdy}(A)$, and thus iff it contains no nonempty open set. \(2) From $M\subseteq N\Rightarrow\textrm{Cl}(M)\subseteq\textrm{Cl}(N)$ it follows, with $M=X-\textrm{Cl}(A)$ and $N=X-A$, that a nowhere dense set is residual, but a residual set need not be nowhere dense unless it is also closed in $X$. \(3) Since $\textrm{Cl}(\textrm{Cl}(A))=\textrm{Cl}(A)$, $\textrm{Cl}(A)$ is nowhere dense in $X$ iff $A$ is. \(4) For any $A\subseteq X$, both $\textrm{Bdy}_{A}(X-A):=\textrm{Cl}(X-A)\bigcap A$ and $\textrm{Bdy}_{X-A}(A):=\textrm{Cl}(A)\bigcap(X-A)$ are residual sets and as Fig.
\[Fig: DerSets\] shows $$\textrm{Bdy}_{X}(A)=\textrm{Bdy}_{X-A}(A)\bigcup\textrm{Bdy}_{A}(X-A)$$ is the union of these two residual sets. When $A$ is closed (or open) in $X$, its boundary, consisting of the only component $\textrm{Bdy}_{A}(X-A)$ (or $\textrm{Bdy}_{X-A}(A)$) as shown by the second row (or column) of the figure, being a closed set of $X$ is also nowhere dense in $X$; in fact *a closed nowhere dense set is always the boundary of some open set.* Otherwise, the boundary components of the two residual parts — as in the donor-donor, donor-neutral, neutral-donor and neutral-neutral cases — need not be individually closed in $X$ (although their union is) and their union is a residual set that need not be nowhere dense in $X$: the union of two nowhere dense sets is nowhere dense but the union of a residual and a nowhere dense set is a residual set. One way in which a two-component boundary can be nowhere dense is by having $\textrm{Bdy}_{A}(X-A)\supseteq\textrm{Der}(A)$ or $\textrm{Bdy}_{X-A}(A)\supseteq\textrm{Der}(X-A)$, so that it is effectively in one piece rather than in two, as shown in Fig. \[Fig: DerSets1\](b). **Theorem A3.6.** *$A$ is nowhere dense in $X$ iff each non-empty open set of $X$ has a non-empty open subset disjoint from* $\textrm{Cl}(A)$.*$\qquad\square$* **Proof.** If $U$ is a nonempty open set of $X$, then $U_{0}=U\cap\textrm{Ext}(A)\neq\emptyset$ as $\textrm{Ext}(A)$ is dense in $X$; $U_{0}$ is the open subset that is disjoint from $\textrm{Cl}(A)$. It clearly follows from this that each non-empty open set of $X$ has a non-empty open subset disjoint from a nowhere dense set $A$.$\qquad\blacksquare$ What this result (which follows just from the definition of nowhere dense sets) actually means is that no point in $\textrm{Bdy}_{X-A}(A)$ can be isolated in it.
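Theorem A3.6 and the defining condition $\textrm{Int}(\textrm{Cl}(A))=\emptyset$ can be compared exhaustively on a small finite topology; the four-point space below is an arbitrary illustrative choice:

```python
from itertools import combinations

def closure(A, X, opens):                  # smallest closed superset of A
    return frozenset.intersection(*(X - U for U in opens if A <= X - U))

def interior(A, X, opens):                 # largest open subset of A
    return frozenset().union(*(U for U in opens if U <= A))

def nowhere_dense(A, X, opens):            # Int(Cl(A)) is empty
    return not interior(closure(A, X, opens), X, opens)

def thm_A3_6(A, X, opens):
    # each nonempty open U has a nonempty open subset V disjoint from Cl(A)
    clA = closure(A, X, opens)
    return all(any(V and V <= U and not (V & clA) for V in opens)
               for U in opens if U)

X = frozenset(range(4))
opens = [frozenset(s) for s in [(), (0,), (1,), (0, 1), (0, 1, 2, 3)]]

subsets = (frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r))
assert all(nowhere_dense(A, X, opens) == thm_A3_6(A, X, opens) for A in subsets)
```

Here, for instance, $\{2\}$ has closure $\{2,3\}$ with empty interior and so is nowhere dense, while $\{0\}$, being open, is not.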
**Corollary.** $A$ *is nowhere dense in $X$ iff* Cl$(A)$ *does not contain any nonempty open set of $X$, that is iff the only open subset of* Cl$(A)$ *is the empty set.* *$\qquad\square$* **Example A3.2.** Each finite subset of $\mathbb{R}^{n}$ is nowhere dense in $\mathbb{R}^{n}$; the set $\{1/n\}_{n=1}^{\infty}$ is nowhere dense in $\mathbb{R}$. The Cantor set $\mathcal{C}$ is nowhere dense in $[0,1]$ because every neighbourhood of any point in $\mathcal{C}$ must contain, by its very construction, a point with $1$ in its ternary representation. That the interior and the interior of the closure of a set are not necessarily the same is seen in the example of the rationals in the reals: the set of rational numbers $\mathbb{Q}$ has empty interior because any neighbourhood of a rational number contains irrational numbers (so also is the case for irrational numbers), and $\mathbb{R}=\textrm{Int}(\textrm{Cl}(\mathbb{Q}))\supseteq\textrm{Int}(\mathbb{Q})=\emptyset$ illustrates the distinction that underlies the notion of a nowhere dense set.$\qquad\blacksquare$ The following properties of $\mathcal{C}$ can be taken to define any subset of a topological space as a Cantor set; set-theoretically it should be clear from its classical middle-third construction that the Cantor set consists of all points of the closed interval $[0,1]$ whose infinite triadic (base 3) representation, expressed so as not to terminate with a $1$ (a terminal $1$ being rewritten as the recurring tail $0\overline{2}$), does not contain the digit $1$. Accordingly, any end-point of the infinite set of closed intervals whose intersection yields the Cantor set is represented by a string ending in a recurring $0$ or $2$, while a non end-point carries an arbitrary arrangement of these two digits.
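The middle-third construction and the ternary digit criterion just described lend themselves to direct computation; a small sketch (exact arithmetic with `fractions` keeps the interval endpoints exact):

```python
from fractions import Fraction

def cantor_level(i):
    """The 2**i closed intervals whose union is C_i."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(i):
        nxt = []
        for a, b in intervals:
            t = (b - a) / 3
            nxt += [(a, a + t), (b - t, b)]   # drop the open middle third
        intervals = nxt
    return intervals

def in_cantor(digits):
    """Digit criterion for a point given by a finite ternary string
    (point = sum of d_k / 3**k): no digit 1 may occur."""
    return 1 not in digits

C5 = cantor_level(5)
assert len(C5) == 2 ** 5                                  # 2^i intervals ...
assert sum(b - a for a, b in C5) == Fraction(2, 3) ** 5   # ... of total length (2/3)^i
# 1/4 = 0.020202..._3 is a non end-point member of the Cantor set:
assert any(a <= Fraction(1, 4) <= b for a, b in C5)
```

The shrinking total length $(2/3)^{i}$ is the quantitative face of the nowhere-denseness recorded in ($\mathcal{C}3$) below.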
Recalling that a number in $[0,1]$ is rational iff its representation in any base is terminating or recurring — thus a decimal that neither repeats nor terminates represents an irrational number — it follows that both rationals and irrationals belong to the Cantor set. ($\mathcal{C}1$) ***$\mathcal{C}$ is totally disconnected.*** If possible, let $\mathcal{C}$ have a component containing points $a$ and $b$ with $a<b$. Then $[a,b]\subseteq\mathcal{C}\Rightarrow[a,b]\subseteq C_{i}$ for all $i$. But this is impossible because we may choose $i$ large enough to have $3^{-i}<b-a$, so that $a$ and $b$ must belong to two different members of the $2^{i}$ pairwise disjoint closed subintervals, each of length $3^{-i}$, that constitute $C_{i}$. Hence $$[a,b]\textrm{ is not a subset of any }C_{i}\Longrightarrow[a,b]\textrm{ is not a subset of }\mathcal{C}.$$ ($\mathcal{C}2$) ***$\mathcal{C}$ is perfect*** so that for any $x\in\mathcal{C}$ every neighbourhood of $x$ must contain some other point of $\mathcal{C}$. Supposing to the contrary that the singleton $\{ x\}$ is an open set of $\mathcal{C}$, there must be an $\varepsilon>0$ such that in the usual topology of $\mathbb{R}$ $${\textstyle \{ x\}=\mathcal{C}\bigcap(x-\varepsilon,x+\varepsilon).}\label{Eqn: Cantor_Perfect}$$ Choose a positive integer $i$ large enough to satisfy $3^{-i}<\varepsilon$. Since $x$ is in every $C_{i}$, it must be in one of the $2^{i}$ pairwise disjoint closed intervals $[a,b]\subset(x-\varepsilon,x+\varepsilon)$, each of length $3^{-i}$, whose union is $C_{i}$. As $[a,b]$ is an interval, at least one of the endpoints of $[a,b]$ is different from $x$, and since the endpoints belong to $\mathcal{C}$, $\mathcal{C}\cap(x-\varepsilon,x+\varepsilon)$ must also contain this point, thereby violating Eq. (\[Eqn: Cantor\_Perfect\]).
($\mathcal{C}3$) ***$\mathcal{C}$ is nowhere dense*** because each neighbourhood of any point of $\mathcal{C}$ intersects $\textrm{Ext}(\mathcal{C})$; see Thm. A3.6. ($\mathcal{C}4$) ***$\mathcal{C}$ is compact*** because it is a closed subset contained in the compact subspace $[0,1]$ of $\mathbb{R}$; see Thm. A3.3. The compactness of $[0,1]$ follows from the Heine-Borel Theorem, which states that a subset of the real line is compact iff it is both closed and bounded with respect to the Euclidean metric on $\mathbb{R}$. Compare ($\mathcal{C}1$) and ($\mathcal{C}2$) with the essentially similar arguments of Example A3.1(2) for the subspace of rationals in $\mathbb{R}$. **A4. Neutron Transport Theory** This section introduces the reader to the basics of the *linear* neutron transport theory, where graphical convergence approximations to the singular distributions, interpreted here as multifunctions, led to the present study. The one-speed (that is, mono-energetic) neutron transport equation in one dimension and plane geometry is $$\mu\frac{\partial\Phi(x,\mu)}{\partial x}+\Phi(x,\mu)=\frac{c}{2}\int_{-1}^{1}\Phi(x,\mu^{\prime})d\mu^{\prime},\,0<c<1,\,-1\leq\mu\leq1\label{Eqn: NeutronTransport}$$ where $x$ is a non-dimensional physical space variable that denotes the location of the neutron moving in a direction $\theta=\cos^{-1}(\mu)$, $\Phi(x,\mu)$ is a neutron density distribution function such that $\Phi(x,\mu)dxd\mu$ is the expected number of neutrons in a distance $dx$ about the point $x$ moving at constant speed with their direction cosines of motion in $d\mu$ about $\mu$, and $c$ is a physical constant that will be taken to satisfy the restriction shown above.
Case’s method starts by assuming the solution to be of the form $\Phi_{\nu}(x,\mu)=e^{-x/\nu}\phi(\mu,\nu)$ with a normalization integral constraint of $\int_{-1}^{1}\phi(\mu,\nu)d\mu=1$ to lead to the simple equation $$(\nu-\mu)\phi(\mu,\nu)=\frac{c\nu}{2}\label{Eqn: case_eigen}$$ for the unknown function $\phi(\mu,\nu)$. Case then suggested, see @Case1967, the non-simple complete solution of this equation to be $$\phi(\mu,\nu)=\frac{c\nu}{2}\mathcal{P}\frac{1}{\nu-\mu}+\lambda(\nu)\delta(\nu-\mu),\label{Eqn: singular_eigen}$$ where $\lambda(\nu)$ is the usual combination coefficient of the solutions of the homogeneous and non-homogeneous parts of a linear equation, $\mathcal{P}(\cdot)$ denotes a principal value and $\delta(x)$ the Dirac delta, to lead to the full-range $-1\leq\mu\leq1$ solution valid for $-\infty<x<\infty$ $$\Phi(x,\mu)=a(\nu_{0})e^{-x/\nu_{0}}\phi(\mu,\nu_{0})+a(-\nu_{0})e^{x/\nu_{0}}\phi(\mu,-\nu_{0})+\int_{-1}^{1}a(\nu)e^{-x/\nu}\phi(\mu,\nu)d\nu\label{Eqn: CaseSolution_FR}$$ of the one-speed neutron transport equation (\[Eqn: NeutronTransport\]). Here the real discrete eigenvalue $\nu_{0}$ and the coefficient $\lambda(\nu)$ satisfy respectively $$\frac{c\nu_{0}}{2}\ln\frac{\nu_{0}+1}{\nu_{0}-1}=1,\qquad\mid\nu_{0}\mid>1$$ $$\lambda(\nu)=1-\frac{c\nu}{2}\ln\frac{1+\nu}{1-\nu},\qquad\nu\in[-1,1],$$ with $$\phi(\mu,\nu_{0})=\frac{c\nu_{0}}{2}\frac{1}{\nu_{0}-\mu}$$ following from Eq. (\[Eqn: singular\_eigen\]).
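In practice the discrete eigenvalue must be computed numerically: the left-hand side of its defining constraint decreases monotonically from $+\infty$ (as $\nu_{0}\rightarrow1^{+}$) to $c<1$ (as $\nu_{0}\rightarrow\infty$), so bisection suffices. The sketch below (the value $c=0.9$ is an arbitrary illustrative choice) also verifies by quadrature that $\int_{-1}^{1}\mu\phi^{2}(\mu,\nu_{0})d\mu$ agrees with the closed form $\frac{c\nu_{0}^{3}}{2}\left(\frac{c}{\nu_{0}^{2}-1}-\frac{1}{\nu_{0}^{2}}\right)$ that the dispersion relation yields for the full-range normalization constant $N(\nu_{0})$:

```python
from math import log

def dispersion(nu, c):
    # f(nu) = (c*nu/2)*ln((nu+1)/(nu-1)) - 1; its root nu0 > 1 is Case's
    # discrete eigenvalue
    return 0.5 * c * nu * log((nu + 1) / (nu - 1)) - 1

def nu0(c, lo=1.0 + 1e-12, hi=1e6):
    # f decreases from +inf (nu -> 1+) to c - 1 < 0 (nu -> inf): bisect
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dispersion(mid, c) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def simpson(f, a, b, n=4000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

c = 0.9                                    # illustrative value only
v0 = nu0(c)                                # approximately 1.9032 for c = 0.9
phi = lambda mu: 0.5 * c * v0 / (v0 - mu)  # phi(mu, nu0)

N_quad = simpson(lambda mu: mu * phi(mu) ** 2, -1.0, 1.0)
N_closed = 0.5 * c * v0 ** 3 * (c / (v0 ** 2 - 1) - 1 / v0 ** 2)
assert abs(N_quad - N_closed) < 1e-8       # quadrature matches the closed form
```

As $c\rightarrow1^{-}$ the eigenvalue grows without bound, while as $c\rightarrow0^{+}$ it approaches $1$, which is why the bisection bracket is taken so wide.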
It can be shown [@Case1967] that the eigenfunctions $\phi(\nu,\mu)$ satisfy the full-range orthogonality condition $$\int_{-1}^{1}\mu\phi(\nu,\mu)\phi(\nu^{\prime},\mu)d\mu=N(\nu)\delta(\nu-\nu^{\prime}),$$ where the normalization constants $N$, odd functions of their argument, are given by $$\begin{array}{ccl} {\displaystyle N(\pm\nu_{0})} & = & {\displaystyle \int_{-1}^{1}\mu\phi^{2}(\pm\nu_{0},\mu)d\mu}\qquad\textrm{for }\mid\nu_{0}\mid>1\\ & = & {\displaystyle \pm\frac{c\nu_{0}^{3}}{2}\left(\frac{c}{\nu_{0}^{2}-1}-\frac{1}{\nu_{0}^{2}}\right)},\end{array}$$ and$$N(\nu)=\nu\left(\lambda^{2}(\nu)+\left(\frac{\pi c\nu}{2}\right)^{2}\right)\qquad\textrm{for }\nu\in[-1,1].$$ With a source of particles $\psi(x_{0},\mu)$ located at $x=x_{0}$ in an infinite medium, Eq. (\[Eqn: CaseSolution\_FR\]) reduces to the boundary condition, with $\mu,\textrm{ }\nu\in[-1,1]$, $$\psi(x_{0},\mu)=a(\nu_{0})e^{-x_{0}/\nu_{0}}\phi(\mu,\nu_{0})+a(-\nu_{0})e^{x_{0}/\nu_{0}}\phi(\mu,-\nu_{0})+\int_{-1}^{1}a(\nu)e^{-x_{0}/\nu}\phi(\mu,\nu)d\nu\label{Eqn: BC_FR}$$ for the determination of the expansion coefficients $a(\pm\nu_{0}),\textrm{ }\{ a(\nu)\}_{\nu\in[-1,1]}$. Use of the above orthogonality integrals then leads to the complete solution of the problem to be $$a(\nu)=\frac{e^{x_{0}/\nu}}{N(\nu)}\int_{-1}^{1}\mu\psi(x_{0},\mu)\phi(\mu,\nu)d\mu,\qquad\nu=\pm\nu_{0}\textrm{ or }\nu\in[-1,1].$$ For example, in the infinite-medium Green's function problem with $x_{0}=0$ and $\psi(x_{0},\mu)=\delta(\mu-\mu_{0})/\mu$, the coefficients are $a(\pm\nu_{0})=\phi(\mu_{0},\pm\nu_{0})/N(\pm\nu_{0})$ when $\nu=\pm\nu_{0}$, and $a(\nu)=\phi(\mu_{0},\nu)/N(\nu)$ for $\nu\in[-1,1]$. For a half-space $0\leq x<\infty$, the obvious reduction of Eq.
(\[Eqn: CaseSolution\_FR\]) to $$\Phi(x,\mu)=a(\nu_{0})e^{-x/\nu_{0}}\phi(\mu,\nu_{0})+\int_{0}^{1}a(\nu)e^{-x/\nu}\phi(\mu,\nu)d\nu\label{Eqn: CaseSolution_HR}$$ with boundary condition, $\mu,\textrm{ }\nu\in[0,1]$, $$\psi(x_{0},\mu)=a(\nu_{0})e^{-x_{0}/\nu_{0}}\phi(\mu,\nu_{0})+\int_{0}^{1}a(\nu)e^{-x_{0}/\nu}\phi(\mu,\nu)d\nu,\label{Eqn: BC_HR}$$ leads to an infinitely more difficult determination of the expansion coefficients due to the more involved nature of the orthogonality relations of the eigenfunctions in the half-interval $[0,1]$ that now reads for $\nu,\textrm{ }\nu^{\prime}\in[0,1]$ [@Case1967] $$\begin{aligned} \int_{0}^{1}W(\mu)\phi(\mu,\nu^{\prime})\phi(\mu,\nu)d\mu & = & \frac{W(\nu)N(\nu)}{\nu}\delta(\nu-\nu^{\prime})\nonumber \\ \int_{0}^{1}W(\mu)\phi(\mu,\nu_{0})\phi(\mu,\nu)d\mu & = & 0\nonumber \\ \int_{0}^{1}W(\mu)\phi(\mu,-\nu_{0})\phi(\mu,\nu)d\mu & = & c\nu\nu_{0}X(-\nu_{0})\phi(\nu,-\nu_{0})\nonumber \\ \int_{0}^{1}W(\mu)\phi(\mu,\pm\nu_{0})\phi(\mu,\nu_{0})d\mu & = & \mp\left(\frac{c\nu_{0}}{2}\right)^{2}X(\pm\nu_{0})\label{Eqn: HR Ortho}\\ \int_{0}^{1}W(\mu)\phi(\mu,\nu_{0})\phi(\mu,-\nu)d\mu & = & \frac{c^{2}\nu\nu_{0}}{4}X(-\nu)\nonumber \\ \int_{0}^{1}W(\mu)\phi(\mu,\nu^{\prime})\phi(\mu,-\nu)d\mu & = & \frac{c\nu^{\prime}}{2}(\nu_{0}+\nu)\phi(\nu^{\prime},-\nu)X(-\nu)\nonumber \end{aligned}$$ where the half-range weight function $W(\mu)$ is defined as $$W(\mu)=\frac{c\mu}{2(1-c)(\nu_{0}+\mu)X(-\mu)}\label{Eqn: W(mu)}$$ in terms of the $X$-function $$X(-\mu)=\textrm{exp}-\left\{ \frac{c}{2}\int_{0}^{1}\frac{\nu}{N(\nu)}\left[1+\frac{c\nu^{2}}{1-\nu^{2}}\right]\ln(\nu+\mu)d\nu\right\} ,\qquad0\leq\mu\leq1,$$ that is conveniently obtained from a numerical solution of the nonlinear integral equation $$\Omega(-\mu)=1-\frac{c\mu}{2(1-c)}\int_{0}^{1}\frac{\nu_{0}^{2}(1-c)-\nu^{2}}{(\nu_{0}^{2}-\nu^{2})(\mu+\nu)\Omega(-\nu)}d\nu\label{Eqn: Omega(-mu)}$$ to yield $$X(-\mu)=\frac{\Omega(-\mu)}{\mu+\nu_{0}\sqrt{1-c}},$$ and the 
$X(\pm\nu_{0})$ satisfy $$X(\nu_{0})X(-\nu_{0})=\frac{\nu_{0}^{2}(1-c)-1}{2(1-c)\nu_{0}^{2}(\nu_{0}^{2}-1)}.$$ Two other useful relations involving the $W$-function are given by $\int_{0}^{1}W(\mu)\phi(\mu,\nu_{0})d\mu=c\nu_{0}/2$ and $\int_{0}^{1}W(\mu)\phi(\mu,\nu)d\mu=c\nu/2$. The utility of these full- and half-range orthogonality relations lies in the fact that a suitable class of functions of the type involved here can always be expanded in terms of them, see @Case1967. An example of this for a full-range problem has been given above; we end this introduction to the generalized — traditionally known as singular in neutron transport theory — eigenfunction method with two examples of half-range orthogonality integrals applied to the half-space problems A and B of Sec. 5. **Problem A: The Milne Problem.** In this case there is no incident flux of particles from outside the medium at $x=0$, but for large $x>0$ the neutron distribution inside the medium behaves like $e^{x/\nu_{0}}\phi(\mu,-\nu_{0})$. Hence the boundary condition (\[Eqn: BC\_HR\]) at $x=0$ reduces to $$-\phi(\mu,-\nu_{0})=a_{\textrm{A}}(\nu_{0})\phi(\mu,\nu_{0})+\int_{0}^{1}a_{\textrm{A}}(\nu)\phi(\mu,\nu)d\nu\qquad\mu\geq0.$$ Use of the fourth and third equations of Eq. (\[Eqn: HR Ortho\]) and the explicit relation Eq. (\[Eqn: W(mu)\]) for $W(\mu)$ gives respectively the coefficients $$\begin{aligned} {\displaystyle a_{\textrm{A}}(\nu_{0})} & = & X(-\nu_{0})/X(\nu_{0})\nonumber \\ a_{\textrm{A}}(\nu) & = & -\frac{1}{N(\nu)}\textrm{ }c(1-c)\nu_{0}^{2}\nu X(-\nu_{0})X(-\nu)\label{Eqn: Milne_Coeff}\end{aligned}$$ The extrapolated end-point $z_{0}$ of Eq. (\[Eqn: extrapolated\]) is related to $a_{\textrm{A}}(\nu_{0})$ of the Milne problem by $a_{\textrm{A}}(\nu_{0})=-\exp(-2z_{0}/\nu_{0})$.
**Problem B: The Constant Source Problem.** Here the boundary condition at $x=0$ is $$1=a_{\textrm{B}}(\nu_{0})\phi(\mu,\nu_{0})+\int_{0}^{1}a_{\textrm{B}}(\nu)\phi(\mu,\nu)d\nu\qquad\mu\geq0$$ which leads, using the integral relations satisfied by $W$, to the expansion coefficients $$\begin{aligned} {\displaystyle a_{\textrm{B}}(\nu_{0})} & = & -2/(c\nu_{0}X(\nu_{0}))\label{Eqn: Constant_Coeff}\\ a_{\textrm{B}}(\nu) & = & \frac{1}{N(\nu)}\textrm{ }(1-c)\nu(\nu_{0}+\nu)X(-\nu)\nonumber \end{aligned}$$ where the $X(\pm\nu_{0})$ are related to Problem A as $$\begin{aligned} X(\nu_{0}) & = & \frac{1}{\nu_{0}}\sqrt{\frac{\nu_{0}^{2}(1-c)-1}{2a_{\textrm{A}}(\nu_{0})(1-c)(\nu_{0}^{2}-1)}}\\ X(-\nu_{0}) & = & \frac{1}{\nu_{0}}\sqrt{\frac{a_{\textrm{A}}(\nu_{0})\left(\nu_{0}^{2}(1-c)-1\right)}{2(1-c)(\nu_{0}^{2}-1)}}.\end{aligned}$$ This brief introduction to the singular eigenfunction method should convince the reader of the great difficulties associated with half-space, half-range methods in particle transport theory; note that the $X$-functions in the coefficients above must be obtained from numerically computed tables. In contrast, full-range methods are more direct due to the simplicity of the weight function $\mu$, which suggests the full-range formulation of half-range problems presented in Sec. 5. Finally it should be mentioned that this singular eigenfunction method is based on the theory of singular integral equations. **Acknowledgment** It is a pleasure to thank the referees for recommending an enlarged Tutorial and Review revision of the original submission *Graphical Convergence, Chaos and Complexity*, and the Editor Professor Leon O Chua for suggesting a pedagogically self-contained, jargonless, no-page-limit version accessible to a wider audience for the present form of the paper. Financial assistance during the initial stages of this work from the National Board for Higher Mathematics is also acknowledged.
[^1]: \[Foot: UNConf\]A partial listing of papers is as follows: *Chaos and Politics: Application of nonlinear dynamics to socio-political issues; Chaos in Society: Reflections on the impact of chaos theory on sociology; Chaos in neural networks; The impact of chaos on mathematics; The impact of chaos on physics; The impact of chaos on economic theory; The impact of chaos on engineering; The impact of chaos on biology; Dynamical disease:* and *The impact of nonlinear dynamics and chaos on cardiology and medicine.* [^2]: \[Foot: ScienceMag\]The eight Viewpoint articles are titled: *Simple Lessons from Complexity; Complexity in Chemistry; Complexity in Biological Signaling Systems; Complexity and the Nervous System; Complexity, Pattern, and Evolutionary Trade-Offs in Animal Aggregation; Complexity in Natural Landform Patterns; Complexity and Climate* and *Complexity and the Economy*. [^3]: [\[Foot: reln&graph\]We do not distinguish between a relation and its graph although technically they are different objects. Thus although a functional relation, strictly speaking, is the triple $(X,f,Y)$ written traditionally as $f\!:X\rightarrow Y$, we use it synonymously with the graph $f$ itself. Parenthetically, the word]{} *functional* [in this work is not necessarily employed for a scalar-valued function, but is used in a wider sense to distinguish between a function and an arbitrary relation (that is a multifunction). Formally, whereas an arbitrary relation from $X$ to $Y$ is a subset of $X\times Y$, a functional relation must satisfy an additional restriction that requires $y_{1}=y_{2}$ whenever $(x,y_{1})\in f$ and $(x,y_{2})\in f$. In this subset notation, $(x,y)\in f\Leftrightarrow y=f(x)$. 
]{} [^4]: [\[Foot: EquivRel\]A useful alternate way of expressing these properties for a relation $\mathscr{M}$ on $X$ is]{} [$\quad$(ER2) $\mathscr{M}$ is symmetric iff $\mathscr{M}=\mathscr{M}^{-}$ ]{} [$\quad$(ER3) $\mathscr{M}$ is transitive iff $\mathscr{M}\circ\mathscr{M}\subseteq\mathscr{M}$, ]{} [with $\mathscr{M}$ an equivalence relation only if $\mathscr{M}\circ\mathscr{M}=\mathscr{M}$, where for $\mathscr{M}\subseteq X\times Y$ and $\mathscr{N}\subseteq Y\times Z$, the composition $\mathscr{N}\circ\mathscr{M}:=\{(x,z)\in X\times Z\!:(\exists y\in Y)\textrm{ }((x,y)\in\mathscr{M})\wedge((y,z)\in\mathscr{N})\}$.]{} [^5]: [\[Foot: family\]A function $\chi\!:\mathbb{D}\rightarrow X$ will be called a]{} *family* [in $X$ indexed by $\mathbb{D}$ when reference to the domain $\mathbb{D}$ is of interest, and a]{} *net* [when it is required to focus attention on its values in $X$.]{} [^6]: [\[Foot: extension\]Observe that it is]{} *not* [being claimed that $f$ belongs to the same class as $(f_{k})$. This is the single most important cornerstone on which this paper is based: the need to “complete” spaces that are topologically “incomplete”. The classical high-school example of the related problem of having to enlarge, or extend, spaces that are not big enough is the solution space of algebraic equations with real coefficients like $x^{2}+1=0$. ]{} [^7]: [\[Foot: support\]By definition, the support (or supporting interval) of $\varphi(x)\in\mathcal{C}_{0}^{\infty}[\alpha,\beta]$ is $[\alpha,\beta]$ if $\varphi$ and all its derivatives vanish for $x\leq\alpha$ and $x\geq\beta$.
]{} [^8]: [\[Foot: integral\]Both Riemann and Lebesgue integrals can be formulated in terms of the so-called]{} *step functions* [$s(x)$, which are piecewise constant functions with values $(\sigma_{i})_{i=1}^{I}$ on a finite number of bounded subintervals $(J_{i})_{i=1}^{I}$ (which may reduce to a point or may not contain one or both of the end-points) of a bounded or unbounded interval $J$, with integral $\int_{J}s(x)dx\overset{\textrm{def}}=\sum_{i=1}^{I}\sigma_{i}|J_{i}|$. While the Riemann integral of a bounded function $f(x)$ on a bounded interval $J$ is defined with respect to sequences of step functions $(s_{j})_{j=1}^{\infty}$ and $(t_{j})_{j=1}^{\infty}$ satisfying $s_{j}(x)\leq f(x)\leq t_{j}(x)$ on $J$ with $\int_{J}(s_{j}-t_{j})\rightarrow0$ as $j\rightarrow\infty$ as $R\int_{J}f(x)dx=\lim\int_{J}s_{j}(x)dx=\lim\int_{J}t_{j}(x)dx$, the less restrictive Lebesgue integral is defined for arbitrary functions $f$ over bounded or unbounded intervals $J$ in terms of Cauchy sequences of step functions $\int_{J}|s_{i}-s_{k}|\rightarrow0$, $i,k\rightarrow\infty$, converging to $f(x)$ as $$s_{j}(x)\rightarrow f(x)\textrm{ pointwise almost everywhere on }J,$$ ]{} [to be $$\int_{J}f(x)dx\overset{\textrm{def}}=\lim_{j\rightarrow\infty}\int_{J}s_{j}(x)dx.$$ ]{} [That the Lebesgue integral is more general (and therefore is the proper candidate for completion of function spaces) is illustrated by the example of the function defined over $[0,1]$ to be $0$ on the rationals and $1$ on the irrationals, for which an application of the definitions verifies that whereas the Riemann integral is undefined, the Lebesgue integral exists and has value $1$. When the Riemann integral of a bounded function over a bounded interval exists, it is equal to its Lebesgue integral. Because it involves a larger family of functions, all integrals in integral convergences are to be understood in the Lebesgue sense.
]{} [^9]: [\[Foot: delta\]The observant reader cannot have failed to notice how mathematical ingenuity successfully transferred the “troubles” of $(\delta_{k})_{k=1}^{\infty}$ to the sufficiently differentiable benevolent receptor $\varphi$ so as to be able to work backward, via the resultant trouble free $(\delta_{k}^{(-m)})_{k=1}^{\infty}$, to the final object $\delta$. This necessarily hides the true character of $\delta$ to allow only a view of its integral manifestation on functions. This unfortunately is not general enough in the strongly nonlinear physical situations responsible for chaos, and is the main reason for constructing the multifunctional extension of function spaces that we use. ]{} [^10]: [\[Foot: cont=bound\]Recall that for a linear operator continuity and boundedness are equivalent concepts. ]{} [^11]: [\[Foot: OrthoMatrix\]A real matrix $A$ is an orthogonal projector iff $A^{2}=A$ and $A=A^{\textrm{T}}$. ]{} [^12]: [\[Foot: class\]In this sense, a]{} *class* [is a set of sets. ]{} [^13]: [\[Foot: interval\]By definition, an interval $I$ in a totally ordered set $X$ is a subset of $X$ with the property $$(x_{1},x_{2}\in I)\wedge(x_{3}\in X\!:x_{1}\prec x_{3}\prec x_{2})\Longrightarrow x_{3}\in I$$ ]{} [so that any element of $X$ lying between two elements of $I$ also belongs to $I$.]{} [^14]: [\[Foot: entropy\]Although we do not pursue this point of view here, it is nonetheless tempting to speculate that the answer to the question]{} *“Why* [does the entropy of an isolated system increase?” may be found by exploiting this line of reasoning that seeks to explain the increase in terms of a visible component associated with the usual topology as against a different latent workplace topology that governs the dynamics of nature.]{} [^15]: [\[Foot: subspace\]In a subspace $A$ of $X$, a subset $U_{A}$ of $A$ is open iff $U_{A}=A\bigcap U$ for some open set $U$ of $X$.
The notion of subspace topology can be formalized with the help of the inclusion map $i\!:A\rightarrow(X,\mathcal{U})$ that puts every point of $A$ back to where it came from, thus $$\begin{array}{ccl} \mathcal{U}_{A} & = & \{ U_{A}=A\bigcap U\!:U\in\mathcal{U}\}\\ & = & \{ i^{-}(U)\!:U\in\mathcal{U}\}.\end{array}$$ ]{} [^16]: [\[Foot: assoc&embed\]A surjective function is an]{} *association* [iff it is image continuous and an injective function is an]{} *embedding* [iff it is preimage continuous. ]{} [^17]: [\[Foot: 0=phi\]If $y\notin\mathcal{R}(f)$ then $f^{-}(\{ y\}):=\emptyset$ which is true for any subset of $Y-\mathcal{R}(f)$. However from the set-theoretic definition of natural numbers that requires $0:=\emptyset$, $1=\{0\}$, $2=\{0,1\}$ to be defined recursively, it follows that $f^{-}(y)$ can be identified with $0$ whenever $y$ is not in the domain of $f^{-}$. Formally, the successor set $A^{+}=A\bigcup\{ A\}$ of $A$ can be used to write $0:=\emptyset$, $1=0^{+}=0\bigcup\{0\}$, $2=1^{+}=1\bigcup\{1\}=\{0\}\bigcup\{1\}$, $3=2^{+}=2\bigcup\{2\}=\{0\}\bigcup\{1\}\bigcup\{2\}$ etc. Then the set of natural numbers $\mathbb{N}$ is defined to be the intersection of all the successor sets, where a successor set $\mathcal{S}$ is any set that contains $\emptyset$ and $A^{+}$ whenever $A$ belongs to $\mathcal{S}$. Observe how in the successor notation, the countable union of singleton integers recursively defines the corresponding sum of integers. ]{} [^18]: [See footnote \[Foot: 0=phi\] for a justification of the definition when $b$ is not in $\mathcal{R}(a)$.]{} [^19]: [\[Foot: subnet\]A subnet is the generalized uncountable equivalent of a subsequence; for the technical definition, see Appendix A1. ]{} [^20]: [\[Foot: point\_inter\]Equation (\[Eqn: func\_bi\]) is essentially the intersection of the pointwise topologies (\[Eqn: point\]) due to $f$ and $f^{-}$.
]{} [^21]: [\[Foot: strict reln\]If $\preceq$ is an order relation in $X$ then the]{} *strict relation $\prec$ in $X$* [corresponding to $\preceq$, given by $x\prec y\Leftrightarrow(x\preceq y)\wedge(x\neq y)$,]{} *is not an order relation* [because unlike $\preceq$, $\prec$ is not reflexive even though it is both transitive and asymmetric.]{} [^22]: [\[Foot: infinite\]This makes $T$, and hence $X$, inductively defined infinite sets. It should be realized that (ST3)]{} *does not mean* [that every member of $T$ is obtained from $g$, but only ensures that the immediate successor of any element of $T$ is also in $T$. The infimum $_{\rightarrow}T$ of these towers satisfies the additional property of being totally ordered (and is therefore essentially a sequence or net) in $(X,\preceq)$ to which (ST2) can be applied. ]{} [^23]: [\[Foot: Hausdorff\]Recall that this means that if there is a totally ordered chain $C$ in $(X,\preceq)$ that succeeds $C_{+}$, then $C$ must be $C_{+}$ so that no chain in $X$ can be strictly larger than $C_{+}$. The notation adopted here and below is the following: If $X=\{ x,y\}$ is a non-empty set, then $\mathcal{X}:=\mathcal{P}(X)=\{ A\!:A\subseteq X\}=\{\emptyset,\{ x\},\{ y\},\{ x,y\}\}$ is the set of subsets of $X$, and $\mathfrak{X}:=\mathcal{P}^{2}(X)=\{\mathcal{A}:\mathcal{A}\subseteq\mathcal{X}\}$, the set of all subsets of $\mathcal{X}$, consists of the $16$ elements $\emptyset$, $\{\emptyset\}$, $\{\{ x\}\}$, $\{\{ y\}\}$, $\{\{ x,y\}\}$, $\{\{\emptyset\},\{ x\}\}$, $\{\{\emptyset\},\{ y\}\}$, $\{\{\emptyset\},\{ x,y\}\}$, $\{\{ x\},\{ y\}\}$, $\{\{ x\},\{ x,y\}\}$, $\{\{ y\},\{ x,y\}\}$, $\{\{\emptyset\},\{ x\},\{ y\}\}$, $\{\{\emptyset\},\{ x\},\{ x,y\}\}$, $\{\{\emptyset\},\{ y\},\{ x,y\}\}$, $\{\{ x\},\{ y\},\{ x,y\}\}$, and $\mathcal{X}$: an element of $\mathcal{P}^{2}(X)$ is a subset of $\mathcal{P}(X)$, any element of which is a subset of $X$.
Thus if $C=\{0,1,2\}$ is a chain in $(X=\{0,1,2\},\leq)$, then $\mathcal{C}=\{\{0\},\{0,1\},\{0,1,2\}\}\subseteq\mathcal{P}(X)$ and $\mathfrak{C}=\{\{\{0\}\},\{\{0\},\{0,1\}\},\{\{0\},\{0,1\},\{0,1,2\}\}\}\subseteq\mathcal{P}^{2}(X)$ represent chains in $(\mathcal{P}(X),\subseteq)$ and $(\mathcal{P}^{2}(X),\subseteq)$ respectively. ]{} [^24]: [\[Foot: supremum\]A similar situation arises in the following more intuitive example. Although the subset $A=\{1/n\}_{n\in Z_{+}}$ of the interval $I=[-1,1]$ has no smallest or minimal element, it does have the infimum 0. Likewise, although $A$ is bounded below by any element of $[-1,0)$, it has no greatest lower bound in $[-1,0)\bigcup(0,1]$. ]{} [^25]: [\[Foot: omega-limit\]How does this happen for $A=\{ f^{i}(x_{0})\}_{i\in\mathbb{N}}$ that is not the constant sequence $(x_{0})$ at a fixed point? As $i\in\mathbb{N}$ increases, points are added to $\{ x_{0},f(x_{0}),\cdots,f^{I}(x_{0})\}$ not, as would be the case in a normal sequence, as a piled-up Cauchy tail, but as points generally lying between those already present; recall a typical graph as of Fig. \[Fig: tent4\] for example.]{} [^26]: \[Foot: gen\_eigen\][The technical definition of a generalized eigenvalue is as follows. Let $\mathcal{L}$ be a linear operator such that there exists in the domain of $\mathcal{L}$ a sequence of elements $(x_{n})$ with $\Vert x_{n}\Vert=1$ for all $n$. If $\lim_{n\rightarrow\infty}\Vert(\mathcal{L}-\lambda)x_{n}\Vert=0$ for some $\lambda\in\mathbb{C}$, then this $\lambda$ is a]{} *generalized eigenvalue* [of $\mathcal{L}$, the corresponding eigenfunction $x_{\infty}$ being a]{} *generalized eigenfunction.* [^27]: \[Foot: cluster\]This is also known as a *cluster point*; we shall, however, use this new term exclusively in the sense of the elements of a derived set, see Definition 2.3.
[^28]: \[Foot: Filter\_conv\][The restatement $$\mathcal{F}\rightarrow x\Longleftrightarrow\mathcal{N}_{x}\subseteq\mathcal{F}\label{Eqn: Def: LimFilter}$$ of Eq. (\[Eqn: lim filter\]) that follows from (F3), and sometimes taken as the definition of convergence of a filter, is significant as it ties up the algebraic filter with the topological neighbourhood system to produce the filter theory of convergence in topological spaces. From the defining properties of $\mathcal{F}$ it follows that for each $x\in X$, $\mathcal{N}_{x}$ is the coarsest (that is smallest) filter on $X$ that converges to $x$.]{} [^29]: \[Foot: adh\_seq\][In a first countable space, while the corresponding proof of the first part of the theorem for sequences is essentially the same as in the present case, the more direct proof of the converse illustrates how the convenience of nets and directed sets may require more general arguments. Thus if a sequence $(x_{i})_{i\in\mathbb{N}}$ has a subsequence $(x_{i_{k}})_{k\in\mathbb{N}}$ converging to $x$, then a more direct line of reasoning proceeds as follows. Since the subsequence converges to $x$, its tail $(x_{i_{k}})_{k\geq j}$ must be in every neighbourhood $N$ of $x$. But as the number of such terms is infinite whereas $\{ i_{k}\!:k<j\}$ is only finite, it is necessary that for any given $n\in\mathbb{N}$, cofinitely many elements of the sequence $(x_{i_{k}})_{i_{k}\geq n}$ be in $N$. Hence $x\in\textrm{adh}((x_{i})_{i\in\mathbb{N}})$. ]{} [^30]: \[Foot: seq xxx\][This is uncountable because interchanging any two eventual terms of the sequence does not alter the sequence. 
]{} [^31]: [Note that $\{ x\}$ is a $1$-point set but $(x)$ is an uncountable sequence.]{} [^32]: \[Foot: e&q\][We adopt the convention of denoting arbitrary preimage and image continuous functions by $e$ and $q$ respectively even though they need not be injective or surjective; recall that the embedding $e\!:X\supseteq A\rightarrow Y$ and the association $q\!:X\rightarrow f(X)$ are $1:1$ and onto respectively. ]{} [^33]: \[Foot: fil-nbd\][This is of course a triviality if we identify each $\chi(\mathbb{R}_{\beta})$ (or $F$ in the proof of the converse that follows) with a neighbourhood $N$ of $X$ that generates a topology on $X$.]{} [^34]: **Nested-set theorem.** *If $(E_{n})$ is a decreasing sequence of nonempty, closed subsets of a complete metric space $(X,d)$ such that* [$\lim_{n\rightarrow\infty}\textrm{dia}(E_{n})=0$]{}*, then there is a unique point* [$$x\in\bigcap_{n=0}^{\infty}E_{n}.$$ The uniqueness arises because the limiting condition on the diameters of $E_{n}$ implies, from property (H1), that $(X,d)$ is a Hausdorff space. ]{}
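The nested-set theorem of footnote 34 can be illustrated at finite stages. In the Python sketch below the choice $E_{n}=[0,1/n]$ in $\mathbb{R}$ and its encoding as endpoint pairs are my own illustrative assumptions: the finite truncations of the intersection shrink to $[0,1/N]$, and $x=0$ is the unique point common to every $E_{n}$ in the limit.

```python
from fractions import Fraction

def E(n):
    # E_n = [0, 1/n]: a decreasing sequence of nonempty closed intervals
    # in the complete metric space R, with dia(E_n) = 1/n -> 0.
    return (Fraction(0), Fraction(1, n))

def finite_intersection(intervals):
    # Intersection of closed intervals: [max of lows, min of highs].
    lows = [a for a, _ in intervals]
    highs = [b for _, b in intervals]
    return (max(lows), min(highs))

lo, hi = finite_intersection([E(n) for n in range(1, 1001)])
assert (lo, hi) == (0, Fraction(1, 1000))  # truncation at N=1000 gives [0, 1/N]
# x = 0 lies in every E_n; as N grows the interval collapses onto it.
assert all(a <= 0 <= b for a, b in (E(n) for n in range(1, 1001)))
```

Only the infinite intersection collapses to the single point; the code exhibits the shrinking finite stages that the theorem's hypothesis controls.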
A cast-iron sculpture by renowned artist Antony Gormley is to remain in place permanently after it was bought and granted planning permission. The abstract human form looks out over the Kilbrannan Sound to Arran from the rocks below Saddell Castle in Kintyre. Gormley, who is most famous for the Angel of the North, made the sculpture in 2015 to celebrate 50 years of the Landmark Trust. It was one of five placed at trust properties around the UK. The life-sized figures, together known as Land, were originally to have remained in place until May 2016. The other four were removed as planned last year. They have been returned to the artist who will use them for future projects. The Kintyre sculpture, called Grip, is the only one to get a permanent home. It has been purchased for the trust by an anonymous private donor for an undisclosed sum. It has been granted planning permission by Argyll and Bute Council. Gormley said: "There is an excitement about making a sculpture that can live out here amongst the waves and the wind, the rain and snow, in night and day. "The sculpture is like a standing stone, a marker in space and time, linking with a specific place and its history but also looking out towards the horizon, having a conversation with a future that hasn't yet happened." Caroline Stanford, who managed the Land installation, said: "Grip's human scale and magical setting make it a deeply moving work by one of this generation's finest artists. "We are so grateful to our wonderful donor for enabling it to stay in Scotland for good." The Landmark Trust has owned Saddell Bay since 1975. It has restored each of the six buildings on the bay and they are available for self-catering holidays.
The five locations for the Land sculptures were: Clavell Tower, Kimmeridge Bay, Dorset; Lengthsman's Cottage, Lowsonford, Warwickshire; Lundy Island, Bristol Channel; Martello Tower, Aldeburgh, Suffolk; and Saddell Bay, Mull of Kintyre.
Social experiment - single thoughts on programming that made an impact - RiderOfGiraffes With the indulgence of the regulars, I'd like to try an experiment. Recently, as a result of an item here, I read an article from which a single line has stuck. It wasn't the most profound, it wasn't the most amusing, it wasn't even necessarily the most valuable, but it's one that has immediately made me think differently, and will be with me a long time.<p>I'd like to create a collection of such things. I'm pretty sure things that affect you guys will be of varying value, but if we make a list and then let them float up and down as they get modded, maybe we'll get something interesting.<p>Replies are discouraged. These will not be universal truths, they will not be unarguable, but I suggest that this is not the place. Perhaps if you <i>really</i> disagree you can blog about it and submit a link. But perhaps not here - perhaps as a main item.<p>Worth a try? Who will play along? I'll start ... ====== RiderOfGiraffes You can't make your program run faster, you can only make it do less. ------ anamax Almost any problem can be solved by adding a level of indirection. Almost every program can be sped up by removing a level of indirection. ------ reddiar The hardest bugs are those where your mental model of the situation is just wrong, so you can't see the problem at all : B. Kernighan ------ bayareaguy The cheapest, fastest and most reliable components of a computer system are those that aren't there. -- Gordon Bell ------ anamax Someone reading/modifying a program is never as smart as the person who wrote it, even if they're the same person. ------ anamax If you don't know the tradeoffs that you're making, how do you know that you're making the right ones? ------ asimjalis Make it beautiful even if you are just hacking together a quick spike. ------ hboon Make it run, make it right, make it fast. In that order. ------ anamax All programming is an exercise in caching.
Spiritual Emergence or Crisis A spiritual or paranormal emergence or crisis generally arises from an experience, contemplation or situation that does not fit into your present belief system and that you find difficult to assimilate or understand. It is sometimes brought about by extreme cases of trauma, major surgery, or accidents including near death or out of body experiences that you cannot explain. It can be mis-diagnosed as a mental health issue associated with paranormal, mystical or psychological disturbances. Had experiences you cannot understand or explain?… feel like your whole world has been turned upside down and you don’t know where to turn?…… come chat to us, we understand, appreciate, acknowledge and support this process. We also assist with negative energy release, spirit release and soul fragmentation release and retrieval programs. WHY CHOOSE THE HELP CLINIC? Most medical professionals and therapists will deny the existence, validity, truth or reality of such things as energy medicine, consciousness of energy, psychic ability, precognition, past lives, telepathy, reincarnation, the existence of spirit or soul, mystical experiences, astral and etheric bodies, the chakras, highly intuitive children & adults, near death experiences, auras or the validity of meditation and other spiritual practices, to name a few. We are a group of practitioners who will attest to all of the above and more. Through expansive research, bearing witness to personal experiences, and years of training and client counselling, our healing team are not your average western practitioners; we are a new breed of therapists bridging the gap between objective science and inner wisdom…………we are The HELP Clinic! Don’t take our word for it…….you need to see our results to believe them. Our ‘healing & counselling rooms’ are beautiful spaces filled with healing Reiki energy and our practitioners are the best in the field.
If you’re looking for mainstream, traditional practitioners you won’t find them here! What you will find is true support towards complete wellness, balance, vitality, wellbeing, health, healing and harmony, with results that are extraordinary.
6 Types Of Social Media Scams And How to Avoid Them With more than 60 million active social media users as of last year logging an average daily use of 4.17 hours, Filipinos spend more time on social media than anyone else in the world, according to We Are Social’s Digital in 2017 Southeast Asia report. This fondness of Filipinos for social media has resulted in cybercriminals exploiting social media platforms to prey on unsuspecting netizens. However, users of these sites remain careless, making them highly vulnerable. This was proven by a study conducted by Kaspersky Lab last year which revealed that the majority of Filipino internet users are at average risk of online attacks. The research also showed only 1 out of 10 netizens (11%) can identify a safe Facebook web page. Facebook is the top used social media site in the country. “While social networking sites appear like a safe online playground for millions of Filipinos, we would like to remind them that cybercriminals are lurking on the other side of the screen waiting for their next victim. The prevalence of scams in social media should serve as warning alarms for Filipinos to take their online security seriously,” says Sylvia Ng, General Manager at Kaspersky Lab Southeast Asia. As security is a two-pronged process requiring both an effective security solution and users’ cyber-savviness, here are the known social media scams plus tips from Kaspersky Lab on how to avoid them: 1. Scam: Mutual connection In this scam, a stranger contacts you through social channels and claims a common interest or a mutual connection, for example, from an introduction at a wedding or large gathering. If you post a lot of pictures and haven’t updated your privacy settings, it’s easy for cybercriminals to make some educated guesses about how to best approach you. Tip: If you receive such a claim, dismiss the conversation. Don’t provide further personal details and don’t add that person as a friend.
Also, update your privacy settings to share your photos and posts only with people you really know. 2. Scam: Message from a friend This scam appears as a private message from your friend. Attackers might have already accessed your friend’s credentials and forwarded them to a third party which can then use them to send spam to you and others. Sending spam from real accounts works better for cybercriminals than setting up false accounts because people are more likely to trust a message from one of their social media friends. They are more likely to click on suspicious links or to open questionable messages than they would if the message looked like it was coming directly from, say, a bank. Tip: If you start to get suspicious of social media messages from your friends, notify them immediately (but not by responding to any of those suspicious messages) that their accounts have likely been hacked. If you are redirected to a new page when you open the message, check the URL of this page. If it isn’t in line with where you expected to be sent, leave immediately. 3. Scam: Bogus password reset requests A user might, for instance, get an email that has all of the themes and imagery of a typical message from a social media account, except this email will tell the user they need to reset their password and will offer that user a login prompt to do so. The user clicks on the prompt, is directed to a fake webpage that looks like the social media site, and then the user enters their login and password. Just like that, the phishing attack has succeeded. Tip: Compare the address of the sender to the address that usually appears when you get an email from this person or organization — if they don’t match, it’s probably a fake. Look for telltale signs of forgery in emails that request personal information – spelling errors are immediate red flags.
If the prompt to a webpage to enter your data has a URL that is different from the site you expected to be going to, that is a sure sign of a phishing attack. 4. Scam: 18+ Video and Malicious extension The scammer starts by hijacking several social media accounts. On their behalf, the criminal shares a post with a link to something that is supposed to be a YouTube video suitable for adults only. The bad guys also tag about a dozen friends of each of those accounts. The video would not play, and the page would suggest that you install a browser extension in order to play it. When installed, that extension steals your data because it has access to all the data the user inputs in the browser, including your logins, passwords, and credit card information — as soon as they type it in on some site. The other thing it does is post the same link to the same video on the victim’s social media page, such as Facebook, thus continuing to spread the malware. Tip: If your friend wanted you to click on a link, he would surely give you a better description as to why you should click. Either do not click on the link, or click and be extremely cautious about what you do next. Do not install extensions with no description, no screenshots and no rating, and get rid of any that are already installed. 5. Scam: Trending topics Twitter created the concept of “trending” topics, and hashtags are the medium for labeling content to increase its popularity. However, there are users who hijack trending topics to lead to content that masquerades as relevant to the topic, but instead includes a link that leads to offensive or harmful web pages. Beware, because whether it’s the latest celebrity buzz or a major tragedy in the news, trolls are particularly effective at doing this because their posts during sensitive times inflame readers — tweets mocking victims of school shootings, for instance — and by outraging people can entice them to click through to bad content. Tip: Don’t feed the trolls and just ignore or report them.
Whether they are bullies or spammers, sooner or later you’re going to end up with unwanted and potentially malicious followers. Periodically scroll through your list of followers and block them to prevent them from seeing your updates. 6. Scam: Calls for help Scammers often trick victims with shocking stories about dying babies, drowning puppies, or struggling veterans. Such posts travel around social networks disguised as calls for help and generate a lot of reposts, but a large proportion of them are scams. In fact, they are used for financial theft, phishing, and spreading malware. Real calls for help are usually created by your family, friends, and friends of your friends. Tip: Be vigilant and do a check on each post before clicking its “Like” or “Share” buttons. Don’t want to check each and every post of this kind? Then don’t click on it at all — don’t risk turning yourself and your friends into scam victims. Most importantly, ensure that your web browser, antivirus, and all software programs on your computer are always updated to the latest versions that have the latest security patches.
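The URL-checking advice in tips 2 and 3 can be partly mechanized. The Python sketch below is an illustrative heuristic only, not a feature of any security product; the allow-list of expected domains and the helper name are assumptions for the example:

```python
from urllib.parse import urlparse

# Illustrative allow-list: domains the user actually expects (assumption).
EXPECTED = {"facebook.com", "twitter.com"}

def looks_suspicious(url, expected=EXPECTED):
    """Flag a link whose host is neither an expected domain nor a
    subdomain of one (catches lookalike hosts like facebook.com.evil.example)."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in expected)

assert not looks_suspicious("https://www.facebook.com/settings")
assert looks_suspicious("http://facebook.com.evil.example/login")  # lookalike host
assert looks_suspicious("https://faceb00k.com/reset")              # typo-squat
```

Note that matching on the registered domain rather than a raw suffix is what defeats the common trick of prefixing the trusted name to an attacker-controlled host.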
As is generally known, food and beverages are offered and served to air passengers during flights. The food service items are generally cold-stored in catering transport containers, i.e. so-called trolleys, from which the food items are usually also served. Except during the actual service periods, the food service trolleys are stowed in a galley, and are further cooled by corresponding cooling arrangements. The total number of galley areas or stowage areas for food service trolleys provided in an aircraft is essentially dependent upon the number of passengers and the particular intended utilization of the aircraft, for example, for long intercontinental flights or alternatively for short haul flights. The galleys are typically arranged at various locations within the aircraft cabin in such a manner that the distribution of meals and beverages to passengers can be achieved in the shortest amount of time and with the shortest involved transport distance. A known arrangement for cooling each individual food service trolley is to have cooling air inlets and outlets on each trolley, which are supplied with cold air produced by a compression air chiller plant, for example. Typically in the known art, an autonomous cooling plant using cold air as a cooling medium and having its own compression cooling machine, such as an air chiller plant, is provided for each individual galley on the aircraft. It is also known to provide the cool air by using heat exchangers in direct thermal communication with the outer skin of the aircraft fuselage to take advantage of the cold temperatures of the environment at cruising altitudes of the aircraft. Thus, in the known arrangements, the food service trolleys that are to be cooled are stowed in galley areas directly proximate to the cooling plants. Each respective cooling plant and the associated cooling medium conduits for each galley are rigidly and permanently installed in proximity to the respective galley. 
Such an arrangement entails a great redundant weight and a large space requirement, and produces additional undesirable heat and noise in the aircraft cabin. The prior art does not allow a flexible rearrangement of the galleys within the aircraft cabin and therewith a rapid reconfiguring of the cabin space for various applications of the aircraft due to the fixed and permanent arrangement and high space requirement of the multiple cooling plants.
/* -------------------------------------------------------------------------- *
 *              OpenSim:  ExpressionBasedPointToPointForce.cpp                *
 * -------------------------------------------------------------------------- *
 * The OpenSim API is a toolkit for musculoskeletal modeling and simulation.  *
 * See http://opensim.stanford.edu and the NOTICE file for more information.  *
 * OpenSim is developed at Stanford University and supported by the US        *
 * National Institutes of Health (U54 GM072970, R24 HD065690) and by DARPA    *
 * through the Warrior Web program.                                           *
 *                                                                            *
 * Copyright (c) 2005-2017 Stanford University and the Authors                *
 * Author(s): Ajay Seth                                                       *
 *                                                                            *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may    *
 * not use this file except in compliance with the License. You may obtain a  *
 * copy of the License at http://www.apache.org/licenses/LICENSE-2.0.         *
 *                                                                            *
 * Unless required by applicable law or agreed to in writing, software        *
 * distributed under the License is distributed on an "AS IS" BASIS,          *
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   *
 * See the License for the specific language governing permissions and       *
 * limitations under the License.                                             *
 * -------------------------------------------------------------------------- */

//=============================================================================
// INCLUDES
//=============================================================================
#include "ExpressionBasedPointToPointForce.h"
#include <OpenSim/Simulation/Model/Model.h>
#include <lepton/Parser.h>
#include <lepton/ParsedExpression.h>

using namespace OpenSim;
using namespace std;

//=============================================================================
// CONSTRUCTOR(S) AND DESTRUCTOR
//=============================================================================
//_____________________________________________________________________________
// Default constructor.
ExpressionBasedPointToPointForce::ExpressionBasedPointToPointForce()
{
    setNull();
    constructProperties();
}

//_____________________________________________________________________________
// Convenience constructor for API users.
ExpressionBasedPointToPointForce::ExpressionBasedPointToPointForce(
        const string& body1Name, const SimTK::Vec3& point1,
        const string& body2Name, const SimTK::Vec3& point2,
        const string& expression)
{
    setNull();
    constructProperties();

    // Set properties to the passed-in values.
    setBody1Name(body1Name);
    setBody2Name(body2Name);
    setPoint1(point1);
    setPoint2(point2);
    setExpression(expression);
}

// Set the expression for the force function and create its lepton program.
void ExpressionBasedPointToPointForce::setExpression(const string& expression)
{
    set_expression(expression);
}

//=============================================================================
// CONSTRUCTION
//=============================================================================
//_____________________________________________________________________________
/**
 * Set the data members of this force to their null values.
 */
void ExpressionBasedPointToPointForce::setNull()
{
    setAuthors("Ajay Seth");
}

//_____________________________________________________________________________
/**
 * Construct properties and initialize to their default values.
 */
void ExpressionBasedPointToPointForce::constructProperties()
{
    constructProperty_body1();
    constructProperty_body2();

    const SimTK::Vec3 bodyOrigin(0.0, 0.0, 0.0);
    constructProperty_point1(bodyOrigin);
    constructProperty_point2(bodyOrigin);

    std::string zero = "0.0";
    constructProperty_expression(zero);
}

//=============================================================================
// Connect this force element to the rest of the model.
//=============================================================================
void ExpressionBasedPointToPointForce::extendConnectToModel(Model& model)
{
    Super::extendConnectToModel(model); // Let base class connect first.

    // Look up the two bodies being connected by bushing by name in the
    // model. TODO: use Sockets
    const string& body1Name = getBody1Name();
    const string& body2Name = getBody2Name();

    if (getModel().hasComponent(body1Name))
        _body1 = &(getModel().getComponent<PhysicalFrame>(body1Name));
    else
        _body1 = &(getModel().getComponent<PhysicalFrame>(
                "./bodyset/" + body1Name));

    if (getModel().hasComponent(body2Name))
        _body2 = &(getModel().getComponent<PhysicalFrame>(body2Name));
    else
        _body2 = &(getModel().getComponent<PhysicalFrame>(
                "./bodyset/" + body2Name));

    if (getName() == "")
        setName("expressionP2PForce_" + body1Name + "To" + body2Name);

    string& expression = upd_expression();
    expression.erase(
            remove_if(expression.begin(), expression.end(), ::isspace),
            expression.end());

    _forceProg = Lepton::Parser::parse(expression).optimize().createProgram();
}

//=============================================================================
// Create the underlying system component(s)
//=============================================================================
void ExpressionBasedPointToPointForce::
extendAddToSystem(SimTK::MultibodySystem& system) const
{
    Super::extendAddToSystem(system); // Base class first.

    this->_forceMagnitudeCV = addCacheVariable("force_magnitude", 0.0,
                                               SimTK::Stage::Velocity);

    // Beyond the const Component, get access to underlying SimTK elements.
    ExpressionBasedPointToPointForce* mutableThis =
            const_cast<ExpressionBasedPointToPointForce*>(this);

    // Get underlying mobilized bodies.
    mutableThis->_b1 = &_body1->getMobilizedBody();
    mutableThis->_b2 = &_body2->getMobilizedBody();
}

//=============================================================================
// Computing
//=============================================================================
// Compute and apply the force.
void ExpressionBasedPointToPointForce::computeForce(const SimTK::State& s,
        SimTK::Vector_<SimTK::SpatialVec>& bodyForces,
        SimTK::Vector& generalizedForces) const
{
    using namespace SimTK;

    const Transform& X_GB1 = _b1->getBodyTransform(s);
    const Transform& X_GB2 = _b2->getBodyTransform(s);

    const Vec3 s1_G = X_GB1.R() * getPoint1();
    const Vec3 s2_G = X_GB2.R() * getPoint2();

    const Vec3 p1_G = X_GB1.p() + s1_G; // point measured from ground origin
    const Vec3 p2_G = X_GB2.p() + s2_G;

    const Vec3 r_G = p2_G - p1_G; // vector from point1 to point2
    const double d = r_G.norm();  // distance between the points

    const Vec3 v1_G = _b1->findStationVelocityInGround(s, getPoint1());
    const Vec3 v2_G = _b2->findStationVelocityInGround(s, getPoint2());
    const Vec3 vRel = v2_G - v1_G; // relative velocity

    // speed along the line connecting the two bodies
    const double ddot = dot(vRel, r_G)/d;

    std::map<std::string, double> forceVars;
    forceVars["d"] = d;
    forceVars["ddot"] = ddot;

    double forceMag = _forceProg.evaluate(forceVars);
    setCacheVariableValue(s, _forceMagnitudeCV, forceMag);

    const Vec3 f1_G = (forceMag/d) * r_G;

    bodyForces[_b1->getMobilizedBodyIndex()] += SpatialVec(s1_G % f1_G, f1_G);
    bodyForces[_b2->getMobilizedBodyIndex()] -= SpatialVec(s2_G % f1_G, f1_G);
}

// Get the force magnitude that has already been computed.
const double& ExpressionBasedPointToPointForce::
getForceMagnitude(const SimTK::State& s)
{
    return getCacheVariableValue(s, _forceMagnitudeCV);
}

//=============================================================================
// Reporting
//=============================================================================
// Provide names of the quantities (column labels) of the force value(s)
// reported.
OpenSim::Array<std::string> ExpressionBasedPointToPointForce::
getRecordLabels() const
{
    const string& body1Name = getBody1Name();
    const string& body2Name = getBody2Name();

    OpenSim::Array<std::string> labels("");
    labels.append(getName()+"."+body1Name+".force.X");
    labels.append(getName()+"."+body1Name+".force.Y");
    labels.append(getName()+"."+body1Name+".force.Z");
    labels.append(getName()+"."+body1Name+".point.X");
    labels.append(getName()+"."+body1Name+".point.Y");
    labels.append(getName()+"."+body1Name+".point.Z");
    labels.append(getName()+"."+body2Name+".force.X");
    labels.append(getName()+"."+body2Name+".force.Y");
    labels.append(getName()+"."+body2Name+".force.Z");
    labels.append(getName()+"."+body2Name+".point.X");
    labels.append(getName()+"."+body2Name+".point.Y");
    labels.append(getName()+"."+body2Name+".point.Z");

    return labels;
}

// Provide the value(s) to be reported that correspond to the labels.
OpenSim::Array<double> ExpressionBasedPointToPointForce::
getRecordValues(const SimTK::State& state) const
{
    OpenSim::Array<double> values(1);

    SimTK::Vector_<SimTK::SpatialVec> bodyForces(0);
    SimTK::Vector_<SimTK::Vec3> particleForces(0);
    SimTK::Vector mobilityForces(0);

    // Get the net force added to the system contributed by the Spring.
    _model->getForceSubsystem().getForce(_index)
            .calcForceContribution(state, bodyForces, particleForces,
                                   mobilityForces);

    SimTK::Vec3 forces = bodyForces(_body1->getMobilizedBodyIndex())[1];
    values.append(3, &forces[0]);
    SimTK::Vec3 gpoint = _body1->findStationLocationInGround(state, getPoint1());
    values.append(3, &gpoint[0]);

    forces = bodyForces(_body2->getMobilizedBodyIndex())[1];
    values.append(3, &forces[0]);
    gpoint = _body2->findStationLocationInGround(state, getPoint2());
    values.append(3, &gpoint[0]);

    return values;
}
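The geometry inside `computeForce` is straightforward: the distance d between the two attachment points, its rate of change ddot projected along the connecting line, and a force of expression-defined magnitude applied along that line. As a hedged, self-contained numeric sketch of those quantities (plain Python standing in for the OpenSim/Lepton machinery; the helper names are invented for illustration):

```python
import math

def point_to_point_force(p1, p2, v1, v2, expression):
    """Evaluate an expression-based point-to-point force.

    p1, p2: positions of the two attachment points (ground frame).
    v1, v2: velocities of those points.
    expression: a function of (d, ddot) standing in for the compiled
    Lepton program built from the user's expression string.
    """
    r = [b - a for a, b in zip(p1, p2)]              # vector from point1 to point2
    d = math.sqrt(sum(c * c for c in r))             # distance between the points
    v_rel = [b - a for a, b in zip(v1, v2)]          # relative velocity
    ddot = sum(v * c for v, c in zip(v_rel, r)) / d  # speed along the line
    mag = expression(d, ddot)                        # scalar force magnitude
    f_on_1 = [mag / d * c for c in r]                # force applied to body 1
    f_on_2 = [-c for c in f_on_1]                    # equal and opposite on body 2
    return d, ddot, f_on_1, f_on_2

# A linear spring-damper, as the expression string "-10*d-1*ddot" would define:
spring_damper = lambda d, ddot: -10.0 * d - 1.0 * ddot
d, ddot, f1, f2 = point_to_point_force(
    p1=(0.0, 0.0, 0.0), p2=(3.0, 4.0, 0.0),
    v1=(0.0, 0.0, 0.0), v2=(1.0, 0.0, 0.0), expression=spring_damper)
```

With the two points 5 units apart and separating at 0.6 units/s, the spring-damper pulls the bodies together; the real class gets the same d and ddot from SimTK body transforms and station velocities.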
At Twilight
By Xemerani

hmmm, I really, really hate how this turned out, but... I guess it is what it is. //sighhhhh

This is the second commission for Rinny, this time a full pic, of her character in (I think) their main outfit! >V< This took forever but was really fun and I learned a lot!! Gotta get more used to doing bgs.. I don't know why my night ones always turn out so saturated though, eek.

Image size: 2149x3035px, 2.51 MB
Published: Jan 16, 2018
Janus (journal)

Janus was an academic journal published in Amsterdam in the French language from 1896 to 1990, devoted to the history of medicine and the history of science. It should not be confused with a different journal of the same name on the history of medicine, published roughly 50 years earlier in Germany.

Founding and early history

The journal was founded in 1896 by Carel Eduard Daniëls and Hendrik Peypers, with a French subtitle meaning "International Archive for the History of Medicine and Medical Geography". In his 1895 doctoral dissertation in history, Peypers had already quoted Schlegel concerning the Janus-like viewpoint of the historian, "the prophet who also looks backwards".

From 1915 onward, the journal called itself the journal of the Dutch Society for the History of Medical, Exact, and Natural Sciences. The society was founded at the same time as the journal, and existed primarily to publish the journal. This series of the journal ended in 1941, interrupted by World War II.

Post-war revival

In 1957, the same journal was restarted, this time with a subtitle meaning "International Review for the History of Science, Medicine, Pharmacy, and Technology". It had Bruins as co-editor; Bruins had recently returned to Amsterdam from teaching mathematics in Baghdad, and in 1969 he would be named professor of the history of mathematics at the University of Amsterdam. In 1963 he took over full editorship of the journal. Under the influence of Bruins, the journal began including the history of mathematics in its repertoire of topics. Bruins died in 1990, and his journal ceased publication in the same year.

References

Category:French-language journals Category:History of science journals Category:Publications established in 1896 Category:Publications disestablished in 1990
574 F.2d 1043

Federico F. MARTINEZ, Petitioner-Appellant,
v.
Palemon CHAVEZ, Mora County Sheriff, Respondent-Appellee.

No. 77-1469.

United States Court of Appeals, Tenth Circuit.

Submitted Nov. 13, 1977.
Decided April 26, 1978.

Federico F. Martinez, pro se.

Toney Anaya, Atty. Gen., Ralph W. Muxlow, II, Asst. Atty. Gen., Santa Fe, N. M., for respondent-appellee.

Before SETH, PICKETT and McWILLIAMS, Circuit Judges.

PER CURIAM.

1. Martinez is appealing dismissal of his civil rights action submitted to the district court pursuant to 42 U.S.C. § 1983. The action is related to events surrounding Martinez' incarceration in the Mora County, New Mexico, jail, and names as defendants a New Mexico state trial judge (Angel), two county prosecutors (Armijo and Vaughn), and the sheriff of Mora County (Chavez). Martinez sought both injunctive and monetary relief but, since he is no longer an inmate in the Mora County jail and makes no allegations regarding the likelihood of future confinement in the jail, we view only the monetary damages claim as viable.

2. Martinez' civil rights complaint included three separate counts, only two of which are relevant to this appeal. We first consider Martinez' contentions regarding an arrest which Martinez alleges was based on an unfounded escape charge. All four named defendants are alleged to be co-conspirators in this count. The district court concluded that the trial judge and two prosecutors were immune and dismissed the action as to them. Martinez then sought reinstatement of the action as to the judge and two prosecutors as well as to amend his complaint as to them. The district court denied reinstatement as to the three and neither granted nor denied the proposed amendment. Martinez now contends that the district court erred in failing to grant his motion to amend his complaint and to grant his motion to vacate the judgment dismissing the action as to the judge and two prosecutors.
3 We see no error in the district court's action regarding defendants Angel, Armijo and Vaughn. We have reviewed both Martinez' original and proposed amended complaints and find no allegations of fact which would support a finding that any of the three acted outside the scope of their judicial or prosecutorial duties. Defendant Angel was accordingly cloaked with judicial immunity, see Pierson v. Ray, 386 U.S. 547, 87 S.Ct. 1213, 18 L.Ed.2d 288 (1967); and defendants Armijo and Vaughn with prosecutorial immunity, see, Imbler v. Pachtman, 424 U.S. 409, 96 S.Ct. 984, 47 L.Ed.2d 128 (1976); Atkins v. Lanning, 556 F.2d 485 (10th Cir. 1977). Dismissal as to each was proper. Following further proceedings discussed below, the district court also dismissed on this count as to defendant Chavez, stating that Martinez' allegations regarding false charges of escape relate to the state court, not Chavez. The record supports this conclusion. 4 We view as far more substantial Martinez' allegations that he was subjected over a four month period of time to cruel and unusual punishment, in the form of suffocating conditions, as a result of Sheriff Chavez' inactions regarding ventilation of the Mora County jail. Only Chavez is named as a defendant in this count. Review of the procedure employed by the district court in disposing of this count is warranted. 5 The district court granted Martinez leave to proceed in forma pauperis pursuant to 28 U.S.C. § 1915(a) and service of process issued. Chavez answered, represented by the Attorney General for the State of New Mexico, arguing that Martinez' claims regarding ventilation of the jail fail to allege violation of a federal constitutional right; and, that Sheriff Chavez was not the responsible individual for overseeing proper ventilation of the jail. 
The district court ordered that the parties submit affidavits and counter-affidavits, including those of witnesses, and that "trial of the cause will be entirely upon the affidavits submitted in support of, and in opposition to, the complaint and Findings of Fact and Conclusions of Law and Judgment will be entered thereon." 6 Chavez submitted affidavits executed by himself and the County Planner for Mora County stating that: Martinez was a resident in the jail for approximately four months during which time he complained everyday about the ventilation system; the ventilation system was inspected at that time and found to be operating properly (even if inadequately); the Board of County Commissioners has ultimate responsibility for the jail, including the ventilation system; Martinez' complaints about the ventilation system were referred by the sheriff to the Board and the County Planner; the ventilation system has always worked and repair was never required; Martinez was never treated medically, while a resident in the jail, for a respiratory disease; and, Martinez was the sole complainant regarding the ventilation system. 7 Martinez responded, with affidavits executed by himself and another inmate, that: Martinez complained frequently and directly to Chavez regarding inadequate ventilation of the jail and his consequent respiratory complications; Chavez has direct responsibility for operation and maintenance of the jail; Chavez personally informed Martinez on numerous occasions that the ventilation system was out of order; Chavez, in Martinez' presence, informed the public defender and a law student that the system was out of order; Martinez was treated during the period in question for a respiratory disorder; numerous other inmates made the same complaint to Chavez regarding ventilation of the jail; and, medical records would show that Martinez was in fact suffering a respiratory disease as a result of incarceration in the Mora County jail. 
8 Rather than conducting "trial by affidavit" as it initially proposed,1 the district court implemented a procedure similar to that approved in our recently filed opinion, Martinez et al, v. Aaron (Malley, Warden), 570 F.2d 317 (10th Cir. No. 77-1395, filed January 23, 1978). Based on the pleadings and foregoing affidavits, the district court found Martinez' action to be frivolous and dismissed pursuant to 28 U.S.C. § 1915(d).2 9 Whether or not a complaint states a cognizable legal claim,3 the accuracy of facts alleged therein is always a point of contention. As recognized in Martinez, supra, the burden is on the United States district courts to develop effective and legally permissible methods of dealing with the ever increasing numbers of prisoner civil rights actions. It was in order to aid in determining which facts alleged in the complaint were relevant, accurate, and subject to bona fide dispute, that we approved the procedure employed by the district court in Martinez.4 10 Having reviewed the Martinez type procedure employed by the district court in this case, we find no impropriety in the procedure itself and turn now to application of that procedure to the specific facts at hand. The questions presented in this case were whether Martinez was in fact subjected to conditions sufficiently onerous that they violated the Eighth Amendment to the United States Constitution, and, if so, whether Sheriff Chavez may be held monetarily liable. Without question suffocating jail conditions may indeed offend "the evolving standards of decency that mark the progress of a maturing society" and thus constitute cruel and unusual punishment. See, Gregg v. Georgia, 428 U.S. 153, 173, 96 S.Ct. 2909, 2925, 49 L.Ed.2d 859 (1976); Battle v. Anderson, 564 F.2d 388 (10th Cir. 1977); Gregory v. Wyse, 512 F.2d 378 (10th Cir. 1975). 
The record developed by the district court is clearly not dispositive of questions regarding adequacy of the ventilation system and we express no opinion regarding the existence or non-existence of oppressive jail conditions. The record does, however, establish that, if this case were to proceed to trial, Martinez could prove no facts which would entitle him to the requested relief from Sheriff Chavez. 11 The undisputed, crucial facts of this case, as established by the pleadings and affidavits submitted by each of the parties, are these: Martinez did complain to Sheriff Chavez regarding ventilation of the jail; Chavez did report these complaints to both the County Board of Commissioners and the County Planner; the County Planner and Board have ultimate responsibility for maintenance of the jail; pursuant to these reports the ventilation system was investigated and found to be operating, even if inadequately. Based upon these undisputed facts, it is clear that Sheriff Chavez was not deliberately indifferent to Martinez' complaints. To the contrary, Sheriff Chavez reported Martinez' complaints to the appropriate authorities (the County Planner and the Board of County Commissioners). Even accepting all Martinez' allegations of fact as true, including those contained in his affidavits, it is clear that Martinez could not establish through proof at trial that he is entitled to recover from Chavez. Martinez could make no rational argument on the law or facts in support of his claim and the action was accordingly properly dismissed as frivolous under § 1915(d). The procedure employed by the district court in reaching its § 1915(d) frivolity finding was entirely proper under the circumstances of this case. 12 When this appeal was docketed the parties were notified that it was assigned to Calendar D and would be considered by a panel of judges, without oral argument, on the record of proceedings before the district court. 
Although each was afforded the opportunity to submit a memorandum in support of their respective positions, only Martinez has done so. Having reviewed this memorandum and the record of proceedings before the district court, we are convinced, for the reasons set forth above, that the district court's judgment of dismissal is correct and should be affirmed. 13 Affirmed. 1 We, of course, cannot condone "trial by affidavit" as initially proposed, if such "trial" were to resolve bona fide issues of fact. See, Taylor v. Gibson, 529 F.2d 709 (5th Cir. 1976) 2 The procedure approved in Martinez, supra, and ultimately employed by the district court in this case, involves initial grant of pauper status pursuant to § 1915(a), and subsequent dismissal pursuant to § 1915(d) after finding that the action is frivolous. In this circuit ". . . the test for frivolousness is whether the plaintiff can make a rational argument on the law or facts in support of his claim." Bennett v. Passic, 545 F.2d 1260 at 1261 (10th Cir. 1976) 3 The district court may have viewed Martinez' complaint as facially sufficient since, as stated, it ordered initially that its determinations regarding factual questions would be by affidavit and counter affidavit and that trial would be by affidavit. Given the increasing sophistication of today's civil rights litigants, we recognize that it is no great legal achievement to allege facts which state a claim under § 1983. Most often a district court is faced with a civil rights complaint which is deficient pursuant to Rule 8, F.R.Civ.P.; fails to allege facts which state a claim upon which relief can be granted; or both. In keeping with Haines v. Kerner, 404 U.S. 519, 92 S.Ct. 
594, 30 L.Ed.2d 652 (1972), pro se complaints are construed liberally and every opportunity is extended to the pro se litigant to make an adequate complaint 4 That procedure involved an "investigation" to be conducted by prison authorities, the purpose being to make a record for the benefit of the trial court by which it could determine preliminary issues, including those of jurisdiction and frivolity pursuant to § 1915(d). Similar procedures approved by the Fifth Circuit have included use of both a "special report" to be prepared by the state attorney general, and "questionnaires" to be propounded by the district court itself. See, Bruce v. Wade, 537 F.2d 850, 853, n.5 (5th Cir. 1976); Taylor v. Gibson, supra at 717; Watson v. Ault, 525 F.2d 886, 892 (5th Cir. 1976); Hardwick v. Ault, 517 F.2d 295, 298 (5th Cir. 1975)
Erratum to: Mol Genet Genomics DOI 10.1007/s00438-009-0432-z
============================================================

The authors would like to correct their names in the author group. Instead of the abbreviated forms, the names should read: Geetha Govind, Vokkaliga ThammeGowda Harshavardhan, Jayaker Kalaiarasi Patricia, Ramachandra Dhanalakshmi, Muthappa Senthil Kumar, Nese Sreenivasulu, Makarla Udayakumar.

The online version of the original article can be found under doi:10.1007/s00438-009-0432-z.
Can the labor shortage be solved by having robots work in place of humans? Yoronotaki, which operates an izakaya (Japanese pub) chain, has begun just such an experiment in Ikebukuro, Tokyo. The company has set up a "robot bar" in part of an existing store, where a robot handles customer service and pours drinks, in order to verify how much it contributes to reducing staff and improving efficiency. I visited the store to see for myself what kind of customer service the robot provides.

The robot bar is inside the "Ikkenme Sakaba" near the south exit of JR Ikebukuro Station. The robot stands in a dedicated space fitted with a work counter and a beer server, recognizes visitors via four ceiling-mounted cameras, and greets them with phrases such as "Welcome to the robot bar."

The robot's greetings and facial expressions change according to the visitor's age, gender, and expression. By having its onboard AI (artificial intelligence) learn what kind of service pleases which kind of customer, the quality of its service can be improved over time.
Tomohiro Kawaguchi, Cupertino, CA, US

Patent applications:

20090006876 — STORAGE SYSTEM COMPRISING FUNCTION FOR REDUCING POWER CONSUMPTION (published 01-01-2009)
For at least one of storage unit, processor and cache memory which are I/O process-participating devices related to I/O command process, when a load of one or more I/O process-participating devices or a part thereof is a low load equal to or less than a predetermined threshold value, a processing related to a state of one or more of the I/O process-participating devices or a part thereof is redirected to another one or more I/O process-participating devices or a part thereof, and the state of the one or more I/O process-participating devices or a part thereof is shifted to a power-saving state.

20090144496 — FAST ACCESSIBLE COMPRESSED THIN PROVISIONING VOLUME (published 06-04-2009)
A computerized data storage system includes at least one storage device including a nonvolatile writable medium; a cache memory operatively coupled to the storage port and including a data storing area and a data management controller; and a storage port. The storage port is operable to connect to a host computer, receive and send I/O information required by the host computer. The storage port is also operable to receive a request to read data, and, in response to the request to read data, the storage port is operable to send the data stored in the data storing area of the cache memory. The storage port is further operable to receive a request to write data, and, in response to the request to write data, the storage port is operable to send the write data to the data storing area of the cache memory. The storage system further includes a thin provisioning controller operable to provide a virtual volume having a virtual volume page, a capacity pool having a capacity pool page, and manage a mapping between the virtual volume page and the capacity pool page. The storage system further includes a data compression controller operable to perform a compression operation, and a data decompression controller operable to perform a decompression operation.

20090240880 — HIGH AVAILABILITY AND LOW CAPACITY THIN PROVISIONING (published 09-24-2009)
A data storage system and method for simultaneously providing thin provisioning and high availability. The system includes an external storage volume and two storage subsystems coupled together and to the external storage volume. Each of the storage subsystems includes disk drives and a cache area, and each of the storage subsystems includes at least one virtual volume and at least one capacity pool. The virtual volume is allocated from storage elements of the at least one capacity pool. The capacity pool includes the disk drives and at least a portion of the external storage volume. The storage elements of the capacity pool are allocated to the virtual volume in response to a data access request. The system further includes a host computer coupled to the storage subsystems and configured to switch the input/output path between the storage subsystems. Each of the storage subsystems is adapted to copy a received write I/O request to the other storage subsystem. Upon receipt of a request from another storage subsystem, a storage element of the capacity pool of the storage subsystem is prevented from being allocated to the virtual volume of that storage subsystem.

20100049823 — Initial copyless remote copy (published 02-25-2010)
Embodiments of the invention reduce the traffic between datacenters during initial remote copy. In one embodiment, a computer system comprises a first datacenter including a first source volume and a second datacenter including a second source volume. Prior to establishment of remote copy of deployed volumes between the first datacenter and the second datacenter, the first source volume of the first datacenter and the second source volume of the second datacenter have identical source objects. During establishment of remote copy, the first datacenter replicates the source object in the first source volume to a first target volume, the second datacenter replicates the source object in the second source volume to a second target volume, and a first replicated object in the first target volume and a second replicated object in the second target volume are related to each other by remote copy with no copying therebetween.

20100057789 — Low traffic failback remote copy (published 03-04-2010)
The local storage performs remote copy to the remote storage. For low traffic failback remote copy, the remote storage performs a delta copy to the local storage, the delta being the difference between the remote storage and local storage. The local storage backs up snapshot data. The remote storage resolves the difference of the snapshot of the local storage and the remote storage. The difference resolution method can take one of several approaches. First, the system informs the timing of snapshot of the local storage to the remote storage and records the accessed area of the data. Second, the system informs the timing of snapshot of the local storage to the remote storage, and the remote storage makes a snapshot and compares the snapshot and remote copied data. Third, the system compares the local data and remote copy data with hashed data.

20100058319 — AGILE DEPLOYMENT OF SERVER (published 03-04-2010)
System and method for agile deployment of servers. The system includes one or more storage subsystems, a host computer, and a storage management server or general servers together with a system management server. A system administrator or a storage supplier preliminarily installs an application package on a server. The application package may include an operating system, programs, libraries, configuration data and initial data. When the system requires a new physical or virtual server, the system administrator replicates the installed application package and runs the new server with the replicated application package. Operation sequences are provided for the order of copying of the application package between the management servers and the storage subsystems. Change in the data from an initial state may be stored instead of the complete data.

20100107003 — Fast Data Recovery From HDD Failure (published 04-29-2010)
A storage system comprises a first storage device having a first plurality of hard disk drives and a first controller. The first controller stores data in the first plurality of hard disk drives by stripes. Each stripe includes M data and N parity data allocated to M+N hard disk drives of the first plurality of hard disk drives. A first hard disk drive includes data or parity data of both a first stripe of the stripes and a second stripe of the stripes, while a second hard disk drive includes data or parity data of only one of the first stripe or the second stripe. During data recovery involving failure of one of the first plurality of hard disk drives, the data in the failed hard disk drive is recovered for each stripe by calculation using data and parity data in other hard disk drives for each stripe.

20100274966 — HIGH AVAILABILITY LARGE SCALE IT SYSTEMS WITH SELF RECOVERY FUNCTIONS (published 10-28-2010)
Storage systems in the IT system provide information on the status of their components to the System Monitoring Server. The System Monitoring Server calculates the storage availability of the storage systems based on this information using failure rates of the components, and determines whether the volumes of a storage system should be migrated based on a predetermined policy. If migration is required, the System Monitoring Server selects the target storage system based on the storage availability of the storage systems, and requests that migration be performed.

20110066802 — DYNAMIC PAGE REALLOCATION STORAGE SYSTEM MANAGEMENT (published 03-17-2011)
In one embodiment, a storage system for storage management in a tiered storage environment comprises a plurality of storage volumes in a pool which are divided into a plurality of tiers having different tier levels, the tiers being organized according to a tier configuration rule, the plurality of storage volumes provided by a plurality of physical storage devices in the storage system; and a controller controlling the plurality of physical storage devices, the controller including a processor and a memory. The controller changes tier configurations of the tiers of storage volumes when the tier configuration rule is changed, the tier configurations including the tier levels. The controller allocates the pool to a plurality of virtual volumes based on a change of tier levels against the physical storage devices which occurs when the pool does not meet the tier configuration rule that was in effect.

20110072225 — APPLICATION AND TIER CONFIGURATION MANAGEMENT IN DYNAMIC PAGE REALLOCATION STORAGE SYSTEM (published 03-24-2011)
For storage management in a tiered storage environment in a system having one or more applications running on a host computer which is connected to a storage system, the storage system comprises storage volumes in a pool which are divided into a plurality of tiers having different tier levels, the tiers being organized according to a tier configuration rule; and a controller. The controller allocates the pool to a plurality of virtual volumes based on a change of the tier levels against the physical storage devices. The controller stores a relation between data in the storage system being accessed by each application running on the host computer and an application ID of the application accessing the data. The tier level of a portion of a storage volume of the plurality of storage volumes is changed based at least in part on the application accessing data in the storage volume.

20110126083 — FAST DATA RECOVERY FROM HDD FAILURE (published 05-26-2011)
A storage system comprises a first storage device having a first plurality of hard disk drives and a first controller. The first controller stores data in the first plurality of hard disk drives by stripes. Each stripe includes M data and N parity data allocated to M+N hard disk drives of the first plurality of hard disk drives. A first hard disk drive includes data or parity data of both a first stripe of the stripes and a second stripe of the stripes, while a second hard disk drive includes data or parity data of only one of the first stripe or the second stripe. During data recovery involving failure of one of the first plurality of hard disk drives, the data in the failed hard disk drive is recovered for each stripe by calculation using data and parity data in other hard disk drives for each stripe.

20110153905 — METHOD AND APPARATUS FOR I/O PATH SWITCHING (published 06-23-2011)
A system for input/output path switching comprises a host; a network switch coupled to the host; and a plurality of storage systems which include a first storage system and a second storage system. For switching an I/O path, from a path between the host and the first storage system via the network switch to another path between the host and the second storage system via the network switch, one of the host or the network switch changes FCID (Fibre Channel Node port identifier) information therein, to migrate a WWPN (World Wide Port Name) from association with the first storage system network interface to association with the second storage system network interface. The FCID information includes address information of storage system network interfaces of the storage systems for connecting to the network switch.

20110208909 — REDUCTION OF I/O LATENCY FOR WRITABLE COPY-ON-WRITE SNAPSHOT FUNCTION (published 08-25-2011)
According to one aspect of the invention, a method of controlling a storage system comprises storing data in a first volume in the storage system which has volumes including the first volume and a plurality of second volumes; prohibiting write I/O (input/output) access against the first volume after storing the data in the first volume; performing subsequent write requests received by the storage system against the second volumes in the storage system after storing the data in the first volume, each write request having a target volume which is one of the second volumes; and in response to each one write request of the write requests, determining whether the target volume of the one write request is write prohibited or not, and performing the one write request only if the target volume is not write prohibited.

20110252274 — Methods and Apparatus for Managing Error Codes for Storage Systems Coupled with External Storage Systems
A system comprising a plurality of storage systems, which uses storage devices of multiple levels of reliability. The reliability as a whole system is increased by keeping the error code for the relatively low reliability storage disks in the relatively high reliability storage system. The error code is calculated using hash functions and the value is used to compare with the hash value of the data read from the relatively low reliability storage disks.
10-13-2011 20120005504 STORAGE SYSTEM COMPRISING FUNCTION FOR REDUCING POWER CONSUMPTION - For at least one of storage unit, processor and cache memory which are I/O process-participating devices related to I/O command process, when a load of one or more I/O process-participating devices or a part thereof is a low load equal to or less than a predetermined threshold value, a processing related to a state of one or more of the I/O process-participating devices or a part thereof is redirected to another one or more I/O process-participating devices or a part thereof, and the state of the one or more I/O process-participating devices or a part thereof is shifted to a power-saving state. 01-05-2012 20120017061 METHODS AND APPARATUS FOR CONTROLLING DATA BETWEEN STORAGE SYSTEMS PROVIDING DIFFERENT STORAGE FUNCTIONS - A system comprises a plurality of storage systems, which provides different storage functions, and is controlled by a management server. The management server determines whether to change the control of the storage controller between the storage systems, or to mount the target volume as an external volume and keep the storage controller under control so that the storage function provided to the source volume is maintained even after the configuration change between the storage systems. After the determination, the management server instructs the storage system to perform according to the determination. 01-19-2012 20120047346 TIERED STORAGE POOL MANAGEMENT AND CONTROL FOR LOOSELY COUPLED MULTIPLE STORAGE ENVIRONMENT - A system comprises a first storage system including a first storage controller, which receives input/output commands from host computers and provides first storage volumes to the host computers; and a second storage system including a second storage controller which receives input/output commands from host computers and provides second storage volumes to the host computers. 
A first data storing region of one of the first storage volumes is allocated from a first pool by the first storage controller. A second data storing region of another one of the first storage volumes is allocated from a second pool by the first storage controller. A third data storing region of one of the second storage volumes is allocated from the first pool by the second storage controller. A fourth data storing region of another one of the second storage volumes is allocated from the second pool by the second storage controller. 02-23-2012 20120179864 METRICS AND MANAGEMENT FOR FLASH MEMORY STORAGE LIFE - According to one aspect of the invention, a method of evaluating reliability of flash memory media comprises managing a flash memory remaining life for each disk of a plurality of flash memory media disks provided in one or more flash memory media groups each of which has a configuration and a relationship between said each flash memory media group and the flash memory media disks in said each flash memory media group, wherein each flash memory media group is one of a RAID group or a thin provisioning pool; and calculating to obtain information of each flash memory media group based on the measured flash memory remaining life for each disk in said each flash memory media group, the configuration of said each flash memory media group, and the relationship between said each flash memory media group and the flash memory media disks in said each flash memory media group. 07-12-2012 20120226672 Method and Apparatus to Align and Deduplicate Objects - In deduplicating data including objects, the system obtains information of the location of the objects and uses the information in calculating the hash value. The hash value calculation program divides data from the boundary location to chunks to match the boundary location of the objects subject to deduplication and the hash value is calculated from each chunk. 
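The deduplication abstract above describes splitting data into chunks at object boundaries and computing a hash per chunk so duplicates can be detected. The sketch below illustrates only that general idea, not the patented method; the chunk inputs and the choice of SHA-256 are illustrative assumptions.

```python
import hashlib

def dedup_store(chunks, store=None):
    """Store each chunk once, keyed by its SHA-256 hash.

    `chunks` is an iterable of byte strings already split at object
    boundaries; duplicate chunks are detected by comparing hashes.
    Returns the ordered list of hashes (enough to rebuild the data)
    and the chunk store.
    """
    if store is None:
        store = {}
    refs = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:  # only previously unseen content consumes space
            store[digest] = chunk
        refs.append(digest)
    return refs, store

# Two objects share the chunk b"header": only 3 unique chunks are stored.
refs, store = dedup_store([b"header", b"body-1", b"header", b"body-2"])
```

Reconstruction is then just concatenating the stored chunks in reference order, which is why the abstract stresses aligning chunk boundaries with object boundaries: misaligned chunks would hash differently and defeat deduplication.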
09-06-2012 20120226876 NETWORK EFFICIENCY FOR CONTINUOUS REMOTE COPY - A method for controlling data for a storage system comprises: receiving a write input/output (I/O) command of a data from a host computer, the write I/O command including an application ID identifying an application operating on the host computer which sends the write I/O request; maintaining a record of a relation between the application ID in the write I/O command and a storage location of the data to be written in a first volume of the storage system; determining, based on the application ID, whether a data transfer function between the first volume and a second storage volume is to be performed on the data beyond writing the data to the storage location in the first volume; and if the data transfer function is to be performed on the data, then performing the data transfer function on the data to the second volume. 09-06-2012 20120290750 Systems and Methods For Eliminating Single Points of Failure For Storage Subsystems - Systems and methods directed to preventing a single point of failure by utilizing N_Port ID Virtualization (NPIV). During some procedures used by storage subsystems, such as migration, there is oftentimes only a single path from a host to a storage subsystem, which causes a potential single point of failure for the entire system. By utilizing NPIV, this problem may be mitigated. 11-15-2012 20120304006 LOW TRAFFIC FAILBACK REMOTE COPY - The local storage performs remote copy to the remote storage. For low traffic failback remote copy, the remote storage performs a delta copy to the local storage, the delta being the difference between the remote storage and local storage. The local storage backs up snapshot data. The remote storage resolves the difference of the snapshot of the local storage and the remote storage. The difference resolution method can take one of several approaches. 
First, the system informs the timing of snapshot of the local storage to the remote storage and records the accessed area of the data. Second, the system informs the timing of snapshot of the local storage to the remote storage, and the remote storage makes a snapshot and compares the snapshot and remote copied data. Third, the system compares the local data and remote copy data with hashed data. 11-29-2012 20120324162 STORAGE SYSTEM COMPRISING FUNCTION FOR REDUCING POWER CONSUMPTION - For at least one of storage unit, processor and cache memory which are I/O process-participating devices related to I/O command process, when a load of one or more I/O process-participating devices or a part thereof is a low load equal to or less than a predetermined threshold value, a processing related to a state of one or more of the I/O process-participating devices or a part thereof is redirected to another one or more I/O process-participating devices or a part thereof, and the state of the one or more I/O process-participating devices or a part thereof is shifted to a power-saving state. 12-20-2012 20130054891 DISTRIBUTION DESIGN FOR FAST RAID REBUILD ARCHITECTURE - Exemplary embodiments of the invention provide a distribution design for fast RAID rebuild architecture that avoids the deterioration of the availability/reliability in the distribution architecture. According to one aspect of the invention, a storage system comprises: a data storage unit including a plurality of storage devices; a storage controller including a processor, a memory, and a controller for controlling data transfer between the memory and corresponding storage devices in the data storage unit; and an internal network coupled between the storage controller and the storage devices. Based on loads of the processor of the storage controller and the internal network, the storage controller controls to limit a number of redundant storage devices over which to distribute a write data. 
02-28-2013 20130054894 INCREASE IN DEDUPLICATION EFFICIENCY FOR HIERARCHICAL STORAGE SYSTEM - Exemplary embodiments provide improvement of deduplication efficiency for hierarchical storage systems. In one embodiment, a storage system comprises a storage controller; and a plurality of first volumes and a plurality of external volumes which are configured to be mounted to external devices. The storage controller controls to store related data which are derived from one of the plurality of first volumes in a first external volume of the plurality of external volumes. In another embodiment, the storage controller receives object data from a server and allocates the object data to the plurality of pool volumes. The plurality of pool volumes include a plurality of external volumes which are configured to be mounted to external devices. The storage controller controls to store the object data to the plurality of pool volumes based on object allocation information received from a backup server. 02-28-2013 20130103778 METHOD AND APPARATUS TO CHANGE TIERS - Systems and methods directed to changing tiers for a storage area that utilizes thin provisioning. Systems and methods check the area subject to a tier change command and change the tier based on the tier specified in the tier change command, and the tier presently associated with the targeted storage area. The pages of the systems and methods may be further restricted to one file per page. 04-25-2013 20130132668 VOLUME COPY MANAGEMENT METHOD ON THIN PROVISIONING POOL OF STORAGE SUBSYSTEM - Exemplary embodiments provide integrated thin provisioning pool for primary logical volume and secondary logical volume in a storage subsystem. A storage system comprises a processor; a memory; and a storage controller. 
In one embodiment, the storage controller is configured to allocate storage area from a first pool in response to a write request, and to control allocation of storage areas for a plurality of related data, which are to be allocated from the first pool, from different specified RAID groups in the first pool. In another embodiment, the storage controller is configured to allocate storage area from a first pool in response to a write request, and to control allocation of storage areas for a plurality of related data, which are to be allocated from the first pool, from different RAID groups in the first pool. 05-23-2013 20130179737 METHODS AND APPARATUS FOR MANAGING ERROR CODES FOR STORAGE SYSTEMS COUPLED WITH EXTERNAL STORAGE SYSTEMS - A system comprising a plurality of storage systems, which uses storage devices of multiple levels of reliability. The reliability as a whole system is increased by keeping the error code for the relatively low reliability storage disks in the relatively high reliability storage system. The error code is calculated using hash functions and the value is used to compare with the hash value of the data read from the relatively low reliability storage disks. 07-11-2013 20130238852 MANAGEMENT INTERFACE FOR MULTIPLE STORAGE SUBSYSTEMS VIRTUALIZATION - A storage system comprises: storage subsystems having storage controllers managing virtual volumes, each storage controller managing a plurality of logical volumes and controlling to store data for a virtual volume of the virtual volumes to a logical volume of the logical volumes; and a control module operable, in response to receiving a command commanding a registration of a storage function for a virtual volume, to translate the received command into a translated command commanding a registration of the storage function for a target logical volume of the logical volumes, based on a mapping between the virtual volumes, the logical volumes, and the storage controllers. 
The storage controller which manages the target logical volume processes the translated command commanding the registration of the storage function for the target logical volume. The control module is provided in at least one of the storage controllers or another computer in the storage system. 09-12-2013 20130297899 TRAFFIC REDUCING ON DATA MIGRATION - Exemplary embodiments provide a technique to reduce the traffic between storage devices during data migration. In one embodiment, a system comprises a plurality of storage systems which are operable to migrate a set of primary and secondary volumes between the storage systems by managing and copying, between the storage systems, a plurality of same data and a plurality of difference data between the primary and secondary volumes and location information of each of the plurality of difference data, the location information identifying a location of the difference data in the primary or secondary volume associated with the difference data. Each secondary volume which corresponds to a primary volume, if said each source secondary volume contains data, has a same data as the primary volume and, if said each secondary volume is not synchronized with the primary volume, further has a difference data with respect to the primary volume. 11-07-2013 20140115384 FAST DATA RECOVERY FROM HDD FAILURE - A storage system comprises a first storage device having a first plurality of hard disk drives and a first controller. The first controller stores data in the first plurality of hard disk drives by stripes. Each stripe includes M data and N parity data allocated to M+N hard disk drives of the first plurality of hard disk drives. A first hard disk drive includes data or parity data of both a first stripe of the stripes and a second stripe of the stripes, while a second hard disk drive includes data or parity data of only one of the first stripe or the second stripe. 
During data recovery involving failure of one of the first plurality of hard disk drives, the data in the failed hard disk drive is recovered for each stripe by calculation using data and parity data in other hard disk drives for each stripe. 04-24-2014 20140173390 METHODS AND APPARATUS FOR MANAGING ERROR CODES FOR STORAGE SYSTEMS COUPLED WITH EXTERNAL STORAGE SYSTEMS - A system comprising a plurality of storage systems, which uses storage devices of multiple levels of reliability. The reliability as a whole system is increased by keeping the error code for the relatively low reliability storage disks in the relatively high reliability storage system. The error code is calculated using hash functions and the value is used to compare with the hash value of the data read from the relatively low reliability storage disks. 06-19-2014 20150067257 FAST ACCESSIBLE COMPRESSED THIN PROVISIONING VOLUME - A computerized data storage system includes at least one storage device including a nonvolatile writable medium; a cache memory and a data management controller and a storage port. The storage port is operable to receive a request to read data, and, in response to the request to read data, to send the data stored in the data storing area of the cache memory. The storage port is further operable to receive a request to write data, and, in response to the request to write data, to send the write data to the data storing area of the cache memory. The storage system further includes a thin provisioning controller operable to provide a virtual volume, and a capacity pool. The storage system further includes a data compression controller and a data decompression controller.
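Several of the abstracts above recover a failed drive's portion of each stripe "by calculation using data and parity data in other hard disk drives." For a single-parity (RAID-5-style) stripe that calculation is a bytewise XOR. The sketch below shows the principle only, with M = 3 data blocks and N = 1 parity block as an illustrative assumption; it does not reproduce any specific patented layout.

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks (single-parity calculation)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A stripe with M = 3 data blocks and N = 1 parity block.
data = [b"\x01\x02", b"\x10\x20", b"\xa0\x0b"]
parity = xor_blocks(data)

# If the drive holding data[1] fails, its block is recovered by XOR-ing
# the surviving data blocks with the parity block.
recovered = xor_blocks([data[0], data[2], parity])
```

Because XOR is its own inverse, XOR-ing all surviving blocks of a stripe (data plus parity) reproduces exactly the missing block; schemes with N > 1 parity blocks use more elaborate codes (e.g., Reed-Solomon) but follow the same per-stripe recovery pattern the abstracts describe.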
Q: Using CodeIgniter routes to leave out a part of the URI

I have this URI: http://localhost/ur/index.php/reports/annual/gm/8312/44724286729 but the annual part only serves to show the user which report he/she is viewing. The function being mapped is gm, with the signature

public function gm($id, $telephone_number) { /** General Meeting */ }

My controller file is called reports. How would I use routes to ignore annual, use gm as my function, and use the remaining sections of the URI as my parameters, namely 8312/44724286729? I have tried this in my routes:

$route['annual/(:any)'] = "gm";

A: You could use the dollar-sign syntax, matching the literal gm segment so that only the two trailing segments are captured:

$route['reports/annual/gm/(:any)/(:any)'] = "reports/gm/$1/$2";

In your reports controller, the gm function will receive $1 and $2 as its arguments. P.S.: see the URI Routing section in the CI manual.
.. _pt Proctored Session Results:

###################################################
Viewing Proctored Session Results with Proctortrack
###################################################

To review individual violation videos and screenshots, follow these steps:

#. In the LMS, open the Proctortrack Review Dashboard by navigating to the **edX Instructor Dashboard** -> **Special Exams** tab -> **Review Dashboard**.
#. The Verificient **Proctortrack Review Dashboard** will load inline in the LMS.
#. Navigate to the **Quiz List** tab and locate the exam you want to review.
#. Click on **View Sessions** to open the list of learners who took the exam.
#. Review all learners who are flagged as "Require Attention" as follows.
#. To review an individual learner's session, click on the learner's name to pop out their detailed exam results in a new tab. Here you can review their exam data, including the Video Monitoring, Online Violations, Verification scans, and Onboarding tabs, to understand what infractions (if any) were flagged as suspicious.
#. If the suspicious behavior is deemed to be in violation of the proctoring rules of your course, select **Fail** to fail the learner and set their grade to 0. Learners will get an email informing them that they did not pass proctoring review and that their grade was set to 0.
#. If needed, you can later revert this decision by clicking **Pass** to pass the learner and restore their original exam grade.
#. If needed, you can download the violation screenshots and data by clicking the **Export Data** arrow.

To see a summary of proctored exam results, you use the Proctored Exam Results report. This report is a .csv file that you can download from the instructor dashboard. You can use this report to view proctoring results for all learners, or :ref:`determine whether a specific learner has passed the proctoring review<Determine if Learner Passed Proctoring Review>`.

.. note:: The Proctored Exam Results report contains information about the proctoring review. The report does not include information about the learner's score on the exam. A learner might pass the proctoring review but not earn a high enough score to pass the exam itself.

For more information about the Proctored Exam Results report, see the following sections.

.. contents::
  :local:
  :depth: 1

.. _Viewing PT Proctored Session Results:

*********************************************
Download the Proctored Exam Results Report
*********************************************

At any time after learners have taken the proctored exam in your course, you can download a .csv file that displays the current status of the proctoring session for participating learners.

To generate and download the Proctored Exam Results report, follow these steps.

.. important:: This report contains confidential, personally identifiable data. Be sure to follow your institution's data stewardship policies when you open or save this report.

#. View the live version of your course.
#. In the LMS, select **Instructor**, then select **Data Download**.
#. In the **Reports** section, select **Generate Proctored Exam Results Report**. A status message indicates that the report generation process is in progress. This process can take some time to complete. You can navigate away from this page while the process runs.
#. To check the progress of the report generation, reload the page in your browser and scroll to the **Pending Tasks** section. The table shows the status of active tasks. When the report is complete, a linked .csv file name becomes available in the **Reports Available for Download** section. The most recently generated reports appear at the top of the list. File names are in the following format.

   ``{course_id}_proctored_exam_results_report_{datetime}.csv``

#. To download a report file, select the link for the report you requested. The .csv file begins downloading automatically.

   .. note:: To prevent the accidental distribution of learner data, you can download exam result report files only by clicking the links on this page. These links expire after 5 minutes. If necessary, refresh the page to generate new links.

#. When the download is complete, open the .csv files in a spreadsheet application to sort, graph, and compare data.

.. _PT Proctored Session Results File:

********************************************
Interpret the Proctored Exam Results Report
********************************************

The Proctored Exam Results report contains the following fields.

.. list-table::
   :widths: 30 55
   :header-rows: 1

   * - Column
     - Description
   * - course_id
     - The ID of the course.
   * - exam_name
     - The name of the proctored exam in the body of the course.
   * - username
     - The username that identifies the learner taking the proctored exam.
   * - email
     - The email address that identifies the learner taking the proctored exam.
   * - attempt_code
     - An identifier for the exam attempt. The attempt code is an internal identifier and is included in the report for use in troubleshooting.
   * - allowed_time_limit_mins
     - The amount of time in minutes that this learner was allotted for completing the exam.
   * - is_sample_attempt
     - Indicates whether this exam attempt was for a practice exam.
   * - started_at
     - The date and time that the learner started to take the proctored exam.
   * - completed_at
     - The date and time that the learner submitted the proctored exam.
   * - status
     - The current status of the proctoring session as a whole. The proctoring session encompasses the time from when the learner chooses to take the proctored exam until the proctored exam review is complete. If the proctored exam review is complete, the value in the ``review_status`` column affects the value in this column. For possible values in the status column and an explanation of each value, see :ref:`Proctoring Results Status Column`.
   * - review_status
     - The current status of the proctored exam review by Proctortrack/the course team. If the proctored exam review is complete, the value in this column affects the value in the ``status`` column. For possible values and an explanation of each value, see :ref:`Proctoring Results Review Status Column PT`.
   * - Suspicious Count
     - Number of incidents during the exam that Proctortrack marked as "Suspicious".
   * - Suspicious Comments
     - The comments that Proctortrack entered for each "Suspicious" incident, separated by semicolons (;).
   * - Rules Violation Count
     - Number of incidents during the exam that Proctortrack marked as "Rules Violation".
   * - Rules Violation Comments
     - The comments that Proctortrack entered for each "Rules Violation" incident, separated by semicolons (;).

.. _Proctoring Results Status Column:

===============================
Values in the ``status`` Column
===============================

The following table describes the possible values in the ``status`` column.

.. list-table::
   :widths: 30 55
   :header-rows: 1

   * - Value
     - Description
   * - completed
     - The learner has completed the proctored exam.
   * - created
     - The exam attempt record has been created, but the exam has not yet been started.
   * - declined
     - The learner declined to take the exam as a proctored exam.
   * - error
     - An error has occurred with the exam.
   * - expired
     - The course end date passed before the learner completed the proctored exam.
   * - ready_to_start
     - The exam attempt record has been created. The learner still needs to start the exam.
   * - ready_to_submit
     - The learner has completed but not yet submitted the proctored exam.
   * - rejected
     - The proctoring session review has been completed, and the learner has not passed the review. The learner receives a value of "Unsatisfactory" on the learner exam page and in a notification email message. Additionally, the learner automatically receives a score of 0 for the exam. For most courses, the learner is no longer eligible for academic credit. This value results from a value of "Suspicious" in the :ref:`review_status<Proctoring Results Review Status Column PT>` column after a member of the course team marks the exam session a failure in the Proctortrack dashboard.
   * - second_review_required
     - The exam attempt has been reviewed and the review team has determined that the exam requires additional evaluation. Course teams should perform this second round of review, as described :ref:`above<pt Proctored Session Results>`. This status results from a value of "Suspicious" in the :ref:`review_status<Proctoring Results Review Status Column PT>` column.
   * - started
     - The learner has started the proctored exam.
   * - submitted
     - The learner has completed the proctored exam and results have been submitted for review.
   * - timed_out
     - The proctored exam has timed out.
   * - verified
     - The proctoring session review has been completed, and the learner has passed the review. The learner receives a value of "Satisfactory" on the learner exam page and in a notification email message. This value results from a value of "Clean" or "Rules Violation" in the :ref:`review_status<Proctoring Results Review Status Column PT>` column.

.. _Proctoring Results Review Status Column PT:

======================================
Values in the ``review_status`` Column
======================================

After learners complete a proctored exam, a reviewer from the proctoring service provider reviews the exam according to specific criteria, including the :ref:`Online Proctoring Rules <CA Online Proctoring Rules>`. The value in the ``review_status`` column shows the outcome of the proctored exam review.

Additionally, the value in the ``review_status`` column affects the following information for the course team and for the learner.

* The values in the ``status`` column.
* The proctoring result that is visible on the learner exam page and in the email notification that the learner receives.

For example, if the ``review_status`` column has a value of "Clean", the value in the ``status`` column is "verified". On the learner exam page and in the email notification, the status of the exam is "Satisfactory". If the ``review_status`` column has a value of "Suspicious", the value in the ``status`` column is "second_review_required" or "rejected". If the ``status`` is "rejected", then on the learner exam page and in the email notification, the status of the exam is "Unsatisfactory".

The following table describes the possible values in the ``review_status`` column.

.. list-table::
   :widths: 30 20 55
   :header-rows: 1

   * - Value
     - Exam Result
     - Description
   * - Clean
     - Pass
     - No rules violations or suspicious incidents occurred. The learner has passed the proctoring review. This value causes a value of "verified" in the ``status`` column. The learner receives a result of "Satisfactory" for the proctored exam.
   * - Not Reviewed
     - n/a
     - The proctoring review is not yet complete.
   * - Rules Violation
     - Pass
     - An incident occurred that violates proctored exam rules, but the incident does not compromise exam integrity. For example, music may be playing. The learner has passed the proctoring review. This value causes a value of "verified" in the ``status`` column. The learner receives a result of "Satisfactory" for the proctored exam.
   * - Suspicious
     - Fail
     - An incident has occurred that directly compromises exam integrity. For example, cheating might have occurred. The learner has failed the proctoring review. This value causes a value of "second_review_required" or "rejected" in the ``status`` column. The learner receives a result of "Unsatisfactory" for the proctored exam in the latter case. The learner also receives a score of 0 on the exam. In most courses, the learner is no longer eligible for academic credit.

.. _Determine if Learner Passed Proctoring Review:

*******************************************************
Determine if a Learner Passed the Proctored Exam Review
*******************************************************

To determine whether a specific learner passed the proctored exam review, you can either view the Proctored Session Results report or view the course as the learner.

=========================================
View the Proctored Session Results Report
=========================================

#. Download and open the Proctored Session Results report.
#. In the row for the learner, check the ``status`` column.

   * If the value in the column is "verified", the learner passed the review.
   * If the value is "rejected", the learner did not pass the review. The learner automatically receives a score of 0 on the exam. Additionally, for most courses, the learner is no longer eligible for academic credit.

==============================
View the Course as the Learner
==============================

#. :ref:`View the course as the learner that you want<Roles for Viewing Course Content>`.
#. Open the page for the proctored exam. On the page, the learner's status is visible as "Pending", "Satisfactory", or "Unsatisfactory".
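The report-based check described above can also be scripted. The sketch below assumes a .csv file shaped like the Proctored Exam Results report, using only the ``username`` and ``status`` columns documented here; the sample rows and user names are hypothetical.

```python
import csv
import io

# Hypothetical rows shaped like the Proctored Exam Results report;
# only the columns used below are shown.
SAMPLE = """username,exam_name,status,review_status
alice,Midterm,verified,Clean
bob,Midterm,rejected,Suspicious
carol,Midterm,submitted,Not Reviewed
"""

def proctoring_outcomes(csv_text):
    """Map each username to a human-readable proctoring outcome,
    following the meaning of the `status` values in the tables above."""
    outcomes = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        status = row["status"]
        if status == "verified":
            outcomes[row["username"]] = "passed review"
        elif status == "rejected":
            outcomes[row["username"]] = "did not pass review (score set to 0)"
        else:
            outcomes[row["username"]] = "review not complete: " + status
    return outcomes

if __name__ == "__main__":
    for user, outcome in proctoring_outcomes(SAMPLE).items():
        print(user, "->", outcome)
```

In practice you would open the downloaded ``{course_id}_proctored_exam_results_report_{datetime}.csv`` file instead of the inline sample; any ``status`` other than "verified" or "rejected" means the review pipeline has not reached a final outcome yet.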
Creating a Hunting Club: Critical Relocation

Images and story by Thomas Allen

Endure tough conditions and work hard now. The effort will pay off later when it matters. — Anonymous entrepreneur

On our 600-acre property we have 11 shooting houses. Ten of those sit overlooking BioLogic food plots, and one is completely out of use. Of those 10 huntable locations, four were absolutely trashed, and two of those needed to be relocated to better accommodate access and seasonal wind directions.

When we started the club in September of 2017, we knew adjustments to those houses and their locations would have to be made at some point. But during our first year we decided to hunt them as is due to budget and timing. It was a decision that could have negatively impacted how productive our hunting was overall, but it was still educational seeing first-hand how the deer reacted to each spot. I believe it was essential to witness movement on the club as it was; now the changes we're making make more sense.

We decided to add two new shooting houses to existing food plots where the existing house was in very poor repair, and then two new shooting houses on new food plots. We started 2018 with a pretty big list of tasks and expectations. We were able to get a head start on that list, and on a reasonable budget, which required sweat equity and creativity. As we stand, we've got a good handle on Project Critical Relocation.

The Timber

If you followed last year's series, you know that one of our members owns a small sawmill. His brother-in-law happens to own a tree-cutting business and has seemingly unlimited access to pine logs. All to our potential benefit. To buy the 4x4s, 2x6s and 3/4-inch pressure-treated plywood for decking would cost us over $300 per platform. That's not including building a solid shooting house on top of the platform. You may as well just figure another $300+ for that.
With Paul's sawmill and access to uncut timber, factor in a few tanks of gas, a few saw blades and several hot Alabama days at the sawmill's helm, and we accomplished the goal of generating materials at a fraction of the cost. I'd like to show you a few of the finished products and why we did what we did.

Saving Money

Knowing how much it costs to build a complete shooting house from top to bottom, we evaluated our options. Based on our available budget, we just couldn't build multiple brand-new shooting houses. But we could manage a few platforms built out of milled lumber. Time was also a factor to consider. It took several days to mill the lumber for four new shooting house platforms. It would take at least that long to produce lumber for the top halves. To save on both, we came up with an alternative that I believe will prove just as effective, perhaps even long term.

I designed the shooting house platforms with dimensions to comfortably accommodate two to three hunters inside a standard 5-hub ground blind. That produces a savings of over $200 compared with buying the materials to build hard-side permanent houses. I can't think of a good reason to do otherwise. At first, I thought the ground blind option would be a temporary solution until we could afford to build a permanent house. But the wasps don't seem to mess with the blind nearly as much as they do a solid, wooden structure, and we can take the blinds down at the end of the season to help lengthen its lifespan.

The Remodel

We had two houses that were in the right location but needed a new structure. I mean, they were in very sorry shape. Not to mention very uncomfortable. This video shows a platform that we completely overhauled and put an Ameristep blind on top of in place of a "permanent" house. There is one more remodel that we hope to have accomplished before the season arrives. But this one in particular is ready to go.
The Relocations

There were two shooting houses overlooking BioLogic food plots that were really in the exact wrong spot. Access — in and out — was the primary concern for each house. One, in fact, was in the middle of the food plot; you were literally bumping deer off it every time you left after dark. Honestly, I'm not sure what the original builder was thinking. Perhaps they didn't want to kill any deer. (I jest, but seriously.)

On Field 10, we built a platform with an Ameristep blind on it that is now directly adjacent to the access road. Now you can slip out of the house and get down the road without deer seeing you, even if they are feeding in the field. This particular spot will now require a specific wind, but when conditions are right it could be one of the best sits on the club.

The second spot we relocated was in a bottom field covered in BioLogic Maximum. It was a beautiful food plot last year. We'll be planting it the same way again this year. The shooting house, however, was a mere 10 yards off the edge of the field, and you had to walk across half of it to access the house. It was in the basin of the bowl-like formation of the surrounding land, meaning any forecasted wind would swirl badly, no matter what. My daughter shot the first deer on the club last year over this food plot. It worked once, but many visits after that ended up in a goose egg.

We decided to move the house back 70 yards and up the hill. The new position will allow for better scent control, as the wind is much more consistent in that spot, and access is 100-percent better. We're very excited about both spots now. They are completely revamped with new platforms and blinds in locations that will reduce human presence and scent dispersion. More to come…

About the author: Thomas Allen calls central Alabama home, where he lives with his beloved wife, Kathryn, and two growing children, Tommy and Taylor. Follow Thomas on Twitter: @ThomasAllenIV and Instagram: ThomasAllen4
To be clear, members of this new Independent Group had a variety of motivations for leaving – four because of alleged anti-Semitism in the Labour Party, as well as objections to the party's left-wing bias, and some also in fear of being de-selected. However, the Conservative defectors, and all but one from Labour, want another "people's vote" on Brexit. It is instructive that Labour leader Jeremy Corbyn then moved to favour a second referendum, while Theresa May seeks to delay another parliamentary vote, hoping the Europeans will "blink".

Given that the EU would still clearly prefer that the UK not leave, it is unlikely to offer any significant concessions, so it is hard to see how May can ever win a parliamentary vote on her withdrawal bill. Indeed, she continues to lose procedural votes that work to reduce her negotiating authority. No vote would mean that the UK would still leave on March 29, a "hard Brexit", which most don't want. The hope of the UK establishment is that a consensus emerges for a second referendum, or for withdrawing the leave application altogether. Much will depend on whether there are further defections.

It is clear that an issue such as Brexit is not only divisive across the UK community, but also within each of the major parties, important enough to threaten the traditional party duopoly. Many say, or hope, it can't happen, but it may.

Party unity has been a significant issue in Australia's political history, but disunity has probably been more about individuals than issues, especially since World War II. The most notable exception was the ALP/DLP split of the mid-1950s over communism and communist influence. But Gorton v McMahon, Howard v Peacock, Hawke v Keating, Rudd v Gillard v Rudd, and Abbott v Turnbull were mostly just about personalities, occasionally dressed under a policy cloak. However, over the post-war period there has been an increasingly significant voter drift against both major parties.
In the late 1940s, together, they attracted about 95 per cent of the vote. Now it's in the mid-70s, at best. Most recently, the drift has manifested in support for high-profile independents, a trend that will gather momentum at the May election.

This is also occurring as both leaders have net negative poll ratings, and the electorate is genuinely faced with the choice between two lessers, both parties and individuals. While there are multiple motivations, a principal reason for the voter drift has been dissatisfaction with the two major parties, which have become increasingly self-absorbed internally while focusing more on point-scoring against, and blame-shifting to, each other, rather than governing, and particularly meeting the bigger policy challenges.

Conspicuous disloyalty, interminable scandals, opportunistic, short-term policy thought bubbles, cynicism, and so on, have led voters to disengage, stop listening, or protest where and when they can. The economic narrative embraced by both sides doesn't match the lived, and increasingly difficult, experience of voters, while the big issues they realistically expect their representatives to handle are simply denied, ignored, or fiddled with.

In all this, individual members are increasingly under considerable pressure to toe the line, to rabbit on with the party line, as summarised in the dot-points emailed to them each morning, even if they fundamentally disagree. It is a daily farce in the media. It becomes harder and harder to defend the indefensible, especially for those in marginal seats where their feedback is contrary, where their support is waning.

Some members have already broken out of the two-party structure – Cory Bernardi, Clive Palmer and, most noticeably, Julia Banks – joining significant independents Cathy McGowan, Rebekha Sharkie, Andrew Wilkie, Kerryn Phelps and Bob Katter. Independents could certainly hold the "balance of power" in the Lower House after the next election.
However, could climate policy be our Brexit? Could it threaten our LNP/ALP duopoly? It is a divisive issue within both the LNP and the ALP, against the background of some 70 to 80 per cent of voters consistently wanting decisive government-led action on the essential transition to a low-carbon society, and to renewables. Advocates for a long-term climate action plan across the two parties are much closer than they are to the deniers, coal advocates and intransigents in their own parties.
Q: org.hibernate.exception.JDBCConnectionException: could not execute query using Hibernate

I've developed an application and it worked just fine locally, but when I uploaded it to a remote server it gave me com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException. I've tried the solution mentioned in the topic in this link: https://stackoverflow.com/questions/7565143/com-mysql-jdbc-exceptions-jdbc4-mysqlnontransientconnectionexception-no-operati#=

Here is the code that has access to the database:

    protected Number getCount(Class clazz) {
        Session currentSession = HibernateUtil.getSessionFactory().getCurrentSession();
        Transaction transaction = currentSession.beginTransaction();
        return (Number) currentSession.createCriteria(clazz)
                .setProjection(Projections.rowCount())
                .uniqueResult();
    }

and here is my Hibernate configuration:

    <hibernate-configuration>
        <session-factory>
            <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
            <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
            <property name="hibernate.current_session_context_class">thread</property>
            <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/lyrics_db</property>
            <property name="hibernate.connection.username">root</property>
            <property name="hibernate.connection.password">123456</property>
            <property name="hibernate.hbm2ddl.auto">update</property>
            <property name="hibernate.show_sql">true</property>
            <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
            <property name="c3p0.acquire_increment">1</property>
            <property name="c3p0.idle_test_period">100</property> <!-- seconds -->
            <property name="c3p0.max_size">100</property>
            <property name="c3p0.max_statements">0</property>
            <property name="c3p0.min_size">10</property>
            <property name="c3p0.timeout">1800</property> <!-- seconds -->
        </session-factory>
    </hibernate-configuration>

It's not working and I'm getting the same exception, and here
is my full stack trace: Dec 3, 2013 8:02:44 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Servlet.service() for servlet ServletAdaptor threw exception org.hibernate.exception.JDBCConnectionException: could not execute query at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:74) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43) at org.hibernate.loader.Loader.doList(Loader.java:2223) at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2104) at org.hibernate.loader.Loader.list(Loader.java:2099) at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:94) at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1569) at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:283) at daos.UltimateDao.listWithLimitWithOrder(UltimateDao.java:47) at daos.LyricDao.getTopHundred(LyricDao.java:73) at com.xeeapps.service.Service.getTopHundred(Service.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1480) at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1411) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1360) at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1350) at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:538) at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:716) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:662) Caused by: 
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed. at sun.reflect.GeneratedConstructorAccessor127.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.Util.getInstance(Util.java:386) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1014) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:988) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:974) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:919) at com.mysql.jdbc.ConnectionImpl.throwConnectionClosedException(ConnectionImpl.java:1290) at com.mysql.jdbc.ConnectionImpl.checkClosed(ConnectionImpl.java:1282) at com.mysql.jdbc.ConnectionImpl.prepareStatement(ConnectionImpl.java:4468) at com.mysql.jdbc.ConnectionImpl.prepareStatement(ConnectionImpl.java:4434) at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:1076) at org.hibernate.jdbc.AbstractBatcher.getPreparedStatement(AbstractBatcher.java:505) at org.hibernate.jdbc.AbstractBatcher.getPreparedStatement(AbstractBatcher.java:423) at org.hibernate.jdbc.AbstractBatcher.prepareQueryStatement(AbstractBatcher.java:139) at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1547) at org.hibernate.loader.Loader.doQuery(Loader.java:673) at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236) at org.hibernate.loader.Loader.doList(Loader.java:2220) ... 40 more Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 38,056,253 milliseconds ago. The last packet sent successfully to the server was 38,056,857 milliseconds ago. is longer than the server configured value of 'wait_timeout'. 
You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at com.mysql.jdbc.Util.handleNewInstance(Util.java:411) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117) at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3871) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2484) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2664) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2794) at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155) at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2322) at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:144) at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:186) at org.hibernate.loader.Loader.getResultSet(Loader.java:1787) at org.hibernate.loader.Loader.doQuery(Loader.java:674) at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:236) at org.hibernate.loader.Loader.doList(Loader.java:2220) at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2104) at org.hibernate.loader.Loader.list(Loader.java:2099) at org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:94) at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1569) at org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:283) at org.hibernate.impl.CriteriaImpl.uniqueResult(CriteriaImpl.java:305) at 
daos.UltimateDao.get(UltimateDao.java:24) at daos.SongDao.getSong(SongDao.java:31) at daos.LyricDao.getLyricForSong(LyricDao.java:24) at com.xeeapps.service.Service.getLyricForASong(Service.java:82) ... 32 more Caused by: java.net.SocketException: Broken pipe at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92) at java.net.SocketOutputStream.write(SocketOutputStream.java:136) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123) at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3852) ... 53 more Dec 3, 2013 8:02:44 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Servlet.service() for servlet Faces Servlet threw exception com.sun.jersey.api.client.UniformInterfaceException: GET http://localhost:8080/LyricsService/webresources/service/getTopHundred returned a response status of 500 Internal Server Error at com.sun.jersey.api.client.WebResource.handle(WebResource.java:686) at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:507) at client.LyricsClient.getTopHundred(LyricsClient.java:71) at controllers.TopHundredController.init(TopHundredController.java:32) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.el.parser.AstValue.invoke(AstValue.java:191) at org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:276) at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:105) at com.sun.faces.facelets.tag.jsf.core.DeclarativeSystemEventListener.processEvent(EventHandler.java:128) at 
javax.faces.component.UIComponent$ComponentSystemEventListenerAdapter.processEvent(UIComponent.java:2563) at javax.faces.event.SystemEvent.processListener(SystemEvent.java:108) at javax.faces.event.ComponentSystemEvent.processListener(ComponentSystemEvent.java:118) at com.sun.faces.application.ApplicationImpl.processListeners(ApplicationImpl.java:2187) at com.sun.faces.application.ApplicationImpl.invokeComponentListenersFor(ApplicationImpl.java:2135) at com.sun.faces.application.ApplicationImpl.publishEvent(ApplicationImpl.java:289) at com.sun.faces.application.ApplicationImpl.publishEvent(ApplicationImpl.java:247) at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:107) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:219) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:647) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:662) Does anyone know why this exception keeps happening even I changed my configuration? 
A: I guess the problem was that I didn't commit the transaction, so it made some sort of lock. I figured this out from this topic: What happens if you don't commit transaction in a database (say SQL Server)
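The stack trace also points at a second, related cause: the pooled connection sat idle longer than MySQL's `wait_timeout` and was closed server-side. A sketch of configuration that guards against that, assuming the same `hibernate.cfg.xml` as in the question (the property passthrough of `hibernate.c3p0.*` keys to c3p0 is an assumption here; `testConnectionOnCheckout` and `preferredTestQuery` are standard c3p0 property names):

```xml
<!-- Additions inside <session-factory>: validate connections so a
     server-side close (wait_timeout) is detected before use. -->
<property name="hibernate.c3p0.idle_test_period">300</property> <!-- seconds; keep well below MySQL wait_timeout -->
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.preferredTestQuery">SELECT 1</property>
```

Checkout testing adds a round trip per borrow; the cheaper idle-period test alone is often enough once transactions are committed (or rolled back) promptly so connections return to the pool.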
Frosta (disambiguation)

Frosta is a municipality in Trøndelag county, Norway.

Frosta may also refer to:

Places
- Frosta (village), a village in the municipality of Frosta in Trøndelag county, Norway
- Frosta Church, a church in the municipality of Frosta in Trøndelag county, Norway
- Frosta Hundred, a hundred in the traditional province of Scania in Sweden

Other uses
- Frosta AG, a German frozen food producer
- SS Frosta, a tanker ship that was involved in the MV George Prince ferry disaster
- Frosta (She-Ra), a character in She-Ra: Princess of Power

See also
- Frostating, a court in Frosta, Norway
Metropolitan municipality (South Africa)

In South Africa, a metropolitan municipality or Category A municipality is a municipality which executes all the functions of local government for a city or conurbation. This is by contrast to areas which are primarily rural, where the local government is divided into district municipalities and local municipalities.

The Constitution, section 155.1.a, defines "Category A" municipalities. The Municipal Structures Act lays out that this type of local government is to be used for conurbations, "centre[s] of economic activity", areas "for which integrated development planning is desirable", and areas with "strong interdependent social and economic linkages". The metropolitan municipality is similar to the consolidated city-county in the US, although a South African metropolitan municipality is created by notice of the provincial government, not by agreement between district and local municipalities.

History

Metropolitan municipalities were brought about during reforms of the 1990s so that cities could be governed as single entities. This was a response to apartheid policy, which had broken up municipal governance. For example, Soweto had, until 1973, been administered by the Johannesburg City Council, but after 1973 was run by an Administration Board separate from the city council. This arrangement deprived Soweto of vital subsidies that it had been receiving from Johannesburg. A key demand of anti-apartheid civics in the 1980s was for 'one city, one tax base' in order to facilitate the equitable distribution of funds within what was a functionally integrated urban space.

Local government reform after apartheid produced six Transitional Metropolitan Councils following the 1995/6 local government elections. These were characterized by a two-tier structure. From 2000, these six Metropolitan Councils were restructured into their final single-tier form. In 2011, Buffalo City (East London) and Mangaung (Bloemfontein) were added to the category of metropolitan municipality.

See also
- Urban planning in Africa

Other sources
- Government Communication & Information Services (2005) Categories of municipalities
- Parliament of the Republic of South Africa (1996) Constitution of the Republic of South Africa, Chapter 7: Local Government
- Parliament of the Republic of South Africa (1998) Local Government: Municipal Structures Act, Act 117 of 1998
- South African Local Government Association
Nashville Predators goalie Pekka Rinne made one of the best saves of the NHL playoffs on Monday to prevent a wild goal against the Chicago Blackhawks.

During the first period, the Predators had the puck in the Blackhawks' zone on a power play. Blackhawks defenseman Johnny Oduya cleared the puck, smacking it down the ice and against the boards. However, as the puck hit the boards, it took a wild bounce and was redirected right toward the Predators' net. Rinne was outside of the net, anticipating collecting the puck from behind the net, and had to dive back, reaching his stick in front of the net to prevent the goal. Incredible.

The replay and slo-mo from the announcers show just how close the Predators came to giving up a wacky goal.
October surprise /äkˈtōbər sə(r)ˈprīz/ noun: any political event orchestrated (or apparently orchestrated) in the month before an election, in the hopes of affecting the outcome.

October shit show /äkˈtōbər SHit·shō/ noun: a video leaked to the Washington Post that reveals a presidential candidate talking about attempting to have sex with married women and how he is entitled to grab women's genitals because he's "a star," pushing a Category 2 hurricane wreaking havoc on the lower U.S. into the second tier of news coverage.

Republican National Committee chairman Reince Priebus condemned Donald Trump's sexually aggressive comments about women in a curt statement: "No woman should ever be described in these terms or talked about in this manner. Ever."

Representative Barbara Comstock, R-Virginia: "This is disgusting, vile, and disqualifying. No woman should ever be subjected to this type of obscene behavior and it is unbecoming of anybody seeking high office. In light of these comments, Donald Trump should step aside and allow our party to replace him with Mike Pence or another appropriate nominee from the Republican Party."
Stamporama provides a free on-line auction to its registered members. While anyone visiting Stamporama can view the auction listings, only registered members can use the auction platform to buy and sell.

Auction Rules

Please read the Auction Rules before using the auction service. Refer back to the rules often to check for updates. Remember, buyers and sellers are required to know the rules and abide by them. Failure to do so may result in the loss of privileges to use this service, or even termination of Stamporama membership.

Looking for Help?

Stamporama has resources available that provide information regarding how the auction platform works. The two links below are a good place to start. Additional help can be obtained by posting a question on the Stamporama Discussion Board. The Discussion Board is monitored regularly. There are plenty of people who frequent the Discussion Boards and are ready to answer questions. Sometimes a problem arises or a question is not answered clearly or completely. When that happens there are members who are available to assist:
WRKE-LP

WRKE-LP is a variety-formatted broadcast radio station licensed to Salem, Virginia, serving Salem and Roanoke in Virginia. WRKE-LP is owned and operated by Roanoke College. The station currently employs 70 Roanoke College students and broadcasts from the main floor of the Colket Center of Roanoke College.

Background

WRKE is a student-run, non-commercial radio station licensed by the Federal Communications Commission to Roanoke College. WRKE is advised by a professional advisory board. The station is under manager and adviser Rick Mattioni and program director Elijah Wilhelm. The station plays on an automated system playing alternative/indie music when student DJs are not on air. Student shows vary from sports talk to alternative, pop, or hip-hop.

History

The station got its start in 1998 when Jim Goodwin formed a club to establish a radio station on campus. Equipment was obtained and the station went on the air in November 2005, with music played via computer automation until students returned after winter break to establish programming and shows. The station is building a fan base on campus and beyond, as far as its low-power transmitter can carry. While the broadcast radius is a mere 3 miles, this covers most of the Roanoke College community.

Over the summer of 2016 a new, larger, more accessible on-air studio was built on Colket's main floor, just a few feet from the Center's front doors and across the hall from the Commons. The upgrade enhances WRKE's ability to conduct interviews in roomier surroundings, to produce live (and recorded) specials such as election night coverage and music performances, and to increase the station's visibility.

Beginning in early 2017, WRKE experienced unprecedented growth, with the station expanding from 13 students hosting 8 shows to 75 students hosting 34 shows. WRKE also began its live and special events during this time.
Events included Outlast, a competition similar to Big Brother; live radio dramas with the theater department; and live events from down in the Cavern. For its progress, WRKE, under Program Director Elijah Wilhelm, won the Student Organization of the Year award for both the 2016-17 and 2017-18 school years. This is the only time the award has been won by the same organization in back-to-back years.

Programming

The station is DJ'd by Roanoke College students throughout the week during the academic year, and it plays alternative music on an automated system when student DJs are not on air. Student shows range from sports-talk and alternative to pop and hip-hop. Over the past year WRKE has also expanded its specialty event programming, including joint collaborations with the RC theatre and athletic departments, remote broadcasts from local businesses, live audience shows for students, radio dramas, and a reality game competition.

References

External links
100.3 WRKE Online

Category:2005 establishments in Virginia Category:Variety radio stations in the United States Category:Radio stations established in 2005 Category:Roanoke College
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (c) 2018 MediaTek Inc.
 * Author: Weijie Gao <weijie.gao@mediatek.com>
 */

#ifndef _MT753X_H_
#define _MT753X_H_

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/of_mdio.h>
#include <linux/workqueue.h>
#include <linux/gpio/consumer.h>

#ifdef CONFIG_SWCONFIG
#include <linux/switch.h>
#endif

#include "mt753x_vlan.h"

#define MT753X_DFL_CPU_PORT	6
#define MT753X_NUM_PHYS		5

#define MT753X_DFL_SMI_ADDR	0x1f
#define MT753X_SMI_ADDR_MASK	0x1f

struct gsw_mt753x;

enum mt753x_model {
	MT7530 = 0x7530,
	MT7531 = 0x7531
};

struct mt753x_port_cfg {
	struct device_node *np;
	int phy_mode;
	u32 enabled: 1;
	u32 force_link: 1;
	u32 speed: 2;
	u32 duplex: 1;
};

struct mt753x_phy {
	struct gsw_mt753x *gsw;
	struct net_device netdev;
	struct phy_device *phydev;
};

struct gsw_mt753x {
	u32 id;

	struct device *dev;
	struct mii_bus *host_bus;
	struct mii_bus *gphy_bus;
	struct mutex mii_lock;	/* MII access lock */
	u32 smi_addr;
	u32 phy_base;
	int direct_phy_access;

	enum mt753x_model model;
	const char *name;

	struct mt753x_port_cfg port5_cfg;
	struct mt753x_port_cfg port6_cfg;

	int phy_status_poll;
	struct mt753x_phy phys[MT753X_NUM_PHYS];
	int phy_link_sts;

	int irq;
	int reset_pin;
	struct work_struct irq_worker;

#ifdef CONFIG_SWCONFIG
	struct switch_dev swdev;
	u32 cpu_port;
#endif

	int global_vlan_enable;
	struct mt753x_vlan_entry vlan_entries[MT753X_NUM_VLANS];
	struct mt753x_port_entry port_entries[MT753X_NUM_PORTS];

	int (*mii_read)(struct gsw_mt753x *gsw, int phy, int reg);
	void (*mii_write)(struct gsw_mt753x *gsw, int phy, int reg, u16 val);
	int (*mmd_read)(struct gsw_mt753x *gsw, int addr, int devad, u16 reg);
	void (*mmd_write)(struct gsw_mt753x *gsw, int addr, int devad, u16 reg,
			  u16 val);

	struct list_head list;
};

struct chip_rev {
	const char *name;
	u32 rev;
};

struct mt753x_sw_id {
	enum mt753x_model model;
	int (*detect)(struct gsw_mt753x *gsw, struct chip_rev *crev);
	int (*init)(struct gsw_mt753x *gsw);
	int (*post_init)(struct gsw_mt753x *gsw);
};

extern struct list_head mt753x_devs;

struct gsw_mt753x *mt753x_get_gsw(u32 id);
struct gsw_mt753x *mt753x_get_first_gsw(void);
void mt753x_put_gsw(void);
void mt753x_lock_gsw(void);

u32 mt753x_reg_read(struct gsw_mt753x *gsw, u32 reg);
void mt753x_reg_write(struct gsw_mt753x *gsw, u32 reg, u32 val);

int mt753x_mii_read(struct gsw_mt753x *gsw, int phy, int reg);
void mt753x_mii_write(struct gsw_mt753x *gsw, int phy, int reg, u16 val);

int mt753x_mmd_read(struct gsw_mt753x *gsw, int addr, int devad, u16 reg);
void mt753x_mmd_write(struct gsw_mt753x *gsw, int addr, int devad, u16 reg,
		      u16 val);

int mt753x_mmd_ind_read(struct gsw_mt753x *gsw, int addr, int devad, u16 reg);
void mt753x_mmd_ind_write(struct gsw_mt753x *gsw, int addr, int devad, u16 reg,
			  u16 val);

void mt753x_irq_worker(struct work_struct *work);
void mt753x_irq_enable(struct gsw_mt753x *gsw);

/* MDIO Indirect Access Registers */
#define MII_MMD_ACC_CTL_REG	0x0d
#define MMD_CMD_S		14
#define MMD_CMD_M		0xc000
#define MMD_DEVAD_S		0
#define MMD_DEVAD_M		0x1f

/* MMD_CMD: MMD commands */
#define MMD_ADDR		0
#define MMD_DATA		1

#define MII_MMD_ADDR_DATA_REG	0x0e

/* Procedure of MT753x Internal Register Access
 *
 * 1. Internal Register Address
 *
 *    The MT753x has a 16-bit register address and each register is 32-bit.
 *    This means the lowest two bits are not used as the register address is
 *    4-byte aligned.
 *
 *    Rest of the valid bits are divided into two parts:
 *      Bit 15..6 is the Page address
 *      Bit 5..2 is the low address
 *
 *    -------------------------------------------------------------------
 *    | 15 14 13 12 11 10  9  8  7  6 |  5  4  3  2   |  1  0  |
 *    |-------------------------------|---------------|--------|
 *    |          Page Address         |    Address    | Unused |
 *    -------------------------------------------------------------------
 *
 * 2. MDIO access timing
 *
 *    The MT753x uses the following MDIO timing for a single register read
 *
 *    Phase 1: Write Page Address
 *    -------------------------------------------------------------------
 *    | ST | OP | PHY_ADDR | TYPE | RSVD | TA | RSVD  |    PAGE_ADDR    |
 *    -------------------------------------------------------------------
 *    | 01 | 01 |  11111   |  1   | 1111 | xx | 00000 | REG_ADDR[15..6] |
 *    -------------------------------------------------------------------
 *
 *    Phase 2: Write low Address & Read low word
 *    -------------------------------------------------------------------
 *    | ST | OP | PHY_ADDR | TYPE |    LOW_ADDR    | TA |     DATA      |
 *    -------------------------------------------------------------------
 *    | 01 | 10 |  11111   |  0   | REG_ADDR[5..2] | xx |  DATA[15..0]  |
 *    -------------------------------------------------------------------
 *
 *    Phase 3: Read high word
 *    -------------------------------------------------------------------
 *    | ST | OP | PHY_ADDR | TYPE | RSVD | TA |     DATA      |
 *    -------------------------------------------------------------------
 *    | 01 | 10 |  11111   |  1   | 0000 | xx | DATA[31..16]  |
 *    -------------------------------------------------------------------
 *
 *    The MT753x uses the following MDIO timing for a single register write
 *
 *    Phase 1: Write Page Address (The same as read)
 *
 *    Phase 2: Write low Address and low word
 *    -------------------------------------------------------------------
 *    | ST | OP | PHY_ADDR | TYPE |    LOW_ADDR    | TA |     DATA      |
 *    -------------------------------------------------------------------
 *    | 01 | 01 |  11111   |  0   | REG_ADDR[5..2] | xx |  DATA[15..0]  |
 *    -------------------------------------------------------------------
 *
 *    Phase 3: write high word
 *    -------------------------------------------------------------------
 *    | ST | OP | PHY_ADDR | TYPE | RSVD | TA |     DATA      |
 *    -------------------------------------------------------------------
 *    | 01 | 01 |  11111   |  1   | 0000 | xx | DATA[31..16]  |
 *    -------------------------------------------------------------------
 *
 */

/* Internal Register Address fields */
#define MT753X_REG_PAGE_ADDR_S	6
#define MT753X_REG_PAGE_ADDR_M	0xffc0
#define MT753X_REG_ADDR_S	2
#define MT753X_REG_ADDR_M	0x3c

#endif /* _MT753X_H_ */
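The register-address layout described in the header's comment can be exercised outside the kernel. A quick illustrative sketch (Python, not part of the header; the function name `split_reg_addr` is mine) splitting a 16-bit register address into its page and low-address fields using the shift/mask values defined in the file:

```python
# Mirrors the MT753X_REG_* constants from the header above.
MT753X_REG_PAGE_ADDR_S = 6
MT753X_REG_PAGE_ADDR_M = 0xffc0
MT753X_REG_ADDR_S = 2
MT753X_REG_ADDR_M = 0x3c

def split_reg_addr(reg):
    """Split a 16-bit MT753x register address into (page, low) fields.

    Bits 15..6 are the page address, bits 5..2 the low address; the
    lowest two bits are unused because registers are 4-byte aligned.
    """
    page = (reg & MT753X_REG_PAGE_ADDR_M) >> MT753X_REG_PAGE_ADDR_S
    low = (reg & MT753X_REG_ADDR_M) >> MT753X_REG_ADDR_S
    return page, low

# e.g. register 0x7ffc -> page 0x1ff, low address 0xf
print(split_reg_addr(0x7ffc))
```

The page value is what Phase 1 of the MDIO sequence writes (REG_ADDR[15..6]), and the low value is what Phase 2 carries (REG_ADDR[5..2]).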
More materials by author Dustin Kein

Don't worry about the content. Let's assume that you already have something to show, and we're now going to arrange it all quickly and correctly. For myself, I chose the most eye-catching image for a portfolio: the shape of a notebook. You are free to create and improvise for your own pleasure, though.

We at Web Design Library would like to wish you a Happy Valentine's Day. The tutorial below is our gift to you. Prepared with beginners in mind, the first half explains how to draw a realistic 3D cube (a box). The second half, intended for advanced Photoshop artists, explains how to decorate...

Have you ever seen pictures that change or appear in new ways while opening in Internet Explorer? In this short tutorial I'll teach you the technique for making such pictures yourself. The technique is very simple...

Have you ever had the desire to create a cool signature on a forum, an animated banner, or just something that moves? Here I will tell you how to create an inscription, giving you skills that will serve as the foundation for further experimentation. You can create ...
{default_translation_domain domain='bo.default'}

<div id="condition-add-operators-values" class="form-group">
    <label for="operator">{$label}</label>
    <div class="row">
        <div class="col-lg-6">
            {$operatorSelectHtml nofilter}
        </div>
        <div class="input-group col-lg-6">
            <input type="text" class="form-control" id="{$inputKey}-value" name="{$inputKey}[value]" value="{$currentValue}">
        </div>
    </div>
</div>
1. Field of the Invention

This invention relates to improvements in sealing rings and more particularly, but not by way of limitation, to an improved self-sealing ring adapted for use with the closure members of valves.

2. Description of the Prior Art

O-rings are in widespread use today for sealing between the closure member of a valve and the valve seat. These sealing members are efficient in that the generally circular cross-sectional configuration of the O-ring lends itself readily to distortion to fill the seal ring groove and provide a seal for the valve in its closed position. In actual usage, however, it has been found that these O-ring seals are frequently dislodged from the annular recess or groove, particularly when the closure member is a rotatable gate member. When a rotatable gate member is utilized within a valve body, the sealing member is exposed to the full pressure existing within the valve, and during rotation of the gate member an O-ring may be swept from the groove; any loss of the sealing ring results in a leakage of fluid around the gate member, causing an inefficient valve. In order to overcome this disadvantage, a flanged sealing ring was developed, as shown in my prior U.S. Pat. No. 2,886,284, issued May 12, 1959, and entitled "Flanged Sealing Ring". Whereas this sealing ring greatly improved the efficiency of the O-ring type seal in combination with a rotatable gate member in a valve, it has been found that the flexibility of the sealing ring during compression may not be sufficiently great as to afford maximum sealing efficiency. In addition, it has been found that the outwardly extending flange may require additional reinforcement in some installations.
Jakob Nikolayevich Popov Jakob Nikolayevich Popov (1802 (1798?) - after 1852 (1859?)) was a Russian architect. His most noted work is the Demidovsky Pillar. Category:1802 births Category:1859 deaths Category:Russian architects Category:19th-century Russian architects
RALEIGH, N.C. (WNCN) — A North Carolina State University football player has been suspended after police say he assaulted a female, according to officials. Damontay Jaqual Rhem, 22, was arrested Friday afternoon on campus at Wolf Village by N.C. State University police, according to arrest records. Rhem, of Wendell, is a running back for the Wolfpack, according to the N.C. State football team roster. Rhem, who played in the final eight games of the 2018 season, graduated from East Wake High School and attended UNC Pembroke before transferring to N.C. State. Rhem is charged with assault on a female, which authorities say took place Thursday, according to arrest records. N.C. State University released a statement saying that Rhem was suspended. “We’re aware of the charges and await additional details. Damontay is suspended from athletic participation pending resolution of this matter in accordance with the NC State Student-Athlete Code of Conduct,” Senior Associate Athletics Director Fred Demarest said the statement released to CBS 17. No other details were available.
Genuine experienced guy seeks couples for fun

Hi i am marc I am 6ft tall slim dark hair blue eyes, 36, from bristol well educated, imaginative, broadminded and fun!! Into most things, toys, vids, pics, oral, roleplay and dressing up, love helping to give double penetration and double blowjob. Why not try a genuine experienced fun guy i am discreet, clean and genuine. I am looking 4 cpls and ladies for erotic pleasure. I am able to perform, normal and fun. If you'd like to know more or have a chat feel free to e-mail me. Happy to be a non-participation third person or cameraman; also into phone!!!! I can travel at short notice and am free days as well as evenings/nights or late night locally.
Earlier this week [Monday], the Los Angeles Rams hired former Chicago Bears quarterback coach Mike Groh. He’ll operate as the team’s receivers coach/passing game coordinator. On late Thursday evening, according to a source, the Los Angeles Rams nabbed themselves another coordinator familiar with the Bears, acquiring former running backs coach - Skip Peete - to the same role. Peete is no stranger to this position, having been a running backs coach for almost the entirety of the last two decades; with 2015 being the only year in which he didn’t hold the position: Chicago Bears [2013-2014] Dallas Cowboys [2007-2012] Oakland Raiders [1998-2006] UCLA [1996-1997] Michigan State [1993-1994] The good news for Peete is that he’s walking into a great situation with the Rams, given the talent at the position. He’ll inherit Rookie-of-the-Year hopeful, Pro Bowl running back Todd Gurley. And the team’s backups - Tre Mason and Benny Cunningham - provide solid depth. If the name "Peete" struck you as familiar, that’s probably because it is. Skip’s brother Rodney played quarterback in the NFL for 16 seasons. Their father, Willie, was a long-time coach in the NFL, dating back to 1960 where he was an assistant coach at the University of Arizona. It’s fair to assume Skip learned a thing or two from his pops, as well, as Willie was a RB’s coach in the NFL from 1983-1995 with the Chiefs, Packers, Buccaneers, and Bears. For a team as talented as the Rams, complaints are often made that it’s the coaching preventing the team from getting over .500, earning a playoff berth, etc. It’s early in the offseason, but it appears the Rams are taking the necessary steps to ensure the rubber meets the road when the season kicks off in 2016.
This past week, Seagate finally announced a 6TB hard drive, three years after their 4TB hard drive. Of course, Hitachi announced their hermetically-sealed helium 6TB hard drives in November 2013, but only to OEM and cloud customers, not for retail sale. Hard drive capacities are slowing down as shown in the chart below. To account for the shrinking form factors in the earlier part of the history, and to account for exponential growth, I've scaled the vertical axis to be the logarithm of kilobytes (1000 bytes) per liter. This three-year drought in hard drive capacity increases is represented by the plateau between the last two blue dots in the graph, representing 2011 and 2014. The red line extension to 2020 is based on Seagate's prediction that by then they will have 20TB drives using HAMR technology, which uses a combination of laser and magnetism. However, if the trendline from 2004-2011 had continued, by linear extrapolation on this log scale, hard drives would have been 600TB by 2020. This is not good news for users of Big Data. Data sizes (and variety and number of sources) are continuing to grow, but hard drive sizes are leveling off. Vertical scaling is no longer going to be an option; the days of the monolithic RDBMS are numbered. Worse, data center sizes and energy consumption will increase in proportion to growth in data size rather than be tempered by advances in hard drive capacities as we had become accustomed to. We haven't reached an absolute peak in hard drive capacity, so the term "peak hard drive" is an exaggeration in absolute terms, but relative to corporate data set sizes, I'm guessing we did reach peak hard drive a couple of years ago.
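The extrapolations above are straight lines on a log-scaled axis, i.e. constant-annual-growth-rate fits. A minimal sketch of that arithmetic (the function and variable names are mine; the 6TB-in-2014 and 20TB-by-2020 figures are the ones quoted in the post, used here only to back out the implied growth rate):

```python
def extrapolate_capacity(year0, cap0, year1, cap1, target_year):
    """Log-linear (constant growth rate) extrapolation of drive capacity.

    Fits cap = cap0 * r**(year - year0) through the two known points
    and evaluates the fit at target_year.
    """
    r = (cap1 / cap0) ** (1.0 / (year1 - year0))  # implied annual growth factor
    return cap0 * r ** (target_year - year0)

# Annual growth factor implied by Seagate's own roadmap:
# 6TB in 2014 growing to a predicted 20TB by 2020.
r = (20.0 / 6.0) ** (1.0 / (2020 - 2014))
print(round(r, 2))  # about 1.22, i.e. roughly 22% per year
```

Note how modest that is: sustained doubling every year over the same six years would have meant a 64x increase rather than the predicted 3.3x.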
FBI interviews accuser; Yale friend remembers heavy drinker FBI agents on Sunday interviewed one of the three women who have accused Supreme Court nominee Brett Kavanaugh of sexual misconduct as Republicans and Democrats quarreled over whether the bureau would have enough time and freedom to conduct a thorough investigation before a high-stakes vote on his nomination to the nation’s highest court. The White House insisted it was not “micromanaging” the new one-week review of Kavanaugh’s background but some Democratic lawmakers claimed the White House was keeping investigators from interviewing certain witnesses. President Donald Trump, for his part, tweeted that no matter how much time and discretion the FBI was given, “it will never be enough” for Democrats trying to keep Kavanaugh off the bench. And even as the FBI explored the past allegations that have surfaced against Kavanaugh, another Yale classmate came forward to accuse the federal appellate judge of being untruthful in his testimony to the Senate Judiciary Committee about the extent of his drinking in college. In speaking to FBI agents, Deborah Ramirez detailed her allegation that Kavanaugh exposed himself to her at a party in the early 1980s when they were students at Yale University, according to a person familiar with the matter who was not authorized to publicly discuss details of a confidential investigation. Kavanaugh has denied Ramirez’s allegation. The person familiar with Ramirez’s questioning, who spoke to The Associated Press on condition of anonymity, said she also provided investigators with the names of others who she said could corroborate her account. But Christine Blasey Ford, a California professor who says Kavanaugh sexually assaulted her when they were teenagers, has not been contacted by the FBI since Trump on Friday ordered the agency to take another look at the nominee’s background, according to a member of Ford’s team. Kavanaugh has denied assaulting Ford. 
In a statement released Sunday, a Yale classmate of Kavanaugh’s said he is “deeply troubled by what has been a blatant mischaracterization by Brett himself of his drinking at Yale.” Charles “Chad” Ludington, who now teaches at North Carolina State University, said he was a friend of Kavanaugh’s at Yale and that Kavanaugh was “a frequent drinker, and a heavy drinker.” “On many occasions I heard Brett slur his words and saw him staggering from alcohol consumption, not all of which was beer. When Brett got drunk, he was often belligerent and aggressive,” Ludington said. While saying that youthful drinking should not condemn a person for life, Ludington said he was concerned about Kavanaugh’s statements under oath before the Senate Judiciary Committee. Speaking to the issue of the scope of the FBI’s investigation, White House press secretary Sarah Huckabee Sanders said White House counsel Don McGahn, who is managing Kavanaugh’s nomination, “has allowed the Senate to dictate what these terms look like, and what the scope of the investigation is.” “The White House isn’t intervening. We’re not micromanaging this process. It’s a Senate process. It has been from the beginning, and we’re letting the Senate continue to dictate what the terms look like,” Sanders said. White House counselor Kellyanne Conway said the investigation will be “limited in scope” and “will not be a fishing expedition. The FBI is not tasked to do that.” Senate Judiciary Committee member Jeff Flake, R-Ariz., requested an investigation last Friday — after he and other Republicans on the panel voted along strict party lines in favor of Kavanaugh’s confirmation — as a condition for his own subsequent vote to put Kavanaugh on the Supreme Court. Another committee member, Sen. Lindsey Graham, R-S.C., said Sunday that testimony would be taken from Ramirez and Kavanaugh’s high school friend Mark Judge, who has been named by two of three women accusing Kavanaugh of sexual misconduct.
“I think that will be the scope of it. And that should be the scope of it,” Graham said. Sen. Dianne Feinstein of California, the top Democrat on the Judiciary Committee, called on the White House and the FBI to provide the written directive regarding the investigation’s scope. In a letter Sunday, she also asked for updates on any expansion of the original directive. Sen. Susan Collins said Sunday she is confident in the investigation and “that the FBI will follow up on any leads that result from the interviews.” The Maine Republican supports the new FBI investigation and is among a few Republican and Democratic senators who have not announced a position on Kavanaugh. Republicans control 51 seats in the closely divided 100-member Senate and cannot afford to lose more than one vote on confirmation. Collins and Flake spoke throughout the weekend. Senate Republicans discussed the contours of the investigation with the White House late Friday, according to a person familiar with the call who was not authorized to discuss it publicly. Senate Majority Leader Mitch McConnell, R-Ky., had gathered Judiciary Committee Republicans in his office earlier. At that time, the scope of the investigation was requested by Flake, Collins and Sen. Lisa Murkowski of Alaska, said McConnell’s spokesman Don Stewart. Murkowski is not on the committee, but also has not announced how she will vote on Kavanaugh’s confirmation. Republicans later called the White House to discuss the scope of the probe, the person said. McConnell’s office declined to elaborate Sunday on which allegations would be investigated, reiterating only that it would focus on “current credible allegations.” Stewart said the investigation’s scope “was set” by the three GOP senators Friday and “has not changed.” But Democratic Sen. Mazie Hirono of Hawaii, a Judiciary Committee member, doubted how credible the investigation will be given the time limit. 
“That’s bad enough, but then to limit the FBI as to the scope and who they’re going to question, that – that really – I wanted to use the word farce, but that’s not the kind of investigation that all of us are expecting the FBI to conduct,” she said. Trump initially opposed such an investigation as allegations began mounting but relented and ordered one on Friday. He later said the FBI has “free rein.” “They’re going to do whatever they have to do, whatever it is they do. They’ll be doing things that we have never even thought of,” Trump said Saturday as he departed the White House for a trip to West Virginia. “And hopefully at the conclusion everything will be fine.” He revisited the “scope” question later Saturday on Twitter, writing in part, “I want them to interview whoever they deem appropriate, at their discretion.” Sanders said Trump, who has vigorously defended Kavanaugh but also raised the slight possibility of withdrawing the nomination should damaging information be found, “will listen to the facts.” At least three women have accused Kavanaugh of years-ago misconduct. He denies all the claims. The third woman, Julie Swetnick, accused Kavanaugh and Judge of excessive drinking and inappropriate treatment of women in the early 1980s, among other accusations. Kavanaugh has called her accusations a “joke.” Judge has said he “categorically” denies the allegations. Swetnick’s attorney, Michael Avenatti, said Saturday that his client had not been contacted by the FBI but was willing to cooperate with investigators. Ford also has said Judge was in the room when a drunken Kavanaugh sexually assaulted her. Judge has said he will cooperate with any law enforcement agency that will “confidentially investigate” sexual misconduct allegations against him and Kavanaugh. Judge has also denied misconduct allegations. 
Sanders spoke on “Fox News Sunday,” Conway appeared on CNN’s “State of the Union” and Graham and Hirono were interviewed on ABC’s “This Week.”
/*
 * Author: illuz <iilluzen[at]gmail.com>
 * File: AC_merge_n.cpp
 * Create Date: 2014-11-27 14:41:15
 * Description: merge the two arrays and find the median, O(n+m).
 *              The complexity exceeds the intended O(log(n+m)), but it got AC!
 */

#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
    double findMedianSortedArrays(int A[], int m, int B[], int n) {
        vector<int> C;
        int pa = 0, pb = 0;     // read positions in A & B
        // Standard merge of the two sorted arrays into C.
        while (pa < m || pb < n) {
            if (pa == m) {
                C.push_back(B[pb++]);
                continue;
            }
            if (pb == n) {
                C.push_back(A[pa++]);
                continue;
            }
            if (A[pa] > B[pb])
                C.push_back(B[pb++]);
            else
                C.push_back(A[pa++]);
        }
        // Odd total length: middle element; even: mean of the two middles.
        if ((n + m) & 1)
            return C[(n + m) / 2];
        else
            return (C[(n + m) / 2 - 1] + C[(n + m) / 2]) / 2.0;
    }
};

int main() {
    int n, m;
    int A[100], B[100];
    Solution s;
    while (cin >> n) {
        for (int i = 0; i < n; i++)
            cin >> A[i];
        cin >> m;
        for (int i = 0; i < m; i++)
            cin >> B[i];
        cout << s.findMedianSortedArrays(A, n, B, m) << endl;
    }
    return 0;
}
ve m(w(s)). -2*s + 10 Suppose -2*z = -3*q - 22, 5*z + 3*q - 21 = 2*q. Let d(j) = z*j - 3*j + 0*j. Let c(s) = 3*s - 5*s + 4*s - 3*s. Calculate c(d(u)). -2*u Let z(v) = -6*v. Let q(u) = 3*u - 5264. Determine q(z(m)). -18*m - 5264 Let h(o) = -2*o. Let s(d) be the second derivative of d**3/3 - 71*d**2/2 + 666*d. Calculate h(s(w)). -4*w + 142 Let b(i) = -5*i. Let d(o) be the third derivative of 17*o**4/12 + 412*o**2. Calculate b(d(t)). -170*t Let z(l) = 170*l**2. Let u(v) = -7*v**2 + 5998 + 5*v**2 - 5998. What is z(u(m))? 680*m**4 Let c(x) = 32*x. Let v(j) = -31*j. Let q(u) = -2*c(u) - 3*v(u). Let a(z) = 47*z**2 - 22*z**2 - 26*z**2. Determine a(q(s)). -841*s**2 Let o(h) be the first derivative of 18*h**2 + 5. Let i(y) = y - 1. Let m(t) = -t + 2. Let r(p) = -4*i(p) - 2*m(p). Give o(r(b)). -72*b Let j(w) = -22*w. Let t(s) = 24514*s**2. Give t(j(i)). 11864776*i**2 Let v(d) be the second derivative of d**4/4 + 2*d + 24. Let t(l) = 123*l - 2. Give t(v(g)). 369*g**2 - 2 Let k(s) = -965501*s. Let u(o) = -2*o**2. Determine u(k(l)). -1864384362002*l**2 Suppose -6*z - 25 = -7. Let f(h) = -10*h**2 - 8*h - 8. Let d(j) = -3*j**2 - 3*j - 3. Let k(a) = z*f(a) + 8*d(a). Let p(l) = l. Give k(p(v)). 6*v**2 Let o(c) = 97*c**2 + 3. Let j(b) be the first derivative of -b**3/3 - 66. Determine j(o(f)). -9409*f**4 - 582*f**2 - 9 Let u(j) = -4*j - 9. Let d(a) = a + 2. Let w(n) = 9*d(n) + 2*u(n). Let p(m) be the first derivative of -10*m**3/3 + 237. Give w(p(f)). -10*f**2 Let r(a) = -69*a**2. Let l(c) = -2*c + 2*c + 188*c**2 - 189*c**2. What is r(l(z))? -69*z**4 Let s(d) be the third derivative of -d**5/20 - 2*d**2. Let w(r) = -36*r. What is s(w(k))? -3888*k**2 Let m(h) = -8*h**2. Let b(w) = 2994*w**2. Calculate b(m(c)). 191616*c**4 Let s be 2 - (-2 + 2 + 2). Let m(r) = 3*r - r + s*r. Let b(g) = -10*g**2 + 10*g + 5. Let c(h) = -h**2 + 2*h + 1. Let l(n) = b(n) - 5*c(n). Calculate l(m(i)). -20*i**2 Let g(u) = 3*u**2 + 2*u. Let d(v) = -16*v**2 - 11*v. Let h(l) = -2*d(l) - 11*g(l). 
Let t(p) = -2*p**2 - 39. Calculate h(t(a)). -4*a**4 - 156*a**2 - 1521 Let i(c) = 10*c + 11*c**2 + 13*c**2 - 10*c. Let g(b) = 2*b**2 + 3*b. Let d(t) = 3*t**2 + 4*t. Let l(n) = 3*d(n) - 4*g(n). Calculate l(i(m)). 576*m**4 Let c(n) = -10*n. Let j(u) = -6*u**2 - 11*u - 11. Let h(z) = -z**2 - 2*z - 2. Let y(x) = 33*h(x) - 6*j(x). What is y(c(s))? 300*s**2 Let x(w) = 307*w**2 - 81. Let k(n) = -n**2. Calculate x(k(i)). 307*i**4 - 81 Let i(k) be the first derivative of k**2 + k**2 - 19 + k**2 - 4*k**2. Let g(o) = 7*o. What is i(g(l))? -14*l Let z(o) = -13*o. Suppose 13*i = 3 + 23. Let u(d) = 15*d**2 - 25*d**i + 14*d**2. What is u(z(b))? 676*b**2 Let p(a) = 2*a + 2*a**2 - 2*a. Let c(m) = 193709*m - 387418*m + 193704*m. Determine c(p(d)). -10*d**2 Let x(k) = 7*k. Let a(r) be the second derivative of r**6/180 - r**3/3 + 2*r**2 - 17*r. Let g(t) be the second derivative of a(t). What is x(g(q))? 14*q**2 Let f(k) be the first derivative of 0*k**2 + 23 + 7/3*k**3 + 0*k. Let m(b) = -b. Calculate m(f(d)). -7*d**2 Let d(a) = 4*a. Let x = 1076 + -1076. Let k(f) be the third derivative of 3*f**2 - 1/20*f**5 + x*f + 0 + 0*f**4 + 0*f**3. What is d(k(u))? -12*u**2 Let m(t) be the third derivative of -11*t**4/24 + 27*t**2. Let b(h) be the second derivative of -h**3/3 + 8*h. What is b(m(c))? 22*c Let q(g) be the first derivative of -2*g**3/3 + 88. Let p(b) = 3*b**2 - 6*b. Give p(q(r)). 12*r**4 + 12*r**2 Let x(q) = -2*q. Let v(d) = d. Let j(z) = -v(z) - x(z). Let t(o) = o + 8. Let s be t(-6). Let c(l) = 9*l**s + 5*l**2 + 3*l**2 + 4*l**2. Calculate j(c(f)). 21*f**2 Let j(i) = -3*i**2. Let h(s) = -7*s + 30. Let c(w) = 16*w - 52. Let n(r) = 2*c(r) + 5*h(r). What is n(j(x))? 9*x**2 + 46 Let y(u) be the second derivative of 1/4*u**4 + 0 + 0*u**2 - 7*u + 0*u**3. Let a(q) = -3*q. Let i(h) = -h. Let x(b) = a(b) - 6*i(b). Calculate y(x(l)). 27*l**2 Let y(j) = -2*j. Let q(m) be the third derivative of -19*m**4/12 + 66*m**2. What is q(y(u))? 76*u Suppose 3*k - 3 = -5*s, 5*s + 21 = -3*k + 4*k. 
Let l(q) = 22*q + 4*q**2 - k*q**2 - 23*q. Let g(m) = m**2. Calculate l(g(r)). -2*r**4 - r**2 Let x(b) = -234 - 244 + 478 - 411*b**2. Let w(h) = h. Determine w(x(s)). -411*s**2 Let v(x) = x**2. Let q be (0 - 3 - -2) + (-10 - -13). Let w(s) = s - 21*s**2 + 7*s**q - s. Calculate v(w(u)). 196*u**4 Let l(w) be the third derivative of w**4/24 + 2*w**2 + 68*w. Let d(o) = -2*o - 240. Give d(l(n)). -2*n - 240 Let g(a) = a. Let x(p) be the second derivative of -p**4/6 - 5*p**3/6 + 48*p. Let o(n) = -10*g(n) - 2*x(n). Let h(r) = -25*r. Determine o(h(v)). 2500*v**2 Let y(r) = 2*r - 48. Let m(o) = 73*o + 5. Give y(m(a)). 146*a - 38 Let o(l) = 3244*l. Let x(r) = 68*r**2 - 1. Determine x(o(a)). 715600448*a**2 - 1 Let g(d) = -2*d**2. Let u(a) = 57240*a - 57240*a - 256*a**2. Give u(g(q)). -1024*q**4 Let j(u) = 5*u**2. Let a(s) = 2*s**3 - s + 1. Let h be a(1). Let k(z) = z + 4*z - h*z. What is k(j(b))? 15*b**2 Let x = -638 + 638. Let b(t) be the second derivative of x*t**2 - 1/6*t**3 + 0 + 6*t. Let v(h) = -3*h**2. Give b(v(o)). 3*o**2 Let g(a) = 4*a. Let i(d) be the second derivative of -98*d**3/3 - d**2/2 + 664*d. What is g(i(l))? -784*l - 4 Let p(k) = 52*k**2 - 68*k + 17. Let o(c) = -17*c**2 + 24*c - 6. Let t(s) = -17*o(s) - 6*p(s). Let r(a) = 7*a**2. What is r(t(b))? 3703*b**4 Suppose 36 = 16*b - 28. Suppose 0 = -3*i + 3*s - 9, 4*i + 4 = 2*s + 2. Let q(t) = -2*t + t + b*t + i*t. Let u(l) = 3*l**2. Determine u(q(k)). 75*k**2 Let c(u) = -291*u**2 - 1. Let b(k) = -47*k - 1. Give c(b(t)). -642819*t**2 - 27354*t - 292 Let c(t) = -2*t**2. Suppose -11 = -2*b - 5. Suppose b*x + 2*x = 0. Let v(i) = 0*i + x*i + 2*i. Determine c(v(g)). -8*g**2 Let c(t) = 37*t**2 - 2*t. Let x(k) = -2*k + 2557. What is x(c(p))? -74*p**2 + 4*p + 2557 Let m(s) = 16*s**2. Let z(d) = -402*d**2 + 1. Determine z(m(j)). -102912*j**4 + 1 Let f(p) = -p. Let y(q) = -2*q**2 + 5*q. Let u(z) = -3*z**2 + 9*z. Suppose -7*x - 24 = 11. Let b(m) = x*u(m) + 9*y(m). What is f(b(g))? 3*g**2 Let q(j) = 1491*j + 1484*j - 2971*j. 
Let a(c) = 55*c. Determine q(a(f)). 220*f Let a(w) = w**2 + 9*w + 5. Let s be a(-14). Let y(b) = -157*b + s*b + 93*b. Let z(g) = g**2. Determine y(z(l)). 11*l**2 Let u(c) = c**3 - 7*c**2 - 6*c + 42. Let s be u(7). Let n(k) be the second derivative of s + 1/3*k**3 + 0*k**2 + 6*k. Let l(d) = -d. Calculate l(n(p)). -2*p Let r(c) = -c + 4. Let q(f) be the first derivative of f**2/2 + 375. What is r(q(v))? -v + 4 Let h(o) = 14*o - 8. Let b(f) = 9*f - 5. Let z(y) = -8*b(y) + 5*h(y). Let n(u) = -144*u. Give n(z(g)). 288*g Let v(n) = 3*n**2. Let d(f) = 21901*f**2 - 726. Let j(u) = -121*u**2 + 4. Let q(c) = -2*d(c) - 363*j(c). Calculate v(q(o)). 43923*o**4 Let a(t) = -31*t. Let v(i) be the second derivative of 0*i**2 + 2 + 1/3*i**3 - 17*i. Give v(a(h)). -62*h Let u(v) = 6155*v**2. Let o(h) = 23*h + 1. Calculate o(u(n)). 141565*n**2 + 1 Let d(r) be the first derivative of -2*r**3/3 - 7*r**2/2 + 66. Let h(a) = 6*a**2. Calculate d(h(i)). -72*i**4 - 42*i**2 Let h(s) = -s + 287. Let m(y) = 48*y**2. Determine h(m(n)). -48*n**2 + 287 Let v(d) = 5*d**2. Let c(k) = -34031*k - 1. Give v(c(t)). 5790544805*t**2 + 340310*t + 5 Let p(o) = o. Let x(t) = -2*t. Let l(m) = 10*p(m) + 6*x(m). Let j(b) = b + 3. Let i(w) = -1. Let v(h) = 3*i(h) + j(h). What is l(v(a))? -2*a Let b(v) = -13*v - 1. Let t(j) = -2*j**2 - 5*j. Suppose -8*d = d + 45. Let u(l) = 2*l**2 + 4*l. Let x(h) = d*u(h) - 4*t(h). Give x(b(y)). -338*y**2 - 52*y - 2 Let k(c) = -124334*c**2. Let r(o) = -10*o**2. Determine r(k(s)). -154589435560*s**4 Let q(c) = c**2 - 2*c**2 + 2*c**2 + 0*c**2. Suppose 3 = i, i = -0*z + 5*z - 7. Let d(p) = 4*p**2 - 8*p**2 + 5*p**2 + z*p**2. Give q(d(b)). 9*b**4 Let o(c) = 4*c**2 + 40. Let s(t) = -2*t**2 - 21. Let k(a) = -6*o(a) - 13*s(a). Let u(i) = 3*i. Give k(u(d)). 18*d**2 + 33 Let j(q) = -2*q. Let g(s) = -s**2 + s + 1. Let f be (-3 - -4)/((-3)/3). Let y(a) = -12*a**2 + 3*a + 3. Let v(x) = f*y(x) + 3*g(x). Calculate v(j(o)). 36*o**2 Let d(h) = 6*h**2 - 3*h + 3. Let i(z) = -6*z**2 + 2*z - 2. 
Let p(q) = -4*d(q) - 6*i(q). Let m(l) = 6*l. Calculate p(m(c)). 432*c**2 Let x(f) = 11*f. Let h(l) = -2435*l**2. What is h(x(y))? -294635*y**2 Let l(h) = 77*h - 25. Let k be l(5). Let p(c) = -k + 176 + 184 - 4*c. Let x(u) = 10*u. Giv
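The exercises above are all polynomial compositions. As an illustrative aside (not part of the dataset itself), one of the claimed answers, y(m(a)) = 146*a - 38 for y(r) = 2*r - 48 and m(o) = 73*o + 5, can be checked numerically with a short sketch:

```python
# Illustrative check (not part of the dataset): verify the claimed
# composition y(m(a)) = 146*a - 38 for y(r) = 2*r - 48 and m(o) = 73*o + 5
# by comparing both sides at a range of sample points.

def y(r):
    return 2 * r - 48

def m(o):
    return 73 * o + 5

def claimed(a):
    return 146 * a - 38

for a in range(-10, 11):
    # 2*(73*a + 5) - 48 == 146*a + 10 - 48 == 146*a - 38
    assert y(m(a)) == claimed(a)
```

Since both sides are degree-1 polynomials, agreement on more than two points already forces them to be identical, so a handful of integer samples suffices here.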
{ "pile_set_name": "DM Mathematics" }
Osaka's Minami district is "boiling over" with young South Koreans. In February, which falls during the Korean spring break before the new school year begins in March, groups of university students crowded the stretch from Dotonbori to Ebisubashi. The area is often assumed to be full of Chinese "explosive-buying" shoppers, but in fact Osaka has recently surged in popularity in South Korea as a travel destination. When South Korea's largest travel agency ran a survey on its website asking which cities people would like to visit on an independent trip, Osaka came in second after Paris, France, ranking above Honolulu, Hawaii, and New York. Among trips to Japan booked through the agency, the Osaka area was also the top destination, accounting for more than 40 percent last year. Why do South Koreans love Osaka so much? (張英壽)

"The people are kind," "the streets are clean": rave reviews

"I arrived yesterday, and I was surprised that it was all Koreans. The streets are clean, and the people are kind. Everyone keeps good order, and the food is delicious, too."

That was the impression of Lee Ha-bin (李賀彬), 20, a university student visiting from South Gyeongsang Province in southeastern South Korea, on a February day in Minami's Dotonbori, jammed with people. This is her first time in Japan. "I was curious to see how developed [the society] is," she said.

Lee confided that she was staying in a hotel for about 2,000 yen a night, a rock-bottom-budget trip.

Like her, many of the South Koreans we met in Dotonbori named "kind people" and "clean streets" as their impressions of Osaka. Most of the young visitors were walking in groups.

An 18-year-old man from South Gyeongsang Province, who finishes high school and enters university in March, offered this: "When I asked for directions, people took me right to the place. Japan is a good country." Asked about Japanese and South Korean women, he paused a moment and said shyly, "Korea has many beautiful women. Japanese women are cute. My taste runs to Japanese girls, though..."

Northern Kyushu used to be the favorite, but… is the reason that Osakans are "just like Koreans"(?)
{ "pile_set_name": "OpenWebText2" }
Go to about any public square, and you see pigeons pecking at the ground, always in search of crumbs dropped by a passerby. While the pigeons’ scavenging may seem random, new research by psychologists at the University of Iowa suggests the birds are capable of making highly intelligent choices, sometimes with problem-solving skills to match. The study by Edward Wasserman and colleagues centered on the “string task,” a longstanding, standard test of intelligence that involves attaching a treat to one of two strings and seeing if the participant (human or animal) can reel in that treat by pulling the correct string. Photo courtesy of Edward Wasserman. In this case, the UI researchers took the pigeons into the digital age: The birds looked at a computer touch screen with square buttons connected to dishes that appeared either full or empty. If the bird pecked the correct button on the screen, the virtual full bowl would move closer, ultimately to the point where the pigeon would be rewarded with real food. “The pigeons proved that they could indeed learn this task with a variety of different string configurations—even those that involved crossed strings, the most difficult of all configurations to learn with real strings,” says Wasserman, Stuit Professor of Experimental Psychology and the corresponding author of the study published in the journal Animal Cognition. In experiments, the authors found the pigeons chose correctly between 74 percent and 90 percent of the time across three varieties of string tests. The breadth of the string tests, coupled with the pigeons’ accuracy, suggests that virtual string tests can be used in place of conventional string experiments—and with other animal species as well, the researchers say.
In videos that the researchers took, the pigeons in many instances scan and bob their heads along the string “often looking toward and pecking at the dish as it moves down the screen,” the authors write, suggesting the birds noted the connection between the virtual strings and the dishes. “We believe that our virtual string task represents a promising innovation in comparative and developmental psychology,” says Wasserman, whose department is in the College of Liberal Arts and Sciences. “It may permit expanded exploration of other species and variables which would otherwise be unlikely because of inadequacies of conventional string task methodology or sensorimotor limitations of the organisms.” “These results not only testify to the power and versatility of our computerized string task, but they also demonstrate that pigeons can concurrently contend with a broad range of demanding patterned-string problems, thereby eliminating many alternative interpretations of their behavior,” the authors write. The paper is titled “Pigeons learn virtual patterned-string problems in a computerized touch screen environment” and was first published online in March. Contributing authors include Leyre Castro and Stephen Brzykcy, from the UI, and Yasuo Nagasaka from the Riken Brain Science Institute in Japan. The UI psychology department funded the study.
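The task the article describes can be caricatured in code. The sketch below is a toy model under stated assumptions (two buttons, two dish configurations, and a simple reward-learning rule of my own choosing); it is not the software used in the study:

```python
import random

# Toy model of a virtual "string task" (an assumption-laden sketch, not the
# study's software). Each configuration maps two on-screen buttons to a
# full dish (reward) or an empty dish; in the "crossed" layout the button
# nearest a dish is wired to the *other* dish.
CONFIGS = {
    "parallel": {0: "full", 1: "empty"},
    "crossed": {0: "empty", 1: "full"},
}

def train(trials=300, seed=0, lr=0.2, eps=0.1):
    """Learn per-configuration button values from rewarded pecks."""
    rng = random.Random(seed)
    q = {c: [0.5, 0.5] for c in CONFIGS}  # start indifferent between buttons
    for _ in range(trials):
        cfg = rng.choice(sorted(CONFIGS))
        if rng.random() < eps:  # occasional exploratory peck
            a = rng.randrange(2)
        else:  # otherwise peck the currently better-valued button
            a = max((0, 1), key=lambda i: q[cfg][i])
        reward = 1.0 if CONFIGS[cfg][a] == "full" else 0.0
        q[cfg][a] += lr * (reward - q[cfg][a])
    return q

q = train()
for cfg, wiring in CONFIGS.items():
    best = max((0, 1), key=lambda i: q[cfg][i])
    assert wiring[best] == "full"  # learned button leads to the full dish
```

Even the crossed layout, the hardest with real strings, is learnable in this toy because the value update only tracks which button ultimately pays off, which loosely mirrors why a touch-screen version can probe the same discrimination.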
{ "pile_set_name": "OpenWebText2" }
Increased plasma S-adenosyl-homocysteine levels induce the proliferation and migration of VSMCs through an oxidative stress-ERK1/2 pathway in apoE(-/-) mice. Although S-adenosyl-homocysteine (SAH) is considered to be a more sensitive predictor of cardiovascular disease than homocysteine, the underlying mechanisms of its effects remain unknown. We investigated the in vivo and in vitro effects of SAH on vascular smooth muscle cell (VSMC) proliferation and migration related to the development of atherogenesis in apolipoprotein E-deficient (apoE(-/-)) mice. A total of 72 apoE(-/-) mice were randomly divided into six groups (n = 12 per group). The control group was fed a conventional diet, the M group was fed a 1% methionine-supplemented diet, the A group was fed a diet supplemented with the SAH hydrolase (SAHH) inhibitor adenosine-2,3-dialdehyde (ADA), the M+A group was fed a diet supplemented with methionine plus ADA, and two of the groups were intravenously injected with retrovirus that expressed either SAHH shRNA (SAHH(+/-)) or scrambled shRNA semi-weekly for 8 weeks. Compared with the controls, the mice in the A, M+A, and SAHH(+/-) groups had higher plasma SAH levels, larger atheromatous plaques, elevated VSMC proliferation, and higher aortic reactive oxygen species and malondialdehyde levels. In cultured VSMCs, 5 μM ADA or SAHH shRNA caused SAH accumulation, which resulted in increased cell proliferation, migration, oxidative stress, and extracellular signal-regulated kinase 1/2 (ERK1/2) activation. These effects were significantly attenuated by preincubation with superoxide dismutase (300 U/mL). Our results suggest that elevated SAH induces VSMC proliferation and migration through an oxidative stress-dependent activation of the ERK1/2 pathway to promote atherogenesis.
{ "pile_set_name": "PubMed Abstracts" }
K-Edge GO BIG Pro Saddle Rail Mount (1/4x20) The K-Edge Universal Camera ¼"-20 Pro Saddle Rail Mount provides a rock-solid platform for any camera to capture the unique angle of a trailing rider. Use the ¼"-20 (tripod) mount for almost every camera, including Garmin, SONY, Contour, Drift, and Rollei. Or remove this adapter and use the base GoPro mount for all Hero cameras.
{ "pile_set_name": "Pile-CC" }
Environmental Inspiration in Your Own Backyard Plants In Your Backyard April 15, 2010 In our last plant post we talked about plants that you might find just outside of your backyard, in a nearby wooded area or marsh. Today, though, I want to talk about two of my favorite backyard plants. These plants are ubiquitous in yards and roadsides all over Pennsylvania. They are both considered weeds by most, but to me they will always be beautiful and fascinating plant neighbors. The first is one everyone is familiar with. For the life of me I do not know why everyone hates this “weed” so much, as nothing makes an otherwise plain yard more beautiful than to be speckled with bright yellow flowers. Yes, the dandelion! The dandelion’s name comes from the French “dent de lion,” which translates to “lion’s tooth.” This name comes from the jagged-edged leaves of the dandelion plant, whose little triangles you can see in the following picture: Dandelions are my favorite flower. Besides their beautiful yellow color, they are also very tough, as anyone who has tried to rid their yard of them will know. They are hardy and grow back against all odds, which I thought made them a great role model as a young girl growing up. Dandelions are also useful for other reasons. Their leaves can be eaten, although they are best earlier in the spring before the flowers bloom. The flower petals can be used to make dandelion wine, and the root can be roasted and crushed to make a caffeine-free coffee. People often made this coffee during the American Civil War, when real coffee was scarce. The next plant that you can find all around you is another of my personal favorites–onion grass. Onion grass is actually from South Africa, but has been naturalized in the United States. It is not really a grass, as the name implies, but a small plant with long, upright leaves.
You can see a picture of some from the yard near my apartment here: Onion grass does in fact have small wild onion bulbs and can be eaten (as long as no chemicals were used on the lawn!) much like chives are. I am not completely sure why I loved onion grass so much as a kid, but I really did. It could be because I used to make “soups” from different plants and mud in old jars to pretend to eat, and the small onions were a perfect ingredient! Once in kindergarten or first grade we had to bring in a ‘sign of spring’ for show and tell, and I brought in a clump of onion grass. The teacher was confused and asked me if it was, in fact, an indicator of spring, because she had never heard of it before. I had never thought, until then, that maybe I was the only one excited to see it when warmer months finally arrived. It is impressive to me, in retrospect, how much time my brother and I must have spent outside growing up. How was I familiar enough with our yard and the plants that grew there to associate onion grass with Spring at such a young age? It is the type of knowledge you can only get by spending hours and hours immersed in the small things, crawling around in the grass. Now that I am so much higher up off the ground as an adult, I’m not sure I would even notice onion grass if not for the memories I have from childhood. For this reason it is important we get kids out there when they are young and still have the size and imagination to experience the natural world up close and without separation. I would like to add that while my brother and I did spend a lot of time outside by ourselves, we only did so because my parents first spent the time taking us out there and exploring with us. Without that influence I am not sure if we would have been so eager to get out there! So head outside and see what there is to see! You don’t need to go far, especially if you have a young child with you! There is plenty to explore even within a five foot radius. 
What types of plants do you see? Do you know their names? Are they similar to each other or different? Can you find any insects or evidence that they have been eating there? What is the soil like? What kind of conditions make it a good place for those plants and insects to live in? What else would you expect to see living in this type of place? There are so many things to learn. It can seem overwhelming, especially if you aren’t very familiar with the plants and animals that live in your neighborhood. It doesn’t have to be, though! Let’s each try to learn just one new thing today. I, for one, am going to try to figure out what that small purple-flowered plant is that I saw growing next to the onion grass.
{ "pile_set_name": "Pile-CC" }
Story highlights: Jeb Bush said that Democrats win black voters with "free stuff." The comment echoed criticism from Mitt Romney in 2012, when he said Barack Obama won minorities with "gifts." Washington (CNN) Jeb Bush told a South Carolina crowd Thursday that Democrats play to African-American voters by offering "free stuff," a comment similar to a contentious one that Mitt Romney made in the days after his 2012 loss to President Barack Obama. Bush, analyzing Republicans' chances with black voters, said that his party needs to make a better case to the traditionally Democratic voting bloc. "Our message is one of hope and aspiration. It isn't one of division and 'Get in line and we'll take care of you with free stuff,'" Bush said Thursday at an event in Mount Pleasant, South Carolina. "What the president, president's campaign did was focus on certain members of his base coalition, give them extraordinary financial gifts from the government, and then work very aggressively to turn them out to vote," Romney said at the time, according to audio obtained by ABC News. Other Republicans quickly denounced Romney's comments, and party leaders, as part of their assessment of how to win in 2016, determined that they needed to do a better job reaching out to minorities. A Bush spokeswoman did not immediately respond to a CNN request for comment Friday, but declined to address the "free stuff" phrase directly in a reply to The New York Times. "We will never be successful in elections without communicating that conservative principles and conservative policies are the only path to restoring the right to rise for every single American," Bush spokeswoman Kristy Campbell told the Times. Bush's "free stuff" comment came as he explained another comment, from Tuesday, that the U.S. should not be a "multicultural" society. "We're a pluralistic society. We're diverse, we have people that come from everywhere," Bush told reporters Thursday. "We're not multicultural. 
We have a set of shared values that defines our national identity, and we should never veer away from that because that creates the extraordinary nature of our country."
{ "pile_set_name": "Pile-CC" }
213 Cal.App.2d 78 (1963) CONTINENTAL CASUALTY COMPANY, Plaintiff and Appellant, v. HARTFORD ACCIDENT AND INDEMNITY COMPANY et al., Defendants and Respondents. Civ. No. 20211. California Court of Appeals. First Dist., Div. One. Feb. 18, 1963. Carroll, Davis, Burdick & McDonough and J. D. Burdick for Plaintiff and Appellant. Hadsell, Murman & Bishop, Bacon, Mundhenk, Stone & O'Brien, Herbert Chamberlin, Bishop, Murray & Barry and Michael J. Murray for Defendants and Respondents. *80 SULLIVAN, J. We are presented with a question of liability deriving from conflicting "other insurance" clauses in three separately issued policies of automobile liability insurance. Briefly stated, our inquiry is as to whether in the instance at hand the policies provided for primary or excess insurance. We have concluded that the matter before us is governed by our decision in Athey v. Netherlands Ins. Co. (1962) 200 Cal.App.2d 10 [19 Cal.Rptr. 89], that all three policies provide for excess insurance and that the judgment appealed from should be reversed. The parties have agreed upon the facts. One James R. Corcoran, Jr. rented an automobile from Paul J. Muldoon, doing business as the Peninsula Lease Company. Corcoran, a resident of Massachusetts, was an executive of Fenwal, Inc., a firm located in that state, and was reimbursed by Fenwal for the rental charge which included a charge for insurance. While driving the rented automobile in the scope of his employment by Fenwal, Corcoran became involved in an accident as a result of which one Victoria Pucci suffered personal injuries and property damage. The latter commenced an action against Corcoran, Peninsula Lease Company and others. At the time of the accident the following policies of automobile liability insurance were in effect: so-called driverless car liability policy issued by Continental Casualty Company (hereafter called Continental) to Paul J. Muldoon, dba Peninsula Lease Co. 
as named insured; a Massachusetts motor vehicle liability policy issued by Hartford Accident and Indemnity Company (hereafter called Hartford) to Corcoran as named insured; and a schedule automobile liability policy issued by Lumbermens Mutual Casualty Company (hereafter called Lumbermens) to Fenwal as named insured. Continental appeared and defended the above legal action and eventually settled the Pucci claim for $3,276.15. The parties before us have agreed that such settlement was fair and reasonable. Continental demanded of both Hartford and Lumbermens that they appear for and defend Corcoran and contribute to any judgment or settlement. Continental thereupon brought the instant action in declaratory relief against Hartford and Lumbermens seeking an adjudication of the liabilities of all three parties. The case was submitted to the court below on an agreed statement of facts, the parties further stipulating that whatever proration was ordered by the court in respect to the above payment of *81 $3,276.15 to the injured party would be applicable to an outstanding property damage claim of $618.94. The court adopted the agreed statement of facts as its findings of fact and rendered judgment denying all recovery to Continental. From such judgment Continental has taken this appeal on the judgment roll with copies of the policies and the automobile rental agreement as appended exhibits. [1] Since no extrinsic evidence was introduced in the court below in aid of construction, the construction of the instant policies presents a question of law. We are not bound by the trial court's interpretation of them and we therefore proceed to make our own determination of their meaning from an examination of their applicable provisions. (Continental Cas. Co. v. Phoenix Constr. Co. (1956) 46 Cal.2d 423, 430 [296 P.2d 801, 57 A.L.R.2d 914]; Estate of Platt (1942) 21 Cal.2d 343, 352 [131 P.2d 825].) 
The Continental policy, under its insuring agreements, provides coverage to the insured for bodily injury liability arising out of defined hazards which include, inter alia, "[t]he ownership, maintenance or use of (a) any automobile of the private passenger or commercial type while rented without chauffeurs to others from locations in the United States of America. ..." It defines the word "insured" to include the named insured and also "any person, firm, association, partnership or corporation to whom an automobile has been rented without a chauffeur. ..." Continental concedes that its policy extended coverage to Corcoran, the driver, for the accident in question. Pertinent to the problem at hand is the following provision which is found among the "conditions" of the above policy: "17. Other Insurance. The insurance under this policy shall be excess insurance over any other valid and collectible insurance available to the insured, either as an insured under another policy or otherwise." We observe at this point that the above provision contained in the lessor's policy in the instant case is identical in language with the provision contained in the policy issued by Netherlands Insurance Company to the lessor of the automobile in the Athey case. (Athey v. Netherlands Ins. Co., supra, 200 Cal.App.2d 10.) The Hartford policy, issued to Corcoran in Massachusetts, provided under its insuring agreements coverage to him for statutory bodily injury liability in accordance with and arising out of the Massachusetts Compulsory Automobile Liability *82 Security Act (Coverage A) and in addition coverage for bodily injury liability other than statutory "arising out of the ownership, maintenance or use of the motor vehicle." (Coverage B.) Paragraph V of the insuring agreements provides as follows: "V. 
Use of Other Motor Vehicles--Coverages B, C and D: If the named insured is an individual or husband and wife and if during the policy period such named insured, or the spouse of such individual if a resident of the same household, owns a private passenger motor vehicle covered by this policy, such insurance as is afforded by this policy under coverages B, C and division 2 of coverage D with respect to said motor vehicle applies with respect to any other motor vehicle, subject to the following provisions: (a) With respect to the insurance under coverages B and C the unqualified word 'insured' includes (1) such named insured and spouse; and (2) any other person or organization legally responsible for the use by such named insured or spouse of a motor vehicle not owned or hired by such other person or organization." Important to the issues before us is the following "other insurance" clause included among the conditions of the policy: "11. Other Insurance--Coverages A, B and C: If the insured has other insurance against a loss covered by this policy the company shall not be liable under this policy for a greater proportion of such loss than the applicable limit of liability stated in the declarations bears to the total applicable limit of liability of all valid and collectible insurance against such loss; provided, however, the insurance with respect to temporary substitute motor vehicles under Insuring Agreement IV or other motor vehicles under Insuring Agreement V shall be excess insurance over any other valid and collectible insurance." (Italics added.) [2a] The Lumbermens policy, issued in Massachusetts to Fenwal, Corcoran's employer, under its insuring agreements provides coverage to the insured for bodily injury liability arising out of defined hazards which along with hazards relating to owned automobiles and hired automobiles, [fn. 1] included the following provision relating to nonowned automobiles: *83 "Division C. 
Non-Owned Automobiles--The use, by any person other than the named insured, of any non-owned automobile of the private passenger type in the business of the named insured as stated in the declarations, and the use in such business, by any employee of the named insured, of any non-owned automobile of the commercial type if such use of such automobile is occasional and infrequent." By definition under the insuring agreements the word "insured" includes the named insured (Fenwal) and "also includes ... under division C of the Definition of Hazards, any executive officer of the named insured." However, the declarations and the declarations schedule of the Lumbermens policy make clear that it does not insure for the hazards of owned automobiles or hired automobiles (Divisions A and B of Hazards--see footnote 1) but only for the hazards of nonowned automobiles (Division C--ante). Finally, among its conditions, the policy includes the following "other insurance" clause pertinent here: "13. Other Insurance. If the insured has other insurance against a loss covered by this policy the company shall not be liable under this policy for a greater proportion of such loss than the applicable limit of liability stated in the declaration bears to the total applicable limit of liability of all valid and collectible insurance against such loss; provided, however, the insurance under division B of the Definition of Hazards with respect to a hired automobile insured on a cost of hire basis and under division C of the Definition of Hazards and Insuring Agreement IV (e) shall be excess insurance over any other valid and collectible insurance." (Italics added.) A comparison of the "other insurance" clauses contained in the Hartford and Lumbermens' policies shows that, except for particular paragraph and division references, they are practically the same and that the operative language of each is identical. 
We also note at this point, that subject to the same exceptions, such clauses are practically the same as the "other insurance" clause contained in the policy issued by the National Grange Mutual Liability Fire Insurance Co. to the driver-renter of the automobile in the Athey case. [fn. 2]*84 It is the position of Continental here that the coverage provided by all of the policies is excess coverage and should therefore be prorated. It is Hartford's position that the coverage afforded by the Continental policy is the primary coverage and that although the Hartford policy extended "incidental coverage" to Corcoran in respect to the accident in question, such coverage was "excess." It is Lumbermens' position that in the first place, its policy did not extend coverage to the Pucci accident since the automobile rented by Corcoran from Muldoon was a "hired automobile" which, as we have pointed out, is a hazard not insured against, and, secondly, that if the policy does cover the accident, such coverage was excess, Continental's coverage to its named insured (Muldoon) and additional insured (Corcoran and Fenwal) being primary. We consider first the effect of the three "other insurance" clauses. We will then take up Lumbermens' additional contention that its policy does not cover the accident in any event. All three policies provide excess insurance. On the first issue we are presented with the same question as that presented to this court in Athey v. Netherlands Ins. Co., supra, 200 Cal.App.2d 10. [fn. 3] In that case, the plaintiff Athey rented an automobile from the Hertz Co. and thereafter became involved in a collision. There, as here, the injured party commenced an action against both the driver-renter (Athey) and the owner-lessor (Hertz). 
There, as here, the policy furnished the driver provided primary insurance for loss arising out of his operation of his own automobile, provided that if he had other insurance while operating his own car, the liability of the carrier under the policy should be prorated, but that if he were driving a nonowned automobile, the driver's policy would be excess insurance over other valid and collectible insurance. We have already pointed out that the "other insurance" clause in the driver's policy in *85 Athey is practically the same as that contained in the policy of the driver (Corcoran) and his employer (Fenwal) in the instant case. (See footnote 2--ante.) [3a] In the Athey case, as here, the policy issued to the owner-lessor provided primary insurance coverage for the use of the owner's automobiles by the owner, his employees or any permissive user, provided no coverage in respect to nonowned automobiles and provided further that in the event of any other valid and collectible insurance, all coverage under the policy no matter who the driver was, would be excess insurance. Under such circumstances, as we stated in Athey "there no longer was any primary coverage." (200 Cal.App.2d 12.) We have also pointed out that the above clause in the owner-lessor's policy in Athey is identical with that in the policy of the owner- lessor here (Muldoon). All crucial clauses are the same in each case, the only difference being that Athey involved two policies while the instant case involves three. We held in Athey that each policy was "other insurance" as to the other, providing excess coverage only and, in order to protect the insured, that the loss be apportioned between the policies. Mr. 
Justice Bray wrote: "In a situation like this, where both policies provide excess coverage only, there is no justification for choosing one as providing 'other insurance' and the other as not so providing" (200 Cal.App.2d 13), analyzing and distinguishing a number of cases some of which are again cited by respondents herein. We are therefore not persuaded by Hartford's argument that the present problem should be governed by American Automobile Ins. Co. v. Republic Indemnity Co. (1959) 52 Cal.2d 507 [341 P.2d 675]. That case was analyzed and distinguished in Athey and our observations in that respect need not be restated here. (See Athey case, supra, 200 Cal.App.2d pp. 13-14.) It is sufficient to note that in the American Automobile case, the accident occurred while the driver was operating a borrowed car. The "other insurance" clauses contained in the policies of both the driver and the owner were substantially the same, providing for a prorating where other insurance covered the loss and also providing that, in respect to the operation of a nonowned car by the named insured, the insurance would be excess over all other insurance. Under such circumstances, American Automobile gave effect to the excess clause in the driver's policy and refused to give effect to *86 the pro rata clause in the owner's policy, holding that the owner should bear the entire loss. The conflict in that case was between an excess clause and a pro rata clause. In the case at bench, as in Athey, the conflict is between two excess clauses. Hartford also relies heavily upon Citizens Mutual Auto. Ins. Co. v. Liberty Mutual Ins. Co. (6th Cir. 1959) 273 F.2d 189. However this case presents the same situation as in American Automobile with the policies of both the driver and the owner providing in almost identical language for pro rata coverage as to owned automobiles and excess insurance as to nonowned automobiles. 
Such case therefore falls into the same category as American Automobile and indeed cites and relies upon the latter case. As in American Automobile, the conflict was between an excess clause in the driver's policy and a pro rata clause in the owner's policy with the former clause only being given effect. The case therefore is distinguishable from both Athey and the case at bench. Nevertheless Hartford has attempted to construct upon it the following critique of our decision in Athey. In the latter case we cited, inter alia, Continental Casualty Co. v. New Amsterdam Casualty Co. (Ill. App. 1960) 21 Automobile Cases (2d) 1343, which also involved a driverless car liability policy, presented the same question as in the Athey case and resulted in a proration of liability. Continental in turn cited Oregon Auto. Ins. Co. v. United States Fidelity & Guar. Co. (9th Cir. 1952) 195 F.2d 958, referring to such case as the leading case on the subject. Hartford here directs no attack at the Continental case and apparently ignores its factual similarity noted above. Instead Hartford points out that in the Citizens Mutual case, upon which Hartford relies, Oregon Auto is characterized as a case representing a minority view. The gist of Hartford's criticism then seems to be that we in effect rested our decision in the Athey case on the lesser authority of Oregon Auto to the exclusion of such cases as American Automobile. This attack on the Athey case is completely ineffective. Oregon Auto, like American Automobile and Citizens Mutual, arose from a collision involving a driver of a borrowed car. The driver's policy contained an "other insurance" clause similar to those in the last two cases, that is a pro rata clause as to owned automobiles and an excess clause as to nonowned automobiles. 
The owner's policy, however, contained *87 a pro rata clause as to owned automobiles but a so-called "escape clause" to the effect that a person other than the named insured having other insurance would not be indemnified under the policy. The court concluded that the two clauses were mutually repugnant, that if literally applied neither insurer would be liable and that the conflicting provisions should be disregarded and the liability prorated. Admittedly this rationale of proration was applied at least in part by the Continental case in resolving the problem of two excess clauses. Nevertheless basic distinctions remain: The clauses in Oregon Auto were not the same as in American Automobile as the Supreme Court observes in the latter case (see American Automobile Ins. Co. v. Republic Indemnity Co., supra, 52 Cal.2d 507, 512, fn. 5) or as in Citizens Mutual, as we have pointed out above. Nor are the clauses in any of the foregoing three cases the same as those in Athey where two excess clauses were involved. Thus, when we dealt in Athey with two excess clauses, we did not of necessity ignore the rule of American Automobile in respect to the conflicting excess and pro rata clauses there appearing or wholly espouse the technique of Oregon Auto in respect to the conflicting excess and escape clauses involved in that case. [fn. 4] In our view, Athey rests upon the sound analysis of its own clauses. [4] The inherent nature of the problem of "other insurance" requires a case by case treatment. (See 13 Hastings L.J. 183, 191.) As we said in American Auto. Ins. Co. v. Transport Indem. Co. (1962) 200 Cal.App.2d 543, 544 [19 Cal.Rptr. 558], "each case apparently presents a particularistic and unique problem." Valid reasons exist in the instant case to apply the solution of proration. (See generally 69 A.L.R.2d 1122.) 
Both respondents have directed our attention to several recent cases allegedly supporting their position but, with apparent reluctance to enter this wilderness of single instances, have refrained from furnishing us with any analysis of the policy provisions involved or comparison of them with those *88 in the case before us. Our views on two of these cases [fn. 5] were given in Athey and need not be reiterated here. In Fireman's Fund Indemnity Co. v. Prudential Assurance Co. (1961) 192 Cal.App.2d 492 [13 Cal.Rptr. 629], decided by this court, not only were the clauses dissimilar to those here but, in addition, the owner's policy did not contain an effective excess insurance clause. McConnell v. Underwriters at Lloyds (1961) 56 Cal.2d 637 [16 Cal.Rptr. 362, 365 P.2d 418] is not in point and did not involve conflicting excess clauses. In McConnell three policies were issued to the same insured; two of them covered the accident and contained effective pro rata clauses; the third policy provided excess insurance which did not become operative until the limits of the first two policies had been exhausted. Finally, Lumbermens claims that the problem presented in the instant case is different from that presented in Athey because in that case "the primary insurance was inadequate to absorb the loss and it was necessary to resort to excess insurance. ..." Such was not our holding in Athey as we think our opinion there makes plain. We held there, irrespective of the adequacy of the policy limits involved, that both policies provided excess coverage. [3b] We therefore hold that the instant case is controlled by our opinion in Athey and that the loss here involved must be apportioned among the three insurers herein. Lumbermens policy covers the accident. As we have pointed out and as Continental concedes, the Lumbermens policy provides coverage for the hazards of "non-owned automobiles" only. It does not provide coverage for the hazard of "hired automobiles." 
(See footnote 1, ante.) But it is clear that, within the coverage provided, Corcoran was an insured since he was an executive officer of the named insured Fenwal. However, Lumbermens contends that the automobile which Corcoran rented falls within the definition of "hired automobile" in the policy and therefore the policy did not cover the Pucci accident. Paragraph IV of the insuring agreements upon which any such claim must rest provides as follows: "(a) Automobile. Except where stated to the contrary, the word 'automobile' means a land motor vehicle or *89 trailer as follows: (1) Owned Automobile--an automobile owned by the named insured; (2) Hired Automobile--an automobile used under contract in behalf of, or loaned to, the named insured provided such automobile is not owned by or registered in the name of (a) the named insured or (b) an executive officer thereof or a partner therein or (c) an employee or agent of the named insured who is granted an operating allowance of any sort for the use of such automobile; (3) Non-Owned Automobile--any other automobile." For Lumbermens to prevail in the above contention, it must therefore appear that the automobile involved in the Pucci accident was either "used under contract in behalf of" the named insured Fenwal or "loaned to" Fenwal. It is clear under the agreed facts that the car was not loaned to Fenwal. Was it then used under contract in behalf of Fenwal? We think not. [5] The following principles were announced in Continental Cas. Co. v. Phoenix Constr. Co. (1956) 46 Cal.2d 423, 437-438 [296 P.2d 801, 57 A.L.R.2d 914] as governing the construction of insurance policies: "[A]ny ambiguity or uncertainty in an insurance policy is to be resolved against the insurer. [Citations.] [6] If semantically permissible, the contract will be given such construction as will fairly achieve its object of securing indemnity to the insured for the losses to which the insurance relates. [Citation.] 
[7] If the insurer uses language which is uncertain any reasonable doubt will be resolved against it; if the doubt relates to extent or fact of coverage, whether as to peril insured against [citations], the amount of liability [citations] or the person or persons protected [citations], the language will be understood in its most inclusive sense, for the benefit of the insured." [2b] The agreed statement of facts which was adopted by the court below as its findings discloses that "James R. Corcoran, Jr. rented from one Paul J. Muldoon, dba Peninsula Lease Company, a certain Lincoln Automobile." Although the parties stipulate therein that at the time of the accident Corcoran was acting within the scope of his employment, there is no agreed statement that the rental contract itself was "in behalf of" Fenwal. On the contrary, the contract which is before us is consistent with the agreed statement and finding that Corcoran rented the car. On the first page of the contract in the space designated "Renter," the name James R. Corcoran, Jr. is printed. Following spaces are *90 completed to give his home address, firm name and address of firm. The contract is signed J. R. Corcoran, Jr. and spaces immediately under such signature are completed so as to indicate that the rental is to be charged and the invoice mailed to Mr. Corcoran at his home address. In an adjoining space designated "charge authorization" appear Corcoran's name, business address and presumably a charge plate number. Thus it appears that Corcoran rented the automobile involved in his own name. There is no basis in the facts that when he did so he was acting for or with the consent of Fenwal. (Cf. Continental Cas. Co. v. Zurich Ins. Co. (1961) 57 Cal.2d 27, 32 [17 Cal.Rptr. 12, 366 P.2d 455].) 
It would be consistent with the facts here stipulated to that Corcoran was to obtain transportation on his own account and personal responsibility and that his employer would not be responsible therefor, even though it might subsequently reimburse the employee for travel expenses. Lumbermens has referred us to no authorities holding that an automobile rented by the employee in his own name is a "hired automobile" as that term is defined in the policy under examination. In the light of the applicable principles of construction stated above and under the above facts found by the trial court, we therefore conclude that the automobile in question was not a hired automobile but a nonowned automobile, the use of which was a hazard insured by the policy. The judgment is reversed and the cause is remanded with directions to the trial court to amend its conclusions of law and to enter a judgment declaring the relative and respective rights and obligations of the parties to this action in accordance with the views herein expressed. Each party shall bear its own costs on appeal. Bray, P. J., and Molinari, J., concurred.

NOTES

[fn. 1] 1. The policy defined these first two hazards as follows: "Division A. Owned Automobiles--The ownership, maintenance or use, for the purposes stated as applicable thereto in the declarations, of the owned automobile described therein.

"Division B. Hired Automobiles--The maintenance or use, for the purposes stated in the declaration, of any hired automobile. The definitions in this policy of 'pleasure and business' and 'commercial' apply respectively to private passenger automobiles and to automobiles of the commercial type, except as otherwise provided."

[fn. 2] 2. 
According to the opinion in the Athey case, the National policy provided: " 'If the insured has other insurance against a loss covered by Part I of this policy the company shall not be liable under this policy for a greater proportion of such loss than the applicable limit of liability stated in the declarations bears to the total applicable limit of liability of all valid and collectible insurance against such loss; provided, however, the insurance with respect to a temporary substitute automobile or nonowned automobile shall be excess insurance over any other valid and collectible insurance.' (Emphasis added.)" (Athey v. Netherlands Ins. Co., supra, 200 Cal.App.2d 10, 11.) [fn. 3] 3. Our opinion in Athey was filed on February 6, 1962, and after the entry of the instant judgment on July 5, 1961. [fn. 4] 4. One legal writer explains the solution employed in Oregon Auto and cases following it as emerging from a "disenchantment" with the technique of matching or pairing the clauses, i.e. requiring proration where either two pro rata clauses or two excess clauses are opposed. The author also comments upon the reluctance of some courts to emulate Oregon Auto in the disregarding of conflicting clauses and, as in American Automobile "to continue fixing responsibility on one insurer or the other." (The Double Insurance Problem--A Proposal, 13 Hastings L.J. 183.) [fn. 5] 5. Specifically: Truck Ins. Exchange v. Torres (1961) 193 Cal.App.2d 483 [14 Cal.Rptr. 408]; Continental Cas. Co. v. Zurich Ins. Co. (1961) 57 Cal.2d 27 [17 Cal.Rptr. 12, 366 P.2d 455]; see Athey v. Netherlands Ins. Co., supra, 200 Cal.App.2d 10, 14-15.
Effect of route of nutrition on recovery of hepatic organic anion clearance after fasting.

Previous work documented a 40% depression of hepatic indocyanine green (ICG) clearance (ClICG) in pigs fasted to 20% weight loss, with return to normal within 12 days of food refeeding. ClICG in pigs is insensitive to changes in hepatic blood flow but very sensitive to changes in hepatic function (HF). Serial ClICG determinations were performed to quantify the effect of route of nutrient delivery on recovery of HF. Fourteen pigs were fasted to 20% weight loss (12.8 days average) with both gastrostomy and intravenous catheters placed in each animal midway through the fast. ClICG was measured before fast, after fast, and after 12 days refeeding through the enteral or parenteral route at 125 kcal/kg/day with isonitrogenous, isocaloric diets containing 9% fat. Urine and stool were analyzed for total nitrogen. No significant differences appeared between groups in nitrogen output during fasting (4.5 +/- 1.2 gm/kg enteral, 4.6 +/- 1.2 gm/kg parenteral), in nitrogen intake (800 +/- 19 mg/kg/day enteral, 810 +/- 10 mg/kg/day parenteral), or in before or after fast ClICG, but enteral feeding produced more positive nitrogen balance. ClICG improved significantly with enteral but not with parenteral feeding. Enteral feeding produces faster nitrogen accrual and reverses the depression of major pathways of bilirubin and organic anion excretion associated with malnutrition. Parenteral feeding failed to improve organic anion clearance despite weight gain.
Based on Samsung's hacker-friendly track record, you'd generally expect one of its smartphones to come with an unlocked bootloader, making it easy to update or tweak with unofficial ROMs. That's not the case with Verizon's imminent version of the Galaxy S III, however. As the folks at XDA know only too well, this particular iteration of Sammy's flagship comes with a sealed bootloader, which makes it resistant (though not impervious) to hackery. Of course, Sammy has nothing to gain from snubbing the modding community in this way, so it stands to reason that VZW pushed the Korean manufacturer to supply them with a locked bootloader -- despite the fact that all other variants have been left open. We've reached out to Big Red for comment, but in the meantime a clever soul over at Rootzwiki claims they've already found a workaround for root access. (At this point, though, we'd better provide our usual disclaimer: be very careful before you poke around in there, because going up against a locked bootloader can be risky. The apparent safety of modern life is just a shallow skin atop an ocean of blood, guts and bricked devices.)
As you know, we are celebrating our 1st we-left-Dubai anniversary. Birthdays usually involve cakes (at least where I am from), but we'd rather present you with the full meal! Let's remember what the year was like… on our plates! IT'S FOOD TIME!

We started off in ETHIOPIA. I wasn't a fan of injera, but got to develop a taste for it once I got to have the real deal! I particularly liked Ethiopian vegetarian platters, as the meaty ones tend to be quite heavy!

Afterwards we went to INDIA and you know how you tend to take fewer photos when you are familiar with things? That's exactly what happened! Still, here's a little tribute to the one thing we are truly addicted to: chai!

Believe it or not, we didn't take a single food photo while in London, UK – shame on us! I do remember having a delicious pizza overlooking the River Thames and Ashray has quite fond memories of fish and chips.

Welcome to PORTUGAL – I am a very proud Portuguese person when it comes to food! Here are some of the delicious things we had during our stay in my motherland:

When we reached SPAIN, the land of tapas, we obviously got our kicks!

MEXICO was, hands down, one of the best countries we visited food-wise – if not THE best! The moment we crossed the Atlantic Ocean, we were in love with the rich, spicy flavors of Mexican cuisine. In Mexico, your taste buds wake up fully starting with the very first meal of the day! After what you've seen above, it would be hard for any country to match the level of taste that Mexico delivers in every dish.

CUBA wasn't particularly memorable food-wise. The shortages this country endures affect even its cuisine: there is a lack of ingredients and seasonings. But they do have something very good going for them, and that is fresh ingredients!

COSTA RICA is well known for its typical dish "casado". Casados present on the same plate a combination of rice, black beans, chicken/beef/fish or mixed vegetables, salad and fried softened plantain. 
Sometimes, it might include a fried egg, but most commonly not. For me, this is the best combination a traveler could ever wish for: a plate full of flavor, energy and variety!

While in Costa Rica, we came across a restaurant serving INDIAN FOOD. Oh my!… we couldn't pass on the opportunity of eating some roti and tandoori delicacies! Surprisingly, this was very, very good! Funnily enough, the restaurant would only serve Indian food for dinner but the chef was happy to make an exception and heat up the tandoor just for us… what a privilege!

To fly from Costa Rica to Brazil, we took a flight via MEXICO CITY, where we ended up spending 13 beautiful gastronomic hours. It felt like going back home, and food was a big part of it!

After one day in BRAZIL, we knew we were in beef-land! That wasn't necessarily a bad thing, since our friend Dushi, beef lover if there is one in this world, came to visit. Good rodízio places were SO expensive that we ended up trying this buffet of grilled meats served at your table at a very so-so place. It was our mistake as we could have had a much better experience. During Carnival in Salvador da Bahia, we had the following:

ECUADOR was a refreshing change for our palates after Brazil. We had arrived in the land of fruits! If it's true that in Brazil we experienced some of the tastiest fresh fruit juices I had ever tasted, in Ecuador we got to try even more new fruits, for me, "exotic fruits".

When you have so many new things, sometimes even rather rustic preparations, things can get boring. As adventurous as one might be, sometimes you're not in the mood for experimentation and you just crave something familiar. That's when you go for what I'd like to call "city food":

I love fresh tuna steaks and the GALAPAGOS ISLANDS stole my heart on this one – apart from their beautiful landscapes and wildlife, of course! Staying at the Red Mangrove Hotel we got to enjoy their restaurant multiple times. 
Their cuisine is a fusion of Ecuadorian and Asian – absolutely brilliant!

Finally in CHILE, we got ourselves an apartment in Santiago for 5 weeks. Having a place with a kitchen meant that I could do some cooking – not only were we missing home-made food, I was actually dying to do some cooking. Here are some of the dishes prepared in our Lastarria home:

After a couple of weeks in Santiago de Chile, our friend Ayush arrived from India and brought with him a festival of INDIAN FLAVORS: tea leaves, masalas, salty snacks like bhujia and masala chips and even some parathas his Mom was kind enough to make and wrap for us.

Santiago de Chile, like most big cities, is good food-wise, as it makes available a variety of restaurants, cuisines, influences… not only did we have CHILEAN FOOD, but also Indian, Korean, sushi… It is this kind of variety and availability that makes me think I have to live in a city. I am too driven by cravings to limit myself to a place with "local" kind of food only. And so in CHILE, we also fell in love with red wine…

As we went south in Chile, towards PATAGONIA, it was time for fresh seafood: Still, we never forgot about dessert!…

In EASTER ISLAND (aka Rapa Nui), it was time to enjoy fresh island tuna… again! I couldn't be happier!

A country that knows how to eat well is a country that wins my heart very easily. PERU was extremely successful at that! Let's see what our intro to Peruvian cuisine was like, in Arequipa:

Leaving the coastal region and going towards the ANDES mountains, the cuisine varies a lot, consisting of heavier, more caloric preparations:

Again in LIMA we had an apartment for almost a week, which allowed us to do some home cooking: When we weren't eating at home in Lima, we were again digging into some "CITY FOOD:"

Food in the AMAZON had nothing to do with food in the rest of Peru. Almost anything that crawls in the jungle can be eaten. 
We didn’t have such extreme preparations such as armadillo stew, grilled lizards and the likes, but we did have some simpler, nevertheless yummy dishes, like the ones below: And so one year of food comes to an end! I love food and I love eating and I do believe that, same as music, eating brings people together! It’s just one of those things, like breathing or sleeping, that we all do, no matter where in the world. Food tells so much of the history and culture of a place that, without the gastronomic experience, traveling wouldn’t be complete!
// Code generated by "go run msg_generate.go"; DO NOT EDIT.

package dns

// pack*() functions

func (rr *A) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDataA(rr.A, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *AAAA) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDataAAAA(rr.AAAA, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *AFSDB) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Subtype, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDomainName(rr.Hostname, msg, off, compression, false)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *ANY) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	return off, nil
}

func (rr *AVC) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packStringTxt(rr.Txt, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *CAA) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint8(rr.Flag, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packString(rr.Tag, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringOctet(rr.Value, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *CDNSKEY) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Flags, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Protocol, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Algorithm, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringBase64(rr.PublicKey, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *CDS) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.KeyTag, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Algorithm, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.DigestType, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringHex(rr.Digest, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *CERT) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Type, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint16(rr.KeyTag, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Algorithm, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringBase64(rr.Certificate, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *CNAME) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Target, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *CSYNC) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint32(rr.Serial, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint16(rr.Flags, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDataNsec(rr.TypeBitMap, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *DHCID) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packStringBase64(rr.Digest, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *DLV) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.KeyTag, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Algorithm, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.DigestType, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringHex(rr.Digest, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *DNAME) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Target, msg, off, compression, false)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *DNSKEY) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Flags, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Protocol, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Algorithm, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringBase64(rr.PublicKey, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *DS) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.KeyTag, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Algorithm, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.DigestType, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringHex(rr.Digest, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *EID) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packStringHex(rr.Endpoint, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *EUI48) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint48(rr.Address, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *EUI64) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint64(rr.Address, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *GID) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint32(rr.Gid, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *GPOS) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packString(rr.Longitude, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packString(rr.Latitude, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packString(rr.Altitude, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *HINFO) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packString(rr.Cpu, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packString(rr.Os, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *HIP) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint8(rr.HitLength, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.PublicKeyAlgorithm, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint16(rr.PublicKeyLength, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringHex(rr.Hit, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringBase64(rr.PublicKey, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDataDomainNames(rr.RendezvousServers, msg, off, compression, false)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *KEY) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Flags, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Protocol, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Algorithm, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringBase64(rr.PublicKey, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *KX) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Preference, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDomainName(rr.Exchanger, msg, off, compression, false)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *L32) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Preference, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDataA(rr.Locator32, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *L64) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Preference, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint64(rr.Locator64, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *LOC) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint8(rr.Version, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Size, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.HorizPre, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.VertPre, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint32(rr.Latitude, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint32(rr.Longitude, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint32(rr.Altitude, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *LP) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Preference, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDomainName(rr.Fqdn, msg, off, compression, false)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *MB) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Mb, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *MD) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Md, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *MF) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Mf, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *MG) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Mg, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *MINFO) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Rmail, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	off, err = packDomainName(rr.Email, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *MR) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Mr, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *MX) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Preference, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDomainName(rr.Mx, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NAPTR) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Order, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint16(rr.Preference, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packString(rr.Flags, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packString(rr.Service, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packString(rr.Regexp, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDomainName(rr.Replacement, msg, off, compression, false)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NID) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Preference, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint64(rr.NodeID, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NIMLOC) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packStringHex(rr.Locator, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NINFO) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packStringTxt(rr.ZSData, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NS) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Ns, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NSAPPTR) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Ptr, msg, off, compression, false)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NSEC) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.NextDomain, msg, off, compression, false)
	if err != nil {
		return off, err
	}
	off, err = packDataNsec(rr.TypeBitMap, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NSEC3) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint8(rr.Hash, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Flags, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint16(rr.Iterations, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.SaltLength, msg, off)
	if err != nil {
		return off, err
	}
	// Only pack salt if value is not "-", i.e. empty
	if rr.Salt != "-" {
		off, err = packStringHex(rr.Salt, msg, off)
		if err != nil {
			return off, err
		}
	}
	off, err = packUint8(rr.HashLength, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packStringBase32(rr.NextDomain, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packDataNsec(rr.TypeBitMap, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *NSEC3PARAM) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint8(rr.Hash, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.Flags, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint16(rr.Iterations, msg, off)
	if err != nil {
		return off, err
	}
	off, err = packUint8(rr.SaltLength, msg, off)
	if err != nil {
		return off, err
	}
	// Only pack salt if value is not "-", i.e. empty
	if rr.Salt != "-" {
		off, err = packStringHex(rr.Salt, msg, off)
		if err != nil {
			return off, err
		}
	}
	return off, nil
}

func (rr *NULL) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packStringAny(rr.Data, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *OPENPGPKEY) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packStringBase64(rr.PublicKey, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *OPT) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDataOpt(rr.Option, msg, off)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *PTR) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packDomainName(rr.Ptr, msg, off, compression, compress)
	if err != nil {
		return off, err
	}
	return off, nil
}

func (rr *PX) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) {
	off, err = packUint16(rr.Preference, msg, off)
	if err != nil {
		return
off, err } off, err = packDomainName(rr.Map822, msg, off, compression, false) if err != nil { return off, err } off, err = packDomainName(rr.Mapx400, msg, off, compression, false) if err != nil { return off, err } return off, nil } func (rr *RFC3597) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packStringHex(rr.Rdata, msg, off) if err != nil { return off, err } return off, nil } func (rr *RKEY) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint16(rr.Flags, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Protocol, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Algorithm, msg, off) if err != nil { return off, err } off, err = packStringBase64(rr.PublicKey, msg, off) if err != nil { return off, err } return off, nil } func (rr *RP) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packDomainName(rr.Mbox, msg, off, compression, false) if err != nil { return off, err } off, err = packDomainName(rr.Txt, msg, off, compression, false) if err != nil { return off, err } return off, nil } func (rr *RRSIG) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint16(rr.TypeCovered, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Algorithm, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Labels, msg, off) if err != nil { return off, err } off, err = packUint32(rr.OrigTtl, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Expiration, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Inception, msg, off) if err != nil { return off, err } off, err = packUint16(rr.KeyTag, msg, off) if err != nil { return off, err } off, err = packDomainName(rr.SignerName, msg, off, compression, false) if err != nil { return off, err } off, err = 
packStringBase64(rr.Signature, msg, off) if err != nil { return off, err } return off, nil } func (rr *RT) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint16(rr.Preference, msg, off) if err != nil { return off, err } off, err = packDomainName(rr.Host, msg, off, compression, false) if err != nil { return off, err } return off, nil } func (rr *SIG) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint16(rr.TypeCovered, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Algorithm, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Labels, msg, off) if err != nil { return off, err } off, err = packUint32(rr.OrigTtl, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Expiration, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Inception, msg, off) if err != nil { return off, err } off, err = packUint16(rr.KeyTag, msg, off) if err != nil { return off, err } off, err = packDomainName(rr.SignerName, msg, off, compression, false) if err != nil { return off, err } off, err = packStringBase64(rr.Signature, msg, off) if err != nil { return off, err } return off, nil } func (rr *SMIMEA) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint8(rr.Usage, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Selector, msg, off) if err != nil { return off, err } off, err = packUint8(rr.MatchingType, msg, off) if err != nil { return off, err } off, err = packStringHex(rr.Certificate, msg, off) if err != nil { return off, err } return off, nil } func (rr *SOA) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packDomainName(rr.Ns, msg, off, compression, compress) if err != nil { return off, err } off, err = packDomainName(rr.Mbox, msg, off, compression, compress) if err 
!= nil { return off, err } off, err = packUint32(rr.Serial, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Refresh, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Retry, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Expire, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Minttl, msg, off) if err != nil { return off, err } return off, nil } func (rr *SPF) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packStringTxt(rr.Txt, msg, off) if err != nil { return off, err } return off, nil } func (rr *SRV) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint16(rr.Priority, msg, off) if err != nil { return off, err } off, err = packUint16(rr.Weight, msg, off) if err != nil { return off, err } off, err = packUint16(rr.Port, msg, off) if err != nil { return off, err } off, err = packDomainName(rr.Target, msg, off, compression, false) if err != nil { return off, err } return off, nil } func (rr *SSHFP) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint8(rr.Algorithm, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Type, msg, off) if err != nil { return off, err } off, err = packStringHex(rr.FingerPrint, msg, off) if err != nil { return off, err } return off, nil } func (rr *TA) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint16(rr.KeyTag, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Algorithm, msg, off) if err != nil { return off, err } off, err = packUint8(rr.DigestType, msg, off) if err != nil { return off, err } off, err = packStringHex(rr.Digest, msg, off) if err != nil { return off, err } return off, nil } func (rr *TALINK) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err 
error) { off, err = packDomainName(rr.PreviousName, msg, off, compression, false) if err != nil { return off, err } off, err = packDomainName(rr.NextName, msg, off, compression, false) if err != nil { return off, err } return off, nil } func (rr *TKEY) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packDomainName(rr.Algorithm, msg, off, compression, false) if err != nil { return off, err } off, err = packUint32(rr.Inception, msg, off) if err != nil { return off, err } off, err = packUint32(rr.Expiration, msg, off) if err != nil { return off, err } off, err = packUint16(rr.Mode, msg, off) if err != nil { return off, err } off, err = packUint16(rr.Error, msg, off) if err != nil { return off, err } off, err = packUint16(rr.KeySize, msg, off) if err != nil { return off, err } off, err = packStringHex(rr.Key, msg, off) if err != nil { return off, err } off, err = packUint16(rr.OtherLen, msg, off) if err != nil { return off, err } off, err = packStringHex(rr.OtherData, msg, off) if err != nil { return off, err } return off, nil } func (rr *TLSA) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint8(rr.Usage, msg, off) if err != nil { return off, err } off, err = packUint8(rr.Selector, msg, off) if err != nil { return off, err } off, err = packUint8(rr.MatchingType, msg, off) if err != nil { return off, err } off, err = packStringHex(rr.Certificate, msg, off) if err != nil { return off, err } return off, nil } func (rr *TSIG) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packDomainName(rr.Algorithm, msg, off, compression, false) if err != nil { return off, err } off, err = packUint48(rr.TimeSigned, msg, off) if err != nil { return off, err } off, err = packUint16(rr.Fudge, msg, off) if err != nil { return off, err } off, err = packUint16(rr.MACSize, msg, off) if err != nil { return off, err } off, 
err = packStringHex(rr.MAC, msg, off) if err != nil { return off, err } off, err = packUint16(rr.OrigId, msg, off) if err != nil { return off, err } off, err = packUint16(rr.Error, msg, off) if err != nil { return off, err } off, err = packUint16(rr.OtherLen, msg, off) if err != nil { return off, err } off, err = packStringHex(rr.OtherData, msg, off) if err != nil { return off, err } return off, nil } func (rr *TXT) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packStringTxt(rr.Txt, msg, off) if err != nil { return off, err } return off, nil } func (rr *UID) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint32(rr.Uid, msg, off) if err != nil { return off, err } return off, nil } func (rr *UINFO) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packString(rr.Uinfo, msg, off) if err != nil { return off, err } return off, nil } func (rr *URI) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packUint16(rr.Priority, msg, off) if err != nil { return off, err } off, err = packUint16(rr.Weight, msg, off) if err != nil { return off, err } off, err = packStringOctet(rr.Target, msg, off) if err != nil { return off, err } return off, nil } func (rr *X25) pack(msg []byte, off int, compression compressionMap, compress bool) (off1 int, err error) { off, err = packString(rr.PSDNAddress, msg, off) if err != nil { return off, err } return off, nil } // unpack*() functions func (rr *A) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.A, off, err = unpackDataA(msg, off) if err != nil { return off, err } return off, nil } func (rr *AAAA) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.AAAA, off, err = unpackDataAAAA(msg, off) if err != nil { return off, err } return off, nil } func (rr *AFSDB) 
unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Subtype, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Hostname, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *ANY) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart return off, nil } func (rr *AVC) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Txt, off, err = unpackStringTxt(msg, off) if err != nil { return off, err } return off, nil } func (rr *CAA) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Flag, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Tag, off, err = unpackString(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Value, off, err = unpackStringOctet(msg, off) if err != nil { return off, err } return off, nil } func (rr *CDNSKEY) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Flags, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Protocol, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Algorithm, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.PublicKey, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *CDS) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.KeyTag, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Algorithm, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.DigestType, off, err = unpackUint8(msg, off) 
if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Digest, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *CERT) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Type, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.KeyTag, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Algorithm, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Certificate, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *CNAME) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Target, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *CSYNC) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Serial, off, err = unpackUint32(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Flags, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.TypeBitMap, off, err = unpackDataNsec(msg, off) if err != nil { return off, err } return off, nil } func (rr *DHCID) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Digest, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *DLV) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.KeyTag, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Algorithm, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.DigestType, 
off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Digest, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *DNAME) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Target, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *DNSKEY) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Flags, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Protocol, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Algorithm, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.PublicKey, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *DS) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.KeyTag, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Algorithm, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.DigestType, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Digest, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *EID) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Endpoint, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *EUI48) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Address, off, err = unpackUint48(msg, off) if err != nil { return off, 
err } return off, nil } func (rr *EUI64) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Address, off, err = unpackUint64(msg, off) if err != nil { return off, err } return off, nil } func (rr *GID) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Gid, off, err = unpackUint32(msg, off) if err != nil { return off, err } return off, nil } func (rr *GPOS) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Longitude, off, err = unpackString(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Latitude, off, err = unpackString(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Altitude, off, err = unpackString(msg, off) if err != nil { return off, err } return off, nil } func (rr *HINFO) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Cpu, off, err = unpackString(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Os, off, err = unpackString(msg, off) if err != nil { return off, err } return off, nil } func (rr *HIP) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.HitLength, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.PublicKeyAlgorithm, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.PublicKeyLength, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Hit, off, err = unpackStringHex(msg, off, off+int(rr.HitLength)) if err != nil { return off, err } rr.PublicKey, off, err = unpackStringBase64(msg, off, off+int(rr.PublicKeyLength)) if err != nil { return off, err } rr.RendezvousServers, off, err = unpackDataDomainNames(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *KEY) 
unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Flags, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Protocol, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Algorithm, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.PublicKey, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *KX) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Preference, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Exchanger, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *L32) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Preference, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Locator32, off, err = unpackDataA(msg, off) if err != nil { return off, err } return off, nil } func (rr *L64) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Preference, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Locator64, off, err = unpackUint64(msg, off) if err != nil { return off, err } return off, nil } func (rr *LOC) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Version, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Size, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.HorizPre, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.VertPre, off, err = 
unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Latitude, off, err = unpackUint32(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Longitude, off, err = unpackUint32(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Altitude, off, err = unpackUint32(msg, off) if err != nil { return off, err } return off, nil } func (rr *LP) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Preference, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Fqdn, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *MB) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Mb, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *MD) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Md, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *MF) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Mf, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *MG) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Mg, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *MINFO) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Rmail, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Email, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *MR) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Mr, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } 
return off, nil } func (rr *MX) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Preference, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Mx, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *NAPTR) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Order, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Preference, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Flags, off, err = unpackString(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Service, off, err = unpackString(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Regexp, off, err = unpackString(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Replacement, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *NID) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Preference, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.NodeID, off, err = unpackUint64(msg, off) if err != nil { return off, err } return off, nil } func (rr *NIMLOC) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Locator, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength)) if err != nil { return off, err } return off, nil } func (rr *NINFO) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.ZSData, off, err = unpackStringTxt(msg, off) if err != nil { return off, err } return off, nil } func (rr *NS) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Ns, off, err = UnpackDomainName(msg, off) 
if err != nil { return off, err } return off, nil } func (rr *NSAPPTR) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Ptr, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } return off, nil } func (rr *NSEC) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.NextDomain, off, err = UnpackDomainName(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.TypeBitMap, off, err = unpackDataNsec(msg, off) if err != nil { return off, err } return off, nil } func (rr *NSEC3) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Hash, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Flags, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Iterations, off, err = unpackUint16(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.SaltLength, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Salt, off, err = unpackStringHex(msg, off, off+int(rr.SaltLength)) if err != nil { return off, err } rr.HashLength, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.NextDomain, off, err = unpackStringBase32(msg, off, off+int(rr.HashLength)) if err != nil { return off, err } rr.TypeBitMap, off, err = unpackDataNsec(msg, off) if err != nil { return off, err } return off, nil } func (rr *NSEC3PARAM) unpack(msg []byte, off int) (off1 int, err error) { rdStart := off _ = rdStart rr.Hash, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Flags, off, err = unpackUint8(msg, off) if err != nil { return off, err } if off == len(msg) { return off, nil } rr.Iterations, off, err = unpackUint16(msg, off) if err != nil { return off, err } 
	if off == len(msg) { return off, nil }
	rr.SaltLength, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Salt, off, err = unpackStringHex(msg, off, off+int(rr.SaltLength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *NULL) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Data, off, err = unpackStringAny(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *OPENPGPKEY) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.PublicKey, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *OPT) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Option, off, err = unpackDataOpt(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *PTR) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Ptr, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *PX) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Preference, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Map822, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Mapx400, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *RFC3597) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Rdata, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *RKEY) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Flags, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Protocol, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Algorithm, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.PublicKey, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *RP) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Mbox, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Txt, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *RRSIG) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.TypeCovered, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Algorithm, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Labels, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.OrigTtl, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Expiration, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Inception, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.KeyTag, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.SignerName, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Signature, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *RT) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Preference, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Host, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *SIG) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.TypeCovered, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Algorithm, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Labels, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.OrigTtl, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Expiration, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Inception, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.KeyTag, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.SignerName, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Signature, off, err = unpackStringBase64(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *SMIMEA) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Usage, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Selector, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.MatchingType, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Certificate, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *SOA) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Ns, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Mbox, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Serial, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Refresh, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Retry, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Expire, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Minttl, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *SPF) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Txt, off, err = unpackStringTxt(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *SRV) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Priority, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Weight, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Port, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Target, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *SSHFP) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Algorithm, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Type, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.FingerPrint, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *TA) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.KeyTag, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Algorithm, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.DigestType, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Digest, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *TALINK) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.PreviousName, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.NextName, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *TKEY) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Algorithm, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Inception, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Expiration, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Mode, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Error, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.KeySize, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Key, off, err = unpackStringHex(msg, off, off+int(rr.KeySize))
	if err != nil { return off, err }
	rr.OtherLen, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.OtherData, off, err = unpackStringHex(msg, off, off+int(rr.OtherLen))
	if err != nil { return off, err }
	return off, nil
}

func (rr *TLSA) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Usage, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Selector, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.MatchingType, off, err = unpackUint8(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Certificate, off, err = unpackStringHex(msg, off, rdStart+int(rr.Hdr.Rdlength))
	if err != nil { return off, err }
	return off, nil
}

func (rr *TSIG) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Algorithm, off, err = UnpackDomainName(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.TimeSigned, off, err = unpackUint48(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Fudge, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.MACSize, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.MAC, off, err = unpackStringHex(msg, off, off+int(rr.MACSize))
	if err != nil { return off, err }
	rr.OrigId, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Error, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.OtherLen, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.OtherData, off, err = unpackStringHex(msg, off, off+int(rr.OtherLen))
	if err != nil { return off, err }
	return off, nil
}

func (rr *TXT) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Txt, off, err = unpackStringTxt(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *UID) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Uid, off, err = unpackUint32(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *UINFO) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Uinfo, off, err = unpackString(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *URI) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.Priority, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Weight, off, err = unpackUint16(msg, off)
	if err != nil { return off, err }
	if off == len(msg) { return off, nil }
	rr.Target, off, err = unpackStringOctet(msg, off)
	if err != nil { return off, err }
	return off, nil
}

func (rr *X25) unpack(msg []byte, off int) (off1 int, err error) {
	rdStart := off
	_ = rdStart
	rr.PSDNAddress, off, err = unpackString(msg, off)
	if err != nil { return off, err }
	return off, nil
}
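The generated unpack methods above all follow the same convention: each helper returns the decoded value, the new message offset, and an error, and the caller threads `off` through successive reads, bailing out early on truncation. A minimal, self-contained sketch of that offset-threading pattern (the `unpackUint16` here is a simplified stand-in written for illustration, not the library's actual helper):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// unpackUint16 mirrors the convention used by the generated unpack
// methods: decode a fixed-width big-endian field at off and return the
// value, the advanced offset, and an error if the message is truncated.
func unpackUint16(msg []byte, off int) (uint16, int, error) {
	if off+2 > len(msg) {
		return 0, len(msg), errors.New("overflow unpacking uint16")
	}
	return binary.BigEndian.Uint16(msg[off:]), off + 2, nil
}

func main() {
	msg := []byte{0x00, 0x0A, 0x00, 0x14}

	// Thread off through successive reads, just as the generated code
	// does for rr.Priority, rr.Weight, and so on.
	pref, off, err := unpackUint16(msg, 0)
	if err != nil {
		panic(err)
	}
	weight, off, err := unpackUint16(msg, off)
	if err != nil {
		panic(err)
	}
	fmt.Println(pref, weight, off) // 10 20 4
}
```

Because every helper reports where it stopped, the generated callers never need to track field widths themselves; they only check `err` and, between fields, whether `off` has already reached `len(msg)`.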
Herstal

Herstal, formerly known as Heristal, or Héristal, is a municipality of Belgium. It lies in the country's Walloon Region and Province of Liège along the Meuse river. Herstal is included in the "Greater Liège" agglomeration, which counts about 600,000 inhabitants. Herstal municipality includes the former communes of Milmort, Vottem, and Liers (partly, the other part being incorporated into Juprelle). A large armaments factory, the Fabrique Nationale or FN, and the biggest industrial zone of Wallonia (Haut-Sart) provide employment locally.

History

Merovingian and Carolingian golden age

The proximity of the Meuse River and the abundance of local resources have attracted settlers to this area since the fifth millennium BC. Around the end of the Roman era and at the beginning of the Merovingian period, the hamlet had become a fortified stronghold. The major road that linked Tongeren to Aachen crossed the Meuse here, where a ferry likely carried travelers to Jupille. The name Herstal is of Franconian origin, consisting of the elements hari ("army") and stal ("resting place", compare "stable"). The first mention of Herstal is in Latin documents from ±718 (Cheristalius corrected to Charistalius) and 723 (Harastallius). The first possibly non-Latinized occurrences are Eristail (in 919) and Harstail (1197).

Pippin of Herstal (ca 635–714), Mayor of the Palace and de facto ruler of Austrasia and Neustria and founder of the family that established the Carolingian dynasty, probably chose this location as his main residence because of its proximity to the major cities of Tongeren, Maastricht, and Liège. Pippin was the father of Charles Martel, victor of the decisive Battle of Tours that stopped the Arab-Muslim advance into northwestern Europe, and grandfather of Charlemagne, also supposedly born in Herstal. Charlemagne lived for at least fifteen years in Herstal but later established his capital in Aachen, ending Herstal's period of medieval glory as capital of the empire.
Late Middle Ages until now

The town was incorporated into the Duchy of Lower Lotharingia, which became part of the Duchy of Brabant at the end of the 12th century. Despite its proximity to Liège, the territory of Herstal did not become part of the Bishopric of Liège until 1740, when the prince-bishop Georges-Louis de Berghes bought it from Frederick II of Prussia. By that time, the town was mainly known for its able craftsmen: ceramists, blacksmiths, and clockmakers.

In the 19th century, Herstal became a city of coal and steel. It would, however, become world-famous thanks to the foundation of the Fabrique Nationale, a major armament factory, in 1889. Several motorcycle manufacturers also established themselves in town. On August 7, 1914, at the very beginning of World War I, the invading German army executed 27 civilians and destroyed 10 homes in Herstal. After World War II, heavy industry saw a prolonged period of decline, drastically reducing the number of jobs in these areas. Today, Herstal's economy is picking up again, with more than 200 companies established on its territory, including Techspace, which manufactures precision parts for the European Space Agency's Ariane rocket.

Politics

Herstal is a left-wing/socialist stronghold. It was also the strongest area of support for the far-left Workers' Party of Belgium in the 2019 elections, in which the party gained 27.55% of the local vote.

Sights

A museum, housed in a 1664 building typical of the region, shows various artifacts of the Prehistoric and Gallo-Roman periods, a Frankish burial place, and several displays retracing the history of the Pippinid dynasty that originated here. The museum also has a collection of local industrial products, including samples from the FN. The Pippin Tower incorporates a wall section thought to have belonged to the palace of Charlemagne.
Notable people

Pippin of Herstal, Mayor of the Palace of Austrasia, Neustria and Burgundy (635 or 640–714)
Charles Martel, Mayor of the Palace and Duke of the Franks (686–741)
Charlemagne, king of the Franks and founder of the Holy Roman Empire (742 or 747–814; birth in Herstal is uncertain)
John Browning, American firearms designer (1855–1926)

Twin cities

Castelmauro
Kilmarnock
Alès
Does immunosuppressive pharmacotherapy affect isoagglutinin titers?

Preoperative reduction of isoagglutinins leads to successful ABO-incompatible (ABOi) renal transplantation. The strategy includes pretransplantation plasmapheresis, more potent immunosuppressive drugs, splenectomy, and anti-CD20 antibody. It has been reported that low isoagglutinin antibody titers posttransplant were observed among ABOi renal transplants with favorable outcome. The isoagglutinin titers may increase slightly when plasmapheresis is discontinued; however, it never returns to the pretreatment level under immunosuppressive therapy. This raises the question of what occurs to the isoagglutinin titer in ABO-compatible renal transplants under maintenance immunosuppressive pharmacotherapy.

We analyzed 10 renal transplant recipients, including seven living and three cadaveric donors. Patients were treated with basiliximab (20 mg) intravenously on day 0 and day 4. Maintenance immunosuppressive therapy involved a calcineurin inhibitor, mycophenolate mofetil, and steroid. Anti-human globulin isoagglutinin titers were routinely examined 1 day before and day 0 and 1, 2, 3, 4, 8, 12, and 24 weeks posttransplant. No ALG or intravenous immunoglobulin or plasmapheresis treatment was provided in the follow-up period.

Our preliminary data showed nearly no influence on isoagglutinin titer levels in 6-month follow-up under maintenance immunosuppressive therapy. In addition, no significant difference in isoagglutinin titer was observed between tacrolimus and cyclosporine groups. Maintenance immunosuppressive pharmacotherapy did not affect isoagglutinin titer levels in ABO-compatible kidney transplants. Further study is needed to investigate the mechanisms of persistent low-level isoagglutinin titers among successful ABOi renal transplantation patients.
Q: Using PHP with LDAP returns all results into one connected line

I'm trying to get user information from my Active Directory through LDAP. I'm using for loops to retrieve each username for a specific AD OU. All results are showing in one line without any separation. If I put $LDAP_CN into an array, it just creates a lot of different arrays.

Here is my PHP code:

$entries = ldap_get_entries($ldap_connection, $result);
for ($x = 0; $x < $entries['count']; $x++) {
    $LDAP_CN = "";
    if (!empty($entries[$x]['cn'][0])) {
        $LDAP_CN = $entries[$x]['cn'][0];
        if ($LDAP_CN == "NULL") {
            $LDAP_CN = "";
        }
    }
    echo($LDAP_CN);
}

Output:

Name LastnameName1 Lastname1Name2 Lastname2Name3 Lastname3Name4 Lastname4 and etc.

When I try to var_dump $LDAP_CN it gives output like that:

string(13) "Name Lastname"
string(15) "Name1 Lastname1"
string(15) "Name2 Lastname2"
string(15) "Name3 Lastname3"
string(15) "Name4 Lastname4"
etc.

So I'm guessing that it knows how to separate them. But how? I tried explode; it just creates a lot of arrays. Also, if I put the echo outside of the loop, it just returns the last result.

A: Collect all results into one array, then print it:

$LDAP_CN = [];
for ($x = 0; $x < $entries['count']; $x++) {
    if (!empty($entries[$x]['cn'][0])) {
        // Skip the literal string "NULL"; otherwise keep the common name.
        $LDAP_CN[] = $entries[$x]['cn'][0] == "NULL" ? "" : $entries[$x]['cn'][0];
    }
}
print_r($LDAP_CN);
Q: In Stargate is there an in-universe explanation of the cumulative effect of Zat'nik'tel (Zat guns)?

Several times we see characters who had previously been shot by a zat gun shot again. The guns were supposed to be painful on the first shot, fatal on the second, and matter-destroying on the third. It seems that multiple shots spaced over time did not have this effect. Is there an in-universe explanation as to why the cumulative effect of zat guns wears off?

A: Third Zap's the Charm

Let's consider the Goa'uld and their needs. They were not known for their patience or tolerance, but their technology was definitely first rate. Only the Ori or the Asgard seemed to have working technology as sophisticated. It is likely the Ancients also had technology as effective, but few working samples remained. The Goa'uld have mastered the ability to compress energy into small devices such as Zat'nik'tel and Staff weapons. Both weapons have amazing capacity for energy emission and matter disruption.

If we consider the effects we have seen:

- the ability to stun almost any creature (there are exceptions, but it's a short list)
- the ability to kill any creature struck more than once in a short period of time, likely due to neural collapse
- the ultimate disintegration of a target with a third shot
- the energy of a Zat can be conducted through both metal and water
- the energy of a Zat is destructive to electronic devices (but does not work on the Replicators)

My initial assessment would call this a form of coherent electron-beam weapon, or a lightning-gun: a beam of coherent electrons guided by an invisible laser that polarizes the air and allows the electrons to travel to the target. The guidance effect in this case must allow a massive number of charged particles to reach the target.
This meets several of the criteria for our weapon:

- Destroys electronics
- Can be conducted by metal or water
- Can cause a shock to human or humanoid-like creatures, likely by overloading their neural systems

But how do we get to the matter-destruction possibility? Perhaps the Goa'uld are advanced enough to use the anti-particle of the electron, the positron. This anti-matter particle would act pretty much like an electron would, but if it could be caused to stay with a target for more than a few minutes, perhaps by clinging to the matter in some undisclosed fashion, a target could:

- Be affected just like they would if zapped by a heavy charge of electrons
- Retain enough charge that a second exposure would exacerbate the first, killing a potential target
- With a third exposure, undergo enough of an anti-particle reaction to cause annihilation (non-explosively?), reducing matter to dust

This might also explain how the Zat'nik'tel could possibly have been used to boost hyperdrive engines when amplified by Ancient knowledge. Granted, this would make the Goa'uld weaponry very strange by our standards, but if they could control anti-matter streams in that fashion, it would make most armor obsolete, as all any excess matter would do once struck by a Zat is hold the positron charge even better.

After reviewing all of the sources I could on Zat weaponry, I have to conclude (and this is held up by even the show's designers) that no one considered the Zat and its effects very thoroughly, and any speculation here is clearly my own.

A: To answer your question about multiple shots over long time spans:

In one SG-1 episode, Zats are used against a species of swarming creatures that are electromagnetic in nature. The team realizes that their only chance to reach the stargate without being attacked by the creatures is to reach it under the protection of an electromagnetic field.
Colonel O'Neill is hit once with Zat fire and he makes his way toward the gate as soon as he is able to stand. However, halfway there, the field around his body begins to dissipate and the creatures begin to break through.

This means:

- Zats impart some sort of charge on their victim that can generate an EM field (it has to be an unknown particle that is not an electron, proton, or positron).
- The charge dissipates slowly (within 5-20 minutes, judging from the above-mentioned episode).
- Body-wide pain is most likely caused by electrical effects on the nervous system.
- Charge imparted by successive shots is cumulative and, beyond a threshold, lethal. This explains the lethal second shot and the ability to withstand multiple shots if enough time passes between them.

The disintegration effect is poor, both scientifically and as a storytelling device. It was wisely abandoned by the writers.

A: Building on HNL's answer, it might be possible that the bolt of energy shot from the Zat is made of positronium, stabilised in some fashion. It travels fast enough to reach the target and imposes a charge on the target (different regions being charged differently), though it would be largely negative due to some or most of the positrons annihilating. At this point not enough charge has been induced in the victim to kill it. On the second blast about the same amount of negative charge is deposited. With the victim still charged from the first blast, this charge may be enough to kill them via the pure level of charge involved. Otherwise, when the charge does attempt to ground itself, it now has to travel through the heart (not just down the legs) as the victim is on the ground. On the third shot things get interesting. The body now has a large negative charge.
This charge acts to repel the electrons and attract the positrons, which reach the body faster than before. Fewer positrons are annihilated on the way in, and they start a chain reaction with the energy released from the anti-matter reaction, which is enough to disintegrate the body. It's dodgy science, but it's still science.
Pro-life marchers rally at the Supreme Court during the 46th annual March for Life in Washington on Jan. 18, 2019. (Joshua Roberts/Reuters)

Democrats Are Showing Us All Who They Really Are — Let's Believe Them

Part One: Very late term abortions and infanticide

Commentary

We are living in a unique era in U.S. history. A political party that long ago embraced radical extremism on many issues was able to hide that from much of the public, with the help of a mainstream media that was willing to turn itself into that political party's propaganda arm. Throughout the 1960s, 1970s, 1980s, 1990s, and even into the 2000s, the Democratic Party was able to keep its mask of being a pro-American, mainstream political entity firmly in place.

Starting with the election of Barack Obama as president in 2008, however, that mask began to slip. Top Democratic politicians and thought leaders, having just won a presidential election against the lackluster campaign of Sen. John McCain of Arizona, engaged in what now looks like foolish triumphalism. The words "Permanent Democratic Majority" were thrown around by talking heads on television, with left-leaning pundits confidently asserting they just couldn't see how Republicans could possibly win another election, now that Obama was about to lead America to the Promised Land.

Eight years of watching how Obama intended to use the powers of the federal government to coerce America to enter the Democratic version of the Promised Land caused many Americans to revolt. So, far from shuffling off to its demise, the Republican Party recovered from its 2008 defeat well enough to make gains in Congress in the 2010, 2012, 2014, and 2016 elections.
Despite the electoral setbacks, the Democratic Party was still smugly confident of retaining control of the executive branch and its all-important control of federal government agencies and also appointments to the judiciary, as I wrote in an earlier column:

"Control of the executive branch means control of the federal agencies and indirect control of the judicial branch through appointments to the bench. As long as you have control of the executive branch, you have control of the federal government. You can use that to hold a hostile Congress in checkmate.

"See, Democrats can stand not having complete control of Congress. They can put up with that. They just need their activist judges and their Democrat president to keep those Republican troublemakers in the House and/or the Senate in check."

And then, as I love saying, a miracle happened: Donald Trump won the 2016 election, and all the things he's accomplished in the Oval Office since—deliberately reversing all the 'progress' the Democrats had made—have literally goaded them into dropping their mask and showing all Americans their true face.

Late Term Abortions and Infanticide

The still-unfolding fiasco in Virginia, in which state Delegate Kathy Tran introduced a bill—one she didn't even seem to understand all that well—that would literally legalize infanticide, grabbed the attention of the entire country. Video footage of Tran having to have her own legislation explained to her by a lawyer from NARAL (National Abortion Rights Action League) appears to reveal that Tran was only submitting the measure on behalf of NARAL and other pro-abortion groups. She didn't even understand it herself, and it's becoming clear that neither did other members of the House of Delegates who had backed it.

As if that wasn't bad enough, Virginia Gov.
Ralph Northam went on a radio show and droned on, in completely unemotional fashion, about how a baby born alive would be kept 'comfortable' while a discussion ensued as to whether to end that infant's life or not. As you can imagine, when a video of that interview went viral on social media, plenty of people were horrified.

For decades, Democrats have been able to hide their radical positions on late-term abortion and infanticide, while simultaneously painting the other side in this crucially important national debate—the pro-life side—as being the "real extremists." They've been able to pull off this neat double-trick with the complicity of the media, which allows them to lie by using weasel words and euphemisms to describe killing viable infants in the second and third trimesters. For decades, slanted press coverage allowed the Democratic Party to hide its real positions on the abortion issue. But in just the past few years, the ground has shifted and things have changed dramatically in this country's national debate on abortion.

The Gosnell Case

The absolute horror uncovered in the Kermit Gosnell case shined a long-delayed spotlight on late-term abortions in the United States. Gosnell not only murdered viable infants very late in a pregnancy, but horrified investigators also discovered he actually kept trophies stored away in freezers on the premises. In case you've never read the grand jury's report on the horrors found inside Gosnell's clinic, I covered that extensively on my blog.

The mainstream media tried to hide the scandal of the Gosnell case by simply refusing to cover it and dismissing it as a "local news story" that only people in the state of Pennsylvania were interested in.

Premature Viability Now at 24 to 26 Weeks

Many state legislatures are recognizing that medical advancements in the past decade are allowing for premature babies to be saved at a much earlier stage.
In a dramatic change, as young as 24 weeks is now considered the cutoff point of viability for a preemie. Despite intense opposition from the pro-abortion lobby, states such as Texas were successful in tightening the legal abortion limit to 20 weeks (5 months) from 26 weeks (6 ½ months). Note that by week 26, four of every five premature infants are being saved. This explains why states are right to take this into account when looking at their legal abortion limit.

Roe v. Wade, the landmark 1973 Supreme Court ruling that legalized abortion nationwide by striking down all state laws prohibiting the practice, used viability as the standard. In the 1970s, it was rare for babies born three months premature to survive. That's no longer the case, and it's of paramount importance for state laws to reflect that truth.

Undercover Videos of Planned Parenthood

A series of undercover videos shot by citizen journalists from The Center for Medical Progress demonstrated beyond a reasonable doubt that Planned Parenthood, the nation's largest abortion provider, was engaged in a conspiracy to violate the laws on human organ sales. Planned Parenthood used the courts to block the release of further videos while using its media allies to attack the videos as 'edited' and 'doctored.' In a recent court case, however, it was ruled the videos are authentic.

The Gosnell case and the videos forced Democrats out of the tall grass of the vapid, vague, ill-defined "pro-choice" language they'd been successfully using and into the open, where they had to explicitly argue for what they wanted. And what they wanted was for abortion to stay legal in states at 6 ½ months into the pregnancy.

And now? It's becoming increasingly clear the Democratic Party is going "all-in" on late-term abortions, even late into the third trimester, as New York State's recent abortion law and the Virginia scandal starkly demonstrate.
The polls have consistently shown over the decades of the heated abortion debate in the United States that support for legal abortion drops precipitously after the first trimester. Only an average of 8 to 14 percent of Americans support legal abortion in the third trimester.

Gallup may soon have to add a fourth line to this poll: "Those who think the born infant should be kept comfortable while a discussion about his/her fate ensues." You can thank the Democratic Party for that.

Brian Cates is a writer based in South Texas and author of "Nobody Asked For My Opinion … But Here It Is Anyway!" He can be reached on Twitter at @drawandstrike.

Views expressed in this article are the opinions of the author and do not necessarily reflect the views of The Epoch Times.
Please note that the blogs/experiences presented here are being shared by persons who invoke their First Amendment right to freely share opinion and experiential information. These blog/experiences and opinion/information are not to be considered typical and are disavowed as being product claims or labeling. Those sharing information may or may not (we do not vet the statements given to us) have a financial interest in Parent Essential Oils products. We have not paid for any of these blog statements. These blog entries are presented for the purpose of sharing experiences in the hopes of stimulating each person's further research and in the hopes of increasing awareness and understanding of how Parent Essential Oils affect cell oxygen content and overall health. These experiences and opinions are not verified by the operators of this web site and are subject to errors of interpretation that only scientific studies could rule out. We do not guarantee these blog details to be accurate. Therefore, these blog entries must not be relied upon in predicting the results of anyone else.

Blog/Experiences

• I became a believer in PEOs when after taking 4 PEO soft gels before beginning a 28 mile cross-country bike race, I finished the race with no joint pain and no muscle cramps. In fact, at the end of the 28 miles, I could have kept going while my companions were completely spent. This was the first time I had finished without being in pain in 4 years of bicycle racing. If that wasn't enough, after just two months' use of PEOs, after a routine medical checkup, my doctor asked me in a very serious way what I had been doing differently. Because I have been running slightly high on my cholesterol for several years, I expected to be told that my cholesterol had become worse and that I needed to go on drugs to reduce it. Instead, he told me that my cholesterol had come down 40 points, with my good cholesterol being higher and my bad cholesterol being much lower.
I told him the only thing different that I had been doing was supplementing with PEOs. Patrick S •At 70 and after having a serious spinal injury one of my challenges is intensifying my physical workouts. I am intent on rebuilding lean muscle mass and endurance. Cramping and pain from lactic acid buildup during aggressive workouts plague everyone and especially people my age. Since consuming Parent Essential Oils before my workouts and again later in the day, I have seen dramatic results. My recovery is amazing even when I have gone way too far and past common sense. Hank H •Since using these parent oils for the first time now I can tell that my mind is sharper. It’s been about a week since I started taking them and have been able to notice the difference. Very encouraging. Sue C. •I just wanted to provide some feedback on one patient that I implemented the use of PEOs for atopic dermatitis. He had been taking 6 grams of fish oil per day as he was on some bizarre, weight-lifting-crazy, low-fat diet where the only fat he took was fish oil. I stopped his fish oil (of course!) and started PEOs in combination with good skin care and some mild topical steroids…. After fifteen doctors, seven years of severe, almost debilitating, eczema was gone in two months. An absolutely fabulous case!! Made my day!! Dr. Jonathan Carp, M.D. •High blood pressure has been my problem for a long time. I have been on medication for this and my real desire is to not take this any longer. After just one week of using Parent Essential Oils my blood pressure is reading normal! This is the first time I have seen a ‘supplement’ have this kind of effect on me. I am very encouraged. Corene C. •I knew that if you guys recommended these PEOs I would probably be pleased with the outcome. I was still surprised that I could tell a definite increase of quality energy after taking them even the first time.
As a retired electrical engineer I analyze things quite well and I am excited about using the parent essential fatty acids long term. Marcie R. •I did order two bottles of the Parent Essential Oils and started taking them on Tuesday. I immediately noticed that I experienced less effort to do physical exercise and had a quicker recovery afterwards on my very next morning workout. Since I am in my 70’s I usually do not notice benefits so quickly when beginning to use a new supplement. Mel W. blog Oxygen4Cells.com PEOs - The Smart Way to Oxygenate Your Cells!
Monday, April 1, 2013 So Much Going On, So Little to Report It’s an interesting time of the year right now. So much is going on yet nothing is going on. I want to write a blog post about something but no single subject can adequately fill a post. So let me break up THIS post into four pieces and maybe that will fill up the better part of a page. I can say that once things start breaking loose, there will be no shortage of things to talk about! TABBY LANE Tabby was covered early last week by Eastwood Dacat (Storm Cat – Western Eternity) and now we wait. She was exhibiting signs of heat even after her first cover so she went back to the shed. After that she was fine, so now we will wait for a pregnancy check in another week or so. She handled the experience like a champ for a maiden mare and all the credit goes to Lisa Duoos at Dove Hill Farm and Reproductive Services for her patience and expertise. ELUSIVE EDITION Our 2-year old Minnesota bred filly (Late Edition – Mystical Elusion) is doing as well as can be expected in the snow covered Minnesota tundra. Thankfully the snow is melting away and she should be getting her work in on a larger oval shortly instead of the makeshift oval she has been using – BUT that’s WAY better than nothing and foundation miles are foundation miles. Also, she’s learning her trade: how to be handled like a racehorse, what’s expected of her in the morning, how to interact with her humans – all important as well. As a 2-year old, I wouldn’t expect her over to the track for a few more months but we are really anxious to see her stretch her legs and see what we have. CLAIMING GROUP I have to say that this is one of the most energetic and fun groups I’ve ever put together. There is a core of really strong handicappers and racing fans that love exchanging opinions which has made our Yahoo Group a ton of fun. It doesn’t hurt that many of the horses that we discuss dropping a slip for go on to win. It’s become a pretty profitable betting angle for the group.
We have dropped slips on three occasions and been outshook each and every time. While that DOES tell us we are on the right track, it’s still frustrating as all get out. We have reached out for a private purchase that we hope to be able to consummate this week that will get us a really nice horse that should fit well into the upper claiming/possibly allowance ranks at Canterbury for the summer. But we will wait and see – it’s way too early to count chickens! That said, the horse would be coming from a great outfit and I’d be really excited to have him part of our stable. CANTERBURY RACING CLUB The management of the Canterbury Racing Club really begins in earnest now that signups are done and we’ll start looking for horses. It looks like we exceeded last year’s membership by about 17%, so that’s nice and gives us the basis for a nice start with a couple of horses with Clay Brinson out of Hawthorne. We also got a nice mention in the Paulick Report’s Good News Friday column, which was fun to see as well. As I said, there is a lot going on but, for now, everything is on edge. Once it all starts to break loose, however, it should be one very busy summer and this keyboard should get beat pretty good!
The functional nature of the projection from the frontal eye field to the brain stem has been studied in the rhesus monkey. Like the frontotectal projection, the frontopontine projection contains cells which discharge in association with eye movements or visual fixation, but not cells which have exclusive peripheral visual responses. The nature of the visual stimuli evoking smooth pursuit was studied using open-loop visual methods. Superimposition of open-loop position and velocity errors during pursuit maintenance resulted in the generation of eye velocities indicating that stimulus position as well as stimulus velocity is an important stimulus for the maintenance of smooth pursuit. The time course and dynamics of uniocular saccadic adaptation were studied in monkeys who were made to adapt to a weakened eye. At first the weakened eye had a hysteresis in orbital position, and an orbital-position-dependent saccadic inaccuracy. Both the hysteresis and the orbital-position-dependent effects were compensated for in a point-by-point manner with experience. The results suggest that the oculomotor system has a complicated and sensitive corrective mechanism for the non-linearity of orbital mechanics. Any physical derangement causes maladjustment of this compensation, which can be adapted in time.
Localization and enzymatic activity profiles of the proteases responsible for tachykinin-directed oocyte growth in the protochordate, Ciona intestinalis. We previously substantiated that Ci-TK, a tachykinin of the protochordate, Ciona intestinalis (Ci), triggered oocyte growth from the vitellogenic stage (stage II) to the post-vitellogenic stage (stage III) via up-regulation of the gene expression and enzymatic activity of the proteases: cathepsin D, carboxypeptidase B1, and chymotrypsin. In the present study, we have elucidated the localization, gene expression and activation profile of these proteases. In situ hybridization showed that the Ci-cathepsin D mRNA was present exclusively in test cells of the stage II oocytes, whereas the Ci-carboxypeptidase B1 and Ci-chymotrypsin mRNAs were detected in follicular cells of the stage II and stage III oocytes. Double-immunostaining demonstrated that the immunoreactivity of Ci-cathepsin D was largely colocalized with that of the receptor of Ci-TK, Ci-TK-R, in test cells of the stage II oocytes. Ci-cathepsin D gene expression was detected at 2h after treatment with Ci-TK, and elevated for up to 5h, and then slightly decreased. Gene expression of Ci-carboxypeptidase B1 and Ci-chymotrypsin was observed at 5h after treatment with Ci-TK, and then decreased. The enzymatic activities of Ci-cathepsin D, Ci-carboxypeptidase B1, and Ci-chymotrypsin showed similar alterations with 1-h lags. These gene expression and protease activity profiles verified that Ci-cathepsin D is initially activated, which is followed by the activation of Ci-carboxypeptidase B1 and Ci-chymotrypsin. Collectively, the present data suggest that Ci-TK directly induces Ci-cathepsin D in test cells expressing Ci-TK receptor, leading to the secondary activation of Ci-chymotrypsin and Ci-carboxypeptidase B1 in the follicle in the tachykininergic oocyte growth pathway.
Off with his hair! Prince William finally caves and shaves head Prince William has never been shy about his receding hairline, but now he has debuted a dramatic new look. The Duke of Cambridge appeared pleased with his new look as he smiled and waved at fans at the launch of the Step into Health programme at the Evelina London Children’s Hospital on Thursday. Wearing a smart suit, he didn’t look a bit nervous as he greeted staff and kids inside. Stopping to chat to some of the young patients, William knelt down and allowed the camera a close up of his head, laughing the whole time. While he’s been sporting a receding look for a few years now, it’s easily his closest shave to date. Prince William greeted patients at the hospital. Source: Getty. The look was very appropriate for the visit, as he met armed forces veterans. The programme helps former members of the armed forces and their partners take up careers in the NHS. William’s growing bald patch is a long-running joke within the royal family, and back in 2014, his wife Catherine joked that alpaca wool could be a great solution for his head while she visited the Sydney Royal Easter Show. According to reports at the time, she saw the wool before pointing to her husband’s head and saying: “You need it more than me.” Meanwhile his brother Prince Harry certainly hasn’t held himself back, and back in 2010, while speaking to a World War Two veteran in Barbados, he joked William “was already bald aged 12”. Prince William once had much thicker hair a few years ago. Fans shared their thoughts on social media, and one tweeted: “William gets the crown, Harry got the hair,” while another added: “Prince William has more strength and pride than @realDonaldTrump He finally just let that last bit of hair go. Good for him.” Others weren’t as impressed, with one user even writing: “Prince William is a fool.
He has more than enough money so he should’ve got a hair transplant the minute he saw a speck of baldness. Now look at him.” An expert has since weighed in, and claimed the new cut shows the Duke has courage. Celebrity hair stylist Joe Mills told the Independent: “I think him just literally going I’m going to crop my hair off is a really strong statement, a very confident statement. If you are photographed and in the public eye to that degree it’s very, very hard to look great all the time.” What do you think of the Prince’s new look? Did you prefer his longer hair, or shaved look now?
Q: Is the Dirac delta functional linear? Show that $\delta_0$, the Dirac functional, defined by $\left<{\delta_0,\phi}\right> = \phi(0)$, is linear. My attempt: let $\phi_1,\phi_2 \in W^{m,p}(\Omega)$; then $\delta_0(\phi_1+\phi_2)=(\phi_1+\phi_2)(0)$, but I need more steps. A: Hint: What is the definition of a linear functional? Just plug this into the definition and see if it works. Continuing your attempt: $(\phi_1 + \phi_2)(0)=\phi_1 (0) + \phi_2 (0)$.
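For completeness, the full linearity check is a one-line computation, since evaluation at a point distributes over both addition and scalar multiplication of functions:

```latex
% Linearity of the evaluation functional \delta_0:
% for scalars a, b and test functions \phi_1, \phi_2,
\langle \delta_0,\, a\phi_1 + b\phi_2 \rangle
  = (a\phi_1 + b\phi_2)(0)
  = a\,\phi_1(0) + b\,\phi_2(0)
  = a\,\langle \delta_0, \phi_1 \rangle + b\,\langle \delta_0, \phi_2 \rangle .
```

The first equality is the definition of $\delta_0$, the second is the pointwise definition of the vector-space operations on functions, and the third applies the definition of $\delta_0$ twice in reverse.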
Do neurological disorders in childhood pose a risk for mental health in young adulthood? To assess whether juvenile-onset epilepsy or motor disability is complicated by an increased number of mental health disorders or experience of psychosomatic symptoms in young adulthood, we studied 81 subjects with epilepsy and 52 with motor disabilities at the age of 19 to 25 years and compared them with 211 controls. The main diagnostic tool, the Present State Examination, was administered to those attending the interviews in person who were of normal intelligence; there were 62, 38, and 123 subjects in the three categories, respectively. Compared with the controls, the subjects with epilepsy showed an equal prevalence of psychiatric disorders whereas those with motor disabilities had a significantly higher prevalence, particularly of depressive disorders. The reported prevalence of psychosomatic symptoms confirmed this main result. Psychological illness affected everyday life of two out of five subjects with motor disabilities, but only half of those in the other two groups. It is concluded that motor disability since childhood, but not epilepsy, could be a factor that increases susceptibility to psychiatric morbidity, especially depression, and causes a large number of psychosomatic symptoms. The results challenge staff of clinics working with such adolescents to find individual approaches in preventing the negative influence of psychological disorders on social life.
Q: how to set header font color in LaTeX Is it possible to change the header font color in LaTeX? A: You could have a look at the sectsty package. The sectsty package provides a set of commands for changing the fount used for the various sectional headings in the standard LaTeX 2ε document classes. From the manual: Make sure you include the package in your document by saying in your document preamble: \usepackage{sectsty} You will then have some new commands available. For example: \allsectionsfont{\sffamily} will give you sanserif for all sectional headings. Here is the full manual
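To make this concrete for color specifically, here is a minimal sketch combining sectsty with the xcolor package (the color name `blue!60!black` is just an illustrative choice):

```latex
\documentclass{article}
\usepackage{xcolor}   % provides \color and \textcolor
\usepackage{sectsty}  % provides \allsectionsfont, \sectionfont, etc.

% Color (and set sanserif) for every sectional heading:
\allsectionsfont{\color{blue!60!black}\sffamily}

\begin{document}
\section{A Colored Heading}
Body text remains in the default color.
\end{document}
```

Note that sectsty styles section headings. If by "header" the question means the running page header, the fancyhdr package is the usual tool; wrapping the header text in \textcolor{...}{...} inside \fancyhead achieves the analogous effect.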
Involvement of phosphorylation of Tyr-31 and Tyr-118 of paxillin in MM1 cancer cell migration. We demonstrated previously that rat ascites hepatoma MM1 cells require both lysophosphatidic acid (LPA) and fibronectin (FN) for phagokinetic motility and transcellular migration and that these events are regulated through the RhoA-ROCK pathway. It remains to be elucidated, however, how the signals from both LPA and FN are integrated into cell migration. To examine this, total cellular lysates after stimulation with LPA or FN were subjected to time-course immunoblot analysis with anti-phosphotyrosine antibodies (Abs). Consequently, tyrosine-phosphorylation of paxillin was obviously persistent after stimulation with FN + LPA as compared to after stimulation with either alone. Tyrosine-phosphorylated paxillin comprised 2 components; slowly and fast migrating ones. Immunoblotting of anti-paxillin immunoprecipitates with phosphorylation site-specific Abs revealed the following: tyrosine-phosphorylation was enhanced preferentially on a slowly migrating component after stimulation with FN + LPA; this component contained phosphorylation at both tyrosine residue (Y) 31 and Y118; and phosphorylation of paxillin at Y181 was constitutive and not augmented by stimulation with either FN or LPA. Amiloride, an inhibitor of the Na+/H+ antiporter downstream of ROCK, suppressed cell motility and correspondingly paxillin tyrosine-phosphorylation at both Y31 and Y118. Paxillin phosphorylation weakly induced by FN alone, insufficient for cell migration, was not inhibited by amiloride. These results demonstrate that LPA collaborates with FN for persistent tyrosine phosphorylation of paxillin at both Y31 and Y118, regulated by the Na+/H+ antiporter downstream of ROCK and that this phosphorylated paxillin is essential for MM1 cancer cell migration.
Q: Bearing features (2RS versus 2RSH) I have done sufficient googling to discover that a -2RS bearing is one with two rubber seals. I haven't ascertained much beyond that. For an application currently using a 2RSH bearing, can I replace it with a 2RS bearing? When is the answer yes, and when is the answer no? Thanks! A: Per this dictionary the RS and RSH parts mean the same thing. They mean Contact seal made of acrylonitrile-butadiene rubber (NBR)... That would make it seem as though different manufacturers may use different acronyms for the same thing. If that is really the case then you can totally interchange the two. However, I'm not super-familiar with the nomenclature of wheel bearings, and they could mean something different. If anyone here knows with certainty that the above is right or wrong, let me know. But from what I can scrounge up it appears that they are completely interchangeable.
Philosophy of Religion Monday, December 04, 2006 FINAL EXAM PHL 320: FINAL PAPERS Please select one of these questions and write a 5-6 page paper in response. This is not a research paper, but make sure you properly cite your sources. Here’s a website that offers some helpful direction in how to write a philosophy paper: http://www.uwm.edu/~cbagnoli/paperguidelines.html If all else fails, imagine you are talking to me, and that you are able to answer my question using your texts and notes, open-book style. 1. How can a person believe in the traditional theistic God when there is so much suffering and evil in the world? Discuss various responses that have been given in detail. How do you personally answer this question? 2. Explain and evaluate the various positions people take on the relationship of faith and reason. Which one do you find most convincing? Why? How would it pertain to the relationship of science and religion? 3. I Peter 3:15 says: “Always be prepared to give an answer to everyone who asks you to give the reason for the hope that you have.” How have Christians “answered” unbelievers? Be sure to explain and evaluate, in detail, at least three arguments for the existence of God. Do you find any of them convincing? If so, which one, and why? If not, why not? 4. Is it possible to talk meaningfully about God? Discuss some of the ways people have understood religious language. How would you respond? 5. W. Clifford said, “It is wrong, always, everywhere and for anyone, to believe anything upon insufficient evidence.” Explain what he meant, and its implications for religious belief. Do you agree with him? Why or why not? 6. You may have a topic that you would rather write upon than the ones above. If so, you must clear it with me by e-mail before Thursday at noon. Please place your completed papers in my mailbox by 1:00 pm on Thursday, December 14 in the Faculty Building. Steve will be picking them up for me that afternoon.
Thanks so much for your participation in this class. It will be remembered that the world view that formed the backdrop to the Deist controversy was a model of the universe as a Newtonian world-machine that bound even the hands of God. So ironclad a view of natural law is, however, untenable. Natural law is today understood essentially as description, not prescription. This does not mean that it cannot serve as a basis for prediction, for it does; but our formulation of a natural law is never so certain as to be beyond reformulation under the force of observed facts. Thus an event cannot be ruled out simply because it does not accord with the regular pattern of events. The advance of modern physics over the Newtonian world-machine is not that natural law does not exist, but that our formulation of it is not absolutely final. After all, even quantum physics does not mean to assert that matter and energy do not possess certain properties, such that anything and everything can happen; even indeterminacy occurs within statistical limits and concerns only the microscopic level. On the macroscopic level, firm natural laws do obtain.{62} But the knowledge of these properties and laws is derived from and based on experience. The laws of nature are thus not 'laws' in the rigid, prescriptive sense, but inductive generalizations. This would appear to bring some comfort to the modern believer in miracles, for now he may argue that one cannot rule out a priori the fact that a certain event has occurred which does not conform to known natural law, since our formulation of natural law is never final and so must take account of the fact in question. It seems to me, however, that while this more descriptive understanding of natural law re-opens the door of possibility to certain anomalous events in the world, it does not help much in settling the question of miracles. 
The advantage gained is that one cannot rule out the occurrence of a certain event a priori, but the evidence for it must be weighed. The defender of miracles has thus at least gained a hearing. But one is still operating under the assumption, it would appear, that if the event really did run contrary to natural law, then it would be impossible for it to have occurred. The defender of miracles appeals to the fact that our natural laws are only inductive generalizations and so never certain, in order to gain admittance for his anomalous event; but presumably if an omniscient mind knew with certainty the precise formulations of the natural laws describing our universe then he would know a priori whether the event was or was not actually possible, since a true law of nature could not be violated. As Bilynskyj argues, whether one adopts a regularity theory of natural law (according to which laws are simply descriptive of events and have no special modal quality) or a necessitarian theory (according to which natural laws are not merely descriptive of events but possess a special sort of modality determining nomic necessity/possibility), still so long as natural laws are conceived of as universal inductive generalizations the notion of a 'violation of a law of nature' is incoherent.{63} For on the regularity theory, since a law is a generalized description of whatever occurs, it follows that an event which occurs cannot violate a law. And on the necessitarian theory, since laws are universal generalizations which state what is physically necessary, a violation of a law cannot occur if the generalization is to remain truly universal. So long as laws are conceived of as universal generalizations, it is logically impossible to have a violation of a true law of nature. 
Suppose that one attempts to rescue the notion of a 'violation' by introducing into the law certain ceteris paribus conditions, for example, that the law holds only if either (1) there are no other causally relevant natural forces interfering, or (2) there are no other causally relevant natural or supernatural forces interfering. Now clearly, (1) will not do the trick, for even if there were no natural forces interfering, the events predicted by the law might not occur because God would interfere. Hence, the alleged law, as a purportedly universal generalization, would not be true, and so a law of nature would not be violated should God interfere. But if, as (2) suggests, we include supernatural forces among the ceteris paribus conditions, it is equally impossible to violate the law. For now the statement of the law itself includes the condition that what the law predicts will occur only if God does not intervene, so that if he does the law is not violated. Hence, so long as natural laws are construed as universal generalizations about events, it is incoherent to speak of miracles as 'violations' of such laws. The upshot of Bilynskyj's discussion is that either natural laws ought not to be construed as universal generalizations about events or that miracles should not be characterized as violations of nature's laws. He opts for the first alternative, arguing that laws of nature are really about the dispositional properties of things based on the kinds of things they are.{64} He observes that most laws today, when taken as universal generalizations, are literally not true. They must include certain ceteris paribus clauses about conditions which seldom or perhaps never obtain, so that laws become subjunctive conditionals concerning what would occur under certain idealized conditions. But that means that laws are true counterfactuals with no application to the real world. 
Moreover, if laws are merely descriptive generalizations, then they do not really explain anything; rather than telling why some event occurs, they only serve to tell us how things are. Bilynskyj therefore proposes that natural laws ought to be formulated as singular statements about certain kinds of things and their dispositional properties: things of kind A have a disposition to manifest quality F in conditions C, in virtue of being of nature N.{65} Laws can be stated, however, as universal dispositions, for example, 'All potassium has a disposition to ignite when exposed to oxygen.' On this understanding, to assert that an event is physically impossible is not to say that it is a violation of a law of nature, since dispositional laws are not violated when the predisposed behavior does not occur; rather an event F is not produced at a time t by the powers (dispositions) of the natural agents which are causally relevant to F at t.{66} Accordingly, a miracle is an act of God which is physically impossible and religiously significant.{67} On Bilynskyj's version of the proper form of natural laws, then, miracles turn out to be physically impossible, but still not violations of those laws. I have a great deal of sympathy for Bilynskyj's understanding of natural law and physical impossibility. So as not to create unnecessary stumbling blocks, however, the defender of miracles might ask whether one might not be able to retain the standard necessitarian theory of natural laws as universal generalizations, while jettisoning the old characterization of miracles as 'violations of the laws of nature' in favor of 'events which lie outside the productive capacity of nature.' 
That is to say, why may we not take a necessitarian theory of natural law according to which laws contain ceteris paribus conditions precluding the interference of both natural and supernatural forces and hold that a miracle is not, therefore, a violation of a law of nature, but an event which cannot be accounted for wholly by reference to relevant natural forces? Natural laws are not violated by such events because they state what will occur only if God does not intervene; nevertheless, the events are still naturally impossible because the relevant natural causal forces do not suffice to bring about the event. Bilynskyj's objections to this view do not seem insuperable.{68} He thinks that on such a view it becomes difficult to distinguish between miracles and God's general providence, since according to the latter doctrine every event has in a sense a supernatural cause. This misgiving does not seem insurmountable, however, for we might construe God's providence as Bilynskyj himself does, as God's conservation of (and, we might add, concurrence with) all secondary causes and effects in being, while reserving only his immediate and extra-concurrent causal activity in the world for inclusion in a law's ceteris paribus conditions. Bilynskyj also objects that the physical impossibility of a miracle is the reason we attribute it to supernatural causation, not vice versa. To define physical impossibility in terms of supernatural causation thwarts the motivation for having the concept of physical impossibility in the first place. But my suggestion is not to define physical impossibility in terms of supernatural causation, but, as Bilynskyj himself does, in terms of what cannot be brought about wholly by natural causes. One may argue that some event E is not a violation of a natural law, but that E is naturally impossible. Therefore, it requires a supernatural cause. 
It seems to me, therefore, that even on the necessitarian theory of natural law, we may rid ourselves of the incoherent notion of 'violation of the laws of nature' and retain the concept of the naturally impossible as the proper characterization of miracle. So although an initial advantage has been won by the construal of natural laws as descriptive, not prescriptive, this advantage evaporates unless one abandons the incoherent characterization of a miracle as a 'violation of a law of nature' and adopts instead the notion of an event which is naturally impossible. Now the question which must be asked is how an event could occur which lies outside the productive capacity of natural causes. It would seem to be of no avail to answer with Clarke that matter has no properties and that the pattern of events is simply God's acting consistently, for, contrary to his assertion, physics does hold that matter possesses certain properties and that certain forces such as gravitation and electromagnetism are real operating forces in the world. Bilynskyj points out that Clarke's view entails a thorough-going occasionalism, according to which fire does not really burn nor water quench, which runs strongly counter to common sense.{69} Nor will it seem to help to answer with Sherlock and Houtteville that nature may contain within itself the power to produce events contrary to its normal operation, for this would not seem to be the case when the properties of matter and energy are sufficiently well-known so as to preclude to a reasonably high degree of certainty the occurrence of the event in question. Moreover, though this might secure the possibility of the event, so as to permit a historical investigation, it at the same time reduces the event to a freak of nature, the result of pure chance, not an act of God. 
It seems most reasonable to agree with modern science that events like the feeding of the 5000, the cleansing of the leper, and Jesus' resurrection really do lie outside the capability of natural causes. But that being admitted, what has actually been proved? All that the scientist conceivably has the right to say is that such an event is naturally impossible. But with that conclusion the defender of miracles may readily agree. We must not confuse the realms of logical and natural possibility. Is the occurrence of a miracle logically impossible? No, for such an event involves no logical contradiction. Is the occurrence naturally impossible? Yes, for it cannot be produced by natural causes; indeed, this is a tautology, since to lie outside the productive capacity of natural causes is to be naturally impossible. The question is: what could conceivably make miracles not just logically possible, but really, historically possible? Clearly the answer is the personal God of theism. For if a personal God exists, then he serves as the transcendent cause to produce events in the universe which are incapable of being produced by causes within the universe (that is to say, events which are naturally impossible). But it is to such a personal, transcendent God that the orthodox defenders of miracles appealed. Given a God who conserves the world in being moment by moment (Vernet, Houtteville), who is omnipotent (Clarke), and free to act as He wills (Vernet, Less), the orthodox thinkers seem to be entirely justified in asserting that miracles are really possible. The question is whether, given such a God, miracles are possible, and the answer seems obviously to be yes. It must be remembered that even their Deist opponents did not dispute God's existence, and Clarke and Paley offered elaborate defenses for their theism. But more than that: if the existence of such a God is even possible, then one must be open to the historical possibility of miracles. 
Only an atheist can deny the historical possibility of miracles, for even an agnostic must grant that if it is possible that a transcendent, personal God exists, then it is equally possible that He has acted in the universe. Hence, it seems that the orthodox protagonists in the classical debate argued in the main correctly against their Newtonian opponents and that their response has been only strengthened by the contemporary understanding of natural law. Wednesday, November 01, 2006 Religious language: Noncognitivist, but meaningful In this article, Borg approves of Strauss' move to reject both the rationalist and supernaturalist reading of scripture, in favor of a non-cognitivist, subjective reading. It would seem that his position is very much like Hare's: religious language is non-cognitive (not a matter of statements and truth or falsity) but it is meaningful (true for me). Note that this "metaphorical" theory of religious language differs from Thomas Aquinas's analogy theory of religious language. Strauss and Borg are non-cognitivists; Thomas is a cognitivist. Unfortunately, due to copyright restrictions, I cannot reproduce the article here, but it is an extremely well-written, clear exposition of a non-cognitivist view. Tuesday, October 24, 2006 Modern Biblical Scholarship, Philosophy of Religion and Traditional Christianity by Professor Eleonore Stump Stump is currently the Robert J. Henle, S.J. Professor of Philosophy at Saint Louis University. She obtained her bachelor's degree from Grinnell College in 1969, her master's degrees from Harvard University in 1971 and Cornell University in 1973, and her Ph.D. from Cornell University in 1975. She is the 2005–06 president and 2003–04 vice president of the Central Division of the American Philosophical Association, and past president of the Society of Christian Philosophers and the American Catholic Philosophical Association. 
Stump's research interests include the philosophy of religion, metaphysics and medieval philosophy. She was also a recent Gifford Lecturer. ---------------------------------------------------In recent decades biblical scholarship as practiced in secular universities has been dominated by a certain historical approach to biblical studies. I have in mind the sort of biblical studies represented by the work of F. M. Cross, O. Cullmann, E. Haenchen, E. Kasemann, and G. E. Wright, for example. Operating in conjunction with the related disciplines of archaeology, classical languages, and near-Eastern studies, this approach has made significant contributions to our understanding of the historical context in which the biblical texts were composed. But to many outsiders what has been at least equally noteworthy about this approach is the havoc it has wreaked on traditional Christian and Jewish beliefs. In their effort to discover and present what is historically authentic in the Bible, the practitioners of this approach have in effect rewritten the Bible. They have cut the Old and New Testaments into a variety of snippets; some they have discarded entirely as not historically authentic, and others they have reassembled in new ways to form what these scholars consider the truly original historical documents or traditions. They have denied the traditional authorship of certain books of the Bible-for example, they tend to hold that the pastoral epistles (the one to Titus and the two to Timothy) were not really written by Paul-and they have claimed to find the sources for other biblical texts in such clearly human products as Hittite suzerainty treaties and Hellenistic philosophy. 
The general result of such scholarship is, for example, that a text which a church father such as Augustine may have used to support a particular theological doctrine on the grounds that the text was composed by a disciple of Jesus who was an eye-witness to the events recorded may now be classified as a much later document fabricated by certain anonymous Christians for theological motives and derived by them from identifiable pagan sources. But if the biblical passages on which traditional doctrines are based are truly of such a character, they provide no credible support for the doctrines. And so the general effect of this approach to biblical studies has been a powerful undermining of classical Christian doctrines and a powerful impetus to religious skepticism.... Saturday, September 23, 2006 An Interview With Al Plantinga Q. If we accept belief in God as rational on the grounds which you have presented, how do we also know that this belief is true? A. You have to think about that in the context of the same question with respect to perception or memory or other minds. Fundamentally, in these cases it is a matter of trusting one's cognitive faculties, I guess. It seems true. One's inclined to believe in other minds, one's inclined to believe in the past, one's inclined to believe in immaterial objects and many of us are also under certain circumstances inclined to believe in God. I don't know if there's any way of getting outside of our faculties in these cases and sort of checking the matter independently. I don't know how one would do this. Q. Some English philosophers-Farrer, Mascall, Trethowan, Owen, Lewis-see belief in God as being a fundamental insight triggered by certain experiences or under certain circumstances instead of seeing it as the end-product of an inferential process. A. I'd go along with that. But I would say that for many people it's not so much a matter of coming to an insight by virtue of long hard thinking. 
It's more a matter of just being inclined, as Calvin says, under various circumstances, to form this belief about God. Q. I would think these philosophers would hold that an unlettered farmer in touch with nature would be better endowed in coming to such an insight than a sophisticated city-dweller. A. I have no objection to that. In Reason and Belief in God I mention a bunch of circumstances that according to Calvin-and I think he's right-call forth belief in God: gratitude, a sense of contingency, just beholding the beauties of nature sometimes-mountains, flowers and the like of that-being in danger. All sorts of things. Q. These philosophers, as I understand them, would compare this fundamental insight to something like visual perception: its truth cannot be checked with reference to external criteria: you see it to be true: it is self-guaranteeing or self-authenticating. A. I don't think it's self-authenticating or self-guaranteeing in any of these cases. In the case of perception, for example, it could be we're all mistaken, it could be we're being deceived by a Cartesian demon or something like that. Q. But at least we can be certain that we are having an apparent perception. A. It's logically possible that these other things are so. I don't for a minute think that they are so, and I don't think that the fact that these are logically possible means that we don't know any of these. One question is whether you know these things. And another question is what's your evidence or how do you prove these things. I think you know something when that belief is true and when it's produced in you by your faculties working properly. God has created us with a lot of faculties and I know a perceptual belief is a true proposition when I believe it and it's true and it's produced in me by my faculties working the way they were designed to work. But that doesn't mean I can prove it to some sceptic. That's a whole different question. 
Knowledge is one thing, being able to prove it to a sceptic is a wholly different thing. Q. The fact you can't prove the truth of it does not mean you haven't had an insight or that the insight isn't valid. A. It doesn't mean you don't know it either. It's not as if in order to know it you have to be able to prove it to the sceptic. Q. Doesn't an inferential argument for God's existence already presuppose this fundamental insight? The conclusion is implicit in the premises. A. The argument might be a probabilistic one of some kind like Swinburne's. I guess it wouldn't have to be the case that the argument somehow presupposes the belief to start off with. But an argument always presupposes that you trust or that you're relying upon some other faculties or some other belief-producing processes or cognitive processes. You reason from them to the one in question. Q. How do you relate this belief to common sense or common experience; God's existence has seemed obvious to the vast majority of mankind. A. It has seemed true to the vast majority of mankind that some being worthy of worship, whom we all worship and who is responsible for our existence and the like, that some such being does exist. I think that has been obvious to the bulk of mankind. That's fundamentally what Calvin is saying when he speaks of the sensus divinitatis. Q. So would you say you're remaining faithful to common sense and the common experience of mankind? A. Yes. Right, I would. I don't have any objection to giving the arguments [for God's existence] and the arguments are no doubt useful in some contexts. All I say is they're not necessary either for rationality or for knowledge. Q. Arguments may be useful in triggering off the insight into God's existence. A. They may be useful for that. Well, they could be useful, I suppose, in a variety of ways. They reveal connections between other things one already believes like the ontological argument. 
And maybe they move certain people closer to belief in God. Q. Some critics of this approach claim that the examples of properly basic beliefs you cite (e.g. seeing a tree) are not sufficiently similar to belief in God for the latter to qualify as properly basic. For one thing, they say, belief in God is not universal like some of the other beliefs. A. People have said that, but I don't know why I should believe that. That is, I'm not arguing that belief in God is properly basic because it so greatly resembles such experience. I was just trying to point out various analogies, illustrating points about proper basicality of theistic belief by pointing to similar things in the case of sense-experience. It was an illustration or analogy rather than a matter of arguing from its similarity to a sense-experience to its being properly basic like sense-experience. Things that are properly basic come in a wide variety of forms. Memory, a priori reasoning, what you're taught or told by other people, and so on. Q. How do you think the "theism as a properly basic belief" approach should be presented in the philosophical community? A. It's not a presentation of theism as such. To present theism is to say what God is like-about His attributes, knowledge, foreknowledge, middle knowledge, power, whether He's simple. It's not that. It has to do, rather, with what might be called epistemology of religious belief. Q. Do you think this is the most fruitful approach? A. I think it is. I think it's true anyway and so, I guess, most fruitful.
The Arabidopsis floral homeotic gene PISTILLATA is regulated by discrete cis-elements responsive to induction and maintenance signals. PISTILLATA is a B-class floral organ identity gene required for the normal development of petals and stamens in Arabidopsis. PISTILLATA expression is induced in stage 3 flowers (early expression) and is maintained until anthesis (late expression). To explore in more detail the developmentally regulated gene expression of PISTILLATA, we have analyzed the PISTILLATA promoter using uidA (β-glucuronidase) gene fusion constructs (PI::GUS) in transgenic Arabidopsis. Promoter deletion analyses suggest that early PISTILLATA expression is mediated by the distal region and that late expression is mediated by the proximal region. Based on the PI::GUS expression patterns in the loss- and gain-of-function alleles of meristem or organ identity genes, we have shown that LEAFY and UNUSUAL FLORAL ORGANS induce PISTILLATA expression in a flower-independent manner via a distal promoter, and that PISTILLATA and APETALA3 maintain PISTILLATA expression (autoregulation) in the later stages of flower development via a proximal promoter. In addition, we have demonstrated that de novo protein synthesis is required for the PISTILLATA autoregulatory circuit.
// Composition function wrapping IntersectionObserver: reports how much of
// `target` is visible, and whether it is partially or fully in view.
import { onMounted, Ref, ref, onUnmounted } from '@vue/composition-api';

export function useIntersectionObserver(
  target: Ref<HTMLElement>,
  options: IntersectionObserverInit = { root: null, rootMargin: '0px' }
) {
  const intersectionRatio = ref(0);
  const isIntersecting = ref(false);
  const isFullyInView = ref(false);

  // Created in onMounted; observe() is only called after it is assigned.
  let observer: IntersectionObserver;

  function observe() {
    if (target.value) {
      observer.observe(target.value);
    }
  }

  onMounted(() => {
    observer = new IntersectionObserver(([entry]) => {
      intersectionRatio.value = entry.intersectionRatio;
      if (entry.intersectionRatio > 0) {
        isIntersecting.value = true;
        isFullyInView.value = entry.intersectionRatio >= 1;
        return;
      }
      isIntersecting.value = false;
    }, options);
    observe();
  });

  function unobserve() {
    if (!observer) return;
    if (target.value) {
      observer.unobserve(target.value);
    }
  }

  onUnmounted(unobserve);

  return { intersectionRatio, isIntersecting, isFullyInView, observe, unobserve };
}
Friends

Chief Inquisitor Mayaku: "In a sea of fragments, if you ever disappoint me again, I will find the most miserable fragment and seal you there! You were miserable in the first place as a human, so it'll fit to send you to a really miserable world!"
Fenyuan

Fenyuan Township () is a rural township in Changhua County, Taiwan.

Geography

Fenyuan encompasses and a population of 23,843, including 12,533 males and 11,310 females as of January 2017.

Administrative divisions

The township comprises 15 villages: Dapu, Dazhu, Fengkeng, Fenyuan, Jiapei, Jiaxing, Jinfen, Jiushe, Shekou, Tongan, Xianzhuang, Xitou, Zhonglun, Zhulin and Zunqi.

Tourist attractions

Alice's Garden
Baozang Temple

Notable natives

Lin Shu-fen, member of the 7th, 8th and 9th Legislative Yuan

External links

Fenyuan Government website
Authored by James Howard Kunstler via Kunstler.com,

The Monster Mash

The sad reality is that last week's Pittsburgh synagogue massacre is only the latest float in the long-running parade of ghastly homicidal spectacles rolling across this land and will be just as forgotten in one week as was last year's Las Vegas Mandalay Bay slaughter of 58 concert-goers plus over 800 wounded and injured, a US record for non-military acts of violence. The Pittsburgh shootings elbowed the mass pipe bomber, Cesar Sayoc, out of the news cycle - but then Sayoc didn't manage to actually hurt any of the high-profile figures he targeted with his mailings. What I wonder - and what the news media has so far failed to report - is just how incompetent a bomb-maker Sayoc was. Fake news meets fake bombs. One of the strange side effects of an epic American political hysteria is this strange ADD-like inability of the public to focus on anything for more than a few moments, even the most arresting atrocities. The hysteria itself is too compelling, like the actions of a human limbic system driving the collective public psyche from fight to flight on the wild horses of pure emotion. Reason has been discarded by the wayside just as a super-drunk person will shed his clothing even on a freezing night. Total culture war now beats a path toward all-out civil war, with the looming mid-term election as a fulcrum of history. The country is not "divided," it's sliced-and-diced like a victim in one of the Halloween bloodbath movies now so beloved by movie audiences that they must be regularly updated. It's hardly a stretch to say that the US public sees its collective self as a throng of zombies lurching across the ruined landscape in search of a dwindling supply of brains, and they even seem to take a certain comfort in that endeavor, as though the zombies were performing a meritorious public service ridding the nation of as many brains as possible. 
The Democratic Party could not be more in tune with this monster mash of collapse politics. The party has been living in a haunted house of its own construction for much of this century, and methodically adding to its roster of resident blood beasts month by month in an orgy of monster creation. They remind me of the chanting and stomping “natives” in any of the long line of King Kong movies, summoning the giant ape to the gate of their Great Wall so as to scare off the party of feckless white adventurers from faraway Hollywood. Only in this edition of the story, King Kong is the Golden Golem of Greatness at 1600 Pennsylvania Avenue, and it annoys him greatly to be summoned by these tiny savages beating their drums. Of course, America-the Horror-Movie doesn’t add up as a coherent narrative. And so the nation sinks into bloody incoherence. The Democratic Party war on white people and their dastardly privilege has been the theme all year long, with its flanking movement against white men especially and super-especially the hetero-normative white male villains who rape and oppress everybody else. Anyway, that’s the strategy du jour. I’m not persuaded that it’s going to work so well in the coming election. The party could not have issued a clearer message than “white men not welcome here.” Very well, then, they’ll vote somewhere else for somebody else. And if it happens that the Dems don’t prevail, and don’t manage to get their hands on the machinery of congress — then what? For one thing, a lot of people get indicted, especially former top officers from various glades of the Intel swamp. It shouldn’t be a surprise, given the numbers of them already called before grand juries and fingered by inspectors general. But it may be shocking how high up the indictments go, and how serious the charges may be: sedition… treason…? 
These midterm elections may bring the moment when the Democratic Party finally blows up, at least enough to sweep away the current coterie of desperate idiots running it. It's time to shove the crybabies offstage and allow a few clear-eyed adults to take the room, including men, yes even white men. And let all the shrieking, clamoring, marginal freaks return to the margins, where they belong.
Comments on Come Away Home...: "~Happy Halloween~"

Georgia and Julie: "Hope you had a great Halloween too. Ours was kinda quiet, but then we live a ways off 'main street' ;) Purrs Georgia and Julie, Treasure and JJ"

Gargi Joshi: "very nice"
The present invention relates to a video signal processing apparatus which converts an analog video signal to a digital signal. Recently, liquid crystal display devices have been developed mainly as video apparatuses intended to replace cathode ray tube (CRT) displays. Video signals received from a personal computer by a display device such as a liquid crystal display (LCD) device are analog signals, and their signal level changes in units of the dot period. Therefore, a sampling clock signal matched to the dot period is needed for signal processing, for example when the signal is written to a memory or displayed on a matrix display device. However, most personal computers do not have an output terminal for such a sampling clock signal. Therefore, it is necessary to reproduce the sampling clock signal based on a horizontal synchronization signal or the like received from the computer. Further, the analog video signal cannot be obtained correctly unless it is sampled at a timing in the dot period when a stable signal is output, so the sampling timing has to be appropriate. Conventionally, an appropriate timing of the sampling clock signal is set manually. In a video apparatus, the sampling clock signal can be reproduced with a phase-locked loop (PLL) circuit by multiplying the input horizontal synchronization signal and making both the frequency and the phase match those of the input signal. However, the output signal of the PLL circuit has a phase delay because the timing signal required for display control is generated in a logic circuit at a later stage. Because this phase delay depends on the frequency of the input signal, it cannot be determined uniquely in a video apparatus which can receive various input signals. Therefore, scattering of the timing due to this phase delay is a problem, especially for sampling. 
In order to optimize the sampling point, a video information apparatus disclosed in Japanese Patent Laid-Open Publication 9-149291 (1997) uses the auto-correlation of video signals between frames. That is, the delay time of the sampling clock signal is changed successively, and the auto-correlation between frames of the video signal after analog-to-digital conversion is determined for each delay time. A point having low correlation is adopted as the point at which the signal changes, and by adjusting the sampling clock delay, an optimum sampling point is determined at or about the midpoint between signal-changing points. However, this conventional optimizing circuit needs a frame memory in order to determine the correlation value, and therefore a complicated memory control circuit and a high-speed clock signal are needed. A method using multiple A/D converter circuits is known as a method that does not use a memory, but it has the problem that a plurality of delay circuits for the sampling clock are necessary. An object of the invention is to provide a video signal processing apparatus which optimizes the sampling point when an analog video signal is converted to a digital signal. 
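The prior-art frame-correlation search described above can be illustrated with a short simulation. This is only a hedged sketch under invented assumptions (a dot period modeled as 16 analog sub-steps, a random static test line, made-up jitter and noise amplitudes), not the circuit of the cited publication:

```python
import numpy as np

OVERSAMPLE = 16                     # invented: candidate delays per dot period
rng = np.random.default_rng(1)

def analog_line(pixels, edge=8):
    """Dense stand-in for the analog line: hold each pixel for OVERSAMPLE
    sub-steps, with an `edge`-sub-step ramp at pixel transitions."""
    sig = np.repeat(np.asarray(pixels, float), OVERSAMPLE)
    return np.convolve(sig, np.ones(edge) / edge, mode="same")

def acquire(sig, phase):
    """One frame of samples at the given delay, with per-frame clock jitter
    (a couple of sub-steps) and a little amplitude noise."""
    n = sig.size // OVERSAMPLE
    idx = np.arange(n) * OVERSAMPLE + phase + rng.integers(-2, 3, n)
    return sig[np.clip(idx, 0, sig.size - 1)] + rng.uniform(-0.02, 0.02, n)

pixels = rng.integers(0, 2, 200)    # a static random test line
sig = analog_line(pixels)

corr = {}
for ph in range(OVERSAMPLE):
    a, b = acquire(sig, ph), acquire(sig, ph)   # two successive frames
    corr[ph] = float(np.corrcoef(a, b)[0, 1])

# The least-correlated delay marks the signal-changing point; the optimum
# is taken about half a dot period away from it (the midpoint rule).
worst = min(corr, key=corr.get)
optimum = (worst + OVERSAMPLE // 2) % OVERSAMPLE
print("least-correlated delay:", worst, "-> chosen delay:", optimum)
```

A mid-pixel delay re-samples the same stable level in every frame, while a delay landing on a transition is dominated by clock jitter, so the correlation dips at the signal-changing point and the midpoint rule places the chosen delay in the stable part of the pixel.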
A first video signal processing apparatus according to the present invention comprises: a clock generator which generates a sampling clock signal for digitizing a video signal based on an input synchronization signal; a phase controller which controls the phase of the sampling clock signal at one of a plurality of phases in one period of the sampling clock signal; a first signal generator which generates a first signal when the input video signal is larger than a threshold level; a first counter which counts the first signal received from the first signal generator in a predetermined time; a second signal generator which generates a second signal when the input video signal is larger than another threshold level, at a timing according to the sampling clock signal controlled by the phase controller; a second counter which counts the second signal received from the second signal generator in the predetermined time; and a controller which makes the phase controller sequentially change the phase of the sampling clock signal in a period of the sampling clock signal, repeats the phase change over one or more periods of the sampling clock signal, and sets the phase of the optimum sampling clock signal based on the difference between the output signals of the first and second counters obtained for each of the changed phases. For example, the controller sets the optimum phase of the sampling clock signal according to a plurality of subtraction results obtained by a subtractor which performs subtraction between the output signals of the first and second counters. Thus, the phase of the sampling clock signal can be controlled with a simple structure in which the number of times the video signal exceeds the threshold levels is counted by the two counters. Further, timing control of the output signal of the binarizer circuit and that of an analog-to-digital converter is not needed. 
Further, a high-speed sampling clock signal is not needed to control the phase of the sampling clock signal, so power consumption can be decreased. Further, because the sampling clock signal is not needed after the outputs of the binarizer circuit and the analog-to-digital converter, the counters can process a high-speed signal. This decreases power consumption and is advantageous for fabricating a large scale integrated circuit of the apparatus. In the video signal processing circuit, the optimum sampling clock timing can be set in various ways. For example, the controller sets a phase of the sampling clock signal, at which the absolute value of the difference between the count values of the first and second signals is equal to or smaller than a predetermined value, to the phase of the optimum sampling clock signal. Alternatively, the controller sets a phase of the sampling clock signal, at which the absolute value of the difference between the count values of the first and second signals is equal to or smaller than a predetermined value and is smallest, to the phase of the optimum sampling clock signal. Alternatively, the controller makes the phase controller change the phase of the sampling clock signal sequentially in a period of the sampling clock signal, and when the controller continuously detects phases of the sampling clock signal at which the absolute value of the difference between the count values of the first and second signals is equal to or smaller than a predetermined value, the controller sets the center value of the continuously detected phases of the sampling clock signal to the phase of the optimum sampling clock signal. 
Alternatively, the controller makes the phase controller change the phase of the sampling clock signal sequentially in a period of the sampling clock signal, and when the controller detects two or more phases of the sampling clock signal at which the absolute value of the count values of the first and second signals becomes maximum, the controller sets the center value of the two or more phases of the sampling clock signal to the phase of the optimum sampling clock signal. Further, in the video signal processing circuit, the controller preferably stops controlling the phase controller when the output value of the first counter is equal to or smaller than a predetermined value. Thus, the phase control is stopped for a video signal which does not change much, so that malfunction is prevented when the optimum sampling point is detected. Further, the video signal processing circuit preferably further comprises a threshold level controller which controls the threshold level of the first signal generator, and a comparator which compares the output signal of the second signal generator with a different threshold level. The controller decides whether the output value of the first counter is equal to or smaller than the predetermined level, and decreases the threshold levels of the first signal generator and of the comparator when it is. The output of the first counter is equal to or smaller than the predetermined value when the video signal has a low level. In such a case, the level for signal detection is decreased, so that the optimum sampling point can be detected even when the video signal has a low level. 
In a first video signal processing method according to the invention, a sampling clock signal is generated for digitizing a video signal based on an input synchronization signal, and the phase of the sampling clock signal is changed sequentially among a plurality of phases in one period of the sampling clock signal. The phase setting is repeated over one or more periods of the sampling clock signal, and for each phase setting, a first signal is generated when the input video signal is larger than a threshold level and the first signal is counted in a predetermined time. Further, a second signal is generated when the input video signal is larger than another threshold level at a timing according to the sampling clock signal, and the second signal is counted in the predetermined time. Then, the phase of the optimum sampling clock signal is set based on the differences between the count values obtained by repeating the phase change. In the video signal processing method, preferably, the phase control is stopped when the count value of the first signal is decided to be equal to or smaller than a predetermined value. Also preferably, when the count value of the first signal is decided to be equal to or smaller than a predetermined value, the threshold levels for the first and second signals are decreased. 
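The first apparatus and method can be sketched in the same simulation style. Everything here is a hypothetical model (the oversampled waveform standing in for the analog line, an ideal comparator for the first signal generator, and arbitrary ramp and threshold values), but the control logic follows the text: sweep the phase, compare the two counters, and take the center of the continuous run of zero-difference phases:

```python
import numpy as np

OVERSAMPLE = 16      # invented: analog sub-steps per pixel (dot) period
THRESHOLD = 0.7      # invented comparator level, set with some margin

def analog_line(pixels, edge=8):
    """Dense stand-in for the analog video line: hold each pixel for
    OVERSAMPLE sub-steps, with an `edge`-sub-step ramp at transitions."""
    sig = np.repeat(np.asarray(pixels, float), OVERSAMPLE)
    return np.convolve(sig, np.ones(edge) / edge, mode="same")

def first_count(pixels):
    """First counter: an ideal comparator watching the incoming video."""
    return int(np.sum(np.asarray(pixels) > 0))

def second_count(sig, phase):
    """Second counter: the comparator evaluated only at sampling instants."""
    return int(np.sum(sig[phase::OVERSAMPLE] > THRESHOLD))

pixels = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1] * 8     # test line with 40 bright dots
sig = analog_line(pixels)
ref = first_count(pixels)

# Sweep every phase of one sampling-clock period; record the |difference|.
diffs = {ph: abs(second_count(sig, ph) - ref) for ph in range(OVERSAMPLE)}

# Center of the continuous run of zero-difference phases, per the text.
good = [ph for ph in range(OVERSAMPLE) if diffs[ph] == 0]
optimum = good[len(good) // 2] if good else min(diffs, key=diffs.get)
print("zero-difference phases:", good, "-> optimum phase:", optimum)
```

Phases that sample on the ramps misclassify some bright dots, so their counts disagree with the reference counter; the run of agreeing phases straddles the middle of the pixel.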
A second video signal processing apparatus according to the invention comprises: a signal generator which binarizes an input video signal; a clock generator which generates a sampling clock signal based on an input synchronization signal; a phase controller which controls the phase of the sampling clock signal at one of a plurality of phases in one period of the sampling clock signal; a delay circuit which delays an output signal of the signal generator by one sampling period; a maximum detector which receives the output signal of the signal generator and that of the delay circuit and performs subtraction of the two output signals to provide the maximum of the absolute value of the subtraction; and a controller which makes the phase controller sequentially change the phase of the sampling clock signal in a period of the sampling clock signal, repeats the phase setting over one or more periods of the sampling clock signal to decide the largest value in the distribution of maximum values detected by the maximum detector, and sets the phase of the largest value to the optimum sampling point. According to this invention, the sampling timing can be controlled with a simple structure in which subtraction results are obtained on the video signal around one sampling point and the distribution of the maximum absolute values is detected. Further, by detecting the distribution of the maximum absolute values, a change in signal level can be detected and the correct sampling phase can be set. In a second video signal processing method according to the invention, a sampling clock signal for digitizing a video signal is generated based on an input synchronization signal, and the phase of the sampling clock signal is changed sequentially among a plurality of phases in one period of the sampling clock signal. For each phase change, the input video signal is binarized and the binarized signal is delayed by one sampling period. 
The delayed signal are received in a predetermined time and subtraction of the two output signals is performed to detect a maximum value of the absolute value of subtraction. Then, the largest value is decided in distribution of the detected maximum values, and the phase of the largest value is set to an optimum sampling point. A third video signal processing apparatus according to the invention comprises: a clock generator which generates a sampling clock signal based on an input synchronization signal; a phase controller which controls phase of the sampling clock signal generated by the clock generator; a signal generator which receives a video signal which changes alternately at a frequency of the sampling clock signal and binarizes the video signal at a timing of the sampling clock signal; a two-phase processor which subjects an output signal of the signal generator to two-phase processing; a plurality of level change detectors which detect the existence of level change for a plurality of output signals of the two-phase processor; and a controller which makes the phase controller change phase of the sampling clock sequentially and sets a phase, at which any of the level change detectors does not detect level change, to an optimum sampling point. Therefore, the sampling clock can be optimized at low speed processing. In a third video signal processing method according to the invention, a sampling clock signal for digitizing a video signal is generated based on an input synchronization signal, and phase of the sampling clock signal is changed sequentially in a period of the sampling clock signal. 
The phase change is repeated over one or more periods of the sampling clock signal, wherein for each phase change, a video signal which changes alternately at a frequency of the sampling clock signal is received, the video signal is binarized at a sampling timing of the sampling clock signal, the binarized signal is subjected to two-phase processing, and level change is detected for a plurality of the output signals obtained in the two-phase processing. Then, a phase, at which no level change is detected for any of the output signals, is set as an optimum sampling point. This summary of the invention does not necessarily describe all necessary features, so that the invention may also be a sub-combination of the described features.
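As a rough illustration of the phase-search idea common to these methods, the sketch below is a hypothetical toy model, not taken from the specification: all names and the test waveform are assumptions, and it subtracts raw sampled values rather than binarized ones for clarity. It steps through each candidate sampling phase, forms the one-sample-delayed difference, and keeps the phase whose maximum absolute difference is largest.

```python
import numpy as np

# Toy sketch of the optimum-sampling-phase search: for each candidate
# phase, sample the oversampled "video" waveform, subtract a one-sample
# delayed copy, and record the maximum absolute difference; the phase
# with the largest maximum is taken as the optimum sampling point.

def best_sampling_phase(oversampled, phases_per_period):
    maxima = []
    for phase in range(phases_per_period):
        s = oversampled[phase::phases_per_period]  # samples at this phase
        diff = np.abs(s[1:] - s[:-1])              # minus one-sample delay
        maxima.append(diff.max())
    return int(np.argmax(maxima))                  # phase of largest maximum

P = 8                                              # candidate phases per period
n = np.arange(64 * P)
video = np.sin(np.pi * n / P)                      # alternating +1/-1 "pixels"

# Sampling mid-pixel (phase P/2) captures the full swing between pixels.
print(best_sampling_phase(video, P))               # -> 4
```

With an alternating pixel pattern, the phase that maximizes the sample-to-sample difference lands at the pixel centers, which is exactly where the eye opening is widest.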
Q: How to explain Real Big Numbers?

Mathematicians, and especially number theorists, are used to working with big numbers. I have noted on several occasions that lots of people don't have a clear understanding of big numbers as far as the real world is concerned. I recall a request for a list of all primes of less than 500 digits. Another example is homeopathic dilutions. I understand they use dilutions like 200C, which is 1 in $10^{400}$. An absurd number in view of the fact that the total number of particles in the universe is estimated (safe margin) to be less than a googol.

How would you give people insight into big numbers? I'm not talking about Skewes' Number or Graham's Number; for most practical purposes $10^{20}$ is equal to infinity.

Edit: To whoever voted me down: if you vote this down, please also tell me why. Thanks.

A: Very few (if any) mathematicians have significant insight regarding huge natural numbers (cf. various ultrafinitism arguments). Perhaps the only exceptions are logicians who work with esoteric ordinal notations. This is one of the few ways one can gain any insight into arbitrarily large numbers: using various complicated inductions to show that some property holds for all naturals, thus lifting our intuition up from small naturals to arbitrarily large naturals. For example, see the Goodstein sequence (or, more graphically, the Hercules vs. Hydra game), which encodes the ordinals below $\epsilon_0 = \omega^{\omega^{\omega^{\cdot^{\cdot^\cdot}}}} \;$ into huge natural numbers.

A: Though I don't quite know what you actually want to hear (what kind of numbers do you want to give people insight into, whom, and why?), I'll give a few thoughts.

I) Real cases

That's just understanding of natural sciences: numbers of particles in the universe, number of cells in a body, and so on. Try to first of all break down the number by using smaller parts of the example, e.g. count bacteria in a drop of water and not in a whole lake.
II) Thought experiments (explaining probabilities, complexity etc.)

Extremely big numbers arise when you try to visualize probabilities or complexities, especially when exponential growth is involved. What about getting the jackpot ten times successively, or trying to solve a TSP for 100 cities? When you know people aren't comfortable with such big numbers, decide:

Is it really important to know the number? Maybe extremely long or extremely improbable is just the important fact.
Can you find an easier-to-grasp example (special units)? "Longer than the universe is old" is better than insert giant amount of milliseconds.
Can you describe the growth differently? If your problem with 999 cities can be solved in a certain amount of time and you take one additional city, you'll need 1000 times longer.

III) Data

Especially in the context of CS / cryptography, numbers can often most accurately be explained as some data you can calculate with. E.g. RSA (as in your link) is of course a mathematical, number-based algorithm, but in fact you're encrypting data, so why not say "a 500-char key" instead of explaining the giant number involved there.
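As a concrete aid for the homeopathy example above, the arithmetic can be checked directly with Python's arbitrary-precision integers. The particle count used here is a rough order-of-magnitude assumption, not an exact physical constant.

```python
# Arbitrary-precision integers make these scale comparisons exact.
# The 10^80 particle count is a rough order-of-magnitude assumption.

GOOGOL = 10**100
PARTICLES_IN_UNIVERSE = 10**80     # rough estimate, well under a googol
DILUTION_200C = 10**400            # a 200C dilution: 1 part in 10^400

# Even if every particle in the universe were part of the sample, you
# would need ~10^320 universes to expect one molecule of the original.
universes_needed = DILUTION_200C // PARTICLES_IN_UNIVERSE

print(universes_needed == 10**320)          # -> True
print(len(str(DILUTION_200C)) - 1)          # -> 400 (decimal digits)
```

Writing the number out (400 digits) and then relating it to a physical count is often more persuasive than quoting the exponent alone.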
Every day until Opening Day, Baseball Prospectus authors will preview two teams—one from the AL, one from the NL—identifying strategies those teams employ to gain an advantage. Today: how production comes from unexpected places for the Orioles and Marlins.

Week 1 previews: Giants | Royals | Dodgers | Rays | Padres | Astros | Rockies | Athletics | Yankees | Mets
Week 2 previews: Nationals | Tigers | Pirates | Mariners | Indians | Brewers

PECOTA Team Projections
Record: 79-83
Runs Scored: 675
Runs Allowed: 696
AVG/OBP/SLG (TAv): .253/.306/.406 (.266)
Total WARP: 25.9 (7.1 pitching, 18.8 non-pitching, including 0.0 from pitchers)

By now, the bit players in Dan Duquette’s great masterpiece are well known. By now, their value is being debated and critiqued as part of the franchise’s master plan for the future. By now, men that few had heard of and even fewer cared about are key pieces on the organization’s chess board. Names like Caleb Joseph and Steve Pearce never inspired fear in opposing managers, at least not until last season. Guys like Alejandro De Aza were cast off by other organizations, only to find a home on a playoff-bound club. Jimmy Paredes was cast off by one of the worst teams in baseball to spend time in the minors with both 2014 ALCS teams, even producing nearly half a win over 18 games in Baltimore. The list goes on.

Here is a table that I included in the Orioles’ essay in Baseball Prospectus 2015 that shows the top five teams in terms of bench production last season:

Team       Bench WARP
Dodgers    7.5
Mets       5.3
Padres     4.6
Orioles    3.5
Pirates    3.1

Not included in that list are two of the biggest contributors to the O’s success last season: Steve Pearce and Caleb Joseph. Both were omitted because they received too much playing time to meet the bench-player requirements used to create the table.
Neither was expected to be a key contributor (or ever, really), but the two combined to produce an additional 6.9 WARP (this includes 1.3 WARP from BP’s new CSAA numbers for Joseph). Below is an exhaustive list of every bench and role player that donned an O’s jersey last season.

None of the above players were expected to be big contributors for the Orioles in 2014. Some of the guys were acquired via trade when their teams soured on them or couldn’t find a place on their rosters for them. Pearce and Joseph are the highlights on the list, but players like David Lough, Delmon Young, and Ryan Flaherty all played big roles in a successful season. The remarkable thing about that list is that only three of the fifteen players on the list produced negative WARP last year despite the collective reputations of the group.

Dan Duquette has a long history of being able to find contributors on the waiver wire; it’s one of the things that helped him find success in Montreal and Boston. In Baltimore, Duquette has been fortunate to couple this knack for finding unwanted contributors with a manager who has shown a deft hand in deploying those assets. It’s important not to ignore Buck Showalter’s role in the team’s success with these unwanted players. Showalter has embraced Duquette’s emphasis on depth and utilized these players in ways that extract positive value without betting the house on every player being able to sustain their small-sample-size successes. Delmon Young’s signing is a perfect example. The Orioles were chastised for signing the flawed slugger, but his 2014 season can’t be categorized as anything but a success. Young has two primary problems: he can’t hit righties and he’s a butcher in the field. Well, Showalter avoided those issues with aplomb. 62 percent of Young’s plate appearances came against lefties, and he saw fewer than 160 innings in the field. The result was the second-best offensive and defensive seasons of Young’s career.
Similar stories could be told about Paredes, De Aza, or Lough, all of whom saw their playing time correlate closely with their strengths on the field. Duquette builds a 40-man roster full of cast-offs and flawed players and Buck Showalter deploys each in such a way that success is much more likely than it would be if such care wasn’t taken to maximize the players’ talents. 2015 is likely to be yet another season where Duquette and Showalter will be put to the test. Paredes is making the case to be included on the 25-man roster with a strong spring. Catching depth in the form of Steve Clevenger, Ryan Lavarnway, and J.P. Arencibia will be tested. Mid-season cameos from Julio Borbon and Rey Navarro could very well be in the cards. And there’s no doubt that this team will rely more heavily on some of those names from the table above, specifically Pearce, De Aza, Joseph, and Young. Duquette and Showalter will need to manage their talent wisely, and the Orioles’ playoff hopes will rely on how well they do that. If recent history holds any predictive value, the 2015 Orioles might be a lot better than many think they can be. Baltimore’s Moneyball isn’t about targeting on-base guys or spending on injured draft picks. No, Baltimore’s Moneyball is all about acquiring the players that nobody seems to want and finding ways to extract as much value out of the talents of these flawed players as possible. For the Orioles, it’s all about playing to the strengths of your front office and manager. And you know what? It has worked.
"Obviously four years is absolutely nothing for what he did and will never be enough for the amount of pain and upset he has caused for Ronan’s family and friends."

Enache, who has been behind bars for some time, could be free in October 2019 because of time served.

"The more I think about it, the fact he gets out in 2019 is absolutely pathetic," Ronan's friend added. "At school we were always taught forgiveness but there's only a certain amount you can offer.

"A life sentence would have been the best outcome. Obviously it wouldn't bring Ronan back, but for him to take Ronan's whole future and life away from him and get out of jail in two years is an absolute joke."

If you have been affected by any of the content in this story and need to speak to someone, you can contact Childline on 0800 1111 or talk to a counsellor online at the Childline website. Alternatively you can call Lifeline on 0808 808 8000 for their 24/7 phone crisis helpline and counselling service.
Will Rogers Stakes

The Will Rogers Stakes is an American Grade IIIT Thoroughbred horse race. Run annually in the latter part of May at Hollywood Park Racetrack in Inglewood, California, the race is open to three-year-old horses. It is run over a distance of one mile on turf and currently carries a purse of $100,000.

Run as a handicap prior to 2001.
Run at one mile since 1995.
Run exclusively on turf since 1969.
Run for 3-year-olds & up in 1938, 1944.

The race was named for legendary American humorist and horseman Will Rogers, who died in 1935. Among the notable winners of this race are two U.S. Racing Hall of Fame inductees. Swaps won the 1955 Kentucky Derby and, in his next outing, won the Will Rogers Stakes by twelve lengths. Round Table won the race in 1957 by three and a half lengths. In 2010, the Will Rogers was lengthened to 1 1/16 miles.

Winners of the Will Rogers Stakes since 2000

Earlier winners (partial list)

1999 - Eagleton
1998 - Magical
1997 - Brave Act
1996 - Let Bob Do It
1995 - Via Lombardia
1994 - Unfinished Symph
1993 - Future Storm
1992 - The Name's Jimmy
1991 - Compelling Sound
1990 - Itsallgreektome
1989 - Notorious Pleasure
1988 - Word Pirate
1987 - Something Lucky
1986 - Mazaad
1985 - Pine Belt
1984 - Tsunami Slew
1983 - Barberstown
1982 - Give Me Strength
1982 - Sword Blade
1981 - Splendid Spruce
1980 - Stiff Diamond
1979 - Ibacache
1978 - April Axe
1977 - Nordic Prince
1976 - Madera Sun
1975 - Uniformity
1974 - Stardust Mel
1973 - Groshawk
1972 - Quack
1971 - Dr. Knighton
1971 - Fast Fellow
1970 - Lime
1970 - Whittingham
1969 - Tell
1968 - Poleax
1967 - Jungle Road
1966 - Ri Tux
1966 - Aqua Vite
1965 - Terry's Secret
1964 - Count Charles
1963 - Viking Spirit
1963 - Bre'r Rabbit
1962 - Wallet Lifter
1962 - Prince Of Plenty
1961 - Four-and-Twenty
1960 - Flow Line
1959 - Ole Fols
1958 - Hillsdale
1957 - Round Table
1956 - Terrang
1955 - Swaps
1954 - Don McCoy
1953 - Imbros
1952 - Forelock
1951 - Gold Note
1950 - (Not run)
1949 - Blue Dart (Run at Santa Anita)
1948 - Speculation
1947 - On Trust
1946 - Burra Sahib
1945 - Quick Reward
1944 - Phar Rong
1943 - (Not run)
1942 - (Not run)
1941 - Battle Colors
1940 - Sweepida
1939 - Time Alone
1938 - Dogaway

References

The Will Rogers Stakes at the NTRA (retrieved November 8, 2007)
The Will Rogers Stakes at Pedigree Query

Category:Horse races in California
Category:Hollywood Park Racetrack
Category:Graded stakes races in the United States
Category:Flat horse races for three-year-olds
Category:Turf races in the United States
Q: Receiving error of "The type or namespace name 'LayoutsPageBase' could not be found"

To give you the entire perspective, I am trying to create a custom ribbon in SharePoint. For that I am following this tutorial. I created the required feature and was able to deploy and test it with a simple JavaScript alert. Now I am trying to call an ASPX page on click of the ribbon button. For that I created an Application Page in my project. But in the code-behind file of the ASP.NET page I get the following error:

The type or namespace name 'LayoutsPageBase' could not be found (are you missing a using directive or an assembly reference?) C:\Users\Administrator\Documents\Visual Studio 2012\Projects\CustomRibbonButton\CustomRibbonButton\Layouts\CustomRibbonButton\ApplicationPage1.aspx.cs

I have imported (I hope that's what you call it in C#) Microsoft.SharePoint.WebControls with the statement

using Microsoft.SharePoint.WebControls;

From this question on StackOverflow I was able to figure out that the LayoutsPageBase class is not available in sandboxed solutions (with path as \UserCode\assemblies). So in my project I went to References > Microsoft.SharePoint and right-clicked on it to view its Properties. Its Path in the Properties window is shown as C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.dll.

What can be the reason for this error and how can it be solved?

A: You can check whether or not a SharePoint project is sandboxed by right-clicking the project in Solution Explorer and viewing the properties. There is a true/false property called Sandboxed Solution.
There were 60,000 Jews living in Krakow before the war, a quarter of the population; today, there are about 200. The Jewish district now feels at once like a tribute to and a caricature of what it used to be: Outside the Old Synagogue (Poland’s oldest), bright green trolleys advertising tours of Schindler’s factory bounce along the cobblestone. Near Remuh Synagogue and the Old Cemetery are kitschy, Jewish-looking restaurants with Hebrew signs and waiters offering picture menus to passers-by. Whatever the sentiment, seeing “Heil Hitler” signs in the neighborhood was jarring. But the legality of it, I soon learned, is an ongoing debate. Article 256 of the Polish criminal code states: “Whoever publicly promotes a fascist or other totalitarian system of state or incites hatred based on national, ethnic, race or religious differences or for reason of lack of any religious denomination shall be subject to a fine, the penalty of restriction of liberty or the penalty of deprivation of liberty for up to two years.” It continues: “Whoever, in order to distribute, produces, records or brings, buys, stores, possesses, presents, transports or sends” the aforementioned items, or those with “fascist, communist or other totalitarian symbolism,” can be subject to that punishment. But there’s a caveat: Violators are exempt “if the act was committed as part of artistic, educational, collecting or scientific activity.” Sales of antiques or collectibles with some historic, academic or artistic value are permitted; sales of recent reproductions are not. It’s one of many ambiguities that makes the provision ineffective, according to Katarzyna du Vall, a lawyer in Krakow. “You don’t know who is an artist, educator, collector or researcher,” she said. Another is the vocabulary. “The biggest problem from a legal perspective is what, exactly, ‘totalitarian state’ means,” Ms. 
du Vall said, explaining how contemporary scholars — political scientists, sociologists, lawyers and other experts — cannot agree on one binding definition. “We talk about Article 256 a lot in Poland, and people don’t understand why those proceedings end up with nothing,” she said. “It’s really hard to enforce a law that is not clear for anybody — for judges, for those who punish, for those who commit those crimes.” Rober Opas, a deputy chief of police in Warsaw, acknowledged the law and said violators would be punished. But “from our point of view,” he said in an email, “the problem is sporadic, and we do not receive many reports of this kind.” He also noted that it is the Polish court that determines the punishment for each perpetrator.
229 Amakihi Way, Kaanapali, Hawaii – $1,690,000

229 Amakihi Way

Beautiful island home located in the prestigious Kaanapali Golf Estates, overlooking the world famous Kaanapali resort and beach. This elegant residence has a spacious living room, dining room, kitchen, lanai, master suite and office located at entry level. Two guest suites and a large family/media room are located on the lower level, opening to a large pool and deck. Cabinets by Siematic "American Series", Corian counter tops throughout kitchen and all baths. Fourth bedroom has been converted into an office.

Status Changed from Pending - Cont. To Show to Cntngnt Escrow Canceling – 02/21/2019
Status Changed from Active to Pending - Cont. To Show – Fee Simple

Listed by: Elite Pacific Properties, LLC. Listings provided courtesy of the REALTORS® Association of Maui. This information is believed to be accurate. It has been provided by sources other than the Realtors Assoc. of Maui and should not be relied upon without independent verification. You should conduct your own investigation and consult with appropriate professionals to determine the accuracy of the information provided and to answer any questions concerning the property and structures located thereon. Featured properties may or may not be listed by the office/agent presenting this brochure.
The Reagan Administration strongly promoted the concept of "Star Wars." Reagan was also famous for his constant references to the "alien invasion." They made a super combination. Canadian cartoonist Ben Wicks drew it from the alien perspective. Cartoonist Wicks looked at the alien search for Star Wars and the associated arms control talks. Stayskal at the Chicago Tribune also looked at the Star wars connection to the new Reagan Defense plan.
'I can shield myself from the ugly insults of the general public, but with you I imagined I was safe to let my guard down, to relax without being judged' When you placed your scrawny hand on my arm and expressed concern over my weight, it hit me harder than you will ever know. Where to begin? Perhaps the first point is the fact that I am well aware that I am fat. The painful rash due to my thighs rubbing together under my summer skirt drives that home perfectly well, thank you. So why do you feel it's OK to point it out to me? Our relationship isn't the place for a hideous American "intervention" – as if I am out of control and need reining in by someone else. I thought you were my friend. I can shield myself from the ugly insults of the general public, tell myself they don't matter and that fat is a feminist issue, but with you I imagined I was safe to let my guard down, to relax without being judged. I thought friends didn't criticise each other. It wasn't just touching a nerve, it was like 10,000 volts rattling my whole body. It has made me wonder whether my size embarrasses you? Is that really why you brought it up, under the guise of concern over my health? It's true I feel ashamed when we're out together and I catch sight of us in the mirror. "Willowy lovely plus hefty sidekick" is my imagined caption. But I had hoped that you weren't bothered, and liked me for me, rather than for my appearance. It has made me cringe for every cake I have baked for you, every meal we've shared. Were you totting up the calories with my every mouthful? I used to enjoy our conversations over lunch. I never realised there was an elephant in the room – and that it was me. • Tell us what you're really thinking at mind@theguardian.com
Development of an introductory course in child protection. The maltreatment of children is a significant public health and social problem. Healthcare professionals have a crucial role to play working with other agencies to protect children from abuse and neglect. The need for training, support and clinical supervision in this work has been identified. This article discusses the collaborative work that led to the establishment of an introductory course in child protection (English National Board 970) at one school of nursing and midwifery and outlines the benefits of undertaking such a course. The course has attracted participants from a range of healthcare settings and has proved to be well evaluated and oversubscribed. Practitioners have returned to their work setting with increased awareness of child maltreatment and an understanding of the need for a proactive approach to child protection.
Welcome

The European OpenSource & Free Software Law Event (EOLE) is an event that aims to promote the sharing and dissemination of legal knowledge on Free & Open Source Software (FOSS) licences, as well as the development and promotion of good practices in the field.

Mr. Orsi has practiced as an intellectual property attorney since 1991, working initially for IBM and later for Pirelli SpA; he then joined Hewlett-Packard in 1998. At present he leads HP's worldwide intellectual property transaction team for HP's imaging and printing business, providing legal advice on IP transaction matters such as IP licenses, including FOSS licenses, right-to-use opinions, mergers & acquisitions, strategic sourcing contracts and IP disputes.
Nabata Station is a train station in Ikoma, Nara Prefecture, Japan.

Lines
Kintetsu Ikoma Line

Surrounding Area
Higashi-Ikoma Station
Tezukayama University Higashiikoma Campus

Adjacent stations

Category:Railway stations opened in 1927
Category:Railway stations in Nara Prefecture
About Build It

Looking for ideas and practical advice to help you create a tailor-made home that suits your lifestyle perfectly? The July issue of Build It has you covered. Here's a sneak peek at what's inside:

- Self-build secrets: 10 things you need to know before you build
- How to get planning permission for your extension
- PVCu windows: are they the right choice for your project?
- The best ways to insulate foundations
- Going off-grid: options for heating & electrics
- Budgeting basics: how to get an accurate estimate for your build

...and more!
Atacama (disambiguation)

The Atacama Desert is the most arid desert in the world, which is located in Chile. Atacama may refer to:

People
Atacama people (Likan Antaí), indigenous people of Chile

Places
Atacama Region, first-order administrative division of Chile
Atacama Province, Bolivia, former province of Bolivia
Atacama Province, Chile, former province of Chile
Atacama Department, former department of Bolivia, now in Chile

Geological formations
Atacama Trench, oceanic trench running along the west coast of South America
Puna de Atacama, high plateau in the Andes
Atacama Fault

Other
Atacama border dispute, territorial dispute between Chile and Bolivia
Puna de Atacama dispute, territorial dispute between Chile and Argentina
18725 Atacama, a minor planet
polygon 1 1.396830E+01 4.108947E+01 1.396947E+01 4.109373E+01 1.396996E+01 4.109558E+01 1.397060E+01 4.109694E+01 1.397395E+01 4.110243E+01 1.398138E+01 4.110329E+01 1.398622E+01 4.110317E+01 1.398685E+01 4.110313E+01 1.398721E+01 4.110303E+01 1.398738E+01 4.110298E+01 1.399328E+01 4.110216E+01 1.400929E+01 4.110618E+01 1.402354E+01 4.110576E+01 1.402389E+01 4.110576E+01 1.402437E+01 4.110581E+01 1.402624E+01 4.110606E+01 1.402801E+01 4.110640E+01 1.402833E+01 4.110649E+01 1.402891E+01 4.110684E+01 1.402880E+01 4.110646E+01 1.402879E+01 4.110617E+01 1.402893E+01 4.110508E+01 1.402939E+01 4.110206E+01 1.403038E+01 4.110103E+01 1.403141E+01 4.110003E+01 1.403230E+01 4.109885E+01 1.403379E+01 4.109661E+01 1.403418E+01 4.109547E+01 1.403430E+01 4.109451E+01 1.403357E+01 4.109077E+01 1.403276E+01 4.108704E+01 1.403226E+01 4.108613E+01 1.403198E+01 4.108548E+01 1.403190E+01 4.108508E+01 1.403143E+01 4.108150E+01 1.403142E+01 4.108066E+01 1.403455E+01 4.107516E+01 1.403482E+01 4.107474E+01 1.403520E+01 4.107432E+01 1.403564E+01 4.107397E+01 1.403654E+01 4.107340E+01 1.403730E+01 4.107319E+01 1.403846E+01 4.107302E+01 1.403954E+01 4.107300E+01 1.403966E+01 4.107300E+01 1.404049E+01 4.107358E+01 1.404100E+01 4.107456E+01 1.404046E+01 4.107604E+01 1.403963E+01 4.107705E+01 1.403943E+01 4.107721E+01 1.403931E+01 4.107732E+01 1.403835E+01 4.107811E+01 1.403689E+01 4.107923E+01 1.403678E+01 4.107939E+01 1.403650E+01 4.107982E+01 1.403625E+01 4.108021E+01 1.403611E+01 4.108096E+01 1.403714E+01 4.108202E+01 1.403765E+01 4.108252E+01 1.403894E+01 4.108330E+01 1.403999E+01 4.108387E+01 1.404066E+01 4.108423E+01 1.404313E+01 4.108551E+01 1.405869E+01 4.109174E+01 1.405911E+01 4.109190E+01 1.405960E+01 4.109210E+01 1.406020E+01 4.109207E+01 1.406183E+01 4.109153E+01 1.406475E+01 4.108984E+01 1.406541E+01 4.108707E+01 1.406533E+01 4.108661E+01 1.406437E+01 4.108326E+01 1.406256E+01 4.107803E+01 1.406086E+01 4.107383E+01 1.405791E+01 4.107274E+01 1.405715E+01 4.107086E+01 1.405975E+01 
4.106356E+01 1.406555E+01 4.106555E+01 1.406640E+01 4.106479E+01 1.406820E+01 4.106169E+01 1.406815E+01 4.105982E+01 1.406805E+01 4.105626E+01 1.406573E+01 4.105456E+01 1.406703E+01 4.105243E+01 1.407708E+01 4.105058E+01 1.408079E+01 4.104991E+01 1.408163E+01 4.103356E+01 1.407428E+01 4.103230E+01 1.406491E+01 4.103067E+01 1.403375E+01 4.102500E+01 1.403275E+01 4.102479E+01 1.403238E+01 4.102472E+01 1.403219E+01 4.102463E+01 1.403066E+01 4.102396E+01 1.401376E+01 4.101483E+01 1.400907E+01 4.101244E+01 1.400445E+01 4.101008E+01 1.400562E+01 4.102265E+01 1.400591E+01 4.104183E+01 1.400549E+01 4.105195E+01 1.400530E+01 4.106377E+01 1.400560E+01 4.106490E+01 1.400562E+01 4.106497E+01 1.400605E+01 4.106607E+01 1.400679E+01 4.106723E+01 1.400724E+01 4.106769E+01 1.400754E+01 4.106799E+01 1.400791E+01 4.106878E+01 1.400797E+01 4.106891E+01 1.400806E+01 4.106911E+01 1.400801E+01 4.107000E+01 1.400799E+01 4.107034E+01 1.400759E+01 4.107101E+01 1.400690E+01 4.107184E+01 1.399118E+01 4.108307E+01 1.399033E+01 4.108362E+01 1.397639E+01 4.109077E+01 1.397592E+01 4.109084E+01 1.397537E+01 4.109092E+01 1.396830E+01 4.108947E+01 END END
So much pussy, so little time! Never has this much sweet sloppy slit been packed in one room! These hot horny bitches are climbing all over each other to get to the colossal cocks waiting to spray spunk all over their tits and ass! You think you've seen an orgy sex party before, you ain't seen nuthin' until you've seen this cum spattered group fuck!
---
author:
- Nachi Gupta
- Raphael Hauser
bibliography:
- 'ieee-tac3.bib'
title: Kalman Filtering with Equality and Inequality State Constraints
---

Introduction
============

Kalman Filtering [@Kalman1960] is a method to make real-time predictions for systems with some known dynamics. Traditionally, problems requiring Kalman Filtering have been complex and nonlinear. Many advances have been made in the direction of dealing with nonlinearities (e.g., Extended Kalman Filter [@BLK2001], Unscented Kalman Filter [@JU1997]). These problems also tend to have inherent state space [*equality*]{} constraints (e.g., a fixed speed for a robotic arm) and state space [*inequality*]{} constraints (e.g., maximum attainable speed of a motor). In the past, less interest has been generated towards constrained Kalman Filtering, partly because constraints can be difficult to model. As a result, constraints are often neglected in standard Kalman Filtering applications.

The extension to Kalman Filtering with known equality constraints on the state space is discussed in [@SAP1988; @TS1988; @SC2002; @WCC2002; @Gupta2007]. In this paper, we discuss two distinct methods to incorporate constraints into a Kalman Filter. Initially, we discuss these in the framework of equality constraints. The first method, projecting the updated state estimate onto the constrained region, appears with some discussion in [@SC2002; @Gupta2007]. We propose another method, which is to restrict the optimal Kalman Gain so the updated state estimate will not violate the constraint. With some algebraic manipulation, the second method is shown to be a special case of the first method. We extend both of these concepts to Kalman Filtering with inequality constraints in the state space. This generalization for the first approach was discussed in [@SS2005].[^1] Constraining the optimal Kalman Gain was briefly discussed in [@Q1989].
Further, we extend these methods to incorporate state space constraints in Kalman Filter predictions. Just as the Kalman Filter can be extended to nonlinear dynamics by linearizing locally (the Extended Kalman Filter) or by a sigma-point scheme (the Unscented Kalman Filter), linear inequality constrained filtering can be extended to problems with nonlinear constraints by linearizing the constraints locally (or by way of another scheme like an Unscented Kalman Filter). The accuracy achieved by methods dealing with nonlinear constraints will naturally depend on the structure and curvature of the nonlinear function itself. In the two experiments we provide, we look at incorporating inequality constraints into a tracking problem with nonlinear dynamics. Kalman Filter {#sec::kf} ============= A discrete-time Kalman Filter [@Kalman1960] attempts to find the best running estimate for a recursive system governed by the following model[^2]: $$\label{kfsm} x_{k} = F_{k,k-1} x_{k-1} + u_{k,k-1}, \qquad u_{k,k-1} \sim \mathcal{N}\left(0,Q_{k,k-1}\right)$$ $$\label{kfmm} z_{k} = H_{k} x_{k} + v_{k}, \qquad v_{k} \sim \mathcal{N}\left(0,R_{k}\right)$$ Here $x_{k}$ is an $n$-vector that represents the true state of the underlying system and $F_{k,k-1}$ is an $n \times n$ matrix that describes the transition dynamics of the system from $x_{k-1}$ to $x_{k}$. The measurement made by the observer is an $m$-vector $z_{k}$, and $H_{k}$ is an $m \times n$ matrix that transforms a vector from the state space into the appropriate vector in the measurement space. The noise terms $u_{k,k-1}$ (an $n$-vector) and $v_{k}$ (an $m$-vector) encompass known and unknown errors in $F_{k,k-1}$ and $H_{k}$ and are normally distributed with mean 0 and covariances given by $n \times n$ matrix $Q_{k,k-1}$ and $m \times m$ matrix $R_{k}$, respectively. At each iteration, the Kalman Filter makes a state prediction for $x_k$, denoted $\hat{x}_{k|k-1}$. 
We use the notation ${k|k-1}$ since we will only use measurements provided until time-step $k-1$ in order to make the prediction at time-step $k$. The state prediction error $\tilde{x}_{k|k-1}$ is defined as the difference between the true state and the state prediction, as below. $$\label{se1} \tilde{x}_{k|k-1} = x_{k} - \hat{x}_{k|k-1}$$ The covariance structure for the expected error on the state prediction is defined as the expectation of the outer product of the state prediction error. We call this covariance structure the error covariance prediction and denote it $P_{k|k-1}$.[^3] $$\label{P-outer1} P_{k|k-1} = \mathbb{E}\left[\left(\tilde{x}_{k|k-1}\right)\left(\tilde{x}_{k|k-1}\right)'\right]$$ The filter will also provide an updated state estimate for $x_{k}$, given all the measurements provided up to and including time step $k$. We denote these estimates by $\hat{x}_{k|k}$. We similarly define the state estimate error $\tilde{x}_{k|k}$ as below. $$\label{se2} \tilde{x}_{k|k} = x_{k} - \hat{x}_{k|k}$$ The expectation of the outer product of the state estimate error represents the covariance structure of the expected errors on the state estimate, which we call the updated error covariance and denote $P_{k|k}$. $$\label{P-outer2} P_{k|k} = \mathbb{E}\left[\left(\tilde{x}_{k|k}\right)\left(\tilde{x}_{k|k}\right)'\right]$$ At time-step $k$, we can make a prediction for the underlying state of the system by allowing the state to transition forward using our model for the dynamics and noting that $\mathbb{E}\left[u_{k,k-1}\right] = 0$. This serves as our state prediction. $$\label{kfsp} \hat{x}_{k|k-1} = F_{k,k-1} \hat{x}_{k-1|k-1}$$ If we expand the expectation in Equation , we have the following equation for the error covariance prediction. $$\label{kfcp} P_{k|k-1} = F_{k,k-1} P_{k-1|k-1} F_{k,k-1}' + Q_{k,k-1}$$ We can transform our state prediction into the measurement space, which is a prediction for the measurement we now expect to observe. 
$$\label{kfmp} \hat{z}_{k|k-1} = H_{k} \hat{x}_{k|k-1}$$ The difference between the observed measurement and our predicted measurement is the measurement residual, which the algorithm seeks to minimize. $$\label{kfi} \nu_{k} = z_{k} - \hat{z}_{k|k-1}$$ We can also calculate the associated covariance for the measurement residual, which is the expectation of the outer product of the measurement residual with itself, $\mathbb{E}\left[\nu_k \nu_k'\right]$. We call this the measurement residual covariance. $$\label{kfic} S_{k} = H_{k} P_{k|k-1} H_{k}' + R_{k}$$ We can now define our updated state estimate as our prediction plus some perturbation, which is given by a weighting factor times the measurement residual. The weighting factor, called the Kalman Gain, will be discussed below. $$\label{kfsu} \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_{k} \nu_{k}$$ Naturally, we can also calculate the updated error covariance by expanding the outer product in Equation .[^4] $$\label{kfcu} P_{k|k} = \left(\operatorname{I}- K_{k} H_{k}\right) P_{k|k-1} \left(\operatorname{I}- K_{k} H_{k}\right)' + K_k R_k K_k'$$ Now we would like to find the Kalman Gain $K_k$, which minimizes the mean square state estimate error, $\mathbb{E}\left[\left|\tilde{x}_{k|k}\right|^2\right]$. This is the same as minimizing the trace of the updated error covariance matrix above.[^5] After some calculus, we find the optimal gain that achieves this, written below.[^6] $$\label{kfkg} K_{k} = P_{k|k-1} H_{k}' S_{k}^{-1}$$ The covariance matrices in the Kalman Filter provide us with a measure for uncertainty in our predictions and updated state estimate. This is a very important feature for the various applications of filtering since we then know how much to trust our predictions and estimates. Also, since the method is recursive, we need to provide an initial covariance that is large enough to contain the initial state, to ensure sensible performance. 
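As a concrete illustration, one full predict-update cycle of the equations above can be written in a few lines of NumPy. This is our own minimal sketch (variable names are ours, not the paper's); the covariance update uses the Joseph form given above, which preserves symmetry numerically.

```python
import numpy as np

def kf_step(x_est, P, z, F, H, Q, R):
    """One Kalman Filter iteration: state prediction, then measurement update."""
    # State and error covariance prediction
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + Q
    # Measurement residual and its covariance
    nu = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    # Optimal Kalman Gain
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Updated state estimate and Joseph-form updated error covariance
    x_upd = x_pred + K @ nu
    I_KH = np.eye(len(x_est)) - K @ H
    P_upd = I_KH @ P_pred @ I_KH.T + K @ R @ K.T
    return x_upd, P_upd
```

In practice one would replace `np.linalg.inv` with a Cholesky solve for numerical stability; the sketch favors a literal reading of the equations.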
For a more detailed discussion of Kalman Filtering, we refer the reader to [@BLK2001]. Equality Constrained Kalman Filtering ===================================== A number of approaches have been proposed for solving the equality constrained Kalman Filtering problem [@TS1988; @SAP1988; @WCC2002; @SC2002; @Gupta2007]. In this paper, we present two methods. The first method will restrict the state at each iteration to lie in the equality constrained space. The second method will start with a constrained prediction, and restrict the Kalman Gain so that the estimate will lie in the constrained space. Our equality constraints in this paper will be defined as below, where $A$ is a $q \times n$ matrix, $b$ a $q$-vector, and $x_k$, the state, is an $n$-vector.[^7] $$\label{constraints} A x_k = b$$ So we would like our updated state estimate to satisfy the constraint at each iteration, as below. $$\label{kfsu-con} A \hat{x}_{k|k} = b$$ Similarly, we may also like the state prediction to be constrained, which would allow a better forecast for the system. $$A \hat{x}_{k|k-1} = b$$ In the following subsections, we will discuss methods for constraining the updated state estimate. In Section \[sec::aic\], we will extend these concepts and formulations to the inequality constrained case, and in Section \[sec::csp\], we will address the problem of constraining the prediction, as well. Projecting the state to lie in the constrained space {#sec::pue} ---------------------------------------------------- We can solve the following minimization problem for a given time-step $k$, where $\hat{x}_{k|k}^{P}$ is the constrained estimate, $W_k$ is any positive definite symmetric weighting matrix, and $\hat{x}_{k|k}$ is the unconstrained Kalman Filter updated estimate. 
$$\label{eq-proj-problem} \hat{x}_{k|k}^{P} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n} \ \left\{\left(x - \hat{x}_{k|k} \right)' W_k \left(x - \hat{x}_{k|k} \right) : A x = b\right\}$$ The best constrained estimate is then given by $$\label{bce-xP} \hat{x}_{k|k}^{P} = \hat{x}_{k|k} - W_k^{-1} A' \left( A W_k^{-1} A' \right)^{-1} \left(A \hat{x}_{k|k} - b \right)$$ To find the updated error covariance matrix of the equality constrained filter, we first define the matrix $\Upsilon$ below.[^8] $$\Upsilon = W_k^{-1} A' \left(A W_k^{-1} A' \right)^{-1}$$ Equation can then be re-written as follows. $$\label{xeq} \hat{x}_{k|k}^P = \hat{x}_{k|k} - \Upsilon\left(A \hat{x}_{k|k} - b \right)$$ We can find a reduced form for $x_k - \hat{x}_{k|k}^P$ as below. $$\begin{aligned} x_k - \hat{x}_{k|k}^P &= x_k - \hat{x}_{k|k} +\Upsilon \left(A \hat{x}_{k|k} - b - \left(A x_k - b \right)\right) \\ &= x_k - \hat{x}_{k|k} +\Upsilon \left(A \hat{x}_{k|k} - A x_k\right) \\ &= -\left(\operatorname{I}- \Upsilon A \right) \left(\hat{x}_{k|k} - x_k\right)\end{aligned}$$ Using the definition of the error covariance matrix, we arrive at the following expression. \[bce-PP\] $$\begin{aligned} P_{k|k}^P &= \mathbb{E}\left[\left(x_k - \hat{x}_{k|k}^P\right)\left(x_k - \hat{x}_{k|k}^P\right)'\right] \\ &= \mathbb{E}\left[\left(\operatorname{I}- \Upsilon A \right) \left(\hat{x}_{k|k} - x_k\right) \left(\hat{x}_{k|k} - x_k\right)' \left(\operatorname{I}- \Upsilon A \right)'\right] \\ &= \left(\operatorname{I}- \Upsilon A \right) P_{k|k} \left(\operatorname{I}- \Upsilon A \right)' \\ &= P_{k|k} - \Upsilon A P_{k|k} - P_{k|k} A' \Upsilon' + \Upsilon A P_{k|k} A' \Upsilon' \\ &= P_{k|k} - \Upsilon A P_{k|k} \\ &= \label{Peq} \left(\operatorname{I}- \Upsilon A \right) P_{k|k} \end{aligned}$$ It can be shown that choosing $W_k = P_{k|k}^{-1}$ results in the smallest updated error covariance. 
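The projection above is straightforward to implement once $\Upsilon$ is formed. The following is a minimal sketch, with a hypothetical three-state example (the constraint and numbers are ours, not from the paper):

```python
import numpy as np

def project_equality(x_hat, W, A, b):
    """W-weighted projection of x_hat onto the affine set {x : A x = b}."""
    W_inv = np.linalg.inv(W)
    Upsilon = W_inv @ A.T @ np.linalg.inv(A @ W_inv @ A.T)
    return x_hat - Upsilon @ (A @ x_hat - b)

# Hypothetical constraint: the three state components must sum to 1
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x_c = project_equality(np.array([0.5, 0.4, 0.4]), np.eye(3), A, b)  # -> [0.4, 0.3, 0.3]
```

With $W_k = P_{k|k}^{-1}$ the same function yields the minimum-variance projection discussed above.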
This also provides a measure of the information in the state at $k$.[^9] Restricting the optimal Kalman Gain so the updated state estimate lies in the constrained space ----------------------------------------------------------------------------------------------- Alternatively, we can expand the updated state estimate term in Equation using Equation . $$A \left( \hat{x}_{k|k-1} + K_{k} \nu_{k} \right) = b$$ Then we can choose a Kalman Gain $K_k^R$ that forces the updated state estimate to be in the constrained space. In the unconstrained case, we chose the optimal Kalman Gain $K_k$ by solving the minimization problem below, which yields Equation . $$K_k = \operatorname*{arg\,min}_{K \in \mathbb{R}^{n \times m}} {\ensuremath{\textnormal{trace}}}\left[ \left(\operatorname{I}- K H_{k}\right) P_{k|k-1} \left(\operatorname{I}- K H_{k}\right)' + K R_k K'\right]$$ Now we seek the optimal $K_k^R$ that satisfies the constrained optimization problem written below for a given time-step $k$. $$\label{min-con} \begin{split} K_k^R = \operatorname*{arg\,min}_{K \in \mathbb{R}^{n \times m}} & {\ensuremath{\textnormal{trace}}}\left[ \left(\operatorname{I}- K H_{k}\right) P_{k|k-1} \left(\operatorname{I}- K H_{k}\right)' + K R_k K'\right] \\ \textnormal{s.t. } & A \left( \hat{x}_{k|k-1} + K \nu_{k} \right) = b \end{split}$$ We will solve this problem using the method of Lagrange Multipliers. First, we take the steps below, using the vec notation (column-stacking matrices so they appear as long vectors; see Appendix \[app::kv\]) to convert all appearances of $K$ in Equation into long vectors. 
Let us begin by expanding the following term.[^10] $$\begin{gathered} \nonumber{\ensuremath{\textnormal{trace}}}\left[\left(\operatorname{I}- K H_{k}\right) P_{k|k-1} \left(\operatorname{I}- K H_{k}\right)' + K R_k K' \right] \qquad \qquad \qquad \qquad \qquad \qquad \qquad\\ \begin{aligned} &\stackrel{\hphantom{\eqref{kfic}}}{=}{\ensuremath{\textnormal{trace}}}\left[ P_{k|k-1} - K H_{k} P_{k|k-1} - P_{k|k-1} H_{k}' K' + K H_{k} P_{k|k-1} H_{k}' K' + K R_k K' \right] \\ &\stackrel{\eqref{kfic}}{=} {\ensuremath{\textnormal{trace}}}\left[ P_{k|k-1} - K H_{k} P_{k|k-1} - P_{k|k-1} H_{k}' K' + K S_k K' \right] \\ &\stackrel{\hphantom{\eqref{kfic}}}{=}\label{trace-separated}{\ensuremath{\textnormal{trace}}}\left[ P_{k|k-1} \right] - {\ensuremath{\textnormal{trace}}}\left[ K H_{k} P_{k|k-1} \right] - {\ensuremath{\textnormal{trace}}}\left[ P_{k|k-1} H_{k}' K' \right] + {\ensuremath{\textnormal{trace}}}\left[ K S_k K' \right] \end{aligned}\end{gathered}$$ We now expand the last three terms in Equation one at a time.[^11] $$\label{KHP} \begin{aligned} {\ensuremath{\textnormal{trace}}}\left[ K H_{k} P_{k|k-1} \right] \stackrel{\eqref{tr-ab}}{=} {\ensuremath{\textnormal{vec}\left[{\left(H_k P_{k|k-1}\right)'}\right]}}' {\ensuremath{\textnormal{vec}\left[{K}\right]}} \\ = {\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}}' {\ensuremath{\textnormal{vec}\left[{K}\right]}} \end{aligned}$$ $${\ensuremath{\textnormal{trace}}}\left[ P_{k|k-1} H_{k}' K' \right] \stackrel{\eqref{tr-ab}}{=} {\ensuremath{\textnormal{vec}\left[{K}\right]}}' {\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}}$$ $$\label{KSK} \begin{aligned} {\ensuremath{\textnormal{trace}}}\left[ K S_k K' \right] &\stackrel{\eqref{tr-ab}}{=} {\ensuremath{\textnormal{vec}\left[{K}\right]}}' {\ensuremath{\textnormal{vec}\left[{K S_k}\right]}} \\ &\stackrel{\eqref{vec-ab}}{=} {\ensuremath{\textnormal{vec}\left[{K}\right]}}' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} 
{\ensuremath{\textnormal{vec}\left[{K}\right]}} \end{aligned}$$ Remembering that ${\ensuremath{\textnormal{trace}}}\left[ P_{k|k-1} \right]$ is constant, our objective function can be written as below. $$\begin{aligned} {\ensuremath{\textnormal{vec}\left[{K}\right]}}' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} {\ensuremath{\textnormal{vec}\left[{K}\right]}} &- {\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}}' {\ensuremath{\textnormal{vec}\left[{K}\right]}}\\ &- {\ensuremath{\textnormal{vec}\left[{K}\right]}}' {\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}} \end{aligned}$$ Using Equation on the equality constraints, our minimization problem is the following. $$\begin{split} K_k^R = \operatorname*{arg\,min}_{K \in \mathbb{R}^{n \times m}}& \ {\ensuremath{\textnormal{vec}\left[{K}\right]}}' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} {\ensuremath{\textnormal{vec}\left[{K}\right]}} \\ &- {\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}}' {\ensuremath{\textnormal{vec}\left[{K}\right]}} \\ & - {\ensuremath{\textnormal{vec}\left[{K}\right]}}' {\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}} \\ \textnormal{s.t. } & \left( \nu_{k}' \otimes A \right) {\ensuremath{\textnormal{vec}\left[{K}\right]}} = b - A \hat{x}_{k|k-1} \end{split}$$ Further, we simplify this problem so the minimization problem has only one quadratic term. We complete the square as follows. We want to find the unknown variable $\mu$ which will cancel the linear term. Let the quadratic term appear as follows. Note that the constant term not involving ${\ensuremath{\textnormal{vec}\left[{K}\right]}}$ is dropped, as it is irrelevant for the minimization problem. $$\left({\ensuremath{\textnormal{vec}\left[{K}\right]}} + \mu \right)' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} \left( {\ensuremath{\textnormal{vec}\left[{K}\right]}} + \mu \right)$$ The linear term in the expansion above is the following. 
$${\ensuremath{\textnormal{vec}\left[{K}\right]}}' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} \mu + \mu' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} {\ensuremath{\textnormal{vec}\left[{K}\right]}}$$ So we require that the two equations below hold. $$\begin{aligned} {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} \mu &= -{\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}} \\ \mu' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} &= -{\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}}' \end{aligned}$$ This leads to the following value for $\mu$. $$\begin{aligned} \mu &\stackrel{\eqref{kron-inv}}{=} - {\ensuremath{\left({S_k^{-1}}\otimes{\operatorname{I}}\right)}} {\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k'}\right]}} \\ &\stackrel{\eqref{vec-abc}}{=} -{\ensuremath{\textnormal{vec}\left[{P_{k|k-1} H_k' S_k^{-1}}\right]}} \\ &\stackrel{\eqref{kfkg}}{=} -{\ensuremath{\textnormal{vec}\left[{K_k}\right]}} \end{aligned}$$ Using Equation , our quadratic term in the minimization problem becomes the following. $$\left({\ensuremath{\textnormal{vec}\left[{K - K_k}\right]}} \right)' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} \left( {\ensuremath{\textnormal{vec}\left[{K - K_k}\right]}} \right)$$ Let $l = {\ensuremath{\textnormal{vec}\left[{K - K_k}\right]}}$. Then our minimization problem becomes the following. $$\begin{aligned} K_k^R = \operatorname*{arg\,min}_{l \in \mathbb{R}^{mn}} & \ l' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} l \\ \textnormal{s.t. }& \left( \nu_{k}' \otimes A \right) \left(l + {\ensuremath{\textnormal{vec}\left[{K_{k}}\right]}}\right) = b - A \hat{x}_{k|k-1} \end{aligned}$$ We can then re-write the constraint taking the ${\ensuremath{\textnormal{vec}\left[{K_k}\right]}}$ term to the other side as below. 
$$\begin{aligned} \left( \nu_{k}' \otimes A \right) l & = b - A \hat{x}_{k|k-1} - \left( \nu_{k}' \otimes A \right) {\ensuremath{\textnormal{vec}\left[{K_{k}}\right]}} \\ & \stackrel{\eqref{vec-abc}}{=} b - A \hat{x}_{k|k-1} -{\ensuremath{\textnormal{vec}\left[{A K_{k} \nu_k}\right]}} \\ & = b - A \hat{x}_{k|k-1} - A K_{k} \nu_k \\ & \stackrel{\eqref{kfsu}}= b - A \hat{x}_{k|k} \end{aligned}$$ This results in the following simplified form. $$\label{first-SDPT3} \begin{aligned} K_k^R = \operatorname*{arg\,min}_{l \in \mathbb{R}^{mn}}&\ l' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} l \\ \textnormal{s.t. }& \left( \nu_{k}' \otimes A \right) l = b - A \hat{x}_{k|k} \end{aligned}$$ We form the Lagrangian $\mathcal{L}$, where we introduce $q$ Lagrange Multipliers in vector $ \lambda = \left( \lambda_1, \lambda_2, \ldots, \lambda_q \right)'$ $$\begin{aligned} \mathcal{L} = & l' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} l - \lambda' \left[\left( \nu_{k}' \otimes A \right) l - b + A \hat{x}_{k|k}\right] \end{aligned}$$ We take the partial derivative with respect to $l$.[^12] $$\label{partial1} \frac{\partial \mathcal{L}}{\partial l} = 2 l' {\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}} - \lambda' \left( \nu_{k}' \otimes A \right) \\$$ Similarly we can take the partial derivative with respect to the vector $\lambda$. $$\frac{\partial \mathcal{L}}{\partial \lambda} = \left( \nu_{k}' \otimes A \right) l - b + A \hat{x}_{k|k}$$ When both of these derivatives are set equal to the appropriate size zero vector, we have the solution to the system. Taking the transpose of Equation , we can write this system as $Mn = p$ with the following block definitions for $M,n$, and $p$. 
$$\label{M-matrix} M = \begin{bmatrix} 2 {\ensuremath{{S_k}\otimes{\operatorname{I}}}} & \nu_{k} \otimes A' \\ \nu_{k}' \otimes A & 0_{{\ensuremath{\left[{q}\times{q}\right]}}} \end{bmatrix}$$ $$\label{n-vector} n = \begin{bmatrix} l \\ \lambda \end{bmatrix}$$ $$\label{p-vector} p = \begin{bmatrix} 0_{{\ensuremath{\left[{mn}\times{1}\right]}}} \\ b - A \hat{x}_{k|k} \end{bmatrix}$$ We solve this system for vector $n$ in Appendix \[app::Mnp\]. The solution for $l$ is pasted below. $$\left(\left[S_k^{-1} \nu_k \left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1}\right] \otimes \left[A' \left(A A' \right)^{-1} \right]\right) \left(b - A \hat{x}_{k|k}\right)$$ Bearing in mind that $b - A \hat{x}_{k|k} = {\ensuremath{\textnormal{vec}\left[{b - A \hat{x}_{k|k}}\right]}}$, we can use Equation to re-write $l$ as below.[^13] $${\ensuremath{\textnormal{vec}\left[{A' \left(A A' \right)^{-1}\left(b - A \hat{x}_{k|k} \right) \left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1} \nu_k' S_k^{-1}}\right]}}$$ The resulting matrix inside the vec operation is then an $n$ by $m$ matrix. Remembering the definition for $l$, we notice that $K - K_k$ results in an $n$ by $m$ matrix also. Since both of the components inside the vec operation result in matrices of the same size, we can safely remove the vec operation from both sides. This results in the following optimal constrained Kalman Gain $K_k^R$. $$K_k - A' \left(A A' \right)^{-1}\left(A \hat{x}_{k|k} - b \right) \left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1} \nu_k' S_k^{-1}$$ If we now substitute this Kalman Gain into Equation to find the constrained updated state estimate, we end up with the following. $$\hat{x}_{k|k}^R = \hat{x}_{k|k} - A' \left(A A' \right)^{-1}\left(A \hat{x}_{k|k} - b \right)$$ This is of course equivalent to the result of Equation with the weighting matrix $W_k$ chosen as the identity matrix. 
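This equivalence is easy to verify numerically. The sketch below (our own, with randomly generated matrices standing in for a real filtering problem) forms the unconstrained gain, applies the restricted-gain correction derived above, and checks the result against the identity-weighted projection:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 4, 3, 2
A = rng.standard_normal((q, n))
b = rng.standard_normal(q)
H = rng.standard_normal((m, n))
x_pred = rng.standard_normal(n)       # state prediction
nu = rng.standard_normal(m)           # measurement residual

# Hypothetical symmetric positive definite covariances
P_pred = rng.standard_normal((n, n)); P_pred = P_pred @ P_pred.T + n * np.eye(n)
R = rng.standard_normal((m, m)); R = R @ R.T + m * np.eye(m)
S = H @ P_pred @ H.T + R              # measurement residual covariance

K = P_pred @ H.T @ np.linalg.inv(S)   # unconstrained optimal gain
x_upd = x_pred + K @ nu               # unconstrained updated estimate

# Restricted gain: K - A'(AA')^{-1}(A x_upd - b)(nu' S^{-1} nu)^{-1} nu' S^{-1}
S_inv = np.linalg.inv(S)
K_R = K - np.outer(A.T @ np.linalg.inv(A @ A.T) @ (A @ x_upd - b),
                   S_inv @ nu) / (nu @ S_inv @ nu)
x_R = x_pred + K_R @ nu

# Identity-weighted projection of the unconstrained estimate onto {x : A x = b}
x_proj = x_upd - A.T @ np.linalg.inv(A @ A.T) @ (A @ x_upd - b)
```

Both `x_R` and `x_proj` satisfy the constraint to machine precision, confirming the algebra.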
The error covariance for this estimate is given by Equation .[^14] Adding Inequality Constraints {#sec::aic} ============================= In the more general case of this problem, we may encounter equality and inequality constraints, as given below.[^15] $$\label{ineq-constraints} \begin{split} A x_{k} = b\\ C x_{k} \leq d \end{split}$$ So we would like our updated state estimate to satisfy the constraint at each iteration, as below. $$\begin{split} A \hat{x}_{k|k} = b \\ C \hat{x}_{k|k} \leq d \end{split}$$ Similarly, we may also like the state prediction to be constrained, which would allow a better forecast for the system. $$\begin{split} A \hat{x}_{k|k-1} = b \\ C \hat{x}_{k|k-1} \leq d \end{split}$$ We will present two methods analogous to those presented for the equality constrained case. In the first method, we will run the unconstrained filter, and at each iteration constrain the updated state estimate to lie in the constrained space. In the second method, we will find a Kalman Gain $\check{K}_k^R$ such that the updated state estimate will be forced to lie in the constrained space. In both methods, we will no longer be able to find an analytic solution as before. Instead, we use numerical methods. By Projecting the Unconstrained Estimate {#sec::pue-ineq} ---------------------------------------- Given the best unconstrained estimate, we could solve the following minimization problem for a given time-step $k$, where $\check{x}_{k|k}^{P}$ is the inequality constrained estimate and $W_k$ is any positive definite symmetric weighting matrix. $$\begin{aligned} \check{x}_{k|k}^{P} = \operatorname*{arg\,min}_{x} &\ \left(x - \hat{x}_{k|k} \right)' W_k \left(x - \hat{x}_{k|k} \right) \\ \textnormal{s.t. } & A x = b \\ & C x \leq d \end{aligned}$$ For solving this inequality constrained optimization problem, we can use a variety of standard methods, or even an out-of-the-box solver, like `fmincon` in Matlab. Here we use an active set method [@Fletcher1981]. 
This is a common method for dealing with inequality constraints, where we treat a subset of the constraints (called the active set) as additional equality constraints. We ignore any inactive constraints when solving our optimization problem. After solving the problem, we check if our solution lies in the space given by the inequality constraints. If it does not, we start from the solution in our previous iteration and move in the direction of the new solution until we hit a set of constraints. For each iteration, the active set is made up of those inequality constraints with non-zero Lagrange Multipliers. We first find the best estimate (using Equation ) for the equality constrained problem with the equality constraints given in Equation plus the active set of inequality constraints. Let us call the solution to this $\check{x}_{k|k,j}^{P*}$ since we have not yet checked if the solution lies in the inequality constrained space.[^16] In order to check this, we find the vector that we moved along to reach $\check{x}_{k|k,j}^{P*}$. This is given by the following. $$s = \check{x}_{k|k,j}^{P*} - \check{x}_{k|k,j-1}^P$$ We now iterate through each of our inequality constraints to check if they are satisfied. If they are all satisfied, we choose $\tau_{\max}=1$. If they are not, we choose the largest value of $\tau_{\max}$ such that $\check{x}_{k|k,j-1}^P + \tau_{\max} s$ lies in the inequality constrained space. We choose our estimate to be $$\check{x}_{k|k,j}^P = \check{x}_{k|k,j-1}^{P} + \tau_{\max} s$$ If we find the solution has converged within a pre-specified error, or we have reached a pre-specified maximum number of iterations, we choose this as the updated state estimate to our inequality constrained problem, denoted $\check{x}_{k|k}^P$. 
If we would like to take a further iteration on $j$, we check the Lagrange Multipliers at this new solution to determine the new active set.[^17] We then repeat by finding the best estimate for the equality constrained problem including the new active set as additional equality constraints. Since this is a Quadratic Programming problem, each step of $j$ guarantees the same estimate or a better estimate. When calculating the error covariance matrix for this estimate, we can also add on the safety term below. $$\left(\check{x}_{k|k,j}^P - \check{x}_{k|k,j-1}^{P}\right)\left(\check{x}_{k|k,j}^P - \check{x}_{k|k,j-1}^{P}\right)'$$ This is a measure of our convergence error and should typically be small relative to the unconstrained error covariance. We can then use Equation to project the covariance matrix onto the constrained subspace, but we only use the defined equality constraints. We do not incorporate any constraints in the active set when computing Equation since these still represent inequality constraints on the state. Ideally we would project the error covariance matrix into the inequality constrained subspace, but this projection is not trivial. By Restricting the Optimal Kalman Gain -------------------------------------- We could solve this problem by restricting the optimal Kalman Gain, as we did for equality constraints previously. We seek the optimal $\check{K}_k^R$ that satisfies the constrained optimization problem written below for a given time-step $k$. $$\label{min-con-ineq} \begin{aligned} \check{K}^R_k = \operatorname*{arg\,min}_{K \in \mathbb{R}^{n \times m}} & {\ensuremath{\textnormal{trace}}}\left[\left(\operatorname{I}- K H_{k}\right) P_{k|k-1} \left(\operatorname{I}- K H_{k}\right)' + K R_k K'\right] \\ \textnormal{s.t. 
} & A \left( \hat{x}_{k|k-1} + K \nu_{k} \right) = b \\ & C \left( \hat{x}_{k|k-1} + K \nu_{k} \right) \leq d \end{aligned}$$ Again, we can solve this problem using any inequality constrained optimization method (e.g., `fmincon` in Matlab or the active set method used previously). Here we solved the optimization problem using SDPT3, a Matlab package for solving semidefinite programming problems [@TTT1999]. When calculating the covariance matrix for the inequality constrained estimate, we use the restricted Kalman Gain. Again, we can add on the safety term for the convergence error, by taking the outer product of the difference between the updated state estimates calculated by the restricted Kalman Gain for the last two iterations of SDPT3. This covariance matrix is then projected onto the subspace as in Equation using the equality constraints only. Dealing with Nonlinearities {#sec::nl} =========================== Thus far, in the Kalman Filter we have dealt with linear models and constraints. A number of methods have been proposed to handle nonlinear models (e.g., Extended Kalman Filter [@BLK2001], Unscented Kalman Filter [@JU1997]). In this paper, we will focus on the most widely used of these, the Extended Kalman Filter. Let us re-write the discrete unconstrained Kalman Filtering problem from Equations and below, incorporating nonlinear models. $$\label{kfsm-nl} x_{k} = f_{k,k-1} \left(x_{k-1}\right) + u_{k,k-1}, \qquad u_{k,k-1} \sim \mathcal{N}\left(0,Q_{k,k-1}\right)$$ $$\label{kfmm-nl} z_{k} = h_{k} \left(x_{k}\right) + v_{k}, \qquad v_{k} \sim \mathcal{N}\left(0,R_{k}\right)$$ In the above equations, we see that the transition matrix $F_{k,k-1}$ has been replaced by the nonlinear vector-valued function $f_{k,k-1}\left(\cdot\right)$, and similarly, the matrix $H_k$, which transforms a vector from the state space into the measurement space, has been replaced by the nonlinear vector-valued function $h_k\left(\cdot\right)$. 
The method proposed by the Extended Kalman Filter is to linearize the nonlinearities about the current state prediction (or estimate). That is, we choose $F_{k,k-1}$ as the Jacobian of $f_{k,k-1}$ evaluated at $\hat{x}_{k-1|k-1}$, and $H_k$ as the Jacobian of $h_k$ evaluated at $\hat{x}_{k|k-1}$ and proceed as in the linear Kalman Filter of Section \[sec::kf\].[^18] Numerical accuracy of these methods tends to depend heavily on the nonlinear functions. If we have linear constraints but a nonlinear $f_{k,k-1}\left(\cdot\right)$ and $h_k\left(\cdot\right)$, we can adapt the Extended Kalman Filter to fit into the framework of the methods described thus far. Nonlinear Equality and Inequality Constraints --------------------------------------------- Since the equality and inequality constraints we model are often nonlinear, it is important to make the extension to nonlinear equality and inequality constrained Kalman Filtering for the methods discussed thus far. Without loss of generality, our discussion here will pertain only to nonlinear inequality constraints. We can follow the same steps for equality constraints.[^19] We replace the linear inequality constraint on the state space by the following nonlinear inequality constraint $c\left(x_k\right) \leq d$, where $c\left(\cdot\right)$ is a vector-valued function. We can then linearize our constraint, $c\left(x_k\right) \leq d$, about the current state prediction $\hat{x}_{k|k-1}$, which gives us the following.[^20] $$c\left(\hat{x}_{k|k-1}\right) + C \left(x_k - \hat{x}_{k|k-1} \right) \lessapprox d$$ Here $C$ is defined as the Jacobian of $c$ evaluated at $\hat{x}_{k|k-1}$. This indicates that the nonlinear constraint we would like to model can be approximated by the following linear constraint $$\label{puenl} C x_k \lessapprox d + C \hat{x}_{k|k-1} - c\left(\hat{x}_{k|k-1}\right)$$ This constraint can be written as $\tilde{C} x_k \leq \tilde{d}$, which is an approximation to the nonlinear inequality constraint. 
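The linearized constraint $\tilde{C} x_k \leq \tilde{d}$ can be formed mechanically from any constraint function. The sketch below is our own; it uses a forward-difference Jacobian and a hypothetical scalar constraint $x_1^2 + x_2^2 \leq 1$:

```python
import numpy as np

def linearize_constraint(c, x_lin, d, eps=1e-6):
    """Approximate c(x) <= d near x_lin by the linear constraint C x <= d_tilde."""
    c0 = np.atleast_1d(c(x_lin))
    C = np.zeros((len(c0), len(x_lin)))
    for j in range(len(x_lin)):             # forward-difference Jacobian of c
        dx = np.zeros(len(x_lin))
        dx[j] = eps
        C[:, j] = (np.atleast_1d(c(x_lin + dx)) - c0) / eps
    d_tilde = d + C @ x_lin - c0            # C x <= d + C x_lin - c(x_lin)
    return C, d_tilde

# Hypothetical constraint c(x) = x_1^2 + x_2^2 <= 1, linearized at (0.6, 0)
c = lambda x: np.array([x[0] ** 2 + x[1] ** 2])
C, d_tilde = linearize_constraint(c, np.array([0.6, 0.0]), np.array([1.0]))
```

In an Iterated Extended Kalman Filter setting, one would relinearize at each new estimate rather than only at the state prediction.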
It is now in a form that can be used by the methods described thus far. The nonlinearities in both the constraints and the models, $f_{k,k-1}\left(\cdot\right)$ and $h_k\left(\cdot\right)$, could have been linearized using a number of different methods (e.g., a derivative-free method, a higher order Taylor approximation). An iterative method could also be used, as in the Iterated Extended Kalman Filter [@BLK2001]. Constraining the State Prediction {#sec::csp} ================================= We have not yet discussed whether the state prediction (Equation ) should also be constrained. Forcing the constraints should provide a better prediction (which is used for forecasting in the Kalman Filter). Ideally, the transition matrix $F_{k,k-1}$ will take an updated state estimate satisfying the constraints at time $k-1$ and make a prediction that will satisfy the constraints at time $k$. Of course, this may not be the case. In fact, the constraints may depend on the updated state estimate, which would be the case for nonlinear constraints. On the downside, constraining the state prediction increases computational cost per iteration. We propose three methods for dealing with the problem of constraining the state prediction. The first method is to project the matrix $F_{k,k-1}$ onto the constrained space. This is only possible for the equality constraints, as there is no trivial way to project $F_{k,k-1}$ onto an inequality constrained space. We can use the same projector as in Equation so we have the following.[^21] $$F_{k,k-1}^P = \left(\operatorname{I}- \Upsilon A \right) F_{k,k-1}$$ Under the assumption that we have constrained our updated state estimate, this new transition matrix will make a prediction that will keep the estimate in the equality constrained space. Alternatively, if we weaken this assumption, i.e., we are not constraining the updated state estimate, we could solve the minimization problem below (analogous to Equation ). 
We can also incorporate inequality constraints now. $$\begin{aligned} \check{x}_{k|k-1}^{P} = \operatorname*{arg\,min}_{x} &\ \left(x - \hat{x}_{k|k-1} \right)' W_k \left(x - \hat{x}_{k|k-1} \right) \\ \textnormal{s.t. } & A x = b \\ & C x \leq d \end{aligned}$$ We can constrain the covariance matrix here also, in a similar fashion to the method described in Section \[sec::pue-ineq\]. The third method is to add to the constrained problem the additional constraints below, which ensure that the chosen estimate will produce a prediction at the next iteration that is also constrained. $$\begin{aligned} A_{k+1} F_{k+1,k} x_k &= b_{k+1} \\ C_{k+1} F_{k+1,k} x_k &\leq d_{k+1} \end{aligned}$$ If $A_{k+1}, b_{k+1}, C_{k+1}$ or $d_{k+1}$ depend on the estimate (e.g., if we are linearizing nonlinear functions $a\left(\cdot\right)$ or $b\left(\cdot\right)$), we can use an iterative method, which would resolve $A_{k+1}$ and $b_{k+1}$ using the current best updated state estimate (or prediction), re-calculate the best estimate using $A_{k+1}$ and $b_{k+1}$, and so forth until we are satisfied with the convergence. This method would be preferred since it looks ahead one time-step to choose a better estimate for the current iteration.[^22] However, it can be far more expensive computationally. Experiments =========== We provide two related experiments here. We have a car driving along a straight road of width 2 meters. The driver of the car traces a noisy sine curve (with the noise lying only in the frequency domain). The car is tagged with a device that transmits the location within some known error. We would like to track the position of the car. In the first experiment, we filter over the noisy data with the knowledge that the underlying function is a noisy sine curve. The inequality constrained methods will constrain the estimates to only take values in the interval $[-1,1]$. In the second experiment, we do not use the knowledge that the underlying curve is a sine curve. 
Instead we attempt to recover the true data using an autoregressive model of order 6 [@BJ1976]. We do, however, assume our unknown function only takes values in the interval $[-1,1]$, and we can again enforce these constraints when using the inequality constrained filter. The driver’s path is generated using the nonlinear stochastic process given by Equation . We start with the following initial point. $$\label{ickf1-x0} x_0 = \begin{bmatrix} 0 \text{\ m}\\ 0 \text{\ m} \end{bmatrix}$$ Our vector-valued transition function will depend on a discretization parameter $T$ and can be expressed as below. Here, we choose $T$ to be $\pi/10$, and we run the experiment from an initial time of 0 to a final time of $10 \pi$. $$f_{k,k-1} = \begin{bmatrix} \left(x_{k-1}\right)_1 + T \\ \sin \left(\left(x_{k-1}\right)_1 + T \right) \end{bmatrix}$$ And for the process noise we choose the following. $$Q_{k,k-1} = \begin{bmatrix} 0.1 \text{\ m}^2 & 0 \\ 0 & 0 \text{\ m}^2 \end{bmatrix}$$ The driver’s path is drawn out by the second element of the vector $x_k$ – the first element acts as an underlying state to generate the second element, which also allows a natural method to add noise in the frequency domain of the sine curve while keeping the process recursively generated. First Experiment ---------------- To create the measurements, we use the model from Equation , where $H_k$ is the square identity matrix of dimension 2. We choose $R_k$ as below to noise the data. This considerably masks the true underlying data as can be seen in Fig. \[fig-ickf1\].[^23] $$\label{ickf1-R} R_{k} = \begin{bmatrix} 10 \text{\ m}^2 & 0 \\ 0 & 10 \text{\ m}^2 \end{bmatrix}$$ ![We take our sine curve, which is already noisy in the frequency domain (due to process noise), and add measurement noise. 
The underlying sine curve is significantly masked.[]{data-label="fig-ickf1"}](ickf.ps){width="\columnwidth"} For the initial point of our filters, we choose the following point, which is different from the true initial point given in Equation . $$\hat{x}_{0|0} = \begin{bmatrix} 0 \text{\ m}\\ 1 \text{\ m} \end{bmatrix}$$ Our initial covariance is given as below.[^24] $$P_{0|0} = \begin{bmatrix} 1 \text{\ m}^2 & 0.1\\ 0.1 & 1 \text{\ m}^2 \end{bmatrix}$$ In the filtering, we use the information that the underlying function is a sine curve, and our transition function $f_{k,k-1}$ changes to reflect a recursion in the second element of $x_k$ – now we will add on discretized pieces of a sine curve to our previous estimate. The function is given explicitly below. $$f_{k,k-1} = \begin{bmatrix} \left(x_{k-1}\right)_1 + T \\ \left(x_{k-1}\right)_2 + \sin \left(\left(x_{k-1}\right)_1 + T \right) - \sin \left(\left(x_{k-1}\right)_1\right) \end{bmatrix}$$ For the Extended Kalman Filter formulation, we will also require the Jacobian of this matrix denoted $F_{k,k-1}$, which is given below. $$F_{k,k-1} = \begin{bmatrix} 1 & 0 \\ \cos \left(\left(x_{k-1}\right)_1 + T \right) - \cos \left(\left(x_{k-1}\right)_1\right) & 1 \end{bmatrix}$$ The process noise $Q_{k,k-1}$, given below, is chosen similarly to the noise used in generating the simulation, but is slightly larger to encompass both the noise in our above model and to prevent divergence due to numerical roundoff errors. The measurement noise $R_k$ is chosen the same as in Equation . $$Q_{k,k-1} = \begin{bmatrix} 0.1 \text{\ m}^2 & 0 \\ 0 & 0.1 \text{\ m}^2 \end{bmatrix}$$ The inequality constraints we enforce can be expressed using the notation throughout the chapter, with $C$ and $d$ as given below. $$C = \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}$$ $$d = \begin{bmatrix} 1\\ 1 \end{bmatrix}$$ These constraints force the second element of the estimate $x_{k|k}$ (the sine portion) to lie in the interval $[-1,1]$. 
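A single predict–update step of the Extended Kalman Filter for this sine model can be sketched as below. The measurement value is made up, and we assume an identity weighting $W_k$, in which case the inequality-constrained projection for this box constraint reduces to clipping the second state element to $[-1,1]$.

```python
import numpy as np

# One EKF predict/update step for the sine model, followed by projection
# onto the constraint |x_2| <= 1 (assuming W = I, so projection = clipping).

T = np.pi / 10

def f(x):
    return np.array([x[0] + T, x[1] + np.sin(x[0] + T) - np.sin(x[0])])

def F_jac(x):
    return np.array([[1.0, 0.0], [np.cos(x[0] + T) - np.cos(x[0]), 1.0]])

Q = np.diag([0.1, 0.1])
R = np.diag([10.0, 10.0])
H = np.eye(2)

x_est = np.array([0.0, 1.0])            # initial estimate \hat{x}_{0|0}
P = np.array([[1.0, 0.1], [0.1, 1.0]])
z = np.array([0.5, 3.0])                # a hypothetical noisy measurement

# Predict.
F = F_jac(x_est)
x_pred = f(x_est)
P_pred = F @ P @ F.T + Q

# Update.
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
x_upd = x_pred + K @ (z - H @ x_pred)
P_upd = (np.eye(2) - K @ H) @ P_pred

# Projection onto the inequality-constrained space.
x_con = x_upd.copy()
x_con[1] = np.clip(x_con[1], -1.0, 1.0)

assert abs(x_con[1]) <= 1.0
```

With the large measurement noise above, the unconstrained update is pulled outside $[-1,1]$ and the projection step brings it back to the boundary.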
We do not have any equality constraints in this experiment. We run the unconstrained Kalman Filter and both of the constrained methods discussed previously. A plot of the true position and estimates is given in Fig. \[fig-ickf2\]. Notice that both constrained methods force the estimate to lie within the constrained space, while the unconstrained method can violate the constraints. ![We show our true underlying state, which is a sine curve noised in the frequency domain, along with the estimates from the unconstrained Kalman Filter, and both of our inequality constrained modifications. We also plotted dotted horizontal lines at the values -1 and 1. Neither inequality constrained method allows the estimate to leave the constrained space.[]{data-label="fig-ickf2"}](ickf2.ps){width="\columnwidth"} Second Experiment ----------------- In the previous experiment, we used the knowledge that the underlying function was a noisy sine curve. If this is not known, we face a significantly harder estimation problem. Let us assume nothing about the underlying function except that it must take values in the interval $[-1,1]$. A good model for estimating such an unknown function could be an autoregressive model. We can compare the unconstrained filter to the two constrained methods again using these assumptions and an autoregressive model of order 6, or AR(6) as it is more commonly referred to. In the previous example, we used a large measurement noise $R_k$ to emphasize the gain achieved by using the constraint information. Such a large $R_k$ is probably not very realistic, and when using an autoregressive model, it will be hard to track such a noisy signal. To generate the measurements, we again use Equation , this time with $H_k$ and $R_k$ as given below. 
$$H_k = \begin{bmatrix} 0 & 1 \end{bmatrix}$$ $$R_k = \begin{bmatrix} 0.5 \text{\ m}^2 \end{bmatrix}$$ Our state will now be defined using the following 13-vector, in which the first element is the current estimate, the next five elements are lags, the six elements afterwards are coefficients on the current estimate and the lags, and the last element is a constant term. $$\hat{x}_{k|k} = \begin{bmatrix} y_k & y_{k-1} & \cdots & y_{k-5} & \alpha_1 & \alpha_2 & \cdots & \alpha_7 \end{bmatrix}'$$ Our matrix $H_k$ in the filter is a row vector with the first element 1, and all the rest as 0, so $y_{k|k-1}$ is actually our prediction $\hat{z}_{k|k-1}$ in the filter, describing where we believe the expected value of the next point in the time-series to lie. For the initial state, we choose a vector of all zeros, except the first and seventh element, which we choose as 1. This choice for the initial conditions leads to the first prediction on the time series being 1, which is incorrect as the true underlying state has expectation 0. For the initial covariance, we choose $\operatorname{I}_{{\ensuremath{\left[{13}\times{13}\right]}}}$ and add $0.1$ to all the off-diagonal elements.[^25] The transition function $f_{k,k-1}$ for the AR(6) model is given below. $$\begin{bmatrix} \min\left(1, \max\left(-1, \alpha_1 y_{k-1} + \cdots + \alpha_6 y_{k-6} + \alpha_7 \right) \right)\\ \min\left(1, \max\left(-1,y_{k-1} \right) \right)\\ \min\left(1, \max\left(-1,y_{k-2} \right) \right)\\ \min\left(1, \max\left(-1,y_{k-3} \right) \right)\\ \min\left(1, \max\left(-1,y_{k-4} \right) \right)\\ \min\left(1, \max\left(-1,y_{k-5} \right) \right)\\ \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_6 \\ \alpha_7 \end{bmatrix}$$ Putting this into recursive notation, we have the following. 
$$\begin{bmatrix} \min\left(1, \max\left(-1, \left(x_{k-1}\right)_7 \left(x_{k-1}\right)_1 + \cdots + \left(x_{k-1}\right)_{13} \right) \right)\\ \min\left(1, \max\left(-1, \left(x_{k-1}\right)_1 \right) \right)\\ \min\left(1, \max\left(-1, \left(x_{k-1}\right)_2 \right) \right)\\ \min\left(1, \max\left(-1, \left(x_{k-1}\right)_3 \right) \right)\\ \min\left(1, \max\left(-1, \left(x_{k-1}\right)_4 \right) \right)\\ \min\left(1, \max\left(-1, \left(x_{k-1}\right)_5 \right) \right)\\ \left(x_{k-1}\right)_7 \\ \left(x_{k-1}\right)_8 \\ \vdots \\ \left(x_{k-1}\right)_{12} \\ \left(x_{k-1}\right)_{13} \end{bmatrix}$$ The Jacobian of $f_{k,k-1}$ is given below. We ignore the $\min \left( \cdot \right)$ and $\max \left( \cdot \right)$ operators since the derivative is not continuous across them, and the bounds can be reached through numerical error. Further, when a bound is active, the derivative would be 0, so by ignoring the operators we allow our covariance matrix to be slightly larger than necessary, which is also more numerically stable. $$\begin{bmatrix} \begin{BMAT}{c.c}{c.c} \begin{BMAT}{c.c}{c.c} \begin{BMAT}{cc}{c} \left(x_{k-1}\right)_7 & \cdots \end{BMAT} & \left(x_{k-1}\right)_{12} \\ \operatorname{I}_{{\ensuremath{\left[{5}\times{5}\right]}}} & 0_{{\ensuremath{\left[{5}\times{1}\right]}}} \end{BMAT} & \begin{BMAT}{c}{c.c} \begin{BMAT}{cccc}{c} \left(x_{k-1}\right)_{1} & \cdots & \left(x_{k-1}\right)_{6} & 1 \\ \end{BMAT} \\ 0_{{\ensuremath{\left[{5}\times{7}\right]}}} \end{BMAT} \\ 0_{{\ensuremath{\left[{7}\times{6}\right]}}} & \operatorname{I}_{{\ensuremath{\left[{7}\times{7}\right]}}} \end{BMAT} \end{bmatrix}$$ For the process noise, we choose $Q_{k,k-1}$ to be a diagonal matrix with the first entry as 0.1 and all remaining entries as $10^{-6}$ since we know the prediction phase of the autoregressive model very well. The inequality constraints we enforce can be expressed using the notation throughout the chapter, with $C$ as given below and $d$ as a 12-vector of ones. 
$$C = \begin{bmatrix} \begin{BMAT}{c.c}{c} \begin{BMAT}{c}{c.c} \operatorname{I}_{{\ensuremath{\left[{6}\times{6}\right]}}} \\ -\operatorname{I}_{{\ensuremath{\left[{6}\times{6}\right]}}} \end{BMAT} & 0_{{\ensuremath{\left[{12}\times{7}\right]}}} \end{BMAT} \end{bmatrix}$$ These constraints force the current estimate and all of the lags to take values in the range $[-1,1]$. As an added feature of this filter, we are also re-estimating the lags at each iteration using more information, although we do not use these smoothed values – this is a form of fixed-interval smoothing. In Fig. \[fig-ickfb\], we plot the noisy measurements, true underlying state, and the filter estimates. Notice again that the constrained methods keep the estimates in the constrained space. Visually, we can see the improvement particularly near the edges of the constrained space. ![We show our true underlying state, which is a sine curve noised in the frequency domain, the noised measurements, and the estimates from the unconstrained and both inequality constrained filters. We also plotted dotted horizontal lines at the values -1 and 1. Neither inequality constrained method allows the estimate to leave the constrained space.[]{data-label="fig-ickfb"}](ickfb.ps){width="\columnwidth"} Conclusions =========== We’ve provided two different formulations for including constraints into a Kalman Filter. In the equality constrained framework, these formulations have analytic formulas, one of which is a special case of the other. In the inequality constrained case, we’ve shown two numerical methods for constraining the estimate. We also discussed how to constrain the state prediction and how to handle nonlinearities. Our two examples show that these methods ensure the estimate lies in the constrained space, which provides a better estimate structure. Kron and Vec {#app::kv} ============ In this appendix, we provide some definitions used earlier in the chapter. 
Given matrix $A \in \mathbb{R}^{ m \times n}$ and $B \in \mathbb{R}^{p \times q}$, we can define the right Kronecker product as below.[^26] $$\left( A \otimes B \right) = \begin{bmatrix} a_{1,1} B & \cdots & a_{1,n} B \\ \vdots & \ddots & \vdots \\ a_{m,1} B & \cdots & a_{m,n} B \end{bmatrix}$$ Given appropriately sized matrices $A, B, C,$ and $D$ such that all operations below are well-defined, we have the following equalities. $$\label{kron-trans} \left( A \otimes B \right)' = \left( A' \otimes B' \right)$$ $$\label{kron-inv} \left( A \otimes B \right) ^{-1} = \left( A^{-1} \otimes B^{-1} \right)$$ $$\label{kron-dist} \left( A \otimes B \right) \left( C \otimes D \right) = \left( AC \otimes BD \right)$$ We can also define the vectorization of an ${\ensuremath{\left[{m}\times{n}\right]}}$ matrix $A$, which is a linear transformation on a matrix that stacks the columns iteratively to form a long vector of size ${\ensuremath{\left[{mn}\times{1}\right]}}$, as below. $${\ensuremath{\textnormal{vec}\left[{A}\right]}} = \begin{bmatrix} a_{1,1} \\ \vdots \\ a_{m,1} \\ a_{1,2} \\ \vdots \\ a_{m,2} \\ \vdots \\ a_{1,n} \\ \vdots \\ a_{m,n} \end{bmatrix}$$ Using the vec operator, we can state the trivial definition below. $$\label{vec-sum} {\ensuremath{\textnormal{vec}\left[{A+B}\right]}} = {\ensuremath{\textnormal{vec}\left[{A}\right]}} + {\ensuremath{\textnormal{vec}\left[{B}\right]}}$$ Combining the vec operator with the Kronecker product, we have the following. $$\label{vec-ab} {\ensuremath{\textnormal{vec}\left[{AB}\right]}} = {\ensuremath{\left({B'}\otimes{\operatorname{I}}\right)}} {\ensuremath{\textnormal{vec}\left[{A}\right]}}$$ $$\label{vec-abc} {\ensuremath{\textnormal{vec}\left[{ABC}\right]}} = \left(C' \otimes A \right) {\ensuremath{\textnormal{vec}\left[{B}\right]}}$$ We can express the trace of a product of matrices as below. 
$$\label{tr-ab} {\ensuremath{\textnormal{trace}\left[{AB}\right]}} = {\ensuremath{\textnormal{vec}\left[{B'}\right]}}'{\ensuremath{\textnormal{vec}\left[{A}\right]}}$$ $$\begin{aligned} {\ensuremath{\textnormal{trace}\left[{ABC}\right]}} &= \label{trace-1} {\ensuremath{\textnormal{vec}\left[{B}\right]}}' \left(\operatorname{I}\otimes C\right) {\ensuremath{\textnormal{vec}\left[{A}\right]}} \\ &= \label{trace-2} {\ensuremath{\textnormal{vec}\left[{A}\right]}}' \left(\operatorname{I}\otimes B \right) {\ensuremath{\textnormal{vec}\left[{C}\right]}} \\ &= \label{trace-3} {\ensuremath{\textnormal{vec}\left[{A}\right]}}' \left(C \otimes \operatorname{I}\right) {\ensuremath{\textnormal{vec}\left[{B}\right]}}\end{aligned}$$ For more information, please see [@LT1985]. Analytic Block Representation for the inverse of a Saddle Point Matrix {#app::spm} ====================================================================== $M_S$ is a saddle point matrix if it has the block form below.[^27] $$\label{spm} M_S = \begin{bmatrix} A_S & B_S' \\ B_S & -C_S \end{bmatrix}$$ In the case that $A_S$ is nonsingular and the Schur complement $J_S = -\left(C_S + B_S A_S^{-1} B_S'\right)$ is also nonsingular in the above equation, it is known that the inverse of this saddle point matrix can be expressed analytically by the following equation (see e.g., [@BGL2005]). $$M_S^{-1} = \begin{bmatrix} A_S^{-1} + A_S^{-1} B_S' J_S^{-1} B_S A_S^{-1} & -A_S^{-1} B_S' J_S^{-1} \\ -J_S^{-1} B_S A_S^{-1} & J_S^{-1} \end{bmatrix}$$ Solution to the system $Mn=p$ {#app::Mnp} ============================= Here we solve the system $Mn=p$ from Equations , , and , re-stated below, for vector $n$. 
$$\label{Mnp} \begin{bmatrix} 2 {\ensuremath{{S_k}\otimes{\operatorname{I}}}} & \nu_{k} \otimes A' \\ \nu_{k}' \otimes A & 0_{{\ensuremath{\left[{q}\times{q}\right]}}} \end{bmatrix} \begin{bmatrix} l \\ \lambda \end{bmatrix} = \begin{bmatrix} 0_{{\ensuremath{\left[{mn}\times{1}\right]}}} \\ b - A \hat{x}_{k|k} \end{bmatrix}$$ $M$ is a saddle point matrix with the following equations to fit the block structure of Equation .[^28] $$\begin{aligned} A_S & = 2 {\ensuremath{{S_k}\otimes{\operatorname{I}}}} \\ B_S & = \nu_{k}' \otimes A \\ C_S & = 0_{{\ensuremath{\left[{q}\times{q}\right]}}}\end{aligned}$$ We can calculate the term $A_S^{-1} B_S'$. $$\begin{aligned} A_S^{-1} B_S' & = \left[ 2{\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}}\right]^{-1} \left( \nu_{k}' \otimes A \right)' \\ &\stackrel{\eqref{kron-trans}\eqref{kron-inv}}{=} \frac{1}{2} {\ensuremath{\left({S_k^{-1}}\otimes{\operatorname{I}}\right)}} \left( \nu_{k} \otimes A' \right) \\ &\stackrel{\eqref{kron-dist}}{=} \frac{1}{2} \left( S_k^{-1} \nu_k \right) \otimes A'\end{aligned}$$ And as a result we have the following for $J_S$. $$\begin{aligned} J_S & = - \frac{1}{2} \left( \nu_{k}' \otimes A \right) \left[ \left( S_k^{-1} \nu_k \right) \otimes A' \right] \\ &\stackrel{\eqref{kron-dist}}{=} - \frac{1}{2} \left( \nu_{k}' S_k^{-1} \nu_k \right) \otimes \left(A A' \right) \end{aligned}$$ $J_S^{-1}$ is then, as below. $$\begin{aligned} J_S^{-1} & = -2 \left[ \left( \nu_{k}' S_k^{-1} \nu_k \right) \otimes \left( A A' \right)\right]^{-1} \\ &\stackrel{\eqref{kron-inv}}{=} -2 \left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1} \otimes \left(A A' \right)^{-1}\end{aligned}$$ For the upper right block of $M^{-1}$, we then have the following expression. 
$$\begin{aligned} A_S^{-1} B_S' J_S^{-1} &= \left[\left( S_k^{-1} \nu_k \right) \otimes A' \right] \left[\left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1} \otimes \left(A A' \right)^{-1}\right] \\ &\stackrel{\eqref{kron-dist}}{=} \left[S_k^{-1} \nu_k \left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1}\right] \otimes \left[A' \left(A A' \right)^{-1} \right]\end{aligned}$$ Since the first block element of $p$ is a vector of zeros, we can solve for $n$ to arrive at the following solution for $l$. $$\left(\left[S_k^{-1} \nu_k \left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1}\right] \otimes \left[A' \left(A A' \right)^{-1} \right]\right) \left(b - A \hat{x}_{k|k}\right) \\$$ The vector of Lagrange Multipliers $\lambda$ is given below. $$-2 \left[\left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1} \otimes \left(A A' \right)^{-1} \right] \left(b - A \hat{x}_{k|k}\right)$$ [^1]: The similar extension for the method of [@WCC2002] was made in [@GHJ2005]. [^2]: The subscript $k$ on a variable stands for the $k$-th time step, the mathematical notation $\mathcal{N}\left(\mu,\Sigma\right)$ denotes a normally distributed random vector with mean $\mu$ and covariance $\Sigma$, and all vectors in this paper are column vectors (unless we are explicitly taking the transpose of the vector). [^3]: We use the prime notation on a vector or a matrix to denote its transpose throughout this paper. [^4]: The $\operatorname{I}$ in Equation represents the $n \times n$ identity matrix. Throughout this paper, we use $\operatorname{I}$ to denote the same matrix, except in Appendix \[app::kv\], where $\operatorname{I}$ is the appropriately sized identity matrix. [^5]: Note that $v'v = {\ensuremath{\textnormal{trace}\left[{vv'}\right]}}$ for some vector $v$. [^6]: We could also minimize the mean square state estimate error in the $N$ norm, where $N$ is a positive definite and symmetric weighting matrix. In the $N$ norm, the optimal gain would be $K^N_k = N^{\frac{1}{2}}K_k$. [^7]: $A$ and $b$ can be different for different $k$. 
We don’t subscript each $A$ and $b$ to avoid confusion. [^8]: Note that $\Upsilon A$ is a projection matrix, as is $\left(\operatorname{I}- \Upsilon A\right)$, by definition. If $A$ is poorly conditioned, we can use a QR factorization to avoid squaring the condition number. [^9]: If $M$ and $N$ are covariance matrices, we say $N$ is smaller than $M$ if $M-N$ is positive semidefinite. Another formulation for incorporating equality constraints into a Kalman Filter is by observing the constraints as pseudo-measurements [@TS1988; @WCC2002]. When $W_k$ is chosen to be $P_{k|k}^{-1}$, both of these methods are mathematically equivalent [@Gupta2007]. Also, a more numerically stable form of Equation with discussion is provided in [@Gupta2007]. [^10]: Throughout this paper, a number in parentheses above an equals sign means we made use of this equation number. [^11]: We use the symmetry of $P_{k|k-1}$ in Equation and the symmetry of $S_k$ in Equation . [^12]: We used the symmetry of ${\ensuremath{\left({S_k}\otimes{\operatorname{I}}\right)}}$ here. [^13]: Here we used the symmetry of $S_k^{-1}$ and $\left(\nu_{k}' S_k^{-1} \nu_k \right)^{-1}$ (the latter of which is actually just a scalar). [^14]: We can use the unconstrained or constrained Kalman Gain to find this error covariance matrix. Since the constrained Kalman Gain is suboptimal for the unconstrained problem, before projecting onto the constrained space, the constrained covariance will be different from the unconstrained covariance. However, the difference lies exactly in the space orthogonal to which the covariance is projected onto by Equation . The proof is omitted for brevity. [^15]: $C$ and $d$ can be different for different $k$. We don’t subscript each $C$ and $d$ to simplify notation. [^16]: For the inequality constrained filter, we allow multiple iterations within each step. The $j$ subscript indexes these further iterations. [^17]: The previous active set is not relevant. 
[^18]: We can also do a midpoint approximation to find $F_{k,k-1}$ by evaluating the Jacobian at $\left(\hat{x}_{k-1|k-1} + \hat{x}_{k|k-1}\right)/2$. This should be a much closer approximation to the nonlinear function. We use this approximation for the Extended Kalman Filter experiments later. [^19]: We replace the ‘$\leq$’ sign with an ‘$=$’ sign and the ‘$\lessapprox$’ with an ‘$\approx$’ sign. [^20]: This method is how the Extended Kalman Filter linearizes nonlinear functions for $f_{k,k-1}\left(\cdot\right)$ and $h_k\left(\cdot\right)$. Here $\hat{x}_{k|k-1}$ can be the state prediction of any of the constrained filters presented thus far and does not necessarily relate to the unconstrained state prediction. [^21]: In these three methods, the symmetric weighting matrix $W_k$ can be different. The resulting $\Upsilon$ can consequently also be different. [^22]: Further, we can add constraints for some arbitrary $n$ time-steps ahead. [^23]: The figure only shows the noisy sine curve, which is the second element of the measurement vector. The first element, which is a noisy straight line, isn’t plotted. [^24]: Nonzero off-diagonal elements in the initial covariance matrix often help the filter converge more quickly. [^25]: The bracket subscript notation is used throughout the remainder of this paper to indicate the size of zero matrices and identity matrices. [^26]: The indices $m,n,p$, and $q$ and all matrix definitions are independent of any used earlier. Also, the subscript notation $a_{1,n}$ denotes the element in the first row and $n$-th column of $A$, and so forth. [^27]: The subscript $S$ notation is used to differentiate these matrices from any matrices defined earlier. [^28]: We use Equation with $B_S'$ to arrive at the same term for $B_S$ in Equation .
P2P Security

Authors:

Overview

Security is an essential component of any computer system, and it is especially relevant for P2P systems. In the following sections we will outline the main topics of P2P security, including:

- The Need for Security
- Consequences of poor Security
- Current Security methods
- Security in the Future

Need for Security

In these turbulent times you would think that P2P security would be the least of the world's problems. However, corporate fraud and loss of revenue due to attacks on internal networks have brought P2P to the forefront of the IT world. Napster was the headliner, but since its high-profile court case more and more P2P applications have been causing the corporate world headaches it could do without. With better security protocols this headache could be turned into a valuable asset for the corporate world and for the world at large.

The diagram on the next page illustrates the gaps in security when using P2P applications. We can see that we are letting these applications get inside our networks; the security of our "secure" network is now in jeopardy. Following on from this is the question of what we must protect ourselves against. We must outline the elements that are important to us before we address the issue of security. The main points are connection control, access control, operation control, anti-virus, and of course the protection of the data stored on our machines. Connection, access, and operation control are the priority issues here; if we can make these secure, the other two points should follow. The diagram illustrates all the main points that we must deal with.

Outlined in this section is a selection of threats that P2P applications are vulnerable to.

External Threats

P2P networking allows your network to be open to various forms of attack, break-in, espionage, and malicious mischief. P2P doesn't bring any novel threats to the network, just familiar threats such as worms and virus attacks. 
P2P networks can also allow an employee to download and use copyrighted material in a way that violates intellectual property laws, and to share files in a manner that violates an organisation's security policies. Applications such as Napster, Kazaa, Grokster and others have been popular with music-loving Internet users for several years, and many users take advantage of their employers' high-speed connections to download files at work. This presents numerous problems for the corporate network, such as consuming expensive bandwidth and being subject to a virus attack via an infected file download. Unfortunately, P2P networking circumvents enterprise security by providing decentralized security administration, decentralized shared data storage, and a way around critical perimeter defences such as firewalls and NAT devices. If users can install and configure their own P2P clients, all the network manager's server-based security schemes are out the window.

Theft: Companies can lose millions of euros worth of property, such as source code, to files disguised using P2P technologies. P2P wrapping tools such as Wrapster, a freeware utility (http://members.fortunecity.com/wrapster), can disguise a .zip file containing company source code as an MP3 of a music hit. As a result, an accomplice outside the company can use Morpheus to download the disguised file. To the company's security this looks like a common transaction, even if the company has frowned upon employees using P2P for music sharing. Little do they know that their company has just been robbed, and possibly millions of euros worth of software has been lost.

Bandwidth Clogging and File Sharing: P2P applications such as Kazaa (www.kazaa.com), Gnutella (http://gnutella.wego.com) and FreeNet (http://freenet.sourceforge.net) make it possible for one computer to share files with another computer located somewhere else on the Internet. 
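A defensive counterpart to the wrapping trick described above is to check file signatures ("magic bytes") rather than trusting file extensions. The sketch below uses the standard ZIP and MP3/ID3 signatures; the file names and data are made up for illustration.

```python
# Detect a disguised file by comparing its leading bytes against the
# signature expected for its extension.

MAGIC = {
    ".zip": [b"PK\x03\x04"],
    ".mp3": [b"ID3", b"\xff\xfb"],      # ID3 tag or a bare MPEG frame sync
}

def extension_matches_content(name, data):
    ext = "." + name.rsplit(".", 1)[-1].lower()
    sigs = MAGIC.get(ext)
    if sigs is None:
        return True                      # unknown extension: no opinion
    return any(data.startswith(s) for s in sigs)

# A ZIP payload disguised with an .mp3 extension fails the check.
disguised = b"PK\x03\x04" + b"\x00" * 16
assert not extension_matches_content("hit_song.mp3", disguised)
assert extension_matches_content("archive.zip", disguised)
```

A perimeter gateway applying a check like this would flag the "MP3" in the Wrapster scenario as a renamed archive, though a determined insider could still defeat signature checks with encryption.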
A major problem with P2P file-sharing programs is that they result in heavy traffic, which clogs the institution's networks. The rich audio and video files that P2P users share are very big. This affects response times for internal users as well as e-business customers, which results in lost income.

Bugs: In order for P2P file-sharing applications to work, the appropriate software must be installed on the user's system. If this software contains a bug it could expose the network to a number of risks, e.g., conflicts with business applications, or it could even crash the system.

Encryption Cracking: Distributed processing is another P2P application. Taking lots of desktop computers and adding them together results in a large amount of computing power to apply to difficult problems. Distributed.Net is a prominent example of this. In 1999 Distributed.Net, along with the Electronic Frontier Foundation (www.eff.org), launched a brute-force attack on the 56-bit DES encryption algorithm. They broke DES in less than 24 hours. Distributed.Net were able to test 245 billion keys per second. At the time DES was the strongest encryption algorithm that the US government allowed for export.

Trojans, Viruses, Sabotage: A user could quite possibly download and install a booby-trapped P2P application that could inflict serious damage. For example, a piece of code that looks like a popular IM or file-sharing program could also include a backdoor to allow access to the user's computer. An attacker would then be able to do serious damage or to obtain more information than they should have. P2P software users can easily configure their application to expose confidential information for personal gain. P2P file-sharing applications can result in a loss of control over what data is shared outside the organisation. P2P applications get around most security architectures in the same way that a Trojan horse does. 
The P2P application is installed on a "trusted device" that is allowed to communicate through the corporate firewall with other P2P users. Once the connection is made from the trusted device to the external Internet, attackers can gain remote access to the trusted device for the purpose of stealing confidential corporate data, launching a Denial of Service attack or simply gaining control of network resources.

Backdoor Access: P2P applications such as Kazaa, Morpheus (www.morpheus.com) or Gnutella enable people all over the world to share music, video and software applications. These applications expose data on a user's computer to thousands of people on the Internet. These P2P applications were not designed for use on corporate networks, and as a result they introduce serious security vulnerabilities to corporate networks if installed on networked PCs. For example, if a user starts Gnutella and then clicks into the corporate Intranet to check their email, an attacker could use this as a backdoor to gain access to the corporate LAN.

Non-encrypted IM: Instant messaging applications like those provided by AOL, Microsoft and Yahoo also pose an information threat to a company. If these applications are used to discuss sensitive information, an attacker can read all the messages that are sent back and forth across the network or Internet by using a network-monitoring program. IM applications are being developed and enhanced with new capabilities such as voice messaging and file sharing. Adding file sharing to the IM application also adds all of the risks of the file-sharing applications described previously.

Confidentiality: Kazaa and Gnutella give all clients direct access to files that are stored on a user's hard drive. As a result, it is possible for a hacker to find out what operating system the peer computer has and connect to folders that are hidden shares, thus gaining access to folders and information that is confidential. 
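Returning to the Encryption Cracking discussion above, a quick back-of-the-envelope check of the quoted Distributed.Net figures (assuming a sustained rate of 245 billion keys per second):

```python
# Time to exhaust the full 56-bit DES keyspace at the quoted rate.
keyspace = 2 ** 56                      # about 7.2e16 possible keys
rate = 245e9                            # keys tested per second (quoted figure)

full_sweep_hours = keyspace / rate / 3600
average_hours = full_sweep_hours / 2    # on average the key is found halfway

# A full sweep takes roughly 82 hours at this rate, so the sub-24-hour
# result implies the key turned up early in the search (and the effective
# rate grew as more machines joined the effort).
assert 80 < full_sweep_hours < 84
```

The point of the arithmetic is that even modest P2P-pooled computing power puts a 56-bit keyspace within days of exhaustion, which is why 56-bit DES is no longer considered safe.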
Authentication: There is also the issue of authentication and authorization. When using P2P, you have to be able to determine whether the peer accessing information really is who they say they are, and ensure that they access only authorized information.

Internal Threats

Along with the external threats previously described, there are a few internal issues that have to be dealt with.

Interoperability: Interoperability is a major security concern within P2P networks. The introduction of different platforms, different systems and different applications working together in a given infrastructure opens up a set of security issues associated with interoperability. The more differences in a given infrastructure, the more compounded the security problems.

Private Business on a Public Network: Many companies conduct private business on a public network. This exposes them to various security risks, which must be addressed in order to avoid the liability this use entails.

Adding and Removing Users: There must be a feasible method to add users to, and delete users from, the network without increasing vulnerability. The system is under the most threat from users and former users who know the ins and outs of the system, e.g. the existence of trapdoors.

Distributed Dangers: When using distributed processing applications, the user is required to download, install and run an executable file on their workstation in order to participate. A denial of service could result if the software is incompatible or contains bugs.

The People Problem: There will always be malicious users who are intent on gaining clandestine access to corporate networks. No matter what security protocols are put in place, a skilful attacker, given enough time, will find a way around them. So all the security experts need to do is keep ahead of the hackers by creating bigger and better protocols. But that's easier said than done!
Existing Security Standards and Techniques in P2P Networks

At an alarming rate, people are adopting the tools of the peer-to-peer (P2P) revolution in an ad hoc fashion. Company files are increasingly made available to the world directly from a user's PC. Databases, spreadsheets, even entire applications are becoming enabled with P2P features, and critical information is flowing out from every PC. P2P systems typically provide mechanisms that include searching for specific content or documents, discovering other peers running the software, and implementing any number of other application-level tools, such as collaborative editing, instant messaging, or remote wireless mobility support. So it is easy to see why security is such a crucial factor in P2P networks. Defending against the threats of ad hoc P2P deployment, and managing or reducing the risks of loss of information or availability of systems, requires foresight, planning, and careful selection of the P2P infrastructure upon which your P2P-enabled applications and services will be built.

Security Mechanisms

All security mechanisms deployed today are based on either symmetric (secret key) or asymmetric (public key) cryptography, or sometimes a combination of the two. Here we will introduce the basic aspects of the secret key and public key techniques and compare their main characteristics.

Secret Key Techniques: Secret key techniques are based on the fact that the sender and recipient share a secret, which is used for various cryptographic operations, such as encryption and decryption of messages and the creation and verification of message authentication data. This secret key must be exchanged in a separate out-of-band procedure prior to the intended communication (using a PKI, for example).

Public Key Techniques: Public key techniques are based on the use of asymmetric key pairs. Usually each user is in possession of just one key pair.
One key of the pair is made publicly available, while the other is kept private. Because one key is public, there is no need for an out-of-band key exchange; however, there is a need for an infrastructure to distribute the public key authentically. Because no pre-shared secret is needed prior to a communication, public key techniques are ideal for supporting security between previously unknown parties.

Asymmetric Key Pairs: Unlike a front-door key, which allows its holder to lock or unlock the door with equal facility, the public key used in cryptography is asymmetric: anyone holding the public key can encrypt a message with relative ease, but can decrypt it, if at all, only with considerable difficulty. Besides being one-way functions, cryptographic public keys are also trapdoor functions: the inverse can be computed easily if the private key is known.

Protocols

Mechanisms for establishing strong, cryptographically verifiable identities are very important. These are industry-standard authentication protocols that allow peers to ensure that they are speaking with the intended remote system.

Secure Sockets Layer (SSL) protocol: For protection of information transmitted over a P2P network, some P2P systems employ the industry-standard Secure Sockets Layer (SSL) protocol. This ensures that files and events sent will arrive unmodified, and unseen by anyone other than the intended recipient. Moreover, because both peers use SSL, both sides automatically prove who they are to each other before any information is transferred over the network. The protocol provides mechanisms to ensure tamper-proof, confidential communications with the right counterpart, using the same well-proven techniques used by all major website operators to protect consumer privacy and financial information transmitted on the Internet.

IPSec technologies: Most VPNs (virtual private networks) use IPSec technologies, the evolving framework of protocols that has become the standard for most vendors.
IPSec is useful because it is compatible with most VPN hardware and software, and it is the most popular choice for networks with remote-access clients. IPSec requires very little knowledge on the part of clients, because the authentication is not user-based, meaning a token (such as SecurID or Crypto Card) is not used. Instead, the security comes from the workstation's IP address or its certificate (e.g. X.509), establishing the user's identity and ensuring the integrity of the network. An IPSec tunnel basically acts at the network layer, protecting all the data packets that pass through, regardless of the application.

Public Key Infrastructure (PKI)

An industry standard: a full-featured X.509 Public Key Infrastructure (PKI) over a Secure Sockets Layer (SSL) network backbone. The combination of X.509 PKI authentication and SSL transport encryption is the established cryptographic standard for Internet e-commerce. X.509 PKI authentication allows security certificates from Endeavors, or from any other recognized X.509 certificate authority, to be used to establish the true identity of any peer device when it comes online. SSL point-to-point encryption enables each pair of peers that communicate with each other to have a unique key for that pairing. The advantage of this is that when a peer goes offline from a community, all its unique pairing keys become invalid, but no pairing keys between other members of the community are affected.

What about VPN Security?

The key word in "virtual private networks" is private. The last thing a business wants is to have sensitive corporate information end up in the hands of some hacker, or worse, the competition. Fortunately, VPNs are widely considered extremely secure, despite using public networks. Why are they secure? In order to authenticate a VPN's users, a firewall is necessary.
All VPNs require configuration of an access device, either software- or hardware-based, to set up a secure channel. A random user cannot simply log in to a VPN, as some information is needed to allow a remote user access to the network, or even to begin a VPN handshake. When used in conjunction with strong authentication, VPNs can prevent intruders from successfully authenticating to the network, even if they were somehow able to capture a VPN session.

The Future of P2P Security

The constant running theme in the security of P2P is that of trust: trust in the other users we interact with, and trust in the software vendors who supply us with the necessary applications. If we could have more faith in this trust, or feel a greater sense of security, perhaps the development of P2P would grow even faster than it already is. Many proposals are already being studied. People are acknowledging that security is an area P2P must address if it is to be accepted by consumers.

Users Gaining Their Own Trust: One very interesting idea recently proposed is that of users gaining trust within the P2P community. All users would be assigned a unique digital signature, like an IP address, but per user rather than per machine. Associated with this digital signature would be a level of trust. Trust levels would vary from, say, zero to twenty. Depending on a user's past behaviour, their trust level would either be promoted on the grounds of valid use of the network, or demoted for acts of malice and misuse. The proposed plan states that all users' trust levels would begin rather low. This is merely to combat unwanted users creating new accounts and abusing a high trust level immediately. Users would have to be active on the network for some time (say, one or two months) before their trust level would be pushed up a level. Users could also keep a local record of other known users, to whom they may want to assign a local trust level, bypassing the global trust policy.
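As a concrete illustration, the proposed reputation scheme might be sketched roughly as follows. The 0-20 scale, the low starting level, the one-to-two-month probation period and the local overrides all come from the description above; the class and method names, and the specific starting value, are hypothetical choices for the sketch, not part of any real P2P implementation.

```python
# Hypothetical sketch of the per-user trust scheme described above.
# The scale, probation period and local-override idea come from the text;
# every name and constant here is illustrative only.

MIN_TRUST, MAX_TRUST = 0, 20
STARTING_TRUST = 2          # new accounts start low to deter throwaway abuse
PROBATION_DAYS = 60         # roughly the one-to-two-month waiting period

class TrustedPeer:
    def __init__(self, signature):
        self.signature = signature      # unique per-user digital signature
        self.trust = STARTING_TRUST
        self.active_days = 0
        self.local_overrides = {}       # signature -> locally assigned trust

    def promote(self):
        """Reward valid use of the network, but only after probation."""
        if self.active_days >= PROBATION_DAYS:
            self.trust = min(MAX_TRUST, self.trust + 1)

    def demote(self):
        """Punish malice or misuse; repeated demotion silences the peer."""
        self.trust = max(MIN_TRUST, self.trust - 1)

    def effective_trust(self, other):
        """Prefer a local record of a known peer over the global level."""
        return self.local_overrides.get(other.signature, other.trust)
```

The key design point the sketch captures is that promotion is gated by time on the network, while demotion is not, so a fresh account can be silenced quickly but cannot climb quickly.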
This proposal has many hurdles to clear, of course; it is merely an idea to be developed. The problem that it overcomes is that of the centralized managing authority: instead, the users of the network are the authority. If the general public continuously tries to demote a user, he or she will eventually lose all privileges and be silenced from other users. The idea also rewards genuine users for their efforts in keeping the network policed, and for their good behaviour on the network. It is possibly a bit naive, though, as we all know that most humans (especially adolescent ones) will do the exact opposite of what they are meant to do if given no choice. In other words, people do not like to be told what to do.

Biometrics: Biometrics involves the use of a person's unique characteristics to authenticate them. Traits that are commonly utilized include a person's facial image, signature, fingerprint or retinal pattern. One key feature of biometrics is that the user is no longer required to remember any passwords or store any key data, a major weakness in conventional authentication systems. Ultimately, the technology could find its strongest role as an integrated and complementary piece of a larger authentication system, perhaps in combination with the cryptographic certificates mentioned above, rather than as a stand-alone single point of defense. In the future, many experts foresee biometrics playing a key role both in enabling public key infrastructure deployment, by protecting public and private keys, and in smart card technology, in an effort to support personalized e-commerce.

Quantum Key Cryptography: For the short term, the US government is adopting a new encryption standard called the Advanced Encryption Standard (AES), which will eventually replace DES. "When approved, the AES will be a public algorithm designed to protect sensitive government information well into the 21st century." If that's true, what will be used after AES?
One idea currently being proposed is the notion of quantum cryptography. Many modern encryption systems depend on the difficulty of mounting brute-force attacks on secret keys, due to processing and time constraints. Although still at the theoretical stage, the performance improvements offered by a hypothetical quantum computer would render many algorithms useless, and new encryption algorithms would obviously be needed. Quantum encryption uses photon state as the key for encoding information. According to the Heisenberg uncertainty principle, it is impossible to discover both the momentum and position of a particle at any given instant in time. Therefore, in theory, an intruder can't discover secret keys based on particle state information; the intruder would need the actual particle to decipher any data encrypted with the key. Unfortunately, this concept is, for the moment, incredibly complex to implement. IBM scientists constructed the first working prototype of a quantum key distribution (QKD) system in the late 1980s. Back then they could transmit quantum signals just under half a meter through open air. Today, fiber optic cables can carry the signal up to 31 miles. This still isn't very far, but it is definitely good progress. And although we might not see QKD come to market for quite some time, the technology sounds incredibly promising.

Conclusion

It is obvious from the above that security is a crucial issue when it comes to designing and implementing P2P systems. At the moment it is probably the main inhibiting factor in the growth of P2P. It is vital that users become confident in the ability of the security measures being utilised to protect them, in order for P2P technology to reach its full potential. At present, security measures in general are failing to inspire consumer confidence, a problem that must be addressed immediately.
The present invention relates to the field of magnetic recording, particularly to the measuring of an error rate for disk drives, based on the detection of data by means of a digital data processing method.
A novel technique to facilitate laparoscopic repair of large paraesophageal hernias. Laparoscopic repair of large paraesophageal hernias (LPEH) is technically challenging, and requires advanced laparoscopic skills. We have developed a novel technique for facilitating laparoscopic repair of LPEHs safely and easily, using a Nelaton catheter. Seven patients with LPEHs were operated on through a laparoscopic approach. During surgery, the left lobe of the liver and right diaphragmatic crus were elevated using a suspended thread covered by a Nelaton catheter. All patients were operated on laparoscopically using this technique. No patient required conversion to open method. The median operating time was 205 minutes and the range was from 155 to 295 minutes. No intraoperative or early complications occurred in any patient. Late complications occurred in 2 patients due to a small sliding hernia: a slipped fundoplication in 1 patient, and a gastric ulcer in the other. In conclusion, laparoscopic repair of LPEH is a challenging procedure that requires wide experience in laparoscopic gastroesophageal surgery. Further refinement for this operation may be necessary.
// // Generated by class-dump 3.5 (64 bit) (Debug version compiled Oct 25 2017 03:49:04). // // class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2015 by Steve Nygard. // #import <Install/IFDInstallController.h> @interface IFDInstallController (IA_InstallController_PreInstallActionsExtensions) - (id)applicationsToQuitBeforeInstallation; @end
Apple has begun signaling to education shoppers that it plans to relaunch and overhaul its online store in the near future, making it easier to do business. AppleInsider was alerted on Wednesday to the notice that can be found on the top of Apple online stores geared for kindergarten through 12th grade institutions. There, shoppers are greeted with an alert about "the new Apple Store." "Apple is launching a new online store, which is your tool to shop and place orders with Apple," the note reads. "Proposal creation, order status, and a dramatically simplified user interface will make it easier to do business with Apple — all in a secure and reliable environment." For now, it appears that any forthcoming changes will just be related to the company's specialty stores designed for education institutions. However, the change could be a sign that Apple plans to enact a similar overhaul of its general consumer storefront at some point in the near future. The note atop the store for K-12 buyers also shares information on "getting ready" for the upcoming transition, stating that it will be "easy." "Your current Apple ID and password will continue to work," it states. "In the coming weeks, you will receive more information about the store's features, benefits, and launch date."
There was a time when you couldn’t turn on an NFL game without hearing the announcers opine on fullbacks: goliaths of the gridiron who mowed down defenders. Backs like Bronko Nagurski and Larry Csonka played the position with such grit and ruggedness that they are remembered years after their final, bruising 3-yard runs. Fullbacks are easy to root for. Their ability to both absorb and mete out punishment sometimes seems superhuman. When Csonka, the Miami Dolphins fullback, was inducted into the NFL Hall of Fame in 1987, his longtime head coach Don Shula gave perhaps the greatest and most apt description of a fullback ever uttered: “He was blood and guts. Dirt all over him. He had 12 broken noses.” But despite enduring in the sport’s collective memory, the style of football that produced these bruisers was very much of a particular time and place. It’s no exaggeration to say that the modern NFL has all but abandoned the position. According to Over The Cap, a website that tracks player contracts, just 14 of the NFL’s 32 teams currently have a fullback signed to a multi-year deal, and the average yearly salary for the position is just $1.16 million. Philadelphia head coach Doug Pederson has admitted that the Eagles don’t invest resources in the position, and the data seems to suggest that much of the league agrees with their approach. On the field, too, the second back has become an endangered species: During the 2006 regular season, there were 13,157 total offensive snaps from formations with two running backs. By 2018, that number had plummeted to 3,714. And fullbacks don’t have to look far for a villain to blame: pass-happy offenses. In a league increasingly focused on moving the ball through the air, it turned out that the fullback was the most obvious position to lose importance. 
Since every NFL offense has to include five linemen, who are ineligible receivers, and a quarterback, there are only so many ways you can mix and match the remaining five eligible players to meet your offensive goals. As NFL teams moved to formations with multiple receivers to support their passing attacks, it was inevitable that some position had to feel the pinch. Yet, not everyone agrees that fullbacks are obsolete. In fact, some of the NFL's best teams this season are featuring them. Through Week 6 of the 2019 season, there are six NFL teams who've run 100 or more offensive snaps with two backs on the field — and their combined record is an impressive 23-10-1. Even more interesting, the two teams that trot out fullbacks the most — the New England Patriots and San Francisco 49ers — are the only two undefeated teams left in the league.

New England and San Francisco play fullbacks the most
Teams by frequency of offensive snaps with two running backs

Rank  Team            Record  Snaps  Yds/Play  Yds/Att.  Yds/Rush  Success%*
  1   New England     6-0     178    4.5       6.6       3.0       37.1%
  2   San Francisco   5-0     139    7.4       11.8      5.7       54.0%
  3   Minnesota       4-2     134    6.4       8.5       5.5       41.8%
  4   Detroit         2-2-1   107    5.0       7.2       3.9       37.4%
  5   Denver          2-4     106    5.6       7.7       4.2       44.3%
  6   Baltimore       4-2     103    4.5       5.9       4.2       41.7%
  7   New Orleans     5-1     87     5.7       7.8       4.4       41.4%
  8   L.A. Chargers   2-4     79     4.9       8.0       3.0       40.5%
  9   Oakland         3-2     74     5.4       9.4       4.2       50.0%
  9   Green Bay       5-1     74     5.1       6.9       3.6       45.9%
 11   Chicago         3-2     71     3.7       4.7       3.8       36.6%
 12   Buffalo         4-1     70     4.2       6.1       3.2       44.3%
 13   Carolina        4-2     62     5.5       7.8       4.5       35.5%
 14   Atlanta         1-5     60     6.6       10.7      4.1       50.0%
 15   N.Y. Giants     2-4     39     5.7       8.2       2.7       46.2%
 16   N.Y. Jets       1-4     38     5.0       8.1       2.5       23.7%
 17   Dallas          3-3     36     4.1       4.7       3.7       33.3%
 18   Kansas City     4-2     29     4.0       8.1       2.2       44.8%
 19   Miami           0-5     26     3.9       6.2       1.7       30.8%
 20   Houston         4-2     24     2.9       5.0       2.1       37.5%
 21   Seattle         5-1     17     1.8       20.0      1.3       35.3%
 22   Arizona         2-3-1   16     10.6      15.4      6.0       62.5%
 22   Tampa Bay       2-4     16     2.5       5.6       1.1       31.3%
 24   Tennessee       2-4     12     1.8       1.6       3.3       41.7%
 25   Cincinnati      0-6     9      1.6       5.2       2.0       44.4%
 26   Indianapolis    3-2     8      3.0       7.0       0.6       37.5%
 26   Washington      1-5     8      1.4       4.3       -0.4      37.5%
 26   Philadelphia    3-3     8      2.8       11.5      1.4       37.5%
 29   Pittsburgh      2-4     7      2.9       3.4       1.5       57.1%
 30   Jacksonville    2-4     5      1.0       0.0       3.7       20.0%
 31   Cleveland       2-4     2      10.5      0.0       21.0      50.0%
 32   L.A. Rams       3-3     0      0.0       0.0       —         —

2019 totals through Week 6. *Success rate is the share of plays with positive expected points added. Source: ESPN Stats & Information Group

Before we get carried away, the Patriots probably can't credit two-back sets with too much of their success. Their expected points added per play for the package is actually negative, and they've gained just 4.5 yards per play with a fullback on the field. Still, four of the six teams to feature two-back formations extensively are generating positive EPA on those plays, and none more than Kyle Shanahan's 49ers. The Niners have run plays with a fullback on 139 of 348 offensive snaps through five games in 2019, and they're gaining about a quarter of a point per play. Yet what's surprising about the 49ers and the rest of the teams utilizing a fullback is that their offensive gains aren't coming from the obvious play type: the run. Having an added blocker in the backfield hasn't really made running the ball more effective. Instead, the most successful gains from the package have come from the pass plays when a fullback is present. Even as the two-running-back personnel package has become more rare, passing from it has become increasingly effective. In all but three seasons from 2006-18 we can say with a high degree of confidence that passing with two backs was better than rushing with two backs, on average.
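The combined 23-10-1 record quoted above for the six heaviest users of two-back personnel can be checked directly from the table; the sketch below simply tallies the six teams' records as listed there.

```python
# Tally the records of the six teams with 100+ two-back snaps,
# taken straight from the snap-count table above: (wins, losses, ties).
records = {
    "New England":   (6, 0, 0),
    "San Francisco": (5, 0, 0),
    "Minnesota":     (4, 2, 0),
    "Detroit":       (2, 2, 1),
    "Denver":        (2, 4, 0),
    "Baltimore":     (4, 2, 0),
}

wins = sum(w for w, l, t in records.values())
losses = sum(l for w, l, t in records.values())
ties = sum(t for w, l, t in records.values())

print(f"combined record: {wins}-{losses}-{ties}")  # prints "combined record: 23-10-1"
```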
All told, passing with a fullback on the field has been a winner. The question is: Why? The dominant narrative around two-running-back sets is that they benefit rushing, not passing. But there are at least a couple of reasonable explanations for passing’s surprising effectiveness, starting with former 49ers head coach Bill Walsh, who extolled the virtues of the fullback in the passing game nearly 40 years ago. He believed the fullback was the “critical part” of the 49ers passing attack because of the matchup difficulties the position presents. (A speedy, athletic fullback can be particularly hard for a middle linebacker to defend in space, but the defense has no choice but to send one out to counter the threat of the run he presents as an added blocker in the backfield.) And while most teams still run more than they pass with a fullback in the game, there is some evidence that NFL coaches have taken Walsh’s ideas to heart. The gap between the frequency of rush and pass plays called in two-back groupings has narrowed slightly since 2006, and creative play-callers like Shanahan are lining up their top receiving weapons at the fullback with success. The play below from Week 6 shows 49ers tight end George Kittle lined up as the fullback in the offset I-formation. It ends with the Niners gaining 45 yards and setting up a first and goal. Aside from the matchup difficulties a talented fullback presents, deception is another likely explanation for passing success out of two-back personnel. When teams bring out a fullback, the defense is strongly incentivized to stack the box and defend the run, leaving parts of the field vulnerable to attack with the pass. On the pass to Kittle, the play design called for him to sell to the middle linebacker that he was going to lead block. Kittle does a good job fooling No. 51 Troy Reeder, and then he breaks across the field — away from the linebacker — and is wide open for a big gain. 
Still, if deception is so effective, why don’t more teams do it? One explanation is the negative perception it appears to have around the league. Recently, former Ravens linebacker Ray Lewis was asked for his thoughts on the decline of the fullback. Lewis lamented that defenses resort to trickery to win, and appeared to long for simpler times when the physicality of the game was paramount. “The game is about men battling men … that’s what the game was. The game was not tricks. Now we tricky. Now everybody’s smart,” Lewis said. Teams might prefer to win by lining up and punching the other guy in the mouth, but the data supports Walsh’s view of the world. Given the precedent set by Walsh in San Francisco, there’s a certain symmetry to Shanahan and the Niners leading a mini fullback-resurgence, but it’s not just vapid nostalgia. When you combine a consistent matchup advantage with deception, you probably have a recipe for success in the NFL. As a potent passing weapon, perhaps the fullback is back to stay. Check out our latest NFL predictions.
ISDA

ISDA may refer to:

International Swaps and Derivatives Association, trade organization of participants in the market for over-the-counter derivatives
International Semiconductor Development Alliance, technology alliance between IBM, AMD/GlobalFoundries, Freescale, Infineon, NEC, Samsung, STMicroelectronics and Toshiba
Irish Student Drama Association, association for intercollegiate competition in Irish amateur student theatre
Independence of Smith-dominated alternatives, a voting system criterion
The Senate Commerce Committee held a markup today where Senator John McCain's (R-AZ) pipeline safety legislation, S. 2438, was approved. The overall outcome was not unexpected -- the final legislation contained several provisions that went a little bit further than Enron and INGAA would have liked, but there was rigorous debate over issues like States' role and mandated integrity measures, and close votes on several amendments. Senator McCain and other bill proponents like Senator Slade Gorton realize those issues will need to be addressed before the legislation proceeds to the Senate floor.

The Committee considered several amendments to the legislation:

1) McCain substitute amendment -- Approved by voice vote. The amendment substituted the revised "Chairman's mark" text that was developed over the last several days in place of the original version of S. 2438. All further amendments at the markup were in the context of this substitute text.

2) McCain amendment to Section 13 (b) (on operator assistance investigations) -- Approved by voice vote. This amendment attempts to fix language in the substitute text that was problematic to just about everyone in industry and on the Hill because it unconstitutionally forced an operator to choose between exercising constitutional rights to protect himself in a criminal investigation or keeping his job. The McCain amendment did not seem to improve things much, and many Senators could not understand it. Senator John Breaux (D-LA) objected to it, and asked that the entire section be deleted, and there was some good debate on both sides of the issue. In the end, McCain asked that the amendment be passed with a promise that he would work with Breaux and others to fix it before Senate floor consideration.

3) Sen. John Kerry (D-MA) Amendment on Enforcement -- Approved by voice vote.
Another confusing vote, in which many members did not understand the changes being made but agreed to them on the condition that clarifications be made before Senate floor action. Late last night, Enron led a group including companies from INGAA and AGA in providing comments to Senator Kerry, which caused him to make substantial changes to his amendment before it was voted on at markup, including dropping provisions allowing citizen suits and other troubling issues. In the end, the amendment that passed was acceptable to industry.

4) Sen. Sam Brownback (R-KS) amendment on Advisory Council Pilot Program -- Rejected 12 to 4. Brownback's amendment would have replaced the Advisory Council Pilot Program as proposed in the substitute text with a requirement that already existing technical advisory committees to OPS be required to meet regionally. There was a good debate on the issue, but Senator Gorton made a strong plea to members that the new Advisory Councils would only be a pilot program and would not have any binding role. He also agreed to work with industry on improvements, such as representation of pipelines on the councils, before Senate floor action.

5) Brownback amendment on State Role -- Rejected 10 to 9. This amendment was written by Enron and El Paso, striking language from the bill that was overly broad in suggesting that a State had oversight authority over "other activities" beyond proper areas like accident investigation, construction-related activities, and inspection. There was a heated debate, with Brownback and Breaux making excellent arguments for us, but in the end we were 1 vote short. In fact, 2 Democrats switched their votes at the last minute to tip the balance. I have already had discussions with Senate staff after the markup indicating that this language will need to be fixed to our satisfaction before Senate floor action.

6) Sen. Ron Wyden (D-OR) amendment increasing public right-to-know requirements -- Approved by voice vote.
This amendment increases the kinds of information that need to be made widely available to the public. Because of a state of confusion in which many of the Senators left the room to go vote on the Senate floor, there was no opposition voiced to the amendment and it passed without much discussion.

7) Breaux amendment on Integrity Inspection Program -- Rejected by voice vote. This was another amendment written by Enron and El Paso that would have removed the bias of substitute text language toward internal inspections and pressure testing and broadened the range of tools that a pipeline could include in an integrity management plan. It also would have removed a requirement that any state or local official would have the chance to comment on the plan and receive a response. The debate was heated over this amendment, with Senator Gorton indicating that he could accept the first part of the amendment (removing the bias toward internal inspection/pressure testing) but not the second, which he felt weakened state and local opportunity to comment on pipelines' integrity plans. The committee agreed to reject the amendment, but work to fix the second part before Senate floor action.

Procedurally, it was a strange markup session. A quorum of the Committee (11 members) needs to be present to vote on final passage of the bill. A vote on the Senate floor came up as Senator Kerry was explaining his amendment (#3), and as Senators were leaving, Senator McCain very quickly asked that the entire bill be passed on voice vote before a quorum left the room, subject to consideration of amendments when they returned. The bill was passed before many knew what was happening. Of course, only about 7 members came back to the room for the rest of the markup, so much of the voting was done without everyone's full participation, or by proxy -- which had serious effects on the outcome of several amendments.
As you can see from the notes on many of the amendments, there will be an opportunity to fix the legislation before it moves any further. Several Senators indicated after the markup that they had voted with Senator Gorton on several of the amendments as a courtesy to him, but would not support the bill's movement to the Senate floor without serious changes being made. One key ally is Senator Trent Lott (R-MS), who supported all industry amendments. His control of the Senate floor schedule as Senate Majority Leader will be key in driving changes to the legislation. If satisfactory changes are not made, the bill will move no further. There will also be an opportunity to provide comments for Committee Report language that will accompany the final bill that was passed. The report will give explanations of several of the concerns that were raised, and provide legislative history that will help clarify areas where there is uncertainty about the real effect of some of the provisions. We did have several victories. Many of the amendments that were expected -- worse language from the Administration or Patty Murray legislation -- were not even offered because of the advance work we were able to do with Republican and Democratic Commerce Committee members and staff. We were able to effect a number of changes to the "Chairman's mark" or substitute amendment before the markup even occurred. Another interesting note: Senator Kay Bailey Hutchison was a complete non-factor in the markup. She is the Subcommittee chair responsible for this issue, yet she did not make an opening statement and she did not vote at all on several key amendments of concern to industry. We will be working on the "where do we go from here" strategy in the coming weeks, and will report as developments occur. Thanks again to all who helped out in preparation for the markup in what was an amazing team effort. The technical support of Dave Johnson, Colleen Raker and Lou Soldano was extremely valuable.
Phil Lowry's and Dave Johnson's availability to do key meetings with Senators and staff was critical to our efforts. And Dave and I can attest to the fact that Enron took a leadership role in the run-up to the markup, guiding the efforts of INGAA and others and directly influencing the direction this legislation will take in the coming weeks and months.

Jeff Keeler
(202) 466-9157
"The biggest advance of the abortion industry in America has been the passage of Obamacare," she said in Dallas. Actual pro-choice advocates beg to differ about the Affordable Care Act. But maybe they're blinded by those fictional millions.
Burrilanka

Burrilanka is a village situated in East Godavari district in Andhra Pradesh State.

References

Category:Villages in East Godavari district
""The night was..."" "(thunder)" ""The night was..."" ""The night..."" "(grunts)" "The Phantom of the Novel... ..is coming to haunt the pages of Larry Donner." "Jeez, what the hell am I doing?" "Hello, Op. Remember me?" "Professor Blank?" "We're back." "Well, I'm sure you all know the name Margaret Donner." " Unless you live under a rock." " No!" "Ladies and gentlemen, the author of the best-selling novel "Hot Fire"." " Author of the best-selling novel?" " You must be very proud of yourself." " Well, I..." "I am." " The woman stole my book!" " What's to be proud of?" "..and there's just you,... ..you facing you in that mirror, do you say "Margaret, you did it"?" " Yes, I do." " Slut!" "She's a slut!" "Look at her!" "Slut!" " Hey!" "Hey, man, your wife's on TV!" " Ex-wife, Lester." "Ex-wife." " She looks good!" " Criminals don't age." "It's a common fact." " What the hell do you want?" " Something for a gig." " You got anything in green?" " You borrowed all my green." "It is a lot more difficult for women to get themselves published..." "How about women thieves, Oprah?" "Big difference." "I would say that, once I was divorced,... ..blissfully divorced,... ..a freedom overcame me, and I was allowed to be the writer, the artist,... ..that I was never allowed to be within the confines of my prison-like marriage." "Did you think about writing when you were with this beast?" " Well, I did." "I did, yes." " (laughter)" "Because he fancied himself a sort of a hack writer." "I watched him at the typewriter and thought:" "Oh, God, I can do that in spades." "Then did, clearly." "How do you think he will feel now about your success?" " Frankly, I don't really care how he feels." " (laughter)" " Well, Margaret..." " (applause)" "So she stole your book." "Write another one and forget it." " I am writing." " Oh, you're writing?" " Yes, I am." " Yeah, right! "The night was..."" " The night was what?" " I just started." " You been on "The night was" since July." 
" It takes place in the Yukon." "..severed the ties." "You live in Hawaii now." " Yes." "I just adore a tropical climate." " Look at those earrings, man!" "It's your money." "How was it that you, Margaret Donner,... ..produced such a brilliant piece of writing in... just your first time out?" "Well, Oprah, I mean..." "it's the story of my life." "It's my life, Margaret!" "And I want it back!" ""Hot Fire", ladies and gentlemen." "Thank you so much for being on the show." " My pleasure." " I want it back!" " Owen!" " (thunder)" "Owen!" "Owen!" "Owen!" "Owen!" " What?" "!" " Get me a soda with some ice in it!" " Owen, hurry up!" " Momma..." " Chop chop, Owen!" "Come on!" " All right!" " You were writing a letter!" " No, Momma!" "You were writing to tell them to take me away!" "You want them to take me away!" "I'm writin' a story for class, Momma!" "I don't want 'em to take you away!" " Yes, you do!" " Owen loves his momma!" "(mimics) "Owen loves his momma."" "(singsong) Owen loves his momma, Owen loves his momma..." " Hurry up with that soda!" " Coming, Momma." "(mimics) "Coming, Momma." I'm choking to death, you moron!" " You're too damn slow!" " I'm sorry, Momma." "(mimics) "I'm sorry, Momma."" " Where are the salted nuts?" " The salted ones are no good for you." "The unsalted ones make me choke!" "Aarrghhh!" "Momma!" "You clumsy poop!" "What'd you do that for?" "Come on, move it, lard-ass!" "Pick up every piece!" ""'Dive,... ..dive!" "' yelled the captain through the thing."" ""So the man who makes it dive pressed a button or something and it dove,... ..and the enemy was foiled again."" ""'Looks like we foiled them again' said Dave."" ""'Yeah' said the captain."" ""'We foiled those bastards again, didn't we, Dave?" "'"" ""'Yeah' said Dave."" ""The end."" "OK." "Here we have all the elements of drama." "We have the tension, the horror of war..." "Uh..." "Mrs Hazeltine, when you're writing a novel that takes place on a submarine,... 
..it's not a bad idea to know the name of the instrument... ..that the captain speaks through." " I used to know that." " And your similes... need a little work." ""His guts oozed nice like a melted malted."" "Well, it's, um... a little..." " Too harsh?" " A tad." "Otherwise it was, um... ..very good, it was, uh... ..very real." "Here's another one." "This is a real classic by Mr Pinsky." "It's entitled 100 Girls I'd Like to Pork." "Pork?" "It's a coffee-table book." "100 Girls I'd Like to... hm." ""Chapter 1 :" "Kathleen Turner."" ""Chapter 2:" "Cybill Shepherd." "Chapter 3:" "Suzanne Pleshette."" ""Chapter 4:" "The Girl in the Taco Commercial."" ""Chapter 5:" "The Woman in 4B." "Chapter 6:" "The Oriental Laker Girl."" ""Chapter 7:" "Chris..."" "Mr Pinsky, this is not literature." "Well, you know, I would put in photographs,... ..a brief character sketch, like a biography,... ..and a nice dust jacket." " Mr Pinsky, what is this?" " It's..." "literature." "It's a fantasy." "Like Melville." "This is my great white whale." " It's whacking material." " lsn't that literature?" "How do you associate Moby Dick to a list of women you'd like to have sex with?" " I think it's brave." " (Mrs Hazeltine) He's vulgar." " They said Twain was." " I'm saying he is." " I think you're vulgar." " You're a no-talent shit." " Maybe I should change the title." " (man) I like the title." " (bell)" " OK, I'll see you Wednesday, class." "Good work today." "Remember, a writer writes... always." "Argh!" "Oh!" " This is your tie." " Oh, God." " You dropped it." " Oh." "How'd that get in there?" "It got wet." "I was afraid it would be ruined." "Thanks." " Hi." "I'm Owen." " I know." "Why didn't you read my story in class?" " Your story?" " Yeah." "Murder at my Friend Harry's." "Why didn't you read it?" " I did." " You did?" "What'd you think of it?" "It's raining, Irwin." "Can't we discuss it tomorrow?" "Owen." "Didn't you like it?" " Well... no." "No, I didn't." " Why not?" 
"It was three pages long." "It was a murder mystery... ..that, by the way, was no big trick in finding the murderer." " What gave it away?" " You only had two characters,... ..one of which was dead on page two!" "Well, one guy killed the other guy." " It wasn't motivated!" " Sure it was." "A guy in a hat killed the other guy in a hat." "I have to go now, Owen." "Thanks." " (bleep)" " Hi, Beth." "It's me." "It's, uh... 10.30... ..and I did it again." "I'm sorry." "It's just that..." "Margaret was on the..." "Oy, there I go again." "It's..." "I'll..." "Look, I'm sorry." "That's all." "Bye." "(tuts)" "So Margaret's a big star." "That's life." "This is life, too." "That goes on, this goes on." "Hm." "The night was hot." "Wait, no." "The night..." "The night was... ..humid!" "The night was humid." "No, wait." "Hot." "Hot!" "The night was hot." "The night was hot and wet." "Wet and hot." "The night was wet and hot... hot and wet..." "That's humid." "The night was humid." "Maybe the night isn't humid." "Maybe... the night isn't humid." "Maybe it was humid in the morning and at night it was cold." "That gives you fog." "Ha!" "The night was foggy." "The night..." "The night was..." "The night was..." "The night..." "The night was dry, yet it was raining." "The..." "The..." "The..." "The streets were wet,... ..but the night... was as bright... ..as... the... ..earrings in Margaret Donner's ears!" "My God!" "I'm goin' outta my mind!" "Fuck it!" "The... night... was... humid!" "That's it and that's all." "(grunts)" "Stop it, dammit!" "I got a wax ball in my ear." "Get it out." " Momma!" " You were writing to her, weren't you?" "Don't start that again, Momma." "And don't hit me!" " You love her." " There's no "her", Momma." " You were writing a letter." " I'm writin' a story for class, Momma." " I take a class." "I take a nice class." " Yeah, yeah, yeah." " And I'm gonna be a writer some day." " You know how that typing upsets me." "I'm sorry, Momma." "A writer writes." 
"You're gonna be nothing." "You're gonna be nothing." "You'll never get to first base." "All you do is type, type, type, type, type!" "You sit there typing all day like a fat little pigeon." "You won't ever hear it again, Momma." "I promise." "Aarrghhh!" " Momma..." " I think you got it, sonny." "I don't know what I'd do without you, Owen baby." " I know, Momma." " Owen, my little baby." " I know." " Owen, my little baby boy." "(Owen) And even though he was mortally wounded,... ..the guy in the hat got up... ..and pulled himself up and... ..staggered out into the dark night..." "..like a milkman... ..going out on his route." "(man) There he is, Owen." " Professor Donner..." " Not now, Owen." "Another time." " Beth..." " Don't bother." " I gotta talk to you." " You're 16 hours late." "And I'm sorry." "It's just that..." "she was on the television." "Margaret was on TV and my mind went nuts." "My mind went crazy." "I saw nothing but hate and death." "And I'm sorry, and I did call." "I called you around 10.30." " Where were you?" " Did you think I'd stay in all night... ..staring at my tandoori chicken?" "I know you're angry and... you have every right to be angry." "It's just that I..." " It's my class." " Ah." "How do you do?" "(all) Hi." " What's your name?" " Beth Ryan." "She teaches anthropology." "Beth..." "Ryan?" "Don't even think about it." "(man) Hang in there, Pinsky." "Come on." "I'm sorry." "Do you know that we are this far away from having a date?" "We're just this far." " (as Beth) Hello, you're late." " I'm sorry." " You probably made tandoori chicken." " (as Beth) Come on in, Larry." "Oh..." "Come on, gimme a second chance." "What do you say?" "Come on!" " I can't believe I'm saying this." "Yes." " Yeah?" "Great!" "Professor Donner, I saw your wife on Oprah!" "Ex-wife." "Ex!" "What's with him?" "(sighs) I can't get over this." " Why?" " Why?" "Why?" "!" "She steals my book!" "OK?" " She goes on national television..." 
" I'm not gonna sit here and be barked at." "Wait." "Listen to me, Beth." "The problem is... ..I can't write." "I'm dead inside." "I have no passion." "You have passion." "When you talk about Margaret, you have passion." "Slut!" "No, my ex-wife is a major slut." " But you're not a slut." " Thank you!" "I'm talking about passion!" "It's selfless, committed, overflowing!" "Not hate and murder!" "And slut!" "I hate her!" "I wish she was dead!" "Chill out." "Thank you." "It's so beautiful!" "It's very good, Phil." "Very, very good." "Forty yards of Naugahyde, a girl and a dream." "What can I say?" "It's..." "Well, I wrote it just like I lived it." "Which is what you've been teaching us: to write what we know." "It's very good." "It'll blow the lid off the upholstery business as we know it." " Thank you." " OK." "Next is Murder at my Friend Harry's by Owen..." "Lift." ""Chapter 1." "The night was humid."" "Class dismissed." "I have an enormous headache in my eye." "Yuck." "OWEN" "(sighs)" "(car door closes)" "(Larry) I'm so sorry about the cafeteria." "It was Norman Bates in concert." "It's just that I see her and I..." " You wanna kill her." " Metaphorically, yes." "Specifically, without a doubt." " I'm glad you called." " Me, too." "I like trains." "Every great romance or mystery has a train in it." " Is this gonna be a great romance?" " Could be." "We have a train, we have the moon, we have compatible body parts." " Would you like to kiss me?" " I don't know." "You don't know?" "Oh, you're so sweet, Beth." "Oh, Larry, would you like to date me?" " This isn't a date?" " Mmm..." "I was speaking euphemistically." " So you're... you're saying...?" " Yeah, yeah." "So you're saying...?" "Yeah, you are." "I can feel that." " You're saying no?" " Kids are gonna sit here tomorrow." " I wanna make love to you, Beth." " Oh, Larry..." "Sometime in the very near future." " Huh?" " I can't." " What?" " I have writer's block." " Everywhere?" " Yeah." "It's no good." "I can't." 
" On the train?" " Oh, I wanna ring the bell." "(moans and sighs)" " (Beth) Oh, take me, Casey." " Who?" " Take me, Casey." " I'm taking you, I'm taking you." " Hi." " Oh, God!" "Do you know Owen?" "This is Owen." "Owen is a dead man." " Did I come at a bad time?" " Owen..." "I'm dating." "You read it, right?" "My story?" " Yes." " Good." "Owen, you cannot follow me around asking me questions." "This stuff belongs in the classroom." "Not on my time." "Do you understand me?" "Oh." "I'm sorry." "Good night, Miss Ryan." " Did you like it?" " No!" "(ringing)" "(Owen) Why didn't you like it?" "Because in a novel people have to have more of a reason to commit a murder." "If they just do it because they're crazy, it's not strong enough for a novel." "You mean, if someone ruined you permanently, then you could murder 'em?" "Yes." "Which brings us to the second point." "The motive: you have to eliminate it." " Eliminate the motive." " Correct." "I'll give you an example." "My ex-wife." "I hate her guts, right?" "Yeah." "I overheard you in the cafeteria." "She really ruined you." "Yes, she did." "And I hate her with a passion." "But I would never murder her." " You'd get caught." " Absolutely right." "I would get caught because I have a motive and people know that." "I got a similar problem with my momma." "Well, look at this, Owen." "This is amazing." "You and I have something in common." " We do?" " Absolutely." "Think about what I'm saying." "And then you just draw on your own personal experiences." "You mean... how do you murder and not get caught?" "So how do you not get caught?" "Owen?" "How do you not get caught?" "By eliminating the motive and establishing an alibi." " How?" " I can't tell you everything." "I don't know." " Go see a Hitchcock film." " Wanna go to the movies with me?" "No, I don't wanna go to the movies." "It's real late and I..." "I'm gonna go now." "You're gonna be fine." "OK?" " Thank you, Professor Donner." " Good night, Owen." 
"Eliminate the motive." "Eliminate..." "Elim..." "Eliminate the motive." "Motive." "Hitchcock." "Let's say that you'd like to get rid ofyour wife." "Let's say that you have a very good reason." " No, let's not..." " No, no." "Let's say." "You'd be afraid to kill her." "You know why?" "You'd get caught." "And what would trip you up?" "The motive." "Ah." "Now here's my idea." "It's so simple, too." "Two fellows meet accidentally, like you and me." "Each one has somebody that he'd like to get rid of." "So... they swap murders." " Swap murders?" "(laughs)" " Each fellow does the other's murder." "Then there's nothing to connect them." "Each one has murdered a total stranger." "Like... you do my murder, I do yours." "Your wife, my father." "Crisscross." "Some people are better off dead." "Each one has murdered a total stranger." "They swap murders." "You do my murder." "Crisscross." "I do yours." "They swap murders." "Crisscross." "Crisscross." "(Beth) "Choo-Choo Charlie said to himself:" "I think I can." "I know I can."" ""He opened up his throttle and pushed and pulled and pushed and pulled... ..and it spluttered some more and then... and then he... and then..."" " How's that blockage problem?" " All aboard!" " (Beth) Woo-woo!" " (phone rings)" "(Beth) No, no." "Don't." "Don't." " I'm buying a gun, Owen." " You must wonder what happened to me." " Uh... no, not at all." " I saw that movie." "Uh-huh, uh-huh." "I know now what you tried to tell me, Professor Donner." " Oh, good." " Crisscross." "(Momma) Owen!" " I'll call you in a few days, pal." " Take your time, Owen." "OK?" " Bye." " Owen!" " Coming, Momma!" " Who were you talking to?" " I just called the weather." " You were talking to a machine?" "Two minutes ago you were my agent, and now..." "Larry, I'm sorry." "That's the way the mop flops." "After seven years I get "That's the way the mop flops"?" "Larry, you must feel like this is the lowest point of your entire life." "Arnie, what are you doing?" 
"Don't bend the fern." "Fluff it!" "How, as a human being, can you give up on somebody after seven years?" "Larry, this is a whole other agency." "Besides, you've never written anything." "What about Hot Fire?" "This may be a tad bit disturbing, but we've just signed your wife on as a client." "Get me a doctor, Joel." "I'm having a heart attack." "The book sold two million copies!" "Arnie..." "But I wrote that and you know that!" "Arnie knows that!" "Margaret couldn't write her name in the snow!" "Larry, for seven years I've given you assignments." "I've made your deals." "And for seven years you didn't want to "compromise your art"!" "Oh, boy." "It is just like an agent to think that a writer can't be an artist." " Don't change the subject." "Where was I?" " You were letting me go." "Thank you." "For four years you've been writing a novel no one has ever seen." "So you get the shaft from your wife." "It's another excuse for not writing." "Well, go ahead, Larry!" "Go!" "Go to Mexico!" "Write your heart out!" "Andele, arriba!" "But I handle writers, Larry, not artists." "You go be an artist." "Let the rest of the world make a living." "Here." "It's my favourite fuchsia." "Live and be well." "That's the house." "This is good here." " I'll be right back." "I'm gonna visit my aunt." " OK, bro." " (object drops)" " Oh, shit." "( Hawaiian music)" "Why, Mr Lopez." "Can I borrow that... towel?" "My big Chihuahua." " Oh!" " (Mr Lopez growls)" "(Margaret laughs)" "Grrr!" "Ruff, ruff!" "Ruff, ruff!" "Ruff!" "Grrr!" "(phone rings)" "Ruff, ruff, ruff!" " Ruff!" "Ruff!" " (Margaret moans)" "(phone still ringing)" " Ruff, ruff!" " Hello?" "Oh, hi, Joel." "It's my agent." " Ruff!" " My agente." "How's LA, darling?" "Oh, nothing much." "I'm having a little trouble with the... gardener." " Ruff... ruff..." " What sound, sweetie?" "Oh, that's the... the TV." " Ruff!" "Ru-ruff!" " Uh..." "Old Yeller." "Yes, I know they shot him in the end." "Huh?" "Yes, darling." 
"I'm aware of the book signing tonight... in Maui." "I'll be there." "Kiss, kiss." "(growling and moans ofpleasure)" "(motor starts)" " (toots horn)" " Bye, Mr Lopez." "( Hawaiian music)" "Last boat to Maui!" "This is my first trip to the islands." "(sings along with music)" "Oh!" "Oh, my God!" "(scream merging with laughter)" "The night... ..was...n't." "Oy." "There's probably halibut right here who could write better than me." "Boy." "The night was..." "If you got a line, fish, just yell it out." "I'm up for grabs." " (engine fails to start)" " Perfect." "(sighs)" "(faraway phone ringing)" "(phone still ringing)" "What about Brenda Lee?" "You like Brenda Lee?" " The jockey?" " No." "(ringing)" " Cloudy." " (Owen) Aloha." " What?" " I said aloha." " Hello." " Professor Donner?" " Hello?" " Hello!" " Hello." " Professor Donner, stay by the phone." "I don't want 'em to be able to trace the call." "(dialling tone)" "(groans)" "(phone rings)" " Who is this?" " Professor Donner?" " Aloha!" "Hello!" " Who is this?" "It's Owen!" " Owen?" " Yeah, from class!" "What do you want?" "It's done." "You want anything from Hawaii?" " Hawaii?" " Wiki-wiki." "Aloha 'oe" " What are you doing in Hawaii?" " Crisscross." "You know." "Owen, I'm hanging up on you now." "Oh, yeah, right!" "I got ya!" "Boy, are you smart!" "(phone rings)" "She didn't feel a thing." "I know how important that is to you." " Who?" " Your wife." "She had a little trouble walking', but that was from the gardener." " You saw my wife?" " She was kind of a tart, Larry." "But I can see why you married her." "She was very beautiful." "Owen, you stay away from my wife." "(Owen hangs up)" "(phone rings)" "Did you hear what I said, Owen?" "You stay away from my wife." " I told you, it's done." "Nobody saw." " What the hell are you talking about?" "Crisscross, like in the movie." "Done!" "Owen, what the hell did you do to my wife?" "I..." "I don't wanna say on the phone." 
"All I can tell you is that I killed her last night." "(dialling tone)" "My God!" "Oh, my God." "Oh, my God." "(phone rings)" "Owen, what the hell did you do?" "Tell me the truth." "Meet me tonight at 7.30 at Mulholland Drive and Cahuenga Pass." " We'll discuss your end ofthe bargain." " What are you talking about?" " You gotta kill my mother." " Kill your mother?" "!" "7.30, Cahuenga Pass." "Crisscross." " Owen, I..." " (Owen hangs up)" "Crisscross..." "Criss..." "Holy shit." "He did it." "He did it!" "The little bastard did it!" "He killed my wi..." "No, he didn't do it." "He couldn't poss..." "No, he didn't do it." "I'll call her up, call her up on the phone." "She'll answer the phone, I'll hang up, and she won't be dead." "That's it and that's all." "(phone rings)" "Come on, come on, Margaret." "Answer the phone." "Answer the phone, Margaret." "Why the hell isn't she answering the phone?" "Because she's dead." "That's why she's not answering the phone." "You don't answer the phone when you're dead!" "They're gonna think I did it." "They're gonna think that I did it!" "Why would they think that I did it?" "Because I hate her guts!" "That's why they'll think that I did it." "My God..." "I have no alibi." "Lester!" "I was with you yesterday!" "Lester!" " I was with you yesterday, right, Lester?" " Right." "And now I'm with Ms Gladstone." "I left the club at one." "I couldn't have done it!" "That's right." "You couldn't have done it." "Close the door on your way out." "I couldn't have gone to Hawaii and back in that time." "So what am I worried about?" "That's what I'd like to know." "Hawaii and back?" "No way, man." "You're cool." "Well..." "Now, if he left the club at one, he could catch the three o'clock flight." "You could be in Hawaii by five, given the three-hour time change." " She's got a point." " You could spend four hours there... ..and still catch the last flight out by ten." " That's true." " You'd arrive at LAX by 6am." "An hour on the freeway..." 
"You'd be home by seven." "Whatever it is, you're fuckin' guilty, man." "Meet Senior Flight Supervisor Gladstone." "Hi." "Pleased to meet you." "Listen, I've got a flight schedule in here." " I was on a rock!" " A rock?" "They're gonna think I did it!" "I'm all motive and no alibi!" "I have no alibi!" "Motives are no problem." "I'm up to my ass in motives." "I'm majoring in motives." "Alibi, nowhere near my neighbourhood." "But motives: whooo-ooh!" " I gotta go!" "Can I borrow your car?" " Sure, man." "The keys are on the table... ..next to the door." "Goodbye." "I gotta find Beth." "What am I doing?" "What the hell am I doing?" "!" "She's not dead." "I'm crazy." "I'm a crazy man." "She is not dead." "He didn't kill her." "News from Hawaii:" "Novelist Margaret Donner is missing and presumed dead." " According to the police..." " Holy Christ!" "I'm not crazy!" "She's presumed!" "She was last seen boarding a boat to Maui." "She disappeared along route." " A search is on for the body." " Margaret is dead!" "Poor Margaret!" "She is the renowned author of the best-selling novel "Hot Fire"." "That slut!" "She is a slut!" "Slut!" "Foul play has not been ruled out." "I'm gonna fry!" "( didgeridoo playing)" "(door shuts)" "I'm in really deep shit, Beth." "Ah." "Look who's decided to breeze back into my life." " I gotta talk to you." " You have to talk to me?" "You don't return my calls or get in touch." "But hey, you need something from me?" "Make yourself at home." "Have a seat." "Can I offer you something?" "A cool drink?" "Would you listen to me?" "I'm in a lot of trouble!" " I'm on the edge of my seat." " It's Margaret." "Don't tell me!" "She's come back." " She's..." " Pregnant, put on weight..." "She's dead!" "Oh, God." " It's awful." " Larry, I'm sorry." " Did she suffer?" " He said she didn't feel a thing." " Her doctor?" " Her killer." " She was killed?" "!" " She's dead." "It works that way." "Wait!" "Did the police find a clue or have a motive or..." 
"If there was a motive, the fat little bastard never would've killed her!" "Don't you see?" "Eliminate the motive, establish an alibi." " Just like I told him!" " Just like you told him?" "!" " We were speaking hypothetically." " Larry Donner!" "Did..." " Did you pay a man to kill your wife?" " No!" "He just did it." " Oh, my God!" " Hey, look." "I told a guy something... ..and he took it the wrong way." "What, uh..." "What did you say to him?" " "Don't kill my wife", wink, wink?" " I don't like your tone." " Oh, God." "You better leave." " I got noplace else to go." " My head is spinning." " Oh, and my head isn't creamed corn?" "You announced that you may or may not have inadvertently murdered someone." "Mm-hm." "This puts me in a highly distracted state of mind." "I am nauseous." "Great!" "Margaret is dead, I have no alibi, and you're mad that I upset your stomach." "Well, excuse me." "I'll just go so the police can start their manhunt!" "One little murder and I'm Jack the Ripper!" "Jeez, you think you know somebody." "Cahuenga Pass." "(Larry's voice) Leave a snappy message." "Bye." "Larry, it's Beth." "Um..." "Call me at home right now, OK?" "As soon as you get in." "I'm very confused." "You... came in with just..." "murder and Margaret, and I just..." "Call me... now." "Thank you." " Larry!" " Get in the car, Owen." " This isn't your car." " I borrowed it." "Get in!" "I thought you might like to have this." "It belonged to Margaret." "Get in the car." "Get in the car!" "Look what you did." "You killed my wife!" "No, I didn't." "Yes, I did." "You're sick, Owen." "You need care." "I am taking you to the police." "Did you know Hawaii was a series of islands... ..that was all spit up by the same volcano?" "I never knew that." "You killed somebody!" "You're a murderer!" "You took a life!" "You're right." "I'm no good." "How could I do that?" "I'm a sick, sick per..." "Cows!" " Why did you kill my wife?" "!" " I thought you wanted me to." 
"You said you wished she was dead." "I told you I wished my momma was dead." "I kill your wife, you kill my momma." "That's fair." "I am not killing your mother." " You have to turn yourself in." " No." "It was part of our plan." "What plan?" "!" "There was no plan, you moron!" "You killed a person and I'm takin' you to the police!" "I'll just tell 'em that you did it." "You got the motive." " You're gonna tell them the truth." " Huh?" "You're gonna tell the police the truth or I'll kill us both, I swear!" " I didn't do this only for me." " Say goodbye, Owen." "Larry..." "Larry, slow down." "You're goin' a tad fast." "Please slow down, Larry!" "I don't like goin' fast!" " Are you gonna tell 'em you did it?" " Please slow down!" "Huh?" " Yeah." "I'll do it." " OK." " Please stop the car!" " I can't!" " Stop the car!" " There's no brakes!" " What?" "!" "Watch out for that car!" " (car horn)" " Oh." "You're a good driver, Larry." " Just shut up!" "Larry!" "Larry!" "Larry, you're originally from the East, aren't you?" " Owen!" " A man on our block was from the East." "Mr Brockman." "He was in the button business." " Is that right?" " Yeah." "This is good!" "It's like the Flintstones car wash." "Larry!" "Ooh, I can't look!" "(truck horn)" "Whoa!" "So are you telling me you weren't driving that car?" "Look, man." "It's like I told you." "He borrowed the car yesterday morning and I haven't seen him since." " Did he say where he was going?" " No." "Mumbled about motives and alibis." "Motives and alibis?" "He figured that since Margaret ruined his life, you'd think he killed her or somethin'." " You don't think he did?" " No, I don't think so." "Personally, if the bitch stole my book, I'd kill her." "But that's me." "Larry?" "There's no way he coulda done it." "What makes you say that?" "Because Larry never did... ..anything." "OK." "Here we go." "Eggs a la Owen." "(sighs) Owen, get it through your thick head." "I may be a lot of things, but I am not a killer." 
"You don't have to blow her brains out." "Thank you!" "That takes the pressure right off(!" ")" "She's old." "She's got a bad ticker." "All you gotta do is jerk around a lot when you talk to her." " "Nice to meet you, Mrs Lift!"" " Would you stop it?" "Well, just meet her." "Maybe she'd be somebody you'd like to kill." "(Momma) Owen!" "What the hell's going on out there?" "!" "Nothing, Momma!" " (whispers) We woke her up." " Who are you talking to?" "Who's in there?" " Nobody, Momma!" " (door bangs)" "Who's this?" "This is Cousin Paddy." "He's gonna be stayin' with us for a while." "Isn't that nice?" "You don't have a Cousin Paddy!" "You lied to me!" "(Larry groans)" " That's it." "That's all." " Larry..." "Larry." "I'm sorry." "She makes me so nervous." "Come on." "Go in there and sit down." "I'll get you some ice." "So what do you think of her?" "I think she could relax a little bit." " Are you gonna do it?" " Owen..." "I'm not gonna kill your mother." "If you wanna do it, you do it." "A guy kills my wife." "He can't even kill his own mother." " You wanna see my coin collection?" " No!" "I collect coins." " I got a dandy collection." " I don't wanna see it, Owen." " But it's my collection." " I don't care." "Look, Owen." "I'm just not in the mood." "OK?" "Never showed it to anyone before." " All right, I'll look at it." " No, it's OK." " Show it to me." " No, you don't mean it." " Show me the damn coins!" " All right." "This one is a nickel." "This one also is a nickel." "And here's a quarter." "And another quarter." "And a penny." "See?" "Nickel, nickel, quarter, quarter, penny." "Are any of these coins worth anything?" "No." "And here... is another nickel." " Why do you have them?" " What do you mean?" "The purpose of a coin collection is that the coins are worth something, Owen." "Oh, but they are." "This one here I got in change... ..when my dad took me to see Peter, Paul and Mary." "And this one I got in change when I bought a hot dog at the circus." 
"My daddy let me keep the change." "He always let me keep the change." "This one... ..is my favourite." "This is Martin and Lewis at the Hollywood Palladium." "Look at that." "See the way it shines on the little eagle?" "I loved my dad a lot." " So this whole collection is..." " Change my daddy let me keep." " What was his name?" " Ned." "He used to call me his little Ned." "That's why Momma named me Owen." "I really miss him." " It's a real nice collection, Owen." " Thank you, Larry." " Owen!" "Food!" " In a minute, Momma!" "Don't you "in a minute" me!" "Get off your fat little ass or I'll break it for you!" "I want two soft-boiled eggs, white toast and some of that grape jelly, goddammit!" "And don't burn the toast!" " Kill her, Larry." " I can't." "You gotta kill her for me, Larry." "Don't you understand?" "Crisscross." "Crisscross!" "You gotta do it, Larry!" "If you don't, I will." "I swear I will." "Move it, fat boy!" " That's it!" "I'm gonna choke her to death!" " No, Owen!" "I swear to God, I'm gonna kill her!" " Calm down, Owen." " Larry..." "It's gonna be OK." "It's gonna be OK, Owen." "I promise." "Will you do it?" "Yeah." "I'll do it." "Larry, you're the best pal a guy ever had." "Here." "Look." "I want you to have this." "Look." "Here." "It's a souvenir from the London Bridge gift shop in Arizona." "Look." "See?" "They brought this bridge over from London, England, stone by stone." "See the little bridge?" "See the stones there?" " Yeah, I see 'em." " Here." "You can have that." "Crisscross." " When did you last see him?" " I can't remember." "I told you." " The day after his wife disappeared?" " I didn't say that." " Miss Ryan, why are you protecting him?" " I'm not." "I just don't think he killed her." "Where do you think he is, Miss Ryan?" "I don't know!" " He's in a lot of trouble." " He didn't do it." "Then who do you suppose did?" " Somebody else." " So you do think it was murder." " You know who killed her, don't you?" " No." " Yes, you do." 
" I do not!" "He wouldn't tell me!" "Did Professor Donner hire a man to kill his wife?" "No!" "He said... not really." "And I heard him scream out "I hate her!" "I wish she were dead!"" "Yeah, I heard him." "He said "I hate her." "I wish she was dead."" "That's what he said." ""I hate her." "I wish she was dead."" "He called her a very bad name... ..and screamed "I hate her." "I wish she were dead."" "It's a coffee-table book." "(TV on)" "All right, Momma." "Turn off the TV." "OK." "Goodbye, Momma." " She'll be sleepin' in a couple of minutes." " I graduated from Yale." "All right." "Out you go." " Out?" "Out where?" " Out on the ledge." "Go out on the roof..." " No, no." "I'm not going out on any ledge." " You gotta make it look like a burglar." " You go in, you mess up things..." " No, Owen." "This is going too far." " You want outta this?" " Yes." "Then fulfil your end of the bargain." "You go in, stuff a pillow over her face and leave." "You walk out that door." "You never have to see me again, OK?" "Oh, God!" " OK." "Out you go." " Shit." "I hate heights." "Larry!" " You all right?" "Move your hand." " Why?" "Gotta close the window." "And like this you kill an evening." "Rats!" "Now I got Willard here." "I'm bein' held captive by a little troll who should be hanging off a rear-view mirror." "I'm not doin' this." " Aargh!" " What are you doin'?" "I'm selling The Watchtower!" "What do you think?!" "You got rats the size of Oldsmobiles here!" "Rats." "OK, forget about the burglar stuff." "Just go through her door." "It's less dramatic but I don't wanna make you uncomfortable." "Here, use this." " I really don't like you, Owen." " OK, I gotta go." "If I'm late for my lane, they tack on an extra buck." "Ugh." "What a week." "Mrs Lift, I know you don't want to hear anything derogatory about your son." "I understand that." "Because he's not a bad man, Mrs Lift." "He's a nice man, actually." "He is a lunatic." "No, Mrs Lift, he is." "He's a lunatic." "And, um..." 
"I don't have to be here now, Mrs Lift." "I could be in Mexico, out of all of this." "I'm only here to stop him fr..." "How do you say this, Mrs Lift?" "Listen to me, Mrs Lift." "Your son... killed my wife." "And now he wants me to kill you." "(snores)" " Mrs Lift?" " (snores)" " I'm gonna go read the paper now and..." " (snoring continues)" "I'm just glad we had this chance to talk." "(snores)" "I'm a fugitive." "The little bastard turned me into Richard Kimble." "He shit and shoved me in it." "I gotta get outta here!" "I gotta go!" "Where am I running?" "Evidence." "Incriminating evidence!" "Nothing." "Ha!" "A lei." "Poha jelly." "Not enough!" "Poha jelly, a lei and a doll." "I need some evidence." "Aha!" "Bingo." "The mother lode." "A plane ticket!" "A little careless, Owen, aren't we?" "Los Angeles, Hawaii, 10am!" "(cackles) I got you!" "A plane ticket!" "With my name on it." "Oh, God!" "(knock at door)" "Cops!" "(knocking continues)" "Mr Lift?" "(heart thumping)" "We'd like to ask you a few questions." "I hope there's no trouble." "I was at the bowling' alley all night." "I hope there's nothin' wrong." "No, we'd like to ask you about Professor Donner." " Professor Donner?" " Yes." "Would you mind if we came in?" " In the house?" " Just a few questions." "I..." "I'm sorry." "You can't." "My momma's real sick." "I don't think it's a good idea." " lt'll only take a minute." " No, I..." "OK!" "All right." "Come on up." "If it'll only take a minute, that won't be so bad." "Would you like to meet my momma?" "We understand you take Professor Donner's course at Valley College." "Yeah." "Creative writing." "I'm gonna put my bowling' ball away." " I'm his star pupil." " Do you have any idea where he is?" "Did you try his apartment?" "He goes there a lot." "He keeps his stuff there." "You guys wanna have some tea?" "We've got orange pekoe,... ..we've got Irish breakfast,... ..we've got Darjeeling..." "Mr Lift, have you ever heard Professor Donner talk about his wife?" 
"Professor Donner?" "Oh, he... he always talked about her like she was an angel." "He loved his wife." "He worshipped the ground she..." "Hi." "Hi, tea!" "Hi, tea." "Hi." "You never heard him say anything bad about her?" "Oh." "This is Irish breakfast." "I'll get Darjeeling." "Whatever you want, Mr Lift." "We won't be havin' any." " What was that question again?" " (Momma) Owen!" "Momma!" "You're alive!" "Old people - you have to reassure them." "Mr Lift, can we get back to Professor Donner?" "Yes." "By all means, let's get back to him." " Mr Lift..." " Oh!" "You know what?" "This..." "This box is empty." "Could you get me some tea in the pantry?" "Sure." "No problem." "(detective) You said that he loved his wife." "What gave you that idea?" "Oh!" "I'm sorry, officer." "I have some tea right here on the stove." "It won't be necessary." " Oh." "No problem." " What was that question again?" "What makes you think that he loved his wife?" "But you know what you could get me is some sugar." "Is it in the pantry?" " Yeah." " No problem." "I shouldn't use sugar in my tea because I'm carryin' around this spare tyre." "I'd like to get rid of it but it's so hard to cut back." " Owen!" "You did it, didn't you, Owen?" " No, Momma, I didn't." " Yes, you did!" " No!" "Honest to God, I didn't do it!" "You told them to take me away!" "Oh." "No, Momma." "No." " You came to take me away!" " I'm sorry." "My momma's not feelin' well." "Not feeling well, my foot!" " I'm sorry." " You little bastard!" "I said you'd desert me, and you did!" "The only way they'll get me outta here is to drag me out!" "You're gonna have to take me out in a pine box!" "Get out!" "Get outta my house!" "Owen!" "You're grounded!" "I can't believe that you brought them here!" " Why didn't you kill Momma?" " Because I'm not a killer!" "I can't put a pillow over her face and squeeze the life out of her!" " You see that door with the hook on it?" " Yeah." 
"Every night around nine o'clock, she yells "Bath" and hangs her shawl on that hook." " I'll bet that's where I come in." " Yeah." " Now, how did I know that?" " Come on, come on." " Watch out for my skates." " Ow!" "(hinge falls down stairs)" "She'll get out of her chair, she'll go to the door,... ..you go behind her and pow!" "Down the stairs she goes." " And where are you gonna be?" " Howie's Lanes." "Come on." "To hell with this guy!" "What am I?" "Crazy?" "I'm outta here." "I can't stay here." " I can't stay in this house!" " (siren)" "Stand back, please." "Milk and Mallomars..." "Bath!" " Who the hell are you?" "!" " I'm Owen's friend." " Owen doesn't have a friend!" " That's because he's shy." "No, he's not." "He's fat and he's stupid." "Get outta my house!" "Where is Owen?" " Owen went bowling." " I want Owen!" " He'll be back soon." " I want my bath and my medicine!" " I can get it for you." " Who the hell are you?" " Let me hang that up." " I can do it myself!" " I know, but I'd like to hang it up for you." " Get out of my way, you black bastard!" "What?" "!" "Mrs Lift!" " He tried to kill me." " What?" "I said, he's tryin' to kill me!" "Mrs Lift!" "Don't..." "I can hang up my own goddamn shawl." "He's trying to kill me!" "I asked for the salted nuts!" "He brought me the unsalted nuts!" "The unsalted nuts make me choke!" "Aargh!" "Pain in the ass!" "Oh, no!" "Your friend had an accident." "He's dead!" "You go bowling and leave a corpse to take care of me!" " He's dead?" " See for yourself!" "Larry!" "My friend!" " My friend!" "Larry!" " (mimics) "My friend!" "My friend!"" "You little crybaby!" "Go bury him in the yard before he stinks up the place!" "Larry, you're alive!" "You killed her." "Holy shit!" "What a dream I was having!" "Louis Armstrong was trying to kill me!" " Mrs Lift?" " Get away from me, you horse's ass!" "(groans)" "She's not a woman." "She's the Terminator." "(dinging)" "The ex-husband of missing novelist Margaret Donner... 
..is wanted for questioning but has now himself disappeared." "If you have any information regarding his whereabouts, contact your local police." "Owen!" "There's a murderer in the house!" "Hello, police?" "I found him!" "The wife murderer!" " He's here!" " Gimme the phone!" "I'm on the next train to Mexico." " No!" "This is no time to panic." " This is the perfect time to panic!" "She turned me in!" "Do what you want with her." "I gotta think about myself." "Larry, don't leave me!" "Larry!" "(cracking)" "I'm sorry, Larry." "It's OK, Owen." " I messed everything up." " Owen, it's gonna be OK." "I ruined your life." "Come on, Owen." "Sometimes things happen in life for a reason." "No, really." "Maybe I was meant to go to Mexico to be a writer." "You never know." "This is a great ending." "I don't have the beginning, but this is a great ending." "Story of my life." "I always have great endings and no beginnings." "That's not good for a writer, is it?" "No, it's not." "How about "The night was humid"?" " It's hot in here." " Yeah, hot and close." " Moist." " Right. "The night was moist."" "This is what I'm talking about." "It's writing." "Finding the perfect word." "The perfect start. "It was the best of times, it was the worst of times"." ""Now is the winter of our discontent"." "See what I'm saying?" "Perfect beginnings." "Perfect words." "It's like us." "We're on a train to Mexico." "We're on the lam." "It's exciting, it's kinda mysterious." "Do you say "The night was humid" or "The night was moist"?" "That's writing!" "The night was sultry." "I'm getting the hell outta here!" "Too goddamn sultry in here!" "Where you goin'?" "I'm gonna kill the bitch." " You want anything?" " You can get me a Chunky." " Come here, Mrs Lift." " You stay away from me, you murderer!" "Momma..." "Mrs Lift!" " (Owen) Momma!" " Murderer!" "Murderer!" "Momma..." "Momma..." "Sultry?" "I'll show you something sultry, Mrs Lift!" "G, 54." "Stupid bingo!" 
"Don't you idiots know there's a murderer loose on this train?" "!" " Mrs Lift, come back here." " Bingo bastards." "My mother's a little overmedicated." "Murderer!" "Murderer!" "There's a murderer on the train." "Wake up, you nutheads!" "Murderer!" "Excuse me." "Excuse me." "Wake up!" "There's a murderer on the train!" "Mrs Lift?" "Aarrghhh!" " Mrs Lift, be careful!" " Get away from me, you murderer!" " No!" " (Momma yells)" " Mrs Lift..." " Let me go, you murderer!" "Owen!" " Larry, you'll kill her!" " Save me!" "I'm not tryin' to kill her!" "I'm tryin' to save her, you toad!" "Come on, Mrs Lift!" "Owen!" "Save me!" "Aarrghhh!" "Owen!" "Save me!" "Owen!" " What are you doin' to my momma?" "!" " Making a wish!" "What do you think?" "!" "Help!" "Oh, no!" "Larry!" "Oh, you saved me, my Owen!" " Mrs Lift, are you OK?" " Beat it, chump!" " Larry!" " Aarghhh!" "Bye, Larry!" "To tell you the truth, it was all a little bit embarrassing." "My earring fell over the rail." "I bent over to retrieve it." "The last thing I remember, I was in the water." "She fell off the boat." "She fell off the boat." "The little bastard never laid a hand on her." "This wonderful Adonis of the deep..." "Oh, I love that. "Adonis of the deep"." "She's rescued by a fishing boat!" "The woman is priceless." " You gotta love this woman." " Do you love this woman?" "Shikamoto nursed me back to health, and we're going to be married." "Who asked this guy to pull her outta the water?" "(chuckles)" "Margaret Donner, author of the best seller "Hot Fire",... ..has sold the movie rights of her ordeal at sea for $1.5 million." "Will wonders never cease!" "Back to you, Stan." "She's a genius." "She's getting $1.5 million and I'm getting glucose four times a day." "I'm getting something down the hall." " What?" " Anything." "I can't take this any more." "Every ten minutes, it's "Margaret this" and "Margaret that"." "I'm sorry." "I just can't help it." "Hate's no good." "I'm not living here with you in hate." 
"Get rid of it altogether, Larry, or I'm leaving you." "Oh!" "Oh, Larry!" "Argh!" "(static on TV)" "(gentle snoring)" "His name was Owen... ..and he wanted me to kill his mother." "When I asked him why,... ..he said because he didn't like her." "When I asked him why me,... ..he said it was my idea." "I was teaching college..." "(continues typing)" "(rings)" " Hello." " Aloha." " Owen!" "How are ya!" " I'm fine." " Where are you?" " In Topanga Canyon." "I just killed Beth." " What?" " Nah, I'm kiddin'." "Look out your window." "You little couch potato!" " Hi, Owen!" " Hi, Larry." " Owen, come on up." " OK." " I missed you." " I missed you, too." "It's been a year." " Wow." "You look terrific." " Thank you." "So do you." " Well, thanks." "How's Momma?" " Dead." " Oh, I'm sorry." " Mm, well..." "Did you, um..." "No!" "No." "Natural causes." " That's good." " Yeah, well." " I see you're writing." " I started the day I got outta the hospital." "And I haven't stopped, Owen." "I'm half a paragraph away from finishing my book!" " That's great!" " Yeah, it's really somethin'." "Well, you know what they say." "(both) A writer writes... always." "Well, look." "I won't keep you from it." " I just came by to say hello and..." " You just got here!" "Actually, I gotta catch a plane." "I'm goin' to New York because..." "I wrote a book." " What?" " It's gonna be on the stands in two days." "You wrote a book?" "And it's gonna be published?" " Yeah." " Owen, that's unbelievable!" "It's called Momma and Owen and Owen's Friend Larry... ..and it's all about you and me and Momma and our experiences together." "What's your book about?" "You wrote a book called Momma and Owen and Owen's Friend Larry?" "Yeah." " It's all about our..." " (both)..experiences together." " Slut!" "You slut!" " Are you angry with me?" " I don't like you, Owen!" " You want me to leave?" "No, I want you dead and in hell!" "(Owen gags)" " I can't breathe!" " Because I'm choking you, you moron!" "Here!" "Here." 
"I want you to have this." " What is this?" " That's my book." " What is this?" "You wrote a pop-up book?" " Yeah." "Yeah." "See?" "Here's where we meet in class." "Pull that tab." "See?" "And..." "And this is you and me and Beth on the choo-choo." "Toot, toot!" "And here's where you meet Momma." "See her cane?" "(mimics Momma) "Bath!"" " And these are my coins." "See my coins?" " It's your coin collection!" "And here's..." "See, instead of you chuckin' her off a train, we go on a picnic together." " What do we do?" "Devil-egg her to death?" " Oh, no." "There's no death." " There's no death?" " No, this is a kids' book." " You wrote a pop-up book!" " Yeah." " This is the cutest thing I've ever seen!" " Yeah, and here's the best part." "We all go on vacation in Hawaii." "You... and me... and Beth." "(Beth) "Hate makes you impotent, love makes you crazy."" ""Somewhere in between you can survive."" " Gets better every time I read it." " Thank you." " Except for the last line." " I beg your goddamn pardon?" "Hate makes you impotent, love makes you crazy." "In the middle you can survive?" " Yeah!" " It's cryptic!" " Cryptic, he says." " That's right." "Cryptic." "Cryptic." "Watch this, Beth." "Have you ever seen a weeble snorkel?" "Look at him." "You know, actually, I find it a little confusing." " Are you kidding me?" "!" " Just that last line." " Just the last line!" " Yeah." "This is great." "So you and Sancho Panza agree on this?" "You're taking criticism from somebody who had his book signing at Toys 'R' Us!" "I don't believe this." " You're really agreeing with him?" " He's got a point." " He's got a point?" "!" " Yeah!" "I'm a Book-of-the-Month Club alternate, on the best-seller list." "With his book, you get a free balloon!" "I don't understand it." " (Beth) He's entitled to his opinion." " I know he's entitled to his opinion." "But look at him." "He's a buoy with hair." "Keep going a little further, Owen." "Maybe somebody'll harpoon you." 
"("Shikisha" by Sipho Mabuse)" "People, get ready" "Wherever you are" "People, get ready" "No matter how far" "Everywhere in the village" "People are dancing in the streets" "It's the beginning of the new life" "You can turn it up on your radio" "Shikisha" "Shikisha" "Shikisha" "Shikisha, wah, Shikisha" "Shikisha" "Shikisha" "Shikisha" "Shikisha, wah, Shikisha" "People are dancing in the streets" "Right across the Limpopo" "It's the beginning of the new age" "So turn it up on your radio" "Stomp your feet to the beat" "Feel the heat" "Stomp your feet to the beat" "Feel the heat" "Stomp your feet to the beat" "Feel the heat" "Stomp your feet to the beat" "Feel the heat" "Wah, Shikisha" "Shikisha!" "Stomp your feet" "Stomp your feet" "Stomp your feet" "Stomp your feet" "SubRip:diamarg"
{ "pile_set_name": "OpenSubtitles" }