3D human face description: landmarks measures and geometrical features
Enrico Vezzetti, Federica Marcolin
Dipartimento di Sistemi di Produzione ed Economia dell'Azienda, Politecnico di Torino

Abstract
Morphometric measures and geometrical features are widely used to describe faces. Generally, they are extracted punctually from landmarks, namely anthropometric reference points. The aims are various, such as face recognition, facial expression recognition, face detection, and the study of changes in facial morphology due to growth or dysmorphologies. Most of the time, landmarks are extracted with the help of an algorithm or manually located on the faces. Then measures are computed or geometrical features are extracted to serve the scope of the study. This paper is intended as a survey collecting and explaining all these features, in order to provide the user with a structured database of the potential parameters and their characteristics. Firstly, facial soft-tissue landmarks are defined and contextualized; then the various morphometric measures are introduced and some results are given; lastly, the most important measures are compared to identify the best one for face recognition applications.

1. Introduction
Face study has been carried out in recent decades for many applications: maxillofacial surgery, criminal investigation, authentication, historical research, telecommunications, and even games. Recognition is surely the largest branch of this diversified field, embracing subfields such as citizen identification, recognition of suspects, and corporate usages in access control and online banking. Before the recent trend towards measuring and evaluating 3D facial models emerged, three-dimensional facial data were for decades obtained mostly by direct anthropometric measurements. Anatomical landmarks have been used for over a century by anthropometrists interested in quantifying cranial variations. A great body of work in craniofacial anthropometry is that of Leslie Farkas (Farkas, 1994; Farkas, 1996), who established a database of anthropometric norms by measuring and comparing more than 100 dimensions (linear, angular and surface contours) and proportions in hundreds of people over a period of many years. These measurements include 47 landmark points to describe the face (Čarnický et al., 2006). Nowadays the information in which researchers are interested is more complete and dynamic. The interest is in using facial landmarks as reference points on the subjects and extracting geometrical features from them, in order to retain information about how the examined face is shaped. Their uses are various and may depend on the research area. The attention to facial landmarks is due to the fact that they are points shared by all faces and that carry a particular biological meaning. Hard-tissue landmarks lie on the skeleton and may be identified only through lateral cephalometric radiographs; soft-tissue landmarks lie on the skin and can be identified on the 3D point clouds generated by scanning, or on images. This study only deals with soft-tissue landmarks; the most famous ones are shown in Figure 1. Actually, the set of facial landmarks is much larger than this. In fact, there are approximately 60 identifiable soft-tissue points on the human face, but they may change depending on the application they are used for. One of the most important applications that deal with facial landmarks is face recognition, whose large applications are: citizenship identification at borders, passports, I.D.
documents, visas; criminal identification in database screening, surveillance, alerts, mob control and anti-terrorism; corporate usages in access control and time attendance in luxurious buildings, sensitive offices, airports, pharmaceutical factories; utility laptop, desktop, web, airport/sensitive console log-on and file encryption; online banking; gaming in casinos and watch-lists; hospitality industries such as hotel and resort CRMs; important sites like power plants and military installations. The purposes are various, but belong to two big branches: face verification, or authentication, to guarantee secure access, and face identification, or recognition of suspects, dangerous individuals and public enemies by Police, FBI and other safety organizations (Jain et al., 2005). Much research has been carried out on this topic. In their various publications, Rohr et al. proposed multi-step differential procedures for subvoxel localization of 3D point landmarks, addressing the problem of choosing an optimal size for a region-of-interest (ROI) around point landmarks (Frantz et al., 1998; Frantz et al., 1999). They introduced an approach for the localization of 3D anatomical point landmarks based on deformable models. To model the surface at a landmark, they used quadric surfaces combined with global deformations (Frantz et al., 2000; Alker et al., 2001). They then proposed a method based on 3D parametric intensity models which are directly fitted to 3D images, introducing an analytic intensity model based on the Gaussian error function in conjunction with 3D rigid transformations as well as deformations to efficiently model anatomical structures (Wörz et al., 2006). Finally, they introduced a novel multi-step approach to improve the detection of 3D anatomical point landmarks in tomographic images (Frantz et al., 2005). Romero et al. presented a comparison of several approaches that use graph matching and cascade filtering for landmark localization in 3D face data. For the first method, they apply the structural graph matching algorithm relaxation-by-elimination using a simple distance-to-local-plane node property and a Euclidean-distance arc property. After the graph matching process has eliminated unlikely candidates, the most likely triplet is selected, by exhaustive search, as the minimum Mahalanobis distance over a six-dimensional space, corresponding to three node variables and three arc variables. A second method uses state-of-the-art pose-invariant feature descriptors embedded into a cascade filter to localize the nose tip. After that, local graph matching is applied to localize the inner eye corners (Romero et al., 2009). They then described and evaluated their pose-invariant point-pair descriptors, which encode the 3D shape between a pair of 3D points. Two variants of descriptor are introduced: the first is the point-pair spin image, which is related to the classical spin image of Johnson and Hebert, and the second is derived from an implicit radial basis function (RBF) model of the facial surface. These descriptors can effectively encode edges in graph-based representations of 3D shapes. They show how the descriptors are able to identify the nose tip and the eye corners of a human face simultaneously in six promising landmark localisation systems (Romero et al., 2009). Ruiz et al. (Ruiz et al., 2008) presented an algorithm for automatic localization of landmarks on 3D faces. An Active Shape Model (ASM) is used as a statistical joint location model for configurations of facial features.
The ASM is adapted to individual faces via a guided search whereby landmark-specific Shape Index models are matched to local surface patches. Similarly, Sang-Jun et al. (Sang-Jun et al., 2008) applied Active Shape Models to extract the positions of the eyes, the nose and the mouth. Salah et al. (Salah et al., 2006) proposed a coarse-to-fine method for facial landmark localization that relies on unsupervised modeling of landmark features obtained through different Gabor filter channels. D'Hose et al. (D'Hose et al., 2007) presented a method for localization of landmarks on 3D faces using Gabor wavelets to extract the curvature of the 3D faces, which is then used for performing a coarse detection of landmarks. A connected but quite different field is face detection, which consists in identifying one or more faces in an image, where many other objects can be present. Most of the literature concerning face detection investigates two-dimensional (2D) images. Colombo et al. (Colombo et al., 2006) presented an innovative method that combines a feature-based approach with a holistic one for 3D face detection. Salient face features, such as the eyes and nose, are detected through an analysis of the curvature of the surface. Each triplet consisting of a candidate nose and two candidate eyes is processed by a PCA-based classifier trained to discriminate between faces and non-faces. Nair et al. (Nair et al., 2009) presented an accurate and robust framework for detecting faces, localizing landmarks and achieving fine registration of face meshes based on the fitting of a facial model. Face detection is performed by classifying the transformations between model points and candidate vertices based on the upper bound of the deviation of the parameters from the mean model. Landmark localization is performed on the segmented face by finding the transformation that minimizes the deviation of the model from the mean shape. Jesorsky et al. (Jesorsky et al., 2001) presented a shape comparison approach to achieve fast and accurate face detection that is robust to changes in illumination and background. The proposed method is edge-based and works on greyscale still images. Takács et al. (Takács et al., 1997) described a general approach for the detection of faces and landmarks based on biologically motivated image representation and classification schemes. The optimal set of face, eye pair, nose and mouth feature models, respectively, is found by an enhanced SOFM approach using cross-validation and corrective training. Yow et al. (Yow et al., 1997) identified that a feature-based approach was able to detect faces efficiently over large viewpoint and illumination variations. They enhanced the approach by proposing the use of active contour models to detect the face boundary, and subsequently used it to verify face candidates. Rodrigues et al. (Rodrigues et al., 2005) studied the importance of multi-scale keypoint representation, i.e. retinotopic keypoint maps which are tuned to different spatial frequencies. They showed that this representation provided important information for Focus-of-Attention (FoA) and object detection. In particular, they showed that hierarchically structured saliency maps for FoA can be obtained, and that combinations over scales in conjunction with spatial symmetries can lead to face detection through grouping operators that deal with keypoints at the eyes, nose and mouth, especially when non-classical receptive field inhibition is employed.
A similar application is facial expression recognition, a branch of recognition which deals with identifying different facial expressions. Unlike face recognition, relatively little work has been done to study the usefulness of facial data for recognizing and understanding facial expressions. Some researchers worked on this topic. In their various papers, Tang et al. (Tang et al., 2008; Tang et al., 2008) performed person- and gender-independent facial expression recognition based on properties of the line segments connecting certain 3D facial feature points. They proposed an automatic feature selection method based on maximizing the average relative entropy of marginalized class-conditional feature distributions and applied it to a complete pool of candidate features composed of normalized Euclidean distances between 83 facial feature points in 3D space. Soyel et al. (Soyel et al., 2007; Soyel et al., 2008; Soyel et al., 2009) described a pose-invariant three-dimensional facial expression recognition method using distance vectors retrieved from 3D distributions of facial feature points to classify universal facial expressions. Their works are based on the theories of Paul Ekman, a psychologist who has been a pioneer in the study of emotions and their relation to facial expressions. His theory is that the expressions associated with some emotions are basic, or biologically universal to all humans. He devised a list of six basic emotions from cross-cultural research: anger, disgust, fear, happiness, sadness and surprise (Ekman, 1992; Ekman, 1999). For this precious and unique work, Ekman has been considered one of the 100 most eminent psychologists of the twentieth century. Nowadays, many authors involved in studies of facial expressions use his theory to concentrate their research on the expressions referring to the emotions considered basic. Another field in which facial landmarks are applied is the study of facial morphology. The purposes are various, such as the analysis of facial abnormalities, dysmorphologies or growth changes, and aesthetic or purely theoretical studies. The discipline that deals with this kind of study is anthropometry, which is directly connected to maxillofacial surgery, namely aesthetic, plastic and corrective surgery. Facial landmarks do not only appear in the applications of this discipline, but actually belong to it. In fact, they were defined by surgeons in order to have a common name for every specific part of the face. A pioneer in anthropometry is surely Leslie G. Farkas, who used anatomical landmarks to provide an essential update on the best methods for the measurement of the surfaces of the head and neck (Farkas, 1994). He gathered a set of anthropometric measurements of the face in different ethnic groups (Farkas et al., 2005). He then examined the effects on faces of some syndromes, such as Treacher Collins' (Kolar et al., 1985), Apert's (Kolar et al., 1985), cleft lips, nasal deformity (Kohout et al., 1998) and children's cleft palate (Farkas et al., 1972). He studied the changes of the head and face during growth (Farkas et al., 1992) and also researched facial beauty and neoclassical canons of face proportions (Dawei et al., 1997; Le et al., 2002; Farkas et al., 2000; Farkas, 1995). There are two further, quite different applications for which facial landmarks are used. The first one is face correction. It consists in detecting and correcting imperfections in group photos, such as closed eyes and inappropriate, unflattering or goofy faces.
Dufresne (Dufresne) presented a method to diagnose and correct these issues. Faces and facial landmarks are detected by an implementation of the Bayesian Tangent Shape Model search. He then trained an SVM classifier to identify unflattering faces. Bad faces are then warped to match nearest-neighbour faces from the good-face set. The other application is the performance evaluation of technical equipment. If the examined equipment is able to identify facial landmarks correctly, its performance is considered effective. Enciso et al. (Enciso et al., 2004) investigated methods for generating 3D facial images, such as laser scans, stereo-photogrammetry, infrared imaging and CT, and focused on the validation of indirect three-dimensional landmark location and measurement of facial soft tissue with light-based techniques. They also evaluated the precision, repeatability and validation of a light-based imaging system. Aung et al. (Aung et al., 1995) analysed the development of laser scanning techniques enabling the capture of 3D images, especially for surface measurements of the face. They used a laser optical surface scanner to take 83 facial anthropometric measurements, using 41 identifiable landmarks on the scanned image. They then demonstrated that the laser scanner can be a useful tool for rapid facial measurements in selected anatomical parts of the face. In fact, accurate location of landmarks and operator skill are important factors in achieving reliable results. Once landmarks are extracted from faces, manually or automatically, they become useful if it is possible to extrapolate the precious information that their particular position gives them. Gupta et al. (Gupta et al., 2007; Gupta et al., 2010) indeed investigated the effect of the choice of facial fiducial points on the performance of their proposed 3D face recognition algorithm. They repeated the same steps with distances between arbitrary face points, instead of the anthropometric fiducial points. These points were located in the form of a $5 \times 5$ rectangular grid positioned over the primary facial features of each face. They chose these particular facial points as they measure distances between the significant facial landmarks, including the eye, nose and mouth regions, without requiring localization of specific fiducial points. They showed that in their algorithms, when anthropometric distances are replaced by distances between arbitrary regularly spaced facial points, performance decreases substantially. As a matter of fact, landmarks have both a geometrical and a biological meaning on the human face, and for this reason the extraction of measures and features from their links becomes necessary for providing a complete face description. The next section addresses this task.

2. Feature types: classification
Facial landmarks lie in zones of the face which have peculiar geometric and anthropometric features. These features were extrapolated from the faces by various authors in many different ways, depending on the usages they were assigned to. The aim is to extract accurate geometric information about the examined face and allow comparison with other faces from which the same corresponding information was previously extracted. For face recognition applications, the computation of Euclidean or geodesic distances between landmarks is a widely used method. These are considered measures, rather than real features. Particularly, in anthropometry applications, these measures are called morphometric.
They are generally distances or angles, and their property is that one measure involves more than one landmark. As a matter of fact, both Euclidean and geodesic distances refer to two points, while angles involve three landmarks. But the information obtained at these reference points may be more geometric in nature, capturing for instance specific curvature or shape data.

2.1 Euclidean distance
The Euclidean distance or Euclidean metric is the "ordinary" distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. It is shown in Figure 2. By using this formula as distance, Euclidean space becomes a metric space. The Euclidean distance between points \( P \) and \( Q \) is the length of the line segment connecting them (\( PQ \)). In Cartesian coordinates, if \( P = (p_1, p_2, ..., p_n) \) and \( Q = (q_1, q_2, ..., q_n) \) are two points in Euclidean \( n \)-space, then the distance from \( P \) to \( Q \), or from \( Q \) to \( P \), is given by:
\[ d(P,Q) = d(Q,P) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + ... + (q_n - p_n)^2} = \sqrt{\sum_{i=1}^{n} (q_i - p_i)^2}. \]
In three-dimensional Euclidean space, the distance is:
\[ d(P, Q) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2 + (q_3 - p_3)^2}. \]
The Euclidean distance between landmarks is used by most authors as a morphometric measure. Once landmarks are obtained from a facial image or a three-dimensional face, the authors select some significant pairs and compute the corresponding Euclidean distances. These distances are then used to compare faces for face recognition purposes or to perform studies on face morphometry, as said above. The Euclidean-distance-based morphometric measures are chosen depending on the application. There is wide previous work on this topic. Gupta et al. (Gupta et al., 2007; Gupta et al., 2010) presented three-dimensional face recognition algorithms which employ Euclidean distances between anthropometric fiducial points as features, along with linear discriminant analysis classifiers. Prabhu et al. (Prabhu et al.) addressed the problem of automatically locating the facial landmarks of a single person across the frames of a video sequence. By calculating the mean of the Euclidean distances between the coordinates of each of the 79 landmarks fitted by the tracking method and those that were manually annotated, they obtained the fitting error for a particular frame. Similarly, Zhao et al. (Zhao et al., 2010) formed a vector from 11 Euclidean distances between facial-expression-sensitive landmarks. Moreno et al. (Moreno et al.) performed an HK segmentation, i.e. one based on the signs of the mean \((H)\) and Gaussian \((K)\) curvatures, to isolate regions of pronounced curvature on 420 3D facial meshes. After the segmentation task, feature extraction is performed. Among the features, Euclidean distances between some fiducial points were computed. Gordon (Gordon, 1992) presented a face recognition system which uses features extracted from range and curvature data to represent the face. She extracted high-level features which mark salient events on the face surface in terms of points, lines and regions. Since the most basic set of scalar features describing the face corresponds to measurements of the face, she firstly computed the Euclidean distances of: left eye width, right eye width, eye separation, total width (span) of the eyes, nose height, nose width, nose depth and head width.
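As a minimal illustration of this kind of measure, the following Python sketch computes a few Euclidean inter-landmark distances. The landmark names and coordinates are hypothetical and serve only to show the computation; they are not taken from any real scan or from a specific cited method.

```python
import numpy as np

# Hypothetical 3D soft-tissue landmark coordinates (in mm), for illustration only.
landmarks = {
    "pronasale":     np.array([  0.0,   0.0, 95.0]),
    "exocanthion_r": np.array([-52.0,  38.0, 60.0]),
    "exocanthion_l": np.array([ 52.0,  38.0, 60.0]),
    "cheilion_r":    np.array([-28.0, -45.0, 70.0]),
}

def euclidean(p, q):
    """Straight-line (Pythagorean) distance between two 3D points."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

# Example morphometric measures built from landmark pairs.
eye_separation = euclidean(landmarks["exocanthion_r"], landmarks["exocanthion_l"])
nose_to_mouth  = euclidean(landmarks["pronasale"], landmarks["cheilion_r"])
print(f"exocanthion-exocanthion: {eye_separation:.1f} mm")
print(f"pronasale-cheilion (right): {nose_to_mouth:.1f} mm")
```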
Likewise, Lee et al. (Lee et al., 2005) calculated the relative lengths of extracted facial feature points using Euclidean distances. Efraty et al. (Efraty et al., 2009; Efraty et al., 2010) studied the silhouette of the face profile and introduced a new method for face recognition that improves robustness to rotation. They achieved this by exploring the feature space of profiles under various rotations with the aid of a 3D face model. Based on the fiducial points on the profile silhouette, they extracted a set of rotation-, translation- and scale-invariant features which are used to design and train a hierarchical pose-identity classifier. The Euclidean distance was chosen by them as one type of measurement between landmarks. Daniyal et al. (Daniyal et al., 2009) represented the face geometry with inter-landmark distances within selected regions of interest to achieve robustness to expression variations. The proposed recognition algorithm first represents the geometry of the face by a set of Euclidean Inter-Landmark Distances (ILDs) between the selected landmarks. These distances are then compressed using Principal Component Analysis (PCA) and projected onto the classification space using Linear Discriminant Analysis (LDA). Soyel et al. (Soyel et al., 2007; Soyel et al., 2008) used six different Euclidean distances between feature points to form a distance vector for facial expression recognition: openness of the eyes, height of the eyebrows, openness of the mouth, width of the mouth, stretching of the lips and openness of the jaw. During the recognition experiments, a distance vector is derived for every 3D model and the whole procedure is repeated numerous times. Ras et al. (Ras et al., 1996) introduced stereophotogrammetry as a three-dimensional registration method for quantifying facial morphology and detecting changes in facial morphology during growth and development. They used six sets of automatically extracted 3D landmark coordinates to calculate the Euclidean distances between exocanthion and chelion, chelion and pronasale, and exocanthion and pronasale for both sides of the face. Changes in facial morphology due to growth and development were analysed with an analysis of variance of these distances. The last field in which Euclidean distances between landmarks were applied is the performance evaluation of technical equipment. Enciso et al. (Enciso et al., 2004) used a digitizer to obtain landmarks and then directly measured the Euclidean distances between them. These distances were compared with the indirect homologous distances measured on the scans with their computer tools. Aung et al. (Aung et al., 1995) firstly carried out direct Euclidean-distance-based anthropometric measurements of the face using standard anthropometric landmarks as defined by Farkas. The subject was then laser scanned with the optical surface scanner and the laser scan measurements were taken using selected landmarks identifiable on the laser scan image. The same number of corresponding sets of measurements from the direct and indirect methods were then compared, in order to evaluate the laser scanner's performance.

2.2 Geodesic distance and arc-length distance
A geodesic is a generalization of the notion of a "straight line" to curved spaces. In the presence of a metric, geodesics are defined to be (locally) the shortest path between points in the space.
The term "geodesic" comes from geodesy, the science of measuring the size and shape of the Earth; in the original sense, a geodesic was the shortest route between two points on the Earth's surface, namely, a segment of a great circle. More generally, on a sphere, the images of geodesics are the great circles. The shortest path from point A to point B on a sphere is given by the shorter arc of the great circle passing through A and B. If A and B are antipodal points (like the North Pole and the South Pole), then there are "infinitely many" shortest paths between them. This is shown in Figure 3.

Figure 3. Geodesic distance between pronasal and right exocanthion.

Formally, geodesics can be defined as curves whose osculating planes contain the normals to the surface. The parametrized curves $\gamma : I \rightarrow \mathbb{R}^2$ of a plane along which the field of their tangent vectors $\gamma'(t)$ is parallel are precisely the straight lines of that plane. The parametrized curves that satisfy an analogous condition for a surface are called geodesics. More precisely, a nonconstant parametrized curve $\gamma : I \rightarrow S$ is said to be geodesic at $t \in I$ if the field of its tangent vectors $\gamma'(t)$ is parallel along $\gamma$ at $t$; that is,
$$\frac{D\gamma'(t)}{dt} = 0;$$
$\gamma$ is a parametrized geodesic if it is geodesic for all $t \in I$. It follows immediately that $|\gamma'(t)| = \text{const.} = c \neq 0$. Therefore, the arc length $s = ct$ may be introduced as a parameter, and it is possible to conclude that the parameter $t$ of a parametrized geodesic $\gamma$ is proportional to the arc length of $\gamma$. A parametrized geodesic may admit self-intersections. However, its tangent vector is never zero, and thus the parametrization is regular. The notion of geodesic is clearly local. The previous considerations make it possible to extend the definition of geodesic to subsets of $S$ that are regular curves. A regular connected curve $C$ in $S$ is said to be a geodesic if, for every $p \in C$, the parametrization $\alpha(s)$ of a coordinate neighborhood of $p$ by the arc length $s$ is a parametrized geodesic; that is, $\alpha'(s)$ is a parallel vector field along $\alpha(s)$. Every straight line contained in a surface satisfies this definition. From a point of view exterior to the surface $S$, the definition is equivalent to saying that $\alpha''(s) = kn$ is normal to the tangent plane, that is, parallel to the normal to the surface. In other words, a regular curve $C \subset S$ ($k \neq 0$) is a geodesic if and only if its principal normal at each point $p \in C$ is parallel to the normal to $S$ at $p$. The above property can be used to identify some geodesics geometrically. The great circles of a sphere $S^2$ are geodesics. Indeed, the great circles $C$ are obtained by intersecting the sphere with a plane that passes through the centre $O$ of the sphere. The principal normal at a point $p \in C$ lies in the direction of the line that connects $p$ to $O$, because $C$ is a circle of centre $O$. Since $S^2$ is a sphere, the normal lies in the same direction, which verifies our assertion. For the case of the sphere, through each point and tangent to each direction there passes exactly one great circle, which, as shown before, is a geodesic. Therefore, by uniqueness, the great circles are the only geodesics of a sphere.
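To make the distinction between geodesic and Euclidean distance concrete, the short Python sketch below computes the great-circle (geodesic) distance between two points on a sphere and contrasts it with the straight-line chord between them. The points and radius are arbitrary illustrative values, not tied to any facial data.

```python
import numpy as np

def sphere_geodesic(p, q, radius=1.0):
    """Great-circle (geodesic) distance between two points on a sphere,
    given as 3D position vectors from the sphere's centre."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    # Central angle between the two position vectors.
    cos_angle = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return radius * angle

# Two points on a unit sphere, 90 degrees apart.
a = [1.0, 0.0, 0.0]
b = [0.0, 1.0, 0.0]
print(sphere_geodesic(a, b))                 # pi/2 ~ 1.571: arc along the great circle
print(np.linalg.norm(np.subtract(a, b)))     # sqrt(2) ~ 1.414: shorter Euclidean chord
```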
For the right circular cylinder over the circle $x^2 + y^2 = 1$, it is clear that the circles obtained by the intersection of the cylinder with planes that are normal to the axis of the cylinder are geodesics. That is so because the principal normal at any of their points is parallel to the normal to the surface at this point. On the other hand, the straight lines of the cylinder (generators) are also geodesics. To verify the existence of other geodesics on the cylinder $C$ we shall consider a parametrization
$$x(u, v) = (\cos u, \sin u, v)$$
of the cylinder at a point $p \in C$, with $x(0, 0) = p$. In this parametrization, a neighborhood of $p$ in $C$ is expressed by $x(u(s), v(s))$, where $s$ is the arc length of $C$. Then, $x$ is a local isometry which maps a neighborhood $U$ of $(0, 0)$ of the $uv$ plane into the cylinder. Since the condition of being a geodesic is local and invariant by isometries, the curve $(u(s), v(s))$ must be a geodesic in $U$ passing through $(0, 0)$. But the geodesics of the plane are the straight lines. Therefore, excluding the cases already obtained,
$$u(s) = as, \quad v(s) = bs, \quad a^2 + b^2 = 1.$$
It follows that when a regular curve $C$ (which is neither a circle nor a line) is a geodesic of the cylinder it is locally of the form
$$(\cos as, \sin as, bs),$$
and thus it is a helix. In this way, all the geodesics of a right circular cylinder are determined. Observe that given two points on a cylinder which are not on a circle parallel to the $xy$ plane, it is possible to connect them through an infinite number of helices. This fact means that two points of a cylinder may in general be connected through an infinite number of geodesics, in contrast to the situation in the plane. Observe that such a situation may occur only with geodesics that make a "complete turn", since the cylinder minus a generator is isometric to a plane (Do Carmo, 1976). In metric geometry, a geodesic is a curve which is everywhere locally a distance minimizer. More precisely, a curve $\gamma : I \rightarrow M$ from an interval $I$ of the reals to the metric space $M$ is a geodesic if there is a constant $v \geq 0$ such that for any $t \in I$ there is a neighborhood $J$ of $t$ in $I$ such that for any $t_1, t_2 \in J$ we have
\[ d(\gamma(t_1), \gamma(t_2)) = v|t_1 - t_2|. \]
This generalizes the notion of geodesic for Riemannian manifolds. However, in metric geometry the geodesic considered is often equipped with a natural parametrization, i.e. in the above identity $v = 1$ and
\[ d(\gamma(t_1), \gamma(t_2)) = |t_1 - t_2|. \]
If the last equality is satisfied for all $t_1, t_2 \in I$, the geodesic is called a minimizing geodesic or shortest path. In general, a metric space may have no geodesics, except constant curves. At the other extreme, any two points in a length metric space are joined by a minimizing sequence of rectifiable paths, although this minimizing sequence need not converge to a geodesic. Some authors have used geodesic distances between facial landmarks. First of all, Bronstein et al. (Bronstein et al., 2003; Bronstein et al., 2004; Bronstein et al., 2005; Bronstein et al., 2005; Bronstein et al., 2006) proposed to model facial expressions as isometries of the facial surface. The facial surface is described as a smooth compact connected two-dimensional Riemannian manifold (surface), denoted by $S$. The minimal geodesics between $s_1, s_2 \in S$ are curves of minimum length on $S$ connecting $s_1$ and $s_2$. The geodesics are denoted by $C^*_S(s_1, s_2)$.
The geodesic distances refer to the lengths of the minimal geodesics and are denoted by
\[ d_S(s_1, s_2) = \text{length}(C^*_S(s_1, s_2)). \]
A transformation $\psi : S \rightarrow Q$ is called an isometry if
\[ d_S(s_1, s_2) = d_Q(\psi(s_1), \psi(s_2)) \]
for all $s_1, s_2 \in S$. In other words, an isometry preserves the intrinsic metric structure of the surface. The isometric model, assuming facial expressions to be isometries of some neutral facial expression, is based on the intuitive observation that the facial skin stretches only slightly. All expressions of a face are assumed to be "intrinsically" equivalent (i.e. to have the same metric structure) and "extrinsically" different. Broadly speaking, the intrinsic geometry of the facial surface can be attributed to the subject's identity, while the extrinsic geometry is attributed to the facial expression. The isometric model tacitly assumes that the expressions preserve the topology of the surface. This assumption is valid for most regions of the face except the mouth: opening the mouth changes the topology of the surface by virtually creating a hole. Based on this model, expression-invariant signatures of the face were constructed by means of approximate isometric embedding into flat spaces. They applied a new method for measuring isometry-invariant similarity between faces by embedding one facial surface into another. Promising face recognition results were obtained in numerical experiments even when the facial surfaces were severely occluded. Gupta et al. (Gupta et al., 2007; Gupta et al., 2007; Gupta et al., 2010) worked on the same assumption, namely that different facial expressions can be regarded as isometric deformations of the face surface. These deformations preserve intrinsic properties of the surface, one of which is the geodesic distance between a pair of points on the surface. Based on these ideas, they presented a preliminary study aimed at investigating the effectiveness of using geodesic distances between all pairs of 25 fiducial points on the surface as features for face recognition. Instead of choosing a random set of points on the face surface, they considered facial landmarks relevant to measuring the anthropometric facial proportions employed widely in facial plastic surgery and art. They calculated geodesics using Dijkstra's shortest path algorithm, defining 8-connected nearest neighbors about each point. Twenty-five fiducial points were manually located on each face. Three face recognition algorithms were implemented. The first employed 300 geodesic distances (between all pairs of fiducial points) as features for recognition. The fast marching algorithm for front propagation was employed to calculate the geodesic distance between pairs of points. The second algorithm employed 300 Euclidean distances between all pairs of fiducial points as features. The normalized L1 norm, where each dimension was divided by its variance, was used as the metric for matching faces with both the Euclidean distance and the geodesic distance features. The arc length is the length of an irregular arc segment; computing it is also called "rectification of a curve". By definition, it is strictly connected to geodesics. Efraty et al. (Efraty et al., 2009; Efraty et al., 2010) were interested in profile-based face recognition. They defined five types of measurements based on the properties of the profile between two landmarks. One of them was exactly the arc length between landmarks.
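In practice, geodesic distances on a scanned face are approximated on a discrete mesh rather than on a smooth surface. The following Python sketch approximates the geodesic distance between two mesh vertices by running Dijkstra's shortest-path algorithm on the edge graph, which is one simple way to realize the graph-based approach described above; more accurate schemes (e.g. fast marching) exist. The mesh, edge list and "landmark" indices here are toy values chosen only for illustration.

```python
import heapq
import numpy as np

def mesh_geodesic(vertices, edges, source, target):
    """Approximate geodesic distance between two vertices of a mesh by
    Dijkstra's algorithm on the edge graph, using Euclidean edge lengths
    as weights. `vertices` is an (N, 3) array, `edges` a list of (i, j)
    index pairs, `source`/`target` vertex indices (e.g. landmark vertices)."""
    n = len(vertices)
    adj = [[] for _ in range(n)]
    for i, j in edges:
        w = float(np.linalg.norm(vertices[i] - vertices[j]))
        adj[i].append((j, w))
        adj[j].append((i, w))

    dist = np.full(n, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float(dist[target])

# Toy example: a bent strip of 4 vertices; indices stand in for landmarks.
verts = np.array([[0, 0, 0], [1, 0, 0.5], [2, 0, 0.5], [3, 0, 0]], float)
edges = [(0, 1), (1, 2), (2, 3)]
print(mesh_geodesic(verts, edges, 0, 3))     # path length along the bent strip
print(np.linalg.norm(verts[0] - verts[3]))   # shorter straight-line distance
```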
Nevertheless, Aung et al. (Aung et al., 1995), who used facial landmarks to evaluate the performance of a laser scanner, argued that tangential or arc measurements are slightly more complex and need careful positioning of the image before accurate measurements can be made.

2.3 Ratios of distances
The ratios of geometric features are common in nature, the golden ratio \( \Phi = \frac{1 + \sqrt{5}}{2} \) being the most familiar. Many artists utilize the golden ratio to make their paintings and sculptures more appealing. Scientists believe that some human faces are more attractive because their features are related by the golden ratio. It has been demonstrated that the perception of face beauty is not based entirely on cultural influences and that the lengths of the internal features can cause different perceptions of beauty. The ratios of face features play a crucial role in the classification of faces. In the past 20 years, researchers and practitioners in anthropology and aesthetic surgery have analyzed faces from a different perspective. They use a set of canonical points on the human face that are critical for face reconstruction. These points, and the distances between them, are used to represent a face. In fact, artists developed sets of neoclassical canons (ratios of distances) to represent faces as far back as the Renaissance period. All these observations motivate researchers to explore the role of ratios of the distances between face landmarks in face recognition (Shi et al., 2006). Generally, in face study, ratios are defined over the Euclidean or geodesic distances among landmarks. These ratios are often normalized distances, obtained by dividing a distance between points by the face width. Shi et al. (Shi et al., 2006) investigated how well normalized Euclidean distances (special ratios) can be exploited for face recognition. Exploiting symmetry and using principal component analysis, they reduced the number of ratios to 20. These ratios are invariant to translation, scaling and 2D rotation of face images. The normalized distances for a face are then defined as
\[ r(l_i, l_j) = \frac{d(l_i, l_j)}{d(l_a, l_b)}, \quad \forall l_i, l_j \in \{l_1, ..., l_N\}, \]
where \( \{l_1, ..., l_N\} \) are the landmarks, \( N \) is their cardinality, and \( l_a \) and \( l_b \) are two landmarks whose Euclidean distance is taken as the benchmark distance. Together with Euclidean distances and geodesic distances, Gupta et al. (Gupta et al., 2007; Gupta et al., 2007) used ratios. They presented an anthropometric three-dimensional (Anthroface 3D) face recognition algorithm, which is based on a systematically selected set of discriminatory structural characteristics of the human face derived from the existing scientific literature on facial anthropometry. Anthropometric cranio-facial proportions are ratios of pairs of straight-line and/or along-the-surface distances between specific cranial and facial fiducial points. For example, the most commonly used nasal index $N_1$ is the ratio of the horizontal nose width to the vertical nose height. Lee et al. (Lee et al., 2005) used relative ratios between feature points to perform face recognition. Tang et al. (Tang et al., 2008; Tang et al., 2008) performed facial expression recognition. They devised a set of features based on properties of the line segments connecting certain facial feature points on a 3D face model. Among them, normalized distances were extracted.
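As a minimal sketch of the normalized distances \( r(l_i, l_j) = d(l_i, l_j)/d(l_a, l_b) \) defined above, the Python snippet below divides every pairwise Euclidean distance by the distance of a benchmark landmark pair. The landmark coordinates are hypothetical, and the choice of the exocanthion pair as the benchmark is an assumption made only for the example, not a prescribed convention.

```python
import numpy as np
from itertools import combinations

def normalized_distances(landmarks, benchmark):
    """All pairwise Euclidean distances divided by the distance of a benchmark
    landmark pair, following r(l_i, l_j) = d(l_i, l_j) / d(l_a, l_b).
    `landmarks` maps names to 3D coordinates; `benchmark` is a (name, name) pair."""
    d = lambda a, b: float(np.linalg.norm(landmarks[a] - landmarks[b]))
    base = d(*benchmark)
    return {(i, j): d(i, j) / base for i, j in combinations(landmarks, 2)}

# Illustrative coordinates (mm); the benchmark pair plays the role of a
# "face width" reference here.
lm = {
    "exocanthion_r": np.array([-52.0,  38.0, 60.0]),
    "exocanthion_l": np.array([ 52.0,  38.0, 60.0]),
    "pronasale":     np.array([  0.0,   0.0, 95.0]),
    "cheilion_r":    np.array([-28.0, -45.0, 70.0]),
}
ratios = normalized_distances(lm, ("exocanthion_r", "exocanthion_l"))
for pair, r in ratios.items():
    print(pair, round(r, 3))
```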
Mao et al. (Mao et al.) addressed the study of facial changes due to growth. They formulated a new inverse flatness metric, the ratio of the geodesic distance to the Euclidean distance between landmarks, to study 3D facial surface shape. With this ratio, they were able to analyze curvature asymmetry, which cannot be detected by studying the Euclidean distance alone. They also attempted to combine it with the conventional symmetry method based on Euclidean inter-landmark distances, so as to express facial symmetry in terms of both surface flatness and the geometric symmetry of landmark positions (captured by the Euclidean distances), and thus give a better overall description of three-dimensional facial symmetry. If $GD_{i,j}$ is the geodesic distance between points $i$ and $j$, and $ED_{i,j}$ is the Euclidean distance, then the ratio of the geodesic to the Euclidean distance
$$R = \frac{GD_{i,j}}{ED_{i,j}}$$
is employed in their work to analyze surface flatness, since it reflects the inverse flatness of the geodesic curve that samples the surface on which the two end points $(i,j)$ lie. Therefore, this ratio is capable of capturing obvious differences in facial curvature.

2.4 Curvature and shape
Punctual values of curvature and shape are precious information about facial surface behaviour. Despite their valuable contribution, they are not used as often as distances, because they are not as easily tractable and extractable from faces. The need to condense and formalize their values becomes essential in this field, where surfaces are generally not available in analytic form, but are described by point clouds or meshes. Several techniques have been developed in the last two decades to estimate curvature information. From the mathematical viewpoint, the curvature information can be retrieved from the first and second partial derivatives of the local surface, the local surface normal, and tensor voting (Worthington et al., 2000). An interesting curvature representation was proposed by Koenderink et al. (Koenderink et al., 1992). It is based on the parametrization of the structure in two feature maps, namely the Shape Index $S$ and the Curvedness Index $C$. The formal definition of the Shape Index can be given as follows:
$$S = -\frac{2}{\pi} \arctan \left( \frac{k_1 + k_2}{k_1 - k_2} \right), \quad S \in [-1,1], \quad k_1 \geq k_2,$$
where $k_1$ and $k_2$ are the principal curvatures. It describes the shape of the surface. Koenderink et al. proposed a partition of the range $[-1,1]$ into nine categories, which correspond to nine different surface types. Nevertheless, Dorai et al. (Dorai et al., 1995; Dorai et al., 1996; Dorai et al., 1997) employed a modified definition to identify the shape category to which each surface point on an object belongs. With their definition, all shapes can be mapped onto the interval $S \in [0,1]$, conveniently allowing aggregation of surface patches based on their shapes:
\[ S = \frac{1}{2} - \frac{1}{\pi} \arctan \frac{k_1 + k_2}{k_1 - k_2}. \]
Dorai et al. addressed the problem of representing and recognizing arbitrarily curved 3D rigid objects when the objects may vary in shape and complexity, and no restrictive assumptions are made about the types of surfaces on the object. They proposed a new and general surface representation scheme for recognizing objects with free-form (sculpted) surfaces from range data. $S$ does not give an indication of the scale of curvature present in the shapes. For this reason, an additional feature is introduced, the Curvedness Index of a surface:
\[ C = \sqrt{\frac{k_1^2 + k_2^2}{2}}. \]
It is a measure of how highly or gently curved a point is and is defined as the distance from the origin in the \((k_1, k_2)\)-plane. Whereas the Shape Index scale is quite independent of the choice of a unit of length, the curvedness scale is not: curvedness has the dimension of reciprocal length. In practice one has to single out some fiducial sphere as the unit sphere to fix the curvedness scale. Since the principal curvatures may be computed punctually, both \( S \) and \( C \) may be too. This advantage allows shape and curvedness information to be extracted at landmarks or fiducial points, guaranteeing a formalization for these features.
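The following Python sketch evaluates the Shape Index (with the sign convention of the formula given above) and the Curvedness Index from a pair of principal curvatures, e.g. as estimated at a landmark. The curvature values are illustrative assumptions; estimating $k_1$ and $k_2$ from a point cloud or mesh is a separate step not shown here.

```python
import numpy as np

def shape_and_curvedness(k1, k2):
    """Shape Index S in [-1, 1] and Curvedness Index C, computed from the
    principal curvatures k1 >= k2, following the definitions given above."""
    k1, k2 = max(k1, k2), min(k1, k2)
    if np.isclose(k1, k2):            # umbilical point: the Shape Index is undefined
        s = float("nan")
    else:
        s = -(2.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2))
    c = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
    return s, c

# Illustrative principal curvature values (1/mm): a strongly curved,
# dome-like point (e.g. near the nose tip) versus a nearly flat, saddle-like one.
print(shape_and_curvedness(0.12, 0.10))   # |S| close to 1, larger curvedness
print(shape_and_curvedness(0.02, -0.01))  # saddle-like shape, small curvedness
```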
Few authors used the Shape and Curvedness Indexes for recognition. Worthington et al. (Worthington et al., 2000) investigated whether regions of uniform surface topography can be extracted from intensity images using shape-from-shading and subsequently used for the purposes of 3D object recognition. They drew on the constant-Shape-Index maximal patch representation of Dorai et al. Song et al. (Song et al., 2005) described a 3D face recognition method using facial Shape Indexes. Given an unknown range image, they extracted invariant facial features based on the facial geometry. For face recognition, they defined and extracted facial Shape Indexes based on facial curvature characteristics and performed dynamic programming. Shin et al. (Shin et al., 2006) described a pose-invariant three-dimensional face recognition method using distinctive facial features. They extracted invariant facial feature points on those components using the facial geometry from normalized face data and calculated relative features using these feature points. They also calculated a Shape Index on each area around the facial feature points to represent the curvature characteristics of facial components. Calignano (Calignano, 2009) used the Shape and Curvedness Indexes in a morphological analysis methodology for the automatic extraction of soft-tissue landmarks. Nair et al. (Daniyal et al., 2009; Nair et al., 2009) dealt with face recognition, face detection and landmark localization. In the isolation-of-candidate-vertices phase, in order to properly characterize the curvature of each vertex on the face mesh, they computed two feature maps, namely the Shape Index and the Curvedness Index. The low-level feature maps were computed after a Laplacian smoothing that reduced outliers arising from the scaling process. The smoothed and decimated mesh is only used for the isolation of the candidate vertices. Zhao et al. (Zhao et al., 2010) analysed facial expressions. To describe local surface properties, they computed the Shape Index of all points on the local grids and concatenated the values into a vector \( SI \). They chose the Shape Index because it has been proven to be an efficient feature for describing local curvature information and is independent of the coordinate system. The Shape Index is computed at each vertex of the local grids and the feature \( SI \) is constructed by concatenating those values into a vector. Other parameters and methodologies were used to extract shape and curvature information from facial landmarks or fiducial points. Moreno et al. (Moreno et al.) performed face recognition using 3D surface-extracted descriptors. Averages and variances of the mean and Gaussian curvatures, evaluated at points belonging to the various regions into which the face surface was divided, were extracted. Gordon (Gordon, 1992) defined a set of features which describe the nose ridge and are based on measurements of curvature.
They are: maximum Gaussian curvature on the ridge line, average minimum curvature on the ridge above the tip of the nose, Gaussian curvature at the bridge, and Gaussian curvature at the base. The maximum Gaussian curvature occurs approximately at the tip of the nose, and provides some description of the local shape at that point. The average minimum curvature between the bridge and the tip of the nose is meant to provide a simple measure of the curvature along the ridge. Xu et al. (Xu et al., 2004) developed an automatic face recognition method combining global geometric features with local shape variation information. The scattered 3D point cloud is first represented with a regular mesh. Then the local shape information is extracted to characterize the individual together with the global geometric features. They firstly defined a metric to describe the 3D shape of the principal areas with a 1D vector and then used Gaussian-Hermite moments to analyze the shape variation. Efraty et al. (Efraty et al., 2009; Efraty et al., 2010) computed, for each pair of landmarks, the mean curvature of the region between the landmarks and the $L_2$-norm of the curvature along the contour between the landmarks (proportional to the bending energy). Wang et al. (Wang J. et al., 2006) dealt with facial expression recognition. They proposed an approach to extract primitive 3D facial expression features from the triangle meshes of faces. They performed principal curvature analysis, which produced a set of attributes that describes the surface property at each vertex. Among them, the principal curvatures, representing the maximum and the minimum degrees of bending of a surface, and the steepness are included. Using these geometric attributes, they were able to classify every vertex into a category.

2.5 Other features
Other geometrical features were extracted from face landmarks. Ras et al. (Ras et al., 1996) studied facial morphology and computed angles between fiducial points. Particularly, the angles exocanthion-chelion-pronasal, exocanthion-pronasal-exocanthion and pronasal-exocanthion-chelion, and the angle between the two planes formed by exocanthion, chelion and pronasal of both sides, were calculated. Changes in facial morphology due to growth and development were analyzed with an analysis of variance of the angles. Lee et al. (Lee et al., 2005) performed face recognition calculating relative angles among facial feature points. Moreno et al. (Moreno et al.) computed angles, regions, areas of regions and centroids of regions. Zhao et al. (Zhao et al., 2010), for face recognition purposes, used the multi-scale LBP operator, a powerful texture measure widely used in 2D face analysis. It extracts information which is invariant to local gray-scale variations of the image, with low computational complexity. They also computed a landmark displacement vector. The displacement of a landmark is meant to capture the change of the landmark location when an expression appears on a neutral face. It is informative because it represents the difference between the face with an expression and the neutral one. Similarly, Sun et al. (Sun et al., 2008) derived the displacement vector between each individual frame and the initial frame, namely the neutral expression one.
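As a minimal sketch of two of the simpler features mentioned above, the Python snippet below computes the angle formed at one landmark by two others, and the displacement vector of a landmark between a neutral and an expressive scan. All coordinates are hypothetical, and the sketch assumes the two scans are already registered in a common reference frame.

```python
import numpy as np

def landmark_angle(a, b, c):
    """Angle (degrees) at landmark b, formed by the segments b->a and b->c."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

def displacement_vectors(neutral, expressive):
    """Per-landmark displacement between a neutral and an expressive scan,
    assuming the two scans are registered in the same coordinate frame."""
    return {name: expressive[name] - neutral[name] for name in neutral}

# Hypothetical coordinates (mm).
exo_r, prn, che_r = [-52, 38, 60], [0, 0, 95], [-28, -45, 70]
print(landmark_angle(exo_r, prn, che_r))   # angle exocanthion-pronasale-cheilion

neutral    = {"cheilion_r": np.array([-28.0, -45.0, 70.0])}
expressive = {"cheilion_r": np.array([-33.0, -43.0, 69.0])}   # e.g. smiling
print(displacement_vectors(neutral, expressive))
```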
Dufresne (Dufresne) utilized the vectors between selected facial points as features for 2D face correction. He showed that simply measuring the width and height of the mouth does not indicate what pose the mouth is in, i.e. smiling, scowling or smirking. Vectors were selected because they are particularly expressive; that is, a human could understand the expression if only these vectors were presented to them. Tang et al. (Tang et al., 2008) extracted the slopes of the line segments connecting a subset of the 83 facial feature points for facial expression recognition purposes. Daniyal et al. (Daniyal et al., 2009) analyzed the performance of different landmark combinations (signatures) to determine a signature that is robust to expressions for the purpose of face recognition. The selected signature is then used to train a Point Distribution Model for the automatic localization of the landmarks. As a validation, Jesorsky et al. (Jesorsky et al., 2001) used a relative error measure for face detection. The relative error is based on the distances between the expected and the estimated eye positions, so it must not be considered a normalized distance. Other authors extracted depth and texture features from landmarks or face zones. A texture coding provides information about facial regions with little geometric structure, such as the hair, forehead and eyebrows, while a depth coding provides information about regions with little texture, such as the chin, jawline and cheeks. Particularly, Wang et al. (Wang Y. et al., 2002) extracted shape and texture features from defined feature points for face recognition purposes. BenAbdelkader et al. (BenAbdelkader et al., 2005) worked on face coding for recognition and identification. They designed a pattern classifier for three different inputs: a depth map, a texture map, and both depth and texture maps. Hüskens et al. (Hüskens et al., 2005) included both texture and shape as typical 2D and 3D representations of faces.

3. Results and conclusions
Depending on the application field, these measures were judged by the researchers as valid, effective and suitable for face description. Since the fields in which all these geometrical features are used are really various, it is beyond the scope of this paper to report here the results these measures give in their applications. However, it is possible to give an overview of how functional the most important features, namely Euclidean and geodesic distances, are in recognition, i.e. the main field. These evaluations are given by those authors who employed both measures and compared the obtained results. Bronstein et al. (Bronstein et al., 2003; Bronstein et al., 2004; Bronstein et al., 2005; Bronstein et al., 2005; Bronstein et al., 2005), who used geodesic distances, obtained promising face recognition results on a small database of 30 subjects even when the facial surfaces were severely occluded. They also demonstrated that the approach has several significant advantages, one of which is the ability to handle partially missing data. This is exactly the contrary of what was found by Gupta et al. in their first study (Gupta et al., 2007), which tested both Euclidean and geodesic distances. The two algorithms based on Euclidean or geodesic distances between anthropometric facial landmarks performed substantially better than the baseline PCA algorithm. The algorithms based on geodesic distance features performed on a par with the algorithm based on Euclidean distance features. Both were effective, to a degree, at recognizing 3D faces. In this study, the performance of the proposed algorithm based on geodesic distances between anthropometric facial landmarks decreased when probes with arbitrary facial expressions were matched against a gallery of neutral expression 3D faces.
This suggests that geodesic distances between pairs of landmarks on a face may not be preserved when the facial expression changes, which contradicts Bronstein et al.'s assumption that facial expressions are isometric deformations of facial surfaces. In conclusion, geodesic distances between anthropometric landmarks were observed to be effective features for recognizing 3D faces; however, they were not more effective than Euclidean distances between the same landmarks. The 3D face recognition algorithm based on geodesic distance features was affected by changes in facial expression. Later, Gupta et al. (Gupta et al., 2010) obtained other results. They found that, for expressive faces, the recognition rates of the algorithm based on both the Euclidean and geodesic facial anthropometric distances were generally higher than those of the algorithm based on Euclidean distances only. This suggests that facial geodesic distances may be useful for expression-invariant 3D face recognition and further strengthens Bronstein et al.'s proposition that different facial expressions may be modeled as isometric deformations of the facial surface. An exhaustive set of morphometric measures and geometrical features extractable from facial landmarks has been presented and explained here. The most popular ones are certainly the Euclidean and geodesic distances, which were used by many authors, also as benchmark elements of comparison. The application which involves them the most is recognition, with its various subfields, such as face recognition, facial expression recognition and face detection. Landmarks are the starting point for this study, being exactly the reference points from which the information is extracted. This is due to the fact that various evaluations showed the use of fiducial points to be necessary. As a matter of fact, most of the work concerning 3D facial morphometry refers exactly to landmarks.
References
[REMOVED]
Tomographic Particle Image Velocimetry Using Colored Shadow Imaging
Thesis by Meshal K Alarfaj
In Partial Fulfillment of the Requirements For the Degree of Master of Science
King Abdullah University of Science and Technology
Thuwal, Kingdom of Saudi Arabia

ABSTRACT
Tomographic Particle Image Velocimetry Using Colored Shadow Imaging
by Meshal K Alarfaj, Master of Science
King Abdullah University of Science & Technology, 2015

Tomographic particle image velocimetry (PIV) is a recent PIV method capable of reconstructing the full 3D velocity field of complex flows within a 3D volume. For nearly the last decade, it has been the most powerful tool for the study of turbulent velocity fields and promises great advancements in the study of fluid mechanics. Among the early published studies, a good number have suggested enhancements and optimizations of different aspects of this technique to improve its effectiveness. One major aspect, which is the core of the present work, is related to reducing the cost of the tomographic PIV setup. In this thesis, we attempt to reduce this cost by using an experimental setup exploiting 4 commercial digital still cameras in combination with low-cost light-emitting diodes (LEDs). We use two different colors to distinguish the two light pulses. By using colored shadows with red and green LEDs, we can identify the particle locations within the measurement volume at the two different times, thereby allowing calculation of the velocities. The present work tests this technique on the flow patterns of a jet ejected from a tube in a water tank. Results from the image processing are presented and challenges discussed.
# TABLE OF CONTENTS <table> <thead> <tr> <th>Section</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>Abstract</td> <td>iv</td> </tr> <tr> <td>List of Abbreviations</td> <td>vi</td> </tr> <tr> <td>List of Figures</td> <td>vii</td> </tr> <tr> <td>1 Introduction</td> <td>1</td> </tr> <tr> <td>1.1 PIV basic principle</td> <td>2</td> </tr> <tr> <td>1.2 Seeding particles</td> <td>3</td> </tr> <tr> <td>1.3 Tomographic PIV standard layout</td> <td>4</td> </tr> <tr> <td>1.4 Tomographic PIV applications and limitations</td> <td>7</td> </tr> <tr> <td>1.5 Commonly used illumination sources</td> <td>8</td> </tr> <tr> <td>2 Objectives</td> <td>11</td> </tr> <tr> <td>3 Experimental setup</td> <td>14</td> </tr> <tr> <td>3.1 LED illumination source</td> <td>14</td> </tr> <tr> <td>3.2 Cameras</td> <td>20</td> </tr> <tr> <td>3.2.1 Camera calibration</td> <td>22</td> </tr> <tr> <td>3.2.2 Self-calibration</td> <td>26</td> </tr> <tr> <td>3.3 Water tank</td> <td>26</td> </tr> <tr> <td>3.4 Flow field and seeding system</td> <td>30</td> </tr> <tr> <td>3.5 Signal and air generators</td> <td>31</td> </tr> <tr> <td>4 Data processing and results</td> <td>33</td> </tr> <tr> <td>5 Discussion and conclusions</td> <td>69</td> </tr> <tr> <td>References</td> <td>71</td> </tr> </tbody> </table> **LIST OF ABBREVIATIONS** <table> <thead> <tr> <th>Abbreviation</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Bmp</td> <td>Bitmap image file</td> </tr> <tr> <td>CCD</td> <td>Charge-coupled device</td> </tr> <tr> <td>CMOS</td> <td>Complementary metal-oxide-semiconductor</td> </tr> <tr> <td>CW</td> <td>Continuous wave</td> </tr> <tr> <td>DSLR</td> <td>Digital single-lens reflex</td> </tr> <tr> <td>fps</td> <td>Frames per second</td> </tr> <tr> <td>JPEG</td> <td>Joint Photographic Experts Group</td> </tr> <tr> <td>LASER</td> <td>Light amplification by stimulated emission of radiation</td> </tr> <tr> <td>LED</td> <td>Light-emitting diode</td> </tr> <tr> <td>MART</td> <td>Multiplicative Algebraic Reconstruction Technique</td> </tr> <tr> <td>NEF</td> <td>Nikon Electronic Format</td> </tr> <tr> <td>PIV</td> <td>Particle image velocimetry</td> </tr> <tr> <td>PSV</td> <td>Particle shadow velocimetry</td> </tr> <tr> <td>SRS</td> <td>Stanford Research Systems</td> </tr> </tbody> </table>

LIST OF FIGURES
Fig 1.1: A schematic drawing showing the displacement of a tracer particle at two consecutive times
Fig 1.2: Illustration of the Tomo-PIV experimental setup and working principle, with 4 cameras and laser volume illumination [3]
Fig 2.1: Subsection of a Nikon photo demonstrating the basic idea of using two colors to embed time information into a single image
Fig 2.2: Sketch showing how the shadows appear to reverse the order of the light illumination
Fig 3.1: A schematic drawing of the LED system layout and octagonal water tank model
Fig 3.2: Photos of the red & green LED chips by Luminus [6]
Fig 3.3: A photo of the aspheric condenser lens used with the LED system
Fig 3.4: Timing diagram for the LED and camera systems used in the Tomo-PIV setup
17 Fig 3.5: Schematic drawing and photo of the LED mounting on the heat Sink [6]........................................................................................................................................20 Fig 3.6: A photo of the commercial cameras used in the Tomo-PIV ......................... 22 Fig 3.7: Schematic drawing & photo for the cameras connection ......................... 24 Fig 3.8: Photos of the calibration setup: (a) calibration plate by Lavision, (b) calibration stepper motor by Pollux ................................................................. 25 Fig 3.9: Drawing and photo of Tank-A model used with 4 LEDs system ............. 28 Fig 3.10: Drawing and photo of Tank-B model used with 8 LEDs system .......... 29 Fig 3.11: Photos of seeding mechanisms used with Tomo-PIV: (a) with Tank-A model, (b) with Tank-B model ............................................. 30 Fig 3.12: A photo of the function and delay generators .................................... 31 Fig 3.13: A photo of the air dispenser ..................................................................... 32 Fig 4.1: Top panel: Original image using red/green shadows .......................... 34 Fig 4.2: Photos of RGB image and plot of the vertically average intensities of the colors .............................................................................................................. 35 Fig 4.3: Plots of the actual intensities with the particles (top image) and the average intensities of RGB image (bottom plot) ............................................. 36 Fig 4.4: Photos of particles before & after color separation of RGB image: (a) Enhanced RGB image, (b) Red channel, (c) Green channel, (d) Blue channel .................................................................................................. 37 Figure 4.5: The details of an image subsection containing the shadows of one particle ......................................................... 38 Figure 4.6: The separation of the two particle images, by using the difference between the Red and the Green channels .......................................................... 40 Figure 4.7: The probability distribution of the intensity values of the pixels average over the entire area of interest, where the particles in the vortex ring are most visible .......................................................... 42 Figure 4.8: The average color fields for the four different cameras .................. 43 Fig 4.9: Photo of a particle image (green component) after Unifying or subtracting background variation ......................................................... 45 Fig 4.10: Photo showing Inverted green channel of particle image .................. 45 Figure 4.11: The particle images using the shadow in the green color-channel, from all four cameras................................................................. 47 Figure 4.12: Close-up particle images from the previous figure ..................... 48 Figure 4.13: The Green-Red fields for all 4 cameras, which correspond to the second time-flash ................................................................. 49 Figure 4.14: Close-up sections of the images in Figure 4.13......................... 50 Figure 4.15: Direct comparison of the particle images in the same area from the Green (left panel) and Green-Red (right panel) at the two different times........................................................................................................... 
51 Figure 4.16: Direct PIV calculated from the Tomo-images ................. 52 Figure 4.17: The intensity distribution of the particles in the reconstructed volume, showing clearly where the particles are concentrated ............ 54 Fig 4.18: Plots showing the intensity profile in the z-direction through the reconstructed volume.................................................................................. 55 Figure 4.19: One reconstruction plane out of a total of 1035 adjacent planes..... 56 Figure 4.20: Velocity vectors in a plane near the centerline of the vortex ring..... 57 Figure 4.21: Closeup of the right-side cut through the vortex ring in fig. 4.20..... 58 Figure 4.22: The color indicates the magnitude of the horizontal component of the velocity vector.................................................................................... 59 Figure 4.23: The color indicates the magnitude of the vertical component of the velocity vector.................................................................................... 60 Figure 4.24: The out-of-plane velocity in a plane near the edge of the vortex ....... 61 Figure 4.25: Subsection from a typical Red-Blue LED image ..................... 63 Figure 4.26: Another example of the color separation using RED-BLUE LEDs ........................................................................................................... 64 Figure 4.27: The pdf of the RED and BLUE pixel intensities, as well as the intensities of the difference RED-BLUE (black curve) ......................... 65 Figure 4.28: Subsection of the previous figure, showing the individual color channels .......................................................... 66 Figure 4.29: Image using Red/Blue LEDs. Shows an image subsection with a vortex ring with numerous large bubbles around the core of the ring........ 67 Fig 4.30: Photos and plots for Red and Blue LEDs experiment ......................... 68 CHAPTER 1 Introduction Tomographic Particle Image Velocimetry (Tomo-PIV) is one of several systems used to measure velocity fields in fluid mechanics. Like all PIV techniques, it tracks particle motion over time. However, it is the only technique which can successfully measure the full 3D velocity field in a volume, and it is increasingly being used for the analysis of complex or turbulent flows. It has sufficient accuracy to allow calculation of all the velocity gradients and thereby to obtain the 3-D vorticity field, which is fundamental to the study of turbulence dynamics, through the study of coherent structures. For nearly a decade since the seminal work of Elsinga et al. in 2006 [8], it has been a powerful tool and has enabled great advancements in studies in the field of fluid mechanics [1]. Among these studies, a good number of papers have been published suggesting enhancements and optimizations of the different aspects of this technique, to improve its effectiveness and reduce the computational cost. One major aspect, which is the core of the present work, is related to the cost reduction of the tomographic PIV setup, using consumer cameras. The remainder of this chapter provides an introduction to the general PIV technique and discusses the standard setup of tomographic PIV, along with its main applications and limitations. It also highlights the commonly used illumination systems. The subsequent chapters will detail our modifications of the Tomo-PIV technique.
1.1 PIV Basic Principle: PIV is a non-intrusive technique able to provide quantitative measurements of instantaneous velocity fields over a relatively large area, with measurements documented at a large number of points simultaneously. It first appeared in the literature in the mid-eighties for studying turbulence, and it has continued to be among the most practical methods, playing a major role in flow visualization [2]. The basic concept of PIV is to obtain the velocity from the short-term displacement of small solid particles embedded inside the flow-field (Figure 1.1). Fig 1.1: A schematic drawing showing the displacement of a tracer particle at two consecutive times. In other words, the velocity vector “\( \vec{U} \)” is calculated using the basic definition of a derivative, considering the tracer displacement “\( \vec{\Delta X} \)” between two successive observations. When the first observation is obtained at time “\( t \)” and the second one at “\( t + \Delta t \)”, then: \[ \vec{U} = s \frac{\Delta \vec{X}}{\Delta t} \] (1.1) where “s” represents the magnitude of the displacement, or length of the displacement vector. For high seeding densities, as used in PIV, the displacement of individual particles cannot be identified, but the most likely shift of a collection of particles is obtained using cross-correlations. 1.2 Seeding particles: For PIV to give accurate information about the underlying velocity field, the tracers must be small enough to follow the flow in the presence of large local and randomly fluctuating accelerations. How well the particles follow the flow is characterized by the so-called Stokes number, \[ St = \frac{\Delta \rho \ast \Delta u \ast d}{\mu_{liq}} \] (1.2) where “\( \Delta \rho \)” is the difference between the density of the liquid and that of the particles, “\( \Delta u \)” is the velocity difference between the particle and the surrounding liquid, “\( \mu_{liq} \)” is the liquid dynamic viscosity and “d” is the particle diameter. Conceptually, the Stokes number compares the inertia difference between the particle and the surrounding fluid with the viscous force the fluid applies to reduce this difference in the two velocities. For the particles to faithfully follow the flow, the value of the Stokes number needs to be small. The most straightforward way of accomplishing this is to match the density of the particles as closely as possible to that of the liquid [8]. Using small particles also helps in this respect. The particles must also be smaller than the finest velocity structures of the flow-field, so that each is subjected to an approximately constant velocity over its surface. However, the seeding particles cannot be too small, as they need to be visible to the camera sensor. Small particles reflect less light than larger particles. Furthermore, the images of the particles on the sensor must be larger than the pixel size, to capture them accurately. For standard PIV, a light sheet is formed by a pulsed light source to illuminate the particles. The duration of the light pulse must be short enough that the particles are almost still during each pulse. However, in our setup we use shadows, which have quite different optical properties than light scattering, but are also constrained by the same intensity requirements. The depth of field of the imaging also enters the optical design, as will be discussed in a later section.
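To make the cross-correlation step of section 1.1 concrete, the following is a minimal MATLAB sketch (purely illustrative; the actual processing in this work is performed by the Davis software from LaVision) that estimates the mean particle shift between two interrogation windows and converts it to a velocity. The function name, the scale factor m_per_px and the time separation dt are assumptions introduced for the example.

```matlab
% Minimal sketch: estimate the mean particle shift between two interrogation
% windows A and B (same size, double precision) using FFT-based cross-correlation,
% then convert the shift to a velocity. Illustrative only.
function [u, v] = window_velocity(A, B, dt, m_per_px)
    A = A - mean(A(:));                                   % remove mean intensity
    B = B - mean(B(:));
    C = fftshift(real(ifft2(conj(fft2(A)) .* fft2(B))));  % circular cross-correlation
    [~, idx] = max(C(:));                                 % location of correlation peak
    [iy, ix] = ind2sub(size(C), idx);
    cy = floor(size(C,1)/2) + 1;                          % zero-shift position after fftshift
    cx = floor(size(C,2)/2) + 1;
    u = (ix - cx) * m_per_px / dt;                        % horizontal velocity (m/s)
    v = (iy - cy) * m_per_px / dt;                        % vertical velocity (m/s)
end
```

The same idea, applied with fftn to small 3-D sub-volumes of the reconstructed particle fields, underlies the tomographic cross-correlation described in the next section.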
1.3 Tomographic PIV standard layout: A standard tomographic PIV system consists of a pulsed laser with volume optics, which create an illuminated volume slice in the flow, which is seeded with tracers. Fig 1.2: Illustration of the Tomographic PIV experimental setup and working principle, with 4 cameras and a laser volume-illumination [3]. The particle images are recorded with four digital cameras, which view the particle field from different directions. The cameras and illumination are synchronized using a computer to control the system, store the data and perform the required analysis. In tomographic PIV, the 3D velocity fields are measured using a particle-based interrogation process, as illustrated in Figure 1.2. The process starts with having several camera views of the tracer particles illuminated by the laser light. These different viewing directions are captured simultaneously, and the region of study corresponds to the overlap of all of the fields of view of the cameras. This is followed by reconstructing the three-dimensional particle field. In essence, this is done by triangulating the images from the different cameras and finding the locations in 3-D where they overlap. In practice, this is done by an iterative reconstruction method called the Multiplicative Algebraic Reconstruction Technique (MART). Finally, when two particle-field volumes, taken at different times $t_1$ and $t_2$, have been reconstructed, we can use three-dimensional cross-correlation, on small sub-volumes within the larger volume, to obtain the local 3-D velocity vector representing the particle motions. This provides velocity values over a regular 3-D grid spanning the entire measurement volume [3]. The illumination volume can be formed from different lighting sources, as will be discussed in section 1.5. An important factor that affects the results is the exposure time, i.e. the duration of the illumination pulse. For the best quality, one needs to ensure that the exposure time is short enough that the particle motion is “frozen”, without “streaks”. However, it should not be too short, in order to guarantee good illumination of the particles and the sufficient intensity needed at the camera sensor. Using Q-switched lasers this is usually not a serious constraint, as these pulses are typically around 5 ns long. However, for LEDs this needs to be optimized, as will be discussed later. For recording, high-speed charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras are now commonly used, since they have the capability to capture multiple frames at very high speed. The distance between the cameras and the orientation of their viewing planes are important parameters for the imaging process and for the sensitivity to out-of-plane motions. The magnification relates the pixel size on the sensors to physical dimensions in the measurement volume. The four cameras and the laser are connected through a synchronizer, which is controlled by a computer and dictates the timing of the camera sequence in conjunction with the firing of the laser [9].
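The MART reconstruction itself is carried out in this work by the Davis software. Purely to illustrate the multiplicative update at the heart of such algorithms, the toy MATLAB sketch below applies the MART correction to a small, abstract system; the weighting matrix W (pixel-to-voxel weights), the relaxation factor mu and the function name are illustrative assumptions and not the actual Davis implementation.

```matlab
% Toy sketch of the multiplicative MART update used in tomographic
% reconstruction. W is an (nPixels x nVoxels) weighting matrix, I the
% recorded pixel intensities, E the voxel intensities being reconstructed,
% mu a relaxation factor. All inputs are illustrative.
function E = mart_sketch(W, I, nIter, mu)
    nVox = size(W, 2);
    E = ones(nVox, 1);                                   % uniform initial guess
    for it = 1:nIter
        for i = 1:size(W, 1)                             % loop over camera pixels
            proj = W(i, :) * E;                          % projection of current guess
            if proj > 0
                corr = (I(i) / proj) .^ (mu * W(i, :)'); % multiplicative correction
                E = E .* corr;                           % update voxels seen by pixel i
            end
        end
    end
end
```

Voxels that a given pixel does not see have zero weight, so their correction factor is 1 and they are left unchanged, which is the defining property of the multiplicative update.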
1.4 Tomographic PIV Applications & Limitations: Since its early development, tomographic PIV has been widely used in many applications, such as the following areas [4]: - Time-resolved cylinder wake [11] - Flow around a cylinder [14] - Boundary layers [12] - Round jet [4, 15, 16, 17] - Shock waves [13] Moreover, this technique has proved to be a promising candidate in the ongoing research and development in different industries, including aeronautics and automotive, and also in the medical and biological fields [3]. As with any other experimental system, however, tomographic PIV has its limitations. These were listed in the review by Scarano [1] and are summarized below: - High lighting power is required for illumination of a volume. - The size of the recorded images needs to be much larger than for regular planar or stereo PIV. - The hardware used is of high cost and presents some safety concerns. - Large computational effort is required to analyze the recorded images as compared with conventional planar PIV. This is particularly true of the 3-D cross-correlation. - Complicated and sensitive 3-D calibration procedure. 1.5 Commonly used illumination sources: The most common source of PIV illumination is the laser (Light Amplification by Stimulated Emission of Radiation, LASER). This is because of its ability to emit monochromatic light with high energy density, which can easily be formed into thin light sheets. Light pulses can be obtained with pulsed lasers, or with continuous wave (CW) lasers combined with a chopping system for producing light pulses and/or simply by shuttered recording of the video camera. An optical system must be added to shape the illumination needed for the specific purpose. Lenses and mirrors are used to form, from the laser beam, a light sheet or light volume in the desired position within the test section. Typically, two types of lenses are used: a cylindrical lens to expand the beam into a plane and a spherical lens to compress the plane into a thin sheet. Mirrors can be used to deflect the beam to the desired position, or to scan it through the test volume [4]. Scanning beams have higher intensity when they hit the individual particles and can in that way improve the signal-to-noise ratio. The requirements on the optics for Tomo-PIV volume illumination are in a sense not as demanding as for regular PIV, as the volume does not have to be as well defined as the very thin sheets used in planar PIV. The Tomo-PIV processing extracts the 3-D location of the particles within the volume, whereas in regular PIV this location is determined by the laser sheet itself. An alternative and effective method to obtain the needed illumination is by using high-power light-emitting diodes (LEDs). They can be used with a specialized PIV technique called particle shadow velocimetry (PSV), which has been validated through many PIV applications [5]. It works by forming the LED light into a collimated beam which is directed toward the camera through the seeded flow. The shadows of the seeding particles then appear as dark regions on a bright background, i.e. a negative image. This illumination method offers a significant cost reduction when compared with lasers. The output energy can reach up to 1 mJ per pulse when operated at high frequency. The handling and installation are simple, with little maintenance required.
With the recent development of the blue LED, which earned Isamu Akasaki, Hiroshi Amano and Shuji Nakamura the Nobel Prize in Physics in 2014 [18], visible wavelengths ranging from 460 nm in the blue up to 645 nm in the red are accessible with inexpensive illumination. LED illumination also works with any type of camera, from high-speed video cameras to normal low-speed digital consumer cameras. CHAPTER 2 Objectives The main objective of this work was to test a new technique to perform tomographic PIV in a more economical way than the expensive specialized setups used in research laboratories today. A different setup was proposed that offers a great cost reduction by exploiting the rapid technological advances currently occurring in lighting sources and image recording devices. Consumer electronics technology is advancing by leaps and bounds, essentially following Moore's Law of chip development, doubling in capability every 18 months [19]. Consumer cameras are in this way increasing the number of pixels of a sensor every year, with the most recent cameras having up to 50 Mpx on a single sensor. Video cameras with 4k sensors are similarly becoming commonplace, with frame-rates up to 60 fps, and the newest 8k video cameras have been announced recently. The idea is therefore to use consumer cameras for Tomo-PIV and ride this rapid advance to realize inexpensive experimental techniques for general use in research and industry. However, using single-frame cameras introduces the complication of separating the particle locations at the two different times. We propose to solve this by encoding the two images on the same frame, using the color information, green for one time and red for the other. Two inexpensive color LEDs are used to illuminate the measurement volume, rather than the laser which is usually used in standard setups. Likewise, normal low-speed digital consumer cameras replaced the very expensive high-speed video cameras, or the specialized dual-frame cameras often used in PIV. Figure 2.1: Subsection of a Nikon photo demonstrating the basic idea of using two colors to embed time-information into a single image. Here a vortex ring is generated by rupturing a membrane, which spans the opening of a cylinder. This cylinder, which holds a suspension of particles, extends just out of the water surface, to give hydrostatic pressure to force the vortex ring from the bottom. The syringe needle is visible at the center, as two dark shadows, with the first shadow in the green color and the second in the red. Here the amount of particles is too high to perform Tomo-PIV, but they show clearly the overall motions. Note the fully dark shadows in the overlap regions, where both the green and red lights have been blocked by a different set of particles. Figure 2.1 shows a typical image from the Nikon camera, recorded after the two pulses. The red pulse comes first, then the green, but the image seems to indicate the opposite. The explanation is given in the sketch in Figure 2.2. Figure 2.2: Sketch showing how the shadows appear to reverse the order of the light illumination. In other words, the original location of the particle is marked by the green flash, which is the second flash. Similarly, the second location of the particle is red, from the first flash, whereas the surrounding area is yellow, a combination of the two colors. Figure 2.2 shows the technique in a nutshell.
It shows the two shadows from one particle, which has shifted during the time between the background pulses. The first pulse is red and the particle is located at 1, as marked in the figure. Then the particle moves to 2 and the green pulse is flashed. The final sketch shows the intensities left on the camera sensor after the two pulses. Where both red and green have flashed on the same pixel, the resulting color is yellow. CHAPTER 3 Experimental Setup The experimental setup for the novel tomographic PIV technique, using normal low-speed digital cameras and multi-color LEDs, is illustrated in Figure 3.1 and consisted of the following components: 1) LED illumination sources 2) Four DSLR cameras 3) Specially designed water tank 4) Cylindrical chamber for seeding the system 5) Function and delay generators 6) The Davis Tomo-PIV software from LaVision. 3.1 LED Illumination Sources: In this work we use colored-shadow imaging. The images of the particles are formed as shadows, produced by illumination of a background diffuser screen. This screen is a thin sheet of drafting paper and is illuminated by ultra-bright LEDs. To produce a sufficiently bright background, we ended up using 4 separate LEDs for each color, i.e. a separate LED to backlight a diffuser screen facing each of the four cameras. This applies to both the green and the red color, for a total of 8 colored LED chips (Figure 3.2). The two colors are generated by consecutive electrical current pulses, so that each color indicates a specific illumination time. **Fig 3.1:** A schematic drawing for the LED system layout and octagonal water tank model. The light produced by each LED unit is focused through an aspheric condenser lens (Figure 3.3) onto the diffuser film to make it evenly distributed. The emitted light has wavelengths of 623 nm and 525 nm in the red and green, respectively. The drive current for the red and green LEDs was set to 30 A, giving an output luminous flux of 1400 lm for the red and 3100 lm for the green. Fig 3.2: Photos of the Red & Green LED Chips by Luminus [6]. The minimum pulse exposure time needed was 10 µsec, and the time interval separating the two pulses ranged between 10-20 msec. However, for stronger particle contrast, we often needed to use a longer exposure time of about 20 µs. Exposures longer than that would lead to significant smearing of the fastest particles. Fig 3.3: A photo of the aspheric condenser lens used with the LED system. Using a function generator, the LED and camera systems were operated at 1 Hz through a delay generator that was used to synchronize both systems and control the timing. One reason for using such a slow frequency between subsequent pulse-sequences is to minimize vibrations of the cameras from the opening of the mirror, which must move out of the way before exposing the sensor. This issue can be avoided with mirror-less cameras, which are becoming more common (see, for example, the new Sony A7R), or by locking the mirror in the up position. Figure 3.4 shows a diagram for the timing of the LED and camera systems. Fig 3.4: Timing diagram for the LED and camera systems used in the Tomographic PIV setup. The timing sequence is the following: i. The camera shutters are opened up and stay open for 1 sec. ii. The green LEDs are turned on for about 100 µsec. iii. After waiting for a time duration of about $\text{dt}=10 \text{ ms}$ the red LEDs are turned on for about 100 µsec.
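The timing itself is set on the function and delay generators described in section 3.5; the short MATLAB sketch below merely reproduces the pulse sequence listed above as a simple plot (1 s shutter, 100 µs LED pulses, dt = 10 ms), similar in spirit to the timing diagram of Figure 3.4. The 10 ms offset of the first pulse within the exposure is an arbitrary illustrative choice.

```matlab
% Sketch: visualize the pulse sequence listed above (times in seconds).
t  = linspace(0, 0.05, 50001);                 % plot only the first 50 ms of the 1 s exposure
shutter = double(t >= 0 & t <= 1.0);           % camera shutters open for 1 s
t1 = 0.010;                                    % first (green) pulse, arbitrary offset
green = double(t >= t1 & t < t1 + 100e-6);     % 100 us green pulse
red   = double(t >= t1 + 0.010 & t < t1 + 0.010 + 100e-6);   % red pulse, dt = 10 ms later
plot(t, shutter + 4, t, green + 2, t, red);    % stack the three traces for clarity
ylim([-0.5 5.5]); xlabel('time (s)');
legend('camera shutter (+4)', 'green LED (+2)', 'red LED');
```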
The red and green LED chips were mounted next to each other on a common heat sink for cooling purposes, as illustrated in Figure 3.5. The LED sources, including the LED evaluation driver cards and the heat sink, are manufactured by Luminus (PhlatLight®) [6]. LEDs can often be driven, for short durations, at higher currents than for continuous running, which would otherwise burn them up. In this way, one can get a much larger illumination intensity for the pulses. This is quite crucial for Tomo-PIV, as the imaging of a volume demands small apertures on the camera lenses, to get a sufficiently large depth of field. A smaller aperture, in turn, demands a larger illumination strength than would be required for planar measurements. Fig 3.5: Photo of the LED setup with lenses. 3.2 Cameras: The Tomo-PIV images were recorded from a typical setup of 4 viewing directions using four DSLR cameras. The cameras were arranged symmetrically with viewing angle "θ", which is the angle between the outer cameras' viewing direction and the z-axis, as illustrated in Figure 3.7. Since the best results can be obtained at \( \theta = 30^\circ \), the cameras were placed accordingly [8]. The cameras used were Nikon D3X models, of single-lens reflex type, with a 24.5-megapixel sensor and capable of taking images at up to 5 frames per second (fps). In our PIV application, the cameras were set to record images at a frame rate of 1 Hz and an exposure of 1 second, which gives essentially single-shot imaging of the flow-field, unless the flow evolves very slowly. The aperture was set to an f-number of 11 (the f-number is inversely proportional to the aperture diameter). 50 mm Nikkor lenses were fitted on all cameras to have the same magnification. To minimize vibrations of the system, the cameras were mounted on Manfrotto heavy-duty tripod heads, which were in turn mounted on X95 optical rails, as shown in **Figure 3.6**. The triggering of the start of the camera exposure was initiated by the same delay generator that is used to control the LED pulses. Between the cameras and the delay generator, the signal is converted and the cable is split into four cables terminated with 10-pin connectors. **A schematic of the camera connections is shown in figure 3.7**. For each experiment, two particle images are obtained within a single camera exposure by two successive pulses of the different-color LEDs. The recorded images are saved in two formats: raw 14-bit format (NEF) and JPEG. The images are then uploaded to a specialized computer to be further processed and analyzed, as explained in Chapter 4. 3.2.1 Camera calibration To be able to triangulate the location of minute particles in the 3-D volume, it is crucial to have an accurate calibration between every pixel on a sensor and the corresponding line it views through the illuminated volume. The error of this calibration therefore has to be less than the particle size, or ideally less than the size of a pixel on the sensor. This requires an elaborate 3-D calibration procedure, which must be carried out in situ, under exactly the same conditions as the experiment is conducted under. In other words, the test section must be full of water, both to have the same refractive index field as well as the same shape of the outer wall, which can bend slightly due to the hydrostatic pressure from the water. Fig 3.7: Schematic drawing & photo for the cameras connection. The calibration of the four cameras was done simultaneously by using a LaVision 3D calibration plate Type # 11 as the calibration target (see photo in Figure 3.8 (a)).
The plate size is 100 x 100 mm. The plate is made of black anodized aluminum, with numerous precision dots, each of diameter 2.2 mm, with an in-between spacing of 10 mm. The plate surface has regular grooves, so that the surface essentially presents two parallel planes of dots. The plate is traversed in a direction perpendicular to its surface, to calibrate through the volume. A Pollux motorized stepper stage, driven by a Micos controller (shown in figure 3.8 (b)) and controlled by a personal computer, translated the plate in the z-direction over a total of 50 mm, with 5 mm between each step. Eleven separate views of the calibration plate were in this way recorded, so that they cover the whole measurement volume. The recorded images are saved in similar formats to the PIV images for further processing. The Davis program performs this calibration, relating each pixel to the line it cuts through the measurement volume. It automatically finds the center of each of the white dots in each image. This is done separately for each of the cameras. In practice it is found that even this careful calibration procedure is not sufficient, and a follow-on correction procedure is required for the best results, as will be described in the following section. Fig 3.8: Photos of the calibration setup: (a) calibration plate by Lavision, (b) calibration stepper motor by Pollux. 3.2.2 Self-calibration This is done by using the actual particle images during an experimental run, rather than the calibration plate. In essence, we search the particle images for especially bright particles, which are clear in all four cameras. If we know the identity of a particular particle, we can check whether the pixel lines from the different cameras intersect at a point in the reconstructed volume, as they should for a perfect calibration. The deviations from a perfect intersection can then be corrected. This will only work if the distortions are consistent between adjacent particle images, but not if these distortions are different for each particle in the field, which would indicate random noise. It may be surprising that this would work better than the highly controlled initial calibration using the translating grid, but in practice it has been noticed that things can change slightly between the calibration and the experimental runs. To list only one possibility for this change, it could be due to changes in the temperature of the working fluid when the experiment is running, which in turn will alter the refractive index, or expand the plexiglass walls, thereby shifting the lines which are extrapolated from each pixel on the cameras into the volume of the experiment. 3.3 Water tank: Two water tanks were built for this work. The first one (Tank-A) was of a perfect octagonal shape (Figure 3.9) and the second one (Tank-B) a more irregular one, as shown in Figure 3.10. Both tanks were made from PVC and used as a medium for the PIV measurement. The experiment with Tank-A used an illumination system consisting of 2 LEDs per color, arranged such that the four cameras face two adjacent rectangular sides from the front and the two LED systems face two adjacent sides from the back. The shape of Tank-B was designed to minimize optical distortions caused by the plexiglass walls. In the first design the viewing through the walls was at a slight angle, whereas in this new tank all the cameras look exactly perpendicular to the walls. This tank required 4 LEDs per color to get sufficient illumination intensity.
This tank has a customized shape such that the four camera views are identical, with the cameras looking through four adjacent rectangular sides and the four LED systems facing two adjacent rectangular sides. Fig 3.9: Drawing and photo of the Tank-A model used with the 4-LED system. Fig 3.10: Drawing and photo of the Tank-B model used with the 8-LED system. 3.4 Flow-field and Seeding system: Seeding particles: The particles selected for this application were glass microspheres (soda lime glass) with two different size ranges. For the Tank-A model, 100-212 μm clear particles were used, while 90-100 μm silver-coated particles were used for the Tank-B model. The particles are supplied by Cospheric and have a density of 1.36 g/cc. Seeding mechanism: The seeding is done using two release mechanisms. For the Tank-A model, a random mass of the seeding particles is dropped manually into the experiment medium through an 8-inch U.S.A.-standard test sieve made by Fisherbrand, with a mesh size of 355 µm (see Figure 3.11 (a)). For the Tank-B model, the seeding was done by ejecting water premixed with seeding particles into the experiment medium, as shown in Figure 3.11 (b). For this, a plastic bottle connected to an air tube was used. The bottle is topped with a plastic tube containing the pre-mixed suspension of water and seeding particles. The mix is sealed from the air by a thin membrane. The seeding is then obtained by applying a sudden pressure, generated by the air dispenser, to the bottom of the membrane. 3.5 Signal & Air Generators: The signals sent to the LEDs and cameras were generated using a 3.1 MHz synthesized function generator (Model DS 335), while the timing control and synchronization were handled by the digital delay generator (Model DG645). Fig 3.12: A photo of the function and delay generators. Both the function and delay generators were manufactured by Stanford Research Systems (SRS). Photos of both generators are displayed in Figure 3.12. The air pulses were supplied by an air dispenser connected to an air source. The air dispenser is equipped with a "SHOT" switch for checking the shot volume and time. Using a regulator, the air pressure was set to 204 kPa to release the right amount of particles. CHAPTER 4 Data Processing and Results Prior to starting the Tomo-PIV data reduction, several steps were necessary to transfer the raw RGB images from the Nikon cameras into a format that can be processed with the Davis 8.0 software from LaVision [7]. The Nikon NEF images, obtained during both the calibration and experimental runs, were first converted to TIFF image files (.tif), which are without compression. The uploading of these images from the camera memory to the LaVision software was done manually at this stage and is therefore somewhat tedious. However, in future applications, this could be done automatically through a MATLAB program. The image resolution remains as the original 5056 × 4032 pixels with 14-bit color depth. The subsequent steps explain the additional processing to split the colors and perform the 3-D reconstruction of each single color channel separately. The two reconstructions are subsequently combined to allow for the direct 3-D cross-correlation to find the 3-D velocity vectors. RGB Image Color Splitting: The process of color splitting, to determine the positions of the red and green particle shadows, represented a challenge.
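As a minimal sketch of the trivial first step (reading one of the converted 16-bit TIFF frames and extracting its raw R, G and B channels, before the harder separation discussed next), the following MATLAB snippet could be used; the file name is an illustrative assumption.

```matlab
% Sketch: load one converted 16-bit TIFF frame and separate the raw color
% channels. 'frame_cam1.tif' is an illustrative file name, not the actual data.
rgb = double(imread('frame_cam1.tif'));   % 5056 x 4032 x 3 array, read as uint16
R = rgb(:, :, 1);                         % red channel
G = rgb(:, :, 2);                         % green channel
B = rgb(:, :, 3);                         % blue channel (unused with red/green LEDs)
GR = G - R;                               % difference field used later to help separate the two pulses
imagesc(GR); axis image; colorbar;        % quick look at the (Green - Red) field
```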
As detailed below, analysis performed on the particle images revealed that a clear separation of the red and green shadows is hard to obtain, especially where they overlap with high intensities. Figure 4.1: Top panel: Original image using red/green shadows. This is a 2958x1968 pixel subarea of the full 24 Mpx image, where the particles in the vortex ring are visible. Bottom panel: Subsection of 500 x 300 pixels from the center of the above image, which has been brightened and its contrast enhanced. It highlights the very faint green particle images, as compared to the red images. This makes the separation of the two time-images difficult, as explained in the text. Figure 4.1 shows a typical raw RGB image, while figure 4.2 shows one of the better enhanced images, which contains the two colors from the LED pulses. It is clear from the image that the green pulse is stronger on the left side and the red on the right side. The figure also shows a plot of the vertically averaged intensities for the three color components, obtained with a MATLAB code. **Fig 4.2:** Photo of an RGB image and plot of the vertically averaged intensities of the colors. Fig 4.3: Plots of the actual intensities with the particles (top image) and the average intensities of the RGB image (bottom plot), where the spikes from the particles have been removed by averaging in the vertical direction. Fig 4.4: Photos of particles before & after color separation of the RGB image: (a) Enhanced RGB image, (b) Red channel, (c) Green channel, (d) Blue channel. Here the green flash is stronger on the left side of the image, while the red is stronger on the right side. Such strong spatial variation in the background color makes it difficult to write general programs to separate the two color shadows. Similarly, other plots of average intensities were obtained and are presented in Figure 4.3, where the green color is stronger over the entire image. Figure 4.4 shows the three different color channels (R, G and B) which constitute the combined color image shown in the left panel. Even though in the color image it seems easy to tell which shadow is which, when looking at the red channel we can see shadows from both the red and the green flashes (see also Fig. 4.5 below). The green channel is clearer. From this image it is clear that one must use some tricks to automatically separate the two images, as will be explained below. Figure 4.5: The details of an image subsection containing the shadows of one particle. When the particle is at location 1 the Red LED flash is illuminated and when it is at 2 the Green LED illuminates. The mid panel shows the three color channels along a horizontal cut through the center of the above image. The bottom curve shows the difference between the two LED colors, i.e. (Green – Red). The intensities are here in 16-bit format. A MATLAB program was also written to split the color image into its red and green channels. This produced two images, which represent the two positions of the particles, separated by the time difference between the two LED pulses. An example of an image after color splitting is shown in figure 4.4. However, even though the two colors appear well separated in fig. 4.4(a), it can be noted here that an overlap exists in the red component, making it difficult to distinguish the particle positions belonging to this channel. This is shown even better in figure 4.5, where we plot the pixel intensities across these particle images. The bottom curves in fig.
4.5 show the intensities of the three color channels. The first pulse is red and the particle is located at 1, as marked in the figure. Then the particle moves to 2 and the green pulse is flashed. The second green pulse fills in the original shadow from the red light. However, Fig. 4.5 shows that in practice this is not the idealized picture, as the green light has a big red component and therefore leaves a hole in the red channel where the particle is during the green flash. This very strong cross-talk between the two colors makes the subsequent separation of the particle images from the two different times difficult. We have attempted this in the following way. The second green pulse is easier to separate, as it leaves a clear lack of signal in the green channel. The first red pulse can be obtained by looking at the difference in intensities between the red and green colors. In Fig. 4.5(c) we calculate this difference (Green – Red) from Figure 4.5(b). This difference signal is positive and much sharper at the location of the first pulse than at the second pulse. However, keep in mind that the specific particle image shown in this figure is selected for clarity and many others are less clearly defined. Using this difference between red and green can work well over limited areas of the images, as is shown in figure 4.6. Figure 4.6: The separation of the two particle images, by using the difference between the Red and the Green channels. This image is from a region with very sparse particle density. The dark particle images are much larger than the bright ones. However, keep in mind that this depends somewhat on the shifting of the zero intensity value, relative to the background. However, the method shown in Figures 4.5 and 4.6 will not work over the entire domain, due to the strongly varying background intensity of the different LED lights. The backgrounds are not even because of the large size of the LED chips, which do not fit at the center of focus of the lenses shown in figure 3.5. The two LED chips facing each of the four cameras use the same hemispheric lens to collimate the light and form a large spot on the diffuser, as was shown in Fig. 3.5. The background variability of the two colors is shown along an averaged line in Figures 4.2 and 4.3. The background intensities are functions of both x and y, and their variation over the entire image area is larger than the particle signal. We therefore must first estimate and then subtract the local background intensities of the different color channels. To find this intensity, it is not enough to simply locally average the image, as the particles are ever-present and skew this estimate. We therefore find the average in a two-stage process. First, we calculate a moving average over 61x61 pixels. Following this we do a smaller average over a 25x25 pixel area, but condition the average by the pixel intensity. In other words, if the value of the intensity is too far away from the first average, we discard this number, as it is most likely associated with a particle and not the background. In our preliminary MATLAB program, this conditioned averaging takes a lot of computational power, and we perform it only at around every 5th pixel, filling the gaps with the same value. This is reasonable, as the average background intensity changes slowly and not significantly over tens of pixels.
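A minimal MATLAB sketch of this two-stage, conditioned background estimation for one color channel is given below (window sizes 61 and 25 and the 5-pixel stride as quoted above; the rejection threshold tol and the function name are illustrative assumptions, and the actual program may differ in detail).

```matlab
% Sketch of the two-stage background estimation described above, for one
% color channel I (a double-precision image). Illustrative only.
function bg = background_sketch(I, tol)
    k1 = ones(61) / 61^2;
    m1 = conv2(I, k1, 'same');                  % stage 1: plain 61x61 moving average
    bg = m1;                                    % fall back to stage-1 estimate near borders
    for r = 13:5:size(I,1)-12                   % stage 2: only around every 5th pixel
        for c = 13:5:size(I,2)-12
            blk  = I(r-12:r+12, c-12:c+12);     % 25x25 neighbourhood
            keep = abs(blk - m1(r,c)) < tol;    % reject values far from the stage-1 mean
            if any(keep(:))                     % conditioned local average
                bg(r-2:r+2, c-2:c+2) = mean(blk(keep));   % fill the 5x5 gap with this value
            end
        end
    end
end
```

The estimated background field is then subtracted from the channel before the particle shadows are located.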
This background subtraction can of course be optimized using matrix calculations and packaged software, but for our proof of concept we are not very concerned about the required computation time; the processing is performed on a laptop computer. If the particles are sparse, it might be faster to sort the intensity values over all the pixels in the local pixel area and then simply select the median value. This two-stage process provides the background intensity fields for each color channel, as shown in Figure 4.8 for all 4 cameras. Here we have only shown the areas of the images which contain the vortex ring and where the three-dimensional reconstruction should take place. Figure 4.7: The probability distribution of the pixel intensity values over the entire area of interest, where the particles in the vortex ring are most visible. For this case the area covers a total of 6.847 megapixels. The green curve indicates the Green channel and the red curve the (Green – Red) channel. The local mean values of the intensities have been subtracted. The black curve corresponds to the red curve flipped about the zero value, to assess the symmetry of the distribution. Ideally, the effective area of the flow should occupy the entire image, which should be possible with an appropriate optical setup (not reached at this stage in this experimental setup), which must take into account mainly three factors. First, the particle size (i.e. how many pixels span across each particle) should be of the order of 10 px, so the Bayer filter on the camera sensor will not distort the particle location in the different colors. Secondly, the depth of focus of the lens should be sufficient to retain good focus over the whole relevant flow depth. This is primarily determined by the aperture of the lens, i.e. the smaller the aperture the larger the depth of focus. The trade-off is however the amount of light which reaches the camera sensor. Figure 4.8: The average color fields for the four different cameras. The average eliminates the individual particle images, trying to approximate the background intensities. Where the particles are particularly dense, there can be areas of slightly darker patches. This trade-off usually leads to an aperture of either f/16 or f/22. Thirdly, the number of particles visible through the depth of the imaging region should not give an image density of more than 0.05 ppp (particles per pixel). **Characterizing the pixel intensity fields for different colors** One method we used to estimate the success of the color separation was to look directly at the image intensities of the different colors. Figure 4.7 shows the resulting probability density functions (PDFs) of the intensities of the green field and the (Green – Red) field. These pdfs have very different shapes. While both are peaked around the background value, where the intensity is zero, the distribution of the green intensities is very skewed, with few positive values. The skewness is -3.7 and the flatness an astounding 24. On the other hand, the (Green – Red) intensities appear slightly more symmetric, with almost as many positive and negative values, with a skewness of -3 and a flatness of 28. The dark line is the distribution flipped around the zero value, which highlights the differences between the two tails of the pdf, showing how far they are from symmetric. The skewness of the (Green – Red) field is also towards negative values. This is of concern, as we are trying to find the positive values.
This opposite sign of the skewness is probably due to the large amount of cross-talk from the green flash. The tail of the actual intensity distribution is also much wider for the green color. The negative intensity values in the tails of the distribution (away from the local background intensity) are exactly what indicates the presence of a particle, which is what we are trying to determine. This further highlights that it should be easier to extract the particle locations indicated by the green shadows than by the red shadows. The pdfs for the images from the other three cameras show similar characteristics. Figure 4.8 shows how uneven the average red and green background fields are for all four cameras. **Image Pre-Processing:** In the pre-processing step, image enhancements are applied using filters, background subtraction or mask overlays. Functions that were used included inversion, to compute the negative of the particle shadow images; smoothing, to remove noise; and masking, to mask out the areas with low illumination. Examples of these enhancements are shown in figures 4.9 & 4.10. Fig 4.9: Photo of a particle image (green component) after unifying or subtracting the background variation. Fig 4.10: Photo showing the inverted green channel of a particle image. After merging the resulting data sets, the timing information was added and camera numbers were assigned to all frames. With these steps done, the images were ready for 3D volume reconstruction. The results for each of these steps are presented below. Figures 4.11-4.14 show the resulting separation of the two fields, Green and (Green-Red). Figures 4.11 and 4.12 show the separated particles for the Green field over the entire vortex ring, with Figure 4.12 focusing in on the left side of the ring, as viewed in each camera. The distribution of particles looks similar in all four views, indicating that the vortex ring is fairly axisymmetric. Figures 4.13 and 4.14 show similar images for the Green-Red shadows. Clearly, this color separation is not perfect, and the particle images are much more pronounced in the Green channel. The particles in the Green-Red field are also more "blotchy", with outer noisy sections, which will affect the accuracy of the velocity fields. Figure 4.15 makes a direct comparison between the particle images from the two times, demonstrating that the Green channel has much sharper particle images. Figure 4.16 shows a direct correlation performed on the images before they are used to reconstruct the 3-D particle locations. This is not expected to give reliable velocity vectors. Figure 4.11: The particle images using the shadow in the green color-channel, from all four cameras. The average intensity has been subtracted first and the intensity magnitudes of the particle images have been adjusted to make the brightest images close to the maximum 8-bit intensity of about 250. Finally, the shadows have been inverted to make the background dark and the particles bright. The close-up marked by the red outline is shown in the following figure. The width of each panel is about 2500 px. Figure 4.12: Close-up particle images from the previous figure. The width of each panel is about 700 px. Figure 4.13: The Green-Red fields for all 4 cameras, which correspond to the second time-flash. The following figure shows more close-up images. Figure 4.14: Close-up sections of the images in Figure 4.13.
Figure 4.15: Direct comparison of the particle images in the same area from the Green (left panel) and Green-Red (right panel) at the two different times. Figure 4.16: Direct PIV calculated from the Tomo-images. Only the velocities at the outer edge of the vortex ring are close enough to two-dimensional to give good velocity results. The magnitudes of the velocity vectors, indicated by the length of the arrows, look reasonable, with a strong decay away from the vortex core. The vectors near the center of the vortex are obviously erroneous. This is easily understood as due to the out-of-plane motion of particles arising from the overall axisymmetric structure. This further highlights why tomo-PIV is needed for this type of study. In section 4.2 we will show that using Red and Blue flashes works much better to separate the two time-flashes. However, this is demonstrated using only one of the Nikon cameras, and the availability and intensity of the LEDs in the lab did not allow us to pursue this color combination in this thesis; it will remain for future work. We therefore use the Red/Green combination in this thesis, knowing that future improvements are possible with the Red/Blue lighting. **Volume self-calibration:** As explained in section 3.2.2, it is necessary to refine the original calibration by performing the so-called self-calibration, which corrects the calibration coefficients. By averaging the observed inaccuracies, 3D disparity maps are generated and the calibration function is corrected accordingly. In our proof-of-concept experiments we skipped this step. However, since we show below that we are able to get reasonable velocity fields without self-calibration, one can expect an improved result if self-calibration is included. Indeed, being able to produce any reasonable velocity field without the self-calibration indicates that our initial calibration is of quite high quality. Figure 4.17: The intensity distribution of the particles in the reconstructed volume, showing clearly where the particles are concentrated. In our measurements we confined the particles to seeding the vortex ring fluid inside the piston. **Volume Reconstruction:** At this stage (with the self-calibration step skipped here), the images are ready for the volume reconstruction, which calculates the volumetric particle distribution in the 3D voxel space. The multiplicative reconstruction operation “Fast MART” was used to reconstruct the 3D particle distribution, with 5 iterations. Figures 4.17 and 4.18 show the 3D distribution of the particle intensities and the z-profile of the reconstructed volume. Figure 4.18 shows that the average brightness of the reconstructed particles is quite uniform across the reconstructed volume. Fig 4.18: Plots showing the intensity profile in the z-direction through the reconstructed volume. This can be expected, as we are looking at shadows which are uniformly dark across the volume and which have subsequently been inverted. This compares favorably with conventional laser-illuminated volume slices, where the intensity of the particles will depend greatly on where they are within the laser sheet. Figure 4.19 shows a typical particle reconstruction, where each plane corresponds to a thickness of one voxel. The reconstruction we performed is 1035 planes deep in the z-direction. Each particle is observed in about 10 adjacent planes. Figure 4.19: One reconstruction plane out of a total of 1035 adjacent planes.
Keep in mind that the width into the board of this plane is only 1 voxel, whereas the total horizontal width of the upper image is about 6000 voxels. 3D Cross-Correlation: At this step the 3D velocity vector field is calculated from the reconstructed volumes using the processing operation “Direct Correlation”. An initial pass with a 256 x 256 interrogation region and a 1:1 elliptical weighting is used, followed by two 128 x 128 passes with an interrogation-region overlap of 75%. This results in 27 adjacent velocity planes, with about 90 x 80 velocity vectors in each plane, giving a total of about 90 x 80 x 27 ≈ 194,000 fully 3-D velocity vectors. **Figure 4.20**: Velocity vectors in a plane near the centerline of the vortex ring. This is one of 27 reconstructed Tomo-PIV planes. The absolute value of the vorticity out of the plane is plotted by the color field. The right side of the vortex has a clear rotation and higher vorticity near the center of the vortex. The vorticity is broken up around the core on the left side. Figure 4.20 shows a typical plane of velocities. The region on the right side outside the vortex is so devoid of particles, due to low illumination intensity, that no reliable vectors were found there. It is instructive to compare the right side of the vortex in Figure 4.20 to that from the projected image in Figure 4.16, where no information can be extracted in that region. Figure 4.22: Velocity information in the center-plane cut through the vortex. The color indicates the magnitude of the horizontal component of the velocity vector, showing the outflow on top of the vortex, whereas on the bottom the flow is towards the centerline. Figure 4.23: The color indicates the magnitude of the vertical component of the velocity vector. It shows the up-flow near the center and on top of the vortex, whereas on the outer edges of the vortex the flow is directed downwards. Figures 4.20 - 4.24 show some more results from this velocity volume. Figure 4.24: The out-of-plane velocity in a plane near the edge of the vortex. The red color shows the top of the vortex coming towards us, with the bottom going into the board. This is consistent with predominantly out-of-plane motions near the edge of the vortex. The three-dimensional nature of the velocity field is clearly demonstrated by the out-of-plane component shown in Figure 4.24. Here a plane cuts through the edge of the vortex, so the top flow is coming towards us and the bottom is going into the board. 4.2 Using Red and Blue LEDs: Near the end of the work on this thesis, the difficulty of separating the Red and Green shadows became very apparent, as there was a lot of cross-talk between the colors. To try to fix this, we therefore attempted some experiments with Red and Blue flashes, which are better separated in wavelength space and will therefore have less cross-talk between these two channels. This was only possible using one camera, due to time constraints and LED availability. This worked considerably better and should be pursued in future work. Figure 4.25 shows a typical example. When the green LEDs were replaced by blue ones, the splitting was enhanced and the identification of particle positions proved to be more feasible. Figure 4.26 illustrates these enhancements, with the main advantages as follows: - Less overlap in spectral space - Much clearer separation of the color layers Figure 4.25: Subsection from a typical Red-Blue LED image. The width of this panel is 1544 px.
Figure 4.26: Another example of the color separation using RED-BLUE LEDs. The large “particles” are bubbles which are attracted to the cores of the vortex ring. The Green channel shown in the middle is almost entirely dark. Figure 4.27: The pdfs of the RED and BLUE pixel intensities, as well as the intensities of the difference RED-BLUE (black curve). The arrows point at two prominent peaks which indicate the pixels belonging to most of the particles, which makes them significantly easier to separate than in the RED-GREEN images used earlier in this thesis. Figure 4.27 shows the pdfs of the Red and Blue intensities and of their difference, calculated from the image in Figure 4.25. There are clear peaks on both sides of the zero value, representing the particles. There is an additional peak at positive difference, which is due to uneven background intensities, which have not been subtracted in this case. Figure 4.28: Subsection of the previous figure, showing the individual color channels. The width of each panel is 784 px. Figures 4.26 and 4.28 show another example of the Red/Blue LED illumination. In this case the vortex ring has trapped a number of medium-size bubbles along its core. These bubbles are each close to spherical, as they are of the order of a millimeter, which is smaller than the capillary length in water, which is 2.7 mm. The capillary length characterizes the size at which buoyancy and surface tension are balanced. The colors of the bubbles are quite distinct, showing their motion between the flashes. Note the fully dark region where the two images of a bubble overlap. Figure 4.29 shows another realization, where the bubbles are larger and aligned along the vortex core. Figure 4.29: Image using Red/Blue LEDs, showing an image subsection of a vortex ring with numerous large bubbles around the core of the ring. The bubbles are attracted to the core by the low Bernoulli pressure there. These bubbles are moving up with the translation of the vortex ring, while the outermost bubbles, at the left of the image, are seen to move downwards due to the vortical motions. This is of course not what such an image should be used for, as it represents a projection of many particles at different depths into the board, but for this case we only had an image from this one direction. Figure 4.30 shows an intensity cut through two particle shadows, demonstrating a clear separation of the two pulses. Compared to the corresponding cut in Figure 4.5 for the red/green LEDs, this shows a much better result, and this color combination should be used in further experiments with this Tomo-PIV method. Fig 4.30: Photos and plots for the Red and Blue LED experiment. CHAPTER 5 Discussion and Conclusions The work presented in this thesis provides an evaluation of a low-cost colored shadow imaging method for conducting tomographic PIV measurements. This new method is based on using two different-color LEDs for the illumination and four commercial DSLR CMOS cameras for the imaging. The two different pulse-times were encoded in the images using two different colors (red and green) from the LED pulses. Two different experimental setups were used, to try to reduce optical distortions. As anticipated, the highest quality images were obtained with 8 LEDs (Tank-B model) illuminating the volume, i.e. using 4 pairs of LEDs, one of each color per pair, with the light from each pair directed toward a diffuser screen opposite one of the cameras.
Different particle-seeding mechanisms were implemented, while a large pulse-driven vortex ring formed a flow pattern that allowed for the successful tomographic PIV measurement presented herein. The processing was carried out by transferring the images manually to the DaVis commercial software from LaVision (Germany). The color images were converted and split into red and green images using localized background subtraction and the differences between the Red and Green channels. Following this we could successfully reconstruct two separate particle fields, corresponding to the two different illumination times, thereby allowing cross-correlations to obtain the three-dimensional velocity field.

In conclusion, we were able to perform a proof-of-concept realization using red/green LED illumination. However, this demonstrated excessive cross-talk between the red and green channels on the sensor, due to the overlapping sensitivity of the red pixels to green wavelengths. This result compelled us to try red/blue LED illumination, where the different color pixels are fully separated, which gave much better results for a single camera. This suggests that using red and blue LED shadows could give higher-quality Tomo-PIV results. This shows great promise, but will have to wait for future work.

REFERENCES
Generic Registry-Registrar Protocol Requirements Status of this Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Copyright Notice Copyright (C) The Internet Society (2002). All Rights Reserved. Abstract This document describes high-level functional and interface requirements for a client-server protocol for the registration and management of Internet domain names in shared registries. Specific technical requirements detailed for protocol design are not presented here. Instead, this document focuses on the basic functions and interfaces required of a protocol to support multiple registry and registrar operational models. Conventions Used In This Document The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. Table of Contents
1. Introduction
   1.1 Definitions, Acronyms, and Abbreviations
2. General Description
   2.1 System Perspective
   2.2 System Functions
   2.3 User Characteristics
   2.4 Assumptions
3. Functional Requirements
   3.1 Session Management
   3.2 Identification and Authentication
   3.3 Transaction Identification
   3.4 Object Management
   3.5 Domain Status Indicators
1. Introduction The advent of shared domain name registration systems illustrates the utility of a common, generic protocol for registry-registrar interaction. A standard generic protocol will allow registrars to communicate with multiple registries through a common interface, reducing operational complexity. This document describes high level functional and interface requirements for a generic provisioning protocol suitable for registry-registrar operations. Detailed technical requirements are not addressed in this document. 1.1 Definitions, Acronyms, and Abbreviations ccTLD: Country Code Top Level Domain. "us" is an example of a ccTLD. DNS: Domain Name System gTLD: Generic Top Level Domain. "com" is an example of a gTLD. IANA: Internet Assigned Numbers Authority IETF: Internet Engineering Task Force IP Address: An IPv4 address, an IPv6 address, or both. IPv4: Internet Protocol version 4 IPv6: Internet Protocol version 6 RRP: Registry-Registrar Protocol TLD: Top Level Domain. A generic term used to describe both gTLDs and ccTLDs that exist under the top-level root of the domain name hierarchy. Exclusive Registration System: A domain name registration system in which registry services are limited to a single registrar. Exclusive Registration Systems are either loosely coupled (in which case the separation between registry and registrar systems is readily evident), or tightly coupled (in which case the separation between registry and registrar systems is obscure). Name Space: The range of values that can be assigned within a particular node of the domain name hierarchy. Object: A generic term used to describe entities that are created, updated, deleted, and otherwise managed by a generic registry-registrar protocol.
Registrant: An entity that registers domain names in a registry through the services provided by a registrar. Registrants include individuals, organizations, and corporations. Registrar: An entity that provides front-end domain name registration services to registrants, providing a public interface to registry services. Registry: An entity that provides back-end domain name registration services to registrars, managing a central repository of information associated with domain name delegations. A registry is typically responsible for publication and distribution of zone files used by the Domain Name System. Shared Registration System: A domain name registration system in which registry services are shared among multiple independent registrars. Shared Registration Systems require a loose coupling between registrars and a registry. Thick Registry: A registry in which all of the information associated with registered entities, including both technical information (information needed to produce zone files) and social information (information needed to implement operational, business, or legal practices), is stored within the registry repository. Thin Registry: A registry in which all elements of the social information associated with registered entities are distributed between a shared registry and the registrars served by the registry. Zone: The complete set of information for a particular "pruned" subtree of the domain space. The zone concept is described fully in [RFC1035]. 2. General Description A basic understanding of domain name registration systems provides focus for the enumeration of functional and interface requirements of a protocol to serve those systems. This section provides a high-level description of domain name registration systems to provide context for the requirements identified later in this document. 2.1 System Perspective A domain name registration system consists of a protocol and associated software and hardware that permits registrars to provide Internet domain name registration services within the name spaces administered by a registry. A registration system can be shared among multiple competing registrars, or it can be served by a single registrar that is either tightly or loosely coupled with back-end registry services. The system providing registration services for the .com, .net, and .org gTLDs is an example of a shared registration system serving multiple competing registrars. The systems providing registration services for some ccTLDs and the .gov and .mil gTLDs are examples of registration systems served by a single registrar. 2.2 System Functions Registrars access a registry through a protocol to register objects and perform object management functions. Required functions include session management; object creation, update, renewal, and deletion; object query; and object transfer. A registry generates DNS zone files for the name spaces it serves. Zone files are created and distributed to a series of name servers that provide the foundation for the domain name system. 2.3 User Characteristics Protocol users fall into two broad categories: entities that use protocol client implementations and entities that use protocol server implementations, though an entity can provide both client and server services if it provides intermediate services. A protocol provides a loose coupling between these communicating entities. 2.4 Assumptions There is one and only one registry that is authoritative for a given name space and zone.
A registry can be authoritative for more than one name space and zone. Some registry operations can be billable. The impact of a billable operation can be mitigated through the specification of non-billable operations that allow a registrar to make informed decisions before executing billable operations. A registry can choose to implement a subset of the features provided by a generic registry-registrar protocol. A thin registry, for example, might not provide services to register social information. Specification of minimal implementation compliance requirements is thus an exercise left for a formal protocol definition document that addresses the functional requirements specified here. A protocol that meets the requirements described here can be called something other than "Generic Registry Registrar Protocol". The requirements described in this document are not intended to limit the set of objects that can be managed by a generic registry-registrar protocol. 3. Functional Requirements This section describes functional requirements for a registry-registrar protocol. Technical requirements that describe how these requirements are to be met are out of scope for this document. 3.1 Session Management [1] The protocol MUST provide services to explicitly establish a client session with a registry server. [2] In a connection-oriented environment, a server MUST respond to connection attempts with information that identifies the server and the default server protocol version. [3] The protocol MUST provide services that allow a client to request use of a specific protocol version as part of negotiating a session. [4] The protocol MUST provide services that allow a server to decline use of a specific protocol version as part of negotiating a session. [5] A session MUST NOT be established if the client and server are unable to reach agreement on the protocol version to be used for the requested session. [6] The protocol MUST provide services to explicitly end an established session. [7] The protocol MUST provide services that provide transactional atomicity, consistency, isolation, and durability in the event of session management failures. [8] The protocol MUST provide services to confirm that a transaction has been completed if a session is aborted prematurely. 3.2 Identification and Authentication [1] The protocol or another layered protocol MUST provide services to identify registrar clients and registry servers before granting access to other protocol services. [2] The protocol or another layered protocol MUST provide services to authenticate registrar clients and registry servers before granting access to other protocol services. [3] The protocol or another layered protocol MUST provide services to negotiate an authentication mechanism acceptable to both client and server. 3.3 Transaction Identification [1] Registry operations that create, modify, or delete objects MUST be associated with a registry-unique identifier. The protocol MUST allow each transaction to be identified in a permanent and globally unique manner to facilitate temporal ordering and state management services. 3.4 Object Management This section describes requirements for object management, including identification, registration, association, update, transfer, renewal, deletion, and query. 3.4.1 Object Identification Some objects, such as name servers and contacts, have utility in multiple registries.
However, maintaining disjoint copies of object information in multiple registries can lead to inconsistencies that have adverse consequences for the Internet. For example, changing a name server name in one registry, but not in a second registry that refers to the server for domain name delegation, can produce unexpected DNS query results. [1] The protocol MUST provide services to associate an object identifier with every object. [3] An object’s identifier MUST NOT change during the lifetime of the object in a particular repository, even if administrative control of the object changes over time. [4] An object identifier MUST contain information that unambiguously identifies the object. [5] Object identifier format specified by the protocol SHOULD be easily parsed and understood by humans. [6] An object’s identifier MUST be generated and stored when an object is created. 3.4.2 Object Registration [1] The protocol MUST provide services to register Internet domain names. [2] The protocol MUST permit a starting and ending time for a domain name registration to be negotiated, thereby allowing a registry to implement policies allowing a range of registration validity periods (the start and end points in time during which one normally assumes that an object will be active), and enabling registrars to select a period for each registration they submit from within the valid range based on out-of-band negotiation between the registrar and the registrant. Registries SHOULD be allowed to accept indefinitely valid registrations if the policy that they are implementing permits, and to specify a default validity period if one is not selected by a registrar. Registries MUST be allowed to specify minimal validity periods consistent with prevailing or preferred practices for fee-for-service recovery. The protocol MUST provide features to ensure that both registry and registrar have a mutual understanding of the validity period at the conclusion of a successful registration event. [3] The protocol MUST provide services to register name servers. Name server registration MUST NOT be limited to a specific period of time. Name servers MUST be registered with a valid IPv4 or IPv6 address when a "glue record" is required for delegation. A name server MAY be registered with multiple IP addresses. Multiple name servers using distinct server names MAY share an IP address. [4] The protocol MUST provide services to manage delegation of zone authority. Names of name servers MUST NOT be required to be tied to the name of the zone(s) for which the server is authoritative. [5] The protocol MUST provide services to register social information describing human and organizational entities. Registration of social information MUST NOT be limited to a specific period of time. Social information MAY include a name (individual name, organization name, or both), address (including street address, city, state or province (if applicable), postal code, and country), voice telephone number, email address, and facsimile telephone number. [6] Protocol services to register an object MUST be available to all authorized registrars. 3.4.3 Object Association [1] The protocol MUST provide services to associate name servers with domain names to delegate authority for zones. A domain name MAY have multiple authoritative name servers. Name servers MAY be authoritative for multiple zones. [2] The protocol MUST provide services to associate IP addresses with name servers. A name server MAY have multiple IP addresses. 
An IP address MAY be associated with multiple name server registrations. [3] The protocol MUST provide services to associate social information with other objects. Social information associations MUST be identified by type. "Registrant" is an example social information type that might be associated with an object such as a domain name. [4] The protocol MUST provide services to associate object management capabilities on a per-registrar basis. [5] Some managed objects represent shared resources that might be referenced by multiple registrars. The protocol MUST provide services that allow a registrar to associate an existing shared resource object with other registered objects sponsored by a second registrar. For example, authority for the example.tld zone (example.tld domain object managed by registrar X) and authority for the test.tld zone (test.tld domain object managed by registrar Y) might be delegated to server ns1.example.tld (managed by registrar X). Registrar X maintains administrative control over domain object example.tld and server object ns1.example.tld, and registrar Y maintains administrative control over domain object test.tld. Registrar Y does not have administrative control over server object ns1.example.tld. 3.4.4 Object Update [1] The protocol MUST provide services to update information associated with registered Internet domain names. [2] The protocol MUST provide services to update information associated with registered name servers. [3] The protocol MUST provide services to update social information associated with registered human and organizational entities. [4] The protocol MUST provide services to limit requests to update a registered object to the registrar that currently sponsors the registered object. [5] The protocol MUST provide services to explicitly reject unauthorized attempts to update a registered object. 3.4.5 Object Transfer [1] The protocol MUST provide services to transfer domain names among authorized registrars. Name servers registered in a domain being transferred MUST be transferred along with the domain itself. For example, name servers "ns1.example.tld" and "ns2.example.tld" MUST be implicitly transferred when domain "example.tld" is transferred. [2] The protocol MUST provide services to describe all objects, including associated objects, that are transferred as a result of an object transfer. [3] The protocol MUST provide services to transfer social information objects among authorized registrars. [4] Protocol transfer requests MUST be initiated by the registrar who wishes to become the new administrator of an object. [5] The protocol MUST provide services to confirm registrar authorization to transfer an object. [6] The protocol MUST provide services that allow the requesting registrar to cancel a requested object transfer before the request has been approved or rejected by the original sponsoring registrar. Requests to cancel the transfer of registered objects MUST be limited to the registrar that requested transfer of the registered object. Unauthorized attempts to cancel the transfer of a registered object MUST be explicitly rejected. [7] The protocol MUST provide services that allow the original sponsoring registrar to approve or reject a requested object transfer. Requests to approve or reject the transfer of registered objects MUST be limited to the registrar that currently sponsors the registered object. Unauthorized attempts to approve or reject the transfer of a registered object MUST be explicitly rejected. 
[8] The protocol MUST provide services that allow both the original sponsoring registrar and the potential new registrar to monitor the status of both pending and completed transfer requests. [9] Transfer of an object MAY extend the object’s registration period. If an object’s registration period will be extended as the result of a transfer, the new expiration date and time MUST be returned after successful completion of a transfer request. Requests to initiate the transfer of a registered object MUST be available to all authorized registrars. Registrars might become non-functional and unable to respond to transfer requests. It might be necessary for one registrar to assume management responsibility for the objects associated with another registrar in the event of registrar failure. The protocol MUST NOT restrict the ability to transfer objects in the event of registrar failure. 3.4.6 Object Renewal/Extension The protocol MUST provide services to renew or extend the validity period of registered domain names. If applicable, the new expiration date and time MUST be returned after successful completion of a request to renew or extend the validity period. Requests to renew or extend the validity period of a registered object MUST be limited to the registrar that currently sponsors the registered object. Unauthorized attempts to renew or extend the validity period of a registered object MUST be explicitly rejected. 3.4.7 Object Deletion The protocol MUST provide services to remove a domain name from the registry. The protocol MUST provide services to remove a name server from the registry. The protocol MUST provide services to remove a social information object from the registry. Requests to remove a registered object MUST be limited to the registrar that currently sponsors the registered object. Unauthorized attempts to remove a registered object MUST be explicitly rejected. 3.4.8 Object Existence Query This section describes requirements for a lightweight query mechanism whose sole purpose is to determine if an object exists in a registry. The protocol MUST provide services to determine if a domain name exists in the registry. Domain names MUST be searchable by fully qualified name. [2] The protocol MUST provide services to determine if a name server exists in the registry. Name servers MUST be searchable by fully qualified name. [3] The protocol MUST provide services to determine if a social information object exists in the registry. Social information MUST be searchable by a registry-unique identifier. [4] A query to determine if an object exists in the registry MUST return only a positive or negative response so that server software that responds to this query can be optimized for speed. [5] Requests to determine the existence of a registered object MUST be available to all authorized registrars. 3.4.9 Object Information Query This section describes requirements for a query mechanism whose purpose is to provide detailed information describing objects that exist in a registry. [1] The protocol MUST provide services to retrieve information describing a domain name from the registry. 
Returned information MUST include the identifier of the current sponsoring registrar, the identifier of the registrar that originally registered the domain, the creation date and time, the expiration date and time (if any), the date and time of the last successful update (if any), the identifier of the registrar that performed the last update, the date and time of last completed transfer (if any), the current status of the domain, authorization information, identifiers describing social information associated with the domain, and the subordinate name servers registered in the domain. Authorization information MUST only be returned to the current sponsoring registrar. [2] The protocol MUST provide services to retrieve information describing a name server from the registry. Returned information MUST include the identifier of the current sponsoring registrar, the identifier of the registrar that originally registered the name server, the creation date and time, the date and time of the last successful update (if any), the identifier of the registrar that performed the last update, the date and time of last completed transfer (if any), and the IP addresses currently associated with the name server. [3] The protocol MUST provide services to retrieve social information from the registry. Returned information MUST include identification attributes (which MAY include name, address, telephone numbers, and email address), the identifier of the registrar that originally registered the information, the creation date and time, the date and time of the last successful update (if any), the identifier of the registrar that performed the last update, the date and time of last completed transfer (if any), and authorization information. Authorization information MUST only be returned to the current sponsoring registrar. [4] The protocol MUST provide services to identify all associated object references, such as name servers associated with domains (including delegations and hierarchical relationships) and contacts associated with domains. This information MUST be visible if the object associations have an impact on the success or failure of protocol operations. [5] Requests to retrieve information describing a registered object MAY be granted by the registrar that currently sponsors the registered object. Unauthorized attempts to retrieve information describing a registered object MUST be explicitly rejected. 3.5 Domain Status Indicators [1] The protocol MUST provide status indicators that identify the operational state of a domain name. Indicators MAY be provided to identify a newly created state (the domain has been registered but has not yet appeared in a zone), a normal active state (the domain can be modified and is published in a zone), an inactive state (the domain can be modified but is not published in a zone because it has no authoritative name servers), a hold state (the domain can not be modified and is not published in a zone), a lock state (the domain can not be modified and is published in a zone), a pending transfer state, and a pending removal state. [2] If provided, protocol indicators for hold and lock status MUST allow independent setting by both registry and registrar. [3] A domain MAY have multiple statuses at any given time. Some statuses MAY be mutually exclusive. 3.6 Transaction Completion Status [1] The protocol MUST provide services that unambiguously note the success or failure of every transaction. Individual success and error conditions MUST be noted distinctly.
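The status model in section 3.5 can be illustrated with a small, non-normative sketch. The status names and exclusion rules below are invented for illustration only; this document deliberately leaves the concrete indicators and their combination rules to a protocol definition document. The separate registry- and registrar-set hold and lock values reflect the independent setting called for in 3.5 [2], and the check reflects 3.5 [3].

```python
from enum import Enum

class DomainStatus(Enum):
    """Hypothetical status indicators loosely modeled on section 3.5 [1]."""
    NEW = "new"                        # registered but not yet published in a zone
    ACTIVE = "active"                  # modifiable and published in a zone
    INACTIVE = "inactive"              # modifiable, not published (no name servers)
    REGISTRY_HOLD = "registry-hold"    # not modifiable, not published (set by registry)
    REGISTRAR_HOLD = "registrar-hold"  # not modifiable, not published (set by registrar)
    REGISTRY_LOCK = "registry-lock"    # not modifiable, published (set by registry)
    REGISTRAR_LOCK = "registrar-lock"  # not modifiable, published (set by registrar)
    PENDING_TRANSFER = "pending-transfer"
    PENDING_REMOVAL = "pending-removal"

# Example mutual-exclusion rules (3.5 [3]): a domain cannot be both published and
# unpublished, and the same party cannot hold and lock a domain at the same time.
MUTUALLY_EXCLUSIVE = [
    {DomainStatus.ACTIVE, DomainStatus.INACTIVE},
    {DomainStatus.REGISTRY_HOLD, DomainStatus.REGISTRY_LOCK},
    {DomainStatus.REGISTRAR_HOLD, DomainStatus.REGISTRAR_LOCK},
]

def valid_status_set(statuses):
    """A domain MAY carry multiple statuses; reject combinations declared exclusive."""
    return all(len(statuses & pair) <= 1 for pair in MUTUALLY_EXCLUSIVE)

print(valid_status_set({DomainStatus.ACTIVE, DomainStatus.REGISTRAR_LOCK}))  # True
print(valid_status_set({DomainStatus.ACTIVE, DomainStatus.INACTIVE}))        # False
```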
4. External Interface Requirements External interfaces define the interaction points between a system and entities that communicate with the system. Specific areas of interest include user interfaces, hardware interfaces, software interfaces, and communications interfaces. 4.1 User, Hardware, and Software Interfaces [1] The protocol MUST define a wire format for data exchange, not an application design for user, hardware, or software interfaces so that any application able to create the same bits on the wire, and to maintain the image of the same integrity constraints, is a valid implementation of the protocol. 4.2 Communications Interfaces [1] Registries, registrars, and registrants interact using a wide spectrum of communications interfaces built upon multiple protocols, including transport layer protocols such as TCP and application layer protocols such as SMTP. The protocol MUST only be run over IETF approved protocols that feature congestion control, such as TCP and SCTP. 5. Performance Requirements [1] Run-time performance is an absolutely critical aspect of protocol usability. While performance is very heavily dependent on the hardware and software architecture that implements a protocol, protocol features can have a direct impact on the ability of the underlying architecture to provide optimal performance. The protocol MUST be usable in both high volume and low volume operating environments. 6. Design Constraints Protocol designers need to be aware of issues beyond functional and interface requirements when balancing protocol design decisions. This section describes additional factors that might have an impact on protocol design, including standards compliance and hardware limitations. 6.1 Standards Compliance [1] The protocol MUST conform to current IETF standards. Standards for domain and host name syntax, IP address syntax, security, and transport are particularly relevant. Emerging standards for the Domain Name System MUST be considered as they approach maturity. [2] The protocol MUST NOT reinvent services offered by lower layer protocol standards. For example, the use of a transport that provides reliability is to be chosen over use of a non-reliable transport with the protocol itself using retransmission to achieve reliability. 6.2 Hardware Limitations [1] The protocol MUST NOT define any features that preclude hardware independence. 7. Service Attributes Elements of service beyond functional and interface requirements are essential factors to consider as part of a protocol design effort. This section describes several important service elements to be addressed by protocol designers, including reliability, availability, scalability, maintainability, extensibility, and security. 7.1 Reliability [1] Reliability is a measure of the extent to which a protocol provides a consistent, dependable level of service. Reliability is an important attribute for a domain name management protocol. An unreliable protocol increases the risk of data exchange errors, which at one extreme can have a direct impact on protocol usability and at the other extreme can introduce discontinuity between registry and registrar data stores. The protocol MUST include features that maximize reliability at the application protocol layer. Services provided by underlying transport, session, and presentation protocols SHOULD also be considered when addressing application protocol reliability. [2] The protocol MUST be run over the most reliable transport option available in a given environment.
The protocol MUST NOT implement a service that is otherwise available in an applicable standard transport. [3] Default protocol actions for when a request or event times out MUST be well defined. 7.2 Availability [1] Availability is a measure of the extent to which the services provided by a protocol are accessible for an intended use. Availability of an application layer protocol is primarily dependent on the software and hardware systems that implement the protocol. The protocol MUST NOT include any features that impinge on the underlying availability of the software and hardware systems needed to service the protocol. 7.3 Scalability [1] Scalability is a measure of the extent to which a protocol can accommodate use growth while preserving acceptable operational characteristics. The protocol MUST be capable of operating at an acceptable level as the load on registry and registrar systems increases. 7.4 Maintainability [1] Maintainability is a measure of the extent to which a protocol can be adapted or modified to address unforeseen operational needs or defects. The protocol SHOULD be developed under the nominal working group processes of the IETF to provide a well-known mechanism for ongoing maintenance. 7.5 Extensibility [1] Extensibility is a measure of the extent to which a protocol can be adapted for future uses that were not readily evident when the protocol was originally designed. The protocol SHOULD provide features that at a minimum allow for the management of new object types without requiring revisions to the protocol itself. [2] The requirements described in this document are not intended to limit the set of objects that might be managed by the protocol. The protocol MUST include features that allow extension to object types that are not described in this document. [3] The protocol MUST provide an optional field within all commands whose format and use will be controlled by individual registry policy. 7.6 Security [1] Transactional privacy and integrity services MUST be available at some protocol layer. [2] This document describes requirements for basic user identification and authentication services. A generic protocol MAY include additional security services to protect against the attacks described here. A generic protocol MUST depend on other-layered protocols to provide security services that are not provided in the generic protocol itself. A generic protocol that relies on security services from other-layered protocols MUST specify the protocol layers needed to provide security services. 8. Other Requirements Certain aspects of anticipated operational environments have to be considered when designing a generic registry-registrar protocol. Areas of concern include database operations, operations, site adaptation, and data collection. 8.1 Database Requirements [1] The protocol MUST NOT have any database dependencies. However, efficient use of database operations and resources has to be considered as part of the protocol design effort. The protocol SHOULD provide atomic features that can be efficiently implemented to minimize database load. 8.2 Operational Requirements [1] Registry-registrar interactions at the protocol level SHOULD operate without human intervention. However, intermediate services that preserve the integrity of the protocol MAY be provided. For example, an intermediate service that determines if a registrant is authorized to register a name in a name space can be provided. 
[2] The protocol MUST provide services that allow clients and servers to maintain a consistent understanding of the current date and time to effectively manage objects with temporal properties. 8.3 Site Adaptation Requirements [1] Registries and registrars have varying business and operational requirements. Several factors, including governance standards, local laws, customs, and business practices all play roles in determining how registries and registrars are operated. The protocol MUST be flexible enough to operate in diverse registry-registrar environments. 8.4 Data Collection Requirements [1] Some of the data exchanged between a registrar and registry might be considered personal, private, or otherwise sensitive. Disclosure of such information might be restricted by laws and/or business practices. The protocol MUST provide services to identify data collection policies. Some of the social information exchanged between a registrar and registry might be required to create, manage, or operate Internet or DNS infrastructure facilities, such as zone files. Such information is subject to public disclosure per relevant IETF standards. 9. Internationalization Requirements [1] [RFC1035] describes Internet host and domain names using characters traditionally found in a subset of the 7-bit US-ASCII character set. More recent standards, such as [RFC2130] and [RFC2277], describe the need to develop protocols for an international Internet. These and other standards MUST be considered during the protocol design process to ensure world-wide usability of a generic registry registrar protocol. [2] The protocol MUST allow exchange of data in formats consistent with current international agreements for the representation of such objects. In particular, this means that addresses MUST include country, that telephone numbers MUST start with the international prefix "+", and that appropriate thought be given to the usability of information in both local and international contexts. This means that some elements (like names and addresses) might need to be represented multiple times, or formatted for different contexts (for instance English/French in Canada, or Latin/ideographic in Japan). [3] All date and time values specified in a generic registry-registrar protocol MUST be expressed in Universal Coordinated Time. Dates and times MUST include information to represent a four-digit calendar year, a calendar month, a calendar day, hours, minutes, seconds, fractional seconds, and the time zone for Universal Coordinated Time. Calendars apart from the Gregorian calendar MUST NOT be used. 10. IANA Considerations This document does not require any action on the part of IANA. Protocol specifications that require IANA action MUST follow the guidelines described in [RFC2434]. 11. Security Considerations Security services, including confidentiality, authentication, access control, integrity, and non-repudiation SHOULD be applied to protect interactions between registries and registrars as appropriate. Confidentiality services protect sensitive exchanged information from inadvertent disclosure. Authentication services confirm the claimed identity of registries and registrars before engaging in online transactions. Access control services control access to data and services based on identity. Integrity services guarantee that exchanged data has not been altered between the registry and the registrar. 
Non-repudiation services provide assurance that the sender of a transaction cannot deny being the source of the transaction, and that the recipient cannot deny being the receiver of the transaction. 12. Acknowledgements This document was originally written as an individual submission Internet-Draft. The provreg working group later adopted it as a working group document and provided many invaluable comments and suggested improvements. The author wishes to acknowledge the efforts of WG chairs Edward Lewis and Jaap Akkerhuis for their process and editorial contributions. Specific comments that helped guide development of this document were provided by Harald Tveit Alvestrand, Christopher Ambler, Karl Auerbach, Jorg Bauer, George Belotsky, Eric Brunner-Williams, Jordyn Buchanan, Randy Bush, Bruce Campbell, Dan Cohen, Andre Cormier, Kent Crispin, Dave Crocker, Ayesha Damaraaju, Lucio De Re, Mats Dufberg, Peter Eisenhauer, Sheer El-Showk, Urs Eppenberger, Patrik Falstrom, Paul George, Patrick Greenwell, Jarle Greipsland, Olivier Guillard, Alf Hansen, Paul Hoffman, Paul Kane, Shane Kerr, Elmar Knipp, Mike Lampson, Matt Larson, Ping Lu, Klaus Malorny, Bill Manning, Michael Mealling, Patrick Mevzek, Peter Mott, Catherine Murphy, Martin Oldfield, Geva Patz, Elisabeth Porteneuve, Ross Wm. Rader, Budi Rahardjo, Annie Renard, Scott Rose, Takeshi Saigoh, Marcos Sanz, Marcel Schneider, J. William Semich, James Seng, Richard Shockey, Brian Spolarich, William Tan, Stig Venaas, Herbert Vitzthum, and Rick Wesson. 13. References Normative References: Informative References: 14. Editor’s Address Scott Hollenbeck VeriSign Global Registry Services 21345 Ridgetop Circle Dulles, VA 20166-6503 USA EMail: shollenbeck@verisign.com 15. Full Copyright Statement Copyright (C) The Internet Society (2002). All Rights Reserved. This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English. The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns. This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Acknowledgement Funding for the RFC Editor function is currently provided by the Internet Society.
The place created by the building complex has a distinct character where specific yet integrated functions occur. The buildings provide a place for the activities to occur. The functions must be accommodated optimally, with high environmental regard. A strong place presupposes that there is a meaningful correspondence between site, settlement and architectural detail. The buildings become a concretisation of the concept of healing. Experiences and meaning come subconsciously while moving around a place. The movement of the body in space provides a measure for things, allowing people to appreciate the splendour and explore that which is hidden; to organise what is there to see, hear, feel, smell and touch in a given environment (Meiss 1990:15). The layout of the buildings affects the orientation and wayfinding of the user. This in turn will either make their experience exciting and helpful, or disorientating and frightening. Wayfinding and spatial orientation are important aspects of an efficient environment. Simplistic environments must be avoided; spatial complexity can be provided without making environments confusing and disorganized.

To provide the correct environment to be able to fulfill the functions of the building complex, certain baseline criteria are needed. These criteria ensure that the building accommodates all users, invites participation, monitors safety and health, reduces short-term and long-term economic costs, considers context and site, selects materials responsibly, and keeps the environment as an important stakeholder in the project through climatic response and environmental concern.

4.1 Sustainability

Sustainability is evocative of optimistic and protective ideas, recalling sustenance and therefore nurturing, or at least good common sense (Steele 1997:ix). Linked as it has been to development, sustainability’s connotations are those of building a solid future and achieving prolonged, lasting, worthwhile progress. What is sustainable architecture? A basic definition is an architecture that meets the needs of the present without compromising the ability of future generations to meet their own needs (Steele 1997:234). More energy is used in running buildings than in their construction and material manufacturing (Day 1990:31). Buildings themselves, their materials, location, services and design have local effects, as well as affecting the health of the people that use these places. Sustainable architecture, variously called ecological, biological, green or Gaia architecture, aligns with this critical response to a perceived global imperative that differs from its predecessors (Steele 1997:234). In either active or passive mode, sustainable architecture tries to make connections: to other buildings, to take maximum advantage of mass; to local typologies that can be identified as climatically and culturally effective over time; to regional microclimates and materials; or to global suppliers if necessary, given the implications that some material choices have for non-renewable resource depletion and for the possibility of technology transfer (Steele 1997:237).

“Humanity stands at a defining moment in history. We are confronted with a perpetuation of disparities between and within nations, a worsening of poverty, hunger, ill health and illiteracy and the continuing deterioration of the ecosystems on which we depend for our well-being.
However, integration of environment and development concerns and greater attention to them will lead to the fulfilment of basic needs, improved living standards for all, better protected and managed ecosystems and a safer, more prosperous future. No nation can achieve this on its own, but together we can in a global partnership for sustainable development” (Steele 1997:9), from Agenda 21.

Agenda 21 addresses the built environment and the construction industry, which it identifies as “a major source of environmental damage through the degradation of fragile ecological zones, damage to natural resources, chemical pollution, and the use of building materials, which are harmful to human health.” Specifically, as a corrective the report recommends:
1. The use of local materials and indigenous building sources.
2. Incentives to promote continuation of traditional techniques, with regional resources and self-help strategies.
3. Regulation of energy-efficient design principles.
4. Standards that would discourage construction in ecologically inappropriate areas.
5. The use of labour-intensive rather than energy-intensive construction techniques.
6. The restructuring of credit institutions to allow the poor to buy building materials and services.
7. International information exchange on all aspects of construction related to the environment, among architects and contractors, particularly about non-renewable resources.
8. Exploration of methods to encourage and facilitate the recycling and reuse of building materials, especially those requiring intensive energy consumption in their manufacture.
9. Financial penalties to discourage the use of materials that damage the environment.

4.2 Social issues

4.2.1 Indoor environment, Occupant Comfort

“The quality of the environment in and around the building has been shown to have a direct impact on health, happiness and productivity of people. Healthier, happier, more effective people contribute to sustainability by being more efficient and therefore reducing resource consumption and waste. The quality of this environment needs to be achieved with minimal cost to the environment” (Gibberd 2004:SBAT).

Shelter is the main instrument for fulfilling the requirements of comfort. It modifies the natural environment to approach optimum conditions of liveability. It should filter, absorb or repel environmental elements according to their beneficial or adverse contributions to man’s comfort. Man strives for the point at which minimum energy expenditure is needed to adjust to the environment (Olgyay 1963:15).

Lighting and daylighting
All facilities must be well lit; daylighting is to be used as much as possible. Daylight must be controllable, so that glare is kept to a minimum. If used properly, daylighting can reduce electrical consumption, reduce cooling requirements and increase occupant comfort. Facilities should be designed so that electrical lighting is kept to a minimum.

Ventilation and indoor air quality
Fresh air is necessary to replenish oxygen and remove stale air. Required ventilation should be provided by natural means, so that mechanical ventilation can be minimised or even excluded from the building. Building orientation and space linkage must enhance natural ventilation. The materials used within the building must not contaminate the indoor air quality. Paints, particle board, adhesives and furnishings can contribute to contaminants found inside new buildings.
The least toxic materials should be chosen, along with the design of systems that circulate and distribute fresh air passively.

Noise
Due to the nature of the facilities, noise levels in many areas of the facility must be kept as low as possible. Functions must be zoned so that noisy and quiet areas are separated, limiting unwanted excessive noise and preventing interference between groups. The limited vehicular circulation on site keeps traffic noise down to a minimum.

Views and visual quality
All work and recreational areas have views outside. These views are important, and have influenced the placement of walls and the shape of rooms, so that the eye is drawn to outdoor elements.

Access to outside
Users of the buildings must have easy access to outside green spaces. These spaces provide places for outdoor activities, as well as mental rejuvenation between tasks.

4.2.2 Inclusive Environments

“An essential criterion for sustainable buildings is that the building is designed to accommodate everyone, or specially designed buildings need to be provided. Ensuring that buildings are inclusive supports sustainability as replication is avoided and change of use supported” (Gibberd 2004:SBAT).

Transport
Due to the nature of the facilities located on the site, a major part of the site is limited to pedestrian movement, with controlled vehicular movement. All transport on site accommodates wheelchair users. Larger parking bays are provided near entrances and pathways for disabled users.

Routes, signage, level change
All routes and circulation spaces have an even surface that is easily navigable by wheelchair. Increased aisle and path widths are needed to accommodate all users. Outdoor surfaces take into consideration the various users, including wheelchair users. Level changes within the building as well as between buildings must be facilitated using ramps with a gradient of 1:12. The surface of the ramp must not be slippery. Handrails and rest platforms must be provided on stairs and ramps. Curbs must be provided on ramps. Visual signs and displays must be clear, simple, and translated into at least three languages. Visual signals must be used to reinforce audible warning signs, such as a flashing red light used with an audible fire alarm. Certain areas of the buildings are restricted to staff. This must be clearly demarcated and signed.

Toilets and bathrooms
The correct dimensions for toilet cubicles must be provided to aid wheelchair users. Doors must open outwards, with sufficient room to maneuver into the cubicle. Showers must be of the correct dimensions to accommodate disabled users. Handrails and a folding seat must be provided. Water controls, in the shower as well as on basins, must be such that they can be operated by all users.

4.2.3 Access to Facilities

“Conventional living and working patterns require regular access to a range of services. Ensuring that these services can be accessed easily and in environmentally friendly ways supports sustainability by increasing efficiency and reducing environmental impact” (Gibberd 2004:SBAT).

Childcare
Childcare facilities are provided for users of the Healing Centre. These facilities are provided off-site in Mamelodi, near the pick-up point for transport to the facility. They are not located at the Healing Centre itself as this may cause distraction to the users.

Residential
Residential areas of the users as well as the staff are located more than 12km from the facility.
For this reason, transport to and from the Healing Centre is available for its users. A similar transportation system for the staff can be arranged, with a central parking area close to their homes. This parking area should be located close to retail and banking facilities where banking, post and groceries can be handled daily if necessary.

4.2.4 Participation & Control

“Ensuring that users participate in decisions about their environment helps ensure that they care for and manage this properly. Control over aspects of their local environment enables personal satisfaction and comfort. Both of these support sustainability by promoting proper management of buildings and increasing productivity” (Gibberd 2004:SBAT).

Environmental control and user adaptation
Users of the building have reasonable control over the building, in terms of opening windows and adjusting blinds and curtains. Furniture and fittings allow arrangement or rearrangement by the user. Personalisation of spaces may take place in the office facilities, and on a limited level in the accommodation facility. Provision should be made for places to put up pictures and notes.

Social space
Design for easy informal as well as formal interaction between people has been provided. This is accommodated in various indoor and outdoor seating areas, meeting and counselling rooms and studios. This aids interaction between the users themselves, as well as with staff.

Community involvement
The community is an important part of this project. The aim of the Healing Centre and the rest of the building complex is to uplift the community by improving the psychological state, and so the quality of life, of its members. Skills training and workshops will benefit the community from the construction phase through to the operation of the buildings. The greater Pretoria community is involved to a certain extent by supporting the Healing facility. Through the Spa and Herbal Centre, income, awareness and support are generated to facilitate the functioning and operation of the Healing facility.

4.2.5 Education, Health and Safety

“Buildings need to cater for the well being, development and safety of the people that use them. Awareness and environments that promote health can help reduce the incidence of diseases such as AIDS. Safe environments help to limit the incidence of accidents and where these occur, reduce their effect. Learning and access to information is increasingly seen as a requirement of a competitive work force. All of these factors contribute to sustainability by helping ensure that people remain healthy and economically active, thus reducing the ‘costs’ (to society, the environment and the economy) of unemployment and ill health” (Gibberd 2004:SBAT).

Lifelong learning / education
The nature of the Healing Centre, Herbal Centre and Spa is conducive to education and learning, especially by their users. The staff of all these facilities should periodically be sent on courses, and have access to materials that will further their knowledge and help them educate users better.

Security, health and safety regulations
Security of the building complex in general will be aided by checkpoints at the entrances. The property must be securely fenced, especially due to the accommodation facilities located at the Healing Centre. At night, security should be increased through the employment of security services. The buildings must comply with health and safety regulations. Policy and checks must be in place to ensure that these are complied with.
First-aid kits must be located in central locations. Staff must be trained in first aid to be able to assist the injured properly. A protocol on dealing with injuries and emergencies must be established and made known to all staff.

4.3 Economic issues

4.3.1 Local Economy

“The construction and management of buildings can have a major impact on the economy of an area. The economy can be stimulated and sustained by buildings that make use of and develop local skills and resources” (Gibberd 2004:SBAT).

Contractors
80% of the construction should be carried out by contractors based within 100km of the building project. Skilled and unskilled labour must be included, with training programmes and educational tasks.

Materials and manufacture
80% of the construction materials (cement, sand and bricks) and building components (windows, doors and furniture) must be produced within 200km of the site.

Outsource opportunities
Opportunities should be created for emerging small businesses. This includes outsourcing catering, cleaning and security services, and making space and equipment available for these businesses to use. All repairs and maintenance required by the building can be carried out by contractors within 100km of the site. Standardised quality fixtures last longer, and when damaged their components are easier to replace.

4.3.2 Efficiency of Use

“Buildings cost money and make use of resources whether they are used or not. Effective and efficient use of buildings supports sustainability by reducing waste and the need for additional buildings” (Gibberd 2004:SBAT).

Usable space
All buildings must be managed so that they are used productively and generally occupied to ensure efficiency. Programmes and events must be monitored to ascertain which spaces are being used effectively, and which could be used better or more frequently. The use of space must be intensified by space management approaches such as shared work spaces and areas. Some spaces can be adapted and used for more than one function. Non-usable space such as WCs, plant rooms and circulation must be kept to a minimum.

4.3.3 Adaptability and Flexibility

“Most buildings can have a life-span of at least 50 years. It is likely that within this time the use of buildings will change, or that the feasibility of this will be investigated. Buildings which can accommodate change easily support sustainability by reducing the requirement for change and the need for new buildings” (Gibberd 2004:SBAT).

Partitions
Internal partitions between spaces are non-load bearing, made from brick, block or plasterboard, and can be removed or changed relatively easily.

Services
There is easy access to electrical and communication services in usable space. Provision should be made for easy modification of these systems.

Vertical Dimensions
Structural dimensions from the underside of the roof or slab to the floor should be a minimum of 3m. This ensures ease of change, good depth for future services, as well as a comfortable environment for occupants in terms of visual, acoustic and thermal quality.

4.3.4 Ongoing Costs

Maintenance
Specifications and material selections for low-maintenance and/or low-cost maintenance should be implemented at the initial design stages. All plant and fabric should have a maintenance cycle of at least two years. Low- or no-maintenance components (windows, doors, paint and ironmongery) should be selected. Maintenance should be carried out effectively and efficiently, with access to hard-to-reach areas provided for cleaning and repairs.
Security
Measures must be taken to limit the requirement for and costs of security. Alarms and other monitoring devices can be installed to minimise the number of security people necessary.

Insurance / water / energy / sewage
Costs of insurance, water, energy and sewage should be monitored. Consumption and costs must be regularly reported to management and users. Policy and management to reduce consumption should be implemented, while passive systems can be used for energy saving, such as photovoltaic cells that control ventilators or supply night-lighting through energy-efficient controls.

Disruption and down time
Electrical, communication, plant and other services should be located where they can easily be accessed with a minimum of disruption to occupants of the building. Access to these should be from circulation areas and not living and working areas.

4.4 Environmental issues

4.4.1 Environmental Architecture

What is here referred to as Environmental Architecture has many other names: Construction ecology, Green Architecture, Selective Design, etc. In general, Environmental Architecture is a reaction to environmental degradation. Protection of the globe through re-evaluation of the way in which buildings are designed and constructed reflects the concerns of the green movement generally. The major impact that building design, construction and maintenance have on national energy consumption began to be widely recognised in the early seventies (Jones 1998:12). The design of any building derives from a considered response to climate, technology, culture and site. Considerations of global sustainability and energy conservation bear directly on these four issues and therefore go right to the heart of architectural design. Under the impact of technological change, there is a growing consensus that architectural objectives and procedures should be realigned to reflect our improved climatic awareness (Hawkes, McDonald, Steemers 2002:17).

Global climate change is an issue of widespread social and political concern, as witnessed by international accords and protocols. The environmental impact of buildings is widely acknowledged, and in the past quarter-century much progress has been made in developing the means to reduce it through technological development and scientific analysis. However, there is a need to locate this within comprehensive architectural paradigms that connect it to the wider historical, cultural and social discourse, without which technology remains of purely instrumental value. The challenge is to reach a point where Environmental Architecture is indistinguishable from good architecture.

Selective design, as opposed to exclusive design, aims to exploit the climatic conditions to maintain comfort, minimising the need for artificial control reliant on the consumption of energy (Hawkes, McDonald, Steemers 2002:123). This manipulation of climate, to selectively filter the positive characteristics of the environment, is achieved through architecture. The form of a building is the most significant consideration with respect to the selective potential of a design. The approach has the following principles:
- To maximise the use of ambient, renewable sources of energy in place of generated energy and fossil fuels.
- To minimise the use of energy-consuming mechanical plant in processes of environmental control.
- To provide the users of buildings with the maximum opportunity to exercise control over their environment and adapt it to their needs.
- To use non-toxic materials that do not harm the health of construction workers or users.
- To reuse, recycle and adapt old structures for future construction.

4.4.2 Water
“Water is required for many activities. However the large-scale provision of conventional water supply has many environmental implications. Water needs to be stored (sometimes taking up large areas of valuable land and disturbing natural drainage patterns with associated problems from erosion etc.), it also needs to be pumped (using energy) through a large network of pipes (that need to be maintained and repaired). Having delivered the water, a parallel effort is then required to dispose of this after its use (sewage systems). Reducing water consumption supports sustainability by reducing the environmental impact required to deliver water, and dispose of this after use in a conventional system” (Gibberd 2004:SBAT).

Water consumption and efficiency of use
All water devices should minimise water consumption and encourage efficiency of use. Recycling and reuse of greywater to flush toilets and water plants is encouraged. On-site treatment of blackwater must be accommodated in the design of such services. A borehole should be included if the site is located far from municipal services, groundwater levels permitting.

Runoff
Runoff can be reduced by using pervious and absorbent surfaces. Hard landscapes should be minimised, with pervious surfaces specified for parking and paths.

Planting and landscaping
Planting must be indigenous, with low water requirements. Planting can help to prevent excessive water evaporation, modify the ambient temperature around a building, act as a windbreak, help to filter pollution and provide privacy. The character and contours of the site should be retained as far as possible, to assist with water absorption and reduce runoff.

[Figure 4_20: water usage, diagrammatic representation]

4.4.3 Energy
“Buildings consume about 50% of all energy produced. Conventional energy production is responsible for making a large contribution to environmental damage and non-renewable resource depletion. Using less energy or using renewable energy in buildings therefore can make a substantial contribution to sustainability” (Gibberd 2003:SBAT).

Natural lighting
Natural lighting is used as much as possible throughout the building complex. There has to be sufficient light for visual focus and to perform the desired task. Glare must be avoided. Artificial lighting should be limited to nighttime, and energy-efficient lighting fixtures must be used.

Ventilation
Natural ventilation is maximised. The interiors are cooled by openable windows, most located near ceiling level to allow stale and warm air out. In areas with high moisture levels and excessive heat, such as bathrooms and kitchens, extractor fans are used to aid ventilation.

Heating and cooling
Energy-efficient systems are used within the building to passively control temperatures. Passive methods for heating include direct gain, Trombe walls/floors and fireplaces. Passive cooling uses the building’s thermal mass, as well as ventilation, to keep the structure, and so the rooms, cool. Openings are shaded to prevent uncontrolled solar gain.

Renewable energy
Solar hot water systems are used to heat water in summer. Back-up electrical systems are used in very cold weather, or in conditions with little sunlight.
4.4.4 Site
Buildings have a footprint and a size that take up space that could otherwise be occupied by natural ecosystems, which contribute to sustainability by helping create and maintain an environment that supports life. Buildings can support sustainability by limiting development to sites that have already been disturbed, and by working with nature, including aspects of natural ecosystems within the development.

Energy
A building consumes energy in a number of ways: in the manufacture of building materials, components and systems (embodied energy); in the distribution and transportation of building materials and components to the site (‘grey’ energy); in the construction of the building (induced energy); and in running the building and its occupants’ equipment and appliances (operational energy). A building also consumes energy in its maintenance, alteration and final disposal. An energy-efficient building looks to reduce consumption in all of these areas (Jones 1998:36).

‘Brownfields’
The site as a whole is largely a brownfields site. The building complex is situated in areas that have already been disturbed by human intervention. The proposed buildings must not cause further environmental degradation.

Landscape inputs
All new planting must be of indigenous species. Exotic species must be cleared from the site. However, the clumps of exotic Silver Birch are to be retained due to the quality of place they create. The planting and vegetation chosen for the site must take into consideration the natural climatic and soil conditions.

4.4.5 Recycling and Re-use
"Raw materials and new components used in buildings consume resources and energy in their manufacture and processes. Buildings accommodate activities that consume large amounts of resources and products and produce large amounts of waste. Reducing the use of new materials and components in buildings and in the activities accommodated and reducing waste by recycling and reuse supports sustainability by reducing the energy consumption and resource consumption" (Gibberd 2004:SBAT).

Inorganic waste
This waste should be sorted into what can be recycled or re-used, and either stored or arrangements made for the recyclable waste to be taken to an appropriate plant.

Organic waste
This must be recycled and disposed of on site; greywater can be filtered and re-used, blackwater treated and used for irrigation, and other organic waste used for compost.

Construction waste
Construction waste must be minimised through design management and construction practices. Design allowances should be made for material recovery through disassembly, and for the adaptive reuse of salvaged building materials.

4.4.6 Materials and Components
"The construction of buildings usually requires large quantities of materials and components. These may require large amounts of energy to produce. Their development may also require processes that are harmful to the environment and consume non-renewable resources" (Gibberd 2004:SBAT). Embodied energy studies have assessed the energy taken to bring materials and components to their final position. This includes extraction of the raw material, processing it into a workable material, making components and products, installation and use, removal and demolition, as well as the transport and storage of the product at each stage. Industry and its products can have damaging effects on the environment. If a suitable alternative material can be found which is less damaging to the environment, then it should be used.
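The four energy categories named above combine by simple addition over a building's life. As a rough, purely illustrative sketch of that arithmetic, the following Python snippet sums hypothetical per-category figures for two candidate wall constructions; the material names, figures and 50-year service life are invented for illustration and are not taken from Jones (1998) or the SBAT.

```python
# Illustrative only: hypothetical figures, not measured data.
# Life-cycle energy = embodied + grey (transport) + induced (construction)
#                     + operational energy over the building's service life.

from dataclasses import dataclass

@dataclass
class MaterialOption:
    name: str
    embodied_mj_per_m2: float           # manufacture of materials and components
    grey_mj_per_m2: float               # transport and distribution to site
    induced_mj_per_m2: float            # energy used during construction
    operational_mj_per_m2_year: float   # heating, cooling, lighting in use

def life_cycle_energy(option: MaterialOption, service_life_years: float) -> float:
    """Total life-cycle energy per m2 of floor area, in MJ."""
    return (option.embodied_mj_per_m2
            + option.grey_mj_per_m2
            + option.induced_mj_per_m2
            + option.operational_mj_per_m2_year * service_life_years)

# Hypothetical comparison: on-site rammed earth vs. a transported masonry wall.
rammed_earth = MaterialOption("rammed earth (on-site soil)", 450, 20, 80, 110)
fired_brick  = MaterialOption("fired brick (transported)", 1200, 150, 60, 100)

for option in (rammed_earth, fired_brick):
    total = life_cycle_energy(option, service_life_years=50)
    print(f"{option.name}: {total:,.0f} MJ/m2 over 50 years")
```

The point of the sketch is only that reducing one category (for example embodied energy through local, low-process materials) can be weighed against the operational term, which dominates over a long service life.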
Materials should be chosen for their local manufacture, low embodied energy and limited environmental damage, for their properties for recycling and re-use at a later stage, and lastly for their aesthetic appeal. Earth and stone found on site make up a major part of the construction materials used in the buildings. Other materials used are found within close proximity of the site.

Rammed earth
Rammed earth is a method of simple wall construction that uses formwork, of wood or steel, into which a damp, gravelly earth mixture is rammed in layers until fully compacted. When the forms are removed the wall is complete, except for curing, and requires no further treatment other than plaster finishes or cosmetic treatments as desired (McHenry 1984:48). The final product is solid and durable. There are many benefits to using earth construction in South Africa. Earth has good thermal properties; it stores energy in the form of heat due to its mass, and is warm in winter and cool in summer. Soil is a readily available resource that is relatively cheap, or even free if it is excavated on site. Due to a long tradition of earth construction in this country, many people have the skills to build with earth. Earth construction is labour-intensive, and provides jobs. Due to the availability of the material, its cost and the available skills, earth building is a highly affordable alternative to some conventional technologies. Local communities become directly involved in the process and production of the building and generate income from its construction. Ideally, soil used in earth construction must contain four elements: coarse sand or aggregate, fine sand, silt and clay (McHenry 1984:48). Earth construction has good compressive strength but poor tensile strength, which must be addressed through appropriate structural design and construction. (A more detailed report on rammed earth and the other building materials used is included in the Technical Documentation chapter.) The Accommodation Schedule for the building complex is contained in Appendix E. The Sustainable Building Assessment Tool (SBAT), tables and graph are contained in Appendix F.
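As a small, purely illustrative aid to the composition rule cited from McHenry (1984:48), the sketch below checks that a proposed soil mix contains all four named constituents and that its stated fractions account for the whole sample. It deliberately assumes no target proportions, since the passage above names the constituents but not their ratios; the sample values are hypothetical.

```python
# Illustrative check only: McHenry (as cited above) names four soil
# constituents for earth construction but no proportions are given here,
# so this sketch only verifies presence and a consistent breakdown.

REQUIRED_CONSTITUENTS = ("coarse sand or aggregate", "fine sand", "silt", "clay")

def check_soil_mix(fractions: dict[str, float], tolerance: float = 0.01) -> list[str]:
    """Return a list of problems with a proposed soil mix (empty list = none found)."""
    problems = []
    for constituent in REQUIRED_CONSTITUENTS:
        if fractions.get(constituent, 0.0) <= 0.0:
            problems.append(f"missing constituent: {constituent}")
    total = sum(fractions.values())
    if abs(total - 1.0) > tolerance:
        problems.append(f"fractions sum to {total:.2f}, expected 1.00")
    return problems

# Hypothetical on-site soil sample (fractions by volume, invented values).
sample = {
    "coarse sand or aggregate": 0.30,
    "fine sand": 0.35,
    "silt": 0.20,
    "clay": 0.15,
}

issues = check_soil_mix(sample)
print("mix acceptable" if not issues else "; ".join(issues))
```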
MYTH TODAY What is a myth, today? I shall give at the outset a first, very simple answer, which is perfectly consistent with etymology: myth is a type of speech. Myth is a type of speech Of course, it is not any type: language needs special conditions in order to become myth: we shall see them in a minute. But what must be firmly established at the start is that myth is a system of communication, that it is a message. This allows one to perceive that myth cannot possibly be an object, a concept, or an idea; it is a mode of signification, a form. Later, we shall have to assign to this form historical limits, conditions of use, and reintroduce society into it: we must nevertheless first describe it as a form. It can be seen that to purport to discriminate among mythical objects according to their substance would be entirely illusory: since myth is a type of speech, everything can be a myth provided it is conveyed by a discourse. Myth is not defined by the object of its message, but by the way in which it utters this message: there are formal limits to myth, there are no 'substantial' ones. Everything, then, can be a myth? Yes, I believe this, for the universe is infinitely fertile in suggestions. Every object in the world can pass from a closed, silent existence to an oral state, open to appropriation by society, for there is no law, whether natural or not, which forbids talking about things. A tree is a tree. Yes, of course. But a tree as expressed by Minou Drouet is no longer quite a tree, it is a tree which is decorated, adapted to a certain type of consumption, laden with literary self-indulgence, revolt, images, in short with a type of social usage which is added to pure matter. Naturally, everything is not expressed at the same time: some objects become the prey of mythical speech for a while, then they disappear, others take their place and attain the status of myth. Are there objects which are inevitably a source of suggestiveness, as Baudelaire suggested about Woman? Certainly not: one can conceive of very ancient myths, but there are no eternal ones; for it is human history which converts reality into speech, and it alone rules the life and the death of mythical language. Ancient or not, mythology can only have an historical foundation, for myth is a type of speech chosen by history: it cannot possibly evolve from the 'nature' of things. Speech of this kind is a message. It is therefore by no means confined to oral speech. It can consist of modes of writing or of representations; not only written discourse, but also photography, cinema, reporting, sport, shows, publicity, all these can serve as a support to mythical speech. Myth can be defined neither by its object nor by its material, for any material can arbitrarily be endowed with meaning: the arrow which is brought in order to signify a challenge is also a kind of speech. True, as far as perception is concerned, writing and pictures, for instance, do not call upon the same type of consciousness; and even with pictures, one can use many kinds of reading: a diagram lends itself to signification more than a drawing, a copy more than an original, and a caricature more than a portrait. But this is the point: we are no longer dealing here with a theoretical mode of representation: we are dealing with this particular image, which is given for this particular signification. 
Mythical speech is made of a material which has already been worked on so as to make it suitable for communication: it is because all the materials of myth (whether pictorial or written) presuppose a signifying consciousness, that one can reason about them while discounting their substance. This substance is not unimportant: pictures, to be sure, are more imperative than writing, they impose meaning at one stroke, without analyzing or diluting it. But this is no longer a constitutive difference. Pictures become a kind of writing as soon as they are meaningful: like writing, they call for a lexis. We shall therefore take language, discourse, speech, etc., to mean any significant unit or synthesis, whether verbal or visual: a photograph will be a kind of speech for us in the same way as a newspaper article; even objects will become speech, if they mean something. This generic way of conceiving language is in fact justified by the very history of writing: long before the invention of our alphabet, objects like the Inca quipu, or drawings, as in pictographs, have been accepted as speech. This does not mean that one must treat mythical speech like language; myth in fact belongs to the province of a general science, coextensive with linguistics, which is semiology. **Myth as a semiological system** For mythology, since it is the study of a type of speech, is but one fragment of this vast science of signs which Saussure postulated some forty years ago under the name of semiology. Semiology has not yet come into being. But since Saussure himself, and sometimes independently of him, a whole section of contemporary research has constantly been referred to the problem of meaning: psychoanalysis, structuralism, eidetic psychology, some new types of literary criticism of which Bachelard has given the first examples, are no longer concerned with facts except inasmuch as they are endowed with significance. Now to postulate a signification is to have recourse to semiology. I do not mean that semiology could account for all these aspects of research equally well: they have different contents. But they have a common status: they are all sciences dealing with values. They are not content with meeting the facts: they define and explore them as tokens for something else. Semiology is a science of forms, since it studies significations apart from their content. I should like to say one word about the necessity and the limits of such a formal science. The necessity is that which applies in the case of any exact language. Zhdanov made fun of Alexandrov the philosopher, who spoke of 'the spherical structure of our planet.' 'It was thought until now', Zhdanov said, 'that form alone could be spherical.' Zhdanov was right: one cannot speak about structures in terms of forms, and vice versa. It may well be that on the plane of 'life', there is but a totality where structures and forms cannot be separated. But science has no use for the ineffable: it must speak about life if it wants to transform it. Against a certain quixotism of synthesis, quite platonically incidentally, all criticism must consent to the ascesis, to the artifice of analysis; and in analysis, it must match method and language. Less terrorized by the specter of 'formalism', historical criticism might have been less sterile; it would have understood that the specific study of forms does not in any way contradict the necessary principles of totality and History. 
On the contrary: the more a system is specifically defined in its forms, the more amenable it is to historical criticism. To parody a well-known saying, I shall say that a little formalism turns one away from History, but that a lot brings one back to it. Is there a better example of total criticism than the description of saintliness, at once formal and historical, semiological and ideological, in Sartre's Saint-Genet? The danger, on the contrary, is to consider forms as ambiguous objects, half-form and half-substance, to endow form with a substance of form, as was done, for instance, by Zhdanovian realism. Semiology, once its limits are settled, is not a metaphysical trap: it is a science among others, necessary but not sufficient. The important thing is to see that the unity of an explanation cannot be based on the amputation of one or other of its approaches, but, as Engels said, on the dialectical co-ordination of the particular sciences it makes use of. This is the case with mythology: it is a part both of semiology inasmuch as it is a formal science, and of ideology inasmuch as it is an historical science: it studies ideas-in-form.² Let me therefore restate that any semiology postulates a relation between two terms, a signifier and a signified. This relation concerns objects which belong to different categories, and this is why it is not one of equality but one of equivalence. We must here be on our guard for despite common parlance which simply says that the signifier expresses the signified, we are dealing, in any semiological system, not with two, but with three different terms. For what we grasp is not at all one term after the other, but the correlation which unites them: there are, therefore, the signifier, the signified and the sign, which is the associative total of the first two terms. Take a bunch of roses: I use it to signify my passion. Do we have here, then, only a signifier and a signified, the roses and my passion? Not even that: to put it accurately, there are here only 'passionified' roses. But on the plane of analysis, we do have three terms; for these roses weighted with passion perfectly and correctly allow themselves to be decomposed into roses and passion: the former and the latter existed before uniting and forming this third object, which is the sign. It is as true to say that on the plane of experience I cannot dissociate the roses from the message they carry, as to say that on the plane of analysis I cannot confuse the roses as signifier and the roses as sign: the signifier is empty, the sign is full, it is a meaning. Or take a black pebble: I can make it signify in several ways, it is a mere signifier; but if I weigh it with a definite signified (a death sentence, for instance, in an anonymous vote), it will become a sign. Naturally, there are between the signifier, the signified and the sign, functional implications (such as that of the part to the whole) which are so close that to analyse them may seem futile; but we shall see in a moment that this distinction has a capital importance for the study of myth as semiological schema. Naturally these three terms are purely formal, and different contents can be given to them. Here are a few examples: for Saussure, who worked on a particular but methodologically exemplary semiological system—the language or langue—the signified is the concept, the signifier is the acoustic image (which is mental) and the relation between concept and image is the sign (the word, for instance), which is a concrete entity.
For Freud, as is well known, the human psyche is a stratification of tokens or representatives. One term (I refrain from giving it any precedence) is constituted by the manifest meaning of behavior, another, by its latent or real meaning (it is, for instance, the substratum of the dream); as for the third term, it is here also a correlation of the first two: it is the dream itself in its totality, the parapraxis (a mistake in speech or behavior) or the neurosis, conceived as compromises, as economies effected thanks to the joining of a form (the first term) and an intentional function (the second term). We can see here how necessary it is to distinguish the sign from the signifier: a dream, to Freud, is no more its manifest datum than its latent content: it is the functional union of these two terms. In Sartrean criticism, finally (I shall keep to these three well known examples), the signified is constituted by the original crisis in the subject (the separation from his mother for Baudelaire, the naming of the theft for Genet); Literature as discourse forms the signifier; and the relation between crisis and discourse defines the work, which is a signification. Of course, this tri-dimensional pattern, however constant in its form, is actualized in different ways: one cannot therefore say too often that semiology can have its unity only at the level of forms, not contents; its field is limited, it knows only one operation: reading, or deciphering. In myth, we find again the tri-dimensional pattern which I have just described: the signifier, the signified and the sign. But myth is a peculiar system, in that it is constructed from a semiological chain which existed before it: it is a second-order semiological system. That which is a sign (namely the associative total of a concept and an image) in the first system, becomes a mere signifier in the second. We must here recall that the materials of mythical speech (the language itself, photography, painting, posters, rituals, objects, etc.), however different at the start, are reduced to a pure signifying function as soon as they are caught by myth. Myth sees in them only the same raw material; their unity is that they all come down to the status of a mere language. Whether it deals with alphabetical or pictorial writing, myth wants to see in them only a sum of signs, a global sign, the final term of a first semiological chain. And it is precisely this final term which will become the first term of the greater system which it builds and of which it is only a part. Everything happens as if myth shifted the formal system of the first significations sideways. As this lateral shift is essential for the analysis of myth, I shall represent it in the following way, it being understood, of course, that the spatialization of the pattern is here only a metaphor:

[the following is a stripped-down representation of Barthes's original diagram]

Language:  1. Signifier    2. Signified
           3. Sign
Myth:      I. SIGNIFIER (= the 3. Sign of the language system)    II. SIGNIFIED
           III. SIGN
It can be seen that in myth there are two semiological systems, one of which is staggered in relation to the other: a linguistic system, the language (or the modes of representation which are assimilated to it), which I shall call the language-object, because it is the language which myth gets hold of in order to build its own system; and myth itself, which I shall call metalanguage, because it is a second language, in which one speaks about the first. When he reflects on a metalanguage, the semiologist no longer needs to ask himself questions about the composition of the language object, he no longer has to take into account the details of the linguistic schema; he will only need to know its total term, or global sign, and only inasmuch as this term lends itself to myth. This is why the semiologist is entitled to treat in the same way writing and pictures: what he retains from them is the fact that they are both signs, that they both reach the threshold of myth endowed with the same signifying function, that they constitute, one just as much as the other, a language-object. It is now time to give one or two examples of mythical speech. I shall borrow the first from an observation by Valéry. I am a pupil in the second form in a French lycée. I open my Latin grammar, and I read a sentence, borrowed from Aesop or Phaedrus: quia ego nominor leo. I stop and think. There is something ambiguous about this statement: on the one hand, the words in it do have a simple meaning: because my name is lion. And on the other hand, the sentence is evidently there in order to signify something else to me. Inasmuch as it is addressed to me, a pupil in the second form, it tells me clearly: I am a grammatical example meant to illustrate the rule about the agreement of the predicate. I am even forced to realize that the sentence in no way signifies its meaning to me, that it tries very little to tell me something about the lion and what sort of name he has; its true and fundamental signification is to impose itself on me as the presence of a certain agreement of the predicate. I conclude that I am faced with a particular, greater, semiological system, since it is co-extensive with the language: there is, indeed, a signifier, but this signifier is itself formed by a sum of signs, it is in itself a first semiological system (my name is lion). Thereafter, the formal pattern is correctly unfolded: there is a signified (I am a grammatical example) and there is a global signification, which is none other than the correlation of the signifier and the signified; for neither the naming of the lion nor the grammatical example are given separately. And here is now another example: I am at the barber's, and a copy of Paris-Match is offered to me. On the cover, a young Negro in a French uniform is saluting, with his eyes uplifted, probably fixed on a fold of the tricolour. All this is the meaning of the picture. But, whether naively or not, I see very well what it signifies to me: that France is a great Empire, that all her sons, without any color discrimination, faithfully serve under her flag, and that there is no better answer to the detractors of an alleged colonialism than the zeal shown by this Negro in serving his so-called oppressors.
I am therefore again faced with a greater semiological system: there is a signifier, itself already formed with a previous system (a black soldier is giving the French salute); there is a signified (it is here a purposeful mixture of Frenchness and militariness); finally, there is a presence of the signified through the signifier. Before tackling the analysis of each term of the mythical system, one must agree on terminology. We now know that the signifier can be looked at, in myth, from two points of view: as the final term of the linguistic system, or as the first term of the mythical system. We therefore need two names. On the plane of language, that is, as the final term of the linguistic system, I shall call the signifier: meaning (my name is lion, a Negro is giving the French salute); on the plane of myth, I shall call it: form. In the case of the signified, no ambiguity is possible: we shall retain the name concept. The third term is the correlation of the first two: in the linguistic system, it is the sign; but it is not possible to use this word again without ambiguity, since in myth (and this is the chief peculiarity of the latter), the signifier is already formed by the signs of the language. I shall call the third term of myth the signification. This word is here all the better justified since myth has in fact a double function: it points out and it notifies, it makes us understand something and it imposes it on us. The form and the concept The signifier of myth presents itself in an ambiguous way: it is at the same time meaning and form, full on one side and empty on the other. As meaning, the signifier already postulates a reading, I grasp it through my eyes, it has a sensory reality (unlike the linguistic signifier, which is purely mental), there is a richness in it: the naming of the lion, the Negro's salute are credible wholes, they have at their disposal a sufficient rationality. As a total of linguistic signs, the meaning of the myth has its own value, it belongs to a history, that of the lion or that of the Negro: in the meaning, a signification is already built, and could very well be self-sufficient if myth did not take hold of it and did not turn it suddenly into an empty, parasitical form. The meaning is already complete, it postulates a kind of knowledge, a past, a memory, a comparative order of facts, ideas, decisions. When it becomes form, the meaning leaves its contingency behind; it empties itself, it becomes impoverished, history evaporates, only the letter remains. There is here a paradoxical permutation in the reading operations, an abnormal regression from meaning to form, from the linguistic sign to the mythical signifier. If one encloses quia ego nominor leo in a purely linguistic system, the clause finds again there a fullness, a richness, a history: I am an animal, a lion, I live in a certain country, I have just been hunting, they would have me share my prey with a heifer, a cow and a goat; but being the stronger, I award myself all the shares for various reasons, the last of which is quite simply that my name is lion. But as the form of the myth, the clause hardly retains anything of this long story. The meaning contained a whole system of values: a history, a geography, a morality, a zoology, a Literature. The form has put all this richness at a distance: its newly acquired penury calls for a signification to fill it. 
The story of the lion must recede a great deal in order to make room for the grammatical example, one must put the biography of the Negro in parentheses if one wants to free the picture, and prepare it to receive its signified. But the essential point in all this is that the form does not suppress the meaning, it only impoverishes it, it puts it at a distance, it holds it at one's disposal. One believes that the meaning is going to die, but it is a death with reprieve; the meaning loses its value, but keeps its life, from which the form of the myth will draw its nourishment. The meaning will be for the form like an instantaneous reserve of history, a tamed richness, which it is possible to call and dismiss in a sort of rapid alternation: the form must constantly be able to be rooted again in the meaning and to get there what nature it needs for its nutriment; above all, it must be able to hide there. It is this constant game of hide-and-seek between the meaning and the form which defines myth. The form of myth is not a symbol: the Negro who salutes is not the symbol of the French Empire: he has too much presence, he appears as a rich, fully experienced, spontaneous, innocent, indisputable image. But at the same time this presence is tamed, put at a distance, made almost transparent; it recedes a little, it becomes the accomplice of a concept which comes to it fully armed, French imperialism: once made use of, it becomes artificial. Let us now look at the signified: this history which drains out of the form will be wholly absorbed by the concept. As for the latter, it is determined, it is at once historical and intentional; it is the motivation which causes the myth to be uttered. Grammatical exemplarity, French imperialism, are the very drives behind the myth. The concept reconstitutes a chain of causes and effects, motives and intentions. Unlike the form, the concept is in no way abstract: it is filled with a situation. Through the concept, it is a whole new history which is implanted in the myth. Into the naming of the lion, first drained of its contingency, the grammatical example will attract my whole existence: Time, which caused me to be born at a certain period when Latin grammar is taught; History, which sets me apart, through a whole mechanism of social segregation, from the children who do not learn Latin; pedagogic tradition, which caused this example to be chosen from Aesop or Phaedrus; my own linguistic habits, which see the agreement of the predicate as a fact worthy of notice and illustration. The same goes for the Negro-giving-the-salute: as form, its meaning is shallow, isolated, impoverished; as the concept of French imperialty, here it is again tied to the totality of the world: to the general History of France, to its colonial adventures, to its present difficulties. Truth to tell, what is invested in the concept is less reality than a certain knowledge of reality; in passing from the meaning to the form, the image loses some knowledge: the better to receive the knowledge in the concept. In actual fact, the knowledge contained in a mythical concept is confused, made of yielding, shapeless associations. One must firmly stress this open character of the concept; it is not at all an abstract, purified essence, it is a formless, unstable, nebulous condensation, whose unity and coherence are above all due to its function. 
In this sense, we can say that the fundamental character of the mythical concept is to be appropriated: grammatical exemplarity very precisely concerns a given form of pupils, French imperialty must appeal to such and such group of readers and not another. The concept closely corresponds to a function, it is defined as a tendency. This cannot fail to recall the signified in another semiological system, Freudianism. In Freud, the second term of the system is the latent meaning (the content) of the dream, of the parapraxis, of the neurosis. Now Freud does remark that the second-order meaning of behavior is its real meaning, that which is appropriate to a complete situation, including its deeper level; it is, just like the mythical concept, the very intention of behavior. A signified can have several signifiers: this is indeed the case in linguistics and psycho-analysis. It is also the case in the mythical concept: it has at its disposal an unlimited mass of signifiers: I can find a thousand Latin sentences to actualize for me the agreement of the predicate, I can find a thousand images which signify to me French imperialty. This means that quantitively, the concept is much poorer than the signifier, it often does nothing but re-present itself. Poverty and richness are in reverse proportion in the form and the concept: to the qualitative poverty of the form, which is the repository of a rarefied meaning, there corresponds the richness of the concept which is open to the whole of History; and to the quantitative abundance of the forms there corresponds a small number of concepts. This repetition of the concept through different forms is precious to the mythologist, it allows him to decipher the myth: it is the insistence of a kind of behavior which reveals its intention. This confirms that there is no regular ratio between the volume of the signified and that of the signifier. In language, this ratio is proportionate, it hardly exceeds the word, or at least the concrete unit. In myth, on the contrary, the concept can spread over a very large expanse of signifier. For instance, a whole book may be the signifier of a single concept; and conversely, a minute form (a word, a gesture, even incidental, so long as it is noticed) can serve as signifier to a concept filled with a very rich history. Although unusual in language, this disproportion between signifier and signified is not specific to myth: in Freud, for instance, the parapraxis is a signifier whose thinness is out of proportion to the real meaning which it betrays. As I said, there is no fixity in mythical concepts: they can come into being, alter, disintegrate, disappear completely. And it is precisely because they are historical that history can very easily suppress them. This instability forces the mythologist to use a terminology adapted to it, and about which I should now like to say a word, because it often is a cause for irony: I mean neologism. The concept is a constituting element of myth: if I want to decipher myths, I must somehow be able to name concepts. The dictionary supplies me with a few: Goodness, Kindness, Wholeness, Humaneness, etc. But by definition, since it is the dictionary which gives them to me, these particular concepts are not historical. Now what I need most often is ephemeral concepts, in connection with limited contingencies: neologism is then inevitable. 
China is one thing, the idea which a French petit bourgeois could have of it not so long ago is another: for this peculiar mixture of bells, rickshaws and opium-dens, no other word possible but Sininess. Unlovely? One should at least get some consolation from the fact that conceptual neologisms are never arbitrary: they are built according to a highly sensible proportional rule. The signification In semiology, the third term is nothing but the association of the first two, as we saw. It is the only one which is allowed to be seen in a full and satisfactory way, the only one which is consumed in actual fact. I have called it: the signification. We can see that the signification is the myth itself, just as the Saussurean sign is the word (or more accurately the concrete unit). But before listing the characters of the signification, one must reflect a little on the way in which it is prepared, that is, on the modes of correlation of the mythical concept and the mythical form. First we must note that in myth, the first two terms are perfectly manifest (unlike what happens in other semiological systems): one of them is not 'hidden' behind the other, they are both given here (and not one here and the other there). However paradoxical it may seem, myth hides nothing: its function is to distort, not to make disappear. There is no latency of the concept in relation to the form: there is no need of an unconscious in order to explain myth. Of course, one is dealing with two different types of manifestation: form has a literal, immediate presence; moreover, it is extended. This stems--this cannot be repeated too often--from the nature of the mythical signifier, which is already linguistic: since it is constituted by a meaning which is already outlined, it can appear only through a given substance (whereas in language, the signifier remains mental). In the case of oral myth, this extension is linear (for my name is lion); in that of visual myth, it is multi-dimensional (in the center, the Negro's uniform, at the top, the blackness of his face, on the left, the military salute, etc.). The elements of the form therefore are related to place and proximity: the mode of presence of the form is spatial. The concept, on the contrary, appears in global fashion, it is a kind of nebula, the condensation, more or less hazy, of a certain knowledge. Its elements are linked by associative relations: it is supported not by an extension but by a depth (although this metaphor is perhaps still too spatial): its mode of presence is memorial. The relation which unites the concept of the myth to its meaning is essentially a relation of deformation. We find here again a certain formal analogy with a complex semiological system such as that of the various types of psychoanalysis. Just as for Freud the manifest meaning of behavior is distorted by its latent meaning, in myth the meaning is distorted by the concept. Of course, this distortion is possible only because the form of the myth is already constituted by a linguistic meaning. In a simple system like the language, the signified cannot distort anything at all because the signifier, being empty, arbitrary, offers no resistance to it. But here, everything is different: the signifier has, so to speak, two aspects: one full, which is the meaning (the history of the lion, of the Negro soldier), one empty, which is the form (for my name is lion; Negro-French-soldier-saluting-the-tricolor). 
What the concept distorts is of course what is full, the meaning: the lion and the Negro are deprived of their history, changed into gestures. What Latin exemplarity distorts is the naming of the lion, in all its contingency; and what French imperiality obscures is also a primary language, a factual discourse which was telling me about the salute of a Negro in uniform. But this distortion is not an obliteration: the lion and the Negro remain here, the concept needs them; they are half-amputated, they are deprived of memory, not of existence: they are at once stubborn, silently rooted there, and garrulous, a speech wholly at the service of the concept. The concept, literally, deforms, but does not abolish the meaning; a word can perfectly render this contradiction: it alienates it. What must always be remembered is that myth is a double system; there occurs in it a sort of ubiquity: its point of departure is constituted by the arrival of a meaning. To keep a spatial metaphor, the approximative character of which I have already stressed, I shall say that the signification of the myth is constituted by a sort of constantly moving turnstile which presents alternately the meaning of the signifier and its form, a language object and a metalanguage, a purely signifying and a purely imagining consciousness. This alternation is, so to speak, gathered up in the concept, which uses it like an ambiguous signifier, at once intellective and imaginary, arbitrary and natural. I do not wish to prejudge the moral implications of such a mechanism, but I shall not exceed the limits of an objective analysis if I point out that... the ubiquity of the signifier in myth exactly reproduces the physique of the alibi (which is, as one realizes, a spatial term): in the alibi too, there is a place which is full and one which is empty, linked by a relation of negative identity ('I am not where you think I am; I am where you think I am not'). But the ordinary alibi (for the police, for instance) has an end; reality stops the turnstile revolving at a certain point. Myth is a value, truth is no guarantee for it; nothing prevents it from being a perpetual alibi: it is enough that its signifier has two sides for it always to have an 'elsewhere' at its disposal. The meaning is always there to present the form; the form is always there to outdistance the meaning. And there never is any contradiction, conflict, or split between the meaning and the form: they are never at the same place. In the same way, if I am in a car and I look at the scenery through the window, I can at will focus on the scenery or on the window-pane. At one moment I grasp the presence of the glass and the distance of the landscape; at another, on the contrary, the transparency of the glass and the depth of the landscape; but the result of this alternation is constant: the glass is at once present and empty to me, and the landscape unreal and full. The same thing occurs in the mythical signifier: its form is empty but present, its meaning absent but full. To wonder at this contradiction I must voluntarily interrupt this turnstile of form and meaning, I must focus on each separately, and apply to myth a static method of deciphering, in short, I must go against its own dynamics: to sum up, I must pass from the state of reader to that of mythologist. And it is again this duplicity of the signifier which determines the characters of the signification.
We now know that myth is a type of speech defined by its intention (I am a grammatical example) much more than by its literal sense (my name is lion); and that in spite of this, its intention is somehow frozen, purified, eternalized, made absent by this literal sense (The French Empire? It's just a fact: look at this good Negro who salutes like one of our own boys). This constituent ambiguity of mythical speech has two consequences for the signification, which henceforth appears both like a notification and like a statement of fact. Myth has an imperative, buttonholing character: stemming from an historical concept, directly springing from contingency (a Latin class, a threatened Empire), it is I whom it has come to seek. It is turned towards me, I am subjected to its intentional force, it summons me to receive its expansive ambiguity. If, for instance, I take a walk in Spain, in the Basque country, I may well notice in the houses an architectural unity, a common style, which leads me to acknowledge the Basque house as a definite ethnic product. However, I do not feel personally concerned, nor, so to speak, attacked by this unitary style: I see only too well that it was here before me, without me. It is a complex product which has its determinations at the level of a very wide history: it does not call out to me, it does not provoke me into naming it, except if I think of inserting it into a vast picture of rural habitat. But if I am in the Paris region and I catch a glimpse, at the end of the rue Gambetta or the rue Jean-Jaures, of a natty white chalet with red tiles, dark brown half-timbering, an asymmetrical roof and a wattle-and-daub front, I feel as if I were personally receiving an imperious injunction to name this object a Basque chalet: or even better, to see it as the very essence of basquity. This is because the concept appears to me in all its appropriative nature: it comes and seeks me out in order to oblige me to acknowledge the body of intentions which have motivated it and arranged it there as the signal of an individual history, as a confidence and a complicity: it is a real call, which the owners of the chalet send out to me. And this call, in order to be more imperious, has agreed to all manner of impoverishments: all that justified the Basque house on the plane of technology--the barn, the outside stairs, the dovecote, etc.--has been dropped; there remains only a brief order, not to be disputed. And the abomination is so frank that I feel this chalet has just been created on the spot, for me, like a magical object springing up in my present life without any trace of the history which has caused it. For this interpellant speech is at the same time a frozen speech: at the moment of reaching me, it suspends itself, turns away and assumes the look of a generality: it stiffens, it makes itself look neutral and innocent. The appropriation of the concept is suddenly driven away once more by the literalness of the meaning. This is a kind of arrest, in both the physical and the legal sense of the term: French imperialism condemns the saluting Negro to be nothing more than an instrumental signifier, the Negro suddenly hails me in the name of French imperialty; but at the same moment the Negro's salute thickens, becomes vitrified, freezes into an eternal reference meant to establish French imperialty. 
On the surface of language something has stopped moving: the use of the signification is here, hiding behind the fact, and conferring on it a notifying look; but at the same time, the fact paralyses the intention, gives it something like a malaise producing immobility: in order to make it innocent, it freezes it. This is because myth is speech stolen and restored. Only, speech which is restored is no longer quite that which was stolen: when it was brought back, it was not put exactly in its place. It is this brief act of larceny, this moment taken for a surreptitious faking, which gives mythical speech its benumbed look. One last element of the signification remains to be examined: its motivation. We know that in a language, the sign is arbitrary: nothing compels the acoustic image tree 'naturally' to mean the concept tree: the sign, here, is unmotivated. Yet this arbitrariness has limits, which come from the associative relations of the word: the language can produce a whole fragment of the sign by analogy with other signs (for instance one says amiable in French, and not amable, by analogy with aime). The mythical signification, on the other hand, is never arbitrary; it is always in part motivated, and unavoidably contains some analogy. For Latin exemplarity to meet the naming of the lion, there must be an analogy, which is the agreement of the predicate; for French imperialty to get hold of the saluting Negro, there must be identity between the Negro's salute and that of the French soldier. Motivation is necessary to the very duplicity of myth: myth plays on the analogy between meaning and form, there is no myth without motivated form. In order to grasp the power of motivation in myth, it is enough to reflect for a moment on an extreme case. I have here before me a collection of objects so lacking in order that I can find no meaning in it; it would seem that here, deprived of any previous meaning, the form could not root its analogy in anything, and that myth is impossible. But what the form can always give one to read is disorder itself: it can give a signification to the absurd, make the absurd itself a myth. This is what happens when commonsense mythifies surrealism, for instance. Even the absence of motivation does not embarrass myth; for this absence will itself be sufficiently objectified to become legible: and finally, the absence of motivation will become a second-order motivation, and myth will be re-established. Motivation is unavoidable. It is none the less very fragmentary. To start with, it is not 'natural': it is history which supplies its analogies to the form. Then, the analogy between the meaning and the concept is never anything but partial: the form drops many analogous features and keeps only a few: it keeps the sloping roof, the visible beams in the Basque chalet, it abandons the stairs, the barn, the weathered look, etc. One must even go further: a complete image would exclude myth, or at least would compel it to seize only its very completeness. This is just what happens in the case of bad painting, which is wholly based on the myth of what is 'filled out' and 'finished' (it is the opposite and symmetrical case of the myth of the absurd: here, the form mythifies an 'absence', there, a surplus). But in general myth prefers to work with poor, incomplete images, where the meaning is already relieved of its fat, and ready for a signification, such as caricatures, pastiches, symbols, etc. 
Finally, the motivation is chosen among other possible ones: I can very well give to French imperiality many other signifiers beside a Negro's salute: a French general pins a decoration on a one-armed Senegalese, a nun hands a cup of tea to a bed-ridden Arab, a white schoolmaster teaches attentive pickaninnies: the press undertakes every day to demonstrate that the store of mythical signifiers is inexhaustible. The nature of the mythical signification can in fact be well conveyed by one particular simile: it is neither more nor less arbitrary than an ideograph. Myth is a pure ideographic system, where the forms are still motivated by the concept which they represent while not yet, by a long way, covering the sum of its possibilities for representation. And just as, historically, ideographs have gradually left the concept and have become associated with the sound, thus growing less and less motivated, the worn out state of a myth can be recognized by the arbitrariness of its signification: the whole of Molière is seen in a doctor's ruff.

**Reading and deciphering myth** How is a myth received? We must here once more come back to the duplicity of its signifier, which is at once meaning and form. I can produce three different types of reading by focusing on the one, or the other, or both at the same time.8
1. If I focus on an empty signifier, I let the concept fill the form of the myth without ambiguity, and I find myself before a simple system, where the signification becomes literal again: the Negro who salutes is an example of French imperialism, he is a symbol for it. This type of focusing is, for instance, that of the producer of myths, of the journalist who starts with a concept and seeks a form for it.9
2. If I focus on a full signifier, in which I clearly distinguish the meaning and the form, and consequently the distortion which the one imposes on the other, I undo the signification of the myth, and I receive the latter as an imposture: the saluting Negro becomes the alibi of French imperialism. This type of focusing is that of the mythologist: he deciphers the myth, he understands a distortion.
3. Finally, if I focus on the mythical signifier as on an inextricable whole made of meaning and form, I receive an ambiguous signification: I respond to the constituting mechanism of myth, to its own dynamics, I become a reader of myths. The saluting Negro is no longer an example or a symbol, still less an alibi: he is the very presence of French imperialism.
The first two types of focusing are static, analytical; they destroy the myth, either by making its intention obvious, or by unmasking it: the former is cynical, the latter demystifying. The third type of focusing is dynamic, it consumes the myth according to the very ends built into its structure: the reader lives the myth as a story at once true and unreal. If one wishes to connect a mythical schema to a general history, to explain how it corresponds to the interests of a definite society, in short, to pass from semiology to ideology, it is obviously at the level of the third type of focusing that one must place oneself: it is the reader of myths himself who must reveal their essential function. How does he receive this particular myth today? If he receives it in an innocent fashion, what is the point of proposing it to him? And if he reads it using his powers of reflection, like the mythologist, does it matter which alibi is presented?
If the reader does not see French imperialism in the saluting Negro, it was not worth weighing the latter with it; and if he sees it, the myth is nothing more than a political proposition, honestly expressed. In one word, either the intention of the myth is too obscure to be efficacious, or it is too clear to be believed. In either case, where is the ambiguity? This is but a false dilemma. Myth hides nothing and flaunts nothing: it distorts; myth is neither a lie nor a confession: it is an inflection. Placed before the dilemma which I mentioned a moment ago, myth finds a third way out. Threatened with disappearance if it yields to either of the first two types of focusing, it gets out of this tight spot thanks to a compromise--it is this compromise. Entrusted with 'glossing over' an intentional concept, myth encounters nothing but betrayal in language, for language can only obliterate the concept if it hides it, or unmask it if it formulates it. The elaboration of a second-order semiological system will enable myth to escape this dilemma: driven to having either to unveil or to liquidate the concept, it will naturalize it. We reach here the very principle of myth: it transforms history into nature. We now understand why, in the eyes of the myth consumer, the intention, the adhomination of the concept can remain manifest without however appearing to have an interest in the matter: what causes mythical speech to be uttered is perfectly explicit, but it is immediately frozen into something natural; it is not read as a motive, but as a reason. If I read the Negro-saluting as symbol pure and simple of imperiality, I must renounce the reality of the picture, it discredits itself in my eyes when it becomes an instrument. Conversely, if I decipher the Negro’s salute as an alibi of coloniality, I shatter the myth even more surely by the obviousness of its motivation. But for the myth-reader, the outcome is quite different: everything happens as if the picture naturally conjured up the concept, as if the signifier gave a foundation to the signified: the myth exists from the precise moment when French imperialism achieves the natural state: myth is speech justified in excess. Here is a new example which will help understand clearly how the myth-reader is led to rationalize the signified by means of the signifier. We are in the month of July, I read a big headline in France-Soir: THE FALL IN PRICES: FIRST INDICATIONS. VEGETABLES: PRICE DROP BEGINS. Let us quickly sketch the semiological schema: the example being a sentence, the first system is purely linguistic. The signifier of the second system is composed here of a certain number of accidents, some lexical (the words: first, begins, the [fall]), some typographical (enormous headlines where the reader usually sees news of world importance). The signified or concept is what must be called by a barbarous but unavoidable neologism: governmentality, the Government presented by the national press as the Essence of efficacy. The signification of the myth follows clearly from this: fruit and vegetable prices are falling because the government has so decided. Now it so happens in this case (and this is on the whole fairly rare) that the newspaper itself has, two lines below, allowed one to see through the myth which it had just elaborated--whether this is due to self-assurance or honesty. It adds (in small type, it is true): 'The fall in prices is helped by the return of seasonal abundance.' This example is instructive for two reasons. 
Firstly it conspicuously shows that myth essentially aims at causing an immediate impression--it does not matter if one is later allowed to see through the myth, its action is assumed to be stronger than the rational explanations which may later belie it. This means that the reading of a myth is exhausted at one stroke. I cast a quick glance at my neighbor's France-Soir: I cull only a meaning there, but I read a true signification; I receive the presence of governmental action in the fall in fruit and vegetable prices. That is all, and that is enough. A more attentive reading of the myth will in no way increase its power or its ineffectiveness: a myth is at the same time imperfectible and unquestionable; time or knowledge will not make it better or worse. Secondly, the naturalization of the concept, which I have just identified as the essential function of myth, is here exemplary. In a first (exclusively linguistic) system, causality would be, literally, natural: fruit and vegetable prices fall because they are in season. In the second (mythical) system, causality is artificial, false; but it creeps, so to speak, through the back door of Nature. This is why myth is experienced as innocent speech: not because its intentions are hidden--if they were hidden, they could not be efficacious--but because they are naturalized. In fact, what allows the reader to consume myth innocently is that he does not see it as a semiological system but as an inductive one. Where there is only an equivalence, he sees a kind of causal process: the signifier and the signified have, in his eyes, a natural relationship. This confusion can be expressed otherwise: any semiological system is a system of values; now the myth consumer takes the signification for a system of facts: myth is read as a factual system, whereas it is but a semiological system. **Myth as stolen language** What is characteristic of myth? To transform a meaning into form. In other words, myth is always a language-robbery. I rob the Negro who is saluting, the white and brown chalet, the seasonal fall in fruit prices, not to make them into examples or symbols, but to naturalize through them the Empire, my taste for Basque things, the Government. Are all primary languages a prey for myth? Is there no meaning which can resist this capture with which form threatens it? In fact, nothing can be safe from myth, myth can develop its second-order schema from any meaning and, as we saw, start from the very lack of meaning. But all languages do not resist equally well. Articulated language, which is most often robbed by myth, offers little resistance. It contains in itself some mythical dispositions, the outline of a sign-structure meant to manifest the intention which led to its being used: it is what could be called the expressiveness of language. The imperative or the subjunctive mode, for instance, are the form of a particular signified, different from the meaning: the signified is here my will or my request. This is why some linguists have defined the indicative, for instance, as a zero state or degree, compared to the subjunctive or the imperative. Now in a fully constituted myth, the meaning is never at zero degree, and this is why the concept can distort it, naturalize it. We must remember once again that the privation of meaning is in no way a zero degree: this is why myth can perfectly well get hold of it, give it for instance the signification of the absurd, of surrealism, etc. At bottom, it would only be the zero degree which could resist myth. 
Language lends itself to myth in another way: it is very rare that it imposes at the outset a full meaning which it is impossible to distort. This comes from the abstractness of its concept: the concept of tree is vague, it lends itself to multiple contingencies. True, a language always has at its disposal a whole appropriating organization (this tree, the tree which, etc.). But there always remains, around the final meaning, a halo of virtualities where other possible meanings are floating: the meaning can almost always be interpreted. One could say that a language offers to myth an open-work meaning. Myth can easily insinuate itself into it, and swell there: it is a robbery by colonization (for instance: the fall in prices has started. But what fall? That due to the season or that due to the government? the signification becomes here a parasite of the article, in spite of the latter being definite). When the meaning is too full for myth to be able to invade it, myth goes around it, and carries it away bodily. This is what happens to mathematical language. In itself, it cannot be distorted, it has taken all possible precautions against interpretation: no parasitical signification can worm itself into it. And this is why, precisely, myth takes it away en bloc; it takes a certain mathematical formula (E = mc²), and makes of this unalterable meaning the pure signifier of mathematicity. We can see that what is here robbed by myth is something which resists, something pure. Myth can reach everything, corrupt everything, and even the very act of refusing oneself to it. So that the more the language-object resists at first, the greater its final prostitution; whoever here resists completely yields completely: Einstein on one side, Paris-Match on the other. One can give a temporal image of this conflict: mathematical language is a finished language, which derives its very perfection from this acceptance of death. Myth, on the contrary, is a language which does not want to die: it wrests from the meanings which give it its sustenance an insidious, degraded survival, it provokes in them an artificial reprieve in which it settles comfortably, it turns them into speaking corpses. Here is another language which resists myth as much as it can: our poetic language. Contemporary poetry is a regressive semiological system. Whereas myth aims at an ultra-signification, at the amplification of a first system, poetry, on the contrary, attempts to regain an infra-signification, a pre-semiological state of language; in short, it tries to transform the sign back into meaning: its ideal, ultimately, would be to reach not the meaning of words, but the meaning of things themselves. This is why it clouds the language, increases as much as it can the abstractness of the concept and the arbitrariness of the sign and stretches to the limit the link between signifier and signified. The open-work structure of the concept is here maximally exploited: unlike what happens in prose, it is all the potential of the signified that the poetic sign tries to actualize, in the hope of at last reaching something like the transcendent quality of the thing, its natural (not human) meaning. Hence the essentialist ambitions of poetry, the conviction that it alone catches the thing in itself; inasmuch, precisely, as it wants to be an anti-language. 
All told, of all those who use speech, poets are the least formalist, for they are the only ones who believe that the meaning of the words is only a form, with which they, being realists, cannot be content. This is why our modern poetry always asserts itself as a murder of language, a kind of spatial, tangible analogue of silence. Poetry occupies a position which is the reverse of that of myth: myth is a semiological system which has the pretension of transcending itself into a factual system; poetry is a semiological system which has the pretension of contracting into an essential system. But here again, as in the case of mathematical language, the very resistance offered by poetry makes it an ideal prey for myth: the apparent lack of order of signs, which is the poetic facet of an essential order, is captured by myth, and transformed into an empty signifier, which will serve to signify poetry. This explains the improbable character of modern poetry: by fiercely refusing myth, poetry surrenders to it bound hand and foot. Conversely, the rules in classical poetry constituted an accepted myth, the conspicuous arbitrariness of which amounted to perfection of a kind, since the equilibrium of a semiological system comes from the arbitrariness of its signs. A voluntary acceptance of myth can in fact define the whole of our traditional Literature. According to our norms, this Literature is an undoubted mythical system: there is a meaning, that of the discourse; there is a signifier, which is this same discourse as form or writing; there is a signified, which is the concept of literature; there is a signification, which is the literary discourse. I began to discuss this problem in Writing Degree Zero, which was, all told, nothing but a mythology of literary language. There I defined writing as the signifier of the literary myth, that is, as a form which is already filled with meaning and which receives from the concept of Literature a new signification.12 I suggested that history, in modifying the writer's consciousness, had provoked, a hundred years or so ago, a moral crisis of literary language: writing was revealed as signifier, Literature as signification; rejecting the false nature of traditional literary language, the writer violently shifted his position in the direction of an anti-nature of language. The subversion of writing was the radical act by which a number of writers have attempted to reject Literature as a mythical system. Every revolt of this kind has been a murder of Literature as signification: all have postulated the reduction of literary discourse to a simple semiological system, or even, in the case of poetry, to a pre-semiological system. This is an immense task, which required radical types of behavior: it is well known that some went as far as the pure and simple scuttling of the discourse, silence—whether real or transposed—appearing as the only possible weapon against the major power of myth: its recurrence. It thus appears that it is extremely difficult to vanquish myth from the inside: for the very effort one makes in order to escape its strangle hold becomes in its turn the prey of myth: myth can always, as a last resort, signify the resistance which is brought to bear against it. Truth to tell, the best weapon against myth is perhaps to mythify it in its turn, and to produce an artificial myth: and this reconstituted myth will in fact be a mythology. Since myth robs language of something, why not rob myth? 
All that is needed is to use it as the departure point for a third semiological chain, to take its signification as the first term of a second myth. Literature offers some great examples of such artificial mythologies. I shall only evoke here Flaubert's Bouvard and Pecuchet. It is what could be called an experimental myth, a second-order myth. Bouvard and his friend Pecuchet represent a certain kind of bourgeois (which is incidentally in conflict with other bourgeois strata): their discourse already constitutes a mythical type of speech; its language does have a meaning, but this meaning is the empty form of a conceptual signified, which here is a kind of technological unsatedness. The meeting of meaning and concept forms, in this first mythical system, a signification which is the rhetoric of Bouvard and Pecuchet. It is at this point (I am breaking the process into its components for the sake of analysis) that Flaubert intervenes: to this first mythical system, which already is a second semiological system, he superimposes a third chain, in which the first link is the signification, or final term, of the first myth. The rhetoric of Bouvard and Pecuchet becomes the form of the new system; the concept here is due to Flaubert himself, to Flaubert's gaze on the myth which Bouvard and Pecuchet had built for themselves: it consists of their natively ineffectual inclinations, their inability to feel satisfied, the panic succession of their apprenticeships, in short what I would very much like to call (but I see storm clouds on the horizon): Bouvard-and-Pecuchet-ity. As for the final signification, it is the book, it is Bouvard and Pecuchet for us. The power of the second myth is that it gives the first its basis as a naivety which is looked at. Flaubert has undertaken a real archaeological restoration of a given mythical speech: he is the Viollet-le-Duc of a certain bourgeois ideology. But less naive than Viollet-le-Duc, he has strewn his reconstitution with supplementary ornaments which demystify it. These ornaments (which are the form of the second myth) are subjunctive in kind: there is a semiological equivalence between the subjunctive restitution of the discourse of Bouvard and Pecuchet and their ineffectualness.13 Flaubert's great merit (and that of all artificial mythologies: there are remarkable ones in Sartre's work), is that he gave to the problem of realism a frankly semiological solution. True, it is a somewhat incomplete merit, for Flaubert's ideology, since the bourgeois was for him only an aesthetic eyesore, was not at all realistic. But at least he avoided the major sin in literary matters, which is to confuse ideological with semiological reality. As ideology, literary realism does not depend at all on the language spoken by the writer. Language is a form, it cannot possibly be either realistic or unrealistic. All it can do is either to be mythical or not, or perhaps, as in Bouvard and Pecuchet, counter-mythical. Now, unfortunately, there is no antipathy between realism and myth. It is well known how often our 'realistic' literature is mythical (if only as a crude myth of realism) and how our 'literature of the unreal' has at least the merit of being only slightly so. The wise thing would of course be to define the writer's realism as an essentially ideological problem. This certainly does not mean that there is no responsibility of form towards reality. But this responsibility can be measured only in semiological terms.
A form can be judged (since forms are on trial) only as signification, not as expression. The writer's language is not expected to represent reality, but to signify it. This should impose on critics the duty of using two rigorously distinct methods: one must deal with the writer's realism either as an ideological substance (Marxist themes in Brecht's work, for instance) or as a semiological value (the props, the actors, the music, the colors in Brechtian dramaturgy). The ideal of course would be to combine these two types of criticism; the mistake which is constantly made is to confuse them: ideology has its methods, and so has semiology.

**The bourgeoisie as a joint-stock company**

Myth lends itself to history in two ways: by its form, which is only relatively motivated; by its concept, the nature of which is historical. One can therefore imagine a diachronic study of myths, whether one submits them to a retrospection (which means founding an historical mythology) or whether one follows some of yesterday's myths down to their present forms (which means founding prospective history). If I keep here to a synchronic sketch of contemporary myths, it is for an objective reason: our society is the privileged field of mythical significations. We must now say why. Whatever the accidents, the compromises, the concessions and the political adventures, whatever the technical, economic, or even social changes which history brings us, our society is still a bourgeois society. I am not forgetting that since 1789, in France, several types of bourgeoisie have succeeded one another in power; but the same status--a certain regime of ownership, a certain order, a certain ideology--remains at a deeper level. Now a remarkable phenomenon occurs in the matter of naming this regime: as an economic fact, the bourgeoisie is named without any difficulty: capitalism is openly professed. As a political fact, the bourgeoisie has some difficulty in acknowledging itself: there are no 'bourgeois' parties in the Chamber. As an ideological fact, it completely disappears: the bourgeoisie has obliterated its name in passing from reality to representation, from economic man to mental man. It comes to an agreement with the facts, but does not compromise about values, it makes its status undergo a real ex-nominating operation: the bourgeoisie is defined as the social class which does not want to be named. 'Bourgeois', 'petit-bourgeois', 'capitalism', 'proletariat' are the locus of an unceasing hemorrhage: meaning flows out of them until their very name becomes unnecessary. This ex-nominating phenomenon is important; let us examine it a little more closely. Politically, the hemorrhage of the name 'bourgeois' is effected through the idea of nation. This was once a progressive idea, which has served to get rid of the aristocracy; today, the bourgeoisie merges into the nation, even if it has, in order to do so, to exclude from it the elements which it decides are allogenous (the Communists). This planned syncretism allows the bourgeoisie to attract the numerical support of its temporary allies, all the intermediate, therefore 'shapeless' classes. A long-continued use of the word nation has failed to depoliticize it in depth; the political substratum is there, very near the surface, and some circumstances make it suddenly manifest. There are in the Chamber some 'national' parties, and nominal syncretism here makes conspicuous what it had the ambition of hiding: an essential disparity.
Thus the political vocabulary of the bourgeoisie already postulates that the universal exists: for it, politics is already a representation, a fragment of ideology. Politically, in spite of the universalistic effort of its vocabulary, the bourgeoisie eventually strikes against a resisting core which is, by definition, the revolutionary party. But this party can constitute only a political richness: in a bourgeois culture, there is neither proletarian culture nor proletarian morality, there is no proletarian art; ideologically, all that is not bourgeois is obliged to borrow from the bourgeoisie. Bourgeois ideology can therefore spread over everything and in so doing lose its name without risk: no one here will throw this name of bourgeois back at it. It can without resistance subsume bourgeois theater, art and humanity under their eternal analogues; in a word, it can exnominate itself without restraint when there is only one single human nature left: the defection from the name 'bourgeois' is here complete. True, there are revolts against bourgeois ideology. This is what one generally calls the avant-garde. But these revolts are socially limited, they remain open to salvage. First, because they come from a small section of the bourgeoisie itself, from a minority group of artists and intellectuals, without public other than the class which they contest, and who remain dependent on its money in order to express themselves. Then, these revolts always get their inspiration from a very strongly made distinction between the ethically and the politically bourgeois: what the avant-garde contests is the bourgeois in art or morals--the shopkeeper, the Philistine, as in the heyday of Romanticism; but as for political contestation, there is none. What the avant-garde does not tolerate about the bourgeoisie is its language, not its status. This does not necessarily mean that it approves of this status; simply, it leaves it aside. Whatever the violence of the provocation, the nature it finally endorses is that of 'derelict' man, not alienated man; and derelict man is still Eternal Man. This anonymity of the bourgeoisie becomes even more marked when one passes from bourgeois culture proper to its derived, vulgarized and applied forms, to what one could call public philosophy, that which sustains everyday life, civil ceremonials, secular rites, in short the unwritten norms of interrelationships in a bourgeois society. It is an illusion to reduce the dominant culture to its inventive core: there also is a bourgeois culture which consists of consumption alone. The whole of France is steeped in this anonymous ideology: our press, our films, our theater, our pulp literature, our rituals, our Justice, our diplomacy, our conversations, our remarks about the weather, a murder trial, a touching wedding, the cooking we dream of, the garments we wear, everything, in everyday life, is dependent on the representation which the bourgeoisie has and makes us have of the relations between man and the world. These 'normalized' forms attract little attention, by the very fact of their extension, in which their origin is easily lost. They enjoy an intermediate position: being neither directly political nor directly ideological, they live peacefully between the action of the militants and the quarrels of the intellectuals; more or less abandoned by the former and the latter, they gravitate towards the enormous mass of the undifferentiated, of the insignificant, in short, of nature. 
Yet it is through its ethic that the bourgeoisie pervades France: practised on a national scale, bourgeois norms are experienced as the evident laws of a natural order--the further the bourgeois class propagates its representations, the more naturalized they become. The fact of the bourgeoisie becomes absorbed into an amorphous universe, whose sole inhabitant is Eternal Man, who is neither proletarian nor bourgeois. It is therefore by penetrating the intermediate classes that bourgeois ideology can most surely lose its name. Petit-bourgeois norms are the residue of bourgeois culture, they are bourgeois truths which have become degraded, impoverished, commercialized, slightly archaic, or shall we say, out of date? The political alliance of the bourgeoisie and the petite-bourgeoisie has for more than a century determined the history of France; it has rarely been broken, and each time only temporarily (1848, 1871, 1936). This alliance got closer as time passed, it gradually became a symbiosis; transient awakenings might happen, but the common ideology was never questioned again. The same 'natural' varnish covers up all 'national' representations: the big wedding of the bourgeoisie, which originates in a class ritual (the display and consumption of wealth), can bear no relation to the economic status of the lower middle-class: but through the press, the news, and literature, it slowly becomes the very norm as dreamed, though not actually lived, of the petit-bourgeois couple. The bourgeoisie is constantly absorbing into its ideology a whole section of humanity which does not have its basic status and cannot live up to it except in imagination, that is, at the cost of an immobilization and an impoverishment of consciousness. By spreading its representations over a whole catalogue of collective images for petit-bourgeois use, the bourgeoisie countenances the illusory lack of differentiation of the social classes: it is as from the moment when a typist earning twenty pounds a month recognizes herself in the big wedding of the bourgeoisie that bourgeois ex-nomination achieves its full effect. The flight from the name 'bourgeois' is not therefore an illusory, accidental, secondary, natural or insignificant phenomenon: it is bourgeois ideology itself, the process through which the bourgeoisie transforms the reality of the world into an image of the world, History into Nature. And this image has a remarkable feature: it is upside down. The status of the bourgeoisie is particular, historical: man as represented by it is universal, eternal. The bourgeois class has precisely built its power on technical, scientific progress, on an unlimited transformation of nature: bourgeois ideology yields in return an unchangeable nature. The first bourgeois philosophers pervaded the world with significations, subjected all things to an idea of the rational, and decreed that they were meant for man.
**Myth is depoliticized speech**

And this is where we come back to myth. Semiology has taught us that myth has the task of giving an historical intention a natural justification, and making contingency appear eternal. Now this process is exactly that of bourgeois ideology. If our society is objectively the privileged field of mythical significations, it is because formally myth is the most appropriate instrument for the ideological inversion which defines this society: at all the levels of human communication, myth operates the inversion of anti-physis into pseudo-physis. What the world supplies to myth is an historical reality, defined, even if this goes back quite a while, by the way in which men have produced or used it; and what myth gives in return is a natural image of this reality. And just as bourgeois ideology is defined by the abandonment of the name 'bourgeois', myth is constituted by the loss of the historical quality of things: in it, things lose the memory that they once were made. The world enters language as a dialectical relation between activities, between human actions; it comes out of myth as a harmonious display of essences. A conjuring trick has taken place; it has turned reality inside out, it has emptied it of history and has filled it with nature, it has removed from things their human meaning so as to make them signify a human insignificance. The function of myth is to empty reality: it is, literally, a ceaseless flowing out, a hemorrhage, or perhaps an evaporation, in short a perceptible absence. It is now possible to complete the semiological definition of myth in a bourgeois society: myth is depoliticized speech. One must naturally understand political in its deeper meaning, as describing the whole of human relations in their real, social structure, in their power of making the world; one must above all give an active value to the prefix de-: here it represents an operational movement, it permanently embodies a defaulting.
In the case of the soldier-Negro, for instance, what is got rid of is certainly not French imperiality (on the contrary, since what must be actualized is its presence); it is the contingent, historical, in one word: fabricated, quality of colonialism. Myth does not deny things, on the contrary, its function is to talk about them; simply, it purifies them, it makes them innocent, it gives them a natural and eternal justification, it gives them a clarity which is not that of an explanation but that of a statement of fact. If I state the fact of French imperiality without explaining it, I am very near to finding that it is natural and goes without saying: I am reassured. In passing from history to nature, myth acts economically: it abolishes the complexity of human acts, it gives them the simplicity of essences, it does away with all dialectics, with any going back beyond what is immediately visible, it organizes a world which is without contradictions because it is without depth, a world wide open and wallowing in the evident, it establishes a blissful clarity: things appear to mean something by themselves.21 However, is myth always depoliticized speech? In other words, is reality always political? Is it enough to speak about a thing naturally for it to become mythical? One could answer with Marx that the most natural object contains a political trace, however faint and diluted, the more or less memorable presence of the human act which has produced, fitted up, used, subjected or rejected it.22 The language-object, which 'speaks things', can easily exhibit this trace; the metalanguage, which speaks of things, much less easily. Now myth always comes under the heading of metalanguage: the depoliticization which it carries out often supervenes against a background which is already naturalized, depoliticized by a general metalanguage which is trained to celebrate things, and no longer to 'act them'. It goes without saying that the force needed by myth to distort its object is much less in the case of a tree than in the case of a Sudanese: in the latter case, the political load is very near the surface, a large quantity of artificial nature is needed in order to disperse it; in the former case, it is remote, purified by a whole century-old layer of metalanguage. There are, therefore, strong myths and weak myths; in the former, the political quantum is immediate, the depoliticization is abrupt; in the latter, the political quality of the object has faded like a color, but the slightest thing can bring back its strength brutally: what is more natural than the sea? and what more 'political' than the sea celebrated by the makers of the film The Lost Continent?23 In fact, metalanguage constitutes a kind of preserve for myth. Men do not have with myth a relationship based on truth but on use: they depoliticize according to their needs. Some mythical objects are left dormant for a time; they are then no more than vague mythical schemata whose political load seems almost neutral. But this indicates only that their situation has brought this about, not that their structure is different. This is the case with our Latin-grammar example. We must note that here mythical speech works on a material which has long been transformed: the sentence by Aesop belongs to literature, it is at the very start mythified (therefore made innocent) by its being fiction.
But it is enough to replace the initial term of the chain for an instant into its nature as language-object, to gauge the emptying of reality operated by myth: can one imagine the feelings of a real society of animals on finding itself transformed into a grammar example, into a predicative nature? In order to gauge the political load of an object and the mythical hollow which espouses it, one must never look at things from the point of view of the signification, but from that of the signifier, of the thing which has been robbed; and within the signifier, from the point of view of the language-object, that is, of the meaning. There is no doubt that if we consulted a real lion, he would maintain that the grammar example is a strongly depoliticized state, he would qualify as fully political the jurisprudence which leads him to claim a prey because he is the strongest, unless we deal with a bourgeois lion who would not fail to mythify his strength by giving it the form of a duty. One can clearly see that in this case the political insignificance of the myth comes from its situation. Myth, as we know, is a value: it is enough to modify its circumstances, the general (and precarious) system in which it occurs, in order to regulate its scope with great accuracy. The field of the myth is in this case reduced to the second form of a French lycee. But I suppose that a child enthralled by the story of the lion, the heifer and the cow, and recovering through the life of the imagination the actual reality of these animals, would appreciate with much less unconcern than we do the disappearance of this lion changed into a predicate. In fact, we hold this myth to be politically insignificant only because it is not meant for us.

**Myth on the Left**

If myth is depoliticized speech, there is at least one type of speech which is the opposite of myth: that which remains political. Here we must go back to the distinction between language-object and metalanguage. If I am a woodcutter and I am led to name the tree which I am felling, whatever the form of my sentence, I 'speak the tree', I do not speak about it. This means that my language is operational, transitively linked to its object; between the tree and myself, there is nothing but my labor, that is to say, an action. This is a political language: it represents nature for me only inasmuch as I am going to transform it, it is a language thanks to which I 'act the object'; the tree is not an image for me, it is simply the meaning of my action. But if I am not a woodcutter, I can no longer 'speak the tree', I can only speak about it, on it. My language is no longer the instrument of an 'acted-upon tree', it is the 'tree-celebrated' which becomes the instrument of my language. I no longer have anything more than an intransitive relationship with the tree; this tree is no longer the meaning of reality as a human action, it is an image-at-one's-disposal. Compared to the real language of the woodcutter, the language I create is a second-order language, a metalanguage in which I shall henceforth not 'act the things' but 'act their names', and which is to the primary language what the gesture is to the act. This second-order language is not entirely mythical, but it is the very locus where myth settles; for myth can work only on objects which have already received the mediation of a first language.
There is therefore one language which is not mythical, it is the language of man as a producer: wherever man speaks in order to transform reality and no longer to preserve it as an image, wherever he links his language to the making of things, metalanguage is referred to a language-object, and myth is impossible. This is why revolutionary language proper cannot be mythical. Revolution is defined as a cathartic act meant to reveal the political load of the world: it makes the world; and its language, all of it, is functionally absorbed in this making. It is because it generates speech which is fully, that is to say initially and finally, political, and not, like myth, speech which is initially political and finally natural, that Revolution excludes myth. Just as bourgeois ex-nomination characterizes at once bourgeois ideology and myth itself, revolutionary denomination identifies revolution and the absence of myth. The bourgeoisie hides the fact that it is the bourgeoisie and thereby produces myth; revolution announces itself openly as revolution and thereby abolishes myth. I have been asked whether there are myths 'on the Left'. Of course, inasmuch, precisely, as the Left is not revolution. Left-wing myth supervenes precisely at the moment when revolution changes itself into 'the Left', that is, when it accepts to wear a mask, to hide its name, to generate an innocent metalanguage and to distort itself into 'Nature'. This revolutionary ex-nomination may or may not be tactical, this is no place to discuss it. At any rate, it is sooner or later experienced as a process contrary to revolution, and it is always more or less in relation to myth that revolutionary history defines its 'deviations'. There came a day, for instance, when it was socialism itself which defined the Stalin myth. Stalin, as a spoken object, has exhibited for years, in their pure state, the constituent characters of mythical speech: a meaning, which was the real Stalin, that of history; a signifier, which was the ritual invocation to Stalin, and the inevitable character of the 'natural' epithets with which his name was surrounded; a signified, which was the intention to respect orthodoxy, discipline and unity, appropriated by the Communist parties to a definite situation; and a signification, which was a sanctified Stalin, whose historical determinants found themselves grounded in nature, sublimated under the name of Genius, that is, something irrational and inexpressible: here, depoliticization is evident, it fully reveals the presence of a myth. Yes, myth exists on the Left, but it does not at all have there the same qualities as bourgeois myth. Left-wing myth is inessential. To start with, the objects which it takes hold of are rare--only a few political notions--unless it has itself recourse to the whole repertoire of the bourgeois myths. Left-wing myth never reaches the immense field of human relationships, the very vast surface of 'insignificant' ideology. Everyday life is inaccessible to it: in a bourgeois society, there are no 'Left-wing' myths concerning marriage, cooking, the home, the theater, the law, morality, etc. Then, it is an incidental myth, its use is not part of a strategy, as is the case with bourgeois myth, but only of a tactics, or, at the worst, of a deviation; if it occurs, it is as a myth suited to a convenience, not to a necessity. Finally, and above all, this myth is, in essence, poverty-stricken.
It does not know how to proliferate; being produced on order and for a temporally limited prospect, it is invented with difficulty. It lacks a major faculty, that of fabulizing. Whatever it does, there remains about it something stiff and literal, a suggestion of something done to order. As it is expressively put, it remains barren. In fact, what can be more meager than the Stalin myth? No inventiveness here, and only a clumsy appropriation: the signifier of the myth (this form whose infinite wealth in bourgeois myth we have just seen) is not varied in the least: it is reduced to a litany. This imperfection, if that is the word for it, comes from the nature of the 'Left': whatever the imprecision of the term, the Left always defines itself in relation to the oppressed, whether proletarian or colonized. Now the speech of the oppressed can only be poor, monotonous, immediate: his destitution is the very yardstick of his language; he has only one, always the same, that of his actions; metalanguage is a luxury, he cannot yet have access to it. The speech of the oppressed is real, like that of the woodcutter; it is a transitive type of speech: it is quasi-unable to lie; lying is a richness, a lie presupposes property, truths and forms to spare. This essential barrenness produces rare, threadbare myths: either transient, or clumsily indiscreet; by their very being, they label themselves as myths, and point to their masks. And this mask is hardly that of a pseudo-physis: for that type of physis is also a richness of a sort, the oppressed can only borrow it: he is unable to throw out the real meaning of things, to give them the luxury of an empty form, open to the innocence of a false Nature. One can say that in a sense, Left-wing myth is always an artificial myth, a reconstituted myth: hence its clumsiness. Does this completeness of the myths of Order (this is the name the bourgeoisie gives to itself) include inner differences? Are there, for instance, bourgeois myths and petit-bourgeois myths? There cannot be any fundamental differences, for whatever the public which consumes it, myth always postulates the immobility of Nature. But there can be degrees of fulfillment or expansion: some myths ripen better in some social strata: for myth also, there are micro-climates. The myth of Childhood-as-Poet, for instance, is an advanced bourgeois myth: it has hardly come out of inventive culture (Cocteau, for example) and is just reaching consumer culture (L'Express). Part of the bourgeoisie can still find it too obviously invented, not mythical enough to feel entitled to countenance it (a whole part of bourgeois criticism works only with duly mythical materials). It is a myth which is not yet well run in, does not yet contain enough nature: in order to make the Child Poet part of a cosmogony, one must renounce the prodigy (Mozart, Rimbaud, etc.), and accept new norms, those of psychopedagogy, Freudianism, etc.: as a myth, it is still unripe. Thus every myth can have its history and its geography; each is in fact the sign of the other: a myth ripens because it spreads. I have not been able to carry out any real study of the social geography of myths. But it is perfectly possible to draw what linguists would call the isoglosses of a myth, the lines which limit the social region where it is spoken. As this region is shifting, it would be better to speak of the waves of implantation of the myth.
The Minou Drouet myth has thus had at least three waves of amplification: (1) L'Express; (2) Paris-Match, Elle; (3) France-Soir. Some myths hesitate: will they pass into tabloids, the home of the suburbanite of private means, the hairdresser's salon, the tube? The social geography of myths will remain difficult to trace as long as we lack an analytical sociology of the press. But we can say that its place already exists. Since we cannot yet draw up the list of the dialectal forms of bourgeois myth, we can always sketch its rhetorical forms. One must understand here by rhetoric a set of fixed, regulated, insistent figures, according to which the varied forms of the mythical signifier arrange themselves. These figures are transparent inasmuch as they do not affect the plasticity of the signifier; but they are already sufficiently conceptualized to adapt to an historical representation of the world (just as classical rhetoric can account for a representation of the Aristotelian type). It is through their rhetoric that bourgeois myths outline the general prospect of this pseudo-physis which defines the dream of the contemporary bourgeois world. Here are its principal figures: 1. **The inoculation.** I have already given examples of this very general figure, which consists in admitting the accidental evil of a class-bound institution the better to conceal its principal evil. One immunizes the contents of the collective imagination by means of a small inoculation of acknowledged evil; one thus protects it against the risk of a generalized subversion. This liberal treatment would not have been possible only a hundred years ago. Then, the bourgeois Good did not compromise with anything, it was quite stiff. It has become much more supple since: the bourgeoisie no longer hesitates to acknowledge some localized subversions: the avant-garde, the irrational in childhood, etc. It now lives in a balanced economy: as in any sound joint-stock company, the smaller shares--in law but not in fact--compensate the big ones. 2. **The privation of History.** Myth deprives the object of which it speaks of all History. In it, history evaporates. It is a kind of ideal servant: it prepares all things, brings them, lays them out, the master arrives, it silently disappears: all that is left for one to do is to enjoy this beautiful object without wondering where it comes from. Or even better: it can only come from eternity: since the beginning of time, it has been made for bourgeois man, the Spain of the Blue Guide has been made for the tourist, and 'primitives' have prepared their dances with a view to an exotic festivity. We can see all the disturbing things which this felicitous figure removes from sight: both determinism and freedom. Nothing is produced, nothing is chosen: all one has to do is to possess these new objects from which all soiling trace of origin or choice has been removed. This miraculous evaporation of history is another form of a concept common to most bourgeois myths: the irresponsibility of man. 3. **Identification.** The petit-bourgeois is a man unable to imagine the Other. If he comes face to face with him, he blinds himself, ignores and denies him, or else transforms him into himself. In the petit-bourgeois universe, all the experiences of confrontation are reverberating, any otherness is reduced to sameness. The spectacle or the tribunal, which are both places where the Other threatens to appear in full view, become mirrors. This is because the Other is a scandal which threatens his essence. 
Dominici cannot have access to social existence unless he is previously reduced to the state of a small simulacrum of the President of the Assizes or the Public Prosecutor: this is the price one must pay in order to condemn him justly, since Justice is a weighing operation and since scales can only weigh like against like. There are, in any petit-bourgeois consciousness, small simulacra of the hooligan, the parricide, the homosexual, etc., which periodically the judiciary extracts from its brain, puts in the dock, admonishes and condemns: one never tries anybody but analogues who have gone astray: it is a question of direction, not of nature, for that's how men are. Sometimes--rarely--the Other is revealed as irreducible: not because of a sudden scruple, but because common sense rebels: a man does not have a white skin, but a black one, another drinks pear juice, not Pernod. How can one assimilate the Negro, the Russian? There is here a figure for emergencies: exoticism. The Other becomes a pure object, a spectacle, a clown. Relegated to the confines of humanity, he no longer threatens the security of the home. This figure is chiefly petit-bourgeois. For, even if he is unable to experience the Other in himself, the bourgeois can at least imagine the place where he fits in: this is what is known as liberalism, which is a sort of intellectual equilibrium based on recognized places. The petit-bourgeois class is not liberal (it produces Fascism, whereas the bourgeoisie uses it): it follows the same route as the bourgeoisie, but lags behind. 4. **Tautology.** Yes, I know, it's an ugly word. But so is the thing. Tautology is this verbal device which consists in defining like by like ('Drama is drama'). We can view it as one of those types of magical behavior dealt with by Sartre in his Outline of a Theory of the Emotions: one takes refuge in tautology as one does in fear, or anger, or sadness, when one is at a loss for an explanation: the accidental failure of language is magically identified with what one decides is a natural resistance of the object. In tautology, there is a double murder: one kills rationality because it resists one; one kills language because it betrays one. Tautology is a faint at the right moment, a saving aphasia, it is a death, or perhaps a comedy, the indignant 'representation' of the rights of reality over and above language. Since it is magical, it can of course only take refuge behind the argument of authority: thus do parents at the end of their tether reply to the child who keeps on asking for explanations: 'because that's how it is', or even better: 'just because, that's all'--a magical act ashamed of itself, which verbally makes the gesture of rationality, but immediately abandons the latter, and believes itself to be even with causality because it has uttered the word which introduces it. Tautology testifies to a profound distrust of language, which is rejected because it has failed. Now any refusal of language is a death. Tautology creates a dead, a motionless world. 5. **Neither-Norism.** By this I mean this mythological figure which consists in stating two opposites and balancing the one by the other so as to reject them both. (I want neither this nor that.) It is on the whole a bourgeois figure, for it relates to a modern form of liberalism. We find again here the figure of the scales: reality is first reduced to analogues; then it is weighed; finally, equality having been ascertained, it is got rid of.
Here also there is magical behavior: both parties are dismissed because it is embarrassing to choose between them; one flees from an intolerable reality, reducing it to two opposites which balance each other only inasmuch as they are purely formal, relieved of all their specific weight. Neither-Norism can have degraded forms: in astrology, for example, ill luck is always followed by equal good-luck; they are always predicted in a prudently compensatory perspective: a final equilibrium immobilizes values, life, destiny, etc.: one no longer needs to choose, but only to endorse. 6. **The quantification of quality.** This is a figure which is latent in all the preceding ones. By reducing any quality to quantity, myth economizes intelligence: it understands reality more cheaply. I have given several examples of this mechanism which bourgeois--and especially petit-bourgeois--mythology does not hesitate to apply to aesthetic realities which it deems on the other hand to partake of an immaterial essence. Bourgeois theater is a good example of this contradiction: on the one hand, theater is presented as an essence which cannot be reduced to any language and reveals itself only to the heart, to intuition. From this quality, it receives an irritable dignity (it is forbidden as a crime of 'lese-essence' to speak about the theater scientifically: or rather, any intellectual way of viewing the theater is discredited as scientism or pedantic language). On the other hand, bourgeois dramatic art rests on a pure quantification of effects: a whole circuit of computable appearances establishes a quantitative equality between the cost of a ticket and the tears of an actor or the luxuriousness of a set: what is currently meant by the 'naturalness' of an actor, for instance, is above all a conspicuous quantity of effects. 7. **The statement of fact.** Myths tend towards proverbs. Bourgeois ideology invests in this figure interests which are bound to its very essence: universalism, the refusal of any explanation, an unalterable hierarchy of the world. But we must again distinguish the language-object from the metalanguage. Popular, ancestral proverbs still partake of an instrumental grasp of the world as object. A rural statement of fact, such as 'the weather is fine' keeps a real link with the usefulness of fine weather. It is an implicitly technological statement; the word, here, in spite of its general, abstract form, paves the way for actions, it inserts itself into a fabricating order: the farmer does not speak about the weather, he 'acts it', he draws it into his labor. All our popular proverbs thus represent active speech which has gradually solidified into reflexive speech, but where reflection is curtailed, reduced to a statement of fact, and so to speak timid, prudent, and closely hugging experience. Popular proverbs foresee more than they assert, they remain the speech of a humanity which is making itself, not one which is. Bourgeois aphorisms, on the other hand, belong to metalanguage; they are a second-order language which bears on objects already prepared. Their classical form is the maxim. Here the statement is no longer directed towards a world to be made; it must overlay one which is already made, bury the traces of this production under a self-evident appearance of eternity: it is a counter-explanation, the decorous equivalent of a tautology, of this peremptory because which parents in need of knowledge hang above the heads of their children.
The foundation of the bourgeois statement of fact is common sense, that is, truth when it stops on the arbitrary order of him who speaks it. I have listed these rhetorical figures without any special order, and there may well be many others: some can become worn out, others can come into being. But it is obvious that those given here, such as they are, fall into two great categories, which are like the Zodiacal Signs of the bourgeois universe: the Essences and the Scales. Bourgeois ideology continuously transforms the products of history into essential types. Just as the cuttlefish squirts its ink in order to protect itself, it cannot rest until it has obscured the ceaseless making of the world, fixated this world into an object which can be for ever possessed, catalogued its riches, embalmed it, and injected into reality some purifying essence which will stop its transformation, its flight towards other forms of existence. And these riches, thus fixated and frozen, will at last become computable: bourgeois morality will essentially be a weighing operation, the essences will be placed in scales of which bourgeois man will remain the motionless beam. For the very end of myths is to immobilize the world: they must suggest and mimic a universal order which has fixated once and for all the hierarchy of possessions. Thus, every day and everywhere, man is stopped by myths, referred by them to this motionless prototype which lives in his place, stifles him in the manner of a huge internal parasite and assigns to his activity the narrow limits within which he is allowed to suffer without upsetting the world: bourgeois pseudo-physis is in the fullest sense a prohibition for man against inventing himself. Myths are nothing but this ceaseless, unending solicitation, this insidious and inflexible demand that all men recognize themselves in this image, eternal yet bearing a date, which was built of them one day as if for all time. For the Nature, in which they are locked up under the pretext of being eternalized, is nothing but an Usage. And it is this Usage, however lofty, that they must take in hand and transform.

**Necessity and limits of mythology**

I must, as a conclusion, say a few words about the mythologist himself. This term is rather grand and self-assured. Yet one can predict for the mythologist, if there ever is one, a few difficulties, in feeling if not in method. True, he will have no trouble in feeling justified: whatever its mistakes, mythology is certain to participate in the making of the world. Holding as a principle that man in a bourgeois society is at every turn plunged into a false Nature, it attempts to find again under the assumed innocence of the most unsophisticated relationships, the profound alienation which this innocence is meant to make one accept. The unveiling which it carries out is therefore a political act: founded on a responsible idea of language, mythology thereby postulates the freedom of the latter. It is certain that in this sense mythology harmonizes with the world, not as it is, but as it wants to create itself (Brecht had for this an efficiently ambiguous word: Einverstandnis, at once an understanding of reality and a complicity with it). This harmony justifies the mythologist but does not fulfill him: his status still remains basically one of being excluded. Justified by the political dimension, the mythologist is still at a distance from it. His speech is a metalanguage, it 'acts' nothing; at the most, it unveils—or does it? To whom?
His task always remains ambiguous, hampered by its ethical origin. He can live revolutionary action only vicariously: hence the self-conscious character of his function, this something a little stiff and painstaking, muddled and excessively simplified which brands any intellectual behavior with an openly political foundation ('uncommitted' types of literature are infinitely more 'elegant'; they are in their place in metalanguage). Also, the mythologist cuts himself off from all the myth consumers, and this is no small matter. If this applied to a particular section of the collectivity, well and good. But when a myth reaches the entire community, it is from the latter that the mythologist must become estranged if he wants to liberate the myth. Any myth with some degree of generality is in fact ambiguous, because it represents the very humanity of those who, having nothing, have borrowed it. To decipher the Tour de France or the 'good French Wine' is to cut oneself off from those who are entertained or warmed up by them. The mythologist is condemned to live in a theoretical sociality; for him, to be in society is, at best, to be truthful: his utmost sociality dwells in his utmost morality. His connection with the world is of the order of sarcasm. One must even go further: in a sense, the mythologist is excluded from this history in the name of which he professes to act. The havoc which he wreaks in the language of the community is absolute for him, it fills his assignment to the brim: he must live this assignment without any hope of going back or any assumption of payment. It is forbidden for him to imagine what the world will concretely be like, when the immediate object of his criticism has disappeared. Utopia is an impossible luxury for him: he greatly doubts that tomorrow's truths will be the exact reverse of today's lies. History never ensures the triumph pure and simple of something over its opposite: it unveils, while making itself, unimaginable solutions, unforeseeable syntheses. The mythologist is not even in a Moses-like situation: he cannot see the Promised Land. For him, tomorrow's positivity is entirely hidden by today's negativity. All the values of his undertaking appear to him as acts of destruction: the latter accurately cover the former, nothing protrudes. This subjective grasp of history in which the potent seed of the future is nothing but the most profound apocalypse of the present has been expressed by Saint-Just in a strange saying: 'What constitutes the Republic is the total destruction of what is opposed to it.' This must not, I think, be understood in the trivial sense of: 'One has to clear the way before reconstructing.' The copula has an exhaustive meaning: there is for some men a subjective dark night of history where the future becomes an essence, the essential destruction of the past. One last exclusion threatens the mythologist: he constantly runs the risk of causing the reality which he purports to protect, to disappear. Quite apart from all speech, the D.S.19 is a technologically defined object: it is capable of a certain speed, it meets the wind in a certain way, etc. And this type of reality cannot be spoken of by the mythologist. The mechanic, the engineer, even the user, 'speak the object'; but the mythologist is condemned to metalanguage. This exclusion already has a name: it is what is called ideologism.
Zhdanovism has roundly condemned it (without proving, incidentally, that it was, for the time being, avoidable) in the early Lukacs, in Marr's linguistics, in works like those of Benichou or Goldmann, opposing to it the reticence of a reality inaccessible to ideology, such as that of language according to Stalin. It is true that ideologism resolves the contradiction of alienated reality by an amputation, not a synthesis (but as for Zhdanovism, it does not even resolve it): wine is objectively good, and at the same time, the goodness of wine is a myth: here is the aporia. The mythologist gets out of this as best he can: he deals with the goodness of wine, not with the wine itself, just as the historian deals with Pascal's ideology, not with the Pensees in themselves. It seems that this is a difficulty pertaining to our times: there is as yet only one possible choice, and this choice can bear only on two equally extreme methods: either to posit a reality which is entirely permeable to history, and ideologize; or, conversely, to posit a reality which is ultimately impenetrable, irreducible, and, in this case, poetize. In a word, I do not yet see a synthesis between ideology and poetry (by poetry I understand, in a very general way, the search for the inalienable meaning of things). The fact that we cannot manage to achieve more than an unstable grasp of reality doubtless gives the measure of our present alienation: we constantly drift between the object and its demystification, powerless to render its wholeness. For if we penetrate the object, we liberate it but we destroy it; and if we acknowledge its full weight, we respect it, but we restore it to a state which is still mystified. It would seem that we are condemned for some time yet always to speak excessively about reality. This is probably because ideologism and its opposite are types of behavior which are still magical, terrorized, blinded and fascinated by the split in the social world. And yet, this is what we must seek: a reconciliation between reality and men, between description and explanation, between object and knowledge.

**Notes**

1 Innumerable other meanings of the word 'myth' can be cited against this. But I have tried to define things, not words.
2 The development of publicity, of a national press, of radio, of illustrated news, not to speak of the survival of a myriad rites of communication which rule social appearances makes the development of a semiological science more urgent than ever. In a single day, how many really non-signifying fields do we cross? Very few, sometimes none. Here I am, before the sea; it is true that it bears no message. But on the beach, what material for semiology! Flags, slogans, signals, sign-boards, clothes, suntan even, which are so many messages to me.
3 The notion of word is one of the most controversial in linguistics. I keep it here for the sake of simplicity.
4 Tel Quel, II, p. 191.
5 Or perhaps Sinity? Just as if Latin/latinity = Basque/x, x = Basquity.
6 I say 'in Spain' because, in France, petit-bourgeois advancement has caused a whole 'mythical' architecture of the Basque chalet to flourish.
7 From the point of view of ethics, what is disturbing in myth is precisely that its form is motivated. For if there is a 'health' of language, it is the arbitrariness of the sign which is its grounding. What is sickening in myth is its resort to a false nature, its superabundance of significant forms, as in these objects which decorate their usefulness with a natural appearance.
The will to weigh the signification with the full guarantee of nature causes a kind of nausea; myth is too rich, and what is in excess is precisely its motivation. This nausea is like the one I feel before the arts which refuse to choose between physis and anti-physis, using the first as an ideal and the second as an economy. Ethically, there is a kind of baseness in hedging one's bets.
8 The freedom in choosing what one focuses on is a problem which does not belong to the province of semiology: it depends on the concrete situation of the subject.
9 We receive the naming of the lion as a pure example of Latin grammar because we are, as grown-ups, in a creative position in relation to it. I shall come back later to the value of the context in this mythical schema.
10 Classical poetry, on the contrary, would be, according to such norms, a strongly mythical system, since it imposes on the meaning one extra signified, which is regularity. The alexandrine, for instance, has value both as meaning of a discourse and as signifier of a new whole, which is its poetic signification. Success, when it occurs, comes from the degree of apparent fusion of the two systems. It can be seen that we deal in no way with a harmony between content and form, but with an elegant absorption of one form into another. By elegance I mean the most economical use of the means employed. It is because of an age-old abuse that critics confuse meaning and content. The language is never anything but a system of forms, and the meaning is a form.
11 We are again dealing here with the meaning, in Sartre's use of the terms, as a natural quality of things, situated outside a semiological system (Saint-Genet, p.283).
12 Style, at least as I defined it then, is not a form, it does not belong to the province of a semiological analysis of Literature. In fact, style is a substance constantly threatened with formalization. To start with, it can perfectly well become degraded into a mode of writing: there is a 'Malraux-type' writing, and even in Malraux himself. Then, style can also become a particular language, that used by the writer for himself and for himself alone. Style then becomes a sort of solipsistic myth, the language which the writer speaks to himself. It is easy to understand that at such a degree of solidification, style calls for a deciphering. The works of J.P. Richard are an example of this necessary critique of styles.
13 A subjunctive form because it is in the subjunctive mode that Latin expressed 'indirect style or discourse', which is an admirable instrument for demystification.
14 'The fate of capitalism is to make the worker wealthy,' Paris-Match tells us.
15 The word 'capitalism' is taboo, not economically, but ideologically; it cannot possibly enter the vocabulary of bourgeois representations. Only in Farouk's Egypt could a prisoner be condemned by a tribunal for 'anti-capitalist plotting' in so many words.
16 The bourgeoisie never uses the word 'Proletariat', which is supposed to be a Left-wing myth, except when it is in its interest to imagine the Proletariat being led astray by the Communist Party.
17 It is remarkable that the adversaries of the bourgeoisie on matters of ethics or aesthetics remain for the most part indifferent, or even attached, to its political determinations. Conversely, its political adversaries neglect to issue a basic condemnation of its representations: they often go so far as to share them. This diversity of attacks benefits the bourgeoisie, it allows it to camouflage its name. 
For the bourgeoisie should be understood only as synthesis of its determinations and its representations.
18 There can be figures of derelict man which lack all order (Ionesco for example). This does not affect in any way the security of the Essences.
19 To induce a collective content for the imagination is always an inhuman undertaking, not only because dreaming essentializes life into destiny, but also because dreams are impoverished, and the alibi of an absence.
20 'If men and their conditions appear throughout ideology inverted as in a camera obscura, this phenomenon follows from their historical vital process...' (Marx, The German Ideology).
21 To the pleasure-principle of Freudian man could be added the clarity-principle of mythological humanity. All the ambiguity of myth is there: its clarity is euphoric.
22 cf. Marx and the example of the cherry-tree, The German Ideology.
23 cf. p.94.
24 It is remarkable that Khrushchevism presented itself not as a political change, but essentially and only as a linguistic conversion. An incomplete conversion, incidentally, for Khrushchev devalued Stalin, but did not explain him--did not re-politicize him.
25 Today it is the colonized peoples who assume to the full the ethical and political condition described by Marx as being that of the proletariat.
26 The circulation of newspapers is an insufficient datum. Other information comes only by accident. Paris-Match has given--significantly, as publicity--the composition of its public in terms of standard of living (Le Figaro, July 12th, 1955): out of each 100 readers living in town, 53 have a car, 49 a bathroom, etc., whereas the average standard of living in France is reckoned as follows: car, 22 per cent; bathroom, 13 per cent. That the purchasing power of the Paris-Match reader is high could have been predicted from the mythology of this publication.
27 Marx: '...we must pay attention to this history, since ideology boils down to either an erroneous conception of this history, or to a complete abstraction from it' (The German Ideology).
28 Marx: '...what makes them representative of the petit-bourgeois class, is that their minds, their consciousnesses do not extend beyond the limits which this class has set to its activities' (The Eighteenth Brumaire). And Gorki: 'the petit-bourgeois is the man who has preferred himself to all else.'
29 It is not only from the public that one becomes estranged; it is sometimes also from the very object of the myth. In order to demystify Poetic Childhood, for instance, I have had, so to speak, to lack confidence in Minou Drouet the child. I have had to ignore, in her, under the enormous myth with which she is cumbered, something like a tender, open, possibility. It is never a good thing to speak against a little girl.
30 Even here, in these mythologies, I have used trickery: finding it painful constantly to work on the evaporation of reality, I have started to make it excessively dense, and to discover in it a surprising compactness which I savoured with delight, and I have given a few examples of 'substantial psycho-analysis' about some mythical objects.
Judgments of linguistic acceptability constitute an important source of evidence for theoretical and applied linguistics, but are typically elicited and represented in ways which limit their utility. This paper describes how magnitude estimation, a technique used in psychophysics, can be adapted for eliciting acceptability judgments. Magnitude estimation of linguistic acceptability is shown to solve the measurement scale problems which plague conventional techniques; to provide data which make fine distinctions robustly enough to yield statistically significant results of linguistic interest; to be usable in a consistent way by linguistically naive speaker-hearers, and to allow replication across groups of subjects. Methodological pitfalls are discussed and suggestions are offered for new approaches to the analysis and measurement of linguistic acceptability.* 1. Grammaticality intuitions as evidence. 1.1. Introduction. For many linguists, intuitions about the grammaticality of sentences comprise the primary source of evidence for and against their hypotheses. Typically provided by the linguist or by close associates, the intuitions are reported in a variety of terms—acceptable, marginally acceptable, unacceptable, good, terrible, etc.—and coded with such symbols as ?, *, **. Although this system has supported a research program of considerable accomplishment over several decades, it presents difficulties that are widely, if informally, recognized, and seldom confronted (for exceptions, see Newmeyer 1983; Sorace 1988, 1990). The purpose of this paper is to characterize some of the major difficulties inherent in current methods of judging grammaticality and to propose a better way to elicit intuitions. To do this we will treat judgments about the grammatical status of sentences as psychological evidence, that is, like judgments about other phenomena on which human perception and cognition work. We will discuss the difficulties of eliciting reliable, consistent judgments revealing subjects’ true powers of discrimination. Over the last century, psychophysics has dealt with problems surprisingly similar to those presented by linguistic judgments, and we will apply some of the lessons of psychophysics to the present problem. We will show how the customary measurement of judged grammaticality loses information and makes it difficult to test hypotheses of current linguistic interest. We attribute these problems to the use of the wrong kind of measurement scale, and we describe what the right kind of scale would be. In the remainder of the paper we present a technique called magnitude estimation, developed by psychophysicists to make maximal use of subjects’ ability to make fine judgments about physical stimuli, and we describe how it can be adapted to the elicitation of judgments about the grammaticality of sentences.
* This work was supported by ESRC Project Grant R000233965 to the authors, whose names are listed in alphabetical order. The authors are grateful to J. Levy and C. Theobald for their advice, to E. Engdahl, S. Garrod, and two anonymous reviewers for comments, and to the subjects for their participation. A preliminary version of this paper was presented at the Spring 1993 meeting of the Linguistics Association of Great Britain. 
We show that it is easy to operate informally, that it can support statistically more robust distinctions than more familiar techniques when applied to a question of linguistic interest, and that it elicits judgments that are consistent within and between subjects. Finally we discuss how magnitude estimation may be applied to make better use of our capacity to make judgments about sentences. 1.2. SOME DIFFICULTIES. By performing the small and imperfect experiment of judging those sentences that are critical to linguistic theories, linguists intend to assess grammaticality, that is, compatibility with the grammar of a particular language, or well-formedness under the assumptions about linguistic competence used to build the grammar. By asking speakers of the language to make judgments about sample strings, linguists test the hypothesis that speakers’ views and linguists’ proposals for the grammar match. Yet eliciting those views does not give direct access to speakers’ linguistic competence. What is observed instead is a particular kind of linguistic behavior, an overt response to the subjects’ opinion about characteristics of the sentence. Thus we can make a three-way distinction among GRAMMATICALITY, a characteristic of the linguistic stimulus itself, ACCEPTABILITY, a characteristic of the stimulus as perceived by a speaker, and the ACCEPTABILITY JUDGMENT which is the speaker’s response to the linguist’s inquiries. The fact that the subject offering the opinion and the linguist generating the proposals are often the same person does not change the fact that the impression on offer is an acceptability judgment, behavioral evidence around which the theory develops. The distinction between acceptability and grammaticality unveils a further distinction between RELATIVE GRAMMATICALITY, which is an inherent feature of the grammar, and RELATIVE ACCEPTABILITY, which is perceived by the subject. Insofar as judgments about acceptability represent effects of the grammar, the overt manifestation of both relative grammaticality and relative acceptability is gradience in acceptability judgments. While the existence of relative acceptability is easily accepted (cf. for instance Newmeyer 1983, Rizzi 1990), inherent gradience within the grammar has a more controversial status, since it appears to be difficult to accommodate within formal linguistic theories (McCarthy & Prince 1993; Sorace 1995). Nonetheless, the possibility remains that acceptability is graded because grammaticality is. Of course, acceptability judgments, like other manifestations of linguistic performance, need not be one-to-one reflections of grammaticality. First, it is always possible that the subject is not reporting directly on grammaticality but is responding to any number of other features of the stimulus (Botha 1973, Quirk & Greenbaum 1970). Impressions of acceptability may be based, for example, on estimated frequency of usage, on conformity to a prescriptive norm or a prestigious register, or on degree of semantic or pragmatic plausibility.
1 While syntactic theorizing in different frameworks is assigning growing importance to the notion of comparison, the consensus is that there is only one optimal output that best satisfies the system of interacting constraints and therefore receives a grammatical interpretation (see Chomsky 1991). 
Second, where the linguist acts at the same time as the theoretician and the source of the data (Labov 1970), results may be subject to bias, however unconscious, towards an outcome concordant with the judge’s vested interests. Even judges with no direct knowledge of the field can be biased in another way, by the context in which the judgment is made, and in particular by repeated exposure to sentences of particular kinds (Levelt 1972). Finally, details of extralinguistic context may have consistent effects on judgments, which may tell us as much about the process of introspection as about linguistic abilities (Carroll et al. 1981, Nagata 1987a, 1987b, 1988, 1989). Although these difficulties may obscure the primary data linguists need, we seldom react, as cognitive psychologists would, by attempting to develop methods of minimizing the artifacts. Instead, a ‘small is beautiful’ principle seems to operate: the empirical damage is limited by dependence on striking rather than exhaustive examples and judgments are made by a small community of subjects who share an agreed definition of acceptability. Whether or not the small-is-beautiful approach solves the problems of interpretation and of bias remains to be seen. Certainly it does little to mitigate an even greater difficulty, the inherent inadequacy of the measuring instrument used for linguistic acceptability judgments. One symptom of the problem is the fact that symbols used for categorizing example sentences tend to vary in application even within the work of a single author. Consider, for example, the following items drawn from a textbook on GB theory (Haegeman 1991): (1) a. Which tenor did Bill go to Rome to visit? (H:35a, p. 500) b. ?Which man do you wonder when to meet? (H:44a, p. 502) c. ?This is a paper that we need someone who understands. (H:50a, p. 505) d. ?Which car did John announce a plan to steal tonight? (H:53a, p. 506) e. *Whom do you know the date when Mary invited? (H:31a, p. 495) f. *Where did Bill go to Rome to work? (H:35b, p. 500) g. *This is a book which reading would be fun. (H:38a, p. 500) h. *With which pen do you wonder what to write? (H:44b, p. 502) i. *This is a paper that we need someone that we can intimidate with. (H:50b, p. 505) j. **This is a pen with which writing would be fun. (H:38b, p. 500) Most of these examples are cited in pairs in their original source. In each case we are invited to note that the second member of the pair is less acceptable than the first. If an impression of relative acceptability were the only goal of acceptability judgments, however, the symbol > would always be adequate to express the critical data. By reassembling the original pairings of the examples in 1, the reader can demonstrate that the implied relative judgments are usually easy to reproduce. The use of the 0–?–*–** scale (where 0 indicates an acceptable sentence) indicates something more, however. As is normal practice, Haegeman is attempting to indicate the absolute acceptability of these sentences. This is where the problem arises. Even though absolute acceptability is usually not of primary interest, the 0–?–*–** scale ought to facilitate building extended linguistic arguments on the basis of acceptability judgments. To deliver this, however, the symbols recording judgments should be capable of consistent application over a few pages of text. 
Thus, if the scale allowed reasonable representation of both relative and absolute acceptability, then sentences marked with the same symbol should be roughly comparable in acceptability and any sentence marked ** should be worse than any marked *, which should in turn be worse than any marked ?, and all of these should be recognizably less acceptable than an unlabelled acceptable sentence. This condition does not appear to hold in 1. Example 1c seems less acceptable than 1b, for instance, though both are labelled ?, and 1j, marked **, does not seem to be markedly worse than 1e, marked *. That this can be true even when we agree with the original relative judgments means that something is amiss with the scale in which they are represented. The fault is certainly not Haegeman’s. It derives instead from the disproportion between the fineness of judgments people can make and the symbol set available for recording them. Each of the symbols in the 0–?–*–** scale appears to cover a range of acceptability levels. That is, if the sentences in 1 are ranked by acceptability without regard to the grammaticality annotations, and if the annotation scale is adequate, we should find that only four different degrees of acceptability are discriminable. The greater the number of discriminable ranks beyond four, the more information the four-point scale must be hiding from us. The same argument applies to extended scales like 0–?–*–*–** or to the five-point scale often used in empirical studies or indeed to any other scale that predetermines the number of distinctions subjects may use. There is no way of knowing in advance if our sensitivities are limited to a five-way distinction any more than a four-way distinction. It is instructive to illustrate the problem with a five-point numerical scale, which can be used both carefully and to good effect in many domains. In this domain, the relative and the absolute uses of the scale can conflict. Imagine that a sentence Sa and the appreciably less acceptable corresponding sentence Sb might both fall within a carefully defined *3* category. Imagine that Sa and its corresponding Sc differ more than Sa and Sb, but still reside within that part of the range labelled *3*. Neither difference will be recordable on this scale, for all three examples will be coded *3*. Nor will there be any legitimate way to report that one difference is perceived to be larger than the other. Now imagine that the a v. b pair for another sentence, Z, differ in acceptability as noticeably as Sa and Sb, but this time the difference genuinely crosses the carefully marked 2/3 boundary. Now of two equal differences, Sa v. Sb and Za v. Zb, one is lost to view. Of two unequal differences, the smaller, Za v. Zb, can be reported, while the larger, Sa v. Sc, does not register. The only way to get around these difficulties without expanding the scale is to pervert it, that is, to move the boundaries between numbers in order to reflect perceived differences. Thus the first sentence less acceptable than a genuine 3 will be labelled 2 and any subsequent even less acceptable sentence then has to be labelled 1. Confusion between genuine or absolute 2 and forced or relative 2 will then arise. In effect, the subject has to choose between being less than informative and being less than consistent. Working linguists know very well, of course, that each symbol covers a range of judged degrees of acceptability, and that in practice the ranges covered by different symbols will often overlap. 
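To make the loss concrete, the following sketch (Python; the 'true' acceptability values and the category boundaries are invented purely for illustration) shows how a fixed five-point scale can hide two differences that fall within one category while reporting an equally large difference that happens to cross a category boundary.

```python
# A five-point scale applied to invented continuous acceptability values.
def five_point(x, boundaries=(1.0, 2.0, 3.0, 4.0)):
    """Map a continuous acceptability value onto categories 1-5."""
    return 1 + sum(x >= b for b in boundaries)

Sa, Sb, Sc = 2.9, 2.5, 2.1   # all three fall inside the range labelled 3
Za, Zb     = 2.2, 1.8        # a difference the same size as Sa-Sb, crossing the 2/3 boundary

print(five_point(Sa), five_point(Sb), five_point(Sc))  # 3 3 3: both S differences are lost
print(five_point(Za), five_point(Zb))                  # 3 2: the equal Z difference is reported
```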
Sensibly enough, linguists rely more heavily on the ability of the symbols to express relative acceptability and make less direct use of their dubious relationship to absolute acceptability. If the field has progressed using limited annotations, can it be important that they tend to lose information subjects might be able to provide? For some time it was arguable that the loss was harmless, because the generalizations of interest were fairly broad. More recently, however, the scale for measuring linguistic acceptability has begun to curtail the utility of the elicited judgments. Consider two examples. One is well known. It deals with the relative effects of Subjacency and ECP violations (Chomsky 1986, Rizzi 1990). To support the proposal that one of these principles is ‘stronger’ than the other, it is necessary to elicit intuitions about the relative unacceptability of strings that violate them. Subjects must therefore judge whether a sentence that violates the Subjacency Principle (like 2b and 3b) or a sentence that violates the ECP (like 2c and 3c) is less acceptable. (2) a. John announced a plan to steal Bill’s car late tomorrow. b. *Which car did John announce a plan to steal late tomorrow? c. *When tomorrow did John announce a plan to steal Bill’s car? (3) a. I wonder whether John can solve the problem. b. *Which problem do you wonder whether John can solve? c. *Who do you wonder whether can solve the problem? Here it is not necessary to show that all (c) examples are equally undesirable. The hypothesis about the relative importance of Subjacency and ECP does not preclude the possibility that some (a) sentences will be less acceptable than others or that our judgments may be subject to adventitious effects of the lexical, propositional, or pragmatic contents of each set of sentences. Instead it is important to determine whether, despite all these factors, an ECP violation generally reduces perceived acceptability more than a Subjacency violation in whatever sentence structures they may be instantiated. What is needed is a comparable effect of a violation over a number of sentence structures. This is rather more demanding than finding that for every acceptable (a) example, the (c) version is worse than the (b) version. It requires that, for example, 2b should be less acceptable than 2a to roughly the same degree as 3b is worse than 3a, and that each of these reductions in acceptability should be smaller than the one created by the violations in 2c and 3c. In other words, testing the hypothesis requires comparing differences in acceptability. The difficulty is that we have no obvious way of estimating such differences. Even if it included a different symbol for every sentence, a scale like ‘-***’ would not allow us to subtract the acceptability of 2b from the acceptability of 2a and compare the result with the outcome of the parallel operation in 2c and 2a, 3b and 3a, and so forth. Because there is no scale on which the difference between * and ? can be represented, the notion comparable effect will not find an easy definition. The five-point scale is a tempting alternative here, because operations like subtracting 2 from 4 would seem to allow the necessary comparisons. As we have just seen, however, it might be impossible to perform the arithmetic accurately without assurance that we had encountered the genuine rather than the relative 2. A second example is drawn from Sorace 1992, 1993a, 1993b, 1996 (to which we will return in §5). 
Here the issue is whether our knowledge of a syntactic generalization is equally secure throughout its domain of application. Sorace proposes that Italian native speakers’ knowledge about the restrictions on combinations of auxiliaries and lexical verbs is not equally determinate for all pertinent verbs. Although, for example, speakers of Italian agree that unaccusative verbs select the auxiliary essere ‘be’ and unergative verbs select avere ‘have’ (Perlmutter 1978, 1989, Burzio 1986), they should not agree to the same extent for all unaccusative and all unergative verbs. For cases toward the core of the system, it is predicted that speakers should very clearly accept the canonical auxiliary and reject the alternative, while for other, more peripheral cases, they should be progressively less definite in their views. In this case, the hypothesis finds at least two natural interpretations. Unfortunately, neither is currently easy to apply. First, indeterminacy might be reflected in differences between the acceptability of canonical (a) and alternate (b) auxiliaries with particular verbs. The two might differ greatly in acceptability in the case of core examples like those in 4 below or relatively little in peripheral instances like 6. For this approach we once again need to be able to subtract the acceptability of the dispreferred form from the preferred, and as we have seen, the scales in use offer no such facility. (4) a. Maria è andata in ufficio a piedi. Maria.FEM.sg is gone.FEM.sg to office on foot b. *Maria ha andato in ufficio a piedi. Maria.FEM.sg has gone.MASC.sg to office on foot c. ‘Maria went to the office on foot.’ (5) a. Paolo è rimasto a letto tutto il giorno. Paolo.MASC.sg is stayed.MASC.sg in bed all the day b. *Paolo ha rimasto a letto tutto il giorno. Paolo.MASC.sg has stayed.MASC.sg in bed all the day c. ‘Paolo stayed in bed all day.’ (6) a. Gli unicorni non sono mai esistiti. Unicorns.MASC.PL not are never existed.MASC.PL b. *Gli unicorni non hanno mai esistito. Unicorns.MASC.PL not have never existed.MASC.SG c. ‘Unicorns never existed.’ The second interpretation is more direct, but even more problematical: indeterminacy might be reflected in the variability of an individual’s or a group’s judgments, whether from verb to verb at a particular position between core and periphery, from trial to trial on the same verb, or from subject to subject. For example, judgments on 4b should be more consistent than those on 5b or 6b, even for subjects who are very secure in the belief that all the (b) examples are less acceptable than the corresponding (a) examples. Yet variability of judgments will be difficult to assess using terms like ?, *, and **. Although we might score each dispreferred example for the number of times it was given each annotation, we will still suffer from the use of a scale that can lose distinctions in apparent acceptability. Deciding whether results are due to genuine confusion about the status of examples or to an inadequate and confusing set of symbols may be more trouble than it is worth. Using more common measures of variability, like the standard deviation, for instance, is simply out of the question with annotations that preclude simple arithmetic. To give such examples the serious study they merit, we need a better way of measuring acceptability. 
In both cases we need a measure of perceived acceptability so sensitive that we can use all the judgments our subjects produce and so structured that we can at least make simple arithmetic estimates of differences in perceptions. Readers unfamiliar with the history of experimental psychology may feel at this point that we are trying to replace a simple and well-practiced technique with an alternative of unknown and unnecessary complexity. Readers familiar with this history, on the other hand, may recognize the sorts of problems that inspired the development of measurement theory and of a phalanx of judgment-elicitation techniques in experimental psychology. The purpose of this paper is to bring to the service of linguistic investigations one such method originally developed by psychophysicists to elicit subjects’ impressions of various physical phenomena and subsequently adapted for use with a number of psychosocial domains. With the proper application of this method, many of the difficulties outlined here can be overcome. Insofar as linguistics is a branch of psychology that studies a specialized kind of human perception, it is a sister field to psychophysics, the study of relationships between human sensations and the physical universe. Transfer of techniques is more than appropriate. 2. A MEASUREMENT SCALE FOR ACCEPTABILITY. To understand what is at stake here, it will be helpful to recast the problem in terms of the kinds of measurement scales involved. Measurement is often defined as the ‘assignment of numbers to things according to rule’ (after Stevens 1946:667). Four types of scale are commonly distinguished: nominal, ordinal, interval and ratio (Stevens 1946). They are ordered in terms of their formal properties, the kinds of information they use, and, consequently, the kind of mathematical operations that can be performed on the measurements (Stevens 1951; see Michell 1990 for a discussion). Because the scales are effectively ordered in the precision with which they use available information, any type of data will be most adequately measured on the highest applicable scale. Our introductory examples illustrated two claims: first, that the scales on which acceptability has heretofore been measured appear to be too condensed to reflect our intuitions accurately and, second, that whatever their length, these scales are too low in the series either to capture the information that could be made available or to serve the current needs of linguistic theories. We will illustrate this claim as we set out the characteristics of the different sorts of scales. The simplest measurements are via **nominal scales**. These are easiest to view as a set of labels assigned according to rule, like *apple, banana, orange*. Nominal scales have one formal property, the property of equality: if it is meaningful to say of two objects A and B that they are either equal or not equal with respect to some attribute or property, then that attribute or property can be measured on a nominal scale. Items measured on a nominal scale can be categorized but not ordered in any way. No mathematical operations can be performed on these measures other than counting the items in each category and comparing the totals. Some kinds of data may be perfectly well measured via a nominal scale. The fruit example is typical. There is no inherent order among the apple, banana, and orange categories. There are no intermediate cases. 
There is no notion like ‘average fruit’ which we are prevented from expressing because we cannot add apples and oranges or divide by bananas. In many views, grammatical and ungrammatical form an exhaustive nominal scale. This scale will not measure relative ungrammaticality, however, because points intermediate between the two categories will be as impossible to reflect as fruits that are a bit more apple than banana. Even if a nominal scale were expanded to include a doubtful category, it would not order this category between grammatical and ungrammatical any more than it could order pears between apples and bananas. Once order is introduced, the scale is an **ordinal scale**. These have two formal properties: equivalence and order. If two objects are the same with respect to a particular property, while each has more of that property than a third, then the property we are dealing with can be measured on an ordinal scale. An ordinal scale rank orders scale points but makes no commitment to any other kind of difference between them. If we were to put ordinal scale points on one axis of a graph, we would have to assume that the axis was elastic, for the distance between successive points is both unknown and unpredictable. For this reason, we can count the number of items at each rank or groups of ranks, but we would have difficulty performing arithmetic across them. In §1 we suggested that the system of symbols {0, ?, *, **} comprises such a scale: each member indicates less acceptability than the previous one. We showed, however, that this scale is often applied in such a way as to violate both the equivalence and the order conditions. Even if the scale were appropriately applied, the mathematical limitations of ordinal scales, not to mention the non-numerical symbols used to measure acceptability, would stand in the way of testing linguistic hypotheses dependent on notions like ‘comparable difference’. The difference between successive ranks, 0 and ?, for example, is not only an odd concept, but also one that cannot be predicted to be equal to another successive-ranks difference, * and **, or less than a nonsuccessive difference, ? and **. **Interval scales** allow us to measure difference. To equality and order, interval scales add regular difference between successive pairs of measurements. A property is measurable on an interval scale if we can meaningfully compare the differences between pairs of objects with respect to that property. Once we can do this, various useful mathematical operations become available. Skirt length in inches below the knee is an interval scale: a skirt 2 inches below the knee is longer than one 1 inch below the knee by as much as a skirt 4 inches below is than another 3 inches below. Because interval scales have measurement points at equal intervals, they support subtraction. It may seem strange to think of linguistic acceptability as an interval scale, but we contend that only historical accident and the basic nature of early linguistic hypotheses originally led to the use of nominal and ordinal scales of measurement rather than interval. Once it is proposed, as the ECP/Subjacency discussion does, that we can reliably judge the difference in acceptability between one principle-respecting sentence and its principle-violating mate as greater than the corresponding difference for a pair respecting and violating another principle, then linguistic theory has outgrown simpler measurement scales. If interval scales can be applied, our analytic tools multiply. 
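The licensing of operations by scale type can be set out schematically. The sketch below (Python) simply restates the distinctions drawn in this section; the operation names are ours for illustration and are not part of any statistical package.

```python
# Summary operations licensed by each type of measurement scale (schematic).
licensed = {
    "nominal":  {"count", "mode"},
    "ordinal":  {"count", "mode", "median", "rank"},
    "interval": {"count", "mode", "median", "rank", "mean", "difference", "standard deviation"},
    "ratio":    {"count", "mode", "median", "rank", "mean", "difference", "standard deviation", "ratio"},
}

def licensed_on(scale, operation):
    return operation in licensed[scale]

# The 0-?-*-** annotations form at best an ordinal scale, so:
print(licensed_on("ordinal", "median"))      # True
print(licensed_on("ordinal", "difference"))  # False: 'comparable difference' is undefined
print(licensed_on("interval", "difference")) # True: what the ECP/Subjacency comparison needs
```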
Although there are descriptive and inferential statistics for nominal and ordinal scales, much greater variety is available for interval data. So long as judgments actually take on the necessary characteristics, we should be able to pursue the psycholinguistics of intuitions in detail comparable to what is available in other branches of perceptual and cognitive psychology. More informative still are ratio scales. A property is measurable on a ratio scale if it satisfies the criteria for an interval scale and the additional condition that the ratios between measurements can be discovered. To make this possible, the distance of each item from a common 0-point must be known. For skirt lengths, measurement from the knee gives interval measurement of the differences between lengths, but will not allow us to say that one skirt is 1.5 times as long as another. To do this we need to measure the skirts from their waistband origin, so that we can determine that one skirt is 33 inches long and the other 22 inches long from waist to hem. It may stretch the imagination to suppose that ratio scale measurement is appropriate for judgments of acceptability. If the principled arguments are less compelling here than in the case of interval scales, principally because it is unclear what a string with 0 acceptability would be like, the two scales are linked by a judgment elicitation technique called magnitude estimation: providing that subjects’ abilities are as great as we have supposed, attempts to say which sentence is 1.5 times as acceptable as another, and which .6 times as acceptable, and so forth, can at least give us the interval scales that we need. 3. Magnitude estimation: establishing the subjects’ scale. Magnitude estimation was developed to provide better than ordinal scales for measuring impressions of physical continua (Stevens 1956). As originally applied to the direct estimation of brightness or loudness, magnitude estimation in its simplest version requires the subject to associate a numerical judgment with a physical stimulus (see Stevens 1975 for a review). Once the initial stimulus, or modulus, is presented and a number associated with it by experimenter or subject, the subject assigns to each successive stimulus a number reflecting the relationship between that stimulus and the modulus. Subjects are explicitly instructed to reflect perceived ratios in their judgments: a stimulus that appears to be 10 times as bright as the first is to be given a number 10 times the original number; one that seems one-third as bright is given a number one third the size. However bizarre they may find the task at first, normal adults can reliably perform it for a large number of physical continua. Magnitude estimation fills exactly the needs which we have been discussing. First, it does not restrict the number of values which can be used to measure the property of interest. Subjects decide whether each stimulus should be assigned the same number as another stimulus or a different number, and they have complete freedom about which of the infinite set of numbers to use. Accordingly both the range of responses and the distribution of individual responses within that range are informative.
2 There is considerable disagreement about the use of parametric statistics with ordinal measurement. For discussions, see Gaito 1980, Townsend & Ashby 1984, Michell 1986. 
Second, because ratio-scale judgments subsume an interval scale, it is possible to subtract the number assigned to one stimulus from the number given to another and produce meaningful differences which directly reflect differences in impressions. By the same token it is also possible to calculate the mean and the variance for multiple judgments on a particular type of stimulus. Most important for psychophysics, magnitude estimation provides measurements of impressions on a numerical scale which can be plotted against the objective measure of the physical stimuli giving rise to the impressions. As a result, psychophysical relationships can be viewed as a set of mathematical functions. Although there is dispute about the generality of the finding (see Poulton 1986, 1989 for a critique), when the subject’s estimates of magnitude (or group geometric mean estimates or medians) are plotted in log-log coordinates against the physical dimension, the points tend to follow a straight line with a slope characteristic of the physical property being assessed. The straight line in log-log coordinates means that equal ratios on the physical dimension give rise to equal ratios of judgments. In judgments of brightness, for example, every time the stimulus energy doubles, the subjective brightness becomes 1.5 times larger. In judgments of line length, on the other hand, the function is steeper: doubling physical line length doubles subjective line length as well. The characteristic relationship is reflected in the value of this slope, called $b$ or $B$.² ² Psychophysical relationships of this kind are expressed in the form of equations called power laws with the alternative forms below (Stevens 1957): \[ \psi = R = kS^b \quad \text{or} \quad \log R = \log k + b \log S \] Here $\psi$ is the subjective magnitude of the stimulus, $R$ is the response estimating that magnitude, $S$ is the physical magnitude of the stimulus itself. $k$ is a constant, and $b$ is the exponent that is characteristic of the $S$ (Lodge 1981:13). In its log form, $b$ gives the slope of the straight line function in log-log coordinates and $\log k$ the intercept. Thus, the variable $b$ is what characterizes a sensory domain: in brightness estimation, as we indicated, $b$ is .5, while in line-length estimation it is 1.0. 4. THE CASE FOR MAGNITUDE ESTIMATION OF LINGUISTIC ACCEPTABILITY. 4.1. MAGNITUDE ESTIMATION FOR LINGUISTIC INTUITIONS. Magnitude estimation has often been applied to linguistic stimuli with properties for which some objective interval scale is available: speech rate (Grosjean 1977, Grosjean & Lass 1977, Green 1987), vowel roughness (Toner & Emanuel 1989), similarity of syllables from different languages (Takefuta et al. 1986), quality of synthesized speech (Pavlovic et al. 1990), and speech intelligibility (Fucci et al. 1990). Acceptability differs qualitatively from these examples, however. Unlike apparent vowel roughness, which can be plotted against amplitude of aperiodic energy, linguistic acceptability has no obvious physical continuum to compare with the subjects’ impressions. In theory, it is the predictions of the grammar that should replace objective physical descriptions here: psycholinguistic and psychophysical relationships should be analogous. They are not, because linguistic theory does not make predictions in the same measurement scales as physics does. Even though we need interval scales to test linguistic theories, the theories themselves do not predict precise intervals. 
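For the physical case, the relationship quoted in the note above is easy to recover from data. The sketch below (Python with numpy; the stimuli, the noise, and the constant k are simulated, with the line-length exponent of 1.0 taken from the text) fits log R = log k + b log S by linear regression in log-log coordinates; as the following paragraphs explain, no analogous physical axis exists for acceptability.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.array([2, 5, 10, 20, 35, 50, 70, 98], dtype=float)  # line lengths in mm (simulated stimuli)
b_true, k = 1.0, 0.8                                        # exponent 1.0 for line length (see text)
R = k * S ** b_true * np.exp(rng.normal(0.0, 0.05, S.size)) # noisy ratio-scale magnitude estimates

b_hat, log_k_hat = np.polyfit(np.log(S), np.log(R), 1)      # slope and intercept in log-log space
print(round(b_hat, 2), round(np.exp(log_k_hat), 2))         # recovers b close to 1.0, k close to 0.8
```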
At best, the kinds of predictions we have been describing deal in orders of results: they predict, for example, that one error should be worse than another but not how much worse or how absolutely bad. So although we could select a pair of stimuli for a typical psychophysical experiment in such a way that one is twice as bright as the other, or twice as long, or twice as red, we cannot find a sentence that linguistic theory designates as twice as grammatical as some other. In psychophysics, the utility of magnitude estimation is demonstrated when an orderly psychophysical function emerges. Without a suitable ‘physical’ scale for acceptability, we are unable to make such a simple argument here. Instead, we have had to take a multipronged approach to discovering whether magnitude estimation can serve the needs of linguistics. First, in §4.2, we demonstrate that the technique is easy to apply informally with naive or experienced judges and that it produces data with a prima facie resemblance to familiar acceptability judgments. Second, in §5, we demonstrate the ability of magnitude estimation to reveal distinctions of linguistic interest in a statistically robust way. To do this we summarize selected results from a large-scale study (Sorace 1992) which elicited the views of native and non-native speakers of Italian. Third, we address the issue of the missing axis. We have borrowed an extension of magnitude estimation, called CROSS-MODALITY MATCHING, which also originated in psychophysics (J. C. Stevens et al. 1960) but which is widely used in studies of psychosocial domains where the physical axis is missing (see Dawson 1974 and Lodge 1981 for reviews). It validates magnitude estimation not with reference to a physical continuum, but in terms of self-consistency. Section 6 reports a successful validation study of this kind. Finally, we turn to straightforward replication to show that magnitude estimation results will generalize across subjects. Section 7 shows that our subjects in the validation study (§6) replicated the results reported by Sorace (§5). 4.2. AN ILLUSTRATION. This study was designed in the manner of a prepilot, an exercise to catch difficulties and reveal whether the technique or the materials were worth continued study. Compared to the larger scale studies we have done, this one is something of a classroom exercise. We offer it here because if magnitude estimation comes to be used regularly by linguists, it is most likely to be applied in this relaxed fashion without large groups of subjects, major training, or complicated apparatus. The study shows how readily the technique can be applied, both to linguists and to naive adult native speakers of English. We have used a set of materials based on the items in 1 to offer further comments on our discussion in §1.2. Although it is not necessary to allow subjects extensive practice with physical magnitude estimation, we used both line and sentence stimuli in this study. For both, the initial stimulus, the modulus, was from the middle of the range of stimuli. The 12 horizontal lines ranged in length from 2mm to 98mm. Each line was displayed horizontally in the middle of a separate page of a small booklet. The lines were used to illustrate the method of magnitude estimation, under the supposition that our subjects would find length estimation easy to understand. The linguistic materials, also displayed one per page, were the 16 sentences below in the order given. 
We include their original grammaticality/acceptability markings, and their page and number references in Haegeman 1991 for convenience here, though these were not offered to subjects. (7) a. *Which man do you wonder when to meet? (H:44a, p. 502) b. Which book would you recommend reading? (H:41a, p. 501) c. *When does John like the plan to steal the crown jewels? (H:53b') d. **When do you know the man whom Mary invited? (H:31b, p. 495) e. *With which pen do you wonder what to write? (H:44b, p. 502) f. *Whom do you know the date when Mary invited? (H:31a, p. 495) g. *When did John announce a plan to steal Bill's car? (H:53b, p. 506) h. *This is a book which reading would be fun. (H:38a, p. 500) i. ?Which car did John announce a plan to steal tonight? (H:53a, p. 506) j. ?Who did Bill buy the car to please? (H:35a') k. *Where did Bill go to Rome to work? (H:35b, p. 500) l. ?This is a paper that we need someone who understands. (H:50a, p. 505) m. **This is a pen with which writing would be fun. (H:38b, p. 500) n. *This is a paper that we need someone that we can intimidate with. (H:50b, p. 505) o. Who did John invite? (H:22, p. 489) p. *Where did Bill buy the car to drive? (H:35b') Some comments on the set of materials are in order. Most of the sentences were taken directly from Haegeman. Several (7j (= 35a'), 7p (= 35b'), and 7c (= 53b')) were alternative lexicalizations. They were composed to help subjects arrive at the interpretation intended by Haegeman in the absence of explicit instructions as to how sentences should be construed. To invite particular interpretations in materials for psycholinguistic experiments which do not allow stage directions for each example (see Trueswell & Tanenhaus 1991 for examples), lexical content of examples is often manipulated. Here, item 7g, Haegeman's original example 53b, allows the interpretation that when originates in the IP containing announce, though it is intended to represent a sentence in which when is an adjunct from the IP containing steal. Example 7c uses a verb in the upper IP which is harder to construe with when. Examples 7j and 7p are analogues of Haegeman's 35a and 35b (the latter reproduced in 7k) which seemed easier to judge as Haegeman intended, perhaps because they contain less common sequences buy the X to please $t_i$ and buy the car to drive $t_i$ instead of the more common go to X to visit $t_i$ and go to X to work $t_i$. In all, we tested four undergraduate anatomy students, none of whom had ever judged acceptability before, and nine experienced linguists. Because so few inexperienced subjects were readily available, we selected the four most experienced linguists as a comparison group. Data from a fifth linguist are examined separately because her use of the scale was unique in this group. We tested subjects in groups in their own departments during a coffee break. We asked all subjects to assign a number to the first line to represent its length and then assign a number to each subsequent line to reflect its length relative to the first, doubling the first number if the second appeared twice as long, dividing it in three if it seemed a third as long. After they had judged the lines, we asked subjects to make analogous numerical estimates of the acceptability of the sentences, again judging each subsequent example relative to the first. We told subjects to judge acceptability of construction rather than meaning, assigning higher numbers to better sentences and lower numbers to worse. 
We reminded them that there is no limit to the set of positive numbers, that fractions are legitimate numbers, and that all multiples and fractions of any positive number assigned to the modulus would have to be greater than zero. Table 1 shows summary figures for each subject. Much of our argument thus far depends on the possibility that more degrees of acceptability are distinguishable than the usual symbolic scale reflects. The table shows that all the subjects used more than 4 different numbers to express their acceptability judgments. Without further validation, this limited set of data cannot be conclusive, but, compared to more restricted responses, it encourages the belief that subjects genuinely find more than four different levels of acceptability represented in the stimuli. The table also shows that subjects used a wide range of numerical estimates, a result consistent with the view that these sentences represented very different degrees of acceptability. The ratio of the largest to the smallest estimated magnitude for a subject, what we call the max/min ratio, varied from 5 to 500. Finally, a cursory glance at the table shows that although subjects chose quite different moduli, there is no obvious relationship between the modulus and the max/min ratio. There is always some worry that numerical estimates may be distorted by subjects' unwillingness to use large numbers or to calculate estimates to several decimal places in small ones. These results indicate no major effects of this kind. Characteristics of magnitude estimates can be highlighted by comparing a subject who appeared to follow the instructions, Linguist A (Fig. 1), with the unusual subject, Linguist E, who effectively rejected magnitude estimation, using only the integers of a five-point scale, and telling us subsequently that she could not imagine acceptability being judged in any other way. Although subjects may not be the best judges of their behavior in these tasks, this one seems to have been correct. Note that both linguists put the acceptable sentences 7b and 7o at the top of their respective ranges and sentences originally marked * or ** at the bottom, so we can suppose that both were assigning larger numbers to better sentences. But note also that neither follows the implications of the original classifications on the 0–?–*–** scale, for neither maintains a discrete range of numbers for all the items originally marked with a given symbol. Several dissimilarities are clear. The first, unsurprisingly, is the number of values used: Linguist A used more different values in attempting to estimate the acceptability of the 16 stimuli than Linguist E and, accordingly, produced fewer ties. Second, Linguist A not only produced different estimates of acceptability for items directly compared in Haegeman’s original presentation (for example, 7h v. 7m, 7a v. 7e, 7l v. 7n) but also produced differences of different sizes: 7h and 7m show a small difference in estimated acceptability while the differences between 7a and 7e and between 7l and 7n are larger. If these results are reliable, Linguist A may be expressing an intuition that there is no uniform relationship between the effect of subjacency violations and the effect of ECP violations. Linguist E, on the other hand, produced the kind of pattern used to exemplify the problems with short scales. She rated 7l more acceptable than 7n, making a distinction only where Linguist A had recorded a large difference. Linguist A’s equally large difference for 7a v. 7e corresponds to E’s tie, as does A’s smaller difference between 7h and 7m. As we suggested earlier, it is not clear whether E reports fewer perceived differences because fewer are perceptible or simply because fewer are reportable. For other results we turn to Figures 1c and 1d, which average respectively over the estimates of Anatomists A–D and of Linguists A–D. Again both groups put the unexceptionable sentences at the top of their ranges and an unacceptable sentence at the bottom. Both fail to reserve distinct ranges for items given different symbols in the 0–?–*–** scale. The averaged results suggest that our alternate lexicalizations may be useful. For example, the four linguists found 7i, Haegeman’s 53a (an object extraction without a subjacency violation), less acceptable than 7g (Haegeman’s 53b, which violates ECP under the intended construal), but more acceptable than the alternate lexicalization of the ECP violation, 7c, which discouraged the unintended construal of the adjunct when. They also found 7p worse than its alternate lexicalization 7k (both committing weak subjacency and ECP violations), though this time both were less acceptable than 7j. If subjects’ assessments are to be relied on, then developing suitable versions of sentence types, and even averaging over different lexicalizations may prove necessary. For insight into the effects of experience, refer to Table 1 and Figs. 1c and 1d. We noted that none of the anatomy students balked at magnitude estimating acceptability or produced results that were radically out of line with the linguists’.
Figure 1. Results of an informal study showing (a) one linguist estimating magnitude; (b) one linguist using a 5-point scale; (c) averaged results for 4 naive subjects; (d) averaged results for 4 experienced linguists.
4 Full instructions are available from the authors on request.
5 It is well known that departures from linearity often occur at the lower end of the psychophysical scales where a preference for integer responses raises estimates that ought to be less than 1.00 to 1.00, since, as the informants have been told, they must not be 0. Though the integer bias may work over the whole range of responses, it produces most distortion at the lower end, where smaller numerical differences represent larger proportional differences. To avoid this problem, researchers often choose a modulus in the middle of the testable range, in the hope that it will inspire a conveniently large initial estimate. It is also customary to advise subjects to assign the modulus a number that is easy to work with in multiplication or division. Although in the last analysis subjects will do what they please, most have the sense not to start with values like 0.73. 
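The per-subject summaries reported in Table 1 are simple to derive from the raw estimates. The sketch below (Python; the numbers are invented and are not the study's data) computes the number of distinct values used, the max/min ratio, and the geometric mean over items for each subject; group comparisons such as the linguist/anatomist correlation reported next are computed over such geometric means, item by item.

```python
import math

estimates = {                                  # subject -> raw acceptability estimates (invented)
    "Anatomist A": [100, 60, 60, 25, 10, 5, 2],
    "Linguist A":  [40, 35, 22, 22, 12, 6, 0.5],
}

for subject, values in estimates.items():
    distinct = len(set(values))                              # how many different values were used
    max_min = max(values) / min(values)                      # the max/min ratio discussed above
    geo_mean = math.exp(sum(math.log(v) for v in values) / len(values))
    print(subject, distinct, round(max_min, 1), round(geo_mean, 1))
```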
Although the groups did disagree on the relative acceptability of some items, especially in the case of those (b) examples with more acceptable competing interpretations, results were substantially alike: linguists’ and anatomists’ geometric means, using the average of estimates over any alternate lexicalizations, show a strong positive correlation ($r = .81, p < .001$). 5. Robustness and delicacy. What we have just described is a mere exercise. To show substantial and replicable effects, it is often necessary to invest in larger scale studies. Sorace (1992, 1993) has successfully used magnitude estimation to test her hypotheses with respect to the use and acquisition of the Italian auxiliaries avere ‘have’ and essere ‘be’ and their syntactic and semantic properties. Our purpose in citing this work here is limited in two ways. First, we still cannot plot a psychophysical function, for we still have a single axis, the judgments themselves, to examine. All we can do is test the statistical robustness of the differences among judgments for different classes of verbs. Second, we do not attempt to defend or even relate the full set of linguistic arguments to which Sorace recruits these results. Not only are those claims too broad in scope for the present paper, but making such claims without independently establishing the reliability of the technique would bend our argument into a neat circle. Instead we cite examples from Sorace’s data to show that magnitude estimation compares well with more familiar techniques in revealing delicacy of judgment and in supporting robust statistical effects. Briefly summarized, Sorace’s position is that a purely syntactic account of unaccusativity is insufficient to capture the systematic variation exhibited in the use of Italian auxiliary verbs. Instead, she suggests that the unmarked selection of *essere* with unaccusatives and of *avere* with unergatives in compound tenses is sensitive not only to a hierarchy of syntactic configurations (as assumed by the Government-Binding version of the Unaccusativity Hypothesis) but also to lexical-semantic hierarchies that subdivide the range of unaccusative and unergative verbs along gradable dimensions such as *concrete/abstract, dynamic/static, and telic/atelic*, referring to the type of event denoted by the verb. These hierarchies distinguish core or prototypical types of verbs from peripheral ones, and therefore account for the well-recognized fact that some verbs are ‘more unaccusative’ than others, that is, they behave more naturally in particular diagnostics of unaccusativity (cf. Levin & Rappaport Hovav 1994, 1995).6 Conversely, auxiliary selection in syntactically marked ‘restructuring’ constructions (Rizzi 1982, Burzio 1986) induced by certain Raising and Control verbs rests exclusively on the unaccusative syntactic configuration. Sorace predicted that the interaction between syntactic and semantic constraints would give rise to systematic variability in native speakers’ linguistic intuitions, manifested in consistent and determinate acceptability judgments on core types of verbs, and variable and indeterminate judgments on peripheral types of verbs. Moreover, if the terms core and periphery have any general meaning, then learners of Italian as a foreign language should acquire the distinction starting from the core verbs. 
It follows from this view that advanced learners of Italian, even those who make no production errors in the language and share many intuitions with native speakers, will have their less nativelike intuitions in the periphery of the system, where native speaker judgments are most indeterminate. The study we cite here belonged to a set of three subexperiments, each defined by materials making a pertinent linguistic contrast, distinguishing (a) unergative from unaccusative verbs by means of *ne* cliticization; (b) different lexical-semantic types of unergative and unaccusative verbs; (c) syntactically marked restructuring phenomena (optional 'transmission' of auxiliary *essere* from an embedded to a matrix verb; obligatory auxiliary change from *avere* to *essere* under Clitic-Climbing; ungrammaticality of clefting in restructured constructions). For purposes of exemplification, we will restrict ourselves to the results of the unaccusative subexperiment.

--- 6 Sorace (1996) argues that the lexical-semantic representations identified by the hierarchies belong to a potentially universal 'semantic space'. What varies from language to language is the mapping of these representations onto positions in argument structure, which in turn determine the unaccusative or unergative syntactic status of a verb. Linking rules, which govern the assignment of lexical-semantic categories onto argument structure positions, are the main locus of cross-linguistic variation within this account.

Here the prediction was that (a) paired unaccusatives, which have a transitive alternant, would be less unacceptable when conjugated with *avere* than unpaired unaccusatives, and (b) within the category of unpaired unaccusatives, motion verbs would be perceived as more core *essere* cases than verbs denoting the continuation or the existence of a state. These materials were assessed via several techniques by 36 native speakers of Italian and by non-native learners of the language at various levels of proficiency. The results of the experiments were largely consistent with the predictions. To a conventional level of significance, the judgments of native Italians were sensitive to lexical-semantic hierarchies of unaccusative and unergative verbs: judgments on both auxiliary selection and *ne* cliticization were more consistent and determinate for core verbs than for peripheral verbs. Among learners, auxiliary selection was acquired earlier with core verbs than with peripheral verbs. Figure 2 indicates the kinds of discriminations the technique revealed. It shows subjects' strength of preference for the grammatical auxiliary (*essere*) over the ungrammatical (*avere*) with different subclasses of unaccusative verbs. The first thing to notice about this graph is the dependent variable described on the vertical axis: the strength of preference for one form over another. Sorace made use of the interval scale measurement present in magnitude estimates by subtracting the log of a subject's estimate for the acceptability of the dispreferred *avere* version of a given sentence from the log of his or her estimate for the preferred *essere* version of the same sentence. Strength of preference can be assessed in this way even for sentences never juxtaposed for direct comparison. Figure 2 portrays the arithmetic mean of these between-auxiliary differences averaged over a group of subjects and a group of grammatically equivalent sentences.
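To make the arithmetic of this measure explicit (our schematic restatement, not notation taken from Sorace's text; the symbols $E(\cdot)$ for a subject's raw estimates are introduced here only for illustration), the preference score for a given sentence frame is the difference of log estimates,

\[ \text{preference} = \log E(\textit{essere version}) - \log E(\textit{avere version}), \]

so that exponentiating the score recovers a ratio: a preference of $\log 2$, for instance, means the *essere* version was judged twice as acceptable as its *avere* counterpart.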
Logs are used both to keep the scale manageable in the presence of the very large numbers some subjects used and also to provide a straightforward way of dealing with judgments of proportions: when exponentiated, the difference between log estimates provides the ratio of the acceptability of the two versions of the sentence. The second thing to note here is that verb subclasses are arranged along the horizontal axis with core unaccusatives, change-of-location verbs (as in 4), on the left, followed by increasingly peripheral subclasses as we move rightwards. Continuation and existence-of-state verbs are exemplified in examples 5 and 6 above, while 8 and 9 below illustrate unaccusative verbs with transitive and unergative alternants respectively.

(8) a. *Le tasse sono aumentate del 20%.*
The taxes.FEM.PL are increased.FEM.PL by 20%
b. *Le tasse hanno aumentato del 20%.*
The taxes.FEM.PL have increased.MASC.SG by 20%
c. 'Taxes have gone up by twenty percent.'

--- 7 See Sorace 1992 for details.

Figure 2. Strength of preference for grammatical over ungrammatical auxiliaries with Italian unaccusative verbs of various subclasses as expressed via magnitude estimation of linguistic acceptability by learners (beginning, intermediate, and advanced), near native speakers and native speakers of Italian (from Sorace 1992).

(9) a. Paola è corsa in farmacia.
Paola.FEM.SG is run.FEM.SG to pharmacy
b. *Paola ha corso in farmacia.
Paola.FEM.SG has run.MASC.SG to pharmacy
c. 'Paola ran into the pharmacy.'

Subjects had more decisive views about core unaccusatives than about peripheral items. Preferences get significantly weaker as distance from the core increases (\(p < .001\)) for all classes of subjects: Spearman's \(\rho\) for native speakers of Italian is \(-0.363\) (\(df = 178\)); for near-native speakers \(-0.406\) (\(df = 118\)); and for advanced, intermediate, and beginning learners \(-0.315\) (\(df = 158\)), \(-0.322\) (\(df = 178\)), and \(-0.340\) (\(df = 158\)) respectively.

--- 8 The reader should recall that it is not possible to space the verb categories along the abscissa of this graph according to the predictions of a linguistic theory that comments only on their order. For this reason, Sorace could not indulge in the kind of statistics psychophysicists use and which we shall attempt to return to in §6. The closest we can come to the kinds of correlations discussed elsewhere in this paper is to correlate the rankings of all the judgments with the rankings of all the categories.

The figures represented by the patterned columns are means, however. The spread around each, represented by error bars, can be considerable. Sorace used Analysis of Variance, which treats verb category as a nominal scale, to establish that variation between verb categories exceeded variation within them. Either considering the behavior of individual subjects or considering the responses to individual sentences, the intercategory variation is greater than the intracategory variation to a degree that is unlikely to be the result of chance responding. To see how the effects of verb category work in detail, we can examine the results for different groups of judges, shown with least advanced learners on the left of each group of patterned bars and native speakers on the right. As Sorace predicted, the increasing length of bars within verb categories shows that intuitions about the unaccusativity hierarchy become more determinate with increasing proficiency.
In fact, by Tukey test, a sequel to ANOVA, the English beginner and intermediate groups show only a single significant difference each: between the mean for the core verb type (change of location) and the mean for the verb type hypothesized to be the most peripheral (unergative alternant). Sensitivity to the unaccusativity hierarchy is more clearly evident in the judgments of native speakers of Italian, of near-native speakers, and of advanced learners: their mean preference scores for core verbs are significantly different from their scores for each of the three most peripheral verbs (existence of state, transitive alternant, unergative alternant). Compared to the examples in 7, which involved major differences in type and location of constituent, the sentences of the unaccusative study, like 8a and 8b, differed minimally. Yet in Sorace’s studies, both native and non-native speakers not only produced significant effects, but also judged acceptability via magnitude estimation with at least as much delicacy as they did via a rank-ordering task. For instance magnitude estimation judgments—but not rank ordering responses—distinguished natives from near-natives whose speech and writing were virtually indistinguishable from natives: near-natives produced variable judgments about some sentences which elicited determinate judgments in native speakers. Even with modest differences in materials, magnitude estimation appears to be the tool of choice for distinguishing among subject groups. 6. Validation studies. 6.1. Cross-modality matching. Validation is the process of establishing that a response measure reflects what it is supposed to reflect. With no continuous measure of the stimulus to plot against subjects’ impressions, we might be at a loss to determine what it is that controls magnitude estimates of acceptability (Stevens 1966). Linguistics shares this difficulty with the social sciences, which have nonetheless made good use of magnitude estimation in providing interval scale judgments of such diverse properties as prestige of occupations (Kuennapass & Wikstroem 1963, Dawson & Brinker 1971), support for political policies (Lodge et al. 1976), moral judgments (Ekman 1962), and the stressfulness of events (Zautra et al. 1986; see Lodge 1981 for a more extensive list). In each case, as in the study by Sorace discussed above, the estimates for different stimuli have proved informative in their own right. To solve this problem, social psychologists have borrowed the CROSS-MODALITY MATCHING technique from psychophysics. In the psychophysical version of cross-modality matching, subjects use one sensory modality to estimate magnitudes presented in another. For example, brightness might be estimated by adjusting the length of a line to correspond to the perceived brightness of a light. If the subject thinks that the second light stimulus is twice as bright as the first, she or he draws a line that appears twice as long as the line drawn (however arbitrarily) to represent the brightness of the first stimulus. Two psychophysical functions contribute to the results, the function for brightness perception and the function for line-length perception. The plot of judgment (log of line length) against stimulus (log light energy) is characterized by a straight line, representing a power function with a predictable slope in log-log coordinates. 
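In symbols (a brief recap of the standard psychophysical claim on our part, not a formula given in this passage): Stevens' power law relates the response magnitude $R$ to the stimulus magnitude $S$ as

\[ R = aS^{b}, \qquad \text{equivalently} \qquad \log R = \log a + b \log S, \]

which is why the plot of log judgments against log stimulus values is expected to be a straight line, with slope $b$ the characteristic exponent of the modality being judged.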
If subjects are using their abilities to judge brightness and line length normally in this unusual situation, then the slope of the cross-modal function should be equal to the characteristic slope for numerical magnitude estimation of the stimulus (for example, .5 for estimated brightness of a point source) divided by the characteristic slope for numerical magnitude estimation of the response (1 for line-length estimation). The psychosocial application of cross-modality matching makes use of this regularity. When two familiar modalities are used to express judgments of dimensions which have no objective physical points of comparison (Cross 1974, Hamblin 1974, Stevens 1969, 1975, Lodge et al. 1976, Lodge 1981), the cross-modality plot of judgments against judgments will still approximate to the predicted slope, that is, to the ratio of the two psychophysical slopes, as long as subjects are able to use the modalities to estimate the new continuum consistently. Thus the appearance of the expected line in a cross-modality plot becomes a test of validity of judgments. As usually applied, the cross-modality technique has two phases. The CALIBRATION PHASE is cross-modality matching in the psychophysical sense, with each modality used to judge stimuli in the other. The proportions holding among stimuli in each modality are the same: subjects are judging the same proportions in two ways. This phase helps familiarize subjects with the concept of proportionality, which underlies the technique of magnitude estimation, and is used to assess subjects' basic self-consistency in well-understood domains. That is, this exercise should produce a cross-modality (psycho-psychological rather than psychophysical) plot with the slope predicted from classical psychophysics. In the VALIDATION PHASE the same two modalities are used to judge a single set of nonmetric stimuli, in our case, to judge the acceptability of the same sentence types. The question is whether they act as if they are judging the same proportions in two ways. If they are, whatever slope the cross-modal plot had in the calibration phase, it should have here. Cross-modality matching does not invent the linguistic scale of a psycholinguistic plot. On the other hand it does go some way toward confirming that, however unprincipled it seems to them, the spacing of judgments by our subjects is no matter of whim, but a reflection of intuitions on which they can draw repeatedly. Lodge 1981 gives a complete and clear account of the experimental and statistical procedures required to test the hypothesis that subjects' estimates are operating in the same way on the physical and the 'social' stimuli.

--- 9
\[ \log R_1 = \log a_1 + b_1 \log S_1 \]
\[ \log R_2 = \log a_2 + b_2 \log S_2 \]
Here, \( R_1 \) and \( R_2 \) represent the responses for the two modalities and \( S_1 \) and \( S_2 \) represent the stimuli. In the case cited, the subject is attending to \( S_1 \), a bright light stimulus, and matching to it \( S_2 \), a line stimulus. In effect, \( S_2 \) is made to equal \( S_1 \), at least insofar as each will have exactly the same relationship with all the other members of their respective series. Since \( S_1 = S_2 \), we can derive the following equation by substitution:
\[ (\log R_1 - \log a_1)/b_1 = (\log R_2 - \log a_2)/b_2 \]
From which it follows that
\[ \log R_1 = (\log a_1 - b_1/b_2 \log a_2) + b_1/b_2 \log R_2 \]
Hence the slope of the cross-modal function is \( b_1/b_2 \).
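As a purely illustrative sketch of the calibration logic (our reconstruction on synthetic data, not the authors' procedure or code; the exponents, the noise level, and the perpendicular-distance slope formula standing in for the errors-in-both-variables regression of Cross 1974 and Lodge 1981 are all assumptions), the check can be simulated in a few lines of Python:

import numpy as np

rng = np.random.default_rng(0)

def orthogonal_slope(x, y):
    # Errors-in-both-variables slope: minimises perpendicular distances
    # from the points to the fitted line (equal error variances assumed).
    x, y = x - x.mean(), y - y.mean()
    sxx, syy, sxy = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

# Hypothetical exponents: b1 = 0.5 (e.g. brightness), b2 = 1.0 (e.g. line length).
b1, b2 = 0.5, 1.0
stimuli = np.linspace(1.0, 17.5, 48)                      # one shared set of stimulus ratios
log_r1 = b1 * np.log(stimuli) + rng.normal(0, 0.05, 48)   # log estimates in modality 1
log_r2 = b2 * np.log(stimuli) + rng.normal(0, 0.05, 48)   # log estimates in modality 2

observed = orthogonal_slope(log_r2, log_r1)               # cross-modal plot of R1 against R2
print(f"observed cross-modal slope {observed:.2f}; predicted b1/b2 = {b1 / b2:.2f}")

With well-behaved judges the observed slope should land near the predicted ratio (here 0.5); this is the sense in which the empirical slopes reported below for Figures 3 and 4 are read against a predicted value of 1.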
Here we offer only an abbreviated report of a pair of studies following Lodge's procedures (for a full description, see Bard et al. 1994), to support our claim that the magnitude estimation technique elicits consistent expressions of opinion.

6.2. CROSS-MODALITY MATCHING OF ACCEPTABILITY JUDGMENTS. 6.2.1. METHOD. Each of our two studies used a separate group of 32 young adult native speakers of Italian, all residents of Italy and all visiting Edinburgh for a brief course in English. They included professional working people, university students and teachers. None were linguistics students and none had participated in any of the other studies described here.

CALIBRATION PHASE. To introduce the calibration phase, we demonstrated the notion of simple proportion and allowed subjects some initial practice with judging one of two physical continua. Subjects then performed two psychophysical magnitude estimation tasks that normal adults are known to execute accurately. They gave numerical magnitude estimates of the lengths of 48 horizontal lines where those lengths were distributed more or less evenly over the width of a PC screen. They also used lines to express magnitude estimates of the size of 48 numbers, matched pairwise to the line stimuli so that both represented the same set of ratios. For example, the ratio of largest to smallest stimulus was 17.5 in both sets. Because judgments in each dimension were expressed by manipulations of the other, subjects were always using their feel for both number and length. All that differed was which dimension was stimulus and which was response modality.

VALIDATION PHASE. When they had finished judging numbers and line lengths, we showed subjects that linguistic acceptability could be assessed in the same way, giving them examples of more and less acceptable sentences and allowing them to practice on sentences which varied considerably in acceptability and in source of unacceptability. The first group of subjects received no explicit instructions as to what numbers they should use in their numerical estimates. The second group was asked not to restrict their responses to the 10-point academic marking scale used in Italy. There were no other differences between the methods for the two studies.

--- 10 Copies and, where appropriate, translations of the materials will be found in Bard et al. 1994.

The linguistic materials in this phase of the studies were 192 Italian sentences presented visually. All were drawn from the materials devised by Sorace (1992). These covered three subexperiments on factors controlling auxiliary choice in Italian: the Unaccusative subexperiment discussed in §5 above, an Unergative subexperiment, and a Restructuring subexperiment. Table 2 outlines the three subdesigns. Each subexperiment had its own factorial design: each alternative value of each variable was combined with each value of every other, producing 48 basic item types, half using essere, half avere. Half of the 48 should be fully grammatical, the other half ungrammatical with varying levels of unacceptability. To make it possible to separate the effect of a syntactic manipulation from the effect of the particular lexical items in a sentence, 4 distinct lexicalizations were devised for each item type. For example, the 4 unergative [+ motional], basic, avere lexicalizations were:

(10) a. Maria ha nuotato tutti i giorni quest'estate. 'Maria swam every day this summer.'
b. Mia zia ha viaggiato molto da giovane. 'My aunt traveled a lot when she was young.'
c. Carla ha passeggiato nel parco per un'ora. 'Carla strolled in the park for an hour.'
d. Paola ha camminato in campagna per tre ore. 'Paola walked in the countryside for three hours.'

The resulting 192 sentences were divided into four groups, each containing one lexicalization from each of the 48 original types. Each subject encountered two groups of sentences, judging one via line lengths and the other via numerical estimates on their first presentation, and then reversing the combination of item and modality in a second session three or four days later. Each sentence was judged by 16 subjects per study. Each study represented all of the materials with the same counter-balancing for first combination of lexicalization and magnitude estimation technique, the order of techniques in the first session, and the ordering change between sessions. To free averaged results from order-based bias (Levelt 1972), 8 different random orders of sentences were used. In all three subexperiments, Sorace's subjects had performed according to predictions based on the general stance set out in §5.\textsuperscript{11} Our immediate purpose here was not to retest Sorace's hypotheses, but to allow the new subjects to judge materials that should differ markedly in acceptability. In the validation study, therefore, we did not subdivide the linguistic materials in any way.

--- 11 In the Unergative subexperiment, subjects judged 'paired' unergatives, those with unaccusative counterparts, as more peripheral, that is less unacceptable when conjugated with essere, than 'unpaired' unergatives (+ or –motional). Among unpaired unergative verbs, they treated the –motional items as more core avere cases than + motional verbs. For Restructuring materials, native speakers were able to discriminate categorically between possible and impossible, optional and obligatory auxiliary change in restructuring sentences, although these sentences on the whole elicited lower acceptability values than those assigned to core sentences.

6.2.2. RESULTS. CALIBRATION. Figures 3 and 4 show that our subjects, like other people who have contributed to psychophysical experiments, can estimate line length and numerical magnitude, and that exchanging response and stimulus modalities did not interfere with their ability to make such judgments. The figures show the averaged log judgments of length and numerical magnitude plotted against each other, Fig. 3 for the first group, Fig. 4 for the second. In both studies, because the correlations between line length and numerical estimates of the same ratios were effectively perfect (r = 1.00 in Study 1 and 0.99 in Study 2), the points cluster around a straight line. Were our subjects good, self-consistent judges of length and number? The slopes of these lines tell us that they are quite good, but not perfectly self-consistent. Because both the psychophysical functions on which these plots are based should have slopes of 1 in the coordinates used here, the cross-modal plots in these figures should approach a regression line with a slope of 1/1 or 1. Our first group was conservative in their use of measurements, however, particularly in their use of numbers to estimate line lengths: the ratio of the largest to the smallest numerical estimate was only 14.25, though the ratio of largest to smallest true length was 17.5.
Figure 5. Study 1. Cross-modality plot for linguistic stimuli: mean numerical and mean line-length magnitude estimates of the acceptability of the same Italian sentences. (Each of the 48 data points represents 32 numerical and 32 line length estimates on items in a cell of the factorial linguistic design of Table 1; solid line = observed regression line, $B = 0.67$; broken lines = 95% confidence intervals for population $\beta$.)

Nonetheless, the cross-modal plot in Figure 3 shows a slope of 0.96, a value close to the ratio of the two actual psychophysical slopes (0.98), as it should be, and close to the usual 'theoretical' slope of 1.\textsuperscript{12} In the second study, subjects were even less dependable judges of length and number. The regression line in Fig. 4 has a slope of 0.88, significantly shallower than the predicted value of 1.00, because these subjects slightly underestimated numbers via line lengths (max/min ratio 15), but overestimated lengths via numbers (max/min ratio 23). The difference between the two studies shows the kind of variation we should expect to find in ability or interpretation of a constant set of instructions.

VALIDATION. Figures 5 and 6 show the cross-modal plot for estimates of the acceptability of the same sentences expressed by the same subjects in the form of line lengths and numbers. As in Figs. 3 and 4, the figures include only averaged logs of estimates. Both cross-modal plots show nearly perfect correlations across modalities: subjects were self-consistent when rejudging the same sentences via different magnitude estimation modalities, just as they had been when rejudging the same physical and numerical proportions in the calibration phase. The slopes of the lines along which the judgments cluster differ between studies, however. Were our subjects good and consistent judges of acceptability? The subjects in study 1 were not: the shallow slope \( B = 0.67 \) in Fig. 5 falls far short of what theory or the calibration results predict (between 0.96 and 1.00). In study 1, where no explicit instructions were given about avoiding familiar numerical scales, subjects gave more restricted estimates of acceptability ratios when responding in numbers than when responding via lines: the ratio of the highest to the lowest geometric means is only 3.71 for numerical responses, while it is 6.68 for line responses. With only one or two exceptions, this group of subjects used numbers in the range from 2 or 3 to 10, the scale used in the Italian school system for assessment purposes. This result might have indicated an inability to make fine numerical estimates of acceptability, or it might merely be a case of defaulting to a familiar numerical scale in the absence of any suggestions to the contrary. To discover which, we instructed subjects in study 2 not only to use appropriate numbers but also to avoid restricting their choices to the numbers from 1 to 10.

--- 12 This cross-modal slope is based on an errors-in-both-variables regression model (Cross 1974, Lodge 1981) which minimizes the perpendicular distances from plot points to the regression line (see also Cross 1982). We use 'close to' here as an informal paraphrase of 'contains within its 95% confidence limits', that is, this is a likely outcome of sampling from a population similar to the one characterized by the theoretical value.
Figure 6. Study 2. Cross-modality plot for linguistic stimuli: mean numerical and mean line-length magnitude estimates of the acceptability of the same Italian sentences. (Each of the 48 data points represents 32 numerical and 32 line length estimates on items in a cell of the factorial linguistic design of Table 1; solid line = observed regression line, \( B = 0.93 \); broken lines = 95% confidence intervals for population \( B \).)

Subjects now judged the acceptability of sentences consistently across modalities. Figure 6 meets the psychophysical predictions. Data points closely approximate \((r = 0.99)\) a linear function with a slope \((B = 0.93)\) close to the predicted slope of 1.00. The change gives every appearance of being a direct result of the change in instructions. The range of numerical estimates of acceptability increased very markedly between studies, much more so than the range of line-length judgments: the ratio of highest to lowest mean line length in study 2 was 9.9, 50% larger than the corresponding ratio in the earlier study (6.7), while the ratio for numerical responses (8.4) was more than double the ratio produced under the old instructions (3.7). Subjects in the second study did not behave as if the instructions forced them to expand what they actually viewed as a severely restricted continuum. Had they been unable to discriminate more finely than the seven usable points of the academic scale allowed, study 2 should have been characterized by the signs of guessing: unsystematic use of numbers to reflect subjects' attempts to make impossible distinctions, poor agreement between numerical and line responses, and poor match in proportions across modalities. Figure 6 gives no such impression. Instead it displays orderly and consistent use of eightfold differences in judged acceptability.

6.2.3. DISCUSSION. Under suitable instructions even quite naive subjects appear to give self-consistent magnitude estimates of physical dimensions and of linguistic acceptability. In fact, when warned against falling back on familiar assessment scales, subjects were more consistent in maintaining their judgments of relative acceptability of sentences \((B = 0.93)\) than they were at maintaining judgments of relative physical magnitude in well-trodden psychophysical domains \((B = 0.88)\). Whatever subjects do when magnitude-estimating linguistic acceptability, and however odd they find the whole process at first, they clearly have this ability in their psychological repertoire, just as they have the ability to give proportionate judgments of brightness or prestige. If any of these subjective characteristics of the world were only bi-valued, the kind of results we report here would be difficult to produce. At the same time, the artifact in the numerical judgments of study 1 reminds us that the psychophysical tool must be applied carefully. Most of us have copious experience with scales that fail to reflect our full powers of discrimination in many areas, and we succumb to their limitations without complaint. An advantage of magnitude estimation is that it gives us the freedom to express as many distinctions as we can make. It would appear that we have to be explicitly released from our habits to use this freedom. In time, and particularly with subjects whose arithmetic skills are questionable, it may be wiser to use unfamiliar judgment modalities like line length, to avoid the artifacts of our individual relationships with numerical scales.

7. RELIABILITY. For any single study to offer generalizable results, a method of accessing human judgments must be reliable not only within but across subjects.
The validation studies just described demonstrate within-subject reliability. In this section, we test for consistent results between groups of subjects: we compare the numerical estimates made by the subjects in our successful validation study, study 2, with those offered for the same sentence stimuli by the subjects in Sorace’s original experiments (1992, 1993a,b). Both groups of subjects were native speakers of Italian living in Scotland. Certain other details differed. In Sorace’s study, the 36 subjects were, on average, longer term residents of the U.K. and slightly older than the present group of 32. In Sorace’s study, sentences were presented individually by overhead projector, timing was controlled by the experimenter, and subjects were tested in small groups, using pencil and paper to record their responses. Numerical magnitude estimation was only one of the techniques they used. In the current studies, subjects worked individually at PCs, controlled the time they took to respond, and performed magnitude estimation in and on several modalities. Replication despite these differences will indicate that magnitude estimation results are stable over some degree of methodological variation. To make the necessary comparison, we applied the appropriate statistical tools to answer two questions. First, we needed to know whether the different groups of subjects produced the same relative acceptability judgments for the same sentences. If they did, we would find a high positive correlation between the numerical magnitude estimates of acceptability by the two groups. Second, we needed to know whether the significant results of Sorace’s study were replicated, that is whether magnitude estimation supports delicate discriminations or is just incidental but loud noise. Effects which are significant at the .05 level may, after all, occur by chance once in 20 tests on a population for which the effect is not generally true. If Sorace’s results were adventitious events in an essentially random process of assigning numbers to impressions, then a replication of an original study might not produce the same results. 7.1. AGREEMENT AMONG ESTIMATES OF ACCEPTABILITY. In the test for agreement between studies, the present work was represented by the numerical magnitude estimates of acceptability from study 2.13 To make comparisons on maximally similar conditions, only first presentations from study 2 were used. Figure 7 plots the averaged logs of estimates from study 2 against those from Sorace 1992 sentence by sentence. The plot shows significantly close agreement: \( r = 0.89, t_{40} = 13.08, p < 0.001 \). The two groups of subjects gave very similar estimates of the relative acceptability of stimulus sentences. As the regression line on Fig. 7 illustrates \( (B = 1.09) \), however, and as we might have expected from the special instructions they were given, study 2 subjects used a somewhat wider range of numerical estimates. 7.2. REPLICATION OF SIGNIFICANT EFFECTS. Although the relative locations of sentences on the acceptability scale appear to be constant across studies, it could still be the case that clear differences between the cells of Sorace’s 1992, 1993 design might fail to reappear. As Table 2 shows, Sorace’s study is actually --- 13 Study 1 was also compared with Sorace 1992. In general, agreement was even stronger than it was with study 2, perhaps because only study 2 included a warning against limiting the range of numerical estimates. 
composed of three experiments, each of which presents all possible combinations of all the levels of several variables. Only a few of the comparisons among cells or groups of cells were critical to Sorace's theory and these were the basis for a comparison between studies. Sorace's conclusions were supported by Analyses of Variance, which determine whether the differences among cell means outweigh background noise, the incidental differences among items in the same cell. The form of ANOVA Sorace used makes all the comparisons possible in an experimental design, even those that are not directly pertinent to the linguistic issues around which the experiment was designed. For the subexperiments on unergative and unaccusative verbs, it was the interaction between the category of the sentence's main verb and the auxiliary used with the verb that provided the important effects on acceptability, since these interactions compared the acceptability of the preferred and the dispreferred auxiliaries with verbs of each kind from core to periphery. Because an overall observed preference for the correct auxiliary must be found before the change in auxiliary preference can be interpreted, the main effect of auxiliary is critical, too. For the restructuring verbs, it was the interaction of word order and form that should have an effect: in clitic climbing sentences, [+restructuring] versions (with *essere*) should be preferred to [−restructuring] versions (with *avere*); in cleft sentences the reverse preference should hold, while the other two word orders should allow either version. By examining these effects closely, we could determine how far study 2 and Sorace's study would give us different impressions of the effects of certain linguistic factors on acceptability. To find out how far results differed between experiments, we first ran ANOVAs, by subjects and by materials, for each of the three subexperiments which included data from both Sorace's study and study 2. These analyses followed the original designs set out in Table 2 with an additional variable for experiment. Significant interactions with this variable indicated dependable differences between the outcomes of these studies. For each subexperiment we also ran separate ANOVAs with the designs in Table 2. Whenever the crucial effects, or related effects, showed an interaction with experiment, the analyses for the individual studies could be consulted to determine the nature of the difference. In particular, it was important to determine whether any differences could be damaging to Sorace's conclusions, either because acceptability ordering reversed between studies or because the effect, though numerically present in both studies, was significant only in Sorace's. The only important effect to produce a significant interaction with experiment was the auxiliary effect within unaccusative verbs (by materials, \( F = 4.89, df = 1, 30, p < 0.035 \); by subjects, \( F = 1.99, df = 1, 66, p > .10 \)). The difference fell in neither damaging category, however, for both Sorace's subjects and the present group strongly preferred essere (for Sorace 1992: \( F_2 = 232.86, df = 1, 15, p < 0.0001 \); for study 2: \( F_2 = 243.68, df = 1, 15, p < 0.0001 \)). The only difference was that Sorace's subjects showed a stronger preference than the present group. This difference is typical of the two experiments: although none of the other critical results differed significantly, study 2 generally produced less marked contrasts than Sorace's study. 8. Conclusions. 
The empirical work we have reported suggests that magnitude estimation can be a useful tool in the study of linguistic acceptability judgments. The technique is easy to use informally, but warrants the additional effort needed to mount full-scale experimental studies, for it delivers delicate and robust distinctions among linguistic categories. The cross-modal validation studies indicate that magnitude estimation can be applied to linguistic acceptability in much the same way as to typical psychosocial continua: its validity comes from intrasubject consistency, which was easily achieved with instructions that encourage subjects to make full use of the numerical scale in expressing their impressions. The reliability study demonstrates that the technique gives intersubject consistency as well, despite modifications of procedure. We are unwilling to claim that magnitude estimation of linguistic acceptability is the philosopher's stone. Instead, we see it as a useful tool. Certainly this method should allow us to overcome the problems outlined in §1 of this paper.

Measurement can now be as fine as subjects' capacities allow. With no preemptive limitation of the measurement scale, the tension between relative and absolute measurement is lost as subjects build a whole scale by means of relative judgments. Gradience of grammaticality/acceptability can be captured empirically. Estimates of acceptability can be made consistent across large sets of examples without direct pairwise comparison, as the validation and calibration studies show. Estimates of differences in acceptability and of variation of acceptability judgment can not only be calculated straightforwardly but also produce statistically significant results, as the example from Sorace's work demonstrated. The lessons of psychophysics can profit us as they have profited social scientists for several decades.

Magnitude estimation per se will not do away with the artifacts that plague judgment techniques. In fact, much of the benefit of the technique should be felt in experimental research, where it can provide scope to deploy well-known design strategies against artifacts. For example, effects of context, in particular, of order of stimulus presentation, are as well known in psychophysics as in the study of linguistic acceptability. In magnitude estimation experiments, as in nonjudgment techniques, stimuli are presented in different orders to different subjects, or to the same subjects at different points in time. Because judgments can be affected by the modulus, a different modulus may be chosen for comparison on different trials. Averaged results sample all levels of the artifact in all critical conditions. If the result of interest is larger in scale than the variation induced by the artifact, results are still visible. Again, magnitude estimation will not change the fact that different kinds of subjects may perform differently. It does, however, give us ways of comparing their performance, as Sorace did in the study reported in §5. Here the advantage is that the technique is readily applied without much apparatus and that results may be comparable across studies.

--- A more conventional treatment might have been to use post hoc tests on the combined data, but these tests are so stringent that, when used with the combined variance levels, they sometimes fail to reveal the significance of those effects whose replication was in question.
In this case, as in others, magnitude estimation is a tool for exploring acceptability judgments as well as insuring against factors that affect them. With these tools in place, subjects' capacities can more easily become an object of study. Since it is now easy to subtract one estimate of acceptability from another, we should be able to partition acceptability judgments to study their components. As we have seen, Sorace 1992 used the difference between estimates for preferred and dispreferred auxiliary to show the effects of changing auxiliary while holding the rest of the sentence constant. Analogously, one might hold all of a judged sentence constant but vary its context, comparing presentation in isolation with presentation in a short text to determine how much judged acceptability differs in the two cases. Or one might track the difference between a preferred and a dispreferred form as the two are offered in different contexts. Similar manipulations for lexicalizations, social norms, frequent usages, and pragmatic plausibility are imaginable. Experiments of these general designs have certainly been performed. What the new measure of acceptability allows is a more powerful way of integrating the results. A flexible response measure and statistical techniques like linear regression should help us to discover the major factors contributing to acceptability judgments, and to elaborate theories explaining their operation (Robertson et al. 1993).

All these advantages are garnered at considerable empirical cost. How do they relate to small and imperfect exercises in professional judgment by linguists? First, as we have shown, even simple informal exercises in magnitude estimation do yield judgments which are worth pursuing, because we have reason to believe that judges will be self-consistent and will perform like other judges. To take advantage of the technique on an informal basis will not be costly: judging a dozen critical sentences three times in different orders, with a different modulus each time, and then averaging the results should take less than 15 minutes. The effect should be to permit better and more consistent distinctions in conventional cases, and to suggest new data for study. For example, the reader is invited to consider why Linguist A did not maintain a constant effect for all ECP violations.

More important among new objects of consideration is the kind of perceptual ability which underlies the formulation of acceptability judgments. We can consider two possibilities here though the data currently do not decide between them. Acceptability might be composed of psychosocial categories. It would then amount to a binary distinction, analogous to U and non-U, with mid-range judgments indicating only error of measurement. Results with this flavor are predicted if the underlying engine of acceptability is a grammar which makes a binary classification between those strings which are within the language and those which are not. It is clear that linguists do not believe that their judgments of acceptability are binary: hence ?, ??, ?*, **, etc. We have offered their conventional explanation for this fact: other factors interact with purely grammatical intuitions, lowering some judgments and raising others, and giving rise to variable mid-range scores which are the product of interacting sensitivities.
Whether or not other factors participate in acceptability judgments, acceptability might entail quite a different kind of ability, one that underlyingly resembles certain typical continuous psychophysical scales, like apparent brightness or loudness. If so, acceptability judgments should resemble such psychophysical judgments in creating a scale that is genuinely continuous, most orderly in the middle of its range, and most variable at the upper end. The fact that our results show more variance at the lower end of their range may indicate only that subjects were really judging unacceptability. Had we asked them to give the big numbers to the bad examples, the resemblance between our results and the typical sensory measures would have been more striking. These two models for the underlying ability, the psychosocial categories and the psychophysical continuum, make different predictions about the relative robustness of mid-range and extreme judgments. The present materials do not permit the necessary comparison, for they were not designed to cover the full range of acceptability values from the grotesque to the innocuous. With the right data, we should be able to test this and other hypotheses about the sources of this important linguistic behavior. On the basis of the results reported in this paper, however, we do have a tool for discovering what kind of perceptions linguistic intuitions create.

REFERENCES

LODGE, MILTON; DAVID CROSS; BERNARD TURSKY; MARY-ANN FOLEY; and H. FOLEY. 1976. The calibration and cross-modal validation of ratio scales of political opinion in survey research. Social Science Research 5.325–47.

Human Communication Research Center
University of Edinburgh
2 Buccleuch Place
Edinburgh EH8 9LW
United Kingdom

[Received 25 July 1994; revision received 12 July 1995; accepted 31 July 1995]
Writing the Nation
Transculturation and nationalism in Hispano-Filipino literature from the early twentieth century
Villaescusa Illán, I. (2017)

Chapter 3

Imagining a Modern, Independent Philippines: Active Transculturation in Paz Mendoza's *Notas de viaje* (1929)

La satisfacción del que viaja no es precisamente visitar los cabarets, museos y ver una ciudad después de otra, sino en ver de cerca cómo piensa, trabaja y lucha la humanidad. (Mendoza 1929: 103)

[The satisfaction of the traveler is not precisely to visit cabarets, museums and see one city after another, but to see closely how humanity thinks, works and fights.]

Introduction

Through the engagement of Filipino writers with orientalist/colonialist discourses, western models of culture and thought also became a point of reference for the imagination of their own community. Specifically, the ideal of European modernity guided the Hispano-Filipino imagination in the construction of its national identity during the puzzling period following independence from the Spanish empire and the US occupation of the Philippines (1898-1946). Right in the middle of this period, Maria Paz Mendoza Guazón, a Filipino doctor and Professor of Medicine, published *Notas de viaje* (1929), a compilation of travel notes gathered on a world trip she completed between 1926 and 1927, in which she visited over 21 countries in Europe, America and the Middle East as part of an educational project supported by the Filipino government. Mendoza contributed to this project with detailed reflections on foreign customs observed during her trip. She introduces her account as follows in the preface:

Mirado desde el punto de vista externo, o sea del indumentario literario, el libro posiblemente no tenga nada de galano, y si hoy lo publico, no me anima otro propósito que el de ceder a los ruegos de algunos amigos, y sobre todo, al impulso de un deber moral y cívico de dar cuenta a mi pueblo de cuanto he visto, observado y aprendido fuera de la tierra donde he nacido.
[…] el relato de mi viaje podría ser de algún provecho para los míos, presentando ante sus ojos, reflejando en su mente todo lo bueno y útil de los demás pueblos, que su espíritu pudiera asimilar para fortalecerlo, sin perder el sello característico de su individualidad, mejorando lo poco o mucho bueno que tenemos como pueblo oriental moldeado por los ideales y la cultura de Occidente (Mendoza iii, emphasis added)

[From an external point of view, that is of the literary form, this book possibly lacks some elegance, and if I publish it today, I am only encouraged by yielding to the entreaties of some friends, and especially, to the impulse of the civic and moral duty of giving account to my people of all I have seen, observed and learned outside the land where I was born. (...) the story of my trip could be of use for some of my people, presented in front of their eyes, reflecting in their mind all that is good and useful from other peoples, hoping that their spirit could assimilate in order to gain strength without losing the characteristics of its individuality, improving as much or as little good we have as an oriental people moulded by the ideals and culture of the west]

--- 78 Her travel notes are organised in the form of diary entries by country visited: US (Washington, New York, Key West), Cuba, England, France (Paris, Nice, Marseille, Corsica), Belgium, The Netherlands, Germany, Denmark, Sweden, Norway, Austria, Italy (Venice, Florence, Rome, Naples), Spain, Greece, Turkey, Syria, Lebanon, Palestine/Israel and Egypt.

Starting by establishing the social and political dimension of her writing as a moral and civil duty, Mendoza expresses the hope that her travel notes will benefit ["ser de algún provecho"] her people as an educational text. Her intentions to enlighten her people are attached to ideas of assimilation and improvement that can be read as derived from a spirit of achievement or, given the colonial conditions of the Philippines, as expressing a felt necessity to become something foreign perceived as superior. For Mendoza, Filipinos "are an oriental people modeled by the ideas and cultures of the West," for "good or bad," who could benefit from "assimilating" to something else that will "strengthen their spirit." Mendoza's rich and provocative travel notes are the point of departure for my analysis of her construction of a peripheral vision of modernity as an active process of transculturation. I will build on the concept of transculturation as I have used it so far by looking at the work of Fernando Ortiz, Angel Rama and Mary Louise Pratt, where it appears as a dynamic process of intercultural connections that creates possibilities for transforming one's own community by appropriating parts of other cultural systems. In Pratt's words, transculturation designates "how subordinated or marginal groups select and invent from materials transmitted to them by a dominant or metropolitan culture. While subjugated people cannot readily control what emanates from the dominant culture, they do determine to various extents what they absorb into their own, and what they use it for" (1992: 6).
Mendoza’s travel writings expose the active attachments to and detachments from foreign influences - especially around the idea of modernisation - that affected Filipino culture during the Spanish-American period, and allow me to map the itineraries of these attachments and detachments, which here appear not so much as outcomes of past colonial contact, as in the work of Balmori and Gurrea, but as strategic tools to compose a future vision of an independent Philippines. Notas de viaje will help me to expand the concept of transculturation, most of all in terms of its temporality. I argue that transculturation can not only be read as a consequence of cultural mixing observable in “real life” after cultural contact has occurred, as happens, for instance, with transcultural architecture (Hernández, Millington and Borden 2005) and “regional” literature (Rama 1997) in Latin America,\textsuperscript{79} but can also be interpreted as a precedent to cultural transformation, an aspiration \textit{to be like others} perceived as superior. As an impulse to imagine the possible transformations that contact with others could bring to one’s own community, transculturation produces a hypothesis about the future that incorporates past and present perspectives. In \textit{Notas de viaje}, this hypothesis, concerned with the question of how to conceive modernity in the Philippines, is based on evocations of the past and visions of the future enabled by Mendoza’s present experience of travelling the world. The tentative condition of this hypothesis and the global reach of Mendoza’s journey are what lead me to view \textit{Notas de viaje} as an active transcultural project. After discussing some of the relevant literature on the genre of travel literature, including from a postcolonial perspective, I will analyse significant fragments from \textit{Notas de viaje} to identify the elements that Mendoza considers worth “assimilating to” and the different forms of attachment to (identification) and detachment from (contestation) the places and people she encounters. What attitudes does \textit{Notas de viaje} reveal towards foreign and local forms of modernity? How does Mendoza conceive of the idea of a Filipino national identity in the present and in the future? How can \textit{Notas de viaje} be read as an active project of transculturation? \textbf{Travel Literature and (Post)Colonial Theory} My interest in using a piece of travel writing to explore the concepts of attachment and detachment in the context of (post)coloniality/modernity lies in the approach and themes with which travel writing is concerned: its traditional association with imperialism, its inevitable construction of otherness and the subjective gaze of the traveller/writer. Travel writing has been regarded as a by-product of imperialism that helped Europe and North America to justify and develop colonial enterprises. Two seminal works have emphasized this, albeit in different ways. Edward Said’s \textit{Orientalism} (1987) and Mary Louise Pratt’s \textit{Imperial Eyes: Travel Writing and Transculturation} (1992) both reveal the basis on which travel writing was appropriated by colonialism as it formulated discourses on difference and contributed to the politics of colonial expansion. 
While Said focuses on the idea of a spreading oriental discourse that constructed the idea of Western superiority, Pratt identifies the aesthetics of specific travel narratives, their capitalist/colonial agendas and their assumed universal knowledge production. Having already discussed some aspects of Said and Pratt's work in Chapter 1, in my analysis of Mendoza's travel writing I will expand on some of the terms that Pratt elaborates, particularly the 'contact zone' and 'transculturation,' respectively defined as the location where cultural contact takes place and the process of cultural transformation that such contact produces. A more recent work, *Postcolonial Travel Writing: Critical Explorations* (2010), edited by Justin D. Edwards and Rune Graulund, aims to differentiate imperial travel writing - seen as exoticising by most critics - from postcolonial travel writing - perceived as an attempt to decolonise knowledge. It highlights the multi-directional approach of the latter narratives in contradistinction to the uni-directionality of the former.\textsuperscript{80} The essays contained in the collection (especially those inspired by the works of Amitav Ghosh and V.S. Naipaul) demonstrate that postcolonial travel writing is a more complex and varied textual form. It is a genre that articulates the position of the travelling subject in relation to issues derived from border narratives, such as place and space, belonging, identity, nation or race. It also aims at pluralising knowledge by "articulating experiences and ontologies that are often removed from dominant European or North American productions of knowledge" (Edwards and Graulund 2). In postcolonial travel writing, peripheral narratives challenge assumed ideas of the centre(s) as a form of "writing back" (Edwards and Graulund 2). Mendoza's *Notas de viaje* stands in a special relation to the above-mentioned theoretical frameworks in terms of the historical context of its production: it was written at a time when the Philippines was an ex-colony of the Spanish Empire (since 1898) but a neo-colony of the US. Thus, it is neither a postcolonial text nor strictly an imperialist one - despite the fact that, in places, it participates in colonial and orientalising discourses, much like other examples of early twentieth-century Hispano-Filipino literature. As I will show, *Notas de viaje* undermines neat geographical and political distinctions such as centre/periphery and colonial/postcolonial. The construction of otherness that earned travel writing its bad reputation is an essential part of the genre, even in postcolonial or contemporary narratives.\textsuperscript{81}

--- 79 Rama (1997) uses the theory of transculturation to explain Latin American literature, specifically the narratives by what he describes as "regional" writers like Márquez, Carpentier and Vargas Llosa, who applied foreign techniques and styles to re-articulate the realities of their countries. Hispanic modernism is also an example of transculturated literature, a re-exploitation of French and Spanish canons attributed to Rubén Darío, as I showed in Chapter 1.

80 I offered an example of how postcolonial writings open up new narrative routes in Chapter 1 when looking at Latin American modernist writing as a form of peripheral orientalism.

81 See Edwards and Graulund's *Postcolonial Travel Writing: Critical Explorations* (2010) and Debbie Lisle's *The Global Politics of Contemporary Travel Writing* (2006). The latter explores the hegemonic representation of national characters (for instance, the Mexican as inferior and the American as superior in Paul Theroux's *The Old Patagonian Express: By Train through the Americas* (1979)) as proof of the colonial heritage present in the gaze of contemporary travel writers through what Lisle calls the "colonial vision." In contradistinction, Lisle refers to the celebration of differences in positive terms, as in Bill Bryson's *Neither Here Nor There: Travels in Europe* (1998), as the "cosmopolitan vision," which, according to her, may simply be "a blander mutation of the colonial vision," as it still consolidates and reproduces the privileged position of the traveller (3).
Debbie Lisle (2006) puts it as follows: "Travel writers still need other places and people to visit and write about - which means that travel writers must always engage in the production of difference" (24, emphasis in original). What needs to be asked about travel writing's inevitable othering is how these narratives of difference are constructed, that is "how travel writers produce, project and pass judgment on this difference" (Lisle 24). Lisle offers a productive method for examining how Mendoza's position towards contemporary issues in her own country and in the countries she visits is represented in *Notas de viaje* in a way that is not exclusively imperial, postcolonial or cosmopolitan. With regards to literary travel writing, Mary Baine Campbell (2002) offers a useful summary of its most discussed features:

Formal issues that have been fully explored with relation to travel writing in recent decades include the nature and function of the stereotype, lexical matters such as the hidden etymologies (…), the subjective presence of the author(s) in texts of knowledge, truth value in narrative writing, the independent or hard-wired shape of narrative itself, the rhetorical nature of 'fact', 'identification' in reading (with its consequences in social and political life), the representation of time, inter-cultural 'translation', and the function of metaphor and other figures. (263)

The questions of subjectivity, representation, truth, knowledge and cultural translation that Campbell lists are constitutive of most artistic works, yet the interplay of these aspects in travel writing is particularly complex. Travel writing accounts - especially in the imperial narratives that constituted the basis for Western epistemologies - were ascribed objectivity and truth value. However, just like there is no objective narrative (as I explained in Chapter 2 using Bal's Narratology), there is no truth value in travel writing but only the construction of a particular vision based on the travel writer's subjectivity. Locatelli (2012) describes the ability of the travel writer to mobilise certain images among his/her readership with the term 'eloquence':

The "author" of travel literature re-created a journey that is not simply a referential account of visited places, but is "eloquent" to the point of moving the reader's imagination, by informing him/her about places in such a way they appear (i.e., they emerge from the obvious, i.e., from the "un-seen" which is under everybody's eyes). (67)
With the "obvious" or "un-seen" Locatelli refers to that which is no longer noticed by the inhabitants of a place but is perceived by the traveller, who makes it stand out. The quote suggests, moreover, that the travel writer will emphasise those particular elements that the potential reader will find meaningful. Locatelli notes that 'eloquence' is particularly important in postcolonial literatures because of the way postcolonial writers appropriate another language to express their own view of the world. Rather than seeing this as a mere symptom of assimilation, I contend that using Spanish as a referential cultural code to describe the reality of the Philippines and to express a particular worldview is an act of re-appropriation that creates space for contestation. In this chapter I will look at how Mendoza's language and her articulation of cultural stereotypes, cultural translations, facts, comparisons and metaphors acquire deeper meaning when read in the context of her own narrative 'eloquence.' I offer an analysis of *Notas de viaje* that traces Mendoza's journey from beginning to end, culminating in the reception of her book in the Philippines. I show how Mendoza's account 'eloquently' reveals her preoccupations and interests (in education and the modernisation of the Philippines) but I also point to certain ambiguities that challenge the consistency of her project of active transculturation.

**Departure: The Question of (In)dependence**

The historical context and personal circumstances in which Mendoza wrote *Notas de viaje* are key to understanding the text in terms of its content, approach and style, as well as the author's concerns. Mendoza travelled around the world on two occasions: first in 1921 in the company of her husband (who died in 1924), and again in 1926 with one of her younger sisters. The Philippines had been independent from the Spanish Empire since 1898 and had started to see the results of modernisation policies implemented by the US, most prominently English-language education. However, the nationalist movements for independence in the Philippines had not ceased to exist. In the period between 1901 and 1935, right after the war of independence from the US (1898-1901) and before the establishment of a ten-year Commonwealth Government (1935-1946), the Philippines became a territorial government of the US (the Insular Government of the Philippine Islands), first governed by William Howard Taft, who would later become US President. This period was characterised by intense negotiations between the US government and Filipino nationalists such as Manuel L. Quezón, who would become the first Filipino president of the Commonwealth period, and Sergio Osmeña, who would become the fourth president of the Philippines after WWII, from 1944 to 1946. Filipinos supported the Americans during WWI but continued with the independence campaign afterwards. Among the agreements that were reached thanks to the push of Filipino nationalists was the Jones Bill (1912), which asked for the independence of the Philippines within seven years. This bill was, however, renegotiated and passed again, no longer setting a timeframe for independence but insisting on 'favorable conditions' for independence (Wong 1982). This made not only Filipino politicians but also many well-educated Filipinos and intellectuals, such as Paz Mendoza, consider what would make a strong argument for independence, while at the same time envisioning alternative futures for their country that did not involve continued foreign management.
It was in the middle of the period of the Insular Government, between 1921 and 1929, that Mendoza took her two world trips. At the time of her second departure in June 1926, Mendoza was Regent of the College of Medicine of the University of the Philippines, a respected academic in the field of pathology, the editor of the Spanish and English women's magazine *La Mujer*, and one of the founders of the Filipino Women's Association. She was, then, a prominent public figure who earned the support of the Filipino government to go overseas as a representative of her country. Mendoza was determined to collect a compendium of ideas to be implemented upon her return to the Philippines. She wrote over three hundred pages of notes that include private thoughts; historical, political and social reflections; anthropological comments; descriptions of monuments; reflections on urbanism and hygiene; rhetorical questions and hypotheses about the situation at home and abroad; and occasional anecdotes that remind the reader that she was also a tourist.

At the beginning of Mendoza's trip, she attended a conference in Williamstown, New York, on Filipino Independence, which, as noted above, was the dominant problem in American-Filipino relations. Mendoza quotes the speeches of different congressmen, amongst whom J. M. Wainwright from New York is particularly relevant, as he touched directly on themes of global politics such as the world's colonial race (in which the Philippines were an attractive trophy wanted by both England and Japan) and the question of Filipino independence. Mendoza quotes Wainwright expressing a desire to keep control over the islands, claiming that if it were not the US it would be some other nation controlling the territory:

Hay otras naciones que también desean poseer Filipinas como Inglaterra y Japón. No cree prudente el echar a las islas en el caldero de las cuestiones hirvientes del Este. Cree que no es Japón quien ambiciona Filipinas sino Inglaterra, porque Filipinas está en el camino a China, mientras Japón ambiciona Australia. Relató que había encontrado un oficial inglés en Borneo, quien le comunicó que si algún día Japón intentara llegar a Filipinas a la hora de la cena, Inglaterra estaría en Filipinas a la hora del almuerzo. (24)\(^{82}\)

[There are other nations that wish to possess The Philippines, such as England and Japan. He does not believe that it is prudent to throw the islands into the boiling pot of issues concerning the East. He thinks that it is not Japan which wants The Philippines but England, since The Philippines is on the way to China; meanwhile, Japan has its eyes on Australia. He told us that he had encountered a British official in Borneo who told him that if on any day Japan attempted to reach The Philippines by dinnertime, England would be in The Philippines by lunchtime]

Mendoza continues by quoting Wainwright's remark that the US had gained sovereignty over the Islands and that Filipinos had reacted "espléndidamente" [splendidly] to the American initiatives (Mendoza 25). On the question of Filipino independence, he explains that it was only a fair request, "puesto que así se les había hecho creer" [since they were led to believe so] (Mendoza 25). Then he suggests that the Philippines have brilliant men who could rule their nation as an independent territory, but counters that the brilliant men he met in Manila were not "los verdaderos representantes de las masas" [the true representatives of the masses]: "Los Filipinos de Manila difieren mucho de los de los campos.
Ahora bien, ¿están estos acaso preparados para ser independientes?" [Filipinos in Manila are very different from those in the countryside. Therefore, are they ready to be independent?] (Mendoza 26, emphasis in original).

Wainwright continues his speech by addressing religious issues, such as the division between Muslim Filipinos ("moros") and Christian Filipinos, claiming that the former were more inclined towards remaining American whereas the latter, as a result of the Spanish colonisation, favoured independence. Mendoza expresses skepticism about the meaning of the American "sympathy" for the Moros, suggesting that rather than religious and cultural aspects, it was Mindanao's economic value as "the land of rubber" that provoked the sympathy of the Americans (25).

These notes reveal Mendoza's deep engagement with the political, social, religious and class tensions that affected the Philippines in the early twentieth century. Numerous conflicts divided the islands: rich vs. poor, urban vs. rural and Muslim vs. Christian. These oppositions ran parallel to other dichotomies, such as rich and educated city-dwellers (Spanish-speaking, pro-independence Filipinos who wanted to get rid of the Americans and their unfulfilled promise of liberation) versus poor, uneducated rural populations (speaking either Tagalog or other dialects), who had been led to believe that the new colonisers were benefactors. Mendoza's position within these divisions is not easy to pinpoint. She does not regard the Filipino people through the lens of colonial dependency; ideologically, she aligns herself with the Filipino *ilustrados*, who defended the idea that Enlightenment would grant freedom and independence, while also, as *Notas de viaje* demonstrates, wanting to enlist the help of foreign powers, not so much in the form of direct intervention but as models of progress that the Philippines could then follow as an independent country.

Unlike Wainwright, who focuses on the divisions within the Philippines, Mendoza insists that the Philippines are indeed ready to be independent, as she explains to an American reporter of the *Boston Transcript* in Washington:

> Esta ansia y clamor de mi pueblo por la independencia demuestra claramente que mi país no es un país de salvajes, y que mi pueblo es un pueblo culto y educado. Un pueblo inculto no puede aspirar a ser independiente porque teme lo desconocido, obra según la tradición e ignora qué es ser libre. (3)

> [This yearning and cry of my people for independence clearly demonstrates that my country is not a country of barbarians, and that my people are cultured and educated. An uneducated nation cannot aspire to be independent because it fears the unknown, acts upon tradition and does not know what it is to be free.]

Mendoza sets out to challenge preconceived notions of the Philippines in the US (and elsewhere) as a tribal land of barbarians at constant religious war, also in order to justify her own position as a female doctor from South East Asia travelling the world in the early 1920s - a time when most European universities did not yet accept women. Mendoza, like most of the Hispano-Filipino writers of the time, belonged to one of the 'anomalous' communities in the islands, existing, in the words of Henry James, as 'Aliens at home.' She was too Filipino to be Hispanic (or American for that matter) and too Hispanic to be Filipino, as Álvarez notes (2014).

---
\(^{82}\) The original speech was in English but Mendoza paraphrases it in Spanish in her travelogue.
Mendoza writes that she is grateful for the liberal political agenda the USA brought to the Philippines, which granted, for instance, women like herself access to education (36). But she is equally proud of the Catholic education brought by the Spanish. This reveals the tension between her attachment to her colonial heritage (in terms of language, religion and education) and her desire for national emancipation.

---
\(^{83}\) In his travel book *The American Scene* (1907), James starts questioning the legitimacy of the term "American" when observing the wave of European immigrants: "Which is the American... which is not the alien?" (qtd. in Carr 80).
\(^{84}\) Most Filipinos have a Spanish first and last name because they were purposely renamed during the Spanish colonial period for the purpose of the census. Mendoza, however, received her name by family ancestry. Her father, Isidro Mendoza, belonged to a wealthy Spanish family and her mother was the daughter of the Governor of Pandakan (Alzona 5). Alzona describes Mendoza as having a "fair complexion, wavy brown hair" and measuring "five feet and four inches" (25), which suggests that she was of mixed Spanish and Filipino heritage.

**Explorations: Cultural Exchanges and Cultural Stereotyping**

As a tourist, Mendoza recalls being hassled in Nazareth by groups of poor children, whom she tries to get rid of while murmuring to other tourists that they are "pordioseros prematuros" [premature beggars] (269). The aggressive behavior of her Egyptian guide, who, in challenging the unskilled tourists to ride a camel, deliberately causes trouble in order to ask for a rescue reward, provokes Mendoza to write, angrily: "¡Cuántas ganas tuve de tirarle a la cara todas las monedas que tenía, no por caridad o premio a su servicio, sino para castigar su salvajismo!" [I just felt like throwing all the coins I had into his face, not as charity or to reward him for his services, but to punish his savagery] (283).

Mendoza feels irritated by the guide's behavior and by the begging children, but her choice of words carries further connotations. Her claim that the children are "premature beggars" portrays them as having a preconceived destiny, whereas her wish to punish the guide's "savagery" elevates her over the locals as more civilized. This contrasts with her comments concerning multilingual, cosmopolitan travellers, whose company she enjoys, appreciating the intellectual exchanges that hours of sea and land travelling bring her:

La franqueza con la que se expresan estos occidentales me encanta y subyuga. Para nada tienen en cuenta que su opinión no coincida con la de su interlocutor, ni les importa el odio de este; no se ve en su fisonomía ese esfuerzo de agradar y de demostrar que todo cuanto oye y ve le satisface, aun cuando interiormente sienta lo contrario. (41)

[I find the honesty with which western people express themselves both enchanting and threatening.
They do not care if their opinion doesn't coincide with that of their interlocutor, nor do they care about any ill feeling from him or her; it is impossible to derive from their body language a confirmation that they kindly agree and are satisfied with what they hear and see, even when inside they are feeling otherwise]

Uno de los turistas, un viejo millonario holandés, viendo a los bogadores semirecostados en sus botes y hablando tan fuerte como si estuviesen peleando, interrumpió mi meditación: "Doctora" me dijo, "este es el Oriente, donde todos hablan y pasan el tiempo discutiendo sin entenderse unos a otros". "Tiene usted razón", le contesté, "pero este es el Cercano Oriente, no el Lejano de donde soy. Allá no se discute mucho, los orientales sólo sabemos trabajar y obedecer. En prueba de ello usted a las posesiones holandesas en Java, Sumatra y otras". (254)

[One of the tourists, an old Dutch millionaire, watching the rowers lying down on their boats and speaking loudly as if they were having a fight, interrupted my meditation: "Doctor," he said, "this is the East, where all speak and spend their time arguing without understanding each other." "You are right," I replied, "but this is the Middle East, not the Far East where I came from. Over there, we do not argue so much; we Asians only know how to work and obey. The Dutch possessions in Java, Sumatra and elsewhere are proof of this."]

---
\(^{85}\) Mendoza spent quite some time on steamboats from New York to Southampton and from Cairo to Nice, as well as on extravagant train rides, such as the Berlin-Baghdad Express, which she describes as "the golden dream of Germany" (259).

These passages show how much of the travelling experience is an exercise in confirming or negating pre-existing stereotypes about oneself and others (Occidentals are frank and prone to arguing intellectually with others, while Orientals are loud and non-confrontational, drawn to work and obedience). The comment made by the 'Dutch millionaire' is not retold in Mendoza's notes as an attack on her own identity or place of origin; on the contrary, she makes sure that she establishes a difference between the Far and the Middle East with regards to their way of speaking - loud versus quiet - challenging her travel companion's assumptions and placing the negative stereotype on someone else: Mendoza may be Asian but she is not that kind of Asian. Her following comment, that the submissiveness and docility of East Asians benefited Dutch colonisation in Indonesia, is also ambiguous. Rather than bringing up a counter-narrative about exploitation, she refers to Asian people's (perceived) consent to colonialism. Here, she is provocatively suggesting that western domination happened not because of the strength of the West but because of the character of Asian people. Mendoza views colonialism not as a one-directional phenomenon that exclusively brought exploitation and abuse, imposing one culture on another, but as accompanied by a civilising process that was also profitable to the colonised:

[Vemos otros que al] "conquistar pueblos mucho más débiles y primitivos que ellos, vigoriza a éstos con su sangre y su espíritu, y así ejercen una influencia bienhechora y civilizadora. Por ejemplo Roma y España" (260)

[Some people], when conquering much weaker and primitive people than themselves, invigorate those with their blood and spirit, thus exercising a beneficial and civilizing influence. For instance, Rome and Spain.
This comment sounds shocking, but reveals some important characteristics of Mendoza's thinking. First, it speaks to her scientific gaze; as a Darwinist, she distinguishes 'primitivism' and 'civilisation' as stages in a process of evolution in which the idea of 'blood mixing' is perceived as a way to improve humanity: the fittest (most adaptable) will survive. However, as her comment suggests, not all cultures are equally tolerant of racial and ethnic mixing. For Mendoza, the ones capable of colonising are the fittest. This idea exposes her attachment to an opposition between what she perceives as 'civilised' societies (mostly European) and 'savage' ones (such as that of the Egyptian guide), which, in turn, justifies her search for transferable cultural models within the 'civilised' world. Second, this quote exposes Mendoza's ambivalent (dis)identification with her own Asianness. On the one hand, she challenges the Dutch millionaire's stereotype about all Asians being loud, while confirming it of some Asians, namely those from the Middle East (who happen to be the workers on the boat that she is travelling on). On the other hand, her personal circumstances - her class, education and ethnicity - allow her to talk about Asians but also to identify as one: 'we Asians.'

Mireille Rosello (1998) explains the practice of entering and leaving stereotypes as "declining the stereotype," which means producing, on the one hand, "delicate decisions […] potentially strident political statements" or else "apparently innocent and quite socially meaningless activity" (10), and, on the other, a variety of contextual meanings through form and grammatical function:

declining a word means acknowledging the various formal identities that one element of language must adopt depending on its position and role within a larger linguistic unit […] it involves paying attention to the formal characteristics of the stereotype so as to control its devastating ideological power. (10-11)

Using Rosello's linguistic metaphor, Mendoza "declines" the stereotype of the Asian by playing with the fixed root 'Asian' and the variable endings 'obedient,' 'civilized' and 'wealthy.' Mendoza perceives herself as Asian, but not (quite) the type of Asian that speaks loudly (and is powerless). Rather, she likens herself to the "obedient" Indonesians (who are powerless but "civilized" by colonisation). At the same time, however, she levels up with the (educated) Dutchman by engaging in a political conversation with him, very much unlike the "non-argumentative" Asians that she claims to identify with. The fact that both the Dutch millionaire and the Filipino doctor are tourists with a similar social status brings them closer together than the geopolitical distance between their places of birth would suggest.

Cultural exchanges and cultural stereotyping are instrumental for Mendoza to reach an understanding of what she is seeing and, ultimately, to produce her vision of the ideal Filipino nation. Many times, she incorporates into her travel notes examples of contemporary life in other countries that enable her to confirm the national stereotypes she associates with different countries - and, ultimately, the ideological power of these stereotypes that Rosello alerts us to. "Now that I know the English a bit better, I think that their commercial success is based on their honesty" (46) […] "you can take the word of an English man, those are of leather" (47, English in the original), Mendoza writes, for example, while visiting London.
Her confidence that the Englishman working at the shoe store she visits is not lying about the material of the shoes leads her to think that honesty will produce commercial success. Later in *Notas de viaje*, she generalises about Norwegians, Czechs and Germans:

The solidity of the black rocks that constantly distill water, their waterfalls, their fjords, their mountains covered in snow and, especially, the blue-green color of the sea that is reflected in their eyes seem to have moderated the adventurous but firm spirit, reflexive and tolerant, of the Norwegians.

The Czech citizen has an honest character. After a few minutes of conversation with him, one realizes that the Czech and the German are like water and oil.

The basis for the idea of Norwegians having adventurous but firm spirits is located in their rugged environment. "¿Tiene acaso alguna relación la geografía del suelo y la lucha por la existencia con la manera de ser de los habitantes de un pueblo?" (88) [Is there indeed any relationship between the geography of a place and the fight for existence with the manner of being of its inhabitants?], she wonders. Following this question (which provides yet another example of her Darwinist approach to science and, by extension, culture), Mendoza derives the national character of the Germans (now assessed more positively than in the comparison with the honest Czechs) from the urban planning of Berlin:

Berlin seemed to me at first to be a reflection of the plain and severe face of Bismarck or the ex-Kaiser Wilhelm II. Its long, wide, clean and paved streets, its solid buildings and its monuments, which do not embody a morbid and coquettish nudity, but all that inspired love for science, respect for the rulers, veneration of national heroes, stimulation of physical exercise, emulation of courage and admiration of glory, inform the visitor that he is in a city where order, discipline, study and seriousness constitute the core of its inhabitants.

Patriotism and national pride are seen to run through Berlin's statues of 'national heroes,' which are reminders of a 'glorious past' and inspire 'respect for the rulers,' as they do through German citizens' minds. In Mendoza's eyes, Berlin's past is monumental and is monumentally inscribed in the city's aesthetics: solid, ordered, and serious. The idea of a national consciousness emanating from the city's past and present echoes Svetlana Boym's idea of 'restorative nostalgia' (2001), discussed in Chapter 2 as an attachment to the common national past (in opposition to the individual experience of it) produced by the institutionalisation of history - visible in its commemorative monuments - that legitimates the State's official ideology.

Mendoza cannot but encounter European nationalisms as she visits countries like Germany and Italy in the period between the wars. It is perhaps understandable, given the struggle for Filipino independence, that she dedicates twenty-nine pages to Germany's history, education, industry, government laws, tax systems and transport infrastructure, and a remarkable fifty-three pages to Italy, mostly on history and architecture, as both these nations were being re-built, literally and metaphorically, upon their past:

La primera vez que vine a Europa no pude visitar este país, porque en él reinaba el caos. En cambio, ahora, cualquiera puede viajar en tren tranquilamente, y al final del viaje le presentan la cuenta incluyendo la propina.
Lo mismo sucede en los hoteles y restaurantes. Creo que todo este orden se debe a Mussolini. (189)

[The first time I came to Europe I couldn't visit this country for it was chaos. Now, however, anybody can travel there by train in peace, and at the end of the trip one is handed the bill including the tip. It is the same in hotels and restaurants. I think all this order is thanks to Mussolini.]

For Mendoza, Mussolini is "bringing Italy out of chaos" through setting up safe railway systems, implementing taxes and "unifying" Italy with one language:

Entonces le hablé [a la Condesa de O, de Viena] de Italia, de Mussolini. "Un país que como el antiguo imperio austro-húngaro estaba compuesto de diferentes razas y cada una de estas con su tradición y lenguaje no es fácil de gobernar. Mussolini es una gran figura de actualidad" dijo ella. (144)

[Then I talked [to Countess O, of Vienna] about Italy, about Mussolini. "A country that, like the old Austro-Hungarian Empire, was constituted by several races, each of them with its own tradition and language, is not easy to govern. Mussolini is a great figure nowadays," she said]

¡Ojalá que el esfuerzo del Gran Mussolini, el tribuno moderno, el hombre de hierro, de unificar a todos los de civilización latina, se vea coronado con éxito! (168)

[I wish the effort of great Mussolini, the modern magistrate, the iron man, to unify all people of Latin civilization to be successfully attained!]

Mendoza's positive comments are, on the one hand, influenced by her transitory experience of the country as a tourist: she finds it more comfortable to pay bills including taxes and tips than to haggle with interpreters and merchants as in Constantinople's Gran Bazar. On the other hand, she identifies the historical existence of 'several races, each of them with its own tradition and language' and the need to improve the country's infrastructure as issues that resemble those of her own country. Mendoza sees in Mussolini's Italy a united and independent community, and imagines the same for her country. Her attachment to the idea of a unified nation emerges with her admiration of nationalist movements in Germany and Italy, but also of other more peripheral communities, such as, for instance, the Catalans and their struggle for independence and the Zionists in Palestine. In a brief account of four pages dedicated to Spain, Mendoza includes a conversation she had with a Catalan proponent of independence:

El catalán se considera catalán y no español, prefiere hablar el catalán que se parece al francés, simpatiza con los ideales del pueblo filipino, porque también aspira a ser independiente. "¿pero no sois españoles?" le decía a uno que era un furibundo filipinista e independentista. "No, señora, Cataluña comprendía Gerona, Barcelona, Tarragona y Lérida en España y Rousillon en Francia. Antes del siglo XV teníamos nuestro rey, pero cuando Fernando de Aragón se casó con Isabel de Castilla, la Católica, formamos parte de España y perdimos nuestra independencia. Cataluña es rica e industriosa y puede gobernarse sola." (201)

[Catalans think of themselves as Catalan, not as Spanish; they prefer to speak Catalan, which is similar to French, and they sympathise with the Filipino people because they also want to be independent. "But, are you not Spanish?" I asked a man who was a fervent filipinista and independentist. "No, ma'am, Cataluña comprised Gerona, Barcelona, Tarragona and Lerida in Spain and Rousillon in France.
Before the fifteenth century we had our king, but when Ferdinand of Aragon married Isabel of Castilla we became part of Spain and lost our independence. Cataluña is rich and industrious and can govern itself."]

These passages show how Mendoza attaches herself to what she perceives will be useful examples for her people in building a Filipino nation. Again, however, there is also ambiguity, as when, despite describing the admirable respect and faithfulness that Germans profess towards their nation, Mendoza recognizes the mechanisms of demagogic politics undertaken through nationalist propaganda. She notes that in German cinemas, once the film has finished, portraits of the greatest German men, dead or alive, are shown, along with images of the country's industries, towns and ports:

Al terminar la película se exhiben los retratos de los grandes hombres alemanes vivos o muertos con una reseña corta de sus vidas. Sus industrias, sus pueblos y puertos también, así es que el alemán cree que casi todos los inventos y descubrimientos del mundo han salido de cerebros alemanes. Esto queda tan impreso en la mente alemana que en una tertulia por poco me hicieron tragar que todos los inventos en medicina fueron hechos por alemanes. Esto explica esa fe ciega en su lema "Deutschland uber alles", Alemania por encima de todo, Alemania, bien o mal, es lo primero en el corazón y mente alemán; después viene su compatriota, y, después, otra vez Alemania. De este amor fanático nace el "superiority complex", la idea de superioridad, del super homo de sus filósofos. (113)

[At the end of the film portraits of the great German men dead or alive are shown with a short review of their lives. [Germany's] industries, its towns and even its ports, so that the German believes that almost all inventions and world discoveries have come out of German brains. This is so well imprinted on the German mind that, during a talk, they almost made me believe that all the discoveries made in medicine were German. This explains the blind faith in their slogan "Deutschland uber alles," Germany above all. Germany, for good or bad, is the first thing in the heart and mind of the German person; followed by his compatriot and then again by Germany. From this fanatic love comes the "superiority complex," the idea of superiority, of the superman of its philosophers.]

Mendoza's description of Nietzsche's "superman" and the German "superiority complex" as "ideas" founded on "fanatic love" and "blind faith" (all pejorative terms) demonstrates her detachment from this element of German nationalism. In addition, she is clearly alerting the reader to the dangerous extremism of the constructed German nationalist discourse when, touching on her own field of expertise, she claims that "en una tertulia por poco me hicieron tragar que todos los descubrimientos en medicina fueron hechos por alemanes" [they almost made me believe that all the discoveries made in medicine were German]. In the following passage, Mendoza describes a similar experience in an Italian cinema that ends, however, differently:

Antes de dejar Roma, entramos una tarde en un cinematógrafo para ver en la pantalla la vida de Garibaldi y encontramos que la mayoría del público se componía de niños de seis a doce años. Cuando terminó la función todos se levantaron para cantar el himno nacional italiano y dieron vivas a Italia y a Mussolini. ¡Viva! Gritamos también con la esperanza de que estos se unirán algún día cuando cantemos nuestro himno nacional y gritemos ¡Viva la independencia de nuestra amada Filipinas!
(191)

[Before leaving Rome, we entered a cinema one afternoon to see on the big screen the life of Garibaldi and found that most of the audience were children between six and twelve years old. When the show finished they all stood up and sang the Italian national anthem and praised Italy and Mussolini. ¡Viva! We also screamed, hoping that one day they will join us when we sing our national anthem and cry, ¡Long live the Independence of our beloved Philippines!]

The image of Mendoza and her sister standing amongst the Italian children and joining their nationalist chant while dreaming of an independent Philippines is less an expression of support for Italian fascism than a vision of future solidarity between two independent nations. It is also an implicit act of insurgency against the US government that Mendoza allows herself to commit on foreign ground, safe from retaliation. The transitory experience of reality in the traveller's shifting contact-zones enables such acts of resistance and imagination. In the end, the affirmation, contestation and declining of cultural stereotypes, especially when accumulated, as in *Notas de viaje*, make it possible to challenge assumptions about both the other and the self. The distance from home and the juxtaposition of the familiar with the foreign prompt an active process of cultural translation - or transculturation - from which new meanings and possibilities can emerge.

**Equivalence and Difference in the Philippines of the Future**

Travelling and travel writing involve constant exercises of translation, of finding equivalences and establishing differences. Mendoza's 'eloquence' - the narrative power of her subjective gaze - conjures surprising equivalences: a comparison between Hong Kong and Oslo with regards to the "scale-like arrangement of their home lights" (Mendoza 87); a remark on the business opportunities that industrial borrowings between Holland and the region of Laguna in the Philippines could accomplish in the cheese-making industry (66); and a comment on the architectural style of Milan's cathedral resembling, in terms of its length and the accumulation of columns, a coconut forest. Bhabha's idea of "mimicry" reminds us that imitation always entails both similarity and difference, that it creates a "partial" reality that results in a game of trompe l'oeil (126). Imagination, deception and nostalgia are all at play in the exercise of cultural translation enabled by travelling:

> A primera vista, los rascacielos y las casas apiñadas de la Habana con sus arcos y sus verandas, me recordaron Port Said y Hong Kong; pero luego, a medida que los ojos contemplan más de cerca tan lindo conjunto, el viajero se da cuenta de que se trata de verdaderas obras de arte donde campea por su estilo la arquitectura latina e hispana.

> [At first sight, the skyscrapers and the crowded houses of Havana with their arches and their verandas reminded me of Port Said and Hong Kong; but then, as the eyes contemplate such a lovely ensemble closely, the traveler realizes that they are truly art works championed by Latin and Hispanic style architecture.]

In this quote, Mendoza compares instances of modern and colonial architecture from Havana, Port Said and Hong Kong. Perhaps at first sight these cities are indeed similar, also to other harbour cities, such as Shanghai or Mumbai, where colonial architecture prevailed. Regardless of what empire they belonged to, colonial buildings were adapted to the warm climate of most colonies with high ceilings, fans, arches and verandas.
However, a closer look at "such a lovely ensemble" reveals the different imprints of Latin, Hispanic or British architecture. In Bhabha's terms, they are the same but not quite. Through her narrative eloquence, Mendoza allows her readers to see the world, outside their ordinary perspective, as similarly haunted in different places by colonialism, as manifested in its architecture, which lingers even after independence. Sometimes, while travelling, juxtapositions of familiar and unfamiliar images disturb the imagination, confusing the traveller, who has the feeling of being home:

Al llegar a Kantara […] el panorama que se desenvolvió en nuestra vista se parece mucho al de Filipinas: campos cubiertos de verde, el arado y la noria, y este parecido tomó cuerpo de realidad por la presencia del carabao. Lo considerábamos tan nuestro que no esperamos encontrarlo en otra [sic] panorama fuera de Filipinas. Pero bien pronto la ilusión se disipó, porque al lado del carabao estaba el ubicuo camello con su giba sepiterna, cuello largo, y carita chata y deforme. (279)

[When we arrived in Kantara, the panorama that was unveiled in front of our eyes was very much like The Philippines: green fields, the plough and the waterwheel; and this resemblance became a reality through the presence of the carabao. We thought it belonged to us, so we did not expect to see it in another landscape outside the Philippines. However, the illusion was soon dissipated, as next to the carabao there was the ubiquitous camel with its perpetual hump, long neck and flattened and deformed face.]

In this fragment, Mendoza pairs rural Egypt with the rural Philippines; seeing the Egyptian countryside makes the Filipino travellers believe that they are back home, as the landscape presents them with familiar references: the instruments of rural farming and the presence of a carabao. Mendoza and her sister had always thought the carabao belonged to "them," as if the animal was exclusively Filipino. At the sight of the Egyptian rural landscape, including the perceived national symbol, Mendoza confesses: "el espectáculo evocó en mi mente el paisaje filipino, poéticamente descrito por Cecilio Apostol en los siguientes versos" [the spectacle brought to my mind the Filipino landscape, poetically described by Cecilio Apostol in the following verses] (280).\(^{86}\) She proceeds to quote four stanzas from Apostol's poem describing, in a pastoral manner, the landscape of the rural Philippines. Echoing the nostalgia of Gurrea's poems that I analysed in the previous chapter, the poem is sprinkled with images and words referring to local animals, sun-kissed fields and the occasional sight of a nipa house (the most common Filipino house on stilts, made with bamboo canes and a roof of woven nipa leaves, nipa being a type of palm found in the South Pacific). The contemplation of the Egyptian landscape and its agricultural practices, in conjunction with the memory of Apostol's poem, transports Mendoza and her sister, mentally, to the Philippines.

---
\(^{86}\) Cecilio Apostol (1877-1938) was a journalist, lawyer, writer and active independentist who joined the Filipino Revolution in 1896. He was also one of the most respected poets of the Hispanic period. A collection of his poems entitled *Pentélicas* was published posthumously in 1941 and was used as a compulsory reading in Spanish lessons. His style inspired later poets such as Balmori and Gurrea, especially its vivid descriptions of the Filipino landscape (Revista Filipina, see: http://vcn.bc.ca/~edfar/revista/yankee.htm).
However, the "illusion" that Egypt and the Philippines are exactly the same is dispelled by the figure of the camel, which causes an immediate feeling of detachment. Michel de Certeau claims that the familiarity and routine practices of the everyday prevent us from actually seeing how things really are and that the traveller/voyeur should therefore "disentangle himself from the murky intertwining daily behaviors and make himself alien to them" (93). In the cited passage from *Notas de viaje*, Mendoza is looking for the familiar, which stops her from seeing what Egypt is really like. Only the alien sight of the camel jolts her into acknowledging that the illusion of being back in the Philippines was precisely that, an illusion. This does not mean that travel cannot reflect on the homeland. Referring to Lévi-Strauss and Heidegger, de Certeau argues that travelling is in fact like taking a detour to one's own roots by reading a different code:

Travel (like walking) is a substitute for the legends that used to open up space to something different. What does travel ultimately produce if it is not, by a sort of reversal, "an exploration of the deserted places of memory", the return to nearby exoticism by way of a detour through distant places, and the recovery of relics and legends (...) in short, something like an "uprooting in one's own origins" (Heidegger). (de Certeau 107)

The idea is that travel allows one to see one's home and one's past in a different light. Thus, it should not be about equating Kantara and the Philippines, but about allowing the view of Kantara to change how one sees the Philippines. In accordance with this, Mendoza constantly refers to the Philippines, imagining what applications could be given there to her findings in other countries. The passage about Kantara, then, is an exception; a rare moment in *Notas de viaje* in which Mendoza seems to feel slightly homesick, but a moment that she also quickly dismisses as unproductive.\(^{87}\)

---
\(^{87}\) Another moment of homesickness occurs upon arrival at Key West in Cuba, where the plants "have the perfume of the tropics" and Mendoza writes: "¡Con qué deleitación recordábamos sus nombres haciéndonos la ilusión de que estábamos en nuestra propia tierra!" (6) [We remembered with great delight their names [of the plants], wishfully thinking that we were in our own land!].

Following these reflections, Mendoza is asked by a fellow tourist, Mr. Vogel from the San Francisco Chamber of Commerce, for her opinion on European colonialism (281). Mendoza, hesitant, writes that, having been "educada en un ambiente oriental que me obliga, a pesar mío, a oír antes de expresar mi opinión ante extraños" [educated in an oriental manner that obliges me, despite myself, to listen before giving my own opinion in front of strangers] (281), she feels compelled to return the question to her interlocutor for him to answer first. Mr. Vogel claims that the Philippines are in a much better position than the other countries he has visited from the Pacific to the Mediterranean, since in those countries "se ve enseguida las fuertes huellas de la dolorosa explotación" [the painful traces of exploitation are quickly perceived] (281). For Mr. Vogel, it seems, American colonialism is less exploitative than its European counterpart. Mendoza agrees that "better" colonial conditions persist in the Philippines, but also reminds Mr.
Vogel not to forget the early civilising task of Spanish colonialism, without which, she argues, Filipinos would be in the same unfortunate situation as their neighbours in "Borneo, Java, Sumatra and Formosa" (281). The sense that Egypt is exactly like the Philippines is quickly dissipated in *Notas de viaje* by other elements entering Mendoza's narrative gaze, such as the camel and the conversation with Mr. Vogel. In the end, what is important to Mendoza is not what the Philippines already is, but what the Philippines could become if it were to adopt and adapt certain elements of foreign cultures, just as it did with the education offered by the Spanish. Thus, when visiting the Netherlands, she reflects on what the Philippines could learn from the Dutch struggle against the North Sea:

La lucha formidable que mantiene Holanda contra el Mar del Norte me sugiere muchas cosas […] Si los once millones de Filipinos dragáramos los ríos y abriésemos canales por los distintos pueblos que baña el Pasig y la Laguna de Bay, en Aparri que siempre está amenazado por el Mar de China y en los pueblos ribereños en el valle de Cagayán, indudablemente que evitaríamos muchas desgracias en la época de grandes avenidas; protegeríamos los sembrados, las carreteras y los pueblos contra las inundaciones; ahorraríamos muchos millones de pesos en reparaciones; evitaríamos muchas enfermedades y la salud pública mejoraría. Filipinas, la perla de Oriente, con su eterno verdor y sus tesoros escondidos en su suelo y en sus mares, sería un verdadero Edén. Hotel Victoria, Amsterdam, Holanda. Octubre, 1926. (70)

[The formidable fight that Holland wages against the North Sea evokes many things to me […] If the eleven million Filipinos dredged our rivers and dug canals across the different towns along the Pasig river and the Laguna de Bay, in Aparri, which is always threatened by the China Sea, as well as in the riverside villages in the valley of Cagayan, undoubtedly we would avoid many catastrophes in the time of the big rains; we would protect the crops, the roads and the villages against flooding; we would save many millions of pesos in repairs; we would avoid many illnesses and public health would be improved. The Philippines, the pearl of the Orient, with its eternal greenness and its treasures hidden in the soil and seas, would be a true Garden of Eden. Victoria Hotel, Amsterdam, Holland. October, 1926.]

Mendoza's future hypothesis (expressed through the word "if") constitutes the core of her active transcultural project of bringing the best parts of the different locations she visits back to the Philippines and implementing them there in a way that would suit the specific circumstances of the country. The Filipinos might also become professionals capable of fulfilling the demands of the new nation if they were trained as "academic citizens," as in Germany and Denmark:

Si nuestra juventud optara por otras profesiones mucho más útiles que las de farmacéutico, abogado o ministro de alguna religión, ganaríamos mucho económicamente. Pero para ello importa que nuestro sistema de enseñanza se reforme, esto es, que estimule las escuelas vocacionales y también que la gente de dinero invierta su capital, no en lujo, sino en crear industrias para fomentar tales profesiones útiles. (66)

[If our youth opted for professions much more useful than pharmacist, lawyer or minister of some religion, we would gain a lot economically.
For that to happen, it is important that our system of education be reformed, that is, that vocational schools be stimulated and also that people with money invest their capital in creating industries to foment such useful jobs, instead of investing it in luxuries.]

These visions of a future Philippines, crafted on the basis of a selective repertoire of possibilities gleaned while visiting different countries, are at the core of Mendoza's project of active transculturation, which, as the next section will show, is intimately tied to ideas of modernity.

**Visions of Modernity**

Mendoza's travelogue has a very specific objective beyond the literary; as a woman of action, her agenda is to learn about practical things; hence, she is fascinated by technology, urbanisation and hygiene, for instance, as elements that demonstrate urban modernity. However, Mendoza's visions of modernity are not restricted to European countries but also take in Cuba, which at the time of Mendoza's visit had already become an independent republic. After the Cuban War of Independence against the Spanish (1895-1898), the Cuban government was handed to the US temporarily until 1902, when the Cuban Republic was established. Early nationalists from both Cuba and the Philippines shared a transpacific relationship, being similarly caught between empires. As Anderson points out (2005), it was not an accident that the movements for independence reached their peak at a similar time in both islands. The Cuban writer, journalist and leader of the independence movement José Martí (1853-1895) was very aware of the work of José Rizal and vice versa.\(^{88}\) In fact, Rizal was apprehended by the Spanish prior to his execution on the charge of filibusterismo (political dissidence) when he was travelling by boat to join the Cubans as a doctor in their own war of independence from the Spanish.

---
\(^{88}\) See Maria Theresa Valenzuela's 2014 article "Constructing National Heroes: Postcolonial Philippine and Cuban Biographies of José Rizal and José Martí."

As I mentioned in Chapter 1, Latin American writers saw in Asian countries a cultural counterpart with whom they shared a history of colonialism and (an idealized) national model that supported their resistance against westernisation. However, the Philippines were Hispanic enough to disappoint their orientalist imaginaries. If, for some Latin Americans, the Philippines was not 'typically oriental,' for some progressive Filipinos like Mendoza the signs of modernisation observed in Cuba, which she calls "la república hermana" (7) [the sister republic], provide positive input for her project of transculturation, because the proximity of Cuban and Filipino realities made it more likely that these modernisations could be reproduced in her homeland.\(^{89}\)

In Cuba, Mendoza mostly gathers information regarding urbanism and hygiene, to be put to use under the hypothetical condition of independence from the US, which Cuba had already been granted - a fact that leads her to feel more hopeful:

He pensado que si Estados Unidos nos concediera la independencia y consignase en dicha concesión, como garantía, la cláusula sobre sanidad como en Cuba, ¿qué haríamos los filipinos para cumplir esa condición! (103)

[I have been thinking that if the United States gave us independence and included in such a concession, as a guarantee, the same health clause as in Cuba, what would we, Filipinos, do in order to fulfill such a condition!]

El tráfico en las calles constituye uno de los puntos que absorben mi atención en los viajes.
Todas mis observaciones sobre este particular pongo a disposición de las autoridades municipales de Manila, porque la regulación del tráfico moderno en una ciudad es, para mí, un signo de eficiencia en este siglo de las máquinas. (10)

[Street traffic constitutes one of the points that absorb my attention when travelling. All my observations on this topic I make available to the municipal authorities in Manila because the regulation of modern traffic in a city is, in my opinion, a sign of efficiency in this, the century of machines.]

As Mendoza mentions in this quote, the regulation of traffic is one of her main concerns, not simply as a sign of efficiency but, as she writes elsewhere, because it prevents accidents and would serve to educate Filipino citizens, who have the bad habit of "cruzar y zigzaguear las calles por donde les dé la gana" [crossing and zigzagging the roads in any way they want] (11). The discipline with which drivers and pedestrians in Havana obey "la combinación de luces de varios colores" [the color-coded combination lights] (10) fascinates her. As a tourist, she admires prominent feats of architecture (monuments, churches, temples), but she remains most concerned with housing and urban planning. Being a doctor, she knows that hygiene issues are related to housing construction: "Teniendo en cuenta que las viviendas son la base principal de la labor sanitaria tropical, voy a intentar describir el estado de nuestros pueblos en este respecto, que más vivo se le presenta al viajero cuanto más lejos está de su país" (103) [Keeping in mind that housing is the principal sanitary task in the tropics, I will describe the state of our towns in this regard, which comes more clearly to the mind of the traveler the further he is from his country].

Mendoza also observes that the materials, the climate and the organisation of urban developments (following systems like "zonificación," a separation of institutional, commercial, residential and working areas of the city common in Europe) are essential to lowering the high mortality rate in the Philippines, which, she argues, is wrongly attributed to the climate instead of to the "estado insanitario, casi primitivo que existe en muchos pueblos, salvo quizás en las grandes ciudades" [unhealthy, almost primitive state of a lot of villages, with the exception of big cities] (104). Mendoza's reflections on hygiene and urbanism, even though presented as based on common sense and scientific fact, nevertheless echo the orientalist and colonial discourses that used hygiene as a justification for carrying out a civilising task that masked economic exploitation. Mendoza disparages the conditions in the Philippines (the prevalence of slums, irregular housing, and overcrowding), which she wishes to transform so that Manila could hypothetically resemble the villas of London and Paris, perfectly aligned "con su jardín delante y su huerto atrás" (105) [with a front garden and a vegetable yard].

---
\(^{89}\) Mendoza also writes that, upon arriving at Havana's harbour, she told the other Cuban travellers on the boat from the US that "se sentía como si llegase a la casa de una hermana a quien no había visto desde el día de su boda, que estaba ansiosa por conocer sus alegrías y sus penas, sus luchas y sus triunfos" (7) [she felt as if she was coming to the home of a sister she had not seen since her wedding day, who was eager to know her joys and sorrows, her struggles and her triumphs].
Nowadays, only some colonial houses in the Philippines have remained as majestic as Paris and London villas, while most of the countryside has only a basic infrastructure and slums prevail on the outskirts of the main cities. New developments such as Fort Bonifacio in Manila do not resemble Paris, London or Berlin but the 'global cities' that are so prolific in fast-developing countries in Asia.\(^{90}\) In addition, urban and rural areas in Europe in the 1920s were not composed exclusively of the ordered, functional and aesthetically pleasant villas that Mendoza selectively describes in *Notas de viaje*; by neglecting to comment on the poverty and scarcity that the First World War had brought to Europe, she shows her attachment to a colonial vision that idealizes the western metropolis.

Speeding up the industrialisation process of the Philippines is, for Mendoza, imperative in order to be competitive on the global market, which some Asian countries had already entered. Technology, expertise and capital are considered necessary to extract and commercialise Filipino resources, and to attract tourism. To learn about industrialisation, Mendoza visits cheese factories in Holland, Murano glass and marble factories in Florence, and import and export businesses in Havana. She notes that most products in Havana are foreign and writes:

Ya que nuestra industria de tejidos de algodón se encuentra en un estado primitivo de desarrollo, ¿por qué nuestros ricos no forman una sociedad que se encargue de dar el necesario impulso a esta industria para poder competir con los tejidos extranjeros, importando máquinas modernas y hasta expertos o técnicos, y así poder obtener productos baratos y en cantidad comercial? Este es el procedimiento que empleó el Japón para desarrollar sus empresas industriales y la manera como formó sus propios expertos. (20)

[Since our cotton textile industry is in a primitive state of development, why don't our rich people form an association that provides the necessary impulse to our industry in order to compete with foreign textiles, by means of importing modern machinery and even technical expertise that will allow the production of cheap products in commercial quantities? This is the process that Japan employed to develop its industrial enterprises and the way it trained its own experts.]

Advanced capitalism could be brought to the Philippines, Mendoza feels, as it had become the main economic model not just in Europe, but also in Cuba (under the protectorate of the US) and Japan. Japan also provides a model of internationalisation, another marker of modernity. During a stroll through Florence on a sunny day, prompted by the sight of tourists carrying umbrellas to protect themselves from the sun, Mendoza conjectures that on the rare occasions when young girls in the Philippines are seen carrying sun umbrellas, the label "Made in Japan" could probably "con pena" [sadly] be read on them (166). The presence of Japanese products in the Philippines leads her to wish for the internationalisation of Filipino manufacturing: "¿Cuándo tendríamos y usaríamos con orgullo los 'Made in the Philippines'?" (166) [When would we have and proudly use the "Made in the Philippines"?].

---
\(^{90}\) See the classic work on global cities by Saskia Sassen (2001) and, more specifically about the Asian context, Tsung-yi Michelle Huang's *Walking between Slums and Skyscrapers: Illusions of Open Space in Hong Kong, Tokyo, and Shanghai* (2009).
Signs of modern globalisation are perceived in the presence of Japanese products in Asia, but also in the presence of other Asians in the West. Mendoza cannot help but admire the Chinese restaurants in London for imposing their dishes on Europeans: "El chino con su 'pansit, gulay', la morisqueta tostada compite con el europeo, imponiéndole su arte culinario, sin tener que devanarse los sesos para preparar manjares al estilo occidental" [The Chinese with their "pansit [noodles], gulay [vegetables]" and morisqueta tostada [fried rice] are imposing their culinary art on the European instead of thinking hard about ways to prepare western-style delicacies] (52). Filipinos should be able to do the same, she claims, "una vez educados a comer lo nuestro" [once we are educated in eating our own food] (52). Initially, Mendoza proudly suggests that Filipino food could also be consumed abroad, but further reflection makes her realise that Filipino food is in fact constituted by many different foods (American, Mexican, Chinese), which problematises its identity and the attachment Filipino people have to it.\(^{91}\) Mendoza recognises that most Filipinos are fans of the Chinese noodle shops, but perceives the assimilation of Chinese food in the Philippines as "esa invasión silenciosa, pero decisiva, de nuestros primos del otro lado del mar de la China" [the silent but decisive invasion of our cousins on the other side of the China Sea]. Consequently, she encourages her Filipino "compatriotas" not to allow others "competencia en una cosa tan sencilla" [competition in such a simple thing] (52). Filipino food culture is thus presented as resulting from a process of transculturation, which Mendoza perceives negatively as yet another form of colonialism, "a silent but decisive invasion" that disrupts any national project based on establishing a uniform modernist identity for the Philippines.

---
\(^{91}\) In his book *Authentic Though Not Exotic: Essays on Filipino Identity* (2005) Fernando Zialcita claims that Filipinos struggle to recognise and identify with their own food, often claiming that "there really is no Filipino cooking" (2). He further notes that "some Filipinos' tendency to denigrate, without basis, their major cultural symbols show in other realms, and work against us" (2).

**Returning Home**

The first edition of *Notas de viaje* contains a preface by the author and letters from three fellow Filipino intellectuals: Teodoro M. Kalaw, the director of the National Library of the Philippines (February 1930); Pedro Aunario, editor of the newspaper *La Patria* (October 1929); and Hugo Salazar, a contributor to other Spanish-language newspapers (October 1929). According to Kalaw, *Notas de viaje*'s main achievement is its excellent and intense educational tendency, as well as its being, by virtue of the various fields and numerous countries that Mendoza includes, "un interesante manual del saber cosmopolita" [an interesting manual of cosmopolitan knowledge] (viii). In addition, he notes that *Notas de viaje* represents a step forward in the Filipino feminist movement, in which Mendoza was a recognised figure:

La Dra. Mendoza debe ser felicitada. Este es un trabajo que no tiene desperdicio, el libro de viajes más concienzudo escrito por un filipino hasta hoy, y, viniendo de la pluma de una mujer, constituye un fuerte alegato en favor del feminismo que - ¡Gracias a Dios! - se está abriendo paso en nuestro país debido a sus propios merecimientos. (viii)

[Dr.
Mendoza must be congratulated. This is a valuable work, the most thorough travel book written by a Filipino up until now, and, coming from the pen of a woman, it constitutes a strong defense in favor of Feminism, which - thanks to God! - is making its way in our country due to its own merits.] As I mentioned earlier in this chapter, Mendoza was the editor and a columnist of the English and Spanish magazine for women La Mujer and founded, in 1922, the Liga Nacional de Damas Filipinas [National League of Filipino Women], with which she championed women’s suffrage. She also edited a collection of essays written in English and Tagalog entitled My Ideal Filipino Girl (1931) and wrote The Development and Progress of the Filipino Woman (1951), discussed by Denise Cruz (2011) as an example of ‘transpacific Filipina feminism.’ As a feminist, Mendoza argues that “the Filipino woman of the modern type cares less for flattery, but demands more respect; she prefers to be considered a human being, capable of helping in the progress of humanity, rather than to be looked upon as a doll, of muscles and bones” (qtd. in Cruz 21). Her works underscore the advancements of Filipinas in fields of “medicine, nursing, social science, and the humanities and repeatedly emphasize the transpacific Filipina’s rightful place as a leader in the new Philippines” (Cruz 21). This detaches Mendoza from the recurrent orientalising and exoticising images of women - literally presented as dolls - found in some of the works studied in this project, especially Balmori’s poems. Mendoza’s feminine ideal is, moreover, based on the claim that Filipina women were independent and considered equal to men in the Malay past: the woman “we are told, was her brother’s equal in the home, in society, in government, she could hold positions of honor and prestige like him” (Mendoza qtd. in Cruz 23). According to Cruz, the works of transpacific Filipina feminists like Mendoza “feature precolonial, indigenous women as models of feminism with triumphant rhetoric that valorizes indias” (Cruz 24), rather than puritan Catholic or liberal Americans. Cruz explains that part of the US government (colonial) policy of “benevolent assimilation” was the creation of transpacific fellowships “centered on the education of proper Filipino subjects and in reproducing examples of the benefits of American democracy” (20). The aim of these programs, which brought educated Filipinos such as Mendoza to the US (Cruz refers to an earlier visit of Mendoza than the one transcribed in Notas de viaje), was for Filipino men and women to earn “graduate-level degrees, and […] to return as Americanized triumphs” (20). During her visit to Buck Hill Falls, in a conversation with a journalist, Mendoza records the gratitude she feels towards US initiatives for women’s education. He inquires about the way Filipinos perceived the influence of the US in their country and Mendoza responds: Creo que ningún Filipino puede odiar los ideales que nos habéis enseñado y esto lo digo por mí, porque jamás hubiese llegado a ser doctora en medicina si hubiésemos continuado bajo la soberanía española. (36) [I do not believe that any Filipino hates the ideals that you have taught us and I say this based on my own experience. I would have never been able to become a doctor of medicine if we had remained under Spanish sovereignty.] 
This comment brings out Mendoza’s ambiguous attachments to and detachments from the Philippines’ complex colonial history, this time praising the US and implicitly critiquing the Spanish education system for keeping women from fully participating in society. Her comments on Filipino independence discussed earlier reveal a much more critical view on the US role in the Philippines. A factor that may explain Mendoza’s double articulation of respect for and exasperation with the Americans is that she is writing the travelogue in Spanish and sending fragments of it to the various ministries that could make use of her notes. Notas de viaje was received with great ambivalence by the Filipino (Spanish-speaking) government and other intellectuals. While her travelogue, seen as carried out with “la devoción de misionera del saber de las letras” [missionary devotion towards knowledge and letters] (Aunario ix), received much praise, it also received serious criticism, mostly directed at the impracticability of her proposed modernisation project. In his letter, Salazar questions the usefulness of Mendoza’s book: ¿Qué se puede esperar de un simple relato que usted hace de los procedimientos políticos o sociales, agrícolas o industriales, sanitarios o educacionales que usted expone a la consideración de su pueblo para que los imite y los asimile? (xv) [What can we expect from the simple account that you give of the political or social, agricultural or industrial, sanitary or educational matters that you suggest for the consideration of your people in order for them to imitate or assimilate them?] Salazar’s skepticism towards the potential of Mendoza’s travel diary to transform Filipino society brings back the paradox governing the genre of travel writing discussed earlier in this chapter between its supposed truth or practical value and the subjectivity of its narrative gaze. Mendoza’s narrative ‘eloquence’ fails to capture the imagination of critics such as Salazar, who perceive her work merely as a series of “felices observaciones” [happy observations] and see the notion that the ideas presented could be imitated and assimilated by Filipinos as wishful thinking. The way Salazar refers to the concepts of “imitation” and “assimilation” contrasts with Mendoza’s own use: while she affirms the possibility of assimilating, in a process not of identical replication but of adaptation, elements from multiple other cultures, Salazar questions the malleability of Filipino culture on the basis of the example he gives of three foreign enterprises that failed to be successful in the Philippines.92 Thus, where Salazar sees cultures - or at least Filipino culture - as rigid and unable to mix, Mendoza’s project of transculturation is founded on the idea of ‘cultural plasticity’ (Rama 2009). For Rama, transculturation between Latin American urban and regional spaces was possible thanks to the cultural plasticity of the latter: “modernizing impulses mediated through the cities were able to be integrated within the regions’ own rearticulated structures” (159). Mendoza’s rearticulation of Filipino future modernity similarly integrates the impressions gathered in the European, American, but also Cuban and other urban centres she visited into the Filipino culture (which occupies the place of the regions in Rama’s account), which she considers to be sufficiently adaptable, given that it has already integrated elements of different cultures in being twice colonised. 
With respect to the notion of cultural adaptability or plasticity, Hernández, Millington and Borden (2005) offer a valuable re-examination of the idea of transculturation using Deleuze and Guattari’s notion of the rhizome. The rhizome is a figure appropriated from biology but used in philosophy to oppose traditional tree-like thought structures represented as foundational, linear and hierarchical in favour of “a dynamic structure that has no point of origin and is capable of establishing multiple connections with any other kind of system while at the same time avoiding stratification” (Hernández, Millington and Borden xv-xvi). The most important qualities of the rhizome are **connectivity**, **heterogeneity** and **multiplicity**, as well as resistance to traceability: “the dynamism of the rhizome prevents it from being traceable, [rhizomes] are anti-genealogical and cannot be traced but mapped” (Hernández, Millington and Borden xvi). --- 92 The three examples cited by Salazar as demonstrating the difficulty of copying foreign industrial models are Japanese-style fishing in the Bay of Ragay, the local production of castor oil and the attempt to develop the piña textile industry by the Pacific Commercial Company, led by a US businessman (xiv). The failure of these attempts, according to Salazar, was ultimately caused by: (1) the hospitable living conditions of the Philippines, which “make life easy so that it is barely necessary to struggle in order to survive” (xvi); and (2) the alleged indolence Filipinos have irreparably inherited from the colonial system. The many complex attachments to and detachments from other cultures that can be traced in *Notas de viaje* can be seen to demonstrate a rhizomatic capacity on Mendoza’s part to think across cultural boundaries. Organic and unpredictable variation, interconnectedness and multiplicity are seen to apply not only to larger structures such as society and economy, but also to the individuals operating within them. As a rhizomatic map - “detachable, reversible, susceptible to constant modification” (Deleuze and Guattari qtd. in Hernández, Millington and Borden xvi) - *Notas de viaje* reflects the connections, always in flux, between places and people, and reveals the multiplicity that characterises each culture (showing obedient and loud Asians, poor and rich Europeans, Muslim and Christian Middle Eastern people, and the concurrence of modern and traditional elements within a single country). Mendoza’s emphasis on the plasticity of cultures configures the constant becomings of culture as a rhizomatic system as processes of transculturation. Travel writing, itself grounded in continuous movement, is particularly suitable for showing these processes and for channeling them in a particular direction, in this case in the direction of a vision of a modern future for the Philippines. However, as Hernández, Millington and Borden emphasise, rhizomatic structures are not unrestricted but bound by power relations: > cultures have rhizomatic characteristics, they are assemblages of multiplicities that are always in the middle, always in the process of becoming. In their process of becoming, cultures establish simultaneous multiple connections with other cultural formations. As a result, cultures regenerate, change in nature, and recreate themselves constantly. However, these processes are conditioned by institutions of power. 
Such institutions have a great impact on the way connections are established, and the very notion of unrestricted connectability can be jeopardized by power formations that tend to construct a model of order by stratifying everything. *This is what occurs in the majority of transcultural relations: a power takeover disrupts the rhizomatic nature of processes of cultural becoming by stratifying everything within foundational totalizing systems.* (xvii-xviii, emphasis added) Stratification and totalisation, effected by institutions of power, limit the otherwise endless process of cultural interconnection. In *Notas de viaje*, too, transculturation does not appear as an endless or boundless becoming; for Mendoza, the predetermined end of the processes of transculturation she seeks to set into motion is the particular version of modernity measured against European modernity that Pratt describes (2002). The goal of cultural exchange, for Mendoza, is to (learn to) become like the mostly western cultures that she perceives as already modern. This underlines my argument that transculturation in *Notas de viaje* constitutes a deliberate, active project, meant to be finalised. Mendoza’s imagined future for the Philippines, based on cultural assimilation, will ultimately interrupt the dynamics of rhizomatic cultural transformation by imposing European modernity as a totalising social structure of power supported by a united, well-defined Filipino identity. In his book *The Future as a Cultural Fact* (2013) Arjun Appadurai articulates a difference between an ethics of probability and the ethics of possibility. The ethics of probability comprise a dominant discourse of calculations based on rationality, management, costs and benefits according to which “a genuinely democratic politics cannot be based on the avalanche of numbers—about population, poverty, profit, and predation—that threaten to kill all street-level optimism about life and the world” (299). Salazar’s critique of Mendoza’s project is based on how it does not follow the ethics of probability: Mendoza’s facts are not presented in a quantifiable way. On the other hand, Appadurai defines the ethics of possibility as: > those ways of thinking, feeling, and acting that increase the horizons of hope, that expand the field of the imagination, that produce greater equity in what I have called the capacity to aspire, and that widen the field of informed, creative, and critical citizenship. (295) This, I want to suggest, captures the spirit of Mendoza’s active transculturation as an attempt to imagine, anticipate and aspire to a different future for her community. **Conclusion** In this chapter I have read Paz Mendoza’s travel notes as an example of active transculturation. *Notas de viaje* has allowed me to present transculturation not only as a form of hybridisation resulting from past colonial contact, but also as an active attempt to imagine cultural transformation for the future by (post)colonial subjects. The experience of travel, Mendoza’s work shows, facilitates new contact-zones in which arbitrary and ephemeral interactions with the other can produce new meanings, capable of challenging entrenched stereotypes. Mendoza’s own hybrid cultural identity and social status, for example, allows her to “decline” cultural stereotypes about Asians, as I showed in my analysis of her conversation with the Dutch millionaire. 
I have located the narrative ‘eloquence’ of Mendoza’s travel writing in her ability to establish multiple, flexible connections between the Philippines and other cultures across the world, including non-western ones. Unlike Balmori and Gurrea, who diagnose the Philippines’ transculturation mainly as an effect of colonialism, for Mendoza transculturation is a future-oriented project. As such, it is based on the question of what the Philippines could or should be like as a modern, independent nation. Mendoza measures the modernity to be achieved in the Philippines against Western modernity and selects what she perceives as signs of progress that could be assimilated in her country, including German and Danish education, Italian manufacturing, Dutch farming and cheese industries, and Parisian and English urban design. In contrast, she rejects what she believes to be signs of backwardness, most notably in the accounts of her visits to Egypt and Turkey. A notable exception to the way Mendoza questions the superiority of the West over the rest is her positive comments on Cuba. These comments can be explained not only by the historical ties and shared nationalist sentiments in Cuba and the Philippines, but also by the fact that adopting the (peripheral) modernity Mendoza perceives in Havana’s traffic control system, urban management and hygiene regulations in the Philippines seems feasible given the similarities between the countries, including their shared double colonisation by Spain and the US. _Notas de viaje_ shows how the unpredictable nature of travelling may also trigger nostalgic and nationalist emotions, such as Mendoza’s illusion in Egypt of being transported back to the rural Philippines or her enthusiasm for the Italian nationalism displayed in the cinema. Comparing the Philippines to other countries also leads Mendoza to attribute certain shortcomings to it. Examples of this are her negative reflections on Filipino food and cultural identity as not sufficiently distinctive and homogenous, and her positioning of the Philippines as lagging behind other Asian countries such as Japan (which is selling its manufactured goods internationally) or China (which is bringing Chinese food to the world) in the global marketplace. Even though, in the end, Mendoza remains attached to the idea of achieving - as a teleological project - the hegemonic form of modernity that propagates Europe as its centre and is intimately linked to coloniality and the global spread of capitalism, at the same time _Notas de viaje_ consistently envisions the Philippines as a site of cultural plasticity, opening it up to ongoing processes of transculturation. Unlike Salazar, Mendoza believes in the plasticity of cultures, which can be transformed not only by external influences such as colonisation but also internally, by envisioning possible ways to transform them according to one’s own criteria. However, Mendoza is trying to enter the realm of central modernity from her periphery by picking and choosing idealised models and sometimes ignoring the problematic aspects of the cultures taken as models (such as fascism in Italy and Germany). A text like Mendoza’s, then, is significant in demonstrating the possibilities of thinking transculturally while, at the same time, showing that transculturation cannot be made into a global project but requires an engagement with the given conditions at a local level. 
In the next chapter I show how Balmori’s war novel *Los Pájaros de fuego* points to the limits of active transculturation, suggesting that taking another culture as a model should not mean ignoring less attractive aspects of that culture, in this case Japan’s imperialism.
PROBLEMS OF THE SPECTRAL THEORY OF NON SELF ADJOINT OPERATORS by M. V. Keldysh, V. B. Lidskiy [Article by M. V. Keldysh and V. B. Lidskiy; Russian, Trudy Chetvertogo Vsesoyuznogo Matematicheskogo S"ezda [Transactions of the Fourth All-Union Mathematical Conference], Leningrad, 3-12 July 1961, 1963, Vol 1, p 101-120.] ABSTRACT The class of non self adjoint operators, for which the unconditionally converging expansion of the Eigenfunctions is correct, has not yet been fully defined (for example, it is not known whether elliptical differential operators with partial derivatives belong to this class). However, it is now clear that the spectral expansion converging in the norm is not a necessary characteristic of the general linear operator. Apparently, further development of the theory will be achieved by establishing the generalized spectral expansion. We note that considerable material has been accumulated in the theory of non self adjoint problems, and it is characteristic that in recent years the theory has been supplemented with a number of new and important studies. Successes have been particularly great in the area of operators with discrete spectrum. We will dedicate the first three sections of our review to this theme. Introduction One basic method of producing expansions in the Eigenfunctions of a linear operator A, acting in Hilbert space H, is the method utilizing the representation of the operator by a contour integral of its resolvent. This method goes back to A. Cauchy, who applied it to the investigation of series of Eigenfunctions of ordinary differential equations. It is based on the following formula, correct for any bounded operator A: \[ Ah = -\frac{1}{2\pi i} \oint_C \lambda\, (A - \lambda E)^{-1} h \, d\lambda. \] (1) The integral is taken over a contour containing all singularities of the resolvent of the operator. Equation (1) is established by calculating the residue at \( \lambda = \infty \). Suppose, for example, A is a fully continuous operator. Then, as we know, its resolvent \( R_\lambda = (A - \lambda E)^{-1} \) is a meromorphic function whose poles can accumulate only at 0. Suppose there is a sequence of closed contours \( C_k \) approaching 0, such that \[ \lim_{k \to \infty} \left\| \int_{C_k} (A - \lambda E)^{-1} d\lambda \right\| = 0. \] (2) Then, using the fact that the residue of the resolvent at each pole is, up to sign, the projection operator \( P_k \) onto the corresponding root space, we produce the converging expansion \[ A h = \sum_{k=1}^{\infty} \lambda_k P_k h. \] (3) In particular, when all poles are simple, expansion (3) becomes \[ Ah = \sum_{k=1}^{\infty} \lambda_k (h, \psi_k)\, \phi_k, \] (4) where \( \phi_k \) are the Eigenvectors of operator \( A \), while \( \psi_k \) are the Eigenvectors of the adjoint operator \( A^* \). 
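For orientation (the following formulas are an added sketch, not part of the Keldysh–Lidskiy text), the projections \( P_k \) in (3) are the usual Riesz projections obtained from residues of the resolvent; with counterclockwise contours, and assuming for simplicity that the poles are simple, one may write
\[
P_k \;=\; -\frac{1}{2\pi i} \oint_{|\lambda - \lambda_k| = \varepsilon} (A - \lambda E)^{-1} \, d\lambda ,
\qquad
-\frac{1}{2\pi i} \oint_{C} \lambda\, (A - \lambda E)^{-1} h \, d\lambda \;=\; \sum_{\lambda_k \ \text{inside}\ C} \lambda_k\, P_k\, h ,
\]
so that condition (2) controls the contribution of the small contours \( C_k \) near 0 and, in the limit, the residues yield (3).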
We note that when $A$ is a self-adjoint (or, in the more general case, normal) fully continuous operator, the sequence of contours $C_k$ always exists. This is explained by the special properties of the self-adjoint operator, in particular the fact that the resolvent of such an operator grows slowly as the parameter $\lambda$ approaches the spectrum. Equation (4) in the self-adjoint case is well known as the "Hilbert-Schmidt theorem." In the case of a general bounded self-adjoint operator, when the resolvent is not meromorphic and the spectrum$^2$ may be continuous, integral (1) can still be represented as $$Ah = \int \lambda \, dP(\lambda)\, h$$ (5) and we obtain a spectral expansion$^3$ generalizing formula (3). Whereas in the theory of self-adjoint operators general results of a final nature were produced relatively long ago, for linear non self adjoint operators an expansion has been produced only in a few particular cases. The superficial reason for this lies in the difficulties arising in the estimation of the resolvent. The true reason, probably, is the complex spectral structure of the non self adjoint operator. The class of non self adjoint operators, for which the unconditionally converging expansion of the Eigenfunctions is correct, has not yet been fully defined (for example, it is not known whether elliptical differential operators with partial derivatives belong to this class). However, it is now clear that the spectral expansion converging in the norm is not a necessary characteristic of the general linear operator. Apparently, further development of the theory will be achieved by establishing the generalized spectral expansion. --- 1 In the case of multiple poles in formula (4) the attached vectors appear in addition to the Eigenvectors. 2 The spectrum means the set of all irregular points of the resolvent. 3 Expansion (5), first produced by Hilbert [1] (1904), was derived from integral (1) by Hellinger [2] (1909). We note that considerable material has been accumulated in the theory of non self adjoint problems, and it is characteristic that in recent years the theory has been supplemented with a number of new and important studies. Successes have been particularly great in the area of operators with discrete spectrum\(^1\). We will dedicate the first three sections of our review to this theme. § 1. Completeness of the System of Eigenvectors and Attached Vectors Since G. D. Birkhoff produced, in 1908, an expansion in the Eigenfunctions of the non self adjoint boundary problem for an ordinary, linear, nth order differential equation with regular homogeneous conditions at the ends of the finite interval \([a, b]\), a number of studies have appeared \([4, 5, 6]\). In these works, the results of Birkhoff have been extended to the case of other boundary problems for ordinary differential equations and systems studied over a finite interval. The expansions in all cases were produced by the Cauchy method described in the Introduction. The point is that in problems for ordinary differential equations the asymptotics of the solutions can be found when \(x\) varies over a finite interval, while \(\lambda\) is large. Using the asymptotics of the solution, we can estimate the Green function of the corresponding problem and prove the existence of an appropriate sequence of contours. The structure of the Green function is more complex in problems with partial derivatives, which is apparently the reason why works dedicated to the boundary non self adjoint problem for partial derivatives were extremely scarce for some time. 
The significant contribution to this area was made by the well-known work of T. Carleman \([7]\) (1936). In this work, in the case of a boundary problem for an equation of elliptic type \[ L(u) = - \sum_{i,j=1}^{n} a_{ij}(x) \frac{\partial^2 u}{\partial x_i \partial x_j} + \sum_{i=1}^{n} b_i(x) \frac{\partial u}{\partial x_i} + c(x) u = \mu u, \] (6) \[ u|_{S} = 0, \] (7) where \(n = 3\) and \(S\) is the boundary of the domain, the principal term of the asymptotics of the Eigenvalues \(\mu_k\) was found as \(k \to \infty\). In his proof, Carleman developed a new method for obtaining the asymptotics of the Eigenvalues, based on an estimate of the trace of the iterated Green function with subsequent application of a Tauberian theorem. \(^1\) This is the name given to operators whose spectrum consists of Eigenvalues of finite multiplicity with a single point of accumulation. In his proof, Carleman also utilized his own preceding results, produced in [8] (1921), in which he studied the resolvent of an integral equation with a kernel having an integrable square (Hilbert-Schmidt kernel)\(^1\). The works of Carleman played a leading role in the development of the theory of non self adjoint problems. His methods allowed the resolvent to be estimated as a function of the parameter in the case of problems with partial derivatives. It is also significant that they can be used in the investigation of operators acting in an abstract Hilbert space. However, one of the main problems, namely the question of the completeness of the system of Eigenvectors and attached vectors, remained open in the case of problems with partial derivatives for some time following the works of Carleman. In 1951, M. V. Keldysh succeeded in finding broad conditions of completeness in [10]. In this same work, a theorem was proven concerning the asymptotics of the Eigenvalues of operators acting in an abstract Hilbert space. It followed from this theorem, in particular, that if \(L_1\) is an elliptic, self-adjoint differential operator with discrete spectrum, then when it is perturbed by a differential operator of lower order, the main term of the asymptotics of the Eigenvalues is retained. Although we cannot discuss this problem in detail, let us formulate the theorem of M. V. Keldysh on completeness. We will present it in the following abbreviated form: Suppose the fully continuous operator \(A\), for which 0 is not an Eigenvalue, has the form \[ A = H(E + Q), \] (8) where \(Q\) is a fully continuous operator, while \(H\) is a self-adjoint, fully continuous operator, such that with a certain \(\rho > 0\) \[ \sum_{k} |\nu_k|^{\rho} < \infty \] (9) (\(\nu_k\) are the Eigenvalues of \(H\)). Then the system of Eigenvectors and attached vectors of operator \(A\) \[ \psi_1, \psi_2, \ldots, \psi_n, \ldots \] (10) is complete in Hilbert space \( H \). In other words, no matter what the element \( h \) and the number \( \varepsilon > 0 \), there exist an \( N \) and numbers \( c_n \) such that \[ \left\| h - \sum_{n=1}^{N} c_n \psi_n \right\| < \varepsilon. \] (11) 1 Estimates of the resolvents for Hilbert-Schmidt kernels were later produced by another method in an important work by Hille and Tamarkin [9]. It was first shown in this work that the Fredholm determinant of the convolution of two Hilbert-Schmidt kernels has zero order, and a number of other results were produced. The system of Eigenvectors and attached vectors will be referred to as the system of main vectors in the rest of this paper. 
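As an illustration of when condition (9) is satisfied (an added remark, not part of the original text): for a self-adjoint elliptic operator \(L_1\) of order \(2m\) on a bounded \(n\)-dimensional domain, the Weyl asymptotics of the Eigenvalues give
\[
\mu_k \sim c\,k^{2m/n} \ (k \to \infty)
\quad\Longrightarrow\quad
\nu_k = \mu_k^{-1} = O\!\bigl(k^{-2m/n}\bigr)
\quad\Longrightarrow\quad
\sum_k |\nu_k|^{\rho} < \infty \ \text{ for every } \rho > \frac{n}{2m},
\]
so the hypothesis of the theorem holds for \(H = L_1^{-1}\) with any such \(\rho\).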
The theorem on completeness of the system of main vectors was established not only for problem (6), but also for the case of the boundary problem for elliptic equations of any order studied in a finite domain of an \( n \)-dimensional space. Actually, under these conditions the differential operator \( L \) is \[ L = L_1 + L_2, \] (12) where \( L_1 \) is a self-adjoint elliptic operator, while \( L_2 \) is an operator of lower order, so that the operator \( L_2 L_1^{-1} \) is fully continuous. The equation \((L_1 + L_2)u = \mu u\) can be written in the form \((E + L_2 L_1^{-1}) L_1 u = \mu u\), from which \[ u = \mu\, L_1^{-1} (E + L_2 L_1^{-1})^{-1} u. \] (13) Setting here \( L_1^{-1} = H \), \( (E + L_2 L_1^{-1})^{-1} = E + Q \) and \( \mu^{-1} = \lambda \), we arrive at the Eigenvalue problem for operator (8). Condition (9) in this case is fulfilled \(^1\). \(^1\) The completeness of the system of Eigenvectors and attached vectors of the elliptic differential operator was proven by F. Browder in [45] without reference to the theorem of M. V. Keldysh. Let us now present the proof of the theorem of M. V. Keldysh on completeness, retaining in essence the initial proof (cf. [45] and [48]). a) For fully continuous operators of the form \[ A = KH, \] (14) where \( H \) is a self-adjoint bounded operator satisfying condition (9), while \( K \) is any bounded operator, the theory of Fredholm determinants can be applied. Suppose \( \lambda_1, \lambda_2, \ldots \) are the non-zero Eigenvalues of operator \( A \), numbered with account of multiplicity; \( \nu_1, \nu_2, \ldots \), as before, are the Eigenvalues of \( H \). It turns out that the following inequality is always correct: \[ \sum_{k} |\lambda_{k}|^{\rho} \le \| K \|^{\rho} \sum_{k} |\nu_{k}|^{\rho}. \] Considering this fact and setting \( \mu_k = \lambda_k^{-1} \), let us study the following entire function (the Fredholm determinant of operator \( A \)): \[ \Delta_A(\mu) = \prod_{k=1}^{\infty} \left( 1 - \frac{\mu}{\mu_k} \right) \exp\left( \frac{\mu}{\mu_k} + \frac{1}{2}\left(\frac{\mu}{\mu_k}\right)^{2} + \cdots + \frac{1}{n}\left(\frac{\mu}{\mu_k}\right)^{n} \right). \] (15) Here \( n \) is the least integer satisfying the inequality \( n + 1 \geq \rho \). Obviously, with this choice of \( \Delta_A(\mu) \), the operator function \[ D_A(\mu) = \Delta_A(\mu)(E - \mu A)^{-1} \] is also entire. On the strength of known theorems from the theory of functions, \( \Delta_A(\mu) \) is an entire function of order not exceeding \( \rho \). It turns out that when condition (9) is fulfilled, the order of the entire function \( D_A(\mu) \) also does not exceed \( \rho \). Thus, the meromorphic operator function \( (E - \mu A)^{-1} \), where \( A \) is an operator of the form (14) and the Eigenvalues of \( H \) satisfy condition (9), can be represented as the ratio of entire functions, each of which is of order not exceeding \( \rho \): \[ |\Delta_A(\mu)| < \exp\left( C_1 |\mu|^{\rho} \right), \qquad \| D_A(\mu) \| < \exp\left( C_1 |\mu|^{\rho} \right). \] (16) 1 The inverse values of the Eigenvalues are generally called characteristic numbers of an operator. b) Estimates (16), of course, do not imply the existence of a sequence of contours on which condition (2) would be fulfilled. However, such a sequence is not needed in order to prove the completeness of the system of main vectors. 
The point, and this is very significant for our further discussion, is that the investigation of the completeness of the system of main vectors of a fully continuous operator can be reduced to the study of a certain fully continuous operator whose only point of spectrum is 0. This allows us to avoid the difficulties which arise in the investigation of the meromorphic resolvent of an operator, and to reduce the problem to the study of a certain entire function. For greater generality, we carry out the corresponding discussion for an arbitrary fully continuous operator, although in the investigation of completeness for operators of the special form (8) it is not used in full. Suppose \( A \) is an arbitrary fully continuous operator. Let us denote by \( Q \) the closed linear envelope of the main vectors of this operator \[ v_1, v_2, \ldots, v_n, \ldots \] (17) corresponding to the non-zero Eigenvalues. Suppose \( Q_1 \) is the orthogonal complement of \( Q \). Since \( Q \) is an invariant subspace of \( A \), \( Q_1 \) is an invariant subspace of the adjoint operator \( A^* \). Let us denote by \( V \) the operator induced by \( A^* \) in \( Q_1 \). We can now show that for completeness of system (17) in the range of values of operator \( A \), it is necessary and sufficient that \[ V = 0. \] (18) Actually, if system (17) is complete and, therefore, \( Ah \in Q \) for any \( h \), then for any \( g \in Q_1 \) we have \( (Ah, g) = 0 \). Consequently, \[ 0 = (Ah, g) = (h, A^*g) = (h, Vg) \] (19) for any \( h \), and therefore \( V = 0 \). Conversely, if condition (18) is fulfilled, then, reading equation (19) from right to left, we conclude that \( Ah \in Q \) for any \( h \), and, consequently, system (17) is complete. Let us now show that the fully continuous operator \( V \) has 0 as its unique spectral point or, as it is sometimes stated, that it is a Volterra operator. Let us assume the opposite; then \( Vg - \lambda_0 g = 0 \), \( \lambda_0 \neq 0 \), \( g \ne 0 \). Taking the scalar product with an arbitrary vector \( h \), we produce \[ 0 = (Vg - \lambda_0 g, h) = (g, (A - \bar{\lambda}_0 E)h). \] (20) It is known that a direct complement to the subspace of all vectors of the form \( (A - \bar{\lambda}_0 E)h \) lies in \( Q \). Therefore, it follows from (20) that \( g = 0 \), and we arrive at a contradiction. Thus, the proof of the completeness of the system of main vectors of a fully continuous operator can be reduced to a proof that a certain Volterra operator is equal to 0. c) Under the conditions of the theorem in question, equation (18) is proven as follows. Let us study the function \[ \omega(\mu) = ((E - \mu V)^{-1} g, h), \] (21) where \( g \) and \( h \in Q_1 \). Since \( V \) is a Volterra operator, \( \omega(\mu) \) is an entire function. Denoting by \( P \) the orthogonal projection onto \( Q_1 \), we have \[ (E - \mu V)^{-1} = P(E - \mu A^*)^{-1} P. \] Since operator \( A^* \) has the form (14), then, according to (16), \( \omega(\mu) \) is an entire function of order not exceeding \( \rho \). We shall now show that as \( \mu \rightarrow \infty \) along each ray different from the real axis, the function \( \omega(\mu) \) remains bounded. From this, on the strength of the Phragmén–Lindelöf theorem, it follows that \( \omega(\mu) = \text{const} \). Since, further, \( \frac{d\omega}{d\mu}\big|_{\mu=0} = (Vg, h) \), we conclude that \( (Vg, h) = 0 \) for all \( g \) and \( h \in Q_1 \), and therefore equation (18) actually holds. 
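Before turning to the proof of this boundedness, the overall logic of steps b) and c) can be summarized (an added schematic, not from the original): by the Phragmén–Lindelöf theorem,
\[
\text{completeness of (17)} \iff V = 0, \qquad
\omega(\mu) = \bigl((E - \mu V)^{-1} g, h\bigr)\ \text{entire of order} \le \rho,
\]
\[
|\omega(r e^{i\alpha})| \le C \ \ (\alpha \ne 0, \pi)
\;\Longrightarrow\;
\omega \equiv \omega(0)
\;\Longrightarrow\;
(Vg, h) = \omega'(0) = 0 \ \ \text{for all } g, h \in Q_1
\;\Longrightarrow\; V = 0 .
\]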
Thus, it is sufficient to show that \[ |\omega(\mu)| \leq C \] (22) as \( r \rightarrow \infty \) along the ray \( \mu = re^{i\alpha} \), \( \alpha \ne 0, \pi \). Let us prove this fact. We have \[ (E - \mu A^*)^{-1} = (E - \mu (E + Q^*) H)^{-1} = (E + S - \mu H)^{-1} (E + Q^*)^{-1} = \left[ E + (E - \mu H)^{-1} S \right]^{-1} (E - \mu H)^{-1} (E + Q^*)^{-1}, \] (23) where \( S \) denotes the fully continuous operator such that \( E + S = (E + Q^*)^{-1} \). Let us estimate the right-hand side of (23). We note that the Eigenvalues of \( (E - \mu H)^{-1} \) are \( (1 - re^{i\alpha}\nu_k)^{-1} \). Since \[ \left| 1 - re^{i\alpha}\nu_k \right|^{2} = (r\nu_k - \cos\alpha)^{2} + \sin^{2}\alpha \ \ge\ \sin^{2}\alpha, \] the operator \( (E - \mu H)^{-1} \) is uniformly bounded. Furthermore, for fixed \( f \) we have \[ \|(E - \mu H)^{-1} f\|^{2} = \sum_{k \le N} \frac{|(f, e_k)|^{2}}{|1 - re^{i\alpha}\nu_k|^{2}} + \sum_{k > N} \frac{|(f, e_k)|^{2}}{|1 - re^{i\alpha}\nu_k|^{2}}, \] (24) where \( e_k \) are the orthonormalized Eigenvectors of \( H \). Selecting \( N \) sufficiently high, we can first make the second sum \( \le \varepsilon/2 \), after which, by selecting \( r \) sufficiently large, we can make the first sum \( \le \varepsilon/2 \). Thus, as \( r \to \infty \), \[ \|(E - \mu H)^{-1} f\| \to 0. \] (25) Using this fact, we can show that \[ \lim_{r \to \infty} \|(E - \mu H)^{-1} S\| = 0. \] (26) For a fixed \( \varepsilon \), let us represent the fully continuous operator \( S \) as \[ S = S_1 + S_2, \] where \( \|S_1\| < \frac{\varepsilon}{2}\,|\sin\alpha| \), while \( S_2 \) is a finite-dimensional operator. Suppose the element \( h \) is such that \( \|h\| \le 1 \). We then have \[ \|(E - \mu H)^{-1} S h\| \le \|(E - \mu H)^{-1} S_1 h\| + \|(E - \mu H)^{-1} S_2 h\| \le \frac{\varepsilon}{2}\,\|h\| + \|(E - \mu H)^{-1} S_2 h\|. \] (27) And since the set \( \{S_2 h\} \) is finite-dimensional and bounded, on the strength of (25) the second term in (27) for sufficiently large \( r \) is also \( \le \frac{\varepsilon}{2}\,\|h\| \). Thus, formula (26) actually holds. Representation (23) now directly implies the boundedness of the norm of \( (E - \mu A^*)^{-1} \) as \( r \to \infty \). Consequently, \( |\omega(re^{i\alpha})| < C \), as was claimed. Let us single out the essential element of this proof: the finite order of the resolvent of the Volterra operator \( V \), which results from inequality (9), allows us -- on the strength of the Phragmén–Lindelöf theorem -- to draw a conclusion about the resolvent of operator \( V \) as a whole on the basis of its behavior in a sector not containing the spectrum of operator \( A^* \), where it is comparatively easily estimated. This theorem on completeness was subsequently developed in a number of works, which we will discuss later. The later works were also influenced by a work of M. S. Livshits [11], in which a triangular model was produced for a bounded operator of the form \[ A = A_R + iA_I, \] (28) where the imaginary Hermitian component \( A_I \) is fully continuous and has a trace\(^1\). In particular, this model leads to an integral representation of the Volterra operator. M. S. Livshits also established the following fact. If operator (28) is fully continuous and \( A_I \geq 0 \), then it is necessary and sufficient for completeness of the system of attached and Eigenvectors that \[ \sum_{k} \operatorname{Im} \lambda_k = \operatorname{Sp} A_I \] (29) (\( \lambda_k \) are the Eigenvalues of \( A \)). This theorem, produced by M. S. Livshits using a triangular model, was then proven significantly more simply by B. R. Mukminov [12]. It can be shown that formula (29) immediately implies that the Volterra operator \( V \) considered above is equal to 0; the converse is also true. § 2. Further Theorems on Completeness. Triangular Representation of Volterra Operators Let us now go over to later results. 
Suppose \( A \) is a fully continuous operator. Let us refer to the Eigenvalues \( s_n \) of the operator \( \sqrt{A^*A} \) as the singular values of operator \( A \). Obviously, always \( s_n \ge 0 \). We will study only those operators \( A \) for which, with a certain \( \rho > 0 \), \[ \sum_{n=1}^{\infty} s_n^{\rho} < \infty. \] (30) The exponent \( \rho \) characterizes the degree of deviation of operator \( A \) from a finite-dimensional operator. The lower the value of \( \rho \), the more rapidly the numbers \( s_n \) approach 0, and the better the operator is approximated by finite-dimensional operators. \(^1\) A fully continuous operator \( A \) is said to have a trace if the series \( \sum_n s_n \) of Eigenvalues of the non-negative operator \( \sqrt{A^*A} \) converges. Here, the trace refers to \( \operatorname{Sp} A = \sum_k (A x_k, x_k) \), where \( x_k \) is an orthonormalized basis in \( \mathcal{H} \). If \( \rho = 2 \), operator \( A \) is called a Hilbert-Schmidt operator. Integral operators of this type were studied by Carleman [8]. Where \( \rho = 1 \), operator \( A \) is called a nuclear (kernel) operator (concerning nuclear operators, see [46]). Let us introduce one more characteristic of operator \( A \). It is known that the set of values of the quadratic form \( (Ah, h) \) in the complex plane fills either a certain angle with its vertex at the origin of coordinates, or the entire plane. If operator \( A \) is self-adjoint and non-negative, the values of \( (Ah, h) \) fill the positive half-axis. In the general case, multiplying the operator by an appropriate complex constant, it can be arranged that the bisector of the angle of values of the form \( (Ah, h) \) is the positive half-axis. The following theorem is correct. If operator \( A \) satisfies condition (30) with a certain \( \rho \ge 1 \) and if \[ \left| \operatorname{Arg}(Ah, h) \right| \leq \frac{\pi}{2\rho}, \] (31) then the system of main vectors of operator \( A \) is complete in \( \mathcal{H} \). This fact was initially established in a number of particular cases by various methods by V. B. Lidskiy: for the case \( \rho = 2 \) in [13], using the results of T. Carleman [8]; for the case \( \rho = 1 \) in [14], based on the trace formula \[ \sum_{k} (A\chi_k, \chi_k) = \sum_{k} \lambda_k, \] (32) which, as was proven in [14], is correct for any nuclear operator (in formula (32), \( \lambda_k \) are the Eigenvalues of \( A \), while \( \chi_k \) is an arbitrary orthonormalized basis). However, after the minimal type of the first Fredholm minor \( D_A(\mu) \) was proven under condition (30)\(^1\), the theorem formulated above was proven by a unified method, applying the Phragmén–Lindelöf theorem to function (21). As B. Ya. Levin and V. I. Matsayev proved, the conditions of completeness (30) and (31) are precise: with a given convergence exponent \( \rho \) of series (30) and a broader range of values of the quadratic form than (31), one can indicate an operator with an incomplete system of main vectors. Further progress in the investigation of completeness was achieved in the works of M. G. Kreyn, L. A. Sakhnovich and M. S. Brodskiy. L. A. Sakhnovich and M. S. Brodskiy produced new triangular representations of Volterra operators. Let us discuss these works briefly. --- \(^1\) See [43]. As the authors have learned, V. I. Matsayev showed that if \( V \) is a Volterra operator and \( s_n = o(n^{1/\rho}) \), then \( \|E - V\| = o(n^{-1/\rho}) \). L. A. Sakhnovich [15], [16], generalizing the results of M. S. 
Livshits, constructed a triangular model of the bounded operator \[ A = A_R + iA_I, \] (33) having the property that for any two invariant subspaces \( H_1 \subset H_2 \) of operator \( A \) with \( \dim (H_2 \ominus H_1) > 1 \), an invariant subspace \( H_3 \) of operator \( A \) is found such that \( H_1 \subset H_3 \subset H_2 \) and \( H_3 \neq H_1, H_2 \). In particular, as L. A. Sakhnovich demonstrated, this property is possessed by any operator (33) if \( A_I \) is of Hilbert-Schmidt type. In the case when the spectrum of \( A \) consists only of the point 0 and \( A_I \) is of Hilbert-Schmidt type, operator (33) is unitarily equivalent to the operator \[ (\tilde{A} f)(x) = \int_0^{x} N(x, t)\, f(t) \, dt, \] (34) where \( f(t) \) is a vector function, generally infinite-dimensional, and \( N(x,t) \) is a matrix kernel satisfying the condition \[ \int\!\!\int \left\| N(x,t) \right\|^2 \, dx\, dt < \infty. \] (35) It immediately follows from the representation (34), (35) that if \( A \) is a Volterra operator and \( A_I \) is a Hilbert-Schmidt operator, then \( A \) is also of Hilbert-Schmidt type. This fact has significantly influenced a number of later works (see below). In particular, it allowed L. A. Sakhnovich to strengthen the theorem of V. B. Lidskiy concerning completeness in the case of Hilbert-Schmidt operators in the following form. If \( A \) is a fully continuous operator, \( A_R \geq 0 \) and \( A_I \geq 0 \), and if \( A_I \) is a Hilbert-Schmidt operator, then completeness obtains. Another triangular representation for the Volterra operator was produced by M. S. Brodskiy [17]. The triangular representation of M. S. Brodskiy acts in the same Hilbert space as operator \( A \), and coincides with operator \( A \) exactly, not merely to within a supplementary component, as occurs in the models of M. S. Livshits and L. A. Sakhnovich. Going over to a presentation of this result, let us assume initially that \( A \) is a linear transformation in an \( n \)-dimensional space, all Eigenvalues of which are equal to 0. Suppose \[ \varphi_1, \varphi_2, \ldots, \varphi_n \] (36) is an orthonormalized basis in which the matrix of the transformation is triangular\(^1\). Then \[ A\varphi_1 = 0; \quad A\varphi_2 = a_{12}\varphi_1; \quad \ldots; \quad A\varphi_n = a_{1n}\varphi_1 + a_{2n}\varphi_2 + \ldots + a_{n-1,n}\varphi_{n-1}. \] (37) Let us denote by \( P_k \) the projection operator onto the subspace spanned by the first \( k \) basis vectors (36), and set \( \Delta P_k = P_k - P_{k-1} \). It then immediately follows from formula (37) that for any \( h \) \[ Ah = \sum_{k=1}^{n} P_{k-1} A\, \Delta P_k\, h. \] (38) We note also that, according to (37), \( \sum_{k=1}^{n} \Delta P_k\, A\, P_{k-1} = 0 \). Going over in this equation to adjoint operators and setting \( A_I = \frac{1}{2i}(A - A^{*}) \), we can write (38) as \[ Ah = 2i \sum_{k=1}^{n} P_{k-1} A_I\, \Delta P_k\, h. \] (39) This representation, as M. S. Brodskiy has shown, generalizes to the case of any Volterra (fully continuous) operator \( A \) acting in \( \mathcal{H} \). Namely, any Volterra operator can be represented as \[ A = 2i \int_{\mathfrak{M}} P(x)\, A_I \, dP(x). \] (40) Here \( A_I \), as always, is the imaginary component of operator \( A \), \( \mathfrak{M} \) is a certain closed subset of the segment \([0,1]\), \( P(x) \), \( x \in \mathfrak{M} \), is a chain of projection operators, continuous in \( \mathfrak{M} \) and monotonically increasing, projecting onto invariant subspaces of operator \( A \), where \( P(0) = 0 \), \( P(1) = E \), and if \((\alpha, \beta)\) is a complementary interval of the set \( \mathfrak{M} \), the operator \( P(\beta) - P(\alpha) \) is one-dimensional. --- \(^1\) The existence of such a basis is established by the well-known theorem of I. Schur. 
Integral (40) is understood as the limit of a sequence of partial sums in the ordinary operator norm. We note that the proof of the existence of the chain of projectors \( P(x) \) is based on the von Neumann–Aronszajn theorem [18] on the existence of a non-trivial invariant subspace for a fully continuous operator acting in \( \mathcal{H} \). A chain of this form was constructed independently by L. A. Sakhnovich in [15], [16], and is the basis of the results produced there. Representation (40) has been found quite convenient in the study of Volterra operators. New, important results concerning the convergence of integrals such as (40), under conditions when \( P(x) \) is a monotonic chain of projectors, not necessarily generated by a fixed Volterra operator, while \( A_I \) is a certain self-adjoint, fully continuous operator, were produced by I. Ts. Gokhberg, M. G. Kreyn and V. I. Matsayev [19, 20, 21, 22]. These authors, using triangular representations, established the following fact, generalizing the theorem of L. A. Sakhnovich presented above. Suppose \( V = V_R + iV_I \) is a Volterra operator and suppose \( \gamma_k \) are the Eigenvalues of \( V_I \), while \( \sigma_k \) are the Eigenvalues of \( V_R \). Then for \( \rho > 1 \) the series \[ \sum_{k=1}^{\infty} |\gamma_k|^{\rho} \quad (41) \qquad \text{and} \qquad \sum_{k=1}^{\infty} |\sigma_k|^{\rho} \quad (42) \] converge or diverge simultaneously. Let us emphasize that the statement formulated allows us to judge the growth of the entire functions \( \Delta_A(\mu) \) and \( D_A(\mu) \) in the case of a Volterra operator with information only about the imaginary or the real component of the operator. The order of these functions for \( \rho > 1 \) does not exceed \( \rho \). This allows us to strengthen the completeness theorem formulated above. If operator \( A \) is such that its imaginary component \( A_I = \frac{1}{2i}(A - A^*) \) satisfies condition (30) with \( \rho > 1 \) and if condition (31) is fulfilled, completeness occurs. For \( \rho = 1 \), convergence of series (41) does not, generally speaking, imply convergence of series (42). One example is the integration operator \( \int_0^{x} f(t)\, dt \), for which one Hermitian component is one-dimensional, while the Eigenvalues of the other are of order \( 1/n \) \( (n = \pm 1, \pm 2, \ldots) \). Volterra operators, the imaginary components of which have a trace, were subjected to detailed study in the works of M. G. Kreyn [23, 24]. M. G. Kreyn relates the Volterra operator \( V = V_R + iV_I \) to the analytic function \[ f(z) = \operatorname{Det} \left\{ (E - zV_R)(E - zV)^{-1} \right\}. \] (43) Since \( (E - zV_R)(E - zV)^{-1} = E + izV_I(E - zV)^{-1} \) and \( V_I \) has a trace, the determinant in the right-hand side of (43) converges uniformly and is an entire function (we recall that \( V \) is a Volterra operator and, consequently, \( (E - zV)^{-1} \) is an entire function); the zeros of \( f(z) \) are the numbers \( \sigma_k^{-1} \). As M. G. Kreyn proved, the function \( f(z) \) can be represented within the upper and the lower half-plane as the ratio of two bounded holomorphic functions. From this, based on the theorem of M. G. Kreyn [25] and the theorem of Levinson [26], it follows that there exists a common finite limit \[ \lim_{r \to \infty} \frac{n_{+}(r, V_R)}{r} = \lim_{r \to \infty} \frac{n_{-}(r, V_R)}{r} = \frac{h}{\pi}. \] (44) Here, \( n_+(r, V_R) \) and \( n_-(r, V_R) \) represent the number of characteristic numbers of operator \( V_R \) in the intervals \((0, r)\) and \((-r, 0)\) respectively. 
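A short computation behind the integration-operator example mentioned above (added here for clarity; the choice of the interval \( (0,1) \) is ours): on \( L_2(0,1) \),
\[
(Vf)(x) = \int_0^x f(t)\,dt, \qquad (V^*f)(x) = \int_x^1 f(t)\,dt,
\]
\[
(V_R f)(x) = \tfrac{1}{2}\int_0^1 f(t)\,dt \ \ (\text{rank one}), \qquad
(V_I f)(x) = \tfrac{1}{2i}\int_0^1 \operatorname{sgn}(x-t)\, f(t)\,dt, \qquad
\gamma = \pm\frac{1}{\pi(2j+1)},\ \ j = 0, 1, 2, \ldots,
\]
so that one Hermitian component is one-dimensional while the Eigenvalues of the other behave like \( 1/j \) and are not absolutely summable; multiplying \( V \) by \( i \) interchanges the roles of the real and imaginary components.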
Formulas (44) contain the asymptotics of the Eigenvalues of the real component of a Volterra operator whose imaginary component has a trace, and supplement the preceding result of Gokhberg, Kreyn and Matsayev. It is remarkable that in the case when \( V_I \geq 0 \), in formula (44) \[ h = \operatorname{Sp} V_I. \] (45) If, therefore, \( V_I \geq 0 \) and the common limit in (44) is equal to 0, then \( V_I = 0 \), and the whole operator \( V \), being a self-adjoint Volterra operator, is equal to 0. This fact about Volterra operators leads to the following completeness theorem. If the fully continuous operator \( A = A_R + iA_I \) is such that \( A_I \geq 0 \) and if one of the two conditions \[ \lim_{r \to \infty} \frac{n_+(r, A_R)}{r} = 0, \] (46) or \[ \lim_{r \to \infty} \frac{n_-(r, A_R)}{r} = 0, \] (47) is fulfilled, then the system of main vectors of \( A \) is complete. This theorem contains the results of V. B. Lidskiy concerning completeness of operators having a trace \((\rho = 1)\) as a particular case, since if operator \( A \) has a trace, then both conditions (46) and (47) of the theorem of M. G. Kreyn are fulfilled. Further, M. G. Kreyn finds a necessary and sufficient condition of completeness for fully continuous operators \( A \) such that \( A_I \geq 0 \), \( \operatorname{Sp} A_I < \infty \). Completeness occurs when and only when \[ \int_{0}^{r} \frac{n(t, A_R)}{t} \, dt - \int_{0}^{r} \frac{n(t, A)}{t} \, dt = o(1) \] as \( r \to \infty \), bypassing a certain set of finite logarithmic length. Here \( n(r, A) \) is the number of characteristic numbers of \( A \) in the circle of radius \( r \). Simultaneously with the work of M. G. Kreyn, an important study appeared by B. Ya. Levin [27], in which an estimate was produced under the same assumptions \((A_I \geq 0 \text{ and } \operatorname{Sp} A_I < \infty)\) as \( r \to \infty \), bypassing a set of finite logarithmic length, as well as a number of other results. In all of these works concerning completeness of the system of main vectors of a fully continuous operator, conditions were stated under which the resolvent of the operator is represented as a ratio of entire functions of finite order. Incidentally, an attempt to remove condition (9) from the theorem of M. V. Keldysh, as yet unsuccessful, leads to entire functions of infinite order. In connection with this, there is great interest in a recent result by V. I. Matsayev [21], according to which the system of main vectors of the operator \( A = H(E + Q) \) [cf. (8)] is complete provided only that \[ \sum_{k=1}^{\infty} \frac{s_k}{2k+1} < \infty, \] where \( s_k \) are the Eigenvalues of \( \sqrt{Q^*Q} \). Condition (9) can then be discarded. Under these assumptions, the resolvent is generally not represented as a ratio of entire functions of finite order. We have not touched upon an interesting study by D. E. Allakhverdiev [40] concerning the conditions of completeness in the case of weakly perturbed normal operators, in which the author succeeded in extending the theorem of M. V. Keldysh to this case; we have also not mentioned the new, deep theorems of V. I. Matsayev, based on precise estimates of entire functions, or a number of other studies. However, even our incomplete review shows that the problem of the conditions of completeness has been greatly advanced in recent years. This progress has been achieved by a combination of geometric and analytic methods. § 3. 
Theorems on Summability and Convergence of Series with Respect to Main Vectors It must be emphasized that, since the system of main vectors is not orthogonal, its completeness does not imply convergence of the Fourier series in the elements of this system. Furthermore, as examples show, under the conditions of completeness found, formally written series such as (3) and (4) generally diverge. It therefore becomes a pressing problem to define the coefficients of a linear combination (11) of the attached and Eigenelements approximating a given element \( f \) with predetermined accuracy. For one class of operators, this problem was solved in the work of V. B. Lidskiy [28], in which he set forth the idea of summation of series with respect to main vectors by the method of Abel. Let us briefly discuss this problem. Suppose \( A \) is a fully continuous operator and suppose \( s_k \) are its singular values (the Eigenvalues of operator \( \sqrt{A^*A} \)). Let us assume that operator \( A \) satisfies, with a certain \( \rho > 1 \), condition (30), \[ \sum_{k=1}^{\infty} s_k^{\rho} < \infty, \] (50) and, with a certain \( \rho' > \rho \), the condition \[ -\frac{\pi}{2\rho'} < \operatorname{Arg}(Ah, h) < \frac{\pi}{2\rho'}. \] (51) Assuming for simplicity that all characteristic numbers \( \mu_k \) of operator \( A \) are simple, we denote by \( \phi_k \) the Eigenvectors of \( A \), by \( \psi_k \) the Eigenvectors of \( A^* \), normalized by the condition \( (\phi_k, \psi_k) = 1 \). Suppose \( f = Ah \), where \( h \) is an arbitrary element of the Hilbert space. The formally written series (4) for the vector \( f = Ah \) generally diverges. However, the following theorem is correct. If the fully continuous operator \( A \) satisfies conditions (50) and (51), then for any \( t > 0 \) the series \[ u(t) = \sum_{n=1}^{\infty} \left( \sum_{k=N_{n-1}+1}^{N_n} e^{-\mu_k^{a} t}\, (f, \psi_k)\, \phi_k \right) \] (52) converges and \[ \lim_{t \to 0} u(t) = f. \] (53) In formula (52), \( a \) is any number satisfying the condition \( \rho' > a \geq \rho \); \( N_n \) is a certain subsequence of numbers of the natural series, independent of \( t \) (we set \( N_0 = 0 \)). Thus, by replacing condition (31) with the somewhat more rigid condition (51), we can guarantee not only completeness of the system of main vectors, but summability of the corresponding expansions. It can further be shown that under conditions (50) and (51), with any \( f = Ah \), explicit estimates (54) and (55) hold for the quantities \[ \left\| u(t) - f \right\| \quad \text{and} \quad \left\| u(t) - \sum_{k=1}^{N_n} e^{-\mu_k^{a} t}\, (f, \psi_k)\, \phi_k \right\|. \] These estimates allow us, with a fixed \( \varepsilon \), to select first a sufficiently small \( t > 0 \), and then, with the selected \( t \), a sufficiently large \( N_n \), so that, taking as \( c_k \) the coefficients \( e^{-\mu_k^{a} t}(f, \psi_k) \), we satisfy inequality (11). The proof of the theorem proceeds by representing the summing factors by means of a Cauchy integral. Suppose \[ u(t) = \frac{1}{2\pi i} \int_{\gamma} e^{-\mu^{a} t}\, (E - \mu A)^{-1} f \, \frac{d\mu}{\mu}, \] (56) where \( \gamma \) is an infinite contour encompassing all poles of the integrand and lying in a sector in which the function \( \exp(-\mu^{a} t) \) decreases. Using estimates (16) and considering the minimal type of \( D_A(\mu) \) and \( \Delta_A(\mu) \), we can prove the existence of a sequence of contours \( \gamma_k \), receding to infinity, on which the integrand approaches 0. This allows us to represent the integral as the series of residues (52). 
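Why the factors \( e^{-\mu_k^{a} t} \) act as summing factors can be seen from the location of the spectrum (an added side remark): since every Eigenvalue \( \lambda_k \) lies in the closure of the set of values of the form \( (Ah, h) \), condition (51) gives \( |\arg \mu_k| \le \pi/2\rho' \) for the characteristic numbers \( \mu_k = \lambda_k^{-1} \), and hence, for \( \rho \le a < \rho' \),
\[
|\arg \mu_k^{\,a}| \;=\; a\,|\arg \mu_k| \;\le\; \frac{a\pi}{2\rho'} \;<\; \frac{\pi}{2}
\qquad\Longrightarrow\qquad
\bigl| e^{-\mu_k^{a} t} \bigr| \;=\; e^{-t \operatorname{Re} \mu_k^{a}} \;\le\; \exp\!\left( -t\, |\mu_k|^{a} \cos\frac{a\pi}{2\rho'} \right),
\]
so the terms of (52) decay rapidly for every \( t > 0 \).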
In connection with formula (52), let us touch upon one problem which is of independent significance. For \( a = 1 \), the expansion (52) becomes \[ u(t) = \sum_{n=1}^{\infty} \left( \sum_{k=N_{n-1}+1}^{N_n} e^{-\mu_k t}\, (f, \psi_k)\, \phi_k \right) \] (57) and, as we can easily see, is a solution of the Cauchy problem for the equation \[ \frac{du}{dt} + Lu = 0 \quad (L^{-1} = A) \] (58) with the initial condition \[ u|_{t=0} = f. \] (59) --- 1 Integral (56) is converted to integral (1) if we make the replacement \( \lambda = \mu^{a} \) and assume \( t = 0 \). Convergence of series (57) for \( t > 0 \) means, therefore, that if operator \( L \) in equation (58) has a fully continuous inverse satisfying conditions (50)-(51), the solution of the Cauchy problem (58)-(59) can be expanded into a Fourier series with respect to the main vectors of operator \( L \), converging for \( t > 0 \) (cf. [49]). These conditions are satisfied, for example, by elliptic differential operators of order \( 2m \) greater than the number \( n \) of independent variables. Consequently, in these cases the solution of the Cauchy problem can be found by the Fourier method. As concerns equation (58) with an elliptic operator, this result apparently can be strengthened, since it was produced using the very general estimate (16) of the resolvent, which does not take into account the special form of the operator. We note that for the resolvent of the elliptic operator (6) with two independent variables, the following estimate, produced by V. B. Lidskiy [44], is correct: \[ \| (L - \mu E)^{-1} \| \leq \exp \left( a^{2}\, |\mu|^{2} \sum_{k=1}^{\infty} \frac{1}{\sqrt{k}\, |\mu - \mu_k|} \right) \] (60) for all \( \mu \). In this formula, \( \mu_k \) are the Eigenvalues of operator (6). Inequality (60) is more precise than the general estimate given by formula (16), and allows us to extend the result formulated above on convergence of the Fourier series to the case of the elliptic operator (6) where \( n = 2 \). The problem of convergence of series (57) for \( t = 0 \), even in the case of differential operators with partial derivatives, remains open. Generally, convergence of expansions with respect to main vectors has been established only for a very narrow class of operators, as was noted in the Introduction. In addition to the well-known old studies on convergence of series in the case of boundary problems for ordinary differential equations, we can note also the results of B. R. Mukminov [12], I. M. Glazman [29], A. S. Markus [30], in which operators acting in an abstract Hilbert space \( \mathcal{H} \) were studied. Let us discuss briefly the results of I. M. Glazman. The infinite system of elements \( \varphi_k \) \((k = 1, 2, \ldots)\) is called a Riesz basis of its closed linear envelope if, with certain \( m > 0 \) and \( M \) and all \( N \) and \( c_k \), the following inequality is correct: \[ m \sum_{k=1}^{N} |c_k|^2 \;\leq\; \sum_{j,k=1}^{N} (\varphi_k, \varphi_j)\, c_k \overline{c_j} \;\leq\; M \sum_{k=1}^{N} |c_k|^2. \] (61) We will not dwell on the fact that when condition (61) is fulfilled, the system \( \varphi_k \) is linearly independent and actually forms a basis\(^1\). We note only that condition (61) is obviously fulfilled when the angles between the vectors of the system are close to a right angle. \(^1\) It can be proven that if the system \( \varphi_k \) forms a Riesz basis, there is a bounded, continuously invertible operator \( C \) which converts the system \( \varphi_k \) into an orthonormalized basis. 
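In matrix terms (an added restatement of (61), not from the original), the condition says that the Gram matrices of the system are uniformly positive definite and uniformly bounded:
\[
G_N = \bigl( (\varphi_k, \varphi_j) \bigr)_{j,k=1}^{N}, \qquad m I \;\le\; G_N \;\le\; M I \quad \text{for all } N,
\]
i.e. the spectra of all Gram matrices \( G_N \) lie in a fixed segment \([m, M]\) with \( m > 0 \); for an orthonormalized system \( G_N = I \).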
We find that if $\phi_k$ are the Eigenvectors of a certain dissipative $A$ (i.e., $A_1 > 0$), the angles between them can be estimated using the corresponding Eigenvalues. Namely, the following inequality is correct: $$\| (v_i, v_j) \| > \frac{4\alpha_{l_1}l_2}{|l_1 - l_2|}$$ (it is assumed that $\| \phi_i \| = \| \phi_j \| = 1$.) Using this inequality, the following theorem is proven. If $\phi_k$ is an infinite system of normalized Eigenvectors of a limited dissipative operator $A$ and if $$\lim_{m \to \infty} \frac{l_{m_1}l_{m_2}}{|l_1 - l_2|} = 0,$$ then system $\phi_k$ is a Riesz base of its closed linear envelope. This theorem was produced earlier under more limiting assumptions by B. R. Mukminov by another method. In conclusion, we note that condition (63) and similar conditions place rigid limitations on the Eigenvalues, so that the class of operators for which they are fulfilled in quite narrow. Of the general problems, let us first discuss works which develop the results of N. S. Livshits [11] and are dedicated to conversion of limited operator $A$ to triangular form. We have already indicated, in connection with the representation of Walter operators, that L. A. Sakhnovich [15] succeeded in constructing a triangular model of the limited linear operator, having a sufficient reserve of invariant subspaces. In this case, the model of L. A. Sakhnovich has the form $$\lambda f = \frac{d}{dx} \int_0^x N(s, \eta) f(\eta) d\eta.$$ Operator (64) acts in Hilbert space $\mathcal{H}$ of vector functions $f(t) = (f_1(t), \ldots)$, satisfying the condition \[ \sum_0 \int |f_i(t)|^2 dt < \infty. \] In formula (64), $N(x,t)$ is a certain matrix kernel. Operator $A$ is uniquely equivalent to the initial operator $A$ with an accuracy to a certain invariant subspace relative to $\lambda$ and $\lambda^*$, in which the equality $\lambda A = \lambda^* A$. Under certain additional conditions placed on operator $A$, differentiation can be performed following the integral sign in formula (64), thus simplifying the model. For example, if in formula (35), $A_I$ is a Hilbert-Schmidt type operator and the spectrum of operator $A$ is real, formula (64) becomes \[ \frac{\partial A}{\partial x} = -H(x) f(s) + \int N(x,t) f(t) dt. \] where $H(x)$ is the Hermit operator, while $N(x,t)$ is a matrix kernel satisfying condition (35). We have already indicated the effectiveness of the triangular presentation in the case of Walter operators. M. S. Brodskiy, in [31] (1960), generalizing his earlier result [17], produced a triangular representation of limited operator $A$ with real spectrum and imaginary, fully continuous component $A_I$, under an additional assumption concerning the structure of the invariant subspaces of operator $A$. The triangular representation is as follows: \[ A = \int a(x) dP(x) + 2i \int P(x) A_I dP(x). \] In this formula, $P(x)$ is a monotonic chain of projection operators, projecting onto invariant subspaces of operator $A$ [cf. (40)], while $a(x)$ is a certain real function, the values of which correspond with the spectrum of $A$. As Yu. I. Lybich and V. I. Matsaev showed [32], the conditions place on the invariant subspaces of operator $A$ by M. S. Brodskiy are fulfilled if \[ \int |m_n + m_n + A(t) dt| < \infty, \quad A(t) = \sup_{|\lambda|} \|A - \lambda E\|^{-1}. \] This condition is quite broad; as V. I. Matsaev proved, it is fulfilled if the series $\sum |r_n|$ converges where $r_n$ are the Eigenvalues of $A_I$ and, consequently, practically with any fully continuous $A_I$. 
We note that on the assumption that $|\lambda_n|^2 < \infty$, triangular representation (66) was produced earlier by I. Ts. Gokhberg and M. G. Kreyn. A further improvement of the triangular presentations can be produced, apparently, by simplifying the \( W_{-1} \) component in formula (65) and (66). Interesting results in this direction were produced by L. A. Sakhnovich [33]. It is also desirable to avoid the condition of reality of the spectrum of the operator. Although a non-real operator spectrum with a fully continuous imaginary portion is discrete, separation of the corresponding invariant subspaces is a far from trivial problem. Let us now touch upon another trend in the theory of linear operators, the theory of spectral operators of N. Danford [35], (1954). It is assumed in this theory that limited operator \( T \), acting in a Banach space has the set of projectors \( E(\delta) \) (\( \delta \) is any set in the complex plane measurable after Borel). Set \( E(\delta) \) is assumed to be evenly limited with respect to \( \delta \): \[ |E(\delta)| < N \tag{67} \] as well as denumerably additive: for each sequence of non-intersecting Borel sets \( \delta_n \), \[ E(\bigcup \delta_n) = \sum_i E(\delta_i) \tag{68} \] where the series on the right converges strongly. Under certain natural additional assumptions concerning set \( E(\delta) \), it has been established that the corresponding operator \( T \) called the spectral operator, can be represented as \[ T = S + N \tag{69} \] where \[ S = \int \chi E(\delta) \tag{70} \] while \( N \) is the generalized 0-power operator in the sense of I. M. Gel'fand \[ \lim_{\|T\| \to 0} \sqrt[n]{\|T^n\|} = 0. \] and the operators \( N \) and \( S \) are commutative. Representation (68) is a full analogue of the Jordan form. As N. Danford shows, for any single-valued function \( f(\lambda) \), analytic in spectrum \( T \), the formula \[ f(T) = \sum_{\lambda \in \mathbb{Z}} \lambda^N \int \lambda^N(\lambda) E(\delta) \tag{71} \] well-known in the finite-dimensional case, is correct. Works on further development of the theory of spectral operators were included in the objective review of N. Danford [35], (1958). We can see from this review that the mathematicians working in this direction have directed their efforts toward the production of sufficient conditions imposed on operator $T$ and its resolvent, under which the operator is spectral. The conditions produced to date contain a requirement of not over exponential growth of the resolvent as parameter $\lambda$ approaches the point of the spectrum. Furthermore, it is required that for any two elements $x$ and $y$ such that the functions $R_{\lambda}x$ and $R_{\lambda}y$ have no common points of irregularity, the inequality $\|x\|\leq c\|x+y\|$, be fulfilled with a certain constant $c$, independent of $x$ and $y$. As is stated in the review, all differential operators with ordinary derivatives and regular boundary conditions (operators studied by D. Birkhof) are spectral operators. The work of N. Danford also presents certain singular problems. For example, it is stated that the operator $$l(y) = -\frac{d^2y}{dx^2} + q(x)y,$$ (72) studied by M. A. Naymark in [36], defined in a variety of functions $y(x) \in L_2(0, \infty)$, $y'(0) = hy(0)$, under the condition that $$\int_0^\infty (1 + s^2)|q(s)|ds < \infty$$ (73) is spectral ($q(x)$ and $h$ are generally not real). 
Among the differential operators for which no expansion into a Fourier integral was produced earlier, this review states, the following operator is spectral $$l(y) = -\frac{d^2y}{dx^2} + a\frac{dy}{dx} + q(x)y,$$ (74) where $\text{Re}a \neq 0$, $q(x+\pi) = q(x)$. With real $q(x)$ and $a=0$, operator (74) is self-adjoint: its spectrum, as is well-known, is an infinite series of intervals moving off to $\infty$. All points in the spectrum are double. As M. I. Serov [41] has shown, in the case of the complex-valued function $q(x)$ and $a=0$, the picture changes little: the intervals are deformed into curved sectors, asymptotically retaining their length and distance between neighboring n's. If, however, we assume in (74) that $\text{Re}a \neq 0$, the spectrum changes significantly. Several of the first intervals are split into ovals; all remaining lacunas are extended and the twice-added ray is split into a curve asymptotically close to a parabola. M. I. Serov, studying operator (74) on the suggestion of I. M. Gel'fand, estimated the resolvent of the operator, with approximation of the parameter to the spectrum. However, he did not succeed in producing expansion into a Fourier integral. It is even more interesting that this problem is solved from general considerations. It should be noted that proof of the results announced by N. Danford has unfortunately never been published. However, the incomplete formulation of the results and the absence of proof lead to disagreements. For example, in contrast to a statement contained in the review, B. S. Pavlov [42] has shown, but constructing a contradictory example, that operator (72) under condition (73) is not spectral. The corresponding statement is incorrect even if \[ \int_0^\infty (1 + a^m) |g(x)|dx < \infty, \quad (a > 1). \] (75) In connection with the theory of spectral operators, we note an interesting attempt undertaken by V. E. Lyantse [37] to construct a theory of spectral operators under conditions of completeness of the system of invariant subspaces, without assuming even limitation of the spectral set (67) or denumerable additiveness (68). It is to be hoped that this theory will be applied. In conclusion, let us discuss the problem of expansion with respect to Eigenfunctions of an ordinary differential operator in the case of an unlimited area of definition of the functions. We have mentioned the well-known theorem of M. A. Naymark [36] of expansion with respect to Eigenfunctions of the Sturm-Liouville equation with unreal potential \( q(x) \), satisfying condition (73). This theorem of M. A. Naymark was extended by V. N. Funtakov [38] to the case of an even order differential operator \[ I(y) = \sum p_n(x) y^{(n-2)} + \ldots + p_m(x) y. \] acting in \( L_2(0, \infty) \), on the assumption that the coefficients \( p_n(x) \) decrease exponentially as \( x \to \infty \). A new approach to problems of expansion with respect to Eigenfunctions of a differential operator was suggested in a work by V. A. Marchenko [39]. Suppose \[ I(y) = \frac{d^2 y}{dx^2} - q(x) y \] (76) is a differential operator, defined in \( L_2(0, \infty) \) in a manifold of functions satisfying the boundary condition \[ y'(0) = l_0 y(0). \] (77) q(x) is an arbitrary complex function, integrable in each finite interval, while h is a complex number. Suppose ω(s, x) is the solution of equation l(y)+s^2y=0, satisfying the initial conditions \[ ω(s, 0) = 1, \quad ω'(s, 0) = h. 
\tag{78} \] Let us compare each finite function f(x) to a Fourier ω transform \[ E_f(s) = \int f(x) ω(s, x) \, dx. \tag{79} \] If \( E_g(s) \) is the Fourier ω transform of function g(x), then in the case of real \( q(x) \) and h, as we know, the following equation of Parseval is correct: \[ \int f(x)g(x) \, dx = \int E_f(\sqrt{λ})E_g(\sqrt{λ}) \, dλ. \tag{80} \] where \( ρ(λ) \) is a non-decreasing real function. The right portion of this formula can be interpreted to mean that the Parseval equation is retained with arbitrary \( q(x) \) and h, i.e., in the non self adjoint case. Going over to the presentation of this problem, we note that \( ω(s, x) \) and \( \cos sx \) are related by the transforms \[ ω(s, x) = \cos sx + \int_0^x K(s, t) \cos st \, dt \tag{81} \] \[ \cos sx = ω(s, t) + \int_0^t H(s, t)ω(s, t) \, dt. \tag{82} \] where \( K \) and \( H \) are smooth kernels. Substituting \( ω(s, x) \) from formula (81) into (79), it is easy -- on the basis of the Paley-Wiener theorem -- to see that \( E_f(s) \) is an even, exponential-type function with integrable square on the real axis. Let us represent by \( Z \) the topological space of all integral even functions, integrable on the real axis with the following definition of convergence: \( F_n(λ) \to F(λ) \), if \[ \lim_{n \to \infty} \int |F(λ) - F_n(λ)| \, dλ = 0 \] and the power \( σ \) of functions \( F_n(λ) \) are limited as a set. It is easy to see that the product \( E_f(\sqrt{λ})E_g(\sqrt{λ}) \) belongs to \( Z \), and it can be shown that this set of such derivatives is compact in \( Z \). The right portion in formula (80) can therefore be looked upon as a linear functional in $Z$, fixed in a compact manifold. The latter can be extended to all of space $Z$. Thus, in the self-adjoint case, operator (76) generates a certain continuous functional in $Z$ for which formula (80) is correct. As V. A. Marchenko proves, this affirmation retains its force in the general case, that is, operator (76) can always be related to continuous functional $(R, F(\lambda))$, $F(\lambda) \in Z$, for which the following formula is correct: \[ \int f(x)g(x)\,dx = \langle R, E_\lambda \psi_1, E_\lambda \psi_1 \rangle. \] (83) It is remarkable that V. A. Marchenko succeeded in solving the reverse problem: restore function $q(x)$ and $h$ on the basis of fixed functional $(R, F(\lambda))$. We note, however, that determination of the analytic expression for functional $R$ can be fully performed only with certain additional limitations placed on function $q(x)$. For example, under condition (73) it can be shown that \[ (R, F(\lambda)) = \int \frac{F(\sqrt{\lambda})}{B^\pm(l)} d\left(\frac{2}{\sqrt{\lambda}}\right) - \sum \text{Res } m_1(l) F(\sqrt{\lambda}), \] where \[ B^\pm(l) = y(l \pm \epsilon, 0) \lambda - y(l \pm \epsilon, 0). \] \[ m_1(l) = \frac{m(l)}{1 + m(l)}. \] $m(l)$ is an analogue of the Weil function. In the more general case, the functional can be represented by an integral with respect to the contour encompassing the spectrum of the operator. This contour has not yet been successfully extended to the spectrum. It can be shown that the idea of comparison of a linear operator of a functional in a certain topological space of analytic functions with subsequent study of the carrier of this functional can be applied in the case of a general linear operator. However, up to now this has been realized only in the case of a problem for one ordinary differential second order equation and system (see [47]). 21. V. I. 
Matsayev, Ob Odnom Klasse Vpolne Nepreryvnykh Operatorov [One Class of Fully Continuous Operators] (to be printed in DAN SSSR).
37. V. E. Lyantse, Ob Odnom Obobshchenii Ponyatiya Spektral'nogo Operatora [One Generalization of the Concept of the Spectral Operator] (to be printed in DAN SSSR).
42. B. S. Pavlov, O Nesamosopryazhennom Operatore Na Poluosi [A Non-Self-Adjoint Operator on a Half Axis], DAN SSSR, Vol 141, No 4, 1961.
44. V. B. Lidskiy, O Razlozhenii v Ryad Fur'e Po Glavnym Vektoram Ellipticheskogo Operatora [Expansion into a Fourier Series with Respect to the Main Vectors of an Elliptical Operator], Matem. Sb., Vol 57, No 2, 1962, pp. 137-150.
Using Robust Estimation Algorithms for Tracking Explicit Curves Jean-Philippe Tard\textsuperscript{1}, Sio-Song Ieng\textsuperscript{1}, and Pierre Charbonnier\textsuperscript{2} \textsuperscript{1} LIVIC (INRETS-LCPC), 13, Route de la Minière, 78000 Versailles, France. tarel@lpc.fr, ieng@inrets.fr \textsuperscript{2} LRPC, 11, Rue Jean Mentelin, BP 9, 67200 Strasbourg, France. Pierre.Charbonnier@equipement.gouv.fr Abstract. The context of this work is lateral vehicle control using a camera as a sensor. A natural tool for controlling a vehicle is recursive filtering. The well-known Kalman filtering theory relies on Gaussian assumptions on both the state and measure random variables. However, image processing algorithms yield measurements that, most of the time, are far from Gaussian, as experimentally shown on real data in our application. It is therefore necessary to make the approach more robust, leading to the so-called robust Kalman filtering. In this paper, we review this approach from a very global point of view, adopting a constrained least squares approach, which is very similar to the half-quadratic theory, and justifies the use of iterative reweighted least squares algorithms. A key issue in robust Kalman filtering is the choice of the prediction error covariance matrix. Unlike in the Gaussian case, its computation is not straightforward in the robust case, due to the non-linearity of the involved expectation. We review the classical alternatives and propose new ones. A theoretical study of these approximations is out of the scope of this paper, however we do provide an experimental comparison on synthetic data perturbed with Cauchy-distributed noise. 1 Introduction Automatic driving and assistance systems development for vehicle drivers has been subject of investigations from many years [1]. Usually, this kind of problem is decomposed into two different tasks: perception and control. We focus on the particular problem of the lateral control of a vehicle on its lane, or lane-keeping. The perception task must provide an accurate and real-time estimation of the orientation and lateral position of the vehicle within its lane. Since the road is defined by white lane-markings, a camera is used as a perception tool. The control task requires computing, in real time, the wheel angle in such a way that the vehicle stays at the center of the lane. A key problem is to decide about the choice of the parameters transmitted between the control and perception modules. This raises the question of designing an approach which integrates both control and perception aspects. A popular technique in control theory is the well-known Kalman filtering. Kalman theory is very powerful and convenient, but it is based on the assumption that the state and the measures are Gaussian random variables. Most of the time, outputs of vision processes are far from the Gaussian assumption. This has been shown in several vision problems, for instance [3][4][2]. This leads us to consider robust Kalman theory when measures are not Gaussian, but corrupted by outliers. Various algorithms [5][6][7] were proposed to tackle the problem of robust Kalman filtering. The first algorithm proposed in [6] is difficult to apply in practice. Alternatives described in [5] and [7] outline an approach leading to weighted least squares algorithms. However, these approaches are restricted to a small number of convex functions, while the one we propose here is valid for a large class of not necessarily convex functions. 
Also, contrary to our approach, the estimation step of the algorithm in [5][7] is not iterative. We propose here an overview of the problem based on Lagrange multipliers for deriving the equations of the robust Kalman filtering leading to a iterative reweighted least squares algorithm. To our knowledge, in the existing derivations, the explanation of why the robust Kalman filtering is not exact is rarely discussed. The main advantage of this derivation, which is equivalent to the half-quadratic approach [3][4], is to allow us to see two levels of approximations. One consists in assuming a Gaussian summary of the past and the other concerns the covariance matrix of the estimated state at every time step. Different possible approximate covariance matrices are proposed and experimentally compared. The paper is organized as follows. First, we describe the system inboard the vehicle, and show that the features we are extracting from every image are not Gaussian. Second, for the sake of clarity, we gradually review least squares, recursive least squares, and Kalman filtering theory, and finally derive the robust Kalman filtering. Finally, we show the advantages of the designed robust Kalman filtering for the estimation of lane-markings position on perturbed road images and provide a comparison between the different approximate covariance matrices. 2 Image Feature Extraction ![Fig. 1. Side camera system.](image_url) We have developed a system for measuring the lateral position and orientation of a vehicle using a vertical camera on its side. Due to the camera specifications and position, the accuracy of this system should be about 2 cm in position. Fig. 1 shows a first version of the system. A second version, where the camera is inside the left side mirror, is in progress. The image plane is parallel to the road surface, and the camera is mechanically aligned with the vehicle axis. This geometry reduces the calibration of the system to very simple manipulations. ![Images](image1.png) **Fig. 2.** Typical image without perturbation (a), and perturbations due to another markings, shadows, lighting conditions (b) (c) (d). Solid lines are the fitted lane-markings centers assuming Gaussian noise. Fig. 2(a) displays a typical example of images observed by the camera. The seen lane-marking is very close to a straight line, even in curves. Images (b), (c) and (d) are examples of perturbations due to other markings and lighting conditions. The image processing consists in first, extracting features in each newly grabbed image and second, in robustly fitting a line (or another kind of curve, as described in the next section). The first step is required for real time processing. The set of extracted features must provide a summary of the image content relevant to the application. On every line of an image, a lane-marking is approximatively seen as a white hat function on the intensity profile. Lane-marking centers, on every image line, are chosen as the extracted features. Following the approach in [8], we want to reduce as much as possible the effect of low image contrast on the extracted features. Consequently, we have to design a detector which is relatively invariant to contrast changes. When the threshold on the intensity is reduced, features in images are numerous, and a criterion for selecting these becomes mandatory. We believe that selection based on geometrical considerations is a better alternative than selection based on intensity contrast. 
Since the system is calibrated, the feature extraction is performed on the width of lane-markings which is assumed to range between 8 and 23 cm. ![Graphs](graph1.png) **Fig. 3.** (a) Distribution of errors, (b) negative of its logarithm as a function of noise b. The obtained set of points is used by the line fitting. The question arises about the probability distribution function (pdf) of the extracted points around the true line. Most of the time, this pdf is assumed to be Gaussian. In Fig. 3(a), the measured pdf from a sequence of more than 100 real images is displayed. The pdf is not Gaussian, since Fig. 3(b) does not look like a parabola. Indeed, deeper investigations have shown that the curve in Fig. 3(b) can be very well approximated by \( \phi(b^2) = \sqrt{1 + \frac{b^2}{\sigma^2}} - c \) with \( \sigma = 5 \), in a range of \([-20, 20]\) pixels around the minimum. For a good approximation on a larger range, a linear combination of the same kinds of functions with different values of \( \sigma \) seem to be needed. 3 Robust Estimation Framework We consider that the lane-marking centers, extracted as described in the previous section, are noisy measurements of an underlying curve explicitly described as a function of one of its image coordinates: \[ y = \sum_{i=0}^{d} f_i(x) a_i = X(x)^t A\tag{1} \] where \((x, y)\) are the image coordinates of a point on the curve, \(A = (a_i)_{0 \leq i \leq d}\) is the coefficient vector of the curve parameters, and \(X(x) = (f_i(x))_{0 \leq i \leq d}\) is a vector of basis functions of the image coordinate \(x\). In the context of our application, the basis functions are chosen as \(f_i(x) = x^i\). The underlying curve is therefore a polynomial of degree \(d\) (i.e., a line when \(d = 1\), a parabola when \(d = 2\)). Other bases may be used with their corresponding advantages or disadvantages. In our model, the vertical coordinate is chosen as the \(x\) and assumed non-random. Thus only the other coordinate of the extracted point, \(y\), is considered as a noisy measurement, i.e., \(y = F(x)^t A + b\). In all that follows, the measurement noise \(b\) is assumed independent and identically distributed (iid), and centered. For an intuitive understanding, we make a gradual presentation of the robust Kalman framework. Non-recursive least squares fitting is first recalled. Then, robust estimators are presented based on Lagrange multipliers approach and approximate inverse covariance matrices are proposed. In the fourth subsection, we introduce recursive and robust least squares (recursive least squares is a simple case of Kalman filter, using a constant state model). Finally, the robust Kalman filter is described. 3.1 Least Squares Fitting First, we remember the very simple situation where only one image is observed and where the noise \(b\) is Gaussian. The goal is to estimate the curve parameters \(A_{LS}\) on the whole \(n\) extracted points \((x_i, y_i), i = 1, ..., n\). This issue is also known as a regression problem. Let \(A\) denote the underlying curve parameters we want to approximate with \(A_{LS}\). Let \(\sigma\) be the standard deviation of the Gaussian noise b. The probability of a measurement point \((x_i, y_i)\), given the curve parameters \(A\), is: \[ p_i((x_i, y_i)/A) = \frac{1}{\sqrt{2\pi\sigma}} e^{-\frac{1}{2}(\frac{y_i - X(x_i)}{\sigma})^2} \] For simpler equations, from now, we denote \(X_i = X(x_i)\). 
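As a small illustration of the explicit curve model (1), the following sketch (ours, in Python with NumPy; the helper names are not from the paper) builds the monomial basis vector $X(x)$ and evaluates $y = X(x)^t A$ for a given coefficient vector.

```python
import numpy as np

def basis_vector(x, d):
    """X(x) = (f_i(x))_{0<=i<=d} with the monomial basis f_i(x) = x**i, as in (1)."""
    return np.array([float(x)**i for i in range(d + 1)])

def curve_y(x, A):
    """Evaluate y = X(x)^t A for a polynomial of degree d = len(A) - 1."""
    return basis_vector(x, len(A) - 1) @ np.asarray(A, dtype=float)

# Example: a line (d = 1) with intercept a_0 = 100 and slope a_1 = 1.
print(curve_y(50.0, [100.0, 1.0]))   # -> 150.0
```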
We can write the probability of the whole set of points as the product of the individual probabilities: \[ p \propto \prod_{i=1}^{i=n} e^{-\frac{1}{2}\left(\frac{X_i^t A - y_i}{\sigma}\right)^2} \] (2) where \(p\) is the so-called likelihood of the point data set, given curve parameter \(A\). \(\propto\) denotes equality up to a factor. Maximizing the likelihood \(p\) with respect to \(A\) is equivalent to minimizing the negative of its logarithm, namely: \[ e_{LS}(A) = \frac{1}{2\sigma^2} \sum_{i=1}^{i=n} (X_i^t A - y_i)^2 \] It is the so-called least squares error. Since the fitting error is quadratic and positive, the minimization of \(e_{LS}\) is equivalent to canceling the vector of its first derivative with respect to \(A\). It gives the well-known normal equations: \[ XX^t A = XY \] (3) where \(Y = (y_i)_{1 \leq i \leq n}\) is the vector of \(y\) coordinates, the matrix \(X = (X_i)_{1 \leq i \leq n}\) is the design matrix, and \(S = XX^t\) is the scatter matrix, which is always symmetric and positive. If \(S\) is definite, (3) has the unique solution \(A_{LS} = S^{-1} XY\). Computing the best fit \(A_{LS}\) simply requires solving the linear system (3). As seen before, it is also the Maximum Likelihood Estimate (MLE).

Since only \(Y\) is random, the expectation of \(A_{LS}\) is \(\overline{A_{LS}} = S^{-1} X\overline{Y}\). The point coordinates in \(\overline{Y}\) correspond to points exactly on the underlying curve, thus \(A = S^{-1} X\overline{Y}\). Therefore, \(\overline{A_{LS}}\) equals \(A\), i.e. the estimator \(A_{LS}\) of \(A\) is unbiased. The covariance matrix \(C_{LS}\) of \(A_{LS}\) is \(\overline{(A_{LS} - \overline{A_{LS}})(A_{LS} - \overline{A_{LS}})^t} = S^{-1} X\,\overline{(Y - \overline{Y})(Y - \overline{Y})^t}\,X^t S^{-1}\). We have \(\overline{(Y - \overline{Y})(Y - \overline{Y})^t} = \sigma^2 I_n\), since the noise \(b\) is iid with variance \(\sigma^2\). \(I_n\) denotes the identity matrix of size \(n \times n\). Finally, the inverse covariance matrix of \(A_{LS}\) is deduced: \[ C_{LS}^{-1} = \frac{1}{\sigma^2} S = Q_{LS} \] (4) \(Q_{LS}\) is also known as Fisher's information matrix for the set of \(n\) data points. \(Q_{LS}\) is defined as the expectation of the second derivative of \(e_{LS}\) with respect to \(A\). Finally, since \(e_{LS}\) is minimal in \(A_{LS}\) with second derivative matrix \(Q_{LS}\), (2) can be rewritten as: \[ p \propto e^{-\frac{1}{2}(A - A_{LS})^t Q_{LS}(A - A_{LS})} \] (5) As clearly shown in Fig. 2, least squares fitting does not provide correctly fitted curves in the presence of image perturbations.

3.2 Robust Fitting

We still assume that only one image is observed, and that measurement noises are iid and centered. But now, the noise is not assumed Gaussian, but is assumed to have heavier tails. This heavier-tailed noise is specified by a function \( \phi(t) \) in such a way that the probability of measurement point \( (x_i, y_i) \), given curve parameter \( A \), is: \[ p_i((x_i, y_i)/A) \propto e^{-\frac{1}{2} \phi\left(\left(\frac{X_i^t A - y_i}{\sigma}\right)^2\right)} \] Similarly to the half-quadratic approach \([3][4]\), \( \phi(t) \) is assumed:
- \( \textbf{H0}: \) defined and continuous on \([0, +\infty)\), as are its first and second derivatives,
- \( \textbf{H1}: \) \( \phi'(t) > 0 \) (thus \( \phi \) is increasing),
- \( \textbf{H2}: \) \( \phi''(t) < 0 \) (thus \( \phi \) is concave).

These three assumptions are very different from the ones used in the M-estimator approach for the convergence proof. Indeed, in \([9]\), the convergence proof requires that \( \rho(b) = \phi(b^2) \) is convex.
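Before moving to the robust case, the least squares fit of Sec. 3.1 can be written in a few lines: form the scatter matrix $S = XX^t$, solve the normal equations (3), and return the Fisher information matrix $Q_{LS}$ of (4). This is our own illustrative sketch, assuming NumPy; the names mirror the notation above.

```python
import numpy as np

def design_matrix(xs, d):
    """Matrix X whose columns are the basis vectors X_i = (x_i**0, ..., x_i**d)."""
    return np.vstack([[x**i for i in range(d + 1)] for x in xs]).T

def ls_fit(xs, ys, d, sigma):
    """Least squares fit via the normal equations (3); also returns Q_LS of (4)."""
    X = design_matrix(xs, d)
    S = X @ X.T                                        # scatter matrix
    A_ls = np.linalg.solve(S, X @ np.asarray(ys, dtype=float))
    Q_ls = S / sigma**2                                # Fisher information matrix
    return A_ls, Q_ls

# Example: noisy points along y = 100 + x.
xs = np.arange(101.0)
ys = 100.0 + xs + np.random.normal(scale=1.0, size=xs.size)
A_ls, Q_ls = ls_fit(xs, ys, d=1, sigma=1.0)
```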
In our case, the concavity and monotony of \( \phi(t) \) implies that \( \phi'(t) \) is bounded, but \( \phi(b^2) \) is not necessarily convex with respect to \( b \). Note that, the pdf of Sec. 2, observed in practice on real data, verifies these three assumptions. Following \([9]\), the role of this \( \phi \) function is to saturate the error in case of an important measurement noise \( |h_i| = |X_i^TA - y_i| \), and thus to lower the importance of outliers. The scale parameter, \( \sigma \), sets the distance from which a measurement noise has a good chance to be considered as outliers. Notice that with certain \( \phi \), the associated pdf cannot be integrated on its support. Without difficulties, a bounded support with fixed bounds can be introduced to maintain the statistical interpretation of the fitting. Following the same MLE approach than for least squares, the problem is set as the minimization with respect to \( A \) of the robust error: \[ e_R(A) = \frac{1}{2} \sum_{i=1}^{n} \phi\left(\frac{(X_i^TA - y_i)^2}{\sigma^2}\right) \] Notice that the Gaussian case corresponds to the particular case in which \( \phi(t) = t \), but this last function does not strictly agree with assumption (H2). \( e_{LS}(A) \) is indeed a limit case of \( e_R(A) \). Contrary to the Gaussian case, the previous minimization is in general not quadratic. This last minimization can be done iteratively using the Gradient or Steepest Descent algorithms. But, since \( \phi(b^2) \) and thus \( e_R(A) \) are not necessarily convex, these algorithms can be relatively slow when the gradient slope is near zero. Indeed, the speed of convergence is only linear, when quasi-Newton algorithms achieve a quadratic speed of convergence. But generally, with quasi-Newton algorithms, the convergence to a local minimum is not sure. Therefore, we prove next that the used quasi-Newton algorithm always converges towards a local minimum. A global minimum can be obtained using simulated annealing, despite an expensive computational cost \([3]\). We now explain how this \( e_R \) can be solved iteratively, using the well known quasi-Newton algorithm named iterative reweighted least squares. The same algorithm is also a particular case obtained with the half-quadratic approach [4]. First, we rewrite \( e_R(A) \) as the search for a saddle point of the associated Lagrange function. Then, the algorithm is obtained as a alternated minimization of the dual function. First, we rewrite the minimization of \( e_R(A) \) as the maximization of \(-e_R\). This will allow us to later write \(-e_R(A)\) as the extremum of a convex function rather than a concave one, since the negative of a concave function is convex. Second, we introduce the auxiliary variables \( w_i = (\frac{X_i^TA-y_i}{\sigma})^2 \). These variables are needed to rewrite \(-e_R(A)\) as the value achieved at the minimum of a constrained problem. This apparent complication is in fact precious since it allows us to introduce the Lagrange multipliers. Indeed using (H1), \(-e_R(A)\) can be seen as the minimization with respect to \( W = (w_i)_{1 \leq i \leq n} \) of: \[ E(A, W) = \frac{1}{2} \sum_{i=1}^{i=n} -\phi(w_i) \] subject to \( n \) constraints \( h_i(A, W) = w_i - (\frac{X_i^TA-y_i}{\sigma})^2 \leq 0 \). For any \( A \), we now focus on the minimization of \( E(A, W) \) with respect to \( W \) only subject to the \( n \) constraints \( h_i(A, W) \leq 0 \), with respect to \( W \) only. 
This problem is well-posed because it is a minimization of a convex function subject to convex constraints. Therefore, using the classical Kuhn and Tucker theorem [10], if a solution exists, the minimization of \( E(A, W) \) with respect to \( W \) is equivalent to the search for the unique saddle point of the Lagrange function of the problem: \[ L_R(A, W, \lambda_i) = \frac{1}{2} \sum_{i=1}^{i=n} \left[ -\phi(w_i) + \lambda_i \left( w_i - \left(\frac{X_i^t A - y_i}{\sigma}\right)^2 \right) \right] \] where \( \lambda_i \) are Kuhn and Tucker multipliers (\( \lambda_i \geq 0 \)). More formally, we have proved, for any \( A \): \[ -e_R(A) = \min_{w_i} \max_{\lambda_i} L_R(A, W, \lambda_i) \tag{6} \] Notice that the Lagrange function \( L_R \) is now quadratic with respect to \( A \), contrary to the original error \( e_R \). Using the saddle point property, we can change the order of the variables \( w_i \) and \( \lambda_i \) in (6). \( L_R(A, W, \lambda_i) \) being convex with respect to \( W \), searching for its minimum with respect to \( W \) is equivalent to setting its first derivative to zero. Thus, we deduce: \[ \lambda_i = \phi'(w_i) \tag{7} \] This last equation can be used with (H2) to substitute \( w_i \) in \( L_R \) and then to deduce that the original problem is equivalent to the following minimization: \[ \min_A e_R(A) = \min_{A, \lambda_i} -L_R(A, \phi'^{-1}(\lambda_i), \lambda_i) \] \( \mathcal{E}(A, \lambda_i) = -L_R(A, \phi'^{-1}(\lambda_i), \lambda_i) \) is the dual function. The dual function is convex with respect to \( A \). \( \mathcal{E} \) is also convex with respect to each \( \lambda_i \) (indeed, \( \frac{\partial^2 \mathcal{E}}{\partial \lambda_i^2} = -\frac{1}{2\phi''(\phi'^{-1}(\lambda_i))} > 0 \) by (H2)). However, since \( \phi(b^2) \) is not necessarily convex, \( \mathcal{E} \) is not necessarily convex jointly with respect to \( A \) and the \( \lambda_i \). Therefore, \( \mathcal{E}(A, \lambda_i) \) does not have a unique minimum. An alternated minimization of the dual function leads to the classical robust algorithm, used in the half-quadratic and M-estimator approaches:

1. Initialize \( A_0 \), and set \( j = 1 \).
2. For all indexes \( i \) (\( 1 \leq i \leq n \)), compute the auxiliary variable \( w_{i,j} = \left(\frac{X_i^t A_{j-1} - y_i}{\sigma}\right)^2 \).
3. Solve the linear system \( \sum_{i=1}^{n} \phi'(w_{i,j})X_iX_i^t A_j = \sum_{i=1}^{n} \phi'(w_{i,j})X_iy_i \).
4. If \( \|A_j - A_{j-1}\| > \epsilon \), increment \( j \), and go to 2, else \( A_{R} = A_j \).

The convergence test can also be performed on the error variation. A test on a maximum number of iterations can be added too. It can be shown that the previous algorithm always strictly decreases the dual function if the current point is not a stationary point (i.e., a point where the first derivatives are all zero) of the dual function [11]. Using the previous Lagrange function, this proves that the previous algorithm is globally convergent, i.e., it converges towards a local minimum of \( e_R(A) \) for all initial \( A_0 \)'s which are not a maximum of \( e_R(A) \). As a quasi-Newton algorithm, it can also be proved that the speed of convergence of the algorithm around a local minimum is quadratic, when \( S \) is definite.

**Fig. 4.** Fitting on a real image assuming (a) Gauss, (b) quasi-Laplace, (c) Cauchy, and (d) Geman & McClure distributed noise. Thin black lines are the initial \( A_0 \)'s. Thick ones are the fitting results. See Sec. 4 for a definition of the pdfs.

Finally, Fig.
4 illustrates the importance of robust fitting in images with many outliers. The thin black lines depict the initial \( A_0 \)'s. The thicker ones are the fitting results \( A_R \), assuming (a) Gauss, (b) quasi-Laplace, (c) Cauchy, and (d) Geman & McClure distributed noise. A correct fitting is achieved only with the last two pdfs, which are not convex.

### 3.3 Covariance matrix in Robust Fitting

The covariance matrix \( C_R \) of the estimate \( A_R \) is required for a correct management of uncertainties in a recursive process. Contrary to the least squares case, where the covariance matrix was easy to compute using its definition, the estimation of \( C_R \) as the expectation of \( (A_R - \overline{A_R})(A_R - \overline{A_R})^t \) is difficult in the robust framework, due to the non-linearities. An alternative is to use an approximation. Similarly to [9], p. 173-175, an approximation based on extending (4) is proposed. The inverse covariance matrix is approximated by the second derivative of \( e_R \) at the achieved minimum: \[ C_{R,Huber}^{-1} = \sum_{i=1}^{i=n} (2w_i \phi''(w_i) + \phi'(w_i))X_iX_i^t \tag{8} \] where \( w_i \) is computed once the minimum of \( e_R \) is achieved. The value \( 2w\phi''(w) + \phi'(w) \), which is proportional to the second derivative of \( \phi\left(\left(\frac{b}{\sigma}\right)^2\right) \) with respect to \( b \) (with \( w = (b/\sigma)^2 \)), is not always positive, since \( \phi\left(\left(\frac{b}{\sigma}\right)^2\right) \) is not necessarily convex with respect to \( b \). Nevertheless, the second derivative of \( e_R \) with respect to \( A \) at \( A_R \) is a positive matrix, since \( e_R \) achieves a minimum there. This property is a necessary condition for the matrix to be interpreted as a covariance matrix.

In [5][7], another approximation is implicitly used in the context of approximate robust Kalman filtering. The proposed approximate inverse covariance matrix can be seen as the second derivative of \( -L_R \) with respect to \( A \), at the achieved saddle point: \[ C_{R,Cipra}^{-1} = \sum_{i=1}^{i=n} \lambda_i X_iX_i^t \tag{9} \] where \( \lambda_i \) is computed when the minimum of \( e_R \) is achieved. However, on p. 175 of [9], Huber warns us against the use of this matrix (9).

Another approximation can be obtained if we forget that \( \lambda_i \) is a random variable. Let us rewrite the last equation of the robust algorithm as: \[ XRX^tA = XRY \tag{10} \] where \( R \) is an \( n \times n \) diagonal matrix with the values \( \lambda_i \), \( 1 \leq i \leq n \), on its diagonal. Using these notations, the covariance matrix \( C_{R,new1} \) is \( \overline{(A_R - \overline{A_R})(A_R - \overline{A_R})^t} \) and equals \( (XRX^t)^{-1}XR\,\overline{(Y - \overline{Y})(Y - \overline{Y})^t}\,R^t X^t (XRX^t)^{-1} \). Recalling from Sec. 3.1 that \( \overline{(Y - \overline{Y})(Y - \overline{Y})^t} = \sigma^2 I_n \), we deduce: \[ C_{R,new1}^{-1} = \frac{1}{\sigma^2} (XRX^t)(XR^2X^t)^{-1}(XRX^t) \tag{11} \] We also propose, without justification, another approximation: \[ C_{R,new2}^{-1} = \frac{1}{\sigma^2} \sum_{i=1}^{i=n} \lambda_i^2 X_iX_i^t \tag{12} \] Now, the question is "what is the best choice for an approximate inverse covariance matrix?" A theoretical study is out of the scope of this paper, but we provide an experimental comparison in Sec. 4.

### 3.4 Recursive Fitting

We now consider the problem of sequentially processing images. The steady-state situation consists in supposing that we observe, at every time step \( t \), the same underlying curve. Suppose that images are indexed by \( t \) and that for each image \( t \), we have to fit its \( n_t \) data points \((x_{i,t}, y_{i,t})\), \( i = 1, ..., n_t \).
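Before detailing the recursive scheme, the following sketch (ours, in Python with NumPy) summarises the single-image robust machinery developed so far: the iterative reweighted least squares fit of Sec. 3.2, followed by the four approximate inverse covariance matrices (8), (9), (11) and (12) of Sec. 3.3. The Cauchy potential is used only as an example, and the expression coded for (11) follows the form derived above.

```python
import numpy as np

def robust_fit_with_covariance(xs, ys, d, sigma, A0,
                               phi_p=lambda t: 1.0 / (1.0 + t),     # phi'(t), Cauchy case
                               phi_pp=lambda t: -1.0 / (1.0 + t)**2, # phi''(t), Cauchy case
                               eps=1e-6, max_iter=100):
    """IRLS fit of Sec. 3.2, then the approximate inverse covariances of Sec. 3.3."""
    X = np.vstack([[x**i for i in range(d + 1)] for x in xs]).T   # columns are X_i
    y = np.asarray(ys, dtype=float)
    A = np.asarray(A0, dtype=float)
    for _ in range(max_iter):                        # steps 1-4 of the robust algorithm
        w = ((X.T @ A - y) / sigma) ** 2             # auxiliary variables w_i
        lam = phi_p(w)                               # weights lambda_i = phi'(w_i)
        A_new = np.linalg.solve((X * lam) @ X.T, (X * lam) @ y)
        if np.linalg.norm(A_new - A) <= eps:
            A = A_new
            break
        A = A_new
    w = ((X.T @ A - y) / sigma) ** 2                 # quantities at the minimum
    lam = phi_p(w)
    Q_huber = (X * (2.0 * w * phi_pp(w) + lam)) @ X.T            # (8)
    Q_cipra = (X * lam) @ X.T                                    # (9)
    XRX, XR2X = (X * lam) @ X.T, (X * lam**2) @ X.T
    Q_new1 = (XRX @ np.linalg.solve(XR2X, XRX)) / sigma**2       # (11)
    Q_new2 = ((X * lam**2) @ X.T) / sigma**2                     # (12)
    return A, Q_huber, Q_cipra, Q_new1, Q_new2
```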
Of course, assuming that every point in every image is iid and centered, it is clear that we could directly apply what is explained in the two previous sections, on the whole data set. However, it is better to take advantage of the sequential arrival of images and deploy a recursive algorithm, in particular for saving memory space and number of computations, especially in the context of real time processing. **Recursive Least Square Fitting:** When least squares error is used, recursive algorithms are based on an exhaustive summary of the data points, observed before \( t \). Indeed, the error of the data points from time 1 to \( t \) is: \[ e_{r,LS,t}(A) = \frac{1}{2\sigma^2} \sum_{k=1}^{k=t} \sum_{i=1}^{i=n} (F(x_i,k)^t A - y_i,k)^2 \] This sum can be rewritten as the sum of the error at time \( t \) alone and of the error from time 1 to \( t-1 \): \[ e_{r,LS,t}(A) = \frac{1}{2} (A - A_{r,LS,t-1})^t Q_{r,LS,t-1} (A - A_{r,LS,t-1}) + \frac{1}{2\sigma^2} \sum_{i=1}^{i=n} (X_i^t A - y_i,t)^2 \] (13) Using (5), the summary of the past error consists in the previously fitted solution \( A_{r,LS,t-1} \) and its Fisher's matrix \( Q_{r,LS,t-1} \). By comparing \( e_{r,LS,t} \) with \( e_{LS} \), the exhaustive summary by \( A_{r,LS,t-1} \) and \( Q_{r,LS,t-1} \) can be interpreted as a Gaussian prior on \( A \) at time \( t \). The error \( e_{r,LS,t} \) is quadratic and using (5) its second order matrix is \( Q_{r,LS,t} \). Taking second derivative of (13), we deduce: \[ Q_{r,LS,t} = Q_{r,LS,t-1} + \frac{1}{\sigma^2} S_t \] (14) where \( S_t = \sum_{i=1}^{i=n} X_i,t X_i,t^t \). The recursive update of the fit is obtained by solving the following linear system obtained by canceling the first derivative of \( e_{r,LS,t} \) with respect to \( A \): \[ Q_{r,LS,t} A_{r,LS,t} = Q_{r,LS,t-1} A_{r,LS,t-1} + \frac{1}{\sigma^2} T_t \] (15) with \( T_t = \sum_{i=1}^{i=n} y_i,t X_i,t \). As a consequence, the recursive fitting algorithm consists of the following steps: 1. Initialize the recursive fitting by setting \( Q_{r,LS,0} \) and \( A_{r,LS,0} \) to zero, and set \( t=1 \). 2. For the data set associated to step \( t \), compute the matrix \( S_t = \sum_{i=1}^{i=n} X_i,t X_i,t^t \) and the vector \( T_t = \sum_{i=1}^{i=n} y_i,t X_i,t \) only related to the current data set. 3. Update the Fisher's matrix \( Q_{r,LS,t} \) using (14). 4. Compute the current fit \( A_{r,LS,t} \) by solving the linear system (15). 5. If a new dataset is available, increment \( t \) and go to 2. The solution, obtained by this recursive algorithm at step \( t \), is the same that the one obtained by standard least squares using all points of time steps from 1 to \( t \). It is the so-called recursive (or sequential) least squares algorithm (subscript \( rLS \)). Note that no matrix inverse is explicitly needed. Only one linear system is solved at every time step \( t \). This can be crucial in real time applications, since the complexity for solving the linear system is \( O(d^2) \), when it is \( O(d^3) \) for a matrix inverse. Note that (14) gives the recursive update of the Fisher’s matrix $Q_{r, LS,t}$ as a function of the previous Fisher’s matrix $Q_{r, LS,t-1}$ and of the current scatter matrix $S_t$. A better initialization of $Q_{r, LS,0}$ than 0 consists in $\beta$ times the identity matrix, where $\beta$ has a positive value close to zero. This initialization insures that the solution of (15) is unique. Indeed, $Q_{r, LS,t}$ is definite for any $t$, even if $S_t$ in (14) is not. 
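A minimal sketch of the recursive least squares update (14)-(15), again our own illustration in NumPy: at each time step only $S_t$ and $T_t$ are accumulated and a single linear system is solved. The $\beta$ initialization mentioned above is included as a small Ridge-like prior.

```python
import numpy as np

def rls_init(d, beta=1e-6):
    """Initialization: Q_{rLS,0} = beta * I (small positive prior), A_{rLS,0} = 0."""
    return beta * np.eye(d + 1), np.zeros(d + 1)

def rls_update(Q_prev, A_prev, X_t, y_t, sigma):
    """One recursive least squares step; X_t has the X_{i,t} as its columns."""
    S_t = X_t @ X_t.T                            # scatter matrix of image t
    T_t = X_t @ np.asarray(y_t, dtype=float)     # sum_i y_{i,t} X_{i,t}
    Q_t = Q_prev + S_t / sigma**2                # Fisher matrix update (14)
    A_t = np.linalg.solve(Q_t, Q_prev @ A_prev + T_t / sigma**2)  # fit update (15)
    return Q_t, A_t
```

Each call returns the same fit as a batch least squares over all images seen so far, while only one linear system is solved per time step.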
This is equivalent to the Ridge Regression regularization [12]. More generally, $Q_{r, LS,0}$ is the inverse covariance matrix on the Gaussian prior on the curve parameters $A$, leading to a Maximum A Posteriori (MAP) estimate. **Recursive Robust Fitting:** Is it possible to generalize this recursive scheme in the robust case? In general, an exact answer is negative: it is not possible to rewrite (13) excepted for a very narrow class of function $\phi$, that do not satisfy our assumptions (see sufficient statistics in [13]). Moreover, to obtain a solution without approximation, the computation of the weights $\lambda_i$ would require storing all past observed points in memory up to the current time step. For real time application, this is a problem, since it means that the number of computations will increase with time $t$. Clearly, a second level of approximation is needed - remember that the first one consists in the approximate computation of the inverse covariance matrix as described in Sec. 3.3. It is usual to consider $A_{r, R, t-1}$ as Gaussian with a covariance matrix $C_{r, R, t-1} = Q_{r, R, t-1}^{-1}$, while it is a seldom pointed out that it is an approximation. The summary by $A_{r, R, t-1}$ and $Q_{r, R, t-1}$ is not exhaustive, but can still be included as a prior during the robust fitting at every time step: $$e_{r, R, t}(A) = \frac{1}{2} \sum_{i=1}^{i=n_t} \phi\left(\frac{X_{i,t}^t A - y_{i,t}}{\sigma}\right)^2 + \frac{1}{2} (A - A_{r, R, t-1})^t Q_{r, R, t-1} (A - A_{r, R, t-1})$$ Thus, $e_{r, R, t}$ can be minimize by following the same approach than in Sec. 3.2. If Huber’s (8) approximate is used, the new approximate inverse covariance matrix at time step $t$ is: $$Q_{r, R, t, Huber} = \sum_{i=1}^{i=n_t} (2w_{i,t}^t \phi''(w_{i,t}) + \phi'(w_{i,t})) X_{i,t}^t X_{i,t}^t + Q_{r, R, t-1, Huber}$$ (16) where $w_{i,t} = w_{i, j, t}$ with $j$ the last iteration when the minimum of $e_{r, R}$ is reached. Similarly, other approximations can be derived using Cipra’s (9) and our approximations (11) and (12). Finally, the recursive and robust algorithm consists of the following steps: 1. Initialize $Q_{r, R, 0}$ and $A_{r, R, 0}$ to zero or to a prior, and set $t=1$, 2. Initialize $A_{0,t} = A_{r, R, t-1}$, and set $j = 1$, 3. For all indexes $i$ ($1 \leq i \leq n_t$), compute the auxiliary variable $w_{i, j, t} = (X_{i,t}^t A_{j-1,t-1} - y_{i,t})^2$, 4. Solve the linear system \[ \sum_{i=1}^{i=n_t} \phi'(w_{i,j,t}) X_{i,t} X_{i,t}^T + Q_{t,R,t-1} A_{j,t} = \sum_{i=1}^{i=n_t} \phi'(w_{i}) X_{i,t} Y_{i} + Q_{t,R,t-1} A_{t,R,t-1} \] 5. If \(\|A_{j,t} - A_{j-1,t}\| > \epsilon\), increment \(j\), and go to 3, else continue. 6. \(A_{t,R,t} = A_{j,t}\) and its approximate inverse covariance matrix \(Q_{t,R,t}\) is given by (16) or similar. If a new dataset is available, increment \(t\), and go to 2. In the recursive context, it is clear that a better estimate of the covariance matrix leads to better recursive estimators. In particular, if the covariance matrix is under-estimated with respect to the true covariance matrix, information about the past will be gradually lost. On the contrary, if the covariance matrix is over-estimated, the impact of the most recent data is always diminished. ### 3.5 Robust Kalman Kalman filtering is a stochastic, recursive estimator, which estimates the state of a system based on the knowledge of the system input, the measurement of the system output, and a model of the link between input and output. 
We can identify state \(A_t\) at time \(t\) with \(A_{t,LS,t}\) or \(A_{t,R,t}\), depending of the measurement noise pdf. As in Sec. 3.1, we introduce \(Y_t = (y_{i,t})_{1 \leq i \leq n_t}\), which is the so-called measurement vector, and \(X_t = (X_{i,t})_{1 \leq i \leq n_t}\), the measurement matrix. The link between measurements and state can thus be written as \(Y_t = X_t A_t + B\) where \(B\) is a vector of iid, centered measurement noises. This equation is the so-called measurement equation. Compared to the recursive least squares, discrete Kalman filtering consists in assuming linear dynamics for the state model. More precisely, we assume \(A_t = U_t A_{t-1} + V_t + u\) where \(u\) is a centered iid Gaussian model noise. This last equation is the so-called model, or state-transition, equation. As a summary, the Kalman model is: \[ \begin{cases} A_t = U_t A_{t-1} + V_t + u \\ Y_t = X_t A_t + v \end{cases} \] (17) When \(v\) is Gaussian, (17) models the classical Kalman (subscript \(K\)). When \(v\) is non Gaussian, (17) models the robust to non-Gaussian measurement Kalman, or robust Kalman for short (subscript \(RK\)). The steady-state case we dealt with in the previous section, is a particular case of (17), where the first equation is deterministic and reduced to \(A_{t+1} = A_t\). In the dynamic case with \(v\) Gaussian, the prior on \(A\) is not \(A_{t-1}\) but the prediction \(A_{K,t} = U_t A_{K,t-1} + V_t\), given by the model equation. Using the model equation, the covariance matrix of the prediction \(A_{K,t}\) is derived as \(\hat{C}_{K,t} = U_t C_{K,t-1} U_t^T + \Sigma\), where \(\Sigma\) is the covariance matrix of the Gaussian model noise \(u\). Thus the inverse covariance matrix of the prediction \(Q_{K,t} = C_{K,t}^{-1}\) using the matrix lemma, is: \[ Q_{K,t} = \Sigma^{-1} - \Sigma^{-1} U_t (U_t^T \Sigma^{-1} U_t + Q_{K,t-1})^{-1} U_t^T \Sigma^{-1} \] (18) This last equation is interesting in the context of real time applications, since it involves only one matrix inverse at every time $t$. As in (13), the prediction is used as a Gaussian prior on $A$. The associated error, to be compared with (13), is now: $$e_{K,t}(A) = \frac{1}{2} \sum_{i=1}^{\infty} \frac{1}{\sigma^2}(X_i^t A - y_i,t)^2 + \frac{1}{2} (A - \hat{A}_{K,t})^T \hat{Q}_{K,t} (A - \hat{A}_{K,t})$$ The recursive equations of the Kalman filtering are obtained by derivations from $e_{K,t}$. When $\hat{Q}_{K,t}$ is computed, only one linear system has to be solved at every $t$. How does this method extend to the robust case? As before with recursive least squares, generally, an exact solution of the robust Kalman is not achievable. The two levels of approximations must be performed. Like in Sec. 3.4, we assume that $\hat{A}_{RK,t-1}$ is approximatively Gaussian, and its inverse covariance matrix is given by one of the approximations of Sec. 3.3. As a consequence, the associated error is: $$e_{RK,t}(A) = \frac{1}{2} \sum_{i=1}^{\infty} \phi \left( \frac{X_i^t A - y_i,t}{\sigma} \right)^2 + \frac{1}{2} (A - \hat{A}_{RK,t})^T \hat{Q}_{RK,t} (A - \hat{A}_{RK,t})$$ In the robust Kalman, the Huber’s approximation (16), translates as: $$Q_{RK,t,Huber} = \sum_{i=1}^{\infty} (2w_i \phi''(w_i,t) + \phi'(w_i,t)) X_i^t X_i^t + \hat{Q}_{RK,t}$$ Other approximate inverse covariance matrix can be derived using Cipra’s (9) and our approximations (11) and (12). Finally, the robust Kalman algorithm consists of the following steps: 1. Initialize $Q_{RK,0}$ and $A_{RK,0}$ to zero or to a prior, and set $t=1$. 2. 
Compute the predicted solution $\hat{A}_{RK,t} = U_t A_{RK,t-1} + V_t$, and its inverse covariance matrix $\hat{Q}_{RK,t}$ using (18). 3. Initialize $A_{0,t} = \hat{A}_{RK,t}$, and set $j = 1$. 4. For all indexes $i$ ($1 \leq i \leq n_t$), compute the auxiliary variable $w_{i,j,t} = \left( \frac{X_{i,t}^t A_{j-1,t} - y_{i,t}}{\sigma} \right)^2$. 5. Solve the linear system $$\left[ \sum_{i=1}^{i=n_t} \phi'(w_{i,j,t}) X_{i,t} X_{i,t}^t + \hat{Q}_{RK,t} \right] A_{j,t} = \sum_{i=1}^{i=n_t} \phi'(w_{i,j,t}) X_{i,t} y_{i,t} + \hat{Q}_{RK,t} \hat{A}_{RK,t}$$ 6. If $\|A_{j,t} - A_{j-1,t}\| > \epsilon$, increment $j$, and go to 4, else continue. 7. $A_{RK,t} = A_{j,t}$ and its approximate inverse covariance matrix $Q_{RK,t}$ is given by (19) or similar. If a new dataset is available, increment $t$, and go to 2.

Note that in [5][7], one single weighted least squares iteration is performed at each time step. We believe that, at each time step, the iteration of steps 4-6 should instead be run until convergence. Moreover, the weights in [7] are binary. This corresponds to a truncated Gaussian pdf, violating (H0). In such a case, the choice of the scale parameter becomes critical: a small variation of the scale parameter can produce a very different solution.

As a conclusion, the Lagrange multipliers approach (and half-quadratic approach) to robust fitting gives new insight into why robust Kalman filtering provides approximate estimates. Robust Kalman is not exact because: the amount of past data cannot be reduced without loss of information, and the covariance matrix of the predicted state is an approximation. Contrary to [5][7], this formulation also suggests that it is important to iteratively search for the best solution $A_t$ at every time step.

4 Experiments

| $\alpha$ | Name | pdf $\propto e^{-\frac{1}{2} \phi(b^2)}$ | error $= \phi(b^2)$ | weight $= \phi'(t)$ |
|---|---|---|---|---|
| 1 | Gauss | $e^{-\frac{1}{2} b^2}$ | $b^2$ | $1$ |
| 0.5 | quasi-Laplace | $e^{-\frac{1}{2} \sqrt{1+b^2}}$ | $2(\sqrt{1+b^2} - 1)$ | $\frac{1}{\sqrt{1+b^2}}$ |
| 0 | Cauchy | $\frac{1}{1+b^2}$ | $\ln(1+b^2)$ | $\frac{1}{1+b^2}$ |
| -1 | Geman & McClure [3] | $e^{-\frac{1}{2}\frac{b^2}{1+b^2}}$ | $\frac{b^2}{1+b^2}$ | $\frac{1}{(1+b^2)^2}$ |

Table 1. Correspondence between particular values of $\alpha$ and classical $\phi$'s and pdfs proposed in the literature.

We have restricted ourselves, in the choice of $\phi$, to the following one-parameter family of functions: $$\phi_\alpha(t) = \frac{1}{\alpha}\left((1 + t)^\alpha - 1\right)$$ These functions verify the three assumptions (H0), (H1), and (H2) when $\alpha < 1$. This family is very convenient, since it allows us to catch many of the classical $\phi$'s and pdfs proposed in the literature. Tab. 1 illustrates this fact. Notice that the pdf obtained in the experiments of Sec. 2 corresponds to $\alpha = 0.5$. The pdf obtained for $\alpha = 0.5$, also known as the hypersurface function, is a good differentiable approximation of Laplace's pdf. Thus we have preferred to name it the quasi-Laplace function.
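For reference, the family $\phi_\alpha$ and its derivative (the weight function) are straightforward to implement. The following short sketch (ours, in Python with NumPy) reproduces the error and weight columns of Tab. 1 for the four values of $\alpha$ listed there.

```python
import numpy as np

def phi_alpha(t, alpha):
    """phi_alpha(t) = ((1+t)**alpha - 1)/alpha for alpha != 0; ln(1+t) in the limit alpha -> 0."""
    t = np.asarray(t, dtype=float)
    if alpha == 0.0:                       # Cauchy case, limit of the family
        return np.log1p(t)
    return ((1.0 + t) ** alpha - 1.0) / alpha

def phi_alpha_prime(t, alpha):
    """Weight function phi'_alpha(t) = (1+t)**(alpha-1)."""
    return (1.0 + np.asarray(t, dtype=float)) ** (alpha - 1.0)

# alpha = 1: Gauss, 0.5: quasi-Laplace, 0: Cauchy, -1: Geman & McClure.
for alpha in (1.0, 0.5, 0.0, -1.0):
    print(alpha, phi_alpha(4.0, alpha), phi_alpha_prime(4.0, alpha))
```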
| Name | $\sqrt{C_{00}}$ | $\sqrt{C_{11}}$ | $\sqrt{C_{00}}$ rel. Std. | $\sqrt{C_{11}}$ rel. Std. |
|---|---|---|---|---|
| $C_{new1}$ | 0.138 | 0.00474 | 7.8% | 10.6% |
| $C_{Cipra}$ | 0.162 | 0.00555 | 9.8% | 13.2% |
| $C_{Huber}$ | 0.189 | 0.00647 | 10.5% | 14.1% |
| $C_{new2}$ | 0.190 | 0.00653 | 13.1% | 17.6% |
| reference | 0.195 | 0.00688 | | |

Table 2. Comparison between the covariance matrices obtained with the various approximations. The relative standard deviations are also shown.

A simulation was performed using 50000 fits on 101 simulated noisy points along a line with true parameters $a_0 = 100$ and $a_1 = 1$. $a_0$ is the position of the line and $a_1$ is its slope. The noise pdf for each sample is Cauchy $\propto \frac{1}{1+(\frac{x}{\sigma})^2}$ with $\sigma = 1$. The Cauchy noise was simulated by applying the function $\tan(\frac{\pi}{2} v)$ to $v$, a uniform noise on $[-1, 1]$. The variance of the Cauchy pdf is not defined, thus the simulated noise can have very large values (outliers). Robust fits were obtained using the Cauchy-distributed pdf of Sec. 3.2. For every fit, Huber's, Cipra's, and our approximate covariance matrices were computed and averaged. These are denoted $C_{Huber}$, $C_{Cipra}$, $C_{new1}$ and $C_{new2}$, respectively. The square roots of the averaged diagonal matrix components are shown in Tab. 2. The covariance matrix of the 50000 fits is also estimated and is the reference displayed in the last line of Tab. 2 (Monte-Carlo estimates).

![Fig. 5. Typical image without perturbation (a), and perturbations due to other markings, shadows, lighting conditions (b) (c) (d). Solid lines are the fitted lane-markings centers assuming Geman & McClure noise.](image)

Tab. 2 shows that the closest approximation is $C_{new2}$. All these approximations can be ordered in terms of proximity to the reference one, in the following order: $C_{new2}$, $C_{Huber}$, $C_{Cipra}$ and $C_{new1}$. Notice that the closer to the reference a matrix is, the larger its relative variation from one fit to another is. For instance, this variation is 7.8% for $C_{new1}$ when it is 13.1% for $C_{new2}$. Clearly, the choice of the approximation is a trade-off between accuracy and stability. We also notice that all the approximations under-estimate the true matrix. A different weighting of the two right-hand terms in (16) and (19) can be introduced to correct this. Finally, we show in Fig. 5 the same images as in Fig. 2, using the Geman & McClure noise assumption. Unlike with the Gaussian assumption, the obtained fits are correct even in the presence of important perturbations due to other lane-markings and difficult lighting conditions.

5 Conclusion

In this paper, we have reviewed the problem of making Kalman filtering robust to outliers, in a unified framework. The link with Lagrange multipliers yields a revised half-quadratic theory and justifies the use of iterative reweighted least squares algorithms in M-estimator theory even for non-convex $\rho(b) = \phi(b^2)$. Moreover, in contrast to previous works, we do not restrict ourselves to a single potential function: the half-quadratic framework is valid for a large class of functions, involving non-convex and hence more robust ones. We have shown that, as soon as non-Gaussian likelihoods are involved, two levels of approximation are needed.
First, in contrast with the non-robust case, there is no obvious closed-form expression for the covariance matrix. After reviewing classical solutions, we proposed new approximations and experimentally studied their behavior in terms of accuracy and stability. An accurate covariance matrix is very important to tackle the problem of missing data in a long sequence of images, an important subject for future investigations. Second, to design a recursive filter in the robust case, the pdf of previous estimates must be considered as Gaussian. In existing algorithms, only one iteration is performed at each time step to obtain the robust estimate. We believe it is better to let the iterative reweighted least squares algorithm converge. Further exploration beyond that presented in [5] is needed to treat the case where the noise involved in the state-transition equation is non-Gaussian. Here the challenge is to derive an integrated and consistent framework.

References
[REMOVED]
Modelling regional imbalances in English plebeian migration to late eighteenth-century London Between 1650 and 1750 the population of London grew from approximately 400,000 to 675,000 people. According to E.A. Wrigley, to sustain this rate of growth the metropolis needed to attract in excess of 8,000 migrants per year more than it lost through death or out-migration.¹ In the following half-century, this rate of growth and associated migration accelerated. By 1801 the population was approximately 960,000.² And yet we know very little about who made up this host of newcomers. By applying a multi-variable gravity model to a unique set of 11,500 ‘failed’ migrants removed from London as vagrants between 1777 and 1786, this article seeks to both add to our understanding of the origins of a subset of English migrants to London, and to test the balance of motives that led to their decisions to migrate. By measuring the effect of distance, population at origin county, wages and the cost of living, on the behavior of this vagrant subset of male and female migrants it will add new detail to our understanding of the early industrial English labour market, and the life-cycle experience of many of those who chose to leave their home county for opportunities elsewhere in the country.³ This article builds on both a strong tradition of scholarship applying theoretical modelling of historic population movements in general, and an extensive literature on the case of early modern and nineteenth-century London in particular. The work of E.G. Ravenstein (1885-9) and George Kingsley Zipf (1946) form the starting point for all modern discussions of migration, and successfully defined universal rules for understanding migration patterns. Ravenstein’s seminal work on migration theory was based on the 1871 and 1881 British censuses, and proposed that migration tended to be step-wise, that women tended to travel shorter distances than men, and that long-distance migrants tended to end their journey in a great centre of commerce or industry.⁴ Half a century later, George Zipf’s ‘P1 P2D Hypothesis’ added mathematical precision to this approach, and concluded that people travelled only as far as required to find an acceptable economic opportunity, reinforcing the ¹ Wrigley, ‘Simple Model’, p. 46. ² Wrigley, People, Cities and Wealth, Table 7.3, p. 166. ⁴ Ravenstein, ‘Laws of migration’ (1885) and ‘Laws of migration’ (1889). importance of short-distance migration.\textsuperscript{5} These insights have been used, in turn, to define the concept of a ‘migration field’, or the average distance travelled by migrants, as an important measure of the influence a city had on its hinterland. With few caveats, these early insights and approaches have held up remarkably well in the face of more detailed analysis and direct measurement.\textsuperscript{6} In the specific case of eighteenth-century London, John Wareing has suggested that London’s migration field extended to a radius of 130km by the beginning of the century (having declined from a much larger area of 212km in 1486).\textsuperscript{7} Ian Whyte’s analysis of London beggars has similarly demonstrated the pull of the metropolis on more local migrants, with 38 per cent of adult beggars in the 1790s hailing from within 16km of London, and only half as many coming from elsewhere in England.\textsuperscript{8} Most recently, Jelle van Lottum’s work on London and the Dutch Randstad provides a comprehensive attempt to measure London’s migration field. 
Using data from the 1851 census, van Lottum has argued that London’s hinterland, or the average distance travelled by migrants was 136km in the early nineteenth century – a near match for Wareing’s estimate for a century earlier.\textsuperscript{9} But van Lottum’s model significantly extends the notion of a ‘migration field’, by dividing this into four distinct regions or zones, defined by a straight-line Euclidian distance from London (see Figure 1). In van Lottum’s model, zone one includes the counties immediately bordering Middlesex, zone two stretches not quite to Bristol in the west, and to the Wash in the north, zone three includes the remainder of England and Wales, and zone four consists predominantly of Scotland and Ireland.\textsuperscript{10} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{hinterland.png} \caption{The hinterland of London during the early modern period} \end{figure} \textit{Source:} van Lottum, ‘Labour Migration and Economic Performance’, p. 542, fig. 4.\textsuperscript{11} \footnotesize \textsuperscript{5} Zipf, ‘The P1 P2D Hypothesis’. \textsuperscript{6} For studies confirming Ravenstein’s ideas, see, Saville \textit{Rural depopulation in England and Wales}; Redford, \textit{Labour migration in England}. Many studies have focused on testing one or more of Ravenstein’s original conclusions. For more on the idea of step-wise migration see, Pooley and Turnbull, ‘Migration and mobility in Britain’, pp. 55-62; Pooley, ‘Residential Segregation’; Withers, and Watson, ‘Stepwise migration and Highland migration’. For an alternative approach based on ‘family reconstitution’ see Souden, ‘Movers and stayers’. \textsuperscript{8} The rest came from Ireland and Scotland. Whyte, \textit{Migration and Society in Britain}, 76. \textsuperscript{9} Clark, ‘Migration in England; pp. 64-68. \textsuperscript{10} Van Lottum’s four regions were defined by distance from London in kilometers. Region 1: 0-60km, region 2: 61-170km, region 3: 171-450km, region 4: >451km. Van Lottum, ‘Labour migration’. \textsuperscript{11} Van Lottum, ‘Labour migration’, 542. Focusing on English internal migration, this paper seeks to build on van Lottum’s work in two ways. First, it uses a new source: the *Vagrant Lives* (2015) dataset containing the details of some eleven and a half thousand vagrants processed by the county of Middlesex between 1777 and 1786. This dataset draws the analysis backwards from 1851, into the pre-census era of the late eighteenth century – halfway between the periods covered by Wareing and van Lottum. And second, it combines a ‘gravity model’, with a range of county-level data to build a more nuanced understanding of the forces that explain the observed regional differences and anomalies in internal English migration patterns. In the process, this article will demonstrate how a geospatial gravity model approach can identify counties and regions that were sending too many or too few vagrants to London, considering their size, distance, local wage rates, and cost of living. I The data analysed in this article come from a series of bills listing vagrants expelled from Middlesex between December 1777 and April 1786 and transported to the county border by the county’s dedicated vagrancy contractor, Henry Adams. Many had been arrested under the authority of the 1744 Vagrancy Act for ‘wandering and begging’ on the streets. 
Following arrest and an examination designed to determine their parish of origin and legal settlement, the Act permitted local magistrates to punish ‘vagrants’ with hard labour in a house of correction and a whipping, prior to their forcible removal. Others, particularly from 1783 onwards, had applied to the Lord Mayor for a ‘vagrant pass’ which allowed them access to the system of removal to their parish of settlement without suffering hard labour and whipping. Having been carried to the county border, both types of vagrants were then passed on to either the vagrant contractor for the adjacent county or the local constable, to be passed in turn from hand to hand until they reached home. The surviving records were created by Adams and submitted to the county eight times per year, and detail the names as well as the final destinations of each ‘vagrant’. --- 12 Crymble et al., ‘Vagrant Lives’. 13 Lovett et al., ‘Poisson Regression Analysis’. 15 Vagrancy Act of 1744 (17 George II c. 5). The system was not substantially altered until the passage of the Vagrancy Act of 1824 (5 Geo IV c. 83, s.4). For a comprehensive overview of the legal system of vagrancy, see Eccles, *Vagrancy in Law and Practice*, esp. chs. 2 & 8. 16 Hitchcock et al., ‘Loose, idle and disorderly’. for work completed; but for historians they represent a detailed account of the origins of a substantial set of lower-class ‘failed’ migrants to London. For the nine years between 1777 and 1786, 42 out of a possible 65 lists survive.\textsuperscript{17} And following the geo-coding of the place names, they detail the settlement of 11,489 individuals removed from urban Middlesex. When compared with Adams’ own total figures for removals incorporated in a 1785 report to the Middlesex bench, this amounts to roughly seventy-five per cent of all vagrants transported by him in this period.\textsuperscript{18} As the county’s only dedicated removal contractor, Adams’ records therefore provide an incomplete but unique and substantial snapshot of vagrancy within his jurisdiction. To interpret these data effectively and to recognise potential selection bias resulting from the nature of these sources and the contemporary system of vagrant removal, two further characteristics need to be borne in mind. First, Henry Adams was hired to shepherd vagrants from and through Middlesex, and was not directly employed to transport vagrants from the City of London. This meant that while the majority of vagrants arrested in and processed by the City of London went through his hands (Middlesex surrounds the City north of the river), vagrants with a settlement to the south or east of the City were not dealt with by Adams. As a result, removals to the counties of Norfolk, Suffolk, Essex, Kent, Sussex, and Surrey are not detailed in these lists (Table 2 and Figure 3 exclude these counties), and only a portion of those destined for Hampshire were included. The lists also encompass a substantial number of people from Ireland, Scotland and Wales, for whom detailed settlement information was not included. Because of these limitations, it is not possible for these data to be directly compared with results offered by van Lottum for his ‘zone four’ migrants.
And second, as mentioned above, the lists elide two distinct types of vagrants, with very different characteristics: those who had been arrested for disorderly behavior in Middlesex and processed through the houses of correction before being forcibly removed (referred to hereafter as the ‘disorderly poor’), and those who had presented themselves to the Lord Mayor requesting a ‘vagrant pass’, and expecting the vagrant removal system to aid their travel home. This group was particularly numerous in the years after demobilization following the American war in 1783 (referred to hereafter as ‘volunteers’). \textsuperscript{17} Names of dependents – including wives – are not included; instead the number of dependents is listed. If a woman is the lead vagrant in a family group, her name is given. \textsuperscript{18} In the autumn of 1785 Adams reported to the Middlesex bench that he had processed 11,183 vagrants in the preceding five years. The surviving vagrant lists record some 8,365 removals in this period, amounting to seventy-five per cent survival. In terms of dates covered, this same period saw a survival rate of only two-thirds. This reflects the extent to which longer lists involving more costs were disproportionately likely to have survived. For Adams’ report see London Metropolitan Archives, ‘Middlesex Sessions Papers, April 1786’, in \textit{London Lives}, LMSMPS508090268. Note: thirty-three of the original 11,522 entries could not be geo-coded due to ambiguity and have been left out of the analysis. Figure 2. *Map of county of origin of Middlesex vagrants, 1777-86, clustered using Jenks natural breaks classification method.* *Source: Vagrant Lives* dataset. The first of these types of vagrants, the ‘disorderly poor’, were a constant problem for the governors of the metropolis, and the object of a complex system of local policing. ‘Failed’ migrants of this sort were arrested – normally by a constable, beadle or nightwatchman – and, following an examination by a magistrate, taken to the houses of correction at either Tothill Fields in Westminster or Clerkenwell north of the City for punishment, before being shipped by cart to the edge of the county by Henry Adams and sent onwards to their place of settlement. The vagrancy laws were very loosely drafted, and we have no detailed records of precisely what these individuals did to warrant arrest beyond appearing offensive to the eyes of authority; nevertheless it is probable that nearly all of these people were noticeably poor and at risk of becoming a burden to the poor relief system in London. There was no doubt a selection bias in the identification of these migrants, formed by the perception and assumptions of those who initially arrested them, but what they shared most fully was an inability to take advantage of the employment opportunities offered by the capital. The *Vagrant Lives* dataset includes a record of 4,333 individuals in this category, making up 3,262 groups (husbands and wives, parents and children), of which 3,309 individuals and 2,020 groups can be linked to a precise settlement. The second type of vagrant, the ‘volunteers’, represents a very different set of people, who had significantly different characteristics and places of origin.
They include a higher proportion of adult males traveling on their own than do the disorderly poor; and include demobilized servicemen deposited in South-east England as a result of government policy, eager to get home after the American war, and seasonal labourers heading home following the harvest. From 1783 the City of London appears to have stopped whipping vagrants or putting them to hard labour and instead simply began issuing vagrant passes on request. The ‘volunteers’ include 7,156 individuals, composing 5,431 groups. The demographic breakdown of these two groups by gender and relationship is reflected in Table 1. One final caveat should be noted. Many vagrants were removed as family units, and hence represent a single decision to move to London. To account for this, the following analysis is based on the behavior of ‘groups’ – defined either as a family, or an individual recorded as travelling on their own. This ensures that large families from the same region do not skew the analysis. Any study of a population sub-set must consider the problem of selection bias in the sample, and there can be no doubt that this sample of migrants reflects a series of biases. At the point of entry into our dataset, the ‘disorderly poor’ were actively selected by constables for arrest as a result of their anti-social behaviour on the streets. Conversely, the ‘volunteers’ were in part ‘selected’ by the state, which chose to demobilize soldiers in South-east England; and were in part self-selecting, in that they approached the Lord Mayor for a pass, in the hope of free (if uncomfortable) subsidized travel. Prior to leaving their home county for London or the army, the motivations of the larger group of lower class migrants and army volunteers from which our data are drawn could well have differed from others making similar moves. We cannot say with certainty whether either group was made up of ‘typical’ migrants. To address this issue, this study has adopted a number of strategies. First, the migration patterns analysed have been limited to the English counties for which we have complete coverage (those north and west of London). This ensures that we can test the effects of distance, population density, wages and the cost of living, on groups for which we have comparable data. Second, the two vagrant groups were analysed separately to test for substantial differences between them. None were found.19 And finally, we have tested our results against the migration patterns apparent in a control set of plebeian Londoners – criminals. These samples of both the ‘disorderly poor’ and the ‘volunteers’ remain atypical, and selection bias inevitably remains, and remains difficult to fully account for. The only characteristics that all these plebeians share is their poverty and marginal social position. However, as these were --- 19 Hampshire does show significant variation, but as a ‘partial’ result has been excluded from this analysis. The greatest apparent variation in behavior between ‘volunteers’ and the ‘disorderly poor’ can be found among those removed to Northamptonshire and Lincolnshire, but even here, the distinction is not substantial. Results for Rutland and Huntingdonshire are affected by the very small numbers involved. See Figure 3. the characteristics that marked the lives of a majority of London’s migrants, the behaviour of these eleven thousand vagrants should reflect a wider experience. 
II The data derived from Adams’ bills have been analysed using county-level aggregates with a negative binomial regression specification of a traditional gravity model to predict the attraction between two geographic points – in this case, the point or county of origin and urban Middlesex. This methodology has the all-important characteristic of both predicting expected migration flows, and allowing us to explore the effects of different variables on observed behavior. Its use makes it possible to identify anomalous origin/destination pairs and to highlight distinct patterns associated with regions, counties and individual urban centres; providing much more information than a migration field alone. The model and data have been used to test the impact of five specific variables for their effect on migrant flows: (log) population at origin (initially at county level), (log) distance from London, average wages in the county of origin, the trajectory of those wages, and the aggregate price of wheat in the county (used as a proxy for cost of living). Population at origin and distance form the basic components of Zipf’s ‘principle of least effort’ – which argues that humans typically travel short distances using easy to travel paths, and that the size of the population at both the origin and destination are important predictors in migrant flows. If applied to the London vagrants this observation would imply that there should be more vagrants coming from Berkshire, which is both heavily populated and close to London, than would come from, for example, Westmorland, which is both farther away and less heavily populated. This approach predicts how common migration from a given county to London should be, and allows us to compare that prediction to the observed levels of migration recorded in Adams’ bills. Distance was calculated using the ‘sp’ (Pebesma and Bivand, 2005) package in R and is measured as the straight-line Euclidean distance in --- 20 For examples of gravity models on migration studies, see, Karema et al. ‘Gravity model analysis’; Lovett et al, ‘Poisson Regression Analysis’. For a close reading, see Pooley and D’Cruze, ‘Migration and Urbanization’. 21 For an example of this type of model in use on historical data, see, Lovett et al ‘Poisson regression analysis’ and for a comprehensive account of migration theory, see Lee (1966) ‘A Theory of Migration’. 22 Logs of population are taken as the populations of largest and smallest counties are an order of magnitude different and exhibit a log-linear relationship with the numbers of migrant moving groups – see Figures A1 and A2 in Appendix. Distance has a similar log-linear relationship with volume of migration. 23 Zipf, ‘Principle of least effort’. See also, ‘Olsson, ‘Explanation, prediction, and meaning variance’ for more on distance interaction models. kilometers between the geometric centroid of Middlesex (as defined by 1851 County boundaries) and the geometric centroid of the parish of settlement (average distances are then calculated for the county) of the vagrants listed. Aggregate county population figures for 1781 have been used throughout, and are based on the work of E.A. Wrigley, using the 1801 census in combination with the marriage rate to generate a model of change over time.\textsuperscript{24} Additionally, we sought to test the impact of wage rates (and whether these were rising or falling), on the decisions of vagrants to migrate. Wage data come from estimates assembled by E.H. 
Hunt and published as ‘Industrialization and Regional Inequality: Wages in Britain, 1760-1914’.\textsuperscript{25} Here we have used both Hunt’s estimates for absolute wages for each county in 1767-70 (the years just before our data begin) and the rate of change observed between 1767-70 and 1794-5. This was a particularly important factor in the rapidly industrializing counties of the North and the declining counties of the West. Changing wage rates are likely to reflect rising or falling demand for labour, with associated opportunities and unemployment acting as a direct influence on the decision to migrate. Though Hunt’s figures have been criticized for failing to account for changes in winter and summer employment, we believe they represent a good indicator of relative wages between counties.\textsuperscript{26} Burnette’s research, suggesting that women’s wage rates varied in proportion to male wages, means that these data can also be used as a proxy for the relative wages of women.\textsuperscript{27} Finally, we have used the price of wheat as a measure of the local cost of living. In this case, we have incorporated the data from Brunt and Cannon’s ‘Weekly British Grain Prices from the London Gazette, 1770-1820’ to provide a simplified cost of living index for each county.\textsuperscript{28} Wheat price was calculated as the average price of wheat in the home county of the migrant over the period 1776-86. As we do not know exactly when a specific vagrant left their place of settlement – it could have been many years prior to their removal – and as prices and wages fluctuated, these added measures are necessarily rough indicators of general conditions at their point of origin. Nevertheless, these aggregate data provide an added variable to the model that can help explain general flows between the origin and destination, enriching our understanding of the factors contributing to the decision to migrate. \textsuperscript{24} Wrigley, ‘English county populations’. \textsuperscript{25} Hunt, ‘Wages in Britain’. \textsuperscript{26} Lyle, ‘Regional Agricultural Wage Variation’. \textsuperscript{27} Burnette, ‘Female day-labourers’. \textsuperscript{28} Cannon and Brunt, \textit{Weekly British Grain Prices}; Cannon and Brunt, ‘English Corn Returns’. By comparing the projections generated by our model against the observed flows of migrants recorded in Adams’ bills we are able to identify counties, regions, and urban centres that were sending either more or fewer migrants to London than the model predicts. By incorporating wage and cost of living variables into the model, we are able to test the extent to which push and pull factors correlate with, and arguably explain, these anomalous migration flows. A detailed description of the model used can be found in the Appendix: A Five Variable Gravity Model of London Migration. III Since we know the number of vagrants (both disorderly poor and volunteers) that did travel between their point of origin and London, we can use the model described in the appendix to estimate the variation between the distribution of actual vagrants from each county and the numbers the model predicts. Table 2 details the differences between the observed flows and the modelled predictions for each county, divided between the ‘volunteers’ and ‘disorderly poor’. Table 2. Observed flows versus model estimates: volunteers and disorderly poor 1777-86. Source: Vagrant Lives dataset.
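The comparison that underpins Table 2 is straightforward to reproduce once a model of the kind described in the Appendix has been fitted. A minimal sketch in R (the language used elsewhere in the analysis) is given below; the data frame `counties`, its columns, and the fitted model object `fit` are hypothetical names for illustration, not the authors' actual code:

```r
# Sketch only: assumes a data frame `counties` (one row per origin county,
# with `groups` = observed moving groups) and a fitted negative binomial
# gravity model `fit` of the kind specified in the Appendix.
counties$expected  <- fitted(fit)                      # model estimate per county
counties$residual  <- counties$groups - counties$expected
counties$pct_error <- 100 * counties$residual / counties$groups
# Counties sending many more or fewer migrants than predicted:
counties[order(counties$pct_error), c("county", "groups", "expected", "pct_error")]
```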
What is apparent from this comparison is that the results produced by the model are quite close to observed data for counties within about 160 kilometers of London. At the same time, Table 2 highlights a number of anomalous results that consistently over- or under-supply London’s vagrant population. Figure 3. Difference between observed and estimated flows 1777-86. Source: see tables 5 and 6. --- 29 Totals in this table are different from those in Table 1 as some individuals were dropped from the final analysis, either due to unreliable entries or lack of spatial reference. We also note that analysis of the spatial distribution of the residuals using a Global Moran’s I test indicates no spatial clustering of the residuals (I statistic of -0.03 for the volunteers and 0.2 for the disorderly poor), which might otherwise indicate a misspecification of the model. Figure 3 illustrates these same results, showing the percentage error against observed migration flows for each group, by county. The errors themselves are generally quite small, showing that the model does a good job of estimating the likely flows of migrants. To test if these results are skewed by substantial selection bias in the original Vagrant Lives dataset, we ran the model on another dataset: the Middlesex Criminal Registers from 1801 to 1805, which includes demographic details of 1,642 individuals who entered the gaol system in Middlesex in those years. Though vagrants and criminals entered the historical record via different routes, they all tended to be from similarly economically marginal backgrounds. When compared to the results of the model, the Criminal Registers dataset shows remarkably similar patterns of migration to those generated by the disorderly poor and volunteers, suggesting that any selection bias in the Vagrant Lives dataset reflected a shared experience of poverty common to most migrants, rather than a systemic process of selection associated exclusively with vagrant removal. Table 3 and Figure 3 illustrate that criminals from 1801 to 1805 – two decades after the vagrants were removed – follow an almost identical pattern with almost precisely the same counties being over and under-represented. The model’s percentage error amongst the criminals is frequently between those of the vagabond poor and volunteers (i.e. Northampton and Lincolnshire), suggesting that these flows are representative of migration patterns amongst plebeian migrants more broadly. The consistency apparent in the relationship between the model outputs from these two data sets both re-enforces our trust in the validity of the model itself, and in the significance of observed variations from its predictions. Table 3. Observed flows versus Model estimates, vagrants and criminals Source: Vagrant Lives dataset, Middlesex Criminal Registers. Perhaps the most noteworthy result is that the model’s predictions coincide almost precisely with the observed flows of migrants from a group of counties in the Midlands and Welsh borders, including Leicestershire, Nottinghamshire, Staffordshire and Shropshire. To the north of these counties, however, the observed migrant flows deviate noticeably from those predicted. Cheshire, Derbyshire, and particularly Yorkshire and Lincolnshire contributed substantially fewer migrants than the model predicts. Most of these counties were among those experiencing rapid industrialization and urbanization in this period, providing significant competition for London in the labour market. 
The cities of the industrial north tripled in size over the course of the eighteenth century, and like London, their growth was fueled by migration.\textsuperscript{30} Those migrants had to come from somewhere.\textsuperscript{31} With London’s gravity pulling at those in the south and east of England, the industrial north seems to have been a bigger draw for those north of the Midlands. Those in rural areas, particularly in the north, would, in many cases, have to pass through another major urban centre on their way to London. In some cases they must have stopped to find work rather than risk the longer journey, adhering to George Kingsley Zipf’s ‘principle of least effort’ and Samuel Stouffer’s theory of intervening opportunities.\textsuperscript{32} As is shown by the z-scores in Tables 5 and 6 in the Appendix, after the size of a county’s population and distance from London, the trajectory of its average wage (rising or falling), and hence demand for labour, is the most important predictor of whether a migrant is likely to make the journey to London, with rapidly rising wages likely to encourage potential migrants to stay at home. Running the model without the wage trajectory data indicates that, without the influence of rising income, counties such as Lancashire, Leicestershire and Yorkshire would be sending far more lower-class migrants to London. This in turn suggests that the growth in wages and demand for labour, rather than static high wages, was a more significant pull factor for migrants. We can also infer from the same anomalies that the counties surrounding this industrializing region were affected by the draw of industrial employment in different and distinct ways – in turn affecting flows of migration into London differently (Figure 3). The different patterns reflected in the figures for Lancashire and Yorkshire illustrate this. Both were industrializing. And both counties sent large numbers of vagrants to London (362 and 376 moving groups respectively); however, the former is flagged by the model as sending more people than we would expect, while the latter sent far fewer. Closer analysis reveals that Lancashire’s migration to London is dominated by people coming from Liverpool, who in turn are composed of a significantly higher proportion of solo male travelers (44 per cent compared to 26 per cent nation-wide). Nearly all of the deviation from the expected values in Lancashire occurs amongst the volunteer group, and may be explained by the importance of the port of Liverpool as a recruiting ground for the navy. As the home to a large itinerant Irish population looking for work, it may be that the semi-settled Irish in the city were amongst those former soldiers and sailors dumped in London at the end of the American war (the first imperial war in which Irish Catholics comprised a substantial proportion of the British Army). Without the volunteers, Liverpool, and thus Lancashire, moves from an anomalous county to one fairly accurately represented by the model. The data for Yorkshire tell a different story. As the largest English county both in size and by population, it contributed a large absolute number of migrants, but significantly fewer migrants than the model predicts. \textsuperscript{30} Clemens, ‘The rise of Liverpool’. \textsuperscript{31} Whyte, Migration and Society in Britain 1550-1830, 67. \textsuperscript{32} Zipf, Principle of Least Effort; Zipf, ‘The P1 P2/D Hypothesis’; Stouffer, ‘Intervening Opportunities: A Theory Relating to Mobility and Distance’.
This strongly suggests that either life in Yorkshire gave would-be migrants proportionately few reasons to leave, or that the draw of regional urban centres such as Leeds, Liverpool and Manchester ensured that fewer people saw a reason to make the trip to London. Wages in North and West Yorkshire nearly doubled between the 1770s and 1795, again suggesting that demand for labour leading to changing wage rates formed a significant explanation for the observed patterns. To the south, the flows are even more varied, notably with the East Midland counties of Northamptonshire and Rutland, along with the eastern counties of Bedfordshire and Huntingdonshire, sending fewer than expected migrants to London, despite their proximity to the capital. Conversely, the West Midland counties of Warwickshire, Gloucestershire, and Worcestershire, as well as neighbouring Berkshire and Oxfordshire, all send more migrants than we would expect, despite, in many cases, being further from the metropolis. This suggests that we can significantly revise van Lottum’s original migration field, defined as a series of concentric circles (Figure 1) to a more complex pattern as revealed in Figure 3. Van Lottum included all of these Midlands counties amongst London’s hinterland. Instead, it would appear that for the inhabitants of the eastern counties, the draw of higher wages in the north was slightly stronger in the late eighteenth century than was the draw of London. The opposite was true for the counties of the West Midlands. This means van Lottum’s original circle, and hence London’s migration field, could be more accurately represented as an ‘L’ shape, stretching across the southern midlands to the west, before turning northwards to follow the Welsh border. --- 33 Denman, ‘Hibernia officina militum’. 34 Hunt, ‘Wages in Britain’. This ‘L’ shaped cluster of counties centred on the West Midlands was the source of a disproportionate supply of urban migrants; and the cities they contain, are also prominent. Birmingham, Bristol, Worcester, Coventry, Bath and Gloucester are particularly prominent starting points for migrants, as can be seen in Table 4 (Exeter also features in this list, though Devon as a whole is not a significant point of departure). More than ten per cent of all vagrants in the data set hailed from one of these towns (781), making this urban-to-urban migration a significant force in the late eighteenth century that transcends the pattern revealed by county-level analysis and the straight-line Euclidean definition of London’s migration field. This urban-to-urban pattern is surprising. Britain was still an overwhelmingly rural society in the eighteenth century. According to Malanima and Volckart, by the mid-eighteenth century 16.4 per cent of the English and Welsh lived in urban centres, rising to 22.3 per cent by the end of the century.35 It would therefore be easy to assume that most newly arrived Londoners were fresh from the farm. 
However, London vagrants are considerably more urban than the population as a whole, with 28 per cent coming from one of the forty-two largest English towns.36 Table 4. Number of London vagrants in the urban centres of the west, focusing on those counties that were sending disproportionate numbers of vagrants to the metropolis. Source: de Vries, European Urbanisation; Herbert, ‘Gloucester’; Wrigley, ‘English county populations’. According to Wrigley’s estimates of population growth (1761-1801), the counties for which we have full vagrancy data included just over five million people in 1781.37 That is a ratio of 1 vagrant to 435 people. Unfortunately, we do not have accurate populations for these urban centres in 1781. Corfield urges a ‘polite skepticism’ of pre-census urban figures, as ‘residents in more than one town expressed disbelief in 1801, when the first census showed their populations to be much smaller than they had expected’. Nevertheless, it is possible to give a rough idea of the ratio of these urbanites to London vagrants by using figures from the turn of the nineteenth century collected by de Vries and Herbert. When compared to the countywide ratios of vagrants per capita, the results show that these urban centres in the west are sending a disproportionate number of vagrants to London. Nearly all of these urban regions are sending more than twice as many vagrants per capita as were sent from the surrounding countryside. Worcester and Bristol sent nearly four times as many as the rural regions of their counties. This means, in turn, that this group of London’s poor were in many cases not from agricultural backgrounds, but were experienced urbanites who were familiar with life in a city, though not one of the size of London. This suggests that migration to London was part of a pattern of trading up from rural-to-urban-to-London. Birmingham and Bristol, along with a host of industrial and port cities, grew substantially between 1750 and 1800. Given the rapid growth of these urban centres, it is likely that large numbers of London vagrants who had settlements in these towns were on a second or third leg of migration. A growing city offers many opportunities for gaining a settlement – particularly through either apprenticeship or domestic service – and this evidence suggests a significant pattern of ‘trading-up’ on the way to London, supporting Peter Clark’s work on the period to 1750 and Paul Slack’s findings on vagrancy from the Tudor and Stuart period. Besides their urban starting point, migrants from these western cities were anomalous in a second way: they were much more likely to be female and to be travelling on their own than were migrants from elsewhere. --- 35 Malanima and Volckart, ‘Urbanisation’. 36 These include: Bath, Bermondsey, Berwick upon Tweed, Birmingham, Bristol, Cambridge, Canterbury, Carlisle, Chester, Coventry, Deptford, Derby, Dover, Ely, Exeter, Gloucester, Hertford, Hull, Ipswich, Lancaster, Leeds, Leicester, Liverpool, Manchester, Newcastle upon Tyne, Northampton, Norwich, Nottingham, Oxford, Plymouth, Portsmouth, Reading, Salisbury, Shrewsbury, Southwark, St Albans, Stafford, Warwick, Winchester, Windsor, Worcester, and York – all of which had at least 10 vagrants returned during the period. 37 Wrigley, ‘English county populations’.
This is the opposite of Souden’s findings for this part of the country in the early modern period, which argued that male out-migration tended to dominate the narrative in the west. It also implies that step-wise migration could be significantly associated with domestic service opportunities in urban centres. --- 39 Bristol has been considered part of Somerset for the purposes of this paper. 40 Liverpool’s population rose from 22,000 to 78,000 between 1750 and 1800. Manchester rose from 18,000 to 70,000; Glasgow went from 24,000 to 77,000, and Birmingham from 24,000 to 69,000; de Vries, *European Urbanisation*, 270-271. 42 Souden, ‘East, west’, 306-308. Though these western towns – Birmingham, Bristol, Worcester, Coventry, Bath, Exeter, and Gloucester – were all located in the same part of the country, they had significantly different histories. In Bristol, wages were stagnant, and the city was gradually losing out to Liverpool as the chief port on the west coast. In Devon, where wages actually dropped between 1770 and 1795, Exeter sent larger than expected numbers of vagrants – though the county on the whole did not.\footnote{Hunt, ‘Wages in Britain’.} The city’s industry was in decline by this period, but was seeking to re-invent itself as a leisure centre on the model of Bath, and in combination with its role as a county town, and ecclesiastical centre, was therefore home to a large number of domestic servants who may have been tempted by opportunities in London. Other western towns historically focused on textile production also found themselves in decline as the mills of the north began picking up steam.\footnote{Corfield, \textit{Impact of English Towns}, 35; Chalkin, \textit{Provincial Towns}, pp. 33, 49.} The large number of migrants originating in these towns suggests their relative decline encouraged out-migration. Of course, not all of the towns in the region, or among those contributing a disproportionate number of migrants, were in decline. Birmingham was growing as the economic centre for the industries of the Black Country. Unlike in the other towns under discussion, Birmingham’s very success may help to explain its higher than expected contribution to London’s migrants. The city’s importance as a centre of commerce and industry led to the development of both a substantial trade network and canal system.\footnote{Ellis, \textit{The Georgian Town}, p. 39.} By 1777, fifty-two coaches per week made the trip from Birmingham to London, each carrying up to six passengers, meaning that as many as 312 people per week could now make the trip to the capital in relative comfort.\footnote{Money, ‘Birmingham and the West Midlands’, pp. 296-297.} While not all poor migrants would have been able to afford a coach journey, writing in 1768 Arthur Young commented that such journeys were now frequently within the financial reach of servants who saved money to come to London.\footnote{Young, \textit{The Farmer’s Letters to the People of England}, 340.} With the notable exceptions of Bristol and Exeter, all of the large towns in Table 4 are inland rather than ports, meaning the majority of these urban vagrants likely arrived in London over land. The counties in this region are in some cases quite far away from the capital. Both Bristol and Birmingham are approximately 170km as the crow flies. 
However, Britain’s road network had been dramatically improved over the course of the eighteenth century by the turnpike trusts, which were established to improve the long-distance thoroughfares.\textsuperscript{48} Between 1750 and 1790 the average speed of passenger coaches rose from four kilometers per hour to nearly ten, connecting the major towns and cities to London in a new way, and both facilitating internal travel and reducing its cost.\textsuperscript{49} This would have made it much easier for people traveling over land from places such as Bristol and Birmingham to reach London efficiently.\textsuperscript{50} IV At the other end of the country was Northumberland, which sent nearly twice the number of migrants predicted by the model. It stands out as a distinct and unique regional case. This unusual character is confirmed by the higher than expected number of criminals from the region appearing in the Middlesex Criminal Registers. Northumberland’s prominence flies in the face of the wider evidence, which suggests that the North of England was not a major contributor of plebeian migration to London – at least not disproportionately so. A closer look at the vagrants involved reveals that the connection is not county-wide, but is instead once again concentrated on urban migration, with 131 out of 220 vagrants (60 per cent) from Newcastle-upon-Tyne and Berwick-upon-Tweed – the two principal towns in the county. Drilling down even further reveals that a disproportionate number of these migrants are women (42 and 48 per cent respectively, compared to 33 per cent nation-wide). The most likely explanation is the prominence of coastal shipping from Newcastle to London.\textsuperscript{51} Between 1751 and 1792, the weight of goods shipped via Newcastle rose from 21,600 tons to 121,200, the vast majority carried by colliers supplying the energy needs of the capital.\textsuperscript{52} The importance of the ‘coaly Tyne’ meant that ship traffic between Newcastle and London probably exceeded that between any other two points along the British coast, with estimates of between 600 and 1,000 ships participating in the trade.\textsuperscript{53} Despite Northumbria’s distance from London, it contributed disproportionately to the capital’s growth. \textsuperscript{48} Guldi, \textit{Roads to Power}. \textsuperscript{49} Bogart, ‘Turnpike trusts’. \textsuperscript{50} Gerhold, ‘Development of stage coaching’. \textsuperscript{52} Corfield, \textit{Impact of English Towns}, pp. 35-36. \textsuperscript{53} Ville, ‘Total factor productivity’, p. 359. Dorset also emerges as an unusual case, both in deviating from the model estimates by contributing fewer than expected migrants, and because it is geographically isolated from similarly outlying counties. Sherborne, a small Dorset town of approximately 3,000 people, contributed the largest single collection of vagrants from the county. A specialized centre for silk throwing, Sherborne both had strong direct links to the capital, and suffered along with London in the late eighteenth-century decline in that silk industry, suggesting, if anything, that Dorset should be producing more vagrants than Adams’ bills contain. However, evidence from Hunt suggests that the county as a whole fared quite well in terms of wages, rising nearly a fifth between 1770 and 1795, compared to neighbouring Devon, where wages dropped eight per cent in the same period. As in the case of Yorkshire, this growth in wages in the area may have resulted in fewer reasons to emigrate.
One prominent pattern that is not obvious from Figure 3 is that affecting the area immediately adjacent to the capital. Included in this category is the area within about 130 kilometers of London, in the South East, particularly the regions immediate north and west. The number of distinct places from which vagrants claimed a settlement near the Middlesex border is higher than anywhere else, reflecting the extent to which London voraciously absorbed lower-class migrants from its immediate hinterland – its pull increasing the closer one approached. This is in contrast to the findings in the west, dominated by urban-to-urban flows. For this immediately adjacent area, our findings fully support the work of both van Lottum and Wareing, as well as Souden’s findings about higher female out-migration from counties in the south east – probably driven by domestic service positions in London. The 5-variable gravity model used in this paper has identified a series of anomalous migration flows towards London that begin to describe a more granular pattern of regional and gendered behavior than can be deduced using a straight line ‘migration field’ approach. What emerges from these data and this model are three distinct patterns of lower-class migration. The first, identified on the basis of an analysis of the county-wide figures, affects the industrializing areas of the Midlands and the North. With few exceptions, counties in the industrializing north, sent fewer plebeian migrants to London than the model predicts. The model suggests that the disproportionately strong draw of the industrial north extended at least as far south as --- 55 Souden ‘East, west’. Northamptonshire, if not further, and that Lancashire and Yorkshire experienced the pull of industrial employment in very different ways. Yorkshire benefited from rising local wage rates that kept people at home, whereas Liverpool in particular was subject to higher than expected levels of male migration to London reflecting its role as a major port and transshipment site for Irish migration. The second pattern to emerge concerns the urban centres in the west of England, from Birmingham to Bristol, and on to Exeter. If those in the industrial north generally eschewed London, the people of the West Midlands took the opposite approach. These counties contributed a much higher number of migrants than we would expect given their population, distance, local wage rates, and cost of living. The proximity to the industrializing north seems not to have drawn people from these counties as strongly as it did east of the Pennines – at least not at the expense of migration to London. Within this pattern, women also provide a story of note, with female migrants dominating this urban-to-urban flow. The decline of the textile industries in the West Midlands, and South West and the associated fall in wage rates, and the creation of a more efficient and extensive transport network, may have underpinned this phenomenon. Bristol, Birmingham, Coventry, Worcester, Bath, Exeter, and Gloucester, were substantially over represented among the vagrant poor. The third pattern to emerge from these data relates to the counties and region immediately surrounding London. Within a migration field of approximately 130 km – close to that defined by van Lottum and Wareing – there are few significant anomalies in the overall pattern predicted by our model. 
These South Eastern counties sent higher proportions of women than men, and evidence a great diversity of places of origin compared to the rest of the country, with vagrants claiming settlement in a huge array of parishes and small towns. This suggests that migration to London from the South East was more likely to be direct – from the countryside to London – rather than disproportionately step-wise; and that the pull of London employments in domestic service were significant. Finally, the counties of Dorset and Northumbria stand out as unusual cases, both nationally and regionally. Dorset’s lack of major urban centres and rising wages appears to have placed it in a different relationship to London than its neighbouring counties. While in Northumberland, the prevalence of cheap transport via the colliers heading from Newcastle to London appears to have encouraged migration. These three distinct patterns of London migration, and the experiences of Dorset and Northumbria provide nuance to our understanding of the migration field defined by van Lottum. And while this work largely confirms the pull of London on its immediate hinterland – as described by both van Lottum and Wareing, as well as Souden – it also suggests a substantial pattern of regional differentiation, in which different forces – primarily rising and declining wage rates, urbanization, and the effect of domestic service – affected individual decisions to migrate. Appendix: A Five Variable Gravity Model of London Migration The model used in this analysis builds upon the work of Flowerdew and Aitkin (1982), Flowerdew and Lovett (1988), Abel (2010), and Congdon (1993), and uses a negative binomial regression approach. The work of these authors is part of the large canon of research using gravity models in migration analysis, all of which can be traced back to the gravity modeling work of Zipf.\(^{56}\) This model, based on the traditional Zipf-style gravity model would take the form: \[ M_{ij} = k \frac{P_i P_j}{d_{ij}^2} \] Where \(M_{ij}\) is the flow of migrant groups between origin \(i\) and destination \(j\), and where \(P_i\) and \(P_j\) are the populations at origin and destination respectively, and \(d_{ij}^2\) represents the distance \(d\) between the origin and destination (squared) and \(k\) is a constant of proportionality. Wilson (1971) formally extended the gravity model into a family of ‘spatial interaction’ models that improved the accuracy of the estimates produced through using either (origin or destination) or (origin + destination) data constraints. One of the problems with the mathematical formulation of the traditional gravity model is that when all flow estimates produced by the model are summed they can exceed any observed flows. The constraints imposed by Wilson ensured this is no longer the case, although Wilson’s approach was subsequently adapted by a number of scholars, including Flowerdew and Aitkin (1982), who noted that by taking the logarithms of the terms in Wilson’s model, we arrive at a more flexible regression model, which allows us to extend the gravity model and test the effects of a range of additional explanatory variables (such as wages and cost of living) on the flows exhibited in the data. When modeling migration, simply taking the logs of both sides of the gravity model and fitting a log-log regression model is not entirely appropriate for a number of reasons. 
First, since migrant data are counts of individuals in fixed space, they are described by a discrete probability distribution rather than a normal or log-normal probability distribution. When modeling such discrete probabilities, Flowerdew and Aitkin (1982) suggest that a Poisson regression is most appropriate, although Congdon (1993), Flowerdew (2010), and Abel (2010) contend that where these Poisson migration models are a poor fit (either due to missing explanatory variables or to migrant groups not moving entirely independently from each other) and exhibit what is known as ‘overdispersion’ (the observed variation in the data being greater than that in the theoretical model), a more appropriate generalized linear model to use is the negative binomial model.\footnote{Negative binomial models account for unexplained extra variance in the model with an additional parameter – this means that explanatory variable standard errors (and therefore statistical significance) are less likely to be biased by unaccounted-for factors. In practice, experimentation with both Poisson and negative binomial models in this research produced almost identical fitted estimates (with the exception of one county); however, the dispersion parameter indicated that the negative binomial model was more appropriate than the Poisson model and so we use this specification. An additional advantage of using a regression model rather than a conventional multiplicative gravity model is that parameter estimates for each of the independent variables can be easily obtained, thus revealing the relative importance of each variable in the model.} The original gravity model in Equation 1 can be re-written in its negative binomial regression form as: \[ \ln \lambda_{ij} = \beta_0 + \beta_1 \ln P_i + \beta_2 \ln P_j + \beta_3 \ln d_{ij} \tag{2} \] Where $\ln \lambda_{ij}$ is the conditional mean of the expected migration flow, which is logarithmically linked ($\ln$) to a combination of the logged population and distance variables (see Figures A1 and A2 for the log-linear relationship between population and numbers of disorderly poor and volunteer migrants). The parameters $\beta$ (which indicate the relative importance of each explanatory variable) are estimated by the model. Figure A1 – log(population) plotted against log(disorderly poor migrants) Figure A2 – log(population) plotted against log(volunteer migrants) Dropping the London ($P_j$) population from the equation (as London is the only destination in our data and therefore redundant) and incorporating wheat price data ($Wh_i$) at origin, wage ($Wa_i$) and wage trajectory ($WaT_i$) data (not logged), and sub-models for subsets of the full migrant dataset differentiating between migrant types $T$ of disorderly poor or volunteer ($M_{ij}^T$), we arrive at the final model: \[ \ln \lambda_{ij}^T = \beta_0^T + \beta_1 \ln P_i + \beta_2 \ln d_{ij} + \beta_3 Wh_i + \beta_4 Wa_i + \beta_5 WaT_i \tag{3} \] We are then able to estimate expected migrant flows, $\lambda_{ij}^T$, by simply taking the exponential of Equation 3: \[ \lambda_{ij}^T = \exp(\beta_0^T + \beta_1 \ln P_i + \beta_2 \ln d_{ij} + \beta_3 Wh_i + \beta_4 Wa_i + \beta_5 WaT_i) \tag{4} \] As we only know when a vagrant was expelled from London, but do not know when he or she arrived in the city, we cannot accurately determine the population of their home county when they migrated.
Therefore we have used the aggregated county-level population figures for 1781, as published by Wrigley, to represent the population at origin $P_i$.\(^{58}\) The relative population of each county is more important than the absolute population, so these estimates are sufficient. A series of models were run to determine the explanatory power of each of the variables. As with any regression modelling exercise, a series of assumptions should be met before we can be confident of the results obtained. Catch-all plots for both vagrant and volunteer models indicate linearity assumptions are met, whilst tests for independence of observations – specifically spatial autocorrelation of the residuals, tested by calculating a global Moran’s I statistic (binary spatial weights matrix, using the Queen’s case) – showed that there was very little evidence of spatial autocorrelation in both the volunteer and vagrant models. Early experimentation with the wheat price variable (or cost of living proxy) returned a significant (and positive) influence on the flows of both vagrants and volunteers (as wheat prices increased, so did the flows of groups to London); however, the subsequent addition of both wage and wage trajectory data confounded this influence, suggesting that wages are a better proxy for understanding the ability someone has to support their existence in an area. In the final models reflected in Tables 5 and 6, we keep wheat in the model rather than rejecting it, as one would expect the interplay between cost of living and wages to vary across the study area and there are no good theoretical reasons for rejecting cost of living as an explanatory factor outright. Both the disorderly poor and volunteers must be modelled separately, as they represent different types of migrants. Tables 5 and 6 show the outputs for these models, with their corresponding significance and a number of goodness-of-fit statistics. Table 5. Model outputs from the model including total disorderly poor moving groups, English counties. Table 6. Model outputs from the model including total volunteer moving groups, English counties. To read these tables you need to understand the raw parameters and the standardized z-scores. The raw parameters are the calculations for each variable. For Table 5 (disorderly poor), the model shows that on average, for every $e$-fold (the base of the natural logarithm is Euler’s number, $e$, or around 2.72) increase in distance away from London, there were 0.54 fewer moving groups from a particular point of origin. For every $e$-fold increase in a county’s population, there were an extra 1.24 moving groups. For every shilling increase in the price of a bushel of wheat, there were 0.024 fewer moving groups (although the standard errors are such that this figure is not statistically significant); for every additional penny of wages there were 0.03 fewer moving groups; and for every percentage point increase in wages between 1767-1770 and 1794-95, there were 0.014 fewer moving groups migrating to London. As each of the variables is measured on a different scale (km, people, £), the standardized coefficients (z-scores) make it possible to compare all variables on the same scale and get an impression of the relative importance of each. To assess the goodness-of-fit of each model in a way that is easy to interpret, we calculate a pseudo $R^2$ statistic using McFadden’s method ($R^2 = 1 - \frac{\text{Residual Deviance}}{\text{Null Deviance}}$). \(^{58}\) Wrigley, ‘English county populations’.
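For readers who wish to replicate this workflow, the steps described above (fitting Equation (3) as a negative binomial regression, computing McFadden's pseudo $R^2$, and testing the residuals for spatial autocorrelation) can be sketched in R as follows. This is an illustrative outline under assumed object and column names (`counties`, `county_polygons`, and so on), not the authors' code:

```r
library(MASS)    # glm.nb: negative binomial GLM
library(spdep)   # poly2nb, nb2listw, moran.test

# Equation (3): negative binomial regression form of the gravity model
fit <- glm.nb(groups ~ log(pop_1781) + log(dist_km) + wheat_price + wage + wage_traj,
              data = counties)
summary(fit)     # raw parameters and z-scores, as reported in Tables 5 and 6

# McFadden's pseudo R^2 = 1 - (residual deviance / null deviance)
pseudo_r2 <- 1 - fit$deviance / fit$null.deviance

# Global Moran's I on the residuals: Queen's-case contiguity, binary weights
nb <- poly2nb(county_polygons, queen = TRUE)
lw <- nb2listw(nb, style = "B")
moran.test(residuals(fit, type = "pearson"), listw = lw)
```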
Focusing on Table 5: a pseudo $R^2$ value of 0.78 reveals that around 78 per cent of the variation in the migration patterns observed in the data can be explained by the four variables: population mass of the vagrants' home county (z-score = 10.434), distance from London (-3.922), wages (-2.177) and wage trajectory (-3.034). The standardized coefficient for wheat price is lower (closer to 0) than those for distance and population, indicating that cost of living is a less influential factor than the core gravity model variables; moreover, it is not statistically significant. The standardized coefficients reveal the relative importance and the direction of the relationship between each variable and migration in the model (positive parameters showing that as the variable increases, so does the volume of migration; negative showing that as one goes up, the other goes down). The model therefore indicates that a large population in the migrant's home county was the most important indicator of their likelihood to migrate to London, followed by distance, wage trajectory and average wages. The model of the volunteers shows a slightly different relative importance. With an $R^2$ value of 0.80, the model suggests that 80 per cent of the variation in patterns can be explained by the variables, with population (10.370) again the most significant factor, followed this time by wage trajectory (-2.423). All other variables in the volunteers model are statistically insignificant, but given the very different routes to London taken by this group, this is not entirely unexpected. The raw parameter values in the tables above can be used to estimate the numbers of migrants directly. For example, as Table 2 shows, the model estimate for the disorderly poor from Buckinghamshire is 83, with Table 7 showing the model inputs for that county. Table 7. Sample model inputs for Buckinghamshire. Given the values for the various predictor variables in the table above, we can use the model parameters in Table 5 or 6 to compute our model estimates:
\[ \lambda_{ij}^T = \exp(\beta_0^T + \beta_1 \ln P_i + \beta_2 \ln d_{ij} + \beta_3 Wh_i + \beta_4 Wa_i + \beta_5 WaT_i) \]
The estimated migration, $\lambda$, of vagrant groups between Buckinghamshire ($i$) and London ($j$) is then
\[ 83 \approx \exp(-3.84814068 + (1.23523249 \times \ln 95936) + (-0.54165632 \times \ln 46.73214286) + (-0.02397521 \times 63) + (-0.02517889 \times 96) + (-0.01378304 \times -8.333333333)) \]
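As a quick arithmetic check of the worked Buckinghamshire example, the snippet below plugs the raw parameters and county inputs quoted above back into Equation 4; all numbers are taken from the text, and small differences from the published figure of 83 are due to rounding of the quoted inputs.

```python
import math

# Raw parameters and Buckinghamshire inputs as quoted in the worked example.
beta0, b_pop, b_dist = -3.84814068, 1.23523249, -0.54165632
b_wheat, b_wage, b_traj = -0.02397521, -0.02517889, -0.01378304

lam = math.exp(
    beta0
    + b_pop * math.log(95936)          # county population (1781)
    + b_dist * math.log(46.73214286)   # distance from London
    + b_wheat * 63                     # wheat price (shillings)
    + b_wage * 96                      # wages (pence)
    + b_traj * (-8.333333333)          # wage trajectory (per cent change)
)
print(f"{lam:.1f}")  # roughly 83-84 moving groups, matching the reported estimate of 83
```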
Rekeang as a Concept of Sustainability in the House of the Society of Karampuang: Redefining Granary Nita Dwi Estika¹, Feni Kurniati¹, Achmad Syaiful Lathif² ¹ School of Architecture, Planning and Policy Development (SAPPD), Institut Teknologi Bandung, Indonesia ² Desain Interior, Fakultas Industri Kreatif, Telkom University, Indonesia fenikurniati@ar.itb.ac.id ABSTRACT Whether granaries in traditional communities have survived or been neglected in the face of modern progress has been widely discussed. However, the discussion remains fragmented and focused solely on the granary itself, without considering its wider web of relations. This paper delves into the roles of the granary in Karampuang, a traditional village in South Sulawesi, to understand how the granary (known locally as 'rekeang') functions and endures while facing modernization. The inquiry focuses on the interactions between rekeang and farming products, which leads to an understanding of the interrelationships between rekeang and its surroundings: 1) domestic spaces, 2) family house units, and 3) other rekeangs. Each relationship reveals that rekeang has a vital role; besides storing farming products, it also serves as the key to sustaining the community, especially in cultural and ecological contexts. In the end, the paper proposes the critical insight of the 'granary as a network,' which might provide an alternative approach to achieving sustainable development in traditional environments. © 2020 IJBESR. All rights reserved. Keywords: Granary, Karampuang Traditional Village, Network, Rekeang, Sustainable Development 1. Introduction Nowadays, the indigenous culture of traditional villages faces modernization that forces old customs, such as rice production and rituals, to adapt and reposition themselves amid rapid change. Changes such as technological advances transform their way of life and shift traditional processes. Cultural transformation occurs as a community's response and effort to create a harmonious situation between old customs and modern life. The transformation process can be interpreted through the relationship between people and their place. The transformation itself is a process of discovery through community strategies/solutions for adapting to every new condition and achieving sustainable and truly rooted development. Sensitivity is needed to see the roots that support the cultural sustainability of a community [1]–[5]. Sustainable development is not a goal or destination but a process of change that contains 'need' and 'making consistent': it considers basic human needs and a comfortable way of life, alongside the ability to adapt to present and future demands [6], [7]. The SDG 2030 agenda promotes culture and environment as playing a vital role in development and highlights the importance of indigenous peoples [8], [9]. Sustainability is an issue that cannot be separated from ecological/environmental conditions and has the same significance as cultural sustainability. The critical sustainability issues related to vernacular architecture are cultural norms, values, social behavior, and human practice [10], which imply a continuous dwelling process that maintains a connection to land and place. Current sustainable architecture is concerned with the effect of human activity on the earth. The transformation of buildings is part of continuing the vitality of local culture and reworking the vernacular with respect to the past [7]. 1.1. 
Granary in the context of sustainable development Farming activity and the granary have taken on numerous sacred roles in traditional communities, especially in Southeast Asia. In Southeast Asia, rice storage in granaries began with raised-floor housing [11]. The granary floor level is set higher than the house out of respect for the rice goddess. Granaries are divided into attic granaries and granaries that are separate from, or within, the house [12]. Besides storing rice, the closed attic granary also serves as a shrine [11]. Nowadays, granaries in traditional buildings have been transformed into houses, with enclosures and openings such as walls, windows, or doors [1]. The traditional community of Karampuang practices farming as its livelihood, and therefore paddy and the granary (in local terms called rekeang) cannot be separated from it. Rekeang was the place to store their 'treasure', the crops. The Karampuang people have a traditional folklore entitled I La Galigo that narrates To Manurung as the incarnation of Sang Dewi Padi Sangiangseri (the local name of the rice goddess). This folklore is sacralized and is one of the links in their farming life cycle. In Karampuang, there is a ceremony named Mabbahang, the first step of Mappogau Hanua, which can only be celebrated after the paddy in Karampuang village, from either the traditional or the residents' rice fields, has been harvested [13], [14]. Mappogau Hanua is the largest ceremony in Karampuang, with many traditions and rituals that must be held. The Matuli tradition uses three rice ties to represent three types of rice plant (white, black, and red). All of the paddy types used for the ceremony have to be stored and treated well according to their rules and norms [14]. Understanding granary transformation means identifying the spatial changes of crops in the society, particularly within the domestic spaces where the crops are placed, laid over, and stored. 1.2. The architecture of Karampuang Traditional Community The early Karampuang people lived in the mountains, as evidenced by traces of megalithic civilization such as caves, stone gates, possi tanah, emba, the customary forest, traditional rice fields, old wells (buhung lohe), and gong stones. There is a hypothesis that the early Karampuang inhabitants lived in a one-story house with a conical roof, which developed into a three-pillar conical house and then a square house called Rumah Besar. This change to the stilted square house coincided with the acceptance of Islam in Karampuang around the 17th century. The DI/TII incident in 1967 burned down the Rumah Besar, so it was rebuilt while maintaining its shape and symbols. Until now, the Karampuang people have kept and preserved their traditional lifestyle, living centered on the Rumah Besar [15]. There are two Rumah Besar, named the Rumah Besar of Puang Tomatoa and the Rumah Besar of Puang Gella after the traditional leaders; Puang Guru and Puang Sanro also dwell in the Rumah Besar of Puang Tomatoa. The Rumah Besar is the mother of the other domestic houses of the Karampuang people, called bola. There is a strong conjecture that before the 1980s, the Karampuang people lived scattered in the forests and formed a community centered on the Rumah Besar. After the road development of the 1980s, the villagers moved down and built houses along or towards the road. These houses are known as bola. The bola architecture embodies the transformation of dwelling culture from the house-of-society concept of the Rumah Besar into a dwelling culture of more dispersed houses. 1.3. 
Granary in the Karampuang house of society (Rumah Besar) The typology of the Rumah Besar is a mountain Bugis type with a closed and inward-oriented space. Every Rumah Besar has a granary that spiritually acts as an object of prayer in traditional ceremonies. In Karampuang's house-of-society culture, the rekeang is located in a closed and dark attic, which can be reached from the kitchen by a wooden ladder. The harvested farming products from the traditional rice fields and plantations were submitted to the Rumah Besar for traditional ceremonies [16] and stored in the rekeang (see Figure 1). Source: (Magister Teknik Arsitektur Unhas, 2013) Figure 1: Section of the Puang Gella house of society with the rekeang under the roof 1.4. Granary in Karampuang domestic houses (Bola) The rekeang in a bola is located in the attic, with more secure access from outside. The bola's rekeang is brighter than the rekeang in the Rumah Besar because there is usually a skylight inside it. Every bola usually has two rekeang, which can be accessed from the bedroom and the kitchen. The rekeang is reached by a ladder made of wood or bamboo. Most of the rekeang still function as granaries, but others have changed function to serve only as tool storage. As a granary, the rekeang usually stores the harvested farming products from the arable land, kept tied up in bundles or put in sacks (see Figure 2). Source: (Author, 2018) Figure 2: Rekeang in domestic houses (bola) The rekeang that is accessed from the bedroom seems more sacred because of its accessibility and its location near the posbola (the first pillar of the house). It is usually used for storing longer-term food. On the other hand, there is also a rekeang that is accessed from the kitchen and used to store daily food. Farming is still vital to the economy and is symbolic for the community in Karampuang. There is a need to investigate the existence of rekeang further, beyond the perspective of physical entities, because studies on it are still lacking. This paper aims to delve into the relationship between the vernacular community's rekeang and farming products in the traditional village of Karampuang, to uncover how sustainability can be achieved through the conventional way of life. 2. Material and Methods Karampuang Traditional Village is located in Bulupoddo District, Sinjai Regency, South Sulawesi. The people living there are peasants and still perform their hereditary rituals. This study uses qualitative-exploratory methods [18] and is based on a case-study approach to the community and the transformation of its life [19]. The study focuses on farming products and rekeang relations within their domestic, cultural, and symbolic settings. 2.1. Data collection method The data were collected through field study and interviews. The field study was conducted twice: the first visit was in July 2018, and the second took place between 29th October and 1st November 2018 during Mappagau Sihanua and Mubali Sumange (Karampuang’s annual traditional ceremony). The interview subjects were Karampuang's key figures, called Ade’ Eppa’e, including Puang Amatoa, Puang Gella, Puang Sanro, and Puang Guru. Other interview subjects were Puang Juhe, Puang Sitiara, Puang Murni, Puang Asiah, and Puang Samintang as bola owners. By experiencing and observing the everyday life of the Karampuang community, including their cultural festival, the analysis/discussion delves further into the relationships that have been established by paddy/farming products. 2.2. 
Analysis method The collected data are then analyzed through description, interpretation, and the correlation between the elements found. The analysis and discussion are divided into three parts to examine the relationship between rekeang and its surroundings. Understanding the spatial reconfiguration of farming product placement within domestic spaces might provide critical insights into spatial transformation based on domestic farming activities/paradigms in traditional domestic areas. 3. Results and Discussion As an agricultural community, Karampuang has practiced paddy farming for years, passed down through generations. Farming has thus become the primary occupation for the community in fulfilling its daily needs. Their agricultural livelihood has also shaped their built environment, one of whose characteristics is a granary attached to the houses, namely the 'rekeang'. Rekeang is considered sacred, which can be seen from its position in the house attic. It also serves as rice (paddy) storage into which rice is only put, or from which it is only removed, on special days such as Mappogau Hanua; like most agricultural societies in Southeast Asia, the community treats the granary with the greatest reverence [12]. The modernization that immerses Karampuang village can be seen in the increasing use of modern products and electric household appliances such as TVs, refrigerators, rice cookers, motorbikes, cars, etc. (see Figures 3 & 4), as well as in other jobs taken up for livelihood. Farming and agriculture have become less favored as an occupation. Although such social and technological transformations are perceived as a kind of development [20], many scholars have regarded them as cultural degradation, given the decline of granary and farming activities. Despite these claims, Karampuang benefits from modern life while at the same time maintaining its culture by practicing the kinship system and the annual cultural ceremony. This continuing tradition keeps its relation to the rekeang and plays a key role in achieving cultural and ecological sustainability. A broader perspective on the granary should be taken, because the granary (rekeang) is more than just a physical entity; it should be mapped together with its interrelationships with the surroundings. This paper proposes an approach that looks at the relations between paddy and various spaces in the Karampuang community, ranging from the sacred space of the symbolic setting, to the domestic spaces within a housing unit (*bola*), to the cultural space of their kinship system. ### 3.1. Rekeang with domestic spaces Initially, *rekeang* served as a sacred place to store paddy rice, as rice was associated with the female goddess and the representation of fertility. The sacredness could also be traced through the way rice was treated: it would only be removed for sacred occasions. The position of the *rekeang*, at the highest level of the house (the attic), also symbolizes the holy world, compared with the middle part of the house (the human world) and the ground as a representation of the most profane world. The vertical order of spaces in the house (*bola*) illustrates the degree of sacredness and defines the importance of each space's goods. The change that has taken place in the community's everyday life has gradually transformed this spatial order. One particular aspect that best describes this transformation is how the paddy or rice storage is reorganized and placed throughout the house's vertical and horizontal spaces. 
Vertically, rooms are connected by the wooden ladder: from the ground to the main house, and then to the attic or *rekeang* (see Figure 5). The coming of modernization has made people consider the practicality and functionality of what they do. So, the Karampuang people have reorganized the rice sacks, not in a vertical order but in a horizontal one. They place and divide them up from the front of the *bola* (the veranda) to the living room and even into the bedroom (the most private place in the house). Thus, in their everyday lives, the Karampuang people no longer use the *rekeang* as a granary because of its difficult access. Another concept of spatial arrangement emerging from the reorganization of paddy in the *bola* is temporality. This transformation is seen through a domestic household activity: cooking. The rice sacks for cooking daily meals are temporarily placed near the hearth (kitchen), so the residents can access them more conveniently than if they were stored in the *rekeang*. Source: (Author, 2020) Figure 5: Section of a Karampuang domestic house (*bola*) Meanwhile, the sacks stored for seasonal stock and annual festivals are placed relatively far from the kitchen yet remain conveniently accessible. They are reorganized in very temporary places such as the corner of the bedroom, behind doors, or along a divider between bedroom and kitchen (see Figure 5). Therefore, this pattern of rice sack placement indicates that awareness of the horizontal spatial order has strengthened over the vertical one. It shows that the horizontal order can be a functional complement to the vertically ordered spaces. Through the vertical spaces in the bola, the Karampuang people practice an ecologically friendly system in their household and benefit from it. They position the kitchen, with its bamboo floor (which allows food waste and paddy grains to fall through the slits), right above the chicken coop, while their firewood is placed under the living room because of its dryness. Thus, the vertical order of spaces in the bola demonstrates an ecologically sustainable system rooted in their ancestral wisdom. Simultaneously, it adopts modern principles through the practical and functional use of space in the horizontal order. 3.2. Rekeangs with family house units Based on the observations, it was found that the bola in Karampuang have undergone various architectural transformations, namely 1) the original/Bugis highland type (which is dominant in the neighborhood), 2) the emerging bola, 3) the wooden-stone bola, and 4) the stone bola. This classification is based on the degree of physical and functional transformation. Type one adopts the characteristics of the Bugis highland bola, particularly the veranda (legoh). The use of the ground level (kolong) is different: in the Bugis type (see Figure 6), it is used as a communal space for fishermen, while in Karampuang it mainly facilitates the farmer's domestic activities and sometimes carpentry (see Figure 7). However, the rekeang in both types is similar, and thus this is considered type one. Type two, the emerging bola, is the type in which two bolas (the older one, 'the mother', and the younger one, 'the child') merge into one. This type can be seen in the case of Puang Samintang's bola (see Figure 8). The emerging bola is formed by connecting two bolas from the inside and uniting both into one larger bola, with a sloping roof to cover the two related areas. Every bola has access from the outside; however, in welcoming guests, the 'child' bola is favored over the 'mother'. 
Moreover, although the two have merged into one, each bola maintains its rekeang and largely uses it as usual, for instance for food storage. Type three, the wooden-stone bola, is an extension of the wooden bola, the main house. Generally, the stone bola is built beside and attached to the wooden bola, with brick-concrete/stone structures. This case can be seen in Puang Murni's bola, which consists of two masses/units; the first is a wooden bola on stilts, and the second is a stone *bola* with no *rekeang* (see Figure 9). The dominant activities in the stone *bola* are welcoming guests and providing the inhabitants with a place to rest, whereas the wooden *bola* functions as a place for cooking and storing food. This phenomenon shows that there has been a functional shift of the wooden *bola* towards being the house of food, although its spaces still function as before. Generally, this stone *bola* (the child's house) is an extension house that functionally attaches to the wooden *bola* (the mother's house). The four types of *bola* transformation in Karampuang, and the way the *rekeang* adjusts to those changes, indicate that the transformation pattern mainly follows two processes: merging and extending. The vital highlight of this discussion is that, despite the *bola* transformation, the *bola* still preserves its relation with the 'mother' *bola* and the *rekeang*. This pattern also applies to the relationship between the *Rumah Besar* and the *bola*: the *Rumah Besar* is the representation of the parent house of the *jiji* (community group), and the *bola* is a representation of the children's house. Thus, regardless of those transformations, the *rekeang* still plays a vital role in achieving the sustainability of *bola* architecture in Karampuang. ### 3.3. Rekeangs with other rekeangs The entanglement of networks between *rekeangs* is formed by the way the Ade' Eppa'e relation (which is also manifested through the *Poto Nabi* symbol in the *Rumah Besar*) is carried over onto Karampuang's rice fields. Ade' Eppa'e is the leadership structure of the Karampuang indigenous people. Rice fields in Karampuang are divided into four types: a. **Galung Arajang**: rice fields belonging to the Arung, also known as *akkinanrena arungnge* (the food source for the Arung's needs). b. **Galung Abungerreng/Accapengngeng**: rice fields for poor people, people affected by a disaster, or indebted people. c. **Galung Hara-hara**: rice fields belonging to the Arung's families and other cultural officials such as the Gella, Sanro, and Guru. d. Individual rice fields belonging to individuals who are still Karampuang residents. The rice fields, apart from those individually owned, are intended for the public interest. Karampuang holds a norm based on togetherness and on maintaining everything in balance as its basic principle. From network theory, we can see that there is a relationship between the *rekeangs* in Karampuang. The relation is formed through their agricultural customs: a. When guests come to the village, a sign is laid out at their sacred wells. People who see it know that there is a guest in Karampuang, and they are then obliged to bring whatever food they have at home to the Rumah Besar to prepare food for the guest. b. In the harvest season, everyone in Karampuang helps to harvest the paddy from the fields. c. The rice yields collected in the Rumah Besar are then distributed to every house by the Sanro herself. d. The rice distributed to every house, besides covering daily needs, is also used to hold a rite (which can be understood as a harvest or new year ritual) in every house. 
These customs indirectly create a network between rekeang, and by using these networks the community maintains a system that makes its natural-built environment ecosystem more resilient. The existing network of rekeang arising from agricultural activities can be seen in Figure 13 and Figure 14. The network analysis shows that the rekeang is a vital element in maintaining the network, alongside the Ade' Eppa’e officials. The diagrams also show how the Ade' Eppa’e operates in Karampuang through agricultural customs. This principle is in line with the statement that, for most indigenous people, land is not viewed as a commodity but rather as a sustained endowment with sacred meaning that defines their existence and identity [21]. Thus, everything on the land is ascribed a value that interconnects everything in the world. By using this kind of understanding, the Karampuang people keep their settlement sustainable. 4. Conclusion Rekeang plays a vital role in the cultural and ecological sustainability of the Karampuang community. Although modernization changes and adapts their way of living, the rekeang in the bola remains. The practice of storing the crops has changed from a vertical orientation to a horizontal order; this storing activity is now more temporary and functional than symbolic. The rekeang undergoes transformation, but it is still sustained as an essential part of the bola, as symbol and power. The transformation of the *bola*, as a product of adaptation, gives rise to four types of *bola*. This transformation happens in various ways, such as merging, extension, and the use of modern materials. Each type points out how the *bola* branch out yet remain connected to their house of society. The relation between the *rekeang* of the house of society and those of the children's houses (*bola*) is the key to preserving the sustainability of *bola* architecture in the Karampuang traditional community. The traditional ceremony practiced in Karampuang emphasizes the high symbolic value of paddy/rice, and the preservation of the ceremonial tradition sustains the granary. Paddy/rice becomes the manifestation of a network between the Ade' *Eppa'e* and the ordinary inhabitants of Karampuang, a network maintained by the presence of the *rekeang*. It is therefore essential to keep this kind of relationship system functioning in order to sustain the Karampuang community. Furthermore, the house-of-society concept (*Rumah Besar*) needs to be redefined. There is a conjecture that a *bola* can also play the role of a *Rumah Besar*, especially when the settlement expands with the next generations. **Acknowledgment** This paper is part of a research grant awarded by ITB under P3MI KK STKA 2018. **References**
INF Annual Report 15/16 Contents 1 Institute of Computer Science (INF) ................................................................. 1 1.1 Address ............................................................................................................. 1 1.2 Personnel .......................................................................................................... 1 2 Teaching Activities .............................................................................................. 3 2.1 Courses for Major and Minor in Computer Science ........................................ 3 2.2 Students ............................................................................................................. 5 2.3 Degrees and Examinations .............................................................................. 5 2.4 Activities .......................................................................................................... 6 2.5 Awards ............................................................................................................... 6 3 Communication and Distributed Systems Group ................................................ 8 3.1 Personnel ............................................................................................................. 8 3.2 Overview ............................................................................................................ 9 3.3 Research Projects ............................................................................................. 9 3.4 Ph.D. Theses ..................................................................................................... 18 3.5 Master's Theses ................................................................................................. 19 3.6 Bachelor's Theses .............................................................................................. 19 3.7 Further Activities ............................................................................................. 19 3.8 Publications ..................................................................................................... 25 4 Computer Graphics Group ................................................................................... 29 4.1 Personnel .......................................................................................................... 29 4.2 Overview ........................................................................................................... 29 4.3 Research Projects ............................................................................................. 30 4.4 Ph.D. Theses ..................................................................................................... 34 4.5 Bachelor's Theses .............................................................................................. 35 4.6 Further Activities ............................................................................................. 35 4.7 Publications ..................................................................................................... 37 5 Computer Vision Group ....................................................................................... 39 5.1 Personnel .......................................................................................................... 39 5.2 Overview ........................................................................................................... 
39 5.3 Research Projects ............................................................................................. 40 5.4 Master's Thesis .................................................................................................. 45 5.5 Further Activities ............................................................................................. 46 5.6 Publications ..................................................................................................... 48 6 Logic and Theory Group 6.1 Personnel ................................................. 49 6.2 Overview ................................................. 50 6.3 Research Projects ....................................... 51 6.4 Ph.D. Theses ............................................. 54 6.5 Bachelor’s Theses ....................................... 54 6.6 Further Activities ....................................... 54 6.7 Publications ............................................. 56 7 Software Composition Group 7.1 Personnel ................................................. 59 7.2 Overview ................................................. 59 7.3 Research Projects ....................................... 60 7.4 Ph.D. Theses ............................................. 63 7.5 Master’s Theses ......................................... 63 7.6 Bachelor’s Theses and Computer Science Projects ....... 64 7.7 Awards .................................................... 65 7.8 Further Activities ....................................... 66 7.9 Publications ............................................. 69 8 Administration .............................................. 75 1. Institute of Computer Science (INF) 1.1 Address Neubrückstrasse 10, 3012 Bern, Switzerland Phone: +41 31 631 86 81 E-Mail: info@inf.unibe.ch http://www.inf.unibe.ch 1.2 Personnel Members I. Alyafawi; C. Anastasiades; S. Arjoumand Bigdeli; M. Bärtschi; P. Bertholet; Dr. P. Brambilla; Prof. Dr. T. Braun; A. Caracciolo; J.L. Carrera; Dr. P. Chandramouli; A. Chis; B. Choffat; C. Corrodi; J. Duarte; D. Esser; Prof. Dr. P. Favaro; M. Gasparyan; Dr. M. Ghafari; A. Gomes; Q. Hu; Prof. Dr. G. Jäger; L. Jaun; M. Jin; E. Kalogeiton; M. Karimzadeh; A. Kashev; I. Keller; I. Kokkinis; J. Kurš; Z. Li; M. Manzi; A. Marandi; M. Marti; L. Merino del Campo; N. Milojkovic; Prof. Dr. O. Nierstrasz; M. Noroozi; H. Osman; T. Portenier; Dr. D. Probst; F. Ranzi; T. Rosebrock; J. Saltarin; Dr. K. Sato; Dr. E. Schiller; D. S. Schroth; B. Spasojević; Dr. S. Steila; M. Stolz; Prof. Dr. Th. Strahm; Prof. Dr. Th. Studer; A. Szabo; Y. Tymchuk; J. Walker; X. Wang; S. Wu; Dr. Z. Zhao; Prof. Dr. M. Zwicker Board of directors Prof. Dr. Torsten Braun; Prof. Dr. Paolo Favaro; Prof. Dr. Gerhard Jäger; Prof. Dr. Oscar Nierstrasz; Prof. Dr. Matthias Zwicker Managing director Prof. Dr. Oscar Nierstrasz Director of studies Prof. Dr. Paolo Favaro Administration Bettina Choffat; Dragana Esser; Iris Keller; Daniela Schroth Technical staff Dr. Peppo Brambilla; Alexander Kashev 2 Teaching Activities 2.1 Courses for Major and Minor in Computer Science Autumn Semester 2015 - Bachelor 1st Semester Einführung in die Informatik (Die Dozenten der Informatik, 5 ECTS) Grundlagen der technischen Informatik (T. Studer, 5 ECTS) Programmierung 1 (T. Strahm, 5 ECTS) - Bachelor 3rd Semester Computernetze (T. Braun, 5 ECTS) Diskrete Mathematik und Logik (D. Probst, 5 ECTS) Einführung in Software Engineering (O. Nierstrasz, 5 ECTS) - Bachelor 5th Semester Anleitung zu wissenschaftlichen Arbeiten (5 ECTS) Computergrafik (M. 
Zwicker, 5 ECTS) Machine Learning (P. Favaro, 5 ECTS) Mensch-Maschine-Schnittstelle (T. Strahm, 5 ECTS) - Master Courses Concurrency: State Models and Design Patterns (O. Nierstrasz, 5 ECTS) Sensor Networks and the Internet of Things (T. Braun, 5 ECTS) 3D Geometry Processing (M. Zwicker, 5 ECTS) Computer Vision (P. Favaro, 5 ECTS) Modal Logic (T. Studer, 5 ECTS) Proof Theory (G. Jäger, 5 ECTS) Seminar: Software Composition (O. Nierstrasz, 5 ECTS) Seminar: Communication and Distributed Systems (T. Braun, 5 ECTS) Seminar: Computer Graphics (M. Zwicker, 5 ECTS) Seminar: Computer Vision (P. Favaro, 5 ECTS) Seminar: Logic and Theoretical Computer Science, (G. Jäger, 5 ECTS) Seminar: Logic and Algebra, (G. Jäger, G. Metcalfe, 5 ECTS) Graduate Seminar Logic and Information (G. Jäger, G. Metcalfe, K. Stoffel, U. Ultes-Nitsche, 5 ECTS) • Service Courses Anwendungssoftware für Naturwissenschaftler (T. Studer, 3 ECTS) Basic Programming for Non-Informaticians. With Practicals. (P. Brambilla, 5 ECTS) Spring Semester 2016 • Bachelor 2nd Semester Datenbanken (T. Studer, 5 ECTS) Datenstrukturen und Algorithmen (M. Zwicker, 5 ECTS) Computer Architecture (A. Szabo, 5 ECTS) Programmierung 2 (O. Nierstrasz, 5 ECTS) • Bachelor 4th Semester Automaten und formale Sprachen (K. Riesen, 5 ECTS) Berechenbarkeit und Komplexität (T. Strahm, 5 ECTS) Betriebssysteme (T. Braun, 5 ECTS) Praktikum Software Engineering (T. Studer, 5 ECTS) 2. Teaching Activities - Bachelor 6th Semester Anleitung zu wissenschaftlichen Arbeiten (5 ECTS) - Master Courses Programming Languages (O. Nierstrasz, 5 ECTS) Mobile Communications (T. Braun, 5 ECTS) Rendering Algorithms (M. Zwicker, 5 ECTS) Seminar: Software Composition (O. Nierstrasz, 5 ECTS) Seminar: Communication and Distributed Systems (T. Braun, 5 ECTS) Seminar: Computer Graphics (M. Zwicker, 5 ECTS) Seminar: Logic and Theoretical Computer Science (T. Studer, T. Strahm, 5 ECTS) - Service Courses Anwendungssoftware für Naturwissenschafter (T. Strahm, 3 ECTS) 2.2 Students - Major Subject Students: AS 2015: 177, SS 2016: 199 - Minor Subject Students: AS 2015: 71, SS 2016: 95 - Ph.D. Candidates: AS 2015: 30, SS 2016: 38 2.3 Degrees and Examinations - PhD: 9 - Master: 15 - Bachelor: 22 - Completion of Minor Studies: 14 (90E:0, 60E: 4, 30E: 7, 15E: 3, 495 ECTS) • Semester Examinations AS2015: 632 (2445 ECTS) • Bachelor’s/Master’s Theses AS 2015: 10 (220 ECTS) • Semester Examinations SS2016: 466 (1826 ECTS) • Bachelor’s/Master’s Theses SS 2016: 10 (160 ECTS) 2.4 Activities • Participation in “Projektwoche Faszination Informatik” organized by “Schweizer Jugend forscht”, Bern, September, 2015 • Contribution to the “National Future Day for Girls and Boys”, Bern, November 12, 2015 • Contribution to the “Bachelor Infotage”, December 1+2, 2015 • Taster course for female students, Bern, March 17, 2016 • Visitor Program, Gymnasium Thun, Bern, June 28, 2016 2.5 Awards • Faculty Prize 2015 for Daniele Perrone’s Ph.D. thesis “Towards a Novel Paradigm in Blind Deconvolution: From Natural to Cartooned Image Statistics” • Faculty prize 2015 for Jürg Weber’s Master’s thesis “Dynamic Adaptation of Transmission Modes for Opportunistic Content-Centric Networks” • Annual Alumni Award 2015 for Tobias Schmid Master’s thesis “Agent-Based Data Retrieval for Opportunistic Content-Centric Networks” • Annual Alumni Award 2015 for Paul Frischknecht’s Bachelor’s thesis “A Proof of the Arithmetical Equivalence of EC with Full Induction and ACA” 3 Communication and Distributed Systems Group 3.1 Personnel Head: Prof. 
Dr. T. Braun Tel.: +41 31 511 2631 email: braun@inf.unibe.ch Office Manager: D. Schrotth Tel.: +41 31 511 2630 email: schroth@inf.unibe.ch Scientific Staff: I. Alyafawi\* Tel.: +41 31 511 7631 email: alyafawi@inf.unibe.ch ( until 31.08.2015) C. Anastasiades Tel.: +41 31 511 2635 email: anastasi@inf.unibe.ch ( until 01.06.2016) J. Carrera\* Tel.: +41 31 511 7645 email: karimzadeh@inf.unibe.ch ( since 01.11.2015) J. Duarte Tel.: +41 31 511 2639 email: duarte@inf.unibe.ch ( since 01.02.2016) M. Gasparyan\* Tel.: +41 31 511 7645 email: gasparyan@inf.unibe.ch A. Gomes\* Tel.: +41 31 511 2636 email: gomes@inf.unibe.ch E. Kalogeiton\* Tel.: +41 31 511 2638 email: kalogeiton@inf.unibe.ch ( since 01.05.2016) M. Karimzadeh Tel.: +41 31 511 7645 email: karimzadeh@inf.unibe.ch ( since 01.02.2016) Z. Li\* Tel.: +41 31 511 2638 email: li@inf.unibe.ch ( until 30.04.2016) A. Marandi\* Tel.: +41 31 511 2634 email: marandi@inf.unibe.ch J. Saltarin\* Tel.: +41 31 511 2639 email: saltarin@inf.unibe.ch Dr. E. Schiller\* Tel.: +41 31 511 2633 email: schiller@inf.unibe.ch M. Stolz Tel.: +41 31 511 2637 email: stolz@inf.unibe.ch Dr. Z. Zhao\* Tel.: +41 31 511 2639 email: zhao@inf.unibe.ch 3. Communication and Distributed Systems External Ph.D. Students: A. Antonescu email: antonescu@inf.unibe.ch (until 31.10.2015) L. Luceri email: luceri@inf.unibe.ch G. Manzo email: gaetanomanzo@gmail.com (since 01.05.2016) M. Thoma email: thoma@inf.unibe.ch Teaching: Dr. Ph. Hurni email: hurni@inf.unibe.ch (01.08.15-31.01.16) Guests: Dr. A. Neto Department of Informatics and Applied Mathematics, Federal University of Rio Grande do Norte/BR (29.05.16-08.06.16) * with financial support from a third party 3.2 Overview The research group “Communication and Distributed Systems” has been investigating how multimedia applications and cloud computing services with high demands on the quality, reliability and energy efficiency can be supported by mobile communication systems and networks. Moreover, we are investigating localization mechanisms for wireless devices and new Future Internet paradigms such as Information-Centric Networking. 3.3 Research Projects SwissSenseSynergy The SwissSenseSynergy project aims to develop a framework for delivering secure localization and location-based services (LBS) to users, who optimally trade off privacy requirements with user value, network performance and reliability. To achieve this goal, the project targets at building a synergistic platform, which consists of a testbed based on mobile crowdsensing and Internet of Things, a data model for representing the different sources of collected data, and a prediction engine for analyzing the data and producing insights. We have published two scientific papers in peer-reviewed conferences/journals, in which one paper [Li et al., 2016] is about our latest research activity of indoor localization, and another paper [Hossmann et al., 2015] is a joint effort of all the partners, which described the overall approach and concept of the project and its challenges in a comprehensive way. The CDS group is involved in and leading the sub-project of “mobility, localization and tracking”, in which our tasks are indoor localization/tracking and mobility prediction. For indoor localization and tracking, we have developed novel passive positioning system for WiFi target and systems, which can extract channel state information from overheard packets to design enhanced methods for ranging and positioning. 
The novel ranging method can mitigate multi-path propagation based on channel information and is robust to ranging errors caused by non-line-of-sight propagation based on a new propagation model. An enhanced particle filter has been designed to extend the aforementioned passive positioning system to support passive tracking of mobile targets. It could deal with the inaccurate likelihood estimation and moving model problems. The enhanced particle filter solution fuses received WiFi radio signals, smart-phone embedded inertial sensors and physical information of the environment (floor plans), to achieve high tracking accuracy. Our system has been evaluated in extensive experiments, and it is able to achieve a mean accuracy of 1 m and a 90% accuracy of 2.3 m in real-time. In the task of mobility prediction, our goal is to predict the future locations of mobile users based on their historical and current context, such as GPS locations, frequency and duration of visiting a place, and the smart-phone system information, such as WLAN connections, movement acceleration, running applications, etc. We propose to apply machine learning algorithms to generate decision trees to make predictions. In order to construct a decision tree that is able to classify users’ future location, we have to extract different types of features from their daily behavior traces, such that different places could be represented by different characteristics. To do this, we consider information like user movements’ temporal features, smart-phone system information, such as running application, charging status, etc. We have conducted experiments using different machine learning algorithms such as Bayes Networks, J48 decision tree, ensemble learning methods, etc. With our extracted features, we could achieve the best prediction accuracy of around 85%, which is much better than most of the existing works that only consider temporal features of users’ movements. In addition to our own individual research progress, the CDS group has collaborated with other project partners. For instance, we have tested our indoor localization algorithm in other partners’ office areas to validate our approach, and we are also working on integrating our real-time application 3. Communication and Distributed Systems with the smart office management application from other project partners. **Research staff:** Z. Zhao, Z. Li, J. Carrera, M. Karimzadeh, T. Braun. **Financial support:** Swiss National Science Foundation Sinergia project number 154458 **Mobile Cloud Networking** Mobile Cloud Networking (MCN) was a EU-FP7 large-scale Integrating Project (IP) funded by the European Commission. In total, 19 partners from industry and academia performed research on MCN. MCN was launched in November 2012 and was successfully finished on 30 April 2016. The project was primarily motivated by an ongoing transformation that still drives convergence between the mobile communication and cloud computing industry enabled through the Internet. It led to a number of objectives that had to be investigated, implemented and evaluated over the project life-time. The top-most objectives of the MCN project were to: a) extend cloud computing towards the mobile end-user, b) design a 3GPP-compliant Mobile Cloud Networking architecture that exploited and supported cloud computing, c) enable a novel business actor, i.e., the MCN provider, and d) deliver and exploit the concept of an end-to-end MCN for novel applications and services. 
The key research and innovation issues of the MCN project were: a) virtualization of Radio Access Networks (RAN), b) design of a cross-domain Infrastructure-as-a-Service (IaaS) control plane, c) virtualization and cloud computing middleware to support highly demanding, real-time network applications and services, d) design, deployment, and operation of mobile communication software components to attain and fully benefit from cloud computing attributes, e) ensure QoE with advanced content and service migration mechanisms for mobile cloud users, and f) support of multiple cross-domain aspects of services interacting with a multitude of business actors and stake-holders. The CDS group was involved in the following work packages (WP): WP3 on Mobile Cloud Infrastructural Foundations, WP4 on Mobile Network Cloud, WP5 on Mobile Platform, WP6 on Experimentation and Evaluation, and WP7 on Dissemination, Exploitation, Standardization activities. Our work-scope within WP3 was to offer a comprehensive framework for the LTE radio access network (RAN). In particular, the framework allowed for virtualization of base stations (running a base station in the cloud). We delivered an execution platform for RAN and a management framework for virtualized RAN. Finally, a distributed cloudified solution having a separate RRH (remote radio head) and BBU (base-band unit) connected through an Ethernet fronthaul was provided. Our scope of WP4 was put on implementation and performance evaluations of Mobility and Bandwidth Prediction as a Service (MOBaaS). Mobility and Bandwidth as a Service (MOBaaS) was an MCN service that generated user mobility and bandwidth prediction. This information could be used by other services to trigger self-adaptation procedures, e.g., optimal run-time configuration, scale-out and scale-in of service instance components, or optimal network function placement. Evaluation results showed that MOBaaS was able to provide single user/group mobility and bandwidth prediction with good prediction accuracy and latency. Our key contribution of WP5 was the implementation of the follow-me cloud concept, which aimed to provide cloud services and data as close as possible to an end user; this minimized delays and improved performance. The performance of our modules was successfully evaluated. They were integrated with other services of MCN in WP6. Finally, we have performed an end-to-end evaluation of a cloudified mobile operator composed of all the MCN services developed. In WP7, we successfully disseminated project results by various mechanisms including, publications, social media channels, events, standards, software releases, etc. Research staff: I. Alyafawi, A. Gomes, Z. Li, E. Schiller, Z. Zhao, T. Braun. Financial support: EU FP7 Large-scale Integrating Project (IP), contract number CNECT-ICT-318109 Testbeds The CDS group possesses a cloud infrastructure based on Dell Power Edge Servers. We have three machines R320, R520, and R530 supporting 100 parallel threads (50 cores) and 448 GB RAM. Moreover, an external storage Dell PowerVault md3800i provides disk space of 10.3 TB in Raid 5 and Raid 6. The network backbone is based on Dell N4032 switches with 48 10 GbE-T ports and 80 Gb/s backbone connection. The infrastructure supports the following services perfectly integrated with the Lightweight Directory Access Protocol (LDAP) of the institute. - Mirantis OpenStack 8.0 (IaaS research cloud) 3. 
Communication and Distributed Systems - OwnCloud (shared storage) - Wiki (information dissemination) - Etherpad (collaborative real-time editor) - SVN (collaborative version management system) For collaborative administration and monitoring, we use: - Teampass (password management system) - Nagios (monitoring) CDS owns an IoT testbed of 40 MEMSIC TelosB nodes deployed in the building of the Institute of Computer Science of the University of Bern. The testbed consists of the following sensor nodes: - 40 TelosB by Crossbow (now Willow) - Texas Instruments 16 bit microprocessor (TI MSP 430) - 802.15.4 radio interface - Fixed power supply via the USB interface - Temperature, humidity and light sensor - 1 MB external flash - 7 MSB-430 sensor nodes - Texas Instruments 16 bit microprocessor (TI MSP 430) - CC1020 radio interface - Temperature, humidity and acceleration sensor - SD memory interface The CDS testbed hence consists of 47 sensor nodes. The network spans 4 floors of one building of the Institute of Computer Science of the University of Bern. The 7 MSB-430 nodes are placed indoors. Out of the 40 TelosB nodes, 39 are placed indoors, in rooms or corridors of the building, and one is an outdoor node placed on the top window sill of the small tower. FIRE LTE testbeds for open EXperimentation FIRE LTE testbeds for open EXperimentation (FLEX) is an EU-FP7 large-scale Integrating Project (IP) funded by the European Commission. Currently, the consortium holds 16 active partners, but it is going to be officially extended with new members that joined through the Second Open Call. CDS also joined FLEX through the Second Open Call for 10 months of work between 1.4.2016 and 31.1.2017. FLEX aims to provide fully open and operational LTE experimental facilities. Based on a combination of truly configurable commercial equipment, truly configurable core network software, fully open source components, and, on top of those, sophisticated emulation and mobility functionalities, the FLEX facility allows researchers from academia and industry to test services and applications over real LTE infrastructures, or to experiment with alternative algorithms and architectures of the core and access network. In the scope of the FLEX project, CDS aims to implement and extensively evaluate a Mobile Edge Computing (MEC) caching framework. We will provide a MEC server collocated with an OpenAirInterface-based macro eNB LTE station. The MEC caching application will be executed as a Virtual Network Function (VNF) on top of the virtualized environment. Finally, we will evaluate the performance of the newly developed solution. Research staff: E. Schiller, T. Braun. Financial support: EU FP7 Large-scale Integrating Project (IP), contract number 612050 Network Coding Based Multimedia Streaming in Content Centric Networks Information Centric Networking architectures (ICN) have recently gained significant attention in the research community, as they promise to revolutionize the way data is exchanged in the Internet. They move from the traditional paradigm of Internet communication using IP addresses towards using names as addresses. This is motivated by the fact that when users browse the Internet, they care only about the data content and not about where the content is stored. On the contrary, the IP model focuses on where the data is located. Several problems are associated with the current IP network architecture, such as usability, performance, security and resilience to mobility. 
To cope with some of these limitations, content distribution networks (CDN) and peer-to-peer architectures have been proposed. These methods mainly deal with scalability issues and attempt to better exploit the available network resources. CDN and P2P could be seen as a first step towards ICN. Network coding was introduced about a decade ago as an efficient technique for heterogeneous wired and wireless overlay networks to increase throughput, decrease delay, enhance resilience, remove the need for coordination between network nodes, etc. There are two major classes of network coding algorithms, namely Linear Network Coding (LNC) and Random Linear Network Coding (RLNC). Both methods operate in finite fields. LNC decides about the coding operations centrally, although there are some decentralized designs, whereas RLNC randomly performs operations in finite fields and has only a small performance penalty compared to LNC when operations are performed in large finite fields. Network coding is interesting for multimedia communication. The challenge with multimedia is that data is often scalable and data delivery should respect tight decoding deadlines. Content Centric Networking (CCN) and Named Data Networking (NDN) are the most prominent versions of ICN. CCN refers to the architecture project started at PARC, which included leading the development of a software codebase that represents a baseline implementation of this architecture. NDN refers to the NSF-funded Future Internet Architecture project, which originally used CCNx as its codebase, but as of 2013 has forked a version to support the needs specifically related to the NSF-funded architecture research and development (and not necessarily of interest to PARC). In this project, we envisage the design of novel network coding methods that will promote the use of ICN. We are building our techniques on the Content Centric Networking (CCNx) implementation, since it has many advantages, such as hierarchical prefixes and being open source. Some abstract ideas regarding the use of network coding in CCN have been discussed very recently. That work mainly provides some examples motivating the appropriateness of network coding for the ICN framework, rather than specific solutions. In our view, specific problems should be resolved prior to employing such technologies. Specifically, open challenges include: what kind of prefixes should be used, how to handle security, where to cache information, how to deal with multiple concurrent sessions accessing the network, and whether data correlation can be exploited. In the second year of the project we continued to develop a protocol, namely NetCodCCN, for integrating network coding in CCN. We published a first version of our protocol, based on the work of the first year and part of the second year. Now we are close to submitting a new publication with the updated protocol, which improves resiliency in different topologies. Moreover, we have changed our code base from CCNx to the NDN project implementation. In addition, we have developed applications that enable DASH multimedia communications on top of our architecture. This will allow us to show not only how our protocol can improve download times, but also how it can increase the average Quality of Experience seen by DASH clients. Content discovery, i.e., locating the demanded content, is one of the major challenges in ICN. This task is performed by routing schemes or resolution-based solutions in ICN. 
In this project, we focused on NDN and proposed BFR, a Bloom filter-based, fully distributed, content oriented, and topology agnostic routing approach at the intra-domain level for NDN. In BFR, origin servers advertise their contents using Bloom filters in order to reduce bandwidth and storage overhead significantly. The proposed BFR outperforms flooding and shortest path approaches in NDN in terms of mean hit distance, normalized communication cost, average round-trip delay, and robustness to topology changes without requiring extensive signalling between nodes. BFR requires quite a small storage overhead for maintaining content advertisements. Nevertheless, we proposed also storage management strategies in order to keep the storage overhead for content advertisements reasonably low. **Research staff:** J. Saltarin, A. Marandi, T. Braun. **Financial support:** Swiss National Science Foundation project number 149225 **Service-Centric Networking** Content-Centric Networking (CCN) does not well support the concept of services in its architecture. We believe that services, rather than content, should be the center of focus in future network architectures. This is because content is just a subset of services and what applies to services can easily apply to content, but not the other way around. Service-Centric Networking (SCN) is a new networking paradigm where services are at the heart of its architecture. SCN is an object-oriented architecture where services and contents are considered as objects. Our research aims at building the SCN architecture based on CCN with extensions regarding service routing, name resolution, service naming, and service management. We built the L-SCN (Layered-SCN) architecture to support services over CCN. L-SCN uses a two-layer forwarding scheme combined of nodes and Bloom filters. L-SCN aims to minimize protocol overhead and maximize the shared information about services and resources available in the network. We have extended the existing NDN implementation in ndnSIM with new data structures and forwarding techniques. The default CCN routing mechanism was not changed. Therefore, traditional CCN traffic can be forwarded as usual. We have implemented and evaluated our design in ndnSIM by comparing its processing time performance against available forwarding strategies in ndnSIM. The results show that our architecture outperforms the three existing strategies in ndnSIM, which are the Best Route, Multicast, and Random forwarding strategies. We intend to extend L-SCN with important SCN requirements such as session support. We are investigating service session support techniques that can be used in the network to handle service session requests. Research staff: M. Gasparyan, T. Braun. Financial support: Swiss National Science Foundation Project No. 146376 CONtext and conTent Aware CommunicaTions for QoS support in VANETs (CONTACT) Communication in Vehicular Ad Hoc Networks (VANETs) have been investigated during the last years due to the development of applications that may enable safer and more autonomous driving. The main characteristic of VANETs is that the topology constantly changes, because the vehicles move with high speeds and most likely with an unpredictable way. In recent years, applications for VANETs include mostly safety but also infotainment applications. The CONTACT project aims to ensure the Quality of Service (QoS) in VANET networks, by exploiting Content Centric Networking (CCN), Software Defined Networking (SDN) and Floating Content (FC). 
CONtext and conTent Aware CommunicaTions for QoS support in VANETs (CONTACT) Communication in Vehicular Ad Hoc Networks (VANETs) has been investigated in recent years due to the development of applications that may enable safer and more autonomous driving. The main characteristic of VANETs is that the topology constantly changes, because the vehicles move at high speeds and often in an unpredictable way. VANET applications have so far mostly targeted safety, but also include infotainment. The CONTACT project aims to ensure Quality of Service (QoS) in VANETs by exploiting Content Centric Networking (CCN), Software Defined Networking (SDN), and Floating Content (FC). CCN provides a novel communication approach that differs from traditional communication paradigms, since messages are exchanged based on their content and not on the location of the hosts. SDN separates the network’s control plane, which performs the routing decisions, from the data plane, which is responsible for data forwarding. FC refers to self-organizing information responsible for finding its own storage on top of a highly dynamic set of mobile nodes. The goal is to keep content at some specified location, known as the anchor zone, despite the unreliability of the devices on which it may be stored. The main dependability requirements of FC are survivability, availability, and accessibility. We aim to combine these three methods in order to develop communication techniques and apply them to VANETs to guarantee high QoS. In particular, we would like to increase the reliability and the scalability of applications in VANETs by exploiting the CCN and SDN paradigms. Moreover, we aim to keep drivers informed about safety on the road. Furthermore, the combination of CCN and FC can be advantageous in particular cases. For instance, when content objects are bound to a specific geographic region and stored on a subset of nodes within the region itself, the replication characteristic of FC introduces redundancy, which can be useful when nodes cannot be continuously active and hence do not make content permanently available due to high node failure rates, intermittent connectivity, load balancing, etc. On the other hand, content searching and finding can become more efficient if the geographic area of the floating content is derived from the content name, and, therefore, CCN Interest messages can be forwarded to that geographic area to meet the requested content. Finally, we combine FC with SDN. The main purpose is a centralized approach that exploits an SDN controller. The latter could establish the anchor zone for the FC and adjust its size and shape. The main advantage is that each node in the anchor zone could receive and replicate the floating content, even in areas with a high density of messages and/or vehicles. **Research staff:** E. Kalogeiton, J. Duarte, T. Braun. **Financial support:** Swiss National Science Foundation (SNSF), project number: 164205 ### 3.4 Ph.D.
Theses - Florian Antonescu “Service Level Agreements-Driven Management of Distributed Applications in Cloud Computing Environments”, October, 2015 - Carlos Anastasiades “Information-Centric Networking in Mobile and Opportunistic Networks”, June, 2016 3.5 Master’s Theses - Tobias Schmid “Agent-Based Data Retrieval for Opportunistic Content-Centric Networks”, August, 2015 - Danilo Burbano “Indoor Tracking by Particle Filter Combining CIR-based Ranging and Inertial Sensors”, October, 2015 - Jose Luis Carrera “Improve Trilateration Accuracy By LOS/NLOS Identification and MIMO”, October, 2015 3.6 Bachelor’s Theses - Oliver Stapleton “Service Distribution Mechanisms in Information-Centric Networking”, August, 2015 3.7 Further Activities Memberships Torsten Braun - Erweitertes Leitungsgremium Fachgruppe “Kommunikation und Verteilte Systeme”, Gesellschaft für Informatik - SWITCH Stiftungsrat - SWITCH Stiftungsratsausschuss - Interim President of SWITCH foundation (November 2015 to March 2016) and vice president before and after that period - Kuratorium Fritz-Kutter-Fonds • Expert for Diploma Exams at Fachhochschule Bern • Expert for Matura Exams at Gymnasium Neufeld, Bern • Management committee member of COST Action IC1303 Algorithms, Architectures and Platforms for Enhanced Living Environments (AAPELE) • Management committee member of COST Action CA15127 Resilient communication services protecting end-user applications from disaster-based failures (RECODIS) • Management committee substitute member of the COST Action CA15104 Inclusive Radio Communication Networks for 5G and beyond (IRACON) • Board Member (Gesellschafter) of VGU Private Virtual Global University, Berlin, Germany Editorial Boards Torsten Braun • Editorial Board Member of Informatik Spektrum, Springer • Editorial Board Member of MDPI (Multidisciplinary Digital Publishing Institute) Journal of Sensor and Actuator Networks Conference and workshop organization Torsten Braun • 1st International INFOCOM Workshop on Software-Driven Flexible and Agile Networking, Steering committee, San Francisco, CA, USA, April 11, 2016 • Wired/Wireless Internet Communications 2015, Steering committee, Thessaloniki, Greece, May 25-27, 2016 • International Symposium on Quality of Service 2016, Steering committee, Beijing, China, June 20-21, 2016 Conference Program Committees Torsten Braun - 8th International Workshop on Multiple Access Communications (MACOM 2015), Helsinki, Finland, September 3-4, 2015 - 8th ICT Innovations Conference 2015, Ohrid, FYR. 
Macedonia, September 5-7, 2015 - Leistungs-, Zuverlässigkeits- und Verlässlichkeitsbewertung von Kommunikationsnetzen und verteilten Systemen (MMBnet 2015), Universität Hamburg, Germany, September 10-11, 2015 - 3rd IEEE International Conference on Cloud Networking (CLOUD-NET), Niagara Falls, Canada, October 5-7, 2015 - 9th International Workshop on Communication Technologies for Vehicles (Nets4Cars 2015), Munich, Germany, October 5-7, 2015 - 7th International Congress on Ultra Modern Telecommunications and Control Systems (ICUMT), Brno, Czech Republic, October 6-8, 2015 - IEEE 12th International Conference on Mobile Ad hoc and Sensor Systems (MASS), Dallas, USA, Russia, October 19-22, 2015 - 40th IEEE Local Computer Networks Conference (LCN 2015), Florida, USA, October 26-29, 2015 - 18th ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM) 2015, Cancun, Mexico, November 2-6, 2015 - 11th International Conference on Network and Service Management (CNSM 2015), Barcelona, Spain, November 9-13, 2015 - IEEE Global Communications Conference (GLOBECOM), San Diego, CA, USA, December 6-10, 2015 • IEEE Wireless Communications and Networking Conference (WCNC 2016), Doha, Qatar, April 3-6, 2016 • 31st ACM Symposium on Applied Computing (SAC 2016), Pisa, Italy, April 4-8, 2016 • IFIP Networking 2016 Conference, Vienna, Austria, May 17-19, 2016 • 2nd International Conference on Smart Computing (SMARTCOMP 2016), St. Louis Missouri, USA, May 18-20, 2016 • IEEE International Conference on Communications (ICC 2016), Kuala Lumpur, Malaysia, 23-27 May 2016 • International Conference on Wired and Wireless Internet Communications, Thessaloniki, Greece, May 25-27, 2016 • IEEE/ACM International Symposium on Quality of Service (IWQoS), Beijing, China, June 20-21 • IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM 2016), Coimbra, Portugal, June 21-24, 2016 Project and Person Reviewing Activities Torsten Braun • Research Council of Norway • European Framework Programme for Research Horizon 2020 • Danish Council for Independent Research • Flanders Innovation & Entrepreneurship • German-Israeli Foundation for Scientific Research and Development • Chalmers University • Universität zu Lübeck Journal Article Reviewing Activities Torsten Braun - Elsevier Computer Networks - IEEE Communications Magazine - Hindawi Mobile Information Systems - Elsevier Computers and Electronics in Agriculture - Multidisciplinary Digital Publishing Institute (MDPI) Sensors Open Access Journal - Springer Lecture Notes in Computer Science Eryk Schiller - IEEE Transactions on Network and Service Management - Elsevier Ad-hoc Networks Carlos Anastasiades - IEEE Network Magazine - Elsevier Computer Communications - Springer Wireless Information Networks André Gomes - IEEE Wireless Communications Magazine Eryk Schiller - International Journal of Ad Hoc and Ubiquitous Computing (IJAHUC) Zhongliang Zhao - IEEE Transactions on Multimedia - MDPI Journal Sensors - IEEE Transactions on Vehicular Technology Talks and Tutorials Torsten Braun - Invited talk: “Information-Centric Networking in Wireless and Mobile Networks”, Beihang University, August 26, 2015 - Invited talk: “Information-Centric Networking in Wireless and Mobile Networks”, University of Science and Technology of China, Hefei, August 28, 2015 - Invited talk: “Future Internet Architecture for Mobile / Wireless Networks”, Universidade Federal do Pará, Belem, Brazil, July 29, 2016 PhD Committee 
Memberships Torsten Braun - Mehdi Asadpour, ETH Zürich - Ahmed Abujoda, Gottfried Wilhelm Leibniz Universität Hannover 3.8 Publications Journal Papers Conference Papers • Li, Z., Acuna, D. B., Zhao, Z., Carrera, J. L., and Braun, T. (2016). Fine-grained indoor tracking by fusing inertial sensor and physical layer information in WLANs. In *IEEE International Conference on Communications (ICC)*. **Technical Reports** 4.1 Personnel **Heads:** Prof. Dr. M. Zwicker Tel.: +41 31 631 3301 email: zwicker@inf.unibe.ch **Office Managers:** D. Esser Tel.: +41 31 631 4914 email: esser@inf.unibe.ch **Scientific Staff:** P. Bertholet* Tel.: +41 31 511 76 01 email: bertholet@inf.unibe.ch S. Bigdeli Tel.: +41 31 511 76 02 email: bigdeli@inf.unibe.ch D. Dhillon* Tel.: +41 31 511 76 02 email: dhillon@inf.unibe.ch D. Donatsch Tel.: +41 31 511 76 01 email: donatsch@inf.unibe.ch M. Manzi* Tel.: +41 31 511 76 06 email: manzi@inf.unibe.ch T. Portenier* Tel.: +41 31 511 76 01 email: portenier@inf.unibe.ch S. Wu Tel.: +41 31 511 76 06 email: wu@inf.unibe.ch * with financial support from a third party 4.2 Overview The Computer Graphics Group (CGG) focuses on fundamental methods to generate and manipulate images using computers. We develop algorithms and systems for realistic and real-time rendering, and animation and modeling of three-dimensional shapes. We are also interested in novel representations for 3D geometry, such as point-based representations. Finally, we investigate signal processing techniques, in particular for multi-view 3D displays. Our research has applications in digital entertainment, multimedia, and data visualization. 4.3 Research Projects UNITED LIVING COLORS.CH: Integrating Evolutionary Developmental Genetics, 3D Computer Graphics, and Natural Photonics for Deciphering Variation & Complexity in Reptilian Color Traits This project integrates the expertise of three research groups in Switzerland (evolutionary and developmental geneticists, University of Geneva; 3D computer graphics scientists, University of Bern; and condensed-matter physicists, University of Geneva) to gain an improved understanding of the mechanisms generating variation, complexity, and convergence of color traits in animals, in particular reptiles. A key issue in evolution is to understand how morphology and physiology are altered to produce new forms serving novel functions. Basically no study to date integrated genomics/transcriptomics, developmental genetics, quantitative genetics, and extensive phenotyping of corresponding traits in natural populations for a better understanding of the link between genotype and phenotype in an ecological and phylogenetic framework. The pigmentation system in vertebrates is promising for exploring that connection: closely-related species as well as natural populations exhibit astonishing variations in color and color patterns, and this variation is of great ecological importance as it plays critical roles in thermoregulation, photoprotection, camouflage, display, and reproductive isolation (hence, speciation). Other advantages of focusing on color traits are that they can be quantified and modeled objectively, some of the involved signalling pathways have been partly uncovered in model organisms, and they provide among the best examples of convergence within and among species. In the context of this project, the Computer Graphics Group develops tools for the acquisition of both 3D geometry and color texture at very high resolution on living animals. 
Further, we perform the mathematical analysis of the acquired texture phenotypes, mathematical modeling of the mechanisms generating color patterns, as well as computer simulations of reaction-diffusion on 3D geometries acquired from real animals. This project concluded with Daljit Dhillon’s successful Ph.D. defense in November 2015. Research staff: Daljit Dhillon, Matthias Zwicker Efficient Sampling and Reconstruction for Image Synthesis The goal of image synthesis using light transport simulation is to compute images of virtual, three-dimensional environments such that, if it were possible to capture photographs of equivalent physical environments, the simulated images would be visually indistinguishable from the photographs. In an actual digital camera, the brightness of a pixel is determined by measuring the number of photons and their energy incident over the area of the pixel on the sensor. Photons can be thought of as particles that scatter in the physical environment with a certain randomness, tracing out paths from light sources to the camera lens and ultimately onto the sensor, where they are absorbed. The same intuition underlies Monte Carlo methods, a broad class of techniques to simulate light transport and image formation using virtual environments and virtual cameras. They construct light paths with a certain randomness and measure their contributions over some area. In this project, we will develop novel algorithms for two specific approaches in Monte Carlo light transport simulation, progressive photon mapping and adaptive sampling and reconstruction. Our overall goal is to further reduce the computational effort that is required to reach a desired accuracy and to avoid visual artifacts. Photon mapping is one of the main Monte Carlo methods that is widely used in image synthesis. In many scenarios photon mapping techniques are considered superior to other Monte Carlo methods, in the sense that they can produce more accurate results in the same computation time. A core idea of photon mapping is to estimate generalized measurements of light energy over arbitrary locations in virtual scenes. Unfortunately, using such generalized measurements leads to bias, a systematic error in simulated images. While bias can be reduced by evaluating measurements over arbitrarily small areas, this increases variance, or noise. The conventional wisdom was that this bias-variance trade-off was a fundamental property and inherent disadvantage of photon mapping. Recently it has been shown, however, that a progressive variant of photon mapping can be formulated that manages to circumvent this problem and eliminate bias in the limit. In our own work, we developed a more general theory of progressive photon mapping that frames the approach in the context of a statistical technique that we call progressive density estimation. The goal of this project is to further develop this theory and to develop advanced algorithms that increase the efficiency and extend the applicability of progressive rendering schemes.
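The following toy Python sketch conveys the flavor of progressive density estimation, though it is not the group's formulation and not a photon mapper: each pass estimates a 1D density with a kernel whose radius shrinks over the iterations, so the bias vanishes in the limit while averaging across passes keeps the variance bounded. The distribution, radius schedule, and parameter choices are assumptions made purely for illustration.

```python
import math
import random

def sample_batch(n=1000):
    """One pass of samples; here simply drawn from a standard normal distribution."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def progressive_density(x, iterations=400, r0=0.5, beta=1.0 / 3.0):
    """Average per-pass histogram estimates at x while shrinking the kernel radius.

    Shrinking the radius drives the bias to zero; averaging over many passes
    keeps the variance under control, so the estimate converges at x.
    """
    running_mean = 0.0
    for i in range(1, iterations + 1):
        r = r0 * i ** (-beta)                          # progressively smaller kernel
        samples = sample_batch()
        hits = sum(1 for s in samples if abs(s - x) <= r)
        estimate_i = hits / (len(samples) * 2.0 * r)   # density estimate of this pass
        running_mean += (estimate_i - running_mean) / i
    return running_mean

if __name__ == "__main__":
    random.seed(0)
    true_density = 1.0 / math.sqrt(2.0 * math.pi)      # standard normal density at x = 0
    print(round(progressive_density(0.0), 3), "vs true", round(true_density, 3))
```

In progressive photon mapping a similar interplay appears between the shrinking gather radius around a measurement point and the growing number of photon passes.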
An important observation in image synthesis is that different pixels often require varying amounts of computation to achieve the same level of accuracy. In other words, the number of light paths that need to be sampled and evaluated in each pixel may vary. Adaptively determining an appropriate number of samples for each pixel is known as adaptive sampling. In addition, neighboring pixels often use similar light paths. Hence light paths can be shared and averaged across several pixels without causing any visible error, which is known as adaptive reconstruction. Combining adaptive sampling and reconstruction often significantly reduces the number of light paths required to obtain images that are visually indistinguishable from ground truth results. In this project we will build on our previous framework for adaptive sampling and reconstruction, which strives to minimize the error given a certain sample budget and achieves state-of-the-art performance. In particular, we will develop advanced reconstruction filters that will further increase the accuracy of our scheme. Finally, we will combine our approach with a broader range of rendering algorithms. As an overarching research objective, we are striving to develop algorithms that reduce image errors to a minimum under a given sample budget. **Research staff:** Marco Manzi, Matthias Zwicker **Financial support:** Swiss National Science Foundation, grant nr. 143886
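As a toy illustration of adaptive sampling (not the group's reconstruction framework), the Python sketch below spends a fixed sample budget greedily on whichever "pixel" currently has the largest estimated standard error, so noisy pixels automatically receive more samples than easy ones; the simulated noise levels and the budget are assumptions for the example.

```python
import random
import statistics

def render_sample(pixel_noise_scale):
    """One Monte Carlo sample of a pixel: true value 1.0 plus pixel-specific noise."""
    return 1.0 + random.gauss(0.0, pixel_noise_scale)

def adaptive_sampling(noise_scales, budget, init_samples=8):
    """Greedy adaptive sampling: always refine the pixel with the largest standard error."""
    samples = [[render_sample(s) for _ in range(init_samples)] for s in noise_scales]
    spent = init_samples * len(noise_scales)
    while spent < budget:
        # estimated standard error of the mean for each pixel
        errors = [statistics.stdev(px) / len(px) ** 0.5 for px in samples]
        worst = max(range(len(samples)), key=lambda i: errors[i])
        samples[worst].append(render_sample(noise_scales[worst]))
        spent += 1
    return [statistics.fmean(px) for px in samples]

if __name__ == "__main__":
    random.seed(1)
    noise = [0.05, 0.05, 0.8, 2.0]           # two easy pixels, two noisy ones
    means = adaptive_sampling(noise, budget=4000)
    print([round(m, 3) for m in means])       # all close to the true value 1.0
```

Adaptive reconstruction would additionally share and filter samples across neighboring pixels; that step is omitted in this sketch.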
**Hand-Held 3D Light Field Photography** The convergence of sophisticated digital cameras and powerful computers in mobile devices such as smartphones and tablet computers has led to a dizzying array of new applications and tools for consumer photography, some more experimental, others firmly established in the mainstream and used by millions of consumers. In academic research, the confluence of computation and photography has led to a new research field commonly known as “computational photography”, which strives to extend the capabilities of conventional digital photography using sophisticated computational algorithms. But computational photography is not limited to operating on conventional 2D images. Instead, it can work with any representation of the distribution of light in a physical environment captured by a camera. Light fields are a natural extension of 2D images. Under the assumptions of geometric optics, they essentially represent the radiance of each light ray traveling in an environment. Light field photography was first described more than a hundred years ago, but only recently has this idea started to show its full potential when combined with powerful computing devices. Today, various light field cameras are available for research and consumer applications. The main benefit of light field cameras is that they enable additional possibilities that are hard to achieve with conventional cameras, such as digitally refocusing images after the data has been captured. A disadvantage of light field cameras is that they require special hardware and optical systems. Practical designs need to make trade-offs between spatial and angular resolution, leading to systems that are usually not competitive with conventional cameras in terms of pure spatial image resolution. The main idea of this project is to develop algorithms that allow casual users to capture light fields quickly and easily using conventional cameras and a simple interaction metaphor. In addition, we are developing novel algorithms to enable a variety of applications using the captured light field data. A fundamental assumption of our approach is to work with input data consisting of image sequences captured with conventional hand-held cameras along approximately linear trajectories. Users easily acquire such data by “sweeping” the camera along a roughly horizontal path. Camera trajectories may span a few centimeters to a few meters, depending on the scene. We assume the input data consists of a few dozen images captured at several frames per second, for example acquired using a burst mode available in current digital cameras. Capturing such data is a matter of a few seconds and does not require any extra equipment or specialized hardware. Therefore, we believe the limited effort required for this approach will make it attractive to a wide range of users, and our approach will have an impact beyond research contributions to the academic community. While there exist previous techniques for hand-held light field photography, they require several minutes of user engagement and, unlike our work, are not suitable for casual photography. We are developing efficient methods to resample input image sequences from hand-held cameras into regularly sampled 3D light fields. These light fields then open up the possibility for a variety of further processing, such as refocusing, alpha matting, depth reconstruction, denoising, etc. This project concluded with Daniel Donatsch’s successful Ph.D. defense in October 2015. **Research staff:** Daniel Donatsch, Matthias Zwicker Data-Driven Modeling in Computer Graphics The objective of this project is to simplify the modeling process for computer graphics content, motivated by the observation that 3D content creation with today’s tools is highly laborious and requires expert knowledge and training. We strive to make visual media production based on computer graphics available to non-specialists, fostering the development and proliferation of new types of visual media, and making visual storytelling using 3D computer graphics widely accessible. Our approach will leverage the concept of data-driven modeling, meaning that content stored in rich databases can be browsed, retrieved, edited, and recombined in intuitive ways. Currently, we are developing methods to acquire real-world 3D data for computer graphics modeling for different types of asset categories, including dynamic, functional part-based 3D objects and complex real-world environments. Research staff: Peter Bertholet, Shihao Wu, Matthias Zwicker Sketch-Based Image Synthesis Our ability to express ourselves or to be creative is sometimes limited by our technical skills. One very powerful way to illustrate ideas is through images. Unfortunately, not all of us can do so and produce a convincing rendering of an original idea. We therefore propose the development of a computational tool that can aid authors in implementing their concepts. Our tool will take an inaccurate initial sketch of the concept and then automatically produce a realistic rendering of that sketch. The tool will also introduce adjustments to make the rendering realistic if the original sketch had distortions or flaws. Research staff: Tiziano Portenier, Matthias Zwicker, in collaboration with Paolo Favaro Financial support: Swiss National Science Foundation, grant nr. 156253 (co-PI with Paolo Favaro) 4.4 Ph.D.
Theses - Daniel Donatsch, Computational Tools for Stereo and Light Field Photography (October 2015) • Daljit Singh Dhillon, On Modeling and Simulating Reptile Skin Coloration (November 2015) 4.5 Bachelor’s Theses • Urs Gerber, Real-Time Face Rendering (September 2015) • Gian-Luca Mateo, KADA: Kinect Assisted Duplo Tracking (September 2015) • Delio Vicini, Improved Gradient-Domain Reconstruction (September 2015) • Oscar Meier, Evaluation of AR Toolkits (December 2015) • Adrian Wälchli, Layered 3D from Perspective Images (May 2016) 4.6 Further Activities Editorial Boards Matthias Zwicker • The Visual Computer, International Journal of Computer Graphics, Associate Editor Conference Program Committees Matthias Zwicker • ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), February 27 – 28, 2016, Redmond, Washington, USA • Eurographics Symposium on Rendering, June 22 – 24, 2016, Dublin, Ireland • SIBGRAPI, October 04 – 07, 2016, Sao Jose Dos Campos, Brasil Ph.D. and Habilitation Jury Memberships Matthias Zwicker - Liana Manukyan, external co-referee, Faculty of Science, University of Geneva - Stefanos Apostolopoulos, PhD mentor, Graduate School for Cellular and Biomedical Sciences (GCB), University of Bern - Carlos Correa Shokiche, PhD mentor, Graduate School for Cellular and Biomedical Sciences (GCB), University of Bern - Sandro De Zanet, PhD mentor, Graduate School for Cellular and Biomedical Sciences (GCB), University of Bern - Joachim Dehais, PhD mentor, Graduate School for Cellular and Biomedical Sciences (GCB), University of Bern - Pascal Dufour, PhD mentor, Graduate School for Cellular and Biomedical Sciences (GCB), University of Bern - Raphael Meier, PhD mentor, Graduate School for Cellular and Biomedical Sciences (GCB), University of Bern Reviewing Activities Matthias Zwicker - ACM Transactions on Graphics - IEEE Computer Graphics and Applications - IEEE Transactions on Visualization and Computer Graphics - ACM SIGGRAPH conference - ACM SIGGRAPH Asia conference Technical and Research Committees Matthias Zwicker - Expert for Matura Exams at Gymnasium Oberaargau, Langenthal - Steering Committee member of “Prologo: Logo Programmieren in Primarschulen” funded by the Hasler Foundation - Member of Expert Committee “Biomedical Sciences & Biomedical Engineering” for the Graduate School for Cellular and Biomedical Sciences, University of Bern - Technology Advisor, innoBright Technologies, USA 4.7 Publications Journal Publications Refereed Conference Proceedings - Qiyang Hu, Paolo Favaro, Matthias Zwicker: 3D Face Reconstruction with Silhouette Constraints, Vision, Modeling, and Visualization (2016), conditionally accepted. • Siavash Bigdeli, Gregor Budweiser, Matthias Zwicker: Temporally Coherent Disparity Maps using CRFs with Fast 4D Filtering, 3rd IAPR Asian Conference on Pattern Recognition (ACPR), November 2015. Technical Reports 5 Computer Vision Group 5.1 Personnel Heads: Prof. Dr. P. Favaro Tel.: +41 31 631 3301 email: paolo.favaro@inf.unibe.ch Office Managers: D. Esser Tel.: +41 31 631 4914 email: esser@inf.unibe.ch Scientific Staff: Dr. P. Chandramouli Tel.: +41 31 511 76 04 email: chandra@inf.unibe.ch Q. Hu Tel.: +41 31 511 76 04 email: hu@inf.unibe.ch M. Jin Tel.: +41 31 511 76 04 email: jin@inf.unibe.ch M. Noroozi Tel.: +41 31 511 76 04 email: noroozi@inf.unibe.ch A. Szabó Tel.: +41 31 511 76 04 email: szabo@inf.unibe.ch X. Wang Tel.: +41 31 511 76 04 email: wang@inf.unibe.ch 5.2 Overview Prof. Dr. P. 
Favaro joined the Institute of Computer Science and established the Computer Vision group in June 2012. The Computer Vision group conducts research on the broad areas of machine learning, computer vision, image processing, and imaging and sensor design by employing models, algorithms and analysis tools from optimization theory, probability theory, and applied mathematics. Our general aim is to extract high-level information from images by using digital processing. Such high-level information can be in the form of geometric or photometric quantities about objects in the scene, or semantic attributes such as their category, their function, etc. In order to achieve this aim, we use a systematic approach based on three pillars: modeling, inference and experimental validation. The first step in digital processing requires modeling sensors and distortions of their measured signals such as optical aberrations (defocus and motion blur), noise, spatial loss of resolution and quantization. Moreover, a careful analysis of models allows us to design novel imaging architectures that can more efficiently and accurately capture visual data. For instance, light field cameras (recently become a commercial product) allow for single-snapshot digital refocusing (i.e., the ability to change the focus plane of an image after capture via digital processing) by incorporating a microlens array in conventional cameras. Models also allow us to infer their parameters or a distribution of their parameters by assuming some stochastic description of the data. Parameter estimation can then be performed via optimization techniques, which require a careful selection of suitable algorithms and understanding of their behavior. Finally, both sensor and data models are validated experimentally by using both synthetic and real data. Currently, our efforts have been devoted to problems in: inverse imaging (deblurring, blind deconvolution, super resolution), 3D estimation (multi view stereo, photometric stereo, coded aperture photography), motion estimation (structure from motion, tracking), and unsupervised learning. 5.3 Research Projects Image Deblurring In photography, motion blur is an unpleasant artifact generated by camera shake and object motion during the exposure time. In some cases it is possible to avoid the problem by using the so called “lucky image” method, which amounts to taking many images and selecting the one with the best quality. If it is not possible to take many images of the same event, then the “lucky image” method can not be used. It might also happen that all the images are blurred. In this project, we consider the case where a single blurry image is available and one wants to recover a corresponding sharp image. Since no information on the motion of the camera or of the objects is given, this problem is also called blind deconvolution. To estimate a sharp image one has to estimate some kind of information on the motion that generated the blurry image. This information can be represented mathematically as a function, called Point Spread Function (PSF). Each pixel of the blurry image can be represented as a convex combination of pixels of the sharp image in terms of the PSF. Since the estimation of blur function and sharp image has more unknowns than the dimension of input image, the problem is particularly challenging and a regularization prior is required. Although there are many successful methods, most of them incorporate heuristics. 
We aim to propose a principled formulation which also achieves performance comparable to state of the art algorithms. In this work, we study the use of an L2 norm constraint for the PSF and show how it helps favor sharp images. Due to this constraint, even with the use of a simple Gaussian prior for the sharp image, we can estimate the PSF and the latent image accurately. Furthermore, we show that a simple Maximum a Posteriori (MAP) formulation is enough to achieve state of the art results. To minimize such a formulation exactly, we use a splitting method that deals with the non-convexity of the L2 norm prior. **Research staff:** Meiguang Jin, Paolo Favaro **Financial support:** Swiss National Science Foundation Project No. 153324 **Light Field Blind Deconvolution** A light field (or plenoptic) camera is endowed with the ability of capturing spatio-angular information of a light field. Because of this ability, it is possible to obtain scene depth maps and render effects such as digital refocusing from a single image. While a conventional camera captures a projection of rays from a 3D scene onto a 2D plane, a light field camera aims to capture the intensity and direction of all incoming rays. The use of plenoptic cameras has been gaining popularity since the past few years. Different models of plenoptic cameras are becoming commercially available for consumer photography as well as for industrial inspection. However, despite their many advantages, light field (LF) cameras are not immune to blur artifacts. In many practical scenarios, either due to camera shake or motion of objects in the scene, a LF image can get motion-blurred. Unfortunately, existent texture rendering algorithms for LF cameras do not have the ability to remove motion blur. Thus, we address for the first time the issue of motion blur in light field images captured from plenoptic cameras. We propose a method for single image blind deconvolution with a space-varying blur due to the depth changes in the scene. Our method employs a layered model that also handles occlusions and partial transparencies due to both motion blur and the out of focus blur of the light field camera. We then reconstruct each layer and the corresponding sharp texture and motion blur via an optimization scheme. The performance of our algorithm is demonstrated on synthetic as well as real light field images with space-varying motion blur. Layer Separation When imaging scenes consisting of transparent surfaces, the radiance components present behind and in front of the transparent surface get superimposed. Separating the two layers from a composite image is inherently ill-posed since it involves determining two unknowns from a single observation. Existing approaches address this problem by using additional information obtained by capturing a sequence of images from a moving camera, or by modifying the data acquisition modality, or by imposing specific models on images. We propose to use a light field camera from which layer separation can be achieved using a single observation. In a light field image of a scene with superimposing reflections, the captured colors are related to the radiances of the reflected and transmitted layers through a point spread function (PSF). The PSF depends on the depth values of the layers and the optical settings of the LF camera. The contributions from both the layers get merged in the observation. Due to the merging of intensities, the standard multi-view correspondence approach cannot be used for depth estimation. 
We develop a neural network-based classifier for estimating depth maps. Our classifier can also separate the scene into reflective and non-reflective regions. The depth estimation process has a runtime of only a few seconds when implemented on a GPU. With the knowledge of the scene depth, we arrive at the PSFs of the two layers and subsequently reconstruct the radiances within a regularization framework. Research staff: Paramanand Chandramouli, Mehdi Noroozi and Paolo Favaro 3D Face Reconstruction The aim of this project is to reconstruct 3D face models of individuals from collections of images that are captured in uncontrolled environments wherein variations in illumination, pose, and expression are present. Such 3D reconstructions can be useful for face and expression recognition, or to produce facial animations. The quality of the 3D reconstructions is limited since the only constraints on the reconstruction are photometric consistency and correspondence with sparse facial landmarks. Based on previous work on 3D face reconstruction, we introduce silhouette constraints to improve the quality of unconstrained 3D face reconstruction. The main idea is to extract silhouette points on the 3D reconstruction, and match them with automatically detected silhouette points in the input images. We include these constraints in the 3D reconstruction objective, which we solve in an iterative process. In each iteration step, we recompute the silhouette points using the current 3D reconstruction and update the corresponding constraints in the objective. As a consequence, the silhouettes of the 3D reconstruction converge towards the silhouettes in the input images. The results demonstrate that the new silhouette constraints lead to higher reconstruction quality. **Research staff:** Qiyang Hu, Paolo Favaro **Financial support:** Swiss National Science Foundation Project No. 156253 ### Unsupervised Learning of Visual Representations Information processing tasks can be either very easy or very difficult depending on how the information is represented. This general principle is applicable to daily life as well as to machine learning and computer science. A good representation is one that makes subsequent learning easier. The choice of representation will usually depend on the choice of the subsequent learning task. Convolutional Neural Networks (CNNs) have demonstrated impressive performance in many computer vision tasks when trained on large labeled datasets. We can think of feedforward networks trained by supervised learning as performing a kind of representation learning. While the last layer of a network is typically a linear classifier (softmax), the rest of the network learns to provide a representation to this classifier. We often have a very large amount of unlabeled training data and relatively little labeled data. Training with supervised learning techniques on the labeled subset often results in severe overfitting. Our goal is to learn efficient representations from large scale unlabeled training data. We build a CNN that can be trained to solve Jigsaw puzzles as a pretext task. The network is then repurposed to solve object classification and detection. Jigsaw puzzles have been associated with learning since their inception. They were introduced in 1760 by John Spilsbury as a pretext to help children learn geography. Studies in psychonomics show that Jigsaw puzzles can be used to assess visuospatial processing in humans.
Indeed, the Hooper Visual Organization Test is routinely used to measure an individual’s ability to organize visual stimuli. We propose to use Jigsaw puzzles to develop a visuospatial representation of objects in the context of CNNs. To maintain the compatibility across tasks, we introduce the context-free network (CFN), a siamese-ennead CNN. The CFN takes image tiles as input and explicitly limits the receptive field (or context) of its early processing units to one tile at a time. Our experimental evaluations show that the learned representations capture semantically relevant content. We pretrain the CFN on the training set of the ILSVRC2012 dataset and transfer the features to the combined training and validation set of PascalVOC2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with 51.8% for detection and 68.6% for classification, and reduce the gap with supervised learning (56.5% and 78.2% respectively). **Research staff:** Mehdi Noroozi, Paolo Favaro **Unsupervised Viewpoint Estimation** Viewpoint estimation is traditionally solved by supervised techniques using classification. A classifier is trained to predict the viewpoint from annotated samples. In our project we estimate viewpoints without using viewpoint annotation. Recent works have shown that convolutional neural networks produce features that are sensitive to 3D viewpoint changes. This is surprising because they were trained to ignore viewpoint changes. We show in our work that we can exploit this property to establish a link between similar viewpoints of objects in the same category and build a view graph. This is achieved by simulating virtual viewpoints generated by the images and a shape hypothesis. We further develop a probabilistic model to recover the absolute viewpoint assignment using relative viewpoint estimates between image pairs. We train a convnet to classify the relative viewpoints between image pairs. Subsequently, we estimate the absolute viewpoint directly with the convnet, so the reconstruction step is no longer needed. We train the network with image pairs of the same object instance, where the relative changes are known. The objective function measures the consistency of the viewpoint assignment between the image pairs. **Research staff:** Attila Szabó, Paolo Favaro **Financial support:** Swiss National Science Foundation Project No. 149227 ### Exploiting Videos to Learn Object Detection and Categorization in Images We assume that a category is defined by its characteristic textures/colors (if any) and its characteristic 3D shape (up to local or articulated deformations). Images and videos are space-time instances of an object category with additional transformations (e.g., pose, viewpoint, intraclass variation, illumination, occlusions, clutter and so on) that do not characterize the category. Given a model of the object category (textures and 3D shape), the removal of these transformations is relatively well defined. However, when the model is unknown, the problem becomes extremely challenging. The biggest problem is how to relate the content of one image instance with another image instance. In other words one needs to find correspondences between parts of different instances of an object. Because of the high variability of the appearance of instances of an object, this task is extremely difficult. To simplify this step we propose to use short videos instead of images. 
Our objective is first to learn high-performance visual representations (feature vectors) from videos and then such visual representations can be transferred to other tasks such as object detection/categorization, action recognition, pose estimation and so on. **Research staff:** Xiaochen Wang, Paolo Favaro **Financial support:** China Scholarship Council ### 5.4 Master’s Thesis 5.5 Further Activities Tutorial Organizer Paolo Favaro - “Removing Camera Blur” at ICCV 2015 Ph.D. Thesis Examiner Paolo Favaro Master’s Thesis Examiner Paolo Favaro - “Towards Automatic Segmentation of Longitudinal Brain Tumor”, R. Meier, ISTB, UniBe, 2016 Doctorate Course Paolo Favaro - “Inverse Problems in Imaging” at Università’ di Padova, Italy 2016 Invited Talks Paolo Favaro - Invited Talk at Max Planck Institute, Tuebingen, Germany 2016 - Invited Talk at Siemens Healthcare, Princeton, NJ - USA 2016 - Invited Talk at University of California, Berkeley, USA 2016 - Invited Talk at Stanford University, USA 2016 - Invited Talk at Apple Inc., Cupertino, USA 2016 Conference Program Committees Paolo Favaro - Area chair of ICCV 2015 and CVPR 2016 Journal Committees Paolo Favaro - Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence 2016 - Associate Editor for IEEE Signal Processing Magazine, Special Issue on Signal Processing for Computational Photography and Displays, 2016 Reviewing Activities Paolo Favaro - GCPR 2016 - ICCP 2016 - SIGGRAPH 2016 - ERC grant 2015 - IEEE Transactions on Pattern Analysis and Machine Intelligence 2015 - Springer Book Proposal 2015 Paramanand Chandramouli - IEEE Transactions on Pattern Analysis and Machine Intelligence 2016 - IEEE Transactions on Industrial Electronics 2016 - IEEE Transactions on Image Processing 2015 5.6 Publications Journal Publications Refereed Conference Proceedings 6 Logic and Theory Group 6.1 Personnel **Head:** Prof. Dr. G. Jäger Tel.: +41 (0)31 631 85 60 email: jaeger@inf.unibe.ch **Office Manager:** B. Choffat Tel.: +41 (0)31 631 84 26 email: choffat@inf.unibe.ch **Docents:** Prof. Dr. T. Strahm Tel.: +41 (0)31 631 49 98 email: strahm@inf.unibe.ch Prof. Dr. T. Studer Tel.: +41 (0)31 631 39 84 email: tstuder@inf.unibe.ch **Research assistants:** M. Bärtschi* Tel.: +41 (0)31 511 76 16 email: baertsch@inf.unibe.ch L. Jaun Tel.: +41 (0)31 511 76 10 email: jaun@inf.unibe.ch A. Kashev* Tel.: +41 (0)31 511 76 13 email: kashev@inf.unibe.ch Dr. I. Kokkinis* Tel.: +41 (0)31 511 76 09 email: kokkinis@inf.unibe.ch M. Marti* Tel.: +41 (0)31 511 76 12 email: mmarti@inf.unibe.ch Dr. D. Probst Tel.: +41 (0)31 511 76 07 email: probst@inf.unibe.ch Dr. F. Ranzi* email: ranzi@inf.unibe.ch (until 31.10.2015) T. Rosebrock* Tel.: +41 (0)31 511 76 33 email: rose@inf.unibe.ch Dr. K. Sato* Tel.: +41 (0)31 511 76 21 email: sato@inf.unibe.ch Guests: - Dr. R. Kuznets TU Wien, Fakultät für Informatik, Austria October 2015 - S. Steila Università degli Studi di Torino, Scuola di Dottorato in Scienze della Natura e Tecnologie Innovative, Italy October 2015 - Dr. B. Afshari TU Wien, Institut für Diskrete Mathematik und Geometrie, Austria November 2015 - Dr. R. Bruni Università degli Studi di Firenze, Dipartimento di Lettere e Filosofia, Italy November 2015 - Dr. G. 
Leigh TU Wien, Institut für Diskrete Mathematik und Geometrie, Austria November 2015 * with financial support from a third party ### 6.2 Overview The Logic and Theory research group (LTG) focuses on theoretical computer science and mathematical logic, especially proof theory, computational logics and theory of computation. We have been dealing for many years with formal methods, analysis of deductions, general computations and, in particular, applications of mathematical logic to computer science. During the previous year the main subject areas have been the following: **Computational Logic:** Logical formalisms are perfectly suited to the specification of complex systems, the representation of knowledge and information, the description of processes (e.g. in distributed multi-agent systems) and for providing formal proofs of important system properties such as, for example, correctness and fairness. The research group has long been interested in the deductive, procedural and dynamic aspects of the corresponding formalisms and in the design of modern deductive systems. New approaches are being developed for information update purposes. In addition, we investigate how simple logical formalisms can be extended to genuine multi-user systems that take into account the dynamic aspects of ontologies in the context of data mining and the semantic web. **Proof Theory:** This research topic focuses on the development and analysis of formal systems of first and second order arithmetic, set theory and of what are known as logical frameworks (type and set theoretical, explicit, constructive, extensional, intensional). Our interests range from feasible subsystems of arithmetic to highly impredicative set and type theories and deal with the interplay between constructive, recursive and operational approaches. In addition, abstract computations and computable knowledge are being investigated. ### 6.3 Research Projects **Algebraic and Logical Aspects of Knowledge Processing** The general framework of this project is the proof-theoretic analysis of systems of second order arithmetic, of explicit mathematics, and of operational set theories. In particular, we examine wellordering proofs in connection with higher types and suitable inductive definitions. A further aspect of research is related to abstract computability theory in an operational setting, thus aiming towards an operational descriptive set theory. **Research staff:** G. Jäger, D. Probst, F. Ranzi, T. Rosebrock, K. Sato, S. Steila, T. Strahm **Financial support:** Swiss National Science Foundation Structural Proof Theory and the Logic of Proofs The Logic of Proofs was introduced by Artemov to solve the longstanding problem of giving intuitionistic logic a classical provability semantics. The main idea of his approach was to introduce proof polynomials into the object language in order to represent the structure of proofs. The idea has proved fruitful and resulted in the formal study of proof structure in this and other contexts, including the self-referentiality of modal reasoning, epistemic paradoxes, and the logical omniscience problem. In this proposal, we continue to extend the benefits of the more expressive language of the Logic of Proofs to various areas of computer science, focusing on temporal logics, traditionally used for describing properties of reactive systems, and belief revision, which studies operations for changing agents’ beliefs in accordance with new information.
We also continue our investigation of the applications of proof polynomials to logics of common knowledge and dynamic epistemic logics, which describe internal epistemic attitudes of rational agents and groups of agents in static and dynamic epistemic scenarios. The new Track B of this proposal sets forth a foundational study of fixed points in the constructive modal framework. While both intuitionistic modal logics and modal mu-calculus have received attention from the scientific community (the latter more than the former), there is virtually no study on constructive modal fixed-points, making the line of investigations proposed in Track B pioneering in this respect. Research staff: G. Jäger, A. Kashev, I. Kokkinis, M. Marti, T. Studer Financial support: Swiss National Science Foundation The operational paradigm: its mathematical and philosophical frontiers This project assesses the limits of mathematical knowledge inherent in and provided by an operational approach – an approach which plays a central role in Feferman’s explicit mathematics and operational set theory – from various mathematical and philosophical perspectives. The notion of predicativity goes back to Russell and Poincare and was formally made precise by Feferman and Schütte, who were also able to exactly characterize predicative mathematics. The first part of this proposal is about an extension of predicativity, which we call metapredicativity, in taking a more liberal approach to “building up set-theoretic universes from below”. We aim at a conceptually and technically convincing classification of those formal systems that are no longer predicative in the sense of Feferman-Schütte but whose proof-theoretic analysis can be carried through by purely predicative methods. Our solution should unravel this dichotomy by providing a foundationally convincing explanation. In addition, we aspire to determine the limit of metapredicativity. The second part is concerned with the design and analysis of strong operational systems and independence results making use of those. For this purpose, new extensions or generalizations of forcing and realizability techniques will be developed. The main products will be scientific results, documented in research articles. In addition, presentations of our results at international conferences, exchange visits, and the training of graduate students are envisaged. The long term impact of this project will provide convincing answers concerning the foundational relevance of an alternative approach to formalizing mathematics which, however, is closer to mathematical practice. Financial support: John Templeton Foundation Logic and Computation This very general project deals with the close connections between mathematical logic and certain parts of computer science, and emphasis is put on a proof-theoretic approach to some of the central questions in this area of research. These include the development of perspicuous and feasible logical frameworks for studying typical questions in computer science like termination and correctness of functional programs, properties of distributed systems and the like. We study applicative theories as well as strongly typed formalisms and are interested in the connections to constructive and explicit mathematics. Furthermore, we are interested in analyzing the close connections between the complexities of computations and proofs in suitable formalizations, ranging from propositional calculi up to abstract frameworks for computations (in higher types). 
Research staff: All members of the research group 6.4 Ph.D. Theses - I. Kokkinis: Uncertain Reasoning in Justification Logic - F. Ranzi: From a Flexible Type System to Metapredicative Wellordering Proofs 6.5 Bachelor’s Theses - M. Eisele: A Discussion of the Rice, Rice-Shapiro, and Myhill Shepherdson Theorems - R. Imhof: Vollständigkeitsbeweis für Distributed Knowledge - S. Matter: Soundness and completeness of a first order probabilistic logic with approximate conditional probabilities - B. Sugden: An Explanation of Blum’s Speed-Up Theorem 6.6 Further Activities Editorial Boards Gerhard Jäger - Member of the Editorial Board of Archive of Mathematical Logic - Member of the Editorial Board of Logica Universalis Thomas Strahm - Member of the Consulting Board of Dialectica - Member of the Editorial Board of Journal of Symbolic Logic Technical and Research Committees Gerhard Jäger - Swiss Delegate to the IFIP Technical Committee 1 (Foundations of Computer Science) - Board Member of the Platform Mathematics, Astronomy and Physics (MAP) of the Swiss Academy of Sciences (until 2015) - Member of the Ambizione Panel of the Swiss National Science Foundation - Member of the Scientific Council of the European Association for Computer Science Logic - PC Member of the Fifth International Conference on Logic, Rationality, and Interaction, Taipei City, 2015 - Member of the Kantonale Maturitätskommission - Expert for “Maturitätsprüfungen Mathematik und Informatik” Dieter Probst - Expert for “Maturitätsprüfungen Mathematik und Informatik” Thomas Strahm - Board Member of the Swiss Society for Logic and Philosophy of Science - Expert for “Maturitätsprüfungen Informatik” Thomas Studer - President of the Swiss Society for Logic and Philosophy of Science - Swiss Delegate to the International Union of History and Philosophy of Science - Board Member of the Platform Mathematics, Astronomy and Physics (MAP) of the Swiss Academy of Sciences - PC Member of Advances in Modal Logic 2016 - Rapporteur and Member of the PhD Jury for Fabrizio Alberetti, University of Neuchâtel, 2015 - Expert for “Maturitätsprüfungen Mathematik und Informatik” - Local Organizer of the Swiss Olympiad in Informatics Finals, 2016 Jan Walker - Board Member of the Swiss Graduate Society of Logic and Philosophy of Science Organized Events Michael Bärtschi - Together with G. Jäger and K. Sato. Operations, Sets, and Types (international workshop supported by the John Templeton Foundation), Münchenwiler, April 2016. Gerhard Jäger - Together with M. Bärtschi and K. Sato. Operations, Sets, and Types (international workshop supported by the John Templeton Foundation), Münchenwiler, April 2016. Ioannis Kokkinis Kentaro Sato - Together with M. Bärtschi and G. Jäger. Operations, Sets, and Types (international workshop supported by the John Templeton Foundation), Münchenwiler, April 2016. 6.7 Publications - Stefano Berardi and Silvia Steila. Ramsey’s theorem for pairs and $k$ colors as a sub-classical principle of arithmetic. *Journal of Symbolic Logic*, In Press. • Michel Marti and Thomas Studer. Intuitionistic modal logic made explicit. Submitted. • Florian Ranzi and Thomas Adrian Strahm. A flexible type system for the small Veblen ordinal. Submitted. 7 Software Composition Group 7.1 Personnel Head: Prof. Dr. O. Nierstrasz Tel: +41 31 631 46 18 email: oscar@inf.unibe.ch Office Managers: I. Keller Tel: +41 31 631 46 92 email: keller@inf.unibe.ch Scientific Staff: Dr. M. Ghafari* Tel: +41 31 511 7637 email: lungu@inf.unibe.ch
A. Caracciolo Tel: +41 31 511 7643 email: caraccio@inf.unibe.ch A. Chiș* Tel: +41 31 511 7643 email: andrei@inf.unibe.ch C. Corrodi* Tel: +41 31 511 7639 email: corrodi@inf.unibe.ch J. Kurš* Tel: +41 31 511 7638 email: kurs@inf.unibe.ch K. Levitin* Tel: +41 31 511 7636 email: levitin@inf.unibe.ch L. Merino* Tel: +41 31 511 7638 email: merino@inf.unibe.ch N. Milojković Tel: +41 31 511 7639 email: nevena@inf.unibe.ch H. Osman Tel: +41 31 511 7644 email: osman@inf.unibe.ch B. Spasojević* Tel: +41 31 511 7636 email: spasojev@inf.unibe.ch Y. Tymchuk* Tel: +41 31 511 7643 email: tymchuk@inf.unibe.ch *with financial support from a third party 7.2 Overview Software systems that are used in practice must evolve over time to maintain their relevance, yet as systems evolve, they become more complex and harder to evolve. The Software Composition Group carries out research into tools, techniques and programming language mechanisms to enable the graceful evolution of complex software systems. 7.3 Research Projects Agile Software Analysis SNSF project #200020-162352 Software developers actually spend much of their time not just producing new code, but analysing the existing code base. Integrated Development Environments (IDEs), however, are mostly glorified text editors, and offer only limited support for developers to query and analyse software systems. In this continuation of our SNF project Agile Software Assessment, we proceed to explore new ways to enable developers to efficiently answer detailed questions about the software system under development. The project is organized into four orthogonal tracks. We briefly summarize our progress in each track over the past year: - **Agile Model Extraction.** Before we can analyze software, we must parse it and model it. Given the large range of different programming languages and technologies used in modern projects, this poses a technical challenge. Highly expressive and composable parsing frameworks required by agile modeling cannot compete with high-performance top-down parsers. Furthermore, the improved expressiveness of parsers developed in recent years causes even higher parsing overhead. We have focused on the performance of parsers for parsing expression grammars (PEGs) and have developed several parsing strategies for different subsets of parsing expressions. As a validation of these techniques we have implemented a parser compiler, a source-to-source transformation tool that analyzes a PEG definition and transforms it into a high-performance top-down parser that switches between these parsing strategies during a single parsing phase. - **Context-Aware Tooling.** To make informed decisions, developers commonly formulate detailed and domain-specific questions about their software systems and use tools to explore available information and answer those questions. Generic development tools, while universally applicable, make it difficult to answer domain-specific questions. Through our work on moldable tools we propose that developers build software using development tools tailored to their specific application domains. To support moldable tools we have introduced moldable development as an approach for developing software in which developers evolve development tools together with their applications. We further investigated how to apply the moldable tools idea to enable context-aware searching within an IDE and developed Moldable Spotter, a framework enabling developers to easily create custom searches for domain objects.
We also looked at how to apply the Moldable Debugger to improve the debugging of concurrent threads. Debuggers are a central part in the development and testing workflow. Most debuggers are used by setting static breakpoints on the source code and halting the execution when reaching a specific point in the source code. We are exploring ways to improve the way debuggers operate and are used. In particular, we aim to design and implement a debugger — based on object-centric debugging and the moldable tools approach — that allows developers to specify complex and domain-specific breakpoint conditions based on the states of run-time objects and object relations rather than source code locations. We are currently developing a prototype in the Pharo programming system. A large number of visualization tools and techniques have been developed over the years, but it is a non-trivial task to identify a suitable visualization for a given problem at hand and to deploy the visualization for that task. To address this problem we have carried out a systematic literature review of visualization techniques in the literature, and the questions they address. We have also designed a tool that contains a tag-cloud based visualization built from frequent questions that arise during development. In it, tags connect to icons representing suitable visualization examples that were collected from the examples shipped with the Roassal visualization engine. The visualization builds on the tools included in the Pharo programming language and environment allowing developers to (i) spot tags related to their questions, (ii) find linked suitable visualizations, and (iii) adapt the visualization examples to answer their particular question. - **Ecosystem Mining.** Fixing bugs is a key activity in software maintenance. Analyzing a large corpus of Java open source programs, we found out that “missing null check” is the most recurrent bug. We have investigated more deeply null checks in this corpus and empirically shown that the improper use of “null” is a main source of defects in Java systems. We are currently developing an Eclipse plugin that warns developers about potential missing null checks. A bug predictor is a machine learning model trained on software metrics to predict defective software entities. Bug prediction is a well researched field in software engineering. However, previous studies did not take into account the cost effectiveness of using a bug predictor. We did an extensive empirical study, experimenting with different bug prediction configurations such as metrics AI models, and response variables. Two main findings stand out. First, there is no universal solution in bug prediction as every software project is unique and has its own best bug prediction configuration set. Second, bug prediction works fine for some projects but it is not applicable for all projects as the cost effectiveness varies significantly from one project to another. The lack of static type information in dynamically typed languages hampers program comprehension. Type inference algorithms tend to gain in complexity in order to avoid the problem of producing false positives or false negatives. We have explored cheap heuristics that aim to provide a developer with fast and accurate information about the types of variables. These types are then sorted based on different static and dynamic heuristics. The proposed heuristics are reasonably precise. 
We have found that this approach tends to work well both for library and project-related types. - **Evolutionary Monitoring.** Tool developers often rely on data from other related projects to improve their tools. Unfortunately, there is little to no support for them in terms of software evolution. As software evolves, the gathered data become stale and less useful. We have developed a framework to produce tools that use data from related projects. The framework monitors changes in these projects and updates the required data on regular intervals. We are exploring the applicability of this framework by building experimental tools that leverage different aspects of the framework, such as the distinction between gathering and presenting the data, static versus dynamic analysis, and report-guided heuristics versus direct use of data. One can introduce faulty code into a software system while implementing a new feature or performing a refactoring. This code can have run-time side effects, or make it harder to maintain the software system in general. In many cases faulty code sections can be automatically detected by static analyzers. We offer live feedback allowing developers to immediately see potential problems. The drawback of this approach are false positives that are presented to developers in the same live manner. To maintain the satisfaction of developers we had to listen carefully to their feedback and quickly react by tweaking the rules used by static analyzer. The challenge that we are facing now, is what features do developers need to shape the static analyzer rules themselves? As the static analyzer rules are changing over the time, another problem arises when one wants to analyze the quality evolution of a project. The quality value may change not only because of changes made to the project’s source code, but also because of the changes in the quality rules. We were able to decompose the quality fluctuations by using visualization techniques. Research staff: All members of the research group. Duration: Jan 1, 2016 – Dec. 30, 2018 Financial support: Swiss National Science Foundation For further details, please consult: http://scg.unibe.ch/asa2 7.4 Ph.D. Theses 7.5 Master’s Theses 7.6 Bachelor’s Theses and Computer Science Projects 7. 
Software Composition Group 7.7 Awards - SPLASH 2015 Distinguished Demo Award for GTInspector: A Moldable Domain-Aware Object Inspector by Andrei Chiş, Tudor Girba, Oscar Nierstrasz, Aliaksei Syrel - European Smalltalk User Group 2015 Technology Innovation Award (1st prize) for GT Spotter by Aliaksei Syrel, Andrei Chiş, Tudor Girba, Juraj Kubelka and Stefan Reichhart 7.8 Further Activities Invited Talks Oscar Nierstrasz - Invited Speaker at FASE-ETAPS 2016 (19th International Conference on Fundamental Approaches to Software Engineering (FASE) – Eindhoven, The Netherlands, April 4-7, 2016) Editorial Boards and Steering Committees Oscar Nierstrasz - AITO – Association Internationale pour les Technologies Objets (Member) - CHOOSE – Swiss Group for Object-Oriented Systems and Environments (President) - Elsevier Science of Computer Programming (Advisory Board Member, Software Section) - JOT — Journal of Object Technology (Steering Committee Member) - Moose Association (Board Member) - PeerJ Computer Science Journal (Editorial Board member) - SATToSE – Seminar Series on Advanced Techniques & Tools for Software Evolution (Steering Committee Member) - SI – Swiss Informatics Society (Board Member) - SIRA – Swiss Informatics Research Association (Board Member) - SNF — Swiss National Science Foundation (Member of the Research Council) Program Committees Oscar Nierstrasz - Co-organizer of Engineering Academic Software (Dagstuhl Perspectives Workshop 16252 – Dagstuhl, Germany, June 19-24, 2016) - PC Member of BENEVOL 2015 (BELgian-NEtherlands software eVOLUTION seminar – Lille, France, Dec 3-4, 2015) - PC Member of ICSME 2015 (International Conference on Software Maintenance and Evolution – Bremen, Germany, Sept 27-Oct 3, 2015) - PC Member of ICSE 2015 (37th International Conference on Software Engineering – Florence, Italy, May 16-24, 2015) - PC Member of SANER 2015 ERA Track (International Conference on Software Analysis, Evolution, and Reengineering – Montreal, Canada, March 2-6, 2015) Andrei Chiş - PC Member of VISSOFT 2016 - NIER and Tools Tracks (4rd IEEE Working Conference on Software Visualization – Raleigh, North Carolina, USA, October 3-4, 2016) - PC Member of IWST 2016 (International Workshop in Smalltalk Technologies – Prague, Czech Republic, August 23-25, 2016) - PC Member of VEM 2016 (4th Workshop on Software Visualization, Evolution and Maintenance – Maringa, Brazil, September 21, 2016) - Publicity Chair for SLE 2016 (9th ACM SIGPLAN International Conference on Software Language Engineering, October 31 - November 1, 2016, Amsterdam, Netherlands) Haidar Osman Leonel Merino - PC Member of VISSOFT 2016 - Artifact Evaluation Committee (4th IEEE Working Conference on Software Visualization – Raleigh, North Carolina, USA, October 3-4, 2016) Yuriy Tymchuk - PC Member of VISSOFT 2016 - NIER and Tools Tracks (4rd IEEE Working Conference on Software Visualization – Raleigh, North Carolina, USA, October 3-4, 2016) Reviewing Activities Oscar Nierstrasz - Elsevier Computer Languages, Systems & Structures - FWO (Research Foundation Flanders) - IEEE Transactions on Software Engineering - NWO (Netherlands Organisation for Scientific Research) Mohammad Ghafari - ICSME 2016, SANER 2016 Haidar Osman Andrei Chiş - Elsevier Science of Computer Programming Nevena Milojkovic - Onward! 2016, SANER 2016, TSE Jan Kurs - IWST 2016, Onward! 
2016 Essays, VISSOFT NIER 2015 Leonel Merino - ICSME 2016, SANER 2016 Claudio Corrodi - ICSME 2016, IWST 2016 7.9 Publications Journal Papers Conference Papers - Claudio Corrodi, Alexander Heußner, and Christopher M. Poskitt. A graph-based semantics workbench for concurrent asynchronous programs. In *Proc. International Conference on Fundamental Approaches to Software Engineering (FASE 2016)*, volume 9633 of 7. Software Composition Group Book Chapters Julien Deantoni, Cédric Brun, Benoit Caillaud, Robert B. France, Gabor Karsai, Oscar Nierstrasz, and Eugene Syriani. Domain **Workshop Papers** 8 Administration University: T. Braun: Member of the Committee for Computing Services (Kommission für Informatikdienste) Representative of University of Bern in SWITCH Stiftungsrat G. Jaeger: Member of Kantonale Maturitätskommission Th. Strahm: Board member of Mittelbauvereinigung of University of Bern (until 12.15) Member of Senate (until 12.15) Member of Central Library Commission M. Zwicker: Graduate School for Cellular and Biomedical Sciences: member of the Expert Committee on Biomedical Engineering Faculty: G. Jäger: Member of the Strategy Board P. Favaro: Member of the Board of Studies Joint Master in Computer Science of the Universities of Bern, Fribourg and Neuchatel: Member of the Branch Committee O. Nierstrasz: Chair, Teaching Evaluation Committee, Faculty of Natural Sciences Joint Master in Computer Science of the Universities of Bern, Fribourg and Neuchatel: Member of the Branch Committee Th. Strahm: Member of the Finance Board President of library commission Exakte Wissenschaften Plus Th. Studer: Member of the Strategy Board Institute: T. Braun: Member of Hauskommission Engehalde (until 20.5.16) P. Favaro: Director of Studies O. Nierstrasz: Managing Director of INF, Member of Hauskommission Engehalde (as of 21.5.16) Th. Strahm: Member of Library Committee Exakte Wissenschaften Th. Studer: Member of Hauskommission Exakte Wissenschaften M. Zwicker: Deputy Director of INF
Weak Ties and Contact Initiation in Everyday Life: Exploring Contextual Variations from Contact Diaries

Yang-chih Fu, Institute of Sociology, Academia Sinica, Taipei 115, Taiwan, fuyc@sinica.edu.tw
Hwai-Chung Ho, Institute of Statistical Science, Academia Sinica, Taipei 115, Taiwan, hcho@stat.sinica.edu.tw
Hsiu-man Chen, Institute of Statistical Science, Academia Sinica, Taipei 115, Taiwan, hsiuman@stat.sinica.edu.tw

Abstract This study examines how the significance of weak ties varies by contact initiation and purposes of contact in everyday life. Based on data from 55 contact diaries, we analyze the extent to which diary keepers judge each of 102,825 specific contacts as beneficial after they occur, by how well they knew the target person beforehand. Our hypothesis testing and bootstrap resampling show that when a diary keeper initiates a contact, weak ties result in more gains. In contrast, when the other party starts the contact, it is strong rather than weak ties that turn out to be more beneficial to the diary keeper. Such effects vary by other contextual factors, however, particularly the purposes of contacts. Keywords: weak ties; everyday life; contact diary; ego-centered networks.

Corresponding author: Yang-chih Fu, Institute of Sociology, Academia Sinica, 128 Academia Rd., Sec.2 Nankang, Taipei 115, Taiwan. Phone: +886 2 2652 5149, +886 922 632220. Fax: +886 2 2652 5050. Email: fuyc@sinica.edu.tw

1. Introduction

The “strength of weak ties” argument claims that people may receive crucial information or critical help by means of sporadic contacts with “someone only marginally included in the current network” or “individuals whose very existence they have forgotten” (Granovetter, 1973, pp.1371-1372). Extant literature has given inconsistent support to this line of arguments. Some studies have demonstrated that weak ties do bring about unexpected significant gains (Bashi, 2007; Granovetter, 1973, 1983). Other empirical evidence, however, has suggested that stronger ties result in better social outcomes, not only in terms of affective support, but also in instrumental gains (Massey et al., 1994; Wegener, 1991). Still others have argued that some seemingly weak ties can actually be decomposed into links of strong ties (e.g., father’s close friend). The positive outcomes of such weak ties thus may result from the cumulative effects of a series of strong ties, particularly when looking for a job (Bian, 1997). Such inconsistent research findings reveal difficulties in testing the “weak ties hypothesis” in empirical research.
Although Granovetter (1973) elaborated his arguments about weak ties based on qualitative interview data, other studies have examined the argument in various contexts. Indeed, the strength of weak ties may manifest in many social facts. Among these, the literature has focused mostly on macro-level or instrumental events, such as the diffusion of influence and information (e.g., the small-world phenomenon), opportunities for mobility (including job searches), community organization, and social cohesion (Granovetter, 1973, 1983; Harvey, 2008; Liu and Duff, 1972; Milgram, 1967). In comparison, micro-level or affective outcomes have received little attention and systematic examination. Extending the recent literature that identifies the need to study personal situations of social networks in daily life (Andersen and Rossteutscher, 2007; Tindall and Cormier, 2008), we aim to revisit the weak ties argument with a research design that helps shed new light on conventional network studies. First, we focus on social interactions in everyday life, rather than specific events that are mostly instrumental. Second, we break down interpersonal relations or ties into contacts, which allows us to examine whether and how the strength of weak ties functions under different circumstances. Third, we differentiate such circumstantial effects by identifying who initiates the contact, a factor key to understanding human motivations and behaviors. While examining the relationship between tie strength and contact initiation, we take into account another key contextual factor, purposes of social interactions. We argue that weak ties in everyday life may be beneficial, particularly when one initiates a contact. Such effects, however, largely depend on the circumstances under which a contact takes place. Weak ties are slightly more beneficial when an individual initiates contacts for instrumental purposes, whereas strong ties bring about positive outcomes when the other party initiates such instrumental contacts. As for affective contexts, especially leisure and social activities, strong ties should remain more beneficial regardless of who initiates the contacts. We draw data from 55 three-month contact diaries that recorded interpersonal interactions in everyday life. The large number of contacts and the attributes attached to each contact entry enable us to model different effects of contacts under various circumstances. As the format of the dataset is uncommon, combining some characteristics of both imbalanced and longitudinal data, our model draws aid from resampling methods to ensure that the sampling distribution follows the Gaussian distribution; most of all, we assume that the variable of the differences among those who kept the diaries obeys a fixed distribution, thus allowing heteroscedasticity among these diary keepers. 2. How Significant Are Weak Ties in Everyday Life? Weak ties appear to be less helpful than strong ties in real life. As early studies suggested, information is often transmitted through strong ties or circulated among people who share similar attributes (Palmore, 1967; Rogers and Bhowmik, 1970). As important as such strong and homophilous ties are in major events, however, they may be limited in offering other benefits. Transmission through strong ties, for example, tends to end or saturate early because messages re-circulate among those who already possess the same information. 
In other words, by transmitting redundant and overlapping information, homophilous communication poses a structural constraint on diffusion (Liu and Duff, 1972). In contrast, the information transmitted through weak ties or unfamiliar people often leads to non-redundant social circles and brings about different returns that can be at least as significant as the transmissions among strong ties. On many occasions, particularly during job searches or people searches, weak ties can serve as a crucial vehicle that helps reach the goal more effectively. As Granovetter (1973, p.1372, note 17) recalled, “when [he] asked respondents whether a friend had told them about their current job, they said, ‘Not a friend, an acquaintance’.” Seminal studies have focused on how weak ties turn out to be effective for social mobility, job searches, or idea diffusion (Granovetter, 1973, 1983; Milgram, 1967). Other well-known studies have emphasized the potential for social resources and social capital embedded in weak ties or structural holes in networks, which lead to non-redundant ties and bridges that help advance individuals in the job market (Burt, 1992, 2001; Lin, 2001a,b). More recent empirical inquiries have further elaborated how job information is transmitted or leaked through acquaintances or other weak ties (Forse, 2004; Harvey, 2008; McDonald et al., 2009). Although studies of strong ties are spread over various events, as well as the more general life domain, research on weak ties seems to be more constrained within certain areas. As a result, findings based on such empirical inquiries may not be easily generalized to everyday life. To complement preexisting studies on weak ties, researchers need to extend their examinations to more life domains. For example, what do contacts with weak ties mean to individuals in everyday life? Under what circumstances in daily life can such ties bring about unexpected outcomes? Although it is relatively easy to see the contribution of weak ties in major events, how likely is it that actors detect the tangible rewards that weak ties have brought about from ordinary interactions? In other words, do weak ties also play important roles during small events and trivial incidents? Many weak ties may indeed be “forgotten” and only become significant at certain critical moments. When such weak ties are dormant or seemingly “insignificant,” what do people perceive about their contacts with these ties in their everyday experiences? Most of these questions remain unanswered or even unexplored, thus limiting the extent to which the strength of weak ties hypothesis can be applied. As Granovetter (1973, p.1372) noted, chance meetings or mutual friends serve to reactivate such weak ties, which, in turn, provide crucial information. As recent studies have revealed, active social networking in everyday life helps expand citizens’ connections with weak ties and participate more often in social life as effectively as joining formal organizations. Not only does such routine and all-encompassing networking help individuals obtain useful information, but it also enables citizens to polish their civic skills and to adopt specific political preferences (Denters et al., 2007; Andersen and Rossteutscher, 2007; Iglić and Fábregas, 2007; Walzer, 1989). Thus, it would be interesting to explore the extent to which weak ties may also matter in everyday social interactions, and how such effects of tie strength differ across various situations. 3. Contact as the Unit of Analysis 3.1. 
Relations as Aggregated Contacts Using a contact as the unit of analysis adds another level to the analysis of social networks, thus opening up more opportunities for examining the strength of weak ties in everyday life. Data about weak ties have been more difficult to collect than those about strong ties. Like most other empirical studies about social interactions, social network research has focused more on strong ties, not just for conceptual significance but also for convenience in data collection and construction. As Granovetter (1973, p.1366) observed, many commonly used research instruments do not measure the importance of weak ties. Even though researchers recognize the significance of examining weak ties in everyday life, they still face the challenge of collecting reliable data for rigorous tests. Researchers have designed network generators to help reconstruct network data. Most such generators, however, produce more information about strong ties within one’s immediate social circles. The name generator, for example, sharply limits how many network members a respondent is allowed to list, thus generating friends and family members who often maintain close ties or frequent contacts (Burt, 1984; Marsden, 1990, 2003). In contrast, both the position generator and the resource generator tend to reach larger networks and thus help identify more targets in a network’s outer layers (Lin and Dumin, 1986; Lin et al., 2001; van der Gaag and Snijders, 2005; van der Gaag et al., 2008). When more than one network member fits in a category, however, most survey respondents are still inclined to list the closest ties among these candidates, thus limiting our knowledge about weak ties (Fu, 2008). In addition, unaided recalls from survey respondents can be less reliable and highly selective, which also leads to a bias toward strong ties (de Sola Pool and Kochen, 1978, pp.19-21). To overcome the limitations of such instruments, it is imperative to collect network data that cover factual and exclusive ties. One of the few instruments that aims for such a goal is the contact diary, which tracks and records all the contacts that a diary keeper has had during a specific period of time. Like a time-use diary that helps build a proxy for the investment individuals make in their social roles (Stalker, 2008, p.291), the contact diary helps produce baseline data from everyday life, which enables network researchers to reconstruct actual and comprehensive personal networks (Fu, 2005, 2007; Gurevitch, 1961; Lonkila, 1999). Not only does such a baseline data set allow researchers to study personal networks at both tie and network levels, but it also greatly facilitates their examination of the intricacies of social networking, contact by contact. Understanding the basics of networking at the contact level represents a bottom-up approach similar to the “day reconstruction method” (Kahneman et al., 2004a,b), which explores how people perceive different events in their daily lives through either time-budget measurement or experience sampling. Both of these methods rely on actual events or encounters in everyday life, thus allowing researchers to investigate the “smallest-tightest” unit of social interactions in detail (Lofland, 1976, pp.27-28). Because interpersonal relations and social actions are built upon aggregated contacts, a lack of information about contacts would essentially limit our understanding of how a tie or network forms, sustains, and transforms. 
With such a bottom-up approach, therefore, researchers are able to distinguish and study situational factors contact by contact, within the same ties or relationships. 3.2. Contact Initiation As Homans (1974, pp.313-315) deliberated in his classic arguments, social exchanges often implicitly regulate who is supposed to initiate an interpersonal contact, particularly when the actors’ statuses are unequal. As a social norm in many societies, who initiates a contact carries important implications for behavioral outcomes. To extend this crucial line of inquiries about contact situations, we further examine the strength of weak ties by contact initiation. When an individual contacts someone in his or her personal network for special needs, the action may be intuitive or the outcome of careful calculation and deliberation, upon which the actor believes the action to be more likely to lead to success. Not all interpersonal contacts take place at actors’ will. People with strong ties may “meet voluntarily and in several contexts,” while contacts among weaker ties are often less voluntary and more specialized (Wellman and Tindall, 1993, p.70). Still, under certain circumstances people avoid having contacts as much as they can. Although such “avoidance relationships” play a role in minimizing negative impacts from unsolicited encounters (Goffman, 1967, p.15), “initiation of contact” often helps distinguish major circumstances in social interactions. Many individuals indeed “manipulate” their networks to achieve specific goals, either through a friend’s friends or other indirectly tied persons, two common types of weak ties. After all, weak ties are important “not only in ego’s manipulation of networks, but also in that they are channels through which ideas, influences, information socially distant from ego may reach them” (Granovetter, 1973, pp.1370-1371). Whether for instrumental or affective needs, everyday life is full of goal-oriented social interactions. To maximize the likelihood of success in social interactions, people often reach beyond their immediate social circles for better-suited targets. Therefore, self-initiated contacts are expected to bring about more significant outcomes in the first place, whereas contacts initiated by others may be less significant. Even though people often initiate contact to get help, under certain circumstances, individuals also take the initiative to offer assistance, for example, when close friends or kin members are desperately in need (Wellman and Tindall, 1993). The aforementioned presumptions leave us with two sets of questions to explore. First, to what extent do contacts with weakly tied others give an individual more significant gains than those with strong ties in everyday life? Second, how do such outcomes of contacts vary by contextual factors, such as contact initiation and the purposes of contacts? More specifically, when one takes the initiative to contact others, does the effect of tie strength differ from that of contacts initiated by others? If the weak ties argument holds, we would expect, in general, that contacts with weak ties in everyday life result in more significant gains. According to the principle of social exchange and interactions, such gains should be particularly significant when one initiates the contacts. Interpersonal contacts in everyday life vary by contents and purposes, however, which also affect the outcome of contacts. 
To take this factor into account, we further examine the relationship between tie strength and contact initiation along with another major contextual factor – whether a contact is for instrumental, affective, or other purposes.

4. Data and Measures

4.1. Participants and Key Variables

Data were drawn from 55 contact diaries collected in Taiwan. In early 2004, 21 respondents from a random sample (who completed a follow-up interview after a nationwide probability survey) volunteered to participate in our diary study; another 34 volunteers were recruited from survey fieldwork supervisors’ personal networks. Of the 55 informants who kept diaries for three consecutive months by recording all one-on-one interpersonal contacts (including anyone with whom they chatted, talked, or discussed matters, whether they personally knew the target person or not),\(^1\) 23 were males and 32 were females. The informants’ median age was 36 (ranging from 21 to 61); about 75% were married and 34.5% had a college education. They worked in various occupations and industries spread mainly across the service sector, although 13 were not in the job market. About 73% lived in the northern part of Taiwan, the only factor significantly over-represented in the sample. In sum, the informants are not representative of any particular population, but their diversified backgrounds provide variations for multivariate analyses.

--- \(^1\) The instruction in each contact diary reads, “Please record the following information for every contact that you have made today, including all kinds of one-on-one contacts such as saying hello, chatting, talking, meeting, or sending or receiving a message, that occurred face-to-face, over the phone, on the Internet, or by other means of communication.”

These 55 three-month diaries yielded a total of 104,361 contacts, thus providing comprehensive information about individuals’ daily interpersonal contacts. Each contact contained 28 variables, which covered contact situations, the demographic and socioeconomic background of the contacted person, and the relationship between the diary keeper (Ego) and each contacted person (Alter) at the time of contact. Among these variables, four are central to this study: familiarity between each pair of Ego and Alter, benefit from contact, contact initiation, and purposes of contact (Table 1). A fifth variable, kin (family and relatives) versus non-kin, distinguishes major types of relations and serves to screen out cases that are too skewed to be analyzed properly. (Table 1 about here)

Familiarity (the first variable) resembles “closeness,” which has been regarded as reflecting “the emotional intimacy of a relationship,” probably the best indicator of tie strength (Campbell and Lee, 1991; Granovetter, 1983; Marsden and Campbell, 1984; Mitchell, 1987). Instead of using “preferring to do something with,” “spending time together,” “frequency of contact,” or “type of relationships” (Granovetter, 1973, pp.1371-1376, 1983, p.205; Harvey, 2008), we asked the diary keeper to judge “how well did you know the person before this specific contact?” This variable measures the strength of ties and serves to differentiate weak from strong ties. For practical purposes in conducting our statistical analyses, we divide the original ordinal categories into either strong ties or weak ties. First, we pool (4) “know very well” (46.9%) and (3) “know somewhat well” (22.5%) into “strong ties,” and (2) “not very well” (11.3%) and (1) “not at all well” (19.3%) into “weak ties.” In alternative analyses, we consider only “know very well” as “strong ties,” and compare the findings with other dichotomous measures. To minimize possible biases from outliers, we exclude those cases where the diary keeper made only one contact with either strong or weak ties.

The second variable, benefit from contact, measures the extent to which Ego regarded each specific contact as beneficial after it occurred. “Benefit” refers to how much Ego gained from the contact or how important Ego felt about each contact, thus reflecting the subjective evaluation of contact. We also pool the original four ordinal categories into two: about 55.7% of all contacts are recoded as “beneficial,” including (4) “very beneficial” and (3) “somewhat beneficial” to Ego, and 44.3% are recoded as “not beneficial,” including (2) “not very beneficial” and (1) “not at all beneficial.” The third variable helps identify who initiated the contact, which lets us examine the strength of weak ties under different contact situations. Of all contacts, about 40.3% were initiated by Ego, 28.5% by Alter, and 31.2% by both parties, casual encounters, or introduced by others. The fourth variable distinguishes between “instrumental” and “affective” contacts. Instrumental contacts refer to those made for work (24.2%) or for business (13.1%), and affective contacts are either for leisure (7.4%) or social reasons (13.1%). The variable allows us to examine how the effect of tie strength varies by the purposes of contacts. Other contacts involved purposes that were not clearly identifiable, such as daily routine (31.2%), multiple purposes (4.6%), and others (6.4%).

4.2. Comparing Familiarity with Other Indicators of Tie Strength

To confirm that the subjective evaluation of familiarity corresponds well with other possible measures of tie strength, we cross-examine the association between the degree of familiarity and two sets of similar variables. The first set is the frequencies of contact between Ego and Alter, both face-to-face and by phone.\(^2\) The second set refers to years of acquaintanceship between Ego and Alter, in five ordinal categories, from (1) “less than a year” to (5) “more than 20 years”. Despite rare occasions when Ego has many contacts with a particular unfamiliar alter, the results of both the Chi-square test (including likelihood ratios for a better approximation when the number of observations is large) and the Linear Trend test (for ordinal variables) of independence show that the degree of familiarity is positively associated with the three alternative measures. All the test results are significant at the 0.000 level (see Table 2). Our results are consistent with those from earlier studies of interpersonal contacts, particularly Wellman and Tindall’s (1993) findings in Canada, which showed that those with strong ties talked to each other on the telephone much more often than those with weak ties. Even though these “objective” alternative measures may not reflect voluntary relationships and the actual strength of ties, such positive and strong associations help justify using the degree of familiarity as a measure of tie strength. As with other measures of tie strength, familiarity may carry complex implications.
On the one hand, frequent contact may not necessarily foster friendship,\(^2\) and familiarity may even breed contempt, particularly among unsolicited primary relations. On the other hand, familiarity may engender trust in everyday life (Small, 2009, p.249). In sum, our measure of tie strength provides an alternative that somehow captures both objective and subjective evaluations of a tie.\(^3\) We use this intuitive evaluation of familiarity to define tie strength for two further, practical reasons. First, Ego might contact the same Alter by more than one means (e.g., face-to-face and phone) on the same day, yet record each means of contact in the diary log as a separate and independent entry. Second, research participants recorded the contact diary for only three months, which was a short time compared with the actual length of acquaintanceship.

--- \(^2\) Another measure, the frequency of mail or email contact, is excluded from the analysis because its distribution is too skewed (about 82.3% of such contacts are with someone Ego has never contacted, with a skewness coefficient of 2.86).

\(^3\) A direct translation of the term “closeness” in Chinese would have carried strong implications for emotional attachment in Taiwanese society, which would be further contaminated by the gender issue (e.g., it would be too sensitive for many to admit “very close” cross-gender relations). As a result, we used the Chinese term “shou” in the diaries, which literally indicated the degree of familiarity in relationships. In practice, however, when a diary keeper judged that “we know each other very well,” it often referred to a significant relation, regardless of how close they might feel to each other. In addition, when a previously familiar relationship deteriorated, most actors would contact each other less often, the tie strength would weaken, and the diary keepers would downgrade the relationship on the scale of familiarity.

5. Benefit from Contacts with Weak Ties

Having defined our measure of tie strength, we explore our data set by examining whether strong or weak ties bring about more beneficial results to Ego. Figure 1 shows the correlation between the probabilities of Ego’s beneficial contacts obtained from strong ties and weak ties. The intercept and slope of the regression line shown in Figure 1 are 0.224 and 0.623, respectively. The \(R^2\) value is 0.519, showing a strong and positive correlation between the beneficial results Ego obtained from contacts with strong ties and those obtained from contacts with weak ties. The positive intercept suggests that when the probability of beneficial results obtained from contacts with strong ties is relatively low, the probability of such results with weak ties is higher. Thus, at first glance, it appears that individual differences rather than the strength of ties may decide to what extent Ego is able to gain from such contacts in everyday life. (Figure 1 about here)

Our data set contains a large number of heterogeneous entities that comprise a complex structure requiring careful examination and screening to prevent potential biases, such as those related to Simpson’s paradox. We first break down all contacts by types of relationships (that is, kin versus non-kin ties). The number of contacts with weakly tied kin members turns out to be very small (only 404, or 1.7%, of 22,787 kin contacts are with weakly-tied family and relatives). In contrast, 31,073, or 38.8%, of 80,038 contacts with non-kin alters are rated as contacts with weak ties. It could be true that our informants had indeed very limited contact with unfamiliar family and relatives. Alternatively, residents in Taiwan may almost always treat extended family members as strong ties, unlike those in many Western societies, which suggests that the significance of kin may vary from culture to culture. To avoid any biases that such an uneven distribution (or distinctive social norms) may cause, thus, we focus more on contacts with non-kin ties. Because the effects of tie strength may be confounded by factors embedded in routine daily contacts (such as when family or coworkers have contact with each other without specific purposes), we also try to limit our analysis to non-routine contacts in the next step. With detailed information about such contact situations, then, we are able to distinguish and compare how weak ties may bring about positive outcomes between Ego-initiated and Alter-initiated contacts under various circumstances. As the descriptive statistics in Table 3 show, while 58.8% of all Ego-initiated contacts with strong ties turn out to be beneficial, 62.3% of those with weak ties benefit Ego. The Alter-initiated contacts with weak ties, however, tend to be less beneficial to Ego. Non-routine contacts are more beneficial overall, but the link between benefit and tie strength differs by contact initiation. (Table 3 about here)

In order to untangle these potential contextual variations, we introduce in the following section a method that helps clarify whether strong or weak ties result in more fruitful returns. We tailor our methodology to the format of our data set, as conventional methods might be unfitting for our purpose. Since we have broken down relationships into contacts, the unit of analysis becomes contact rather than individual or interpersonal tie.

6. Testing the Strength of Weak Ties by Contact Initiation

To test whether the contacts with strong or weak ties bring about more beneficial results to Ego, we run a series of significance tests for the differences between the two types of contacts. The variable on which our test statistics are based is the probability for Ego to benefit from a contact. To model this variable of probability, we make the following assumptions. (1) For every Ego, the total number of alters available for making a contact is fixed. (2) All of the contacts can be classified into $K$ different types. (3) For each contact of the same type between an identical pair of Ego and Alter, the probability for Ego to benefit is constant. (4) For any two contacts made $t$ days apart by the same Ego, the interdependence between the two contacts decreases to near zero as $t$ becomes very large. In Assumption (1) we denote the number of alters by $A_i$ for Ego $i$. The first three assumptions serve to ensure the constancy of the underlying probability structures. The last assumption aims to guarantee that the outcomes of all contacts made by the same Ego will not be too strongly correlated with one another, so that certain parameters of interest can be consistently estimated. To simplify notation, the type-$k$ contact made by Ego $i$ with Alter $j$ is abbreviated as contact $(i, j, k)$. Let $\mu_{i,k}$ denote the probability for a type-$k$ contact made by Ego $i$ being beneficial, and $\mu_{i,j,k}$ the probability for contact $(i, j, k)$ being beneficial.
From the assumptions above, it follows that for Ego $i$ $$\mu_{i,k} = \sum_{j=1}^{A_i} w_{j,k}^i \mu_{i,j,k} \text{ with } k = 1, 2, ..., K, \quad (2)$$ where $w_{j,k}^i$ denotes the proportion of contact $(i, j, k)$’s in the population of contacts made by Ego $i$ to all alters. To classify the contacts into the types that are particularly relevant to our analysis, we use five indicator functions defined in accordance with the five variables introduced in Section 4.1. An indicator function or a product of some of the five indicator functions specifies the type to which a contact belongs. Then, a natural estimate for $\mu_{i,k}$ is the ratio of the number of Ego $i$’s beneficial type-$k$ contacts to the number of all of Ego $i$’s type-$k$ contacts. Under Assumptions (3) and (4) and some mild regularity conditions, the estimate can be shown to be consistent if every Ego has made many contacts over a long period of time. Next, for Ego $i$ we denote the probability of beneficial results obtained from the contacts with strong ties by $S_i$ and that obtained from the contacts with weak ties by $W_i$. In reference to the formulation of model (2), $K$ is 2, that is, strong-tie or weak-tie contact; and for Ego $i$, $\mu_{i,k}$ here is $S_i$ or $W_i$. For all $i$ define $$D_i = S_i - W_i.$$ The set $\{D_i, 1 \le i \le 55\}$ is regarded as an unobserved sample from a distribution $D(\theta, \nu^2)$, with mean $\theta$ and variance $\nu^2$. In other words, each Ego is independently assigned a prior value $D_i$ sampled from the fixed $D(\theta, \nu^2)$, which represents the distribution of the differences between an ego’s gains $S_i$ obtained from the contacts with strong ties and the gains $W_i$ from weak ties. Our goal is to test whether the expectation $\theta$ of the distribution is zero or not, or formally, $$H_0 : \theta = 0 \quad \text{vs.} \quad H_1 : \theta \neq 0.$$ Let $\hat{S}_i = \frac{I^S_i}{M^S_i}$ denote the empirical probability of beneficial returns brought by the contacts with strong ties; here $M^S_i$ is the total number of contacts with strong ties made by Ego $i$ and $I^S_i$ is the number of those contacts that result in beneficial returns. Analogously, we define $\hat{W}_i = \frac{I^W_i}{M^W_i}$ for the probability of beneficial results obtained through the contacts with weak ties. We then define \[ \hat{D}_i = \hat{S}_i - \hat{W}_i = D_i + \varepsilon_i, \] where \(\varepsilon_i = \hat{D}_i - D_i\) stands for the approximation error for Ego \(i\). We assume that the \(\varepsilon_i\)’s are independent with zero expectation and finite variance, but do not necessarily have the same distribution. Note that under \(H_0\), \(E D_i = 0\). The test statistic we use is \[ Z_n = \frac{\sum_{i=1}^{n} \hat{D}_i}{\sigma \sqrt{n}}, \] where \(n\) is the total number of Egos. Denote the variance of \(\hat{D}_i\) by \(\sigma_i^2\) (under the null hypothesis) and assume \[ \lim_{n \to \infty} \frac{\sigma_1^2 + \sigma_2^2 + \ldots + \sigma_n^2}{n} = \sigma^2 < \infty, \quad (3) \] which essentially says that the \(\sigma_i^2\)’s center closely around a fixed average \(\sigma^2\). Then, by equation (3) and Lyapunov’s central limit theorem (see Resnick, 1999, pp.319-321), \(Z_n\) converges to the standard normal distribution \(N(0, 1)\) as \(n\) tends to infinity. It follows from equation (3) that the sample variance \(\frac{1}{n}\sum_{i=1}^{n} \hat{D}_i^2\) serves as a consistent estimate of the limiting variance \(\sigma^2\), and we will later use the sample variance in place of \(\sigma^2\) for inference.
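To make the estimation step concrete, the following is a minimal sketch of how $\hat{S}_i$, $\hat{W}_i$, $\hat{D}_i$ and $Z_n$ could be computed from a per-contact table. This is not the authors' code: the column names (`ego_id`, `strong_tie`, `beneficial`), the use of Python with pandas, and the handling of Egos with too few contacts are assumptions made only for illustration.

```python
# Minimal sketch (not the authors' code) of the per-Ego estimates and Z_n.
# Assumed layout: one row per contact with columns
#   ego_id      -- identifier of the diary keeper (Ego)
#   strong_tie  -- 1 if Ego knew Alter somewhat/very well, 0 otherwise
#   beneficial  -- 1 if Ego rated the contact as beneficial, 0 otherwise
import numpy as np
import pandas as pd

def z_statistic(contacts: pd.DataFrame):
    """Return (Z_n, d_hat), where d_hat[i] = S_hat_i - W_hat_i for each Ego."""
    d_hat = []
    for _, ego_contacts in contacts.groupby("ego_id"):
        strong = ego_contacts.loc[ego_contacts["strong_tie"] == 1, "beneficial"]
        weak = ego_contacts.loc[ego_contacts["strong_tie"] == 0, "beneficial"]
        # Mirror the outlier rule in Section 4.1: skip Egos with at most one
        # strong-tie or one weak-tie contact.
        if len(strong) <= 1 or len(weak) <= 1:
            continue
        d_hat.append(strong.mean() - weak.mean())
    d_hat = np.asarray(d_hat)
    n = len(d_hat)
    sigma2_hat = np.mean(d_hat ** 2)        # consistent for sigma^2 under H0
    z_n = d_hat.sum() / np.sqrt(n * sigma2_hat)
    return z_n, d_hat
```

Under the null hypothesis, $Z_n$ is then compared with the standard normal limit derived above, from which a one-sided $p$-value follows directly.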
By taking into account the heteroscedasticity among \(\varepsilon_i\)'s, we also employ the bootstrap resampling method (Efron, 1982), with 500 bootstrap samples for each test, to estimate \(\sigma^2\). That is, for each Ego, we first calculate the difference between the probabilities of beneficial returns obtained through the contacts with strong ties and weak ties. Then we draw 55 random samples with replacement of such a difference \(\hat{D}_i\) and repeat this process 500 times. The bootstrap resampling method enables us to generate bootstrap replications \(\hat{D}_i^*\), and then use these replications to decide the critical values for our test. As stated earlier, we are interested in understanding human motivation and behavior by examining contact initiation in everyday life. So we select contacts initiated by Ego and those initiated by Alter from our database and categorize them into two sub-groups, to draw comparisons with those contacts in which neither Ego nor Alter initiated the interaction. We have also mentioned that very few family members and relatives are considered as weak ties. So within each sub-group of contact initiation, we further compare between those contacts including and excluding kin members. From within each sub-group, we compute the probabilities of benefits obtained through contacts with strong ties and those with weak ties for each Ego, and we then test the difference between the two with the bootstrap resampling method. The results from bootstrap replications appear in Figure 2. The values of Ego-initiated contacts are by and large negative, indicating that weak ties appear to be more beneficial than strong ties. In contrast, the values of Alter-initiated contacts are generally positive, showing that strong ties tend to be more beneficial when Ego is contacted by Alter. The results could vary slightly each time because of the nature of the bootstrap resampling method. None of the results, however, has altered the acceptance or rejection of our null hypothesis, since the $p$-values remain consistent each time. As our method reflects an absolute difference between beneficial contacts obtained through strong and weak ties, we also adopt the Wilcoxon signed ranks test to detect the relative difference between the two. The results of both our model and the Wilcoxon signed ranks test are shown in Table 4. As shown in Figure 2, the probabilities of obtaining beneficial returns through contacts with strong and weak ties clearly differ by contact initiation. Although Ego-initiated contacts tend to benefit more from weak ties, Alter-initiated contacts are more beneficial to Ego when the Ego-Alter ties are strong. In other words, when Ego takes the initiative to contact another person, unfamiliar acquaintances tend to be more helpful; whereas when the other party initiates a one-on-one contact, it is those with whom Ego is more familiar that are potentially more helpful. Such differences are statistically significant, as Table 4 shows. The first two rows (all contacts) of our model indicate that, without taking contact initiation into account, the differences between the gains obtained through strong and weak ties are not significant. This finding is consistent with the result revealed in Figure 1, even though the negative $Z$-scores suggest that weak ties may bring about slightly more fruitful outcomes. When we test the effect of weak ties by contact initiation, however, the results clearly vary. 
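Before turning to the context-specific results, the bootstrap step and the Wilcoxon signed-ranks check described above could be sketched as follows. The sketch reuses the hypothetical `d_hat` vector from the previous listing; the 500 replications follow the text, but the random seed, the centring of the replications, and the one-sided $p$-value rule are our own illustrative assumptions, not the authors' exact procedure.

```python
# Sketch (under the assumptions stated above) of the bootstrap test on the
# per-Ego differences d_hat, and of the Wilcoxon signed-ranks check.
import numpy as np
from scipy import stats

def bootstrap_test(d_hat, n_boot=500, seed=0):
    """Resample the D_hat_i with replacement and compare the observed mean
    difference with a bootstrap null distribution centred at zero."""
    rng = np.random.default_rng(seed)
    n = len(d_hat)
    boot_means = np.array([
        rng.choice(d_hat, size=n, replace=True).mean() for _ in range(n_boot)
    ])
    observed = d_hat.mean()
    null_means = boot_means - observed      # shift the replications to the null
    if observed < 0:                        # weak ties look more beneficial
        p_one_sided = np.mean(null_means <= observed)
    else:                                   # strong ties look more beneficial
        p_one_sided = np.mean(null_means >= observed)
    return observed, p_one_sided

def wilcoxon_check(d_hat):
    """Rank-based check on the same differences (relative rather than absolute)."""
    return stats.wilcoxon(d_hat)
```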
For all Ego-initiated contacts, the negative $Z$-scores\(^4\) ($P = 0.005$ for all contacts) indicate that weak ties result in more positive gains than strong ties. Such differences remain significant ($P = 0.000$) after we exclude kin members that may cause biases because of the skewed distribution of tie strength, and also by counting only non-routine non-kin contacts ($P = 0.007$, thus ruling out contacts such as sending messages or chatting for no particular reason). In contrast, when Alter initiates a contact, it is strong rather than weak ties that yield more rewarding results for Ego ($P = 0.023$, Table 4). Such an effect of tie strength remains significant after excluding routine contacts as well as contacts with family and relatives ($P = 0.028$).

--- \(^4\) As our model defines the difference between strong and weak ties as $S_i - W_i$, negative $Z$-scores suggest that the probabilities of beneficial contacts obtained through weak ties are larger than those obtained through strong ties. As we are interested in testing which type of tie is more beneficial, we mainly look at one-sided tests.

As confirmed by the same results from the Wilcoxon signed ranks test, we can assume that the effects of tie strength vary by contact initiation in both absolute and relative terms. Such variations by contact initiation change after we further take into account the purposes of contact, however. If Ego contacts Alter for instrumental purposes (for work and business), weak ties remain slightly more beneficial, but the difference is only marginal ($P = 0.080$). When Ego-initiated contacts are affective (for leisure and social purposes), in contrast, strong ties turn out to benefit Ego more ($P = 0.007$). As for Alter-initiated contacts, the effect of tie strength has little to do with instrumental actions ($P = 0.166$), while strong ties are also highly beneficial to Ego among affective interactions ($P = 0.002$). In other words, weak ties may be beneficial only when Ego contacts someone for instrumental reasons, and such benefits are marginal. When contacts are for leisure or social purposes, strong ties bring about more beneficial outcomes, regardless of whether Ego or Alter starts the contact. As with other measures and operational definitions, the significance of tie strength may differ by varying the categorization of measurements. For example, when we run the same analysis using only “know very well” as “strong ties” (the measure of “weak ties” remains the same), the results change slightly. The weak tie effects among Ego-initiated instrumental contacts further diminish ($Z = -0.232, P = 0.409$) but the strong tie effects among Alter-initiated instrumental contacts become significant ($Z = 1.955, P = 0.025$). Thus, compared to those “not very well” or “not at all well” known, the really strong ties turn out to be very helpful when they contact Ego, not just for affective but also for instrumental purposes.

7. Discussions and Conclusions

The findings shed new light on how the effects of weak ties in everyday life vary under different circumstances. Two such circumstances, contact initiation and purpose of contact, prove distinctive and critical: When Ego contacts someone, the returns on the contact will be greater if the target is unfamiliar. This effect remains noticeable when we exclude all contacts with family and relatives, as well as routine contacts. When someone else contacts Ego instead, Ego tends to benefit more from strong ties than weak ties.
Such relationships between contact initiation and tie strength, however, are further contingent upon the purpose of contact. The benefits of weak ties are positive but marginal among Ego-initiated instrumental contacts. When Alter initiates such instrumental contacts, in contrast, it is those who are very well known that yield most beneficial results to Ego. Furthermore, strong ties are apparently more rewarding in bringing about affective returns, whether Ego or Alter initiates a contact. The differences in returns between Ego-initiated and Alter-initiated contacts, as well as between instrumental and affective contacts, suggest that researchers should pay more attention to such contact situations as initiation and purpose of interactions, which represent circumstantial forces that help explain the intricacies of reciprocity and exchange in social interactions. The findings also call for interpretations based on behavior motivation or purposive social actions. For example, when Ego desperately needs specific help or critical information through personal assistance, his or her unfamiliar acquaintances are less likely to know about such needs, nor would they offer help by contacting Ego. Thus, Ego would need to take the initiative and go ask weak ties for help. For such calculated actions with specific purposes in mind, Ego-initiated contacts with weak ties ought to be more fruitful. Because our contact diaries asked informants to record contacts with anyone, however, an instrumental contact with unknown professional (e.g., when going to the doctor) may turn out to be very beneficial. Future studies may need to take such potential biases into consideration. In contrast, strong ties would be more likely to learn about Ego’s needs, and thus reach out and give Ego a hand. As illustrated in earlier studies (Wellman and Tindall, 1993), those strongly tied others often make an effort to offer not only emotional support but also instrumental assistance when they know their close friends are in urgent need. In that regard, the contacts that strong ties initiate normally become more beneficial to Ego. The study of weak ties has been overwhelmingly limited to specific life events or social phenomena. Our research based on contact diaries complements such a tendency by exploring the strength of weak ties in everyday life. Whether for affective or instrumental needs, everyday life is full of goal-oriented social interactions. To maximize the likelihood of success in social actions, people often reach beyond their immediate social circles to ask better-suited targets for help. Therefore, Ego-initiated contacts are expected to bring about more significant outcomes. By taking into account the interaction between contact initiation and the purpose of contact, our analyses help reveal contextual variations in the extent to which social outcomes vary by tie strength in everyday life. Our approach allows us to examine such contact situations, as well as tie characteristics, contact by contact for 3 months. For example, assumption (3) given in Section 6 allows the strength of ties to vary over time. The probability remains unchanged as long as the contact is of the same type and involves the identical pair of Ego and Alter. More extensive and longitudinal data at contact, tie, and individual levels, over a longer time span, would facilitate further analyses of dynamic social networks. 
In particular, it would be more feasible to identify how tie strength between Ego and Alter changes over time if some of the contact diaries could be replicated a few years apart. As Ego’s benefits from contacts with Alter may fluctuate from time to time, it would be intriguing to further examine to what extent the changing tie strength with Alter plays a role in inducing the benefits. Alternatively, a tie could also strengthen or weaken as a result of gaining or losing from a series of contacts with Alter. Conceptually, daily lives are mostly so eventless that it is hard to imagine why and how routine contacts with weakly tied acquaintances could bring about any fruitful outcomes. As recent studies have also explored how weak ties are linked to creativity, as well as how structural holes lead to new ideas in the business world (Baer, 2010; Burt, 2004), future studies also should explore other contexts in everyday life where the strength of weak ties has been hidden. References Table 1 Summary of Variables <table> <thead> <tr> <th>A. Ties with the Contacted Person</th> <th>Mean</th> <th>S.D.</th> <th>Min</th> <th>Max</th> </tr> </thead> <tbody> <tr> <td>Familiarity (how well do you know the person?)</td> <td>2.97</td> <td>1.16</td> <td>1</td> <td>4</td> </tr> <tr> <td>(1) not at all well (2) not very well (3) somewhat well (4) very well</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Frequencies of contact</td> <td>3.01</td> <td>1.11</td> <td>1</td> <td>4</td> </tr> <tr> <td>(1) never (2) seldom (3) sometimes (4) often</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>face-to-face</td> <td>2.13</td> <td>1.07</td> <td>1</td> <td>4</td> </tr> <tr> <td>phone</td> <td>1.24</td> <td>0.59</td> <td>1</td> <td>4</td> </tr> <tr> <td>mail/email</td> <td>1.99</td> <td>1.29</td> <td>0</td> <td>4</td> </tr> <tr> <td>Length of acquaintanceship</td> <td>1.99</td> <td>1.29</td> <td>0</td> <td>4</td> </tr> <tr> <td>(0) unknown (1) &lt; 1 year (2) 1-4 years (3) 5-19 years (4) 20+ years</td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> | B. Contact Situations | | |-----------------------|------|------|-----|-----| | Benefit from contact | 2.58 | 0.95 | 1 | 4 | | (1) not at all (2) not very (3) somewhat (4) very beneficial | | Initiation | | | by ego | 0.403 | 0.491 | 0 | 1 | | by alter | 0.285 | 0.452 | 0 | 1 | | others | 0.312 | 0.463 | 0 | 1 | | Purposes | | | Instrumental (work & business) | 0.373 | 0.484 | 0 | 1 | | Affective (leisure & social) | 0.205 | 0.404 | 0 | 1 | | Neither (routine, multiple, others) | 0.422 | 0.494 | 0 | 1 | Table 2 Crosstabs of Familiarity and Other Measures of Tie Strength, with Results of $\chi^2$ and Linear Trend Tests <table> <thead> <tr> <th>face-to-face contact</th> <th>how well do you know the person?</th> <th>not at all</th> <th>not well</th> <th>sw. 
well</th> <th>very well</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>never</td> <td>15,893 (96.6)</td> <td>403 (2.5)</td> <td>97 (0.6)</td> <td>54 (0.3)</td> <td>16,447 (100.0)</td> <td></td> </tr> <tr> <td>seldom</td> <td>2,268 (17.3)</td> <td>4,210 (32.1)</td> <td>3,492 (26.6)</td> <td>3,142 (24.0)</td> <td>13,112 (100.0)</td> <td></td> </tr> <tr> <td>sometimes</td> <td>1,165 (4.4)</td> <td>3,417 (12.9)</td> <td>9,976 (37.7)</td> <td>11,896 (45.0)</td> <td>26,454 (100.0)</td> <td></td> </tr> <tr> <td>often</td> <td>531 (1.1)</td> <td>3,574 (7.6)</td> <td>9,598 (20.5)</td> <td>33,128 (70.8)</td> <td>46,831 (100.0)</td> <td></td> </tr> <tr> <td>Total</td> <td>19,857 (19.3)</td> <td>11,604 (11.3)</td> <td>23,163 (22.5)</td> <td>48,220 (46.9)</td> <td>102,844 (100.0)</td> <td></td> </tr> </tbody> </table> Results of $\chi^2$ Test: value=89,988.7, DF=9, likelihood ratio=80,377.4 ($P = .000$) Results of Linear Trend Test: value=56,905.5 ($P = .000$) <table> <thead> <tr> <th>phone contact</th> <th>how well do you know the person?</th> <th>not at all</th> <th>not well</th> <th>sw. well</th> <th>very well</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>never</td> <td>18,956 (48.3)</td> <td>6,404 (16.4)</td> <td>7,545 (19.3)</td> <td>6,263 (16.0)</td> <td>39,168 (100.0)</td> <td></td> </tr> <tr> <td>seldom</td> <td>698 (2.8)</td> <td>3,619 (14.5)</td> <td>8,243 (33.0)</td> <td>12,459 (49.7)</td> <td>25,019 (100.0)</td> <td></td> </tr> <tr> <td>sometimes</td> <td>174 (0.7)</td> <td>1,239 (5.1)</td> <td>6,398 (26.1)</td> <td>16,684 (68.1)</td> <td>24,495 (100.0)</td> <td></td> </tr> <tr> <td>often</td> <td>29 (0.2)</td> <td>266 (1.9)</td> <td>852 (6.2)</td> <td>12,687 (91.7)</td> <td>13,834 (100.0)</td> <td></td> </tr> <tr> <td>Total</td> <td>19,857 (19.4)</td> <td>11,528 (11.3)</td> <td>23,038 (22.5)</td> <td>48,093 (46.8)</td> <td>102,516 (100.0)</td> <td></td> </tr> </tbody> </table> Results of $\chi^2$ Test: value=50,130.9, DF=9, likelihood ratio=55,453.2 ($P = .000$) Results of Linear Trend Test: value=38,519.5 ($P = .000$) <table> <thead> <tr> <th>duration of acquaintance</th> <th>how well do you know the person?</th> <th>not at all</th> <th>not well</th> <th>sw. well</th> <th>very well</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>16,650 (96.7)</td> <td>492 (2.9)</td> <td>51 (0.3)</td> <td>13 (0.1)</td> <td>17,206 (100.0)</td> <td></td> </tr> <tr> <td>&lt; 1 year</td> <td>1,949 (10.6)</td> <td>6,084 (33.0)</td> <td>6,622 (36.0)</td> <td>3,755 (20.4)</td> <td>18,450 (100.0)</td> <td></td> </tr> <tr> <td>1-4 years</td> <td>804 (2.9)</td> <td>3,548 (13.0)</td> <td>10,665 (39.0)</td> <td>12,364 (45.1)</td> <td>27,381 (100.0)</td> <td></td> </tr> <tr> <td>5-19 years</td> <td>277 (1.1)</td> <td>1,354 (5.2)</td> <td>4,952 (19.2)</td> <td>19,255 (74.5)</td> <td>25,838 (100.0)</td> <td></td> </tr> <tr> <td>&gt; 20 years</td> <td>177 (1.3)</td> <td>143 (1.0)</td> <td>839 (6.0)</td> <td>12,836 (91.7)</td> <td>13,995 (100.0)</td> <td></td> </tr> <tr> <td>Total</td> <td>19,857 (19.3)</td> <td>11,621 (11.3)</td> <td>23,169 (22.5)</td> <td>48,223 (46.9)</td> <td>102,870 (100.0)</td> <td></td> </tr> </tbody> </table> Results of $\chi^2$ Test: value=107,235.9, DF=12, likelihood ratio=97,061.8 ($P = .000$) Results of Linear Trend Test: value=58,657.8 ($P = .000$) Note: Percentages are in parentheses. 
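As a rough illustration of the tests reported under Table 2, the Pearson chi-square, likelihood-ratio, and linear-trend statistics can be recomputed from the published face-to-face crosstab. The sketch below uses numpy and scipy; computing the linear-trend statistic as $M^2 = (N-1)r^2$ with equally spaced ordinal scores is only our assumption about the exact variant used, so the values it reproduces may differ slightly from those reported.

```python
# Recompute the independence tests for the face-to-face panel of Table 2.
# The counts are copied from the table; the linear-trend variant is assumed.
import numpy as np
from scipy import stats

counts = np.array([
    [15893,   403,    97,    54],   # never
    [ 2268,  4210,  3492,  3142],   # seldom
    [ 1165,  3417,  9976, 11896],   # sometimes
    [  531,  3574,  9598, 33128],   # often
])

chi2, p_chi2, dof, _ = stats.chi2_contingency(counts)
g2, p_g2, _, _ = stats.chi2_contingency(counts, lambda_="log-likelihood")

# Linear-by-linear association: weighted correlation of ordinal row/column scores.
row_scores = np.repeat(np.arange(1, 5), 4).astype(float)
col_scores = np.tile(np.arange(1, 5), 4).astype(float)
w = counts.ravel().astype(float)
n_total = w.sum()
mr = np.average(row_scores, weights=w)
mc = np.average(col_scores, weights=w)
cov = np.average((row_scores - mr) * (col_scores - mc), weights=w)
var_r = np.average((row_scores - mr) ** 2, weights=w)
var_c = np.average((col_scores - mc) ** 2, weights=w)
m2 = (n_total - 1) * cov ** 2 / (var_r * var_c)   # M^2 = (N-1) * r^2, df = 1
p_trend = stats.chi2.sf(m2, df=1)

print(f"Pearson chi2 = {chi2:.1f}, likelihood ratio = {g2:.1f}, linear trend = {m2:.1f}")
```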
Table 3 Types of Contacts by Tie Strength and Contact Initiation <table> <thead> <tr> <th></th> <th>Strong Ties</th> <th>Weak Ties</th> </tr> </thead> <tbody> <tr> <td></td> <td>Ego</td> <td></td> </tr> <tr> <td></td> <td>All contacts</td> <td>26,869 (15,797, 58.8%)</td> </tr> <tr> <td></td> <td>Non-routine contacts</td> <td>15,941 (10,601, 66.5%)</td> </tr> <tr> <td></td> <td>Alter</td> <td></td> </tr> <tr> <td></td> <td>All contacts</td> <td>21,326 (11,757, 55.1%)</td> </tr> <tr> <td></td> <td>Non-routine contacts</td> <td>14,767 (8,905, 60.3%)</td> </tr> </tbody> </table> Note: The number in each parenthesis is the number of beneficial contacts followed by its proportion. Table 4 Results of Statistical Test for Beneficial Returns Obtained from Contacts with Strong and Weak Ties <table> <thead> <tr> <th>Contact Initiation</th> <th>Ties &amp; Contexts</th> <th>Our Model</th> <th>Wilcoxon Test</th> </tr> </thead> <tbody> <tr> <td></td> <td>Z</td> <td>P-value</td> <td>Z</td> </tr> <tr> <td>Kin &amp; nonkin</td> <td>-0.488</td> <td>0.313(0.626)</td> <td>0.461</td> </tr> <tr> <td>All Contacts</td> <td>-1.455</td> <td>0.073(0.146)</td> <td>-0.310</td> </tr> <tr> <td>Non-routine nonkin</td> <td>-1.354</td> <td>0.088(0.176)</td> <td>-0.427</td> </tr> <tr> <td>Instrumental</td> <td>-1.080</td> <td>0.141(0.282)</td> <td>-0.865</td> </tr> <tr> <td>Affective</td> <td>3.718</td> <td>0.000***</td> <td>3.715</td> </tr> <tr> <td>Kin &amp; nonkin</td> <td>-2.598</td> <td>0.005**(0.010**)</td> <td>-1.860</td> </tr> <tr> <td>Ego-Initiated</td> <td>-3.261</td> <td>0.000***</td> <td>-2.681</td> </tr> <tr> <td>Nonkin</td> <td>-2.444</td> <td>0.007**(0.014*)</td> <td>-1.692</td> </tr> <tr> <td>Non-routine nonkin</td> <td>-1.402</td> <td>0.080(0.160)</td> <td>-0.615</td> </tr> <tr> <td>Instrumental</td> <td>-1.402</td> <td>0.080(0.160)</td> <td>-0.615</td> </tr> <tr> <td>Affective</td> <td>2.479</td> <td>0.007**(0.013*)</td> <td>1.603</td> </tr> <tr> <td>Kin &amp; nonkin</td> <td>1.997</td> <td>0.023*(0.046*)</td> <td>2.145</td> </tr> <tr> <td>Alter-Initiated</td> <td>1.679</td> <td>0.047*(0.093)</td> <td>1.877</td> </tr> <tr> <td>Nonkin</td> <td>1.910</td> <td>0.028*(0.056)</td> <td>2.086</td> </tr> <tr> <td>Non-routine nonkin</td> <td>0.971</td> <td>0.166(0.332)</td> <td>0.728</td> </tr> <tr> <td>Instrumental</td> <td>2.941</td> <td>0.002**(0.003**)</td> <td>3.023</td> </tr> </tbody> </table> Note: *p < .05; **p < .01; ***p < .001. Numbers in parentheses are 2-tailed p-values. Figure Captions: Fig. 1. Probability of Beneficial Contacts from Strong Ties and Weak Ties Fig. 2. Bootstrap Distributions for Probability of Beneficial Contacts from Strong Ties and Weak Ties Acknowledgements This study was supported by research grants from both National Science Council (grants no. NSC95-2412-H-001-010-SSS, NSC100-2410-H-001-110-MY3) and Academia Sinica (grant no. AS-100-TP2-C01), Taiwan. An earlier version of the paper was presented at the 106th American Sociological Association Annual Meeting. We thank Yen-Sheng Chiang, Kun-Lin Kuo, Lung-An Li, Wen-Shan Yang, Tso-Jung Yen, and two anonymous reviewers of this journal for valuable comments and suggestions.
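Table 4 and Fig. 2 rest on a bootstrap comparison of the probability of a beneficial contact from strong versus weak ties, checked against a Wilcoxon rank-sum test. The diary data are not reproduced here, so the sketch below uses hypothetical 0/1 benefit indicators with assumed rates; it only illustrates the general logic of such a comparison, not the authors' actual model, and it assumes the Wilcoxon test in Table 4 is the rank-sum variant.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)

# Hypothetical 0/1 indicators of whether each contact was beneficial
# (stand-ins for contacts with strong and weak ties; rates are assumed).
strong = rng.binomial(1, 0.59, size=5000)
weak = rng.binomial(1, 0.62, size=1500)

def boot_proportions(x, n_boot=10_000):
    """Bootstrap distribution of the proportion of beneficial contacts."""
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    return x[idx].mean(axis=1)

p_strong = boot_proportions(strong)
p_weak = boot_proportions(weak)
diff = p_weak - p_strong

# One-tailed bootstrap p-value for "weak ties are no more beneficial than strong ties";
# doubling it gives the 2-tailed value reported in parentheses in Table 4.
p_boot = (diff <= 0).mean()

# Wilcoxon rank-sum comparison of the raw benefit indicators, as in Table 4
z, p_wilcoxon = ranksums(weak, strong)
print(p_strong.mean(), p_weak.mean(), p_boot, z, p_wilcoxon)
```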
Revisiting an Old Predicament: Primacy of the Individual or the Community?
José Mesa
*Columbia University, Teachers College*

We live under the assumption that the most important task for us humans is to shape our own lives and take responsibility for them. It is the call for authenticity and self-determining freedom. In this sense, people prioritize what we have come to call the individual — a concept that encapsulates some of the moral goods deemed important for human flourishing: human dignity, self-responsibility, initiative, authenticity, and difference. How have we come to this? Why do we feel drawn to our present understanding of the self and find it almost impossible not to think in terms of “the individual”? What is the place of community in all this? Of course, individual and community are concepts to help us make sense of our lives and the moral goods we consider important for human flourishing, not ontological entities with a life of their own.

**How Have We Come to Modern Individualism?**

To answer this question, it is necessary to undertake a task of retrieval. Charles Taylor, Emmanuel Mounier, and Paul Ricoeur will assist us in this task. In *Sources of the Self*, Taylor traces the major facets of the present understanding of human identity. In particular, he points out three aspects important for our purpose.

**Modern inwardness:** Augustine begins a process of inwardness — the I as a first-person standpoint, the way the world is for me — which, although he conceived it as a path upward (to God), becomes, with further developments, especially through Descartes and Montaigne, the source of the modern individualism that regards human life as a personal quest of inner control and/or exploration and/or commitment. The modern idea of an independent individual making sense of human life from within is in sharp contrast to previous views in which the individual was just a part of a cosmic (community) order. Thus our internal experience — not the external authority — becomes the final judge. Autonomy becomes the ultimate moral ideal as described by Kant: the agent gives himself his moral law. Montaigne develops another strand that stresses the particularity of each human being and reacts against the universalism of some of the other views. In all this, the concept of responsibility becomes important since we are “anchored in our own self.”

**Affirmation of ordinary life:** The affirmation of ordinary life is brought about mainly through the Protestant Reformation and its idea of the sanctification of daily life. Deism continues this path, stressing the importance of feelings and attacking the ascetic ethics of self-denial. These ideas have also shaped our modern understanding of the self. They have allowed us to discover the potentialities of ordinary life, creating a powerful idea of equality. However, the affirmation of ordinary life has also contributed to the current distrust of community as a threat to the individual’s private life. It deepens the rebellion against community as an external pressure that has to be resisted.

**Expressivist view of the self:** the Radical Enlightenment — the free thinkers only believe what they find for themselves — and Romanticism — living according to our inner nature as revealed by our inner voice — bring the idea of inwardness to new depths through their emphasis on trusting our inner nature. Tradition and authority are regarded with suspicion.
Modernism continues this path, radicalizing the sense of inwardness with the concept of the multileveled self — the fragmentary character of our experience. Nevertheless, the subjectivist twist of these movements also presents a problematic aspect: it denies the moral sources that make sense of its liberationist force. Similarly, the radical Enlightenment and Romanticism open a path in which community is regarded with suspicion as an external authority that hinders the path to inwardness. A great deal of the current animosity against community seems to be grounded in this development. It tends to be regarded merely as an obstacle to the individual — as an external influence. The concept of the modern self has opened for us some important potentialities: self-responsible individuals who are aware of their dignity, who have learned to pay attention to their feelings, who know that human life is a task open to new possibilities. Even controversial developments, such as Descartes’ disengaged reason or the idea of self-fulfillment, have opened up for us great possibilities expressed in modern science and the richness of inner life. All these potentialities are captured in the moral ideal of authenticity: being true to oneself. To be sure, authenticity can turn into atomism, narrow self-fulfillment, or extreme subjectivism. These forms empty human life of its moral sources and often end in nihilism and despair.

**Community under Attack**

From the perspective of the predicament between the community and the individual, the rise of the modern self has led to a weakening of the idea of community to the point at which it is regarded only as a means for the well-being of individuals. This is grounded in the idea that we can only be fully human when we define ourselves from within, without allowing “external” influences to interfere with our authentic selves. Of course, as Taylor points out, there is a tragic paradox here: in denouncing community as an external interference, even more, as a hostile force to the flourishing of the individual, we deny the very fact that the language of individuality has been the result of a particular social experience, in this case the experience of Western societies. No wonder some people today denounce the loss of the sense of community in modernity as one of the biggest problems that we still do not want to face. Ancient and Medieval thinkers took the idea of community as a natural part of what a human life is. Conceiving life without a community was not possible. To be sure, there were many disagreements about what kind of community better suited human beings. Nevertheless, the modern path inaugurated by Descartes problematizes the role of community in human life, in the sense that it begins to be identified with “outwardness”: something outside us. Community begins to be regarded as opposing the individual’s path to his/her inner richness or authenticity. The contract theory of the seventeenth century apparently makes the shift to a new understanding of community; “previously that people were members of a community went without saying … But now the theory starts from the individual on his own;”⁵ individuals are prior to the community and only their consent founds social life. This is the Copernican revolution that modernity introduces: the individual becomes the center of human life, replacing, in a way, the role that community played in the former political theories.
As Etzioni claims, the individual becomes “the sun, moon and the stars of the new universe.”⁶ Community then becomes an instrument for the individual’s well-being. The problem with the instrumental view is, Taylor explains, that: (1) at a political level it weakens the commitment necessary to make political life possible; after all, what is really important is that I can be myself; thus, the person invests so much time and energy in his/her own projects that the kind of strong social involvement needed for citizenship is weakened, leading to the danger of soft despotism. (2) At a personal level, this instrumental view makes people think that the only good of a relationship is its contribution to personal fulfillment; thus, the only criterion becomes “how much can I get from this relationship?” This makes it hard for people to struggle to keep a relationship or a community when bad times come, because there is no sense that relationships or communities are goods in themselves, that is, that they also constitute human flourishing in the form of solidarity, sense of belonging, and connectedness. For instance, citizenship in a political community, membership in a religious community, and friendships are not reducible to the benefits they give to a particular person; they are important for themselves. There is a great paradox here: the instrumental view impoverishes human life even though it promises a more self-fulfilling life for the individual; life becomes flattened rather than enriched. The rise of the modern self makes communities (kinship, local, religious, and political communities) problematic because they are accused of hindering the flourishing of the autonomous self. Many modern thinkers, for instance Rousseau and the free thinkers, thought of their historical communities as chains from which individuals had to be liberated. Many of our contemporaries feel the same way as well. But, as Taylor argues, the problem with some strands of modernity is that they do not want to recognize all the goods that are part of the “package” of human life.⁷ I want to say something similar about the predicament community/individual. Both community and the individual are part of the “package” of human flourishing, and we cannot make it without them. But the idea of the package also seems to encompass another important consequence: both community and individuality are equally important; they are neither prior nor secondary to one another, since one cannot be sustained without the other. Thus, modernity has made an important contribution to the debate: individuals cannot be seen as mere means for the good of the community, as traditional views argue. The liberation of the individual from some of the oppressive communities of the past is a welcome event that could not have happened without the language of autonomy, inwardness, affirmation of ordinary life, and inner moral sources that modernity brought.

**THE HARMONIOUS SOLUTION**

Some strands of modernity responded with a harmonious solution to the predicament community-individual. For Francis Hutcheson, working for one’s own sake is best accomplished by working for the whole.⁸ This idea becomes part of the official doctrine of Deism, in which “everything is made so that the good of each serves the good of all; so our best interest must be to act for the general good.”⁹ Promoting the happiness of others leads to our own happiness.
However, I would like to extend the Enlightenment objection to the Deistic providential order to this harmonious solution; as Taylor explains in the anti-Panglossian objection: “[it] made the structure of all things a bit too tidy and harmonious for our experience.”¹⁰ In other words, the harmonious solution — even if this is kept as an ideal — is against our common experience of the conflict between the two. The harmonious solution is “too good to be true.” It cannot make sense of the conflicts that many people experience today between their allegiance to their communities and their personal aspirations. For example, many women and gays know the terrible conflicts they will face when they decide to bring equality to their communities. Many of them experience that the solution cannot be just to leave their communities, because those communities are an important part of what life is about. Thus many decide to stay and fight. The harmonious solution is also present in contemporary philosophers on opposite sides of the ideological spectrum. For example, Alasdair MacIntyre thinks that the good of the community and the good of the individual must coincide; otherwise the whole quest for a good life will be threatened by the conflict.¹¹ In the same vein, John Dewey insists that the growth of the individual and the community are so intimately related that they cannot be opposed. For Dewey, tensions between individuals and communities are expressions of unresolved adjustments that can be overcome in the long run.¹² Thus, the harmonious solution enjoys a great popularity among thinkers, although for different reasons.

What is the alternative? In the final part of *Sources of the Self*, Taylor argues that the analysis of the conflicts of modernity allows for “a perspective critical of most of the dominant interpretations for being too narrow, for failing to give full recognition to the multiplicity of goods and hence to the conflicts and dilemmas they give rise to.”¹³ Taylor accuses the contending parties of being one-sided and of ignoring that the conflicts come from the multiplicity of goods we now enjoy. As he explains in *The Ethics of Authenticity*: “the right path to take is neither that recommended by straight boosters nor that favoured by outright knockers. Nor will a simple trade-off between the advantages and costs.”¹⁴ Hence, Taylor claims, a mere balance between the different elements of the conflict will not do. I want to extend Taylor’s argument to the account of the particular modern predicament about the community and the individual. The way to make sense of this predicament is to live in the creative tension brought to us by the multiplicity of goods that the concepts of community and the individual encompass; goods that may conflict in some circumstances of our lives.

**THE CREATIVE TENSION ACCOUNT (CTA)**

We need to accept the inescapable conflicts (at least at the present time) involved in human life between the community and the individual. We can regard these conflicts as inviting us to attain a creative tension that has to be maintained in the particular lives of individuals and communities; conflicts that have to be negotiated because of the multiplicity of goods involved in them. However, as Taylor also claims for other conflicts of modernity, this new understanding is an epistemic gain, because these conflicts open up for us “real and important human potentialities” both for the individual and for the community.
Many of the potentialities of the individual have already been emphasized by the idea of the modern self: human dignity, self-responsibility, the importance of feelings, respect for individual differences, and the richness of inwardness. However, we can also enjoy many other goods when the importance of community for human flourishing is adequately recognized and integrated: a deeper sense of human solidarity, a deeper appreciation of others, a sense of belonging, a sense of humbleness needed to get along with others and the environment, a sense of fullness as opposed to the individualistic emptiness of our present age. The CTA provides a theoretical framework to understand the conflicts among these different goods; conflicts that in some way are new, since only now, as a result of the understanding of the modern self, have people experienced the force and priority of the goods represented under the idea of the individual. It is possible to argue that goods such as self-responsibility and personal freedom were always present. For instance, someone like Socrates felt the conflict between following his inner voice and being a member of the community. Socrates resolved this tension by accepting the death penalty without backing away from his personal views. But overall, the prevalent view was that communal goods outweighed by far those of the individual. The exaltation of the hero who sacrificed all for the community is found everywhere in the literature and popular culture of Ancient and Medieval times. For example, the Romans honored people like the Gracchi brothers and Horatius, who risked their own lives for the good of the community. The English literature of the twelfth century honors a legendary hero like Robin Hood, who robbed the rich to help the poor in the interest of social justice and community values. In the same vein, the Christian world exalted somebody like Saint Francis of Assisi, who left behind his personal wealth and family for God’s call to form a community of service and poverty. The idea of someone like Robinson Crusoe, whose heroism is to live on his own without community, is unknown to the pre-modern world and would have made no sense to it. Only when modernity develops its hostility to the idea of community does such a hero become desirable. Nietzsche gives this hero a distinctive moral force as someone with the valor to defy convention and face the solitude of living according to his/her own inner strength. These two different kinds of heroism point to two distinct conceptions of human flourishing in which the priority has changed. In the first case, it is belonging to a community and willingness to sacrifice all for its sake that is the source of a human life worthy of that name. On the contrary, in the second case, it is personal freedom and autonomy that are considered the primordial goods that can feed a true life. Nonetheless, as we saw before, these two pictures impoverish human life because they stultify important goods that can be in tension with each other. In this vein, the CTA portrays a different understanding of human flourishing in which the multiplicity of goods represented by the ideas of community and the individual is preserved.
This account invites us to overcome views that make the multiplicity of goods incompatible: that is, the old view in which individuality had to be surrendered unconditionally to the welfare of the community, and the more modern view of atomistic individualism, in which community is construed as a mere instrumental means for the sake of the individual. But this view also rejects a merely harmonious view of the relationship between the individual and the community, because such a view cannot account for the fact that the goods involved in this relationship may clash; for instance, the good of self-determination and the good of solidarity with a group can easily be in conflict.

**The Tension as a Source of Life**

We now turn to Mounier’s and Ricoeur’s contributions to the CTA. They will help us to better grasp the nature and consequences of this tension. Mounier recognizes that at a metaphysical level there cannot be a conflict between the person (individual) and the community, because true communities and persons need and complement each other. In this sense, Mounier still holds a harmonious solution at the metaphysical level that presents the same problems we have pointed out before. Nonetheless, Mounier argues that at the practical level the theoretical harmony between the individual and the community is replaced by disappointments and tensions between the legitimate aspirations of the person and the limitations and aspirations of real communities. But, for Mounier, this tension is by no means something we should regret or avoid; on the contrary, it can open up new possibilities and enrich human life. Thus, at the practical level, there is tension, not harmony. As Mounier argues: “Instead of harmony, a tension always at the breaking point. But this tension is the source of life. It protects the individual from anarchy and societies from conformity.” For Mounier, the tension between the individual and the community becomes a source of life and growth; it leads to a more fulfilling life. It is a *creative* tension. To be sure, it is a tension “always at the breaking point,” that is, a tension never solved, but always there. Mounier’s vision may be seen as rather too dramatic, since there are many occasions in which the individual and the community are not opposed or should not be opposed; however, Mounier rightly points out that the tension can help both the individual and the community to overcome their dark sides. In this sense, Mounier’s account allows not only for an *epistemic gain* in Taylor’s terms, but it provides the reason why the tension is an important *moral gain*: it is the source of a more fulfilling life. It is precisely this insight that is denied in the harmonious solution. Thus, the tension is not a transitory stage that needs eventually to be balanced, or the result of unresolved adjustments, as Dewey argues, but rather a moment in which new possibilities are explored and achieved, and deviant forms of community and personhood are challenged. The tension does not threaten the quest for a good life, as MacIntyre claims; on the contrary, it opens up new possibilities and a qualitatively better life. In this sense, an ideal of harmony between community and individuals may hide and hinder the new possibilities and higher fulfillment promised by the tension. Mounier is correct in arguing that tensions may be the result of, for instance, coercion and a struggle for homogeneity on the part of the community, and the demands of a self-centered, egoistic, and narcissistic individual.
In this sense, the tension is a symptom of something that is not right in the community or in the individual. But in other cases, as Taylor’s account suggests, the tension can be the result of conflicts between the multiplicity of goods represented by the concepts of community and the individual. For example, the tension can be the result of the legitimate call for a strong sense of belonging to a community and the legitimate need for personal freedom in the individual. Moreover, in real life the boundaries between the two kinds of cases described above are often fuzzy. Consider, for instance, the case of a person who has to decide whether or not to accept a job promotion that will mean leaving a community that is not only important to her but also one in which her participation is vital for the communal well-being. Thus, on one hand, the job promotion fulfills an important personal aspiration and opens up new and challenging opportunities for her and her family. On the other hand, leaving the community is a troublesome event, since the community is a significant part of who this person and this family are; in a way, the professional success of this person is related to the support she has found in the community. In addition, her leaving would also threaten the well-being of the community, at least in the near future, because of the leadership of this person in the community. What to do? Which goods have priority? Could a compromise be reached? For example, taking the new job for a few years and then coming back to the community? No pre-arranged answer could address this difficult dilemma; it has to be lived, and the answer found in real life. In this sense, we need to create a practical wisdom that can guide us through these tensions. Mounier calls for such a practical wisdom to help us in dealing with the tension: “It is necessary, therefore, more or less right, more or less wrong, to create, in this subject, a kind of practical wisdom according to our experience.” That is, there are no outside criteria that can be applied to solve the tension. It is always a tension that has to be lived, and in this living we can build a wisdom that can help individuals and communities to find new possibilities and developments. In this way, the tension, as Mounier argues, becomes a source of a more fruitful life.

Ricoeur, however, argues that Mounier’s two-term dialectic — individual versus community — needs to be reformulated as a three-term dialectic: self-esteem, solicitude, and just institutions. Ricoeur contends that two different levels of relationships are unified — and somehow lost — in Mounier’s concept of community: (1) face-to-face relationships modeled under the ideal of friendship; and (2) the more impersonal — institutional — relationships governed by the ideal of justice. It is possible to argue that these two terms — solicitude and just institutions — can be captured in an expansive concept of community. In this vein, Ricoeur’s correction of Mounier’s account is that Mounier neglected these two different dimensions of our relation to others in his concept of community. A similar criticism can be aimed at Taylor, since he does not distinguish them. Ricoeur’s account points to two dimensions that, it can be argued, are intimately related within the concept of community itself. Solicitude and justice coexist in the concept of community, since it implies, at least in most forms of community, a more intimate relation to some and a more institutional relation to others.
This is true, for instance, of any school conceived as a community in which members are linked to others as friends — or in some kind of intimate way — or as members of a larger body in which institutional norms guarantee fair play. It is true, though, that Ricoeur’s third term highlights the fact that relations in society are modeled by the ideal of justice rather than by the requirements of friendship; but still, community seems to be a term that can accommodate both levels. In this sense, Ricoeur’s distinction is a caveat to bear in mind in order not to confuse or neglect one of these two dimensions of our relationship to others. Hence, the dialectic individual vs. community still captures a central feature of the way we conceive human flourishing today; a dialectic that can be adequately addressed through the CTA.

**CONCLUSION**

In sum, Taylor’s account of the modern self has helped to identify three main aspects of modern individualism: the sense of inwardness, the affirmation of ordinary life, and the notion of inner nature as the moral source of human life. It shows why so many people find the modern identity convincing and moving. Individualism has really opened up new possibilities: human dignity, diversity, richness of inner life, and the ethics of self-responsibility. It also shows some of the most problematic aspects of modern individualism: atomism, narrow self-fulfillment, and some forms of egoism and narcissism. Taylor’s account has also helped us to discuss some of the consequences that the rise of modern identity has had for the idea of community, especially since the modern self arose in conflict and opposition to the historical forms of communities that it encountered. Thus community is portrayed as a hindrance to the modern individual, as an external authority that has to be overcome, or as an instrument justified in terms of its role in the fulfillment of individuals. Some thinkers rebelled against these conceptions and designed a harmonious solution in which the good of the individual and the community coincide or, at least, are reconciled in the long run. Nevertheless, in the face of the problems this solution raises, I explored a creative tension account in which these conflicts are assumed as part of what human flourishing implies, in order not only to preserve the potentialities gained by the modern self but also to discover and enjoy the goods and potentialities that community can bring to us. Mounier’s account — recreated by Ricoeur — points out the importance of community and individuality in human flourishing. Mounier helped to discover the dramatic and enriching character of the tension between the two, rejecting, at the same time, the dark sides of individualism and community that distort human flourishing.

3. Taylor, *Sources of the Self*.
10. Ibid., 417.
NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA NPS FIELD EXPERIMENTATION PROGRAM FOR SPECIAL OPERATIONS (FEPSO) TNT 13-1 REPORT by Dr. Raymond R. Buettner LtCol. Carl Oros, USMC (Ret.) Ramsey Meyer Marianna Jones Nelly Turley March 2013 Approved for public release: distribution unlimited. Prepared for: Naval Postgraduate School, Monterey, CA ### Report Documentation Page **Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing this collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden to Department of Defense, Washington Headquarters Services, Directorate for Information Operations and Reports (0704-0188), 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.** <table> <thead> <tr> <th>1. REPORT DATE (DD-MM-YYYY)</th> <th>2. REPORT TYPE</th> <th>3. DATES COVERED (From-To)</th> </tr> </thead> </table> <table> <thead> <tr> <th>4. TITLE AND SUBTITLE</th> <th>5a. CONTRACT NUMBER</th> <th>5b. GRANT NUMBER</th> <th>5c. PROGRAM ELEMENT NUMBER</th> </tr> </thead> <tbody> <tr> <td>NPS Field Experimentation Program for Special Operations (FEPSO) TNT 13-1 Report</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th>6. AUTHOR(S)</th> <th>5d. PROJECT NUMBER</th> <th>5e. TASK NUMBER</th> <th>5f. WORK UNIT NUMBER</th> </tr> </thead> <tbody> <tr> <td>Dr. Raymond R. Buettner, LtCol. Carl Oros, USMC (Ret.), Ramsey Meyer, Marianna Jones, Nelly Turley</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th>7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)</th> <th>8. PERFORMING ORGANIZATION REPORT NUMBER</th> </tr> </thead> <tbody> <tr> <td>Naval Postgraduate School, Monterey, CA 93943</td> <td>NPS-FX-13-001</td> </tr> </tbody> </table> <table> <thead> <tr> <th>9. SPONSORING / MONITORING AGENCY NAME(S) AND ADDRESS(ES)</th> <th>10. SPONSOR/MONITOR’S ACRONYM(S)</th> </tr> </thead> <tbody> <tr> <td>Naval Postgraduate School, Monterey, CA 93943</td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th>12. DISTRIBUTION / AVAILABILITY STATEMENT</th> </tr> </thead> <tbody> <tr> <td>Approved for public release: distribution unlimited.</td> </tr> </tbody> </table> <table> <thead> <tr> <th>13. SUPPLEMENARY NOTES</th> </tr> </thead> <tbody> <tr> <td>This analysis is broken up into two parts, event analysis and insights derived from the experiments themselves. The former includes recommendations regarding the nature of the event organization and the relationship of the sponsors. The later identifies recommendations associated with a technological domain (biometrics) that may need emphasis going forward and suggests attributes that may be associated with these areas. Appendixes provide: the Request for Information (RFI), list of experiments and schedule, experiment descriptions and after action reports. 
With the exception of the appendix, this document reflects the opinions of the author and does not represent the official policy or position of the Naval Postgraduate School, the United States Navy, or any other government organization. The data in the appendices were provided by the participants and have only been edited for clarity.</td> </tr> </tbody> </table> <table> <thead> <tr> <th>14. ABSTRACT</th> </tr> </thead> <tbody> <tr> <td>Field Experimentation, Naval Postgraduate School and the United States Special Operations Command’s (USSOCOM), Special Operations Research and Development Acquisition Center (SORDAC), Muscatatuck Urban Training Center (MUTC), multi-institutional semi-structured learning environment, After Action Report, biometrics.</td> </tr> </tbody> </table> <table> <thead> <tr> <th>15. SUBJECT TERMS</th> </tr> </thead> <tbody> <tr> <td>Field Experimentation, Naval Postgraduate School and the United States Special Operations Command’s (USSOCOM), Special Operations Research and Development Acquisition Center (SORDAC), Muscatatuck Urban Training Center (MUTC), multi-institutional semi-structured learning environment, After Action Report, biometrics.</td> </tr> </tbody> </table> <table> <thead> <tr> <th>16. SECURITY CLASSIFICATION OF:</th> <th>17. LIMITATION OF ABSTRACT</th> <th>18. NUMBER OF PAGES</th> </tr> </thead> <tbody> <tr> <td>a. REPORT</td> <td>UU</td> <td>29</td> </tr> <tr> <td>b. ABSTRACT</td> <td>UU</td> <td></td> </tr> <tr> <td>c. THIS PAGE</td> <td>UU</td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th>19a. NAME OF RESPONSIBLE PERSON</th> <th>19b. TELEPHONE NUMBER</th> </tr> </thead> <tbody> <tr> <td>Carl Oros</td> <td>(831)656-3554</td> </tr> </tbody> </table> Standard Form 298 (Rev. 8-98) Prescribed by ANSI Std. Z39.18

The report entitled “NPS Field Experimentation Program for Special Operations (FEPSO) TNT 13-1 Report” was prepared for and funded by the Naval Postgraduate School. Further distribution of all or part of this report is authorized.

This report was prepared by: Raymond R. Buettner Jr., Associate Professor; LtCol. Carl Oros, USMC (Ret.), Research Associate; Ramsey Meyer, Research Associate; Marianna Jones, Research Associate; Nelly Turley, Research Associate.

Reviewed and Released by: Jeffrey D. Paduan, Dean of Research

Table of Contents
I. Overview and Statistics
II. Event Analysis
III. Noteworthy Technologies Observed
IV. High-level Tech. Trends and Analysis: Biometrics
APPENDIX A: TNT 13-1 Request for Information (RFI)
APPENDIX B: TNT 13-1 Experiment List & Schedule

NPS Field Experimentation Program for Special Operations (FEPSO) TNT 13-1 Report
March 2013
Dr. Raymond R. Buettner, Dir. Field Experimentation
LtCol Carl Oros, USMC (Ret.), Research Associate & PhD candidate, Dept. of Information Sciences
Mr. Ramsey Meyer, Research Associate, Dept. of Information Sciences
Mrs. Marianna Jones, Research Associate, Dept. of Information Sciences
I. Overview and Statistics

This analysis is broken up into two parts: event analysis and insights derived from the experiments themselves. The former includes recommendations regarding the nature of the event organization and the relationship of the sponsors. The latter identifies recommendations associated with a technological domain (biometrics) that may need emphasis going forward and suggests attributes that may be associated with these areas. The appendices provide the Request for Information (RFI), the list of experiments and schedule, and the experiment descriptions and after action reports. With the exception of the appendices, this document reflects the opinions of the author and does not represent the official policy or position of the Naval Postgraduate School, the United States Navy, or any other government organization. The data in the appendices were provided by the participants and have only been edited for clarity.

II. Event Analysis

The Naval Postgraduate School and the United States Special Operations Command’s (USSOCOM) Special Operations Research and Development Acquisition Center (SORDAC) conducted the first of three FY 13 TNT events 30 October to 8 November 2012 at the Muscatatuck Urban Training Center (MUTC), IN and the adjacent Jefferson Proving Grounds (JPG). NPS research associates LtCol Carl Oros, USMC (Ret.), Mr. Ramsey Meyer, and Mrs. Marianna Jones were in attendance. The focus of the event was Urban Operations. The event had 209 participants representing 86 commercial organizations, 59 federal organizations, and 9 non-profit/academic organizations conducting 45 planned experiments and 1 ad hoc experiment. The 13-1 event attendance is down markedly from FY-12 (Figure 1), which itself reflects a decreasing trend that correlates with, but is not necessarily caused by, an increase in structure and an increased emphasis on SORDAC PM/PEO priorities. By comparison, attendance across FY-11 (Figure 2) continued a three-year-long increase in both the numbers and diversity of attendees as the multi-institutional semi-structured learning environment construct was formalized and implemented. Also of note is that the 13-1 event was attended by only two component and one Theater Special Operations Command (TSOC) S&T representatives, possibly a reflection of the increased value of the event to SORDAC and SOCOM HQ leading to a reduced perception of value for the other members of the special operations community. However, these trends may have other causes. The fiscal uncertainty associated with operating under continuing resolutions and the potential for sequestration have created plausible explanations for decreased travel early in the fiscal year. The lack of experience operating at the Muscatatuck UTC venue, and the isolation of the local venue from desirable accommodations, may easily have led to reduced participation as well. The weather for the event included a forecast of relatively cold temperatures and the potential for precipitation. Most significant, however, was the impact of Hurricane Sandy, which struck the East Coast coinciding with TNT 13-1. Several participants traveling from the East Coast were forced to cancel their participation. This too had an impact on overall attendance. It should also be noted that the number of experiments submitted was not too far below the historical average and that, in general, industry participation was only slightly reduced.
Somewhat paradoxically, the same fiscal uncertainties that may have reduced government participation in the 13-1 event may lead to increased submissions and industry participation as commercial entities; small companies in particular are driven to seek development assistance and access to sponsorship while budgets are tight. This same uncertainty may continue to reduce government participation and may eventually reduce the event’s value to commercial participants, leading to the potential for the process to break down. However, that outcome is far from certain as this analysis lacks sufficient data to be predictive and thus is primarily informational in nature. The 13-1 event also marked the beginning of a year that will feature the Special Operations Research Development and Acquisition Center (SORDAC) as the sole sponsor of TNT. Previously the events were conducted by a cooperative arrangement with funding augmented by contributions in kind (for example aircraft provided by their home units) and other funding sources (for example JIEDDO/OSD) with SOCOM ensuring that the Special Operations community writ large was served and the NPS maintained an emphasis on fostering a learning event not associated with the acquisition process. In effect this year will more narrowly focus on the USSOCOM HQ, more precisely the SORDAC PM/PEO view of needs, with SORDAC being both the sponsor and primary beneficiary of the event. Manifestations of the difference in event philosophies that have been observed include: referring to commercial participants as “vendors”; discontinuing the practice of providing a shared network space for participants and populating it with the experiment descriptions/report forms (one of SOCOM’s previous innovations); no longer organizing the experiments by challenge area; and a strong shift away from the emphasis on the network that was a hallmark of the pre-SORDAC events. Again, this does not indicate that the changes are negative, so long as the event provides value as measured by SORDAC. These items are identified so that potential for adjustment exists if it becomes necessary or desirable to do so from the sponsor’s perspective. In order to organize and execute TNT events SORDAC has provided a dedicated SOCOM Experimentation Director with two support contractors as well as funding and/or requesting support from other entities such as the National Assessment Group, the Naval Surface Warfare Center (NSWC) Crane, and the US Army Special Operations Command’s (USASOC) Technical Assessment Unit (TAU). SORDAC has duplicated and largely replaced the administrative processes previously executed on its behalf by the Naval Postgraduate School to include: creating and running websites for information and registration, processing RFI and white papers, running planning meetings, conducting ORM, etc. The posting of evaluations to the Joint Lessons Learned websites has replaced the large and unwieldy, and usually untimely, After Action Report. As would be expected given the strong acquisition culture in SORDAC, the emphasis at each event is on process and value added to SOCOM’s PMs/PEOs. This fact is well illustrated by SOCOM and SORDAC accounting for 1/7 of the total registered attendees. The increase in process emphasis was appropriate given that SORDAC, unlike the NPS, is an acquisition entity and needed to make sure that the events were conducted appropriately for this type of sponsor. 
SORDAC cannot directly fund and simultaneously direct a multi-institutional learning activity that has acquisition implications given its stated goal of engaging its own PMs/PEOs more fully in the process. This event was, and future events are likely to be, lower in institutional diversity. They are also likely to be more structured and less about community learning that may lead to revolutionary change. This is not “wrong”. The more focused PM/PEO oriented events are—and the more focused related measures of effectiveness are—the more likely the events are to revolve around evolutionary improvements as reported/document by its own participants. This is a perfectly reasonable outcome for SORDAC as the funding entity. Indeed, it would be inappropriate given the nature of the funding appropriated for SORDAC activities for this sponsor to support the learning centric aspects of the event that had been the key product of the NPS cooperative model. By the end of the FY-12 the pace of operations for the SORDAC staff was given as the primary rationale to make a variety of changes to TNT: to decrease the frequency of events to three per fiscal year; to drop the MBE type activity; to eliminate two days of experimentation; and to conduct only one event at the NPS’s McMillan Field facility at Camp Roberts. None of this is surprising since the increased formalization of the processes increased the workload of both the execution team and the more formal stakeholder’s body that was engaged in the process. The rate of one TNT event per quarter had been maintained since 2005 and was important to both the tangible and intangible qualities of the event. It both optimized availability for participation by USSOCOM officer students at NPS and drove the rate of innovation by commercial and academic participants to a much more “agile” development model. While not fundamentally opposed to the idea, SORDAC as an entity is not funded to maximize the potential for community learning and innovation or to support the education of SOF officers. One recommendation that would seem to apply regardless of the cause of any change in event participation levels or process is the reduction of the event to 5-7 days vice two 5 day periods. The 10 day model makes it very difficult for government attendees to gain the same density of observations that they have previously enjoyed. It also increases event cost for all participants and creates two classes of engagement, one type for the “low week” and another for the “high week”. There would seem to be little benefit to keeping the event at two weeks since this is an artifact of the deprecated MBE/CBE model that offers little value and dilutes the CBE aspects of the event. SORDAC leadership has expressed satisfaction with the TNT events in various public forums. This should be expected since the purposes of the events more closely reflect their view of appropriate value added to their mission. Viewed from the perspective of the funding sponsor this is both logically consistent and consonant with past statements of senior SORDAC leadership regarding the potential of the event to provide more value to their organization if modified. As noted above NPS has identified areas to monitor as the experimentation process becomes more formalized. In the past the innovative value of the events was a result of the specialized learning environment (more specifically multi-institutional semi-structured learning environment). 
This construct is not intended to directly improve anything for the SOCOM HQ, and certainly was not narrowly aligned with the SORDAC organization's objectives, except as they were part of the larger special operations community. These events did not (and in the new order do not) exist to produce a document or artifact except as required to report the conduct of the event. Rather, the learning that takes place at the event is the primary product, with the “deliverable” being a vibrant learning environment for the community. This shift in focus from the community to the headquarters is fundamentally different. To some degree, the HQ’s emphasis and the redundant infrastructure created by SORDAC reduce the role of the NPS to that of a service provider. Again, this is not wrong. While historically providing a venue for NPS Special Operations students and supporting faculty to work with the special operations community was one of the primary objectives of the USSOCOM-NPS Field Experimentation Cooperative, it is normal for the nature of the relationship to change over time. NPS and SORDAC should explore what the appropriate role is for an educational institution in the conduct of future events and whether or not the SOCOM HQ or SORDAC desire to, or have the means to, support the educational aspects associated with NPS participation in the events.

III. Noteworthy Technologies Observed

1. Rockwell Collins Cognitive Networked Electronic Warfare (CNEW)
2. Mega Wave MW3300 Fold-Up tactical DF Array

Rockwell Collins (RC), Mega Wave (MW) and CACI SystemWare’s¹ “Guardian” {observed at NPS JIFX 12-4} represent a low-cost SIGINT trend to monitor, collect, and DF communications at the tactical level with portable, networkable equipment from ~30 MHz–3 GHz (RC/MW) and up to 6/12 GHz (CACI). All of these products have unique GUIs and DF capabilities. They also vary in their RF algorithms, signature libraries, and network data distribution/visualization capabilities. What is missing from these approaches is a distributed network architecture vision that would allow multiple like systems to disseminate, share, and aggregate collected SIGINT information among multiple stakeholders in a format that is computationally ingestible and analyzable. It is unclear if any of these applications even support Cursor on Target (CoT) C2 middleware.

3. General Atomics Rapid Urban Mission Planning with VBS2 and SPIMAP

---
¹ See http://www.caci.com/caci-systemware/Guardian.shtml
² See: http://en.wikipedia.org/wiki/VBS2

General Atomics (Rapid Urban Planning with VBS2 Virtualization and SPIMAP Terrain Generation) demonstrated their software at TNT 13-1. Their mission planning suite consisted of two applications: (1) SMARTPlanner and (2) SPIMAP (Swift Point Imagery for Mission Awareness Planning). SMARTPlanner allowed the creation of 3D Virtual Battlespace 2 (VBS2)² simulation missions from 2D SMARTPlanner missions. The SPIMAP terrain generation allowed terrain to be generated from the latest imagery. This software demonstrated the ability to extract urban features out of 2D imagery and map data to generate a 3D model where users could insert mission objects (people, threats, etc.) into a battlespace simulation in order to dynamically rehearse actions on the objective and other mission profiles. The demonstrated version was limited by its inability to dynamically take in new imagery, process it to the necessary format, and render it in 3D. Only the need for funding and development sponsorship has delayed productizing this type of capability.
We discuss this technology trend along with Capturx/Adapx in our TNT 13-2 report.

4. Cloud Front Group (PixLogic + Flume (Saratoga Data Systems))

The Cloud Front Group combines technology from PixLogic and Saratoga Data Systems into an integrated package capable of scanning captured video for “notions of interest” (NOI) (i.e., people, vehicles, aircraft, weapon installations, etc.) and then routing these NOI video segments to subscribed operators using Flume acceleration software. This allows the video clips to be transmitted over constrained, low-bandwidth, intermittent networks.

5. Harris PRC-152A with ANW2 mesh network waveform

This was the first TNT at which Harris demonstrated the mesh-capable PRC-152A with the adaptive networking wideband waveform (ANW2). We discuss the tactical wireless networking trends in our TNT 13-2 report.

IV. High-level Tech. Trends and Analysis: Biometrics

Over the past several years, a variety of technologies and capabilities have been explored relating to biometric identification. The ability to identify high value targets (HVTs) and potential threats in close contact, together with capabilities to identify key nodes within networks using, at least in part, biometric information, has provided some real successes on and off the battlefield. Facial recognition, fingerprinting and DNA sampling are the predominant means for biometric identification. TNT continues to explore these biometric identification technologies, with a recent emphasis being on systems that can perform facial recognition at longer distances. There are indications that technologies based on a single mode of identification (face, DNA, fingerprint) may have reached a plateau with regard to the cost of increased performance of these systems. After an earlier trend of going to nationwide biometric identification systems for internal (driver's licenses) and external (passports) identification, the trend has gone away from using such systems, due primarily to an unacceptable number of failed identifications and false positives. This is not surprising, since biometric systems are not isolated devices but rather combinations of sensors and databases connected via a variety of socio-technical systems that are more complex and less malleable than the individual technologies thought of as biometric devices. This trend away from national biometric systems may offer special operations a reprieve of sorts from the growing fear that the freedom of movement and relative anonymity of these forces across national boundaries, outside of actual combat operations, were in danger of being impaired by biometric identifications. While this may be true in the near term, it is likely that the same types of technologies discussed below will eventually enable national systems to be deployed. The special operations community should use intelligence and policy (avoid travel through countries with robust biometric capabilities) to reduce exposure as long as possible while coordinating with USCYBERCOM and others to put in place capabilities to subvert these systems when they cannot be avoided. This trend is a result of several challenges that still need to be addressed in order to support any revolutionary advancement that can be exploited by the special operations community. First, there is still much to be learned about the individual distinctiveness of the human population.
Second, there needs to be a recognition of, and serious study of, the performance of large-scale socio-technical systems such as biometric identification systems. Finally, there needs to be recognition that, outside of laboratory conditions, the impact of diverse and complex physical environments will present real limitations to the ability of any one technology to provide stand-off range identification in a reliable fashion. The impact of atmospheric conditions will continue to be a significant challenge for single-mode biometric identification methods such as face recognition, whether the sensor is high above the earth or a few hundred meters down the road. As every sniper knows, near-ground atmospheric conditions, from heat turbulence to airborne particulates, must be taken into account when one attempts to target something on this dynamic planet that we live on. Most of the success to date in eliminating atmospheric distortion involves either known visual markers or active beaming to determine the degree of distortion and to allow for its correction. The special operations community might consider making a few good references available to vendors who claim to be able to accomplish single-mode standoff identification, and/or include in any requirement elements that identify the need to be effective even with conditions such as optical turbulence. As mentioned above, the social side of a biometric system, most predominantly the organizational side, will remain a challenge, and given the size and nature of US defense bureaucracies, it is unlikely that the special operations community will be able to address the entire issue; but to the degree that the problem set can remain “special,” the community should be able to demand sufficient priority for its more limited requirements regarding high value targets and similar limited-scale operations.

The most likely path towards revolutionary capability with regard to biometrics involves the use of multi-modal approaches. The ability to combine fingerprints with photographs (when distance is not a factor) is relatively commonplace and continues to expand. As with most other areas of electronic technology, the military (and government) are no longer leading the development in this arena but rather are seeing industry rapidly rolling out new uses. The familiar names of Apple and Google are both working towards non-security applications of biometric identification, and systems that relay voice and lower quality images will become common in the marketplace. However, many new ways of combining personal characteristics are under way, to include using the ear, DNA (rapid testing is already providing results in <60 minutes), gait, sweat, the periocular region, the iris, the heartbeat, and odor (no, this is NOT the same as sweat!). Visual technologies focused on the details of one's biology, such as the ears, periocular region, iris, fingerprints and facial features, all suffer from a need for proximity and have the same types of pros and cons one might expect. Sweat detection is focused on determining that a subject is sweating. The ability to identify unusual changes in body temperature, such as might be exhibited by a terrorist afraid of detection (or detonation), has been demonstrated at distances of 150 feet. Similarly, systems designed to detect a heartbeat can potentially identify an unusually nervous person at significant distance, potentially up to several hundred yards. Gait identification has achieved success rates of more than 90% at several hundred yards.
While current systems are visually based, the potential to tap into the cell phone of a suspect and use the onboard accelerometers to identify the owner may offer a new way of thinking about gait-based identification. This could be even more powerful in conjunction with voice data.

The path to increased effectiveness in the short term will come from layering these technologies. For example, potential threats might be initially flagged as potentially dangerous by gross detection systems such as heartbeat or sweat detection technologies. These suspects could then be segregated to prepared areas for additional screening using more traditional methods. Revolutionary improvements will come from being able to more rapidly collect and process multiple data sources with dynamic databases. Ironically, this suggests that rather than any new sensor breakthrough, it will be the network that will be the most important single element in any dramatic improvement in the ability of special operations forces to exploit biometric technologies. This suggests that SORDAC S&T should emphasize the importance of any new technologies demonstrated being able to connect to, and share data in, a networked environment appropriate for the anticipated usage. This usage may vary greatly from conditions at a forward operating base, to a boarded vessel, or the entry control point back at headquarters.

To remain engaged with the accelerating rate of change expected in the biometric field, it is recommended that SORDAC S&T monitor the National Academies, DARPA, the Congressional Research Service, the National Science and Technology Council's Subcommittee on Biometrics and Identity Management, and of course the Biometric Identity Management Agency. However, the most significant insight might best be obtained by monitoring technology blogs such as Wired's Danger Room and Gadgetlab, Techarta and Techtiplib. Online magazines such as IEEE Spectrum and the DataCenter Journal are also good sources of information. Finally, keeping a close eye on companies such as Google that see biometric information as another way to provide personalized services, and security as well, would be well advised.

APPENDIX A: TNT 13-1 Request for Information (RFI)

Solicitation Number: RFI-TNT-13-1_TNT-Experimentation
Notice Type: Special Notice
Synopsis: Added: Aug 20, 2012 6:48 am

A. INTRODUCTION: Tactical Network Testbed (TNT) Collaboration

This Request for Information (RFI) is NOT a solicitation for proposals, proposal abstracts, or quotations. The purpose of this RFI is to solicit technology experimentation candidates from Research and Development (R&D) organizations, private industry, and academia for inclusion in future experimentation events coordinated by the U.S. Special Operations Command (USSOCOM) and the Naval Postgraduate School (NPS). USSOCOM invites industry, academia, individuals, and Government labs to submit technology experimentation nominations addressing innovative technologies leading to possible Government/Industry collaboration for development of USSOCOM technology capabilities. The intent is to accelerate the delivery of innovative capabilities to the Special Operations Forces (SOF) warfighter. SOF experimentation will explore emerging technologies, technical applications, and their potential to provide solutions to future SOF capabilities.
SOF experimentation focus areas for FY13 triannual TNT events are as follows:

- 30 Oct - 8 Nov 2012: Urban Operations at Muscatatuck UTC, IN

Additional RFIs will be released to FedBizOpps approximately 75 days prior to each scheduled TNT event to provide additional details. After review of the technology experimentation nomination submissions, the Government may invite select candidates to experiment their technologies at the USSOCOM & NPS sponsored TNT experimentation event. The TNT venue will provide an opportunity for the submitter to interact with USSOCOM personnel for the purpose of USSOCOM assessing the potential impact of emerging technology solutions on USSOCOM missions and capabilities. The intent is to accelerate the delivery of innovative capabilities to the Special Operations Forces (SOF) warfighter. Industry participation in experimentation activities does not suggest or imply that USSOCOM or NPS will procure or purchase equipment.

B. OBJECTIVE:

1. Background: USSOCOM conducts TNT experimentation events at Muscatatuck UTC, IN; at Avon Park, FL; and, in cooperation with NPS, at Camp Roberts, CA. These cooperative TNT experiments are conducted with representatives from Government R&D organizations, academia, and private industry. TNT experimentation events provide an opportunity for technology developers to interact with operational personnel to determine how their technology development efforts and ideas may support or enhance SOF capability needs. The environment facilitates a collaborative working relationship between Government, academia, and industry to promote the identification and assessment of emerging and mature technologies for the primary goal of accelerating the delivery of technology discoveries to the SOF warfighter. The event allows SOCOM personnel to identify potential technology solutions, impacts, limitations, and utility against SOF technical objectives and thrust areas. Materiel solutions brought to the event should be at a Technology Readiness Level (TRL) of 3 or greater. Experiments may be between a half day and five days in duration and be conducted in unimproved, expeditionary-like conditions. At the discretion of USSOCOM, respondents may be asked to complete a vendor loan agreement (see attachment).

2. Experimentation Focus: Experiments will be conducted from 30 Oct-08 Nov 2012 at Muscatatuck UTC, IN and will explore emerging technology solutions for Urban Operations (UO). Any technology-based experiment conducted at the event will need to be capable of supporting a SOF unit to provide a revolutionary improvement in SOF operations. Any and all solutions must provide all necessary software and hardware to accomplish the mission. Jointly executed UO include full spectrum operations (offensive, defensive, and stability or civil support) that may be executed, either sequentially or simultaneously, during the conduct of a single urban operation, often with multilingual and interagency components. UO mission sets may include foreign internal defense (FID), unconventional warfare (UW), counterproliferation of weapons of mass destruction, special reconnaissance (SR), Direct Action (DA), counterterrorism, and information operations. Successful urban operations conducted by SOF require a thorough understanding of the urban environment, which may include:

- The psychological impact of intense, close combat against a well-trained, relentless, and adaptive enemy.
- The effects of noncombatants (including governmental and nongovernmental organizations and agencies) in close proximity to SOF.
- A complex intelligence environment requiring lower-echelon units to collect and forward essential information to higher echelons for rapid synthesis into timely and useable intelligence for all levels of command.
- The communications challenges imposed by the environment as well as the need to transmit large volumes of information and data.
- The medical and logistic problems associated with operations in an urban area, including constant threat interdiction against lines of communications and sustainment bases.
- Stability and Civil Support Operations
- Close combat operations
- Fratricide avoidance
- Situational Awareness / Urban mapping
- Sniper and Countersniper Tactics, Techniques, and Procedures

An exploratory closed cyber (virtual) network infrastructure and an Electromagnetic Environment (EME) using electronic spectrum recording can be provided based on expressed interest. Please visit http://www.socom.mil/sordac/Directorates/ScienceTechnology/Pages/LocalIn and follow the link to Muscatatuck UTC, IN to gain a better understanding of the uniqueness of Muscatatuck UTC and its capabilities.

3. Security Requirements: Vendors should not submit classified information in the technology experimentation nominations.

4. Respondents interested in conducting experiments using technologies such as lasers, explosives, weapons using live fire, moving equipment, vehicles, and other technologies that present an occupational hazard shall prepare and submit a safety risk assessment. The risk assessment shall address the likelihood and severity of any inherent risks as well as risk mitigation measures required to bring the resultant risk to a low level. The risk assessment shall be submitted as an attachment to the experiment nomination. Reference MIL-STD-882D for instructions and information regarding risk assessments. Also, respondents are responsible for ammunition shipments, to include an Interim Hazard Classification and coordination for receipt and storage at Camp Atterbury.

5. Other Special Requirements: DO NOT SUBMIT PROPOSALS. SUBMIT TECHNOLOGY EXPERIMENTATION NOMINATIONS ONLY. EXPERIMENTATION NOMINATION SUBMITTALS FOR THIS RFI WILL ONLY BE ACCEPTED UNTIL THE CLOSING DATE OF 9/20/2012 1600 EST. No contracts will be awarded based solely on this announcement or any subsequent supplemental RFI announcements planned for FY13 TNT events.

C. SUBMISSION INSTRUCTIONS:

Technology Experimentation nominations shall be submitted electronically via USSOCOM webpage: http://1.usa.gov/TNTExpNom. Note: the URL is case sensitive. Multiple nominations addressing different technology experiments may be submitted by each respondent. Submissions will be reviewed by USSOCOM personnel to determine whether an experiment submission will be accepted for invitation. Each technology experiment nomination must address only one experiment. Select respondents will be invited to participate in USSOCOM experiments. USSOCOM shall provide venues, supporting infrastructure, and assessment personnel (operational and technical, based on availability of resources and written request) at no cost to invited respondent(s). Respondents' travel costs and technology experiments will be at the respondents' expense.
The TNT venue will only provide basic access to training areas or ranges to conduct experiments, a facility to connect to the internet, basic venue infrastructure including frequency allocation/deconfliction, and portable power if needed. Invited respondents must be prepared to be self-sufficient during the execution of their experiments and not dependent on venue resources for success.

D. BASIS FOR SELECTION TO PARTICIPATE:

Selection of respondents to participate will be based on the extent to which the technology represents a particular class or level of capability that can be provided to Special Operations Forces. Other considerations include:

- Technical maturity
- Relevance of or adaptability to military operations/missions
- Relevance to current operational needs
- Relevance to Event Focus Area

E. ADDITIONAL INFORMATION:

All efforts shall be made to protect proprietary information that is clearly marked in writing. Lessons learned by USSOCOM from these experiments may be broadly disseminated but only within the Government. If selected for participation in TNT experimentation, vendors may be requested to provide additional information that will be used in preparation for the experiments.

F. USE OF INFORMATION:

The purpose of this notice is to gain information leading to Government/Industry collaboration for development of USSOCOM technology capabilities and to assist in accelerating the delivery of these capabilities to the warrior. All proprietary information contained in the response shall be separately marked. Any proprietary information contained in response to this request will be properly protected from any unauthorized disclosure. The Government will not use proprietary information submitted from any one firm to establish future capability and requirements.

G. SPECIAL NOTICE:

Respondents' attention is directed to the fact that Federally Funded Research and Development Centers (FFRDCs) or contractor consultants/advisors to the Government will review and provide support during evaluation of submittals. When appropriate, non-Government advisors may be used to objectively review a particular functional area and provide comments and recommendations to the Government. All advisors shall comply with procurement Integrity Laws and shall sign non-disclosure and rules of conduct/conflict of interest statements. The Government shall take into consideration requirements for avoiding conflicts of interest and ensure advisors comply with safeguarding proprietary data. Submission in response to this RFI constitutes approval to release the submittal to Government support contractors.

H. Per Federal Acquisition Regulation (FAR) 52.215-3, Request for Information or Solicitation for Planning Purposes (Oct 1997):

1. The Government does not intend to award a contract on the basis of this RFI notice or to otherwise pay for the information.
2. Although "proposal" and "respondent" are used in this RFI, your responses will be treated as information only. It shall not be used as a proposal.
3. In accordance with FAR Clause 52.209(c), the purpose of this RFI is to solicit technology experimentation candidates from R&D organizations, private industry, and academia for inclusion in future experimentation events coordinated by USSOCOM.

Contracting Office Address: 7701 Tampa Point Blvd, MacDill AFB, Florida 33621-5323
Primary Point of Contact: TECH_EXP@socom.mil

Vendor Loan Agreement Sample
Type: Other (Draft RFPs/RFIs, Responses to Questions, etc.)
Posted Date: August 20, 2012
Description: Sample Vendor Loan Agreement for those selected to participate.
Contracting Office Address: 7701 Tampa Point Blvd, MacDill AFB, Florida 33621-5323
Place of Performance: See RFI
Primary Point of Contact: TECH EXPO Database, TECH_EXP@socom.mil
Secondary Point of Contact: Christine E Johnson, Contracting Officer, johnsc1@socom.mil, Phone: 813-826-6038, Fax: 813-826-7504

APPENDIX B: TNT 13-1 Experiment List & Schedule

Experiment List for TNT 13-1 Muscatatuck, IN: 30 Oct – 9 Nov 12:

A. Intelligence, Surveillance, and Reconnaissance (ISR):
1. 3D Scene Reconstruction for Urban and Terrain from Full Motion Video – 2d3 Inc.
4. Broadband, adaptable, fluorescence-based, portable, trace explosives detector – FLIR Systems
6. Handheld Sense Through The Walls (STTW) – Raytheon
7. Hardware Implementation of Multi-Shot Optical Surveillance System (MuSOS) – Lentix, Inc.
8. Hyper Dynamic Range Optical Surveillance System (HyDROS) – Phelps2020, Inc
9. Image Acquisition and Exploitation Camera System (IAECS) – ACAGI, Inc.
10. MARK II Drop Kit – Cobham (RVision)
11. Modular Canine System – Tactical Electronics & ADS
12. Optical ID LIDAR – Arete Associates
13. Palantir Mobile – Palantir USG Inc.
14. Polarized Binoculars – ByField Optics
15. Rapid Deploy ISR System – Moog
16. Rapid Dissemination of High-Priority Video Segments – Cloud Front Group
17. Rapid Field Deployable Area Mapping – Prioria Robotics – CANCELLED
18. Real-Time Atmospheric Parameters for Urban Operations – QinetiQ North America
19. Real-Time Threat Detection Utilizing Multi-Spectral Imaging – QinetiQ North America
21. Shortwave Infrared Soldier Systems – UTC Aerospace
22. Sniper Detection and Visualization – HGH Infrared Systems
23. Squad Level Self Rescue Assisted by Micro-UAV – Sandia National Laboratories – CANCELLED
25. Ultrabright Long-Wave Infrared (LWIR) Quantum Emitter Beacons – Creative Microsystems Corp
26. Urban Reconnaissance with the SandFlea and RHex Robots – Boston Dynamics
27. PILARw, M2 OTM Vehicle Mounted Gunfire Detection System with Integration on Falconview – 01dB-Metravib

B. Command, Control, Communications, and Computers (C4):
2. Antennas for Urban Communications – MegaWave Corporation
4. Hand Held Rangefinder & Locator (DEMONEYE) – US Army ARDEC
5. Long Throw Planar Magnetic Speaker – Aardvark Integrated Systems
6. MIMO Enabled Mesh Network for video/data/voice in Harsh Urban Terrain – Silvus Technologies
7. Next-Generation Push-To-Talk and Messaging from Voxer – Voxer Federal LLC
9. T.R.U.E. Communications in an asymmetric environment – Harris Corp GCSD
10. WorldView – Indiana University/Global

C. Medical:
1. Human Performance Technology method to decrease team conflict and increase mission effectiveness – Wherewithal University Professional Solutions LLC
2. iCOT – MIS 2000-Global Defense Electronics-SAIC

D. Power and Energy:
1. A Lightweight, Flexible, Rapid Man Portable Recharging Solution Even in Low Light for Rapid Field Deployment – Alta Devices
2. Atmospheric Water Generation/Water Generation from Air – Mistral Incorporated
5. Polaris RZR 900 w/ Auto-regulated Motion Power System – Polaris Defense
6. Q-Gen 2.0, 1kW, Single-Man Portable, Multi-Fuel Generator for Tactical Power – QinetiQ North America
7. Squad Power Manager – Protonex
8. TRINITY™ 2000 System – INI Power

E. Irregular Warfare (IW):
2. Raytheon Breaching Initiator System (RAYBIS) – Raytheon Technical Company (RTSC)
F. Cyberspace Operation (Attack, Defend, Exploit):

G. Weapons, Shelters, Barriers and Electronic Attack (EA):
1. Explosively Clad Bi-Metallic Rifle Barrels – TPL Inc.
2. Infrared Anatomy Targets – PWT3 Development
3. M8E1 Improved Stun Hand Grenade – Dept of Army
4. Mini Claymore – Picatinny Arsenal – CANCELLED
5. Modular Breaching and Demolition System – Picatinny Arsenal
6. Remote Sniper/Counter Sniper Technology – Precision Remotes LLC
7. SORDAC-ST RAZAR DEMONSTRATION – Sandia National Laboratories
8. Talon Precision Strike & Resupply System – Moog – CANCELLED
9. THINLITE High Hardened Transparent Armor – Dlubak Corporation
10. TNT 13-1 Viper-E Experiment (Viper-E Standoff PGM for Unmanned Aerial Systems (UAS) and Prop Aircraft in Urban Operations) – MBDA Missile Systems

H. Mobility:
1. Light Weight, Accurate Carbon Fiber Wrapped Rifle Barrels – Proof Research
2. Portable Three-Dimensional (3D) Driver Vision Enhancer (DVE) System Demonstration – Tactical 3rd Dimension Systems Corporation
3. Return Fire Glass/Light Weight Field Expedient Up-Armor Protection Kit/Off-road ballistic protected, blast resistant, high capacity personnel transport vehicle – Jabriel LLC/Armour Group
4. Special Mission Terrain Vehicle (SMTV) – Defense Technology Solutions, LLC

Lunch Briefs 12:00-1300:

Moog – Talon Precision Strike & Resupply System

LITE MACHINES – The Tiger Moth UAS is a hand- or air-launchable VTOL UAV weighing less than 5 lbs and flying for 1-2 hours on rechargeable batteries. The vehicle has a very low acoustic signature, making it effectively silent from a distance of 50-100 ft. or more, depending upon ambient noise. The system is designed to be scalable and configurable, so interchangeable payloads are an option (other sensors, e.g., acoustic, chem-bio, etc., or lethal/nonlethal weapons payloads, etc.). It can be teleoperated and/or fly autonomously at high altitudes (10K+ ft. MSL) or a few inches above the ground (perch and stare capable), and fly in/out of windows, down hallways or stairwells, into caves and other GPS-deprived environments.

NSW CRANE – Results of 2012 Urban Combat testing of how a Marine Corps Ring Mount Gunner (RMG) does their job and what senses are required to do their job.
The brief addressed where the current state of the art in EO/IR sensors meets, and does not meet, the requirements to move the RMG from standing exposed in the turret to monitoring sensors while seated inside the vehicle. Data were collected on the ring mount gunner as an individual system, as a system of systems within the organic vehicle, and between vehicles in offensive and defensive missions.

### TNT 13-1 Schedule – Muscatatuck

The original report closes with two day-by-day scheduling grids for the Muscatatuck event (the "TNT 13 Schedule" and "TNT 13-1 Schedule" tables); the grids do not reproduce cleanly here, and the recoverable information is summarized below.

- Week 1 (Monday 29 Oct travel day; Tuesday 30 Oct through Saturday 3 Nov): Morning Brief at 0730 each day; experiments run all day, with night events continuing into the night; Hot Wash at 1700, with Nite Ops to 2200 on designated days. Organizations and venues appearing in the week-1 grid include Protonex, Mistral Incorporated, QinetiQ, INI Power, Indiana University, Harris, Becatech, Cloud Front Group, Defense Technology Solutions, Boston Dynamics, ARDEC, FLIR Systems, Dept of Army, Honeywell, ACAGI (night ops), LITE MACHINES (lunch brief 12:00-1:00), Moog (night ops), 2d3, Palantir USG, Voxer, QinetiQ North America, MegaWave, Alta Devices, 01dB-Metravib, Picatinny, Jabriel LLC, Sandia National Lab, Precision Remotes, and Camp Atterbury.
- Week 2 (Sunday 4 Nov through Thursday 8 Nov): Morning Brief at 0730 each day; Hot Wash at 1700 (1100 on the final day), with Nite Ops to 2200 on one day. Experiments scheduled, listed by experiment code as they appear in the grid, include D7 Protonex, D2 Mistral Incorporated, D6 QinetiQ, D8 INI Power, B3 Rockwell Collins, D5 Polaris, A14 ByField, A10 Cobham, B8 General Atomics, A16 Cloud Front Group, B6 Silvus Technologies, A26 Boston Dynamics, C1 Wherewithal, A6 Raytheon, B5 Aardvark, A11 Tactical Electronics, A25 Creative Micro, H2 T3D, A17 Prioria Robotics, A24 Artemis, A12 Arete Associates, G1 TPL, G9 Dlubak Corp, H1 Proof Research, G2 PWT3, G10 MBDA, G4 Picatinny, E2 Raytheon (RTSC), Phelps 2020, and E7 Lentix.

**Legend (venues):** TOC, 403, Prison Complex, Hospital, Cave, Shantytown, JPG, JPG-Aircraft, Atterbury, ROC, Reservoir

**Note:**
- Due to the exploratory nature of the TNT event, this schedule is advisory in nature and is subject to change.
- Experiments are ALL DAY events unless otherwise noted. Night Events are ALL DAY and continue into NIGHT.

INITIAL DISTRIBUTION LIST

1. Defense Technical Information Center, Ft. Belvoir, Virginia
2. Dudley Knox Library, Naval Postgraduate School, Monterey, California
3. Research Sponsored Programs Office, Code 41, Naval Postgraduate School, Monterey, CA 93943
4. Raymond R. Buettner Jr. (2), Naval Postgraduate School, Monterey, CA 93943
Spectroscopic and Spectropolarimetric Observations of V838 Mon

John P. Wisniewski¹, Nancy D. Morrison¹, Karen S. Bjorkman¹, Anatoly S. Miroshnichenko¹, Amanda C. Gault¹, Jennifer L. Hoffman²,³, Marilyn R. Meade⁴, & Jason M. Nett⁴

ABSTRACT

The spectroscopic and spectropolarimetric variability of the peculiar variable V838 Monocerotis during the brighter phases of its multiple outbursts in 2002 is presented. Significant line profile variability of Hα and Si II 6347.10 Å & 6371.36 Å occurred in spectra obtained between 2002 February 5 and 2002 March 14, and a unique secondary absorption component was observed near the end of this time period. Our observations also suggest that multiple shifts in ionization states occurred during the outbursts. Spectropolarimetric observations reveal that V838 Mon exhibited both intrinsic and interstellar polarization components during the initial stages of the second outburst, indicating the presence of an asymmetric geometry; however, the intrinsic component had significantly declined by February 14. We determine the interstellar polarization to be $P_{\text{max}} = 2.746 \pm 0.011\%$, $\lambda_{\text{max}} = 5790 \pm 37\AA$, $PA = 153.43 \pm 0.12^\circ$, and we find the integrated intrinsic V band polarization on February 5 to be $P = 0.983 \pm 0.012\%$ at a position angle of $127.0 \pm 0.5^\circ$. The implications of these observations for the nature of V838 Monocerotis, its distance, and its ejecta are discussed.

Subject headings: circumstellar matter---stars: individual (V838 Mon)---techniques: polarimetric---techniques: spectroscopic

¹Ritter Observatory, MS #113, Department of Physics and Astronomy, University of Toledo, Toledo, OH 43606-3390 USA, jwisnie@physics.utoledo.edu, nmorris2@uoft02.utoledo.edu, karen@physics.utoledo.edu, anatoly@physics.utoledo.edu, agault@utphysa.panet.utoledo.edu
²Department of Astronomy, University of Wisconsin-Madison, 475 N. Charter St., Madison, WI 53706
³Department of Physics and Astronomy MS-108, Rice University, 6100 Main Street, Houston, TX 77005, jhoffman@rice.edu
⁴Space Astronomy Lab, University of Wisconsin-Madison, 1150 University Avenue, Madison, WI 53706, meade@sal.wisc.edu, jnett@sal.wisc.edu

### 1. Introduction

Brown et al. (2002) reported the discovery of a possible nova, later to be designated V838 Monocerotis, on 2002 January 6.6. Prior to outburst, V838 Mon was a hot blue star, whose B band brightness was stable at 15.85 ± 0.4 from 1949-1994 (Goranskii et al. 2002). Munari et al. (2002a) noted that V838 Mon was not detected in prior Hα emission-line surveys. A spectrum obtained on January 26 showed numerous neutral metal and s-process lines, and resembled that of a heavily reddened K-type giant (Zwitter & Munari 2002). V838 Monocerotis underwent a second major photometric outburst in early February 2002, changing from V=10.708 on February 1.86 to V=8.024 on February 2.91 (Kimeswenger et al. 2002a). Spectra obtained during and immediately following this outburst (Iijima & Della Valle 2002; Morrison et al. 2002) revealed the emergence of various ionized metal lines. Kaeufl et al. (2002) estimated a blackbody continuum temperature of 4500 K to be present on February 9, and Henden et al. (2002) found that a light echo had developed around V838 Mon on February 17. IRAS source 07015-0346 has been associated with the location of V838 Mon (Kato et al. 2002), leading to the suggestion that this IR emission is from the dust causing the light echoes (Kimeswenger et al. 2002b).
A third, less intense outburst occurred in early March 2002 (Kimeswenger et al. 2002b; Munari et al. 2002a). By April 16, V838 Mon's spectrum had evolved such that it resembled an M5 giant (Rauch et al. 2002), with strong TiO molecular bands and a temperature of ~3000 K. Banerjee & Ashok (2002) detected TiI emission lines from near infrared spectroscopy, peaking in strength on May 2, and argued that this emission arose from circumstellar ejecta. They used the strengths of these TiI lines to estimate the mass of V838 Mon's ejected envelope to be $10^{-7}$ to $10^{-5}$ M⊙. By October 2.17, spectroscopic observations suggested that V838 Mon had evolved into a "later than M10 III" type star (Desidera & Munari 2002). Desidera & Munari (2002) also detected a weak blue continuum, suggesting the presence of a binary companion. Followup spectroscopy (Wagner & Starrfield 2002; Munari et al. 2002c) confirmed this detection, and Munari et al. (2002c) suggested that the companion was a B3 V type star. The unique, complex evolution of V838 Monocerotis has led Munari et al. (2002a) to suggest that this object represents a new class of objects, "stars erupting into cool supergiants (SECS)".

In this paper, we report the spectroscopic and spectropolarimetric properties of V838 Monocerotis following its second and third photometric outbursts. In section 2, we outline our observational data. We detail the equivalent width and line profile variability of selected spectral lines in section 3.1. Our spectropolarimetric data, most notably the detection of an intrinsic polarization component, are discussed in section 3.2. We address the distance to V838 Mon in section 3.3. Finally, in section 4, we discuss the implications of these observations for future studies of this unique object.

### 2. Observations

We obtained spectroscopic observations of V838 Monocerotis with the Ritter Observatory 1m reflector, using a fiber-fed echelle spectrograph. The fiber used for these observations has a diameter of 200 µm, which corresponds to roughly 5″ on the sky. Nine non-adjacent orders of width 70 Å were observed in the range 5285-6595 Å. Data were recorded on a 1200 x 800 Wright Instruments Ltd. CCD, with 22.5 x 22.5 µm pixels. With $R \equiv \lambda / \Delta \lambda \simeq 26,000$, the spectral resolution element, $\Delta \lambda$, is about 4.2 pixels owing to a widened entrance slit. Observations were reduced with IRAF\(^5\) using standard techniques. Further details about the reduction of Ritter data can be found in Morrison et al. (1997). Unless otherwise noted, all data were shifted to the heliocentric rest frame and continuum normalized.

We obtained spectropolarimetric observations of V838 Mon with the University of Wisconsin's HPOL spectropolarimeter, which is the dedicated instrument on the 0.9m Pine Bluff Observatory (PBO) telescope. These data were recorded with a 400 x 1200 pixel CCD camera, covering the wavelength range of 3200-10500 Å, with a spectral resolution of 7 Å below 6000 Å and 10 Å above this point (Nordsieck & Harris 1996). Observations were made with dual 6 x 12 arc-second apertures, with the 6 arc-second slit aligned E-W and the 12 arc-second decker aligned N-S on the sky. The two apertures allow simultaneous star and sky data to be recorded, providing a reliable means for subtraction of background sky polarization and hence allowing accurate observations to be made even in non-photometric skies.
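(Editorial aside, not part of the original text: the Ritter resolution figures quoted above can be converted into more familiar units; the short sketch below assumes only the quoted R ≃ 26,000 and the ~4.2 pixel resolution element.)

```python
# Unit conversions implied by the instrument figures quoted above
# (R ~ 26,000; resolution element ~ 4.2 pixels). Illustrative only.
C_KMS = 299_792.458            # speed of light [km/s]
R = 26_000.0                   # resolving power, lambda / delta_lambda

dv = C_KMS / R                 # velocity resolution element [km/s]   (~11.5)
dlam_ha = 6563.0 / R           # resolution element at H-alpha [A]    (~0.25)
dispersion = dlam_ha / 4.2     # approximate dispersion [A/pixel]     (~0.06)

print(f"dv ~ {dv:.1f} km/s; dlam(Halpha) ~ {dlam_ha:.2f} A; ~{dispersion:.3f} A/pix")
```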
We processed these data using REDUCE, a spectropolarimetric software package developed by the University of Wisconsin-Madison (Wolff et al. 1996). Further details about HPOL and REDUCE may be found in Nook (1990) and Harries et al. (2000). Instrumental polarization is monitored on a weekly to monthly basis at PBO via observations of polarized and unpolarized standard stars, and over its 13 year existence, HPOL has proven to be a very stable instrument. We have corrected our data for instrumental effects to an absolute accuracy of 0.025% and 1° in the V band (Nordsieck, private communication). HPOL spectroscopic data are not calibrated to an absolute flux level due to the non-photometric skies routinely present (Harries et al. 2000).

\(^5\)IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation.

Table 1 provides a log of the observations from both observatories.

### 3. Results

### 3.1. Spectroscopic Variability

We now discuss the spectral evolutionary history of V838 Monocerotis from February 5 to March 14. Our observations of V838 Mon from February 5 to February 9, during the onset of the second photometric outburst, indicate an overall shift toward a higher ionization state (Iijima & Della Valle 2002; Morrison et al. 2002) as compared with initial observations (Zwitter & Munari 2002). Hα shows a strong P-Cygni profile, with electron scattering wings extending at least ±1100 km s\(^{-1}\) from February 5 to February 8 and about 850 km s\(^{-1}\) on February 9, and an average heliocentric blue edge radial velocity of -300 km s\(^{-1}\) (see Figure 1). This radial velocity is slightly lower than the terminal velocity of -500 km s\(^{-1}\) observed in late January in CaII, BaII, NaI, and LiI lines (Munari et al. 2002b). Goranskii et al. (2002) report that a spectrum on February 5 shows Hα with FWZI = 3100 km s\(^{-1}\) and an absorption component at -300 km s\(^{-1}\), which is inconsistent with our findings. The extent of the electron scattering wings strongly depends on accurate continuum placement. We are confident that, within the limits of the SNR of our data, we see a 5 Å "flat" continuum region at each end of the spectral interval containing Hα, hence we are accurately determining the continuum level.

The total Hα equivalent width peaked on February 6 (see Table 3), and then began a steady decrease. Equivalent width errors were calculated using \(\sigma^2 = N(h_\lambda/SNR)^2(f_*/f_c)\), where \(N\) is the number of pixels across a line, \(h_\lambda\) is the dispersion in Å pixel\(^{-1}\), \(f_*\) is the flux in the line, \(f_c\) is the flux at the continuum, and SNR is the signal to noise ratio (Chalabaev & Maillard 1983).
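(Editorial aside, not part of the original text: a minimal sketch of this error estimate; the numerical inputs below are hypothetical placeholders, loosely scaled to the Ritter setup of section 2, not values taken from the paper.)

```python
import numpy as np

def ew_sigma(n_pix, dispersion, snr, line_to_cont):
    """Equivalent width uncertainty, sigma, from
    sigma^2 = N * (h_lambda / SNR)^2 * (f_line / f_cont)
    (Chalabaev & Maillard 1983, as quoted in the text)."""
    return np.sqrt(n_pix * (dispersion / snr) ** 2 * line_to_cont)

# Hypothetical inputs: ~0.06 A/pixel dispersion, continuum SNR ~ 50,
# a line spanning ~200 pixels, mean line-to-continuum flux ratio ~ 1.2.
print(f"sigma_EW ~ {ew_sigma(200, 0.06, 50.0, 1.2):.3f} A")
```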
In early February, all the strong lines exhibited significant line profile variability. In Hα (Figure 1), the emission peak migrated to longer wavelengths with time. The high velocity component of Si II 6347.1Å and 6371.4Å (Figure 2) weakens with time, and the intrinsic component of Na I 5889.95Å and 5895.9Å (Figure 3) also shows variability. Note that since the interstellar Na I components appear to be saturated, they could not be fitted with Gaussians and subtracted to reveal the pure intrinsic component. The low resolution HPOL spectrum (Figure 4) on February 8 clearly depicts the P-Cygni profiles of FeII 4923.9Å, 5018.4Å, 5169.0Å and the CaII infrared triplet 8498.0Å, 8542.1Å, 8662.1Å. Hydrogen Paschen absorption lines at 8438.0Å, 8467.3Å, 8598.4Å, 8750.5Å, 8862.8Å, 9014.9Å, 9229.0Å are observed, as well as HI 10049.4Å, which has a clear P-Cygni profile.

By February 14, the P-Cygni profile of Hα had weakened considerably (Table 3, Figure 1) and its absorption and emission components were approaching equality in strength. The strong electron scattering wings previously observed had disappeared by our February 14 observation. Goranskii et al. (2002) noted that the Hα electron scattering wings had disappeared in their spectrum taken on February 16. These results are consistent with a decreasing excitation level in the circumstellar envelope. A low resolution red HPOL spectrum, obtained on February 13 (Figure 6), reveals two other qualitative changes: the emission components of both the P-Cygni CaII infrared triplet lines and the HI 10049.4 Å line significantly decreased in strength.

By the end of the third photometric outburst, a significant shift in V838 Mon's spectral characteristics had occurred. Specifically, our spectra on March 11 showed that a second, high velocity absorption component had developed in a few lines. Hα (Figure 1) clearly shows this component, centered at a radial velocity of -200 km s\(^{-1}\) with a blue edge radial velocity of -280 km s\(^{-1}\). Figure 2 shows that this feature is also present in the SiII 6347.1 Å line, centered at -200 km s\(^{-1}\) with a blue edge radial velocity of -260 km s\(^{-1}\), and in the SiII 6371.4 Å line, centered at -140 km s\(^{-1}\) with a blue edge radial velocity of -190 km s\(^{-1}\). Based upon the radial velocities of these dual absorption features, we identify the enormous P-Cygni profile around 6394 Å, seen in Figure 2, as FeI 6393.6 Å. A strong P-Cygni profiled line in the vicinity of 6190 Å, which we attribute to FeI 6191.6 Å, also emerged on March 11 (Figure 5). Figure 1 also reveals new spectroscopic features at 6544.9 Å, 6577.2 Å, and 6582.5 Å, which we attribute to MgII 6545.9 Å and CII 6578.1 Å and 6582.9 Å. The apparent emergence of both higher excitation lines (CII and MgII) simultaneously with lower excitation lines (FeI) illustrates the complexity of V838 Mon's outburst. In fact, nearly all 9 orders of our spectra show evidence for the emergence of new spectral features on March 11 and March 14. Due to low signal to noise ratios, as well as uncertainties in line blending and profile shapes, we are unable to identify all lines definitively. Since many of these lines are consistent with the rest wavelengths of FeI, NeI, NiI, TiII, MgII, and FeII, and since as noted above we have positively identified two lines of FeI emerging on March 11, these results indicate that V838 Mon began to experience a shift to a lower ionization state. The evidence for spectral evolution that we observed will need to be combined with that of other authors to portray a comprehensive picture of V838 Mon's outbursts.

### 3.2. Spectropolarimetric Variability

Figures 4 and 6 illustrate the wavelength dependent polarization of V838 Mon on February 8 and February 13, respectively. The differences between these two observations are readily apparent. The integrated Johnson R band polarization of the February 8 data is \( P = 3.226 \pm 0.004\% \) at a position angle of 149.0 \( \pm \) 0.1\(^\circ\), while the R band polarization of the later observation is \( P = 2.667 \pm 0.004\% \) at a position angle of 153.4 \( \pm \) 0.1\(^\circ\). This change strongly suggests the presence of an intrinsic polarization component.
Furthermore, the February 8 data are characterized by strongly depolarized emission lines, while the February 13 polarimetric data show no line features. Polarimetric studies of Be stars (Harrington & Collins 1968; Coyne 1976) have found that in contrast to continuum photons, line emission, which predominantly originates in Be circumstellar disks, has a low probability of being scattered. With a few exceptions (McLean & Clarke 1979; Quirrenbach et al. 1997), emission lines should show little to no intrinsic polarization. Thus an intrinsically polarized emission line star should exhibit depolarized emission lines, i.e., a superposition of polarized continuum flux and unpolarized line flux. If one employs a similar argument with the ejecta of V838 Mon, the strongly depolarized emission lines of February 8 may be used to infer the interstellar polarization component (ISP). Similarly, the absence of depolarization effects in the February 13 data suggests that this polarization minimum may be attributed primarily to interstellar polarization. As previously noted, the electron scattering wings of H\(\alpha\) disappeared by February 14. Since one expects electron scattering in the ejecta of V838 Mon to be the primary source of any intrinsic polarization, the disappearance of the electron scattering wings is consistent with the hypothesis that the polarization signal observed on February 13 is primarily interstellar in nature.

In order to parametrize the wavelength dependence of the interstellar polarization in the February 13 data, we fitted the empirical Serkowski law (Serkowski et al. 1975), as modified by Wilking et al. (1982), to these data. The resulting ISP parameters are: \( P_{\text{max}} = 2.746 \pm 0.011\% \), \( \lambda_{\text{max}} = 5790 \pm 37\,\text{Å} \), \( PA = 153.43 \pm 0.12\,\text{°} \), \( \delta PA = 0 \), and \( K = 0.971 \). This fit is overlaid in Figures 4 and 6. This Serkowski fit provides a near perfect fit to the February 13 observation; furthermore, it nicely fits the depolarized emission lines in the February 8 observation. We qualitatively crosscheck this claim by using the polarization and extinction relationship, formulated by Serkowski et al. (1975), \( 3E_{B-V} \leq P_{\text{max}} \leq 9E_{B-V} \). Munari et al. (2002b) established a lower limit for the interstellar reddening of \( E_{B-V} \sim 0.25 \) and suggested that the finding of Zwitter & Munari (2002), \( E_{B-V} = 0.80 \pm 0.05 \), represents an upper limit. Following the arguments of Munari et al. (2002b), we adopt the midpoint of these values, \( E_{B-V} = 0.50 \), which bounds the interstellar polarization along the line of sight to V838 Mon by \( 1.5\% \leq P_{\text{max}} \leq 4.5\% \), and thus qualitatively agrees with our ISP determination. Munari et al. (2002b) reported preliminary polarimetry results in which they suggested the ISP is characterized by \( P_{\text{max}} = 2.6\% \) at 5500 Å at a position angle of 150\( \pm \)2\(^\circ\). We are thus confident that our parametrization accurately describes the interstellar polarization component.
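(Editorial aside, not part of the original analysis: the sketch below evaluates the fitted ISP curve at a few wavelengths and the extinction cross-check quoted above, assuming the standard Serkowski form P(λ) = P_max exp[-K ln²(λ_max/λ)].)

```python
import numpy as np

# Fitted ISP parameters quoted above (February 13 data).
P_MAX, LAM_MAX, K = 2.746, 5790.0, 0.971   # [%], [Angstrom], dimensionless

def serkowski(lam):
    """Serkowski law: P(lam) = P_max * exp(-K * ln^2(lam_max / lam))."""
    return P_MAX * np.exp(-K * np.log(LAM_MAX / lam) ** 2)

for lam in (4000.0, 5500.0, 6563.0, 8500.0):   # sample wavelengths [A]
    print(f"P_ISP({lam:.0f} A) = {serkowski(lam):.2f} %")

# Cross-check against the Serkowski et al. (1975) extinction bound,
# 3 E(B-V) <= P_max <= 9 E(B-V), with the adopted E(B-V) = 0.50.
ebv = 0.50
print(f"{3 * ebv:.1f} % <= P_max <= {9 * ebv:.1f} %   (fit: {P_MAX} %)")
```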
We used these Serkowski parameters to remove the ISP component from the February 8 data, as seen in Figure 4, leaving only the intrinsic component. We find the integrated V band intrinsic polarization to be $P = 0.983 \pm 0.012\%$ at a position angle of $127.0 \pm 0.5^\circ$. It is interesting that the intrinsic polarization is clearly not wavelength independent, contrary to what one would expect in the case of pure electron scattering. Rather, the polarization gradually increases at wavelengths shortward and longward of $\sim 8000\AA$, which suggests the presence of an absorptive opacity source in V838 Mon's ejecta. A possible Paschen jump, albeit only at a one-sigma detection level, is visible in the raw and intrinsic polarization in Figure 4. Combined with the spectroscopic observation of strong Hα electron scattering wings on February 8, this suggests, albeit speculatively, that hydrogen is the opacity source (Wood et al. 1996, 1997).

### 3.3. Distance Estimations

The distance to V838 Mon has yet to be agreed upon. Munari et al. (2002a,b) followed the propagation of V838 Mon's light echo, assuming a spherical distribution of scattering material, to derive a distance of $790 \pm 30$ pc. Kimeswenger et al. (2002b) estimated a distance of 2.5 kpc from HST light echo images; however, it has been suggested that the geometry assumed by these authors is unrealistic (Munari et al. 2002b; Kimeswenger et al. 2002b). More recently, the reported detection of a hot binary companion (Desidera & Munari 2002; Wagner & Starrfield 2002; Munari et al. 2002c) has led Munari et al. (2002c) to suggest a distance of 10-11 kpc, based upon spectrophotometric parallax. We add to the above discussion by considering the distance implied by our spectroscopic and polarimetric observations.

Based upon the assumption that cataclysmic variables contain no intrinsic polarization, Barrett (1996) suggested a rough relationship between polarization and distance. When applied to sources near the Galactic Plane, for distances $\leq 1$ kpc, this relation is given by $P/d = 3.6\%$ kpc$^{-1}$. Given our estimate of $P_{\max}$ of 2.746%, this would suggest a distance to V838 Mon of 763 pc.

V838 Mon's strong, double interstellar Na I D lines provide a different constraint on the distance. At galactic longitude $217.8^\circ$, radial velocities of objects outside the solar circle are positive and increase monotonically with increasing distance from the sun. Thus, the radial velocity of the longer wavelength component provides a lower limit on the distance to V838 Mon. The radial velocities of the two components of the D lines were measured in the spectra of February 5, 6, 8, and 9. For D1 and D2, the means and standard deviations were, respectively, $21.9 \pm 0.6$, $22.1 \pm 0.8$, $47.9 \pm 0.8$, and $47.5 \pm 2.8$ km s$^{-1}$, relative to the LSR. Note that our data are accurate to 2 km s\(^{-1}\), as compared to the IAU velocity standard \(\beta\) Gem, which is constant to better than 0.1 km s\(^{-1}\) (Larson et al. 1993). To read off the distance of the farther cloud, at 48 km s\(^{-1}\), we used the velocity contour map of Brand & Blitz (1993), which does not assume the velocity field of Galactic rotation to be axisymmetric. The galactic longitude of V838 Mon coincides with an interesting feature in this map, an "island" of high velocities of about 50 km s\(^{-1}\) located about 2500 pc from the Sun. We estimate that distances consistent with this velocity map, for a radial velocity of +48 km s\(^{-1}\), lie in the range 2500 ± 300 pc. This estimate constitutes our lower limit on the distance to V838 Mon. Velocities as large as 50 km s\(^{-1}\) are not reached again in this direction at heliocentric distances less than 8 kpc. Since this lower limit is greater than 1 kpc, the distance estimation technique used with our polarimetric data is no longer applicable.
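(Editorial aside, not part of the original text: a one-line numerical check of the polarization-distance estimate quoted above, illustrative only.)

```python
# Barrett (1996) polarization-distance relation for sources near the
# Galactic plane (valid for d <= 1 kpc): P / d = 3.6 % per kpc.
P_MAX = 2.746                     # fitted ISP maximum [%]
d_pc = P_MAX / 3.6 * 1000.0       # implied distance [pc]
print(f"d ~ {d_pc:.0f} pc")       # ~763 pc, as quoted in section 3.3
```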
### 4. Discussion

Our spectroscopic data offer both qualitative and quantitative insight into the initial stages of the 2002 outburst. Future modeling efforts can be constrained by the equivalent width variability of the lines presented. Furthermore, the complex line profile variability and evolution of various species and ionization stages of lines presented in this paper should also provide constraints on future attempts to explain this outburst.

In spite of our sparse polarimetric data set, these observations clearly demonstrate that the ejecta of V838 Monocerotis deviated significantly from a spherical geometry. We note the similarity between our observations and those of Bjorkman et al. (1994), who found Nova Cygni 1992 to have an intrinsic polarization signal during the initial stages of outburst. These authors suggest the intrinsic polarization during this initial stage was caused by electron scattering in a slightly flattened spheroidal shell. As the shell expanded, the electron scattering optical depth decreased, hence the intrinsic polarization declined. A similar interpretation could be applied to V838 Mon. The electron scattering wings around \(\text{H}\alpha\) were sizable on February 5, but had clearly weakened by February 9 and disappeared by February 14. Coupled with our discovery of an intrinsic polarization component present on February 8 but gone by February 13, this picture of an expanding, flattened spheroidal shell could provide a viable explanation of the intrinsic polarization observed during the 2002 outburst.

Finally, we consider the implications of these observations for future studies of this object. Munari et al. (2002b) and Kimeswenger et al. (2002b) discuss different classifications of V838 Mon, including a nova outburst, a post-AGB star, an M31-Red type variable, and a V4332 Sgr type variable: both suggest that V838 Mon is most similar to a V4332 Sgr type variable. As described above, we suggest that the geometry of the outburst, as probed by polarimetry, might be similar to that of a nova outburst. This suggests that the geometries of V4332 Sgr's, V838 Mon's, and nova outbursts might be similar. It would be worthwhile to measure the polarization of V4332 Sgr today, to verify that, like V838 Mon, it has no intrinsic polarization at a time long after outburst. Furthermore, we suggest that polarimetric observations immediately following the outbursts of all future V4332 Sgr type variables be made. Such observations would provide an ideal testbed to correlate the geometry of each outburst, and hence help to identify the true nature of these unique objects.

We would like to thank Dr. Kenneth H. Nordsieck for providing access to the HPOL spectropolarimeter. We also thank Brian Babler for his help with various aspects of HPOL data reduction and management. We thank the anonymous referee for helping to improve this paper. Support for observational research at Ritter Observatory has been provided by the University of Toledo, with technical support provided by R.J. Burmeister. K.S.B. is a Cottrell Scholar of the Research Corporation, and gratefully acknowledges their support. This research has made use of the SIMBAD database operated at CDS, Strasbourg, France, and the NASA ADS system.

REFERENCES

Henden, A., Munari, U., & Schwartz, M. 2002, IAU Circ., 7859
Kimeswenger, S., Lederle, C., & Schmeja, S. 2002a, IAU Circ., 7816
Munari, U., Desidera, S., & Henden, A. 2002c, IAU Circ., 8005
Wagner, R. M., & Starrfield, S. G. 2002, IAU Circ., 7992
Zwitter, T., & Munari, U. 2002, IAU Circ., 7812
Fig. 1.— Hα line profiles sorted chronologically. From shorter to longer wavelengths, the tick marks denote -1000 km s$^{-1}$, -500 km s$^{-1}$, 500 km s$^{-1}$, and 1000 km s$^{-1}$.

Fig. 2.— SiII 6347.1Å and 6371.4Å line profiles sorted chronologically.

Fig. 3.— NaI 5889.95Å and 5895.92Å line profiles sorted chronologically. Note that the narrow interstellar line components are superimposed on the intrinsic components. As discussed in section 3.1, the saturation of the interstellar components prevents the isolation of the intrinsic components.

Fig. 4.— HPOL spectropolarimetry from February 8. The upper panel shows the flux, in units of ergs cm$^{-2}$ s$^{-1}$ $\AA^{-1}$, with the red data magnified by a factor of 2. The next two lower panels display the total polarization and position angle, where the red data, e.g. 6000-10500Å, are binned to a constant error of 0.075% and blue data, e.g. 3200-6000Å, are binned to a constant error of 0.12%. Overplotted is the derived Serkowski interstellar polarization component, whose parameters are given by: $P_{\text{max}} = 2.746 \pm 0.011 \%$, $\lambda_{\text{max}} = 5790 \pm 37 \AA$, $PA = 153.43 \pm 0.12^\circ$, $\delta PA = 0$, and $K = 0.971$. The bottom two panels show the intrinsic polarization and position angle, binned to constant errors of 0.07% and 0.10% for the red and blue data respectively. The wavelength dependence of the intrinsic polarization is not representative of pure electron scattering; rather, it implies the presence of an opacity source such as hydrogen.

Fig. 5.— A strong P-Cygni profile, attributed to FeI 6191.6Å, is shown in a non-continuum normalized spectrum from March 14.

Fig. 6.— HPOL spectropolarimetry from February 13. The upper panel shows the flux, in units of ergs cm$^{-2}$ s$^{-1}$ Å$^{-1}$. The lower two panels give the total polarization and position angle, binned to a constant error of 0.074%. The fitted interstellar polarization component is given by the solid line. The intrinsic polarization which was present on 8 February has clearly disappeared by February 13.

### Table 1:

<table>
<thead>
<tr> <th>UT Date, 2002</th> <th>MJD</th> <th>Observatory</th> <th>SNR: Hα</th> <th>SNR: Si II</th> <th>SNR: Fe I</th> <th>SNR: Na I</th> </tr>
</thead>
<tbody>
<tr> <td>February 5</td> <td>2452310.7</td> <td>Rit</td> <td>24</td> <td>22</td> <td>⋯</td> <td>18</td> </tr>
<tr> <td>February 6</td> <td>2452311.7</td> <td>Rit</td> <td>84</td> <td>68</td> <td>⋯</td> <td>42</td> </tr>
<tr> <td>February 8</td> <td>2452313.6</td> <td>Rit</td> <td>48</td> <td>42</td> <td>⋯</td> <td>32</td> </tr>
<tr> <td>February 8</td> <td>2452313.7</td> <td>HPOL</td> <td>⋯</td> <td>⋯</td> <td>⋯</td> <td>⋯</td> </tr>
<tr> <td>February 9</td> <td>2452314.7</td> <td>Rit</td> <td>66</td> <td>64</td> <td>⋯</td> <td>46</td> </tr>
<tr> <td>February 13</td> <td>2452318.8</td> <td>HPOL</td> <td>⋯</td> <td>⋯</td> <td>⋯</td> <td>⋯</td> </tr>
<tr> <td>February 14</td> <td>2452319.7</td> <td>Rit</td> <td>42</td> <td>⋯</td> <td>⋯</td> <td>28</td> </tr>
<tr> <td>February 19</td> <td>2452324.6</td> <td>Rit</td> <td>14</td> <td>⋯</td> <td>⋯</td> <td>⋯</td> </tr>
<tr> <td>March 11</td> <td>2452344.6</td> <td>Rit</td> <td>60</td> <td>64</td> <td>⋯</td> <td>32</td> </tr>
<tr> <td>March 14</td> <td>2452347.6</td> <td>Rit</td> <td>104</td> <td>84</td> <td>78</td> <td>34</td> </tr>
</tbody>
</table>

**Note.** — Summary of observations. Rit denotes Ritter spectroscopy and HPOL denotes HPOL spectropolarimetry.
Multiple observations during one night were coadded, using standard IRAF techniques, to increase the SNR. The Modified Julian Dates listed correspond to the midpoint of the observations for a specific night. The signal to noise ratios cited are the signal to noise ratios per resolution element, calculated in line free regions of the spectrum. ### Table 2: <table> <thead> <tr> <th>Line</th> <th>MJD</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td></td> <td>2452310.6</td> <td>2452311.6</td> <td>2452313.6</td> <td>2452314.6</td> <td>2452318.7</td> <td>2452319.6</td> <td>2452324.6</td> <td>2452344.6</td> </tr> <tr> <td>Hα</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>FeII 4923.9Å</td> <td>...</td> <td>...</td> <td>p¹</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>FeII 5018.4Å</td> <td>...</td> <td>...</td> <td>p¹</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>FeII 5169.0Å</td> <td>...</td> <td>...</td> <td>p¹</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>FeII 5316.2Å</td> <td>p</td> <td>p</td> <td>p¹</td> <td>p</td> <td>...</td> <td>p?</td> <td>...</td> <td>...</td> </tr> <tr> <td>NaI 5889.95Å</td> <td>p</td> <td>p</td> <td>p</td> <td>p</td> <td>...</td> <td>p</td> <td>...</td> <td>...</td> </tr> <tr> <td>NaI 5895.9Å</td> <td>p</td> <td>p</td> <td>p</td> <td>p</td> <td>...</td> <td>p</td> <td>...</td> <td>...</td> </tr> <tr> <td>FeI 6191.6Å</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>p</td> </tr> <tr> <td>SiII 6347Å</td> <td>p</td> <td>p</td> <td>p</td> <td>p</td> <td>...</td> <td>...</td> <td>p²</td> <td>p²</td> </tr> <tr> <td>SiII 6371Å</td> <td>p</td> <td>p</td> <td>p</td> <td>p</td> <td>...</td> <td>...</td> <td>p²</td> <td>p²</td> </tr> <tr> <td>NaI 6380Å</td> <td>a</td> <td>a</td> <td>a</td> <td>a</td> <td>...</td> <td>...</td> <td>a</td> <td>a</td> </tr> <tr> <td>FeI 6393.6Å</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>p²</td> </tr> <tr> <td>Hα</td> <td>p</td> <td>p</td> <td>p</td> <td>p</td> <td>p¹</td> <td>p</td> <td>p</td> <td>p²</td> </tr> <tr> <td>CH 6576Å</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>e</td> <td>e</td> </tr> <tr> <td>CH 6583Å</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>...</td> <td>e</td> <td>e</td> </tr> <tr> <td>CaII 8498Å</td> <td>...</td> <td>...</td> <td>p¹</td> <td>...</td> <td>p¹</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>CaII 8542Å</td> <td>...</td> <td>...</td> <td>p¹</td> <td>...</td> <td>p¹</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>CaII 8662Å</td> <td>...</td> <td>...</td> <td>p¹</td> <td>...</td> <td>p¹</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>Paschen</td> <td>...</td> <td>...</td> <td>a¹</td> <td>...</td> <td>a¹</td> <td>...</td> <td>...</td> <td>...</td> </tr> <tr> <td>HI 10049.4Å</td> <td>...</td> <td>...</td> <td>p¹?</td> <td>...</td> <td>p¹?</td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> Note. — Observed spectral lines and their general characteristics (p = P Cygni profile, a = absorption, e = emission). ¹ denotes line identification via low resolution HPOL spectropolarimetry. ² denotes multiple absorption components observed. 
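The interstellar polarization component overplotted in Figs. 4 and 6 is parameterized by a Serkowski law with the parameters listed in the Fig. 4 caption. As a point of reference, the sketch below evaluates the standard Serkowski form, P(λ) = P_max exp[−K ln²(λ_max/λ)], with those parameters; the function name and the sampling grid are illustrative and are not part of the original analysis.

```python
import numpy as np

def serkowski(wavelength_angstrom, p_max=2.746, lam_max=5790.0, k=0.971):
    """Standard Serkowski interstellar-polarization law, P(lambda) in percent.

    Default parameters follow the fit quoted in the Fig. 4 caption:
    P_max = 2.746 %, lambda_max = 5790 A, K = 0.971.
    """
    lam = np.asarray(wavelength_angstrom, dtype=float)
    return p_max * np.exp(-k * np.log(lam_max / lam) ** 2)

# Evaluate over the HPOL wavelength range (3200-10500 A) shown in Figs. 4 and 6.
grid = np.linspace(3200.0, 10500.0, 8)
for lam, p in zip(grid, serkowski(grid)):
    print(f"{lam:7.0f} A   P_IS = {p:5.3f} %")
```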
Table 3: <table> <thead> <tr> <th>Line</th> <th>MJD 2452310.6</th> <th>MJD 2452311.6</th> <th>MJD 2452313.6</th> <th>MJD 2452314.6</th> <th>MJD 2452319.6</th> <th>MJD 2452324.6</th> <th>MJD 2452344.6</th> <th>MJD 2452347.6</th> </tr> </thead> <tbody> <tr> <td>Hα (total)</td> <td>-29.15 ±0.75</td> <td>-34.22 ±0.25</td> <td>-33.42 ±0.43</td> <td>-27.88 ±0.26</td> <td>-9.01 ±0.15</td> <td>-3.40 ±0.26</td> <td>-0.66 ±0.03</td> <td>-0.73 ±0.02</td> </tr> <tr> <td>SiII 6347 Å (abs)</td> <td>1.49 ±0.06</td> <td>1.22 ±0.02</td> <td>0.80 ±0.02</td> <td>0.81 ±0.02</td> <td>...</td> <td>...</td> <td>0.53 ±0.01</td> <td>0.45 ±0.01</td> </tr> <tr> <td>SiII 6347 Å (em)</td> <td>-0.28 ±0.02</td> <td>-0.36 ±0.01</td> <td>-0.35 ±0.01</td> <td>-0.29 ±0.01</td> <td>...</td> <td>...</td> <td>-0.11 ±0.01</td> <td>-0.12 ±0.01</td> </tr> <tr> <td>SiII 6371 Å (abs)</td> <td>0.90 ±0.05</td> <td>0.81 ±0.02</td> <td>0.48 ±0.02</td> <td>0.45 ±0.01</td> <td>...</td> <td>...</td> <td>0.43 ±0.01</td> <td>0.37 ±0.01</td> </tr> <tr> <td>SiII 6371 Å (em)</td> <td>-0.31 ±0.02</td> <td>-0.30 ±0.01</td> <td>-0.25 ±0.01</td> <td>-0.25 ±0.01</td> <td>...</td> <td>...</td> <td>-0.25 ±0.01</td> <td>-0.23 ±0.01</td> </tr> <tr> <td>NII 6380 Å</td> <td>0.19 ±0.01</td> <td>0.15 ±0.01</td> <td>0.13 ±0.01</td> <td>0.13 ±0.01</td> <td>...</td> <td>...</td> <td>0.12 ±0.01</td> <td>0.13 ±0.01</td> </tr> <tr> <td>FeII 5316.2 Å (abs)</td> <td>1.98 ±0.16</td> <td>1.51 ±0.07</td> <td>1.65 ±0.10</td> <td>1.42 ±0.06</td> <td>1.03 ±0.09</td> <td>...</td> <td>0.77 ±0.08</td> <td>0.55 ±0.04</td> </tr> <tr> <td>FeII 5316.2 Å (em)</td> <td>-2.27 ±0.20</td> <td>-2.39 ±0.11</td> <td>-2.76 ±0.17</td> <td>-3.01 ±0.10</td> <td>-2.87 ±0.20</td> <td>...</td> <td>-0.55 ±0.09</td> <td>-0.63 ±0.06</td> </tr> </tbody> </table> Note. — Equivalent width, in Å, of selected spectral lines for the 8 nights of Ritter observations. Except for Hα all measurements were made on spectra smoothed with a boxcar function of size 3.
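To make the procedure behind the Table 3 entries concrete, the sketch below measures an equivalent width from a continuum-normalized spectrum after boxcar smoothing with a 3-pixel kernel, as described in the note. The spectrum, line parameters, and integration windows are synthetic placeholders, not the actual Ritter data; the sign convention (absorption positive, emission negative) matches the table.

```python
import numpy as np

def boxcar_smooth(flux, width=3):
    """Smooth a 1-D spectrum with a boxcar (moving-average) kernel."""
    kernel = np.ones(width) / width
    return np.convolve(flux, kernel, mode="same")

def equivalent_width(wavelength, norm_flux, lo, hi):
    """EW = integral of (1 - F/F_c) dlambda over [lo, hi], in wavelength units."""
    mask = (wavelength >= lo) & (wavelength <= hi)
    return np.trapz(1.0 - norm_flux[mask], wavelength[mask])

# Placeholder: a synthetic P Cygni-like profile near SiII 6347 A.
wl = np.linspace(6330.0, 6365.0, 500)
flux = (1.0
        - 0.4 * np.exp(-0.5 * ((wl - 6341.0) / 1.5) ** 2)   # blueshifted absorption
        + 0.3 * np.exp(-0.5 * ((wl - 6348.0) / 1.5) ** 2))  # redshifted emission
flux = boxcar_smooth(flux, width=3)

print("EW(abs) =", equivalent_width(wl, flux, 6335.0, 6345.0), "A")   # positive
print("EW(em)  =", equivalent_width(wl, flux, 6345.0, 6355.0), "A")   # negative
```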
Geometric Solution of the Hierarchy Problem by Means of Einstein-Cartan Torsion Carl F. Diether III* and Joy Christian† Einstein Centre for Local-Realistic Physics, 15 Thackley End, Oxford OX2 6LB, United Kingdom Two of the major open questions in particle physics are: (1) Why are there no elementary fermionic particles observed in the mass-energy range between the electroweak scale and the Planck scale? And (2), what mechanical energy may be counterbalancing the divergent electrostatic and strong force energies of point-like charged fermions in the vicinity of the Planck scale? In this paper, using a hitherto unrecognized mechanism derived from the non-linear amelioration of Dirac equation known as the Hehl-Datta equation within Einstein-Cartan-Sciama-Kibble extension of general relativity, we present detailed numerical estimates suggesting that the mechanical energy arising from the gravity-induced four-fermion self-interaction in this theory can address both of these questions in tandem. I. INTRODUCTION For over a century Einstein’s theory of gravity has provided remarkably accurate and precise predictions for the behaviour of macroscopic bodies within our cosmos. For the elementary particles in the quantum realm, however, Einstein-Cartan theory of gravity may be more appropriate, because it incorporates spinors and associated torsion within a covariant description [1][2]. For this reason there has been considerable interest in Einstein-Cartan theory, in the light of the field equations proposed by Sciama [3] and Kibble [4]. For example, in a series of papers Poplawski has argued that Einstein-Cartan-Sciama-Kibble (ECSK) theory of gravity [5] solves many longstanding problems in physics [6][7][8][9]. His concern has been to avoid singularities endemic in general relativity by proposing that our observed universe is perhaps a black hole within a larger universe [7]. Our concern, on the other hand, is to point out using numerical estimates that ECSK theory also offers solutions to two longstanding problems in particle physics. The first of these problems can be traced back to the fact that gravity is a considerably weaker “force” compared to the other forces. When Newton’s gravitational constant is combined with the speed of light and Planck’s constant, one arrives at the energy scale of $\sim 10^{19}$ GeV, which is some 17 orders of magnitude larger than the heaviest known elementary fermion (the top quark) observed at the mass-energy of $\sim 175$ GeV. Thus there is a difference of some 17 orders of magnitude between the electroweak scale and the Planck scale. There have been many attempts to explain this difference, but none is as simple as our explanation based on the torsion contributions within the ECSK theory. The second problem we address here concerns the well known fact that as we approach the Planck length, $\sim 10^{-35}$ m, the electrostatic and strong force self-energies of point-like fermions become divergent. We will show, however, that torsion contributions within the ECSK theory resolves this difficulty as well, at least numerically, by counterbalancing the divergent electrostatic and strong force energy densities near the Planck scale. In fact, the negative torsion energy associated with the spin angular momentum of elementary fermions may well be the long sought after mechanical energy that counteracts the divergent positive energies stemming from their electrostatic and strong nuclear charges. II. 
STATIC COUNTERPART OF THE HEHL-DATTA EQUATION The ECSK theory of gravity is an extension of general relativity allowing spacetime to have torsion in addition to curvature, where torsion is determined by the density of intrinsic angular momentum, reminiscent of the quantum-mechanical spin [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]. As in general relativity, the gravitational Lagrangian density in the ECSK theory is proportional to the curvature scalar. But unlike in general relativity, the affine connection $\Gamma^k_{ij}$ is not restricted to be symmetric. Instead, the antisymmetric part of the connection, $S^k_{ij} = \Gamma^k_{[ij]}$ (i.e., the torsion tensor), is regarded as a dynamical variable, similar to the metric tensor $g_{ij}$ in general relativity. Then, variation of the total action for the gravitational field and matter with respect to the metric tensor gives Einstein-type field equations that relate the curvature to the dynamical energy-momentum tensor $T_{ij} = (2/\sqrt{-g})\, \delta \mathcal{L}/\delta g^{ij}$, where $\mathcal{L}$ is the matter Lagrangian density. On the other hand, variation of the total action with respect to the torsion tensor gives the Cartan equations for the spin tensor of matter [5]: $$s^{ij} = \frac{1}{\kappa} S^{[ij]}, \quad \kappa = \frac{8\pi G}{c^4}. \quad (1)$$ Thus the ECSK theory of gravity extends general relativity to include the intrinsic spin of matter, with fermionic fields such as those of quarks and leptons providing natural sources of torsion. Torsion, in turn, modifies the Dirac equation for elementary fermions by adding to it a term cubic in the spinor fields, as observed by Kibble, Hehl and Datta [1][4][5]. It is this nonlinear Hehl-Datta equation that provides the theoretical background for our proposal. The cubic term in this equation corresponds to an axial-axial four-fermion self-interaction in the matter Lagrangian, which, among other things, generates a spinor-dependent vacuum-energy term in the energy-momentum tensor (see, for example, Ref. [13]). The torsion tensor $S^k_{ij}$ appears in the matter Lagrangian via the covariant derivative of a Dirac spinor with respect to the affine connection. The spin tensor for the Dirac spinor is $$s^{k}_{\;\,ij} = \frac{i\hbar c}{4}\, \bar{\psi} \gamma^{[i} \gamma^{j} \gamma^{k]} \psi,$$ (2) where $\bar{\psi} \equiv \psi^\dagger \gamma^0 = \left( \psi^*_1,\, \psi^*_2,\, -\psi^*_3,\, -\psi^*_4 \right)$ is the Dirac adjoint of $\psi$ and $\gamma^i$ are the Dirac matrices: $\gamma^{(i} \gamma^{j)} = 2g^{ij}$. The Cartan equations (1) render the torsion tensor quadratic in the spinor fields. Substituting it into the Dirac equation in the Riemann-Cartan spacetime with metric signature $(+, -, -, -)$ gives the cubic Hehl-Datta equation [1][4][5]: $$-i\hbar \gamma^k \psi_{:k} = mc\,\psi + \frac{3\kappa\hbar^2 c}{8} \left( \bar{\psi} \gamma^5 \gamma_k \psi \right) \gamma^5 \gamma^k \psi,$$ (3) where the colon denotes a general-relativistic covariant derivative with respect to the Christoffel symbols, and $m$ is the mass of the spinor.
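For orientation, the sketch below evaluates κ and the coefficient 3κℏ²c/8 of the cubic term in Eq. (3) in SI units, and estimates the confinement scale r at which the torsion term becomes comparable to the mass term mcψ for an electron, taking |ψ|² ∼ 1/r³ as is done later in the paper. The rounded constants and the resulting order-of-magnitude figure are our own illustrative numbers, not the authors' arbitrary-precision values.

```python
import numpy as np

G    = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.0546e-34     # J s
c    = 2.998e8        # m s^-1
m_e  = 9.109e-31      # kg

kappa = 8 * np.pi * G / c**4           # Cartan coupling, cf. eq. (1)
coeff = 3 * kappa * hbar**2 * c / 8    # coefficient of the cubic term in eq. (3)

# With |psi|^2 ~ 1/r^3, the cubic term ~ coeff/r^3 matches the mass term m_e*c when:
r_balance = (coeff / (m_e * c)) ** (1.0 / 3.0)

print(f"kappa              = {kappa:.3e} s^2 kg^-1 m^-1")
print(f"3*kappa*hbar^2*c/8 = {coeff:.3e} (SI)")
print(f"balance radius     ~ {r_balance:.1e} m")   # of order the electron Cartan radius
```

The resulting scale, a few times 10⁻²⁸ m, is of the order of the electron Cartan radius discussed in Sect. III, which is where the torsion-induced self-interaction first becomes non-negligible.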
The Hehl-Datta equation (3) and its adjoint can be obtained by varying the following action with respect to $\bar{\psi}$ and $\psi$ (respectively), without varying it with respect to the metric tensor or the torsion tensor [13]: $$S = \int d^4x \sqrt{-g} \left\{ -\frac{1}{\kappa} R - \frac{i\hbar c}{2} \left( \bar{\psi} \gamma^k \psi_{:k} - \bar{\psi}_{:k} \gamma^k \psi \right) - mc^2 \bar{\psi} \psi - \frac{3\kappa\hbar^2 c^2}{16} \left( \bar{\psi} \gamma^5 \gamma_k \psi \right) \left( \bar{\psi} \gamma^5 \gamma^k \psi \right) \right\}. \quad (4)$$ The last term in this action corresponds to the effective axial-axial, four-fermion self-interaction mentioned above: $$\mathcal{L}_{AA} = -\sqrt{-g}\, \frac{3\kappa\hbar^2 c^2}{16} \left( \bar{\psi} \gamma^5 \gamma_k \psi \right) \left( \bar{\psi} \gamma^5 \gamma^k \psi \right). \quad (5)$$ This self-interaction term is not renormalizable. But it is an effective Lagrangian density in which only the metric and spinor fields are dynamical variables. The original Lagrangian density for a Dirac field, in which the torsion tensor is also a dynamical variable (giving the Hehl-Datta equation), is renormalizable, since it is quadratic in the spinor fields. But, as we will see, renormalization may not be required if ECSK gravity turns out to be what is realized in Nature. Before proceeding further we note that the above action is not the most general possible action within the present context. In addition to the axial-axial term, axial-vector and vector-vector terms can be added to the action, albeit as non-minimal couplings (see, for example, Ref. [15]). However, it has been argued in Ref. [13] that minimal coupling is the most natural coupling of fermions to gravity, because non-minimal couplings are sourced by components of the torsion that do not appear naturally in models of spinning matter. For this reason we will confine our treatment to the minimal coupling of fermions to gravity and the corresponding Hehl-Datta equation, while recognizing that, strictly speaking, our neglect of non-minimal couplings amounts to an approximation, albeit a rather good one. Moving forward to our goal of numerical estimates, if we require the action (4) to be invariant under local U(1) phase transformations, then $\psi_{:k}$ transforms to $\psi_{:k} + iqA_k \psi/\hbar$ for a charge $q$ and a gauge field $A_k$, and eq. (3) generalizes to $$-i\hbar \gamma^k \psi_{:k} + q \gamma^k A_k \psi = mc\,\psi + \frac{3\kappa\hbar^2 c}{8} \left( \bar{\psi} \gamma^5 \gamma_k \psi \right) \gamma^5 \gamma^k \psi.$$ (6) In the rest frame of the particle and anti-particle, with the metric signature \((+, -, -, -)\), this equation simplifies to \[ - i \hbar\, \gamma^0 \frac{\partial \psi}{\partial t} + q c A_0 \gamma^0 \psi = mc^2 \psi + \frac{3 \kappa \hbar^2 c^2}{8} \left( \bar{\psi} \gamma^5 \gamma_0 \psi \right) \gamma^5 \gamma^0 \psi , \tag{7} \] which, written out in components, becomes \[ - i \hbar \begin{pmatrix} +\,\partial \psi_1/\partial t \\ +\,\partial \psi_2/\partial t \\ -\,\partial \psi_3/\partial t \\ -\,\partial \psi_4/\partial t \end{pmatrix} + q c A_0 \begin{pmatrix} +\psi_1 \\ +\psi_2 \\ -\psi_3 \\ -\psi_4 \end{pmatrix} = mc^2 \begin{pmatrix} \psi_1 \\ \psi_2 \\ \psi_3 \\ \psi_4 \end{pmatrix} - \frac{3 \kappa \hbar^2 c^2}{8} \left\{ \psi_1^* \psi_3 + \psi_2^* \psi_4 + \psi_1 \psi_3^* + \psi_2 \psi_4^* \right\} \begin{pmatrix} -\psi_3 \\ -\psi_4 \\ +\psi_1 \\ +\psi_2 \end{pmatrix}, \tag{8} \] where we have used \[ \gamma^0 = \begin{pmatrix} +1 & 0 & 0 & 0 \\ 0 & +1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} \quad \text{and} \quad \gamma^5 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ +1 & 0 & 0 & 0 \\ 0 & +1 & 0 & 0 \end{pmatrix} . \tag{9}
\] If we now represent the particles and anti-particles with two-component spinors \(\psi_a\) and \(\psi_b\), respectively \([17]\), where \[ \psi_a := \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \quad \text{and} \quad \psi_b := \begin{pmatrix} \psi_3 \\ \psi_4 \end{pmatrix} \tag{10} \] are the two-component spinors constituting the four-component Dirac spinor, then the above equation can be written as two \textit{coupled} partial differential equations: \[ - i \hbar \frac{\partial \psi_a}{\partial t} + q c A_0 \psi_a = mc^2 \psi_a + \frac{3 \kappa \hbar^2 c^2}{8} \left\{ \psi_1^* \psi_3 + \psi_2^* \psi_4 + \psi_1 \psi_3^* + \psi_2 \psi_4^* \right\} \psi_b \tag{11} \] and \[ + i \hbar \frac{\partial \psi_b}{\partial t} - q c A_0 \psi_b = mc^2 \psi_b - \frac{3 \kappa \hbar^2 c^2}{8} \left\{ \psi_1^* \psi_3 + \psi_2^* \psi_4 + \psi_1 \psi_3^* + \psi_2 \psi_4^* \right\} \psi_a. \tag{12} \] Unlike the case of the Dirac equation, these equations for the spinors \(\psi_a\) and \(\psi_b\) are coupled even in the rest frame. They decouple in the limit when the torsion-induced axial-axial four-fermion interaction is negligible. On the other hand, at low energies it is reasonable to assume that, in analogy with the Dirac spinors in flat spacetime, the above two-component spinors for free particles decouple in the rest frame, admitting plane wave solutions of the form \[ \psi_a(t) = e^{-i(m c^2 / \hbar) t}\, \psi_a(0) \quad \text{and} \quad \psi_b(t) = e^{+i(m c^2 / \hbar) t}\, \psi_b(0), \tag{13} \] together with \[ \psi_b(t) = e^{+i(2 m c^2 / \hbar) t}\, \psi_a(t) , \tag{14} \] so that initially they are equal to each other \([17]\): \[ \psi_b(0) = \psi_a(0). \tag{15} \] Substitution of this form of solutions into eqs. (11) and (12) reduces them to the following two equations: \[ - mc^2 \psi_a(t) + q c A_0\, \psi_a(t) = mc^2 \psi_a(t) + \frac{3 \kappa \hbar^2 c^2}{4} \cos \left( \frac{2 m c^2}{\hbar} t \right) |\psi_a(0)|^2\, \psi_b(t) \tag{16} \] and \[ - mc^2 \psi_b(t) - q c A_0\, \psi_b(t) = mc^2 \psi_b(t) - \frac{3 \kappa \hbar^2 c^2}{4} \cos \left( \frac{2 m c^2}{\hbar} t \right) |\psi_a(0)|^2\, \psi_a(t). \tag{17} \] Now we are only concerned with the static scenario (i.e., time \( t = 0 \)), for which this pair of equations reduces to \[ + \frac{q c A_0}{2}\, \psi_a(0) = mc^2 \psi_a(0) + \frac{3 \kappa \hbar^2 c^2}{8} |\psi_a(0)|^2\, \psi_b(0) \tag{18} \] and \[ - \frac{q c A_0}{2}\, \psi_b(0) = mc^2 \psi_b(0) - \frac{3 \kappa \hbar^2 c^2}{8} |\psi_a(0)|^2\, \psi_a(0). \tag{19} \] In order to obtain scalar counterparts of these (now essentially decoupled) equations, we multiply them through from the left by \( \tilde{\psi}_a(0) = \left(\psi_a^{1*}(0),\, \psi_a^{2*}(0)\right) \) and \( \tilde{\psi}_b(0) = \left(\psi_b^{1*}(0),\, \psi_b^{2*}(0)\right) \), respectively, and – in the light of eq. (15) – arrive at \[ + \frac{qcA_0}{2} |\psi_a(0)|^2 - \frac{3\kappa \hbar^2 c^2}{8} |\psi_a(0)|^4 = mc^2 |\psi_a(0)|^2 \tag{20} \] and \[ - \frac{qcA_0}{2} |\psi_b(0)|^2 + \frac{3\kappa \hbar^2 c^2}{8} |\psi_b(0)|^4 = mc^2 |\psi_b(0)|^2. \tag{21} \] Substituting in SI units for the scalar field \( A_0 = V/c = q/(4\pi \varepsilon_0 r c) \) in the Lorentz gauge (where \( V \) is the electric potential), and for \( \kappa = \frac{8\pi G}{c^4} \), together with the probabilities \( |\psi_a(0)|^2 = |\psi_b(0)|^2 \sim 1/r^3 \) of finding the particles in the volume \( r^3 \) [cf. eq. 
(15)], we finally arrive at our central equations, which hold at least for \( t = 0 \), for any electroweak fermion of charge \( q \) and mass \( m \) and its anti-particle in the Riemann-Cartan spacetime: \[ \frac{q^2}{8\pi \varepsilon_0 r^4} - \frac{3\pi G h^2}{c^2 r^6} = \frac{m c^2}{r^3} \tag{22} \] and \[ \frac{3\pi G h^2}{c^2 r^6} - \frac{q^2}{8\pi \varepsilon_0 r^4} = \frac{m c^2}{r^3}. \tag{23} \] where \( r \) is the radial distance from \( q \) and the two equations correspond to the particle and anti-particle, respectively. Note that, as one would expect, if we multiply through each of these two equations by \( r^3 \), then for fermion anti-fermion pair annihilation the terms on the LHS of the equations cancel out, leaving \( 2mc^2 \) in energy to fly off as photons. It is also worth noting that without ameliorating the Dirac equation with a cubic term, eq. (22) would reduce for an electron to \( ah/2r_e = m_e c \), giving \( r_e = (ah/2m_e c) \sim 10^{-15} \text{m} \), where \( a = e^2/(4\pi \varepsilon_0 hc) \) is the fine structure constant. This is one half of the classical electron radius. Experimental evidence, however, suggests that electron radius is much smaller. As we shall see, our calculations with the cubic term included predicts the electron radius to be of the order of \( 10^{-34} \text{m} \), which is closer to the Planck length. This may turn out to be the correct value of the electron radius. Needless to say, what we have presented above is a derivation of eq. (22) within a theory that may be viewed as a semi-classical theory of Dirac fields in a Riemann-Cartan spacetime [4][5]. It can be interpreted also as a theory of gravity-induced four-fermion self-interaction within standard general relativity [1][5]. A possible second-quantized generalization of this theory is beyond the scope of our paper. However, any such generalization must necessarily reproduce the Hehl-Datta equation (3) for single fermions even at reasonably high energies, just as Dirac equation remains valid for single fermions at high energies [16]. It is therefore not unreasonable to base our numerical estimates below on eq. (22) derived above. We shall soon see that this equation is both necessary and sufficient for our purposes. Finally, it is important to note here that, despite the appearance of four spinors in the interaction term of eq. (4), it describes the self-interaction of a single fermion, of range \( \sim 10^{-31} \text{m} \), not mutual interactions among the spins of four distinct fermions. That is to say, it does not describe a “spin field” of some sort as a carrier of a new interaction [5]. If, however, one insists on interpreting the interaction term in eq. (4) as describing interactions among four distinct fermions, then the mass of the corresponding exchange boson would have to exceed \( 10^{14} \text{GeV} \), which is evidently quite unreasonable. What is more, as we shall soon see, within our scheme any corrections due to vacuum polarization are automatically compensated for in the production of electroweak mass-energy, dictated by eqs. (22) and (23) above. III. PARTICLE MASSES VIA TORSION ENERGY CONTRIBUTION For our numerical analysis it is instructive to bring out the physical content of eq. (22) in purely classical terms. To this end, recall that when the self-energy of an electron is attributed solely to its electrostatic content, it is found that that energy is divergent, provided we assume the electron to be a point particle. 
This energy is called “self-energy”, because it arises from the interaction of the charge of the electron with the electrostatic field that it itself is creating. With the help of our result (22) above we can avoid this fundamental difficulty as follows. Multiplying through eq. (22) by \( 3/4\pi \), it can be written as \[ \left( \frac{q^2}{8\pi \varepsilon_0 r} \right) \frac{3}{4\pi r^3} - \left( \frac{3\pi G \hbar^2}{c^2 r^3} \right) \frac{3}{4\pi r^3} = \left( m c^2 \right) \frac{3}{4\pi r^3}. \tag{24} \] With $4\pi r^3/3$ recognized as the volume of a sphere of radius $r$, it is now easy to recognize each quantity in parentheses in this equation as an energy, and each term as the corresponding energy density. Now the first term on the left of the equation is three times the electrostatic energy density at a distance $r$ from the charge, with the latter given by [18] $$u_{\text{static}} = \frac{q^2}{32\pi^2\varepsilon_0 r^4}. \quad (25)$$ And the quantity in the first parenthesis is the total energy of a “thin” spherical shell of charge $q$ at a distance $r$ [18]: $$U_{\text{sphere}} = \frac{q^2}{8\pi\varepsilon_0 r}. \quad (26)$$ Unfortunately the first term in eq. (24) diverges as $r \to 0$. But if we cut $r$ off at the Planck length, $l_P = \sqrt{G\hbar/c^3}$, then by setting $q = -e$ for an electron we obtain $$\frac{3e^2}{32\pi^2\varepsilon_0 l_P^4} \approx 2.5190 \times 10^{120} \text{ GeV m}^{-3}. \quad (27)$$ Although finite, this is still an extremely large energy density. But such a large energy density for charged leptons is never realized in Nature. A natural question then is: Is there a negative mechanical energy density that cancels out most of this energy to produce the observed rest mass-energy of leptons? We believe the answer lies in the second term of eq. (24), which – as we saw above – arises from the non-linear amelioration of the Dirac equation within the ECSK theory. Indeed, if we again set the Planck length cut-off for $r$ in the second term of eq. (24), then we obtain $$-\frac{9G\hbar^2}{4c^2 l_P^6} \approx -6.5067 \times 10^{123} \text{ GeV m}^{-3}. \quad (28)$$ Comparing this value with the electrostatic energy density at the Planck length cut-off estimated in eq. (27) we see at once that the torsion-induced mechanical energy (28) can indeed counterbalance the huge electrostatic energy. This is a surprising observation, considering the widespread belief that “the numerical differences which arise between GR and ECSK theories are normally very small, so that the advantages of including torsion are entirely theoretical” [16]. Moving forward to our goal of numerical estimates, let us note that whenever terms quadratic in spin happen to be negligible, the ECSK theory is observationally indistinguishable from general relativity. Therefore, for post-general-relativistic effects, the density of spin-squared has to be comparable to the density of mass. The corresponding characteristic length scale, say for a nucleon, is referred to as the Cartan or Einstein-Cartan radius, defined as [2][16] $$r_{\text{Cart}} \approx \left(l_P^2\, \lambda_C\right)^{\frac{1}{3}}, \quad (29)$$ where $\lambda_C$ is the Compton wavelength of the nucleon. Now it has been noted by Poplawski [6][7][8][9] that quantum field theory based on the Hehl-Datta equation may avert divergent integrals normally encountered in calculating radiative corrections, by self-regulating propagators.
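The two Planck-cutoff estimates in Eqs. (27) and (28) are easy to reproduce; the sketch below does so with rounded CODATA-like constants. It is a rough numerical check, not the authors' arbitrary-precision computation.

```python
import numpy as np

G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8     # SI units
eps0, e    = 8.854e-12, 1.602e-19
GeV        = 1.602e-10                          # joules per GeV

l_P = np.sqrt(G * hbar / c**3)                  # Planck length, ~1.6e-35 m

u_static  = 3 * e**2 / (32 * np.pi**2 * eps0 * l_P**4)   # eq. (27), J m^-3
u_torsion = -9 * G * hbar**2 / (4 * c**2 * l_P**6)       # eq. (28), J m^-3

print(f"electrostatic density at l_P : {u_static / GeV:.3e} GeV m^-3")   # ~ +2.5e120
print(f"torsion density at l_P       : {u_torsion / GeV:.3e} GeV m^-3")  # ~ -6.5e123
```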
Moreover, the multipole expansion applied to Dirac fields within the ECSK theory shows that such fields cannot form singular, point-like configurations because these configurations would violate the conservation law for the spin density, and thus the Bianchi identities. These fields in fact describe non-singular particles whose spatial dimensions are at least of the order of their Cartan radii, defined by the condition $$\epsilon \sim k s^2, \quad (30)$$ where $\sqrt{s^2} \sim \hbar c|\psi|^2$ is the spin density, $\epsilon \sim me^2|\psi|^2$ is the rest energy density, and $|\psi|^2 \sim 1/r^3$ is the probability density, giving the radius (29). Consequently, at the least the de Broglie energy associated with the Cartan radius of a fermion (which is approximately $10^{-27}$ m for an electron) may introduce an effective ultraviolet cutoff for it in quantum field theory in the ECKS spacetime. The avoidance of divergences in radiative corrections in quantum fields may thus come from spacetime torsion originating from intrinsic spin. Poplawski and others, however, took $\epsilon$ to be the mass-energy density of the fermion to arrive at the Cartan radius (29). But it is easy to work out from the first term of our eq. (22) that at the Cartan radius the electrostatic energy density for an electron is still extremely large: $$\frac{3\alpha\hbar c}{8\pi (10^{-27}m)^3} \approx 1.7188 \times 10^{69} \text{ GeV m}^{-3}. \quad (31)$$ For this reason it is not correct to identify $\epsilon$ with the rest mass-energy density, which is $\approx 5.1099 \times 10^{77}$ GeV m$^{-3}$ for an electron at the Cartan radius. The electrostatic energy density of an electron is thus eleven orders of magnitude higher. Therefore $\epsilon$ is better identified with the electrostatic energy density (31), provided most of it is canceled out. If in eq. (22) we set the electrostatic energy density appearing in its first term to be equal to the spin energy density induced by the four-fermion self-interaction appearing in its second term and solve for $r$, then we obtain $$r_t = \sqrt{\frac{6\pi}{\alpha}} l_p \approx 8.2143 \times 10^{-34} \text{ m}.$$ \hspace{1cm} (32) Which is about fifty one times larger than the Planck length, and is a remarkably simple constant in terms of the Planck length and the fine structure constant. According to eq. (22), this is the effective radius at which energy density due to spin density should completely compensate the huge electrostatic energy seen in (27). In our view this is the correct Cartan radius, at least for the charged leptons, that may still provide a plausible mechanism for averting singularities, since it is closer to the Planck length. It is important to note, however, that these huge energy densities never actually occur in Nature, because according to our eq. (22) they are automatically compensated. The physical mechanism described above is simply to enable extraction of the radius $r_t$ for different charged fermions. In order to obtain an observed mass-energy for the elementary fermions, we now posit that there is a very small difference between the radii of their electrostatic energy density and their spin energy density: $\Delta r := |r_x - r_t|$. We do not propose a specific reason for this difference, but one possible reason may be our neglect to include axial-vector and vector-vector interactions in the action (4) for the derivation of Hahl-Datta equation, as we discussed earlier [13][15]. 
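The cancellation radius in Eq. (32) above is also straightforward to verify numerically. The sketch below computes r_t = (6π/α)^{1/2} l_P and, for contrast, the radius αℏ/(2m_ec) obtained in Sect. III when the cubic term is dropped (one half of the classical electron radius). The constants are rounded, so only the leading digits of the 20-digit values quoted in the next section can be reproduced here.

```python
import numpy as np

G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8
alpha      = 7.2974e-3
m_e        = 9.109e-31

l_P = np.sqrt(G * hbar / c**3)

r_t      = np.sqrt(6 * np.pi / alpha) * l_P    # eq. (32): cancellation radius
r_no_tor = alpha * hbar / (2 * m_e * c)        # radius with the cubic term dropped

print(f"r_t              = {r_t:.4e} m   ({r_t / l_P:.1f} Planck lengths)")
print(f"no-torsion radius = {r_no_tor:.4e} m  (half the classical electron radius)")
```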
While exclusion of such non-minimal couplings may be justified on physical grounds, their inclusion in the derivation of eq. (22) would not have allowed us to decouple the equations of motion (16) and (17) for particles and anti-particles at $t = 0$. Consequently, in that case the probability densities such as $|\psi_a(0)|^2 \sim 1/r^4$ could not be assumed to be exactly the same for the electrostatic term and the spin density term in those equations. Perhaps a second-quantized generalization of the ECSK-Dirac theory would eventually lead us to a better understanding of the origin(s) of $\Delta r$. In order to approximate the difference $\Delta r$, we hold the radius for the spin energy density to that of the “cancellation radius” $r_t$, because this radius is constant for a given charged lepton, and because we expect spin energy density to be the same for all charged leptons. We then vary the radius for the electrostatic and the rest mass-energy densities, which we take to be the same. Using eq. (22), this leads us to the following formula for our numerical estimates: $$\frac{\alpha \hbar c}{2 r_t^3} - \frac{3 \pi G \hbar^2}{c^2 r_t^6} = \frac{m_x c^2}{r_t^3}.$$ \hspace{1cm} (33) As shown in the Appendix below, we were able to find solutions for $r_x$ for the charged leptons using arbitrary precision in Mathematica. The first in our results listed below is the solution for $r_t$ up to 20 significant figures. Then, using the same precision for comparison, we list the results for $r_e$ for an electron, $r_\mu$ for a muon, and $r_\tau$ for a tauon, along with the anti-fermions using eq. (23): $$r_t = 8.2143011998060519095 \times 10^{-34} \text{ m} \rightarrow 0.0 \text{ MeV},$$ $$r_e^- = 8.2143011998060519083 \times 10^{-34} \text{ m} \rightarrow 0.511 \text{ MeV},$$ $$r_\mu^- = 8.2143011998060518270 \times 10^{-34} \text{ m} \rightarrow 106 \text{ MeV},$$ $$r_{\tau^-} = 8.2143011998060505218 \times 10^{-34} \text{ m} \rightarrow 1777 \text{ MeV},$$ $$r_e^+ = 8.2143011998060519107 \times 10^{-34} \text{ m} \rightarrow 0.511 \text{ MeV},$$ $$r_\mu^+ = 8.2143011998060519920 \times 10^{-34} \text{ m} \rightarrow 106 \text{ MeV},$$ $$r_{\tau^+} = 8.2143011998060532972 \times 10^{-34} \text{ m} \rightarrow 1777 \text{ MeV}.$$ Evidently, very minute changes in the radii are seen to cause large changes in the observed rest mass-energies of the fermions. But as the differences in the radii go larger, the resultant mass-energies go higher, as one would expect. Needless to say, for the anti-leptons the results for $r_x$ will be larger than $r_t$ rather than smaller, by the same amount. It seems extraordinary that Nature would subscribe to such tiny differences resulting from large number of significant figures, but that might explain why the underlying relationship between the observed values of the masses of the elementary particles has remained elusive so far. In addition to the possible reasons for this mentioned above, it is not inconceivable that the difference between the spin energy density and the electrostatic energy density radii arises due to purely geometrical factors. We also suspect that there may possibly be some kind of symmetry breaking mechanism at work similar to the Higgs mechanism, and this symmetry breaking results in the observed mass-energy generation. As a consistency check, let us verify that the tiny length differences seen above vanish, \( \Delta r \to 0 \), as the corresponding rest mass-energy differences tend to zero: \( \Delta E \to 0 \). To this end, we recast eq. 
(33) for arbitrary \( r_x \) in a form involving only rest mass-energy on the RHS as: \[ \frac{\alpha \hbar c}{2 r_x} = \frac{3 \pi G \hbar^2}{c^2 r_x^2} r_x^3 = m_x c^2. \tag{34} \] If we now set \[ A \equiv \frac{\alpha \hbar c}{2} \quad \text{and} \quad B \equiv \frac{3 \pi G \hbar^2}{c^2}, \] then, with \( \Delta E = m_x c^2 \) and setting \( r_x = r_t \) as the cancellation radius for which \( \Delta E = 0 \), we obtain \[ r_t = \sqrt{B/A}. \tag{36} \] This allows us to derive a general expression for \( r_x \) when \( \Delta E \neq 0 \): \[ \frac{A}{r_x} - \frac{A^3}{B^2} r_x^3 = \Delta E. \tag{37} \] From this expression it is now easy to see that \[ \lim_{\Delta E \to 0} \left\{ \frac{A}{r_x} - \frac{A^3}{B^2} r_x^3 = \Delta E \right\} \implies r_x = \sqrt{B/A} = r_t, \tag{38} \] and conversely, using (36), \[ \lim_{r_x \to r_t} \left\{ \frac{A}{r_x} - \frac{B}{r_t^6} r_t^3 = \Delta E \right\} \implies \Delta E = 0. \tag{39} \] Consequently, with \( \Delta r \equiv | r_t - r_x | \), we see from the above limits that \( \Delta r \to 0 \) as \( \Delta E \to 0 \), and vice versa. As a rough estimate the calculation for the radius \( r_q \) of elementary quarks can be performed in a similar manner as that for electrons, since at such short distances the strong force reduces to a Coulomb-like force. One must also factor-in the electrostatic energy, so that a relationship like the following must be calculated, say, for the top quark: \[ \frac{2 \alpha \hbar c}{9 r_{q_t}^4} + \frac{4 \alpha_s \hbar c}{r_{q_t}^4} - \frac{3 \pi G \hbar c}{e^4} \left( \frac{\hbar c}{r_{q_t}^3} \right)^2 = \frac{m_t c^2}{r_{q_t}^3}. \tag{40} \] Here \( \alpha_s \) is the appropriate strong force coupling (we use 0.1), and we have used a generic expression for the total energy density of the strong field as a spherical shell less spherical symmetry to match the other terms. Needless to say, a cancellation radius different from that of the charged leptons has to be calculated first, by setting \[ \frac{2 \alpha \hbar c}{9 r_{q_t}^4} + \frac{4 \alpha_s \hbar c}{r_{q_t}^4} = \frac{3 \pi G \hbar c}{e^4} \left( \frac{\hbar c}{r_{q_t}^3} \right)^2. \tag{41} \] A calculation of the radius for the top quark based on eq. (40) can be found in the Appendix. We expect it to be only a very rough estimate of the actual value of the radius. Since only one spin density is involved, the above calculation might be able to approximate the behaviour of the quarks. But there may be slightly different radial differences for the electrostatic part and the strong force part, which would make solving for those differences very difficult. The calculation of the radii \( r_q \) for the up and down quark will probably be problematic as well, since their masses are not well known. But if an underlying relationship is discovered, then that may help to know those masses better. In a similar rough manner we approximate the radius \( r_n \) for neutrinos by replacing \( \alpha \) with the coupling for the weak force, \( \alpha_w \sim 1/29.5 \), in eq. (33) and using the mass-energy upper limit from Ref. 20. The results are consistent with radius of \( \sim 10^{-34} \) m. It appears that our approximations are of the same order for all leptons and quarks, and that according to our rough estimates the size of all elementary fermions could be very close to the Planck length. IV. 
POSSIBLE SOLUTION OF THE HIERARCHY PROBLEM As alluded to in the introduction, the Hierarchy Problem refers to the fact that gravitational interaction is extremely weak compared to the other known interactions in Nature. One way to appreciate this difference is by combining the Newton’s gravitational constant $G$ with the reduced Planck’s constant $\hbar$ and the speed of light $c$. The resulting mass scale is the Planck mass, $m_P$, which some have speculated to be associated with the existence of smallest possible black holes [7]. If we compare the Plank mass with the mass of the top quark (the heaviest known elementary particle), $$m_P = \sqrt{\frac{\hbar c}{G}} \approx 2.1765 \times 10^{-8} \text{ kg},$$ $$m_t = \frac{173.21 \text{ GeV}}{c^2} \approx 3.1197 \times 10^{-25} \text{ kg},$$ then we see that there is some 17 orders of magnitude difference between them. This illustrates the enormous difference between the Planck scale and the electroweak scale. Many solutions have been proposed to explain this difference, such as supersymmetry and large extra dimensions, but none has been universally accepted, for one reason or another. Furthermore, recent experiments performed with the Large Hadron Collider are gradually ruling out some of these proposals. But regardless of the nature of any specific proposal, it is clear from the above values that predictions of numbers with at least 17 significant figures are necessary to successfully explain the difference between $m_P$ and $m_t$. We saw from our numerical demonstration in the previous section that within the ECSK theory minute changes in length can induce sizable changes in the observed masses of elementary particles, and that we do have numbers at our disposal with more than 17 significant figures for producing those masses. Moreover, all length changes occurring in our demonstration are taking place close to the Planck length. Thus, since we are “canceling out” near the Planck length to obtain masses down to the electroweak scale, ours is clearly a possible mechanism for resolving the Hierarchy Problem. Within the ECSK theory, which extends general relativity to include spin-induced torsion, gravitational effects near micro scales are not necessarily weak. On the other hand, since torsion is produced in the ECSK theory by the spin density of matter, it is mostly confined to that matter, and thus is a very short range effect, unlike the infinite range effect of Einstein’s gravity produced by mass-energy. In fact the torsion field falls off as $1/r^6$, as shown in the calculations of Sect. III, since it is produced by spin density squared, confined to the matter distribution [9]. To compare the strengths of gravitational and torsion effects at various scales, we may define a mass-dependent dimensionless gravitational coupling constant, $\frac{Gm^2}{\hbar c}$, and evaluate it for the electron, top quark, and Planck masses: $$\alpha_G = \frac{Gm_e^2}{\hbar c} \approx 1.7517 \times 10^{-45},$$ $$\alpha_G = \frac{Gm_t^2}{\hbar c} \approx 1.1620 \times 10^{-36},$$ $$\alpha_G = \frac{Gm_P^2}{\hbar c} = 1,$$ $$\alpha_G = \frac{e^2}{4\pi\varepsilon_0\hbar c} \approx 7.2973 \times 10^{-3}.$$ Here $\alpha_e$ is the electromagnetic coupling constant, or the fine structure constant. From these values we see that near the Planck scale the gravitational coupling is very strong compared to the electromagnetic coupling. However, as we noted above and in Sect. 
III, near the Planck scale torsion effects due to spin density are also very strong, albeit with opposite polarity compared to that of Einstein’s gravity, akin to a kind of “anti-gravity” effect of a very short range. For our demonstration above we have used electrostatic energy density and spin density for matter in a static approximation, for which the field equation within the ECSK theory reduces to $G^{00} = T^{00}$. A numerical estimate for $G^{00}$ from the contributions of the electrostatic energy and spin density parts of $T^{00}$ at our cancellation radius gives $$G_{\text{stat}}^{00} = \frac{8\pi G \alpha \hbar c}{e^4} \approx +5.2614 \times 10^{61} \text{ m}^{-2},$$ and $$G_{\text{spin}}^{00} = -\frac{8\pi G}{e^4} \frac{3\pi \hbar^2}{e^2 r_t^4} \approx -5.2614 \times 10^{61} \text{ m}^{-2}.$$ (42) Evidently, these field strengths at the cancellation radius are quite large even for a single electron. Fortunately they are never realized in Nature, because, as we can see, they cancel each other out to produce $G_{00}^{\text{net}} = 0$. On the other hand, if we use only the mass-energy density for electron at the cancellation radius, then we obtain $G_{00}^{\text{mass}} \approx 3.0674 \times 10^{43} \text{ m}^{-2}$, which is again some 18 orders of magnitude off the mark. What is more, the latter field strength does not fall off as fast as that due to the spin-induced torsion field. Thus it is reasonable to conclude that without the cancellation of divergent energies due to the four-fermion interaction we have explored here, our universe would be highly improbable. V. CONCLUDING REMARKS In this paper we have addressed two longstanding questions in particle physics: (1) Why are there no elementary fermionic particles observed in the mass-energy range between the electro-weak scale and the Planck scale? And (2), what mechanical energy may be counterbalancing the divergent electrostatic and strong force energies of point-like charged fermions in the vicinity of the Planck scale? Using a hitherto unrecognized mechanism extracted from the well known Hehl-Datta equation, we have presented numerical estimates suggesting that the torsion contributions within the Einstein-Cartan-Sciama-Kibble extension of general relativity can address both of these questions in conjunction. The first of these problems, the Hierarchy Problem, can be traced back to the extreme weakness of gravity compared to the other forces, inducing a difference of some 17 orders of magnitude between the electroweak scale and the Planck scale. There have been many attempts to explain this huge difference, but none is simpler than our explanation based on the spin induced torsion contributions within the ECSK theory of gravity. The second problem we addressed here concerns the well known divergences of the electrostatic and strong force self-energies of point-like fermions at short distances. We have demonstrated above, numerically, that torsion contributions within the ECSK theory resolves this difficulty as well, by counterbalancing the divergent electrostatic and strong force energies close to the Planck scale. It is widely accepted that in the standard model of particle physics charged elementary fermions acquire masses via the Higgs mechanism. Within this mechanism, however, there is no satisfactory explanation for how the different couplings required for the fermions are produced to give the correct values of their masses. 
While the Higgs mechanism does bestow masses correctly to the heavy gauge bosons and a massless photon, and while our demonstration above does not furnish a fundamental explanation for the fermion masses either, we believe that what we have proposed in this paper is worthy of further research, since our proposal also offers a possible resolution of the Hierarchy Problem. Needless to say, the geometrical cancellation mechanism for divergent energies we have proposed here also dispels the need for mass-renormalization, with our cancellation radius $r_t$ acting as a natural cutoff radius taming the infinities. Thus both classical and quantum electrodynamics appears to be more complete with torsion contributions included. Appendix: Calculations of Cancellation Radii using Wolfram Mathematica In this appendix we explain how we used the arbitrary-precision in Mathematica to solve the numerical equations out to 22 significant figures. Each equation displayed below — derived from our central equation (22) — is simplified so that only the numerical factors have to be used, since the dimensional units cancel out, leaving lengths in meters. For decimal factors, the numbers must be padded out to 22 digits with zeros. Then the numerical part of electrostatic energy density is defined as $A$ and the numerical part of spin energy density is defined as $B$, just as in eq. (35) above. These are then used throughout to perform the calculations. For the values of various physical constants involved in the calculations we have used the 2014 CODATA values, Ref. [19], and values from the Particle Data Group, Ref. [20]. Calculation of the Cancellation Radius for Charged Leptons using Formula (22): \[ \frac{\alpha \hbar c}{2} r_t^2 - \frac{3 \pi G \hbar^2}{c^2} = 0 \quad (A.1) \] \[ A := N[(7.2973525664000000000000 \times 10^{-3})(1.0545718000000000000000 \times 10^{-34})(2.9979245800000000000000 \times 10^8)/2], 22; \quad B := N[(3\pi(6.6740800000000000000000 \times 10^{-11})(1.0545718013911300000000 \times 10^{-34})^2)]/ \] Calculation of Radius $r_e$ of Electron and Positron \[ \frac{\alpha \hbar c}{2 r^2_e} = \frac{3 \pi G \hbar^2}{c^2 r^6_e} = m_e c^2 \quad \Rightarrow \quad \frac{\alpha \hbar c}{2 r^2_e} - \frac{3 \pi G \hbar^2 r^4_e}{c^2 r^6_e} = m_e c^2 r_e = 0 \quad (A.2) \] \[ -\frac{\alpha \hbar c}{2 r^2_+} + \frac{3 \pi G \hbar^2}{c^2 r^6_+} = m_+ c^2 \quad \Rightarrow \quad -\frac{\alpha \hbar c}{2 r^2_+} + \frac{3 \pi G \hbar^2 r^4_+}{c^2 r^6_+} = m_+ c^2 r_+ = 0 \quad (A.3) \] \[C1:=N[(9.10938356000000000000 \times 10^{-31})((2.99792458000000000000 \times 10^8)^2), 22] \] \[N \text{ Solve} [A - B + r^4_+/(r^6_+) - C1 \times r_e = 0, r_e], 22] \quad \text{Last} \] \[\{r_e \rightarrow 8.2143011998060519083 \times 10^{-34}\} \] \[\{r_+ \rightarrow 8.2143011998060519070 \times 10^{-34}\} \] Calculation of Radius $r_\mu$ of Muon and Anti-Muon \[ \frac{\alpha \hbar c}{2 r^2_\mu} - \frac{3 \pi G \hbar^2}{c^2 r^6_\mu} = \frac{m_\mu c^2}{r^2_\mu} \quad \Rightarrow \quad \frac{\alpha \hbar c}{2 r^2_\mu} - \frac{3 \pi G \hbar^2 r^4_\mu}{c^2 r^6_\mu} = m_\mu c^2 r_\mu = 0 \quad (A.4) \] \[ -\frac{\alpha \hbar c}{2 r^2_\mu^+} + \frac{3 \pi G \hbar^2}{c^2 r^6_\mu^+} = \frac{m_\mu c^2}{r^2_\mu^+} \quad \Rightarrow \quad -\frac{\alpha \hbar c}{2 r^2_\mu^+} + \frac{3 \pi G \hbar^2 r^4_\mu^+}{c^2 r^6_\mu^+} = m_{\mu^+} c^2 r_{\mu^+} = 0 \quad (A.5) \] \[C2:=N[(1.88353159400000000000 \times 10^{-27})((2.99792458000000000000 \times 10^8)^2), 22] \] \[N \text{ Solve} [-A + B \times r^4_\mu^+/r^6_\mu^+] - C2 \times 
r_\mu = 0, r_\mu], 22] \quad \text{Last} \] \[\{r_\mu \rightarrow 8.2143011998060518270 \times 10^{-34}\} \] \[\{r_{\mu^+} \rightarrow 8.2143011998060519920 \times 10^{-34}\} \] Calculation of Radius $r_\tau$ of Tauon and Anti-Tauon \[ \frac{\alpha \hbar c}{2 r^2_\tau} - \frac{3 \pi G \hbar^2}{c^2 r^6_\tau} = \frac{m_\tau c^2}{r^2_\tau} \quad \Rightarrow \quad \frac{\alpha \hbar c}{2 r^2_\tau} - \frac{3 \pi G \hbar^2 r^4_\tau}{c^2 r^6_\tau} = m_\tau c^2 r_\tau = 0 \quad (A.6) \] \[ -\frac{\alpha \hbar c}{2 r^2_\tau^+} + \frac{3 \pi G \hbar^2}{c^2 r^6_\tau^+} = \frac{m_\tau c^2}{r^2_\tau^+} \quad \Rightarrow \quad -\frac{\alpha \hbar c}{2 r^2_\tau^+} + \frac{3 \pi G \hbar^2 r^4_\tau^+}{c^2 r^6_\tau^+} = m_{\tau^+} c^2 r_{\tau^+} = 0 \quad (A.7) \] \[C3:=N[(3.16747000000000000000 \times 10^{-27})((2.99792458000000000000 \times 10^8)^2), 22] \] \[N \text{ Solve} [-A + B \times r^4_\tau^+/r^6_\tau^+] - C3 \times r_\tau = 0, r_\tau], 22] \quad \text{Last} \] \[\{r_\tau \rightarrow 8.2143011998060505218 \times 10^{-34}\} \] \[\{r_{\tau^+} \rightarrow 8.2143011998060532972 \times 10^{-34}\} \] Calculation of the Cancellation Radius for Quarks using Formula (41): \[ \frac{2 \alpha \hbar c}{9} + 4 \alpha_s \hbar c - \frac{3 \pi G \hbar^2}{c^2 r_q^2} = \left(\frac{4 \alpha \hbar c}{2} + 36 \alpha_s \hbar c\right) r_q^2 - \left(9 \pi G \hbar^2 / c^2\right) = 0 \quad (A.8) \] \[D:=N[(36)(1/10)(1.05457180000000000000 \times 10^{-34})(2.99792458000000000000 \times 10^8), 22] \] \[N \text{ Solve}[(4 A + D \times r^2_q - (9) B == 0, r_q)], 22] \quad \text{Last} \] \[ r_{qt} \rightarrow 7.8294227049438663 \times 10^{-35} \] Calculation of Radius \( r_{qt} \) of Top Quark: \[ \frac{2\alpha \hbar c}{9 r_{qt}^4} + \frac{4\alpha_s \hbar c}{c^2 r_{qt}^6} - \frac{3\pi G \hbar^2}{c^2 r_{qt}^6} = \frac{m_t c^2}{2} + \frac{4\alpha \hbar c}{c^2 r_{qt}^6} - \frac{(9)3\pi G \hbar^2 r_{qt}^4}{c^2 r_{qt}^6} - 9 m_t c^2 r_{qt} = 0 \] \( \text{(A.9)} \) \[ E := \sqrt[9]{(3.08779000000000000000000 \times 10^{-25})}((2.9972458000000000000 \times 10^8)^2) \]\n \[ N[Solve\{(4)A + D - (9)B \star r_{qt}^4 \star r_{qt}^6 - E \star r_{qt} == 0, r_{qt}\}, 22] //Last \{ r_{qt} \rightarrow 7.8294227049438660486 \times 10^{-35} \}\]
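The arbitrary precision matters because the mass-induced shift in the radius appears only around the nineteenth significant figure, beyond what double precision can resolve. As an independent cross-check of the electron calculation, the sketch below repeats the relation that (A.2) reduces to, using Python's mpmath instead of Mathematica and the same 2014 CODATA inputs listed in the appendix; it is a sanity check of the procedure, not a reproduction of the authors' exact code.

```python
from mpmath import mp, mpf, sqrt, findroot, pi

mp.dps = 30                                   # work with 30 significant digits

G     = mpf("6.67408e-11")
hbar  = mpf("1.0545718e-34")
c     = mpf("2.99792458e8")
alpha = mpf("7.2973525664e-3")
m_e   = mpf("9.10938356e-31")

A   = alpha * hbar * c / 2                    # cf. eq. (35)
B   = 3 * pi * G * hbar**2 / c**2
r_t = sqrt(B / A)                             # eq. (36): cancellation radius

# Electron: A - (B / r_t^6) r^4 - m_e c^2 r = 0, with the root just below r_t.
f   = lambda r: A - (B / r_t**6) * r**4 - m_e * c**2 * r
r_e = findroot(f, r_t)

print("r_t       =", r_t)
print("r_e       =", r_e)
print("r_t - r_e =", r_t - r_e)               # of order 1e-52 m
```

The resulting shift, of order 10⁻⁵² m, is consistent with the electron and positron radii listed in Sect. III.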
Connectivity effects in the segmental self- and cross-reorientation of unentangled polymer melts A. Ottochian,1 D. Molin,1,a) A. Barbieri,1,2,b) and D. Leporini1,3,c) 1Dipartimento di Fisica “Enrico Fermi,” Università di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa, Italy 2INFN, Sezione di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa, Italy 3INFN-CRS SOFT, Largo B. Pontecorvo 3, I-56127 Pisa, Italy (Received 6 July 2009; accepted 19 October 2009; published online 6 November 2009) The segmental (bond) rotational dynamics in a polymer melt of unentangled, linear bead-spring chains is studied by molecular dynamics simulations. To single out the connectivity effects, states with limited deviations from the Gaussian behavior of the linear displacement are considered. Both the self and the cross bond-bond correlations with rank $\ell =1,2$ are studied in detail. For $\ell =1$ the correlation functions are precisely described by expressions involving the correlation functions of the chain modes. Several approximations concerning both the self- and the cross-correlations with $\ell =1,2$ are developed and assessed. It is found that the simplified description of the excluded volume static effects derived elsewhere [D. Molin et al., J. Phys.: Condens. Matter 18, 7543 (2006)] well accounts for the short time cross-correlations. It also allows a proper modification of the Rouse theory which provides quantitative account of the intermediate and the long time decay of the rotational correlations with $\ell =1$. I. INTRODUCTION The rotational dynamics of polymers are currently studied by several experimental techniques, including dielectric relaxation,1 NMR,2 electron paramagnetic resonance,3 light scattering,4 and more recently, single molecule spectroscopies5 and simulations.6–18 Due to the computational effort, numerical studies often consider short, unentangled chains with dynamics which in the melt state is usually rationalized by the Rouse model.19 Even if several papers discussed the rotational self-correlation functions, to the best of our knowledge only a few ones touched on the issue of the rotational cross-correlations, i.e., the ones involving distinct chain portions, in the framework of studies on dielectric relaxation,14 the crossover from Rouse19 to reptation dynamics,16 local ordering effects,15 and long-range bond-bond correlations.6 This is a little bit surprising in that rotational cross-correlations with $\ell$ rank are involved in the interaction between dipoles placed on distinct parts of the polymeric chain, an issue of remarkable interest in dielectric relaxation ($\ell =1$) and other techniques such as NMR,2 electron paramagnetic resonance,3 light scattering,4 and single molecule spectroscopies20,21 ($\ell =2$). Motivated by the above remarks, the present paper reports on extensive molecular dynamics (MD) simulations of melts of unentangled polymer chains with different lengths to the purpose of assessing the Rouse model as a convenient framework to interpret the rotational self- and cross-correlations with $\ell =1,2$. To better evidence the role of connectivity, a key concept of the Rouse model, we limited other effects like the heterogeneity of motion by carrying out the simulations under conditions of limited deviations from the Gaussian behavior of the linear displacement. The paper is organized as follows. In Sec. II the Rouse model and the relevant predictions are presented. In Sec. III the relevant rotational correlation functions are defined. In Sec. 
IV the model and the details of the simulation are given. In Sec. V the results are discussed. In Sec. VI the conclusions are summarized. II. SHORT ACCOUNT OF THE DISCRETE ROUSE MODEL The Rouse model19 is the simplest bead-spring model for flexible polymer chains.22–24 It is usually applied to describe the long time or large scale dynamics of polymers by neglecting hydrodynamic interactions, chain entanglements, as well as excluded volume effects. This model has been frequently applied to nonentangled chains in concentrated solutions. It also serves in the description of the entangled chains: the tube model analyses the motion of the Rouse chain confined in a tubelike regime for calculating various kinds of dynamic properties.23 Thus, the Rouse model is one of the most important models in the field of polymer dynamics and has been tested by experiments25–31 and numerical simulations.7–16,32–44 In particular, the Rouse dynamics of isotopic mixtures,41 in confined environments,13 and chemically reacting systems44 have been studied. Corrections for free-volume effects,39 intra- and intermolecular mean-force potentials33 and uncrossability constraints43 are also known. A mode-coupling theory providing microscopic justification for the use of the Rouse theory in polymer melts has been... developed.\textsuperscript{45} Numerical simulations of polymer melts, to be compared with basic predictions of the mode-coupling theory, were also reported.\textsuperscript{42} In the discrete\textsuperscript{46,47} Rouse model each chain is composed of $M_R - 1$ segments being modeled by $M_R$ noninteracting beads, connected by entropic springs with force constant $\kappa = 3k_BT/a_R^2$, where $a_R$ is the average size of the segment, i.e., the root mean-square length of the spring, $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature. No other interaction between the beads is present. The model considers a given chain and regards the surrounding ones as a uniform frictional medium with Gaussian properties. The segmental friction coefficient of the tagged chain is denoted by $\zeta$. The surrounding chains are depicted to exert on each bead of the selected chain also a fast-fluctuating random force to ensure proper equilibrium properties via the fluctuation-dissipation theorem. The mean-field description of the Rouse model may be understood by noting that the size of a chain in the melt scales as $R_g^2 \approx M$, where $R_g$ is the gyration radius and $M$ is the number of monomers. If the monomer density is $\rho$, the number of chains $N_c$ in the volume $R_g$ scales as $N_c \approx \rho R_g^3/M \approx \sqrt{M}$.$\textsuperscript{33,34}$ The Langevin equation for the inner beads of the tagged chain ($2 \leq n \leq M_R - 1$) is $$\dot{\mathbf{r}}_n(t) = \frac{3k_BT}{a_R^2} [\mathbf{r}_{n-1}(t) - 2\mathbf{r}_n(t) + \mathbf{r}_{n+1}(t)] + \mathbf{f}_n(t),$$ and for the end beads ($n=1,M_R$) $$\dot{\mathbf{r}}_1(t) = \frac{3k_BT}{a_R^2} [\mathbf{r}_2(t) - \mathbf{r}_1(t)] + \mathbf{f}_1(t),$$ $$\dot{\mathbf{r}}_{M_R}(t) = \frac{3k_BT}{a_R^2} [\mathbf{r}_{M_R-1}(t) - \mathbf{r}_{M_R}(t)] + \mathbf{f}_{M_R}(t),$$ where $\mathbf{r}_n$ is the position vector of the $n$th bead of the chain and the dot denotes a time derivative. 
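As a minimal illustration of the equations of motion of Sec. II, the sketch below integrates the overdamped Langevin dynamics of a single discrete Rouse chain with an explicit Euler-Maruyama step, writing the segmental friction ζ explicitly and using the fluctuation-dissipation noise amplitude specified just below. Reduced units (k_BT = ζ = a_R = 1) and all numerical parameters are illustrative choices, not those of the MD simulations described later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

M_R, kT, zeta, a_R = 10, 1.0, 1.0, 1.0      # beads, k_B T, friction, segment size
dt, steps = 1e-3, 20000

k_spring = 3.0 * kT / a_R**2
r = rng.normal(scale=a_R, size=(M_R, 3)).cumsum(axis=0)   # initial random-walk chain

for _ in range(steps):
    # Entropic spring forces: k (r_{n-1} - 2 r_n + r_{n+1}), with free ends.
    force = np.zeros_like(r)
    force[1:-1] = k_spring * (r[:-2] - 2.0 * r[1:-1] + r[2:])
    force[0]    = k_spring * (r[1] - r[0])
    force[-1]   = k_spring * (r[-2] - r[-1])
    noise = rng.normal(scale=np.sqrt(2.0 * kT * dt / zeta), size=r.shape)
    r += (force / zeta) * dt + noise          # Euler-Maruyama step

bonds = np.diff(r, axis=0)
print("mean squared spring length:", (bonds**2).sum(axis=1).mean())   # ~ a_R^2
```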
The Cartesian components of the stochastic force $\mathbf{f}_n(t)$ are modeled as Gaussian white noise with zero average and correlations according to the fluctuation-dissipation theorem $$\langle f_{n\beta}(t)f_{m\gamma}(t') \rangle = 2\zeta k_BT \delta_{nm} \delta_{\alpha\beta} \delta(t - t').$$ The set of Eq. (1) with $n=1,\ldots,M_R$ are exactly solvable.\textsuperscript{47} The solution, i.e., the position of the $n$th bead $\mathbf{r}_n$, is conveniently expressed in terms of normal coordinates, the so-called Rouse modes $\mathbf{X}_n^R$ with $p=0,\ldots,M_R-1$, as $$\mathbf{r}_n(t) = \mathbf{X}_0^R(t) + 2 \sum_{p=1}^{M_R-1} \mathbf{X}_p^R(t) \cos \left[ \frac{(n-1/2)p\pi}{M_R} \right].$$ The Rouse modes may be conversely written as $$\mathbf{X}_p^R(t) = \frac{1}{M_R} \sum_{n=0}^{M_R-1} \mathbf{r}_n(t) \cos \left[ \frac{(n-1/2)p\pi}{M_R} \right].$$ The static cross-correlations between the Rouse modes vanish. In particular, for $p > 0$ $$\langle \mathbf{X}_p^R, \mathbf{X}_q^R \rangle = \delta_{pq} \frac{a_R^2}{8M_R \sin^2(p\pi/2M_R)},$$ and $$\equiv \frac{M_R a_R^2}{2\pi^2 p^2}, \quad p/M_R \ll 1.$$ For $p = q = 0$ one finds $$\langle |\mathbf{X}_0^R(t) - \mathbf{X}_0^R(0)|^2 \rangle = 6 \frac{k_BT}{M_R} t,$n$$ which describes the diffusive motion of the center of mass $\mathbf{R}_{CM} = \mathbf{X}_0^R$. For $p > 0$, having defined the normalized self-correlation function of the $p$th Rouse mode as $$\phi_p(t) = \frac{\langle \mathbf{X}_0^R(t) \cdot \mathbf{X}_p^R(0) \rangle}{\langle |\mathbf{X}_0^R(t)|^2 \rangle},$$ The Rouse model predicts the exponential decay of $\phi_p(t)$ $$\phi_p(t) = \exp \left[ -\frac{t}{\tau_p} \right],$$ with characteristic time $$\tau_p = \frac{\zeta a_R^2}{12k_BT \sin^2(p\pi/2M_R)},$$ $$\equiv \frac{M_R^2}{3\pi^2 k_BT p^2}, \quad p/M_R \ll 1.$$ III. ROTATIONAL CORRELATION FUNCTIONS In this section suitable correlation functions to characterize both the global and the local reorientation of the chain are defined and some approximations presented. The corresponding Rouse expressions are identified. A. Bond correlation functions 1. Definitions One is interested in the rotational dynamics of linear polymer chains with $M$ stiff monomers, the $m$th one being located at the position $\mathbf{R}_m$, $1 \leq m \leq M$. The local reorientation process is accounted for by the unit vector along the $m$th bond of the chain $\mathbf{b}_m$ as $$\mathbf{b}_m = \frac{1}{b_0} (\mathbf{R}_{m+1} - \mathbf{R}_m),$$ $b_0$ being the bond length. The rotational self-correlation function of the $m$th bond with rank $\ell$ is defined as $$C_{ell}(t) = \langle P_\ell(\mathbf{b}_m(t) \cdot \mathbf{b}_m(0)) \rangle,$n$$ where $P_\ell(x)$ is the Legendre polynomial of order $\ell$. The self-correlation function averaged over all the bonds is defined as $$C_\ell(t) = \frac{1}{M-1} \sum_{m=1}^{M-1} C_{ell}(t).$$ \[ \chi(t) = b_{m+\Delta m}(t) \cdot b_m(0). \] (17) The rotational cross-correlation function with rank \( \ell = 1 \) between the \( m \)th and \( (m+\Delta m) \)th bonds is defined as \[ C_{1,m,\Delta m}(t) = \langle \chi(t) \rangle, \] (18) whereas the rotational cross-correlation function with rank \( \ell = 2 \) is defined as \[ C_{2,m,\Delta m}(t) = \langle \chi^2(t) \rangle - \frac{1}{3}. \] (19) Note that for \( \Delta m \neq 0 \), \( C_{1,m,\Delta m}(0) \) and \( C_{2,m,\Delta m}(0) \) have no trivial values due to excluded volume effects. However, they both vanish at long times, i.e., \( C_{1,m,\Delta m}(\infty) = C_{2,m,\Delta m}(\infty) = 0 \). 
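Given a stored trajectory of unit bond vectors, the self- and cross-correlation functions defined in Eqs. (15)-(20) can be estimated directly by averaging over time origins and bonds. The sketch below does this for ℓ = 1, 2; the synthetic, slowly decorrelating trajectory only stands in for actual MD data, and the array layout is an assumption of this example.

```python
import numpy as np

def bond_correlations(b, lag, dm=0):
    """chi-based bond correlations at a given time lag, cf. Eqs. (17)-(20).

    b   : unit bond vectors, shape (n_frames, n_bonds, 3)
    lag : time lag in frames
    dm  : bond separation along the chain (dm = 0 gives the self-correlations)
    """
    n_frames, n_bonds, _ = b.shape
    # chi(t) = b_{m+dm}(t0+lag) . b_m(t0), averaged over origins t0 and bonds m
    chi = np.einsum("tmi,tmi->tm",
                    b[lag:, dm:n_bonds, :],
                    b[:n_frames - lag, :n_bonds - dm, :])
    c1 = chi.mean()                      # Eq. (18); equals C_1(t) of Eq. (16) for dm = 0
    c2 = (chi**2).mean() - 1.0 / 3.0     # Eq. (19)
    return c1, c2

rng = np.random.default_rng(1)
n_frames, n_bonds = 2000, 9
b = rng.normal(size=(1, n_bonds, 3)).repeat(n_frames, axis=0)
b += 0.05 * rng.normal(size=b.shape).cumsum(axis=0)      # slow angular drift
b /= np.linalg.norm(b, axis=-1, keepdims=True)

for lag in (0, 100, 500):
    c1, c2 = bond_correlations(b, lag)
    print(f"lag {lag:4d}:  C1 = {c1:+.3f}   <chi^2>-1/3 = {c2:+.3f}")
```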
We also note that $C_{1,m,0}(t) = C_{1,m}(t)$ and $C_{2,m,0}(t) = 2C_{2,m}(t)/3$. The cross-correlation function averaged over all the bonds which are spaced by $\Delta m - 1$ other bonds is defined as $$C_{\ell,\Delta m}(t) = \frac{1}{M - 1 - \Delta m} \sum_{m=1}^{M-1-\Delta m} C_{\ell,m,\Delta m}(t).$$ (20)

2. Approximations

Equation (15) yields $C_{\ell,n}(t) = 1 - \ell(\ell + 1)\langle\theta_n^2(t)\rangle/4$ at short times, where $\theta_n(t)$ is the angle spanned by the bond $\mathbf{b}_n$ in a time $t$. At short times the chain connectivity little affects the bond reorientation. Then, one anticipates that the initial stage of the correlation loss is well accounted for by free rotational diffusion, which predicts $C_{\ell,n}(t) = \exp[-\ell(\ell + 1)D_R t]$ at any time, thus leading to the identification $\langle \theta_n^2(t) \rangle = 4D_R t$, $D_R$ being the rotational diffusion coefficient.\textsuperscript{48} Even if the linear increase in time does not hold at long times, in that $\langle \theta_n^2(\infty) \rangle = (\pi^2 - 4)/2$, the following ansatz will be considered: $$C_{\ell,n}(t) = \exp\left[ -\frac{\ell(\ell + 1)}{4} \langle \theta_n^2(t) \rangle \right].$$ (21) The above approximation is expected to hold for correlation functions with high $\ell$ values decaying on shorter time scales. If one neglects the weak dependence of $\langle \theta_n^2(t) \rangle$ on the bond number $n$, Eq. (21) reduces to $$C_{\ell}(t) = \exp\left[ -\frac{\ell(\ell + 1)}{4} \langle \theta^2(t) \rangle \right],$$ (22) where $\langle \theta^2(t) \rangle$ is the average of $\langle \theta_n^2(t) \rangle$ over all the bonds. Equation (21) predicts $C_{2,n}(t) = C_{1,n}(t)^2$ so that, by using Eq. (16), for $\ell = 2$ $$C_2(t) \approx \frac{1}{M - 1} \sum_{m=1}^{M-1} C_{1,m}(t)^2.$$ (23) The above approximation is expected to improve on Eq. (22) with $\ell = 2$ since it has the correct limit $C_2(\infty) = 0$. Note that Eqs. (21)–(23) do not necessarily imply an exponential time decay.

Another relation may be written for the cross-correlation functions as well. Let us consider the angle $\beta_{m,\Delta m}$ between $\mathbf{b}_m(0)$ and $\mathbf{b}_{m+\Delta m}(0)$, and the angle $\phi_{m,\Delta m}(t)$ between the two vectors $\mathbf{b}_m(0) \times \mathbf{b}_{m+\Delta m}(0)$ and $\mathbf{b}_{m+\Delta m}(0) \times \mathbf{b}_{m+\Delta m}(t)$ [$\phi_{m,\Delta m}(t)$ is the dihedral angle of the two planes with the two vectors as normal vectors]. If both $\theta_{m+\Delta m}$ and $\phi_{m,\Delta m}$ are weakly correlated with $\beta_{m,\Delta m}$ (roughly, this means that the static correlations between two bonds spaced by $\Delta m$ do not affect their dynamics), $C_{1,m,\Delta m}(t)$ is approximated by $$C_{1,m,\Delta m}(t) \approx \langle \cos \beta_{m,\Delta m} \rangle \langle \cos \theta_{m+\Delta m}(t) \rangle + \langle \sin \beta_{m,\Delta m} \rangle \langle \sin \theta_{m+\Delta m}(t) \cos \phi_{m,\Delta m}(t) \rangle.$$ (24) Notice that $\langle \cos \theta_{m+\Delta m}(t) \rangle = C_{1,m+\Delta m}(t)$ and that the calculation of the angle $\beta_{m,\Delta m}$ requires only static information.

B. Rouse segment correlation functions

1. Definitions and properties

To assess the predictions of the Rouse model on the rotational dynamics, suitable correlation functions to be compared with the ones of Sec. III A 1 must be defined. Henceforth, for a given correlation function $C$, the corresponding Rouse expression will be denoted by $C^R$.
Their relations with the modes $\mathbf{X}_p^R$ are given in the Appendix. To characterize the local reorientation of the chain one has to focus on the correlation functions of the Rouse segments, defined as $$\mathbf{a}_n = \frac{1}{a_R} (\mathbf{r}_{n+1} - \mathbf{r}_n), \quad n = 1, \ldots, M_R - 1.$$ (25) One aspect to be considered is that, differently from bonds, the length of the segments of the Rouse chain is Gaussian distributed and only the moments are known, e.g., $$\langle a_n^2 \rangle = 1,$$ (26) $$\langle a_n^4 \rangle = \frac{5}{3},$$ (27) with $a_n^2 = \mathbf{a}_n \cdot \mathbf{a}_n$. The rotational self-correlation function of the $n$th segment with rank $\ell = 1$ is defined as $$C^R_{1,n}(t) = \langle \mathbf{a}_n(t) \cdot \mathbf{a}_n(0) \rangle.$$ (28) The above equation is formally identical to the corresponding bond-bond correlation function for fixed bond length $b_0$, i.e., Eq. (15) with $\ell = 1$. Since $C^R_{1,n}(t)$ involves $\langle a_n^2 \rangle$ only, Eq. (28) is mapped into Eq. (15) by the identification $a_R = b_0$.\textsuperscript{48–51} If the correlation functions with rank $\ell \geq 2$ are considered, the moments $\langle a_n^{2p} \rangle$ with $p \geq 2$ come into play. Since the inequality $\langle a_n^{2p} \rangle \neq \langle a_n^2 \rangle^p$ holds [e.g., see Eq. (27)], the correlation functions $C^R_{\ell,n}$ with $\ell \geq 2$ must be properly defined for comparison with the corresponding ones of Sec. III A 1. Owing to that, the self-correlation function of the $n$th segment with rank $\ell = 2$ is defined as $$C^R_{2,n}(t) = \frac{3 \langle (\mathbf{a}_n(t) \cdot \mathbf{a}_n(0))^2 \rangle - 1}{4}.$$ (29) With the above definition $C^R_{2,n}(0) = 1$ and $C^R_{2,n}(\infty) = 0$. Note that $C^R_{2,n}$ and the corresponding bond correlation function, Eq. (15) with $\ell = 2$, are formally different. In analogy with Eq. (16) one defines the self-correlation function averaged over the $M_R-1$ segments as $$C^R_{\ell}(t) = \frac{1}{M_R-1} \sum_{n=1}^{M_R-1} C^R_{\ell,n}(t).$$ (30) A compact expression for $C^R_{1}(t)$ is presented as Eq. (A7) in the Appendix. In the Appendix it is also shown that $$C^R_{2,n}(t) = C^R_{1,n}(t)^2.$$ (31) From Eq. (30) one gets $$C^R_{2}(t) = \frac{1}{M_R-1} \sum_{n=1}^{M_R-1} C^R_{1,n}(t)^2.$$ (32) The cross-correlation functions are defined in terms of $$\chi^R(t) = \mathbf{a}_{m+\Delta m}(t) \cdot \mathbf{a}_m(0).$$ (33) The expressions of $C^R_{1,m,\Delta m}(t)$ and $C^R_{2,m,\Delta m}(t)$ are obtained by replacing $\chi(t)$ with $\chi^R(t)$ in Eqs. (18) and (19), respectively, $$C^R_{1,m,\Delta m}(t) = \langle \chi^R(t) \rangle,$$ (34) $$C^R_{2,m,\Delta m}(t) = \langle (\chi^R(t))^2 \rangle - \frac{1}{3}.$$ (35) We note that $C^R_{1,m,0}(t) = C^R_{1,m}(t)$ and $C^R_{2,m,0}(t) = 4C^R_{2,m}(t)/3$. In the Appendix it is shown that $$C^R_{2,m,\Delta m}(t) = \frac{4}{3}\,C^R_{1,m,\Delta m}(t)^2.$$ (36) The average of the cross-correlation functions over all the segments, $C^R_{\ell,\Delta m}(t)$ with $\ell = 1, 2$, is defined in analogy with Eq. (20). By replacing Eq. (5) into Eq. (25) and inserting the result into Eqs. (28), (29), (34), and (35), the self-correlation and cross-correlation functions with rank $\ell = 1, 2$ are related to the normalized correlation functions of the Rouse modes $\phi_p(t)$ [Eq. (11)]. Their explicit expressions are derived in the Appendix.
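The rank-1/rank-2 relations quoted above rest on the Gaussian statistics of the Rouse segments. A quick synthetic check, with an assumed rank-1 correlation c = 0.4 between two jointly Gaussian unit-variance vectors, is sketched below; it is only meant to illustrate Eqs. (29), (31), (35), and (36), not to reproduce any simulation data.

```python
import numpy as np

rng = np.random.default_rng(2)
c, n = 0.4, 1_000_000                      # assumed correlation and sample size
a0 = rng.normal(0.0, np.sqrt(1.0 / 3.0), size=(n, 3))                      # <a0^2> = 1
a1 = c * a0 + np.sqrt(1.0 - c**2) * rng.normal(0.0, np.sqrt(1.0 / 3.0), size=(n, 3))

chi = np.einsum('ij,ij->i', a1, a0)        # chi^R, Eq. (33)
C1 = chi.mean()                            # rank-1 correlation, Eqs. (28)/(34), ~ c
C2 = (3.0 * (chi**2).mean() - 1.0) / 4.0   # rank-2 self-correlation, Eq. (29)
c2 = (chi**2).mean() - 1.0 / 3.0           # rank-2 cross-type estimator, Eq. (35)
print(C2, C1**2)                           # Eq. (31): C2 ~ C1^2
print(c2, 4.0 * C1**2 / 3.0)               # Eq. (36): c2 ~ (4/3) C1^2
```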
In general, correlation functions with rank $\ell$ are expressed by multivariate polynomials of degree $\ell$ in the variables $\phi_p(t)$ with $p = 1, \ldots, M_R-1$. Rouse chains are “phantom” chains, i.e., excluded-volume effects are ignored. This means $$\langle \mathbf{a}_m \cdot \mathbf{a}_{m'} \rangle = \delta_{mm'}.$$ (37) The above relation sets the static properties of the Rouse modes, i.e., Eq. (7). In this limit the cross-correlation functions at very short times vanish, i.e., $C^R_{\ell,m,\Delta m}(0) = 0$ for $\ell = 1, 2$ and $\Delta m \neq 0$, as one may check in Eqs. (A4) and (A9) for $\ell = 1$, as well as Eq. (A14) for $\ell = 2$.

2. Approximation

In order to test the Rouse model one has to evaluate the modes $\mathbf{X}_p^R(t)$. To this aim, the bead position is identified with the monomer position in Eq. (6), $$\mathbf{X}_p^R(t) = \frac{1}{M} \sum_{n=1}^{M} \mathbf{R}_{n}(t) \cos \left[ \frac{(n-1/2)p\pi}{M} \right].$$ (38) The key quantities of the Rouse expressions concerning the rotational correlation functions are the mode correlation functions $\langle \mathbf{X}^R_{p}(t) \cdot \mathbf{X}^R_{p}(0) \rangle$. A number of schemes for their evaluation will be adopted, which are summarized by the relation $$\langle \mathbf{X}^R_{p}(t) \cdot \mathbf{X}^R_{p}(0) \rangle = \langle |\mathbf{X}^R_{p}|^2 \rangle_i\, \phi_{p}(t), \quad i = \mathrm{MD},\ \mathrm{R},\ \mathrm{SM3},$$ (39) where the normalized mode correlation function $\phi_{p}(t)$ is taken from the simulations (see examples in Fig. 2). The label $i = \mathrm{MD}$ means that the correlation function is evaluated from the simulations in full. Setting $i = \mathrm{R}$ implies that the modulus is taken from Eq. (7). The choice $i = \mathrm{SM3}$ signals that the modulus is evaluated according to the SM3 model. The SM3 model removes the phantom character of the Rouse chains by providing excluded-volume corrections to the single-chain static properties of a melt of unentangled polymers. It needs the temperature and the interaction potential between nonbonded monomers [Eq. (40) in the present work] as only input parameters.

IV. DETAILS OF THE SIMULATION

We investigate systems of $N$ fully flexible linear chains with $M$ monomers each. The $(M,N)$ pairs under investigation are $(3,667)$, $(5,200)$, $(10,200)$, $(15,220)$, $(22,300)$, and $(30,300)$. The sample is confined in a cubic box with periodic boundary conditions. To handle the boundary conditions, the minimum image convention is adopted. The interaction between nonbonded monomers occurs via the Lennard-Jones (LJ) potential $$U(r) = 4\epsilon[(\sigma/r)^{12} - (\sigma/r)^6] + U_{\mathrm{cut}}.$$ (40) The potential is cut off at $r_{\mathrm{cut}}=2.5\sigma$ and properly shifted by $U_{\mathrm{cut}}$ so as to vanish at that point and to be continuous everywhere. No torsional potential is present. The RATTLE (Ref. 54) algorithm is used to constrain neighboring monomers in the same chain at a distance $b_0 = 0.97\sigma$. From now on LJ units are adopted with the Boltzmann constant $k_B = 1$. The samples are equilibrated under Nosé-Andersen dynamics at a given temperature $T$ and pressure $P$ until the average displacement of the chains' centers of mass is as large as twice the mean end-to-end distance. Data are collected during production runs in microcanonical conditions. The time step for the chosen velocity Verlet integration is $\delta t = 2.5 \times 10^{-3}$. No adjustment of the temperature, e.g., by rescaling the velocities, was needed during the production runs. The results have been averaged over at least ten independent runs.
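Before turning to the results, a short sketch of how the Rouse modes of Eq. (38) and their normalized correlators can be extracted from stored monomer positions may be useful; the trajectory layout and the single time origin are assumptions of this illustration only.

```python
import numpy as np

def rouse_modes(R):
    """Rouse modes X_p(t), p = 0..M-1, from monomer positions R[t, n, :] via Eq. (38)."""
    T, M, _ = R.shape
    n = np.arange(1, M + 1)
    p = np.arange(0, M)
    cos_np = np.cos(np.outer(n - 0.5, p) * np.pi / M)   # matrix of cos[(n-1/2)p pi/M]
    return np.einsum('tni,np->tpi', R, cos_np) / M       # X[t, p, :]

def phi_p(X, p):
    """Normalized mode correlator phi_p(t), single time origin."""
    c = np.einsum('ti,i->t', X[:, p, :], X[0, p, :])
    return c / c[0]
```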
The system is studied at constant pressure $P = 2.0$ and temperature $T = 1.2$ for chains with length $M = 3, 5, 10, 15, 22,$ and $30$, and at temperatures $T = 0.65, 0.7, 0.75, 0.8, 1.0, 1.2, 1.4, 1.6,$ and $1.8$ for the chain with $M = 10$.

V. RESULTS AND DISCUSSION

Fully flexible linear chains have an average segment size $a_R = \sqrt{C_\infty}\, b_0$,\textsuperscript{32} where for the present case the characteristic ratio is $\sqrt{C_\infty} \simeq 1.19$.\textsuperscript{17} On this basis the identification $M_R = M$ is used when comparing the Rouse predictions with the numerical results, and the Rouse modes are evaluated by Eq. (38). They are found to be fairly orthogonal. In fact, the quantity $\langle \mathbf{X}^R_{p} \cdot \mathbf{X}^R_{q} \rangle$ with $p \neq q$ is two to three orders of magnitude smaller than the moduli of the involved modes (data not shown), in agreement with other studies.\textsuperscript{36}

A. Monomer displacement and Rouse modes

Figure 1 (top panel) shows the monomer mean-square displacement at $T=1.2$ for all the chain lengths under study. Curves are suitably scaled for easier comparison. A number of different regimes are seen. At short times ($t \lesssim 0.1$) the motion is ballistic: $\langle R^2(t) \rangle \propto t^2$. At later times the connectivity drives the motion of the monomers to a subdiffusive regime, i.e., $\langle R^2(t) \rangle \propto t^{x}$ with $x \approx 0.6 < 1$; the Rouse model predicts $x = 0.5$.\textsuperscript{23} For displacements larger than the mean-square end-to-end distance $R_{ee}^2$ (not shown in Fig. 1) the monomer motion becomes diffusive, i.e., $\langle R^2(t) \rangle = 6Dt$, where $D$ is the diffusion coefficient. Notice the absence of any plateau-like region in the mean-square displacements and thus the missing evidence of well-defined caging effects due to nearest neighbors at $T=1.2$. Escaping from traps exhibits marked non-Gaussian features which may be characterized by the non-Gaussian parameter $\alpha_2$,\textsuperscript{17} $$\alpha_2(t) = \frac{3}{5} \frac{\langle |\Delta \mathbf{R}(t)|^4 \rangle}{\langle |\Delta \mathbf{R}(t)|^2 \rangle^2} - 1.$$ (41) Figure 1 (bottom) plots $\alpha_2$ for all the chain lengths under study at $T=1.2$ and for $M=10$ at $T=0.65$. The small deviations from Gaussian behavior confirm the negligible trapping of the monomers at $T=1.2$. Differently, the decamer at $T=0.65$ exhibits somewhat larger deviations, pointing to stronger caging of the monomers.
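The non-Gaussian parameter of Eq. (41) is straightforward to evaluate from the same trajectory; the sketch below again assumes the R[t, n, :] layout and a single time origin, whereas the curves in Fig. 1 are, of course, properly averaged.

```python
import numpy as np

def alpha2(R):
    """Non-Gaussian parameter alpha_2(t) of Eq. (41) from monomer positions R[t, n, :]."""
    dR = R - R[0]                                   # displacements from the time origin
    r2 = np.einsum('tni,tni->tn', dR, dR)           # |Delta R(t)|^2, per monomer
    m2, m4 = r2.mean(axis=1), (r2**2).mean(axis=1)
    with np.errstate(divide='ignore', invalid='ignore'):
        return 3.0 * m4 / (5.0 * m2**2) - 1.0       # undefined (0/0) at t = 0
```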
Figure 2 (top panel) plots the normalized self-correlation functions of the Rouse modes $\phi_p(t)$ for $M=10$ and $T=1.2$. The top inset evidences that $\phi_9(t)$ decays at long times as a stretched exponential with stretching parameter $\beta \approx 0.86$, given by the slope of the solid line. The Rouse model predicts $\beta=1$ for all the modes [Eq. (11)]. The discrepancy is known and ascribed to the limited range of chain lengths over which the Rouse dynamics can be observed, which, expectedly, yields finite-$M$ corrections to the ideal Rouse behavior.\textsuperscript{13} The bottom inset of Fig. 2 shows a detailed view of the slowest ($p=1$) and the fastest ($p=9$) correlation functions of the Rouse modes for all the temperatures. Curves are scaled by $\tau_\phi$, defined by $\phi_1(\tau_\phi)=1/e$. The scaling works nicely for $\phi_1(t)$, whereas deviations are apparent at short times for $\phi_9(t)$, where at the lowest temperatures a plateau develops, signaling the onset of the cage effect, in agreement with the analysis of the non-Gaussian effects in Fig. 1 (bottom). The plateau is not seen in $\phi_1(t)$, which has not relaxed appreciably within the cage lifetime. The larger sensitivity to caging of $\phi_9(t)$ is understood by noting that the mode $X_9$ represents the local motion of the chain, which involves about $M/p=10/9 \approx 1$ bond. The missing time/temperature scaling of the cage effect is well known, e.g., see Ref. 55. Differently, the time/temperature scaling of the long-time relaxation is a major prediction of the Rouse model.\textsuperscript{23}

B. Rotational self-correlation functions

Figure 3 shows the temperature dependence of the rotational correlation functions $C_\ell(t)$ [Eq. (16)] for $\ell=1,2$ of the melt of decamers ($M=10$). It is seen that at the same temperature $C_2(t)$ decays faster than $C_1(t)$, since the former is more sensitive to small angular displacements and the rotational dynamics is not dominated by jumps, which would lead to a substantial $\ell$-independence of the decay times.\textsuperscript{55} On decreasing the temperature, both $C_1(t)$ and $C_2(t)$ show a plateau at short times, signaling the onset of the cage effect. As anticipated, the confined motion inside the cage results in larger correlation losses for $\ell=2$ than for $\ell=1$. Figure 3 also shows that $C_1(t)$ and $C_2(t)$ have several decay regimes, well expressed by the general form $\log C(t) \sim -\alpha t^{x}$, the exponent $x$ depending on the particular time window under consideration. At very short times, in the ballistic regime, $\langle \theta^2(t) \rangle \sim t^2$ and then $\log C_1(t) \sim -t^2$ according to Eq. (22). At longer times and low temperatures there is a crossover from ballistic to caged dynamics, evidenced by the quasiplateau at $1 \lesssim t \lesssim 10$. At higher temperatures the latter is not apparent. At longer times the rotational motion enters a new regime where the connectivity affects markedly the bond reorientation and leads to a stretched decay with stretching parameter $x \approx 0.67$ for $\ell = 1$ and 2. Notice that this parameter is quite close to the exponent of the subdiffusive regime of the mean-square displacement occurring in the same time window (see Fig. 1). At still longer times $C_1(t)$ crosses over to a slower decay regime with stretching parameter $x' \approx 0.4$. Both stretching parameters do not exhibit appreciable temperature dependence, as seen in Fig. 4, where the curves are scaled by $\tau_r^*$, the latter being defined by $C_1(\tau_r^*) = 1/e$. Equation (22) gives a hint about the similarity of the time dependence of $C_1(t/\tau_r^*)$ and $C_2(t/\tau_r^*)$: it states that changing the rank $\ell$ just shifts $\log(-\ln C_{\ell})$ by a constant term.
Notice that Fig. 4 also shows that both $C_1(t)$ and $C_2(t)$ are effectively scaled onto master curves at intermediate and long times, but not at short times where caging is effective, in agreement with previous findings for both dimers\textsuperscript{35} and decamers.\textsuperscript{17} Figure 5 shows the molecular-weight dependence of $C_1(t)$ and $C_2(t)$ for $T=1.2$. It is seen that the decay is weakly dependent on the chain length for $1 \lesssim t \lesssim 10$ (intermediate times), whereas increasing the connectivity leads, especially for $C_1(t)$, to a slower decay for $t \gtrsim 10$ (long times). Insight into the time dependence of $C_1(t)$ is reached by numerical evaluation of Eq. (A6) via Eq. (39) with $i=\mathrm{SM3}$ and $$\phi_p(t) = \exp \left[ -\left( \frac{t}{\tau_p(M_R)} \right)^{\beta} \right],$$ (42) with $\tau_p(M_R) \propto \sin^{-2}(p \pi / 2M_R)$. Equation (42) assumes equal stretching for all the modes (see Fig. 2), and the SM3 model removes the phantom character of the Rouse chains (see Sec. III B 1). Figure 6 shows the results. It is seen that the slope $x$ of the intermediate regime just reflects the stretching of the Rouse modes. The long-time regime takes place for $t \gtrsim \tau_1(M_R)$. The larger stretching with increasing chain length is ascribed to the wider distribution of relaxation times $\tau_p(M_R)$ spanning the range $1 \leq p \leq M_R - 1$. Figure 5 shows that the decay of $C_2(t)$ exhibits two regimes with features analogous to those of $C_1(t)$.

It is interesting to compare the decay of $C_1(t)$ and $C_2(t)$ with the approximations developed in Secs. III A 2 and III B 2. This is done in Figs. 7 and 8. Figure 7 shows that $C_1(t)$ is virtually coincident with Eq. (A6) when Eq. (38) is used to evaluate the Rouse mode correlation functions from the MD data. The agreement is quite expected, since Eq. (A6) relies on the high orthogonality of the modes $X_p$. It is worth noting that evaluating Eq. (A6) via Eq. (39) with $i=\mathrm{R}$ or SM3 results in nearly the same agreement with the exact calculation carried out with the MD data, $i=\mathrm{MD}$ (not shown). Figure 7 also compares the MD results concerning $C_1(t)$ for different molecular weights at $T=1.2$, and for $M=10$ at different temperatures, with Eq. (22) with $\ell=1$. By comparing the top panel of Fig. 7 with Fig. 5 (top panel), it is seen that Eq. (22) with $\ell=1$ accounts for the correlation loss up to the end of the intermediate decay regime ($1 \lesssim t \lesssim 10$). Notice that: (i) the intermediate decay is stretched (see Fig. 5), so Eq. (22) cannot be read as a trivial consequence of the diffusion model predicting exponential decays; (ii) the decay is tracked even at low temperatures where the cage effect is more apparent and the non-Gaussian effects are not negligible (see Fig. 1). However, at long times Eq. (22) breaks down in that it does not predict the complete correlation loss but a plateau at $\exp[-(\pi^2-4)/4] \approx 0.23$. Figure 8 compares the MD results concerning $C_2(t)$ for different molecular weights at $T=1.2$, and for $M=10$ at different temperatures, with Eq. (22) with $\ell=2$, Eq. (23), and Eq. (32) with $M_R=M$. The top panel of Fig. 8 shows that Eq. (32) agrees poorly with the simulations. This is a little surprising, since the latter equation relies on the Gaussian character of the monomer displacement (see the Appendix), which is still valid at $T=1.2$, as shown by the small non-Gaussian parameter (see Fig. 1, lower panel).
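The two-regime decay discussed above can be reproduced qualitatively with a few lines of code. The sketch below evaluates the mode average of Eq. (A7) with the stretched-mode ansatz of Eq. (42); it assumes the plain Rouse amplitudes ($i$ = R), for which Eq. (A6) collapses to the simple mode average, whereas the curves of Fig. 6 use the SM3 static amplitudes. The values of tau1 and beta are illustrative inputs, not fitted parameters from this work.

```python
import numpy as np

def C1_rouse(t, M_R, tau1=100.0, beta=0.86):
    """C_1^R(t) from Eq. (A7) with phi_p(t) = exp[-(t/tau_p)^beta], Eq. (42),
    and tau_p = tau1 * sin^2(pi/2M_R) / sin^2(p pi/2M_R)."""
    t = np.asarray(t, dtype=float)[:, None]
    p = np.arange(1, M_R)[None, :]
    tau_p = tau1 * np.sin(np.pi / (2 * M_R))**2 / np.sin(p * np.pi / (2 * M_R))**2
    phi = np.exp(-(t / tau_p)**beta)              # equal stretching for all modes
    return phi.mean(axis=1)                       # (1/(M_R-1)) * sum_p phi_p(t)

# example: decamer, times spanning the intermediate and long-time regimes
t = np.logspace(-1, 4, 200)
print(C1_rouse(t, M_R=10)[:3])
```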
Figure 8 shows better agreement for the two variants of Eq. (21), i.e., Eqs. (22) and (23), even at low temperatures where non-Gaussian effects are more apparent. In particular, the agreement of Eq. (22) with $\ell=2$ is much better than for $\ell=1$, owing to the larger decay of $C_2(t)$ for a given mean-square angle $\langle \theta^2(t) \rangle$ spanned by the bond in a time $t$.

C. Rotational cross-correlation functions

Figure 9 plots the cross-correlation functions $C_{1,\Delta m}(t)$ [Eq. (20)]. The correlations decrease with the bond-bond distance $\Delta m$. For a given distance, due to the connectivity, the cross-correlations first increase at short times and then vanish when the time exceeds the chain rotational correlation time $\sim \tau_c$. The MD results are compared to a number of approximating schemes evaluating $C_{1,m,\Delta m}$ [Eq. (18)]. First, the latter was identified with Eq. (A3), the bond dependence was averaged by Eq. (A8), and the Rouse mode correlation functions were taken directly from the simulations. As seen in Fig. 9, the comparison is quite satisfactory, since Eq. (A3) relies only on the mode orthogonality, which is quite good. We also replaced $\langle \mathbf{X}_p^R(t) \cdot \mathbf{X}_p^R(0) \rangle$ in Eq. (A3) with Eq. (39) with $i = \mathrm{SM3}$. The good agreement shows that the SM3 model well accounts for the excluded-volume effects on the static properties. Figure 9 also includes the approximation of $C_{1,m,\Delta m}$ by Eq. (24). The agreement is excellent at short and long times. The discrepancies at intermediate times point to the conclusion that the angle $\beta_{m,\Delta m}$ between $\mathbf{b}_m(0)$ and $\mathbf{b}_{m+\Delta m}(0)$ is partially correlated with the dihedral angle $\phi_{m,\Delta m}(t)$ and the angle $\theta_{m+\Delta m}$ spanned by $\mathbf{b}_{m+\Delta m}$ in a time $t$. For $\Delta m = 1$ the quantities $\langle \cos \beta \rangle$ and $\langle \sin \beta \rangle$ in Eq. (24) are also evaluated by the SM3 model, resulting in the solid line. The partial improvement at intermediate times is counterbalanced by small deviations at short times.

As noted in Sec. III B, the “phantom” character of the Rouse chains, due to the neglect of the excluded volume, implies that their cross-correlations vanish at the initial time, i.e., $C_{1,\Delta m}^R(0) = 0$ for $\Delta m \neq 0$ [this is seen by evaluating Eq. (A4) at $t=0$]. Figure 9 shows the weakness of this conclusion, in that $C_{1,\Delta m}(0) \neq 0$ for $\Delta m = 1, 2,$ and 3. Notice that $C_{1,\Delta m}(t)$, when evaluated via Eqs. (A3) and (A8) and by using the MD results, is coincident with the exact MD results within the errors. This comes as no surprise, in that Eq. (A3) relies on the mode-mode orthogonality only and is then virtually exact. Figure 10 compares the exact MD evaluation of $C_{2,1}(t)$ with the expectation of the Rouse theory, as given by Eqs. (A14) and (20). Since the former is derived on the basis of Eq. (A4), which, in turn, relies on Eq. (37), one finds $C_{2,1}^R(0) = 0$. However, the MD simulation yields $C_{2,1}(0) \neq 0$. It is worth noting that $C_{2,1}(0)$ may be evaluated by the SM3 model via Eqs. (17) and (19), by neglecting the $m$ dependence, to yield $$C_{2,1}(0) \simeq -0.113,$$ (43) which is in good agreement with $C_{2,1}(0) = -0.103$ from the simulations.

VI. CONCLUSIONS

The paper presents a thorough MD study of the segmental (bond) rotational dynamics in a melt of unentangled, linear chains.
To single out the connectivity effects, the study considered states with limited deviations from the Gaussian behavior of the linear displacement. Both the self- and the cross-bond-bond correlations with rank $\ell = 1$ and 2 are studied in detail. For $\ell = 1$ (of major interest for dielectric relaxation) the correlation functions are precisely described by expressions involving the correlation functions of the chain modes. This is shown in Figs. 7 and 9, where the general results for the self-correlations, Eq. (A6), and the cross-correlations, Eq. (A3) [averaged by Eq. (A8)], are compared to the simulations, respectively. Several approximations concerning both the self- and the cross-correlations with $\ell = 1$ and 2 are developed and assessed. For $\ell = 2$ (involved in NMR, electron paramagnetic resonance, light scattering, and single-molecule spectroscopies) a relation between the self-correlations with $\ell = 2$ and the ones with $\ell = 1$ [Eq. (32)] is derived under the assumption of Gaussian properties of the chain segments (a key hypothesis of the Rouse model). When the relation is compared to the MD results, deviations are noted (see Fig. 8), pointing to the limited robustness of Eq. (32) even under small non-Gaussianity. Much better agreement is found by adopting alternative expressions [Eqs. (22) and (23)]. On the other hand, Eq. (22) works only at short times for $\ell = 1$ (see Fig. 7). For the cross-correlations with $\ell = 1$ the approximate expression, Eq. (24), yields excellent agreement at both short and long times (see Fig. 9). The SM3 model accounts for the decay of the self-correlations with $\ell = 1$, and for the short-time cross-correlations with both $\ell = 1$ and 2 as well. The self-correlations are seen to have long-time tails which are little dependent on the temperature, whereas they are strongly dependent on the chain length. This feature may provide early signatures of the onset of the reptation dynamics. On the other hand, the cross-correlations evidence the deep impact of the excluded-volume effects on the chain dynamics in a much clearer way than the self-correlations, and suggest how to investigate in the time domain the long-range spatial correlations recently reported.

ACKNOWLEDGMENTS

Financial support from MUR within the PRIN project “Aging, fluctuation and response in out-of-equilibrium glassy systems” and the FIRB project “Nanopack,” as well as computational resources by the “Laboratorio per il Calcolo Scientifico,” Pisa, are gratefully acknowledged.

APPENDIX: ROUSE EXPRESSIONS OF THE SEGMENTAL CORRELATION FUNCTIONS

The Appendix summarizes the derivation of the self- and cross-correlation functions with rank $\ell = 1$ and 2 according to the Rouse model. The cross-correlation function $C_{1,m,\Delta m}^{R}(t)$, Eq. (34), is considered first. By replacing Eq. (5) into Eq. (25), one relates the $n$th Rouse segment to the chain modes as $$\mathbf{a}_{n}(t) = -\frac{4}{a_{R}} \sum_{p=1}^{M_{R}-1} c_{np}\, \mathbf{X}_{p}^{R}(t),$$ (A1) with $$c_{np} = \sin \left[ \frac{np \pi}{M_{R}} \right] \sin \left[ \frac{p \pi}{2M_{R}} \right].$$ (A2) Replacing Eq. (A1) into Eq. (33) and the result into Eq.
(34) one finds $$C_{1,m,\Delta m}^{R}(t) = \frac{16}{a_{R}^{2}} \sum_{p=1}^{M_{R}-1} c_{m+\Delta m,p}\, c_{m,p}\, \langle \mathbf{X}_{p}^{R}(t) \cdot \mathbf{X}_{p}^{R}(0) \rangle$$ (A3) $$= \frac{2}{M_{R}} \sum_{p=1}^{M_{R}-1} \sin \left[ \frac{(m+\Delta m)p \pi}{M_{R}} \right] \sin \left[ \frac{mp \pi}{M_{R}} \right] \phi_{p}(t).$$ (A4) Equation (A4) is coincident with previous results. The explicit expression of the self-correlation function of the $n$th segment $C_{1,m}^{R}(t)$ [see Eq. (28)] is recovered via Eq. (A4) by setting $\Delta m = 0$, $$C_{1,m}^{R}(t) = \frac{2}{M_{R}} \sum_{p=1}^{M_{R}-1} \sin^2 \left[ \frac{mp \pi}{M_{R}} \right] \phi_{p}(t).$$ (A5) Notice that $C_{1,m}^{R}(0)=1$ and $C_{1,m}^{R}(\infty)=0$. By replacing Eq. (A3) with $\Delta m = 0$ into Eq. (30), a compact expression is obtained for the self-correlation function averaged over all the segments, $C_{1}^{R}(t)$, which reads $$C_{1}^{R}(t) = \frac{16}{(M_{R}-1)\,a_{R}^{2}} \sum_{m=1}^{M_{R}-1} \sum_{p=1}^{M_{R}-1} c_{m,p}^{2}\, \langle \mathbf{X}_{p}^{R}(t) \cdot \mathbf{X}_{p}^{R}(0) \rangle$$ (A6) $$= \frac{1}{M_{R}-1} \sum_{p=1}^{M_{R}-1} \phi_{p}(t).$$ (A7) Equation (A7) follows by using Eqs. (7), (10), and (11) and is coincident with previous results. By averaging Eq. (A4) over all the segments, the expression of the averaged cross-correlation function $C_{1,\Delta m}^{R}(t)$ reads $$C_{1,\Delta m}^{R}(t) = \frac{1}{M_{R}-1-\Delta m} \sum_{m=1}^{M_{R}-1-\Delta m} C_{1,m,\Delta m}^{R}(t)$$ (A8) $$= \frac{2}{M_{R}(M_{R}-1-\Delta m)} \sum_{p=1}^{M_{R}-1} \sum_{m=1}^{M_{R}-1-\Delta m} \sin \left[ \frac{(m+\Delta m)p \pi}{M_{R}} \right] \sin \left[ \frac{mp \pi}{M_{R}} \right] \phi_{p}(t),$$ (A9) which reduces to Eq. (A7) for $\Delta m = 0$.

We now consider the cross-correlation function $C_{2,m,\Delta m}^{R}(t)$. To evaluate Eq. (35), one needs the quantity $\langle (\chi^{R}(t))^{2} \rangle$, with $\chi^{R}(t)$ given by Eq. (33). Due to the space isotropy one finds $$\langle (\chi^{R}(t))^{2} \rangle = 3 \langle (a_{m+\Delta m,x}(t)\, a_{m,x}(0))^{2} \rangle + 6 \langle a_{m+\Delta m,x}(t)\, a_{m,x}(0)\, a_{m+\Delta m,y}(t)\, a_{m,y}(0) \rangle.$$ (A10) For a Gaussian process the identity $$\langle ABCD \rangle = \langle AB \rangle \langle CD \rangle + \langle AC \rangle \langle BD \rangle + \langle AD \rangle \langle BC \rangle$$ allows one to express Eq. (A10) as $$\langle (\chi^{R}(t))^{2} \rangle = \frac{4}{3} \langle \mathbf{a}_{m+\Delta m}(t) \cdot \mathbf{a}_{m}(0) \rangle^{2} + \frac{1}{3}.$$ (A11) The above expression also takes into account the statistical independence of the different Cartesian components of the vector $\mathbf{a}$, i.e., $\langle a_{m+\Delta m,\alpha}(t_{1})\, a_{m,\beta}(t_{2}) \rangle = 0$ with $\alpha, \beta = x, y, z$, $\alpha \neq \beta$, $\Delta m \geq 0$, and arbitrary $t_{1}$ and $t_{2}$. For $\Delta m \geq 1$, replacing Eq. (A11) into Eq. (35) yields $$C_{2,m,\Delta m}^{R}(t) = \frac{4}{3} \langle \mathbf{a}_{m+\Delta m}(t) \cdot \mathbf{a}_{m}(0) \rangle^{2}.$$ (A12) Inserting Eqs. (33) and (34) into the above equation proves Eq. (36). For the self-correlation function $C_{2,m}^{R}(t)$ the same line of reasoning yields $$C_{2,m}^{R}(t) = \langle \mathbf{a}_{m}(t) \cdot \mathbf{a}_{m}(0) \rangle^{2}.$$ (A13) Inserting Eq. (28) into the above equation proves the equality $C_{2,m}^{R}(t) = C_{1,m}^{R}(t)^{2}$, i.e., Eq. (31). Replacing Eq. (A5) into the latter equality relates $C_{2,m}^{R}(t)$ to the correlation functions of the Rouse modes. A similar task may be accomplished also for the cross-correlation function $C_{2,m,\Delta m}^{R}(t)$. By replacing Eq. (A4) into Eq.
(36) one gets $$C_{2,m,\Delta m}^{R}(t) = \frac{16}{3M_R^2} \left( \sum_{p=1}^{M_R-1} \sin \left[ \frac{(m+\Delta m)p\pi}{M_R} \right] \sin \left[ \frac{mp\pi}{M_R} \right] \phi_p(t) \right)^2.$$ (A14) If the self-correlation function $C^R_{2,n}(t)$ is averaged over all the segments, a compact expression for $C^R_{2}(t)$ is obtained from Eqs. (32) and (A5) as $$C^R_{2}(t) = \frac{1}{2M_R(M_R-1)} \sum_{p,q=1}^{M_R-1} \sigma_{M_R,p,q}\,\phi_p(t)\,\phi_q(t),$$ (A15) with $$\sigma_{M_R,p,q} = 2 + \delta_{p,q} + \delta_{p+q, M_R}.$$ (A16) An analytical explicit expression for the averaged cross-correlation function $C^R_{2,\Delta m}(t)$ may also be derived. However, it is quite long and thus of limited interest. 50. See Appendix 4.1 of Ref. 23.
STUDY OF THE INTERACTIONS OF PROTEINS, CELLS AND TISSUE WITH BIOMATERIALS by ABHIJEET BHALKIKAR B.S. University of Pune, India, 2002 A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the Department of Electrical Engineering and Computer Science in the College of Engineering and Computer Science at the University of Central Florida Orlando, Florida Summer Term 2010

ABSTRACT

Bioengineering is the application of engineering principles to address challenges in the fields of biology and medicine. Biomaterials play a major role in bioengineering. This work employs a three-level approach to study the various interactions of biomaterials with proteins, cells and tissue in vitro. In the first study, we qualitatively and quantitatively analyzed the process of protein adsorption of two enzymes to two different surface chemistries, which are commonly used in the field. In the second study, we attempted to engineer a tissue construct to build a biocompatible interface between a titanium substrate and human skin. In the third study, an in-vitro model of the motoneuron-muscle part of the stretch reflex arc circuit was developed. Using a novel silicon based micro-cantilever device, muscle contraction dynamics were measured and we have shown the presence of a functional neuro-muscular junction (NMJ). These studies have potential applications in the rational design of biomaterials used for biosensors and other implantable devices, in the development of a functional prosthesis and as a high-throughput drug-screening platform to study various neuro-muscular disorders.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
INTRODUCTION
CHAPTER 1 ADSORPTION BEHAVIOR OF TWO PROTEINS ON FLUORINATED AND GLASS SURFACES STUDIED USING A COMBINATION OF XPS AND PROTEIN COLORIMETRIC ASSAY
Introduction
Materials and Methods
Results and Discussion
Conclusion
References
CHAPTER 2 ENGINEERING A TITANIUM AND POLYCAPROLACTONE CONSTRUCT FOR A BIOCOMPATIBLE INTERFACE BETWEEN THE BODY AND ARTIFICIAL LIMB
Introduction
Materials and Methods
Results
Discussion
Conclusion
References
CHAPTER 3 SKELETAL MUSCLE TISSUE ENGINEERING ON BIOMEMS DEVICES
Introduction
Materials and Methods
Conclusion
References
CONCLUSION

LIST OF FIGURES

Figure 1. BCA standard curve using the BSA standards
Figure 2. XPS data for GO adsorption on 13F and plain glass
Figure 3. XPS data for HRP adsorption on 13F and plain glass
Figure 4. Calculation of a binding constant for HRP adsorption on 13F for lower concentrations
Figure 5. Calculation of a binding constant for HRP adsorption on 13F for higher concentrations
Figure 6. MicroBCA data for GO adsorption on 13F and plain glass
Figure 7. MicroBCA data for HRP adsorption on 13F and plain glass
Figure 8. Titanium button design (a) A schematic drawing of the modified buttons (b) a representative picture of the buttons (c) SEM image of polished titanium button (d) SEM image of acid etched titanium button
Figure 9. Printed PCL grid (a) A representative picture of the printed PCL grid (units in mm) (b) Tensile strength testing of PCL grid
Figure 10. Surface roughness and adhesive strength for button modifications (a) Root mean square roughness was measured using an interferometer for polished (P) buttons, buttons with holes (H), acid etched buttons (AE) and acid etched buttons with holes (AEH) (b) Adhesion strengths were measured for the AE, H, and AEH groups.
* indicates a significant increase in surface roughness of buttons as compared with polished buttons (p < 0.05). λ indicates a significant increase in strength of buttons as compared to acid etched buttons (p < 0.08)
Figure 11. Average viable bacteria as seen by interferometer (a) Viable bacteria as seen with various antibacterial agents chlorhexidine diacetate (ChD), titanium dioxide (TiO2) mixed in with the hyaluronic acid (HA). * indicates a significant decrease in bacterial viability as compared with HA alone (p < 0.05). (b) The percentage of bacteria seen in treatment groups using bacteria in broth as the standard number of bacteria in broth at the same time point. A significant decrease from the non-treatment group was only seen in the bold groups (p < 0.05)
Figure 12. Myotube formation on patterned cantilevers (day 10, 20x magnification)
Figure 13. Field stimulation of the co-culture showing contractile behavior of the muscle
Figure 14. Glutamate administration to muscle-motoneuron coculture
Figure 15. Glutamate administration to pure muscle culture

LIST OF TABLES

Table 1. Values of binding constants for GO and HRP on 13F and glass

INTRODUCTION

Biomaterials are an integral part of bioengineering. By their definition they are “any material that is either natural or man-made which comprises whole or part of a living structure or a biomedical device that performs, augments or replaces a natural function in our body”. When implanted in the body, biomaterials come in contact with blood, proteins, cells and tissues. Each of these components has very specific interactions with the biomaterial. The goal of this thesis was to study and quantify some of these interactions using different parameters and materials. Chapter 1 is dedicated to the study of protein interactions with certain biomaterials. A surface interaction is the interaction between a protein and a biomaterial and is controlled by a variety of factors. Factors include the surface chemical moieties of the biomaterial involved, the structure and sequence of the protein, as well as the pH and ionic strength of the buffer solution used. In this study, the adsorption of two enzymes, glucose oxidase and horseradish peroxidase, was quantified on two different surface compositions: a fluorinated surface that is hydrophobic in nature and a glass surface that is hydrophilic. The quantification was achieved by using both X-ray photoelectron spectroscopy (XPS) and the micro-BCA assay, which are complementary methods. The absolute quasi-equilibrium surface coverage using both techniques was calculated. The affinity constants (Ka) for the proteins to the surface were also calculated using a simple Langmuir adsorption model equation. Both techniques produced comparable results. The qualitative difference in the adsorption on the two compositions is also discussed. In chapter 2, the interaction between a tissue and biomaterial was studied by creating a tissue engineered construct to build a bridge between titanium and human skin.
This has potential application in the development of a fully osseo-integrated artificial limb. A novel polycaprolactone based tissue-engineering construct, was developed and then printed on a titanium substrate using a computer assisted bio-printing tool. This construct was then optically and mechanically characterized to determine the adhesive strength of the construct to the substrate. Human dermal fibroblast cells were then plated on the construct and their viability was assessed after several days in culture. In order to prevent bacterial infection at the interface, the construct was also seeded with 3 different anti-bacterial agents viz., silver nanoparticles, titanium dioxide anatase and chlorhexidine diacetate. The efficacy of these agents was then assessed by observing the viability of *Staphylococcus aureus* bacteria, which were plated on these constructs. Results indicated that the construct provided excellent mechanical properties similar to skin, was viable for fibroblast cells and exhibited very good antibacterial properties with the chlorhexidine diacetate. In chapter 3, the interaction between cells and biomaterials was investigated. The development of an *in-vitro* model of the stretch reflex arc circuit in our body was attempted. In this embryonic rat skeletal and spinal cord motoneuron cells were co-cultured on a special bio-MEMS, silicon based, cantilever device under defined conditions. The cantilever device was then fixed in a unique AFM detection system. An electric field stimulation of a defined voltage and frequency was applied to the co-culture and the synchronous contraction of the muscle cells was observed. This allows the study of the muscle force dynamics. The formation of a functional neuro-muscular junction (NMJ) was shown by interrogating the system with glutamate, which is an excitatory neurotransmitter. This induced the muscle to undergo contraction by the motoneuron but the blocking of the NMJ using a cholinergic agonist was not observed. The application of the glutamate to a pure muscle culture elicited no response. This system has potential in a high throughput drug-screening platform for neuro-muscular diseases. CHAPTER 1 ADSORPTION BEHAVIOR OF TWO PROTEINS ON FLUORINATED AND GLASS SURFACES STUDIED USING A COMBINATION OF XPS AND PROTEIN COLORIMETRIC ASSAY Introduction The structural changes of proteins at a solid-liquid interface are of great interest in bioengineering. However, the measurements of the extent and the rate of protein conformational changes are very difficult. In recent years, the interest in proteins has grown due to the development of new techniques in protein chemistry and major advances with more established techniques. The need for atomic level description of the structure and dynamics of proteins at interfaces has led to the development of new approaches in protein studies. X-ray Photoelectron Spectroscopy (XPS) is an excellent surface specific technique, which can be used to study the adsorbed proteins layers on different surfaces due to its high surface sensitivity and chemical selectivity. Recently, more researchers are turning toward XPS to study cell culture and proteins on surfaces. Extensive literature is available on the principles of XPS analytical procedures and instrumentation. In addition, there are a few papers reporting on the qualitative and quantitative investigations of protein adsorption on different surfaces using XPS, ToF-SIMS and other techniques. 
Researchers have studied immobilization and adsorption of glucose oxidase (GO) and horse-radish peroxidase (HRP) to surfaces using tools from analytical chemistry. In this chapter, an additional analytical biochemical tool was provided to gain a better look at the behavior of proteins at interfaces and their structural changes upon adsorption. The biochemical assay used was the bicinchoninic acid colorimetric assay, which is very sensitive in detecting very small amounts of protein and is commonly used for quantifying the total amount of protein\textsuperscript{10}. The main aim of this work was to study the protein adsorption on a fluorinated hydrophobic surface (13F) and a hydrophilic clean glass surface to determine which physical properties of the protein or material are important in describing the mechanism of protein adsorption. The proteins used in the study were GO and HRP, both of which are widely used in biosensors\textsuperscript{11,12}. In a previous study, the adsorption was carried out under static conditions\textsuperscript{13}. The Langmuir adsorption isotherms were determined and data binding constants were calculated using a modified Langmuir adsorption isotherm equation for XPS and biochemical assay techniques. Both techniques produced comparable results. Materials and Methods Micro cover glasses (22x22 mm, VWR) were cleaned according to the published procedure\textsuperscript{14} and used as substrates in all protein adsorption experiments. The hydrophobic surfaces were prepared by modifying clean glass with trichloro (1H,1H,2H,2H-perfluorooctyl) silane (13F) (Gelest Inc.). To assure the desired surface properties, contact angle and XPS were conducted and only samples with a contact angle below 5° were used as hydrophilic surfaces and those with contact angles above 105° were used as hydrophobic surfaces. Glucose oxidase (GO) (50 KU, Sigma-Aldrich) and immunopure horse-radish peroxidase (HRP) (100 mg, Pierce) were used in all protein adsorption experiments. The protein adsorption and desorption experiments were performed in 8 ml staining jars with 4 cover slips per jar for 2 hours at room temperature with mild agitation. The surfaces were immersed in phosphate buffer saline (PBS) buffer solution (Fisher-Scientific) (pH = 7.4) with protein concentrations ranging from 5 to 500 µg/ml. After adsorption, surfaces were removed, rinsed three times with PBS and once in water, and then air dried overnight. Washed and dried samples were examined using a Physical Electronics 5400 ESCA spectrometer. The instrument was operated using a monochromatic Mg Ka X-ray source with a pass energy at 40 eV. The take-off angle was 90°, and normal operating pressure was approximately 10⁻⁹ Torr. Survey and high-resolution energy spectra for silicon, oxygen, carbon, nitrogen, and fluorine were measured for each sample. The intensities of nitrogen N (1s) peaks at 400 eV and carbonyl peaks C (1s) at 287 eV, specific to protein peptide bonds, were calculated using an internal standard (after deconvolution and curve fitting peaks were normalized against the sum of the area under the curves of all the peaks) and the data were averaged for each sample using three different spots. The representative XPS data was obtained for adsorption of a protein on 13F based on the nitrogen and carbonyl peaks, respectively. There was a correlation in the intensity changes between the data obtained using N or the carbonyl peak. 
This was particularly useful for samples in which the presence of nitrogen on a surface cannot be associated exclusively with the presence of protein on the surface. After protein adsorption, coverslips exposed to the same amount of protein were transferred to glass jars (4 surfaces/glass jar) and incubated in 8 ml of 1% sodium dodecyl sulphate (SDS) (Sigma-Aldrich) solution overnight on a shaker at room temperature. After desorption, the surfaces were removed, dried and studied by XPS. The XPS data showed negligible nitrogen peaks indicating insignificant amounts of protein on the surfaces. Next, the aliquots of solutions with an unknown amount of desorbed protein were transferred to a 96 well plate and quantified using the microBCA assay. The microBCA™ protein assay kit was purchased from Pierce Ltd. and the working reagent was prepared according to the kit instructions. The protein standard was prepared by diluting the BSA stock solution (2.0 mg/ml) into the PBS (pH = 7.4) buffer to achieve the desired concentration. Three sets of eight dilutions were made ranging in concentration from 0 - 40 μg/ml to prepare a standard curve. An example of a standard curve is shown in Figure 1. 150 μl each of the blank and unknown samples were aliquoted onto the same micro plate in triplicate, and 150 μl of the working reagent was then added to each well and mixed. The plate was then incubated (37°C) for 2 hours. After incubation, the plate was cooled to room temperature and read at 562 nm using a BioTek Synergy HT multi detection microplate reader utilizing the KC4 software. The optical density (OD) of the blanks was subtracted from the OD of the samples to obtain the net OD. The concentration of desorbed protein was estimated using a BCA standard curve and the % monolayer coverage on the surface was then calculated. Figure 1. BCA standard curve using the BSA standards Results and Discussion Figures 2 and 3 represent the averaged Langmuir adsorption isotherm data based on the XPS analysis using the integrated area of nitrogen peaks for GO and HRP on 13F and glass, respectively. Figure 2. XPS data for GO adsorption on 13F and plain glass Figure 3. XPS data for HRP adsorption on 13F and plain glass As previously described\textsuperscript{13}, to model the protein adsorption, a modified Langmuir adsorption isotherm was used, given by equation 1, \[ Q = \frac{KC}{1+KC} \] (1) where \( Q \) = monolayer coverage, \( K \) = adduct formation constant at steady-state, and \( C \) = molar concentration. The final state of adsorption is a reactive-site-limited adsorbed layer. Since \( Q = \frac{N}{N_m} \), where \( N \) is the amount of material on the surface at a given concentration, and \( N_m \) is the amount on the surface at monolayer coverage, equation 1 can be rearranged to: \[ \frac{C}{N} = \frac{C}{N_m} + \frac{1}{KN_m} \] (2) This is similar to the equation \( y = mx + c \) for a straight line, where \( m \) is the slope of the line and \( c \) is its y-intercept. Comparing the two equations, one gets \( y = \frac{C}{N}, x = C, m = \frac{1}{N_m} \) and \( c = \frac{1}{KN_m} \). Therefore, the binding constant \( K \) can be calculated by plotting \( \frac{C}{N} \) versus \( C \) and then determining the slope and y-intercept of the graph. The constant is then \( K = \frac{m}{c} \). The amount of protein adsorbed on the surface can be determined from XPS analysis using the nitrogen peak or from the microBCA assay.
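As a concrete illustration of the fitting procedure described above, the short sketch below extracts K and N_m from the linearized isotherm of equation 2 by a straight-line fit of C/N versus C; the numerical values in the example are synthetic and only meant to show the mechanics, not measured data from this work.

```python
import numpy as np

def langmuir_fit(C, N):
    """Binding constant K and monolayer amount N_m from the linearized Langmuir
    isotherm, equation 2: C/N = C/N_m + 1/(K*N_m). C are solution concentrations,
    N the adsorbed amounts (e.g., normalized XPS N(1s) areas)."""
    C, N = np.asarray(C, float), np.asarray(N, float)
    slope, intercept = np.polyfit(C, C / N, 1)   # straight-line fit of C/N vs C
    return slope / intercept, 1.0 / slope        # K, N_m

# illustrative numbers only (not measured data): concentrations in ug/ml
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
N = 2.0 * 0.02 * C / (1.0 + 0.02 * C)            # synthetic data with K = 0.02, N_m = 2
print(langmuir_fit(C, N))                        # ~ (0.02, 2.0)
```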
From Figures 2 and 3, it is clear that the data can be divided into two concentration regimes, with two linear regions of different slopes that can fit the data. The slope of each line decreases with increasing analytical concentration of the protein in solution. Therefore, the data were fitted to the adsorption isotherms separately for lower and higher protein concentrations in solution using Equation 2. A representative fit for HRP adsorption on 13F is shown in Figures 4 and 5. The data indicate that at first HRP adsorbed quickly to the surface with an average binding constant of $K_1 = 0.0166$. At a certain coverage, however, further adsorption of HRP decreased due to the unavailability of binding sites. The second binding constant was therefore lower and equals $K_2 = 0.0068$. The data for GO adsorption have also shown a similar trend, with $K_1 = 0.034$ and $K_2 = 0.012$, respectively. Figure 4. Calculation of a binding constant for HRP adsorption on 13F for lower concentrations Figure 5. Calculation of a binding constant for HRP adsorption on 13F for higher concentrations The example of a binding isotherm obtained for GO and HRP on clean glass using the microBCA assay is shown in Figures 6 and 7. The % monolayer coverage of the proteins was calculated by assuming the molecular footprint area of GO and HRP to be ~56 nm$^2$ and ~40 nm$^2$, respectively$^{15,16}$. The binding constants were calculated in a similar manner as indicated above. At lower concentrations both proteins adsorbed vigorously to the surface. At higher concentrations, adsorption proceeded at a slower rate. Figure 6. MicroBCA data for GO adsorption on 13F and plain glass The binding constants for both proteins on both surfaces are summarized in Table 1. It can be observed from the table that both XPS and microBCA data show the same adsorption behavior at higher concentrations, but for lower concentrations the binding constants that were calculated based on the microBCA method are lower compared to the XPS measurements. It is important, however, to point out that for lower concentrations the microBCA method is at its determination limits, and therefore such a large discrepancy was observed for lower protein coverage. Another trend that can be observed from the data is that GO adsorption on 13F was greater than that on glass, while for HRP the adsorption was greater on glass than on 13F. This can be attributed to the fact that the iso-electric point (pI) of GO is 4.2 and that of HRP is 7.2, so that in the buffer used GO is highly negatively charged and HRP is slightly charged. These different electrostatic interactions might explain the different adsorption profiles with regard to glass and 13F. Also, GO is a much larger protein (mol. weight 160 kDa) whereas HRP is smaller (40 kDa), which might lead to different orientations of the protein on the surface, thus leading to different coverages. Both the XPS and microBCA data have shown the same trends.

Table 1. Values of binding constants for GO and HRP on 13F and glass

                 XPS               MicroBCA assay
                 K1       K2       K1       K2
GO on 13F        0.034    0.012
GO on glass      0.023    0.018
HRP on 13F       0.017    0.006
HRP on glass     0.032    0.017

Conclusion The adsorption behavior of two test proteins on two different surfaces was observed.
XPS is a very sensitive technique and can be used to detect very small amounts of an adsorbed protein. Another biochemical tool, the microBCA assay, was also used to look at the adsorption phenomena. Both techniques are complementary to each other and produced comparable results. The advantage of using the biochemical assay was its ease of use and expensive instrumentation such as an XPS setup, to look at protein adsorption to different surfaces is not necessary. This technique is useful to biologists, biochemists, surface chemists, and engineers. References CHAPTER 2 ENGINEERING A TITANIUM AND POLYCAPROLACTONE CONSTRUCT FOR A BIOCOMPATIBLE INTERFACE BETWEEN THE BODY AND ARTIFICIAL LIMB Introduction Titanium is a commonly used material in dental and orthopedic applications because of its high mechanical properties, chemical stability, and biocompatibility. Its excellent biocompatibility allows titanium implants to be directly anchored to bone or osseointegrated. The conventional prosthetic replacement in amputees is a stump-socket design, which transfers force through the prosthetic to an external contact point on the patient. Such a design results in nonuniform distribution of pressure and can lead to pain, infection, and necrosis of the soft tissues at the point of contact. It is believed that intraosseous transcutaneous amputation prostheses (ITAPs) can overcome these issues by directly attaching the implant to the skeleton through transcutaneous abutment. Transcutaneous implants have been used clinically since the 1960s. However, subsequent attempts to use similar implants in amputees have had limited success due to problems with loosening of the implant, mechanical failure, and infection. This weak adhesion allows for invasion of bacteria at the tissue-implant interface. It is believed that optimizing the attachment of the skin to the prosthetic will lead to clinically viable ITAPs. Human skin is multifunctional and consequently has a complex architecture comprised of multiple layers with some indistinct boundaries. Skin acts as an active protective agent, or barrier, against traumas such as friction, impact, pressure, and shear stress. In addition many things have an effect on the properties of skin, including the location of the skin on the body, the rate of application and duration of the stress, and the age of the skin\textsuperscript{12}. In order to develop a clinically viable ITAP, the device must be mechanically strong, provide a tight seal at the biotic-abiotic interface, and take into account the complex properties of skin and other native tissues. In this study, a surface modified titanium construct was developed in order to build a surface that would allow for direct tissue adherence as well as scaffold adherence. Along with this construct, a novel Computer Aided Biology (CAB) Tool was used to fabricate a complex, three-dimensional (3D) polycaprolactone (PCL) scaffold on top of the titanium construct. PCL is well known for being a highly flexible biomaterial and was approved for use in surgical sutures over 30 years ago\textsuperscript{13}. This study focused on characterizing these constructs and scaffolds, testing the adherence of the scaffolds to the titanium constructs, and examining different antibacterial agents to reduce bacterial invasion. 
Materials and Methods

Description of the Computer Aided Biology Tool

The CAB Tool, previously known as the BioAssembly Tool or BAT\textsuperscript{14,15}, was developed to produce artificial constructs that demonstrate properties of native tissue (microenvironment, 3D organization, and inter-cellular contact). The CAB Tool uses a computer-aided-design/computer-aided-manufacturing (CAD/CAM) approach to build heterogeneous tissue models. The system is a multi-head, through-nozzle deposition machine developed to conformably deposit biomaterials, cells, and co-factors on various supporting surfaces to create surrogate tissues and test platforms for experiments in cell biology and tissue engineering. The device contains: an XY coordinate system with a stage; a number of Z-traveling deposition heads (currently up to 3), each supplied with an individual controlling video camera; LED work area illumination; a fiber optic light source to illuminate the deposition area and cure photopolymers in-line; individual ferroelectric temperature controls for each deposition head; a water-jacket temperature control for the stage; stainless steel and anodized work surfaces; and a piezoelectric humidifier.

Button Modification

Titanium buttons were machined from 2 mm thick titanium foil (Sigma Aldrich, St. Louis, MO). A schematic of the buttons is shown in Figure 8. Each button was machined so that there was a 2 mm thick round section with a 12.7 mm diameter, as well as a 6.35 mm tab that was 1 mm thick. All of the buttons were initially polished to a mirror finish using coarse (36 grit) sandpaper followed by finer grit (400 and 800) sandpapers. The buttons were then divided into four groups: (1) polished buttons, (2) polished buttons with holes, (3) acid etched buttons, and (4) acid etched buttons with holes. The buttons with holes (groups 2 and 4) had a square 10 x 10 array of holes, each 200 μm in diameter, depth, and separation; the full array was 3.8 mm square. The acid etched buttons (groups 3 and 4) were etched by immersion in a 50:40 v/v mixture of 18% HCl and 48% H₂SO₄ at 60°C for 5 minutes and then rinsed thoroughly in deionized water.

Figure 8. Titanium button design: (a) schematic drawing of the modified buttons; (b) representative picture of the buttons; (c) SEM image of a polished titanium button; (d) SEM image of an acid etched titanium button

Surface Roughness Measurements

The unmodified and modified titanium button surfaces were imaged using a Hitachi S3500N scanning electron microscope with the accelerating voltage set to 20 kV. The surface roughness of the acid treated and plain titanium buttons, with and without holes, was measured with a Zygo optical interferometer running MetroPro software. The samples were mounted on the stage, the z-stop position was calibrated, and the light intensity was adjusted to 85-90%. The objective lens focus was adjusted until interference fringes could be observed, and the stage roll and pitch were then adjusted until the fringes covered the entire surface to be measured. The MetroPro software was then used to capture the interference image and simultaneously reconstruct a pseudo-colored 3D profile of the surface. A region on the 3D profile was selected using the software crosshairs to automatically generate the average and root mean square (RMS) roughness values for that region (a brief sketch of these two roughness measures is given below). Three measurements were taken for each surface and the values were averaged.
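For reference, the following is a minimal sketch of how average (Ra) and RMS (Rq) roughness can be computed from sampled surface heights; the function name, sample values, and the use of C are illustrative assumptions only and are not part of the MetroPro software.

```c
#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Average (Ra) and RMS (Rq) roughness of n sampled heights z[i],
 * taken as deviations from the mean plane of the selected region. */
static void roughness(const double *z, size_t n, double *ra, double *rq)
{
    double mean = 0.0;
    for (size_t i = 0; i < n; i++)
        mean += z[i];
    mean /= (double)n;

    double abs_sum = 0.0, sq_sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        double d = z[i] - mean;
        abs_sum += fabs(d);
        sq_sum  += d * d;
    }
    *ra = abs_sum / (double)n;        /* arithmetic average roughness */
    *rq = sqrt(sq_sum / (double)n);   /* root mean square roughness   */
}

int main(void)
{
    /* hypothetical height samples in micrometres */
    double z[] = { 0.12, -0.05, 0.30, -0.22, 0.08, -0.10 };
    double ra, rq;
    roughness(z, sizeof z / sizeof z[0], &ra, &rq);
    printf("Ra = %.3f um, Rq = %.3f um\n", ra, rq);
    return 0;
}
```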
The values were then plotted for the different surface treatments.

Preparing PCL for Printing

Polycaprolactone (PCL, molecular weight 80 kDa; Sigma Aldrich, St. Louis, MO) pellets were dissolved in glacial acetic acid (Sigma Aldrich, St. Louis, MO) at a concentration of 70% w/v. This concentration was found to be best for dispensing and for ease of solvent evaporation, resulting in a solid structure. The mixture of PCL pellets and acetic acid was placed in a glass bottle with a sealed cap, and the PCL was dissolved by sonication for 1-2 hours. After the PCL was fully dissolved, the solution was stirred with a spatula, backfilled into a 3-mL dispensing syringe (EFD, Providence, RI), closed with a stopper at the bottom and top of the syringe, and centrifuged at 2000 rpm for 5 minutes to remove air bubbles. This solution was then used for scaffold printing.

Printing PCL Scaffolds

The syringe was connected to an air pressure line for dispensing of the PCL solution. The ceramic dispensing tip used had an inner diameter of 100 μm and an outer diameter of 150 μm. A pressure of 25 psi was used to push the PCL solution through the ceramic tip orifice and deposit it onto the target substrate. The printing speed (both XY stage and Z movement) of the dispensing pump was 2.5 mm/s. The printing speed strongly influences the rate of evaporation of the acetic acid solvent, which in turn affects the formation of pores within the scaffold. A script (pen path) was created in AutoCAD and used to print the PCL scaffolds. The initial dispensing height was 50 μm, with a lift of 25 μm between each layer. Scaffold designs were entered into the PathCAD program to generate porous constructs that were 5.4 mm x 5.4 mm. A single line of extrusion was used to generate the struts; thus, the designed strut thickness (the width of the lines used to fabricate the PCL scaffold) was 100 μm. The input pore size (the open space between the lines of PCL) was 300 μm. The scaffold was designed to be 130 μm tall with a strut thickness of 100 μm. The printed scaffolds were measured, and the measured values were compared with the expected values. To bring the grids to a pH of 7.0, they were heated at 55°C for at least 6 hours, bathed in 90% ethanol for 30 minutes, and washed two times in PBS.

Strength of PCL Grids

Preliminary mechanical testing was conducted with a specially designed scaffold. This scaffold was fabricated to be similar to the scaffolds printed on the titanium buttons, with a height of 130 μm and a strut thickness of 100 μm, but the scaffolds were longer (10 mm) and wider at one end. To test the tensile strength of the scaffold, the narrower end was fixed and the wider end was pulled until the scaffold broke. The force required to break the scaffolds was recorded, and the ultimate strength was calculated by dividing this force by the effective cross-sectional area (0.169 mm²). On two separate days, 5 samples were printed and then stretched to determine the strength of the PCL grids.

Adhesion Testing

A tensiometer (Instron 3369, Instron Corp., Norwood, MA) was used to perform a peel-off adhesion test on the PCL. A modified PCL grid was printed on the buttons for adhesion testing; these grids were printed such that a section of the grid hung over the edge of the button. An aluminum jig was manufactured in house in order to attach the tensiometer to the PCL on the button, as can be seen in Figure 9.
This jig had one plate with a circular groove in which the buttons were placed; another aluminum plate was placed over it and screwed into place to secure the button. This section was then attached to one end of the tensile tester. The free end of the printed PCL was adhered to another plate with glue, which was then attached to the other testing end of the tensile tester. The crossheads were moved in opposite directions, producing a tensile force on the PCL-titanium interface. The crosshead speed was set at 3 mm/min. The test was carried out until the PCL either peeled off from the titanium substrate or broke into two pieces. The software generated values for the break load, maximum load, and maximum displacement. Stress (MPa) vs. strain curves were calculated, and the adhesion strength (MPa) for each surface treatment was taken as the maximum stress of the corresponding curve. On two separate days, 3 measurements were taken for each surface treatment; the values were averaged, and the standard deviations were calculated and plotted.

Cell Culture on PCL

Solutions of PCL alone and of 70% PCL in acetic acid were extruded into 6-well tissue culture polystyrene (TCPS) plates at a volume of 100 μL. Each extrusion was neutralized and sterilized by heating at 55°C for 6 hours, bathing in 90% ethanol for 30 minutes, and then rinsing twice with PBS. Human dermal fibroblast cells (Hs68; ATCC, Manassas, VA) were cultured in media containing 90% Dulbecco’s Modified Eagle’s Medium (DMEM; ATCC, Manassas, VA) + 10% Fetal Bovine Serum (ATCC, Manassas, VA) with 1% penicillin/streptomycin (Sigma Aldrich, St. Louis, MO) according to the ATCC cell culture protocol. When the cells reached confluence, they were seeded at a concentration of $10^5$ cells/mL onto the PCL, the 70% PCL in acetic acid, and plain TCPS. After 1 hour, the constructs were rinsed with PBS and cell media was added. After 3 days, the viability of the cells was assessed using a fluorescent live/dead assay (Invitrogen, Carlsbad, CA). Cell viability was assessed twice with a sample size of 5 for each group.

Assessing Cell Viability

To assess the viability of cells on the constructs, a staining solution containing calcein AM and ethidium homodimer in divalent-cation-free PBS (DCF-PBS) was prepared following the instructions included with the kit. The constructs were washed with DCF-PBS and then bathed in the staining solution for 30 minutes at room temperature, protected from light. The constructs were then washed twice for 15 minutes with DCF-PBS and imaged within 1 hour after staining using an epifluorescent microscope to image live (excitation, 488 nm; emission, 530 nm) and dead (excitation, 528-553 nm; emission, 580 nm) cell fluorescence. The total number of cells was counted within 5 fields of view. Cells with homogeneous bright green staining throughout the cell were counted as live, and cells with bright red staining were counted as dead. Percentage viability was calculated as the number of live cells divided by the total number of cells counted.

Preparing Antibacterial Samples

Type I collagen (Col) and hyaluronic acid (HA) solutions were mixed with one of three antibacterial materials: silver nanoparticles (Ag; Sigma Aldrich, St. Louis, MO), titanium dioxide anatase (TiO\textsubscript{2}; Sigma Aldrich, St. Louis, MO), or chlorhexidine diacetate (ChD; Sigma Aldrich, St. Louis, MO). A 3.0 mg/mL collagen solution was prepared as previously described\textsuperscript{14}.
Briefly, purified rat-tail collagen type I (BD Biosciences, Bedford, MA) was mixed with Dulbecco’s Modified Eagle’s Medium (DMEM) and brought to a pH of 7.0-7.4 by the addition of 1 M NaOH. A solution of HA was prepared from an Extracel Hydrogel kit (Glycosan Biosystems, Inc., Salt Lake City, UT) by following the supplied protocol. After the Col or HA solution was prepared, an antibacterial agent was mixed into the solution at 1-10\% w/w by gently pipetting. Next, 100 \( \mu \)L of the solution was placed in a 6-well plate and allowed to fully polymerize at 37°C for 1 hour.

Antibacterial Assay

\textit{Staphylococcus aureus} (ATCC, Manassas, VA) was grown in Caso broth (casein-peptone soymeal-peptone broth) overnight at 37°C in a water bath. The antibacterial samples (described above) were incubated at 37°C for 1 hour in the Caso broth solution containing \textit{S. aureus}. Aliquots of broth were obtained from each group, stained with Crystal Violet for 15 minutes, and an acetic acid solution was added to solubilize the stained bacteria in the broth. The bacteria were then quantified using a spectrophotometer at 630 nm. These experiments were performed on two separate days with a sample size of 5 for each group.

Statistics

In order to assess the differences between treatments, each experiment was carried out as described in its corresponding section. The measured values were then compared using a Student’s t test. A difference was labeled “significant” only if the p value was less than 0.05.

Results

Button Modification

As shown in Figure 8, buttons were machined from 2 mm thick titanium foil such that there was a 2 mm thick round section with a 12.7 mm diameter and a 6.35 mm long tab that was 1 mm thick. Half of the buttons had a 10 x 10 array of holes that were 200 μm in diameter, depth, and separation. All of the buttons were polished to a mirror finish. Then, half of the holed buttons and half of the non-holed buttons were acid etched to increase surface roughness.

Surface Roughness

Figure 8 shows SEM images of the polished and acid etched buttons. Using a Zygo optical interferometer, the RMS surface roughness was measured for the four button groups: polished without holes (P), polished with holes (H), acid etched without holes (AE), and acid etched with holes (AEH). The results are shown in Figure 10a. No significant difference was observed between the surface roughness of the two polished groups (P and H). A significant increase (p < 0.05) was noted in the surface roughness of the acid etched groups (AE and AEH) compared to the polished buttons.

Figure 10. Surface roughness and adhesive strength for button modifications: (a) root mean square roughness measured with an interferometer for polished buttons (P), polished buttons with holes (H), acid etched buttons (AE), and acid etched buttons with holes (AEH); (b) adhesion strengths measured for the AE, H, and AEH groups. * indicates a significant increase in surface roughness compared with polished buttons (p < 0.05). λ indicates a significant increase in adhesive strength compared to acid etched buttons (p < 0.08)

Precision of Printed PCL Grids

Porous scaffolds are often desired to provide an area for cells to migrate and proliferate or for controlled release of chemicals\textsuperscript{16}. The CAB tool can print complex 3D scaffolds with different designs in terms of overall shape, dimension, and pore size.
In order to test the accuracy of the CAB tool using 70\% PCL, a 5.4 mm x 5.4 mm x 1.5 mm (L x W x H) scaffold was fabricated. Figure 9 shows a representative scaffold printed with the CAB tool; the struts and pores within the scaffold are uniform and evenly spaced. The printed scaffolds were measured and the measured values were compared with the expected values. A single, non-overlapping line was extruded to generate each strut within the scaffolds. Since the dispensing tip used had an inner diameter of 100 μm, the expected strut size was 100 μm. The pore size (the open space between the lines of PCL) programmed into the script was 300 μm. The measured strut size was 65 ± 12 μm, and the measured pore size was 315 ± 10 μm. The overall porosity of the scaffolds was 54 ± 1%.

Tensile Strength of PCL Grids

To demonstrate that the printing process of the CAB tool does not significantly alter the strength of the printed PCL, a special scaffold design was entered into the PathCAD software. These scaffolds were able to hold between 4 and 8 pounds before ultimately breaking. Using the effective cross-sectional area of 0.169 mm$^2$, the ultimate strength varied from 24 to 40 MPa, with an average of 29.62 MPa.

Adhesion Testing

When PCL was printed on top of the smooth titanium buttons, the PCL peeled off of the buttons upon drying. When the titanium buttons were either acid etched or machined with holes, the PCL remained on the titanium button. The different surface modifications were examined to determine which provided the best adhesion between the PCL and the button. The average adhesive strengths for the acid etched buttons (AE), polished buttons with holes (H), and acid etched buttons with holes (AEH) are shown in Figure 10b. Buttons that were acid etched had an average adhesive strength of 0.13 MPa. When holes were added to the buttons, there was a significant increase ($p < 0.05$) in the adhesive strength compared with acid etched buttons without holes. However, while there was a slight increase in the average adhesive strength, no significant difference was noted between the holed buttons that were acid etched and the holed buttons that were only polished. Thus, the addition of holes gave the desired result of a significant increase in adhesive strength.

Cell Viability

To verify that the preparation of 70% PCL in acetic acid would not affect the viability of cells, human dermal fibroblasts were seeded onto sterile, neutralized 70% PCL in acetic acid, PCL alone, and TCPS. There was a significant decrease ($p < 0.05$) in the viability of cells seeded on the 70% PCL (10% decrease) or plain PCL (15% decrease) compared with cells seeded on TCPS. When the number of cells was counted on the three test surfaces, there was a significant increase (53%; $p < 0.05$) in the number of cells on TCPS compared with either of the PCL surfaces.

Antibacterial Assay

Initially, different antibacterial agents (silver nanoparticles [Ag] and titanium dioxide anatase [TiO$_2$]) were examined either within the PCL or as a coating on the PCL. However, no significant decrease in viable bacteria was observed with these methods. Based on these results, the antibacterial activity of different antibacterial agents (Ag, TiO$_2$, and chlorhexidine diacetate [ChD]) embedded in natural biomaterials, type I collagen (Col) or hyaluronic acid (HA), was examined. Figure 11a shows the absorbance readings from the spectrophotometer.
No significant difference was observed in the viable bacteria on either Col or HA alone compared with the bacteria broth. Figure 11b shows the percentage of bacteria in the treatment groups, using bacteria in broth at the same time point as the standard. A significant decrease ($p < 0.05$) in bacteria was seen when 10% w/w TiO$_2$ was added to either Col or HA. A significant decrease ($p < 0.05$) in bacteria was also observed when 10% w/w ChD was added to the HA; this decrease was not significant when the ChD was added to the Col. Treatment groups using HA with varying concentrations of either ChD or TiO$_2$ (1% - 5% w/w) were then examined. No significant decrease in bacteria was observed when TiO$_2$ was used at these concentrations; however, there was a significant decrease in bacteria when any concentration of ChD was added to the HA.

Figure 11. Average viable bacteria as measured by spectrophotometer: (a) viable bacteria with various antibacterial agents, chlorhexidine diacetate (ChD) and titanium dioxide (TiO$_2$), mixed into hyaluronic acid (HA). * indicates a significant decrease in bacterial viability compared with HA alone (p < 0.05). (b) Percentage of bacteria in treatment groups, using bacteria in broth at the same time point as the standard. A significant decrease from the non-treatment group was seen only in the bold groups (p < 0.05)

| Treatment           | Percent |
|---------------------|---------|
| None                | 100.00  |
| HA alone            | 82.92   |
| HA + 1% ChD         | 29.67   |
| HA + 2.5% ChD       | 34.17   |
| HA + 5% ChD         | 29.77   |
| HA + 1% TiO$_2$     | 65.63   |
| HA + 2.5% TiO$_2$   | 84.12   |
| HA + 5% TiO$_2$     | 79.22   |

Discussion

Due to the nonuniform distribution of pressure in conventional stump-socket prosthetic replacements, amputees have problems with pain, infection, and necrosis of the soft tissues at the point of contact\textsuperscript{4,5}. ITAPs would allow for direct anchoring to the bone, thus potentially overcoming these problems. However, such devices have had only limited success in amputees\textsuperscript{6-10}. It is believed that clinically viable ITAPs may be achieved by creating a construct with tight adherence of the overlying scaffold\textsuperscript{11}. This study characterizes the fabrication of a PCL scaffold on top of a surface-modified titanium construct, with a primary focus on creating such a tight adherence.

When using the CAB Tool to generate spatially organized 3D PCL constructs, a 70% solution of PCL in acetic acid was found to be the best concentration for consistent printing. Porous scaffolds are often required for biomedical applications because the pores provide an area for cells to migrate and proliferate\textsuperscript{16}. For this study, porous scaffolds were fabricated using a 100 μm ceramic tip with programmed pores of 300 μm. The resulting scaffolds had an average strut width of 65 ± 12 μm and a pore size of 315 ± 10 μm; the resulting porosity was 54 ± 1%. Previous research has demonstrated that PCL has an average ultimate strength of 29 - 42 MPa\textsuperscript{17,18}. The PCL constructs printed in this study had an average tensile strength of 29.62 MPa.
This falls within the range of previously reported values\textsuperscript{17}. The tensile strength of human skin has been measured as 17 - 21 MPa\textsuperscript{12}. Thus, the scaffolds produced in this study show promise as an alternative skin interface for prosthetic devices, since their strength is similar to that of natural skin.

When PCL was printed on top of smooth titanium buttons, the PCL peeled off of the buttons upon drying. It has been demonstrated that adding surface roughness to titanium surfaces increases the osseointegration of the implanted construct\textsuperscript{19}. Accordingly, when the titanium buttons were either acid etched or machined with an array of holes, the PCL was able to adhere to the titanium. Buttons that were acid etched had an average adhesive strength of 0.13 MPa. A significant increase in the adhesive strength was observed when an array of holes was added to the surface of the titanium: the adhesive strength increased to 0.19 MPa for buttons with only holes (H) and to 0.22 MPa for buttons with both holes and acid etching (AEH). Thus, the addition of the array of holes significantly increases the adhesive ability of titanium. Orr et al. defined an adhesive as a material that exhibits an adhesive strength greater than 0.1 MPa, with materials below this threshold classified as sealants\textsuperscript{20}. With an adhesive strength of 0.22 MPa, a PCL grid printed on acid etched and machined (AEH) titanium qualifies as an adhesive and can effectively be treated as a bio-concrete.

Sarasam and Madihally demonstrated that PCL dissolved in acetic acid could be used in polymeric blends for biomedical applications\textsuperscript{21}. In this study, it was noted that when the 70% PCL in acetic acid constructs were dried at 55°C and then bathed in 90% EtOH for 30 minutes, the constructs were neutralized and thus compatible with viable cells. When the viability of cells was examined, there was only a slight (10%) difference in the viability of cells seeded onto 70% PCL compared with TCPS, although significantly fewer (53%) cells were noted on the 70% PCL after three days in culture. These results demonstrate that the 70% PCL does not have an adverse effect on the viability of cells.

When an antibacterial agent was added to the PCL or placed on the titanium button, there was no significant decrease in the number of viable bacteria compared with bacteria in the culture. However, when an antibacterial agent was added to type I collagen or hyaluronic acid, there was a significant decrease. Lower concentrations of antibacterial agents were also tested; with a concentration as low as 1% (w/w) chlorhexidine diacetate mixed in HA, there was a significant decrease in viable bacteria. It was also noted that chlorhexidine diacetate was more effective at low concentrations in HA than in type I collagen. Thus, by adding an antibacterial agent within HA, bacterial invasion can be greatly decreased.

Conclusion

The results demonstrate that not only adding roughness, but also adding surface features such as holes, greatly increases the adhesive strength of titanium. Also, the porous PCL scaffold fabricated on top of the titanium has a tensile strength similar to that of natural skin. With the addition of a coating of HA containing an antibacterial agent, such as chlorhexidine diacetate, the bacterial resistance of the titanium and PCL construct can be greatly increased.
It will greatly advance the application of ITAPs in the medical field if the engineered titanium and PCL construct successfully promotes effective *in vivo* adhesion at the titanium-epithelium interface, thus preventing infection of the skin and underlying tissue adjacent to prosthetic implants. We gratefully acknowledge the collaboration with Dr. Tithi Duttaroy, Dr. Cindy Smith, and Dr. Ken Church at nScrypt Inc. and their help with this project.

References

CHAPTER 3
SKELETAL MUSCLE TISSUE ENGINEERING ON BIOMEMS DEVICES

Introduction

The integration of biological components with artificial devices is important for creating bio-hybrid devices for a variety of functions and applications. Skeletal muscle can be integrated with BioMEMS devices and can be useful in a number of applications, including biorobotics, bioprostheses, tissue replacement, physiological and pharmacological studies, and disease models. Previously, fetal rat muscle cells harvested from E18 rat embryos were shown to form functional myotubes on a synthetic self-assembled monolayer substrate (DETA), and this system was subsequently integrated with silicon-based cantilever structures\(^1\). A completely defined, serum-free medium to culture the cells was also developed. Immunocytochemistry and electrophysiology studies showed the formation of functional myotubes on both surfaces. The detailed protocols for the cantilever fabrication, surface functionalization, and cell culture are documented in Das, 2007\(^2\). After demonstrating the integration of the myotubes with the cantilever devices, our group was also able to quantitatively measure the contractile forces exerted by the functional myotubes using a novel AFM detection setup consisting of a laser, a photodiode, and an X-Y stage driven by micromotors\(^3\). In that study, a 2 V bipolar pulse with a pulse duration of 40 ms and a frequency of 1 Hz was applied to the myotubes cultured on the cantilevers to demonstrate their rhythmic contraction; applying a higher frequency pulse resulted in tetanic conditions.

To evaluate the surface cues needed for skeletal muscle to differentiate, vitronectin was patterned on the DETA substrates and C2C12 cells were cultured on the patterns\(^4\). Vitronectin was also patterned on top of commercial, substrate-embedded microelectrodes. The cells differentiated and formed functional myotubes, which were evaluated using myosin heavy chain staining. The novelty of this study was that, instead of adding vitronectin to the growth and differentiation medium, the authors found that surface-bound vitronectin worked better\(^4\). Ishibashi et al.\(^5\) studied the electrical stimulation of C2C12 cells on a porous alumina substrate sealed by PDMS. They were able to selectively block certain areas of the substrate using air bubbles, so that the electrical field could not be applied to those areas. The stimulation of the C2C12 myotubes was visualized with a Fluo-4 dye, which showed the calcium transients in response to the externally applied electrical stimulus. To develop skeletal muscle fibers for tissue-engineered grafts, Zhao et al.\(^6\) cultured C2C12 myoblasts on a PDMS polymer microchip with linearly aligned microgrooves. The myoblasts attached, grew, and differentiated to form myotubes, and also formed 3D multi-layered structures. They also found that deeper grooves worked better in aiding the alignment of the cells.
Cimetta et al.\(^7\) also cultured C2C12 cells and neonatal rat cardiomyocytes on polyacrylamide-based hydrogels modified by micro-contact printing of extracellular matrix proteins. Skeletal muscle cultures respond to chemical cues in the surrounding milieu; Zhao et al.\(^8\), however, studied skeletal myogenesis using electrical stimulation cues. They fabricated a microelectrode array, cultured C2C12 cells on the surface, and applied a low intensity (500 mV) but high frequency (1000 kHz) signal to the electrodes. The cells proliferated and reached confluency; they also differentiated into functional myotubes, as evidenced by MHC and actin filament staining. Muscle differentiation in response to external mechanical stimulation was demonstrated in a study by Vandenburgh et al.\(^9\), in which primary human skeletal muscle cells were used. The cells were grown in a silicone mold with end attachment sites. To mimic mechanical loading \textit{in vivo}, the cells were attached to posts of a mechanical stimulator, which moved the posts to provide tension to the muscle cells. This repetitive stretching/relaxation increased the muscle elasticity, mean myofiber diameter, and myofiber area.

Skeletal muscle on BioMEMS can also be used as a bio-micro-actuator. In the first such study, Dennis et al.\(^{10}\) designed and built a swimming robot actuated by two explanted frog semitendinosus muscles. The robot performed basic swimming maneuvers in an extracellular Ringer's solution and remained active for 42 hours. Montemagno et al.\(^{11}\) built a micromechanical device operated by muscle bundles grown from muscle cells. The muscle cells were cultured onto a thin gold bridge between a free-standing cantilever and a post, providing a novel way to characterize the properties of muscle bundles. They also created an autonomous mechanical object which moved on a surface in response to muscle contraction fuelled by glucose.

In this study, embryonic rat skeletal muscle cells from E18 embryos and spinal cord motoneurons from E15 embryos were co-cultured on a patterned, silicon-based microcantilever device. The culture was maintained in the defined medium\(^2\). After 10-12 days in culture, the co-culture system was interrogated with an AFM detection system, and the muscle contraction dynamics in response to a defined external electric field stimulation were recorded. The system was also interrogated with the excitatory neurotransmitter glutamate to probe for the existence of a neuromuscular junction.

Materials and Methods

Surface modification and characterization of cantilever devices

The cantilever devices were cleaned by soaking them in a 3:1 v/v solution of concentrated sulfuric acid and hydrogen peroxide (piranha solution) and raising the temperature to 120°C for 10 minutes. The devices were subsequently rinsed thoroughly with deionized water and dried in an oven overnight. The devices were then coated with a PEG alkylsilane, 2-[methoxypoly(ethyleneoxy)propyl]trimethoxysilane, for 30 minutes according to a previously established protocol. The PEG-modified surfaces were then patterned using deep-UV laser ablation with a specific photomask, which ablated the PEG from the cantilever surfaces and also created somal adhesion sites for the motoneurons. The ablation was performed for 40 seconds, and the surfaces were then backfilled with the DETA alkylsilane, (3-trimethoxysilylpropyl)diethylenetriamine, according to a previously established protocol.
Glass coverslips (22 x 22 mm) were used as controls, and the surfaces were subsequently characterized with contact angle goniometry and XPS.

Cell culture and electrical characterization of the muscle-motoneuron coculture

Motoneurons and hind limb skeletal muscle cells were harvested from day 15 (E15) and day 18 (E18) rat embryos, respectively, obtained from pregnant Sprague-Dawley rats. A cell count was performed, and the motoneurons and muscle cells were each plated at a density of 200 cells/mm² on the cantilevers using a PDMS cell separation chamber of our own design. The cells were imaged on day 10. Figure 12 shows a representative phase-contrast image of a cantilever with a number of myotubes spanning the entire length of the cantilever. Similar myotube structures were observed on other cantilevers as well. The motoneurons are not visible because they lie on the silicon substrate.

Initially it was necessary to determine whether the myotubes were functional and contractile. The cantilevers were placed in a field stimulation chamber filled with extracellular media, and electrical stimulation was applied to the cells using micro-electrodes. The field stimulation pulse was a bipolar pulse with an amplitude of ± 2 V, a pulse duration of 40 ms, and a frequency of 1 Hz. Figure 13 shows the oscilloscope readings from the photodiode due to the contraction of the myotube; it also shows the synchronization of the response (upper panel) with the trigger stimulus (lower panel).

Figure 12. Myotube formation on patterned cantilevers (day 10, 20x magnification)

Figure 13. Field stimulation of the co-culture showing contractile behavior of the muscle

Administration of extracellular glutamate to the motoneuron-muscle coculture on cantilevers

The next step was to interrogate this system with glutamate, an excitatory neurotransmitter. The experiment was performed on day 14. The field stimulation was switched off, and a single dose (30 µL) of 50 mM glutamate was added to 2 mL of the media 30 seconds after the AFM recording was started. Figure 14 shows the oscilloscope readings before and after the glutamate administration. Contractile behavior of the myotubes, aperiodic in nature, was observed after the administration (upper panel); the lower panel shows the field stimulation, which was turned off during the experiment. The recording was performed for over 3 minutes. The glutamate-containing media was then replaced with fresh media, and no further contractions were observed. As a control experiment, another cantilever device was cultured with only muscle cells, and the cells were tested for their contractility using the field stimulation. Again a single dose of 50 mM glutamate was administered into the media 45 seconds after the recording was started. Figure 15 shows the oscilloscope readings; no contractile behavior of the myotubes was observed.

Figure 14. Glutamate administration to muscle-motoneuron coculture

Figure 15. Glutamate administration to pure muscle culture

Conclusion

A novel silicon-based micro-cantilever device was used to create a platform for studying skeletal muscle tissue engineering. Embryonic rat motoneurons and hind limb muscle cells were cocultured on the device, and the system was probed for the formation of functional neuromuscular junctions using the excitatory neurotransmitter glutamate. Future studies will continue to probe this functional aspect; applications include the study of disease models and high-throughput drug screening.
References

CONCLUSION

The work in this thesis involves the study of the interactions of proteins, cells, and tissues with biomaterials. The first part addresses the question of protein adsorption on surfaces. Glucose oxidase and horseradish peroxidase were adsorbed to a fluorinated surface and a glass surface. The concentration-dependent adsorption characteristics were quantified using X-ray photoelectron spectroscopy and a micro-BCA biochemical assay. The data obtained were then fitted with a simple Langmuir adsorption model equation, and the affinity/binding constants of the proteins to the surfaces were derived and compared.

In the second part, tissue interactions with biomaterials were examined. A polycaprolactone-based tissue-engineering construct was printed on a titanium substrate, and its optical and mechanical characteristics were evaluated. Human fibroblast cells were then seeded on the construct and their viability was assessed. The antibacterial properties of the construct were also evaluated by loading it with antibacterial agents such as silver nanoparticles, titanium dioxide, and chlorhexidine diacetate.

In the third part, cell interactions with biomaterials were studied. We cultured embryonic rat skeletal muscle and motoneuron cells together on a patterned, silicon-based micro-cantilever device. The culture was maintained in a defined medium, and after 10-12 days in culture the system was interrogated with an AFM detection system. The muscle contraction dynamics in response to a defined external electrical field stimulation were recorded. We also attempted to show the existence of an in-vitro neuromuscular junction with the help of the neurotransmitter glutamate.

Future work involves using more sensitive detection methods for quantifying protein adsorption and using our cantilever-based system as a high-throughput drug-screening platform.
The primary function of a multiprocessor computer system is to execute software as written by application programmers. The application programmer writes code in a high level language that embodies a higher level programming model. A compiler reduces the application software and associated libraries to machine code and operating system (OS) calls. Ultimately, all the software, in the form of machine code, is executed by hardware according to the instruction set architecture it is designed to implement. This is a classic use of layers of abstraction to manage complexity. Each abstraction layer specifies, through a well-defined interface, what should be done, not how it should be done. How it should be done is the implementation.

When mapping application software to hardware there are three layers of abstraction, and three interfaces, that are of primary interest (see Figure 1). These are the Application Programming Interface (API), Application Binary Interface (ABI), and Instruction Set Architecture (ISA). Atop this stack of interfaces is a programming model, a high level paradigm used for expressing an algorithm.

**Figure 1. Three major multiprocessor interfaces are the API, ABI, and ISA.**

The high level language programmer works at the API level and develops software using a programming language that is based on a programming model. For example, the Java API supports multi-threaded applications using a shared memory programming model. Conversely, a given programming model may be implemented with a number of high level language APIs, all sharing common features. The API consists of the programming language itself, plus libraries that may be used for implementing some or all of the multiprocessing capabilities. Strictly speaking, the term “API” refers to the entire interface; in common usage, however, a particular library of routines is sometimes referred to as “an API”; we will occasionally use this terminology. As part of many API implementations, runtime software, or simply “the runtime”, provides certain management and service functions at the user level (i.e., with non-privileged instructions). These functions are often programming language-dependent, and performing them at the user level saves the overhead of invoking the operating system. Consequently, the runtime not only executes API routines, but it may also manage some API-specific state information (e.g., variables and data structures). For example, the runtime may maintain message passing buffers on behalf of API routines that send messages between communicating threads.

The ABI consists of the operating system call interface and the user-level portion of the instruction set. The user-level instruction set consists of the non-privileged instructions that perform the basic load/store, branching, and ALU operations, for example. The operating system call interface is defined as part of the underlying operating system, Windows or Linux, for example. The ISA consists of all the instructions, both privileged and non-privileged; the privileged instructions are used only by the OS as part of its implementation. These privileged instructions often involve the management of hardware resources, real memory and I/O, for example. The ISA also contains aspects of the architecture other than the instructions themselves; for example, the virtual memory architecture and exceptions (traps and interrupts) are specified as part of an ISA.
For multiprocessor systems, the ISA also defines the sequences of values observed by processors when they execute load and store instructions that access shared regions of memory. To summarize: a program written for an API is compiled into a binary program that, when loaded into memory with API libraries, satisfies the specification of the ABI. The hardware implementing a specific ISA then executes the software (including both the program’s binary and OS-implemented operations). In this chapter, we will describe all three of the major interfaces and give an overview of API and ABI implementations. The implementation of the ISA is covered in greater detail throughout the book.

### 2.1 Programming Models

The most widely used programming model is the one implemented by a sequential procedural language such as C or FORTRAN. Object-oriented languages such as Java also fit this programming model. For throughput multiprocessing, where single-threaded programs run independently, this is the programming model being used. From the user’s perspective this is just classic multiprogramming.

For software that expresses concurrency, there are a number of parallel programming models, each of which has its own particular features. Two of the most common are adapted from the conventional procedural model, and these are the ones we will focus on: the shared memory and message passing programming models. Before describing these parallel programming models, however, we should first define some terms more formally than we have done thus far. Over the years, the terms “thread”, “task”, and “process” have meant different things depending on the operating system supporting them. The convergence toward Unix (including Linux) and Windows operating systems has also led to a convergence in terminology. “Task” is now usually used informally, while “process” and “thread” have specific meanings. We will define them here as they appear at the API level; later we will discuss them at lower levels of abstraction.

A process consists of a shared memory space and one or more execution threads (defined below) that can directly read and write values from/to the shared space. At the API level, memory may be arranged as a linear space containing individual variables and language-supported structures, or memory may be organized as a heap that holds objects accessed via references (pointers), as in the case of Java or C#. A thread consists of 1) a sequencer that steps through program statements and executes them, and 2) a private address space (in addition to the shared address space of the process to which it belongs). Hence, if a process consists of multiple threads, there are multiple sequencers and associated private address spaces. A private address space can only be accessed by statements executed by the thread with which the private space is associated. The private address space is sometimes arranged and accessed as a stack.

In the basic shared memory programming model, there is typically a single process with multiple threads; the threads communicate through the shared memory space. In the basic message passing programming model, there are multiple processes, each with a single thread. In a pure message passing model, each process has its own address space, with no shared memory accessible from the multiple processes, so the processes can communicate only by explicitly passing messages (a minimal sketch of such explicit message passing is given below).
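As an illustration of the pure message passing model, the following sketch (an illustrative example, not code from the text, assuming a Unix-like ABI) creates a second process with fork() and passes a message from child to parent through a pipe, the only channel the two address spaces share.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();             /* create a second process */
    if (pid == 0) {                 /* child: its own private address space */
        close(fd[0]);
        const char *msg = "result=42";
        write(fd[1], msg, strlen(msg) + 1);   /* explicit message send */
        close(fd[1]);
        _exit(0);
    }

    /* parent: cannot read the child's variables, only its messages */
    close(fd[1]);
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf); /* explicit message receive */
    close(fd[0]);
    wait(NULL);                     /* wait for the child to terminate */

    if (n > 0)
        printf("parent received: %s\n", buf);
    return 0;
}
```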
In general, a program could employ a hybrid model that uses both message passing and shared memory and that supports multiple processes, each with multiple threads per process.

Programming models with explicit threads are so commonly used that it is easy to overlook the fact that there are other models. To provide some overall perspective, it is instructive to consider briefly a different type of programming model -- one that is functional rather than procedural. An example of such a model is embodied in spreadsheets. (In addition to being a functional programming model, spreadsheets also use a graphical interface, rather than a text-based interface as is common in many languages.) In a functional language, there is no explicit, procedural control flow as in languages such as C or Java. That is, there are no explicit threads or processes. Rather, the function is expressed as a series of equations that must simultaneously hold. Parallelism is implicit in the functional specification. The expressions in each spreadsheet cell can be evaluated in any consistent order (possibly involving re-evaluation of certain equations) as long as they stabilize to a set of numbers consistent with all the equations. Software implementing the spreadsheet may analyze the equations and determine a sequence of data dependencies, which can then be used to guide the order of evaluation. This evaluation can take place in parallel as long as the data dependencies are satisfied. Finally, it should be noted that dataflow computers, computer hardware based on a functional model, have also been proposed and experimental versions have been built, but thus far they have not been successfully commercialized.

As noted above, most parallel programming languages follow a procedural model rather than a functional model. The procedural model explicitly embodies control flow sequencing. The commonly used procedural programming model makes threads explicitly visible to the programmer so that the programmer can create and manage multiple threads. This may be done through constructs in the high level language or through libraries in the API, but regardless of how it is done, the key point is that the programmer is aware of the multiple threads and develops software accordingly. In the following sections, we will discuss the characteristics of the two commonly used parallel programming models, with examples of the ways that APIs typically implement their primary features. The features of interest include the ways that threads of control are represented, managed, and synchronized, and the ways they communicate with one another.

### 2.2 Shared Memory Programming

In this and the following section, shared memory programming will be described in a top-down fashion: the API level is covered in this section, and its implementation at the ABI and ISA levels is covered in the next. Each of the major components -- processes and threads, communication, and synchronization -- will be covered. In addition, example parallel programming patterns will be given. Throughout this section, examples will be drawn from two popular parallel programming APIs. The reader should refer to the documentation for these two APIs for further details and examples.

- C and C++ provide API-level support for parallel programming via standard libraries that manage both processes and threads.
  Because of the close historical relationship between C and the Unix operating system, many of the standard API-level routines map directly to ABI-level OS calls. The pthreads library [2] is a set of standard routines for managing threads.

- The Java API was originally developed around a multithreading model. The Thread class is contained in the java.lang package and is therefore central to the Java language.

2.2.1 Processes and Threads

The primary operations on processes and threads are creation, termination, suspension, and resumption of activity. Because a process is defined to have an address space, when a process is created, a new memory address space is created for it by the operating system. The process is given its initial program code, and execution begins at some specified initial program statement. For example, the C library call fork() creates a duplicate of the process performing the call; the calling process is the parent and the forked process is the child. Because one would often want to run different code in the child than was running in the parent, a fork() is typically followed by an exec() call performed by the child, which loads in a new program for the child to execute. This fork/exec pairing is a feature (or quirk, depending on one’s perspective) of Unix. The Windows OS performs what one would consider a more “natural” fork operation, which both creates the new address space and loads in new software to execute. The C library routine wait() causes a parent to suspend and wait for a child to complete; when a process completes, it can terminate itself by calling the exit() routine.

With respect to threads, the pthreads pthread_create routine creates a new thread, running within the same memory space as the current (calling) process. The routine pthread_exit terminates the thread. Figure 2 illustrates a Unix process with two threads. As shown in the figure, the new thread has its own stack within the user memory space. The stack is private to the thread. Private memory is not necessary for the shared memory paradigm, but it contains data that is local to the thread and typically holds intermediate results and other data specific to the thread.

A Java virtual machine runs as a single process which interprets or otherwise emulates a Java program. This single virtual machine process can support multiple Java threads, however. This is done through the Thread class; within the Thread class, the method start() creates (initializes) a thread and the method run() contains the code that belongs to the thread. Consequently, to create a new thread, the programmer instantiates a new Thread object (supplying or overriding its run() method) and then calls start() on that object. When the run() method returns, the thread is terminated.

2.2.2 Communication

An important aspect of a programming model is the way that threads communicate with one another. In fact, it is this aspect that gives the programming models their names: the shared memory model and the message passing model. Although communication is a key feature of the shared memory model, it requires relatively little explanation. At the API level, the shared memory programming model is illustrated in Figure 3. The architecture of the shared memory itself is determined by the high level language being used. It may be a flat homogeneous space where variables and structures reside (as in C), or it may be an object heap (as in Java).
Regardless of its form, however, all the threads of a process have access to it and may read and write variables (or get and put object fields). It is through this shared memory space that values are communicated from one thread to another. One thread may write a value to a variable or field and another may read it; communication in the shared memory model is as simple as that (this simplicity is one of the reasons the shared memory model is appealing).

2.2.3 Synchronization

Synchronization among communicating threads or processes is as important as data communication; if data communication is not properly synchronized, then the threads or processes cannot reliably communicate. In the shared memory programming model, synchronization is used in a number of ways. For example, synchronization may be used to let one thread know when communicated data is available, to prevent multiple threads from simultaneously updating the same data structure, or to wait for all threads to reach some point in their computation before proceeding. In the shared memory programming model, synchronization is usually explicit, and is either built into the language or supported via API library routines. The pthreads library contains support for a number of synchronization operations, for example. Java contains both language-level and API library support.

In this section, we will focus on three primary forms of synchronization in the shared memory programming model: mutual exclusion, point-to-point synchronization, and rendezvous. In practice, these forms of synchronization are used as building-block elements of more complex application design patterns. An application design pattern is a general description or template for the solution to a commonly recurring software design problem. For example, it is common for two threads to have a producer/consumer relationship: one thread produces data that is consumed by another. There can be many types of producers, consumers, and data, but they all fit a common pattern, so the same synchronization structures can be used to implement all of them. The following subsections describe some of the important application design patterns and the ways that conditions, locks, and rendezvous are employed to implement them.

**Mutual Exclusion**

The first form of synchronization is *mutual exclusion*. The objective of mutual exclusion (mutex, for short) is to assure that only one thread at a time has access to a data structure or a piece of code. This is typically done via the abstraction of *locks*. A thread can set (or acquire) a lock, thereby excluding data or code access by other threads until the lock is cleared (or released). The use of lock/unlock can be demonstrated via two commonly used programming patterns, both of which implement synchronized updates of a shared data structure. Consider the situation where values held in a data structure are read, new data values are computed, and the data structure is updated with the new values. A read-modify-write code sequence of this type will require a number of machine instructions. A problem occurs if two or more different threads attempt to read and update the same data structure simultaneously; for example, a second thread may read from the data structure when it has been partially updated by the first, thereby reading inconsistent data values. Or, two threads may attempt to write simultaneously to the structure, again leading to inconsistent data (a minimal sketch of such an unsynchronized update is given below).
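To make the hazard concrete, the following sketch (an illustrative example, not code from the text) has two pthreads increment a shared counter with an unsynchronized read-modify-write sequence; updates can be lost because the threads may interleave between the read and the write.

```c
#include <pthread.h>
#include <stdio.h>

static long shared_count = 0;        /* shared data structure (a single counter) */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        long tmp = shared_count;     /* read   */
        tmp = tmp + 1;               /* modify */
        shared_count = tmp;          /* write  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000, but updates are lost when both threads read the
     * same old value before either writes its incremented value back. */
    printf("shared_count = %ld\n", shared_count);
    return 0;
}
```

Runs of this program typically print a total well below 2,000,000, which is exactly the kind of inconsistent update described above.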
A solution is to force threads to “take turns” when accessing the shared data structure, with one performing its accesses (reads and/or writes) before the other is allowed to proceed with its reads and writes. We consider two basic parallel programming patterns for the reliable sharing of data structures among multiple threads.

The first is the *code locking* pattern. In this pattern, the data structure is always updated by a common set of code sequences, so that access to the data structure is controlled via access to the code sequences. The code sequences are often embodied as a set of procedures that are defined in conjunction with the shared data structure. This pattern is also called a *monitor* pattern. A similar technique is employed in object-oriented programming, where a class defines the data of its objects together with the methods that can access them. The code locking pattern is illustrated in Figure 4. The code that updates a shared data structure is called update, and any thread wishing to update the structure must do so by calling update. Internal to update is a lock/unlock sequence that assures that at most one thread can be executing the read/modify/write update sequence at any given time (a minimal pthreads sketch of this pattern is given below). The variable code_lock is declared to be of type mutex, and it serves as a lock variable. If a thread calls update and the code is unlocked, then the execution of lock(code_lock) causes it to be locked, and the calling thread proceeds. When it is finished with the update, it unlocks the structure. On the other hand, if a thread calls update while another thread is currently executing update, then the second thread will block at the statement lock(code_lock) and wait until the first thread executes unlock(code_lock).

A code sequence of the type we have just described is an example of a critical section. It is a code sequence, consisting of a number of instructions, that must be executed by only one thread at a time. That is, after a thread enters the critical section, no other thread should be allowed to enter until the first thread exits the critical section.

![Figure 4. Code locking parallel programming pattern.](image)

The second pattern for updating shared data is called data locking and is illustrated in Figure 5. With the data locking pattern, a lock is associated with the data structure, rather than with the code that modifies it. In Figure 5, each of the threads updates the data structure in a different manner. Many code sequences can potentially update the structure, and a programmer can, more or less at will, develop new code sequences to access the structure, as long as each code sequence properly locks and unlocks the structure. The key point is that before any code sequence can access the structure, it must first lock the data structure (or wait if another code sequence has already locked it). The data locking pattern is sometimes preferred if different combinations of data structures are accessed and modified collectively. For example, one code sequence may read data values from two data structures and update a third; another code sequence may access the third and use its values to update a fourth structure. For these types of operations, the programmer should first acquire the locks for all the structures involved in the collective operation, perform the operation, then release all the locks. While the data locking pattern offers a lot of flexibility, it also exposes an important problem with multithreaded programming: deadlock.
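Before turning to deadlock, the sketch below (an illustrative example in the spirit of Figure 4, not code from the text) applies the code locking pattern to the earlier counter example: a single update routine guards the read-modify-write sequence with a pthreads mutex, so at most one thread executes the critical section at a time.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared data structure and the lock protecting the update code. */
static long shared_count = 0;
static pthread_mutex_t code_lock = PTHREAD_MUTEX_INITIALIZER;

/* All updates go through this routine (code locking / monitor pattern). */
static void update(long delta)
{
    pthread_mutex_lock(&code_lock);     /* enter critical section */
    long tmp = shared_count;            /* read                    */
    tmp += delta;                       /* modify                  */
    shared_count = tmp;                 /* write                   */
    pthread_mutex_unlock(&code_lock);   /* leave critical section  */
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        update(1);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_count = %ld\n", shared_count);  /* now reliably 2000000 */
    return 0;
}
```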
Deadlock can occur with many of the parallel programming patterns, but it is more apparent with the data locking pattern. With data locking, deadlock can occur when multiple locks must be acquired, but care is not taken with respect to the order of the lock acquisitions. This is illustrated in Figure 6. Here, two threads both attempt to acquire locks to struct1 and struct2. The first thread locks struct1 before struct2, and the second thread attempts to acquire them in the opposite order. Now, in practice, this may happen to work most of the time, but if the timing is such that both threads reach the locking code at the same time and the first thread locks struct1 while the second thread simultaneously locks struct2, then both threads will be blocked in their attempt to get their second lock. Neither thread can proceed; there is deadlock.

![Figure 5. Data locking parallel programming pattern.](image)

![Figure 6. Threads that may potentially deadlock.](image)

Deadlock can be avoided in situations such as the one just described by following a convention of lock ordering amongst the various structures. In the example just given, a convention may be that all code must lock struct1 before locking struct2. For simple cases, this approach is easy to follow. However, with more complex structures and sharing patterns (and programmer oversights), deadlock can, and does, occur in practice. There is no easy, universal solution to the deadlock problem; deadlock has become accepted as one of those issues that complicates the design of reliable multi-threaded code.

Another important issue when using mutual exclusion is the granularity at which locks are applied. By *granularity*, we mean the size of the data structure or code sequence that is protected with a single lock. Granularity affects overhead for locking, thereby affecting overall software performance. If locks are applied over larger regions (with fewer locks), then parallel programming tends to be simpler; however, threads may then be blocked by a lock when it isn’t really necessary. As an extreme example, many early multi-threaded versions of Unix used what is called the “one big lock model”, where a single lock surrounds all the code in the operating system kernel. This means that only one thread can be doing active work in the kernel at a time. Employing the one big lock model clearly has a negative effect on performance, as there are many cases where multiple threads in the kernel won’t interact with each other at all. On the other hand, it is a very simple method for getting a multi-threaded operating system up and running.

As another example, consider a large array of data structures, for example banking records. In transaction processing software, one could implement data locking where there is one lock for the entire array of bank records. This would be simple, and might avoid some deadlock situations (because only a single lock must be acquired), but it would be unnecessarily inefficient. Most of the time, the bank records being updated are for different customers, and there would be no interference among threads performing simultaneous updates. At the other extreme, one could provide every record with its own lock, and a thread would almost never have to block. There are some disadvantages to applying locks at a very fine granularity, however. In some cases, it may mean that many locks must be acquired, and there is time overhead in acquiring the individual locks. Furthermore, there is memory overhead for maintaining lock variables.
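To make the banking example concrete, a sketch of the fine-grained extreme, with one lock per record, might look like the following (the record layout and routine names are illustrative only):

```c
#include <pthread.h>

#define NRECORDS 100000

/* Fine-grained data locking: one lock per bank record, so updates to
 * different customers' records can proceed in parallel.               */
struct record {
    pthread_mutex_t lock;
    long            balance;
};

static struct record accounts[NRECORDS];

void init_accounts(void)
{
    for (int i = 0; i < NRECORDS; i++)
        pthread_mutex_init(&accounts[i].lock, NULL);
}

void deposit(int id, long amount)
{
    pthread_mutex_lock(&accounts[id].lock);      /* lock only this record */
    accounts[id].balance += amount;
    pthread_mutex_unlock(&accounts[id].lock);
}
```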
In general, the software developer should strike a balance between the extremes and strive for a granularity that is small enough that threads don’t unnecessarily block for any significant fraction of time, but large enough that time and space aren’t wasted.

With respect to some commonly used APIs, Java has built-in support for mutual exclusion [10]. Every Java object has a lock, and methods can be declared to be synchronized. If a synchronized method for a given object is called, then that object cannot be accessed by any other synchronized method until the first method is finished. This is essentially a form of data locking. Critical sections can be implemented in Java by defining a synchronized code block that incorporates an object whose sole purpose is to provide a lock. In the pthreads API, a lock variable of type pthread_mutex_t is first declared. Then the routine pthread_mutex_lock sets the lock variable and pthread_mutex_unlock unlocks it.

**POINT TO POINT SYNCHRONIZATION**

In *point-to-point synchronization* one thread signals another thread that some condition holds. The condition being signaled is often the availability of data. For example, in a producer/consumer programming pattern, a producer thread may place data into a shared buffer in memory, and then signal the (waiting) consumer thread that the data is ready to be consumed. This is illustrated in Figure 7. In this simple example, a data value is passed from a producer thread to a consumer thread through the variable *buffer*. The variable *full* serves as a synchronization variable and is used by the producer to signal the consumer that the buffer has been filled with a new data value. After reading the value, the consumer thread clears the full variable, thereby signaling the producer that it can pass another data value.

```c
<Producer>
  while (full == 1) {} ;   /* wait */
  buffer = value;
  full = 1;

<Consumer>
  while (full == 0) {} ;   /* wait */
  b = buffer;
  full = 0;
```

Figure 7. Example of producer/consumer pattern, implemented with ordinary variables.

In the example just given, an ordinary variable and memory operations are used for implementing the synchronization. However, most APIs provide special routines for this purpose. By using API routines, the synchronization variables and operations are implemented in lower abstraction layers, possibly all the way down to the ISA. Also, the API routine may be implemented in a fashion that is more efficient than when ordinary variables and operations are used. For example, the API, under runtime software control, may temporarily suspend a waiting thread and allow another (non-blocked) thread to run. This is described in more detail in the next subsection. In the pthreads API, a mutex lock is associated with a condition that is to be signaled. The routine pthread_cond_wait causes a thread to suspend execution and wait for a condition to hold. The routine pthread_cond_signal, when executed by a thread, signals the condition to another thread (and if the other thread is suspended waiting for the signal, it will become active and continue execution).

**RENDEZVOUS**

A third important form of synchronization is the rendezvous, where two or more cooperating threads must all reach some point in the program before any of them can proceed. A rendezvous point of this kind is commonly implemented as a *barrier*. It is easiest to understand barriers by considering their application in the bulk synchronous programming pattern [11]. See Figure 8.
In the bulk synchronous programming pattern multiple threads operate in a sequence of phases, where, in a given phase, multiple threads execute in parallel without synchronizing or communicating data values. Then, all threads wait until they have all completed the execution phase. At that point, they communicate data values, and perhaps a single thread does some additional serial computation. Then, the threads begin the next phase, and the process is repeated.

Figure 8. Bulk Synchronous programming pattern.

There may also be a barrier immediately following the communication phase. For example, multiple threads may be performing a complex database search. Each thread operates in parallel, searching an assigned portion of the database. Then, when all the threads are finished with their part of the search, they compare results, decide on a plan of action, then go back and do additional searching. Other examples come from numerical computing where iterative solver algorithms employ parallel threads which execute one computational iteration on arrays of data, then reach a barrier, check for convergence, then, if convergence hasn’t occurred, they go back and perform another iteration.

2.3 Shared Memory Implementation

Now, we consider the way API level shared memory constructs are implemented at the ABI and ISA levels. This involves the definition of the ABI and the way an API is implemented in terms of the ABI. Keep in mind that characteristics of the API level are defined by a high level language and its libraries, while the ABI is defined by OS calls and user level instructions from the ISA.

2.3.1 Processes and Threads

Because a process has its own address space, and the OS controls memory mapping and protection, the OS must be involved in the creation of a new process. Consequently, creating a process at the API level is implemented with lower level code that performs one or more OS calls. In the case of C/Unix, the `fork()`, `exec()`, and `wait()` library calls translate directly to equivalent OS calls. Because there are often more processes than hardware processors, the OS schedules the processes by determining which ones get to run at any given time. It typically does this by maintaining a run queue for processes that are ready to execute. When a process requests I/O, or its time slice runs out, the OS chooses the next waiting process from the run queue (possibly taking process priorities into account) and starts the new process. And, because each process has its own address space, the OS must also change the address mapping (along with other process-related tables). The bottom line is that because processes have different address spaces, and the OS manages address space mapping, the OS must be involved with process management functions, including creation and scheduling.

Conceptually, one could support API level threads by associating each with an ABI level process, because most ABIs allow for the sharing of selected memory pages. This page sharing is set up by explicit OS calls. If this were done, however, then thread switches would become equivalent (at the ABI level) to process switches. Because of the necessity to call the OS and for the OS to change page mappings for every process switch, this would lead to high overhead, or what are often referred to as “heavyweight” thread switches. By employing explicit ABI level threads, the weight of threads can be reduced; and, depending on the thread implementation, the weight may be reduced significantly.
The Unix OS supports a pthread_create system call, which is the counterpart of the pthreads API routine of the same name. The Linux OS supports a clone() system call which has similar functionality; so the pthread_create pthreads routine would be implemented with a clone() on a Linux system. Creating a thread does not require a new address space, and switching threads within the same address space does not require changes in the page mapping. Consequently, the OS can manage threads in a manner similar to the management of processes, except thread switches have become lighter because only the register state and stack pointer have to be changed at the time of a switch, not the entire address space.

Because address space mapping does not have to be changed on a thread switch, however, it is not even necessary to involve the OS at all. That is, the implementation of API threads can be done at the user level via the runtime software (Figure 1). In particular, the runtime can create threads by simply adding another stack within a process’s existing address space. The runtime can maintain its own run queue, so that threads can be scheduled at the user level. For example, when a thread completes its task, it can jump to the runtime scheduler, pick up a new task from the run queue and then jump to the start location of the new task (essentially becoming a new thread). In this manner a single OS thread can be time-shared among multiple user level threads.

We have just described two types of threads: kernel threads, which the OS schedules, and user threads, which the user level runtime manages. To perform parallel processing, a program must create a certain number of kernel threads, because these are the ones that are assigned to hardware processors (program counters) by the OS. However, once the kernel threads are created, multiple user level threads can then be mapped to the kernel threads in whatever manner the runtime chooses. The relationship between kernel threads and user threads is illustrated in Figure 9. In the figure, the OS scheduler assigns kernel threads to processors. Then, additional scheduling of user level threads onto the kernel threads can be performed by user level runtime software associated with an API implementation.

At the ISA level no special support is required for processes and threads, beyond that already present for single threads. The management of threads and processes amounts to the assignment of the program counter of a processor to a location within a given thread or process. This is done by the OS. All that is required is that the OS always maintain control over which thread/process is running at any given time. As multiple processors are booted, operating system code is initially given control of each of the program counters. Then, just as with a uniprocessor, the OS always controls the software that has access to the program counter. When control is passed to user level code, the OS sets a timer which forces the control to be passed back to the OS after some maximum interval. If the user code performs a system call, is interrupted, or traps, it also relinquishes control, jumping to an ISA-specified “vector” location in the OS.

![Diagram of kernel and user threads]

Figure 9. Kernel threads are scheduled onto hardware processors by the operating system; user threads are scheduled onto the kernel threads by a user level runtime scheduler.
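The two creation paths can be seen side by side in the following sketch (a minimal C example, not tied to any particular runtime): a child process is created with fork() and reaped with wait(), while a thread within the same address space is created with pthread_create() and joined with pthread_join().

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void *thread_body(void *arg)
{
    /* Runs in the same address space as the creating thread. */
    printf("thread running in process %d\n", (int)getpid());
    return NULL;
}

int main(void)
{
    /* Process creation: the OS builds a new address space for the child. */
    pid_t pid = fork();
    if (pid == 0) {
        printf("child process %d\n", (int)getpid());
        exit(0);
    }
    wait(NULL);                          /* parent waits for the child    */

    /* Thread creation: no new address space or page mapping is needed.   */
    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    return 0;
}
```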
### 2.3.2 Communication

A shared memory programming model is typically implemented on a hardware platform that supports shared memory directly in the hardware; that is, ordinary machine level load and store instructions can be used for reading and writing shared variables (or fields in objects). There have been exceptions to this, where a shared memory programming model has been implemented on a distributed memory hardware platform, but these exceptions have primarily been research projects. Generally speaking, implementing the shared memory programming model on distributed memory hardware poses a number of difficult performance problems and such implementations have not entered the mainstream.

In the conventional shared memory implementation, data communication at the ABI and ISA levels is just a matter of mapping the API high level language reads and writes (or gets and puts) onto ordinary machine level load and store instructions. That is, no special instructions are required in the ISA for merely communicating data among threads. However, because shared memory can be accessed from multiple, concurrently executing threads, additional semantics regarding the ordering of loads and stores from different threads must be specified as part of the ISA; and this is not as easy as it might first appear. In fact, memory ordering semantics are important enough that they will be discussed in a subsection of their own (Section 2.3.4), after we have discussed synchronization in the next subsection.

### 2.3.3 Synchronization

It is this aspect of the shared memory model that has the biggest impact on the ISA. Implementing synchronization ordinarily does not involve OS intervention (and costly system calls). Thread synchronization is typically implemented at the user level, using a small number of machine level instructions, sometimes with API runtime support. However, unlike simple data communication which uses ordinary load and store instructions, special instructions are often added to the ISA for implementing synchronization primitives. The reason can best be understood by considering an example.

Figure 10 illustrates a naive ISA level implementation of a critical section where ordinary loads and stores are used for locking and unlocking. At first glance, this example may appear to work properly: only one thread at a time can be in the critical section. In fact, this is not a reliable implementation of a critical section. The reason is that it may happen that the timing is such that the two threads perform the load of the lock (at LAB1 and LAB2) at the same time. They both will read a 0, will both set the lock to one, and both will enter and execute the code in the critical section at the same time. There are other, more clever implementations of a critical section that use ordinary loads and stores; however, they also involve longer code sequences, are more difficult to understand, and can lead to bugs if not properly implemented. Consequently, most ISAs include special instructions to simplify mutual exclusion and other synchronization operations. These instructions allow the indivisible, or *atomic*, reading and writing of a memory variable by the same instruction. An example will illustrate the utility of such instructions.

```
<thread 1>
  .
LAB1: Load R1, Lock
      Branch LAB1 if R1==1
      Enter R1, 1
      Store Lock, R1
      <critical section>
      .
      Enter R1, 0
      Store Lock, R1

<thread 2>
  .
LAB2: Load R1, Lock
      Branch LAB2 if R1==1
      Enter R1, 1
      Store Lock, R1
      <critical section>
      .
      Enter R1, 0
      Store Lock, R1
```

Figure 10. A naive implementation of a critical section using ordinary loads and stores for mutual exclusion.
In Figure 11, mutual exclusion is implemented with a Test&Set instruction. This one instruction both reads the value of variable `lock` and writes a one to `lock`. Between the read and write no other instruction has access to the `lock` memory location. That is, the read and the write are indivisible, or atomic. Then, whichever thread gets to the locking instruction first will set the lock, and the other thread will see a lock that is already set. Also, note that the instruction we use looks more like a “Read&Set” than a “Test&Set”. For the Test&Set used in some ISAs, the read operation also tests the value for zero and sets a condition code register accordingly. Then, the following conditional branch checks the condition codes. Because the pseudo-ISA used in our examples doesn’t use condition codes, the lock variable is only read, and is tested by the subsequent conditional branch instruction. Finally, in the example, the lock is cleared with a Reset instruction, although it could have been done with an ordinary store as in Figure 10. The reason for using a special Reset is that some hardware implementations may be better optimized if it is known that a synchronization instruction is being executed rather than an ordinary store (see Section 2.3.4).

```
<thread 1>                        <thread 2>
  .                                 .
LAB1: Test&Set R1, Lock           LAB2: Test&Set R1, Lock
      Branch LAB1 if R1==1              Branch LAB2 if R1==1
      <critical section>                <critical section>
      .                                 .
      Reset Lock                        Reset Lock
```

Figure 11. Implementation of a critical section with an atomic Test&Set instruction.

In practice, there are a number of instructions that can be used to achieve the same effect as the Test&Set; the one thing they have in common is some form of atomic read/write to a memory location. And, in general, given one such instruction, the functional equivalent of the others can be easily encoded. Two such examples are the Fetch&Add and Swap instructions. The semantics of these instructions, along with the Test&Set, are illustrated in Figure 12. The Fetch&Add loads a value from memory, adds a value to it and stores the result back. The Fetch&Increment instruction (not shown in the figure) is a special case of Fetch&Add where the value added is always a 1. A Swap instruction simply swaps a value in a register with a value in memory. The Test&Set we have been using is equivalent to a Swap where the initial register value is always one. Figure 12b illustrates the implementation of a Fetch&Add with a Test&Set; the Test&Set is used as a lock for a critical section that performs the fetch and add using conventional instructions.

```
Test&Set(reg, lock):        reg ← mem(lock); mem(lock) ← 1;
Fetch&Add(reg, value, sum): reg ← mem(sum);  mem(sum)  ← mem(sum) + value;
Swap(reg, opnd):            temp ← mem(opnd); mem(opnd) ← reg; reg ← temp;
(a)

try: Test&Set(reg, lock);
     if reg == 1 go to try;
     reg ← mem(sum);
     mem(sum) ← reg + value;
     Reset(lock);
(b)
```

Figure 12. a) The semantics of three synchronization instructions; memory reads and writes are indivisible. b) The Fetch&Add operation implemented with a Test&Set.

Some RISC instruction sets have divided the read and write components of a test and set into separate, specialized load and store instructions, but the semantics of the special instructions are defined such that they can be combined to achieve the same effect as an atomic read/write. The MIPS instruction set includes a pair of load/store instructions of this type.
In the MIPS instruction set, the load_linked instruction performs a conventional memory load, but, in addition, it records the memory address in a special register and sets a flag indicating, in normal usage, that there is a pending store_conditional instruction to the same address. Then, any operation performed by any of the processors that might cause atomicity to be violated clears the flag. Exactly which operations clear the flag depends on the implementation. But a common situation occurs if another processor performs a store to the memory address held in the special register. The flag is also cleared when the process performing the load_linked instruction is context switched, because the special address register is not saved on a context switch. A store_conditional instruction fails (does not write to memory) if the flag is cleared at the time it is executed. As defined in the MIPS instruction set, the store_conditional instruction also indicates whether it succeeded by writing a 1 (success) or 0 (failure) into its source register. This allows the success of the store instruction to be tested via a conditional branch instruction.

Figure 13 illustrates the use of the load_linked/store_conditional pair; the code sequence in the figure implements an atomic swap operation between the value in register r4 and the memory location addressed by register r1. The first four instructions form a loop that tries the load_linked and store_conditional until the store succeeds. Then, after it succeeds, the value loaded from memory is copied into register r4. In most real cases, the store_conditional will succeed on the first try.

The separation of the load and store components is in line with the RISC philosophy of separating complex instructions into simple ones. In a bus-based multiprocessor, this means there is no need for a special bus operation that performs an indivisible read/write pair to a memory location; normal, separate bus cycles can be used. However, as with some other early RISC features, such as delayed branches, the separation of the load and store operations probably seemed like a good idea at the time. In retrospect, this feature is closely tied to a specific implementation (in this case, a shared bus). In a non-bus system, it would probably be simpler to implement an indivisible read/write as a single operation.

```
Swap(r4, mem(r1)):
try:  mov  r3,r4       ; move exchange value
      ll   r2,0(r1)    ; load linked
      sc   r3,0(r1)    ; store conditional
      beqz r3,try      ; if store fails, try again
      mov  r4,r2       ; load value to r4
```

Figure 13. Implementing an atomic Swap with the MIPS load_linked and store_conditional instructions.

**LOCK EFFICIENCY**

An important issue when implementing locks is efficiency, and there are a number of aspects to efficiency. One of them was addressed earlier in Section 2.2.3 with respect to lock granularity. Recall that granularity is the size of the code or data region that is protected by a single lock. Finer granularity means more locks, but it reduces the probability of needless thread blocking. Lock granularity is an efficiency feature that is visible to the HLL programmer, and the HLL programmer is therefore responsible for determining the granularity of locks, and, indirectly, the number of times that threads are blocked. There are other lock efficiency issues that are determined by the implementation of the API. Consider the critical section as implemented in Figure 11. This is an example of a spin lock, so called because of the small loop that repeatedly tests (“spins” on) the lock until it is found to be clear.
Such a spin lock, while effective, can lead to a significant waste of resources. First, it repeatedly hammers on the memory system, consuming bandwidth. If a number of threads are simultaneously spinning on the same lock, the aggregate memory bandwidth demand can be huge. In some cases, adjusting lock granularity can at least partially alleviate this problem. A second problem is that the processor doing the spinning is not performing what one would consider useful computation; it is simply performing the same operation over and over.

A partial solution to the first problem is based on the observation that in multiprocessor systems that employ cache memories, a repeated series of read operations to the same address only accesses the local cache, unless some other processor writes to the address. On the other hand, a write operation involves notifying, one way or another, all the other caches in the system of the modification to memory. Hence, the Test&Set as we have defined it consumes more memory bandwidth resources than a simple read (or Test). The reasons for this will be more apparent when we discuss cache coherence techniques in Chapter 4; however, the summary just given will suffice for now. In any case, the above observation regarding the relative bandwidth requirements of Test&Set and ordinary loads leads to a more efficient spin lock implementation referred to as Test&Test&Set (see Figure 14). In Test&Test&Set, there is a pair of loops. First, there is a loop that spins on the lock performing only a read. When it finds that the lock is clear, it attempts to acquire it via a Test&Set. If this fails (because some other thread acquired it first), then it goes back and spins some more.

```
Test&Test&Set(reg, lock)
test_it: load     reg, mem(lock)
         branch   test_it if reg==1
lockit:  Test&Set reg, mem(lock)
         branch   test_it if reg==1
```

Figure 14. Implementation of Test&Test&Set spin lock.

Other solutions for reducing spin overhead have been proposed. For example, one technique is to “back off” and wait for a period of time after lock acquisition fails before trying again. The more failures, the longer the back off time. This technique is similar to the method used in Ethernet for network access.

A solution to both types of wasted resources (memory bandwidth and processor cycles) can be implemented in the runtime. The basic idea is that when a thread fails to acquire a lock, it is placed on a runtime-managed queue to wait for the lock to become available, and there is a switch to a different (unblocked) thread. When a thread finishes with the lock, it branches to the runtime where any threads in the lock’s queue can be re-awakened, given the lock, and placed into the runtime’s run queue. This is illustrated in Figure 15. This approach also allows the runtime to enforce some higher level discipline/priorities regarding the use of a critical section. The response time of a queued lock is somewhat slower than a spin lock, but it may be more efficient overall.

Figure 15. A queueing lock. If a thread finds a lock busy, it is enqueued to wait until the lock is released. In the meantime, another user-level thread can run.
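For reference, the Test&Test&Set loop of Figure 14 can also be written portably at the high level language layer; the following is a sketch assuming the C11 <stdatomic.h> interface, where atomic_exchange plays the role of the Test&Set instruction.

```c
#include <stdatomic.h>

/* A C11 rendering of the Test&Test&Set spin lock of Figure 14.
 * The lock word is 0 when free and 1 when held.                */
typedef atomic_int spinlock_t;

static void spin_lock(spinlock_t *lock)
{
    for (;;) {
        /* "Test": spin with plain loads, which normally hit the local cache. */
        while (atomic_load_explicit(lock, memory_order_relaxed) == 1)
            ;
        /* "Test&Set": atomically exchange a 1 into the lock word; the old
         * value tells us whether we actually acquired the lock.            */
        if (atomic_exchange_explicit(lock, 1, memory_order_acquire) == 0)
            return;
        /* Another thread got there first; go back to spinning on reads.    */
    }
}

static void spin_unlock(spinlock_t *lock)
{
    atomic_store_explicit(lock, 0, memory_order_release);
}
```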
**BARRIERS**

In pthreads, barrier synchronization is supported directly: each thread that reaches the barrier calls the pthread_barrier_wait routine, suspends, and waits until the specified number of threads have called it, at which point they are all allowed to proceed. (Joining all worker threads with pthread_join at the end of a computation achieves a similar effect for the final synchronization point.) Barrier synchronization can also be implemented with a counter variable and a critical section that each thread uses to update and check the count. This is illustrated in Figure 16. In this example, a barrier is a structure that consists of a lock, a flag, and a counter. The count is initialized to zero. Each thread calls the barrier code with arguments that indicate the name of the barrier and the number of threads that are taking part in the barrier synchronization (in this example there are n threads). Each thread first sets the barrier’s lock and increments the counter; the very first thread also clears the flag (which may still be set from the previous use of the barrier). Then, the thread checks to see if the count is equal to n, the total number of threads. If not (the else case), the thread waits for the flag to be set. If so, it clears the count (to initialize it for the next usage of the barrier) and sets the flag to one, notifying all the other waiting threads that the barrier has now been satisfied, and they can go ahead and continue computing.

```c
Barrier (bar_name, n) {
    Lock (bar_name.lock);
    if (bar_name.counter == 0)
        bar_name.flag = 0;
    mycount = ++bar_name.counter;       /* count this thread's arrival */
    Unlock (bar_name.lock);
    if (mycount == n) {                 /* last thread to arrive       */
        bar_name.counter = 0;
        bar_name.flag = 1;
    }
    else
        while (bar_name.flag == 0) {};  /* busy wait                   */
}
```

Figure 16. Example of barrier synchronization code.

If there are a large number of threads waiting to synchronize at the barrier and they arrive at about the same time, the critical section code can be a bottleneck. To relieve this bottleneck, a barrier can be implemented as a hierarchy of smaller barriers. Each of the barriers has a counter and lock, but only the barrier at the very top of the hierarchy has a flag. This is illustrated in Figure 17. Here, each thread is associated with one of the lower level barriers. The thread first checks the counter at the lowest level; if it is the last thread to reach the lower level barrier, then it moves up one level and checks the counter at that level. When it reaches a level where it is not the last, then it begins waiting for the flag (which is set at the highest level). The very last thread to reach the barrier goes all the way to the top barrier, and sets the flag, thereby releasing all the threads. By providing multiple barriers in a hierarchy, threads can “check in” at low level barriers in parallel, and the bottleneck is relieved.

Figure 17. Barrier implemented as a hierarchy. Threads in bold have already reached the barrier. The individual barriers in this example service only two threads or lower level barriers; in a real implementation, the fan-in would probably be higher.

In some research and special purpose parallel computer systems, ISA level support for barriers has been provided. For example, the Cray T3D had a special hardware barrier network for implementing barriers. In most mainstream applications, however, barriers do not occur frequently enough, and there aren’t enough threads, to justify special purpose hardware for barriers.
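A minimal sketch of the pthreads barrier interface mentioned above (assuming POSIX barrier support is available) follows; each thread computes a phase, waits at the barrier, and then continues, which is exactly the bulk synchronous structure of Figure 8.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_barrier_t phase_barrier;

static void *worker(void *arg)
{
    long id = (long)arg;
    /* ... phase 1 computation ... */
    pthread_barrier_wait(&phase_barrier);     /* wait for all threads */
    /* ... phase 2 computation ... */
    printf("thread %ld past the barrier\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    pthread_barrier_init(&phase_barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&phase_barrier);
    return 0;
}
```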
2.3.4 Memory Consistency and Coherence

Now, we return to the topic of shared memory communication and deal with what is probably the most difficult issue when it comes to a shared memory implementation: correct sequencing of load and store instructions performed by multiple threads. In a uniprocessor program, there is a single thread, and the program counter defines the sequence with which instructions are to be executed. Naturally, the programmer expects instructions to modify registers and memory in exactly the order specified by the sequencing of the program counter. This is what is referred to as program order. Program order is part of the ISA specification (although it is so basic to the understanding of how computers work that it is often not explicitly stated). And, although program order is part of the architected specification, an actual hardware implementation does not literally have to execute instructions in the same order; many superscalar processors do not, as a matter of fact (see Chapter 3). What is important is that the software behaves exactly as (gives the same answers as) it would if the hardware did execute instructions in program order.

Sequential Consistency

With the shared memory programming model, the multiple threads of control modify the same shared memory, and each one of them operates according to its own program order. Because memory is shared, one thread can read a value from memory that another thread has written. This leads to the need for an ISA specification regarding the observed ordering of memory events among the concurrently executing threads. At first, this seems as basic as program order is with single threads. But let’s consider an example. Figure 18a is a section of code that is part of a simple producer/consumer programming pattern. Initially, variables A and Flag are 0. Thread 0 produces a value of 9, places it into variable A and sets Flag to 1, indicating that the value in A is ready to be consumed. Thread 1 repeatedly tests the flag for 0, and, eventually, when Flag becomes 1, Thread 1 proceeds by reading the value of A, and compares it with 0. It seems obvious that A should not be 0 when Thread 1 tests it and the test should fail. That is, the multi-threaded program seems to suggest that A is set to 9 before Flag is set to 1, and when Thread 1 tests and finds Flag to be 1, the value of A must be a 9 at that point.

```
Thread 0:
    A = 0; Flag = 0;
    ...
    A = 9;
    Flag = 1;

Thread 1:
    ...
    while (Flag == 0) { };
L2: if (A == 0) ...
```
(a)

![Diagram: implementation on a system with buffering in an interconnection network]
(b)

Figure 18. a) Multi-threaded Producer/Consumer code. b) Implementation on a system with buffering in an interconnection network. The assignment of 9 to A gets delayed and is held in a buffer internal to the network. Meanwhile, the other memory accesses proceed smoothly. Consequently, Thread 1 reads an unexpected result for variable A.

However, in real hardware implementations, unless care is taken, the “impossible” condition of A==0 may be true when Thread 1 reaches L2. For example, consider an implementation as shown in Figure 18b. This hardware implementation employs an interleaved memory with multiple memory banks and an interconnection network containing buffering at intermediate stages, in case there is contention for links in the network.
Because of memory addressing patterns elsewhere in the system (from other active processors in the system), some paths from certain processors to certain memory banks may be more or less congested than others. This can affect the delay getting to and from memory banks. In this particular example, variables A and Flag are in different memory banks, and the path from Processor 0 (which is running Thread 0) to A contains some congested links, so the update to A gets hung up in a buffer. Meanwhile the path from Processor 0 to Flag is not congested (and fast). The paths from Processor 1 (running Thread 1) to both A and Flag are also fast. Under these conditions, when Thread 1 reads the updated Flag value, it immediately reads the value of A, and gets the old value of 0, not the new value of 9.

This suggests that implementing program order in individual processors and then combining them arbitrarily into a multiprocessor is not sufficient to yield the expected, or even reliable, results when a multi-threaded application is executed. Furthermore, the example just given contains only processors and memories; if cache hierarchies are added to the mix, the problem becomes much more difficult because the caches are, in a sense, very large buffers that can hold data values for a very long time before they reach memory.

The first step in dealing with memory ordering among multiple threads is to define the semantics of memory ordering in a simple way. To provide one such specification, consider a straightforward memory ordering model (see Figure 19). The figure shows multiple processors, each performing loads and stores to shared memory in program order. The memory system selects and acts upon the memory requests in some order (the exact order is not important, any order can be used). The key point is that memory receives and appears to act upon the memory requests one at a time. Note that it is not necessary that only one access literally happens at a time (e.g., during a given clock cycle); in fact some of the accesses can take place at the same time, as long as the data values loaded and stored are the same as if the selector only chooses one at a time to act upon.

![Diagram of memory ordering model](image)

**Figure 19.** Model for sequential consistency. Processors perform loads and stores in program order, and the memory system services requests in some (arbitrary) order.

The need for a memory ordering model was recognized at least as early as 1979 by Leslie Lamport [12], who identified the model shown in Figure 19 as “sequential consistency” and defined it as: “A system is sequentially consistent if the result of any execution is the same as if the operations of all processors were executed in some sequential order and the operations of each individual processor appear in this sequence in the order specified by its program”

In a sequentially consistent system, the program ordered loads and stores from each processor can be interleaved in a consistent manner. Return to the producer/consumer example given in Figure 18. If we look at the dynamic sequence of loads and stores, we will see something like that shown in Figure 20a. Time goes from top to bottom. Note that the while {} is essentially a spin loop on Flag, waiting for it to become a 1, so there will be a series of loads of Flag, until finally one of the loads returns a 1. In Figure 20, we assume that there are two such loads of Flag before it becomes a 1 on the third load.
Then, considering the sequences of loads and stores, we see that if they behave in a sequentially consistent manner, then they can be interleaved in a consistent sequence of loads and stores; this is shown in Figure 20a. On the other hand, if we consider the sequence of events that leads to the results shown in Figure 18, then it would be impossible to interleave the loads and stores in a sequentially consistent way. This is shown in Figure 20b. Arcs have been added to show the necessary event sequence as follows. Thread 0’s store of A ← 9 must precede the store of Flag ← 1 because Thread 0 must satisfy program order. Similarly, Thread 1’s load of Flag=1 must precede the load of A=0. Moreover, Thread 0’s store of Flag ← 1 must precede Thread 1’s load of Flag=1, and Thread 1’s load of A=0 must precede Thread 0’s store of A ← 9. The “precedes” arcs form a loop, which implies it is impossible to interleave the two sequences in a sequentially consistent way.

```
Thread 0:             Thread 1:
  Store A ← 0           Load Flag = 0
  Store Flag ← 0        Load Flag = 0
  ......                Load Flag = 1
  Store A ← 9           Load A = 9
  Store Flag ← 1
```
(a)

```
Thread 0:             Thread 1:
  Store A ← 0           Load Flag = 0
  Store Flag ← 0        Load Flag = 0
  ......                Load Flag = 1
  Store A ← 9           Load A = 0
  Store Flag ← 1
```
(b)

**Figure 20.** a) Load and store instructions from Threads 0 and 1 can be interleaved in a sequentially consistent manner. b) Example where it is impossible to interleave loads and stores in a sequentially consistent manner.

If the interconnection network of Figure 18b were implemented in a way that enforces sequential consistency, then the “expected” result of the example in Figure 20a would occur, and the “unexpected” result of Figure 20b could never occur. Forcing sequential consistency in a multiprocessor implementation may lead to reduced performance, however, because it may place additional constraints on the system. For example, one could implement sequential consistency in the network of Figure 18 by having the memory system send an acknowledge signal back to the storing processor at the time a store actually takes place. Then, if a processor is forced to wait for each store to be acknowledged before performing any other memory operations (loads or stores), sequential consistency follows. However, this implementation would cause the system to slow considerably; every store from a processor would require a “round trip” delay to memory, and overlapping any other memory operation with a store would be inhibited.

In practice, other, cleverer sequential consistency implementations have been proposed and used. We will discuss them in greater detail in Chapter 4. In general, they allow a processor to overlap memory operations, but have additional hardware to check whether sequential consistency may have been violated. If so, the processor is able to “roll back” its state, and resume execution at an earlier point so that sequential consistency is assured. The complexity of implementing SC in a high performance manner has also led to relaxed consistency models, with simpler and/or higher performance implementations, where all of the constraints of sequential consistency are not required.
**RELAXED CONSISTENCY MODELS**

A relaxed consistency model is a memory ordering model that is not as strict as sequential consistency; this makes it easier to implement and/or better performing. A wide variety of relaxed consistency models have been proposed and used, some of which contain subtleties and complexities that make them difficult to understand. Rather than consider the full range of consistency models here, let’s consider one of the basic relaxed consistency models that is both useful and easy to understand. Further discussion will wait until Chapter 4 when we cover hardware implementation of memory systems. We defer discussion because many of the relaxed consistency models are intertwined with implementation aspects of the memory hardware, and Chapter 4 discusses memory implementations.

In virtually all the relaxed models, synchronization operations such as the Test&Set are defined to be points where consistency must be maintained. This makes sense, because synchronization operations are explicitly included in an ISA to facilitate communication through memory. One of the most relaxed forms of memory consistency, therefore, defines consistency to be maintained only at synchronization instructions; that is, sequential consistency does not have to be maintained for ordinary loads and stores. This form of consistency is called *Weak Ordering*. (Sequential Consistency is sometimes referred to as “Strong Ordering”, but owing to the way that Strong Ordering was originally defined [13], the two, strictly speaking, are not equivalent [14]).

Weak Ordering is defined in the following manner. First, in any program execution, it must appear that, before any synchronization instruction executes, all preceding loads and stores have completed. Second, it must appear that the synchronization instruction completes before any subsequent load or store instructions are executed. The term “appear” means that the values observed in registers and memory must be identical to those that would be present if the stated condition holds. In a real implementation, there are often clever things a designer can do so that these conditions do not literally hold at all times; they only appear to, and that is sufficient.

Now we apply Weak Ordering to the example in Figure 18. In this example, Flag is a synchronization variable, and we will use a new opcode Set to set such a synchronization variable to a one. We will use Test as a synchronization instruction which loads a flag (in an ISA that uses condition codes, this load instruction could also set a condition code to be tested with a conditional branch; in our ISA, we will do the test as part of the branch). The code sequence is illustrated in Figure 21a. Here, because of the ordering conditions for synchronization instructions, the Set by Thread 0 must wait for the store to A to complete before it can execute. The Test by Thread 1 must also complete before following loads execute. If the constraints on the Set and Test instructions are enforced, then the expected result (A=9) will be observed.

To implement correct operation with relaxed memory ordering, some ISAs provide explicit memory barrier instructions which functionally act as a no-op, but force ordering of preceding and/or following memory operations. For example, the Alpha ISA has a memory barrier (MB) instruction that must wait for all preceding loads and stores to complete before it executes (as a no-op). The Intel IA32 ISA has fence instructions that perform a similar function.
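At the high level language layer, the same kind of ordering control is exposed by C11/C++11 atomics. The following is a sketch (assuming the C11 <stdatomic.h> interface) of the Figure 18 producer/consumer using a release store and an acquire load on Flag, so that the store to A is guaranteed to be visible by the time Flag is observed to be 1.

```c
#include <stdatomic.h>

static int        A;       /* ordinary shared data, initially 0      */
static atomic_int Flag;    /* synchronization variable, initially 0  */

/* Producer (Thread 0) */
void produce(void)
{
    A = 9;                                                   /* write the data */
    atomic_store_explicit(&Flag, 1, memory_order_release);  /* "Set Flag"     */
}

/* Consumer (Thread 1) */
int consume(void)
{
    while (atomic_load_explicit(&Flag, memory_order_acquire) == 0)
        ;                                                    /* "Test Flag"    */
    return A;                /* guaranteed to observe A == 9 */
}
```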
In the producer/consumer example, then, a regular store instruction can be used for setting Flag, as long as it is immediately preceded by an MB instruction (see Figure 21b). These barrier instructions can be placed as appropriate (by the compiler or library writer) to enforce ordering when needed. In practice, however, these barriers tend to be applied very liberally to avoid any possibility of bugs, and in some cases may give performance equivalent to brute-force implementations of sequential consistency.

Now, the reason for providing a separate Reset instruction for clearing a lock, instead of an ordinary Store instruction (see Figure 11), should be apparent. When a relaxed consistency model, such as Weak Ordering, is implemented, the special opcode tells the hardware that it must impose memory ordering constraints at that point. An ordinary store instruction does not require this ordering enforcement.

As mentioned above, there are a number of consistency models that fall somewhere between sequential consistency and Weak Ordering. We will not discuss the details here; the range of ordering models is discussed in Chapter 4, along with their implementations.

Figure 21. Synchronization with Weak Ordering. Consistency is maintained at synchronization instructions.

**MEMORY COHERENCE**

Historically, the terms “consistency” and “coherence” (or “coherency”) sometimes have been used interchangeably, but in recent years, they have come to have more specific meanings. In particular, coherence describes the memory ordering behavior when accessing a single variable and consistency describes the ordering behavior when multiple variables are accessed (as just discussed). Consider again the producer/consumer example of Figure 18. We consider the ordering of loads and stores separately for the variables A and Flag as shown in Figure 22. If we consider variable A alone (Figure 22a) we see that there is a consistent ordering. Similarly, there is a consistent ordering for variable Flag (Figure 22b). Because there is a consistent ordering for both variables when considered separately, we say that the memory system is coherent. On the other hand, as shown earlier, the overall ordering is not sequentially consistent. Coherency is defined by considering the ordering of loads and stores to individual variables; consistency is defined by considering the ordering among loads and stores to all the variables.

```
Thread 0:             Thread 1:
  Store A ← 0
  Store A ← 9
                        Load A = 9
```
(a)

```
Thread 0:             Thread 1:
  Store Flag ← 0
                        Load Flag = 0
                        Load Flag = 0
  Store Flag ← 1
                        Load Flag = 1
```
(b)

Figure 22. An illustration that the results of the example in Figure 20 provide memory coherence for both a) variable A and b) variable Flag. Consequently, memory is coherent for this example, although it is not sequentially consistent.

In a multiprocessor system with a cache hierarchy, changes to variables do not get propagated to all the cached copies instantaneously; changes must propagate through the system, and different cached copies and main memory may get updated at different times. This non-simultaneous update of copies can lead to memory coherence problems if it is not properly handled.
Because caches are often a dominant part of the memory coherence problem, this is often characterized as a “cache coherence” problem. However, in general, it is the memory architecture that must be coherent, not just the caches. Virtually all multiprocessor systems in use today implement cache coherence in hardware. In some respects, memory coherence seems more fundamental than memory consistency. With coherence, loads and stores can be tracked with respect to individual variables, while for consistency, loads and stores must essentially be tracked in all combinations. Straightforward, high performance solutions to the memory coherence problem are well-known; the same cannot be said for sequential consistency implementations. Implementations of memory (cache) coherence will be discussed in some detail in Chapter 4. In this chapter, where our interest is in the ISA, not hardware implementations, it is safe to say that all the common ISAs embody memory coherence in their definitions.

### 2.3.5 Transactional Memory

With conventional shared memory, API-visible locks are used as a primitive element for many types of synchronization, and the locks typically map directly to hardware mutex variables and atomic read-modify-write instructions. The high level language programmer is responsible for dealing directly with low level primitives. It would be better to provide API-level features with semantics that avoid details of an implementation in favor of conceptually clear sharing patterns. This is the goal of transactional memory, which casts operations on shared data as higher level transactions. Each transaction is a group of memory accesses and modifications which fit together naturally. Transactional memory effectively combines data communication and synchronization. Transactional semantics have been used in database systems for some time, and have only recently made their way into the shared memory programming model [18]. From the perspective of architecture and hardware implementations, transactional memory is still in the research domain, but it is an approach that is receiving considerable interest and deserves some discussion here.

With transactional memory, sequences of shared memory operations are bundled into “transactions”. The key property is that all the operations in one thread’s transaction appear to occur atomically to all the other threads. An example, derived from the example of Figure 6, is shown in Figure 23. Two data structures are accessed, and both are potentially modified. In Figure 6 there are two separate programmer-defined locks, one for each structure, which must both be acquired before the structures can be operated upon. However, at a higher level, the entire sequence that accesses both structures is conceptually a single operation. In Figure 23 the operations on the two structures are bundled together as a single transaction that will be performed atomically with respect to the other threads. The effect is the same as acquiring, and later releasing, the two separate locks, but the transaction expresses the overall intent in a more transparent way. A transaction is at a higher, more abstract level and describes what should be done, rather than how it should be done, as is the case with explicit locks. Of course, the underlying implementation, as determined by the language implementer, could use locks, but it does not have to.
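One concrete, if still experimental, realization of this idea is GCC’s transactional memory language extension; the following sketch assumes a compiler with -fgnu-tm support and uses hypothetical structure names.

```c
/* Compile with: gcc -fgnu-tm ...  (GCC's experimental transactional memory
 * extension; struct1, struct2, and transfer() are illustrative names only). */

struct account { long balance; };

struct account struct1, struct2;

void transfer(long amount)
{
    /* The read/modify/write of both structures appears atomic to all other
     * threads, without naming any locks explicitly.                         */
    __transaction_atomic {
        struct1.balance -= amount;
        struct2.balance += amount;
    }
}
```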
A key problem is that current instruction sets and hardware do not provide primitives that lead to high performance implementations of transactional memory. Hence, language designers and computer architects are working together to define both the high level language semantics and the hardware primitives that will support transactional memory in a conceptually clean and efficient manner.

```
<Thread 0>                    <Thread 1>
Begin_Transaction             Begin_Transaction
    <access struct1>              <access struct1>
    <access struct2>              <access struct2>
End_Transaction               End_Transaction
```

**Figure 23. Example of a transaction pattern for shared memory programming.**

Transactions are not only more natural at a high level, but they may also provide some relief from the tight constraints of sequential consistency. In particular, transactions become the fundamental unit for memory ordering models rather than individual loads and stores. Figure 24 shows sequences of instructions that access memory; note that there may be other non-memory instructions in the sequences, but they are not shown. Recall that with sequential consistency, only certain interleavings of loads and stores are allowed. For example, one such interleaving is illustrated on the right side of Figure 24a. However, if accesses are divided into transactions (Figure 24b), then proper interleavings must only be maintained at the transaction level. One example is shown on the right side of Figure 24b. The sequentially consistent interleaving in Figure 24a would not be allowed for the transactions given in Figure 24b.

Figure 24. Examples of Sequential Consistency and Transactional Semantics. On the left are code sequences executed by two threads; on the right are potential interleavings under a) Sequential Consistency and b) Transactional Semantics.

By making transactions atomic, and not the individual loads and stores, there are fewer ordering constraints on individual loads and stores within a transaction. The compiler or the hardware can reorder these operations without violating transactional semantics (although data dependences must still be maintained). For example, the two stores in the first transaction of Thread 0 in Figure 24 can be reordered by the hardware and the results will still be correct. Because of atomicity, all other threads will only observe the result after both stores have been performed.

In practice, transactions may be intermixed with loads and stores to shared memory that do not belong to a transaction. This leads to two varieties of transactional semantics. In one class, transactions are atomic only with respect to other transactions; they do not have to be atomic with respect to loads and stores that are not part of transactions. In the other, atomicity is maintained among both transactions and non-transactional loads and stores. The first class is referred to as *weak atomicity* and the second is referred to as *strong atomicity* [17]. The difference is illustrated in Figure 25. In the figure, the last load and store for Thread 1 are not part of a transaction; they are individual loads and stores. With weak atomicity, these loads and stores are not required to respect the atomicity of a transaction, so they can be interleaved inside a transaction as shown on the right side of the figure. With strong atomicity, this interleaving would not be allowed. Current instruction sets and hardware provide no special support for transactional memory.
Consequently, today, it must be supported entirely by software, and this is inefficient. A software implementation essentially has to save the contents of any state that is modified by a transaction, then check to see if any other thread writes to any of the addresses accessed by a transaction as it executes, and, if so, the transaction is aborted, the initial state is restored, and the transaction must begin again. The state saving and detection of writes to the transaction addresses adds a significant number of instructions and slows down a transaction considerably. To make transactional memory efficient, it is clear that some type of ISA (and underlying hardware) support will be needed. On the other hand, this ISA/hardware support alone will probably not be enough; there will also have to be a software component of a transactional memory implementation. This interplay between hardware and software and the definition of the interface (in the ISA) is a topic of significant study. Some of the alternatives are described in Chapter 4 when transactional memory hardware support is discussed. At the ISA level, one approach is to provide begin_transaction and end_transaction instructions, as is suggested in Figure 23. Another approach is to provide more primitive operations to assist with the tracking of instructions that access transaction write addresses. These alternatives will also be discussed in greater detail in Chapter 4.

![Figure 25 code sequences and an interleaving under weak atomicity](image)

Figure 25. With weak atomicity, loads and stores that are not part of a transaction are not required to respect atomicity within a transaction.

### 2.4 Message Passing

In the message passing programming model, concurrently executing processes or threads communicate by sending and receiving explicit messages. The model is often implemented with an API formed by appending a set of message passing routines to a conventional procedural programming language. Because messages are passed between explicitly identified buffers using only API-defined routines, there is no need for logically shared memory.
Another important feature of the message passing programming model is that communication and synchronization are combined in message semantics; they do not have to be handled separately as is the case in most shared memory APIs (transactional memory is an attempt to combine synchronization and communication, however). MPI, or Message Passing Interface [6], is a standard set of API routines that support the message passing parallel programming model. This set of routines has been combined with a number of popular programming languages including C and FORTRAN. When describing the message passing model, we will use examples from MPI. 2.4.1 Processes and Threads In this subsection we will consider the creation and management of processes and threads as used in a typical message passing API. Then, in the next subsection, message passing communication and synchronization are considered. The message passing programming model can be implemented easily on both shared and distributed memory hardware. Consequently, computation can be performed either by threads sharing an address space or by processes with different address spaces. This leads to three different environments in which an API can support message passing: 1) Threads sharing an address space; threads run as part of a single process; 2) Processes having separate address spaces; processes run under the control of a common operating system; 3) Processes having separate address spaces; processes run under the control of different operating systems. Environments 1) and 2) are best supported on shared memory hardware and the threads and processes run under a common operating system. Consequently, the API can support creation, management, and termination of concurrent processes and threads in much the same way as described for the shared memory model in Section 2.2.1. For example, the Unix OS fork(), exec(), and wait() system calls can be used. In typical usage, a main application process may create, and then manage a number of cooperating threads or processes through these system calls. Environment 3) is probably the most common situation where message passing is used in practice. In this environment, the concurrently executing processes are under the control of different operating systems. Because there is no common OS, software running under control of one of the operating systems cannot directly perform a system call that creates a process on another system. One solution to this problem is to create processes in a manner that is external to the cooperating user processes. For example, the user may run a shell script that starts processes on the various distributed processors. Then, each of these processes calls a routine that initializes a runtime message passing environment based on OS-provided networking facilities, thereby establishing inter-process communication. This was the approach taken in the first version of MPI, MPI-1, where there are no explicit routines for forking or creating processes; rather, it is assumed that the user first starts a number of separate programs, through means external to MPI. Then each of the programs calls the routine MPI_INIT() which initializes the MPI API environment and sets up the necessary communication linkages; exactly how this is done depends on the actual hardware platform. MPI extensions, provided in MPI-2 support the creation of processes from inside an MPI program. These extensions employ an interface to an external process manager; the external process manager does the actual process creation. 
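As a concrete illustration of this startup model, each separately started program might begin as in the following minimal sketch, which initializes the MPI environment and then queries its own rank and the total number of processes. This is an illustrative sketch only; the printf is not required by MPI.

```c
/* Sketch: the startup performed by each program that is started by means
   external to MPI (e.g., by a shell script on each node). */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, numprocs;

    MPI_Init(&argc, &argv);                    /* set up the MPI runtime and communication linkages */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's identifier (rank) */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);  /* total number of cooperating processes */

    printf("process %d of %d started\n", rank, numprocs);

    MPI_Finalize();                            /* tear down the MPI environment */
    return 0;
}
```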
The routine MPI_COMM_SPAWN() creates a number of processes, all executing the same executable binary; MPI_COMM_SPAWN_MULTIPLE() can create processes running different executables. Once processes are created and communication channels are set up, the other process management functions are relatively straightforward. For example, in MPI there is a routine to terminate running processes.

An important part of a message passing API is the ability to name processes and to direct message sending and receiving to/from specific processes, or groups of processes. Typically, when a process is created, the creating process is returned the name of the created process. A process may also get its own name through an API call. In MPI, the process identifier is called the process's rank. There are often cases where inter-process communication or other actions should be confined to a subset of processes. This suggests support for giving groups of processes a common name that can be used, for example, when a message is to be broadcast to all members of the group. MPI provides such support for grouping processes into collections that can communicate with each other. This is illustrated in Figure 26. An MPI group contains an ordered set of processes, and an MPI communicator is an object that defines a "universe" for communication. The predefined communicator MPI_COMM_WORLD contains all the processes. In practice, it is common to first define a group, and then create a communicator so that the processes within the group can communicate.

Figure 26. Processes in a message passing API may be organized into groups for collective communication.

2.4.2 Communication and Synchronization

In a pure message passing programming model (see Figure 27), multiple processes each have their own private memory. There are no shared variables, and all data communication takes place via explicit messages. In the simplest case, point-to-point communication, one process executes a "send" operation, which creates a message that contains the data items to be sent. Then, another process executes a "receive" operation to read the contents of the sent message. There are also more complex forms of message passing, collective communication, where, for example, one process may broadcast a message to all the other processes that are running in parallel.

**POINT-TO-POINT COMMUNICATION**

In point-to-point communication, the sending process calls a routine of the form `send(RecProc, SendBuf,...)` and a receiving process calls `receive(SendProc, RecBuf...)`. For a send routine, the parameter `RecProc` names the destination of the message and may indicate a wildcard if a number of processes are valid receivers. The data to be sent is in the sending process's memory space and is identified as `SendBuf`. Other parameters in the send routine may include the size and type of the data to be sent. Similarly, for the receive routine, `SendProc` identifies the process from which a message is expected and `RecBuf` identifies the memory location in the receiver's address space where the message should be delivered.
An example of a point-to-point message passing routine taken from MPI performs a basic send and is of the form: `MPI_Send(buffer, count, type, dest, tag, comm)`, where the arguments are:

- `buffer` – data buffer holding data to be sent
- `count` – number of data items to be sent
- `type` – type of data items to be sent
- `dest` – rank (identifier) of the receiving process
- `tag` – arbitrary programmer-defined identifier; the tag of a send and the corresponding receive must match. The tag may be a wildcard.
- `comm` – communicator number

A basic MPI receive is of the form `MPI_Recv(buffer, count, type, source, tag, comm, status)`; its arguments are:

- `buffer` – address where received data is to be placed
- `count` – number of data items
- `type` – type of data items
- `source` – rank (identifier) of the sending process
- `tag` – arbitrary programmer-defined identifier; the tags of the send and the receive must match
- `comm` – communicator number
- `status` – the source, tag, and number of bytes transferred.

Message sends and receives not only communicate data; they also may provide implicit synchronization. Consequently, a variety of message send/receive routines are supported; they have similar communication semantics but differ in the timing and ordering of sends and receives. This is analogous to memory ordering semantics in the shared memory model. As with memory ordering models, message sequencing is very closely connected with underlying implementations. In MPI, messages may be synchronous or asynchronous, and they may be blocking or non-blocking. These are pairs of properties that are similar, but they are not the same. The distinction between synchronous and asynchronous is a natural distinction that is defined at a higher level. The distinction between blocking and non-blocking is more closely related to underlying implementations. These distinctions will be elaborated upon in the following paragraphs.

A send routine is synchronous if it returns only when a matching receive is called by the receiving process. In essence, there is an acknowledgment from the receiver back to the sender indicating that the message has been received. While this implied acknowledgment is pending, the send routine stalls¹, and no further instructions in the sending process are executed. A synchronous receive routine stalls until a message becomes available. A send routine is asynchronous if the sending process can proceed immediately after executing the send. Similarly, an asynchronous receive routine simply posts an "intent" to receive a message, and, if the message happens to be available, it is copied into the receive buffer. If the message is not yet available, the receiving process continues computation anyway. Both the sender and receiver are given a request handle that identifies that particular point-to-point communication. Then, there is a routine that allows the sending and receiving processes to test whether the message has been received. Given the request handle as an argument, the test routine returns a status flag indicating whether the message is available or has been received. The advantage of asynchronous routines is that they allow the sender/receiver to do useful work while waiting for a message rather than stalling.

The blocking nature of sends and receives depends on the availability of buffering that is part of the implementation. A blocking send returns as soon as its send buffer has been completely read (as part of the send implementation); otherwise it blocks (stalls) until the send buffer has been read.
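As a minimal illustration of these two routines, the following sketch (with illustrative variable names; it assumes the program is run with at least two processes) has rank 0 perform a blocking send of one integer to rank 1, which posts the matching receive:

```c
/* Sketch: a basic blocking point-to-point transfer from rank 0 to rank 1. */
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int value = 42, tag = 0, rank;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Send one MPI_INT from 'value' to the process with rank 1. */
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receive one MPI_INT from rank 0 into 'value'; the tags must match. */
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Both calls use the MPI_COMM_WORLD communicator, so the ranks refer to the full set of processes.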
A non-blocking send does not wait for the send buffer to be read; it can go ahead with computation. For a non-blocking send, a test routine allows the sending process to determine whether the buffer is empty. A blocking receive blocks until data is available in the receive buffer. A non-blocking receive can continue executing in a manner similar to an asynchronous receive. An example of non-blocking communication in MPI is given in Figure 28. This example, adapted from an MPI tutorial [6], performs neighbor communication among processes that are logically connected in a ring topology. The identifier of a given process is rank, the name of the previous process in the ring is prev, and the name of the next process in the ring is next. After the initialization routines, each process computes its prev and next, modulo the number of processes, numprocs. Then each process performs a pair of non-blocking receive routines, MPI_Irecv, directed at its prev and next neighbors; this is followed by a matching pair of non-blocking send routines, MPI_Isend. The two receive buffers, one for each neighbor, are buf[0] and buf[1]. Request handles are in the array reqs[4]. The message sent to each neighbor is the sender's rank. MPI_Waitall is a routine that waits for all of the non-blocking communications to complete.

¹ The term "stall" is used rather than "block" to avoid confusion with blocking/non-blocking messages.

```c
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numprocs, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status stats[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Determine the ring neighbors, wrapping around at the ends. */
    prev = rank - 1;
    next = rank + 1;
    if (rank == 0) prev = numprocs - 1;
    if (rank == (numprocs - 1)) next = 0;

    /* Post non-blocking receives from both neighbors, then the matching sends. */
    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);

    /* Wait for all four communications to complete. */
    MPI_Waitall(4, reqs, stats);

    MPI_Finalize();
    return 0;
}
```

**Figure 28. Example of non-blocking communication in MPI.**

There are at least two ways that buffering can be implemented, and the specific blocking semantics depend on this implementation. Figure 29 illustrates the two models. The model in Figure 29a consists of the user-defined buffer that holds data at the sending end, some kind of communication channel over which the message travels, and the user-defined buffer for holding data at the receiving end. A send signals that valid data is available in the send buffer, but the data is maintained there until a receive copies the message from the send buffer to the receive buffer. In the second implementation (Figure 29b), there is additional system-defined buffering; this buffering is often associated with the processor at the receiving end, but does not have to be. A send causes the message to be copied into the system buffer. Then, a receive copies the message into the receive buffer. An advantage of the first implementation is that no storage is needed for system buffering, and there is less data copying, so it is potentially faster. However, it means that a send must block until a receive has been executed at the other end. In this sense, it is similar to a synchronous send. In the second implementation, the send may continue processing before the receive has taken place; in this case, it is similar to an asynchronous send.
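As a brief aside on the non-blocking routines above, the test-based overlap of computation and communication might look like the following fragment. This is a sketch only: it assumes MPI_Init has already been called, and do_useful_work() is a placeholder for application computation, not an MPI routine.

```c
/* Fragment: overlap computation with a pending non-blocking receive by
   polling the request handle with MPI_Test until the message arrives. */
void do_useful_work(void);   /* hypothetical application routine */

void receive_with_overlap(void)
{
    MPI_Request req;
    MPI_Status status;
    int data, done = 0;

    MPI_Irecv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &req);
    while (!done) {
        do_useful_work();                /* keep computing while the message is in flight */
        MPI_Test(&req, &done, &status);  /* non-blocking check: has the message arrived? */
    }
}
```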
In either of the two buffering models, a receive blocks (stalls) until data is available to be copied into its receive buffer. In summary, the stalling for synchronous operations is implementation independent and is written into the semantics of the send and receive routines. The stalling for blocking operations is dependent on the buffering in an implementation.

Figure 29. Semantic models for MPI message passing; a) messages are passed directly from the send buffer to the receive buffer; b) there is intermediate system buffering between the send and receive buffers.

Blocking sends and receives are especially prone to deadlock, depending on the implementation. Consider an implementation as in Figure 29a. In Figure 30a, two processes swap data via blocking sends and receives; Process 0 performs a send to process 1 followed by a receive from process 1, and process 1 does the same in the opposite direction. This would seem to be a natural way for a pair of processes to exchange information, but it deadlocks. Because a send blocks until its buffer is emptied, both processes will block on their send, waiting for the other process to perform a receive, which causes the deadlock. Deadlock may also occur in systems as illustrated in Figure 29b if the implementation runs out of system buffer space, thereby blocking the copy from the send buffer into the system buffer. To avoid deadlock, a sequence such as that given in Figure 30b should be used (a short MPI sketch of this ordering appears below, after Figure 31). This approach, however, serializes the two pairs of send/receives; i.e., they cannot be overlapped.

(a) Process 0: Send(Process1, Message); Receive(Process1, Message);   Process 1: Send(Process0, Message); Receive(Process0, Message);

(b) Process 0: Send(Process1, Message); Receive(Process1, Message);   Process 1: Receive(Process0, Message); Send(Process0, Message);

Figure 30. Two processes swap messages via blocking sends and receives; a) if both send, then receive, there is deadlock; b) deadlock can be avoided by carefully ordering the sends and receives.

To summarize, Figure 31 illustrates four cases; in all four, the send is performed before the receive. In Figure 31a, the send and receive are synchronous, so the send stalls until the receive executes and provides an acknowledgement. The actual copying of data happens at some point between the send and receive, but does not directly affect the timing. Figure 31b illustrates the asynchronous blocking case where there is no system buffering. The timing is similar to the synchronous timing, except it is the emptying of the sender's buffer that causes the sender to stop stalling. Figure 31c illustrates the asynchronous blocking case where there is intermediate system buffering. In this case, the send only waits until data is copied into the system buffer. Finally, Figure 31d illustrates the asynchronous non-blocking case. The sending process is able to proceed immediately (as with the blocking case with intermediate buffering), although in this case there is no assurance that the sender's buffer has been emptied, so the sender should first test whether the buffer is empty before performing another send.

Figure 31. Send and receive timing when the send is performed before the receive: a) synchronous; b) asynchronous, blocking with no intermediate buffer; c) blocking with an intermediate buffer; d) asynchronous and non-blocking.

MPI defines message passing to be synchronous or asynchronous, and blocking or non-blocking, as we have just defined (MPI was used as a guide in formulating these definitions).
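As a concrete sketch of the ordering in Figure 30b (an illustration with assumed variable names, not taken from any particular library's examples): rank 0 sends first and then receives, while rank 1 receives first and then sends, so the blocking calls can always make progress.

```c
/* Sketch: two processes exchange values without deadlock by ordering their
   blocking sends and receives as in Figure 30b (rank 0 sends first, rank 1
   receives first). Assumes the program is run with exactly two processes. */
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, mine, theirs, partner, tag = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    mine = rank;           /* the value this process contributes */
    partner = 1 - rank;    /* the other process (ranks 0 and 1 only) */

    if (rank == 0) {
        MPI_Send(&mine,   1, MPI_INT, partner, tag, MPI_COMM_WORLD);
        MPI_Recv(&theirs, 1, MPI_INT, partner, tag, MPI_COMM_WORLD, &status);
    } else {
        MPI_Recv(&theirs, 1, MPI_INT, partner, tag, MPI_COMM_WORLD, &status);
        MPI_Send(&mine,   1, MPI_INT, partner, tag, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

MPI also provides the combined routine MPI_Sendrecv, which performs such an exchange in a single call and leaves deadlock avoidance to the implementation.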
The examples of MPI_Send and MPI_Recv given at the beginning of this section are asynchronous and blocking. The routine MPI_Ssend is the synchronous version of MPI_Send. In a sense, the synchronous version is "stronger" than the blocking version, and some implementations may implement the blocking version with the synchronous version. MPI also contains non-blocking sends and receives; these are MPI_Isend and MPI_Irecv. Because of their non-blocking nature, these send/receives need additional API support, as pointed out above. MPI provides an MPI_Test routine that checks the status of a pending send or receive operation. MPI also provides MPI_Wait, which blocks until a pending non-blocking operation completes (for example, a send whose buffer has not yet been emptied); its variant MPI_Waitall, which waits on a set of pending operations, was used in the example given in Figure 28.

**COLLECTIVE COMMUNICATION**

Besides point-to-point communication, the message passing model also includes collective communication, for example, when one process broadcasts the same message to all the other processes. A broadcast is illustrated in Figure 32a. The routine broadcast(SendBuf, SendProc, Communicator, ...) sends a message from SendProc's send buffer to all the processes in the communicator group; in the example, all the processes shown are in the communicator group. Other more elaborate patterns of group communication are also possible. For example, a scatter operation distributes a sequence of messages to a number of different processes. The routine scatter(SendBuf, RcvBuf, SendProc, Communicator) is illustrated in Figure 32b. A gather is a collective receive operation that gathers messages from a number of processes into a process's receive buffer. The routine gather(SendBuf, RcvBuf, SendProc, Communicator) is illustrated in Figure 32c. In MPI, a communicator is associated with collective communication and identifies the processes that will participate. A broadcast, for example, is MPI_Bcast(*buffer, count, datatype, root, communicator); here, "root" is the identifier of the broadcasting process.

Figure 32. Collective communication: a) broadcast, b) scatter, c) gather.

2.5 Message Passing Implementations

Even though message passing can be performed between processes with private memory address spaces, the message passing programming model does not require distributed (non-shared) memory hardware. It can be built on either shared memory hardware or distributed memory hardware. Implementations on shared memory hardware are not uncommon. On the other hand, if the hardware uses distributed memory, then the message passing programming model is used predominantly (as compared with the shared memory model). It is also possible to mix the shared memory model and the message passing model on the same shared memory hardware platform. For example, threads belonging to the same process may pass messages for some types of data communication, and use shared memory for others. Which is used, and in what combination, is up to the programmer or software development team. There is typically no special ISA or ABI level support for message passing in most real systems. This is one of the big advantages of the message passing model – the implementer can just lump together existing hardware platforms with a network and get them to work together with no OS changes (or minimal changes) and no specific ISA support (for example, any memory ordering model will do). Some compute servers where program parallelism is a primary application area may provide additional system software support, however.
It's not that there haven't been proposals for ISA level support – there have. However, these proposals have been made in the context of special purpose parallel computer systems, where the predominant application is program parallelism (to make a single program run very fast) and, in that context, message passing support leads to greater efficiency. The main objective of ISA (and hardware) support is reduction of communication overheads. To describe implementations of message passing APIs in more detail, we return to the three API-supported models listed at the beginning of the previous section and consider ways in which they can be implemented.

1) Threads sharing an address space on a shared memory platform. Here, much of the implementation can be done at user level with the assistance of runtime software. Because of the common, shared address space, message passing can be managed at the user level with normal load and store instructions. For example, a send routine would call the API runtime, which would post the presence of a message in a shared memory runtime table, and then either return immediately (for a non-blocking or asynchronous send) or enter a wait loop (or be placed into a runtime-managed thread queue). A receive routine would also call the runtime. If the runtime finds that a matching message has been posted, then it would copy the message from the sender's buffer to the receiver's, using load and store instructions. Otherwise, the runtime would either return to the receiver (for a non-blocking or asynchronous receive) or wait until a message is sent. The key point is that in this implementation, user-level runtime code can implement the message passing API. Consequently, it would be relatively fast and efficient when compared with a distributed memory implementation, which would require operating system intervention. One disadvantage of this shared memory implementation is that the message buffers are not protected from accidental overwriting by a "rogue" thread.

2) Processes running on shared memory hardware communicate via messages in shared regions of their address space. For the most part the implementation would be similar to the one given above, with user level runtime code providing the implementation. Here, there are a couple of alternatives. In one, the programmer could make sure that all message buffers are in the shared regions of memory. Then, message passing would be managed by the runtime, and messages would pass directly from the sender's buffer to the receiver's as in Figure 29a. Alternatively, the implementation might set aside a special shared memory region for message buffering and use intermediate buffering as illustrated in Figure 29b, although in this case the buffering would be in user memory space rather than system space. In this case, the memory regions of the parallel processes would be protected from overwriting due to a bug in another process.

3) Processes on distributed memory hardware communicate via network hardware; the processes have no shared memory regions and are under control of different operating systems. If message passing is used among processes in a distributed memory system with separate operating systems on each compute node, then the API runtime software must convert message sends (or receives) into system calls that use networking routines to communicate from one process to another (the direct communication is among the runtimes that support the processes).
This method is substantially slower because of the need for OS intervention to pass individual messages. There are techniques for reducing the overheads. For example, the message send may require OS intervention, with the receiving OS placing the message into a user-level buffer at the receiver's end. Then, the receive can be done in the runtime, without a system call. This corresponds to the model shown in Figure 29b, with the system buffer located in the receiver's runtime space.

2.6 Summary

Figure 33 illustrates the architectures and programming models by giving commonly used examples. This is the only chapter where we will discuss the upper levels of the architecture stack (API and ABI) in any detail. The API and ABI are discussed in this chapter primarily to provide background for the remainder of the book. They are of interest, however, because certain hardware tradeoffs are influenced by characteristics of the API and ABI levels. As Figure 33 suggests, all the popular APIs and ABIs run on conventional ISAs, which have very similar features for supporting multi-threading.

Figure 33. Examples of commonly used programming models and architectures.

The high level material is for background and we do not discuss it further, except, perhaps, in examples. The ISA level issues are important for some of the multi-core hardware discussions. We know what must be supported; in the remainder of the book, we will discuss how it is done. OS issues will be discussed more in the context of clustered architectures (where communication and synchronization are implemented via the OS).

2.7 References

2. A. Muys, A Pthreads Tutorial.
8. B. Barney, POSIX Threads Programming, Lawrence Livermore National Laboratory.
LIBSVM: A Library for Support Vector Machines

CHIH-CHUNG CHANG and CHIH-JEN LIN, National Taiwan University

LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.

Categories and Subject Descriptors: I.5.2 [Pattern Recognition]: Design Methodology—Classifier design and evaluation; G.1.6 [Numerical Analysis]: Optimization—Quadratic programming methods

General Terms: Algorithms, Performance, Experimentation

Additional Key Words and Phrases: Classification, LIBSVM, optimization, regression, support vector machines, SVM

ACM Reference Format: DOI = 10.1145/1961189.1961199 http://doi.acm.org/10.1145/1961189.1961199

1. INTRODUCTION

Support Vector Machines (SVMs) are a popular machine learning method for classification, regression, and other learning tasks. Since the year 2000, we have been developing the package LIBSVM as a library for support vector machines.\(^1\) LIBSVM is currently one of the most widely used SVM software packages. In this article,\(^2\) we present all implementation details of LIBSVM. However, this article does not intend to teach the practical use of LIBSVM. For instructions on using LIBSVM, see the README file included in the package, the LIBSVM FAQ,\(^3\) and the practical guide by Hsu et al. [2003]. LIBSVM supports the following learning tasks: (1) SVC: support vector classification (two-class and multiclass); (2) SVR: support vector regression; and (3) one-class SVM.

\(^1\)The Web address of the package is at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
\(^2\)This LIBSVM implementation document was created in 2001 and has been maintained at http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf.
\(^3\)LIBSVM FAQ: http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html.

A typical use of LIBSVM involves two steps: first, training a dataset to obtain a model and, second, using the model to predict information of a testing dataset. For SVC and SVR, LIBSVM can also output probability estimates. Many extensions of LIBSVM are available at libsvmtools.\(^4\) The LIBSVM package is structured as follows.

1. Main directory: core C/C++ programs and sample data. In particular, the file svm.cpp implements training and testing algorithms, where details are described in this article.
2. The tools subdirectory: this subdirectory includes tools for checking data format and for selecting SVM parameters.
3. Other subdirectories contain prebuilt binary files and interfaces to other languages/software.

LIBSVM has been widely used in many areas. From 2000 to 2010, there were more than 250,000 downloads of the package. In this period, we answered more than 10,000 emails from users. Table I lists representative works in some domains that have successfully used LIBSVM.

This article is organized as follows. In Section 2, we describe SVM formulations supported in LIBSVM: C-Support Vector Classification (C-SVC), ν-Support Vector Classification (ν-SVC), distribution estimation (one-class SVM), ε-Support Vector Regression (ε-SVR), and ν-Support Vector Regression (ν-SVR). Section 3 then discusses performance measures, basic usage, and code organization.
All SVM formulations supported in LIBSVM are quadratic minimization problems. We discuss the optimization algorithm in Section 4. Section 5 describes two implementation techniques to reduce the running time for minimizing SVM quadratic problems: shrinking and caching. LIBSVM provides some special settings for unbalanced data; details are in Section 6. Section 7 discusses our implementation for multiclass classification. Section 8 presents how to transform SVM decision values into probability values. Parameter selection is important for obtaining good SVM models. Section 9 presents a simple and useful parameter selection tool in LIBSVM. Finally, Section 10 concludes this work. 2. SVM FORMULATIONS LIBSVM supports various SVM formulations for classification, regression, and distribution estimation. In this section, we present these formulations and give corresponding references. We also show performance measures used in LIBSVM. 2.1. C-Support Vector Classification Given training vectors $x_i \in \mathbb{R}^d, i = 1, \ldots, l$, in two classes, and an indicator vector $y \in \mathbb{R}^l$ such that $y_i \in \{1, -1\}$, C-SVC [Boser et al. 1992; Cortes and Vapnik 1995] solves Table I. Representative Works in Some Domains that have Successfully Used LIBSVM. <table> <thead> <tr> <th>Domain</th> <th>Representative works</th> </tr> </thead> <tbody> <tr> <td>Computer vision</td> <td>LIBPMK [Grauman and Darrell 2005]</td> </tr> <tr> <td>Natural language processing</td> <td>Maltparser [Nivre et al. 2007]</td> </tr> <tr> <td>Neuroimaging</td> <td>PyMVPA [Hanke et al. 2009]</td> </tr> <tr> <td>Bioinformatics</td> <td>BDVal [Dorff et al. 2010]</td> </tr> </tbody> </table> the following primal optimization problem: $$\begin{align*} \min_{w, b, \xi} & \quad \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \\ \text{subject to} & \quad y_i (w^T \phi(x_i) + b) \geq 1 - \xi_i, \\ & \quad \xi_i \geq 0, \quad i = 1, \ldots, l, \end{align*}$$ (1) where \(\phi(x_i)\) maps \(x_i\) into a higher-dimensional space and \(C > 0\) is the regularization parameter. Due to the possible high dimensionality of the vector variable \(w\), usually we solve the following dual problem: $$\begin{align*} \min_{\alpha} & \quad \frac{1}{2} \alpha^T Q \alpha - e^T \alpha \\ \text{subject to} & \quad y^T \alpha = 0, \\ & \quad 0 \leq \alpha_i \leq C, \quad i = 1, \ldots, l. \end{align*}$$ (2) where \(e = [1, \ldots, 1]^T\) is the vector of all ones, \(Q\) is an \(l\) by \(l\) positive semidefinite matrix, \(Q_{ij} = y_i y_j K(x_i, x_j)\), and \(K(x_i, x_j) = \phi(x_i)^T \phi(x_j)\) is the kernel function. After problem (2) is solved, using the primal-dual relationship, the optimal \(w\) satisfies $$w = \sum_{i=1}^{l} y_i \alpha_i \phi(x_i)$$ (3) and the decision function is $$\text{sgn}(w^T \phi(x) + b) = \text{sgn}\left(\sum_{i=1}^{l} y_i \alpha_i K(x_i, x) + b\right).$$ We store \(y_i \alpha_i \forall i, b\), label names,\(^5\) support vectors, and other information such as kernel parameters in the model for prediction. ### 2.2. \(\nu\)-Support Vector Classification The \(\nu\)-support vector classification [Schölkopf et al. 2000] introduces a new parameter \(\nu \in (0, 1]\). It is proved that \(\nu\) an upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. 
Given training vectors \(x_i \in \mathbb{R}^d, i = 1, \ldots, l\), in two classes, and a vector \(y \in \mathbb{R}^l\) such that \(y_i \in \{1, -1\}\), the primal optimization problem is $$\begin{align*} \min_{w, b, \xi, \rho} & \quad \frac{1}{2} w^T w - \nu \rho + \frac{1}{l} \sum_{i=1}^{l} \xi_i \\ \text{subject to} & \quad y_i (w^T \phi(x_i) + b) \geq \rho - \xi_i, \\ & \quad \xi_i \geq 0, \quad i = 1, \ldots, l, \\ & \quad \rho \geq 0. \end{align*}$$ (4) The dual problem is $$\begin{align*} \min_{\alpha} & \quad \frac{1}{2} \alpha^T Q \alpha \\ \text{subject to} & \quad 0 \leq \alpha_i \leq 1/l, \quad i = 1, \ldots, l, \\ & \quad e^T \alpha \geq \nu, \quad y^T \alpha = 0. \end{align*}$$ (5) \(^5\)In LIBSVM, any integer can be a label name, so we map label names to \(\pm 1\) by assigning the first training instance to have \(y_1 = +1\). where \( Q_{ij} = y_i y_j K(x_i, x_j) \). Chang and Lin [2001] show that problem (5) is feasible if and only if \[ v \leq \frac{2 \min(\#y_i = +1, \#y_i = -1)}{l} \leq 1, \] so the usable range of \( v \) is smaller than \((0, 1]\). The decision function is \[ \text{sgn} \left( \sum_{i=1}^{l} y_i \alpha_i K(x_i, x) + b \right). \] It is shown that \( e^T \alpha \geq v \) can be replaced by \( e^T \alpha = v \) [Crisp and Burges 2000; Chang and Lin 2001]. In LIBSVM, we solve a scaled version of problem (5) because numerically \( \alpha_i \) may be too small due to the constraint \( \alpha_i \leq 1/l \). \[ \begin{align*} \min_{\tilde{\alpha}} & \quad \frac{1}{2} \tilde{\alpha}^T Q \tilde{\alpha} \\ \text{subject to} & \quad 0 \leq \tilde{\alpha} \leq 1/l, \quad i = 1, \ldots, l, \\ & \quad e^T \tilde{\alpha} = vl, \quad y^T \tilde{\alpha} = 0 \end{align*} \] If \( \alpha \) is optimal for the dual problem (5) and \( \rho \) is optimal for the primal problem (4), Chang and Lin [2001] show that \( \alpha / \rho \) is an optimal solution of \( C \)-SVM with \( C = 1/(\rho l) \). Thus, in LIBSVM, we output \((\alpha / \rho, b / \rho)\) in the model.\(^6\) ### 2.3. Distribution Estimation (One-Class SVM) One-class SVM was proposed by Schölkopf et al. [2001] for estimating the support of a high-dimensional distribution. Given training vectors \( x_i \in \mathbb{R}^n \), \( i = 1, \ldots, l \) without any class information, the primal problem of one-class SVM is \[ \begin{align*} \min_{w, \xi, \rho} & \quad \frac{1}{2} w^T w - \rho + \frac{1}{vl} \sum_{i=1}^{l} \xi_i \\ \text{subject to} & \quad w^T \phi(x_i) \geq \rho - \xi_i, \\ & \quad \xi_i \geq 0, \quad i = 1, \ldots, l. \end{align*} \] The dual problem is \[ \begin{align*} \min_{\alpha} & \quad \frac{1}{2} \alpha^T Q \alpha \\ \text{subject to} & \quad 0 \leq \alpha_i \leq 1/(vl), \quad i = 1, \ldots, l, \\ & \quad e^T \alpha = 1, \end{align*} \] where \( Q_{ij} = K(x_i, x_j) = \phi(x_i)^T \phi(x_j) \). The decision function is \[ \text{sgn} \left( \sum_{i=1}^{l} \alpha_i K(x_i, x) - \rho \right). \] \(^6\)More precisely, solving (6) obtains \( \bar{\rho} = \rho l \). Because \( \bar{\alpha} = l \alpha \), we have \( \alpha / \rho = \bar{\alpha} / \bar{\rho} \). Hence, in LIBSVM, we calculate \( \alpha / \bar{\rho} \). Similar to the case of $\nu$-SVC, in LIBSVM, we solve a scaled version of (7). \[ \min_{\alpha} \frac{1}{2}\alpha^T Q \alpha \\ \text{subject to } 0 \leq \alpha_i \leq 1, \quad i = 1, \ldots, l, \\ e^T \alpha = \nu l \] **2.4. 
$\epsilon$-Support Vector Regression ($\epsilon$-SVR)** Consider a set of training points, \((x_1, z_1), \ldots, (x_l, z_l)\), where \(x_i \in \mathbb{R}^n\) is a feature vector and \(z_i \in \mathbb{R}^1\) is the target output. Under given parameters \(C > 0\) and \(\epsilon > 0\), the standard form of support vector regression [Vapnik 1998] is \[ \min_{w, b, \xi, \xi^*} \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i + C \sum_{i=1}^{l} \xi_i^* \\ \text{subject to } w^T \phi(x_i) + b - z_i \leq \epsilon + \xi_i, \\ z_i - w^T \phi(x_i) - b \leq \epsilon + \xi_i^*, \\ \xi_i, \xi_i^* \geq 0, i = 1, \ldots, l. \] The dual problem is \[ \min_{\alpha, \alpha^*} \frac{1}{2}(\alpha - \alpha^*)^T Q (\alpha - \alpha^*) + \epsilon \sum_{i=1}^{l} (\alpha_i + \alpha_i^*) + \sum_{i=1}^{l} z_i (\alpha_i - \alpha_i^*) \\ \text{subject to } e^T (\alpha - \alpha^*) = 0, \\ 0 \leq \alpha_i, \alpha_i^* \leq C, i = 1, \ldots, l. \] where \(Q_{ij} = K(x_i, x_j) = \phi(x_i)^T \phi(x_j)\). After solving problem (9), the approximate function is \[ \sum_{i=1}^{l} (-\alpha_i + \alpha_i^*) K(x_i, x) + b. \] In LIBSVM, we output $\alpha^* - \alpha$ in the model. **2.5. $\nu$-Support Vector Regression ($\nu$-SVR)** Similar to $\nu$-SVC, for regression, Schölkopf et al. [2000] use a parameter $\nu \in (0, 1]$ to control the number of support vectors. The parameter $\epsilon$ in $\epsilon$-SVR becomes a parameter here. With $(C, \nu)$ as parameters, $\nu$-SVR solves \[ \min_{w, b, \xi, \xi^*, \epsilon} \frac{1}{2} w^T w + C \left(\nu \epsilon + \frac{1}{l} \sum_{i=1}^{l} (\xi_i + \xi_i^*)\right) \\ \text{subject to } (w^T \phi(x_i) + b) - z_i \leq \epsilon + \xi_i, \\ z_i - (w^T \phi(x_i) + b) \leq \epsilon + \xi_i^*, \\ \xi_i, \xi_i^* \geq 0, i = 1, \ldots, l, \quad \epsilon \geq 0. \] The dual problem is \[ \min_{\alpha, \alpha^*} \frac{1}{2} (\alpha - \alpha^*)^T Q (\alpha - \alpha^*) + z^T (\alpha - \alpha^*) \\ \text{subject to } \quad e^T (\alpha - \alpha^*) = 0, \quad e^T (\alpha + \alpha^*) \leq C, \\ 0 \leq \alpha_i, \alpha_i^* \leq C/l, \quad i = 1, \ldots, l. \] (10) The approximate function is \[ \sum_{i=1}^l (-\alpha_i + \alpha_i^*) K(x_i, x) + b. \] Similar to \(\nu\)-SVC, Chang and Lin [2002] show that the inequality \(e^T (\alpha + \alpha^*) \leq C\) can be replaced by an equality. Moreover, \(C/l\) may be too small because users often choose \(C\) to be a small constant like one. Thus, in LIBSVM, we treat the user-specified regularization parameter as \(C/l\). That is, \(\bar{C} = C/l\) is what users specified and LIBSVM solves the following problem. \[ \min_{\alpha, \alpha^*} \frac{1}{2} (\alpha - \alpha^*)^T Q (\alpha - \alpha^*) + z^T (\alpha - \alpha^*) \\ \text{subject to } \quad e^T (\alpha - \alpha^*) = 0, \quad e^T (\alpha + \alpha^*) = \bar{C}, \\ 0 \leq \alpha_i, \alpha_i^* \leq \bar{C}, \quad i = 1, \ldots, l \] Chang and Lin [2002] prove that \(\epsilon\)-SVR with parameters \((\bar{C}, \epsilon)\) has the same solution as \(\nu\)-SVR with parameters \((l\bar{C}, \nu)\). 3. PERFORMANCE MEASURES, BASIC USAGE, AND CODE ORGANIZATION This section describes LIBSVM’s evaluation measures, shows some simple examples of running LIBSVM, and presents the code structure. 3.1. Performance Measures After solving optimization problems listed in previous sections, users can apply decision functions to predict labels (target values) of testing data. Let \(x_1, \ldots, x_l\) be the testing data and \(f(x_1), \ldots, f(x_l)\) be decision values (target values for regression) predicted by LIBSVM. 
If the true labels (true target values) of testing data are known and denoted as \(y_1, \ldots, y_l\), we evaluate the prediction results by the following measures.

3.1.1. Classification.

\[ \text{Accuracy} = \frac{\# \text{ correctly predicted data}}{\# \text{ total testing data}} \times 100\%. \]

3.1.2. Regression. LIBSVM outputs MSE (mean squared error) and \(r^2\) (squared correlation coefficient).

\[ \text{MSE} = \frac{1}{l} \sum_{i=1}^l (f(x_i) - y_i)^2, \]

\[ r^2 = \frac{\left( l \sum_{i=1}^l f(x_i) y_i - \sum_{i=1}^l f(x_i) \sum_{i=1}^l y_i \right)^2}{\left( l \sum_{i=1}^l f(x_i)^2 - \left( \sum_{i=1}^l f(x_i) \right)^2 \right) \left( l \sum_{i=1}^l y_i^2 - \left( \sum_{i=1}^l y_i \right)^2 \right)}. \]

3.2. A Simple Example of Running LIBSVM

While detailed instructions on using LIBSVM are available in the README file of the package and the practical guide by Hsu et al. [2003], here we give a simple example. LIBSVM includes a sample dataset heart_scale of 270 instances. We split the data into a training set heart_scale.tr (170 instances) and a testing set heart_scale.te.

```
$ python tools/subset.py heart_scale 170 heart_scale.tr heart_scale.te
```

The command `svm-train` solves an SVM optimization problem to produce a model.\(^7\)

```
$ ./svm-train heart_scale.tr
*
optimization finished, #iter = 87
nu = 0.471645
obj = -67.299458, rho = 0.203495
nSV = 88, nBSV = 72
Total nSV = 88
```

Next, the command `svm-predict` uses the obtained model to classify the testing set.

```
$ ./svm-predict heart_scale.te heart_scale.tr.model output
Accuracy = 83% (83/100) (classification)
```

The file `output` contains predicted class labels.

\(^7\)The default solver is C-SVC using the RBF kernel (48) with \(C = 1\) and \(\gamma = 1/n\).

3.3. Code Organization

All of LIBSVM's training and testing algorithms are implemented in the file `svm.cpp`. The two main subroutines are `svm_train` and `svm_predict`. The training procedure is more sophisticated, so we give the code organization in Figure 1. From Figure 1, for classification, `svm_train` decomposes a multiclass problem into several two-class problems (see Section 7) and calls `svm_train_one` several times. For regression and one-class SVM, it directly calls `svm_train_one`. The probability outputs for classification and regression are also handled in `svm_train`. Then, according to the SVM formulation, `svm_train_one` calls a corresponding subroutine such as `solve_c_svc` for C-SVC and `solve_nu_svc` for ν-SVC. All `solve_*` subroutines call the solver `Solve` after preparing suitable input values. The subroutine `Solve` minimizes a general form of SVM optimization problems; see (11) and (22). Details of the subroutine `Solve` are described in Sections 4 through 6.

4. SOLVING THE QUADRATIC PROBLEMS

This section discusses algorithms used in LIBSVM to solve dual quadratic problems listed in Section 2. We split the discussion into two parts. The first part considers optimization problems with one linear constraint, while the second part considers those with two linear constraints.

4.1. Quadratic Problems with One Linear Constraint: C-SVC, \(\epsilon\)-SVR, and One-Class SVM

We consider the following general form of C-SVC, \(\epsilon\)-SVR, and one-class SVM:

\[ \min_{\alpha} f(\alpha) \quad \text{subject to} \quad y^T \alpha = \Delta, \quad 0 \leq \alpha_t \leq C, \quad t = 1, \ldots, l, \] (11)

where

\[ f(\alpha) \equiv \frac{1}{2} \alpha^T Q \alpha + p^T \alpha \]

and \(y_t = \pm 1, \quad t = 1, \ldots, l\).
The constraint \(y^T \alpha = \Delta\) is called a *linear* constraint. It can be clearly seen that C-SVC and one-class SVM are already in the form of problem (11): for C-SVC, \(p = -e\) and \(\Delta = 0\); for the scaled one-class SVM in (8), \(p = 0\), \(y = e\), \(C = 1\), and \(\Delta = \nu l\). For \(\epsilon\)-SVR, we use the following reformulation of Eq. (9).

\[ \min_{\alpha, \alpha^*} \quad \frac{1}{2} \begin{bmatrix} (\alpha^*)^T & \alpha^T \end{bmatrix} \begin{bmatrix} Q & -Q \\ -Q & Q \end{bmatrix} \begin{bmatrix} \alpha^* \\ \alpha \end{bmatrix} + \begin{bmatrix} \epsilon e^T - z^T, & \epsilon e^T + z^T \end{bmatrix} \begin{bmatrix} \alpha^* \\ \alpha \end{bmatrix} \quad \text{subject to} \quad y^T \begin{bmatrix} \alpha^* \\ \alpha \end{bmatrix} = 0, \quad 0 \leq \alpha_t, \alpha^*_t \leq C, \quad t = 1, \ldots, l, \]

where

\[ y = [1, \ldots, 1, -1, \ldots, -1]^T. \]

We do not assume that \(Q\) is positive semidefinite (PSD) because sometimes non-PSD kernel matrices are used.

4.1.1. Decomposition Method for Dual Problems. The main difficulty for solving problem (11) is that \(Q\) is a dense matrix and may be too large to be stored. In LIBSVM, we consider a decomposition method to conquer this difficulty. Some earlier works on decomposition methods for SVM include, for example, Osuna et al. [1997b], Joachims [1998], Platt [1998], Keerthi et al. [2001], and Hsu and Lin [2002b]. Subsequent developments include, for example, Fan et al. [2005], Palagi and Sciandrone [2005], and Glasmachers and Igel [2006]. A decomposition method modifies only a subset of \(\alpha\) per iteration, so only some columns of \(Q\) are needed. This subset of variables, denoted as the working set \(B\), leads to a smaller optimization subproblem. An extreme case of the decomposition methods is the Sequential Minimal Optimization (SMO) [Platt 1998], which restricts \(B\) to have only two elements. Then, at each iteration, we solve a simple two-variable problem without needing any optimization software. LIBSVM considers an SMO-type decomposition method proposed in Fan et al. [2005]. Note that \(B\) is updated at each iteration, but for simplicity, we use \(B\) instead of \(B^k\). If \(Q\) is PSD, then \(a_{ij} > 0\); thus, subproblem (13) is used only to handle the situation where \(Q\) is non-PSD.

ALGORITHM 1: An SMO-type decomposition method in Fan et al. [2005]

(1) Find \(\alpha^1\) as the initial feasible solution. Set \(k = 1\).

(2) If \(\alpha^k\) is a stationary point of problem (11), stop. Otherwise, find a two-element working set \(B = \{i, j\}\) by WSS 1 (described in Section 4.1.2). Define \(N \equiv \{1, \ldots, l\} \setminus B\). Let \(\alpha_B^k\) and \(\alpha_N^k\) be subvectors of \(\alpha^k\) corresponding to \(B\) and \(N\), respectively.

(3) If \(a_{ij} \equiv K_{ii} + K_{jj} - 2K_{ij} > 0\), solve the following subproblem with the variable \(\alpha_B = [\alpha_i, \alpha_j]^T\):

$$ \begin{align*} \min_{\alpha_i, \alpha_j} & \quad \frac{1}{2} \begin{bmatrix} \alpha_i & \alpha_j \end{bmatrix} \begin{bmatrix} Q_{ii} & Q_{ij} \\ Q_{ij} & Q_{jj} \end{bmatrix} \begin{bmatrix} \alpha_i \\ \alpha_j \end{bmatrix} + (p_B + Q_{BN} \alpha_N^k)^T \begin{bmatrix} \alpha_i \\ \alpha_j \end{bmatrix} \\ \text{subject to} & \quad 0 \leq \alpha_i, \alpha_j \leq C, \\ & \quad y_i \alpha_i + y_j \alpha_j = \Delta - y_N^T \alpha_N^k. \end{align*} $$ (12)

(4) Set \(\alpha_B^{k+1}\) to be the optimal solution of subproblem (12) or (13), and \(\alpha_N^{k+1} = \alpha_N^k\). Set \(k \leftarrow k + 1\) and go to step 2.

4.1.2 Stopping Criteria and Working Set Selection.
The Karush-Kuhn-Tucker (KKT) optimality condition of problem (11) implies that a feasible $\alpha$ is a stationary point of (11) if and only if there exists a number $b$ and two nonnegative vectors $\lambda$ and $\xi$ such that $$ \nabla f(\alpha) + b y = \lambda - \xi, $$ where $\nabla f(\alpha) \equiv Q \alpha + p$ is the gradient of $f(\alpha)$. Note that if $Q$ is PSD, from the primal-dual relationship, $\xi$, $b$, and $w$ generated by Eq. (3) form an optimal solution of the primal problem. The condition (14) can be rewritten as $$ \begin{align*} \nabla_i f(\alpha) + b y_i & \geq 0 & \text{if } \alpha_i < C, \\ \nabla_i f(\alpha) + b y_i & \leq 0 & \text{if } \alpha_i > 0. \end{align*} $$ Since $y_i = \pm 1$, condition (15) is equivalent to that there exists $b$ such that $$ m(\alpha) \leq b \leq M(\alpha), $$ where $$ m(\alpha) \equiv \max_{i \in \{t : \alpha_i < C, y_i = 1 \text{ or } \alpha_i > 0, y_i = -1\}} -y_i \nabla_i f(\alpha), \quad M(\alpha) \equiv \min_{i \in \{t : \alpha_i < C, y_i = -1 \text{ or } \alpha_i > 0, y_i = 1\}} -y_i \nabla_i f(\alpha), $$ and $$ \begin{align*} I_{up}(\alpha) & \equiv \{t : \alpha_i < C, y_i = 1 \text{ or } \alpha_i > 0, y_i = -1\}, \quad \text{and} \\ I_{low}(\alpha) & \equiv \{t : \alpha_i < C, y_i = -1 \text{ or } \alpha_i > 0, y_i = 1\}. \end{align*} $$ That is, a feasible $\alpha$ is a stationary point of problem (11) if and only if $$ m(\alpha) \leq M(\alpha). $$ From (16), a suitable stopping condition is: \[ m(\alpha^k) - M(\alpha^k) \leq \epsilon, \] (17) where \( \epsilon \) is the tolerance. For the selection of the working set \( B \), we use the following procedure from Section II of Fan et al. [2005]. **WSS 1** 1. For all \( t, s \), define \[ \begin{align*} a_{ts} & \equiv K_{tt} + K_{ss} - 2K_{ts}, \\ b_{ts} & \equiv -y_t \nabla_t f(\alpha^k) + y_s \nabla_s f(\alpha^k) > 0, \end{align*} \] and \[ \bar{a}_{ts} \equiv \begin{cases} a_{ts} & \text{if } a_{ts} > 0, \\ \tau & \text{otherwise.} \end{cases} \] Select \[ \begin{align*} i & \in \arg \max_t \{-y_t \nabla_t f(\alpha^k) \mid t \in I_{up}(\alpha^k)\}, \\ j & \in \arg \min_t \left\{- \frac{b_{ts}^2}{\bar{a}_{ts}} \mid t \in I_{low}(\alpha^k), -y_t \nabla_t f(\alpha^k) < -y_i \nabla_i f(\alpha^k) \right\}. \end{align*} \] 2. Return \( B = \{i, j\} \). The procedure selects a pair \( \{i, j\} \) approximately minimizing the function value; see the term \(-b_{ts}^2/\bar{a}_{ts}\) in Eq. (19). 4.1.3. **Solving the Two-variable Subproblem.** Details of solving the two-variable subproblem in Eqs. (12) and (13) are deferred to Section 6, where a more general subproblem is discussed. 4.1.4. **Maintaining the Gradient.** From the discussion in Sections 4.1.1 and 4.1.2, the main operations per iteration are on finding \( Q_B N \alpha^k_N + p_B \) for constructing the subproblem (12), and calculating \( \nabla f(\alpha^k) \) for the working set selection and the stopping condition. These two operations can be considered together because \[ Q_B N \alpha^k_N + p_B = \nabla f(\alpha^k) - Q_B \alpha^k_B \] (20) and \[ \nabla f(\alpha^{k+1}) = \nabla f(\alpha^k) + Q_{,B}(\alpha^{k+1}_B - \alpha^k_B), \] (21) where \(|B| \ll |N|\) and \( Q_{,B} \) is the submatrix of \( Q \) including columns in \( B \). If at the \( k \)th iteration we already have \( \nabla f(\alpha^k) \), then Eq. (20) can be used to construct the subproblem. After the subproblem is solved, Eq. (21) is employed to have the next \( \nabla f(\alpha^{k+1}) \). 
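As an illustrative sketch (not LIBSVM's actual code), the update in Eq. (21) touches only the two columns of \(Q\) selected by the working set; here `G` is assumed to hold \(\nabla f(\alpha^k)\), and `Qi`, `Qj` are assumed to hold columns \(i\) and \(j\) of \(Q\):

```c
/* Sketch of maintaining the gradient across one SMO iteration, per Eq. (21):
   grad f(alpha^{k+1}) = grad f(alpha^k) + Q[:,B] (alpha_B^{k+1} - alpha_B^k).
   G[t]     : t-th gradient component, t = 0..l-1
   Qi, Qj   : columns i and j of Q (e.g., obtained from the kernel cache)
   d_i, d_j : changes alpha_i^{k+1} - alpha_i^k and alpha_j^{k+1} - alpha_j^k */
void update_gradient(double *G, int l, const double *Qi, const double *Qj,
                     double d_i, double d_j)
{
    for (int t = 0; t < l; t++)
        G[t] += Qi[t] * d_i + Qj[t] * d_j;
}
```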
Therefore, LIBSVM maintains the gradient throughout the decomposition method.

4.1.5. The Calculation of \( b \) or \( \rho \). After the solution \( \alpha \) of the dual optimization problem is obtained, the variables \( b \) or \( \rho \) must be calculated, as they are used in the decision function. Note that \( b \) of \( C \)-SVC and \( \epsilon \)-SVR plays the same role as \(-\rho\) in one-class SVM, so we define \( \rho = -b \) and discuss how to find \( \rho \). If there exists \( \alpha_i \) such that \( 0 < \alpha_i < C \), then from the KKT condition (16), \( \rho = y_i \nabla_i f(\alpha) \). In LIBSVM, for numerical stability, we average all these values.

\[ \rho = \frac{\sum_{i: 0<\alpha_i<C} y_i \nabla_i f(\alpha)}{|\{i \mid 0 < \alpha_i < C\}|} \]

For the situation where no \( \alpha_i \) satisfies \( 0 < \alpha_i < C \), the KKT condition (16) becomes

\[ -M(\alpha) = \max\{y_i \nabla_i f(\alpha) \mid \alpha_i = 0, y_i = -1 \text{ or } \alpha_i = C, y_i = 1\} \leq \rho \leq -m(\alpha) = \min\{y_i \nabla_i f(\alpha) \mid \alpha_i = 0, y_i = 1 \text{ or } \alpha_i = C, y_i = -1\}. \]

We take \( \rho \) to be the midpoint of the preceding range.

4.1.6. Initial Values. Algorithm 1 requires an initial feasible \( \alpha \). For C-SVC and \( \epsilon \)-SVR, because the zero vector is feasible, we select it as the initial \( \alpha \). For one-class SVM, the scaled form (8) requires that \( 0 \leq \alpha_i \leq 1 \) and \( \sum_{i=1}^{l} \alpha_i = \nu l \). We let the first \( \lfloor \nu l \rfloor \) elements have \( \alpha_i = 1 \) and the \( (\lfloor \nu l \rfloor + 1) \)st element have \( \alpha_i = \nu l - \lfloor \nu l \rfloor \).

4.1.7. Convergence of the Decomposition Method. Fan et al. [2005, Section III] and Chen et al. [2006] discuss the convergence of Algorithm 1 in detail. For the rate of linear convergence, List and Simon [2009] prove a result without making the assumption used in Chen et al. [2006].

4.2. Quadratic Problems with Two Linear Constraints: \( \nu \)-SVC and \( \nu \)-SVR

From problems (6) and (10), both \( \nu \)-SVC and \( \nu \)-SVR can be written as the following general form:

\[ \min_{\alpha} \frac{1}{2} \alpha^T Q \alpha + p^T \alpha \quad \text{subject to} \quad y^T \alpha = \Delta_1, \quad e^T \alpha = \Delta_2, \quad 0 \leq \alpha_t \leq C, \quad t = 1, \ldots, l. \] (22)

The main difference between problems (11) and (22) is that (22) has two linear constraints \( y^T \alpha = \Delta_1 \) and \( e^T \alpha = \Delta_2 \). The optimization algorithm is very similar to that for (11), so we describe only the differences.

4.2.1. Stopping Criteria and Working Set Selection. Let \( f(\alpha) \) be the objective function of problem (22). By the same derivation as in Section 4.1.2, the KKT condition of problem (22) implies that there exist \( b \) and \( \rho \) such that

\[ \nabla_i f(\alpha) - \rho + b y_i \begin{cases} \geq 0 & \text{if } \alpha_i < C, \\ \leq 0 & \text{if } \alpha_i > 0. \end{cases} \] (23)

Define

\[ r_1 \equiv \rho - b \quad \text{and} \quad r_2 \equiv \rho + b. \] (24)

If \( y_i = 1 \), (23) becomes

\[ \nabla_i f(\alpha) - r_1 \begin{cases} \geq 0 & \text{if } \alpha_i < C, \\ \leq 0 & \text{if } \alpha_i > 0. \end{cases} \] (25)

If \( y_i = -1 \), (23) becomes

\[ \nabla_i f(\alpha) - r_2 \begin{cases} \geq 0 & \text{if } \alpha_i < C, \\ \leq 0 & \text{if } \alpha_i > 0. \end{cases} \] (26)
Hence, given a tolerance $\epsilon > 0$, the stopping condition is \[ \max \left( m_p(\alpha) - M_p(\alpha), m_n(\alpha) - M_n(\alpha) \right) < \epsilon, \] (27) where \[ m_p(\alpha) \equiv \max_{i \in I_{mp}(\alpha), y_i = 1} -y_i \nabla_i f(\alpha), \quad M_p(\alpha) \equiv \min_{i \in I_{mp}(\alpha), y_i = 1} -y_i \nabla_i f(\alpha), \quad m_n(\alpha) \equiv \max_{i \in I_{mn}(\alpha), y_i = -1} -y_i \nabla_i f(\alpha), \quad M_n(\alpha) \equiv \min_{i \in I_{mn}(\alpha), y_i = -1} -y_i \nabla_i f(\alpha). \] The following working set selection is extended from WSS 1. **WSS 2 (Extension of WSS 1 for $\nu$-SVM)** 1. Find \[ i_p \in \arg m_p(\alpha^k), \quad j_p \in \arg \min_t \left\{ -\frac{b_{ip}^2}{\alpha_{ip}} \mid y_i = 1, \alpha_t \in I_{low}(\alpha^k), -y_i \nabla_i f(\alpha^k) < -y_{ip} \nabla_{ip} f(\alpha^k) \right\}. \] 2. Find \[ i_n \in \arg m_n(\alpha^k), \quad j_n \in \arg \min_t \left\{ -\frac{b_{in}^2}{\alpha_{in}} \mid y_i = -1, \alpha_t \in I_{low}(\alpha^k), -y_i \nabla_i f(\alpha^k) < -y_{in} \nabla_{in} f(\alpha^k) \right\}. \] 3. Return $[i_p, j_p]$ or $[i_n, j_n]$ depending on which one gives smaller $-b_{ij}^2/\alpha_{ij}$. 4.2.2. The Calculation of $b$ and $\rho$. We have shown that the KKT condition of problem (22) implies Eqs. (25) and (26) according to $y_i = 1$ and $-1$, respectively. Now we consider the case of $y_i = 1$. If there exists $\alpha_i$ such that $0 < \alpha_i < C$, then we obtain $r_1 = \nabla_i f(\alpha)$. In LIBSVM, for numerical stability, we average these values. \[ r_1 = \frac{\sum_{\alpha_i = 0, y_i = C, y_i = 1} \nabla_i f(\alpha)}{|\{i \mid 0 < \alpha_i < C, y_i = 1\}|} \] If there is no $\alpha_i$ such that $0 < \alpha_i < C$, then $r_1$ satisfies \[ \max_{\alpha_i = C, y_i = 1} \nabla_i f(\alpha) \leq r_1 \leq \min_{\alpha_i = 0, y_i = 1} \nabla_i f(\alpha). \] We take $r_1$ the midpoint of the previous range. For the case of $y_i = -1$, we can calculate $r_2$ in a similar way. After $r_1$ and $r_2$ are obtained, from Eq. (24), \[ \rho = \frac{r_1 + r_2}{2} \quad \text{and} \quad -b = \frac{r_1 - r_2}{2}. \] 4.2.3. Initial Values. For $\nu$-SVC, the scaled form (6) requires that \[ 0 \leq \alpha_i \leq 1, \quad \sum_{i : y_i = 1} \alpha_i = \frac{\nu l}{2}, \quad \text{and} \quad \sum_{i : y_i = -1} \alpha_i = \frac{\nu l}{2}. \] We let the first $\nu l/2$ elements of $\alpha_i$ with $y_i = 1$ to have the value one.\(^9\) The situation for $y_i = -1$ is similar. The same setting is applied to $\nu$-SVR. \(^9\)Special care must be made as $\nu l/2$ may not be an integer. See also Section 4.1.6. 5. SHRINKING AND CACHING This section discusses two implementation tricks (shrinking and caching) for the decomposition method and investigates the computational complexity of Algorithm 1. 5.1. Shrinking An optimal solution $\alpha$ of the SVM dual problem may contain some bounded elements (i.e., $\alpha_i = 0$ or $C$). These elements may have already been bounded in the middle of the decomposition iterations. To save the training time, the shrinking technique tries to identify and remove some bounded elements, so a smaller optimization problem is solved [Joachims 1998]. The following theorem theoretically supports the shrinking technique by showing that at the final iterations of Algorithm 1 in Section 4.1.2, only a small set of variables is still changed. **Theorem 5.1** Theorem IV in Fan et al. [2005]. Consider problem (11) and assume $Q$ is positive semi-definite. 1. The following set is independent of any optimal solution $\bar{\alpha}$. 
\[ I \equiv \{i \mid -y_i \nabla_i f(\bar{\alpha}) > M(\bar{\alpha}) \text{ or } -y_i \nabla_i f(\bar{\alpha}) < m(\bar{\alpha})\}. \]
Further, for every \( i \in I \), problem (11) has a unique and bounded optimal solution at \( \alpha_i \).
2. Assume Algorithm 1 generates an infinite sequence \( \{\alpha^k\} \). There exists \( \bar{k} \) such that for all \( k \geq \bar{k} \), every \( \alpha_i^k, i \in I \), has reached the unique and bounded optimal solution. That is, \( \alpha_i^k \) remains the same in all subsequent iterations. In addition, for all \( k \geq \bar{k} \),
\[ i \notin \{t \mid M(\alpha^k) \leq -y_t \nabla_t f(\alpha^k) \leq m(\alpha^k)\}. \]

If we denote by \( A \) the set containing elements not shrunk at the \( k \)th iteration, then instead of solving problem (11), the decomposition method works on a smaller problem:
\[ \min_{\alpha_A} \frac{1}{2}\alpha_A^T Q_{AA} \alpha_A + (p_A + Q_{AN} \alpha_N^k)^T \alpha_A \quad \text{subject to} \quad y_A^T \alpha_A = \Delta - y_N^T \alpha_N^k, \quad 0 \leq \alpha_i \leq C, \; i \in A, \] (28)
where \( N = \{1, \ldots, l\} \setminus A \) is the set of shrunk variables. Note that in LIBSVM, we always rearrange elements of \( \alpha \), \( y \), and \( p \) to maintain \( A = \{1, \ldots, |A|\} \). Details of the index rearrangement are in Section 5.4.

After solving problem (28), we may find that some elements were wrongly shrunk. When that happens, the original problem (11) is reoptimized from a starting point \( \alpha = [\alpha_A\; \alpha_N^k] \), where \( \alpha_A \) is optimal for problem (28) and \( \alpha_N^k \) corresponds to the shrunk bounded variables.

In LIBSVM, we start the shrinking procedure at an early stage. The procedure is as follows.
(1) After every \( \min(l, 1000) \) iterations, we try to shrink some variables. Note that throughout the iterative process, we have
\[ m(\alpha^k) > M(\alpha^k) \] (29)
because the condition (17) is not satisfied yet. Following Theorem 5.1, we conjecture that variables in the following set can be shrunk.
\[ \{ t \mid -y_t \nabla_t f(\alpha^k) > m(\alpha^k), t \in I_{\text{low}}(\alpha^k), \alpha^k_t \text{ is bounded}\} \cup \{ t \mid -y_t \nabla_t f(\alpha^k) < M(\alpha^k), t \in I_{\text{up}}(\alpha^k), \alpha^k_t \text{ is bounded}\} \]
\[ = \{ t \mid -y_t \nabla_t f(\alpha^k) > m(\alpha^k), \alpha^k_t = C, y_t = 1 \text{ or } \alpha^k_t = 0, y_t = -1\} \cup \{ t \mid -y_t \nabla_t f(\alpha^k) < M(\alpha^k), \alpha^k_t = 0, y_t = 1 \text{ or } \alpha^k_t = C, y_t = -1\} \] (30)
Thus, the size of the set \( A \) is gradually reduced every \( \min(l, 1000) \) iterations. The problem (28), and the way of calculating \( m(\alpha^k) \) and \( M(\alpha^k) \), are adjusted accordingly.
(2) The preceding shrinking strategy is sometimes too aggressive. Hence, when the decomposition method achieves the following condition for the first time:
\[ m(\alpha^k) \leq M(\alpha^k) + 10\epsilon, \] (31)
where \( \epsilon \) is the specified stopping tolerance, we reconstruct the gradient (details in Section 5.3). Then, the shrinking procedure can be performed based on more accurate information.
(3) Once the stopping condition
\[ m(\alpha^k) \leq M(\alpha^k) + \epsilon \] (32)
of the smaller problem (28) is reached, we must check if the stopping condition of the original problem (11) has been satisfied. If not, then we reactivate all variables by setting \( A = \{1, \ldots, l\} \) and start the same shrinking procedure on the problem (28). Note that in solving the shrunk problem (28), we only maintain its gradient \( Q_{AA}\alpha_A + Q_{AN}\alpha_N + p_A \) (see also Section 4.1.4).
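As an aside, the membership test for the set (30) used in step (1) can be sketched as follows (a hypothetical helper, not LIBSVM's actual code); it returns the indices that remain active after one shrinking pass.

```cpp
// Minimal sketch of the shrinking test based on set (30) (illustrative only).
// m_k and M_k are m(alpha^k) and M(alpha^k); exact comparisons with 0 and C
// stand for "alpha_t is bounded" as in the text.
#include <vector>

std::vector<int> surviving_indices(const std::vector<int>& active,
                                   const std::vector<double>& alpha,
                                   const std::vector<double>& G,   // grad_t f(alpha)
                                   const std::vector<signed char>& y,
                                   double C, double m_k, double M_k) {
    std::vector<int> keep;
    for (int t : active) {
        const double v = -y[t] * G[t];  // -y_t grad_t f(alpha)
        // bounded elements of I_low(alpha^k) and I_up(alpha^k), respectively
        const bool low_bounded = (alpha[t] == C && y[t] == 1) || (alpha[t] == 0 && y[t] == -1);
        const bool up_bounded  = (alpha[t] == 0 && y[t] == 1) || (alpha[t] == C && y[t] == -1);
        const bool shrink = (low_bounded && v > m_k) || (up_bounded && v < M_k);
        if (!shrink) keep.push_back(t);
    }
    return keep;
}
```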
Hence, when we reactivate all variables to reoptimize problem (11), we must reconstruct the whole gradient \( \nabla f(\alpha) \). Details are discussed in Section 5.3.

For \( \nu \)-SVC and \( \nu \)-SVR, because the stopping condition (27) is different from (17), the variables being shrunk are different from those in (30). For \( y_t = 1 \), we shrink elements in the following set.
\[ \{ t \mid -y_t \nabla_t f(\alpha^k) > m_p(\alpha^k), \alpha^k_t = C, y_t = 1\} \cup \{ t \mid -y_t \nabla_t f(\alpha^k) < M_p(\alpha^k), \alpha^k_t = 0, y_t = 1\} \]
For \( y_t = -1 \), we consider the following set.
\[ \{ t \mid -y_t \nabla_t f(\alpha^k) > m_n(\alpha^k), \alpha^k_t = 0, y_t = -1\} \cup \{ t \mid -y_t \nabla_t f(\alpha^k) < M_n(\alpha^k), \alpha^k_t = C, y_t = -1\} \]

5.2. Caching

Caching is an effective technique for reducing the computational time of the decomposition method. Because \( Q \) may be too large to be stored in the computer memory, \( Q_{ij} \) elements are calculated as needed. We can use the available memory (called the kernel cache) to store some recently used \( Q_{ij} \) [Joachims 1998]. Then, some kernel elements may not need to be recalculated. Theorem 5.1 also supports the use of caching because in final iterations, only certain columns of the matrix \( Q \) are still needed. If the cache already contains these columns, we can save kernel evaluations in final iterations.

In LIBSVM, we consider a simple least-recently-used caching strategy. We use a circular list of structures, where each structure is defined as follows.

```
struct head_t
{
    head_t *prev, *next;  // a circular list
    Qfloat *data;
    int len;              // data[0,len) is cached in this entry
};
```

A structure stores the first `len` elements of a kernel column. Using the pointers `prev` and `next`, it is easy to insert or delete a column. The circular list is maintained so that structures are ordered from the least-recently-used one to the most-recently-used one.

Because of shrinking, the columns cached in the computer memory may have different lengths. Assume the \( i \)th column is needed and its first \( t \) elements \( Q_{1:t,i} \) have been cached. If \( t \leq |A| \), we calculate \( Q_{t+1:|A|,i} \) and store it in the cache. If \( t > |A| \), the desired elements \( Q_{1:|A|,i} \) are already in the cache. In this situation, we do not change the cached contents of the \( i \)th column.

5.3. Reconstructing the Gradient

If condition (31) or (32) is satisfied, LIBSVM reconstructs the gradient. Because \( \nabla_i f(\alpha), i = 1, \ldots, |A| \), have been maintained in solving the smaller problem (28), what we need is to calculate \( \nabla_i f(\alpha), i = |A| + 1, \ldots, l \). To decrease the cost of this reconstruction, throughout the iterations we maintain a vector \( \bar{G} \in \mathbb{R}^l \),
\[ \bar{G}_i = C \sum_{j:\, \alpha_j = C} Q_{ij}, \; i = 1, \ldots, l. \] (33)
Then, for \( i \notin A \),
\[ \nabla_i f(\alpha) = \sum_{j=1}^{l} Q_{ij} \alpha_j + p_i = \bar{G}_i + p_i + \sum_{j \in A,\, 0 < \alpha_j < C} Q_{ij} \alpha_j. \] (34)
Note that we use the fact that if \( j \notin A \), then \( \alpha_j = 0 \) or \( C \). The calculation of \( \nabla f(\alpha) \) via Eq. (34) involves a two-level loop over \( i \) and \( j \). Using \( i \) or \( j \) first may result in a very different number of \( Q_{ij} \) evaluations. We discuss the differences next.

1. \( i \) first: for \( |A| + 1 \leq i \leq l \), calculate \( Q_{i,1:|A|} \). Although from Eq. (34), only \( \{Q_{ij} \mid 0 < \alpha_j < C, \; j \in A\} \) are needed, our implementation obtains all \( Q_{i,1:|A|} \) (i.e., \( \{Q_{ij} \mid j \in A\} \)).
Hence, this case needs at most
\[ (l - |A|) \cdot |A| \] (35)
kernel evaluations. Note that LIBSVM uses a column-based caching implementation. Due to the symmetry of \( Q \), \( Q_{i,1:|A|} \) is part of \( Q \)'s \( i \)th column and may have been cached. Thus, Eq. (35) is only an upper bound.

2. \( j \) first: let
\[ F \equiv \{ j \mid 1 \leq j \leq |A| \text{ and } 0 < \alpha_j < C \}. \]
For each \( j \in F \), calculate \( Q_{1:l,j} \). Though only \( Q_{|A|+1:l,j} \) is needed in calculating \( \nabla_i f(\alpha), i = |A| + 1, \ldots, l \), we must get the whole column because of our cache implementation.\(^{10}\) Thus, this strategy needs no more than
\[ l \cdot |F| \] (36)
kernel evaluations. This is an upper bound because certain kernel columns (e.g., \( Q_{1:|A|,j}, j \in A \)) may already be in the cache and do not need to be recalculated.

\(^{10}\)We always store the first \( |A| \) elements of a column.

We may choose a method by comparing (35) and (36). However, the decision depends on whether \( Q \)'s elements have been cached. If the cache is large enough, then elements of \( Q \)'s first \( |A| \) columns tend to be in the cache because they have been used recently. In contrast, the columns needed by method 1 (those with \( i \notin A \)) are less likely to be in the cache because columns not in \( A \) are not used to solve problem (28). In such a situation, method 1 may require almost \( (l - |A|) \cdot |A| \) kernel evaluations, while method 2 needs much fewer evaluations than \( l \cdot |F| \). Because method 2 takes advantage of the cache implementation, we slightly lower the estimate in Eq. (36) and use the following rule to decide how to calculate Eq. (34):
\[ \text{If } \frac{l}{2} \cdot |F| > (l - |A|) \cdot |A| \text{, use method 1; else, use method 2.} \]
This rule may not give the optimal choice because we do not take the cache contents into account. However, we argue that in the worst scenario, the method selected by the preceding rule is only slightly slower than the other method. This result can be proved by making the following assumptions.

—A LIBSVM training procedure involves only two gradient reconstructions: the first is performed when the \( 10\epsilon \) tolerance is achieved (see Eq. (31)), and the second is at the end of the training procedure.
—Our rule assigns the same method to perform the two gradient reconstructions. Moreover, these two reconstructions cost a similar amount of time.

We refer to the "total training time of method \( x \)" as the whole LIBSVM training time (where method \( x \) is used for reconstructing gradients), and the "reconstruction time of method \( x \)" as the time of one single gradient reconstruction via method \( x \). We then consider two situations.

(1) Method 1 is chosen, but method 2 is better. We have
\[ \text{Total time of method 1} \leq (\text{Total time of method 2}) + 2 \cdot (\text{Reconstruction time of method 1}) \leq 2 \cdot (\text{Total time of method 2}). \] (37)
We explain the second inequality in detail. Method 2 for gradient reconstruction requires \( l \cdot |F| \) kernel elements; the number of kernel evaluations performed at reconstruction time may be smaller because some elements have been cached, but those cached elements were evaluated earlier in the training procedure. Therefore,
\[ l \cdot |F| \leq \text{Total time of method 2}. \] (38)
Because method 1 is chosen and Eq. (35) is an upper bound,
\[ 2 \cdot (\text{Reconstruction time of method 1}) \leq 2 \cdot (l - |A|) \cdot |A| < l \cdot |F|. \] (39)
Combining inequalities (38) and (39) leads to (37).

(2) Method 2 is chosen, but method 1 is better.
We consider the worst situation, in which \( Q \)'s first \( |A| \) columns are not in the cache. As \( |A| + 1, \ldots, l \) are indices of shrunk variables, most likely the remaining \( l - |A| \) columns of \( Q \) are not in the cache either, and \( (l - |A|) \cdot |A| \) kernel evaluations are needed for method 1. Because \( l \cdot |F| \leq 2 \cdot (l - |A|) \cdot |A| \), we have
\[ \text{Reconstruction time of method 2} \leq 2 \cdot (\text{Reconstruction time of method 1}). \]
Therefore,
\[ \text{Total time of method 2} \leq (\text{Total time of method 1}) + 2 \cdot (\text{Reconstruction time of method 1}) \leq 2 \cdot (\text{Total time of method 1}). \]

Table II. A comparison between two gradient reconstruction methods. (a) a7a: \( C = 1 \), \( \gamma = 4 \), \( \epsilon = 0.001 \).

| Gradient reconstruction | \|F\| | \|A\| | Method 1 | Method 2 | Method 1 | Method 2 |
|---|---|---|---|---|---|---|
| First | 10,597 | 12,476 | 0 | 21,470,526 | 45,213,024 | 170,574,272 |
| Second | 10,630 | 12,476 | 0 | 0 | 45,213,024 | 171,118,048 |
| Training time | | | 10s | 108s | 10s | 422s |

The decomposition method reconstructs the gradient twice, after satisfying conditions (31) and (32). We show in each row the number of kernel evaluations of a reconstruction. We check two cache sizes to reflect the situations with and without enough cache. The last row gives the total training time (gradient reconstructions and other operations) in seconds. We use the RBF kernel \( K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2) \).

Table II compares the number of kernel evaluations in reconstructing the gradient. We consider problems a7a and ijcnn1.\(^{11}\) Clearly, the proposed rule selects the better method for both problems. We implemented this technique after version 2.88 of LIBSVM.

\(^{11}\)Available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets.

5.4. Index Rearrangement

In solving the smaller problem (28), we need only the indices in \( A \) (e.g., \( \alpha_i \), \( y_i \), and \( x_i \), where \( i \in A \)). Thus, a naive implementation does not access array contents in a continuous manner. Alternatively, we can maintain \( A = \{1, \ldots, |A|\} \) by rearranging array contents. This approach allows a continuous access of array contents, but incurs the cost of the rearrangement. We decided to rearrange elements in arrays because, throughout the discussion in Sections 5.2 and 5.3, we assume that a cached \( i \)th kernel column contains elements from the first to the \( t \)th (i.e., \( Q_{1:t,i} \)), where \( t \leq l \). If we did not rearrange indices so that \( A = \{1, \ldots, |A|\} \), the whole column \( Q_{1:l,i} \) would have to be cached because \( l \) might be an element of \( A \).

We rearrange indices by sequentially swapping pairs of indices. If \( t_1 \) is going to be shrunk, we find an index \( t_2 \) that should stay and then swap them. Swapping two elements in a vector \( \alpha \) or \( y \) is easy, but swapping kernel elements in the cache is more expensive: we must swap \( (Q_{t_1,i}, Q_{t_2,i}) \) for every cached kernel column \( i \). To keep the number of swapping operations small, we use the following implementation: starting from the first and the last indices, we identify the smallest \( t_1 \) that should leave and the largest \( t_2 \) that should stay. Then, \( (t_1, t_2) \) are swapped and we continue the same procedure to identify the next pair.

5.5. A Summary of the Shrinking Procedure

We summarize the shrinking procedure in Algorithm 2.

Algorithm 2: Extending Algorithm 1 to include the shrinking procedure

**Initialization**
1. Let \( \alpha^1 \) be an initial feasible solution.
2. Calculate the initial \( \nabla f(\alpha^1) \) and the vector \( \bar{G} \) in Eq. (33).
3. Initialize a counter so that shrinking is conducted every \( \min(l, 1000) \) iterations.
4. Let \( A = \{1, \ldots, l\} \).

For \( k = 1, 2, \ldots \)
1. Decrease the shrinking counter.
2. If the counter is zero, conduct shrinking:
   a. If condition (31) is satisfied for the first time, reconstruct the gradient.
   b. Shrink \( A \) by removing elements in the set (30). The implementation described in Section 5.4 ensures that \( A = \{1, \ldots, |A|\} \).
   c. Reset the shrinking counter.
3. If \( \alpha^k \) satisfies the stopping condition (32) of the shrunk problem:
   a. Reconstruct the gradient.
   b. If \( \alpha^k \) also satisfies the stopping condition for the original problem (11), return \( \alpha^k \); else, reset \( A = \{1, \ldots, l\} \) and set the counter to one.\(^{12}\)
4. Find a two-element working set \( B = [i, j] \) by WSS 1.
5. Obtain \( Q_{1:|A|, i} \) and \( Q_{1:|A|, j} \) from the cache or by calculation.
6. Solve sub-problem (12) or (13) by the procedures in Section 6 and update \( \alpha^k \) to \( \alpha^{k+1} \).
7. Update the gradient by Eq. (21) and update the vector \( \bar{G} \).

5.6. Is Shrinking Always Better?

We found that if the number of iterations is large, then shrinking can shorten the training time. However, if we solve the optimization problem only loosely (e.g., by using a large stopping tolerance \( \epsilon \)), the code without shrinking may be much faster. In this situation, because of the small number of iterations, the time spent on all decomposition iterations can be even less than one single gradient reconstruction. Table II compares the total training time with and without shrinking. For a7a, we use the default \( \epsilon = 0.001 \). Under the parameters \( C = 1 \) and \( \gamma = 4 \), the number of iterations is more than 30,000; then shrinking is useful. However, for ijcnn1, we deliberately use a loose tolerance \( \epsilon = 0.5 \), so the number of iterations is only around 4,000. Because our shrinking strategy is quite aggressive, before the first gradient reconstruction only \( Q_{AA} \) is in the cache. Then, many kernel evaluations are needed for reconstructing the gradient, so the implementation with shrinking is slower.

If enough iterations have been run, most elements in \( A \) correspond to free \( \alpha_i \) (\( 0 < \alpha_i < C \)); that is, \( A \approx F \). In contrast, if the number of iterations is small (e.g., ijcnn1 in Table II), many bounded elements have not been shrunk and \( |F| \ll |A| \). Therefore, we can check the relation between \( |F| \) and \( |A| \) to conjecture whether shrinking is useful. In LIBSVM, if shrinking is enabled and \( 2 \cdot |F| < |A| \) when reconstructing the gradient, we issue a warning message to indicate that the code may be faster without shrinking.

5.7. Computational Complexity

While Section 4.1.7 has discussed the asymptotic convergence and the local convergence rate of the decomposition method, in this section we investigate the computational complexity.

\(^{12}\)That is, shrinking is performed at the next iteration.

From Section 4, two places consume most of the operations at each iteration: finding the working set \( B \) by WSS 1 and calculating \( Q_B(\alpha_{B}^{k+1} - \alpha_{B}^{k}) \) in Eq. (21). Each place requires \( O(l) \) operations.\(^{13}\) However, if \( Q_B \) is not available in the cache and each kernel evaluation costs \( O(n) \), the cost becomes \( O(ln) \) for calculating a column of kernel elements.
Therefore, the complexity of Algorithm 1 is
(1) #Iterations \( \times\; O(l) \) if most columns of \( Q \) are cached throughout the iterations;
(2) #Iterations \( \times\; O(nl) \) if columns of \( Q \) are not cached and each kernel evaluation costs \( O(n) \).

Several works have studied the number of iterations of decomposition methods; see, for example, List and Simon [2007]. However, the algorithms studied in these works are slightly different from LIBSVM's, so there is no theoretical result yet on LIBSVM's number of iterations. Empirically, it is known that the number of iterations may grow faster than linearly with the number of training data. Thus, LIBSVM may take considerable training time for huge datasets. Many techniques, for example, Fine and Scheinberg [2001], Lee and Mangasarian [2001], Keerthi et al. [2006], and Segata and Blanzieri [2010], have been developed to obtain an approximate model, but these are beyond the scope of our discussion. In LIBSVM, we provide a simple subsampling tool, so that users can quickly train on a small subset.

6. UNBALANCED DATA AND SOLVING THE TWO-VARIABLE SUBPROBLEM

For some classification problems, the numbers of data in different classes are unbalanced. Some researchers (e.g., Osuna et al. [1997a, Section 2.5]; Vapnik [1998, Chapter 10.9]) have proposed using different penalty parameters in the SVM formulation. For example, the \( C \)-SVM problem becomes
\[ \min_{w,b,\xi} \frac{1}{2} w^T w + C^+ \sum_{y_i=1} \xi_i + C^- \sum_{y_i=-1} \xi_i \quad \text{subject to} \quad y_i(w^T \phi(x_i) + b) \geq 1 - \xi_i, \quad \xi_i \geq 0, \; i = 1, \ldots, l, \] (40)
where \( C^+ \) and \( C^- \) are the regularization parameters for the positive and negative classes, respectively. LIBSVM supports this setting, so users can choose weights for the classes. The dual problem of problem (40) is
\[ \min_{\alpha} \frac{1}{2} \alpha^T Q \alpha - e^T \alpha \quad \text{subject to} \quad 0 \leq \alpha_i \leq C^+ \text{ if } y_i = 1, \quad 0 \leq \alpha_i \leq C^- \text{ if } y_i = -1, \quad y^T \alpha = 0. \]
A more general setting is to assign each instance \( x_i \) a regularization parameter \( C_i \).\(^{14}\) If \( C \) is replaced by \( C_i, i = 1, \ldots, l \), in problem (11), most results discussed in the earlier sections can be extended without problems. The major change of Algorithm 1 is in solving the subproblem (12), which now becomes
\[ \min_{\alpha_i, \alpha_j} \frac{1}{2} \begin{bmatrix} \alpha_i & \alpha_j \end{bmatrix} \begin{bmatrix} Q_{ii} & Q_{ij} \\ Q_{ji} & Q_{jj} \end{bmatrix} \begin{bmatrix} \alpha_i \\ \alpha_j \end{bmatrix} + (Q_{i,N} \alpha_N + p_i)\alpha_i + (Q_{j,N} \alpha_N + p_j)\alpha_j \] (41)
subject to \( y_i \alpha_i + y_j \alpha_j = \Delta - y_N^T \alpha_N^k \), \( 0 \leq \alpha_i \leq C_i \), \( 0 \leq \alpha_j \leq C_j \).

\(^{13}\)Note that because \( |B| = 2 \), once the subproblem has been constructed, solving it takes only a constant number of operations (see details in Section 6).
\(^{14}\)This feature of using \( C_i, \forall i \), is not included in LIBSVM, but is available as an extension at libsvmtools.

Let \( \alpha_i = \alpha_i^k + d_i \) and \( \alpha_j = \alpha_j^k + d_j \). The subproblem (41) can be written as
\[ \min_{d_i, d_j} \frac{1}{2} \begin{bmatrix} d_i & d_j \end{bmatrix} \begin{bmatrix} Q_{ii} & Q_{ij} \\ Q_{ji} & Q_{jj} \end{bmatrix} \begin{bmatrix} d_i \\ d_j \end{bmatrix} + \begin{bmatrix} \nabla_i f(\alpha^k) & \nabla_j f(\alpha^k) \end{bmatrix} \begin{bmatrix} d_i \\ d_j \end{bmatrix} \]
subject to \( y_i d_i + y_j d_j = 0 \), \( -\alpha_i^k \leq d_i \leq C_i - \alpha_i^k \), \( -\alpha_j^k \leq d_j \leq C_j - \alpha_j^k \).
Define \( a_{ij} \) and \( b_{ij} \) as in Eq. (18),
and let \( \hat{d}_i \equiv y_i d_i, \hat{d}_j \equiv y_j d_j \). Using \( \hat{d}_i = -\hat{d}_j \), the objective function can be written as
\[ \frac{1}{2} \bar{a}_{ij} \hat{d}_j^2 + b_{ij} \hat{d}_j. \]
Minimizing this quadratic function leads to
\[ \alpha_i^{\text{new}} = \alpha_i^k + y_i b_{ij}/\bar{a}_{ij}, \quad \alpha_j^{\text{new}} = \alpha_j^k - y_j b_{ij}/\bar{a}_{ij}. \] (42)
These two values may need to be modified because of the bound constraints. We first consider the case of \( y_i \neq y_j \) and rewrite Eq. (42) as
\[ \alpha_i^{\text{new}} = \alpha_i^k + (-\nabla_i f(\alpha^k) - \nabla_j f(\alpha^k))/\bar{a}_{ij}, \quad \alpha_j^{\text{new}} = \alpha_j^k + (-\nabla_i f(\alpha^k) - \nabla_j f(\alpha^k))/\bar{a}_{ij}. \]
The bound constraints \( 0 \leq \alpha_i \leq C_i \) and \( 0 \leq \alpha_j \leq C_j \) form a box, and an infeasible \( (\alpha_i^{\text{new}}, \alpha_j^{\text{new}}) \) must lie in one of the four regions (I–IV) outside the box. We must therefore identify the region in which \( (\alpha_i^{\text{new}}, \alpha_j^{\text{new}}) \) resides. For region I, we have
\[ \alpha_{i}^{k} - \alpha_{j}^{k} > C_{i} - C_{j} \text{ and } \alpha_{i}^{\text{new}} \geq C_{i}. \]
The other cases are similar. We have the following pseudocode to identify the region in which \( (\alpha_{i}^{\text{new}}, \alpha_{j}^{\text{new}}) \) lies and to modify \( (\alpha_{i}^{\text{new}}, \alpha_{j}^{\text{new}}) \) so that the bound constraints are satisfied.

```c
if(y[i]!=y[j])
{
    double quad_coef = Q_i[i]+Q_j[j]+2*Q_i[j];
    if (quad_coef <= 0) quad_coef = TAU;
    double delta = (-G[i]-G[j])/quad_coef;
    double diff = alpha[i] - alpha[j];
    alpha[i] += delta;
    alpha[j] += delta;
    if(diff > 0)
    {
        if(alpha[j] < 0) // in region III
        {
            alpha[j] = 0;
            alpha[i] = diff;
        }
    }
    else
    {
        if(alpha[i] < 0) // in region IV
        {
            alpha[i] = 0;
            alpha[j] = -diff;
        }
    }
    if(diff > C_i - C_j)
    {
        if(alpha[i] > C_i) // in region I
        {
            alpha[i] = C_i;
            alpha[j] = C_i - diff;
        }
    }
    else
    {
        if(alpha[j] > C_j) // in region II
        {
            alpha[j] = C_j;
            alpha[i] = C_j + diff;
        }
    }
}
```

If \( y_{i} = y_{j} \), the derivation is the same.

7. MULTICLASS CLASSIFICATION

LIBSVM implements the "one-against-one" approach [Knerr et al. 1990] for multiclass classification. Some early works applying this strategy to SVM include, for example, Kressel [1998]. If \( k \) is the number of classes, then \( k(k-1)/2 \) classifiers are constructed and each one trains data from two classes. For training data from the \( i \)th and the \( j \)th classes, we solve the following two-class classification problem.
\[ \min_{w^{ij}, b^{ij}, \xi^{ij}} \frac{1}{2} (w^{ij})^T w^{ij} + C \sum_t \xi_t^{ij} \]
subject to
\[ (w^{ij})^T \phi(x_t) + b^{ij} \geq 1 - \xi_t^{ij}, \quad \text{if } x_t \text{ is in the } i\text{th class}, \]
\[ (w^{ij})^T \phi(x_t) + b^{ij} \leq -1 + \xi_t^{ij}, \quad \text{if } x_t \text{ is in the } j\text{th class}, \]
\[ \xi_t^{ij} \geq 0. \]
In classification we use a voting strategy: each of the \( k(k-1)/2 \) binary classifiers casts a vote for one of its two classes on a data point \( x \), and in the end \( x \) is designated to be in the class with the maximum number of votes. In case two classes have identical votes, we simply choose the class appearing first in the array storing the class names, though this may not be the best strategy. Many other methods are available for multiclass SVM classification. Hsu and Lin [2002a] give a detailed comparison and conclude that "one-against-one" is a competitive approach.
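A minimal sketch of the voting step described above is given below; it is illustrative only, and `dec` is a hypothetical accessor returning the decision value of the classifier trained on classes \( i \) and \( j \) for the point being predicted.

```cpp
// Minimal sketch of one-against-one voting (illustrative, not LIBSVM's actual code).
// dec(i, j) > 0 is taken to mean a vote for class i, otherwise for class j.
#include <vector>
#include <functional>

int predict_by_voting(int k, const std::function<double(int, int)>& dec) {
    std::vector<int> votes(k, 0);
    for (int i = 0; i < k; ++i)
        for (int j = i + 1; j < k; ++j)
            ++votes[dec(i, j) > 0 ? i : j];    // each of the k(k-1)/2 classifiers casts one vote
    int best = 0;
    for (int c = 1; c < k; ++c)
        if (votes[c] > votes[best]) best = c;  // ties resolved by the first (smallest) index
    return best;
}
```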
8. PROBABILITY ESTIMATES

SVM predicts only the class label (or the target value, for regression) without probability information. This section discusses the LIBSVM implementation for extending SVM to give probability estimates. More details are in Wu et al. [2004] for classification and in Lin and Weng [2004] for regression.

Given \( k \) classes of data, for any \( x \), the goal is to estimate
\[ p_i = P(y = i \mid x), \quad i = 1, \ldots, k. \]
Following the setting of the one-against-one (i.e., pairwise) approach for multiclass classification, we first estimate pairwise class probabilities
\[ r_{ij} \approx P(y = i \mid y = i \text{ or } j, x) \]
using an improved implementation [Lin et al. 2007] of Platt [2000]: if \( \hat{f} \) is the decision value at \( x \), then we assume
\[ r_{ij} \approx \frac{1}{1 + e^{A\hat{f} + B}}, \] (43)
where \( A \) and \( B \) are estimated by minimizing the negative log-likelihood of the training data (using their labels and decision values). It has been observed that decision values from training may overfit the model (43), so we conduct five-fold cross-validation to obtain decision values before minimizing the negative log-likelihood.

After collecting all \( r_{ij} \) values, Wu et al. [2004] propose several approaches to obtain \( p_i, \forall i \). In LIBSVM, we consider their second approach and solve the following optimization problem.
\[ \min_p \frac{1}{2} \sum_{i=1}^{k} \sum_{j: j \neq i} (r_{ji} p_i - r_{ij} p_j)^2 \quad \text{subject to} \quad p_i \geq 0, \forall i, \quad \sum_{i=1}^{k} p_i = 1. \] (44)
The objective function in problem (44) comes from the equality
\[ P(y = j \mid y = i \text{ or } j, x) \cdot P(y = i \mid x) = P(y = i \mid y = i \text{ or } j, x) \cdot P(y = j \mid x) \]
and problem (44) can be reformulated as
\[ \min_p \frac{1}{2} p^T Q p, \quad \text{where} \quad Q_{ij} = \begin{cases} \sum_{s: s \neq i} r_{si}^2 & \text{if } i = j, \\ -r_{ji} r_{ij} & \text{if } i \neq j. \end{cases} \]

ALGORITHM 3:
1. Start with an initial \( p \) satisfying \( p_i \geq 0, \forall i \), and \( \sum_{i=1}^{k} p_i = 1 \).
2. Repeat (\( t = 1, \ldots, k, 1, \ldots \))
\[ p_t \leftarrow \frac{1}{Q_{tt}} \left[ -\sum_{j: j \neq t} Q_{tj} p_j + p^T Q p \right] \] (47)
normalize \( p \)
until Eq. (45) is satisfied.

Wu et al. [2004] prove that the nonnegativity constraints \( p_i \geq 0, \forall i \), in problem (44) are redundant. After removing these constraints, the optimality condition implies that there exists a scalar \( b \) (the Lagrange multiplier of the equality constraint \( \sum_{i=1}^{k} p_i = 1 \)) such that
\[ \begin{bmatrix} Q & e \\ e^T & 0 \end{bmatrix} \begin{bmatrix} p \\ b \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \] (45)
where \( e \) is the \( k \times 1 \) vector of all ones and \( 0 \) is the \( k \times 1 \) vector of all zeros. Instead of solving the linear system (45) by a direct method such as Gaussian elimination, Wu et al. [2004] derive a simple iterative method. Because
\[ -p^T Q p = -p^T(-b e) = b\, p^T e = b, \]
the optimal solution \( p \) satisfies
\[ (Q p)_t - p^T Q p = Q_{tt} p_t + \sum_{j: j \neq t} Q_{tj} p_j - p^T Q p = 0, \quad \forall t. \] (46)
Using Eq. (46), we consider Algorithm 3. Its update rule (47) can be simplified to
\[ p_t \leftarrow p_t + \frac{1}{Q_{tt}}\left[-(Q p)_t + p^T Q p\right]. \]
Algorithm 3 is guaranteed to converge globally to the unique optimum of problem (44). Using some tricks, we do not need to recalculate \( p^T Q p \) at each iteration. More implementation details are in Appendix C of Wu et al. [2004].
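The following fragment sketches Algorithm 3 with the simplified update above. It is an illustrative, unoptimized version (it recomputes \( Qp \) and \( p^TQp \) at every step, whereas the tricks mentioned above avoid this) and uses the relative stopping condition described immediately below; it is not LIBSVM's actual code.

```cpp
// Minimal sketch of Algorithm 3 (illustrative): fixed-point iteration for the
// multiclass probability problem (44). Q is the k x k matrix built from the
// pairwise estimates r_ij; all diagonal entries Q_tt are assumed positive.
#include <vector>
#include <cmath>
#include <algorithm>

std::vector<double> multiclass_probability(const std::vector<std::vector<double>>& Q) {
    const int k = static_cast<int>(Q.size());
    std::vector<double> p(k, 1.0 / k);            // start from the uniform distribution
    for (int iter = 0; iter < 100 * k; ++iter) {  // simple iteration cap for the sketch
        std::vector<double> Qp(k, 0.0);           // Qp[t] = (Qp)_t
        double pQp = 0.0;                         // pQp  = p^T Q p
        for (int t = 0; t < k; ++t) {
            for (int j = 0; j < k; ++j) Qp[t] += Q[t][j] * p[j];
            pQp += p[t] * Qp[t];
        }
        // relative stopping condition: max_t |(Qp)_t - p^T Q p| < 0.005 / k
        double max_err = 0.0;
        for (int t = 0; t < k; ++t) max_err = std::max(max_err, std::fabs(Qp[t] - pQp));
        if (max_err < 0.005 / k) break;
        // one coordinate update (simplified form of Eq. (47)), then renormalize p
        const int t = iter % k;
        p[t] += (-Qp[t] + pQp) / Q[t][t];
        double sum = 0.0;
        for (int j = 0; j < k; ++j) sum += p[j];
        for (int j = 0; j < k; ++j) p[j] /= sum;
    }
    return p;
}
```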
We consider a relative stopping condition for Algorithm 3:
\[ \|Q p - (p^T Q p)\, e\|_\infty = \max_t |(Q p)_t - p^T Q p| < 0.005/k. \]
When \( k \) (the number of classes) is large, some elements of \( p \) may be very close to zero. Thus, we use a stricter stopping condition by decreasing the tolerance by a factor of \( k \).

Next, we discuss SVR probability inference. For a given set of training data \( D = \{(x_i, y_i) \mid x_i \in \mathbb{R}^n, y_i \in \mathbb{R}, i = 1, \ldots, l\} \), we assume that the data are collected from the model
\[ y_i = f(x_i) + \delta_i, \]
where \( f(x) \) is the underlying function and the \( \delta_i \)'s are independent and identically distributed random noises. Given a test point \( x \), the distribution of \( y \) given \( x \) and \( D \), \( P(y \mid x, D) \), allows us to draw probabilistic inferences about \( y \); for example, we can estimate the probability that \( y \) is in an interval such as \( [f(x) - \Delta, f(x) + \Delta] \). Denoting by \( \hat{f} \) the estimated function based on \( D \) using SVR, \( \zeta = \zeta(x) \equiv y - \hat{f}(x) \) is the out-of-sample residual (or prediction error). We propose modeling the distribution of \( \zeta \) based on the cross-validation residuals \( \{\zeta_i\}_{i=1}^l \). The \( \zeta_i \)'s are generated by first conducting five-fold cross-validation to get \( \hat{f}_j \), \( j = 1, \ldots, 5 \), and then setting \( \zeta_i \equiv y_i - \hat{f}_j(x_i) \) for \( (x_i, y_i) \) in the \( j \)th fold. It is conceptually clear that the distribution of the \( \zeta_i \)'s may resemble that of the prediction error \( \zeta \).

Figure 2 illustrates the \( \zeta_i \)'s from a dataset. Basically, a discretized distribution such as a histogram can be used to model the data; however, this is complex because all \( \zeta_i \)'s must be retained. On the contrary, distributions like the Gaussian and Laplace, commonly used as noise models, require only location and scale parameters. In Figure 2, we plot the fitted curves using these two families and the histogram of the \( \zeta_i \)'s. The figure shows that the distribution of the \( \zeta_i \)'s seems symmetric about zero and that both Gaussian and Laplace reasonably capture the shape of the \( \zeta_i \)'s. Thus, we propose to model \( \zeta_i \) by a zero-mean Gaussian or Laplace distribution, or equivalently, to model the conditional distribution of \( y \) given \( \hat{f}(x) \) by a Gaussian or Laplace distribution with mean \( \hat{f}(x) \).

Lin and Weng [2004] discuss a method to judge whether a Laplace or a Gaussian distribution should be used. Moreover, they experimentally show that in all cases they have tried, Laplace is better. Thus, in LIBSVM, we consider the zero-mean Laplace distribution with density function
\[ p(z) = \frac{1}{2\sigma}e^{-\frac{|z|}{\sigma}}. \]
Assuming that the \( \zeta_i \)'s are independent, we can estimate the scale parameter \( \sigma \) by maximizing the likelihood. For the Laplace distribution, the maximum likelihood estimate is
\[ \sigma = \frac{\sum_{i=1}^l |\zeta_i|}{l}. \]
Lin and Weng [2004] point out that some "very extreme" \( \zeta_i \)'s may cause an inaccurate estimate of \( \sigma \). Thus, they propose estimating the scale parameter after discarding \( \zeta_i \)'s which exceed \( \pm 5 \cdot (\text{standard deviation of the Laplace distribution}) \). For any new data \( x \), we consider that
\[ y = \hat{f}(x) + z, \]
where \( z \) is a random variable following the Laplace distribution with parameter \( \sigma \).
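A sketch of the scale estimation just described is given below. It is illustrative only; the two-pass handling of the \( \pm 5 \) standard deviation cutoff (first a plain MLE, then a re-estimate on the retained residuals) is our reading of the procedure, not necessarily the exact LIBSVM code.

```cpp
// Minimal sketch of the Laplace scale estimation from cross-validation residuals
// zeta_i = y_i - fhat_j(x_i). A zero-mean Laplace with scale sigma has standard
// deviation sqrt(2)*sigma; residuals beyond 5 standard deviations are discarded.
#include <vector>
#include <cmath>

double laplace_scale(const std::vector<double>& zeta) {
    double sigma = 0.0;
    for (double z : zeta) sigma += std::fabs(z);
    sigma /= zeta.size();                            // MLE: sigma = sum |zeta_i| / l
    const double std_dev = std::sqrt(2.0) * sigma;   // Laplace standard deviation
    double sum = 0.0;
    int n = 0;
    for (double z : zeta)
        if (std::fabs(z) <= 5.0 * std_dev) { sum += std::fabs(z); ++n; }
    return n > 0 ? sum / n : sigma;                  // re-estimate after discarding outliers
}
```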
In theory, the distribution of \( \zeta \) may depend on the input \( x \), but here we assume that it is free of \( x \). Such an assumption works well in practice and leads to a simple model.

9. PARAMETER SELECTION

To train SVM problems, users must specify some parameters. LIBSVM provides a simple tool to check a grid of parameters. For each parameter setting, LIBSVM obtains cross-validation (CV) accuracy. Finally, the parameters with the highest CV accuracy are returned. The parameter selection tool assumes that the RBF (Gaussian) kernel is used, although extensions to other kernels and to SVR can easily be made. The RBF kernel takes the form
\[ K(x_i, x_j) = e^{-\gamma\|x_i - x_j\|^2}, \]
so \( (C, \gamma) \) are the parameters to be decided. Users can provide a possible interval of \( C \) (or \( \gamma \)) together with the grid spacing. Then, all grid points of \( (C, \gamma) \) are tried to find the one giving the highest CV accuracy. Users then use the best parameters to train the whole training set and generate the final model.

We do not consider more advanced parameter selection methods because, for only two parameters \( (C, \gamma) \), the number of grid points is not too large. Further, because SVM problems under different \( (C, \gamma) \) parameters are independent, LIBSVM provides a simple tool so that jobs can be run in a parallel (multicore, shared-memory, or distributed) environment.

For multiclass classification, under a given \( (C, \gamma) \), LIBSVM uses the one-against-one method to obtain the CV accuracy. Hence, the parameter selection tool suggests the same \( (C, \gamma) \) for all \( k(k-1)/2 \) decision functions. Chen et al. [2005, Section 8] discuss issues of using the same or different parameters for the \( k(k-1)/2 \) two-class problems. LIBSVM outputs a contour plot of the cross-validation accuracy. An example is in Figure 3.

10. CONCLUSIONS

When we released the first version of LIBSVM in 2000, only two-class \( C \)-SVC was supported. Gradually, we added other SVM variants and supported functions such as multiclass classification and probability estimates. LIBSVM has since become a complete SVM package. We add a function only if it is needed by enough users. By keeping the system simple, we strive to ensure good system reliability.

In summary, this article gives implementation details of LIBSVM. We are still actively updating and maintaining this package. We hope the community will benefit more from our continuing development of LIBSVM.

ACKNOWLEDGMENTS

The authors thank their group members and users for many helpful comments. A list of acknowledgments is at http://www.csie.ntu.edu.tw/~cjlin/libsvm/acknowledgements.

Fig. 3. Contour plot of running the parameter selection tool in LIBSVM. The dataset heart_scale (included in the package) is used. The x-axis is \( \log_2 C \) and the y-axis is \( \log_2 \gamma \).

Received January 2011; accepted February 2011
Understanding the elasticity of fibronectin fibrils: Unfolding strengths of FN-III and GFP domains measured by single molecule force spectroscopy Nehal I. Abu-Lail a,b, Tomoo Ohashi c, Robert L. Clark a,b, Harold P. Erickson c,* Stefan Zauscher a,b,* Abstract While it is well established that fibronectin (FN) matrix fibrils are elastic, the mechanism of fibril elasticity during extension is still debated. To investigate the molecular origin of FN fibril elasticity, we used single molecule force spectroscopy (SMFS) to determine the unfolding behavior of a recombinant FN-III protein construct that contained eight FN-III domains (1–8 FN-III) and two green fluorescent protein (GFP) domains. FN-III domains were distinguished from GFP domains by their shorter unfolding lengths. The unfolding strengths of both domains were determined for a wide range of pulling rates (50 to 1745 nm/s). We found that the mechanical stabilities of FN-III and GFP domains were very similar to each other over the entire range of pulling speeds. FN fibrils containing GFP remain brightly fluorescent, even when stretched, meaning that GFP domains remain largely folded. Since GFP and FN-III have equal unfolding strengths, this suggests that FN-III domains are not extensively unraveled in stretched FN fibrils. Our results thus favor an alternative model, which invokes a conformational change from a compact to an extended conformation, as the basis for FN fibril elasticity. © 2005 Elsevier B.V./International Society of Matrix Biology. All rights reserved. Keywords: Protein unfolding; Single molecule force spectroscopy; Wormlike chain 1. Introduction The extracellular matrix (ECM) plays an important role for the elasticity of tissues and the regulation of cell adhesion (Oberhauser et al., 2002). The ECM contains a number of non-collagen adhesive glycoproteins that can assemble into fibrillar structures, which function to organize the ECM, that mediate cell attachment through specific binding sites and that guide cell migration in developing tissue (Alberts, 2002). Fibronectin (FN) is an ECM glycoprotein that is composed of tandem repeats of three distinct types (I, II and III) of individually folded domains (Potts and Campbell, 1996). The FN type III (FN-III) domain is a common motif of modular proteins and is found in about 2% of all animal proteins (Bork and Doolittle, 1992). The 15–17 FN-III domains of FN constitute about two thirds of the length of the molecule, and each FN-III domain contains seven β-strands that form a sandwich of anti-parallel β-sheets (Leahy et al., 1996). Several studies have shown that FN matrix fibrils assembled in a cell culture are under tension (Halliday and Tomasek, 1995; Zhong et al., 1998); however, details of the molecular conformation of FN remain unclear. To study FN fibril elasticity, Ohashi et al. (1999, 2002) engineered cells that secreted FN that had a GFP domain spliced between 3FN-III and 4FN-III. This naturally fluorescent FN was assembled into matrix fibrils that could be observed at sequential times in living culture. One striking finding was... that the elastic matrix fibrils could stretch three to four times their rest length. Two mechanisms for the deformation of FN molecules have been discussed recently (Erickson, 2002). The first mechanism proposes that stretching is achieved by force-induced unfolding of FN-III domains. 
Support for this model is provided by studies using the atomic force microscope (AFM) to stretch single molecules containing tandem repeats of FN-III domains (Oberhauser et al., 1998; Rief et al., 1998b). In these experiments, stretching produced a characteristic sawtooth wave of force, where each peak was interpreted to correspond to the sudden unfolding of one FN-III domain. The second mechanism is based on observations that FN molecules, which can be considered as flexible filaments, can exist in two conformations. In the extended conformation the molecule is largely straightened, while in the compact conformation the molecule is folded back on itself, where the fold is likely stabilized by ionic bonds (Erickson and Carrell, 1983; Johnson et al., 1999). The proposed deformation mechanism is that FN molecules go from the folded, compact conformation to a more extended conformation when force is applied to a matrix fibril. Two recent studies by Baneyx et al. (2001, 2002) used fluorescence resonance energy transfer (FRET) to investigate the conformation of FN molecules in FN fibrils. They cautiously interpreted their results in favor of the first deformation mechanism, i.e., force-induced domain unfolding. The observed FRET signal, however, may be reporting largely the conformational change from a compact to an extended conformation, and thus does not yet allow to distinguish between the two mechanisms (Erickson, 2002). Recent experiments by Ohashi et al. (2002) showed that FN-GFP fibrils remain brightly fluorescent even after extensive fibrillar stretching, which suggests that most GFP domains remain folded. This result could be interpreted in favor of the second deformation mechanism, assuming that under physiological conditions the average mechanical strength of FN-III domains is larger than the average strength of GFP domains. GFP is inserted in-line between FN-III domains three and four and, upon extension, sustains the same force as the FN-III domains. Since the GFP domains remain apparently folded under applied stress as observed by the FRET signal, one could conclude that most FN-III domains must remain folded also, as they would sustain similar forces. To be able to further distinguish between the two deformation mechanisms, it now becomes important to determine the relative unfolding forces of GFP and FN-III domains in the same construct. To address this problem on the molecular level, we made a protein construct comprising FN-III domains 1–8, with a CFP and YFP (cyan and yellow variants of GFP) inserted between FN-III domains 3 and 4. We then used the AFM to stretch single molecules of this construct --- **Fig. 1.** Force plotted as a function of separation at four different pulling rates, demonstrating representative mechanical unfolding hierarchies of the 1–3FN-III-YFP-CFP-4–8FN-III construct. The curved solid lines are fits of the WLC model to the data using a constant persistence length of 0.42 nm. The numbers between peaks indicate the unfolding length increase. (a) 103 nm/s, only GFP domains unfold; (b) 201 nm/s, first one FN-III domain unfolds (11 nm), followed by two GFP domains (64 and 71 nm), unfolding at slightly lower force; (c) 580 nm/s, first several FN-III domains unfold (23, 14 and 17 nm), followed by two GFP domains (69 and 73 nm); (d) 1745 nm/s, a GFP (76 nm), a FN-III (19 nm) and another GFP domain (74 nm) unfold at similar forces. 2. Results and discussion 2.1. 
Pulling experiments on a recombinant construct containing FN-III and GFP domains AFM pulling experiments were performed on a construct composed of 1–8 FN-III domains with a pair of color-shifted GFPs inserted between the 3FN-III and 4FN-III domains (Ohashi et al., 2002). The two GFP variants are yellow and cyan fluorescent proteins, and the mutations that shift the color are unlikely to cause major changes in the mechanical stability. The construct also contained two cysteine residues, inserted just after the last FN-III domain, which can form links to a gold surface. However, non-specific adsorption of the recombinant construct cannot be ruled out (Rief et al., 1998b). Single molecules were picked up at random with an AFM cantilever tip and stretched for up to several hundred nanometers (Rief et al., 1998b). The resulting force-extension curves showed the characteristic sawtooth patterns that correspond to successive domain unfolding events (Fig. 1a–d). The representative examples shown in Fig. 1 suggest that GFP domains (contour lengths range from 40 nm to 82 nm) can unfold either before or after FN-III domains (contour lengths range from 5 nm to 32 nm) and at somewhat higher or lower force. 2.2. Fingerprints of FN-III and GFP domains Examples of the unfolding length distributions at two pulling rates (436 and 870 nm/s) are shown in Fig. 2a and b. The unfolding lengths fall into two separate and statistically different distributions (Mann–Whitney rank sum test, \( P < 0.001 \)) that correspond to FN-III and GFP domains (gray and white fills, respectively). Within each of these two distributions, there appear three sub-peaks; the first two most likely correspond to partially folded intermediates and the third to the unfolding of a complete domain. Having a wide range of contour lengths due to the presence of partially folded intermediates is typical in SMFS of polyproteins (Meadows et al., 2003; Oberdorfer et al., 2000; Oberhauser et al., 2002). For both pulling rates, the third peak of the FN-III unfolding length distribution is located at about 28 nm, close to the expected 27 nm, and the third peak of the GFP distribution is located at about 75 nm, again close to the expected 79 nm. The unfolding lengths for all sub-peaks are reported in Tables 1 and 2. Our results for the unfolding of FN-III domains are in a good agreement with previous studies, which have reported contour length increases in the range of 28–32 nm (Rief et al., 1998a,b, 1999), of 28 nm (Oberhauser et al., 2002) and of 25 nm (Oberdorfer et al., 2000). We could now assign each peak of the sawtooth wave to FN-III or GFP based on the length of extension that followed it. 2.3. The mechanical stability of FN-III and GFP domains An important and novel aspect of our study is that we have compared the unfolding strengths of FN-III and GFP domains over a wide range of pulling rates to determine the relative unfolding strengths of FN-III and GFP domains. 
### Table 1
A summary of all the properties observed for the unfolding of 1–8 FN-III and GFP domains at different pulling rates (average value ± the standard error of the mean)

| Pulling rate (nm/s) | GFP lengthening (nm) | GFP force (pN) | n |
|---|---|---|---|
| 50 | 64.6 ± 3.1 | 80.6 ± 6.9 | 17 |
| 103 | 61.5 ± 1.7 | 88.8 ± 4.1 | 57 |
| 201 | 59.4 ± 1.9 | 98.2 ± 5.0 | 43 |
| 291 | 62.5 ± 2.4 | 104.2 ± 5.4 | 32 |
| 436 | 54.6 ± 1.5 | 117.9 ± 5.0 | 64 |
| 580 | 59.1 ± 1.5 | 119.9 ± 5.1 | 77 |
| 870 | 55.3 ± 1.3 | 128.8 ± 4.6 | 78 |
| 1745 | 52.2 ± 1.7 | 135.8 ± 7.4 | 50 |

The unfolding forces over all pulling rates ranged from 29 to 399 pN for the FN-III domains and from 35 to 271 pN for the GFP domains. Fig. 3a–h show the distributions of unfolding forces of FN-III and GFP domains at the different pulling rates. While the average unfolding strengths of FN-III and GFP span a similar range (Table 1), the GFP domains appear to be slightly weaker. The widths of the force distribution histograms are in good agreement with those observed separately for FN-III domains and GFP domains under similar experimental conditions (Dietz and Rief, 2004; Oberhauser et al., 2002). The unfolding forces of FN-III and GFP domains were statistically different at 103, 580 and 1745 nm/s (Mann–Whitney rank sum test, \( P = 0.002, 0.017 \) and 0.005, respectively).

Our values for the unfolding strengths of 1–8 FN-III domains at 580 nm/s (137.2 ± 4.0 pN, \( n = 159 \)) are in good agreement with those reported for the 2–14 FN-III domains at a 600 nm/s pulling rate (145 pN) (Oberhauser et al., 2002). Also, our values for the unfolding forces of GFP molecules at 291 nm/s (104.2 ± 5.4 pN, \( n = 32 \)) are in excellent agreement with those reported recently by Dietz and Rief (2004) at 300 nm/s (104 ± 40 pN).

The data in Fig. 3 show that more FN-III than GFP domains unfolded, consistent with the fact that there are eight FN-III and only two GFP domains in each recombinant construct. Furthermore, the actual ratio likely also depends on where along its length the molecule was picked up. Although the number of domains of the two protein types differs in the recombinant protein, we have sufficiently many pulls for both domain types to evaluate our data in a statistically meaningful way. For example, Table 1 shows that the ratio of the number of FN-III domains unfolding to the number of GFP domains unfolding is about 4 to 1 at all investigated pulling rates, in good agreement with the ratio of the number of FN-III domains to GFP domains (8 to 2) in the construct.

### 2.4. The dependence of mechanical unfolding of FN-III and GFP domains on pulling rate

The unfolding strength of proteins typically depends on the pulling rate and varies for different protein domains (Li et al., 2000; Williams et al., 2003). We performed force pulling experiments over a wide range of pulling rates to compare the average unfolding strengths of FN-III and GFP domains in the same construct. The data in Fig. 4 show
(1) that at all pulling rates the GFP domains have slightly lower unfolding strengths than the FN-III domains; (2) that the unfolding strength of FN-III and GFP domains increases linearly with the logarithm of the pulling rate, as predicted by Evans and Ritchie (1997); and (3) that the slopes of the lines are almost identical. This suggests that the mechanical stability of the two domain types is quite similar over a large range of pulling rates.

Since the slopes for FN-III and GFP domains are similar, it is not unreasonable to extrapolate from the unfolding strengths obtained experimentally at high pulling rates to those at the much lower pulling rates that apply in a FN fibril. Extension rates in FN fibrils in vivo may reach about 8 \( \mu \)m/h (i.e., movements that correspond to cell migration; Ohashi et al., 2002), and the corresponding pulling rates would be about 2 nm/s. When extrapolating from the experimentally accessible, high pulling rates to the low pulling rates that occur in vivo, one must assume that only one mechanical energy barrier is present, that is, the barrier observed with the AFM at high pulling rates. The assumption of one major energy barrier over the range of rates of interest here is not unreasonable considering (1) the similarity in slopes for the rate dependence of FN-III and GFP domain unfolding; and (2) previous experimental evidence showing that the extrapolation from a 100 nm/s pulling rate to low pulling rates for I27 domains is in good agreement with the unfolding rates of the protein measured by standard chemical denaturation techniques (Carrion-Vazquez et al., 1999b).

### Table 2
Unfolding lengths of the short (S), medium (M) and long (L) sub-peaks and unfolding forces of 1–8 FN-III and GFP domains at different pulling rates

(a) 1–8 FN-III

| Pulling rate (nm/s) | Unfolding length S (nm) | Unfolding length M (nm) | Unfolding length L (nm) | Unfolding force (pN) |
|---|---|---|---|---|
| 50 | 10.1 ± 0.4 | 19.9 ± 0.5 | 29.1 ± 0.7 | 90 ± 5 |
| 103 | 10.3 ± 0.3 | 20.1 ± 0.3 | 28.7 ± 0.4 | 101 ± 3 |
| 201 | 10.3 ± 0.4 | 19.4 ± 0.4 | 28.4 ± 0.4 | 119 ± 5 |
| 291 | 10.3 ± 0.4 | 19.5 ± 0.4 | 28.8 ± 0.4 | 116 ± 6 |
| 436 | 10.9 ± 0.3 | 20.1 ± 0.3 | 28.5 ± 0.3 | 129 ± 6 |
| 580 | 10.8 ± 0.4 | 20.5 ± 0.3 | 28.9 ± 0.4 | 139 ± 7 |
| 870 | 11.5 ± 0.3 | 20.2 ± 0.3 | 28.3 ± 0.3 | 147 ± 10 |
| 1745 | 10.7 ± 0.3 | 19.8 ± 0.3 | 28.8 ± 0.2 | 165 ± 6 |

(b) GFP

| Pulling rate (nm/s) | Unfolding length S (nm) | Unfolding length M (nm) | Unfolding length L (nm) | Unfolding force (pN) |
|---|---|---|---|---|
| 50 | 48.2 ± 3.5 | 62.1 ± 0.1 | 74.6 ± 1.2 | 64 ± 10 |
| 103 | 48.9 ± 1.1 | 64.4 ± 1.0 | 73.8 ± 0.5 | 86 ± 7 |
| 201 | 48.1 ± 1.3 | 65.6 ± 1.1 | 74.8 ± 1.1 | 101 ± 9 |
| 291 | 48.9 ± 1.3 | 64.9 ± 1.5 | 76.0 ± 0.8 | 100 ± 7 |
| 436 | 46.8 ± 0.7 | 66.3 ± 0.9 | 74.7 ± 1.3 | 117 ± 6 |
| 580 | 49.0 ± 0.9 | 66.8 ± 0.8 | 75.6 ± 0.6 | 120 ± 7 |
| 870 | 47.3 ± 0.7 | 64.1 ± 0.7 | 75.3 ± 1.0 | 137 ± 6 |
| 1745 | 48.7 ± 0.9 | 65.7 ± 0.6 | 74.5 ± 1.3 | 139 ± 10 |

S: short, M: medium, L: long; \( n \): number of data points analyzed; average value ± standard error of the mean.

Our results on the rate dependence of FN-III domain unfolding agree well with those reported by Oberhauser et al. (2002). The unfolding strength of GFP was previously explored at only one pulling rate (Dietz and Rief, 2004). Our measurements of the unfolding strengths of FN-III and GFP domains in the same construct and over a range of pulling rates are essential for our conclusion that FN-III and GFP domains are approximately equal in strength, even at low extension rates.

2.5. Partially folded intermediates of FN-III and GFP domains

The increase in contour length upon unfolding of FN-III and GFP domains spans a range, with a substantial number of events releasing less than the full length of polypeptide. This suggests the existence of partially unfolded intermediates, as observed in previous studies (Fisher et al., 2000b; Li et al., 2005; Oberhauser et al., 2002). We wanted to know whether the partially unfolded intermediates might have unfolding forces that differ from those of the full-length domains (i.e., 27 nm for the FN-III domains and 79 nm for the GFP domains). We therefore determined the average unfolding force for all events within each sub-peak (Table 2). Our assignment of intermediate unfolding events was based on the increase in contour length upon domain unfolding: a contour length increase between 5 and 32 nm was attributed to FN-III domain unfolding, while a contour length increase between 40 and 82 nm was attributed to GFP domain unfolding. Since our choice of partial intermediates was based only on contour length differences, one could object that an FN-III unfolding event might also be attributed to a partial unfolding of a GFP domain. However, since our FN-III contour length distribution looks the same with and without GFP, this effect is likely small. The data summarized in Table 2 show (1) that the unfolding strengths of intermediates of different lengths were similar to those of full domains and (2) that the force for unfolding the full domains was in the same range as the global average, in good agreement with published results for partial FN-III domain unfolding (Fisher et al., 2000b; Li et al., 2005; Oberhauser et al., 2002; Rief et al., 1998b) and for partial GFP unfolding (Dietz and Rief, 2004).

2.6. Pulling experiments on native FN and on a construct of 7–14 FN-III domains

As a control, SMFS experiments were also performed on native FN molecules and on a construct composed of 7–14 FN-III domains. This recombinant fragment covers the part of the FN molecule not included in our 1–8 FN-III-CFP-YFP construct. The force-extension curves obtained for both showed the characteristic sawtooth patterns (insets in Fig. 5a and b). On average, the increase in unfolding distance of native FN and of the 7–14 FN-III domains was 20 ± 0.6 nm (\( n = 168 \)) and 21 ± 0.5 nm (\( n = 255 \)), respectively. The peak force values for unfolding of native FN ranged from 35 pN to 202 pN (Fig. 5a) and those for 7–14 FN-III domains ranged from 35 pN to 208 pN (Fig. 5b). Averaged over all events, the forces required to unfold the native FN and 7–14 FN-III domains were 120 ± 3 pN (\( n = 163 \)) and 125 ± 3 pN (\( n = 235 \)), respectively.
Importantly, the average unfolding forces for native FN and the 7–14 FN-III domains are indistinguishable from the unfolding force for GFP (120 pN) and 1–8 FN-III (137 pN) at a pulling rate of 580 nm/s (Table 1). This observation implies that the unfolding of FN-III domains is independent of the presence of the GFP domains since on average the force required to unfold the FN-III domains in our construct was very similar to the force required to unfold the FN-III domains in the 7–14 FN-III construct or in the native state. Our average unfolding force of 120 pN for native FN is consistent with an average value of 145 pN (Oberhauser et al., 2002). Our average unfolding force of 137 pN for 1–8 FN-III domains is consistent with the observation that domains 1–2 FN-III are stronger than domains 10–FN-III, 12–FN-III and 13–FN-III (Oberhauser et al., 2002). We did not, however, see the pronounced hierarchy of unfolding strengths within single FN molecules that was reported previously (Oberhauser et al., 2002); our unfolding peaks were much more random. 2.7. Implications for FN elasticity Previous studies showed that matrix fibrils stretch significantly from their original length. One example is the prominent fibril running between two cells in Fig. 5 of Ohashi et al. (2002), which stretches almost two-fold between 1:30 and 3:00 h. We showed previously that FN-GFP fibrils remain brightly fluorescent even when stretched and suggested that the GFP domains are mostly folded (Ohashi et al., 2002). We cannot, however, rule out the possibility that the observed fluorescence is produced by a small fraction of folded domains, with the majority being unfolded. To address this concern, we measured the integrated fluorescence of this fibril along its full length at four time points and found that the fluorescence remained constant relative to nearby immobilized fibrils (Ohashi et al., 2002). If GFP domains had unfolded during this stretch, it would have reduced the integrated fluorescence. It is possible that tension itself, without unfolding, could decrease fluorescence; however, since the fluorescence signal remained constant during stretching, we conclude that stretching does not unfold GFP domains and also not FN-III domains, since FN-III domains are of similar strength. While partial unfolding of GFP was observed by Dietz and Rief (2004), contributions from partially unfolded intermediates on the overall fluorescence signal could only have been small, as any intermediates that lost the hydrophobic core around the fluorophore would also have lost fluorescence. Another important element in our analysis is the observation that FN fibrils in vivo appear to be stretched about four times their rest length. If unfolding of FN-III domains were the basis for this stretch, then it would require the majority of the FN-III domains to be unfolded. The reasoning is that each FN-III domain can contribute a maximum of a seven-fold extension, while FN-I and FN-II cannot contribute because they have internal disulfide links that would prevent their extension. If our analysis is correct that the majority of the GFP domains remain folded, then we conclude that the majority of the FN-III domains remain folded also. Recently, Li et al. (2005) observed unfolding intermediates when stretching a polyprotein of 10FN-III. According to their results, the force required to unfold the native state of the 10FN-III domain was 100±20 pN and the force required to unfold the intermediate was only 50±20 pN. 
Since these forces are lower than the forces reported here for both GFP and FN-III domains, unfolding or at least partial unfolding of the weak 10FN-III domains in native FN is possible. However, we showed that the average forces required to unfold domains in the native FN molecule and the 7–14FN-III domains in our constructs are indistinguishable from the unfolding force for GFP (120 pN) and 1–8FN-III (137 pN) at a pulling rate of 580 nm/s (Table 1). This observation implies that only a small fraction of the FN-III domains could actually have been unfolded since the average force required to unfold the FN-III domains remained constant with (in 7–14FN-III and native FN) and without (in 1–8FN-III) the presence of the 10FN-III domain. A small fraction of unfolded FN-III domains could not produce the four-fold stretch observed with FRET. Therefore, we conclude that the majority of the FN-III domains remain folded. Fig. 6. A model illustrating how the change of FN from the compact to the extended conformation might produce fibril elasticity. (a) A FN dimer, where the two subunits are joined by disulfide bonds near their C-termini, is shown in the extended conformation. (b) The molecule folds in such a way that domains 2–3 FN-III of one subunit form an electrostatic bond with domains 12–14FN-III of the other subunit. This is repeated to make a doubly folded molecule (Johnson et al., 1999). (c) The FN fibril is proposed to assemble by connecting molecules near their N-termini to make a longitudinal strand. With all molecules in the compact conformation, as shown in (c), the fibril would be in its relaxed, contracted state. Under tension, the electrostatic bonds would break and the molecules could be pulled to their extended conformation, resulting in a four-fold extension. Reprinted with permission from Erickson (2002). Just as important as the unfolding force is the force that prevents significant domain refolding from occurring. While the forces developed during fibril extension may not be sufficient to cause major domain unfolding, they potentially could slow the refolding of already or spontaneously unfolded domains (Carrion-Vazquez et al., 1999b). For example, Li et al. (2002) have estimated that a static force of 13.7 pN would leave 50% of Ig domains unfolded. A similar force would probably apply to FN-III domains. However, the force that would prevent refolding of GFP domains is not known and will thus remain an important aspect for future research. If, at present, we can exclude unfolding of FN-III domains as the dominant mechanism for fibril elasticity, this leaves the compact to extended conformational change as the favored model (Fig. 6). In this model, FN molecules are folded into the compact conformation in a relaxed fibril. As the fibril is stretched, molecules are pulled into the extended conformation. Depending on how the molecules are connected to each other (this is still not known), the extension could easily produce a four-fold stretch as was observed from the FRET analysis done on FN matrix fibrils assembled in cell culture (Ohashi et al., 2002). Studies of proteolytic and recombinant protein fragments indicated that the compact conformation is stabilized by an electrostatic contact between FN-III domains 2–3 of one subunit of the dimer and domains 12–14 of the other subunit (Johnson et al., 1999). 
If this connectivity, or another similar one, also prevailed in vivo, then the extension from the compact to the extended conformation could produce the four-fold stretch observed by fluorescence analysis (Ohashi et al., 2002). Finally, the compact conformation is stabilized by electrostatic bonds (Erickson, 2002; Johnson et al., 1999) that are likely broken at much weaker forces than those required to break the multiple hydrogen bonds that hold together FN-III domains. 3. Experimental procedures 3.1. Protein engineering We began with the pAIPFN-YFP vector (Ohashi et al., 2002), and constructed a new vector pAIPFN/YFP-CFP, in which a CFP was inserted with a short, eight-amino acid linker immediately following the YFP. We then used PCR to amplify the fragment from FN-III domains 1–8, including restriction sites to clone it into pET15 (Novagen), which provides an N-terminal histag. Two cysteine residues were added to the C-terminus to enhance attachment to a gold substrate (presumably through covalent bonding). The final 1–3FN-III-YFP-CFP, 4–8FN-III amino acid sequence is: mgsshhhhhsslvprgshmSGPVE... (FNI–3)...ETTGt-ggr-MVS KG... (YFP)... DElyk-... tselefgt-MVS KG... (CFP)... DElyK-ggr-RPRSD... (FN4–8)... RQKTgllc (the FN, YFP and CFP sequences are underlined, and the sequences in lowercase are derived from the cloning site and linkers). Escherichia coli BL21 (DE3) was transformed with the vector and the recombinant protein was expressed at 20 °C to improve solubility (a large fraction of 1–3FN-III-YFP-CFP, 4–8FN-III was insoluble at 37 °C). The recombinant protein from the soluble fraction was purified with a cobalt-agarose column (TALON, Clonetech) using standard procedures. A FRET signal from the purified protein was confirmed with a spectrofluorophotometer (Shimadzu, RF-5301PC), indicating that the GFP molecules are properly folded. FRET was demonstrated by the spectrum of a sample excited at 433 nm. A donor peak (CFP) was obtained at 488 nm and an acceptor peak (YFP) at 525 nm. The acceptor peak was about 50% higher in amplitude than the donor peak. The eight-amino acid linker had a thermolysin cut site and, after digestion, the donor peak increased about 50% and the acceptor peak fell, consistent with the loss of FRET. Native bovine FN and the recombinant fragment 7–14FN-III were prepared as described previously (Johnson et al., 1999). 3.2. Single molecule force spectroscopy experiments All force measurements were performed using an atomic force microscope (AFM) (MultiMode with a low noise AFM head, Nanoscope III controller, Veeco, Santa Barabra, CA). Commercially available “V”-shaped silicon nitride cantilevers (Veeco) were used in all experiments. The force constant of each cantilever was determined individually before the experiment from the power spectral density of the thermal noise fluctuations in solution (Hutter and Bechhoefer, 1993). The measured spring constant values for the cantilevers used in the experiments ranged between 50 and 80 pN/nm, in agreement with reported values using other methods and as reported by the manufacturer. A systematic error on the order of 10% can typically be expected in the determination of the spring constant values for AFM cantilevers. While this error largely impacts comparison with force data obtained from other laboratories, it does not impede the comparison between the unfolding strengths of FN and GFP domains in our experiments. 
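As a rough illustration of how a cantilever spring constant relates to its thermal deflection noise, the sketch below applies the simple equipartition relation k = k_BT/⟨x²⟩ to a synthetic deflection trace. This is only a simplified stand-in for the Hutter and Bechhoefer (1993) procedure cited above, which fits the power spectral density of the fundamental resonance; the trace, noise level, and function name are illustrative assumptions, not values from this study.

```python
# Simplified equipartition estimate of a cantilever spring constant from
# thermal deflection noise (k = kB*T / <x^2>). A sketch of the idea only;
# the cited Hutter-Bechhoefer method instead fits the power spectral density
# of the fundamental resonance and applies correction factors.
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def spring_constant_equipartition(deflection_m, temperature_k=298.0):
    """Estimate k (N/m) from a baseline deflection trace in metres."""
    x = np.asarray(deflection_m, dtype=float)
    x = x - x.mean()                      # remove static offset
    return KB * temperature_k / np.mean(x**2)

# Synthetic example: ~0.29 nm RMS thermal noise corresponds to ~0.05 N/m.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.29e-9, size=100_000)   # deflection in metres
k = spring_constant_equipartition(trace)
print(f"k ~ {k * 1000:.0f} pN/nm")   # 1 N/m = 1000 pN/nm
```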
A 100 μl aliquot of a 20 μg/ml mixture of the protein construct of interest (1–3FN-III-YFP-CFP, 4–8FN-III, native FN or 7–14FN-III) suspended in PBS was allowed to adsorb onto a freshly cleaned, evaporated gold surface for an hour. We found that this concentration produced about one successful “pick up” for every 50 touches, which yields a high probability of single-molecule attachments (Evans and Ritchie, 1999). Our experiments were carried out in phosphate-buffered saline (PBS) buffer solutions to mimic physiological conditions, yielding pH values and ionic strengths close to those in vivo so as not to compromise the stability of the GFP domains. After adsorption, excess unbound proteins were removed by ample rinsing with PBS. The protein sample substrate was then mounted onto the piezoelectric scanner of the AFM and covered with a drop of PBS before mounting the AFM head. During force pulling experiments, care was taken to minimize evaporative loss of solvent. Force pulling measurements were performed over a wide range of pulling rates (50, 103, 201, 291, 436, 580, 870 and 1745 nm/s) over a constant distance of 500 nm with a resolution of 4096 data points. All measurements were performed at room temperature and with a retraction (ramp) delay of 1 s.

3.3. Wormlike chain modeling

The wormlike chain (WLC) model of entropic elasticity (Eq. (1)) (Marko and Siggia, 1995) was applied to fit the force-extension profiles of individual protein domains. The force $F$ required to stretch an unfolded polypeptide chain to a length $X$ is given by (Bustamante et al., 1994; Oberhauser et al., 2002; Rief et al., 1997):
$$F = \frac{k_B T}{L_p} \left[ \frac{X}{L_c} + \frac{1}{4} \left( 1 - \frac{X}{L_c} \right)^{-2} - \frac{1}{4} \right],$$
where $k_B$ is Boltzmann’s constant, $T$ is the absolute temperature, $L_p$ is the persistence length (which describes the bending rigidity of the protein) and $L_c$ is the contour length of the polypeptide. The change in contour length that results from unfolding of a protein domain is a measure of the unfolded length of the domain (Fisher et al., 1999). The applicability of the WLC to describe the unfolding force data of proteins has been demonstrated for a wide range of proteins (Best et al., 2001; Carrion-Vazquez et al., 1999a,b; Li et al., 2001; Marszalek et al., 1999; Oberdorfer et al., 2000; Oberhauser et al., 1998, 2002; Rief et al., 1997). In fitting the force-extension data, we used a constant value of 0.42 nm for the persistence length of all unfolded peptides, while the contour length was allowed to vary (Oberhauser et al., 1998). The persistence length value of 0.42 nm was chosen based on previous results of WLC fits to force-extension measurements on FN-III domains (Rief et al., 1998b), and a similar range between 0.3 and 0.5 nm found for a variety of proteins (Fisher et al., 2000a,b; Oberdorfer et al., 2000; Rief et al., 1998b). Keeping the persistence length constant allowed us to obtain a more consistent measure of the contour length of a protein module. The WLC model’s ability to fit the experimental data was gauged by the values of $r^2$ (coefficient of determination), obtained from non-linear curve fitting (TableCurve, SPSS).

3.4. Data analysis

For each domain, the unfolding force was taken as the maximum force of the unfolding peak with respect to the zero-force baseline. Different protein domains (FN-III or GFP) were distinguished based on their characteristic contour length increase upon domain unfolding (distance signature).
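To make the fitting and classification steps of Sections 3.3 and 3.4 concrete, the following sketch fits Eq. (1) to a single rising force-extension branch with the persistence length fixed at 0.42 nm, and then assigns successive peaks to FN-III or GFP from the increase in fitted contour length using the 5–32 nm and 40–82 nm windows defined in this work. This is not the authors' TableCurve workflow; SciPy and NumPy are assumed, and the data are synthetic.

```python
# Sketch of the WLC analysis described above (not the authors' code): fit the
# contour length Lc of each unfolding peak with Eq. (1), holding the
# persistence length fixed at 0.42 nm, then classify successive peaks by the
# increase in contour length. The example data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

KB_T = 4.11  # thermal energy in pN*nm at ~298 K

def wlc_force(x_nm, lc_nm, lp_nm=0.42):
    """Marko-Siggia interpolation: force (pN) to extend a chain to x_nm."""
    r = np.clip(x_nm / lc_nm, 0.0, 0.995)          # keep extension below Lc
    return (KB_T / lp_nm) * (r + 0.25 / (1.0 - r) ** 2 - 0.25)

def fit_contour_length(x_nm, f_pn, lc_guess):
    """Least-squares fit of Lc (nm) to one rising force-extension branch."""
    popt, _ = curve_fit(wlc_force, x_nm, f_pn, p0=[lc_guess])
    return popt[0]

def classify(delta_lc_nm):
    """Assign an unfolding event from its contour-length increase (nm)."""
    if 5.0 <= delta_lc_nm <= 32.0:
        return "FN-III"
    if 40.0 <= delta_lc_nm <= 82.0:
        return "GFP"
    return "unassigned"

# Synthetic demonstration: two peaks whose fitted Lc values differ by ~27 nm.
x1 = np.linspace(20, 55, 50); f1 = wlc_force(x1, 60.0)
x2 = np.linspace(40, 80, 50); f2 = wlc_force(x2, 87.0)
lc1 = fit_contour_length(x1, f1, 70.0)
lc2 = fit_contour_length(x2, f2, 100.0)
print(round(lc2 - lc1, 1), classify(lc2 - lc1))   # ~27.0 -> FN-III
```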
Two methods have been applied to determine this distance signature. The first method considers the increase in contour length upon domain unfolding as the difference in the distance at the maximum force value between two successive unfolding events (Oberdorfer et al., 2000). The second method uses the increase in contour length as estimated from the fitting of two successive unfolding peaks to a polymer elasticity model (WLC model) (Oberhauser et al., 2002). In comparing both methods, one finds that the first method ignores the differences in the force magnitudes for successive peaks used to determine the increase in contour length, and the second method sometimes fails to give reliable fits due to the typically lower number of data points at larger forces. We used both methods to analyze a significant subset of our data and we found that they gave similar results (Fig. 7). Therefore, for simplicity, the data reported here were obtained by the first method unless otherwise mentioned. Since we know the number of amino acids in each domain, we can match the experimentally observed unfolding length to the theoretically expected value. FN-III domains have 90 amino acids each and are expected to unfold to an overall length of 27 nm using an amino acid length of 0.34 nm, and subtracting a distance of 3.6 nm, corresponding to the size of the folded protein (Oberhauser et al., 1998). Although the maximal distance between adjacent $\alpha$-carbon atoms is 0.38 nm, the tetrahedral geometry reduces this spacing to about 0.34 nm per residue when it is in a fully extended $\beta$-sheet conformation (Yang et al., 2000). Similarly, GFP is expected to unfold to a maximum distance of 79 nm since the protein consists of 239 amino acids and has a folded size of 2 nm (distance between N- and C-termini) (Ormo et al., 1996). Therefore, each unfolding event that resulted in an increase in contour length ranging between 5 and 32 nm was attributed to a FN-III domain, and any unfolding event that resulted in an increase in contour length ranging between 40 and 82 nm was attributed to a GFP. Acknowledgment We thank Professor Piotr Marszalek for discussions and comments on the manuscript. We thank also Mr. Gwangrog Lee for his help with cantilever calibration. The authors would like to thank the National Science Foundation for support through grants NSF DMR-0239769 CAREER AWARD (SZ) and NSF EEC-0210590 NIRT (SZ and RLC), and the National Institutes of Health through grant NIH CA-47056 (HPE). References Leahy, D.J., Aakhl, I., Erickson, H.P., 1996. 2.0 angstrom crystal structure of a four-domain segment of human fibronectin encompassing the RGD loop and synergy region. Cell 84, 155–164.
CHILDREN AND YOUNG ADULT LITERATURE

Marwiyah

Abstract

Keywords: Children's Literature, Young Adult Literature, School Collection

Introduction

Definition of children's literature

There are several definitions of children's literature offered by experts. Children's literature comprises those books written and published for young people who are not yet interested in adult literature or who may not possess the reading skills or developmental understanding necessary for its perusal. It is literature with an added value that separates good books from ordinary ones. That value relates to quality, which is associated with literary standards. This means that children's literature should consist of good-quality trade books. Not every book that children read is part of children's literature: textbooks, comic books, and adult science fiction borrowed from parents are not children's literature. However, some people still believe that anything children read and enjoy is part of their literature. Most of those working in this field state that children's books are books that are not only read and enjoyed by children but are written for children. This means that children's books should meet certain criteria related to literary standards. Children's books are books for readers from birth to adolescence, covering topics relevant to their ages, and they should be interesting to children.

Understanding aspects of child development

As children grow, their interest in types of books may change. It is therefore important to understand children's development and their interests in terms of the kinds of books they need. The four phases of child development are:

1. Infancy through preschool. Children at this age are acquiring language, forming attachments with caregivers, developing locomotor skills and beginning to learn autonomy. Children learn language by listening, both to conversation and to stories and poetry performed by parents, teachers or caregivers.

2. Early childhood (5–8). Children at this age are marked by several important developments:
- developing language
- enjoying achievement (learning to read)
- imitating adult roles
- using "concrete" thought
- projecting an optimistic view

3. Middle childhood (9–12). Children at this age show several typical developments, such as:
• becoming more adult-like in logical thought patterns
• growing increasingly dependent on the peer group
• perfecting skills
• employing metacognitive thought
• moving toward independence from parents
• developing competence in interpersonal and social relationships
The implication for in-school learning is that this is a good time to increase independence, broaden friendships and develop interests such as sports, art or music, but the skills required for academic success become more complex.

4. Adolescence. There are changes such as biological maturation, growing autonomy, the beginning of abstract thought and the establishment of sexual identity.

Value of literature to children

Basically, literature entertains through works of fiction and informs through non-fiction books. More specifically, children can benefit from literature that offers two kinds of value:
1. Personal value, which consists of enjoyment, imagination and inspiration, vicarious experience, understanding and empathy, heritage, moral reasoning, and literary and artistic preferences.
2. Academic value, which is related to improving skills in:
• Reading. Appropriate literature allows children to develop their language.
• Writing, which can be developed by reading literature and listening.
Children will learn the patterns of their favorite authors and adapt them in their writing.
• Content-area subjects, using literature across the curriculum, which requires teachers to use works of literature as teaching materials in the content areas of social studies and history, science, health and possibly math.
• Art appreciation. Illustration is an important element of children's books that can help tell what the story is about (cognitive value) and convey value as art (aesthetic value).

**Books and Young Adults**

*Definition*

To identify books for young adults, we need to understand the term "young adult" and their needs. The term is difficult to define, but generally young adults are thirteen to nineteen years old. At this age, when puberty arrives, young adults undergo changes such as biological development and growth toward maturity. A complex of personal, family, environmental and social factors contributes to these changes.

*Types of literature for young adults*

Defining literature for young adults is likewise difficult. Psychological factors influence their reading interests. Young adults experience crises of identity relating to several relationships:
1. With authority, such as school and parents, which tends to limit their personal autonomy.
2. With peers, where relationships are sometimes transient but can develop into strong and lasting bonds between confidantes.
3. With social norms.
4. With sexual expectations and orientation.
5. With indoctrination of faiths and morals.
6. With one's self-concept.

Young adults often read books especially adapted to their experience. They demand that their literature reflect the world they know and inhabit.\textsuperscript{11} They read for several different purposes: emotional satisfaction, information and aesthetic adventure. Some young adults enjoy several types of literature, such as popular and serious fiction, useful non-fiction and poetry collections.\textsuperscript{12}

**History of children's literature**

Hillman divides the history of children's literature into five periods:\textsuperscript{13}
1. *Before the 16th century.* Literature in this period began as traditional literature taking several forms, such as oral tradition, story-tellers and ritual. The invention of the printing press in the 1440s enabled publication on subjects such as social politics and economics to meet people's needs at that time. The first publications of children's literature were Aesop's Fables, Morte d'Arthur and The History of Reynard the Fox.
2. *16th and 17th centuries.* Several types of literature were produced, such as picture books like *Orbis Pictus*, and fairy tales. The first children's books produced were chapbooks and hornbooks.\textsuperscript{14} The content of books in this period was religiously didactic.
3. *18th century.* The tradition of Mother Goose nursery rhymes developed. The themes varied: political satire, human straits, and human folly. The important moment in this period was the recognition of childhood in literature, which means that modern literature had arrived. Books such as *Two Shoes* were published in 1744. Scientific didacticism replaced the religious kind.

\textsuperscript{11} Ibid, p. 55.
\textsuperscript{12} Hearne, *Choosing Books for Children: A Commonsense Guide*, p. 112.
\textsuperscript{13} Hillman, *Discovering Children's Literature*, p. 21.
\textsuperscript{14} Hornbooks are small wooden boards shaped like paddles and covered with a thin layer of transparent horn.

4.
*19th century.* Children's literature in this period became more varied, including collections of folklore such as *The Ugly Duckling*. Illustrators, aided by new printing technology, produced good picture books. This was the era of the Golden Age.
5. *20th century.* This period was marked by an explosion of picture books. Differences in children's interests in books grew. The genres of children's literature varied: fantasy, historical fiction, folklore, realistic adventure, animal stories, poetry and family stories.

**Evaluation of Children's Literature**

*Literary elements*

These five elements apply only to fiction books. They are:
1. Plot, the element relating to what happens in the story. It may involve a conflict of the main character: person-against-self, person-against-person, person-against-nature, or person-against-society. The plot may be chronological,\(^{15}\) flashback, or episodic.\(^{16}\)
2. Characters. This element concerns the actors in the story. A character may be good (the protagonist) or bad (the antagonist).
3. Setting. This element relates to the time and place where the story occurs. The setting may be vague and general, usually in folktales: "long ago in a cottage in the deep woods."
4. Theme. This element addresses what the story is about. It can be expressed as a complete sentence. A good theme for children should convey a moral message.

\(^{15}\) A chronological plot covers a particular period of time and relates events in order within that period.
\(^{16}\) An episodic plot ties together separate short stories or episodes, each an entity in itself with its own conflict and resolution. See Lynch-Brown, *Essentials of Children's Literature*, p. 26.

5. Style, the element that shows the way authors tell the story. It can be seen in the words chosen by the author, the sentences (are they easy to understand?), and the organization of the book.

*Visual elements*

Since most children's books combine text and illustration, we also need to look at the visual elements:
1. *Line*, which marks the form of a picture and defines the outline. The line generally defines the objects within the picture.
2. *Color*, which is used to convey the situation in which the story occurs, for example calm conveyed by soft colors.
3. *Shape*, which is produced by areas of color and by lines joining and intersecting to suggest the outlines of forms.
4. *Texture*, the impression of how a pictured object feels. It enables artists to provide contrast within the picture.
5. *Composition*, which includes the arrangement of the visual elements within a picture and the way in which these elements relate to one another and combine to make the picture.\(^{17}\)

**Categories of Literature (genre)**

1. **Fiction**
There are several types of fiction books:
- *Poetry*, the expression of ideas and feelings through a rhythmical composition of imaginative and beautiful words.\(^{18}\) It is condensed language and imagery.\(^{19}\) Good poetry depends on these elements: the music of poetry (melody and movement), the use of words (vigorous, rich, delicate words), and content that is built around a subject or ideas. Mother Goose and nursery rhymes are also included in poetry.

\(^{17}\) *Ibid*, p. 32.
\(^{18}\) *Ibid*, p. 44.

- *Folklore.* It explains the phenomena of nature. The most familiar and the most appealing type of folklore is the folktale. The motifs of folklore are various; to account for these motifs is to count the heartbeat of the human race.
Each character represents a part of humanity, each a part of life, each a part of existence.
- *Modern fantasy.* This genre became fully accepted at the beginning of the 19th century. There are various sub-genres of modern fantasy: *animal fantasy*, which blends human characters with animal qualities; *high fantasy*, which refers to the seriousness of purpose and lofty idealism of the hero's quest; *time fantasy*, in which the authors construct parallel worlds that touch our world in magical places; and science fiction, which blends rational technological explanation with a literary form.
- *Contemporary realistic fiction.* This fiction has a strong sense of actuality: the stories are about people and events that could happen in the real world. Realistic fiction illustrates the real world in all its dimensions: humorous, joyful, etc.
- *Historical fiction.* The term implies that an author has created a story set in the past. Historical fiction tells the stories of history; as a distinct genre it consists of imaginative stories grounded in the facts of our past, but it is not biography. The story has imaginary characters and a plot based on realistic events of an earlier time. Historical fiction is realistic but differs from contemporary fiction in that the stories are set in the past rather than the present.

---
20 Sutherland, *Children's Literature*, p. 98.
21 Ibid, p. 103.
24 Ibid, p. 269.

• Fables, myths and epics. These are part of the great stream of folklore. A fable is a brief narrative that takes up abstract ideas of behavior. Myth attempts to explain in complex symbolism the vital outlines of existence.\textsuperscript{26}
• Picture books. A picture book combines text and illustrations, both contributing to the meaning of the story.\textsuperscript{27} This kind of book is primarily good for young children who are acquiring language. Types of picture books are: baby books, interactive books, toy books, wordless books, alphabet books, concept books, counting books, picture-story books, easy-to-read books and graphic novels. Information books that use pictures can be picture books, but they are not fiction.

2. Non-fiction:
• Biography. A biography is a book that gives factual information about the lives of actual people, including their experiences, influences, accomplishments and legacies. Biographies can be classified as authentic biographies, fictionalized biographies, and biographical fiction.\textsuperscript{28}
• Information books. These books can be written on any aspect of the biological, social or physical world, including outer space. This type of book was often seen as boring and difficult to read, but in the twentieth century authors of informational books changed their writing style and made these books as interesting and engaging as fiction.\textsuperscript{29} Informational books are sometimes presented as picture books, primarily for young children.
• Multicultural literature, a collection of trade books, regardless of genre, whose main character is a member of a racial, religious or language microculture other than European-American. Good multicultural literature reflects many aspects of a culture, such as its values, beliefs, ways of life and patterns of thinking. In the Indonesian setting it is good to have Indonesian multicultural books, since there are many cultures in Indonesia.

\textsuperscript{26} Sutherland, \textit{Children…}, p. 127.
\textsuperscript{27} Lynch-Brown, \textit{Essentials of Children's Literature}, p. 76.
\textsuperscript{28} Ibid, p. 164.

• International literature.
In the United States, international literature is defined as literary selections that were originally published for children in a country other than the United States, in a language of that country, and later published in the United States.\(^{30}\) International books are important for both teachers' and students' development. Good stories from other cultures and languages help connect students to potential friends around the world.\(^{31}\) Thus, this type of book is essential for developing understanding and appreciation of other countries and cultures.

**Children's literature and the curriculum**

*Connecting children's literature with the curriculum*

Literature is part of the curriculum in most elementary schools.\(^{32}\) Moreover, children's literature complements the school curriculum. Teachers try to integrate the curriculum and see all disciplines as interconnected.\(^{33}\) Teachers have found that children's literature is a source for classroom integration that complements each area of study by providing purposeful reading. Each genre makes its own distinctive contribution; realistic fiction, for example, complements all subject areas. The plot and setting connect with social studies, science, arithmetic or fine arts, while the theme advances a psychological or philosophical reality that connects with the reader.\(^{34}\) Efforts to connect children's literature with the curriculum are made through activities such as reading aloud.

---
\(^{32}\) Sutherland, *Children…*, p. 204.
\(^{34}\) *Ibid*, p. 79.

Function of literature in the school curriculum

Literature has a leading role in the formal education of children in three related areas:
1. Instructional reading program. Most instructional reading programs recognize the importance of literature. Basal reading textbooks recommend trade books to be used from the beginning of formal reading instruction, and through trade books children gain pleasure in reading. In teaching reading, teachers use library-based programs in which they plan instruction using trade books. This is an approach to teaching reading through the use of trade books. Teachers can use children's books to provide the base for the reading and language arts curriculum of any primary and middle school. The approach is based on the theory that children learn by searching for meaning in the world around them, constantly forming hypotheses, testing them to determine whether they work, and subsequently accepting or rejecting them. With this method, children hear literature read aloud several times a day and discover that good books can entertain them. As a result, they constantly practice reading books they have chosen because they are interested in the topics. Many books are available for activities such as reading aloud; *Red-Eyed Tree Frog*, for example, can be used for reading aloud. This book connects to the curriculum in terms of how children come to understand frogs, for example what they eat.
2. Subject matter areas. This area concerns the use of literature for learning in fields such as social studies and the sciences. Traditionally, these areas have depended on textbooks for whole-class learning. However, the information in a textbook is limited in content because it surveys broad areas of knowledge. Therefore, trade books are important to overcome this limitation. Non-fiction books provide more in-depth consideration of particular topics. Furthermore, literature offers its value in content-area subjects when it is used across the curriculum.

---
37 Hillman, *Discovering Children's Literature*, p. 243.
Recently, the phrase "literature across the curriculum" has often been associated with children's literature. The phrase means the use of literature as teaching material in the content areas of social studies and history, science, health and math. Literature and other curricular areas can be integrated in a variety of patterns. Literature is often combined with other sources of knowledge to broaden the study of a topic in science, social studies, mathematics or the arts. Many non-fiction books (science, social studies, mathematics, language study, the arts) are available in which children can explore almost any topic that interests them. Research on literature across the curriculum by Morrow in 1997 showed that students who received yearlong literature-based reading and literature-based science instruction scored higher than a control group in reading and total language score on the California.
3. Literature program. School literature programs vary widely. Some schools promote either reading for pleasure or the formal study of literature. Most schools acknowledge children's need for pleasurable experiences with literature that enable them to think critically.

How to write children's books

Writing books for children is interesting for some people, primarily those with a great concern for children's books. There are several aspects to consider when writing children's books, as explained by Berthe Amoss and Eric Suben:
1. Fundamental preparation:

40 Lynch-Brown, *Essentials of Children's Literature*, p. 221.
42 Galda, *Literature and Children*, p. 303.

• Read. This activity is important for looking closely at what kinds of books are published for children, what children like most, and so on. Talking to booksellers can help us learn which books are selling best and why. Reading good writing in the area we are interested in is important.\textsuperscript{46}
• Remember. Remembering our own childhood, in terms of the stories and books we liked best, is a good starting point for analyzing why we liked them.
• Watch. Children are good subjects for observation. We need to consider their ages, interests and skills.
• Write. This is the most important step. We can start by writing about what we know in a journal that we should keep.
2. Literary aspects, which relate to theme, plot, character and setting (time and place), as well as format and age level. When we choose a certain topic, for example dinosaurs, we should also decide for what age level we would like to write.
3. Organization, which consists of a beginning, a middle and an ending. To make children's books interesting, good opening words are important to grab readers' attention, and we need to keep readers turning the pages to the end.\textsuperscript{48}

Writing children's books is important for librarians because the number of children's books, primarily in developing countries, is limited and the prices are high. Thus, one strategy for acquiring children's books is to write them. In North America, many parents are very interested in developing children's books by writing them themselves.

---
48 *Ibid*, p. 94.

Concluding Remarks

As educators and librarians, we have to admit that children's literature has a great impact on children's development. Children's books are not only for pleasure; at the same time, they have an essential function in supporting learning integrated within the curriculum. In other words, children's books have both academic and personal value. Therefore, knowing how to recognize good books for children and how to use them to support the learning process is important for both teachers and librarians.
REFERENCES
In vitro digestibility of mountain-grown irrigated perennial legume, grass and forb forages is influenced by elevated non-fibrous carbohydrates and plant secondary compounds Yunhua Zhang†, Jennifer W. MacAdam*, Juan J. Villalba and Xin Dai Departments of Plants, Soils & Climate and Wildland Resources, and Utah Agricultural Experiment Station, Utah State University, Logan, Utah, 84322, USA; †Y. Zhang, School of Resources and Environment, Anhui Agricultural University, Hefei, Anhui Province, 230036, P.R. China. Abstract BACKGROUND: Perennial legumes cultivated under irrigation in the Mountain West USA have non-fibrous carbohydrate (NFC) concentrations exceeding 400 g kg⁻¹, a level commonly found in concentrate-based ruminant diets. Our objective was to determine the influence of NFC concentration and plant secondary compounds on in vitro rumen digestion of grass, legume and forb forages compared with digestion of their isolated neutral detergent fiber (NDF) fraction. Forages were composited from ungrazed paddocks of rotationally stocked irrigated monoculture pastures between May and August 2016, frozen in the field, freeze-dried, and ground. RESULTS: The maximum rate ($R_{\text{Max}}$) of gas production was greater for the legumes alfalfa (ALF; *Medicago sativa* L.) and birdsfoot trefoil (BFT; *Lotus corniculatus* L.) than for the legume cicer milkvetch (CMV; *Astragalus cicer* L.) the grass meadow brome (MBG; *Bromus riparius* Rehm.) and the non-legume forb small burnet (SMB; *Sanguisorba minor* Scop.), and intermediate for the legume sainfoin (SNF; *Onobrychis viciifolia* Scop.). The $R_{\text{Max}}$ of isolated NDF was greatest for BFT and CMV, intermediate for ALF, SNF and SMB and least for MBG. CONCLUSIONS: More than 900 g of organic matter (OM) kg$^{-1}$ dry matter (DM) of legumes was digested after 96 h. Across forages, the extent of whole plant digestion increased with NFC and crude protein (CP) concentrations, decreased with NDF concentrations, and was modulated by secondary compounds. The extent of digestion of isolated NDF decreased with concentration of lignin and residual tannins. Keywords: condensed and hydrolysable tannins; cumulative fermentation gases; irrigated pastures; isolated fiber; non-fibrous carbohydrates; perennial legume forages INTRODUCTION Increasing the forage content of beef finishing diets and dairy rations could reduce feed costs and improve the health and longevity of dairy cows. Intake and digestibility of forages by ruminants are a function of forage neutral detergent fiber (NDF) and energy concentration; however, the digestibility of NDF and the proportion of NDF that is ultimately not digested in the rumen can limit the rate of passage of forages from the rumen and thereby limit intake. The concentration and biochemistry of lignin and the localization of its deposition in forage cell walls, constrains ruminant intake and digestibility, while greater concentrations of non-fibrous carbohydrates (NFC) represent energy for microbial colonization of forages. These variables are, in part, a function of plant phenology and environment. For instance, forages grown under irrigation in the high-elevation Mountain West USA have more NFC and less NDF compared with the same forages cultivated in warmer, more humid environments. Legume forages such as the true clovers or alfalfa (ALF) are commonly grazed only as components of grass-legume mixtures to avoid issues with pasture bloat that can result from the rapid digestion of bloat-causing forages. 
Temperate nutrient-dense legumes such as birdsfoot trefoil (BFT) and sainfoin (SNF) are non-bloating due to the presence of condensed tannins (CT) that precipitate excess plant protein in the rumen and reduce protein availability, disrupt biofilms, or alter rumen microbial ecology. Alternatively, vein structural tissue holds the upper and lower epidermis of CMV leaves together, preventing bloat by slowing rumen microbial access to plant cell contents. The objective of this study was to compare the rate and extent of digestion of two non-tannin (ALF and CMV) and two CT-containing (BFT and SNF) legumes with a grass (MBG) and a non-legume forb (SMB) when these forages were grown under irrigation in the Mountain West region. Grasses have more fiber and less lignin than legumes, and the non-legume forb SMB is persistent and palatable to livestock but contains a hydrolysable tannin (HT) that reduces nitrogen excretion in the urine of beef cattle. The *in vitro* gas production technique was used to study the influence of NDF, NFC, crude protein (CP), lignin, CT, HT and other nutrients and secondary plant compounds on the rumen digestibility of these forages.

**MATERIALS AND METHODS**

**Forage collection**

The six forage species used in this study were composited from irrigated monoculture pastures (0.365 ha each) that were rotationally stocked with dry beef cows from late May through mid-August of 2016. Pastures were located in Lewiston, Utah (41.95°N; 111.87°W; altitude 1370 m.a.s.l.). Composited material was collected weekly (CMV, MBG and SMB; 11 dates x 5 reps; 1 g DM per sample) or monthly (ALF, BFT, SNF; 3 dates x 9 reps; 2 g DM per sample) between 8.00 h and 12.00 h from paddocks that would be grazed next in the rotation, by clipping ~10 samples of each species to a 7.6 cm stubble. A total of approximately 250 g fresh weight was collected by walking a corner-to-corner transect of the paddock. Grass pasture regrowth was maintained in the vegetative stage and all forbs were maintained in the flowering stage during this period. Forage samples were immediately frozen under dry ice and stored at –20°C until freeze-dried, then milled to pass the 1-mm screen of a Wiley mill (Thomas Scientific, Swedesboro, NJ, USA).

**NDF isolation**

Neutral detergent fiber of each species was isolated using the standard protocol for the ANKOM A200 Fiber Analyzer (ANKOM Technology, Macedon, NY, USA). Following an acetone rinse, samples were dried at 102°C and weighed to determine NDF concentration. Isolated NDF was soaked overnight in 10% (v/v) tert-butyl alcohol and 90% 1 M sodium sulfate at 39°C and rinsed successively with hot water, 95% ethanol, and acetone to remove traces of neutral detergent solution.

**In Vitro Rumen Fluid Fermentation**

The *in vitro* methodology of Theodorou et al. was used to determine the kinetics of rumen fermentation. Three runs of the fermentation were carried out over a 3-wk period. Triplicate samples of approximately 0.4 g of each whole plant and NDF isolate sample, plus a BFT control and a blank, were included in each run. Fermentations were carried out in 125-mL borosilicate glass serum bottles (Wheaton, Boston, MA, USA). A 40-mL aliquot of buffer containing macro- and microminerals, artificial saliva, a reducing solution, and resazurin was added to forage in each bottle, which was flushed with CO₂ and sealed with a butyl rubber septum and aluminum crimp cap (Wheaton). All reagents were purchased from Sigma-Aldrich (St.
Louis, MO). Bottles with forage and buffer were stored overnight at 4°C then warmed to 39°C before 20 mL rumen fluid was added. Blank vials contained buffer and rumen fluid only. Animal handling was conducted under Utah State University Institutional Animal Care and Use Committee protocol #2834. Rumen fluid was collected approximately 4 h after a meal of ALF hay. Fluid was squeezed from the mat of fermenting forage of a ruminally cannulated Angus cow into a pre-warmed thermal flask, transported 15 min. to the laboratory, strained twice through three layers of cheesecloth and maintained at 39°C under CO₂ gas. The pH of rumen fluid was 6.3 ± 0.4. When fermentation was terminated at 96 h, mean pH of all samples was 6.75 ± 0.05 (SD); the pH of blanks was 7.02 ± 0.20, and the pH of controls was 6.92 ± 0.08. Gas pressure measurements were made with a needle-equipped pressure transducer (PX409-015GUSBH; Omega Engineering Inc., Stamford, CT, USA) at 1, 2, 4, 6, 8, 10, 12, 18, 24, 36, 48, 72, and 96 hours after inoculation, and accumulated gas was vented after each measurement. Fermentation was stopped by cooling serum bottles to 4°C in a walk-in freezer. Undigested fermentation residues were collected in ANKOM in situ bags with 50 µm porosity, thoroughly rinsed with distilled water, and dried at 60°C to obtain dry mass. Gas pressure was transformed to gas volume using the equation gas volume (mL) = 5.3407 × gas pressure (psi). A single phasic model (Equation 1) for cumulative gas volume (G) was used to calculate fermentation kinetics parameters where A (mL g⁻¹ organic matter; OM) was the asymptotic (maximum) gas volume, B (hours) was the incubation time (t) at which half the maximum amount of gas had been formed, and C was a constant describing the sharpness of the switching characteristic of the cumulative gas curve \(^{21}\). As the value of C increases, cumulative gas production curves become more sigmoidal and increase in slope, \[ G = \frac{A}{1 + (B/t)^C} \] Equation (1) The time when the maximum rate of substrate digestion occurred (\(t_{RM}\)) and the maximum rate of substrate digestion (\(R_{Max}\)) were calculated from A, B, and C using Equations 2 and 3 \(^{21}\). \[ t_{RM} = B(C - 1)^{1/c} \] Equation (2) \[ R_{Max} = \frac{1}{B} (C - 1)^{(1-\frac{1}{c})} \] Equation (3) Forage Analyses Dry matter (DM) of forages and isolated fiber was determined by drying three subsamples of each forage for 48 h at 105°C. Digestible dry matter (DDM) was calculated by subtracting DM of undigested fermentation residue at the end of the fermentation from substrate DM and dividing by substrate DM. Residues were ashed at 550°C and subtracted from substrate DM to estimate substrate OM. Undigested OM was calculated by subtracting ash from undigested fermentation residue DM; some minerals may have been solubilized during fermentation\(^{22}\). Digestible organic matter (DOM) was calculated by subtracting undigested OM from substrate OM and dividing by substrate OM. Forage composites were analyzed by near infrared spectroscopy (NIRS; FOSS 2500 Feed Analyser, Foss Analytics, Hilleroed, Denmark) to estimate CP, amylase-treated NDF \((aNDF)\), NDF digestibility as a proportion of NDF \((NDFD)\), acid detergent fiber \((ADF)\), NFC, ash, acid detergent lignin \((ADL)\), fat, and total digestible nutrients \((TDN)\) using the March 2018 NIRS Consortium mixed hay equation (https://www.nirsconsortium.org/). The calibration set for this equation does not include SMB. 
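As an illustration of the calculations described above, the sketch below converts transducer pressure to gas volume with the 5.3407 mL psi⁻¹ calibration, fits the single-phasic model of Equation (1) to a synthetic cumulative-gas series, and derives $t_{RM}$ and $R_{Max}$ from Equations (2) and (3). This is a Python stand-in for the PROC NLIN fitting step, not the analysis code used in the study; the parameter values and variable names are illustrative, and NumPy and SciPy are assumed to be available.

```python
# Sketch of the gas-kinetics calculations described above (assumed workflow,
# not the study's SAS code): convert transducer pressure to volume, fit the
# single-phasic model of Eq. (1), then derive t_RM and R_Max via Eqs. (2)-(3).
# The data points below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def gas_volume_ml(pressure_psi):
    """Gas volume from headspace pressure, using the calibration in the text."""
    return 5.3407 * np.asarray(pressure_psi, dtype=float)

def cumulative_gas(t, a, b, c):
    """Eq. (1): G = A / (1 + (B/t)^C); t in h, G in mL per g OM."""
    return a / (1.0 + (b / t) ** c)

def rate_parameters(b, c):
    """Eqs. (2)-(3): time of maximum fractional rate (h) and maximum rate (1/h)."""
    t_rm = b * (c - 1.0) ** (1.0 / c)
    r_max = (1.0 / b) * (c - 1.0) ** (1.0 - 1.0 / c)
    return t_rm, r_max

# e.g. a 1.5 psi headspace reading corresponds to ~8 mL of gas:
print(f"1.5 psi -> {gas_volume_ml(1.5):.1f} mL")

# Synthetic cumulative-gas data generated from known parameters plus noise.
t_h = np.array([1, 2, 4, 6, 8, 10, 12, 18, 24, 36, 48, 72, 96], dtype=float)
true = (220.0, 11.0, 1.6)                       # A (mL/g OM), B (h), C (-)
g_ml = cumulative_gas(t_h, *true) + np.random.default_rng(1).normal(0, 2, t_h.size)

(a, b, c), _ = curve_fit(cumulative_gas, t_h, g_ml, p0=[200.0, 10.0, 1.5])
t_rm, r_max = rate_parameters(b, c)
print(f"A={a:.0f} mL/g OM, B={b:.1f} h, C={c:.2f}, t_RM={t_rm:.1f} h, R_Max={r_max:.3f} h^-1")
```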
The WSC concentration of forage samples was determined by the phenol-sulfuric acid method \(^{23}\), and their starch concentration by the glucose oxidase-peroxidase method \(^{24}\). Total CT concentrations were determined by the butanol-HCl-acetone method \(^{25}\). Condensed tannin standards were isolated from BFT and SNF \(^{26}\). Hydrolyzable tannins were determined for SMB relative to methyl gallate standards and a tannic acid check \(^{27}\). **Experimental Design and Statistical Analyses** Cumulative fermentation gas kinetics parameters were estimated using PROC NLIN and then analyzed using PROC GLIMMIX (SAS/STAT 14.3, SAS Institute Inc., Cary, NC, USA) with a mixed model in which run was the random factor, and species and material (whole plant or isolated fiber) with their interaction were fixed effects. To account for the correlation of whole plant and isolated fiber materials from the same species, a heterogeneous compound-symmetry structure (CSH) was included based on AIC and BIC selection. Pairwise differences among the least squares means (LSMEANS) of the kinetics parameters were tested with Tukey-Kramer multiplicity adjustment. PROC CORR was used to determine correlation between whole plant DDM and CP and between ADL and undigested OM. Tests were considered significant at the 0.05 level. RESULTS The four legumes used in this study (ALF, BFT, CMV, and SNF) had similar CP, NDF and NFC concentrations (Table 1). The mean CP concentrations of these legumes was approximately twice that of the grass and the non-legume forb, and their NDF concentrations were approximately half that of the grass and the forb (Table 1). The NFC concentration of the non-legume forb was similar to legume NFC concentrations, and the NFC concentrations of all five forbs was more than twice that of the grass (Table 1). Acid detergent lignin concentrations were similar for three of the legumes (ALF, BFT, and CMV), while ADL values were similar for SNF and SMB; the four legumes and the non-legume forb had ADL values 1.4 to 2 times greater than that of the grass. Neutral detergent fiber concentrations estimated by NIRS (Table 1) and NDF concentrations determined by ANKOM digestion (Table 2) were ranked similarly, including that of SMB. Values for NDF by NIRS averaged 7% less than ANKOM analysis. Water-soluble carbohydrate (WSC) concentrations (Table 2) were greatest for the legumes BFT and CMV and the forb SMB, and were greater for the legume SNF than the grass. The WSC concentration of ALF was less than that of other forages tested, while starch concentration was greatest for ALF. In whole plant material, the CT concentration of SNF was more than double that of BFT, and the concentration of HT in SMB was intermediate to the CT concentrations of BFT and SNF (Table 2). Isolated NDF of SMB retained about half the HT concentration of whole plant material, while isolated NDF of BFT and SNF retained 28 and 21%, respectively, of the CT in whole plant material of these species (Table 2). Whole plant DDM and DOM did not differ among legumes and were greater than for the grass; grass values were greater than the non-legume forb SMB (Table 3). There was a positive correlation between whole plant DDM and CP ($r = 0.94$). The concentrations of undigested OM for whole plant material of MBG and SMB were approximately two and three times greater, respectively, than that of the four legumes (Table 3). The DDM of isolated NDF was greatest for CMV followed by the grass, and least for SMB (Table 3). 
Acid detergent lignin concentration was positively correlated with undigested OM of isolated NDF ($r = 0.86$). For isolated NDF, more OM remained undigested for the three tannin-containing forages than for other forage species. The NDF of SMB contained the greatest concentration of HT and had the greatest concentration of undigested OM, followed by SNF and BFT. Forage species had a significant effect on $R_{\text{Max}}$ (Table 4) of both whole plant material and isolated NDF. The $R_{\text{Max}}$ of ALF and BFT was greater than for other forages except SNF, and the $R_{\text{Max}}$ of isolated NDF was greater for BFT and CMV than MBG. Time to reach $R_{\text{Max}}$ ($t_{R_{\text{M}}}$) was less than 8 h for whole plant material of all species and more than 28 h for isolated NDF of all species (Table 4). The $R_{\text{Max}}$ of isolated NDF reached values equal to or greater than the rate observed for whole plant material of CMV, MBG, and SMB. The asymptotic gas volume (Parameter A) is the cumulative gas production at the end of fermentation predicted from analysis of cumulative gas as a function of time (Fig. 1). Asymptotic gas volume of MBG whole plant material was greater than that of CMV and SMB whole plant material, and did not differ from that of ALF, BFT, and SNF whole plant material (Table 5). Asymptotic gas volume of isolated NDF was greatest for CMV and MBG and least for SMB. Acid detergent lignin concentration was negatively correlated with asymptotic gas volume of isolated NDF ($r = -0.93$). Hours to one-half asymptotic cumulative gas volume (Parameter B; Table 5) is related to both the rate and extent of fermentation; a lower value for Parameter B reflects a more rapid rate of rumen digestion. Among whole plant material, time to one-half gas volume was greatest for MBG and SMB, and did not differ from that of CMV. Parameter B of whole plant CMV did not differ from SNF, and Parameter B of whole plant SNF did not differ from ALF and BFT, which reached one-half asymptotic cumulative gas volume the most quickly; ALF and BFT also had the greatest whole plant $R_{\text{Max}}$. Time to one-half asymptotic gas volume of whole plant material was negatively correlated with CP ($r = -0.90$). For isolated NDF, time to one-half asymptotic cumulative gas volume was greater for MBG than for ALF, BFT, and CMV and did not differ from SNF and SMB (Table 5). For whole plant material, Parameter C was greater for ALF and BFT than for CMV, MBG, and SMB indicating a sharper acceleration of fermentation rate following the initial lag phase. Parameter C was greater for whole plant SNF than MBG and SMB and did not differ from ALF, BFT or CMV (Table 5). Values of Parameter C (Table 5) were greater for isolated NDF than for whole plant material ($p < 0.01$) and did not differ among forages (Fig. 1). **DISCUSSION** The NFC concentrations of legumes grown in the Mountain West were comparable to those for corn silage (358 g kg$^{-1}$ DM) and beet pulp (383 g kg$^{-1}$ DM) $^{28}$. The elevated NFC concentrations of forages cultivated in the Mountain West likely result from the greater net photosynthesis of long, sunny days $^{29}$ combined with cool night temperatures that reduce the rate of respiration that consumes non-structural carbohydrates$^{30}$. Starch concentration was least in the cool-season grass, which stores carbohydrates as fructans $^{31}$ in leaf and stem bases below grazing height. 
Alfalfa starch concentration in this study was greater than values generally reported for forage legumes; however, the starch concentration of red clover (*Trifolium pratense* L.) leaves reached 350 g kg$^{-1}$ DM by the end of the day when grown at day/night temperatures of 19–23°C/14–16°C $^{32}$. When legume WSC and starch concentrations were summed by species, they accounted for 27% (CMV) to 46% (ALF) of NFC; the balance of NFC consists of organic acids and ND-soluble fiber, including pectins and some hemicelluloses $^{33}$. Increasing the NFC of high-forage diets by adding grain usually decreases fiber digestibility \(^{34}\), augments volatile fatty acid (VFA) synthesis and may increase nutrient flow from the rumen \(^{35,36}\). In this study, the undigested OM of isolated legume NDF ranged from 225 to 503 g kg\(^{-1}\) DM after 96 h, while legume whole plant undigested OM ranged from 78 to 88 g kg\(^{-1}\) DM. The whole plant undigested OM of SMB, the non-legume forb, was nearly three times greater than for the legumes even though the NFC of all five forbs was similar. The reduced digestibility of SMB whole plant and isolated fiber was likely due to the antimicrobial activity of HT, which is known to negatively affect digestion \(^{37}\). These results demonstrate that NFC does not inhibit NDF digestion, and that the extent of forage digestion is influenced by the concentration of NDF and can be modulated by the presence of plant secondary compounds in NDF. Whole plant 96-h DDM of the tannin-containing legumes BFT and SNF was similar to that of the non-tannin legumes ALF and CMV. Grass whole plant material produced the greatest asymptotic fermentation gas volume (Parameter A). Diets with greater fiber concentrations result in greater rumen acetate-to-propionate ratios\(^{38}\), and acetate synthesis results in a greater volume of gas production than propionate synthesis \(^{39,40}\). Comparing the gas volume of isolated forage NDF eliminates variation due to acetate and propionate contributions, since non-structural carbohydrates are the main source of propionate synthesis. Cicer milkvetch contains a water-soluble arabinogalactan protein that prevents cellulolytic bacteria from digesting cellulose \(^{41,42}\), and cattle gained significantly less on irrigated, rotationally stocked CMV than BFT pastures \(^{43}\). This arabinogalactan protein may account for the lower $R_{\text{Max}}$ and greater Parameter B of whole plant CMV, which suggest inhibited short-term digestion compared with ALF and BFT. However, these parameters were not lower for CMV isolated fiber. While ALF and CMV had similar whole plant ADL concentrations, ALF has more rigid, erect stems, whereas CMV has less rigid, vine-like stems. These differences in stem structure could result from differences in the amount, type or localization of lignin in stem fiber cell walls and could explain the greater isolated NDF DDM of CMV compared with ALF. The grass had the least ADL, which was positively correlated with undigested NDF OM and negatively correlated with isolated NDF asymptotic gas volume. The extent of digestion of SMB isolated NDF was likely constrained by both elevated ADL and the retention in isolated NDF of about 50% of the whole plant HT. More OM remained undigested in NDF isolates from tanniferous species that retained tannins in the NDF fraction. The HT in SMB bound more dietary N than the CT in BFT or SNF when beef cows were fed tannin-containing hays as their complete diet.
Here, the NDF of SMB, with half the whole-plant concentration of HT, had the greatest undigested OM. Tannins may bind to structural cell wall proteins and inhibit enzymes secreted by wall-digesting rumen microbes. The concentration of structural proteins in monocots is approximately 1% while structural proteins comprise as much as 10% of the cell walls of dicots, including legumes and forbs. Our demonstration that the NDF fraction of tanniferous forages retains significant CT and HT concentrations suggests tannins could influence the digestion of both the cell contents and the cell wall fraction of forages. CONCLUSIONS The perennial legumes used in this study had elevated whole plant CP compared with the grass MBG and the forb SMB. The four legumes and the forb, SMB, had elevated NFC concentrations compared with the grass. The four legumes had the greatest DDM and DOM of the six forages, and the least undigested OM after 96 h. The digestion of isolated NDF, however, was more variable and reduced by the combined concentrations of lignin and residual tannin. Among the legumes, ALF and BFT had similarly elevated maximal rates of digestion, abbreviated times to one-half asymptotic cumulative gas production, and sharp inflection points in their cumulative gas curves. While the CT concentrations of BFT and SNF appeared to affect isolated NDF digestion, they did not reduce the extent of forage digestion measured as DDM. However, some combination of reduced protein concentration, elevated NDF and the presence of HT inhibited both the rate and extent of SMB digestion. While the fiber of CMV was rapidly and more completely digested than that of other forage species, the digestion of CMV whole plant material was extensive but less rapid than other legumes, perhaps due to the presence of a water-soluble arabinogalactan that inhibits fiber digestion. Reducing the concentration of secondary compounds such as the arabinogalactan in CMV or the HT in SMB would be worthy plant breeding goals to enhance forage digestibility and improve the nutrition and productivity of ruminants grazing these forages. CONFLICT OF INTEREST The authors have no conflict of interest that might have influenced this study. ACKNOWLEDGEMENTS This research was supported by National Natural Science Foundation of China grant 31872418 and by the Utah Agricultural Experiment Station, Utah State University, and approved as journal paper number 9167. We thank Dr. Jim Pfister and Kermit Price for assistance with NDF isolation, and Casey Spackman and Clint Stonecipher for assistance with the fermentation study and rumen fluid collection. We also thank Dr. Mary Beth Hall for guidance on WSC and starch analysis of forages. REFERENCES Figure Legend Fig. 1. Cumulative gas production of whole plant material (A) and isolated neutral detergent fiber (B) for an *in vitro* fermentation carried out for 96 h, expressed as mL gas g⁻¹ digestible organic matter (DOM). Error bars represent SEM; n = 3. Table 1. Near infrared spectroscopy (NIRS) predictions of for nutritive value composited from pre-grazed rotationally stocked irrigated pastures from May through August of 2016. 
<table> <thead> <tr> <th>Species</th> <th>CP</th> <th>ADF</th> <th>aNDF</th> <th>NDFD</th> <th>NFC</th> <th>Ash</th> <th>ADL</th> <th>Fat</th> <th>TDN</th> </tr> </thead> <tbody> <tr> <td></td> <td>g kg(^{-1}) DM</td> <td>g kg(^{-1}) NDF</td> <td>g kg(^{-1}) DM</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>ALF</td> <td>259</td> <td>198</td> <td>226</td> <td>515</td> <td>441</td> <td>68.9</td> <td>50.8</td> <td>14.5</td> <td>700</td> </tr> <tr> <td>BFT</td> <td>259</td> <td>179</td> <td>192</td> <td>373</td> <td>481</td> <td>62.9</td> <td>54.2</td> <td>16.0</td> <td>822</td> </tr> <tr> <td>CMV</td> <td>253</td> <td>212</td> <td>238</td> <td>671</td> <td>424</td> <td>80.2</td> <td>50.8</td> <td>14.8</td> <td>784</td> </tr> <tr> <td>SNF</td> <td>218</td> <td>233</td> <td>252</td> <td>320</td> <td>476</td> <td>48.8</td> <td>73.1</td> <td>07.0</td> <td>760</td> </tr> <tr> <td>MBG</td> <td>136</td> <td>370</td> <td>588</td> <td>618</td> <td>190</td> <td>80.3</td> <td>36.8</td> <td>27.9</td> <td>604</td> </tr> <tr> <td>SMB</td> <td>122</td> <td>298</td> <td>412</td> <td>487</td> <td>401</td> <td>59.8</td> <td>77.4</td> <td>34.7</td> <td>686</td> </tr> </tbody> </table> CP, crude protein; ADF, acid detergent fiber; aNDF, neutral detergent fiber assayed with a heat-stable α-amylase; NDFD, NDF digestibility; NFC, non-fibrous carbohydrates; ADL, acid detergent lignin; TDN, total digestible nutrients. ALF, alfalfa; BFT, birdsfoot trefoil; CMV, cicer milkvetch; SNF, sainfoin; MBG, meadow bromegrass; SMB, small burnet. NIRS Consortium calibration equations do not include small burnet. Table 2. Fiber, carbohydrate and tannin concentrations of whole plant material and tannin concentrations of isolated neutral detergent fiber (NDF) of forages composited from pre-grazed rotationally stocked irrigated pastures from May through August of 2016. <table> <thead> <tr> <th>Species</th> <th>ANKOM aNDF (SE)</th> <th>WSC (SD)</th> <th>Starch (SD)</th> <th>Whole Plant Tannins (SD)</th> <th>NDF Tannins (SD)</th> </tr> </thead> <tbody> <tr> <td>ALF</td> <td>260d (0.29)</td> <td>86.5d (1.36)</td> <td>67.0a (0.26)</td> <td></td> <td></td> </tr> <tr> <td></td> <td>125.9a (4.20)</td> <td>36.5b (0.17)</td> <td>21.2c (1.86)</td> <td>6.0c (0.41)</td> <td></td> </tr> <tr> <td>BFT</td> <td>245e (0.27)</td> <td>130.1a (2.01)</td> <td>15.3d (0.44)</td> <td></td> <td></td> </tr> <tr> <td>CMV</td> <td>261d (0.15)</td> <td>110.1b (7.28)</td> <td>31.8c (0.10)</td> <td>56.1a (2.07)</td> <td>11.8b (0.19)</td> </tr> <tr> <td>SNF</td> <td>300c (0.24)</td> <td>92.3c (4.54)</td> <td>8.0e (0.30)</td> <td></td> <td></td> </tr> <tr> <td>MBG</td> <td>577a (0.07)</td> <td>131.1a (5.44)</td> <td>14.0d (1.57)</td> <td>39.6b (1.28)</td> <td>19.9a (1.28)</td> </tr> <tr> <td>SMB</td> <td>406b (0.23)</td> <td>130.1a (2.01)</td> <td>15.3d (0.44)</td> <td></td> <td></td> </tr> </tbody> </table> ALF, alfalfa; BFT, birdsfoot trefoil; CMV, cicer milkvetch; MBG, meadow bromegrass; SMB, small burnet; SNF, sainfoin. For neutral detergent fiber (aNDF), grass n=46 and forb n=69; aNDF values followed by the same letter are not different at $p < 0.05$ based on the Tukey-Kramer multiplicity adjustment. Water-soluble carbohydrate (WSC), starch and tannin data are the means of three laboratory replicates. ALF, CMV and MBG contain no condensed or hydrolyzable tannins. Table 3. 
Characteristics of whole plant and isolated neutral detergent fiber (NDF) of forages composited from pre-grazed rotationally stocked irrigated pastures from May through August of 2016. Data are the means of three laboratory runs; standard errors in parentheses. <table> <thead> <tr> <th>Species</th> <th>Whole Plant Material</th> <th>Isolated NDF</th> </tr> </thead> <tbody> <tr> <td></td> <td>DDM</td> <td>DOM</td> </tr> <tr> <td>ALF</td> <td>833a</td> <td>917a</td> </tr> <tr> <td></td> <td>(9.6)</td> <td>(6.7)</td> </tr> <tr> <td>BFT</td> <td>831a</td> <td>916a</td> </tr> <tr> <td></td> <td>(14.0)</td> <td>(11.3)</td> </tr> <tr> <td>CMV</td> <td>852a</td> <td>918a</td> </tr> <tr> <td></td> <td>(7.4)</td> <td>(7.4)</td> </tr> <tr> <td>SNF</td> <td>848a</td> <td>908a</td> </tr> <tr> <td></td> <td>(13.6)</td> <td>(9.7)</td> </tr> <tr> <td>MBG</td> <td>738b</td> <td>796b</td> </tr> <tr> <td></td> <td>(8.2)</td> <td>(5.8)</td> </tr> <tr> <td>SMB</td> <td>693c</td> <td>758c</td> </tr> <tr> <td></td> <td>(9.2)</td> <td>(8.0)</td> </tr> </tbody> </table> DDM, digestible dry matter; DOM, digestible organic matter; OM, organic matter. ALF, alfalfa; BFT, birdsfoot trefoil; CMV, cicer milkvetch; SNF, sainfoin; MBG, meadow bromegrass; SMB, small burnet. Values within a column followed by the same letter are not different at $p < 0.05$ based on the Tukey-Kramer multiplicity adjustment. Table 4. Maximum rate ($R_{\text{Max}}$) and time during fermentation at which maximum rate of cumulative gas production is reached ($t_{R_{\text{Max}}}$) based on \textit{in vitro} fermentation of whole plant and isolated neutral detergent fiber (NDF) expressed on an organic matter (OM) basis. <table> <thead> <tr> <th>Species</th> <th>ALF</th> <th>BFT</th> <th>CMV</th> <th>SNF</th> <th>MBG</th> <th>SMB</th> </tr> </thead> <tbody> <tr> <td></td> <td>$R_{\text{Max}}$</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Whole</td> <td>0.071a</td> <td>0.072a</td> <td>0.040b</td> <td>0.053ab</td> <td>0.032b</td> <td>0.034b</td> </tr> <tr> <td>NDF</td> <td>0.046ab</td> <td>0.052a</td> <td>0.052a</td> <td>0.047ab</td> <td>0.040b</td> <td>0.044ab</td> </tr> <tr> <td></td> <td>$t_{R_{\text{Max}}}$</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Whole</td> <td>28.5 (2.41)</td> <td>30.4 (2.41)</td> <td>30.4 (2.41)</td> <td>33.0 (2.41)</td> <td>33.7 (2.41)</td> <td>31.7 (2.41)</td> </tr> <tr> <td>NDF</td> <td>28.5 (2.41)</td> <td>30.4 (2.41)</td> <td>30.4 (2.41)</td> <td>33.0 (2.41)</td> <td>33.7 (2.41)</td> <td>31.7 (2.41)</td> </tr> </tbody> </table> ALF, alfalfa; BFT, birdsfoot trefoil; CMV, cicer milkvetch; SNF, sainfoin; MBG, meadow bromegrass; SMB, small burnet. Data are the means of three laboratory runs and three replicates in each run; standard errors in parentheses. Values within a row followed by the same letter are not different at $p < 0.05$ based on the Tukey-Kramer multiplicity adjustment. Table 5. Kinetics of cumulative gas production from *in vitro* fermentation of whole plant and isolated neutral detergent fiber (NDF)\(^2\). Parameters include asymptotic cumulative gas production (A) expressed on an organic matter (OM) basis, time to one-half asymptotic cumulative gas volume (B), and a constant (C) describing the sharpness of the sigmoidal inflection point. 
<table> <thead> <tr> <th>Species</th> <th>ALF</th> <th>BFT</th> <th>CMV</th> <th>SNF</th> <th>MBG</th> <th>SMB</th> </tr> </thead> <tbody> <tr> <td></td> <td>mL g(^{-1}) OM</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Whole</td> <td>194ab (13.5)</td> <td>187ab (13.0)</td> <td>162b (11.3)</td> <td>171ab (11.9)</td> <td>207a (14.4)</td> <td>164b (11.4)</td> </tr> <tr> <td>NDF</td> <td>164b (8.6)</td> <td>152b (7.9)</td> <td>188a (9.8)</td> <td>123c (6.4)</td> <td>181a (9.4)</td> <td>104d (5.4)</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Parameter B</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td>Whole</td> <td>12.4c (1.34)</td> <td>12.2c (1.32)</td> <td>20.1ab (1.44)</td> <td>15.9bc (1.72)</td> <td>26.1a</td> <td>26.5a (2.86)</td> </tr> <tr> <td>NDF</td> <td>26.3b (1.65)</td> <td>26.5b (1.66)</td> <td>26.4b (1.65)</td> <td>28.8ab (1.80)</td> <td>30.9a</td> <td>28.8ab</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Parameter C</th> <th></th> <th></th> <th></th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td>Whole</td> <td>2.51a (0.071)</td> <td>2.48a (0.070)</td> <td>2.29bc (0.065)</td> <td>2.40ab (0.068)</td> <td>2.21c (0.063)</td> <td>2.13c (0.060)</td> </tr> <tr> <td>NDF</td> <td>3.30 (0.172)</td> <td>3.57 (0.186)</td> <td>3.57 (0.186)</td> <td>3.55 (0.185)</td> <td>3.33</td> <td>3.41 (0.177)</td> </tr> </tbody> </table> ALF, alfalfa; BFT, birdsfoot trefoil; CMV, cicer milkvetch; SNF, sainfoin; MBG, meadow bromegrass; SMB, small burnet. Data are the means of three laboratory runs and three replicates within each run; standard errors in parentheses. Values within a row followed by the same letter are not different at $p < 0.05$ based on the Tukey-Kramer multiplicity adjustment.
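The kinetic parameters reported in Tables 4 and 5 (asymptotic volume A, half-time B, shape constant C, and the derived R_Max and t_RMax) can be tied together with a simple sigmoidal gas-production model. The sketch below is a minimal illustration, assuming the commonly used form G(t) = A / (1 + (B/t)^C); the exact model fitted in this study is not restated here, and the parameter values in the example are invented for demonstration rather than taken from the tables.

```python
import numpy as np

def gas_volume(t, A, B, C):
    """Cumulative gas production (mL per g OM) at time t (h).

    Assumes a sigmoidal model of the form G(t) = A / (1 + (B / t)**C),
    where A is the asymptotic volume, B the time at which half of A is
    reached, and C a shape constant controlling the sharpness of the
    inflection point.
    """
    t = np.asarray(t, dtype=float)
    return A / (1.0 + (B / np.maximum(t, 1e-9)) ** C)

def max_rate(A, B, C, t_max=96.0, n=20000):
    """Numerically locate the maximum rate of gas production (R_Max)
    and the time at which it occurs (t_RMax) on a fine time grid."""
    t = np.linspace(1e-3, t_max, n)
    g = gas_volume(t, A, B, C)
    rate = np.gradient(g, t)          # mL g^-1 OM h^-1
    i = int(np.argmax(rate))
    return rate[i], t[i]

# Hypothetical parameter set, for illustration only (not taken from the tables).
R, tR = max_rate(A=190.0, B=15.0, C=2.5)
print(f"R_Max = {R:.2f} mL g^-1 OM h^-1 at t_RMax = {tR:.1f} h")
```

With a fitted (A, B, C) triple for each fermentation run, the same numerical differentiation yields R_Max and t_RMax directly, which is one way the rate and timing quantities of Table 4 can be related to the curve parameters of Table 5.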
Paul Frommer ’65 has plenty of words to describe his introduction to the world of major motion pictures. It’s been remarkable. It’s been extraordinary. And it’s been total keye’ung. That’s Na’vi for “insanity.” Na’vi, the language of the humanoid inhabitants of the planet Pandora, the setting of the blockbuster film Avatar, is Frommer’s brainchild. And like any child, it’s changed his life considerably. It all started in 2005, as the linguist-turned executive was teaching at the University of Southern California’s Marshall School of Business. A friend from the linguistics department, in USC’s college of arts and sciences, forwarded to Frommer an e-mail that he and the more than 20 other members of the department had received from a representative of Lightstorm Entertainment, the production company of director James Cameron. Cameron, the creator of Titanic, at that time the largest grossing film in movie history, was looking for someone to invent a new language, to be spoken by an extraterrestrial people who would be the focus of his next movie, then called Project 880. By Karen McCally ’02 (PhD) Although he earned a doctorate in linguistics from USC and later published in the field, Frommer pursued a career as a strategic planner for a Los Angeles marketing firm and now teaches courses on business communication. “When I saw the e-mail, I said ‘whoa!’,” he says. “I jumped on it.” This spring, as the final product of Cameron’s vision, Avatar, has surpassed Titanic as the highest grossing film of all time, Frommer’s inbox overflows with messages—hundreds, he says—from fans of the movie who want to learn and write in Na’vi. Fans have also launched a Na’vi Web site and a discussion forum, to which there are more than 100,000 posts. Many of the fans have already mastered the language, composing e-mails to Frommer entirely in the language. He calls the response both “astonishing and gratifying.” “People go to the movie, and they’re just swept away,” he says. “It touches people on a very deep level, and they come away wanting to connect with Pandora. One way to do that is through the language.” At first glance, learning Na’vi might not seem so daunting. Its current vocabulary is small, consisting of a little more than 1,000 words. That’s miniscule compared to the vocabulary of a typical English-speaking adult, which is about 65,000 words, according to Rochester’s Elissa Newport, the George Eastman Professor of Brain and Cognitive Sciences and the chair of the department. “But the size of the vocabulary isn’t what makes it a language or what makes it interesting,” she adds. “The size of the vocabulary is the least of the characteristics you would look at to decide, ’Is this really a language?’” Newport, an internationally recognized expert on language acquisition, says all languages have the same basic elements: A set of sounds (or hand signs—Newport studies signed languages as well), and a system of rules for combining those elements into words, and words into sentences. So to create Na’vi, Frommer started, as a linguist would, by defining its sounds. “Something that I enjoy doing, and I think many linguists do as well, is just playing around with sounds, just making funny sounds and rolling them around in your mouth, and seeing how it feels,” he says. “You realize you can have some very interesting combinations.” But there should be some limit to those combinations. 
“You want to come up with something that has some sort of distinctiveness to it, and one way you do that is by deciding what sounds go into the mix, but just as importantly, what sounds are going to be left out,” Frommer says. He compares the process to cooking. “When you’re cooking and you open your cabinet and see this array of spices, if you put in everything you have on the shelf, you’re going to get a mess,” he says. “It may be unpalatable, or it may have no particular distinction. But if you’re judicious, and you take certain things, and leave other things on the shelf, then you might get something that has character to it.” Na’vi, for example, does not have the -b, -d, and hard -g sounds that are common in English. And although some sounds that appear regularly in English, such as the -ng sound, also appear in Na’vi, in Na’vi that sound appears at the beginning of words—words such as ngop (create) or nga (you)—as well as at the end, as in the English word ending -ing. Among Na’vi’s most distinctive features are the “ejectives,” or “popping sounds” that Frommer says are heard in many Native American languages, as well as in Central Asia. “I put them in because they’re interesting sounds, and I thought they might arouse some interest in the language, kind of like an interesting spice that I was putting in.” “The reaction I’ve gotten from a number of people who aren’t linguists is, ‘You know, that sounds like a real language,’” he says, with clear delight. According to Newport, that’s because the listeners are beginning to recognize patterns. “People start to learn the patterns, even in small doses. They’ll start to recognize the words that recur, and the word orders that recur, and the sounds that recur. In a two-and-a-half-hour movie, people probably are starting to recognize, even without realizing it, the patterns they’ve been exposed to.”

Language Creation—an Adult Form of Play

Professor of English Sarah Higley says creating languages is a more common pursuit than many people might suspect. She would know: She’s the inventor of the language Teonaht, a board member of the Language Creation Society, and a member of an online Listserv of more than 500 people—linguists, computer scientists, mathematicians, humanities scholars, and others—who create languages for fun. They call such languages “constructed languages”—or conlangs—and pursue their hobby as an art form that can be enjoyed for its sounds, its script, or, for real aficionados, its grammatical structure. “More people have done this in the past than we could ever tell,” says Higley. “The reason there seems to be a burst of people doing it is only because the Internet has put us in touch with each other.” “We’re not nuts,” she adds, alluding to critics who dismiss conlangers as (she says dryly) “people who all live in our grandmothers’ basements and have nothing else to do.” Higley, for example, is a scholar of medieval language, literature, and poetic structure, who teaches courses on these subjects, as well as science fiction and fantasy writing, which can borrow heavily from medieval concepts of magic. In her latest book, Hildegard of Bingen’s Unknown Language: An Edition, Translation, and Discussion (Palgrave Macmillan, 2007), Higley explores the invented vocabulary of the 12th-century German nun, placing it in the context of language invention in both the past and present. Over the past decades, Higley has continued to transform Teonaht into a strikingly original language, both phonetically and structurally. Not everyone remains focused on a single language for so long. Many conlangers create several languages. “They’re really interested in the structure,” she says. “They have a certain idea. And they get bored with it, and start a new structure.” “Some people change languages like they change clothes. Others stick with one invention for a lifetime.” —Karen McCally ’02 (PhD)

LANGUAGE ARTS: Invented languages can be enjoyed for their sounds and structure, says Higley.

But it’s quite a leap from recognizing patterns to actually speaking the language. For the cast, mastering unfamiliar sound combinations, as well as the ejectives, took practice. Among Frommer’s roles was coaching the actors—Sigourney Weaver, Sam Worthington, Zoe Saldana, and others—helping them both on and off the set to master Na’vi pronunciation. He was accompanied by a veteran dialect coach, Carla Meyer, who has worked on more than 40 films, including Pirates of the Caribbean, The Gift, and A River Runs Through It. Meyer and Frommer shared the task of determining the Na’vi accent—the accent that Zoe Saldana, for example, adopted as her Na’vi character, Neytiri, learned to speak English. “We put our heads together to try to figure out exactly what they might sound like when they spoke English, and that’s not at all an easy question,” Frommer says. “One thing we played around with is that there’s no -j sound in Na’vi, but of course the main character’s name is Jake. So if Neytiri was trying to say ‘Jake,’ what would she say? The closest sound that they have to -j is -ts, so it might come out ‘tsake.’” A native of New York City, Frommer came to Rochester in the early 1960s on a Bausch & Lomb scholarship to study not languages, but astrophysics. “From the time I was eight, everybody knew ‘Paul is going to be an astronomer,’” he says. As it turned out, he earned his degree in mathematics. And while he had studied a bit of French, German, Hebrew, and Latin, it wasn’t until after graduation, when he joined the Peace Corps, that he realized his love for language. He was sent to Malaysia, where he taught math in Malay. “I realized how much fun it was, and that I was pretty good at it,” he says. In the mid-1970s, while a doctoral student in linguistics at USC, he spent a year in Iran and completed his thesis on an aspect of Persian grammar. When he entered the business world, he maintained a foothold in the field of linguistics, coauthoring Looking at Languages: A Workbook in Elementary Linguistics (Wadsworth) in 1994 with USC linguistics professor Edward Finegan. It was Finegan, in fact, who forwarded Frommer the e-mail from Lightstorm, in advance of the interview in which he closed the deal. Now he finds himself a high-profile figure in a small but growing guild of language inventors—people from the fantasy writer J. R. R. Tolkien to the hundreds of computer scientists, linguists, mathematicians, and others who have invented languages as a hobby and shared them with one another over the Internet. Frommer’s personal favorite among notable language inventors is Marc Okrand, an expert in Native American languages who created Klingon for the 1984 movie Star Trek III: The Search for Spock. Frommer says Klingon “changed the game” when it came to science fiction filmmaking. In the 1977 movie Star Wars, for example, the language of aliens was “pretty much gibberish,” he says. Klingon, on the other hand, is a “very well-developed, difficult language.” “Ever since then, it’s been understood that that’s the standard.
Especially for someone like Cameron, who lavishes this incredible detail on everything he does. He wanted the detail in the language as well.” Klingon inspired a cult following, as Na’vi appears to be doing now. As Frommer’s Na’vi reaches a level of renown fast approaching Okrand’s Klingon, Cameron has indicated plans for an Avatar sequel. That’s good news for Frommer, who would like nothing more than to continue to expand on the 1,000-plus word language. “Tvong Na’vi,” he says. Let Na’vi bloom.

Poetic Appeal
Why does the art of poetry thrive in an age of instant communication?
By Kathleen McGarvey
Photographs by Adam Fenster

A cell phone trills. A BlackBerry vibrates, bristling for immediate attention. “Tweets” accrete, each bearing fleeting news of someone’s latest passing thought on Twitter. Now, now, now, now, now. In an era of such frenzied exchange of language, it might seem that there would be little place for the poem. But poetry never has been more alive at Rochester than it is today, in writing workshops and poetry readings, informal gatherings and solitary sessions where a writer confronts a blank sheet—or screen. Far from being blotted out by contemporary mores of communication, poetry provides a kind of corrective. “Poetry, like all great writing, whether poetry or prose, forces you to be very slow,” says James Longenbach, the Joseph H. Gilmore Professor of English and an acclaimed poet and literary critic. “You have to read very slowly. You have to write very slowly. That’s what I say to people who say they don’t understand poetry. If you try to speed through language the way we do in most of our lives, poetry will be not just irrelevant, but incredibly frustrating.”

POETIC PROGRESS: “To write one poem, you have to read a thousand of them,” says poet James Longenbach, the Joseph H. Gilmore Professor of English.

Speed, succinctness, transparent and uncomplicated meaning—these are the currency of now ubiquitous electronic communications. But poetry, which also concerns itself with condensation of thought, is an art of shades of meaning, ambiguities of purpose, and the pleasures of language itself. “We’ve become the culture of the sound bite—and poetry is precisely the opposite of that,” says Thomas DiPiero, a professor of French and of visual and cultural studies, as well as the senior associate dean of the humanities. “It’s a way of thinking—a very specific way of thinking. It’s been called ‘concentrated thought.’” And, judging by the English majors as well as students from disciplines throughout the College who fill English literature classrooms each semester, it has a powerful appeal. “There’s a strong sense, a thrilling sense, of writing among the undergraduates, and not just of poetry but of fiction as well; you can’t have one genre without the other,” says Longenbach, the author of critical works such as The Resistance to Poetry and The Art of the Poetic Line, as well as volumes of poetry including Draft of a Letter and Fleet River. Offered through the English Department, the poetry workshops that Longenbach and colleague Jennifer Grotz, an assistant professor of English, teach are part of the department’s creative writing program. Directed by Joanna Scott, a novelist and the Roswell S. Burrows Professor of English, the program is grounded in an understanding that writing is a creative discipline that draws on the study of a wide range of literature.
“In workshops, half our time is spent reading the greatest poems we can read,” says Longenbach, whose poetry has also appeared in publications such as The New Yorker, The New Republic, Slate, and The Paris Review. “To write one poem, you have to have read a thousand of them.” Grotz, whose poetry volume titled Cusp won the Bread Loaf Writers’ Conference Bakeless Prize in 2003, says that she teaches students to “read as a writer would.” Joining the University faculty last fall, Grotz also translates French and Polish poetry and will teach in Rochester’s new literary translation program. Grotz found her own way to poetry slowly, teaching herself by reading other poets before taking up the academic study of poetry. A Texan who grew up “in a house with no books,” she was “like a musician who could pick out a tune,” she says. In her students, Grotz seeks to develop a facility with writers’ tools. “My philosophy of teaching at least introductory-level poetry is to break it down into what writers call ‘craft lenses.’ To have the students think of themselves as writers, with skills they want to develop—image, music, and so on.” For Giulia Perucchio ’13, who took Grotz’s workshop last fall, that approach was invaluable. “We connect huge, fluid things with very specific images,” she says. A graduate of Rochester’s School of the Arts, she came to the University already focused on creative writing. “That’s the best thing I learned from her: how to be very specific, very direct.” Poetry’s roots at Rochester run to the University’s beginning. Asahel Kendrick, a scholar of Greek and one of the professors who came to Rochester when the University was first formed in 1850, translated and anthologized poetry. In 1968, Anthony Hecht ’87 (Honorary), the former John H. Deane Professor of Rhetoric and Poetry, received the Pulitzer Prize for poetry while at Rochester, where he was a member of the English department for 18 years. In many ways, the name most closely associated with verse at Rochester is that of the late Hyam Plutzik, who preceded Hecht as the John H. Deane Professor of Rhetoric and Poetry and taught at the University from the mid-1940s until his death in 1962. A widely published poet concerned with themes such as the relationship between science and poetry, Plutzik taught writing workshops and gave weekly poetry readings on campus. Today he’s memorialized in the Plutzik Library for Contemporary Writing at Rush Rhees Library, where professor emeritus and poet Jarold Ramsey is also honored with the Jarold Ramsey Study. The library houses the William and Hannelore Heyen Collection, an extensive poetry archive assembled by poet Heyen. Rare Books, Special Collections, and Preservation also holds collections—including early editions, manuscripts, and correspondence—by John Dryden, Hilda Doolittle (H.D.), John Gardner, Carl Sandburg, Alfred, Lord Tennyson, and other notable poets.

WELL-VERSED: Poet Jennifer Grotz, an assistant professor of English (opposite), will edit a poetry series for Open Letter, the University’s literary translation press. She also teaches in the translation program, where students like Tyler Goldman ’10 (above) “think critically about the way language operates.”

Tyler Goldman ’10, an English major with a creative writing emphasis from Baladwyd, Pa., took part in the literary translation program’s inaugural course, translating Roman lyric poetry into English. He says that among the values of literary translation is its ability to heighten a writer’s awareness of language.
“It allows you to think critically about the way language operates,” he says. That awareness is key to any writer’s development, Longenbach says. “I teach poetry almost exclusively as craft,” he says, “how we focus and sharpen the way we harness language. I tell students we’re almost never going to talk about the subject of a poem. What’s unique is the way the language takes you through the experience.” There aren’t a lot of different subjects for pop songs, he observes, but we listen to our favorites again and again. Why? It’s not that we can’t recall them—quite the opposite. It’s our attraction to how they express an experience. Poetry, which he calls a “sonic art,” is the same. “You read a poem many times, not because you can’t remember the words, but because you want to inhabit the way it moves through language.” Pulitzer Prize–winning poet Galway Kinnell ’49 (MA) agrees. A poem is “not just an exposition of an idea or an event, but a reliving of it,” he says. That evocative force lies in the images and music its words create. “In poetry workshops, I find, students learn to attend to the precision of their language more powerfully than in any other class I teach,” says Longenbach, who became interested in poetry in college, after having spent “a great deal of my youth involved in music, as a pianist.” Such exactness is not what everyone anticipates, however. Grotz and Longenbach find ways to help their students appreciate that poetry—like all art forms—requires a blending of feeling and craft. “You’re working with young people who feel passionately about something, and you’re helping them learn how to connect that passion to a passion for the beauty and accuracy of language,” says Longenbach. Strong emotion can be an impetus for a poem, but it’s not enough. “People who write not-very-good poems have compelling emotions, too,” he says, “but they haven’t figured out how to get it on the page.” CHANGED PERSPECTIVE: Studying poetry has given her a “new set of eyes,” says Samantha Miller ’11, a double major in English and philosophy who hopes to teach poetry at the college level. “In a sense, poetry doesn’t fit with our times, but I think that makes it more valuable.” In Grotz’s workshop, Rainer Maria Rilke’s Letters to a Young Poet, a slim volume of correspondence from Rilke to an aspiring poet, helps frame discussion of the emotive dimension of poetry. She delivers the book to students in a sealed envelope, just as a letter would arrive. “To my mind, Rilke really helps to address the other reason young poets turn to poetry: expressing themselves, thinking about what it means to be human,” says Grotz. “I contain our ‘soul talk’ to Rilke. Otherwise we focus on technique. It helps us talk more clinically about the craft—but it’s very hard to talk about one without the other.” “Technique is what allows empathy to come through as empathy and not just as ‘I have these emotions,’” says Emily Claman ’06. After graduating with a degree in philosophy, she earned an MFA with a concentration in poetry from Washington University in St. Louis and credits her work with Longenbach and poet and former Rochester faculty member Sally Keith for her pursuit of a poetic career. When he was an undergraduate, poet Ilya Kaminsky recalls, Longenbach spoke with him “on a line-by-line basis” about poets Frost, Lowell, Walcott, and Ashbery. 
“Just think of it: James Longenbach, famous poet and literary scholar, has spent hours and hours of his time reading poems of a first-semester freshman who did not even know English well at that time,” says the Odessa, Ukraine, native who is now a professor of poetry at San Diego State University. “Such generosity of spirit is what makes education possible and what truly propels talent to grow.” Workshops are not the only courses in which Rochester students encounter poetry, of course. And poetry doesn’t stand alone, says Longenbach—“There’s a climate of writing here: fiction, poetry, and increasingly, playwriting”—nor is it separate from the work of the larger English department. When Kenneth Gross, a professor of English who has published extensively on Renaissance and modern verse, teaches his course on lyric poetry, he guides students in “slowing down, and dwelling on images and ambiguities.” Such ambiguities are an irreducible part of poetry’s complexity, and its power—a dimension, in fact, of the very precision Grotz and Longenbach instill. “Poetry works, and sticks around, because it’s not clear. There’s something that can’t be put into words, even though it is words,” Gross says. Poetry “makes you consider multiplicities—often contradictory multiplicities—of meaning,” says DiPietro. “Reading poetry is like reading the world.” And while students in his courses—not just English majors, but an “impressive range,” says Gross—might be uncertain in approaching poetry, he reminds them that “they have a lot of experience with rhythmically shaped language: nursery rhymes, prayers, music lyrics, epitaphs, even jingles.” In his lyric poetry course, Gross—author of books such as Spenserian Poetics: Idolatry, Iconoclasm and Magic and Shylock is Shakespeare—focuses on Shakespeare’s sonnets and the poems of John Keats, Emily Dickinson, and Elizabeth Bishop. They’re short works that “give them a sense of a single poetic intelligence,” he says. “For these poets, the major poems are the intense, short lyrics. They’re very meaty objects of analysis.” But he shows students, too, that poetic language inhabits places they might not expect. In one course, he spent a week examining with students the texts of national anthems such as the Star Spangled Banner and La Marseillaise. “It made them take up things they didn’t think of as poems—or even as things to be read—and see them as rather charged.” Not to be overlooked, either, is the sheer enjoyment that engaging with a poem as a writer or a reader can provide. “However dark or difficult a poem, in some way it has to foreground pleasure,” says Gross. That pleasure is what feeds literary readings like the Plutzik Reading Series, which brings readings by contemporary novelists and poets to the Rochester community. “The Plutzik Series pulls an audience beyond the classroom—and also feeds back into the classroom,” Gross says, as faculty members—particularly Longenbach, Scott, and now Grotz—incorporate work by visiting writers into their courses. Like the Neilly Series, a writers’ lecture series supported by an endowment from Andrew H. ‘47 and Janet Dayton Neilly, the Plutzik Series is “a huge part of the literary community here. It transcends poetry,” says Goldman. “Often, when I taught poetry classes, even workshops,” before coming to Rochester, “there was a part of my job that was being a salesman”… Here I don’t feel the need to sell poetry at all. 
The students come interested and hungry.” For Samantha Miller ’11, a double major in English and philosophy from Henrietta, N.Y., who is in Grotz’s workshop this semester, poetry counterbalances the more impatient and utilitarian interaction with language she has in other facets of her life. “We’re so used to text messaging, e-mails—instant gratification and immediate answers. And poetry takes a lot more time,” she says. “In a sense, poetry doesn’t fit with our times, but I think that makes it even more important and valuable.” Miller hopes one day to teach poetry at the college level and says her literary study at Rochester has shaped not only her professional ambitions but also the very way she sees the world. “What you can gain by studying poetry is a new set of eyes,” says Miller. “You have a new appreciation for even the most minute things around you.” It engenders, says Kinnell, “a tenderness towards existence.” Ultimately, Grotz suggests, there’s even something elemental to it. “Everybody knows poetry isn’t what you do to make money,” she says. “And it’s not read the way popular fiction is, by any means. It may seem like an old-fashioned thing to do. But it’s the perfectly packaged thing for a human being. It’s totally human-shaped, human-made.” “It’s breath.”

Taking the Measure of Mountains
Rochester earth scientist Carmala Garzione is changing the way geologists think about the rise of mountain ranges.
By Jonathan Sherwood ’04 (MA), ’09S (MBA)

More than 12,000 feet above sea level in the Andes mountains of South America, Carmala Garzione finds herself at the center of a seismic shift in how she and other scientists understand the forces at work beneath one of the world’s longest continental mountain ranges. An expert on the geological processes that can push the Earth’s upper crust skyward, Garzione, a professor of earth and environmental sciences at Rochester, is pioneering a new approach that she and colleagues say offers a more accurate picture of how such mountain ranges rose to where they are today. Based on new methods of paleoaltimetry, the science of judging ancient mountain heights, that Garzione helped develop, her research indicates that the Andes rose to their current height in as little as 2 million years sometime between 6.4 million and 10 million years ago. That’s a remarkable growth spurt for a mountain range that now features peaks between 5,000 and 7,000 meters (17,000 and 23,000 feet). “That’s several times faster than geologists had estimated before,” Garzione says, noting that some previous work estimated that the Andes took as long as 50 million years to reach their current heights. “It means there is some unexpected process going on beneath the Earth’s crust that’s creating mountains like these.” Investigating how that process works has earned wide recognition for Garzione. In 2007, she received the Young Scientist Award from the Geological Society of America, which cited her research as “groundbreaking.” Last year, the New York Academy of Sciences followed suit, honoring Garzione with its Blavatnik Award. Her findings, which are based on detailed comparisons of the mineral composition of sediment that erodes from mountains over the life of their growth, are forcing geologists to rethink how mountains form and even how their growth contributes to global climate change.
In a process that geologists know as “shortening,” mountain ranges such as the Andes and the Himalayas are formed when vast sections of the Earth’s lithosphere, called tectonic plates, collide and push against each other. The plates buckle like a wrinkling rug, pushing up a long range of mountains. Exactly how quickly a range of mountains rises has long been shrouded in mystery because few scientists can measure how high a mountain may have been when it first started its ascent. Before Garzione’s research, geologists estimated uplift by examining the fossils left by vegetation or by dating when certain minerals from deep underground began moving to the surface. But plant characteristics can change radically over millions of years, and changes in climate can also vary the speed of erosion, throwing significant question marks into the equation. Instead, Garzione theorized that by examining the mineral composition of sediment and comparing it to atmospheric conditions at different altitudes she would have a better picture of the time it took for a mountain to reach its height. “I wrote my doctoral dissertation on the possibility of retrieving atmospheric information from ancient sediment in the Himalayan mountains, dating it, and forming a record of the Himalayas’ and Tibet’s uplift history,” says Garzione. “Based on my estimates, southern Tibet and the Himalayas appeared to have been high throughout their depositional history, so I was eager to put this technique to the test in a place that appears to have been at a lower elevation more recently. We focused on the sediment that was deposited in the high Andes mountains because fossil estimates put them much lower just 10 million years ago,” she says. “However, trees cannot grow at the modern elevations in the Andes, so this fossil-based approach cannot tell us when and how fast the mountains rose. “As a mountain range rises, it experiences different atmospheric conditions due to its change in height. Those atmospheric changes, such as temperature and the amount and composition of rainfall, are recorded in minerals that form near the surface at different altitudes on the mountainside. “The challenge was to see if I could get a clearer idea of the Andes’ growth than we’d ever had before.” On the Bolivian Altiplano—a high-elevation basin in the Andes—Garzione took samples of sedimentary rock that had accumulated between 12 million and 5 million years ago. Garzione analyzed the mineral composition of sedimentary strata in the Altiplano, studying, in the mineral carbonate, which is released from surface water during erosion, the ratio between the isotopes oxygen-16 and oxygen-18. More than 99 percent of the oxygen in water is made up of oxygen-16 and less than 1 percent is oxygen-18, but as vapor rises to higher altitudes in the form of clouds, oxygen-18 is removed from the clouds. As rain falls, the clouds are slowly depleted of the isotope. Because the change is locked in the minerals that form on the mountains’ surfaces, Garzione was able to uncover a record of the altitude at which the minerals formed. Garzione also used a second method to look at the Bolivian sediment that focused on the temperature at which the surface-forming carbonates were created. Since air temperature decreases with altitude, the rocks’ original altitude should be preserved in a temperature-based mineral snapshot.
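Two standard relationships underlie the altitude signal described above: Rayleigh distillation, in which the oxygen-18 content of the remaining vapor falls as rain is removed from a rising air mass, and the decrease of temperature with elevation. The sketch below is purely illustrative and is not Garzione's calibration; the fractionation factor, starting δ¹⁸O, lapse rate and surface temperature are assumed values chosen only to show the direction of the effect.

```python
import numpy as np

# Illustrative Rayleigh-distillation view of the oxygen-isotope altimeter.
# All parameter values below are assumptions for demonstration, not the
# calibrations used in the published Andes work.

ALPHA = 1.0094      # assumed liquid-vapor 18O/16O fractionation factor
DELTA0 = -10.0      # assumed delta-18O (per mil) of vapor at low elevation
LAPSE = 6.5         # assumed temperature lapse rate, deg C per km
T0 = 25.0           # assumed surface temperature at sea level, deg C

def delta18O_of_vapor(f_remaining):
    """Rayleigh distillation: as a fraction (1 - f) of the vapor rains out,
    the remaining vapor becomes progressively depleted in oxygen-18."""
    r0 = 1.0 + DELTA0 / 1000.0
    r = r0 * f_remaining ** (ALPHA - 1.0)
    return (r - 1.0) * 1000.0

def temperature_at(elevation_km):
    """Simple linear lapse-rate model: carbonate formation temperatures
    decrease with the elevation at which the minerals form."""
    return T0 - LAPSE * elevation_km

# (f_remaining, elevation) pairs chosen arbitrarily for display.
for f, z in [(1.0, 0.0), (0.7, 2.0), (0.4, 4.0)]:
    print(f"elevation ~{z:.0f} km: vapor d18O = {delta18O_of_vapor(f):6.2f} per mil, "
          f"T = {temperature_at(z):4.1f} C")
```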
Garzione, along with Prosenjit Ghosh and John Eiler of the California Institute of Technology, employed a technique developed at CalTech to examine the abundance of oxygen-18 and carbon-13 isotopes that bonded together. Using the CalTech method, Garzione and the CalTech team gauged the temperature at which the carbonates formed—from the hot Amazonian jungle climate to the freezing peaks of the Andes. Both studies pointed to the same conclusion: Between 10 million and 6.4 million years ago, the Andes lifted more than a mile. “When I first showed this data to others, they had a hard time believing that mountains could pop up so quickly,” says Garzione. “With supporting data from the new paleotemperature technique, we have more confidence in the uplift history and can determine the processes that caused the mountains to rise.” How did the Andes rise so dramatically, geologically speaking? Garzione says the answer may come in the not-so-scientific-sounding process known as “deblobbing.” That’s the colloquial term given to a process by which a dense root in the Earth’s mantle becomes detached from the Earth’s crust. As plates thicken during mountain building, the dense lower crust and upper mantle also thicken and are heated to higher temperatures in the Earth’s interior. At hotter temperatures, they become unstable and begin to flow downward under the force of their own mass into the Earth’s mantle, much like a more dense blob in a lava lamp flows downward. When two tectonic plates collide, such as when the Nazca oceanic plate in the southeastern Pacific collides with the South American continental plate, the continental plate begins to buckle. Floating on a less dense and partially molten mantle, the plates press together and the buckling creates the first swell of a mountain range. Below the crust, however, there’s another kind of buckling going on in the more elastic portions of the upper mantle. The dense mantle “root” clings to the underside of the crust, growing in step with the burgeoning mountains above. The root acts like an anchor, weighing down the whole range and preventing it from rising, much like a fishing weight on a small bobber holds the bobber low in the water. In the case of the Andes, the mountains swelled to a height of one to two kilometers before the mantle root disconnected and sunk into the deeper, partially molten mantle. The effect was like cutting the line to the fishing weight—the mountains suddenly “bobbed” high above the surrounding crust, and in less than 3 million years, the mountains had lifted from less than two kilometers to roughly four. The process had been proposed since the early 1980s, but it has never stood up to scrutiny because the techniques to estimate surface elevation have only been recently developed. “People have largely ignored the role of the upper mantle because it is difficult to look 50 to 200 kilometers into the Earth; whereas we can easily see the deformation on the surface,” says Garzione. “Some geologists have guessed that the dense lower crust and mantle are removed continuously and evenly during mountain building. Our data argue that this dense material just accumulates down there until some critical moment when it becomes unstable and drops off.” Garzione is seeking even more accurate measurements of mountain growth speeds. She has begun new research in northern Tibet that brings together what she describes as one of the largest collaborative efforts between climatologists and geologists yet assembled. 
“This study is a first of its kind,” says Garzione. “We’re studying the Tibetan Plateau to answer how mountain formation changed the Earth’s climate in the region, and how that climate change in turn affected the mountains as they formed. In terms of the breadth of research, this is the biggest proposal that the earth sciences and atmospheric sciences programs at the National Science Foundation have ever supported. “It’s really exciting to see how our field is changing,” she says. “We’re able to ask bigger questions, and we need researchers from across disciplines to come together to answer them.” How did the Andes rise a dramatic kilometer per million years? The answer may come in the not-so-scientific-sounding process known as “deblobbing.” Jonathan Sherwood ’04 (MA) ’09S (MBA) is a senior science writer for University Communications.
Impact of the HIV-1 env Genetic Context outside HR1–HR2 on Resistance to the Fusion Inhibitor Enfuvirtide and Viral Infectivity in Clinical Isolates Franky Baatz1, Monique Nijhuis2, Morgane Lemaire1, Martiene Riedijk2,3, Annemarie M. J. Wensing2, Jean-Yves Servais1, Petra M. van Ham2, Andy I. M. Hoepelman3, Peter P. Koopmans4, Herman G. Sprenger5, Carole Devaux1, Jean-Claude Schmit1, Danielle Perez Bercoff1 1 Laboratory of Retrovirology, CRP-Santé, Luxembourg, Luxembourg, 2 Department of Virology, Medical Microbiology, UMC Utrecht, Utrecht, The Netherlands, 3 Department of Internal Medicine and Infectious Diseases, UMC Utrecht, Utrecht, The Netherlands, 4 Division Infectious Diseases, Department of General Internal Medicine, Radboud University Medical Center, Nijmegen, The Netherlands, 5 Division of Infectious Diseases, Department of Internal Medicine, University Medical Center Groningen, Groningen, The Netherlands Abstract Resistance mutations to the HIV-1 fusion inhibitor enfuvirtide emerge mainly within the drug’s target region, HR1, and compensatory mutations have been described within HR2. The surrounding envelope (env) genetic context might also contribute to resistance, although to what extent and through which determinants remains elusive. To quantify the direct role of the env context in resistance to enfuvirtide and in viral infectivity, we compared enfuvirtide susceptibility and infectivity of recombinant viral pairs harboring the HR1–HR2 region or the full Env ectodomain of longitudinal env clones from 5 heavily treated patients failing enfuvirtide therapy. Prior to enfuvirtide treatment onset, no env carried known resistance mutations and full Env viruses were on average less susceptible than HR1–HR2 recombinants. All escape clones carried at least one of G36D, V38A, N42D and/or N43D/S in HR1, and accordingly, resistance increased 11- to 2800-fold relative to baseline. Resistance of full Env recombinant viruses was similar to resistance of their HR1–HR2 counterpart, indicating that HR1 and HR2 are the main contributors to resistance. Strictly X4 viruses were more resistant than strictly R5 viruses, while dual-tropic EnvS featured similar resistance levels irrespective of the coreceptor expressed by the cell line used. Full Env recombinants from all patients gained infectivity under prolonged drug pressure; for HR1–HR2 viruses, infectivity remained steady for 3/5 patients, while for 2/5 patients, gains in infectivity paralleled those of the corresponding full Env recombinants, indicating that the env genetic context accounts mainly for infectivity adjustments. Phylogenetic analyses revealed that quasispecies selection is a step-wise process where selection of enfuvirtide resistance is a dominant factor early during therapy, while increased infectivity is the prominent driver under prolonged therapy. Editor: Ben J. Marais, University of Stellenbosch, South Africa Received: March 15, 2011; Accepted: June 1, 2011; Published: July 8, 2011 Copyright: © 2011 Baatz et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: FB was supported by a fellowship from the Fonds National de Recherche, Luxembourg [TR-PHD BFR06-033]. MN and PvH were supported by The Netherlands Organisation for Scientific Research (NWO) Vidi [91796349]. 
The ATHENA database is supported by a grant from the Dutch Ministry of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: The authors have read the journal’s policy and have the following conflicts: AMJW and AIM served as consultants for Roche. HGS received payments for travel, accommodation and meeting expenses from Roche Pharma The Netherlands. MR received a travel grant from Roche. All other authors have declared that no competing interests exist. This does not alter the authors’ adherence to all the PLoS ONE policies on sharing data and materials.

Introduction

The human immunodeficiency virus type 1 (HIV-1) envelope glycoprotein (Env) mediates viral entry into the host cell by fusion of the viral envelope with the host cell membrane (reviewed in [1,2,3]). The Env complex is composed of two non-covalently linked subunits, the surface glycoprotein (gp120) and the transmembrane glycoprotein (gp41), displayed as homotrimers at the surface of the virion and of infected cells. Viral entry is a multistep phenomenon: binding of gp120 to the CD4 receptor expressed on the surface of target cells induces a conformational change that exposes the third hypervariable loop (V3) of gp120, which in turn binds one of the two chemokine receptors CCR5 (R5 viruses) or CXCR4 (X4 viruses). Consequently, the viral V3 sequence defines cell tropism to a large extent, although regions outside the V3 loop have been described to modulate coreceptor usage [4,5,6]. Coreceptor binding triggers further conformational changes in the ectodomain of gp41 that lead to the insertion of the N-terminal glycine-rich fusion peptide into the host cell membrane. Folding of the heptad repeat 2 (HR2) region onto the heptad repeat 1 (HR1) region forms a highly stable six-helix bundle structure and brings the viral and host cell membranes into close contact, ultimately leading to the fusion of both membranes [7,8,9,10]. It has been shown that synthetic peptides that bind to one of the HR motifs interfere with the formation of a stable six-helix bundle and inhibit viral entry [7,11,12,13]. Enfuvirtide (ENF, T-20) [11,14] is a subcutaneously injected 36-amino-acid (AA) peptide mimicking part of the HR2 sequence (AA 127 to 162). Enfuvirtide binds HR1 and hinders the fusion process by preventing the HR1–HR2 interaction. Enfuvirtide is active against multi-drug resistant viral strains and is currently recommended as salvage therapy for highly drug-experienced patients. Enfuvirtide-resistant HIV-1 variants, however, rapidly emerge under enfuvirtide selective pressure [15,16], and resistance mutations have been described both in vitro and in vivo [17,18,19,20,21,22,23,24,25,26,27]. Most resistance mutations are located within HR1, between AA 36 and 45 (HXB2 numbering) [17,19,25]. Some resistance mutations reduce viral infectivity in the absence of the drug, probably as a consequence of impaired interactions between HR1 and HR2 and delayed fusion kinetics [19,28,29,30]. Compensatory mutations that restore viral infectivity may arise within HR2 [19,26,29,31]. Some of these mutations, such as the S138A substitution, have been suggested to confer some resistance per se [19,29], but others do not impact, or only modestly impact, the level of resistance.
Enfuvirtide operates by a unique mechanism as it does not target the static Env, but a structural intermediate of the entry process, called the fusogenic intermediate, induced by binding of gp120 to the CD4 receptor. Hence, factors that influence the short kinetic window during which HR1 is accessible to the peptide also influence viral susceptibility. Env determinants outside of HR1 and HR2, including tropism, coreceptor affinity [30,32,33], the CD4 binding region of gp120 [20] and the bridging sheet region [30], have been shown to modulate the level of susceptibility/resistance to enfuvirtide. Although the impact of coreceptor usage on the level of susceptibility has been addressed by many authors, results remain controversial, likely a consequence of different experimental and analytical approaches [27,32]. Furthermore, it has been suggested that other determinants outside of HR1 may be involved in resistance and/or that the env genetic context drives the selection of Envs in which resistance mutations emerge [34,35,36]. The present study addresses the relative contributions of Env determinants outside of the HR1 and HR2 regions to resistance to enfuvirtide and to viral infectivity by comparing NL4-3-derived recombinant virus pairs harboring either the HR1–HR2 region or the full Env ectodomain of longitudinal env clones from 5 heavily treated patients failing enfuvirtide therapy.

Generation of HR1–HR2 and full Env recombinant viral particles

Eighty full envelopes were cloned. All the tested HR1–HR2 recombinant viruses were infectious, but only 65% (52/80) of the full Env recombinant viruses were infectious, in line with previous reports [35]. A higher proportion (77.3%, 17/22) of baseline clones were infectious compared to enfuvirtide-escape clones (60.3%, 35/58), and very few late clones were infectious (Table S1). Phenotypic analyses were restricted to clones that allowed generation of both HR1–HR2 and full Env infectious recombinant viruses (n = 52).

Baseline genotypic and phenotypic susceptibility to enfuvirtide

No known enfuvirtide resistance mutation was detected in pre-treatment clones. Viruses from patients A, C, D and E carried the subtype B HR1 consensus sequence GIVQQQNNLL between AA 36 and 45. All the pre-treatment viral env clones from patient B carried the N42S polymorphism (Table S1). Full Env recombinant viruses displayed greater variability in enfuvirtide susceptibility (FCIC50 range: 0.91–27.47) than HR1–HR2 recombinant viruses (FCIC50 range: 0.47–7.16) (Fig. 1.A). Overall median FCIC50 of the full Env recombinant viruses was higher than that of the HR1–HR2 recombinant viruses (Fig. 1.A) (p = 0.05), indicating that the env genetic context lowered susceptibility to enfuvirtide. At a single clone level, all the full Env pre-treatment recombinant viruses from patients A, C, D and E were less susceptible than the corresponding HR1–HR2 recombinant viruses (Fig. 1.A and Fig. 2.A, C, D and E, Table S1). In contrast, patient B pre-treatment HR1–HR2 recombinant viruses displayed higher FCIC50 than their full Env counterparts (Fig. 1.A). When compared to the 4 other patients’ envs, full Env recombinant viruses from patient B were more susceptible to enfuvirtide than those from the 4 other patients (p = 0.02), while HR1–HR2 recombinant viruses were less susceptible than those of the 4 other patients (p = 0.01) (Fig. 1.B).
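The fold-change values (FCIC50) used throughout are ratios of a recombinant virus's IC50 to that of a drug-susceptible reference. A minimal way to obtain an IC50 from single-round infectivity data is to fit a sigmoidal inhibition curve and read off the concentration giving 50% inhibition; the sketch below is illustrative only, and the dose series, infectivity values and reference IC50 are assumptions rather than the assay actually used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition(conc, ic50, hill):
    """Fraction of infectivity remaining at a given enfuvirtide
    concentration, modeled as a standard Hill (sigmoid) curve."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data: drug concentration (ug/mL) vs the
# infectivity of a recombinant virus relative to the no-drug control.
conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])
rel_inf = np.array([0.99, 0.95, 0.80, 0.45, 0.12, 0.02])

(ic50, hill), _ = curve_fit(inhibition, conc, rel_inf, p0=[1.0, 1.0])

REFERENCE_IC50 = 0.05  # assumed IC50 of the drug-susceptible reference (ug/mL)
fc_ic50 = ic50 / REFERENCE_IC50
print(f"IC50 = {ic50:.3f} ug/mL, Hill slope = {hill:.2f}, FC-IC50 = {fc_ic50:.1f}")
```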
Genotypic and phenotypic resistance during enfuvirtide treatment

All the clones retrieved from patients after enfuvirtide treatment failure harbored at least one known resistance mutation within HR1, at positions 36 and/or 38 (patients A and C) and at positions 42 and/or 43 (patients B, D and E) (Table S1). Of note, none of the clones carried resistance mutations concomitantly at positions 36 and 38, 36 and 43 or 38 and 43, in contrast to reports by others [34,35]. FCIC50 of escape recombinant viruses increased 11- to 2800-fold relative to baseline. Overall, median FCIC50 of full Env recombinant viruses did not differ significantly from the FCIC50 of HR1–HR2 recombinant viruses (full Env median FCIC50 = 1469 [177;2401], HR1–HR2 median FCIC50 = 1062 [481;1877], p > 0.05). Likewise, at an individual patient level, no significant difference was detected in the level of resistance between full Env and HR1–HR2 recombinant viruses (patients A, B, D and E). For 3 patients, resistance was associated with mutations at one position at all time points (AA 36 for patient A, AA 43 for patients B and E) and featured an over 100-fold increase in FCIC50 relative to the corresponding baseline clones, regardless of coreceptor usage (Fig. 2.A, B and E, Table S1). For patient B, FCIC50 of the HR1–HR2 recombinant viruses were higher than those of the corresponding full Env recombinant viruses at weeks 16 and 45, as for baseline clones. Furthermore, the level of resistance of HR1–HR2 recombinant viruses remained steady between weeks 16 and 45, whereas full Env recombinant viruses reached 3-fold higher resistance levels at week 45 than at week 16 (Fig. 2.B, Table S1), highlighting that the env genetic context did not easily tolerate the constraints that allowed escaping enfuvirtide treatment. Because of the limited availability of early and late clones for patients A and E, variations in the level of resistance over time could not be assessed. For patients C and D, resistance was associated with multiple mutations evolving over time. For patient C, 4/5 early escape viruses carried the G36D substitution and one clone carried the V38A mutation. The V38A variant featured 4-fold (full Env) and 6-fold (HR1–HR2) higher resistance than the early G36D clones (Fig. 2.C, Table S1). At later time points, only the V38A mutation was detected, associated with the L44M resistance mutation. Resistance of intermediate and late clones increased by 15-fold at week 59, and by 40-fold (HR1–HR2 recombinants) and 24-fold (full Env recombinants) at week 129 (p = 0.03). For patient D, 3/5 early env clones carried the N43D resistance mutation and one

Relative viral infectivity during enfuvirtide treatment

We then investigated whether the env genetic context surrounding HR1–HR2 played a role in infectivity adjustments. Overall, HR1–HR2 recombinant viruses featured higher median relative infectivities (RI) than their full Env counterparts prior to and after enfuvirtide treatment escape: pre-treatment median HR1–HR2 RI = 0.39 [0.58;1.22] vs median Env RI = 0.16 [0.04;0.43] (p = 0.006), and after treatment failure median HR1–HR2 RI = 0.67 [0.44;1.21] vs median Env RI = 0.28 [0.13;0.75], p = 0.01.
Because in this in vitro system differences in infectivity between paired recombinant viruses probably reflect the higher infectivity of the HXB2-derived gp120 of HR1–HR2 recombinants over the patient-derived primary gp120 of full Env recombinants, we compared the changes in RI of HR1–HR2 and of full Env recombinant viruses over the time of follow-up rather than HR1–HR2/full Env recombinant virus pairs at a given time point. RIs of full Env recombinant viruses spanned a wider range than RIs of HR1–HR2 recombinant viruses, consistent with the high genotypic conservation of the HR1–HR2 region. Early after enfuvirtide failure, RI of the escape variants was similar to or lower than at baseline for both HR1–HR2 and full Env recombinant viruses (Fig. 3). At later time points (intermediate and late), RI of full Env recombinant viruses increased for all patients relative to early escape clones (Fig. 3). The RI of HR1–HR2 recombinant viruses, by contrast, remained steady for patients A, B and E, strongly indicating that determinants other than HR1–HR2 account for infectivity adjustments, either through the emergence of compensatory mutations or through the selection of Envs that best tolerate resistance mutations. For patients C and D, in contrast, HR1–HR2 recombinant viruses showed a trend to increase, likely mapping determinants that account for gains in infectivity to HR1–HR2.

Among the polymorphisms previously described to modulate viral infectivity, the compensatory mutation S138A [19] was detected in 5/9 escape Envs from patient B, in 9/10 intermediate and late escape Envs from patient D (but in none of the early escape Envs) and in all viral escape clones from patient E (Table S1). In all patients, however, the RIs of variants carrying the S138A mutation were comparable to those of variants carrying wild-type S138 (Fig. 3 and Table S1), suggesting that the contribution of the S138A substitution to infectivity rescue might also involve other determinants. Other mutations included substitutions N125D [29,38] (patients A, B and C), N125S and N126K [37,39,40] (patient D) and E137K [41,42] (patient E) (Table S1). The N125D/S polymorphisms within HR2 were detected in both baseline and escape clones, arguing against them playing a pivotal role in restoring infectivity. For patient E, the impact of E137K is difficult to evaluate as only intermediate clones were recovered and as gains in viral infectivity seem to map to env determinants outside of HR1–HR2 (Fig. 3.E). In contrast, the N126K compensatory mutation in patient D viruses was detected exclusively in on-treatment viral sequences, pointing to it as a presumed contributor to the parallel infectivity gains of HR1–HR2 and full Env recombinants. Taken together, our data suggest that both compensatory mutations within HR1–HR2 (N126K for patient D, yet unidentified determinants for patient C) and the env genetic context (patients A, B and E) contribute to increased viral infectivity.

Phylogenetic analysis of full env sequences

Phylogenetic analyses were performed to dissect the relative contributions of resistance and of infectivity to selective evolution under prolonged enfuvirtide pressure. For patients A and B, all the clones (baseline and post-treatment failure) were intermingled, suggesting that these viral populations explored different evolutionary routes. Early resistance mutations (G36D and N42S+N43D) conferred high level resistance (>100-fold) and persisted throughout treatment (Fig. 4.A and B). For patient B, week 45 clones grouped on one branch and were related to pre-treatment clone 5.16 and early clone 16.1. 
The gain in infectivity of intermediate clones relative to the early clones was higher than the gain in resistance, suggesting that the evolutionary paths embraced must have favored infectivity adjustments. For patients C and D, viral clones from each time point clustered together. Both patients hosted strains with resistance mutations at 2 positions at early time points (Table S1), and determinants within HR1–HR2 were sufficient to confer enfuvirtide resistance and to restore viral infectivity. The V38A (patient C) and N43D (patient D) mutations conferred higher resistance than the G36D (patient C) or N42D+N43S (patient D) mutations, respectively, and at intermediate and late time points the early G36D and N42D+N43S variants were outcompeted by the V38A or N43D variants (Fig. 2 and Fig. 3), indicating that early after treatment escape, the level of resistance is the main driver of viral selection. It is noteworthy that for patients A and D, only one late clone was infectious. For patient C, all late (week 129) clones were extremely tightly related and emerged from the branch grouping early G36D variants (clones 27.3, 27.4, 27.6) rather than from the early V38A variant or from the intermediate V38A+L45M variants (Fig. 4.C). For these 3 patients, late infectious clones stemmed from early rather than from intermediate escape clones, suggesting that the other evolutionarily viable paths explored by the virus at the intermediate time point led to a dead-end under continued enfuvirtide pressure (Fig. 4.A, C and D). Late infectious clones had resistance levels similar to those of early/intermediate clones and high RIs (Fig. 2 and Fig. 3), indicating that infectivity levels molded selection at late time points. These findings highlight restricted evolution and limited evolutionary possibilities under prolonged enfuvirtide pressure, and illustrate that while resistance may be achieved through different paths, optimal Env properties enhancing RI are highly constrained.

Discussion

In this study, direct comparison of HR1–HR2 and full Env recombinant viral particles containing longitudinal Env samples from 5 patients receiving prolonged enfuvirtide-based therapy formally points to HR1 and HR2 as the principal contributors to high level resistance. The surrounding env genetic context played a modulatory role on basal susceptibility and, to a lesser extent, on the level of resistance, in line with a previous report [16]. Previous studies have suggested that the env genetic context contributes to resistance to enfuvirtide both directly and by driving the selection of resistance mutations [23,34,35]. In these reports, escape variants that appeared under enfuvirtide pressure evolved from minority variants present prior to treatment. Because the selected resistance mutations conferred higher resistance levels within the genetic context within which they arose than in pre-therapeutic env clones from the same patient, the authors concluded that the env genetic context drives the resistance pathway embraced by viruses under drug pressure and that the level of resistance is the primary determinant of selection [34,35]. These reports contrast with the mainly modulatory role of the env genetic context we recorded. In our study, the weight of the full Env ectodomain was compared to isogenic HR1–HR2, while in the study by Goubard et al. [35] one single mutation (V38A) was inserted by site-directed mutagenesis. 
It is therefore not possible to exclude that in their study, the HR1–HR2 region rather than the full Env accounted for the env-encoded contribution to enfuvirtide resistance. In the case of patient B, the env genetic context lowered the level of resistance. Furthermore, the HR1–HR2 recombinant viruses from this patient were less susceptible to enfuvirtide than the HR1–HR2 recombinants from the other patients despite the N42S polymorphism, which has been associated with increased enfuvirtide susceptibility and a slightly improved virological outcome [25,27,43]. In this patient, other, yet undetermined, env-encoded determinants likely contributed to the particularly high susceptibility to enfuvirtide, and/or the env genetic context remained essentially unfavorable to the development of enfuvirtide resistance. Notably, patient B hosted only strictly R5 viruses. It is therefore possible that tropism was one of the determinants that contributed to render full Env recombinants less susceptible to enfuvirtide than the isogenic HR1–HR2 recombinants. We show that X4-coreceptor usage was the only factor associated with significantly lower susceptibility to enfuvirtide. Coreceptor usage has been previously reported to influence susceptibility/resistance to enfuvirtide, but contradictory results have been reported depending on the design of the study and on whether dual/mixed viruses are considered separately or included among the X4 strains [27,32,33,36]. Here we constructed env recombinant clones and analyzed strictly R5, strictly X4 and dual-tropic viruses as 3 distinct groups. The lower susceptibility of X4 recombinant viruses at baseline and after virological failure may reflect other intrinsic properties of gp120 [30,44,45]. Formal quantification of the gain in resistance associated with X4 tropism would require switching tropism by site-directed mutagenesis. In our study, X4 recombinant viruses (both full Env X4 and HR1–HR2 viruses) also featured higher infectivity than R5 recombinants.

**Figure 3. Relative infectivity (RI) of full Env and HR1–HR2 recombinant viruses.** U87.CD4.CCR5 and U87.CD4.CXCR4 cells were infected with serial two-fold dilutions (ranging from 400 pg p24/well to 50 pg p24/well) of HR1–HR2 and full Env recombinant viruses. Luciferase activity was monitored as a function of p24 input to assess infectivity and was normalized to the HXB2 reference to estimate RI. For full Env recombinant viruses, RI was determined on U87.CD4 cells expressing either CCR5 (red) or CXCR4 (blue). Closed circles (•) represent strictly R5 or X4 recombinant viruses and open circles (○) represent dual-tropic recombinant viruses. RI of HR1–HR2 recombinant viruses are represented as closed triangles (▲) and were tested on U87.CD4.CXCR4 cells. Medians (horizontal bars) of at least two independent experiments are shown. Medians with interquartile ranges are reported at the top of the graphs. doi:10.1371/journal.pone.0021535.g003

**Figure 4. Phylogenetic analysis of the full env sequences.** Complete env coding sequences were aligned with HXB2 as the outgroup using the MUSCLE software and phylogenetic trees were constructed using the PhyML software. Main resistance mutations and compensatory mutations are indicated for each clone. Non-infectious clones are marked with an asterisk (*). Bootstrap values >60% are indicated. doi:10.1371/journal.pone.0021535.g004
Although the design of our experimental system is not suited to study the potential relationship between the level of resistance and infectivity, it is plausible that the lower susceptibility of X4 recombinant viruses is related to properties of gp120 that accelerate the fusion process, such as fusion kinetics, coreceptor affinity and binding sites [30,44,45]. Yet, the fact that dual-tropic full Env recombinant viruses displayed similar FCIC50 in both the CXCR4- and CCR5-cell lines irrespective of infectivity levels in each cell type argues against a direct relation between viral infectivity and enfuvirtide susceptibility. Alternatively, dual-tropic viruses might differ from strictly R5 or strictly X4 viruses through intermediate fusion kinetics, in line with their intermediate median FCIC50 (data not shown). Previous studies have reported both decreases and increases in viral infectivity under enfuvirtide treatment [28,34]. In contrast to others [23], we did not detect pronounced reductions in viral infectivity (for either HR1–HR2 or full Env recombinant viruses). However, working with clones implicitly restrains phenotypic analyses to infectious clones only, and we cannot exclude that losses in viral infectivity might be underestimated. Indeed, 77.3% of the baseline clones but only 60.3% of the escape clones were infectious, and, in particular, very few late clones were infectious. We found that both determinants within the HR1–HR2 regions (including compensatory mutations such as the S138A, N125D or N126K) and other gp120 properties likely accounted for gains in viral infectivity. All patients achieved high level resistance from the earliest time points onward and different resistance pathways were attempted. Early escape clones (patients C and D) with lower resistance were outcompeted by more resistant clones carrying the V38A or N43D resistance mutations respectively, indicating that the level of resistance reached is indeed one major determinant of evolution, as previously suggested [15,23,35]. Nonetheless, we found that intermediate and late infectious clones from all patients progressively gained RI, revealing that viral infectivity also strongly molds the selection of escape variants and becomes the prevalent driver of evolutionary adjustments under prolonged drug pressure once high level resistance is established. This apparent discrepancy with previous reports that attribute the selection of viral strains under sustained enfuvirtide pressure to the level of resistance alone [23,35] might be due to the duration of enfuvirtide pressure: the latest on-therapy samples in the studies by Menzo et al. and by Goubard et al. were retrieved after 20 and 37 weeks of treatment, corresponding to a time point between early and intermediate in our study. At these time points, the level of resistance is indeed probably the major determinant of quasispecies selection, and viral diversity is still preserved. However, at later time points it is the level of infectivity that drives the selection of particular Envs, a process that is highly constrained and restricts genetic diversity. In conclusion, our findings indicate that selection of viral variants is molded gradually by the level of resistance achieved through mutations within HR1–HR2 at first, and by the viral determinants governing viral infectivity, including specific compensatory mutations and polymorphisms as well as other Env properties. 
The first step of the process, escaping enfuvirtide pressure and achieving resistance, can occur via different routes and involve a diversity of mutations, as illustrated by the relative dispersal of early escape clones from all patients throughout the phylogenetic trees. Infectivity adjustments occur later and are likely more strongly constrained by the env genetic context, translating into evolutionary “turn-backs” and scarce phylogenetic diversity.

Materials and Methods

Patients and patient samples

Patients belong to the Dutch ATHENA Cohort for treatment evaluation and the study received ethical approval in the Netherlands. HIV-infected patients are informed that their data/samples are collected as part of the ATHENA evaluation study and consent is arranged according to an opting-out procedure. Clinicians sign a form to assign patients to the Cohort. Five heavily treated male patients receiving enfuvirtide (90 mg twice daily) (Roche Pharmaceuticals) as part of a salvage regimen in addition to an optimized background treatment (OBT), and who experienced virological failure, were selected. All patients were infected with subtype B viruses. Median viral load was 125372 RNA copies/ml (range: 4960–372000) and mean CD4+ cell count was 147 cells/μl (range: 30–410) at baseline (Table S2). A median of 10 (range: 9–12) resistance mutations to reverse transcriptase inhibitors and a median of 10 (range: 6–12) resistance mutations to protease inhibitors were detected. Enfuvirtide treatment initiation induced a decrease in HIV RNA ranging from 0.8 to 3.0 log. All patients experienced virological failure after a median of 20 weeks on enfuvirtide+OBT. Patient B interrupted enfuvirtide therapy between weeks 22 and 29. Frozen plasma samples obtained prior to enfuvirtide treatment (baseline) and longitudinal samples obtained during treatment failure were selected.

Cell lines

HEK293T cells (ATCC) were maintained in Dulbecco’s Modified Eagle Medium (DMEM) supplemented with 10% heat-inactivated foetal bovine serum (FBS), L-glutamine (2 mM), penicillin (50 U/mL) and streptomycin (50 μg/mL) (all from Gibco). U87.CD4.CCR5 and U87.CD4.CXCR4 cells (AIDS Research and Reference Reagent Program, Division of AIDS, NIAID, NIH [46]) were maintained in DMEM supplemented with 15% FBS, L-glutamine (2 mM), penicillin (100 U/mL), streptomycin (100 μg/mL), geneticin (300 ng/mL) (Gibco) and puromycin (1 ng/mL) (Sigma-Aldrich). Expression of CD4, CCR5 and CXCR4 was monitored by flow cytometry.

Cloning and analysis of patient-derived full envs

HIV-1 RNA was isolated as previously described [47]. env was reverse transcribed and amplified using forward primer Env1a (5’-GGCTTATGGCATCTCCTATGCGGAGAA-3’) HXB2 5954-5982 and reverse primer Env1b (5’-TACGCTTCTCCAGTCCCCCCTTTTCTTTTA-3’) HXB2 9096-9068 for patient C (baseline, early and intermediate), and forward primer FLenv1.1 (5’-TAGAGGCCCTGGAAGCATCCAGGAAG-3’) HXB2 5853-5877 and reverse primer FLenv1.2 (5’-TTGCTACTTGTGATTTGCTTTATAG-3’) HXB2 8936-8913 for all other samples. The RT-PCR mix contained 4.8 μl viral RNA, 0.2 μM of each primer, 0.4 mM of each dNTP (GE Healthcare), 3.3 mM MgSO4, 0.5 μl SuperScript III RT/Platinum Taq High Fidelity DNA polymerase and 8 units RNaseOUT (Invitrogen Life Sciences), and conditions were as follows: reverse transcription for 30 min at 55°C, denaturation for 2 min at 94°C, and 30 amplification cycles (94°C for 20 s, 55°C for 30 s, 68°C for 4 min) followed by a final extension at 68°C for 5 minutes. 
5 μl of this PCR product were further amplified over 30 cycles (95°C for 15 s, 58°C for 30 s, 68°C for 4 min) using forward primer Env2a (5’-AGAAAGAGGCAGACAGGATGGCAATGA-3’) HXB2 6202-6228 and reverse primer Env2b (5’-TTTGGACACCTTTGCCACCCATG-3’) HXB2 8797-8817 for patient C (baseline, early and intermediate), and forward primer FLenv2.1 (5’-GATGAGCCTTTAGGCATCTCCCTATGAGCAGGAAGAAG-3' HXB2 nt 5957-5983) and reverse primer FLenv2.2 (5'-AGCTTGATCCGTCCTGAGATACCTGCTCACC-3' HXB2 8904-8882) for all other samples. The nested PCR mix contained 2.0 mM MgSO4, 0.3 µM of each primer, 1.6 units Expand High Fidelity enzyme mix (Roche Applied Science) and 0.2 mM of each dNTP in Expand High Fidelity buffer. PCR products were purified (QIAquick PCR purification kit, Qiagen), A-tailed with 5 units of Taq DNA polymerase (Roche Applied Biosystems) at 70°C for 20 minutes, and TA-cloned in the pGEM-T-easy vector (Promega) using T4 DNA ligase (Promega) overnight at 4°C. Five clones were randomly selected for each time point (Table S1). For data analysis purposes, clones were classified as “baseline” (pre-enfuvirtide), “early” (first time point after virological failure, weeks 16 to 27), “intermediate” (weeks 42 to 59) and “late” (after week 94). For patients A and D, only one late clone was infectious and was therefore grouped with the corresponding intermediate clones. All clones were sequenced (GenBank accession numbers HQ386140 to HQ386219). Full clonal env sequences were aligned against HXB2 using the MUSCLE software (v3.8.31) [48].

Amplification of patient-derived env and HR1–HR2 sequences

env and HR1–HR2 fragments were PCR amplified from single clones and from reference pHXB2-env (AIDS Research and Reference Reagent Program, NIH) [49] using forward primer Flenv2.1 (5’-GCTTGATCATCTCCTATGGCCGAGAAAGA-3’ HXB2 5955-5983) and reverse primer rec.env_F (5’-AAGGCTGTCTTATTTCTTGTAGTTG-3’ HXB2 8776-8749) for the env amplicon, and forward primer rec.HR1-2_F (5’-GAGGCGACTTTGGAAGATATG-3’ HXB2 7469-7671) and reverse primer rec.HR1-2_R (5’-GGTGAATATCTCTGCTTACCTC-3’ HXB2 8363-8344) for the HR1–HR2 amplicon. Clonal template DNA was amplified using 1.5 units Platinum Taq High Fidelity DNA polymerase (Invitrogen Life Sciences) in High Fidelity Platinum PCR buffer containing 1.5 mM MgSO4, 0.2 mM of each dNTP and 0.2 µM of each primer over 35 cycles (94°C for 30 s, 64°C for 30 s, 60°C for 3 min), followed by a final extension at 60°C for 10 minutes. DNA concentration and quality were assessed by gel electrophoresis and triplicate DNA amplifications were pooled.

Phylogenetic analyses

Phylogenetic analyses of env sequences were performed using the Maximum Likelihood method [50,51] and the gamma-corrected HKY85 [52] model of molecular evolution as suggested by the TOPALi software package (v2.5), setting HXB2 as the outgroup. Bootstrap values were determined using 1000 replicates.

Production of Env and HR1–HR2 recombinant viruses

The EcoRI-NcoI fragment from the previously described pNL4-3.GIV.eGFP [53] was subcloned into pZero-2 (Invitrogen Life Sciences) to engineer two luciferase-tagged pNL4-3-derived vectors, pNL4-3Δen and pNL4-3ΔHR1–HR2. 
To construct pNL4-3Δen, the env ectodomain was deleted by inverse PCR using the phosphorylated reverse primer del.env_RP (5’-CTCTTTCTATTAG-3’ HXB2 6220-6197) and forward primer del.env_FP (5’-GCTTGATCATCTCCTATGGCCGAGAAAG-3’ HXB2 8322-8346), creating an AfeI restriction site (shown in italic) for linearization. The parental DNA fragment was DpnI digested and the PCR amplicon was self-ligated using T4 DNA ligase (New England Biolabs) and transformed into electrocompetent TOP10 E. coli cells (Invitrogen Life Sciences). The SfiI-BamHI fragment containing the remaining endodomain of gp41 was then cloned into the SfiI-BamHI digested plasmid pNL4-3.Luc (gift of T. Dragic, Albert Einstein College of Medicine) [54], generating pNL4-3Δen.Luc. pNL4-3ΔHR1–HR2.Luc was engineered similarly, using the phosphorylated reverse primer del.HR1-2_RP (5’-ATTGGGCACCCTGTGACCTC-3’ HXB2 7854-7834) and forward primer del.HR1-2_FP (5’-GATAAATGGGCAAGATTGTGG-3’ HXB2 8214-8234), creating an AciI restriction site (shown in italic) (Fig. S1) for linearization. Correct deletion of the env and HR1–HR2 regions was verified by sequencing. Full Env and HR1–HR2 recombinant viral particles were generated by cotransfecting HEK293T cells with 8 µg of AfeI-linearized pNL4-3Δen.Luc or AciI-linearized pNL4-3ΔHR1–HR2.Luc together with 4 µg of env amplicon or 1 µg of HR1–HR2 amplicon respectively, complexed with 20 µl of HEKFectin (Bio-Rad). Cell culture medium was harvested 48 hours later and clarified at 4°C to eliminate cell debris, and p24 antigen concentration was quantified by ELISA (Perkin-Elmer). Experimental biases due to the recombination event with two different amplicons were excluded by verifying that HR1–HR2 and full Env recombinant viruses produced from HXB2 and from an otherwise identical HXB2-derived double mutant carrying the G36S+V38M mutations in gp41 (HXB2-SIM) featured similar IC50 and infectivity levels (data not shown).

Determination of viral infectivity

10⁴ U87.CD4.CXCR4 or U87.CD4.CCR5 cells seeded in 96-well plates were infected (in triplicate wells) with serial 2-fold dilutions of viral supernatants (400 to 50 pg/well of p24 antigen). Infection was synchronized by spinoculation (1200 g, 2 hours, 25°C) and monitored after 48 hours by measuring bioluminescence in cell lysates (Promega luciferase assay system) using a Tecan GENios Pro luminometer. Relative Light Units (RLUs) were plotted against viral p24 inoculi and the slope of the linear regression was calculated using GraphPad Prism v5.01 (GraphPad Software). Relative infectivity (RI) was defined as the ratio of the infectivity (slope) of each recombinant virus to that of the reference strain HXB2.

Enfuvirtide phenotypic susceptibility/resistance testing

10⁴ U87.CD4.CXCR4 or U87.CD4.CCR5 cells were infected with recombinant viral supernatants in the presence of increasing enfuvirtide (Eurogentec) concentrations (0.04 ng/ml to 15×10³ ng/ml). Viral input was adjusted to produce 10²–10⁵ RLUs. Infection (in triplicate wells) was synchronized by spinoculation and luciferase activity was monitored 48 hours after infection. IC50 values were calculated using the GraphPad Prism software and resistance was expressed as the fold-change in IC50 (FCIC50) relative to HXB2. 
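The two readouts above boil down to simple curve fits: RI is a ratio of RLU-versus-p24 regression slopes, and FCIC50 is a ratio of fitted IC50 values. The authors performed these calculations in GraphPad Prism; the snippet below is only an illustrative Python/SciPy restatement of the same arithmetic, and every number in it is an invented placeholder.

```python
import numpy as np
from scipy.stats import linregress
from scipy.optimize import curve_fit

# --- Relative infectivity: slope of RLU vs p24 input, normalized to HXB2 ---
p24_input = np.array([50, 100, 200, 400])            # pg p24/well (placeholder)
rlu_sample = np.array([1.2e3, 2.5e3, 5.1e3, 9.8e3])  # luciferase readout, sample virus
rlu_hxb2 = np.array([2.0e3, 4.1e3, 8.3e3, 16.5e3])   # luciferase readout, HXB2 reference

RI = linregress(p24_input, rlu_sample).slope / linregress(p24_input, rlu_hxb2).slope

# --- Enfuvirtide susceptibility: IC50 from a sigmoidal dose-response fit ---
def dose_response(conc, top, ic50, hill):
    """Residual infection signal at a given enfuvirtide concentration."""
    return top / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.04, 0.4, 4, 40, 400, 4000, 15000])              # ng/ml
rlu_drug = np.array([9.5e3, 9.1e3, 7.8e3, 4.6e3, 1.5e3, 3.0e2, 8.0e1])

popt, _ = curve_fit(dose_response, conc, rlu_drug,
                    p0=[rlu_drug.max(), 50.0, 1.0], maxfev=10000)
ic50_sample = popt[1]
ic50_hxb2 = 10.0                    # placeholder: would come from the same fit on HXB2
FCIC50 = ic50_sample / ic50_hxb2    # fold-change in IC50 relative to HXB2

print(f"RI = {RI:.2f}, IC50 = {ic50_sample:.1f} ng/ml, FCIC50 = {FCIC50:.1f}")
```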
Statistical analyses

RI and FCIC50 are expressed as medians with the 25th and 75th percentiles [interquartile range, IQR], and p-values <0.05 (two-sided) were considered statistically significant. Intra-patient results were compared using a nonparametric Kruskal-Wallis test followed by Dunn’s Multiple Comparison Test. The nonparametric Mann-Whitney U-test was used to compare inter-patient samples and the Wilcoxon signed-rank test to analyze paired full Env and HR1–HR2 recombinant viruses. For dual-tropic Env recombinant viruses, mean FCIC50 values were used.

Supporting Information

Figure S1 Construction of the pNL4-3Δen.Luc and the pNL4-3ΔHR1–HR2.Luc vectors. After subcloning of the EcoRI-NotI fragment from pNL4-3.GIV.eGFP into pZero-2, fragments 6220–8322 or 7854–8214 were deleted by inverse PCR and an AfeI or AciI restriction site was introduced in each of the constructs, respectively. The NotI-RsalII fragments deleted of the Env ectodomain or of the HR1–HR2 regions were cloned into the NotI-RsalII digested pNL4-3.SLuc in order to generate the final backbones pNL4-3ΔEnv.SLuc and pNL4-3ΔHR1–HR2.SLuc. (PDF)

Table S1 Supporting table. (PDF)

Table S2 Supporting table. (PDF)

Acknowledgments

We are greatly indebted to Daniel Struck for his assistance in the phylogenetic analyses as well as for fruitful discussions. Furthermore, we would like to thank Minny Meeuwissen for assistance in data collection.

Author Contributions

Conceived and designed the experiments: FB MN MR JCS DPB. Performed the experiments: FB ML MR J-VS PMvH. Analyzed the data: FB MN AMJW DPB. Contributed reagents/materials/analysis tools: AMJW AIMH PPK HGS. Wrote the paper: FB MN CD DPB.
Sentence Recognition: A Theoretical Analysis

ICS Technical Report #89-7

by

Walter Kintsch and David Welch
University of Colorado, Boulder, CO

Franz Schmalhofer
University of Freiburg, FRG

and

Susan Zimny
Indiana University, Pittsburgh, PA

Abstract

How sentences from a discourse are recognized can be explained by combining models of item recognition derived from list-learning experiments with notions about the representation of text in memory within the framework of the construction-integration model of discourse comprehension. The implications of such a model of sentence recognition are worked out for two experimental situations. In the first experiment, subjects read brief scriptal texts and were then tested for recognition with verbatim old sentences, paraphrases, inferences, and two types of new distractor sentences after delays from 0 to 4 days. Differential decay rates for the wording and meaning of the text and for scriptal information were observed. The model provides a good quantitative account of the data. In the second experiment, the speed-accuracy trade-off in sentence verification was studied for old verbatim sentences and correct and false inferences. Qualitative predictions derived from the model, based on the parameter estimates from the first study, were in agreement with the data. Readers without an adequate situation model were found to make quick judgments based on surface and textbase characteristics of the test sentences, while experts are initially more cautious because they rely more on the situation model.

A large number of experiments on recognition memory exist in which the material used consists of lists of words or pictures. Several models of recognition memory are available today which account very well for most of the phenomena observed in these experiments. Can these theories also account for experimental data when the materials used are not lists of items, but coherent discourse? By combining the essential features of current models of recognition memory developed in the context of list-learning studies with a model of discourse comprehension and assumptions about the representation of discourse in memory, a model of sentence recognition can be obtained that accounts for the major features of sentence recognition data. Thus, we do not propose developing a new model for sentence recognition. Instead, we shall combine existing models of list learning and text comprehension processes to derive a theoretical analysis of sentence recognition. We begin by comparing three current models of item recognition (Gillund & Shiffrin, 1984; Hintzman, 1984; Murdock, 1982) to determine their common features, which we take over in developing a model of sentence recognition. We then introduce some notions about the representation of discourse in memory from van Dijk & Kintsch (1983) and briefly sketch the construction-integration model of discourse comprehension (Kintsch, 1988). Finally, we show how these elements in combination provide an account of sentence recognition data. We demonstrate that our model can be made to match a set of sentence recognition data in which old verbatim sentences, paraphrases, inferences and new sentences are used as test items for retention intervals varying between an immediate test and a four-day delay (Experiment I). The model is further evaluated by testing some of its qualitative implications with respect to the speed-accuracy trade-off in sentence recognition judgments (Experiment II).
1. Models of Item Recognition

Three models of recognition memory will be considered here, those of Hintzman (1984), Murdock (1982), and Gillund & Shiffrin (1984). All three models are formulated rigorously so that quantitative predictions are possible, and all appear to be empirically adequate in the domains to which they have been applied. At first glance, the three models appear to be about as different as they could be in their basic make-up: Murdock's is a distributed memory model; Hintzman postulates multiple episodic traces; Gillund & Shiffrin conceive of memory as a network of interassociated nodes, while the other two models employ feature vectors. However, these models share some essential similarities when they are expressed formally, and it is these that we shall use as a basis for a model of sentence recognition.

**Hintzman (1984):** This model is a multi-trace model, in which each experience leaves its own memory trace. Memory traces, as well as test items, are represented as feature vectors, the values of the features being 1, -1, or 0. The similarity of a memory trace to some probe is the (weighted) dot product of their corresponding feature vectors. The total activation of a probe, its Intensity I, is given by the sum of the similarity values of the probe with all traces in memory. E(I) = 0 if the probe does not resemble any traces and increases as the quality of the match improves. Under reasonable conditions Var(I) can be treated as constant. For recognition judgments, the I distribution is fed into a TSD-like (signal detection) decision mechanism.

**Murdock (1982):** Murdock also represents memory traces as well as test items as feature vectors. However, a single vector now represents the memory trace of a whole list of items, with which the feature vectors of the test items are compared on a recognition test. Once again, a dot product is taken and the resulting values are summed to obtain a retrieval strength value, which is then used in a TSD-like decision system. There are other versions of distributed memory models for item recognition which differ from Murdock in their mathematical formulation, but these differences are irrelevant at this general level of analysis.

**Gillund & Shiffrin (1984):** Unlike the previous two models, items in this model are represented as nodes related to each other by associative links in a retrieval structure. Suppose that there is a set of items \(\{I\}\), a test node \(T\), and a context node \(C\), with the similarity between the test node and an item \(I\) being \(S(T,I)\), and the similarity between the context node and item \(I\) being \(S(C,I)\). For recognition, the memory probe is assumed to consist of \(T\) and \(C\), and the activation resulting from comparing the memory probe with item \(I\) is given by the product \(S(T,I)\times S(C,I)\). The total activation of \(T\) is just the sum of the activations for each of the items in memory, and, as in the previous models, serves as a test statistic for a TSD decision system.
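To make the shared formal structure concrete, here is a toy illustration in Python (not drawn from any of the three papers; all feature values, similarities and the criterion are invented). It computes a strength value by comparing a probe with every stored trace and summing the outcomes, once Hintzman/Murdock-style via dot products of feature vectors and once Gillund & Shiffrin-style via the product S(T,I)×S(C,I), and then applies a fixed criterion as a stand-in for the TSD decision stage.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_features = 20, 50

# Hintzman/Murdock style: traces and probes are feature vectors with values 1, -1 or 0
memory = rng.choice([-1, 0, 1], size=(n_traces, n_features))
probe = memory[3].copy()               # an "old" probe, identical to one stored trace

similarities = memory @ probe          # dot product of the probe with every trace
intensity = similarities.sum()         # summed over ALL traces in memory

# Gillund & Shiffrin style: familiarity = sum over items of S(T,I) * S(C,I)
s_test = rng.random(n_traces)          # similarity of the test cue to each item
s_context = rng.random(n_traces)       # similarity of the context cue to each item
familiarity = np.sum(s_test * s_context)

criterion = 30.0                       # invented criterion of the TSD-like stage
print(intensity, familiarity, intensity > criterion)
```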
Obviously, this brief description does not do justice to the three models considered here. Nevertheless, it suffices to make a few important points. The discrepancy in their verbal formulation notwithstanding, they agree on some crucial mathematical properties. First, in all models the target is compared to all memory traces, and the sum of these comparisons provides the relevant test statistic. This sets these models apart from the previous generation of recognition models, where a recognition decision was thought to be dependent only upon the similarity of the target item to its corresponding memory trace. This is a crucial feature of item recognition. However, it does not appear to matter much exactly how this comparison between the set of memory traces and the target item is performed: whether the traces are summed first, and then the comparison is made (as in Murdock), or whether the comparisons are made first and their outcomes are then summed (as in Hintzman and Gillund & Shiffrin) makes no difference for present purposes. Similarity between trace and target in the Hintzman and Murdock models is computed by the dot product of the corresponding feature vectors. In Gillund & Shiffrin the links in the associative network represent familiarity values directly. The discourse comprehension theory as formulated in Kintsch (1988) lends itself most naturally to the latter approach, though a more molecular analysis would be possible in principle. Finally, all three models use a TSD decision mechanism to turn strength measures (Intensity, Familiarity) into yes-no decisions. These three elements sufficiently specify the recognition mechanism for the model to be proposed here. The idiosyncratic features of the three models will be neglected in favor of these formal commonalities. The fact that all three models fit recognition data about equally well implies that the features common to these models are responsible for the fit to the data. The rest represents either differences in theoretical metaphors and verbal interpretations of the common formal substance of the model, or, if it is to be taken more seriously, requires for resolution a broader framework than just laboratory studies of item recognition.¹

2. Levels of Representation

According to van Dijk & Kintsch (1983), three levels must be distinguished in the memory representation of discourse. At one level, a text is characterized by the exact words and phrases used. This is the surface level of representation. Linguistic theory provides the tools for the description and analysis of this level of representation. At another level, not the exact wording but the semantic content of the text must be represented. Both the local (microstructure) and global (macrostructure) characteristics of the text play a role here (Kintsch & van Dijk, 1978). Several representational schemes have been developed within linguistics, semantics, artificial intelligence, and psychology for this purpose. We shall use here the propositional representation first introduced in Kintsch (1974). The situation model is the third level of representation important for text comprehension (van Dijk & Kintsch, 1983). What is represented at this level is not the text itself, but the situation described by the text, detached from the text structure proper and embedded in pre-established fields of knowledge. The principle of organization at this level may not be the text's macrostructure, but the knowledge schema (e.g., an appropriate script or frame) used to assimilate it. In a number of experimental studies it has been shown that these three levels of representation can be distinguished in sentence recognition experiments (e.g., Schmalhofer & Glavanov, 1986; Fletcher & Chrysler, in press). Old verbatim sentences are represented at all three levels of representation: the surface structure, the textbase, and the situation model.
Paraphrases of old sentences, on the other hand, differ in terms of the surface structure from what is stored in memory, but not at the textbase and situation model level. Inference statements that were not directly expressed in the text differ from the memory representation both in terms of their surface structure and propositional content, but they are part of the same situation model. Finally, contextually related, but not inferable, test sentences differ from the memory representation at all three levels. Thus, by looking at the differences among these types of test sentences, estimates of the memory strength at each level of representation may be obtained in sentence recognition experiments.

3. The Construction-Integration Model

The construction-integration model of Kintsch (1988) describes how texts are represented in memory in the process of understanding and how they are integrated into the comprehender's knowledge base. The crucial features of the model are as follows. Comprehension is simulated as a production system, the rules of which operate at various levels: some build propositions from the linguistic information provided by the text; some generate macropropositions; some retrieve knowledge from the comprehender's long-term memory that is related to the text, thus serving as mechanisms for elaboration and inference. All these rules share one general characteristic: they are weak, "dumb" rules that don't always achieve the desired results. In addition to what should have been constructed, these rules generate redundant, useless, and even contradictory material. In contrast, most other models of comprehension attempt to specify strong, "smart" rules, which, guided by schemata, arrive at just the right interpretations, activate just the right knowledge, and generate just the right inferences. Smart rules necessarily must be quite complex, and it is very hard to make smart rules work right in ever-changing contexts. Weak rules, as they are used here, are obviously much more robust - but, left to themselves, they do not generate acceptable representations of the text. Irrelevant or contradictory items that have been generated by weak rules, however, can be rejected if we consider not just the set of items generated by the rules, but also the pattern of interrelationships among them. Items that are irrelevant to the text as a whole, produced by the indiscriminate firing of some production rule, will be related to only one or a few other items, while contradictory items will be negatively connected to some of the other items in the network of items produced by the model. Relevant items, on the other hand, will tend to be strongly interrelated - be it because they are derived from the same phrase in the text, or because they are close together in the textbase, or because they are related semantically or experientially in the comprehender's knowledge base. Thus, if activation is spread around the network of items, an integrated representation can be obtained. The construction-integration model achieves with weak rules followed by an integration process what other models of text comprehension try to achieve with smart rules. Kintsch (1988) not only describes the relevant details of this model, but also reports some results that (a) suggest that this kind of a model may capture some features of human comprehension processes better than "smart" comprehension models, and (b) demonstrate that the model is computationally adequate in some reasonably complex domains.
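As a purely illustrative sketch of the integration step (a made-up five-item mini-network, not Kintsch's simulation), the snippet below spreads activation over a small connection matrix in which one item is only weakly linked (an irrelevant elaboration) and one is negatively linked (a contradiction); after integration the relevant items dominate and the other two end up with little or no activation.

```python
import numpy as np

items = ["P1", "P2", "P3", "irrelevant", "contradictory"]
C = np.array([
    [0., 5., 3., 1., 0.],   # P1
    [5., 0., 5., 0., 0.],   # P2
    [3., 5., 0., 0., -3.],  # P3 (contradicted by the last item)
    [1., 0., 0., 0., 0.],   # irrelevant elaboration, only one weak link
    [0., 0., -3., 0., 0.],  # contradictory elaboration, negative link
])

a = np.full(len(items), 1.0 / len(items))   # start with uniform activation
for _ in range(100):
    a_new = np.clip(C @ a, 0.0, None)       # spread activation; no negative values
    a_new /= a_new.sum()                    # renormalize to total activation 1
    if np.abs(a_new - a).mean() < 1e-4:     # stop once the pattern has settled
        a = a_new
        break
    a = a_new

print(dict(zip(items, a.round(3))))         # relevant items dominate after integration
```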
The construction-integration model provides a natural account of sentence recognition. First, comprehension of a paragraph is simulated in the way just outlined, resulting in a memory representation consisting of text propositions, plus whatever knowledge elaborations and inferences were generated that survived the integration process. These items have some sort of activation value - central, important propositions being more highly activated than peripheral ones - and they are related to each other in the ways specified by the model. Formally, this means we have an activation vector \( A \), specifying for each element that was constructed its final activation value, and a coherence matrix \( C \), specifying the relations among these elements. The two characterize in the model the memory representation achieved as a result of comprehending this paragraph. The model is then given the to-be-recognized test sentence to comprehend, for which it will construct the same kind of representation. In recognition, the representation of the test sentence is compared with the representation of the whole paragraph. This is done by joining the two coherence matrices and observing how much activation flows from the original paragraph to the test sentence. If the test sentence fits in well with the original text (e.g., it is actually a part of it), it will become strongly activated. If it has no connections at all to the original material, it will not be activated at all. The more similar it is to the original, the more connections there will be, and the more highly activated the test sentence will become. Thus, we can use the amount of activation that flows from the original paragraph to the test sentence as a measure of its familiarity or strength, and use a decision rule to derive a binary recognition response. The proposed model of sentence recognition is based on three components: a recognition mechanism from the list-learning literature, the notion that discourse is represented at different levels, and the processing mechanisms of the construction-integration model. The test item - the test sentence - is compared, at each level of representation, against all items in memory - the whole text. The comparison yields an index of the similarity between what is remembered and the test item, as measured by the amount of activation that flows from the memory representation into the test item. This similarity index is then used in a decision mechanism. Thus, the recognition principles derived from the list-learning literature have been embedded into the framework of the construction-integration model. In the next section, an experiment on sentence recognition from discourse will be described. These data will provide the framework for the detailed and formal development of our model.

4. Sentence Recognition

**Experiment I.** Zimny (1987) studied sentence recognition for verbatim old sentences, paraphrases, inferences, and two types of distractor sentences for retention intervals up to four days. She constructed 18 texts of about 150-200 words each, based on the scriptal norms of Galambos (1982). Each text described a sequence of scriptal events (e.g. "Nick goes to the movies") by stringing together high-frequency, familiar actions from the norms, interspersed with some non-scriptal material (e.g. his girlfriend wore a dress with pink polka dots). 
The reason for constructing these texts according to script norms was that we would then know what sort of situation model was likely to be constructed for each text, namely a script-based one. Linguistic analyses specify the structure of the surface representation for arbitrary texts, and propositional analyses are similarly general, yielding textbase hierarchies for a wide variety of texts. Unfortunately, this is not the case for the situation model: for most texts we have no clear idea what sort of a situation model would be generated. Consequently, we must work with special cases where enough research has been done to establish this kind of information. Research in this area has therefore focused on a few cases such as maps, as in Perrig & Kintsch (1985), mental models, as in Johnson-Laird (1983), or scripts, as in Bower, Black, & Turner (1979) as well as the present case. For each text, Zimny constructed five test sentences which vary in terms of their level of discourse representation. Old sentences appeared at test as they had in the original text, and are represented at the surface, textbase, and situation model levels. Paraphrases involved minimal word order or single word changes; they are identical with sentences from the text at the levels of their textbase and situation model representation, but differ in some ways in their surface structure. Inferences were sentences that could be inferred by readers from the surrounding context with high reliability; these sentences fit into the same situation model as actual sentences from the text, but they differed both in terms of their textbase and surface representations. While an attempt was made to keep the test sentences similar in terms of their length and complexity, they obviously had to differ in numerous ways, with some being much more salient and recognizable than others. Therefore, Zimny wrote three different versions of her texts, so that each sentence could serve either as an old, paraphrase, or inference sentence. In addition, two entirely new test sentences were used with each text. One sentence was contextually appropriate, while the other was unrelated to the theme and context of the text and served as the baseline for the recognition analysis. One group of subjects was asked to recognize the test sentences for each text right after reading the text. Subjects were instructed to answer "yes" if they thought they had seen the sentence before, and "no" otherwise. Three other groups of subjects received the test sentences after delays of 40 minutes, 2 days, or 4 days. The results most relevant for present purposes are shown in Figures 1 and 2. Figure 1 shows the percent "yes" responses subjects gave to old test sentences, paraphrases, inferences, as well as context-appropriate and context-inappropriate distractor items, as a function of delay. The main effects of sentence type and delay were both statistically significant, but most importantly, there was a significant interaction between delay and sentence type, $F(6,280) = 38.7$, $p < .001$. Figure 2 provides estimates of the trace strengths at the three levels of representation over the delay intervals. The percent "yes" data were first turned into d' measures by using the context-inappropriate distractor items as a baseline. 
This transformation was necessary to remove strong, delay-dependent bias effects from the analysis: on the immediate test, subjects used a strict criterion for saying they had seen a sentence before, but after four days they were willing to assert this on the basis of much weaker evidence. Secondly, difference measures between the d' values were computed. The difference between the memory strengths of old sentences and paraphrases provides a measure of the strength of the surface representation (how something was said). The difference between the strengths of the paraphrase sentences and inferences provides a measure of the strength of the textbase representation (whether something was actually said in the text or not). And finally, the difference between the strength of the contextually appropriate distractor items and the inference sentences provides a measure of the strength of the situation model (whether something is true in the given situational context or not). These difference values are plotted in Figure 2. A statistical analysis of these data revealed that, in addition to significant main effects, the interaction between delay and trace type was also statistically significant, $F(6,280) = 6.29$, $p<.001$.

- Insert Figs. 1 & 2 about here -

Figure 2 shows some interesting trends. First of all, surface memory was found only on the immediate test. Memory for the textbase was quite strong initially, decreased with delay, but remained above zero even after four days. Situational memory, on the other hand, stayed at a high level, independent of delay. These are the data that will be modelled here.
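The d' and difference-measure computations just described amount to a probit transform and three subtractions. The sketch below illustrates them with invented response proportions, assuming the standard equal-variance signal-detection formula d' = z(hit rate) - z(false-alarm rate), with the context-inappropriate distractors providing the false-alarm baseline.

```python
from scipy.stats import norm

# Hypothetical proportions of "yes" responses on the immediate test
p_yes = {"old": 0.90, "paraphrase": 0.75, "inference": 0.55,
         "context_distractor": 0.35, "unrelated_distractor": 0.10}

z = norm.ppf                                   # inverse normal (probit) transform
baseline = z(p_yes["unrelated_distractor"])    # context-inappropriate distractors
d_prime = {k: z(v) - baseline
           for k, v in p_yes.items() if k != "unrelated_distractor"}

# Difference scores index the strength of each level of representation
surface = d_prime["old"] - d_prime["paraphrase"]                    # how it was said
textbase = d_prime["paraphrase"] - d_prime["inference"]             # whether it was said
situation = d_prime["inference"] - d_prime["context_distractor"]    # whether it is true here

print(d_prime)
print(surface, textbase, situation)
```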
**Sentence Recognition: Theoretical Derivations.** To derive theoretical predictions for the data from the Zimny experiment, somewhat different aspects of the construction-integration model will have to be emphasized than in Kintsch (1988), but it remains the same model. In Kintsch (1988) the memory representation of a text was developed only at the propositional level: surface traces, as well as situational representations, were neglected. Obviously, these distinctions will have to be made explicit in a treatment of sentence recognition. On the other hand, the focus of Kintsch (1988) was on the performance of the model as an inference engine - something that we shall neglect in the present application of the model. The reason for omitting this aspect of the model here is that it does little actual work in the present application, and that its inclusion would make an already complex story even more complicated. This simplification does introduce some distortions, however, which will have to be considered after the simplified case has been presented. The Zimny data are averaged over subjects and sentences. Predictions will be derived for a single text which is much briefer than the original texts used by Zimny, and for only a few specific test sentences. While these materials are not atypical, it is certainly the case that for another text example and other test sentences somewhat different quantitative predictions and parameter values would have been obtained. However, the overall pattern of results would presumably remain the same. Thus, predictions for a "typical" subject and material set are compared here with data averaged over subjects and materials. The following two-sentence text will be used as the input text: *Nick decided to go to the movies. He looked at the newspaper to see what was playing.* (This is the beginning of a text based on a Going-to-the-Movies script used by Zimny (1987), which then continues through the whole event.) In Kintsch (1988), this text would have been broken down into propositional units (such as NICK, (GO-TO,NICK,MOVIES), etc.) which would then activate knowledge through their associative links in the reader's long-term memory store (perhaps *Nick wanted to see a film*). This propositional structure would be consolidated through an integration process which eliminates the context-irrelevant knowledge that had been activated. For the sake of simplicity, we omit the knowledge activation process in this application and only look at the actual text contents, as explained above. However, since we know that surface properties of the text as well as the situation model also play a role in sentence recognition, we make explicit in our analysis the linguistic relations as well as the scriptal relations among the input units in the text. A simulation of the model constructs a network of text elements that specifies how strongly each element is related to each of the others. We are concerned with three types of relationships, corresponding to the three levels of representation of text in memory. Within each level, we specify relation strengths in terms of distances among elements in a coherence network. The pattern of interconnectedness among these items will determine the degree of activation each element will receive.

- Insert Fig. 3 about here -

In Figure 3, 10 word groups (linguistic elements, L) have been distinguished in the text. Most of these correspond to propositions (P) as well as elements of the situation model (M), except that P7 and M7 do not have a corresponding linguistic element L7. The linguistic elements form syntactic chunks (S), according to the phrase structure of the sentences. E.g., L3 (to-go-to) and L4 (the-movies) combine to form the chunk S3. Together, L and S constitute the elements of the surface representation of the text. (They are distinguished here merely for convenience, to allow a ready comparison between the actual words and phrases used in the text and the propositions or situation model elements corresponding to these words or phrases.) The graph shown in Figure 3 allows one to calculate a distance matrix among the L- and S-elements: for instance, L1 is one step away from S1, three steps away from L2, and not connected to L10. The propositions P1 - P9 are connected to each other in a somewhat different pattern. Following Kintsch (1974), one can approximate the structure of a propositional textbase by noting the pattern of argument overlap among the propositions. For example, P1 appears as an argument in P2, P3, P5, and P8, while P2 overlaps with P1 and P3. The textbase structure obtained via argument overlap is shown in Figure 4. This network defines a distance matrix among the propositional elements: P2 is a single step away from P1, three steps away from P7, and four steps away from P9.

- Insert Fig. 4 about here -

A similar distance matrix can be computed for the elements of the situation model. Since the text was explicitly constructed from script norms, it can be safely assumed that the situation model in this case is structured as a script, i.e. as a schema with slots for Properties, Agents, Preparatory Steps, etc. (e.g., Schank & Abelson, 1977). 
The script header M10 must be added to the items directly derived from the text – an exception to the policy of neglecting all inferences in the present application of the model. The resulting structure is also shown in Figure 4. This time, M2 is one step away from M3, two steps from M1, one step from M7, and three from M9. It is not necessary to think of L1 (the exact word used in the text), P1 (the corresponding proposition) and M1 (an element of the situation model) as three distinct objects in the reader’s memory representation. It is the same “Nick” in all three cases, but viewed once from a linguistic perspective where it enters into a certain set of relations with other linguistic elements, once considered as a proposition which plays a role in the textbase, and once considered in terms of its role in the “Go-to-the-Movies” script. For analytic purposes it is useful to distinguish L, P, and M units, but what matters conceptually is that text elements enter into different relationships with other elements, depending upon the level of analysis: surface, propositional, or situational.³ These relationships define a network which is represented by the coherence matrix. This matrix is needed as a basis for the integration process. The rows and columns of this matrix are given by the elements L1 - L11, S1 - S8, P1 - P9, and M1 - M10. The entries of the matrix designate the strength of the relationship between row and column elements. Numerical parameters for the strength of relations among elements a certain distance apart in the graphs shown in Figures 3 and 4 must be estimated at this point. An unsystematic trial-and-error procedure was employed to obtain these estimates. Intuition suggests that local relations in the surface structure and textbase are quite strong but weaken rapidly as the distance between items increases. Hence, values of 5 and 3 were used in the coherence matrix for items 0 and 1 steps apart in either the surface structure or the textbase. All other connections were set to 0. On the other hand, scripts are more stable long-term memory structures, allowing for more long-distance relations, so that strength values of 4, 3, 2 and 1 were assigned to items 0, 1, 2 and 3 steps apart in the script structure, respectively. Finally, a value of 4 was used to tie together the same node at different levels of representation, e.g., L1 to P1, and P1 to M1. In consequence, the effective connections for the surface and textbase elements in the coherence matrix correspond to the links shown in Figures 3 and 4, but the connections among the model elements are much richer, since not only neighboring nodes are directly connected, but also nodes two and three steps apart in Figure 4. In this way a 38 x 38 coherence matrix was obtained. Each of the 38 items was assigned an initial weight of 1/38 in an activation vector A1. This activation vector was successively multiplied with the coherence matrix. After each multiplication, the resulting activation vector was renormalized so that the sum of all activation values was 1. After 7 such cycles the average change in activation was less than .0001, and the process of spreading activation was stopped at that point.
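The construction of the coherence matrix and the spreading-activation procedure just described can be summarized in a few lines of code. The sketch below is a deliberately miniaturized stand-in (three elements per level instead of the 38 actual L, S, P and M units, with hypothetical step distances), but it uses the strength values quoted above (5/3 for surface and textbase links at 0/1 steps, 4/3/2/1 for script links at 0-3 steps, 4 for cross-level identity links) and the same multiply-renormalize loop with a .0001 stopping criterion.

```python
import numpy as np

inf = np.inf
# Hypothetical step distances within each level (inf = not connected)
d_surface = np.array([[0, 1, inf], [1, 0, 1], [inf, 1, 0]])   # L/S units
d_textbase = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]])      # P units
d_script = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]])        # M units

def level_weights(dist, strengths):
    """Map step distances to connection strengths; anything farther gets 0."""
    w = np.zeros_like(dist, dtype=float)
    for steps, s in enumerate(strengths):
        w[dist == steps] = s
    return w

n = 3
C = np.zeros((3 * n, 3 * n))
C[:n, :n] = level_weights(d_surface, [5, 3])               # surface: 5, 3
C[n:2*n, n:2*n] = level_weights(d_textbase, [5, 3])        # textbase: 5, 3
C[2*n:, 2*n:] = level_weights(d_script, [4, 3, 2, 1])      # script: 4, 3, 2, 1
for i in range(n):                                         # same node across levels
    C[i, n + i] = C[n + i, i] = 4                          # L_i -- P_i
    C[n + i, 2*n + i] = C[2*n + i, n + i] = 4              # P_i -- M_i

A = np.full(3 * n, 1.0 / (3 * n))        # initial weight 1/N for every element
for cycle in range(100):
    A_new = C @ A                        # multiply with the coherence matrix
    A_new /= A_new.sum()                 # renormalize: total activation = 1
    if np.abs(A_new - A).mean() < 1e-4:  # stop when the average change is negligible
        break
    A = A_new
print(cycle + 1, A_new.round(3))         # number of cycles and final activations
```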
P elements are more strongly activated, partly because they are embedded in a more strongly interconnected network than the linguistic elements, and partly because they are directly connected to the dominant M elements. The reason for the higher activation of the M elements is of course their much greater interconnectedness. Note that the only inference admitted here, the "Going-to-the-Movies" script header, has become one of the most highly activated items.

- Insert Fig. 5 about here -

The memory trace, then, consists of three components: the 38 elements that were constructed from the text (in the general case, these would be augmented by a substantial amount of activated knowledge - inferences and elaborations), their interconnections as represented by the coherence matrix $C$, and their activation values, given by the activation vector $A$.

We can now turn to the recognition test. First, consider an old test sentence that is taken verbatim from the original text, e.g. *He looked at the newspaper*. As in the memory models discussed above, the familiarity value of this sentence is based on the dot product $T \cdot A$, where $T$ is a vector with unit activation in all elements associated with the test sentence and $A$ is the activation vector. The calculations are illustrated in Table 1.

Now consider a paraphrase, such as *Nick studied the newspaper*. Most of the elements constructed from this sentence are again duplicates of elements in the existing memory structure, but there are some new ones: the word *studied* (but not the proposition P5, which remains unaffected by the substitution of a synonym), as well as two new $S$ elements (in place of S4 and S5). These three new elements are added to the coherence matrix and connected with the existing memory structure in the same way as the original elements themselves were interconnected. Thus, an expanded coherence matrix $C_P$ is obtained. Activation is now spread through this new structure until the activation vector $A_P$ stabilizes, which occurs after just 2 cycles. Table 1 shows the resulting pattern of activation for this test sentence. Its familiarity is slightly below that of the old, verbatim sentence, in qualitative agreement with Zimny's data.

- Insert Table 1 about here -

The computation of familiarity values is also shown for two inference sentences in Table 1. The first test sentence "*Nick wanted to see a film*" is composed almost entirely of new elements, requiring the addition of 12 items to the original coherence matrix. It is a plausible inference (though not a logically necessary one), and its familiarity value comes out quite high, though well below that of the paraphrase sentence. The second inference sentence "Nick bought the newspaper" shares more elements with the original memory structure, but does not fit into the script structure as tightly as the first (wanting to see a film is itself a preparatory step in the Movies script, while buying the newspaper is just something appended to the newspaper introduced earlier). As a result, the second inference receives slightly less activation than the first. Finally, the familiarity value of a distractor sentence "Nick went swimming" is computed in Table 1; its only connection with the original paragraph is the name "Nick", and it receives the lowest activation value, as it should. The familiarity values computed so far look sensible, and are in qualitative agreement with the data. With additional assumptions about forgetting, further predictions can be derived. 
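Before turning to forgetting, the familiarity computation illustrated in Table 1 can be sketched in code. The two helpers below are a minimal sketch, not the authors' implementation: the way new elements are tied into the expanded matrix (the `new_links` strengths and the `self_strength` default) is an assumption chosen to mirror the description above.

```python
import numpy as np

def familiarity(test_indices, activation):
    """Familiarity of a test sentence: the dot product T . A, where T has unit
    entries for the memory elements onto which the test sentence maps."""
    T = np.zeros_like(activation)
    T[test_indices] = 1.0
    return float(T @ activation)

def expand_coherence(C, new_links, self_strength=5.0):
    """Add one row/column per new element (e.g. the word 'studied' and the two
    new S chunks of the paraphrase) and connect it to the existing structure.
    Each entry of new_links maps existing-element indices to link strengths."""
    n, k = C.shape[0], len(new_links)
    C_new = np.zeros((n + k, n + k))
    C_new[:n, :n] = C
    for j, links in enumerate(new_links, start=n):
        C_new[j, j] = self_strength
        for i, s in links.items():
            C_new[i, j] = C_new[j, i] = s
    return C_new
```

The expanded matrix C_P would then be re-integrated exactly as before, and the familiarity of the paraphrase read off from the resulting activation vector.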
Suppose we simulate memory for two delay intervals, a short delay, corresponding to Zimny's 40-min. and 2-day intervals, which yielded comparable results in Figure 1, and a long delay, corresponding to the 4-day delay. We want to derive predictions for the times of recognition testing, i.e. immediately after the paragraph has been read, and after forgetting has taken place. We are assuming that the effect of forgetting is a weakening of the connections between the items in memory, with the connections among surface traces decaying most rapidly, textbase connections less so, while the situation model remains intact, as in the Zimny study (Figure 2). Numerically, this means that we set surface and textbase connections to 4 and 2 for 0- and 1-step distances (instead of 5 and 3) to simulate the short-delay test. For the long-delay test, all surface connections are set to 0, and textbase connections to 3 and 1, for 0- and 1-step distances, respectively. (Note that we are in effect collapsing acquisition and retention into a single matrix here).

Then, the same calculations are performed as in Table 1. However, the resulting activation values are not directly comparable across the three delay intervals, because of the way activation vectors have been renormalized after each multiplication. By keeping the total activation always at 1, the activation vectors indicate only relative values among the items in each vector, but not absolute values across different matrices. In order to obtain absolute strength values, each activation vector must be weighted by the total sum of all entries in the corresponding coherence matrix. If there are many and numerically stronger connections in a matrix (immediately after reading), activation will reach a higher level than if there are fewer and weaker connections (after 4 days). These absolute strength values for the three delay intervals are shown for old sentences, paraphrases, inferences, and new sentences in Figure 6.

- Insert Fig. 6 about here -

Obviously, Figure 6 gives a fair qualitative account of the data in Figures 1 and 2. The differences in response strengths between old items and paraphrases disappear as delay increases, and old items, inferences and new items converge, but not completely. In order to go from the strength values shown in Figure 6 to Yes-No responses, further assumptions need to be made about how strength values are transformed into Yes-No decisions. Instead of developing here a standard TSD model for that purpose, a simple response-strength model was assumed employing a ratio rule. The probability of a "Yes" response was computed by subtracting from each strength value a delay-specific threshold value and dividing the result by the total response strength, mapping the strength values into the [0,1] interval. Thus, four parameters need to be estimated for this purpose: a threshold for a Yes response for each of the delay intervals (we know that there are pronounced changes in bias over a four-day delay), and a value for the total response strength. These four parameters were estimated by the method of least squares. The resulting fit to the data from Figure 1 is shown in Figure 7.

- Insert Fig. 7 about here -

It would be hard to improve the fit of the predictions in Figure 7 through more sophisticated methods of parameter estimation for the coherence matrices, or a more elaborate decision model. 
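A minimal sketch of the delay manipulation and response rule just described: the link strengths are the ones quoted above, while the clipping into [0,1] is an assumption about how the ratio rule is meant to bound the response probabilities.

```python
import numpy as np

# Link strengths for 0- and 1-step distances at each delay (surface, textbase),
# taken from the description above; situation-model links stay at 4, 3, 2, 1.
DELAY_STRENGTHS = {
    "immediate": {"surface": (5, 3), "textbase": (5, 3)},
    "short":     {"surface": (4, 2), "textbase": (4, 2)},
    "long":      {"surface": (0, 0), "textbase": (3, 1)},
}

def absolute_strength(activation, C):
    """Weight the (relative) activation values by the total connection strength
    in the coherence matrix, so that values are comparable across delays."""
    return activation * C.sum()

def p_yes(strength, threshold, total_response_strength):
    """Ratio rule: subtract the delay-specific threshold and divide by the
    total response strength; the clip keeps the result in [0, 1]."""
    return float(np.clip((strength - threshold) / total_response_strength, 0.0, 1.0))
```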
Clearly, the present model does very well, in that it gives a good qualitative account of the data (Table 1 and Figure 6), as well as a good quantitative fit (Figure 7). In evaluating the fit of the model it must be remembered that we have not constructed an ad hoc model for sentence recognition, but have put together this model from various existing components: a recognition mechanism from the list learning literature, ideas about the memory representation, and a model of comprehension processes from recent work on discourse processing. Neither is there anything new about the way memory representations are constructed here: phrase structure chunks, textbases, and scripts are all familiar and widely used. Even the parameters in the model are constrained, both a priori (connection strengths can decrease with delay, but not increase), and empirically (surface traces must decay rapidly, textbase traces more slowly and incompletely, and model traces not at all). A theory of sentence recognition has been constructed largely from old parts, and it appears to be empirically adequate.

Nevertheless, a more skeptical view is also possible. There are a large number of parameters in the theory, and although it is not known how many are really free to vary (nor how this relates to the degrees of freedom in the data), their precise values are certainly underconstrained. Furthermore, illustrative predictions for particular test sentences are used as a basis for predicting data averaged over many texts and sentences as well as subjects. In short, it is not entirely obvious what is responsible for the good fit that was obtained - the theoretical principles emphasized here, or the design decisions made in putting this theory together. To some extent this dilemma reflects the fact that it is hardly ever possible to evaluate a complex theory with respect to a single set of data. Fortunately, the theory makes some additional predictions that do not depend on any further parameter estimation. If the model presented here is more or less correct, then other predictions about sentence recognition follow which can be evaluated at least qualitatively without further parameter estimation.

In deriving the predictions for the Zimny (1987) data shown in Figures 6 and 7, two different inference statements were used as examples. Both were pragmatic inferences that people were likely to make in this context, but they differed in interesting ways. The first inference, "Nick wanted to see a film", is strongly related to the text at the level of the situation model: It is a common (though certainly not a necessary) prerequisite for going to the movies. On the other hand, at the textbase and surface levels, the connection is made only by a single term, "Nick". In contrast, the second inference, "Nick bought the newspaper", shares both "Nick" and "newspaper" with the original text at the surface and textbase levels, but is not directly related to the going-to-the-movies script; it is merely an addendum to "newspaper". This makes an interesting difference in the way the present model handles these statements. As was shown in Table 1, the wanting-to-see-a-film inference accrues more activation (258 units) than the buying-the-newspaper inference (212 units). However, there is a significant difference in the speed with which this accrual occurs. 
In the first case, the amount of activation attracted by the inference statement in the first cycle is low (173 units, or 73% of the eventual total), and rises rather slowly over 13 cycles to its asymptotic value. The second inference, on the other hand, gets most of its activation right away (198 units, or 93%, so it is initially the stronger one) and reaches asymptote in 9 cycles. If one wanted to venture a generalization from just these two examples, one could say that model-based inferences are weak initially but increase in strength to a high value with enough processing, while inferences that are based more on surface similarity acquire activation quickly, but do not change much with further processing. In the model, this is obviously a consequence of the fact that surface and textbase relations are very local, while the situation model network is more extended.

The way to test this hypothesis would be to collect speed-accuracy trade-off data for inference statements differing as outlined above. Alternatively, one can try to apply the model to some existing speed-accuracy data collected by Schmalhofer, Boschert, & Kühn (in preparation) that illustrate a closely related phenomenon. Schmalhofer et al. collected data from novices and experts verifying sentences from a highly technical text (an introduction to some features of the programming language LISP). They found rather striking differences in the speed-accuracy functions for these two groups of subjects, and we shall try to account for these differences by means of the hypothesis suggested above. In the Zimny data we are dealing with different types of inferences (surface- vs. model-based similarity), while Schmalhofer et al. deal with different types of subjects (experts with a good situation model and novices with an incomplete or faulty situation model). For the reasons mentioned above, the present model predicts quite different speed-accuracy trade-off functions in both of these cases.

**Experiment II.** Schmalhofer et al. (in preparation) had 40 subjects study brief texts introducing them to the programming language LISP. Half of the subjects had no programming experience, while the other half were proficient in the programming language PASCAL (but had no experience with LISP). Therefore, the subjects with programming experience presumably knew about functions in general, and when studying the LISP text, could employ this knowledge about function schemata to understand what they were reading, i.e. construct an appropriate situation model. Novices, on the other hand, were presumably unable to do so within the relatively short time they were allowed to study these texts. Nevertheless, they certainly could understand the words and phrases they read and form a coherent textbase.

- Insert Table 2 about here -

Subjects were tested on four texts. An example of a text used in the experiment is shown in Table 2, together with three types of test sentences: an old verbatim sentence, and a correct and an incorrect inference. Subjects were asked to verify whether or not the test sentences were true, and to provide confidence judgments. When a test sentence was presented, a subject made 6 responses in a sequence, at 1.5-sec intervals, whenever a signal tone was presented. The first response signal occurred 750 msec before the sentence appeared on the screen. Obviously, subjects could only guess at that time, but during the next 7.5 sec they had ample time to fully process each test sentence. 
The last response signal differed from the previous ones, and subjects could make their final response without time pressure. The percentage of "yes" responses on two consecutive responses can be used to determine the subject's change in opinion about whether the sentence is true or false. Incremental d' values were calculated to assess this change. The incremental d' value for the processing time 0 is based on the difference between an unbiased guess (50% true responses) and the subject's actual guessing. No significant differences either between groups or sentence types were observed on this initial guessing trial. The results of the analyses for the next five responses for old sentences, correct inferences, and incorrect inferences are shown in Figures 8, 9, and 10, respectively. Separate analyses of variance were performed for each sentence type. The factor response signal time was, of course, always highly significant, while the difference between the high and low knowledge groups never quite reached levels of statistical significance. More important were the interactions between these factors: as Figure 8 suggests, novices and experts performed equally on old sentences, but for inferences (Figure 9), a significant statistical interaction was obtained, $F(4,148) = 4.15$, $p<.01$.

- Insert Figs. 8, 9 & 10 about here -

For true inferences, novices are relatively confident early in processing that the sentence would be true and become more and more uncertain during later processing. Experts, on the other hand, do not jump to conclusions, but gradually accumulate evidence throughout the processing period. Both experts and novices tend to accept false inferences as true initially, but experts eventually reject them confidently, while novices remain uncertain. These findings can be readily interpreted within the construction-integration model as it has been applied here to sentence recognition data.

**On-line integration.** In previous work with the construction-integration model, the sentence was assumed to be the processing unit, purely for reasons of convenience: as long as one is not much concerned with what happens within a sentence, this is a useful simplification. However, if one is interested in how activation develops during the reading of a test sentence, the convenient fiction of the sentence as a processing unit must be abandoned. Instead, it will be assumed here that words are the processing units. As each word is read, all elements that can be constructed at this point are constructed and added to the existing net, which is then re-integrated. Thus, each sentence contains as many processing units as it has words (or, rather, word groups, the L-units in Figure 3).

In order to illustrate how this model works, we first simulate the processing of the original text. Since we are not interested in the on-line properties of this process, this is done in exactly the same way as with the Zimny data: all the appropriate L, S, P and M units are constructed and connected according to the same principles as in Figures 3 and 4. A function schema, with slots for "Name", "Use", "Input" and "Output", provides the basis for the situation model. The resulting network is then integrated, and a pattern of activation is obtained which, together with the net of interrelationships itself, characterizes the memory representation formed for the to-be-remembered text. An old, verbatim test sentence is recognized by computing the amount of activation of its elements at each input stage. 
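This word-by-word procedure can be sketched as follows. The way each new element is linked into the net (the `links` dictionaries and the `self_strength` value) and the uniform re-integration routine are illustrative assumptions patterned on the description above, not the authors' implementation:

```python
import numpy as np

def integrate(C, tol=1e-4, max_cycles=100):
    """Uniform start, repeated multiplication by C, renormalization to sum 1."""
    a = np.full(C.shape[0], 1.0 / C.shape[0])
    for _ in range(max_cycles):
        new = C @ a
        new /= new.sum()
        if np.abs(new - a).mean() < tol:
            break
        a = new
    return a

def online_recognition(C, input_stages, self_strength=5.0):
    """Process a test sentence one word group at a time: add the elements that
    can be constructed at each stage, re-integrate the whole net, and record
    the activation attracted by the test sentence's elements so far.
    Each stage is a list of dicts mapping existing-element index -> link strength."""
    test_elements, trace = [], []
    for stage in input_stages:
        for links in stage:
            n = C.shape[0]
            C = np.pad(C, ((0, 1), (0, 1)))      # new row/column for the new element
            C[n, n] = self_strength
            for i, s in links.items():
                C[i, n] = C[n, i] = s
            test_elements.append(n)
        a = integrate(C)
        trace.append(a[test_elements].sum())     # activation after this input stage
    return trace
```

The per-stage trace plays the role of the total-activation-by-input-stage predictions discussed below.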
Thus, the test sentence "The function FIRST is used to extract the first S-term" is processed in seven input stages, as shown in Figure 11. First, "The function" is processed, yielding the elements L2, P2, and M2. The second input unit comprises "FIRST", that is, the elements L3, S1, P3, and M3. The remaining input units are also shown in Figure 11.

- Insert Figs. 11 & 12 about here -

Figure 12 illustrates how the model works for the inference statement "A single S-term is produced by the function FIRST". Only one element is constructed in the first processing unit: the unit L20 "a-single" (the numbering takes account of what was already constructed in the processing of the original text). More happens next: "S-term" corresponds to L12, P9 and M9 of the original text. Furthermore, at this point the new S-element S18 is constructed, as well as the proposition P21, (SINGLE, S-TERM). Note that no new model element is constructed corresponding to P21, for there is no way to know where in the function-schema such an element should be placed. In the third input unit, not only the new surface element L21 is generated, but also the sentence unit S22 and the corresponding proposition P22 (PRODUCE, $, (SINGLE, S-TERM)). Both of these are at this point incomplete: we don't know as yet what produces (SINGLE, S-TERM) (the $-sign is used as a placeholder in the proposition), and we do not know all of the constituents of S20. S- and P-units are constructed as soon as possible, before all of the relevant information is available. This assumption in the present model is supported by results in the psycholinguistic literature, where it has been shown repeatedly that people assign words and phrases to plausible syntactic structures on-line, and do not wait until a complete analysis becomes possible (e.g. Frazier & Rayner, 1982).

The immediate processing strategy at the linguistic and textbase levels contrasts with a wait-and-see strategy at the situation model level. In the former case, there are powerful heuristics available that make immediate processing feasible - e.g. the Minimal Attachment strategy of Frazier & Rayner (1982), or the Referential Coherence strategy for forming a coherent textbase (Kintsch & van Dijk, 1978). The results may not be optimal (e.g., causal links are more useful in stories than mere referential links), or they may have to be revised eventually (as in garden path sentences), but they yield useful approximations for on-line processing that can later be modified if necessary. Immediate processing is also used when situation model elements are encountered in a test sentence that are already available in the original memory representation of the text. In that case, it is assumed that they retain their original position in the situation model. (As with all heuristics, this will sometimes be wrong, e.g. in the case of false test sentences). Newly formed elements of the situation model, on the other hand, cannot be assigned on-line to a slot in the schema: where an element fits into a schema, or whether it doesn’t fit at all or contradicts it, can usually be determined only after the whole sentence has been processed. Thus, the processing of new situation model elements is delayed until the sentence wrap-up. In Figure 12, the elements M21 and M22, (PRODUCE, M2, (SINGLE,M12)), are therefore constructed in Input Stage 6 and assigned to the "Output" slot of the Function schema.

**Fit of the model to the data.** How well can this model account for the Schmalhofer et al. (in preparation) data? 
There are three striking features of these data: the fact that for old verbatim test sentences, the speed-accuracy trade-off functions are essentially the same for naive and expert subjects; the fact that experts have slowly rising, high-asymptote functions for correct inferences, while novices are characterized by fast-rising, low-asymptote functions; and the fact that experts eventually come to reject false inferences more strongly than novices. The model implies all three of these observations.

At this point, there are two ways to proceed. We could try to explore appropriate link values for the coherence matrix, estimate thresholds, and so on, as was done for the Zimny data, and attempt to fit the speed-accuracy data quantitatively. On the other hand, if we are satisfied with a qualitative fit only, computations could be based on the same parameters that were used in the Zimny data. This approach has some advantages in that it avoids the possibility that good fits are obtained merely because we happened to select just the right parameter combinations. There are no reasons at all why the same parameters should fit both sets of data, and good reasons why they should not (different subject groups, vastly different texts, different task demands - for superficial processing of many simple texts in one case and careful processing of much less material in the other). Nevertheless, if the model really has something to say about sentence recognition independent of the numerical values of the parameters in the Zimny simulation, one might expect that the qualitative pattern of the predictions would correspond to the main features of the new set of data. We have therefore chosen the second way to proceed.

- Insert Figs. 13, 14, & 15 about here -

The difference between novice and expert subjects in the present model is that the former have only a fragmentary, partly correct situation model. Since we are only interested in qualitative predictions, the more radical assumption was made that novices have no situation model at all. Specifically, the speed-accuracy functions were simulated with the same parameter values that were used for the Immediate Group above, except that all link strengths are 0 in the situation model of the novices. The results are shown in Figures 13, 14, and 15 for old sentences, and correct and incorrect inferences. These calculations are based on the old sentence analyzed in Figure 11 and the inference analyzed in Figure 12. For the false inference, the following sentence was analyzed: "The argument of the function must consist of five LISP atoms". The calculations were the same as for a correct inference, except that in the last input cycle the activation of all M-elements was subtracted from rather than added to the total sentence activation, to reflect the fact that contradictory sentences provide counter evidence at the model level, while whatever surface and textbase similarities there are still continue to support a positive response.

Schmalhofer's speed-accuracy functions (Figures 8-10) plot d' as a measure of response strength against time. The model predictions are in terms of total activation against input stage.\textsuperscript{5} Figure 13 captures the relevant features of Figure 8: old, verbatim test items increase rapidly in strength and to a high level, the same for experts and novices. 
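The two manipulations just described, zeroing the novices' situation-model links and treating M-element activation as counter-evidence for a contradictory sentence, might be sketched as follows. Whether cross-level links into the situation model are also removed for novices is left open in the text, so the sketch zeroes only the links among M-elements:

```python
import numpy as np

def novice_matrix(C, model_indices):
    """Simulate a novice: set all link strengths within the situation model to 0
    (the radical simplifying assumption used for the qualitative predictions)."""
    C = C.copy()
    C[np.ix_(model_indices, model_indices)] = 0.0
    return C

def sentence_activation(a, surface_textbase_indices, model_indices, contradiction=False):
    """Total activation of a test sentence's elements; for a false inference the
    activation of its M-elements is subtracted rather than added, since it
    counts as counter-evidence at the situation-model level."""
    base = a[surface_textbase_indices].sum()
    model = a[model_indices].sum()
    return float(base - model if contradiction else base + model)
```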
Inferences, on the other hand, rise faster for novices, but to a lower level, while the inference function for the experts rises more slowly initially but to a higher level (Figure 14). This pattern of results thus looks a lot like what was suggested for surface-based and model-based inferences in the Zimny data. Finally, Figure 15 exhibits the stronger rejection of false inferences (contradictions) by the experts than by the novices. Obviously, Figure 15 is only a caricature of the corresponding data in Figure 10: real novices do not have a zero situation model, as was assumed for the model calculations, only a weak one.

7. Conclusion

A model of sentence recognition from discourse has been developed and tested here which builds upon previous work on item recognition and discourse comprehension. The recognition mechanism used in this model has been derived from previous models of recognition developed to account for list learning data. Two elaborations from the domain of discourse comprehension were needed to enable this recognition mechanism to deal with sentences from a coherent discourse, rather than with list items. First, sentences must be represented in memory at several levels of representation, each of which can contribute to a recognition judgment. Second, the very processes of comprehension as formulated in the construction-integration model of Kintsch (1988) were shown also to be involved in judging whether a sentence had been experienced before as part of a discourse. Thus, familiar theoretical notions could be combined to provide an explanation for sentence recognition.

This explanation fared quite well when tested against the results of empirical investigations of sentence recognition. In Experiment I, a good quantitative account of recognition for old sentences, paraphrases, and inferences was obtained for delays ranging from immediate tests to four days. However, due to the complexity of the comprehension model, a large number of parameters had to be estimated to match these data. Hence, we changed our strategy in Experiment II from one of fitting empirical results quantitatively to one of testing qualitative implications of the model which did not involve further parameter estimation. The data in question concerned the time course of sentence recognition. It was shown that the model predicted major qualitative features of speed-accuracy trade-off functions, without estimating new parameters. Thus, the model has been tested successfully against two large, complex sets of sentence recognition data.

"Old sentences" and similar terms are abstractions. It would be quite possible for a particular, insignificant old statement to receive less activation than a particular, highly salient inference (just as the script-header inference in Figure 5 is more highly activated than most actual text elements). To obtain useful data in recognition experiments, items must be carefully controlled, e.g. words must be comparable in terms of such factors as length, frequency, or imagery value. The data are usually averages over many items in a class. The model makes predictions for particular discourses, and particular test items. We select typical items and work out the model predictions for these, but then match these predictions against averaged data. In other words, we are postulating an "ideal" text, just as theories commonly postulate an "ideal" subject. 
In principle it would be possible, though the amount of labor would be almost prohibitive, to calculate predictions for each text and test sentence used in the experiment, and then test averaged predictions against averaged data. While this would lead to greater quantitative precision, it would provide us with relatively little further insight. The model of sentence recognition developed here is quite general and can be applied to many different texts and test sentences, with one serious restriction: in order to apply the model, one needs to know what the situation model would look like for the text and the subjects in question. Linguistic analyses as well as propositional textbases (the latter if necessary based on the default rule of argument overlap, as in the present case) can be constructed for any kind of text, but situation models are much less well understood. In particular, it is not clear how non-propositional situation models (e.g. mental maps, as in Perrig & Kintsch, 1985) could be integrated into the present framework.

Earlier models of sentence recognition share some characteristics with the model proposed here, but differ in other respects. Two such models are the schema-pointer-plus-tag model of Graesser (1981) and the plausibility judgment model of Reder (1982). Both models, in common with an earlier generation of recognition models, conceptualize recognition as a match between the memory representation of an item and the item presented at test, thus violating a basic feature of current recognition models as discussed in Section 1. Furthermore, they are much less specific than the computational model presented here. In other respects, however, there are some commonalities between these models and the present approach. Graesser distinguishes two stages of sentence recognition, one corresponding to the question "Is the item in the memory trace?", and the other to "Must the item have been in the passage?" (Graesser, 1981, p. 92). Reder similarly distinguishes between a plausibility judgment and a direct retrieval (Reder, 1982). Clearly, there are some parallels here between matches based on the surface and textbase representation on the one hand and matches based on the situation model on the other. One could, in fact, claim that what has been done here is to provide an explanation and computational mechanism for the phrase "plausibility judgment". Significant differences should not be overlooked, however. Reder, for instance, emphasizes the stage character of the process with plausibility judgments normally coming first, preempting direct matches. In the present model, matches at all three levels of the representation occur in parallel, with the contribution of the situational match necessarily coming in rather late in the processing of a sentence, as the analyses of the speed-accuracy trade-off data in Section 7 show quite clearly.

One does not need a separate model for sentence recognition. If we put together what we know about the item recognition process per se with the construction-integration model of discourse comprehension, we have a ready-made explanation for many of the phenomena of sentence recognition. Thus, the construction-integration model comes one step closer toward becoming a general theory of discourse comprehension and memory.

References

Schmalhofer, Boschert, & Kühn (in preparation). Text- and situation-based learning.

Footnotes

This research was supported by Grant No. 15872 from the National Institute of Mental Health to Walter Kintsch. 
1 The authors of the models discussed here are concerned with general models of human memory. The formal similarity noted above does not hold outside the domain of item recognition.
2 The task-dependent nature of these results should be emphasized: long-term memory for surface features is frequently observed in other contexts, as is forgetting of situational information. Forgetting rates are clearly material- and task-dependent (for a review, see Kintsch & Ericsson, in press).
3 The reason we do not just have an element "1" instead of L1, P1, and M1, adding the three types of relationships together, is that on recognition tests we are usually dealing with only one of these elements, but not the others.
4 Schmalhofer (1986) has found the same pattern of responses for verification as for sentence recognition.
5 Very similar predictions are obtained if the length of each input unit is made proportional to the number of cycles needed for the integration process to settle.

Table 1: Test sentences and their familiarity values. (The activation values displayed have been multiplied by 10,000)

OLD: "He looked at the newspaper"
L10 182, L5 175, L6 99, S4 107, S5 241, P1 534, P5 517, P6 216, M1 456, M5 583, M6 365
Total <3475>

PARAPHRASE: "Nick studied the newspaper"
L1 186, studied 3, L6 102, S 25, S 38, P1 530, P5 514, P6 216, M1 454, M5 582, M6 365
Total <3016>

INFERENCES:
"Nick wanted to see a film"
L1 114, wanted 1, to-see 0, a-film 0, S 1, S 3, S 20, P1 394, [WANT,P1,P] 79, [SEE,P1,P] 81, [FILM] 14, M1 565, [WANT,M1,M] 519, [SEE,M1,M] 302, [FILM] 490
Total <2583>

"Nick bought the newspaper"
L1 172, bought 105, L6 5, S 27, S 37, P1 521, [BUY,P1,P6] 244, P6 143, M1 443, [BUY,M1,M6] 398, M6 194
Total <2122>

NEW: "Nick went swimming"
L1 187, went 1, swimming 1, S 7, S 36, P1 578, [GO,P1,P] 130, [SWIM,P1] 130, M1 448
Total <1509>

Table 2: A paragraph from the text used in Experiment II and sample test sentences.

Original Text: The function FIRST is used to extract the first S-term from a combined S-term. The function FIRST has exactly one argument. The argument of the function must be a combined S-term. The value of the function is the first S-term of the argument.

Test Sentences:
Old: The function FIRST is used to extract the first S-term.
Correct Inference: A single S-term is produced by the function FIRST.
Incorrect Inference: The argument of the function must consist of five LISP atoms.

List of Figures

Figure 1. Percent "yes" responses to old sentences, paraphrases, inferences, and new sentences as a function of delay. After Zimny (1987).
Figure 2. Estimated strengths of the surface, textbase, and model traces. After Zimny (1987).
Figure 3. Surface, textbase, and situation model elements of the to-be-remembered text.
Figure 4. The coherence nets formed by the textbase and the situation model.
Figure 5. Final activation values (multiplied by 10,000) of the language units (L1 to L10), the surface chunks (S1 to S8), the propositions (P1 to P9), and the model elements (M1 to M10).
Figure 6. Activation values for the old test sentence, paraphrase, inference, and new test sentence as a function of delay.
Figure 7. Observed (-----) and predicted (------) percent "yes" responses as a function of sentence type and delay.
Figure 8. Judged correctness of old, verbatim test sentences as a function of processing time for high- (filled squares) and low-knowledge (open squares) subjects. After Schmalhofer et al. (in prep.)
Figure 9. Judged correctness of correct inferences as a function of processing time for high- (filled squares) and low-knowledge (open squares) subjects. After Schmalhofer et al. (in prep.)
Figure 10. Judged correctness of false inferences as a function of processing time for high- (filled squares) and low-knowledge (open squares) subjects. After Schmalhofer et al. (in prep.)
Figure 11. An old, verbatim test sentence, processed sequentially in seven input stages.
Figure 12. A correct inference, processed in six input stages.
Figure 13. Activation of an old test sentence as a function of processing time for high- (filled squares) and low-knowledge subjects (open squares).
Figure 14. Activation of a correct inference as a function of processing time for high- (filled squares) and low-knowledge subjects (open squares).
Figure 15. Activation of a false inference as a function of processing time for high- (filled squares) and low-knowledge subjects (open squares).

[Figure 3 content: the text "Nick decided to go to the movies. He looked at the newspaper to see what was playing." with its word groups, propositions (P1-P9), and situation model elements (M1-M9).]
[Figure 4 content: the textbase structure and the GO-TO-MOVIES situation model, with header M10, Props: M4, Agent: M1, and Preparation: M2-M3, M5-M6, M7-M8-M9.]
[Figure 11 content: the old test sentence "The function FIRST is used to extract the first S-term", processed in input stages I-VII with its L, S, P, and M elements.]
[Figure 12 content: the inference "A single S-term is produced by the function FIRST".]
[REMOVED]
Situating Early America’s Identities in the Atlantic World

Benjamin E. Park
University of Cambridge

‘We are all Atlanticists now’, David Armitage declared—likely tongue-in-cheek—in 2002.¹ If such a statement seemed a bit exaggerated at the time, it seems only commonplace a decade later. The amount of scholarship that has appeared in the last few years that utilises a theoretical model of Atlantic history and establishes Atlanticism as its grounding framework demonstrates the fruition of the so-called ‘Atlantic turn’ in US historiography. What was once a trendy—if still peripheral—topic in the academy is now commonplace, and drives the American publishing and job market to an extent that likely surprises even those who pleaded for the methodological adjustment.² This is especially the case for early American history, particularly during the period that immediately follows the American Revolution, commonly referred to as the early republic, and the political cultures and nationalisms produced therein. W. M. Verhoeven’s proclamation in 2002 that ‘the many revolutions that produced the national ideologies, identities, and ideas of state of present-day American and Europe’ were shaped by a ‘trialogue (between France and Britain and America)’—a statement designed to drive a radically new methodological model—now seems pedestrian, if not an understatement.³

¹ Benjamin E. Park (benjamin.e.park@gmail.com) is a postdoctoral fellow at the University of Missouri. He holds graduate degrees from the Universities of Edinburgh and Cambridge.

The previous few years have been an especially fruitful period for this approach. In particular, a specific subtheme of this approach has drawn increased attention: the Atlantic world’s relationship to the formation of American identities. Book after book has appeared that claims a new perspective on how Americans came to understand their national image, and often emphasises the role foreign influences played. British pundits, French Jacobins, Haitian slaves, Irish rebels, German revolutionaries, and even oriental imports, these books tell us, had a large influence on how citizens of the newly United States understood their own national character. Indeed, this reasoning implies, the American ‘image’ was created in direct opposition to these distinctive and foreign ‘others’. The cover of Sam Haynes’ book, which explores early American Anglophobia, reproduces a mid-nineteenth-century political cartoon that could likewise serve as the standard image for this common narrative: a short, hooligan Jonathan (representing America) positions himself in a dramatic defensive pose meant to compensate for his incommensurate appearance in relation to the towering figure of foreign culture—in this case, an overly smug, pompous, portly, and rosy-cheeked John Bull (representing Britain). In the formation of American-ness, the ‘other’ played as distinct a role as the ‘self’.¹

With this emphatic push for a transnational perspective on early American identities, it is useful to take a step back and examine the state of the field and ask what the Atlantic framework offers the study of early American nationalism. This paper seeks to do three things in assessing this methodological movement. First, it engages the theoretical underpinnings upon which most of the recent literature operates. Much of this work, in tracing the development of an American ‘identity’ within the context of an Atlantic world, must first consider what an ‘identity’ entails. 
Was it the same thing as the production and promotion of nationalism? Historical work on ‘nationalism’ and ‘identity’ from the last two decades has drawn from Benedict Anderson’s ‘imagined community’ thesis, yet elements of that framework have crumbled as a result of several recent historiographical trends.5 Where does American nationalism studies stand today, and how might an Atlantic framework both build upon and deconstruct those developed premises?

Second, this paper engages these studies by focusing particularly on a number of works since 2007 that focus on identity construction within the United States during the early republic period. By comparing these books’ Atlantic frameworks and postulating what types of benefits and pitfalls such frameworks represent, I hope to demonstrate broader historiographical trends of cultural transmission, intellectual genealogy, patriotic cosmopolitanism, and identity formation. As the broad umbrella of Atlantic history encompasses a wide range of methodologies and approaches, I will give an overview of the disciplinary fields upon which these recent books are built, and then organise them into two broad categories: first, those that attempt to trace physical bodies and tangible materials across the ocean, and second, those that attempt to trace foreign ideas—or, at least, perceptions of foreign ideas.

Finally, the third goal of the paper is to offer general observations on the task of viewing early United States history and culture within the framework of Atlantic studies, as well as contemplate some potential avenues for future study. Specifically, I will focus on the pitfalls of cultural translation across the ocean and between regions, and how the many (mis)appropriations on behalf of the Americans demonstrate the necessity of keeping a deep understanding of the American context even when viewed within an Atlantic scope. To what extent does the constant nagging of American exceptionalism during the early republic necessitate the limits of broader perspective? In short, is it more crucial to place America’s identities within an Atlantic world or to incorporate elements of the Atlantic world within America’s identities?

Deconstructing ‘Nationalism’

Before engaging the recent books, it is important to trace the theoretical and historiographical models upon which they are patterned. The study of ‘nationalism’ and ‘identity’ has a long history itself. Benedict Anderson’s highly influential *Imagined Communities* argued that the advent of print culture in the mid-eighteenth century introduced ‘unified fields of exchange and communication below Latin and above the spoken vernaculars’, a development that laid the foundations for modern conceptions of nationalism. ‘The convergence of capitalism and print technology’, he wrote, ‘created the possibility of a new form of imagined community, which in its basic morphology set the stage for the modern nation’. The American Revolution was the first movement to take advantage of this development, serving, as Anderson put it, as a ‘Creole pioneer’ for the rest of modernity to follow.6 This connection of print culture and nationalism, what Anthony Smith has termed ‘classical modernism’, has become the standard framework for understanding the rise of nationalist sentiments in the western hemisphere.7 Yet this general thesis has been challenged of late, on several fronts. 
While many recent theories share Anderson’s presupposition concerning the importance of print, there is increasing doubt that such interconnectivity can produce broad consensus. Instead of understanding nationalism as a ‘result’, an interconnected nation sharing a general framework of values and ideals, many now argue that nationalism should be seen as a ‘process’.\textsuperscript{8} For instance, Prasenjit Duara has written that ‘to see the nation as a collective subject of modernity obscures the nature of national identity’. Instead, it is more fruitful to ‘view national identity as founded upon fluid relationships; it thus both resembles and is interchangeable with other political identities’. Any conception of ‘nationalism’, Duara continued, is ‘rarely the nationalism of the nation, but rather represents the site where very different views of the nation contest and negotiate with each other’. It is only through the comparison of localised ‘national identities’ and the broader ‘nation-state’ that we can distinguish the uniqueness and significance of various nationalisms.\textsuperscript{9} Similarly, Rogers Brubaker has argued that ‘we should refrain from only seeing nations’ as ‘substantial, enduring collectivities’, but instead ‘think about nationalism without nations’ in order to see ‘nation as a category of practice, nationhood as an institutionalised cultural and political form, and nationness as a contingent event or happening’.\textsuperscript{10} Nationalism, then, is a form of ‘practice’, not a result. Such a revised framework forces historians to examine individual and local particulars on their own terms rather than as examples of a universal whole.\textsuperscript{11}

A second challenge to Anderson’s thesis attacks the standard belief that nationalism itself originates with print culture, an association that has dominated the study of print culture. Karl Deutsch’s foundational \textit{Nationalism and Social Communication} presupposed that the first step toward a national identity was a public utterance by elite figures whose words then trickled down to mass culture through print and messengers.\textsuperscript{12} More recently, Miroslav Hroch further elaborated on this process by presenting three ‘phases’ of nationalism, the first of which involved ‘initial agitation’ on the part of a few elite figures in hopes of correcting and transforming the larger culture. Yet these perspectives, rooted in the ‘classical modernism’ that presupposes a radical break from previous cultures, have been charged with overlooking the continuity throughout nationalist formations. Growing interest in a scholarly approach labelled as ‘everyday nationalism’ has called for a closer analysis of how cultural sentiment predates print culture. In the most systematic defence of the approach, Jon Fox and Cynthia Miller-Idriss have argued that ‘to make the nation is to make people national’, which implied more focus on on-the-ground nationalist practice. Nationalism is produced through ordinary actions and milieus, in at least four central ways: ‘talking the nation’ (the discourse citizens invoke), ‘choosing the nation’ (individual choices and decisions), ‘performing the nation’ (arts, literature, and performance), and ‘consuming the nation’ (material and consumer goods). This broadens the base for what is to be considered in national formation, both in concepts and in possible subjects: it shifts the emphasis from elite print producers to average citizens from all walks of life. 
The final challenge to Anderson’s thesis is found in the growing literature of postcolonial theory. While most work in postcolonialism has focused on areas like the Middle East, Africa, and Asia—colonies of exploitation, occupation, or domination, and therefore has received little attention from American scholars—the recently emerging literature on ‘settler societies’ is significantly relevant. Defined as ‘societies in which Europeans have settled, where their descendants have remained politically dominant over indigenous peoples, and where a heterogeneous society has developed’, settler societies have been characterised by several key arguments: the continued dominance of institutions and societies of European inheritance, the perpetuation of cultural and social forms, the tensions implicit in those who were once colonised now becoming colonisers themselves, and the importance of provincial polities and identities.\textsuperscript{15} Rather than an abrupt break with past colonial conditions, postcolonial theory emphasises resilience in cultural, social, and political structures, often maintaining the power and privilege bequeathed from their colonising ancestors.\textsuperscript{16} This often means acknowledging a fractured response within new nations, as various communities are left to interpret, absorb, and perpetuate nationalist tensions according to experience and lived realities.\textsuperscript{17}

These new approaches to nationalist and identity theory have slowly crept into American historiography. Until the last few decades, historians viewed the concept of an ‘American’ character as an objective fact that could be identified and traced; as one historian wrote, American nationalism was ‘an independent variable’ detached from historical contexts.\textsuperscript{18} Yet recent scholarship has done much to collapse this myth. While emphasising the divisive, fragmented, and disputed environment of America’s society, the cultural turn in US history similarly revised the concept of a national identity into a nebulous and contested principle that was always on the move. Americans, this new scholarship argued, were divided by region, class, gender, and race, and conceptualisations of ‘America’ depended on who was being questioned. Thus, studies of nationalism took a much broader and more inclusive approach. David Waldstreicher examined national celebratory rituals and argued that understanding nationalism ‘in terms of its practices as well as its ideas’ helps better to comprehend ‘the everyday interplay of rhetoric, ritual, and political action that permitted the abstractions of nationalist ideology to make real, effective, practical sense’.\textsuperscript{19} This type of approach better encompassed how nationalism was lived and experienced by more than just the founding fathers or public figures. Works on American nationalism came to emphasise components that had previously been on the periphery of the field and included violence, geography, speech, and gender. Literary scholars have latched on to the potential of postcolonial theory when examining early American texts, as the dynamic tension between isolation and dependence proves fertile ground for the study of cultural anxiety. Historians have recently ventured similar work, though it has primarily remained within the political sphere. 
And finally, materialist examinations of print dissemination in the early republic have blasted the myth of a nation connected through published letters, thus further crumbling the idea of a cohesive national identity. Yet the role of an interconnected Atlantic world in the formation of these fractured American identities remained, until recently, understudied. The ‘cultural turn’, and its concomitant focus on the lived realities of societal transformations, constructed a framework that emphasised the local over the national—let alone the international. Thus, the ‘Atlantic turn’, when applied to theories of nationalism, was forced to confront a serious problem: how does the historian maintain the lessons of cultural diversity while broadening geographic scope? Attempts to address this problem have been both broad and dynamic, as frameworks seek to apply the depth of cultural practices to the breadth of an oceanic context.

Transatlantic Materials

For there to be a transatlantic perspective on early America, there must first be transatlantic interaction. These interactions can come through several different forms, including workers migrating across the hemispheres, statesmen moving throughout the countries, pioneers fleeing persecution, or slave bodies being transported against their will. These confrontations are the most concrete type of connection, for they can be more easily identified, examined, and quantified. The physical presence of an exotic individual or transportation to a foreign climate forces retrospection and differentiation, as it is more tangible and threatening than a caricatured ‘other’ displayed and imagined through print. A historical approach that engages these interactions emphasises the mobile nature of the eighteenth-century Atlantic world, where not only ideas but actual persons were transported from port to port, and large masses of people uprooted themselves and their families in ways that were previously inconceivable. While the causes of these movements were widely diverse—flights from disease-ravaged homelands, quests for newfound wealth, searches for religious freedom, business travels in an increasingly interconnected economy, or forced relocation through the slave trade—the results were the same: scores of people encountered populations from foreign lands, which forced them to determine what, if anything, made them different.

Nowhere was foreign interaction more tangible than among American citizens who lived abroad and both witnessed and took part in foreign cultures and movements. Philipp Ziesche’s study of ‘cosmopolitan patriots’ explored the experience of Americans living in Paris during the beginning, duration, and aftermath of the French Revolution. From this vantage point, Ziesche argued, one is able to gain a better understanding of how cosmopolitan ideologies—which he assumed were held by all Americans living abroad—merged with notions of particularism and locality, and thus helps us better understand America’s strenuous nationalist relationship with France. ‘Historians have often told the story of the United States and France in the late eighteenth century as one of inevitable disenchantment, in which exclusionary yet realistic nationalisms supplanted a well-meaning yet utopian cosmopolitanism’, he wrote. ‘But looking at the age of revolution from the vantage point of Americans in Paris suggests that nation-building and universalism were complementary rather than competing forces during this period’. 
Ziesche’s fascinating text focused on how these American expatriates balanced their cosmopolitan zeal with what they came to see as local concerns and circumstances. Such a framework forces the historian to rethink what these foreign encounters meant to those who experienced them. The characters that fill Ziesche’s narrative include many notable figures—Thomas Jefferson, Gouverneur Morris, Thomas Paine, James Monroe, and Joel Barlow—yet they are now seen as strangers in a foreign land forced to understand their surroundings, rather than founders in the homeland they helped establish. This revised framework shows how they came to understand and conceptualise their native country in new ways. These individuals, like many other Americans at the beginning of the French Revolution, came to the audacious conclusion that ‘political doctrines could spread across frontiers; nations existed independent of states, and peoples independent of their rulers’. Part of being an ‘American patriot’, this reasoning continued, ‘meant being a cosmopolitan, as the promise of America was realized abroad as much as at home’.

This cosmopolitan zeal, however, was soon tempered by parochial concerns. Even Jefferson, the foremost Francophile, came to see the particular circumstances of French culture as necessitating a different form of government. Like the more conservative Morris, whom Ziesche posited as a helpful counter-example, Jefferson realised that French citizens were, in a very important way, unlike those who lived in America. Though this did not temper his love of France and continued support for the country’s republican transformation, it does, as Ziesche rightly noted, highlight how ‘local differences in the age of revolution’ still played a major role in how even elite members of society viewed and understood a perceivably universal democratic movement. This makes America’s later turn to exceptionalism much more understandable: if Jefferson, the proud cosmopolitan, couldn’t equate American culture with the French, the average citizen could not fare much better.

This type of narrative, where Americans are found abroad, is superb in reconceptualising the theoretical boundaries of nationalism, and more fully explains the specific people and contexts involved. But it is still limited in what it can tell us about American society at large during the period, let alone the cultural differences bred from divergent backgrounds. Ziesche’s narrative admittedly focused on a small number of elite expatriates, none of whom can be seen as strikingly representative of the larger American culture; it is more an examination of the theoretical construction of elite cosmopolitanism than of American identity. In fact, as Ziesche rightly notes, as the 1790s progressed the distance between Americans in France and Americans at home only increased. By the end of the decade, most American citizens came to see those expatriates in France as unreliable, sullied by their French connection, and in possession of many of the same traits prevalent in the now-decrepit Europe: disloyal, selfish, libertine, and greedy. Even from the vantage point of those in France, America’s turn to exceptionalism in the wake of the French Revolution is still the traditional narrative found in the historiography of Revolutionary America, for the revision cannot encompass broad swaths of people. 
But citizens of the American republic did not have to travel across the Ocean to encounter the Atlantic world; rather, starting in the 1790s, an influx of refugees from Saint Domingue brought both the people and the ideas of a foreign revolution directly to their doorstep. Ashli White’s study of Haitians in the early republic is only one of a handful of studies that examine the presence of Haitian refugees during the period of Haiti’s revolution. (And, in general, it breaks the common Atlantic narrative that focuses on British encounters.) By embodying a physical representation of both national and racial otherness, these refugees forced Americans to reconsider what made their own revolution both influential to and unique from other rebellions. Most especially, the presence of their foreign, black, and free bodies brought issues of race to the forefront of identity and, in the end, confirmed Americans’ self-conception of their nation as a white republic.28 White successfully examined the influence of both white and black refugees from Haiti, and she devoted half of her text to each vantage point. While the Haitian Revolution brought a myriad of issues to the American public, discussions were ‘usually limited to the politics of the white population’. With large numbers of white slaveholders being evicted from the island and settled in America, citizens heard stories of ‘revolt’ and ‘chaos’ that forced them to sympathise with the ousted slave-owners rather than the newly freed slaves. The revolution was depicted as a ruthless ‘rebellion’ based on, ironically enough, ‘aristocratic’ inclinations that were associated with Britain’s monarchy and France’s old regime. Rather than questioning the racist conceptions that had been at the root of Western societies, most American commentators on Haiti ‘insisted that the slaves were the pawns of white and coloured colonists who marshalled, for their own political and military ends, the raw and unthinking manpower of the enslaved’. Such a depiction enabled ideas of white supremacy to remain and further denied blacks their own political motivations. Perhaps White’s greatest achievement came in her ability not only to negotiate the fears Americans held when confronted with the Haitian slave revolt—a theme that has dominated the literature of Haiti/American relations—but also to explore how American slaveholders sought to defuse potential dangers that came with the revolution. This she accomplished through an extended examination of the metaphor most Americans associated with the Haitian Revolution: a contagion. ‘Slaveowners were [both] reactive and proactive during the Haitian Revolution’, White explained, because they, like the many doctors seeking to discover and mitigate epidemic diseases during the same period, sought to find solutions to future problems. Though a small number used the events to press abolitionist agendas, and made explicit the relationship between the American and Haitian Revolutions and the universal ‘rights of men’, a majority came to justify America’s continued practice of slavery by congratulating their own ‘humane’ treatment of slaves in comparison to the barbarity found in Haiti.
Moral codes for slaveholders were reemphasised to lessen the chance of revolts—John Adams considered implementing formal trade regulations with Toussaint Louverture in hopes of preventing slave uprisings—and other measures were taken in a defensive posture to better prepare the nation for the possible ‘disease’ Haiti carried; thus, progress was made, but the basic foundations of a racial hierarchy and white superiority remained unquestioned. Only amongst America’s black population did the revolution bring actual change, for it ignited ideas of freedom and emancipation—even if their white masters continually stifled those very ideas. So while works like those by both White and Ziesche demonstrate the extent of foreign, Atlantic ‘encounters’ during the age of revolutions, it remains to be seen how much these external sources legitimately influenced the broader American culture or transformed their imagined—let alone lived—communities. In both cases it appears that the Americans in France and the Haitians in America merely confirmed assumptions that were already present and deepened the American sense of uniqueness. Those people who were actually influenced—the American elites living in Paris and the slave population in the American republic—remained on the peripheries of American society, and were further portrayed as the anti-types of the national ideal. Once again, Americans primarily encountered their own prejudices and presuppositions. Yet recent research has emphasised that human beings were not the only things crossing the ocean. Along with people came material products, as Americans took part in a vibrant international trade market that transported physical merchandise across the continents. A close study of material culture, then, promises to highlight the narrowing distance between America and the rest of the world. Kariann Yokota, in her illuminating study of America ‘unbecoming British’, argued that the ‘importation of material culture, ideas, and experts from the mother country was an integral part of a provincial people’s attempt to construct a “civilised” nation on the periphery of the transatlantic world’. The importation, dissemination, and appropriation of these foreign goods reveal much about how Americans understood themselves, their nation, and their world, for they display cultural taste disseminated throughout a broad society.31 Material culture has received growing interest in early American studies and offers much for both Atlantic history and identity construction.32 Some have rightfully argued that it was an obsession over the Atlantic consumer marketplace that led to the Revolution itself.33 As Yokota noted, this production and consumption of objects were ‘expressed by the social relationships these economic and intellectual exchanges fostered’, and ‘embodied culturally’ the tensions and issues swimming across the Atlantic.34 It was through the exchange of these goods that most Americans interacted with a transatlantic marketplace within the confines of their own homes. But at the centre of this material culture was a deep anxiety: by relying on foreign markets, Americans neglected their own national production. When they entered international trade, Americans were constantly disappointed in the lack of interest from other nations; and in seeking respectability from other countries, especially Britain, Americans were implicitly admitting their denigrated status.
The recent emphasis on material culture promises great and important insight into these tensions of identity formation. But while material culture offers a crucial lens into how citizens exchanged and expressed their sense of nation and culture, it would be a mistake to assume these goods held the same meaning in different settings. Though broader lessons remain possible, historians must be careful not to assume that the same experiences were shared throughout the early republic. In New England, for instance, British products remained in vogue longer than in the central colonies, and communities separated from port towns were unable to take part in these same types of transnational exchanges.35 This does not imply that circum-Atlantic history has limited relevance for early America—indeed, tracing the physical and literal exchange of peoples and goods across the ocean is the most direct and concrete way to examine how America related to the broader Atlantic—yet most Americans did not experience these direct connections, and thus a broader and more eclectic approach to American interactions with the foreign is necessary. **Transatlantic Ideas** Even if a majority of citizens during the early republic period never directly interacted with foreign emissaries, they indirectly encountered them through a vibrant print culture. Several recent books expertly focused on the oblique, slippery role that transnational tensions played during the era, as Americans did not so much encounter as perceive foreign culture in their quest for cultural autonomy. These fascinating and important works, however, demonstrate both the potential and limits of this ‘perception’ approach to early US history. How tangible do these connections have to be to justify the larger chronological scope? How accurate were the ‘perceptions’ of foreign influence that animated early Americans—and at what point are they truly ‘foreign’? How strong were these Atlantic ‘connections’ when the resulting ideas hardly resembled the original source? It is common in early American history now to stress how much citizens yearned to be part of this broader community: ‘Early national citizens viewed themselves as participants in a transnational community, drawn together by sinews of trade, migration, and information’, wrote Rachel Hope Cleves in her study of French Jacobinism in America. The expansive and imposing Atlantic Ocean, therefore, was ‘not…a barrier that cut them off from Europe but…a concourse that connected them to the Old World’. Cleves wrote that ‘when American readers sought out the news from France, they were seeking news of their own world. The streets of Paris led directly to the streets outside their doors’. But the question that is often left unasked—and when asked, often goes unanswered—is this: what limit did their lack of actual knowledge of the broader Atlantic world place on the usefulness of these broader frameworks? If the foreign information and influence early Americans encountered, adapted, and, as in the case of several monographs, feared, were significantly tinged by national interpretations and local issues, to what extent does the label ‘Atlantic’ remain accurate? Seth Cotlar’s fascinating tale of America’s rejection of Tom Paine and the radicalism he represented offers a potent example. America’s development of ‘democracy’, Cotlar argued, came in dialogue with the ‘Atlantic-wide debate’ over radical political movements percolating out of Europe.
The story of America’s political scene, his book explained, was not isolated to the squabbles between Jeffersonian Republicans and Federalists, but was also a response to the radical movement that was centred in France and spread across an interconnected print web. Much of this influence ended up being reactionary, as the backlash against the extremes of the French Revolution created the ‘ideal American citizen’ who was a ‘proudly xenophobic American patriot who had little interest in or desire to emulate European politics’. Echoing the traditional exceptionalist narrative of American identity formation, though with the added flair of utopian politics, Cotlar’s account presented America’s Atlantic cousins as scary bogeymen who frighten citizens away from foreign interaction. But this tale also demonstrates the elastic nature of evidence. Cotlar sought to gauge the temperature of the American public by dissecting the newspapers they read: ‘political printers succeeded in generating a cohort of engaged and sympathetic readers’, he reasoned, and created imagined communities comprised of ‘people whom they knew not as neighbours, but as abstract and theoretically equal fellow citizens’. This is a difficult leap, which Cotlar acknowledged, and he subsequently nuanced the relevance of his conclusions. This theoretical print culture that connected foreign nations and American citizens across the vast Atlantic world is a slippery concept, and often fails the test of materialist scholars. Interpretation and reception of both foreign news and national newspapers varied by locality, and the interconnected web of ‘engaged and sympathetic readers’ did not reach a broad scale until the mid-nineteenth century. How, then, can one assume a broad audience that shared national views, let alone Atlantic sympathies? Rachel Hope Cleves aimed to avoid this problem in her study of American anti-Jacobinism by tracing specific political and intellectual themes through several decades and in many localities. Indeed, her creative and insightful narrative of the Reign of Terror’s image in America from the French Revolution to the Civil War is an acute example of how eclectic transnational issues evolve over time and place. The purpose of the book was to explore the reception and memory of the French Revolution’s violent images in order to ‘understand the pressures that the exigencies of a new republican political culture placed on violence in the early national era’. While most historians have acknowledged the role of the French Revolution during the 1790s Federalist-Republican debates, Cleves argued that remnants of Robespierre and the Terror remained long after that decade: ‘For seven decades,’ she explained, ‘from the rise to power of the radical Jacobin club in 1792 until the fall of the southern Confederacy in 1865, French Revolutionary discourse pervaded American newspapers, religious literature, political orations, broadsides, private letters, fiction, poetry, pedagogy, drama, and periodicals’. Drawing most poignantly—and persuasively—on the sermons of Calvinist minister Elijah Parish, Cleves painted a portrait of America dominated by an intellectual mixture of human depravity and American exceptionalism—a blend that drew on the Terror’s morbidly graphic imagery as a way to caution against dangerous anarchy and bloody chaos.
The French Revolution demonstrated, in the end, that humankind was too dangerous to be left to its own fancy, and that strong central order and cultural checks were required to maintain peace.42 Thus, many ministers, politicians, and writers emphasised the difference between America and France in their ability to resist violence and control humankind’s dangerous potential. The marriage of Federalist and Calvinist rhetoric brought stability and unity, especially after the French Revolution’s violent turn. (Indeed, Cleves’s work is one of the few treatments of American identity that persuasively infuses religion into the narrative.) Chants of anti-Jacobinism permeated late 1790s discourse, and conspiracies of an international Jacobin/Illuminati connection continued long after the Jeffersonian revolution in 1800. Then, importantly, after the perceivably Jacobin threat subsided, the image of Jacobinism was transferred to a new threat to American peace and civility: slavery. To American ‘anti-Jacobins’ in the antebellum period—and at this point the label of ‘anti-Jacobin’ became increasingly slippery—the persistence of slavery was connected to the violence of the French Revolution, as both were examples of ‘the individual’s failure to control his or her depraved passions and be obedient to moral authority’. At the very least, the anti-Jacobin tradition provided abolitionists a language to denounce the violence and amoral acts of slaveholding. But the influence of the Terror continued even further: ‘Anti-Jacobins’ also ‘became the nation’s most consistent advocates of common schooling’ due to their insistence that moral education was necessary to prevent rising generations from developing the vices of depraved humankind. In Cleves’s narrative, the legacy and tradition of anti-Jacobinism had far-reaching effects in shaping American culture long after Robespierre. However, indicative of much of this type of work on Atlantic history, there still remains a question of how much the French Revolution was an actual influence on American culture and how much different movements used the event as a prop for their own arguments—that it was more a bogeyman than an actual instigator. This is especially the case with the abolitionists: in most cases, it appears that they were merely being pragmatic by using whatever language proved useful. Further, the label of ‘anti-Jacobin’—not unlike the broader question of ‘Atlantic culture’—becomes so hazy and diffused that its effectiveness as a description becomes questionable. Anything that emphasised the depravity of humankind, the necessity of a strong moral and governmental structure, and the fear of humanity’s anarchic tendencies could be labelled ‘anti-Jacobin’; at this point, where does ‘anti-Jacobin’ begin and Federalism or, even more broadly, Calvinism, end? As recent scholarship has shown, there is a much longer intellectual pedigree for these viewpoints than merely the French Revolution. Where Cleves examined the fear of French Jacobins, Sam Haynes’s recent work focused on America’s continued obsession over ‘John Bull’, an image that represented the overshadowing and intimidating figure of Great Britain. In the wake of the War of 1812—where, according to Haynes, citizens finally came to peace with the idea of being separate from England—‘Americans became even more conscious of the web of transatlantic connections that rendered them, for all intents and purposes, a cultural and economic satellite of the British empire’.
Now that their military had once again claimed victory, their ‘patriotic fervor of the postwar years encouraged them to address the issue of nationhood as never before’. In Haynes’s telling, Americans became obsessed with, on the one hand, repudiating all cultural ties to their former parent and, on the other, still seeking approval from the very culture they denounced. The image of a British puppeteer was behind every threat the early republic encountered, from antislavery to Texas’s annexation. And in the process of describing and decrying the British ‘Other’, they were better prepared to develop their own national ‘self’. ‘To become more American,’ Haynes claimed, ‘they would first have to become less British’. Haynes is much more willing than Cleves to admit that these foreign threats were more perceived than real. For American orators, Haynes explained, Britain became ‘a one-size-fits-all bête noire’, an effective prop to invoke whenever they required a rhetorical opponent. ‘Americans during the Jacksonian period routinely indulged in transatlantic scapegoating’, Haynes acutely noted in one of his book’s strongest sections. Even if the text never persuasively proved that this Anglophobia reached beyond the newspaper editors and conspiratorial politicians, Haynes’s narrative successfully demonstrates how many Americans were more interested in the image of a foreign ‘Other’ than the actual presence itself. This is a necessary concession, for recent scholarship, like that of Elisa Tamarkin and Leonard Tennenhouse, has shown that even while Americans rhetorically denounced England they continued to love, embrace, and import numerous British ideas and materials with surprising frequency—an irony that epitomised America’s ambivalent attitude towards the foreign world.47 Yet Britain and France were not alone in occupying the American mind. Timothy Roberts, in his study of the 1848 European revolutions, explored almost the same period but came to a tantalizingly different conclusion: American exceptionalism did not triumph during the American encounter with the broader Atlantic world during the late 1840s; on the contrary, it only became more contested during that period. By focusing on how Americans interpreted and depicted the tumultuous revolutions of 1848, Roberts recounted the compelling story of two different perspectives during the decade. The first, more simplistic reaction to the wars was indeed a reaffirmation of exceptionalism: ‘simplistic or inaccurate interpretations’ of the European revolutions led many Americans to conclude not only ‘that the American Revolution was exceptional, but also that, indeed, so was America at the mid-nineteenth century, on account of its revolutionary heritage and its apparent lack of problems in contrast to the social unrest that plagued Europe’. It is in depicting this reaction that Roberts is most astute in demonstrating the faultiness of America’s understanding of the wider Atlantic world. Americans were not thirsting for international news as much as they were yearning for a reaffirmation of their own nation’s exceptional status. When contemporary events did not conform to how Americans wished to interpret them, ‘the details…were often altered or omitted altogether’.48 The most important part of how the revolutions were depicted was not accuracy, but reassurance. However, the second and more telling reaction—and perhaps the most important insight in Roberts’s volume—was a response that took time to sink into American culture.
America during the late 1840s was not, of course, the place of contentment that newspapers wished to depict. The remnants of Jacksonian democracy and the slow coming of the Civil War introduced unrest and caused ever deeper cultural instability. Thus, an increasing number of American figures began concluding that ‘Revolutionary Europe, not despite its flaws but because of them, demonstrated that complacency—failure to reform—was transatlantic and thus implicitly challenged the notion of American exceptionalism’.49 It was the very realisation that America was not immune from the problems that plagued the world that enabled reform to arrive. Where Cleves found American exceptionalism at the roots of abolitionism and the Civil War, Roberts insisted that it was the ‘challenging and redirecting, if not ending, belief and practice in American exceptionalism’ that finally led to the many calls to reform.50 While the first reaction to the 1848 revolutions that Roberts depicted—which reaffirmed uniqueness—was based on caricatures and distortions, the second and more lasting response was based on an actual contemplative interpretation of what was taking place across the Atlantic. In this interpretation, the actual beginning of an Atlantic influence brought the end of American exceptionalism. The fact that these recent and well-executed books written by able historians can drastically diverge on the obsessions and interactions between the early American republic and the broader Atlantic world, however, should underscore the impossibility of determining a single coherent American response to the larger world, let alone of constructing a representative identity for a period known for its tumultuous instability. **The Atlantic’s Future(s)** Perhaps the central message delivered in these recent works on the relationship between America’s identity and the broader Atlantic world is that the foreign ‘Other’ served more often as a proverbial mirror than as a point of direct influence. America—or perhaps more succinctly, American rhetoric—was obsessed with the threats of British interference, Jacobin conspiracies, Haitian slave uprisings, and foreign revolutions, but only with the caricatured or distorted versions of those threats. They served more as a tool than an instigator. American thought did not actively participate in a transatlantic, international marketplace of ideas; in most cases, it was still a closed circle of Americans speaking to other Americans about American issues with only America’s best interest in mind. Foreign culture did indeed serve as a supplier of ideas and innovations, but it was more often than not relegated to a series of proof-texts meant to serve parochial agendas. And that is perhaps why American historians should be more willing to adapt elements of what David Armitage termed ‘trans-Atlantic’ history, or ‘a history of the Atlantic world told through comparisons’. This approach to an international history focuses on how different regions received, interpreted, and appropriated ideas circulating throughout the revolutionary and early republic period. Atlantic histories of America have often treated both the foreign and the domestic as two monoliths contesting each other, when in reality there were vast divisions within each. Comparative studies that examine how these tensions played out in different times and places will better exemplify the heterogeneity of early America.
Ironically, intellectual historians of the late nineteenth and twentieth centuries have already utilised these tools, a remarkable fact given that America during the Gilded Age was more internationally connected and thus more prone to the foreign influence upon which early Americanists have focused.52 If the Atlantic Ocean indeed served, as one historian aptly put it, as a ‘seaway for the movement of people, goods, ideas, and aspirations’ for early Americans, scholarship has to deal further with the transmission of those goods at the receiving seaports.53 Especially when it came to ideas, the raw materials leaving Europe rarely resembled the merchandise that was eventually distributed in America. There was no such thing as unbiased or objective news reporting to transmit foreign affairs—nor did anyone claim as much—and that should alert historians to the possibility that even a serviceable knowledge of foreign relations was in many cases impossible. Newspaper editors, religious ministers, popular novelists, and crafty politicians all used transatlantic themes as malleable clay with which to craft their poignant messages. That transformation—that *Americanization*—is perhaps the most important process for understanding early American intellectual culture. A more materialist approach to America’s interaction with the Atlantic world, which would focus on actual dissemination and appropriation rather than an ethereal conception of a transatlantic dialogue, offers a more grounded analysis of how Americans actually worked to construct their own identities. Perhaps even more tantalizingly, the fact that this transformation of foreign texts, ideas, and caricatures was in many instances performed by individuals who had recently emigrated from those very countries should cause reflection. Indeed, if one were to search for the most tangible form of foreign influence, one need look no further than the very cities and neighbourhoods in which Americans lived. Immigration numbers were astoundingly large—especially from nations like Ireland and Scotland—during the decades following independence. Thus, the traditional narrative of depicting the ‘foreign’ as evil and degenerate obscures the fact that a large number of Americans during the early republic either came from those cultures themselves or were in close contact with those who did. That these rhetorical denunciations dismissed relatives and neighbours should raise questions of sincerity and earnestness. That said, the ‘Atlantic turn’ in American history should absolutely continue to be an important approach for understanding how citizens understood their newly United States; indeed, the ‘persistent localism’ of past decades is not the model that should be resurrected. At its best, Atlantic history provides the historian with tools to understand past cultures and frameworks for comparative analysis. As Joyce Chaplin has skilfully demonstrated, theories utilised by Atlantic historians can ‘bridge fields’ because ‘they are trading languages that can operate across frontiers’. Further, they can also help the scholar avoid historiographical exceptionalism, because ‘an illusion of uniqueness’ is most often the result of ‘ignorance of what is going on in parallel fields’. The Atlantic framework, then, provides relevance and clarity—a broader perspective from which to better interpret local events.
Yet it remains important to refrain from placing modern sensibilities of transnationalism, cosmopolitanism, and globalism upon those of the past, and it is equally imperative to resist assuming a smooth transmission of ideas and influences within a fractured culture. Early Americans were indeed concerned about global events and foreign literature, but often only to preserve and reaffirm their own national, and local, anxieties. Our transatlantic frameworks should not infiltrate the largely provincial mind-sets of past thinkers. Those who did embrace a more cosmopolitan worldview—Ziesche’s Americans in Paris, or Roberts’s eyewitnesses to the 1848 revolutions—were more aberrations than harbingers. A majority of citizens of the early republic were most concerned with local affairs, local interests, and, most especially, local uniqueness. Rosemarie Zagarri is correct that ‘global history’—an even broader methodology than Atlantic history—‘challenges the [early American] field’s basic organizing principle: the primacy of the nation-state’. Yet that critique works in both directions, as it is not only crucial to understand the broader sphere but also to recognise the more provincial context for much of the early American experience. While it is tempting to depict Americans, as Thomas Bender did, as ‘rooted cosmopolitans’ instead of ‘nationalists’, such a depiction projects many of our own sensibilities on those of the past. In the end, the foreign ‘Other’ was most commonly nothing more than a pawn in the American game of exceptionalism. Acknowledging that fact does not endorse the parochial mind-set, but it better captures the worldviews of the past. If Timothy Roberts’s concluding narrative is correct—and I suspect it is—that the approach and reality of the Civil War was the primary cause for challenging American exceptionalism, then it required the death of over half a million citizens for the nation to realise that it was not immune from larger problems. Even if strains of cosmopolitanism saw a later rebirth that continues today, exceptionalism would never again go unchallenged. But that does not diminish the dominance of exceptionalism prior to that conflict. Atlantic history and transnational frameworks can help modern readers understand how Americans constructed that exceptionalism in the face of a large and evolving world. By keeping in mind the limits of this approach—the lack of overall exposure, the dearth of reliable knowledge, and, most importantly, the pre-eminence of parochial interests—the parameters of Atlantic history can remain wide enough to encompass both thematic issues as well as a broad array of American citizens. Even when it is admitted that exceptionalism ruled the day, Atlantic history can help determine what the many varieties of exceptionalism really meant.
4 Sam W. Haynes, *Unfinished Revolution: The Early American Republic in a British World* (Charlottesville: University of Virginia Press, 2010).
6 Anderson, *Imagined Communities*, 44, 46, 47.
8 For an example of the former, Anthony Smith argues that nationalism produces a ‘broad and abstract framework’ that is filled in by local communities. Anthony D. Smith, *Nationalism: Key Concepts*, 2nd ed. (Cambridge: Polity Press, 2010), 27.
24 Philipp Ziesche, *Cosmopolitan Patriots: Americans in Paris in the Age of Revolutions* (Charlottesville, VA: University of Virginia Press, 2009), 165.
For a similar approach that looks at how Americans experienced living in London prior to the Revolution, see Julie Flavell, *When London Was Capital of America* (New Haven: Yale University Press, 2010).
25 Ziesche, *Cosmopolitan Patriots*, 7.
26 Ziesche, *Cosmopolitan Patriots*, 39.
27 Ziesche, *Cosmopolitan Patriots*, 124-125.
29 White, *Encountering Revolution*, 87-88.
30 White, *Encountering Revolution*, 125.
32 For earlier works on early American identity and material culture, see, for example, Richard L. Bushman, *The Refinement of America: Persons, Houses, Cities* (New York: Knopf, 1992).
It should be noted that Roberts, *Distant Revolutions*, spends a chapter on Americans living in Europe during the 1848 revolutions, though his narrative in that chapter does not seem to play a role within his larger thesis, as will be discussed.
Cotlar, *Tom Paine’s America*, 83.
Cotlar, *Tom Paine’s America*, 17.
Timothy Mason Roberts, *Distant Revolutions: 1848 and the Challenge to American Exceptionalism* (Charlottesville: University of Virginia Press, 2009), 15, 44, emphasis in original.
Roberts, *Distant Revolutions*, 104.
Roberts, *Distant Revolutions*, 191.
For an argument for a comparative focus within Atlantic history, see Francis D. Cogliano, ‘Revisiting the American Revolution’, *History Compass* 8, no. 8 (August 2010): 951-963.
Canada’s NEXT
Classical:NEXT Opening Gala Proposal
May 20, 2015, Rotterdam

TABLE OF CONTENTS
Ceremony design concept
Artist, composer, speaker bios
Proposed stage manager
Financing / funding source
Appendix 1: Production/Stage Manager detailed CV
Appendix 2: Video Production Company
Appendix 3: Video Montage Content Ideas

CEREMONY DESIGN CONCEPT
This proposed program for the Classical:NEXT opening night is designed to show that Canada is ideally and uniquely positioned to provide a delightful, dynamic and potentially shocking take on what’s “next” in classical and contemporary music. Musical selections and proposed artists are intended to reflect what Canadian classical music is today, with representation from a current generation that is innovative and bold. Suggested artists come from a wide range of culturally and linguistically diverse communities, include artists of aboriginal origin, and are geographically representative of Canada. The design concept for the evening focuses on a theme of “possibility.” What is possible for Canadian artists within the country and abroad? How can we challenge what is possible musically? The evening will have a seamless flow and be tightly produced. The program will move from artists to speakers, connected through a high-quality video montage that will showcase cultural elements not easily presented in the conference format. This will allow for the contrast of larger-scale ensembles and projects with the smaller, more intimate solo and chamber works performed live during the event. Starting off with Polaris Prize-winning Inuit throat singer Tanya Tagaq in a blacked-out theatre will immediately set the tone of surprise for the evening. The use of visuals, as in the performance of works such as *Hitchcock Etudes* with its integrated video production, will allow for dynamic use of the space and build to a closing finale with the keynote speech of conductor Yannick Nézet-Séguin. As his presentation ends, the bar will be rolled out onto the stage and the audience will be invited to join the artists, who will be serving drinks, allowing for immediate, intimate engagement while breaking the fourth wall and encouraging audiences to take away the question of “what is possible?”

STAGE PRE-SET: Podium (downstage far right); large hanging video screen(s); chairs/stands/etc. for Continuum (stage left); chairs/stands for Cecilia String Quartet (stage right); piano for Megumi Masaki (center stage)
0’00” Classical:NEXT Welcome from Jennifer Dauterman (2’00”)
2’00” Prelude - Tanya Tagaq, Inuit throat singer, solo improvisation. (3’00”)
Video reference: Tanya Tagaq live at the Polaris Awards > http://www.youtube.com/watch?v=FcOYx4_72Zo.
(Tanya exits)
5’00” Video montage *This is NEXT* – showcasing a diverse range of large-scale projects from across Canada; see Appendix 3 (2’00”)
(Continuum moves to stage during video)
7’00” Opening speech (video or live) from Robert Lepage or Simon Brault (2’00”)
(Robert or Simon exits)
**9’00” Performance #1** - Continuum Contemporary Music
*raW* (2003), by James Rolfe (11’00”)
Instrumentation: piccolo, bass clarinet, violin, viola, percussion, piano
**MP3**: Available in zipped folder.
(Continuum exits, Measha Brueggergosman and Cecilia String Quartet move to stage)
**20’00” Speech** (live) Measha Brueggergosman speaking about the sense of possibility Canada gives classical music. (2’00”)
(Measha exits as Cecilia String Quartet starts)
**22’00” Performance #2** – Cecilia String Quartet
*Commedia dell’arte* (2010), by Ana Sokolović (8’00”)
**MP3**: Available in zipped folder.
(Cecilia String Quartet exits as Barbara Hannigan moves to the podium)
**26’00” Speech** (video) from Barbara Hannigan on being a Canadian classical musician in an international context. (2’00”)
(Megumi Masaki moves to stage during this video speech)
**28’00” Performance #3** - Megumi Masaki, piano (with video and electroacoustic accompaniment)
*Hitchcock Etudes* (2014), by Nicole Lizée (EXCERPT) (7’00”)
**MP3**: Available in zipped folder. (Note: included is the full *Hitchcock Etudes*. Only an excerpt would be performed.)
**Video reference**: Excerpt, *Schoolhouse Etude* > [http://www.youtube.com/watch?v=HWEj0Guv87Q](http://www.youtube.com/watch?v=HWEj0Guv87Q).
(Megumi Masaki exits, Yannick Nézet-Séguin moves to stage)
**36’00” KEYNOTE SPEAKER** – Yannick Nézet-Séguin (9’00”)
(Yannick exits)
**45’00”** Bar rolled out onto stage and floor in front of stage during applause. VIDEO TEXT invites audience to stage for reception. Musicians serve audience members to engage them and direct them to reception.

ARTIST, COMPOSER, SPEAKER BIOS
Director and CEO of the Canada Council for the Arts, “Simon Brault has raised the profile of the arts in his home province of Quebec, across Canada and internationally and is well positioned to lead the Canada Council in remaining responsive to the shifting arts ecology and to changing demographics, technologies and economic developments. Simon has been a tireless champion of public engagement in the arts. We believe this is vital to the sustainability of the Canadian arts sector. The Canada Council will benefit from the vision of such a forward-looking and credible individual. Simon’s intellect and creativity will help set a strong course for the Canada Council’s future.” | http://canadacouncil.ca/council/news-room/news/2014/simon-brault.
Noted by the San Francisco Chronicle as “a singer of rare gifts and artistic intensity” and by the Miami Herald for possessing “a superb voice capable of just about everything,” Canadian soprano Measha Brueggergosman has emerged as one of the most magnificent performers and vibrant personalities of the day. She is critically acclaimed by the international press as much for her innate musicianship and voluptuous voice as for a sovereign stage presence far beyond her years. | http://www.measha.com/bio.
Taking their name from St. Cecilia, the patron saint of music, the Cecilia String Quartet is proud to be celebrating its 10th anniversary for the 2014-2015 season.
The Quartet was formed in Toronto in October 2004, and after a decade of fruitful musical discovery, they once again reside in Toronto where they are Ensemble-in-Residence at the University of Toronto’s Faculty of Music. Hailed for their “powerful” (Chicago Sun-Times) and “dauntingly perfect” (Berliner Zeitung) performances, the CSQ perform for leading presenters in North America and Europe. Past engagements include performances at the Amsterdam Concertgebouw, Berlin Konzerthaus, Buffalo Chamber Music Society, and London’s Wigmore Hall. Their live concert recordings have been broadcast on more than a dozen international public radio networks, including Australia (ABC Classical FM), Canada (CBC/SRC), the United States (WQXR), England (BBC Radio 3), and Germany (DeutschlandRadio). Prize-winners at several international competitions, including Osaka (2008) and Bordeaux (2010), they were awarded First Prize at the 2010 Banff International String Quartet Competition (BISQC), where they also won the prize for the best performance of the commissioned work. | http://ceciliastringquartet.com/about.
Formed in 1985, Continuum Contemporary Music presents concerts featuring the core ensemble of flute, clarinet, violin, cello, piano, and percussion, as well as unusual instrumental combinations. Featuring some of Canada’s top musicians, the ensemble has earned international acclaim - De Telegraaf (Amsterdam) wrote “Ensemble Continuum performs magic with sound”; Brabants Dagblad hailed the ensemble as “sublimely skilled”. Continuum was awarded the 1994 Chalmers Award and in 2014 was shortlisted for the Toronto Arts Foundation’s Roy Thomson Hall Award of Recognition. | http://continuummusic.org/about.
Barbara Hannigan is known worldwide as a soprano of vital expressive force directed by exceptional technique. She is now bringing that same high energy and expertise to her varied activities as a conductor while continuing to work, as a singer, with the most prominent maestros, including in recent seasons Simon Rattle, Kent Nagano, Esa-Pekka Salonen, Andris Nelsons, Yannick Nézet-Séguin, Ludovic Morlot, David Zinman, Alan Gilbert and Reinbert De Leeuw. | http://www.barbarahannigan.com/about.htm.
Thought by many to be the most brilliant theatre director of his age, Robert Lepage has also directed opera and film. Versatile in every form of theatre craft, Robert Lepage is equally talented as a director, playwright, actor and film director. His creative and original approach to theatre has won him international acclaim and shaken the dogma of classical stage direction to its foundations, especially through his use of new technologies. Contemporary history is his source of inspiration, and his modern and unusual work transcends all boundaries. | http://lacaserne.net/index2.php/robertlepage.
Called a “brilliant musical scientist” and lauded for “creating a stir with listeners for her breathless imagination and ability to capture Gen-X and beyond generation,” Montreal-based composer Nicole Lizée creates new music from an eclectic mix of influences including the earliest MTV videos, turntablism, rave culture, Hitchcock, Kubrick, 1960s psychedelia and 1960s modernism. She is fascinated by the glitches made by outmoded and well-worn technology and captures these glitches, notates them and integrates them into live performance. | http://www.nicolelizee.com/biography/#.VGzqD2d8Pcs.
Award-winning pianist Megumi Masaki has established herself as an international artist renowned for her warmth and rapport with audiences and her superb musicianship. Her multi-faceted career as acclaimed soloist, chamber musician, champion of contemporary music and pedagogue has taken her across Canada, the USA, Europe and Asia. In 2006, she made her film debut with musical performances in the CBC documentary film “Appassionata: Eckhardt-Gramatté.” Recently, she was selected as Artistic Director of the Eckhardt-Gramatté Competition. In 2005, she founded the International Virtuosi Concert Series in Frankfurt, Germany. In 1999 Masaki co-founded the annual Waterford Summer Music Festival in Utah, where she acts as artistic co-director, conductor, pianist and coach. She is the recipient of numerous scholarships, awards and grants from the Department of Foreign Affairs and International Trade (Government of Canada), the Canada Council, the Manitoba Arts Council, and the British Council. Masaki was awarded the Willi-Daume Prize from the Deutsches Olympisches Institut and the German National Olympic Committee for her project “Music and the Olympic Games”. Masaki is presently Associate Professor of Piano at the Brandon University School of Music, where she coaches solo and collaborative pianists and teaches undergraduate and graduate piano pedagogy. | http://www.brandonu.ca/music/dept-faculty/masaki.
Yannick Nézet-Séguin is Music Director of The Philadelphia Orchestra and the Rotterdam Philharmonic Orchestra. He has conducted all the major ensembles in his native Canada and has been Artistic Director and Principal Conductor of the Orchestre Métropolitain (Montreal) since 2000. He continues to enjoy a close collaboration with the London Philharmonic Orchestra, of which he was Principal Guest Conductor from 2008 to 2014. | http://www.yannicknezetseguin.com/biography.html
Serbian-born composer Ana Sokolović, who has lived in Montreal for two decades, has been immersed in the arts all her life. Before taking up theatre and music, she studied classical ballet. She studied composition at university under Dusan Radić in Novi Sad and Zoran Erić in Belgrade, then completed a master’s degree under the supervision of José Evangelista at the Université de Montréal in the mid-1990s. Her work is suffused with her fascination for different forms of artistic expression. Both rich and playful, her compositions draw the listener into a vividly imagined world, often inspired by Balkan folk music and its asymmetrical festive rhythms. The winds of change brought by her work quickly vaulted her to a prominent position on the Quebec, Canadian and international contemporary music scenes. In the winter of 2012, she was recognized as a national treasure by Quebec’s Ministère de la Culture, des Communications et de la Condition féminine. | http://www.anasokolovic.com/en/biography.
Tanya Tagaq’s music isn’t like anything you’ve heard before. Unnerving and exquisite, Tagaq’s unique vocal expression may be rooted in Inuit throat singing but her music has as much to do with electronica, industrial and metal influences as it does with traditional culture. This Inuk punk is known for delivering fearsome, elemental performances that are visceral and physical, heaving and breathing and alive. Her shows draw incredulous response from worldwide audiences, and Tagaq’s tours tend to jump back and forth over the map of the world. From a Mexican EDM festival to Carnegie Hall, her music and performances transcend language.
Tagaq makes musical friends and collaborators with an array of like-minded talents: opera singers, avant-garde violin composers, experimental DJs, all cutting edge and challenging. Tanya’s albums make for complex listening, but her recent Polaris Prize win attests to her ability to make difficult music speak a universal tongue. | http://tanyatagaq.com.

STAGE MANAGER
A graduate in History and Sociology (specializing in Latin America), Caroline Hollway has spent the best part of 30 years in the arts, as stage manager, production and technical manager, project manager and education director for community and young people’s theatre companies in Britain and Canada. It has taken her into community parades with boats, cars and bicycles made of sticky tape, giant puppets in Portugal, human circuses in housing estates, too many events on soggy fields, community plays in south London, touring round the Scottish Highlands, running two theatres for young people in Wales, enjoying fireworks and tugs (fortunately at the same time), operas with elephants and community arts consultancy. Her work in Canada has included Senior Manager of the Education and Outreach Department, Canadian Opera Company; Production Management for Constantinople, a multimedia theatre/opera production, which tours internationally and throughout Canada; Consulting Director of Outreach and Education, Theatre Direct; Arts Consultant, Ginder Consulting and Soundstreams; and Production Management for Jumblies. Most recently she has been working as Producer for the Luminato Festival of Arts and Creativity. She has one simple, but passionate, aim: to introduce people, especially young people, to the live arts as creators and participants, as well as spectators. See Appendix 1 for full CV.

FINANCING / FUNDING SOURCE
The Canada Council for the Arts is a full partner in this proposal and the primary source of finances for the opening program and associated activities. The Canada Council is also planning, for the first time, to bring a substantial delegation of Canadian agents and managers to participate in the trade fair, and will book a selection of stands in the trade fair from which all Canadian participants may conduct business.

APPENDIX 1 – Production/Stage Manager CV
CAROLINE HOLLWAY
A producer, production manager and arts education consultant motivated by one simple but passionate aim: to engage people, especially young people, in the live arts as creators and participants, as well as spectators.
Production:
- Producer, Visual Arts, Luminato Festival. Responsible for all aspects of the Visual Arts programming: 2014 artists included terence koh, Los Carpinteros and Lost Train (Kid Koala, Fred Morin and Jason Shron). 2013: Marina Abramovic Institute (MAI) Prototype, touring temporary venue in Trinity Bellwoods Park (including commissioning the build, all the items within and running the venue), Stockpile at Brookfield Place, Viktor&Rolf DOLLS at the ROM, also L’Allegro Movement Project and Future Tastes: at the Kid’s Table
- Sept – Dec 2013 Production Manager and Programmer, Toronto Christmas Market. Inspired by German Christmas markets, the installation included 30 booths, Santa’s house, roving carollers, a fully programmed outdoor stage, rides and a 52’ Christmas tree.
- September 2012 Parade Co-ordinator, Canadian Olympic Committee, Olympic Heroes Parade, Toronto. Responsible for parade line-up, marshalling area, accessibility of floats for Paralympian athletes, and dispersal.
- March 2012 – June 2012 (also 2010 and 2011) Production Manager, Visual Arts and Community events, Luminato Festival of Creativity. Responsible for all production requirements of Visual Arts and Community programs, including project managing: 2012 ‘The Carretillas Project’ (Rainer Prohaska) (kitchens made up of shopping carts) and Re//Ply (Dan Bergeron) (sidewalk art interventions created out of condo boards); 2011 Habit (David Levine) at OCAD, Sargasso (Philip Beesley) at Brookfield Place; 2010 Ship o’ Fools (Janet Cardiff/George Bures Miller), Wish Come True (Friends with You), David Picault tribute, Mark Fast installation, Coleman Lemieux et Compagnie closing celebration at Nelson Mandela Park Public School.
- Nov 2012 – Jan 2013 Production Manager, Theatre Direct Canada. Presentation of Sanctuary Song at the National Arts Centre (also revival in Jan 2011)
- May 2011 to Dec 2011 Production Manager, Jumblies Theatre – Like an Old Tale. Bringing three years of community arts outreach work in Scarborough to a culminating performance, involving 23 community groups, upwards of 250 people, opera singers, choirs and musicians in a found space in Scarborough for 10 days of performances
- September 2010 to January 2011 Associate Project Manager, Sony Centre, re-opening of the refurbished theatre
- March 2010 – May 2010 Production/Project Manager, Jumblies Theatre – Like an Old Tale. Workshop presentation of work to date on The Winter’s Tale, at Cedar Ridge Creative Centre, involving 22 storytellings by 18 community groups over 11 days.
- March 2009 – June 2009 Production Manager, Soundstreams Canada/Luminato 09 co-production of The Children’s Crusade. Premiere of a promenade opera by R. Murray Schafer in a disused warehouse with 115 performers including 35 children. Part of Luminato 2009.
- Jan 2009 – May 2009 Production Manager, Nightswimming – City of Wine. Premiere of a seven-play cycle, performed by 105 university drama students from across Canada.

APPENDIX 2 – Video Production Company
Riddle Films is a Gemini Award-winning and Genie-nominated production company committed to making high-quality films and television. Helmed by producers Jason Charters and Liam Romalis, their work has played on television screens and at film festivals around the world, including VisionTV, Bravo, PBS, CBC, YLE, Canal+, the Toronto International Film Festival, the Sundance Film Festival, the Rotterdam International Film Festival, and New York’s Museum of Modern Art. Productions include the documentary short *Remembering Arthur Goss*, the music series *God’s Greatest Hits*, the thirteen-part documentary series *Sex + Religion*, the Gemini Award-winning series *Gospel Challenge*, the Genie Award-nominated short film *Noise* and the Gemini Award-winning music documentary *Carry Me Home*.

**APPENDIX 3 – Video Montage Content Ideas**
*NB: This list is an example of ideal video content. More examples will be sourced and explored after discussions with Classical:NEXT.*
1. *Music from the New Wilderness*, presented by Western Front (Vancouver, BC). An immersive exploration of the B.C. soundscape, the production features new compositions by Adam Basanta (QC), Christian Calon (QC), Alicia Hansen (BC), Jennifer Schine (BC), and Jesse Zubot (BC); a Vancouver-based string quartet comprised of Peggy Lee, Jean René, Jesse Zubot and Joshua Zubot; and visuals by Krista Belle Stewart (BC). [http://front.bc.ca/events/music-from-the-new-wilderness-3/](http://front.bc.ca/events/music-from-the-new-wilderness-3/)
2. Modulus Festival and Orpheus Project, presented by Music on Main (Vancouver, BC) [http://www.musiconmain.ca/concerts/the-orpheus-project/](http://www.musiconmain.ca/concerts/the-orpheus-project/)
3. “Splash!” event, presented by the Victoria Symphony (BC). Annual outdoor public event which in 2014 included the premiere of a new work by composer-in-residence Jared Miller (Can/US) entitled *The Unknown Warrior*. [https://www.youtube.com/watch?v=TSWdT-KEwtc](https://www.youtube.com/watch?v=TSWdT-KEwtc)
4. “Harbour Symphony” event, presented by Sound Symposium (Nfld). Biennial public event in the harbour in St. John’s featuring live fog-horn performance for multiple ships, coordinated by stopwatches and graphic scores by Canadian composers.
5. *Stickboy*, by Neil Weisensel (comp.), Shane Koyczan (lib.), presented by Vancouver Opera (BC)
6. *Lillian Alling*, by John Estacio (comp.), John Murrell (lib.), presented by Vancouver Opera (BC)
7. *Madame Merveille*, by André Ristic (comp.), Cecil Castellucci (lib.), presented by ECM+ (QC) [http://vimeo.com/13185842](http://vimeo.com/13185842) [http://www.ecm.qc.ca/media/ECM+CommMmeMerv2011BILINGUE.pdf](http://www.ecm.qc.ca/media/ECM+CommMmeMerv2011BILINGUE.pdf)
8. *Inuit Games*, orchestral piece by Pat Carrabré (MB). Recording featuring Inuit throat singers Pauline Pemik & Inukshuk Aksalnik (Nunavut) [http://www.musiccentre.ca/node/70324](http://www.musiccentre.ca/node/70324)
9. *Orchestral Tuning Arrangement*, by Linda Catlin Smith (ON) and John Oswald (ON) [http://www.musiccentre.ca/node/11571](http://www.musiccentre.ca/node/11571)
10. Footage from The Shaw Amphitheatre at The Banff Centre [http://www.banffcentre.ca/revitalization/shawamphitheatre.aspx](http://www.banffcentre.ca/revitalization/shawamphitheatre.aspx)
11. Video used by presenter David Pay, Music on Main, in his presentation for the International Society for Contemporary Music bid earlier this year in Poland. This video shows Vancouver, which is to be the host city for the ISCM Conference in 2017. We would aim to use footage similar to this, though focusing on the whole of Canada, as a grounding element to tie the above videos together and to show the breadth of the Canadian landscape. [https://www.dropbox.com/s/wgji8489v0lam6w/2017%20Bid.mov?dl=0](https://www.dropbox.com/s/wgji8489v0lam6w/2017%20Bid.mov?dl=0)
‘THE SHADOW OF ONE’S OWN HEAD’ OR THE SPECTACLE OF CREATIVITY
SUSANNE VALERIE GRANZER
UNIVERSITY OF MUSIC AND PERFORMING ARTS VIENNA, MAX REINHARDT SEMINAR
Translation from German to English: Mirko Wittwar

Body on Stage
The relation between actor/actress and audience is extremely complex. Furthermore, it is different each time, as each performance is unique as such. In each case there develops a multi-layered relation, a kind of assemblage with a variety of connective and disjunctive relations. In the course of the performance these relations develop between actors/actresses and audience as well as between the actors/actresses on the stage. Just the same, however, within each actor/actress there develops an extended network of differential references, which is a precondition for successful acting. This process might be called *artistic research* or, to have it in the language of theatre, the research and investigation of the layers and stories of the characters, which must be seen, understood and updated, i.e. made visible. This text focuses on an event which takes a successful performance to the extreme of its poignancy. This may indeed be equated with a spectacle. It means that acting on stage puts the position of the subject into question. The subject is deprived of its priority, of its first-person position, of its safe harbour. The sovereign subject is deprived of its power. It loses its crown, which falls at its own feet. This may be read as a kind of destabilisation or even a narcissistic humiliation of the ego. At the same time this displacement contradicts itself. Is it not precisely the crucial event of creativity? The heart of the performing arts, beyond their aesthetic formats? Does it not at the same time concern and not concern the sensual experience of actors/actresses as well as the heightened perception of the audience, as time goes by in a flash while standing still at the same time? This “death of the subject” may cause joy, but also the contrary. For it is an event going beyond the level of pure entertainment, beyond the sheer gesture of representation—and in acting a non-representative power of art suddenly takes effect, one which the actors/actresses experience themselves as being at the mercy of. By this turn, on the one hand acting begins that wonderfully easy floating by which it becomes surprisingly enriched, and which cannot be created at will but must come by itself, like sleep, which also cannot be commanded at will. On the other hand, it pulls the familiar rug out from under the participants’ feet, as one might describe it.
*
Being an actress, I look back on practical experience in the theatre, and my own experiences on stage have always urged and driven me to ask, and perhaps to find out, more about what happens in the complex process of acting on stage from a philosophical point of view.
*
Actors/actresses—this is obvious—work with themselves as the medium of their art. Thus it becomes particularly visible in them that, as artists, they depend on all the sensual and intellectual capabilities and performances of their human existence. For this is exactly what they are working with, one to one. By listening, watching, speaking, answering, walking, existing together with others, by understanding, thinking, assessing and creating references. Updating all these levels to their own most extreme potentials and, perhaps even more fundamentally, learning them once again counts among the tasks of theatre.
A challenging, relentless work, meaning primarily cultivation and forcing on but not ruling, dominating. Furthermore, seen from the outside it is a kind of work which is mostly underestimated. * It is one of the crucial experiences of acting on stage that there is no fundamental difference between mind and matter. Neither is the mind inferior to the body nor is the body superior to the mind. There is no hierarchy. Both are given preconditions as well as equipment for successful acting. If an actor/actress, consciously or unconsciously, attempts to ignore or manipulate the material or spiritual conditions of his/her own body, his/her acting will be below his/her best. He/she must give up on his/her own will while at the same time being most attentive. In other words, he/she must deliver him/herself and cooperate with him/herself. In this process the spiritual and the physical substance are permanently interplaying, they are given entities in time and space. They cannot be separated. On stage, prejudices or hostility towards them result in a draw. However, these processes are much easier described than made happen on stage. We have incorporated the separation of mind and matter much more than we are aware of and thus our specific relation to them. This is clearly unmasked by the stage. * Practical experiences on stage demonstrate unmistakably that mind and matter form a union. One might compare them to Siamese twins. They cannot be isolated, except they are taken violently apart from each other, which way both will be damaged. An actor/actress cannot and may not try to invalidate this fact, for he/she is not only tied to both but, even more, he/she himself/herself is both: mind AND matter. A dual interpretation of mind and matter does not work with theatre. This is why the stage does not know any ascetic ideal, any neutralisation of desire which after all still secretly and unconsciously has its say in the cellars of bodiliness. So to speak as an imaginary prompter with the textbook of the respective story in his/her hands, whispering it to the person on stage. However, declaring mind and matter one and the same would be a misunderstanding. Also, when working on stage sometimes rather “consciousness” and then again “intuition” must be in charge, and it is evident that each of them has its own quality and intelligence, allowing for their own specific ways of perceiving and understanding in each case. This is what actors/actresses must learn to differentiate for their art. They are urged to discover the different potentials of the different powers, to be able to work with them, always themselves being the subject of the play of these forces. With themselves as the guinea pigs. However, not with a rabbit pulled out of the hat but with one which means business when researching itself, this research, by the way, not happening once, that is once and for all, but being renewed and becoming ever more thorough with each rehearsal process and each performance. In this context, wanting to play the role of controller observing his/her own acting from the watchtower proves to be a dead-end. This does not work and will only be counterproductive. Of course, all artistic work is based on skilled craftsmanship (téchne) which is and must be learned, practiced, exercised, known and finally purposefully used. Uncontrolled acting is not only unprofessional but may be fatal. It does not bear contemplating. 
Of course, an actor/actress is always aware that he/she is acting, and each production consists of a great number of agreements which have to be kept. At the same time, however, these agreements need leeway, leeway for free acting, where each repetition becomes lively and unique. Repetitions should never be mechanical, just a technique, that is a stereotype. In such a case they would be empty and uninspired, just the reeling off of a programme, without groove, without flow and, to have it this way: unanimated. In the course of the process, again and again one and the same challenge and difficulty becomes obvious: what happens cannot be fully controlled. Each time it must turn out well. Having been successful one time does not necessarily mean to be successful the next time. It may be that the boulder has been rolled up the hill in vain. All efforts have been futile. Sisyphus says hello from Hades. But also Apollo from Olympus, if acting has been a success. * This image from Hades is supposed to be an attempt to illustrate that on stage the act of willing is not sufficient for making acting successful, as the interpretation of the subject being the sole actor of what it does is invalidated by the creative act on stage. For the controller, to have it in the language of theatre, this means “exit”, and to have it even more pointedly: the actor’s/actress’s ego must simply “shut up”. This is meant to say that it must give up on its control function, on its predominant position. Then, in the process of acting, body and mind will be enabled to encounter each other on the thorough basis of artistic craftsmanship, by fruitful dispute and by lustfully quarrelling for the best possible solution for working together. The more intensively both are affirmatively connected in the process, the more they welcome each other as a community of friends, the more happily Eros is included, the better the effect for the entire situation on stage. Establishing a hierarchy of mind and matter and declaring them not partners with equal rights but antagonists is fatal. When acting, not listening to one’s body, for example, not giving in to its stimulations, declaring it a disturbing factor, that is declaring it a troublemaker—as it is suggested by the autonomy of the subject—results in a number of useless and unpleasant problems. Body and mind will block each other, and as a consequence they will lose the power of their potential and imagination. That is why the intimate relation of body and mind in the context of the artistic challenge is crucial for a successful performance—and results in an astonishing increase of pleasure for the actors/actresses. Criteria for Creativity Thus, the artistic process on stage makes issues of existence move. Issues in the general sense of our existence. The phenomena and problems of our conditio humana within which we find ourselves and to which we are exposed are really imposed on us in the process of acting. They can neither be excluded nor manipulated nor ignored without acting losing its lively creativity. All those many levels of our existence are the playground, and the radical aspects of theatre. This is the particular fate of this most anthropological of all arts. Its offer and its threat. Indeed, actors/actresses are immediately confronted with themselves as their own medium. 
Explicitly and obviously they must work with and contribute to the conditions of their human existence, from which they gain the shape of their character, from which they chisel the sculpture of their work. This requires much sensitivity and power and exposes us to a permanent process not knowing any standstill and never coming to an end once and for all. The courses of this process reveal a complex system of signs in the body, the mind, the mood and the gender which may neither be suppressed nor ignored but must be faced by actors/actresses while working with it openly. Thus, a role is never completed once and for all. Acting develops over time and passes in time. In a moment. It cannot be grasped, it does not become manifest by any object. It is always tied to making it happen again, which may always become a failure. On the other hand, it always allows for a beginning, for a fresh start, for another try. Just the same, the capability of starting again is an integral element of acting. If it fails, it may well be that it will be a success the next time, it may even become something grandiose. If, however, in terms of anthropology actors/actresses leave behind their particular talents in the dressing room after the performance or not is a different question. All these sensitive, sensual events always happen in front of other people, it may be during rehearsals in front of the team or during the performance in the front of the audience. Acting without a safety net, as we have it at theatre. Thus, the significance of what it means for actors/actresses to always being immediately exposed to others during the intimate event of their artistic work may not be belittled. Or, to have it positively, perhaps it is also an inspiration for them. In any case, always there are others present, watching them while they are not able to watch themselves. They never know what they are looking like from the outside. They always depend on the response of those others, including all ambivalences connected to it. One’s own art being publicly exposed to the permanent risk of becoming results in great fragility. On the other hand, and as a protection, it may be that this is the reason for the striking extrovertedness or even eccentricity of many actors/actresses. Thus, to understand the performing arts it is crucial to understand that actors/actresses, in the name of creativity, are always exposed and stay exposed to a dynamic ground. To have it differently, acting on stage is never a once and for all closed entirety. It is and stays to be a promise of becoming. At any rehearsal, at any performance, again and again and again. Actors/actresses must bear this process of constant becoming, must voluntarily entrust themselves to it, and this also means giving up on the understandable wish for security. Concerning their art, they are and stay to be subject to the promising, however also frightening fate of constant change. This challenge cannot be skipped. In other words, they must give up on wanting to be a sovereign subject, as we would like to believe, and must accept that their art intensively oscillates between active and passive, always on the threshold of a never-ending process, always being entangled in many differentiated relations to others they cannot control. One out of many. Being permanently exposed to others, bodily, sensitively, physically. They can never be sure about what might have been happening. The dynamic of becoming is always at work. 
Acting is an ephemeral kind of art, and no medium is capable of reflecting it appropriately. * To once again summarise the process of being as becoming on stage: As the crucial event of creativity when acting multi-sensorially on stage, a break has been described which may happen to an actor/actress. The “death of the subject”. It is this what suddenly makes him/her experience—not reflectively, not intellectually, but sensually, bodily—the giving up on the inherited, incorporated illusion of being “the actor, the subject” of “his/her own performance.” 4 Simultaneously there appears a zone of incapability to distinguish being active from being inactive, agent from patient (Agamben 1999a, 235). The actor/actress acts while not acting at the same time. He/she is both actor and non-actor—and precisely this strange pathos makes a performance an event. This ambivalent, artistic experience catapults those involved into an open in-between, into an irritating passage of being with-out-me. It is evident that there must be some-body who acts, however the contradictory condition for making a performance an event is that no-body acts. That is, there happens a process requiring an actor while not requiring an actor. This is an irritating fact leading a life of its own and not being controllable. What happens to me at this threshold, to my self, to myself as a person beyond reason and reflection? What does it mean for my free will, what happens to my autonomy? To my sovereignty? What happens to me when being permanently exposed to others, in this shared field to which I am obviously exposed and stay to be exposed? We have incorporated the ideas of the Enlightenment much more than we are usually aware of. The metaphysics of the subject with its illusions of identity and sovereignty work like a machine beyond awareness, and we are sitting in the cave of our perception, watching the shadows of our heads. Seen this way, *being as becoming on stage* creates a gap within man. This gap may have a frightening or constrictive effect, like a pressure or a burden. Or, on the contrary, like a promising openness. Then it will change from being fate to being the quality of beginning which does fade over time and finds expression by the *power of creativity*. **Becoming** In so far, the creative act of acting reveals a wound which has been incarnated in man, which may just the same be called the extraordinary *gift of acting on stage*. The *gift of acting*. In English, the word *gift* means *present*. In German, on the other hand, *Gift* means something toxic. Now, if we cross over the German and the English meaning of the word, danger and gain are crossed over, fear and joy, the disturbing and the fruitful. There happens a paradox encounter, a joyful shock, a shocking joy. One might say—perhaps even literally—that acting on stage by its highest demand makes actors/actresses *subject to some “anthropological change”* (Agamben 1999b, 260), in the course of which one must leave oneself to the dynamic process of transformation. The intensity of this process does neither spare any of those weaknesses nor any of those clichés all of us are full of. They are there, without exception, to our surprise even those we believed to be free of. Suddenly all of what up until now has been unseen and covered breaks into the open, takes us to a limit. Free will as an actor collapses, as we might have it pointedly. In terms of the cabaret: free will loses its stand, it becomes impotent, it is brought to its knees and sinks to the floor. 
There it is lying, grasping for breath. No help, nowhere. To have it more seriously and to repeat the event like a spiral: what happens does not depend on the actor/actress him/herself as an “object” nor is it a part of him/herself as a “subject” any longer. What in our everyday lives we commonly call our “property”, the bastion of our identity, of our being a person, must be given up on. The subject somewhat “dies”. Or, to have it more precisely, there happens a state of “dying”, a mutation. The process may definitely be described this way. There appears something alien, unknown, something which cannot be perceived but appears nevertheless. Life as itself? Yes, maybe this way: for moments there appears a specific, at the same time impersonal, undefined life during the unique event of acting. * Is it not that by such a rare event—to have it in Deleuze’s terminology—the *plane of immanence* presents itself where (a) life liberates itself by the never-ending desire to create itself? Life as a shapeless, self-organising process, constantly becoming qualitatively more differentiated out of itself? Then at theatre the art of mutating and liberating life would gain power and significance. *Metamorphosis* and not *mimesis* would be the predominant task of actors/actresses and the crucial event of acting on stage. * Of course, theatre is tied and connected to the problems and questions of mimesis and the fact of repetition. Usually, theatre performances do not happen once but repeatedly, and repetition is highly demanding, for as a perfect, mechanical repetition it is insufficient. Such a way of acting may be masterly and impressive. Of course. But the source of creativity is the exciting repetition of the radical openness of life itself, when the grammatical categories of subject and object, of agent and patient, can no longer be distinguished. Then the comfort zone of personality only appears as the mask of the person—with nothing behind it. Maybe this is when the mask of Dionysus appears? The god Dionysus, whose cultic rituals are considered the origin and driving force of the development of Greek theatre and who is connected to the thrill of ecstasy, to delirium and death. Precisely at theatre, which has such a fixation about person and personality, by the event of creativity acting on stage makes some impersonal yet unique life flash up which has nothing to do with fame, nothing to do with the glimmering hype of publicity. Mutation may become experiencable, which Nietzsche would perhaps describe as a dancing star. This is the beautiful anachronism of this corrupt and wonderful art. Of course, theatre allows for quite a number of other interpretations with different qualities. But I am convinced that the most remarkable event on stage, the crucial factor for its quality, is the loss of being a person, the loss of subjectivity, the loss of the first-person position. Actors/actresses may be violently attacked by this, because it makes the disconcerting flash up within ourselves. It is like tightrope walking without a safety net. However, is this not, like at the circus, the greatest attraction of acting, both for actors/actresses and audience, because the danger of falling down is so obvious? Even if this self-transgression is not expressed by words and stays unconscious, it works subcutaneously and provides acting with that inspiration and thrill which cannot be produced, cannot be made, but must successfully happen, must be successful out of itself. Always on the edge, looking into the abyss. 
Toying with falling into the abyss. A thrilling spectacle. Actors/actresses get in a state when the look of everyday life starts tumbling because suddenly life opens up by its immanent dimension and allows for looking at the passage from life to death. This then would be an event of the art of acting as the most fragile and intimate bond between performers and audience. Perhaps sometimes theatre is actually like this, and because all of us are mortal we may then even understand that such a unique event of acting on stage “is a moment that is only that of a life playing with death” (Deleuze 2001, 28). Such an event is rare at theatre. Unique. A feast, happening beyond the categories of mishap and tragedy, even if the stories told on stage are full of it. Is it not that rather it liberates from the categories of good and evil, of pain and loss, of resentment and thirst for revenge? Is it not that it generates clear joy and generosity? Fundamental benevolence? Affirmation? Is it not that from this event there indeed develops something like an "anthropological mutation" by which both actors/actresses and audience are grasped? Even if this event just flashes up for a moment and soon fades away in everyday life. Is it not that it signals an ethical appeal concerning the question of what and how life may have become once? * Liberated from oneself to oneself by way of a memento mori by the act of creativity. Perhaps this is how one's own relation to the world is newly defined by way of the "death of subjectivity". A relation which need not be exclusively based on fear but may as well be experienced as an encouraging, promising openness. Actors/actresses experience themselves as being liberated into the widest potential of their possibilities, and that not only as a prospect in the sense of a promise concerning a distant future but as an immediately kept promise, kept during the time of acting on stage. Here and now. Immanent. * Id est: according to Aristotle, fear and compassion are the classical categories of catharsis at theatre. However, could the plane of immanence liberating itself, could the liberation of the immanent dimension of life be interpreted as the cathartic effect of theatre? Would it not result in passion and joy of the always renewing dimension of existence? Of its still undeveloped possibilities following at our heels, like our own shadows? To Conclude: A Life... Theatre is full of stories. Many cultures are full of stories. That is why, to conclude my considerations, I would like to quote a short passage from Charles Dickens's Our Mutual Friend, a passage Gilles Deleuze refers to in his last text Immanence: A Life... (Deleuze 2001, 28–29). By the example of a concrete story it once again illustrates, one might even say dramatizes, the event of the act of creativity on stage we have been talking about so far. It is the story of Mr. Riderhood, a predator and creep who, after having been saved from drowning, is now hovering between life and death: No one has the least regard for the man: with them all, he has been an object of avoidance, suspicion and aversion; but the spark of life within him is curiously separable from himself now, and they have a deep interest in it, probably because it is life, and they are living and must die. [...] See! A token of life! An indubitable token of life! [...] He is struggling to come back. Now, he is almost here, now he is far away again. Now he is struggling harder to get back. 
And yet—like us all, when we swoon—like us all, every day of our lives when we wake—he is instinctively unwilling to be restored to the consciousness of this existence, and would be left dormant, if he could. (Dickens 1997, 419–420)

This article has been realized in the context of the research project “Artist-Philosophers. Philosophy AS Arts-based Research” funded by the Austrian Science Fund (FWF): AR275-G21 in line with the programme for arts-based research (PEEK).

---
### Notes
See: Spinoza and his concept of parallelism: “The order and connection of ideas is the same as the order and connection of things” (cf. Spinoza 2000, 117 [2p7]).
Deleuze: “Dualism is therefore only a moment, which must lead to a re-formation of a monism” (Deleuze 1998, 29), as well as Jean-Luc Nancy, Corpus (2008).
For Plato the body is a constant obstacle for the soul: “[T]he lovers of learning are aware that when philosophy takes over their soul, the soul really is bound thoroughly in the body and stuck to it, and is forced to consider the real things through it as if through a cage, and not on its own through itself, and that it drifts in utter ignorance” (Plato 2011, 74 [82d-e]).
Friedrich Nietzsche: “[…] there is no ‘being’ behind doing, effecting, becoming; ‘the doer’ is merely a fiction added to the deed—the deed is everything” (Nietzsche 1989, 45).
Derrida refers to the great number of meanings of the German word “Gift” (cf. Derrida 1992, for example 36, 81). And for the gift in Derrida, see also Derrida (1995).
See Gilles Deleuze and Félix Guattari: “There are only relations of movement and rest, speed and slowness between uniformed elements, or at least between elements that are relatively uniformed, molecules and particles of all kinds. There are only haecceities, affects, subjectless individuations that constitute collective assemblages. […] We call this plane, which knows only longitudes and latitudes, speeds and haecceities, the plane of consistency or composition (as opposed to a plan(e) of organization or development)” (Deleuze and Guattari 1987, 266).

---
Biography
Since 1989 Susanne Valerie Granzer has been a Professor of the artistic discipline of the performing arts at the University of Music and Performing Arts Vienna, Max Reinhardt Seminar. She trained as an actress at the MRS in Vienna. Then, for 18 years she had important parts at Theater in der Josefstadt, Volkstheater Wien, Theater Basel, Düsseldorfer Schauspielhaus, Schauspielhaus Frankfurt am Main, Schillertheater Berlin and Burgtheater Wien. In parallel she studied philosophy at Goethe-Universität Frankfurt am Main and the University of Vienna. Doctorate in 1995. In 1997, together with the philosopher Arno Böhler, she founded wiener kulturwerkstätte GRENZ_film. Many “Philosophy on Stage” lecture performances. Co-founder of BASE (research centre for artistic research and arts-based philosophy, India) and the head of the residence programme there.
© 2017 Susanne Valerie Granzer
Except where otherwise noted, this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Revisiting Long–run Relations in Power Markets with High RES Penetration
Angelica Gianfreda\textsuperscript{a}, Lucia Parisio\textsuperscript{b}, Matteo Pelagatti\textsuperscript{b}
\textsuperscript{a}Energy Markets Group, London Business School, London, UK
\textsuperscript{b}DEMS, University of Milano–Bicocca, Milan, Italy

Abstract
Electricity generation from renewable energy resources (RES) has become increasingly significant for reaching EU emissions reduction targets. At the same time, one of the main EU policy goals has been the creation of a common internal energy market for Europe. In this paper, we focus on these two issues, previously studied separately, and consider their possible interactions. We first analyze the long-run relationship between day-ahead electricity prices and fuel prices (natural gas and coal), looking at two samples of years characterized by low and high RES penetration; then we explore the integration of EU markets. We show that the electricity-fuel nexus found over 2006-2008 changed dramatically over 2010-2014 for the majority of the countries considered. In particular, the long-run dependence of electricity on gas and coal prices is much weaker in recent years. Furthermore, our results confirm that the considered EU countries are becoming less integrated as RES-E increases. Our findings suggest that nationally implemented policies to support renewables are successful in increasing RES penetration, but they have weakened the linkage among EU markets, thus making integration more difficult to achieve.
Keywords: Electricity, Natural Gas, Coal, Internal Energy Market (IEM), Overlapping Regulation

1. Introduction
Electricity generation from renewable energy resources (RES–E) has become increasingly significant all around the world to meet planned targets on emissions reduction. REN21 (2015) provides a comprehensive and timely overview of RES market, industry, investment, and policy developments. In the European Union, at least 20% of final energy consumption will have to be covered by RES by 2020, and this share will have to increase to at least 27% by 2030 (see EU (2009) and EU Commission (2014a)). The Renewable Energy Progress Report issued in 2013 states that the EU member countries were initially successful in the implementation of the Directive, but the difficulty of removing key barriers to renewable energy growth has since decelerated progress towards the targets. Indeed, in 2013 RES faced uncertainty and declining policy support not only in European countries but also in the United States, where grid constraints and a scenario of low gas prices made RES less profitable than in the EU. However, in the last ten years there have been further positive developments which have made RES more and more affordable, due to continuous technological advances, falling prices, and financial innovations\(^1\). As a result, an increasing number of wind and solar power projects are being built around the world, and the utilization of electricity from these sources, as well as from all other RES, can play a significant role in reducing countries' dependence on imported fossil fuels and in mitigating global warming by reducing greenhouse gas emissions\(^2\). The world-wide promotion of RES-E generation has increased the complexity of managing the electricity system, given that wind energy (and, to a lesser extent, solar) is highly variable and only partially predictable. 
Therefore, intermittency and the not-easily-storable nature of electricity have important implications for balancing supply and demand, and any imbalance can cause large and sharp price changes, especially when RES units are not uniformly spread across a territory and cause zonal congestion. The impact of RES-E production in different power markets may also produce spillover effects in neighboring countries. RES-E production might help market convergence or, on the contrary, it might obstruct the market convergence process. The creation of a common EU internal electricity market is the other fundamental pillar of EU energy policy. Integration is eased by the harmonization of market and technical rules, by an increase in the number of interconnectors and, most importantly, by an economically efficient use of the interconnection capacity. Regulations 1228/2003/EC and 714/2009/EC introduced some fundamental guidelines for the management of cross-border flows, based on common and economically efficient market rules. The market coupling mechanism has been introduced to efficiently use cross-border capacity between two neighboring areas. This mechanism considers the bids and asks of two or more power exchanges and allocates them taking into account cross-border transmission capacities\(^3\). The goal of our paper is to analyze whether the policy support toward RES-E generation implemented in most EU countries in recent years has slowed down the integration process of electricity markets that began soon after the liberalization of the industry and the opening of wholesale power markets in EU countries. Several studies have analyzed the impact of RES on electricity prices; see, among others, Gelabert et al. (2011), Woo et al. (2011), Mauritzen (2013) and Ketterer (2014). However, previous contributions did not consider the effect of RES-E on long-run price relations and market integration. We expect that EU policies towards market integration have eased the market convergence process at least in areas sharing sufficient transmission capacities.

---

\(^1\) As a consequence, the recent report clearly states that “the EU and the vast majority of Member States are making good progress and 25 Member States are expected to meet their 2013/2014 interim targets, which correspond to the projected share of 15.3% of renewable energy in 2014 in the gross overall energy consumption”; see EU Commission (2014b).

\(^2\) CO2 emissions doubled worldwide from 1971 to 2010, when 44% of them came from the electricity sector. See IEA (2013).

\(^3\) Market coupling has already been implemented in the Nordic markets, with Sweden and Norway (1996), followed by Denmark (1998), Finland (2000), and Estonia, Poland, and Latvia (June 2013); in the Central Western European (CWE) region, with its trilateral coupling between France, Belgium and the Netherlands (2006), and Germany (2010); and in Central and Eastern Europe (CEE), with market coupling between the Czech Republic and Slovakia (2009), and Hungary (2012). Poland is also coupled with Sweden, and Italy with Slovenia (2011), Austria (2014) and France (2014). Finally, in February 2014 the CWE and the Nordic region were coupled with the UK and Ireland, forming the North Western Europe (NWE) market, and in May 2014 the South-West European market (Spain and Portugal) was also coupled with North-Western Europe. 
Despite few circumstances testifying the lack of import capacities\(^4\) or the unavailability of nuclear power\(^5\), the mechanism of market coupling does not eliminate price differentials across neighboring markets and there is no clear evidence of convergence in EU wholesale electricity prices. The issue of interdependencies in power prices has been deeply and extensively studied using cointegration analysis for assessing EU market integration, as in Bosco et al. (2010), Bunn and Gianfreda (2010), Balaguer (2011), Lindström and Regland (2012), Bollino et al. (2013), Huisman and Kilic (2013), Castagneto-Gissey et al. (2014) and de Menezes and Houllier (2015) among many others. Since market integration appears to be strongly influenced by the electricity mix among other potential price drivers, we provide empirical evidence on the effect of RES–E in two directions. On one side, we aim at showing that national policies to increase RES shares in the electricity mix have consequences on the long-run relations of electricity/fuel prices. On the other side, we want to test if the changing relationship of electricity prices with fuel prices has affected the degree of EU electricity market integration. We claim that the lack of long–run relations among fuel and electricity prices, induced by high RES penetration achieved with different local support policies, produced a reduction of the EU electricity market integration. This happens because RES have inverted and/or canceled the traditional relation between electricity and its fuels which was driving the common trend and producing convergence of EU wholesale electricity prices as documented in Bosco et al. (2010). We consider a time horizon from 2006 to the end of 2014, which is characterized by a progressive increment of RES generation from low, or even absent, to high penetration. In this connection, we divide the time series into two samples 2006-08 (low RES penetration) and 2010-14 (high RES penetration). We examine EU markets with different mixes, common borders and similar price-setting processes (Germany, France, Spain, Italy, The Netherlands and Great Britain) and Texas, as US benchmark market for a broader international comparison. In these markets wind and/or solar generation play the same role as negative demand and are expected to affect peak periods and average spot prices, fostering a substitution effect between RES and fuels, as suggested by Clò and D’Adamo (2015). The analysis conducted in our paper may therefore suggest policy implications related to the possibly unwanted interaction of two EU energy objectives: the support to RES and the creation of the IEM. We indeed expect firstly that a weak relation between electricity, gas and coal prices should occur in the second sample 2010-14. So we expect that when RES penetration increases the influence from fuels to electricity prices is reduced. Secondly, we guess that the degree of market integration should be reduced in the second sample, especially in hours with higher amounts of RES-E generation. \(^4\)Dutch power prices exhibited an average premium of 14€/MWh with respect to German prices in 2013, when there was insufficient import capacity for excess German solar and wind generation (EU Commission, 2014a). \(^5\)Belgian prices decoupled from the CWE region in 2012 because two nuclear reactors were permanently offline (EU Commission, 2014a). 
The paper is structured as follows: Section 2 provides a description of the background of our analysis and an overview of EU markets characteristics whereas Section 3 describes the data and the pre-processing techniques used. Results of the dynamic analysis of the considered electricity-fuel nexus and market integration are presented in Section 4. Finally, Section 5 concludes. 2. Background and Literature Review Wholesale electricity prices are influenced by supply and demand side drivers. Supply side drivers include the composition of the power generation mix, the amount of generation compared to consumption, the availability of power imports and exports and other factors as carbon emission allowances. Demand side drivers include the electricity needs of households and firms and they are influenced by energy policies and general economic conditions. Looking at the supply side, several factors have simultaneously contributed to cap wholesale prices such as the reduction of coal prices since the beginning of 2011 (mainly because US, the third exporter to EU, increased its hard coal exports as reaction of the increasing internal consumption of gas) and the stabilization of natural gas prices since the beginning of 2012 (after a significant growth in 2010 and 2011). Finally, the structural oversupply of EU emissions trading scheme (ETS) allowances has depressed carbon prices. All together, we assisted to an increased profitability of coal-fired power plants in the same time span when RES were gaining ground. In 2012, EU RES-E reached 799 TWh with an increase of more than 13% compared to 2011. Among renewable electricity sources, hydro power is the most important one and accounts for 46% of renewable electricity generation into the EU. Between 2011 and 2012 the electricity from wind registered a growth of about 14%, the solar energy growth was of more than 50%, with a share in renewable electricity generation reaching 9% and finally electricity from biomass and waste grew of about 12%. However, the levels of RES penetration differ across countries. For instance, there is an important contribution of hydro in Spain, Portugal, Sweden, Austria, Norway and Switzerland, i.e. in countries where the amount of rain significantly influences the generation costs and wholesale power prices. In other countries where wind and solar power generation rapidly increased, there was a reduction of average electricity prices. Several authors have shown the relationship between RES-E and electricity prices. Gelabert et al. (2011) presented an empirical analysis of the relation between electricity prices and prevailing technologies in Spain, accounting for the total electricity produced by RES and hydro. They show that generating electricity by RES and cogeneration reduces the electricity prices, and consumers should expect an average payment reduction significantly lower than the annual cost for RES support. While the cost of support has increased (mostly due to the introduction of solar PV) in period 2005-10, the wholesale price reduction has diminished. 
As a consequence, serious doubts emerge on the sustainability of electricity market structures with a large renewable share, since the decreasing trend in electricity prices may be incompatible with the necessary remuneration of non-renewable sources. The relationship between wind and electricity prices, also taking into account its effect on congestion and external market interconnections, has been explored in several cases such as Texas, Australia, Spain, Denmark, Norway, the United Kingdom, The Netherlands and Germany. More specifically, the effect of wind generation on electricity price variability has been explored for Texas by Woo et al. (2011), who prove that an increase in wind generation reduces electricity prices but increases their variance. Similar conclusions are found by Ketterer (2014) in Germany. Woo et al. (2011) develop a model for wind-price interactions in interconnected zones, providing evidence that wind generation tends to cause time-dependent zonal price divergences in Texas. Further additional effects of import and export flows have been investigated by Mulder and Scholtens (2013) in The Netherlands. Mauritzen (2013) explores the relations between actual wind power and electricity trades between Denmark and Norway. Cruz et al. (2011) compare the predictive accuracy of several forecasting models for Spanish day-ahead spot prices and show that the inclusion of hourly electricity load and wind generation forecasts significantly improves price forecasts, mainly because wind generation has become a fundamental price driver in Spain. More recently, considering a hypothetical future energy scenario in which electricity will be completely generated by RES and backed by storage power plants, Brunner (2014) suggests that the expected effect of RES is to produce a general price level reduction in addition to changes in the volatility and in the structure of spot prices, given the dependence of supply on intermittent generation. In this scenario of significant growth of RES sources in markets still characterized by strong dependency on fossil fuels, the wholesale price dynamics are expected to evolve. We believe that the fuel-electricity price nexus is likely to reflect this evolution, that is, the dynamic relationship among electricity prices and fuel prices (gas and coal) might be significantly modified. Therefore, we analyze the effect of RES on long-run relations between electricity and fossil fuels in Germany, France, Spain, Italy, The Netherlands, and Great Britain. The markets considered are governed by different factors and regulatory provisions. For example, in the German EPEX negative pricing is admitted\(^8\); in Spain there is a high share of wind penetration; whereas in Italy the high solar penetration, in connection with high hydro shares and no nuclear power, produces important implications in terms of balancing markets, as highlighted by Gianfreda and Parisio (2015).

---

\(^6\) EU ETS emission allowance prices reached 15 €/tonne of CO2 equivalent in June 2011 but fell below 5 €/tonne in 2013, on average (EU Commission (2014b)).

\(^7\) On the contrary, Italy, Ireland, the United Kingdom and the Netherlands were among the countries with the highest prices in 2013, because of insufficient interconnection capacities (Italy and Ireland) or because of the dominant role of expensive natural gas in setting the electricity system marginal price (The Netherlands and UK). 
In addition, we consider cases of large interconnection capacity (France) and cases of limited transmission capacity\(^9\). Finally, we consider a broader international context by including Texas in our sample. We believe that Texas is a very interesting case since it is the top US state for new wind power capacity installed and wind penetration in 2014\(^10\). At the same time, Texas has registered an increase in electricity demand since 2009, so it can provide an interesting comparison with respect to the EU countries, which experienced a longer period of demand decrease. Power generation mixes and their evolution through the years are depicted in Figures 1 and 2, which show the shares of generation by technology, and RES penetration levels together with the yearly dynamics of demand (in TWh on the right axis). The incomplete information about the Dutch generation mix led us to use the information contained in EU Commission (2014b). The German power generation mix was largely dominated by mixed fuels, nuclear power and coal until 2011, when the German Parliament decided to phase out nuclear power generation by 2022. This decision is likely to leave ground to coal and natural gas. At the same time, RES became more and more important, with a share higher than 20% of total generation in 2013 and 2014. Looking at figures on RES penetration, we observe a slight increase in wind energy through the years, but a sharp increase in both solar and biomass in the most recent years. Spanish electricity production looks more fragmented in recent years, moving from the predominant nuclear share observed in 2005 (equal to 46%) to a more RES-oriented generation with a total share of more than 40% from 2010. We observe similar shares of nuclear, coal and gas in the last two years. Remarkably, Spain shows the highest RES penetration over the markets considered, with RES-E covering on the whole more than 15% of consumption. A similar dynamic for RES penetration is observed in Italy, where solar suddenly covered more than 7% of Italian demand for electricity. Besides the increasing shares of wind and solar in power generation, the Italian mix does not exhibit dramatic changes, but we notice that the share of conventional thermal power plants dropped from 80% at the beginning of 2012 to 54% in the middle of 2014. Among fossil fuels, gas primarily drives generation, followed by coal, oil and mixed fuels, whereas no nuclear generation is available. The opposite situation is found in France, where energy production has been almost constantly provided by nuclear power for more than 70%, with very low shares for coal, gas and other fossil fuels, which have been further reduced by the increase of RES, even if the latter show a low market penetration. The British power generation mix is dominated by gas-fired units (with a share around 40.2%), solid fuels (around 20%) and, to a lesser extent, by nuclear energy (around 10%). The renewable share of generation is around 10%, with a penetration of almost 5%. The Texan system is based on similar shares of coal and gas generation (around 40%) and on a low share of nuclear power. Texas produces more wind power than any other US state. Indeed, wind feed-in accounted for 4.4% of US electricity generation and 9% of Texan generation in 2014, with an installed capacity of 15,635 MW. The Dutch power generation mix is dominated by gas-fired units (with a share of 63.5% in 2011) and a significant amount of coal-fired production capacity (solid fuels amounted to 18.9% in 2011). RES represented a 10.9% share and nuclear power was less important, with a share of 3.6%. Energy consumption was based mainly on fossil fuels in 2012, notably natural gas, crude oil and petroleum products, and to a lesser extent solid fuels. Renewable and nuclear energy exhibited penetration values equal to 4.3% and 1.2%, respectively (for further details see EU Commission (2014b)). The Netherlands is involved in several successful market coupling projects, and two interconnectors (BritNed with the UK, and NorNed with Norway) allow for new cross-border allocations. Complex regulatory frameworks have been adopted at national levels to promote RES–E and achieve the EU targets\textsuperscript{11}. As suggested by Böhringer and Rosendahl (2010), the regulations for promoting the “greens” and reducing the “dirtiest” may produce unexpected and unwanted results because of their overlapping. Moreover, as anticipated by de Menezes and Houllier (2015), a fully coordinated EU energy policy is necessary in order to avoid unintended interactions between supranational and national policies, given that each individual country may affect the neighboring ones. On the one side, national supply is procured by local policies determining the internal energy mix in each country, whereas, on the other side, these same policies may represent obstacles against the creation of an internal energy market that actually needs strong coordination. Indeed, previous studies provided unclear evidence through the years. Bosco et al. (2010) analyzed the integration of central European markets (France, Germany, the Netherlands and Austria) by means of a robust multivariate long-run dynamic analysis, showing a trend of high integration appearing to be common with gas, but not with oil prices, over the period ending in March 2007. Bunn and Gianfreda (2010) present empirical results about the integration of the French, German, British, Dutch and Spanish power markets at different time horizons for a sample period from July 2001 to July 2005. They found evidence of market integration increasing over time, despite an underlying inefficiency in each market with respect to forward and spot price convergence. Causality tests, cointegration and impulse-response techniques, for both price levels and volatilities, were performed, indicating less influence of the size and proximity of neighboring markets as compared to other studies. More integration is found at baseload than at peak and, surprisingly, less integration in forwards than in spot prices. Lindström and Regland (2012) analyzed data for a subsequent period (2005-2010), again for spot and forward markets, and confirmed the previous finding of partial integration. In line with these results are those obtained by Balaguer (2011), who found high integration between Danish and Swedish prices and, on the contrary, price divergence among French, German and Italian prices during the sample period 2003–2009. Bollino et al. (2013) analyzed price convergence among Austria, Germany, France and Italy between 2004 and 2010, proving that German electricity prices were signals to the other neighboring markets. Using regime switching models over the period 2003–2010, Huisman and Kilic (2013) observed similar parameter estimates when modeling Belgian, Dutch, French, German and Nordic day-ahead prices. More recently, Castagneto-Gissey et al. (2014) explored the time-varying interactions among markets, studying 13 EU electricity markets from 2007 to 2012 by means of Granger-causal networks, providing evidence of increased association but with an overall lack of market integration. Therefore, it appears that the market coupling and new interconnector commissioning which occurred after the implementation of the Third Energy Package did not provide the expected results.

---

\(^8\) This happens not just in Germany, but also in Texas as well as in Australia, where negative prices sometimes occurred long before the introduction of RES. A negative market price allows generators to pay the cost of staying “online” instead of incurring a higher startup cost.

\(^9\) In France 12 GW are available for exporting and 8 GW for importing. France is the world's biggest exporter of electricity, with 47.6 TWh in 2013; see IEA (2013).

\(^10\) According to AWEA (2014), the total wind power capacity was 14 GW and the penetration was around 10% in 2014.

\textsuperscript{11} In Germany, electricity from renewable sources is supported through a feed-in tariff. The criteria for eligibility and the tariff levels are set out in the Act on Granting Priority to Renewable Energy Sources (EEG). According to this Act, operators of renewable energy plants are statutorily entitled against the grid operator to payments for electricity exported to the grid. The EEG also introduced the so-called market premium and the flexibility premium for plant operators who directly sell their electricity from renewable sources. Moreover, low-interest loans for investments in new plants are provided for by different programs. Plants for the generation of electricity from RES shall be given priority connection to the grid. Furthermore, grid operators are obliged to give priority to electricity from renewable sources when purchasing and transmitting electricity. In Spain, the main support scheme was the “Régimen Especial”, operated until the end of 2011. Afterwards, no other support schemes were put in place apart from a tax regulation system for investments related to RES-E plants. In Italy, there is a combination of premium tariffs, feed-in tariffs and tender schemes; tax regulation mechanisms are also in place for investment in RES-E plants. Furthermore, interested parties can make use of net-metering. RES units are granted priority dispatch. In France, there are both feed-in tariffs and tax benefits. The use of the grid for the transmission of electricity from renewable sources is subject to the general legislation on energy. There are no special provisions for electricity from RES. In Great Britain, RES-E is supported through feed-in tariffs, Contracts for Difference, quota systems and tax regulation mechanisms. RES-E electricity is connected to the grid under the principle of non-discrimination; RES-E plant operators are granted the right to access the grid, and grid operators are obliged to expand the grid if this is necessary to accept all RES-E generated from a plant. In The Netherlands, the main support instrument is the SDE+ premium feed-in scheme (a premium on top of the market price to compensate for the difference between the wholesale price of electricity from fossil sources and the price of electricity from renewable sources), together with loans and various tax benefits. Moreover, net-metering applies to small installations. Further details on RES-E supporting mechanisms within the EU are available at http://www.res-legal.eu. 
de Menezes and Houllier (2014) studied European electricity market integration using data from nine EU markets for the period 2000 to March 2013, adopting a time-varying fractional cointegration analysis for both spot and one-month-ahead electricity prices. They investigated not only whether wholesale prices were converging, but also whether the pace of convergence could have been affected by special events on the supply side (such as the 21st November 2006, when the Trilateral Market Coupling went live and the Belgian power exchange was launched; the launch of the NorNed interconnector on 6 May 2008; or the removal of 40% of German nuclear capacity on 6 August 2011). Electricity spot markets which are geographically close or well connected have been found to have longer periods of price convergence. However, overall, electricity spot prices were not increasingly converging and spot price dispersion could not be related to market integration. Using short-run dynamic correlations and long-run fractional cointegration analysis, de Menezes and Houllier (2015) studied the time-varying associations between electricity prices and German wind power, observing greater spot price and volatility associations across interconnected markets and less long-run integration of the German and neighboring markets, together with a decreasing speed of mean reversion\textsuperscript{12}. Therefore, given that all previous analyses did not consider the interaction between RES-E penetration and the integration of electricity markets, we are going to provide empirical evidence that the achievement of EU renewable targets by single countries indirectly affects the internal energy market. We show that local policies may interact with EU supranational policies and actually produce results diverging from EU planned targets.

3. Methodology

3.1. Data Description

We consider the weekday (average) wholesale electricity prices of the following countries: France (FR), Germany (DE), Great Britain (GB), Italy (IT), The Netherlands (NL), Spain (ES), and Texas (TX). Data have been collected directly from market exchanges and Datastream (Reuters, 2013), and all series are quoted or converted in €/MWh. In addition, we consider coal prices and the ICE UK\textsuperscript{13}, for the EU, and the Henry Hub, for the US, as reference prices for natural gas, all converted in €/MWh. We additionally consider another important driver of electricity prices, that is, the demand for electricity, even if measured by a proxy, such as the system load.

---

\textsuperscript{12} They found that the cointegrated pairs differ after 6 August 2011 and no market was integrated with Germany, while before that date the Netherlands and Switzerland were. They also found increasing integration between Belgium, The Netherlands, NordPool and France, rejecting overall the hypothesis of less EU integration.

\textsuperscript{13} It represents a pure hub benchmark against other hybrid prices containing hub and oil indexation elements, as in EU Commission (2014b). Furthermore, we do not distinguish between the wholesale gas prices across the EU, because of the strong correlation across EU hub prices over 2010-2014. The only two exceptions were the Italian PSV and the French PEG South: the former progressively aligned with continental hub prices, whereas the latter followed an opposite evolution, diverging from the more traded PEG North prices because of the low levels and capacities of gas storage internal to the French market (EU Commission (2014b)). 
To account for different configurations of demanded quantity and supply conditions, we consider the daily prices of electricity determined at specific hours: we analyze the 4th, 13th, and 19th hour of each day. Indeed, from the inspection of the intra-daily profiles of load\textsuperscript{14}, it is possible to detect the following facts: i) the yearly average demand for individual hours moved across years by simple up- and down-shifts without changing its shape\textsuperscript{15} in any of the considered markets and years; ii) electricity demand generally shows its highest values during peak hours (i.e., from 8 to 20) and the opposite for off-peak hours; iii) for all markets the hours 4, 13 and 19 clearly represent situations in which we observe, respectively, the lowest load (h4), mid-day peak (h13) and late afternoon peak (h19). As shown for Germany in Figure 3, these hours are of particular interest for the following reasons: at hour 4 we do not expect significant changes brought about by the introduction of RES generation because both demand and RES production are low, although RES may slightly reduce the nexus between drivers and electricity prices; at hour 13 we expect a significant change brought about by the introduction of RES generation, since demand is very high and RES generation at its maximum, with the peak of solar production summing to wind generation; at hour 19 we expect some changes brought about by the introduction of RES generation, because the demand is still high, photovoltaic generation is low, but wind is still contributing to the total generation. It must be emphasized that we consider hourly prices determined on day-ahead markets, and not on balancing markets, therefore, we cannot actually see the real-time relations between electricity and gas prices induced by the required up- or down-regulation to back-up RES-E generation. Still, RES-E reduces the production of conventional units, as renewable generation increases through years and consequently reduces the influence of price drivers on day-ahead electricity prices. Furthermore, the general documented reduction of electricity demand in almost all EU countries must correspond to a reduction in the electricity-fuel nexus. We generally expect to observe a reduced gas/coal influence on electricity prices because of the contraction of demand in Germany, Spain, Italy, France, and Great Britain. Obviously, markets with flat or increasing trend for demand provide more reliable (controlled) results. For this reason, we look at The Netherlands and Texas as control cases because in the former case (NL) demand was stable around 110TW across years, whereas in the latter market (TX) demand increased over time. 3.2. Data Pre-processing It is a well known fact that wholesale electricity prices are very far from behaving like Gaussian processes and, thus, least-squares based econometric methods tend to fail, unless the most “extreme” features of these data are dealt with before the analysis. Furthermore, the data generating process of electricity prices can be viewed as the sum of a persistent component linked to the marginal costs of production and affected by the market structure plus an extremely noisy and leptokurtic component determined by short-term “shocks” such as strategies of the market participants, mismatch between the actual demand and its forecasts, plant maintenance, exceptional meteorological events, etc. 
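To make the stylised process just described concrete, the following sketch simulates a persistent random-walk component plus heavy-tailed short-term shocks and runs an ADF test on the result. This is purely illustrative and not part of the original study: the variances, the Student-t noise and the sample size are arbitrary assumptions chosen to produce a low signal-to-noise ratio, and the point it illustrates (that autoregression-based unit-root tests can be misled by such data) is the one the authors formalise in a later footnote.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 1500  # roughly six years of weekday observations (illustrative choice)

# Persistent component: a random walk standing in for the marginal-cost level.
level = np.cumsum(rng.normal(scale=0.02, size=n))
# Short-term component: heavy-tailed (leptokurtic) shocks that dominate the signal.
noise = 0.5 * rng.standard_t(df=3, size=n)
price = level + noise

# ADF on the noisy series: with a low signal-to-noise ratio the MA component of the
# differenced data nearly cancels the unit root, so the test is prone to spuriously
# rejecting the I(1) null.
print("noisy series (stat, p-value):", adfuller(price, regression="c", autolag="AIC")[:2])

# The same test on the persistent component alone behaves as expected for an I(1) series.
print("signal only  (stat, p-value):", adfuller(level, regression="c", autolag="AIC")[:2])
```

Taking weekly medians, as the authors do, raises the signal-to-noise ratio and mitigates exactly this problem.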
From an econometric point of view, this structure of the data generating process tends to rule out all those methods based on autoregressions, such as the Augmented Dickey-Fuller unit root test (ADF; Dickey and Fuller, 1979; Said and Dickey, 1984) and Johansen’s cointegration tests.\textsuperscript{16} In order to enhance the signal-to-noise ratio and reduce the leptokurtosis of the electricity price time series, we process the data in two ways. First of all, as in Bosco et al. (2010), we use weekly median prices. The effect of taking the median is twofold: it reduces both the effect of the short-term noise and the presence of isolated within-week extreme observations. Secondly, we fit a simple random walk plus noise model to the weekly median prices and identify possible additive outliers using the auxiliary residuals as described by Harvey and Koopman (1992). The outlier found at time $\tau$ is then substituted with its expected value (i.e., the smoothed estimate $E[y_\tau|y_1, \ldots, y_{\tau-1}, y_{\tau+1}, \ldots, y_n]$ based on the random walk plus noise model fitted excluding the identified outliers). This operation reduces the strong influence that aberrant observations have on least-squares procedures and further enhances the signal with respect to the noise. For natural gas and coal prices we use weekly medians without further processing. Figure 4 shows the dynamics of German weekly median prices before and after processing, and clearly illustrates the importance of this pre-processing in filtering out extreme outliers. Furthermore, the dynamics of the EU electricity prices shown in Figure 5 clearly exhibit a common change starting from mid-2009. This allows us to divide our sample into two subsets and implement a dynamic analysis on the years 2006-2008 and 2010-2014.

\textsuperscript{16}In order to see why, consider the simple data generating process $y_t = \mu_t + \epsilon_t$, with $\mu_t = \mu_{t-1} + \eta_t$, where $\eta_t$ and $\epsilon_t$ are white noise processes with variances $\sigma^2_\eta$ and $\sigma^2_\epsilon$. The ADF and Johansen’s tests are based on the difference of the data, which in our simple model is $\Delta y_t = \Delta \mu_t + \Delta \epsilon_t = \eta_t + \epsilon_t - \epsilon_{t-1}$, a moving average (MA) process of order 1 whose coefficient approaches $-1$ when the signal-to-noise ratio $\sigma^2_\eta/\sigma^2_\epsilon$ gets small. Thus, when the noise component $\epsilon_t$ dominates, the near-to-unity root of the MA process tends to cancel out with the unit root of the $y_t$ process and the ADF fails to find the unit root in $y_t$. Similarly, Johansen’s test finds more cointegration relations than there actually are.

3.3. Methods

All time series of electricity, coal and gas prices were tested for a unit root using the ADF test over the full sample 2006-2014, and the conclusion is that all time series are integrated of order one (i.e., I(1)). Since our conjecture is that the strong increment of RES electricity generation induced changes in the relation between electricity prices and fuel prices, all the following analyses have been carried out separately on the two subsamples, 2006-2008 and 2010-2014. Moreover, we analyze the time series of hours 4, 13 and 19 individually. For each country, each considered hour and each subsample, we tested for the presence of cointegration among the logarithms of electricity and fuel (i.e., gas, coal) prices.
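To make the pre-processing and unit-root steps described above concrete, the following is a minimal Python sketch, assuming pandas and statsmodels are available; all variable names are illustrative, and the outlier rule is a simplified stand-in for the Harvey and Koopman (1992) auxiliary-residual diagnostics used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def preprocess_prices(hourly_prices: pd.Series, z_thresh: float = 3.5) -> pd.Series:
    """Weekly medians of an hourly price series, with additive outliers
    replaced by the smoothed level of a random walk plus noise model."""
    # 1) weekly medians reduce short-term noise and isolated extreme days
    weekly = hourly_prices.resample("W").median().dropna()

    # 2) random walk plus noise (local level) model, estimated by ML
    model = sm.tsa.UnobservedComponents(weekly, level="local level")
    res = model.fit(disp=False)
    level = pd.Series(res.smoothed_state[0], index=weekly.index)

    # 3) simple additive-outlier rule on deviations from the smoothed level
    #    (a simplified stand-in for the auxiliary residuals of Harvey and
    #    Koopman, 1992; extreme points are replaced by the smoothed estimate)
    resid = weekly - level
    z = (resid - resid.mean()) / resid.std()
    return weekly.where(z.abs() <= z_thresh, level)

# Example use with hypothetical series names; the ADF test of Section 3.3
# is then applied to the logarithm of the cleaned series.
# cleaned_de = preprocess_prices(hourly_prices_de_h13)
# adf_stat, adf_p, *_ = adfuller(np.log(cleaned_de), autolag="AIC")
```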
Johansen’s test results were similar for all the countries, hours and subsamples: the hypothesis that all time series triplets have one long-run relation (i.e., one cointegration vector) cannot be rejected. Thus, the strong increment of RES generation in the second subsample did not eliminate the long-run equilibrium among electricity, gas and coal prices. In order to understand if and how the long-run relations among these prices have changed in the second subsample, we estimated a vector error correction model (VECM) for each country, consistently with the number of cointegrating relations found by Johansen’s test. For all countries and subsamples the hypothesis of (weak) exogeneity of electricity prices was rejected. This means that hydrocarbon sources remain drivers of the long-run dynamics of electricity prices also in the subsample 2010–2014. Moreover, gas prices turned out to be (weakly) exogenous in the great majority of the estimated VECMs. Thus, natural gas tends to determine the long-run levels of electricity and coal prices without being influenced by those prices in the same time span. In this framework (VECM), the best way to assess the role that natural gas and coal prices play in influencing electricity prices in the long run is the forecast error variance decomposition (FEVD), which allows us to determine how much of the forecast error variance of each of the variables can be explained by exogenous shocks to the other variables. Figures 11–12 depict the share of variance of future electricity prices explained by endogenous shocks (i.e., on electricity prices) and by shocks in gas and coal prices over a forecast horizon of 1 to 50 weeks. The detailed discussion of these plots is the argument of Section 4; however, a quick visual inspection of the graphs is sufficient to realize that in the second subsample unexpected movements in hydrocarbon prices, and in particular natural gas prices, determine a much smaller share of unanticipated future electricity price changes.

Figure 5: Weekly median prices (in €/MWh) for selected electricity markets from 2006 to 2014.

Bosco et al. (2010) assessed the degree of integration\(^\text{17}\) of some European electricity markets and concluded that there was a core of central European countries whose electricity prices followed a common trend. This common trend was the effect of two factors: i) well interconnected markets, and ii) the common price of fuel (natural gas in particular) in marginal generation units. During the last years some interconnecting lines have increased their capacity, but RES generation is influenced by local conditions, so one of the factors determining the wholesale price of electricity in each country is likely to have lost its role, at least at some hours. We assess whether the strong growth of RES generation has affected the integration among European electricity markets by testing the hypothesis that, for any two countries \(a\) and \(b\) in our sample, the ratio of their electricity prices is mean-reverting, that is, it has a constant mean toward which it always reverts in a relatively short time; formally, we test the hypothesis \(H_0 : \log(p_{a,t}/p_{b,t}) \sim I(0)\). In fact, if two markets are integrated (\textit{strongly integrated} in the terminology of De Vany and Walls, 1999; Bosco et al., 2010), the relative prices should be approximately constant (weak law of one price).

\(^{17}\)The term \textit{integration} is here used with its usual meaning rather than with its econometric meaning.
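The two testing steps just described can be sketched in Python as follows (again assuming statsmodels; series and column names are placeholders). Note that the FEVD below is computed from a VAR in log levels as a simple stand-in for the VECM-based decomposition used in the paper, and that the robust RKPSS statistic of Pelagatti and Sen (2013) is not available in statsmodels, so only the standard KPSS test is shown.

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import kpss
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

def fuel_electricity_nexus(logs: pd.DataFrame, horizon: int = 50) -> pd.DataFrame:
    """logs: columns ['elec', 'gas', 'coal'] in logarithms, one subsample."""
    # Johansen-type selection of the cointegration rank (trace test, 5% level)
    rank = select_coint_rank(logs, det_order=0, k_ar_diff=2, signif=0.05)
    # VECM with the selected rank; rows of alpha close to zero are consistent
    # with weak exogeneity (the paper uses a formal test for this)
    vecm = VECM(logs, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci").fit()
    print("loading matrix alpha:\n", vecm.alpha)
    # FEVD of electricity prices, here from a VAR in levels as an approximation
    # (the Cholesky ordering of the columns matters for the decomposition)
    var_res = VAR(logs).fit(maxlags=3, ic="aic")
    decomp = var_res.fevd(horizon).decomp          # shape: (neqs, horizon, neqs)
    elec = list(logs.columns).index("elec")
    return pd.DataFrame(decomp[elec], columns=logs.columns)

def strong_integration(log_pa: pd.Series, log_pb: pd.Series, alpha: float = 0.05) -> bool:
    """KPSS test of H0: log(p_a/p_b) ~ I(0); True = not rejected (integrated)."""
    stat, pvalue, _, _ = kpss(log_pa - log_pb, regression="c", nlags="auto")
    return pvalue > alpha

# Counting non-rejections over all unique country pairs, as summarized in Table 1
# (log_prices is a hypothetical dict mapping country code -> log price series):
# pairs = [(a, b) for i, a in enumerate(codes) for b in codes[i + 1:]]
# n_integrated = sum(strong_integration(log_prices[a], log_prices[b]) for a, b in pairs)
```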
This hypothesis has been tested using the well-known KPSS statistic (Kwiatkowski et al., 1992) and its outlier-robust version RKPSS (Pelagatti and Sen, 2013); the results for each country pair, hour and subsample can be read in Table 2. Table 1 summarizes the number of times the strong integration hypothesis was not rejected among the 15 unique pairs that can be formed from our 6 European countries. While at hour 4 the number of strongly integrated country pairs increased (from 10 to 12) from subsample 2006-2008 to subsample 2010-2014, because of the low demand and the increased capacity of cross-market interconnections, for hours 13 and 19, when RES generation and demand are high, the number of strongly integrated market pairs is reduced (from 8 to 4-5).

\begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Sample & \multicolumn{3}{c|}{KPSS (\(\alpha = 5\%\))} & \multicolumn{3}{c|}{RKPSS (\(\alpha = 5\%\))} \\ & h4 & h13 & h19 & h4 & h13 & h19 \\ \hline 2006-2008 & 10 & 8 & 8 & 10 & 8 & 8 \\ 2010-2014 & 12 & 5 & 5 & 12 & 4 & 5 \\ \hline \end{tabular} \caption{Number of times the \textit{strong integration} hypothesis was not rejected among all unique pairs of studied European countries.} \end{table}

4. Results and Discussion

4.1. Impacts at National Levels: Dynamic Analysis of the Electricity–Fuels Nexus

Given the price dynamics on the international markets, characterized by lower coal prices and a high US natural gas supply, we expect more coal consumption in the EU versus a greater US domestic consumption of natural gas. Hence we generally expect coal to play a leading role (according to the country generation mix) in the determination of EU day-ahead electricity prices in recent years, and the opposite for Texas. We consider RES penetration by implementing a dynamic analysis over two samples: the first one, 2006–2008, in which the share of RES was rather limited, and the second one, 2010–2014, in which RES-E production significantly increased. Thus, we assume that, controlling for demand, the changes in the dynamics of day-ahead electricity prices are mainly caused by the increase in RES generation. To this aim, we have decomposed the forecast error variance of electricity prices in terms of the forecast error variance of coal and gas prices, looking at specific hours and hence at specific levels of demand. In this way, we are able to explore the evolution of the relationship between electricity prices and the prices of hydrocarbon resources (i.e., natural gas and coal), and to establish their importance in determining day-ahead electricity prices in the long run. Results are depicted in Figures 6–12. For EU countries, we find a general decrease in the role of hydrocarbon sources as electricity price drivers from the first to the second subsample. Among hydrocarbon sources, gas prices seem to reduce their influence, while coal becomes relatively more relevant. Germany, France and The Netherlands share a common tendency: while in the first subsample the main driver for the long-run level of electricity prices is natural gas, with a minor role played by coal, in the second subsample gas prices become very weak drivers (particularly in Germany and France), with coal increasing its effect, especially at hour 13. Notice that, differently from the other European countries, The Netherlands is characterized by a stable level of electricity demand.
Thus, the RES increase may be considered the principal reason for the observed change in the fuel–electricity price relations. In Great Britain, the role of coal reduces substantially, while that of gas experiences only a slight decrease. Similar considerations hold for Spain, where coal prices become less influential and gas slightly decreases its impact, with the exception of hour 13, which was not influenced at all by gas prices in the first subsample. Looking at the Italian market, we notice a dramatic reduction of the electricity price variation due to hydrocarbon prices at the off-peak hour. In the other hours considered, coal totally (h13) or partially (h19) substitutes gas as the main exogenous driver of electricity prices. An interesting case study for international comparison is Texas, where electricity demand increased (from 310 TWh in the first subsample to 331 TWh in the second subsample) through our sample years at approximately the same pace as RES (wind) generation. Therefore, we may assume that new RES generation was able to cover exactly the demand increase. Moreover, in this country gas prices were decreasing. Given these facts, we would expect a strong increase in the influence of gas prices on electricity prices. However, we observe this increasing linkage only at hour 4, whereas at hours 13 and 19 gas prices appear to maintain a comparable or even slightly decreasing influence. Summarizing our results, the first important feature that can be noticed across all countries is that in the sample 2010–2014 hydrocarbon price movements are much less relevant in determining electricity prices. The greatest part of the electricity price variance in 2006–2008 is determined by changes in gas prices, whereas coal price movements generally dominated electricity prices in 2010–2014. This must be associated with low emission prices, which have favored coal in the production mixes, especially in Germany because of the nuclear phase-out decided after the Fukushima disaster. The fact that hydrocarbons are much less relevant in determining electricity prices over the subsample 2010–2014 has been explained in terms of the high RES penetration, in combination with the dynamics of demand and the different generation mix that each country shows during the two subsamples\textsuperscript{18}.

\textsuperscript{18}We have further provided evidence that most long-run price movements for natural gas, oil and coal were largely caused by oil price changes over the period 2006–2008, but later this link vanished. However, we have decided to exclude oil from our analysis to avoid spurious results, given that natural gas and crude oil have been shown to be strictly connected; see Brigida (2014).
Figure 6: Variance Decomposition for hours 4 (first column), 13 (second column) and 19 (third column) on sample 2006-2008 (first row) and sample 2010-2014 (second row) for Germany.

Figure 7: Variance Decomposition for l_fr_hp4, l_fr_hp13, l_fr_hp19 on sample 2006-2008 (first row) and sample 2010-2014 (second row) for France.

Figure 8: Variance Decomposition for l_nl_hp4, l_nl_hp13, l_nl_hp19 on sample 2006-2008 (first row) and sample 2010-2014 (second row) for The Netherlands.

Figure 9: Variance Decomposition for hours 4 (first column), 13 (second column) and 19 (third column) on sample 2006-2008 (first row) and sample 2010-2014 (second row) for Spain.

Figure 10: Variance Decomposition for hours 4 (first column), 13 (second column) and 19 (third column) on sample 2006-2008 (first row) and sample 2010-2014 (second row) for Great Britain.

Figure 11: Variance Decomposition for hours 4 (first column), 13 (second column) and 19 (third column) on sample 2006-2008 (first row) and sample 2010-2014 (second row) for Italy.

Figure 12: Variance Decomposition for hours 4 (first column), 13 (second column) and 19 (third column) on sample 2006-2008 (first row) and sample 2010-2014 (second row) for Texas.

4.2. Impacts on the Internal Energy Market

Following the methodology described in Section 3.3, we test market (strong) integration on all pairs of EU markets according to their interconnections. Detailed statistics are available on request; results are summarized in Table 2, where we compare the standard test with the modified and more robust test. Given the previous discussion, we expect to observe two regimes for integration: one for 2006–2008, characterized by some integration between interconnected electricity markets, and one for 2010–2014, characterized by less integration as a consequence of local policies determining different generation mixes. Interestingly, we find two cases of maintained strong integration. The first one refers to Germany and France being integrated in both samples. This is not surprising in the light of the results described in Section 4.1, where we noticed the same evolution of the fuel/electricity price nexus in both countries. We therefore confirm the results obtained in Bosco et al. (2010), who concluded that “French and German markets seem to form a sort of core central zone” when prices up to 2007 were considered. The second case refers to France being integrated with Italy at hour 13, but not at hours 4 and 19, in 2006-2008; surprisingly, integration is detected across all hours in 2010-2014. This may be explained by higher interconnection capacity and its better exploitation due to the introduction of market coupling in recent years. See further discussion in Bunn and Gianfreda (2010) and Jerko et al. (2004). The other results instead confirm our expectations of no integration over the second set of years, characterized by high RES-E penetration. Specifically, we find no integration between Germany and The Netherlands at hour 13 (during the peak period) in both samples. At hours 4 and 19, the integration found in 2006-2008 disappears in the second sample. France and Spain were integrated during the first sample at hours 4 and 19, but not at hour 13. The two markets are no longer integrated at any hour in the second subsample.
Moreover, France and Great Britain are integrated in the first subsample across all hours, whereas in the second sample they revert to no integration. In line with our expectations, the effect of the strong growth of RES in EU countries is a reduction of the number of cointegration relations (i.e., of strong integration among markets) during peak hours. For the off-peak hour we observe a stable number of (strongly) integrated market pairs.

Table 2: Results from the KPSS tests are above the diagonal, whereas results from the RKPSS tests are below the diagonal. Test decisions are taken at the 5% level. Light gray indicates indirect connections among the considered markets.

| DE | FR | ES | NL | GB | IT | DE | FR | ES | NL | GB | IT |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| **Hour 4** | | | | | | | | | | | | |
| **2006-2008** | | | | | | **2010-2014** | | | | | | |
| DE | - | coint | coint | coint | no coint | DE | - | no coint | no coint | no coint | no coint | coint |
| FR | coint | - | coint | coint | no coint | FR | coint | - | coint | coint | coint | coint |
| ES | coint | coint | - | coint | no coint | ES | coint | coint | - | coint | coint | coint |
| NL | no coint | coint | no coint | - | coint | no coint | NL | no coint | coint | coint | - | coint |
| GB | coint | coint | coint | coint | - | no coint | GB | no coint | coint | coint | coint | - |
| IT | no coint | no coint | no coint | no coint | no coint | IT | coint | coint | coint | coint | coint | - |
| **Hour 13** | | | | | | **Hour 13** | | | | | | |
| DE | - | coint | coint | no coint | no coint | coint | DE | - | no coint | no coint | no coint | no coint | coint |
| FR | coint | - | no coint | no coint | coint | coint | FR | coint | - | no coint | no coint | no coint | coint |
| ES | coint | no coint | - | coint | no coint | coint | ES | coint | no coint | - | coint | coint | no coint |
| NL | no coint | no coint | no coint | - | no coint | coint | NL | no coint | no coint | no coint | - | coint |
| GB | no coint | coint | no coint | no coint | - | no coint | GB | no coint | no coint | coint | no coint | - |
| IT | coint | coint | coint | coint | no coint | IT | coint | coint | coint | no coint | no coint | - |
| **Hour 19** | | | | | | **Hour 19** | | | | | | |
| DE | - | coint | coint | coint | no coint | DE | - | no coint | no coint | no coint | no coint | no coint |
| FR | coint | - | no coint | no coint | coint | no coint | FR | coint | - | no coint | no coint | no coint | no coint |
| ES | coint | coint | - | coint | no coint | coint | ES | coint | no coint | - | coint | coint | coint |
| NL | no coint | no coint | no coint | - | no coint | coint | NL | no coint | no coint | no coint | - | coint |
| GB | no coint | coint | no coint | no coint | - | no coint | GB | no coint | no coint | coint | no coint | - |
| IT | no coint | no coint | no coint | no coint | no coint | IT | no coint | no coint | coint | coint | coint | - |

5. Conclusions and Policy Implications

In this paper, we have presented new evidence showing, first, that national support for RES-E generation and the induced increase in RES penetration have affected the electricity-fuel nexus and, second, that this has influenced the process of integration among wholesale electricity markets already in place in the past decade. Consequently, all this clearly shows how the overlapping of national and supranational energy policies may produce conflicting results.
Despite the high levels of RES penetration registered in several countries over the last years, previous contributions did not consider the effect of RES-E on the long-run relations between electricity prices and the prices of their fundamental drivers. Other analyses were mainly oriented toward testing EU market integration as the realization of the EU internal energy market policy, without taking into account the local effects of higher RES penetration on this process. We provide for the first time empirical evidence that increasing RES penetration is weakening the traditional relationship between electricity and fuel (coal and gas) prices. Even if hydrocarbons remain drivers of the long-run dynamics of electricity prices, renewables are able to substantially reduce their influence. We find that coal-fired power generation has increased its influence on electricity prices, fostered by coal prices becoming relatively cheaper than natural gas prices. The switch from natural gas, the less emission-intensive generation source, to coal raises new challenges for national and European policies for reducing greenhouse gas emissions. Given the worldwide policies for climate change, the new and stricter air pollution regulations and the competition from natural gas, a coal renaissance was not to be expected. Coal consumption is indeed increasing worldwide, as documented by BP (2015), and this is not due just to the economic growth of China, India, or other Asian countries. Coal often represents the cheapest energy option, thanks to the development of robust international coal markets; its price therefore looks more appealing and makes the construction of new coal-fired power plants more profitable. However, the more coal plants are built, the harder it is to reduce the share of coal, and this is a serious obstacle to mitigating climate change and global warming. Therefore, new and efficient carbon emission trading schemes are called for, instead of unclear, expensive and uncoordinated subsidy policies for renewables, which look more and more like “spiraled out-of-control” costs billed to customers. Our results are consistent with the findings of Böhringer and Rosendahl (2010), who suggest that an increase in green generation may lower emission prices and, therefore, promote dirtier technologies such as coal. As anticipated by Holttinen (2005), Weigt (2009) and Mulder and Scholtens (2013), given that electricity prices have become more connected to weather conditions and less to fossil fuels, we empirically show that market integration decreases. Our findings show that RES-E generation must not be neglected by national regulators and policy makers when legislating and planning the long-term energy future, since renewable sources exhibit not only national effects but also important consequences for other European markets. As theoretically argued by de Menezes and Houllier (2015), we show empirically that isolated energy policies with RES-E targets not centrally coordinated at the EU level may interact, producing divergence of electricity prices across Europe and a departure from the creation of the internal energy market. Therefore, we strongly suggest the promotion of fully coordinated energy policies at both national and supranational levels, and the promotion of mechanisms able to foster coordination and enhance integration, such as market coupling and grid interconnections.
Even if not necessarily leading to permanent price convergence, completing the development of physical power transmission and reinforcing existing interconnection capacity may enable an efficient balancing of power consumption and production across several market areas, which is urgently needed in view of the increasing RES penetration that requires a more flexible and capable system.

Acknowledgment

We thank Silvester Van Koten for useful comments and suggestions. Furthermore, the DEMS visiting scholar programme at the University of Milano–Bicocca is kindly acknowledged by the first author for supporting this research project.

References

EU Commission (2014a). Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. A policy framework for climate and energy in the period from 2020 to 2030.
A Dynamic Load Balancing System for Parallel Cluster Computing

B. J. Overeinder, P. M. A. Sloot, R. N. Heederik and L. O. Hertzberger

Parallel Scientific Computing and Simulation Group, Department of Computer Science, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands

Published in: Future Generation Computer Systems. DOI: 10.1016/0167-739X(95)00038-T

Abstract

In this paper we discuss a new approach to dynamic load balancing of parallel jobs in clusters of workstations and describe its implementation in a Unix run-time environment. The efficiency of the proposed methodology is shown by means of a number of case studies.

Key words: Dynamic load balancing, task migration, distributed computing systems, clusters of workstations.

1 Introduction

With the advent of high-speed networks (most prominently FDDI and ATM), clusters of workstations achieve the same scalable parallelism as the current MPP architectures. However, software support for such parallel cluster systems still lags behind. These parallel cluster systems require new programming paradigms and environments to provide the user with mechanisms and tools to exploit the full potential of the available distributed resources. In these systems, changes such as variation in the demand for processor power, variation in the number of processors, or dynamic changes in the run-time behaviour of the application hamper the efficient use of resources. For load balancing on parallel cluster systems, we can identify three levels that determine the efficient use and utilization of the distributed resources, namely:

- domain decomposition, because the workload should be evenly divided over a set of tasks;
- global scheduling, since the tasks must be mapped to distributed resources;
- local scheduling, because multiple tasks must be scheduled on one processor.

Next, we focus on the consequences with respect to load balancing in case the amount of workload and the available distributed resources are not static. Consider, for example, an application that, after a straightforward domain decomposition, can be mapped onto the processors of a parallel architecture. If the hardware system is homogeneous and allocated to only one application program, then the execution will run balanced until completion: we have mapped a static resource problem to a static resource system. However, if the underlying hardware system is a cluster of multi-user workstations, we run into problems because the available processing capacity per node may change: in this case the static resource problem is mapped to a system with dynamic resources, resulting in a potentially unbalanced execution. Things get even more complicated if we consider the execution of an application with dynamic run-time behaviour on a cluster of workstations, i.e., the mapping of a dynamic resource problem onto a dynamic resource machine. One way to deal with these dynamically changing resource requirements is an intelligent system that supports the migration of processes from overloaded to underloaded processors at run-time, without interference from the programmer. In addition, the resulting adaptive system hides the complexity of the load balancing from the programmer/end-user.
This approach implies the following constraints:

- since we assume that the computational resource is a scalable cluster environment, the application programming model must be based on message passing;
- it is essential that we support a generic operating system; therefore the machine platform operating system should be Unix;
- by hiding the complexity in libraries, the dynamic load balancing run-time support system must be incorporated at user level.

The first two constraints directly relate to the target hardware platform, i.e., a cluster of workstations interconnected by a network and providing \textit{de facto} the Unix operating system. The third constraint stems from the fact that the dynamic load balancing facility is supplied on top of Unix, and not by enhancing the operating system. This facilitates the acceptance by individual users in academia and industry. To fully exploit the potential of clusters of workstations, a detailed and comprehensive understanding of the underlying mechanisms must be obtained. In particular, a good understanding of the interplay between the dynamic parallel application systems and the adaptive computing systems is essential. The work presented here reports on a pilot implementation of such an experimental adaptive system, for which we have coined the name DynamicPVM [4,15]. The paper is outlined as follows. In Section 2 we introduce in general terms the necessary components for dynamic load balancing within the context of the formulated constraints. Given the functional design outlined in this section, the resulting implementation of the run-time support system is described in Section 3. Experiments and results are presented in Section 4. The results of the experiments are discussed and summarized in Section 5. Section 6 concludes with future work.

2 Background and Design Aspects

Within the design of a self-contained experimental environment for dynamic load balancing of parallel application systems, at least the following three functionalities should be included: (i) a parallel programming environment, (ii) a parallel run-time support system, and (iii) a checkpoint/migration facility. The parallel programming environment enables the programmer to decompose the application problem into parallel subtasks. The parallel run-time support system allows for the parallel execution of the parallel application system, and the checkpoint/migration facility extends the run-time support system with the functionality necessary for dynamic load balancing. The first two facilities are provided by the PVM system [12]. The PVM system includes an application programming interface for parallel program development and a run-time support system to allow for parallel execution of the application. The task checkpoint/migration functionality extension must be integrated with the PVM run-time support. The choice to use PVM as the basic parallel programming environment is motivated by the free availability of the source code and the extendibility of the run-time support. The application programming interface incorporates the dynamic addition and deletion of hosts (resources) and processes. Moreover, PVM is the most widely used message passing environment to date. Therefore, we are able to test our system with various existing PVM-based applications. With respect to the checkpoint/migration facility, two operation levels are distinguished: operating system level and user level. In operating system level implementations the resource management facilities are supported by the OS kernel.
Examples of such systems are Mach [9], V-Kernel [14], Sprite [5], and Charlotte [2]. User level designs and implementations of adaptive systems include dynamic resource management facilities by providing their own dynamic load balancing run-time support. Examples of user level designs are Condor [8] for sequential, and MPVM [3] for parallel application systems. The research presented here is a typical example of this last category. Table 1 shows a comparison of the different aspects of granularity and load management for the three systems under consideration: Condor, PVM, and the adaptive system we discuss here, DynamicPVM. Condor is included here as a representative example of a sequential job migration system. We use the term job to indicate the largest entity of execution (the application) consisting of one (viz., a sequential program) or more cooperating tasks (viz., a parallel program).

<table> <thead> <tr> <th></th> <th>Condor</th> <th>PVM</th> <th>DynamicPVM</th> </tr> </thead> <tbody> <tr> <td>intended usage</td> <td>long running background jobs</td> <td>distributed parallel programs</td> <td></td> </tr> <tr> <td>unit of execution</td> <td>job</td> <td>task</td> <td></td> </tr> <tr> <td>load managing objective</td> <td>load distribution</td> <td>load decomposition</td> <td>both</td> </tr> <tr> <td>schedule policy</td> <td>dynamic load balancing</td> <td>round-robin allocation</td> <td>dynamic load balancing</td> </tr> <tr> <td>schedule objective</td> <td>resource utilization</td> <td>application turnaround time</td> <td>both</td> </tr> <tr> <td>performance objective</td> <td>efficiency</td> <td>effectiveness</td> <td>both</td> </tr> </tbody> </table>

Table 1. Granularity and workload management strategies for Condor, PVM, and DynamicPVM. This table indicates the basic design considerations given the constraints we set out to meet.

The next subsections describe the essentials of the message passing system and the checkpoint/migration facility required to implement the functionalities outlined in Table 1.

2.1 The PVM System

The PVM (Parallel Virtual Machine) system presents an integrated environment for heterogeneous concurrent computing on a network of workstations. The computational model is process-based, that is, the unit of parallelism in PVM is an independent sequential thread of control, called a task. A collection of tasks constituting the parallel application cooperates by explicitly sending messages to and receiving messages from one another. The support for heterogeneity permits the exchange of any data type between machines having different data representations. The PVM system consists of two parts: a daemon, called pvmd, and a library of PVM interface routines, the `pvmlib`. The PVM daemon and library enable a uniform view of the network of workstations, called hosts in PVM, as a parallel virtual machine. Each host in the virtual machine is represented by a daemon that takes care of task creation and dynamic (re-)configuration of the parallel virtual machine. PVM tasks are assigned to the available hosts using a round-robin allocation scheme. Once a task is started, it runs on the assigned host until completion, i.e., the task is statically allocated. The PVM library implements the application programming interface, which includes primitives for process creation and termination, host addition and deletion, task coordination, and message passing.
The underlying communication model can be classified as asynchronous message passing, where messages are buffered at the receiving end. An important aspect of the communication model is that the message order from each sender to each receiver in the system is preserved. The PVM message-passing interface supplies both point-to-point communication primitives and global communication primitives based on dynamic process groups. To enable the use of heterogeneous host pools, messages can be encoded using an external data representation (XDR [11]). A relevant issue in the context of the forthcoming discussion is message routing. PVM supports two routing mechanisms for messages, namely indirect and direct routing. By default, the messages exchanged between tasks are indirectly routed via the PVM daemon. With indirect routing, a task sends its messages first to the local PVM daemon. The local daemon determines the host on which the destination task resides and sends the message over the User Datagram Protocol (UDP) transport layer to the responsible daemon. This daemon eventually delivers the message to the destination task. For example, in Fig. 1 an indirect path from task a1 to b2 goes via `pvmd A` and `pvmd B`. Direct message routing allows a task to send messages to another task directly over a Transmission Control Protocol (TCP) link, without intervention of the PVM daemons, thereby enhancing communication performance (see for example the TCP connection between tasks a1 and c1 in Fig. 1).

2.2 Aspects of Checkpoint and Migration

Systems supporting dynamic load balancing, such as Charlotte [2], Condor [8], or the V-System [14], stem from the observation that many of the constantly increasing number of workstations in academic and industrial institutions are lightly loaded on average. In general, workstations are intended for personal usage, with a typical activity pattern in which machines are only used for a small part of the day. Typical figures for large pools of workstations show a mean idle time of 80% [8]. Thus, there is an opportunity to use these idle workstations as computation servers to increase the processing power available to active users and thus to improve the utilization of the hardware. The problem, however, is the complexity involved in making efficient use of this idle time. To address this problem, global scheduling based on dynamic load balancing by process migration is implemented. In order to make scheduling decisions, the dynamic load sharing system monitors the workstations in the network by keeping track of their load parameters. Workload is balanced over the network by placing new jobs on lightly loaded nodes and by migrating jobs from heavily loaded machines to less loaded ones. To guarantee unobtrusive access to idle workstations and to retain the goodwill of the workstation’s owner, the system can detect interactive usage of a workstation and evacuate all jobs from it. Process migration is the movement of an active process from one machine to another in a parallel or distributed computing system. The process is suspended and detached from its environment, its state and data (the checkpoint) are transferred to the destination host, where it is restarted and attached to the destination environment. The major requirement for providing a migration facility is transparency: the execution of a process should proceed as if the migration never took place.
In parallel application systems, this transparency should also hold for the migrated process’s communication partners. The application programs then do not have to account for possible complications of checkpointing and migration. Migration is mainly applied to long-running jobs, so that the improved load balance outweighs the suspend, migration, and restart overhead. The effective global scheduling of application programs on a cluster of workstations is essential to efficiently use the potential of the system: it should achieve an efficient mapping between an application program and the parallel cluster. In general, global scheduling consists of three components: load data acquisition, load data distribution, and a load balancing algorithm. For example, the Condor scheduler consists of both a centralized and a distributed part. Each node in the pool runs a small daemon that gathers statistics about the node and forwards this information to the central scheduler. This information is used to maximize the exploitation of the available processing power. The problem at hand is an experimental adaptive system, where we concentrate on the integration of a checkpoint/migration facility within PVM to enable global scheduling of parallel tasks. Global scheduling itself is a vast area of research, but will not be discussed in this paper.

3 Implementation Aspects of the Extensions in the DynamicPVM Experimental System

This section describes the extensions to PVM that are necessary to support dynamic load balancing within the run-time support system. In order to implement task migration, as alluded to in Section 2.2, functionalities in the *pvmd* and *pvmlib* need to be enhanced with checkpoint/migration mechanisms. It is essential to note that the intertask communication, viz., message routing by the *pvmd*, is strongly affected by the added functionality of task migration. Therefore, we need to develop a methodology to guarantee the transparency and correctness of this intertask communication. The extensions to the *pvmd* and *pvmlib* must not change the PVM programming interface and semantics, such that source code portability is guaranteed. The packet routing by the *pvmd* ensures migration transparency. With this approach, any standard PVM application can be linked and executed with the DynamicPVM system without a modification to the source code of the application, thus hiding the complexity from the end-user.

3.1 The Scheduler

Although the scheduler as such is not considered in the experimental DynamicPVM system, its role and interface are mentioned here. The scheduler in DynamicPVM is the initiating process of all load balancing activities. The scheduler acts as the resource manager of the distributed system, that is, it decides when to migrate a task and to which host it is moved. In this respect, the scheduler largely determines the effectiveness of the DynamicPVM system in its aim for load sharing. The development of good algorithms or heuristics for load sharing is a study on its own and is beyond the scope of this article. The current simple scheduler decides on the (re-)allocation of processors to tasks, based on gathered load information of the workstation pool. The scheduler is implemented as a normal PVM task. This approach makes the incorporation of new scheduling strategies straightforward and provides a flexible experimental platform for studying the effectiveness of different load balancing disciplines.
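As an illustration of the kind of ranking logic such a scheduler may apply, the following deliberately simple Python sketch (not the actual DynamicPVM scheduler) selects a migration candidate from the most loaded host and a destination among the lightly loaded hosts; the threshold guards against migrations whose benefit would not cover the migration overhead.

```python
def select_migration(host_load, host_tasks, threshold=1.0):
    """host_load:  dict host -> load average
       host_tasks: dict host -> list of task ids running there
       Returns (task id, destination host) or None if no migration is useful."""
    # rank hosts by load; only migrate away from hosts that actually run tasks
    busy = [h for h in sorted(host_load, key=host_load.get, reverse=True)
            if host_tasks.get(h)]
    idle = sorted(host_load, key=host_load.get)
    if not busy or not idle:
        return None
    src, dst = busy[0], idle[0]
    # migrate only if the imbalance is large enough to pay for the overhead
    if src == dst or host_load[src] - host_load[dst] < threshold:
        return None
    return host_tasks[src][0], dst

# The scheduler would then issue the migration request for the returned task,
# e.g. via the pvm_move() interface routine described below.
```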
A consequence of implementing the scheduler as a PVM task is that an additional interface must be provided to enable the scheduler to interact with the DynamicPVM system, in particular with the PVM daemons. To this end, the \textit{pvmlib} is extended with an interface routine, \texttt{pvm\_move(tid, host)}, that initiates the migration of task \texttt{tid} to the specified host. From an operational point of view, the activities of the scheduler consist of gathering distributed load data of the hosts in the pool, and deciding on initial task placement and task migration. Initial task placement is the allocation of newly created tasks to hosts. The actual creation of tasks is the responsibility of the \textit{pvmds}. If the scheduler decides to migrate a task to another host, it issues the library function \texttt{pvm\_move()} to activate the migration of the task to the selected host. Section 3.3 describes the migration protocol in more detail. The implementation of the DynamicPVM scheduler discussed here collects load information from the hosts in the host list. From the load information and the list of tasks, it selects candidates for migration and decides on the destination hosts. In this ranking process the task/processor workload is taken into account to strive for load sharing. Initial placement of tasks is still carried out in a round-robin assignment by the \textit{pvmd} at which the task is spawned.

3.2 Consistent Checkpointing Through Critical Sections

To implement dynamic load balancing by task migration, the run-time support system must be able to create an image of the running process, the so-called checkpoint. A checkpoint of an active process consists of the state and data of the process, together with some additional information needed to recreate the process. To incorporate file I/O migration, the state vector also includes information about open files together with their modes, file descriptors, etc. The text segment of the active process is taken from the executable file, and is therefore not part of the checkpoint. A complication with checkpointing communicating PVM tasks is that the state of the process also includes the communication status of the socket connections. Thus, to save the state of the process, the interprocess communication must also be in a well-defined state. Since suspension of the related communicating task is not desirable, the task should not be communicating with another task at the moment a checkpoint is created. To prohibit the creation of process checkpoints during communication, we apply the notion of critical sections and embed all interprocess communication operations in such sections. Checkpointing can only take place outside a critical section. When a checkpoint signal arrives during the execution of a critical section, the checkpointing is deferred until the end of the section. We have implemented the checkpoint facility with two different strategies for storing the process’s state and data: direct and indirect. With direct checkpointing, the destination host opens a TCP connection to the host from which the task is migrated and reads the process’s state and data. Indirect checkpointing, on the other hand, creates a dump of the process’s state and data in a file on a shared (e.g., NFS-mounted) file system.
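The deferral of checkpoint signals by critical sections can be illustrated with the following schematic Python sketch; the real mechanism lives inside the pvmlib, and the use of SIGUSR1, the file name and the pickled state are illustrative assumptions only.

```python
import pickle
import signal

_in_critical_section = 0      # nesting depth of communication primitives
_checkpoint_pending = False   # a checkpoint request arrived inside a section

def _write_checkpoint():
    # Illustrative only: dump (part of) the task state to a file; a real
    # checkpoint also contains stack, registers, signal mask and open files.
    with open("task.ckpt", "wb") as f:
        pickle.dump({"state": "application data goes here"}, f)

def _on_checkpoint_signal(signum, frame):
    global _checkpoint_pending
    if _in_critical_section:
        _checkpoint_pending = True    # defer until the section is left
    else:
        _write_checkpoint()

class critical_section:
    """Wrap every communication primitive; checkpointing only happens outside."""
    def __enter__(self):
        global _in_critical_section
        _in_critical_section += 1
    def __exit__(self, *exc):
        global _in_critical_section, _checkpoint_pending
        _in_critical_section -= 1
        if _in_critical_section == 0 and _checkpoint_pending:
            _checkpoint_pending = False
            _write_checkpoint()       # deferred checkpoint taken here

signal.signal(signal.SIGUSR1, _on_checkpoint_signal)

# usage sketch:
# with critical_section():
#     send_or_receive_message(...)   # hypothetical communication primitive
```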
With the current network bandwidth limitations, the direct checkpointing strategy is twice as fast as the indirect checkpointing strategy, because it involves only one transfer of the migrating process, compared to two transfers (write/read) when using a file system checkpoint (see also Section 4.3). The advantage of writing the checkpoint to a file system is that the process can be restarted on another host at a later stage.

3.3 The Migration Protocol

The main objective of the DynamicPVM migration facility is transparency of the migration protocol, i.e., to allow for the movement of tasks without affecting the operation of other tasks in the system. With respect to the individual task selected for migration, this implies transparent suspension and resumption of execution: the task has no notion that it has been migrated to another host, and communication can be delayed without failures triggered by the migration of one of the tasks. In the task migration protocol we distinguish five phases: (i) create a new process context at the destination host; (ii) disconnect the task from its local pvmd; (iii) checkpoint the task; (iv) move the task to its new host; (v) restart and reconnect the task to its new pvmd. The first step in the migration protocol is the creation of a new process context at the destination host by sending a message to the pvmd representing that host. Next, the master pvmd updates its routing table to reflect the new location of the task, see also Section 3.4. Before the task selected for migration is suspended, the communication between this task and its *pvmd* has to be flushed. Then the task is disconnected from its local *pvmd* and messages arriving for that task are refused by the task’s original *pvmd*. The master *pvmd* will now broadcast the new location to all other *pvmd*s, so that any subsequent message is directed to the task’s new location. The next phase is the actual migration of the process. As stated in the previous section, there are two checkpoint strategies to experiment with: direct and indirect. The newly created process on the destination host is requested to restart from the checkpoint. If direct checkpointing is used, it opens a TCP socket and waits for the original task to begin transmission of the checkpoint. Using indirect checkpointing, the task opens the checkpoint file and reads the checkpoint from disk. Finally, after the checkpoint is read, the original state of the task (including data, stack, signal mask, and registers) is restored and the task is restarted with a *longjmp*. Any message that arrived during the checkpoint/migration phase is then delivered to the restarted task.

3.4 Packet Routing

In PVM the task identifier, *task id* for short, is a unique identifier which serves as the task’s address and therefore may be distributed to other PVM tasks for communication purposes. For this reason the *task id* must remain unchanged during the lifetime of a task, even when the task is migrated. This has implications for the packet routing of messages. The *task id* contains the identifier of the host at which the task was enrolled and a task sequence number. This information is used by the *pvmd*s to route packets to their destination, i.e., to the appropriate *pvmd* and task. When a task is migrated to another host, this routing information is no longer correct. Therefore, additional routing functionality must be incorporated in the *pvmd* routing software in order to support the migration of tasks.
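The interplay between the migration phases and the routing tables of Section 3.4 can be modelled schematically as follows (a toy Python sketch, not the actual pvmd implementation); the five protocol phases appear as comments and the broadcast keeps every daemon's table consistent with the master's.

```python
class Daemon:
    """Schematic stand-in for a pvmd: it only keeps a routing table."""
    def __init__(self, host):
        self.host = host
        self.routing_table = {}          # virtual task id -> current host

class VirtualMachine:
    def __init__(self, hosts):
        self.daemons = {h: Daemon(h) for h in hosts}
        self.master = self.daemons[hosts[0]]

    def spawn(self, tid, host):
        self._broadcast(tid, host)       # initial placement

    def migrate(self, tid, dest):
        src = self.master.routing_table[tid]
        # (i)   create a new, empty process context on `dest`
        # (ii)  flush pending messages and disconnect the task from its pvmd
        # (iii) checkpoint the task (state + data)
        # (iv)  transfer the checkpoint (direct TCP or shared file system)
        # (v)   restart on `dest`, then publish the new location:
        self._broadcast(tid, dest)
        return src, dest

    def route(self, tid, daemon):
        # every daemon resolves the virtual task id via its own table
        return daemon.routing_table[tid]

    def _broadcast(self, tid, host):
        # the master updates its table first, then informs all other daemons
        for d in self.daemons.values():
            d.routing_table[tid] = host

# usage sketch
vm = VirtualMachine(["hostA", "hostB", "hostC"])
vm.spawn(0x40001, "hostB")
vm.migrate(0x40001, "hostC")
assert vm.route(0x40001, vm.daemons["hostA"]) == "hostC"
```

In the real system the tables are consulted on every packet sent between daemons, which is why the routing overhead shows up in the ping-pong measurements of Section 4.2.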
An important design constraint is that the routing facility must be highly efficient and should not impose additional limitations on scalability. To provide transparent and correct message routing with migrating tasks, the *task ids* must be made location independent, that is, the *task ids* are virtualized. This is accomplished by maintaining additional routing information tables in all *pvmd*s (see Fig. 2). These routing tables are consulted for all inter-task communication. Upon migration of a task, first the routing table of the master *pvmd* is updated to reflect the change in location of the migrated task. Next, the master *pvmd* broadcasts the routing table change to all other pvmds, such that each routing table reflects the actual location of all migrated tasks in the system. Figure 2 depicts the migration of a task attached to pvmd B and the subsequent routing table update.

Fig. 2. Routing tables keep track of the migrated tasks.

4 Experiments and Results

DynamicPVM is currently implemented for networks of IBM AIX/32 machines [4], Sun workstations operating under SunOS4 and Solaris [15], and PCs running Linux. It supports only homogeneous checkpointing and migration, because the formats of the checkpoints (the “layout” of the processes) for AIX/32, SunOS4, Solaris, and Linux cannot be interchanged. As a result, a task running on a Sun workstation operating under Solaris can only be migrated to another Sun workstation operating under Solaris. Migration between a Solaris workstation and a SunOS4 workstation is not supported (nor between Solaris and AIX/32).

4.1 The One-Factor Experiment

One-factor designs are used to compare two systems that differ in one categorical variable, here the standard PVM system and the DynamicPVM system. Techniques to analyze these one-factor experiments, in order to decide whether an observed difference is due to induced differences among the systems or due to experimental errors, are presented in this section. The model used for a single-factor design is [6],

$$y_{ij} = \mu + \alpha_j + e_{ij} \quad (1)$$

Here $y_{ij}$ is the $i$th response of design $j$, $\mu$ is the mean response, $\alpha_j$ is the effect of design $j$, and $e_{ij}$ is the experimental error. The model enables analysis of the origin of the variance, whether it stems from $\alpha_j$ or $e_{ij}$. The total variation of $y$ in a one-factor design accumulates in the effect factor $\alpha_j$ and the error term $e_{ij}$. If we square both sides of the model equation, the sum of squares can be written as

$$SSY = SS0 + SSA + SSE$$

where $SSY = \sum_{i,j} y_{ij}^2$, $SS0 = \sum_{i,j} \mu^2$, $SSA = \sum_{i,j} \alpha_j^2$, and $SSE = \sum_{i,j} e_{ij}^2$. If we design our experiment such that the effects $\alpha_j$ and the errors $e_{ij}$ add up to zero, then the cross-product terms of the squared Eq. (1) are also equal to zero. Now we define the quantity total sum of squares (SST) by:

$$SST = \sum_{i,j} (y_{ij} - \mu)^2 = SSY - SS0 = SSA + SSE \quad (2)$$

Although SST is different from the variance of $y$, it is a measure of $y$’s variability and is called the variation of $y$. Equation (2) shows that the total variation is determined by two parts: SSA representing the known part (due to different systems) and SSE representing the unknown part (due to experimental errors) of the variation. The significance of the known part of the variation is determined by comparing its contribution to the total variation with that contributed by the errors.
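This sum-of-squares decomposition and the F-test applied below can be reproduced in a few lines of Python (a sketch assuming NumPy and SciPy; variable names are illustrative):

```python
import numpy as np
from scipy import stats

def one_factor_anova(samples):
    """samples: dict mapping design label (e.g. 'PVM', 'DynamicPVM')
    to a 1-D array of responses (e.g. ping-pong round-trip times)."""
    all_y = np.concatenate(list(samples.values()))
    mu = all_y.mean()                                    # grand mean
    sst = ((all_y - mu) ** 2).sum()                      # total variation
    ssa = sum(len(v) * (v.mean() - mu) ** 2              # explained by design
              for v in samples.values())
    sse = sst - ssa                                      # experimental error
    df_a, df_e = len(samples) - 1, len(all_y) - len(samples)
    f_stat = (ssa / df_a) / (sse / df_e)
    p_value = stats.f.sf(f_stat, df_a, df_e)
    effects = {k: v.mean() - mu for k, v in samples.items()}   # the alpha_j
    return f_stat, p_value, effects

# e.g. one_factor_anova({"PVM": pvm_times, "DynamicPVM": dpvm_times})
```

For this two-level design the F statistic and p-value coincide with those returned by scipy.stats.f_oneway, which can serve as a cross-check.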
The $F$-test is applied to check whether SSA is significantly larger than SSE, i.e., to decide whether the observed difference is due to significant differences among the systems rather than to experimental errors. In the example of our two-system experiment, described in the next subsection, we have the following instantiation of Eq. (1):

- $j = 1$ indicates the PVM system;
- $j = 2$ indicates the DynamicPVM system;
- $i$ indicates one experiment, consisting of 1000 observations;
- the values of $e_{ij}$ follow a normal distribution with mean $\bar{e}_{ij} = 0$ and standard deviation $\sigma_{e_{ij}}$; $\sigma_{e_{ij}}$ is tested in the $F$-test;
- $\alpha_j$ indicates the deviation of design $j$ from the mean response $\mu$; therefore $\sum_j \alpha_j = 0$.

4.2 Measuring DynamicPVM Communication Overhead

A well-known method to measure the basic communication properties of a message-passing system is the ping-pong experiment. In the ping-pong experiment, series of messages of different sizes are sent between two tasks: one master and one slave. The master sends the message to the slave, the slave receives the message into a buffer, and immediately returns it to the master. Half the time of this message ping-pong is recorded as the time $t$ to send a message of length $n$. In this sense, the ping-pong experiment is a suitable benchmark to determine the overhead introduced by the DynamicPVM implementation. A serious problem in benchmarking systems in dynamically changing environments such as a network of workstations is that one does not have control over all the factors influencing the measurements, for example network and processor load. Here, however, we can design the experiment such that it circumvents this problem by performing the measurements in series of equally loaded workstation environments. In addition, a detailed analysis of the results is necessary to preclude experimental errors. The ping-pong experiment was performed for both the public domain PVM implementation and the DynamicPVM implementation. The experiments were executed on a lightly loaded system of SparcStation4 workstations connected by a 10 Mb/s Ethernet. The data were analyzed according to the techniques described in Section 4.1. All reported experiments passed the null hypothesis of the $F$-test. For each message size, 30 observations were collected for both PVM and DynamicPVM. Each individual observation consists of 1000 ping-pong measurements during “low” network load. The rationale behind this is that the network load qualification of “low” is not well defined. One series of 1000 ping-pong measurements with low network load results in an observation for message size $n$ at one specific low network load. By repeating the series over different low network loads, we obtain different observations with some variation. The results of the ping-pong experiments shown in Table 2 are the grand means of these observations. The same experiments were performed for “medium” network load. Again, for each message size, 30 observations were collected for both PVM and DynamicPVM; the resulting grand means are shown in Table 3. In Fig. 3 we show the $\alpha_2$ values for DynamicPVM obtained from the one-factor experiment. The data used in the model are obtained from the low network load ping-pong experiment (see Table 2). The figure depicts the categorical difference in ping-pong results between PVM and DynamicPVM for increasing message length.

Table 2. PVM and DynamicPVM ping-pong results for low network load.
The percentages are the overhead induced by DynamicPVM.

<table> <thead> <tr> <th>Size (bytes)</th> <th>PVM (μsec)</th> <th>DynamicPVM (μsec)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>7268</td> <td>7619 (4.8%)</td> </tr> <tr> <td>4</td> <td>7170</td> <td>7568 (5.6%)</td> </tr> <tr> <td>8</td> <td>7196</td> <td>7680 (6.7%)</td> </tr> <tr> <td>16</td> <td>7170</td> <td>7626 (6.4%)</td> </tr> <tr> <td>32</td> <td>7172</td> <td>7794 (8.7%)</td> </tr> <tr> <td>64</td> <td>7236</td> <td>7723 (6.7%)</td> </tr> <tr> <td>128</td> <td>7288</td> <td>7913 (8.6%)</td> </tr> <tr> <td>256</td> <td>7676</td> <td>8137 (6.0%)</td> </tr> <tr> <td>512</td> <td>7844</td> <td>8454 (7.8%)</td> </tr> <tr> <td>1024</td> <td>8482</td> <td>9048 (6.7%)</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Size (bytes)</th> <th>PVM (μsec)</th> <th>DynamicPVM (μsec)</th> </tr> </thead> <tbody> <tr> <td>2048</td> <td>9853</td> <td>10594 (7.5%)</td> </tr> <tr> <td>4096</td> <td>14904</td> <td>15856 (6.4%)</td> </tr> <tr> <td>8192</td> <td>22091</td> <td>23248 (5.2%)</td> </tr> <tr> <td>16384</td> <td>36980</td> <td>38437 (3.9%)</td> </tr> <tr> <td>32768</td> <td>65150</td> <td>67808 (4.1%)</td> </tr> <tr> <td>65536</td> <td>120756</td> <td>126199 (4.5%)</td> </tr> <tr> <td>131072</td> <td>232304</td> <td>242082 (4.2%)</td> </tr> <tr> <td>262144</td> <td>453843</td> <td>475323 (4.7%)</td> </tr> <tr> <td>524288</td> <td>900094</td> <td>937909 (4.2%)</td> </tr> <tr> <td>1048576</td> <td>1785052</td> <td>1854352 (3.9%)</td> </tr> </tbody> </table>

Table 3. PVM versus DynamicPVM ping-pong results for medium network load. The percentages are the overhead induced by DynamicPVM.

<table> <thead> <tr> <th>Size (bytes)</th> <th>PVM (μsec)</th> <th>DynamicPVM (μsec)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>7425</td> <td>7612 (2.5%)</td> </tr> <tr> <td>4</td> <td>7372</td> <td>7660 (3.9%)</td> </tr> <tr> <td>8</td> <td>7367</td> <td>7635 (3.6%)</td> </tr> <tr> <td>16</td> <td>7402</td> <td>7588 (2.5%)</td> </tr> <tr> <td>32</td> <td>7432</td> <td>7611 (2.4%)</td> </tr> <tr> <td>64</td> <td>7489</td> <td>7652 (2.2%)</td> </tr> <tr> <td>128</td> <td>7570</td> <td>7806 (3.1%)</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Size (bytes)</th> <th>PVM (μsec)</th> <th>DynamicPVM (μsec)</th> </tr> </thead> <tbody> <tr> <td>256</td> <td>7817</td> <td>8017 (2.6%)</td> </tr> <tr> <td>512</td> <td>8311</td> <td>8519 (2.5%)</td> </tr> <tr> <td>1024</td> <td>9232</td> <td>9426 (2.1%)</td> </tr> <tr> <td>2048</td> <td>11151</td> <td>11322 (1.5%)</td> </tr> <tr> <td>4096</td> <td>17081</td> <td>17398 (1.8%)</td> </tr> <tr> <td>8192</td> <td>25623</td> <td>26149 (2.0%)</td> </tr> <tr> <td>16384</td> <td>43328</td> <td>43689 (0.8%)</td> </tr> </tbody> </table>

Due to the definition of \( \alpha_j \) with respect to \( \mu \), we have \( \alpha_1 = -\alpha_2 \).

Fig. 3. The $\alpha_2$ values for DynamicPVM from the one-factor ping-pong experiment.

4.3 Checkpoint Overhead

Figure 4 shows some results obtained by migrating a 75 Kbyte process with data segments of various sizes in both direct and indirect checkpointing mode. As can be seen in the figure, the time needed for the migration is linear in the size of the program.

Fig. 4. Migration times (in seconds) for checkpointing using NFS and direct TCP.

The systems used in these tests had enough free physical memory to restart the checkpoint without swapping pages to disk.
The NAS Parallel Benchmarks (NPB) is a suite of applications used by the Numerical Aerodynamic Simulation (NAS) Program at NASA for the performance analysis of parallel computers. The benchmark suite consists of five "kernels" and three simulated applications which mimic the computational behaviour of large-scale computational fluid dynamics applications. A unique property of the NPB is that the applications are specified algorithmically. The implementation of the NPB kernels used in the experiments with DynamicPVM is described in [18]. The specific NPB kernels used in the performance analysis of PVM and DynamicPVM are:

- **EP** The Embarrassingly Parallel kernel is based on a trivially partitionable problem requiring little or no communication between processors.
- **MG** The 3-D Multigrid Solver is characterized by highly structured short- and long-distance communication patterns.
- **CG** The communication patterns in the Conjugate Gradient kernel are long-distance and unstructured.
- **FT** In the 3-D Fast Fourier Transformation, the communication patterns are structured and long-distance.

The experiments were performed on two sets of eight "approximately equally loaded" Sparc Classics during daytime. One set was reserved for PVM measurements and one set was reserved for DynamicPVM. The DynamicPVM tasks are migrated to lightly loaded workstations, if available. The checkpoints are made to disk, and are thus about two times slower than the TCP checkpoint.

Table 4. Execution times of PVM versus DynamicPVM.

| Benchmark | PVM time | DynamicPVM time | Migrations | Chkp. size | Speedup |
|---|---|---|---|---|---|
| EP | 19:11 | 16:51 | 4 | 100 KB | 1.14 |
| FT | 57:27 | 52:09 | 1 | 25 MB | 1.10 |
| MG | 48:01 | 42:40 | 2 | 2.5 MB | 1.13 |
| CG | 21:26 | 19:37 | 5 | 9 MB | 1.09 |

Simulated Annealing (SA) is a technique for optimization problems of very large scale. In many typical optimization problems one wants to find, among many configurations, one configuration which minimizes or maximizes a certain cost function. In our application problem we study the crystallization of $N$ randomly placed particles on a virtual supporting sphere. The particles interact with each other according to the Lennard-Jones potential [16].

**Sequential Simulated Annealing** The annealing process begins by creating a Markov chain, of given length, at a certain temperature. The Markov chain grows by randomly displacing particles, calculating the corresponding change in energy of the system, and deciding on acceptance of the displacement. After a chain has ended, the temperature is lowered by multiplying the temperature with the cool-rate, which is a number slightly less than 1 (typically 0.9), after which a new chain is started. This process continues until a stop criterion is met. The stop criterion in our implementation is met when the standard deviation in the final energies of the last ten chains falls below a certain value (typically $10^{-6}$).
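A minimal, illustrative C sketch of the sequential annealing loop just described is given below. The displacement and Lennard-Jones energy routines (`propose_move`, `undo_move`), the Metropolis-style acceptance rule, and the parameter values are assumptions for the sketch, not the implementation used in the experiments.

```c
#include <math.h>
#include <stdlib.h>

/* Placeholder hooks: propose a random particle displacement and return the
   resulting change in energy; undo_move restores the previous configuration. */
extern double propose_move(void);
extern void   undo_move(void);

/* One Markov chain of 'len' steps at temperature T. */
static double markov_chain(double T, int len, double energy)
{
    for (int i = 0; i < len; i++) {
        double dE = propose_move();
        if (dE <= 0.0 || drand48() < exp(-dE / T))
            energy += dE;          /* accept the displacement */
        else
            undo_move();           /* reject: restore the configuration */
    }
    return energy;
}

/* Sequential SA: lower T by the cool-rate after every chain until the spread
   of the last ten chains' final energies falls below a small threshold. */
double anneal(double T, double cool_rate, int chain_len)
{
    double final[10], energy = 0.0;
    int n = 0;

    for (;;) {
        energy = markov_chain(T, chain_len, energy);
        final[n++ % 10] = energy;          /* remember the chain's final energy */
        if (n >= 10) {
            double mean = 0.0, var = 0.0;
            for (int i = 0; i < 10; i++) mean += final[i] / 10.0;
            for (int i = 0; i < 10; i++) var  += (final[i] - mean) * (final[i] - mean) / 10.0;
            if (sqrt(var) < 1e-6) break;   /* stop criterion */
        }
        T *= cool_rate;                    /* e.g. cool_rate = 0.9 */
    }
    return energy;
}
```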
**Systolic Simulated Annealing** A synchronous algorithm that does not mimic sequential SA is systolic SA [1, 13, 17]. In systolic SA a Markov chain is assigned to each of the available processors. All chains have equal length. The chains are executed in parallel and, during execution, information is transferred from a given chain to its successor. Each Markov chain is divided into a number of subchains equal to the number of available processors. The execution of chain $k+1$ is started as soon as the first subchain of chain $k$ has been generated, even though equilibrium has not yet been established by then. Quasi-equilibrium of the system is preserved by adopting intermediate results of previous Markov chains.

The experiments with the systolic SA application were executed on two pools of Sun SparcStation LX workstations. The PVM pool consisted of three workstations, and the DynamicPVM pool of six workstations. Table 5 shows the turn-around times of the systolic SA algorithm for PVM and DynamicPVM. The systolic SA problem size is determined by the number of particles, $N$, and the number of iterations. The progress of the systolic SA algorithm for PVM and DynamicPVM is depicted in Fig. 5. Progress is defined in terms of the number of temperature cooling steps, i.e., the points at which a new Markov chain is started.

Table 5. Turn-around times of the systolic SA algorithm for different problem sizes.

| $N$ | Iterations | PVM | DynamicPVM | Migrations | Speedup |
|---|---|---|---|---|---|
| 20 | 5 | 0:14:14 | 0:13:11 | 1 | 1.08 |
| 20 | 10 | 0:31:20 | 0:28:29 | 3 | 1.10 |
| 20 | 25 | 1:23:42 | 1:17:02 | 3 | 1.09 |
| 40 | 5 | 1:34:19 | 1:24:50 | 2 | 1.11 |
| 40 | 10 | 3:21:55 | 2:28:49 | 1 | 1.36 |
| 40 | 25 | 7:49:28 | 7:02:03 | 4 | 1.11 |
| 60 | 5 | 3:38:20 | 3:07:23 | 2 | 1.17 |
| 60 | 10 | 7:22:00 | 6:36:14 | 2 | 1.12 |
| 60 | 25 | 17:31:46 | 16:12:44 | 3 | 1.08 |

Fig. 5. Progress in time of the systolic SA application for PVM and DynamicPVM. Problem size is $N = 40$ and iterations = 25.

The mean CPU loads for the PVM and DynamicPVM clusters are shown in Fig. 6. For the PVM cluster, the mean CPU load is calculated by taking the mean of the CPU load of the three workstations. The mean CPU load for the DynamicPVM cluster is computed by taking the mean of the CPU load of the three workstations currently executing a DynamicPVM task.

5 Discussion and Conclusions

The results of the ping-pong experiment shown in Table 2 and Table 3 indicate that the percentage overhead of DynamicPVM is almost constant (about 5% for low network load and 2.5% for medium network load). The difference in overhead between low and medium network load can be explained by the fact that the overhead we measure is the accumulated effect of network overhead and DynamicPVM overhead; the relative DynamicPVM overhead becomes more pronounced for low network load. Figure 3 depicts how the absolute $\alpha$-values increase with increasing message size. The increase in absolute overhead is due to the routing table lookup for each packet sent between two $pvmd$s in DynamicPVM. As the number of bytes increases, the number of packets sent also increases (see Section 3.4). The overhead can be tuned by changing the DynamicPVM message fragment size.
By increasing the packet size, a smaller number of routing table lookup operations is necessary per message sent. However, the overhead for small messages then increases, as the packets sent between $pvmd$s will be largely empty.

Another overhead factor introduced by DynamicPVM is task migration. This effect is studied in the checkpoint overhead experiment. The results for indirect (NFS) and direct (TCP) checkpointing are shown in Fig. 4. The migration times for indirect and direct checkpointing are linear with respect to the size of the checkpoint. The migration using NFS takes about twice as long as the migration over TCP, which is due to the fact that migration over NFS requires a separate write and read cycle, while in direct mode the write and read are overlapped. Nonetheless, both migration modes can be efficiently implemented given the underlying protocol. For direct checkpointing, the measured throughput is almost 1 MB/s, while the bandwidth of Ethernet is 1 MB/s. With indirect checkpointing a throughput of about 450 KB/s is measured.

The computational kernels from the NAS Parallel Benchmark suite represent different application behaviour in terms of computation and communication. Although the execution behaviour of the kernels differs, the advantage (speedup) gained with DynamicPVM is within the same range, see Table 4. This indicates that the experimental DynamicPVM vehicle is able to use the potential of idle workstations as computational resources, independently of the static or dynamic execution behaviour of the application. Even a memory-intensive application, such as the FT kernel (25 MB checkpoint), profits from one task migration to an idle workstation.

An interesting case study is the systolic simulated annealing algorithm. In Fig. 5 the progress of the simulated annealing process is displayed in time, for both PVM and DynamicPVM. The corresponding mean CPU load of the workstations is depicted in Fig. 6. Noticeable is the correspondence between the application activity and the mean CPU load. The test runs consisted of 25 iterations that are coordinated by one task. This can be seen in Fig. 5, where progress slows for a period after each iteration. Figure 6 displays this as a drop in the measured mean CPU load. The same figure shows that the mean CPU load for DynamicPVM is lower than for PVM. The net effect is a smaller turn-around time.

We conclude therefore:

- The communication and checkpoint overhead experiments show that the experimental DynamicPVM system provides efficient task migration support.
- The results of the NAS Parallel Benchmarks and the systolic SA case study show that the DynamicPVM system is able to exploit the potential of idle workstations, by (re-)mapping a dynamic resource application onto a dynamic resource machine, irrespective of the behaviour of the kernels.
- The DynamicPVM functionalities are provided through libraries, thus hiding the complexity of the load balancing process from the end-user. The resulting transparent appearance of adaptive systems such as DynamicPVM lowers the barrier to cluster computing.

This pilot study indicates that dynamic resource management at the task level for parallel jobs is a promising approach to efficiently balancing load in clusters of workstations.

6 Future Work

The research presented here indicates that DynamicPVM is an adequate research vehicle to study different approaches to dynamic resource management of parallel jobs in cluster environments.
One of the open issues is the development of a truly heterogeneous DynamicPVM: tasks moving from one architecture to another. This heterogeneous task migration is a difficult problem to solve at the operating system level [5]. At user level, we can take an object-oriented approach and implement process/object migration functionalities in the *libc*. The advantage of incorporating generic process/object migration into the *libc* is that other message-passing interfaces, such as MPI, can make use of the offered facilities.

In addition, the DynamicPVM vehicle allows for experimental research in exploring various scheduling mechanisms: the DynamicPVM system offers efficient support for task migration, but the effective use of the dynamic resources depends on an intelligent scheduler.

The experiments described in this paper can be characterized as loosely synchronous computations. For DynamicPVM, (loosely) synchronous behaviour is a worst-case scenario, because the overall computation stalls during the migration of one of the tasks. In the future we will explore the potential of the experimental DynamicPVM system in a more general study with fully asynchronous massively parallel applications. In particular, current research is directed at optimistic parallel discrete event simulation methods, such as the Time Warp protocol [7,10]. A serious limitation to the successful application of the Time Warp protocol is its extreme computational requirements. The completely non-deterministic, asynchronous execution behaviour of the Time Warp protocol makes it a highly dynamic resource problem. Balancing the load of this type of asynchronous system is a specific merit of the experimental DynamicPVM system.

References
Reanimation or Reversibility in "Valerius: The Reanimated Roman": Response to Elena Anastasaki, "The Trials and Tribulations of the revenants"

Graham Allen (2010)

Abstract: This paper is an invited response to an earlier paper by Elena Anastasaki, part of which presented a reading of Mary Shelley's short story "Valerius, the Reanimated Roman." The paper takes issue with aspects of Anastasaki's account of Shelley's story and offers a revised account of its representation of history, identity and Mary Shelley's attitude towards the Imperial and Republican periods of Roman history.

*

Keywords: Mary Shelley. "Valerius, the Reanimated Roman." Literary criticism. Literary history. Romantic Studies.

*

It has been a struggle to transcend the essentially biographical manner in which Romantic women writers like Mary Shelley have traditionally been read. In her introduction to the Pickering edition of Mary Shelley's novels, Betty T. Bennett writes that "one of the major barriers Mary Shelley encountered in her audiences then – and now [was/is] the failure to accept that her major works are designed to address civil and domestic politics." This blindness to the political and, it must be said, philosophical dimensions of Mary Shelley's work often comes from an over-concentration on biographical readings. Such readings, which I have elsewhere described in terms of "biographism," involving a rather literalising equation of text and life, lack an awareness of the kinds of sophisticated disruption of the biographical and literary divide in which Mary Shelley's writing is frequently involved. They also tend to draw a too literalistic relation between literary thematics and psychoanalytical categories, reading tropes as though they were symptoms of or at least reflections on psychic conditions. Elena Anastasaki's account of the relation between the figure of the *revenant* and the disruptive force of poetry in Mary Shelley's and Théophile Gautier's prose fiction is, then, in its analysis of those writers' engagement with form and meaning, a very welcome contribution. In her response, Claire Raymond states:

> Anastasaki refreshingly is concerned not with the apparent effects of the *revenant*, her/his role as disruptor of boundaries, but rather with the internal grief and psychic dislocation that the *revenant* bears because of his/her position as always out of bounds. In a nicely original move, Anastasaki considers the fragmentation and fracture within the *revenant*.
Dispensing with Anastasaki's analysis of Gautier for now, I want to suggest that there is still, within her analysis of the *revenant* and fragmentation, a significant danger of "biographism." This danger appears most starkly in what Anastasaki does with the figure of reanimation, a figure which dictates her selection for discussion of three of Mary Shelley's short stories: "Valerius: The Reanimated Roman," "Roger Dodsworth: The Reanimated Englishman" and "The Mortal Immortal." Early on in her paper, Anastasaki gives a paragraph breakdown of the tragic deaths which haunted Mary Shelley's life, from birth onwards, before stating: "It is not surprising then that from her first literary attempt, *Frankenstein* (1818), the theme of reanimation is to be found at the heart of her work" (28). Anastasaki remains committed to something more than a "biographist" approach to this thematics. She writes, responding to comments by Charlotte Sussman on the short stories: "Personal experience might well have been a source of inspiration in the depiction of the self-awareness of these characters, but I am arguing that what these stories are all about is, on the contrary, *internal discontinuity* as a perception of the self" (34). However, later on in the essay we find Anastasaki arguing that "[F]or Shelley, the search for wholeness is a strictly personal matter" (40). There is clearly, as Anastasaki has shown, a recurrent thematics of reanimation within Mary Shelley's work. We need to be a little careful, however. Are we always sure that what looks like reanimation is indeed reanimation? Is Frankenstein's creature reanimated, or is it created, the reanimated parts of dead humans and dead animals ultimately producing something with authentic and singular life? Is the process of reanimation that Roger Dodsworth goes through, frozen and then thawed back to life hundreds of years later, the same process that Valerius more mysteriously goes through? Anastasaki recognises at times that we are not given the exact specifics of Valerius's reanimation. Given the title it might appear curious to ask the question, but still I intend to ask it: is Valerius in fact reanimated at all? As Anastasaki writes: "Apart from the title, only a series of paradoxical phrases indicate Valerius's unnatural situation" (28). The care I am suggesting here ultimately impinges on the questions of "biographism" and of political meaning with which I began. It is perhaps not reanimation that we should be primarily concerned with in trying to understand the ultimate meaning of a text such as "Valerius: The Reanimated Roman," the text I intend to focus on here. What is ultimately at stake in such a story is something we might more accurately style reversibility, a trope, and perhaps more than a trope, which can reconnect such a short story, on the periphery of the redrawn map of Mary Shelley's oeuvre, to one of the now established canonical novels, The Last Man. Beyond that, reversibility, in ways I will only be able to gesture towards here, might help us in a more global understanding of the nature of politics, philosophy, aesthetics and biography in both Mary Shelley and P. B. Shelley's lives and work. I will begin my brief reading of "Valerius: The Reanimated Roman" by quoting a greater portion of the passage from Anastasaki to which I have just referred. It contains most of the issues I wish to illuminate. Apart from the title, only a series of paradoxical phrases indicate Valerius's unnatural situation.
Phrases like “my sensations of my revival” (332), “when I lived before” (333), “since my return to earth” (337), or again “before I again die” (339) make explicit his revival, but without giving the slightest hint concerning the way it came about. This silencing is supported by an external third person narrator, and the second by a character in the story, Isabel Harley – the woman who helps Valerius to cope with his new situation. The first part also incorporates the narration of Valerius himself, so that we have three different points of view concerning the reanimated character: Valerius is thus viewed by the external narrator (frame narrative), through his own narration (first fragment), and through another character’s narration (second fragment). (28) I will return to the questions of narrative structure and fragmentation later on. To begin our reading of those apparently enigmatic silences in “Valerius” I will remind readers of the story’s initial location. The third-person narrator referred to by Anastaski gives us two figures landing in “the little bay formed by the extreme point of Cape Miseno and the promontory of Bauli.” The narrator makes it very clear why they have arrived at this spot: “They sought the Elysian fields, and, winding among the poplars and mulberry trees festooned by the grapes which hung in rich and ripe clusters, they seated themselves under the shade of the tombs beside the Mare Morto” (Collected Tales: 332). As Charles E. Robinson notes, dating the composition of this story is not clear (Collected Tales: 397). What can be said is that the entire opening scene is a trial run or rerun of the opening of Mary Shelley’s 1826 novel, The Last Man, in which a narrative voice describes how she and her now dead companion visited Naples in 1818 and on the “8th of December of that year … crossed the Bay, to visit the antiquities which are scattered on the shores of Baiae.” The narrator goes on: “We visited the so called Elysian Fields and Avernus; and wandered through various ruined temples, baths, and classic spots; at length we entered the gloomy cavern of the Cumæan Sibyl” (Last Man: 5). It is here that the two travelling companions will find the Sibylline leaves within which the female traveller will eventually decipher the story of the end of the human race and the fate of the last man. The opening setting for “Valerius” is, thus, crucial, and provides all the clues we need to unlock what appears to Anastaski such an enigmatic form of reanimation. As Valerius states of the Elysian Fields to his companion: > This is the spot which was chosen by our antient and venerable religion, as that which best represented the idea oracles had given or diviners received of the seats of the happy after death. These are the tombs of Romans. This place is much changed by the sacrilegious hand of man since those times, but still it bears the name of the Elysian fields. Avernus is but a short distance from us, and this sea which we perceive is the blue Mediterranean, unchanged while all else bears the marks of servitude and degradation. (Collected Tales: 332-3) Valerius’s rhetoric of natural permanence and cultural-historical degradation will be important in the latter stages of this analysis. What is crucial here is Mary Shelley’s interest in the idea of the Elysian Fields. Glossing the mythological reference for her readers, Jane Blumberg writes: “The Elysian Fields were, in classical myth, that region of the Underworld reserved for the just and those favoured by the gods. 
Lake Avernus, perfectly circular, was believed by the Romans to be one of the portals to the Underworld” (Last Man: 5). Whether Mary Shelley saw the Elysian Fields as a last resting place for the great and the good is questionable, however. Certainly her text, *The Fields of Fancy*, first version of what was to become her unpublished novella, *Mathilda*, gives us an account of the Elysian Fields in which a long process of mourning and philosophical enlightenment leads to a transition to a spiritually more advanced realm. As the figure of Fantasia explains to the mourning figure who she repeatedly carries to the Elysian Fields and then back to earth: > When a soul longing for knowledge & pining at its narrow conceptions escapes from your earth many spirits wait to receive it and to open its eyes to the mysteries of the universe – many centuries are often consumed in these travels and they at last retire here to digest their knowledge & to become still wiser by thought and imagination working upon memory – When the fitting period is accomplished they leave this garden to inhabit another world fitted for the reception of beings almost infinitely wise – but what this world is neither can you conceive or I teach you … When we remember all these contexts it becomes clear that Valerius has returned to earth from the Elysian Fields. This is the meaning of such apparently enigmatic statements as: “when I lived before” (333), “since my return to earth” (337), and “before I again die” (339). Valerius is not reanimated so much as reborn into the world of the living. He appears to me to have returned to the earth in order to gain or perhaps test some form of knowledge not yet completely achieved or assimilated. If read in the mythologically rich manner we have been reading the story, the story appears to provoke this question within its readers: what lesson has Valerius still to learn? One thing that Valerius is quite explicit about is his “bitter disdain” for what he calls, in the first instance, “Italians” (Collected Tales: 333). In examining this aspect of the story, Anastski focuses on Valerius’s alienation from the modern world within which he finds himself. She states: “His suffering is clearly the direct consequence of his experiencing a lack of familiarity and – most importantly – continuity” (30). It is not sufficient, however, to figure a singular referent (ancient Rome) as the cause of this lack of continuity in Valerius’s relation to the world. What is not registered in Anastaski’s reading, but which is crucial for any real understanding of the political implications of the story, is that “Rome” is for Valerius itself a divided and contested referent. He makes this very clear early on in his narration. He states: “when the republic died, every antient Roman family became by degrees extinct and … their followers might usurp the name, but were not and are not Romans” (Collected Tales: 333). Valerius’s discontinuity is not simply in finding himself in the modern world of “Italians,” it is even more deeply contained in the fact that the ancient, ruined Rome he is now guided round bespeaks in part an Imperialism which for him is a betrayal of the Republican values to which he still holds. It is Imperial Rome as much as Catholic Rome that alienates Valerius, the Republican revenant. 
The bewildering historical discontinuities experienced by Valerius are symbolically captured for him within the Coliseum, at once the great symbol of Imperialism and yet also of the aesthetic and civic dream of Roman perfectibilism. Deciding never to quit its walls, Valerius achieves a kind of panoramic vision of Rome: From its height, I beheld Rome sleeping under the cold rays of the moon: the dome of St Peter’s and the various other domes and spires which make a second city, the inhabitants of men; the arch of Constantine at my feet; the Tiber and the great change in the situation of the city of modern times; all caught my attention, but they only awakened a vague and transitory interest. The Coliseum was to me henceforth the world, my eternal habitation … in those hallowed precincts, I shall pour forth, before I die, my last awakening call to Romans and to Liberty … If Rome be dead, I fly from her remains, loathsome as those of human life. It is in the Coliseum alone that I recognise the grandeur of my country – that is the only worthy asylum for an antient Roman (Collected Tales: 336). Describing his time, the first century BC aftermath of Sulla’s dictatorship and the rise of Julius Caesar, he speaks of how he believed “the sacred flame” of Republican Liberty was reigniting in “the souls of Camillus and Fabricius,” along with “Cicero, Cato, and Lucullus.” He adds, with huge irony given historical hindsight: “the younger men, the sons of my friends, Brutus, Cassius, were rising with the promise of equal virtue,” before concluding: When I died, I was possessed by the strong persuasion that, since philosophy and letters were now joined to a virtue unparalleled upon earth, Rome was approaching that perfection from which there was no fall; and that, although men still feared, it was a wholesome fear which awoke them to action and the better secured the triumph of Good (Collected Tales: 336). What history has subsequently shown Valerius has robbed him of this hope in perfectibility, and left him mourning a Roman Republican spirit which seems irreparably locked in the past. He agrees to go to England with Lord Harley in order to assess “if, after the great fluctuation in human affairs, man is nearer perfection than in my days” (Collected Tales: 339), however, everything Valerius says seems to imply that he has lost faith in that possibility. Isabel Harley, the woman in whom he finds his one consolation, has said to him: “You shall teach me to know all that was great and worthy in your days, and I will teach you the manners and customs of ours” (Collected Tales: 338). The last we see of Valerius, however, is on the night before he is to depart Rome and Italy for England. The narrator’s description appears to leave the issue of his melancholy over the lost Roman ideal very much open to question and unavailable for any serious resolution: The brilliant spectacle of sunset and the soft light of the moon invited to reverie and forbade words to disturb the magic of the scene. The old Roman perhaps thought of the days he had formerly spent at Baiae, when the eternal sun had set as it now did, and he lived in other days with other men. (Collected Tales: 339) The question of whether Valerius can ever learn to identify with the modern world he now finds himself in is connected very clearly in the story with the question of whether he can ever come to believe that the possibility of social and cultural “perfection” is still open, still alive. 
Valerius's discontinuity with the modern world is a psychological problem Shelley adroitly attaches to the political and philosophical question of the fate and thus the future of Republicanism. This open question about Valerius's ability ever to understand the persistence (at least in potentia) of the spirit of Republicanism connects "Valerius" up to that Godwinian hermeneutical model Rajan has glossed in terms of the Romantic supplement of reading. The question is not resolved, since it is designed to resound within Shelley's readers. The passage I have just quoted must, therefore, be the authentic ending of the text. The fragment which follows in Robinson's edition should not, therefore, be considered as a continuation of the story but rather as an unassimilated fragment from it. There are very similar, structurally related moments in the last chapter of The Last Man, moments of vision, within and around the Coliseum, which can help us understand better the not inconsiderable historical and politico-philosophical complexities being staged in "Valerius: the Reanimated Roman." Alone in Rome and on the earth, Lionel Verney sits in the Forum, by the Coliseum, and describes a moment of imaginative repopulation:

> I strove, I resolved, to force myself to see the Plebeian multitude and lofty Patrician forms congregated around; and, as a Diorama of ages passed across my subdued fancy, they were replaced by the modern Roman; the Pope, in his white stole, distributing benedictions to the kneeling worshippers; the friar in his cowl; the dark-eyed girl, veiled by her mezzera …. (Last Man: 58)

The repopulating diorama of a vision can only last so long, however, and Verney then describes how the scene collapses before the stark, depopulated reality before him:

> I roused myself – I cast off my waking dreams; and I, who just now could almost hear the shouts of the Roman throng, and was hustled by countless multitudes, now beheld the desart ruins of Rome sleeping under its own blue sky; the shadows lay tranquilly on the ground; sheep were grazing untended on the Palatine, and a buffalo stalked down the Sacred Way that led to the Capitol. I was alone in the Forum; alone in Rome; alone in the world. (Last Man: 359)

The scene ends, significantly, with what is perhaps the most important of the chapter's many pyramid images:

> The generations I had conjured up to my fancy, contrasted more strongly with the end of all – the single point in which, as a pyramid, the mighty fabric of society had ended, while I, on the giddy height, saw vacant space around me. (Last Man: 359)

As I have argued elsewhere, the pyramid is a perfect symbol for the tragic historical narrative presented by Lionel Verney, a narrative which begins with a populated world and ends with the last man, the single point of an extinguished human race. Standing on the top of the pyramid of human history, however, Verney, as its narrator, can see both its end and its beginning, its base and its apex. The pyramid image here, as throughout the novel, is in fact not one of tragic one-way entropic annihilation, but rather one of reversibility. Just as Verney in his imagination can repopulate the Forum and the Coliseum, so his narrative has demonstrated the reversible power contained in all writing and all narrative.
It is my contention, presented in the spirit of an addition to Anastasaki's reading of "Valerius," that the lesson Mary Shelley's reanimated Roman must learn is that the spirit of Republican Rome can be reanimated, that an apparent historical decline of that spirit can be reversed. In a much larger work than this I might argue that Rome itself came to represent the possibility of historical and imaginative reversibility for both Mary and Percy Shelley. The proof of this interpretive argument, if we can call it that, lies in the fragment which accompanies the manuscript of "Valerius," not as Anastasaki suggests in any intended way, but simply as an adjacent, related, yet to be incorporated text. This fragment text gives us the perspective of Isabell Harley, Valerius's would-be teacher. Isabell's lesson is overwhelmingly that of historical and political reversibility. Isabell Harley's fragment text (Collected Tales: 339-44) returns us to the moment in which Valerius gives up the Coliseum. She talks about the need to reconnect him in some way to the world around him and her attempts to produce this. She gets straight down to the point, in fact, directly addressing Valerius's regret that Empire replaced Republican Rome: "You were happy in dying before the fall of your country and in not witnessing its degradation under the Emperors" (Collected Tales: 340). She argues that looking at the ruins of Imperial Rome she can still discover within them the persistence of the Republican spirit:

> When I visit the Coliseum, I do not think of Vespasian who built it or of the blood of gladiators and beasts which contaminated it, but I worship the spirit of antient Rome and of those noble heroes, who delivered their country from barbarians and who have enlightened the whole world by their miraculous virtue. I have heard you express a dislike of viewing the works of the oppressors of Rome, but visit them with me in this spirit, and you will find them strike you with that awe and reverence which power, acquired and accompanied by vice, can never give. (Collected Tales: 340)

For Isabell, Rome's Imperial ruins are reversible: the viewer has the choice to see in them either the terrors and the violence of the Empire or the resilient spirit of the Republic. The decisive power is in the mind of the modern viewer. Isabell takes Valerius to a vantage-point from which they can view Rome and all he can see is destruction (Collected Tales: 341). Isabell's response is again to mix destruction with immortal beauty, decay with the persistence of Republican spirit. She says: "It seems to me that if I were overtaken by the greatest misfortunes, I should be half consoled by the recollection of having dwelt in Rome" (Collected Tales: 342). She takes him to the Pantheon at night, describing it as a temple "to all the gods" built shortly after his death. Valerius is inspired by the beauty and wholeness of the temple, but this positive response is shattered on the sight of a Christian cross: The cross told him of change so great, so intolerable, that that one circumstance destroyed all that had arisen of love and pleasure in his heart. I tried in vain to bring him back to the deep feeling of beauty and of sacred awe with which he had been lately inspired. The spell was snapped. The moon-enlightened dome, the glittering pavement, the dim rows of lovely columns, the deep sky had lost to him their holiness. He hastened to quit the temple.
(Collected Tales: 343) Valerius is someone who cannot resist the idea of history as a destructive force eradicating all value; for him, everything of worth in the past is dead to the present. Isabell takes Valerius to the Baths of Caracalla and to the Protestant Cemetery, which is described in terms which, if the story was composed in 1819, anticipate the poetic description of the same spot in P. B. Shelley’s Adonais (Collected Tales: 343). It is here, “at the foot of the tomb of Cestius, that lovely spot where death appears to enjoy sunshine and the blue depth of the deep sky from which it is every where else shut out,” that Isabell describes Valerius as a ghost or revenant. Valerius belongs to the dead, he cannot find a connection to the modern world, Isabell’s lesson of reversibility, of the persistence of hope in the face of historical destruction, is something he cannot assimilate: Did Valerius sympathize with me? Alas! no. There as a melancholy tint cast over all his thoughts; there was a sadness of demeanour, which the sun of Rome and the verses of Virgil could not dissipate. He felt deeply, but little joy mingled with his sentiments. With my other feelings towards him, I had joined to them an inexplicable one that my companion was not a being of the earth. I often paused anxiously to know whether he respired the air, as I did, or if his form cast a shadow at his feet. His semblance was that of life, yet he belonged to the dead. (Collected Tales: 343) Reversibility, a vision of history which sees the possibility for rebirth alongside that of decay and destruction, and which retains a hope in a Republicanism which may seem dead and gone to the unimaginative eye, is unsuccessfully offered to Valerius, but clearly can still be recognised and adopted by the reader. There is a clear political and historical point to this short story, one which links it to a number of Mary Shelley’s most important texts, including her 1823 novel concerning the fate of Florentine Republicanism, Valperga. Mary Shelley’s short stories can, when read with care, appear closer to the tradition of the Godwinian novel than has until recently been suspected. Mary Shelley’s own struggle to achieve such a positive vision of history can perhaps be registered in everything she wrote from 1819 onwards. Works Cited ______“Introduction” in *The Novels and Selected Works of Mary Shelley*, opp. cit.: xiii-lxxiv. iii For a recent attempt to honour such complexities see Julia A. Carlson’s England’s First Family of Writers. iv Elena Anastaski, “The Trials and Tribulations of the revenants”: pp.26-46. There are significant ways in which Anastaski’s approach could be related to the ground-breaking work of Tilottama Rajan in texts such as The Supplement of Reading and “Mary Shelley’s Mathilda”. v Claire Raymond, “Response to Alena Anastaski’s ‘The Trials and Tribulations of the revenants’": 257. vii Mary Shelley, The Last Man: 5. viii The Shelleys had actually visited the Bay of Baiae (nowadays the Bay of Naples) and Avernus on 8th December 1818. x For Mary Shelley’s Republicanism see Betty T. Bennett, ‘The Political Philosophy of Mary Shelley’s Historical Novels”; Betty T. Bennett, Mary Shelley: An Introduction; Michael Rossington, ‘Future Uncertain”. xi See Allen Mary Shelley: 90-116.
James F. Wilson, *Bulldaggers, Pansies, and Chocolate Babies: Performance, Race, and Sexuality in the Harlem Renaissance* (University of Michigan Press).

CHAPTER 2

"Harlem on My Mind": New York's Black Belt on the Great White Way

Harlem . . . Harlem
Black, black Harlem
Niggers, Jigs an' shiney spades
Highbrowns, yallers, fagingy fagades
"... Oh say it, brother,
Say it..."
Pullman porters, shipping clerks an' monkey chasers
Actors, lawyers, Black Jews an' fairies
Ofays, pimps, lowdowns an' dicties
Cabarets, gin an' number tickets
All mixed in
With gangs o' churches
Sugar foot misters an' sun dodgin' sisters
Don't get up
Till other folks long in bed ...

—"Harlem" by Frank Horne*

"OFAYS, PIMPS, LOWDOWNS AN' DICTIES"

In March 1926, Anita Handy edited a new magazine called A Guide to Harlem and Its Amusements, in which she planned to provide tips for touring Harlem's most popular attractions. When her inspiration was denounced in the black press for focusing only on the neighborhood's lurid side, she responded that she only intended to satisfy the curiosity of those who had recently seen David Belasco's Broadway production of Lulu Belle and read Carl Van Vechten's controversial novel Nigger Heaven. She claimed that these two works had "caused a great number of people, especially white people, to visit Harlem," but regrettably, in her opinion, these crowds did not know "how to see the community intelligently." The highlight of Handy's tour would include a trip to the epicenter of this thriving nightlife, a stretch known as "Jungle Alley," which was located between Lenox and Seventh avenues on 133rd Street. Many of the nightclubs, such as Barron's Exclusive Club, one of Harlem's oldest (having opened in 1915), Connor's, and the Clam House, were found on this block. In her publicity, she also promised that she would not show just the "night side life," but also "the better side of Harlem," including its churches, schools, and modest homes. Admittedly, she indicated, "The night life side is the only side the white tourists care to see, as it is the only side they have heard about." For those wishing to experience the "real thing," Handy's guide presumably offered an invaluable service to visitors who only knew Harlem from what they saw on the stage and read in popular fiction. As this account indicates, white fascination with Harlem was fueled in large part by its representations in the popular literature and entertainment of the 1920s. Plays, novels, and songs depicted an idealized, exotic, and rather risqué view of life among New York's black denizens above 125th Street, and the images lured white people to encounter the authentic milieu on their own. New nightclubs and speakeasies could not open fast enough to oblige the hordes of white tourists. Writers, entertainers, and producers capitalized on the newest vogue and aroused further interest in Harlem's seamier side by continuing to simulate it on stage and in fiction. Practically overnight, these simulations of Harlem became the basis for how the "real" Harlem would be seen and experienced by white visitors.
Concurrently, however, black community leaders attempted to counter these representations by publicizing the high moral standards of the residents and arguing that the decadence was a result of “the hundreds of downtown white people” who go to Harlem for a “moral vacation.” In the 1920s, Harlem was a contested space for representation, and this chapter examines that contestation through the distorted margins separating private and public, natural and staged, and authentic and manufactured. While the previous chapter explored this phenomenon via the semiprivate rent party institution in Harlem and the lesbian and gay demimonde throughout New York City, here I will focus on how the commercial theater of the 1920s com- pli cated the struggle for a representative view of black life and how competing forces attempted to define the “real” Harlem. The pithily titled *Harlem* (1929) serves as one of the clearest enactments of this struggle. *Harlem* is a Broadway melodrama by Wallace Thurman and William Jourdan Rapp, and the production is historically significant because it was the first commercially successful Broadway play written by an African American—Thurman (although it was cowritten by Rapp, a white playwright). In *Harlem*, Thurman and Rapp consciously recycled many of the conventions of popular Broadway melodrama, which they profitably combined with the white attraction for Harlem’s nightlife. The final product is a fascinating hybrid that also includes elements of black folk drama, musical comedy, and social realism. The drama, which was billed as a “Thrilling Play of the Black Belt,” demonstrates what George Hutchinson calls “the cobbled together of traditions out of heterogeneous elements and a babel of tongues.” This “hybridity,” which paralleled the contemporaneous divisive public debate inside and outside the black community, reveals that “real life” 1920s Harlem was a fragmented site of identification, and demonstrates the impossibility of determining an “authentic” African American identity for that era. Even more notably, through the collaboration of the black and white playwrights, depictions of the “old” and “new” Negro, and the attempt to re-create Harlem in Times Square, there is a genuine attempt to blur the boundaries between the races and create a work of art that transcends racial categorization. If this sounds particularly grandiose for a play that was subtitled *A Melodrama of Negro Life in Harlem*, the *Harlem* playwrights called their work an “educational drama,” and they deliberately intended to assail the stereotypes traditionally associated with Blacks on stage, such as the mammy figure, the slow-witted, superstitious “darkie,” and the cunning but malapropism-spouting trickster. Indeed, Thurman and Rapp strove to “present the [N]egro as he is” in a veritable, starkly naturalistic environment, and they even included a “Glossary of Harlemisms” in the playbill for deciphering the hip, jazz-inflected, colloquial dialogue spoken on stage. The drama contains a cross-section of a black community, which in the world of this play includes licentious, unrestrained young women, barbaric, sexually out-of-control partygoers, gun-shooting, handsome gangsters, as well as displaced, pious, southern folk, and idealistic, male social climbers. The conflicting images within Thurman and Rapp’s play fly in the face of black bourgeois critics, who insisted on images that put Blacks in a positive light and assisted in the task of racial uplift. 
While simultaneously hoping to educate their audiences, the playwrights were required by their producers to construct a play that would also appeal to the tastes of their mainstream Broadway audience, who craved larger-than-life characters, thrilling drama, and, as one contemporary producer instructed, a “wow” in the third act. “SO LIKE VAN VECHTEN, START INSPECTIN’” Broadway audiences were conditioned to a particular view of Harlem that had permeated the popular culture by 1929. To appreciate the pressure on Rapp and Thurman to embody this vision, one need look only at the controversy surrounding the publication of Carl Van Vechten’s *Nigger Heaven*, which helped initiate the Harlem vogue. Before examining Rapp and Thurman’s depiction of Harlem, this section will provide a context for the literary and theatrical representation of an “insider’s view” of Harlem as it was stimulated by that novel. In August 1926, *Nigger Heaven*, by white novelist and socialite Carl Van Vechten, appeared in bookstores across the country. The novel was an instant best seller, and within just a few months, it went through nine printings. In addition, the novel’s subsequent international success helped make Harlem an obligatory stop for tourists visiting New York City. Although the book was never adapted for the stage or film, its relationship to popular entertainment is not at all tangential. Its depiction of black life in Harlem had a tremendous impact on the way in which images of race were presented, perceived, and discussed in the era. As a result, nearly all of the African American performers on Broadway and in the nightclubs of the 1920s were influenced, arguably both positively and adversely, by this novel. More importantly, the arguments it raised about cultural difference laid the groundwork for public discussions over African American representations performed in a variety of venues. A great deal has been written about Van Vechten’s novel and the firestorm it provoked among literary and political leaders in the era, but because of its connections to the New York theater and nightclub worlds, it is worth discussing in this context. In brief, the melodramatic plot concerns the tempestuous romance of two young African Americans, Byron Kason and Mary Love. Naive, beautiful Mary is a librarian and Byron a struggling writer, and the two develop a wholesome, deep love for one another. Byron, however, grows increasingly caustic from a lack of success selling his stories, and as his failure becomes more and more debilitating, he considers Mary’s love smothering and patronizing. Soon after, he falls for the impetuous and exotic Lasca Sartoris, who was based on Nora Douglas Holt, a wealthy socialite of the 1920s and good friend of Van Vechten’s. In the novel Lasca shows Byron the pleasures of the flesh and material wealth (as well as introducing him to Harlem’s raucous night life). Eventually Lasca tires of Byron and dismisses him for Harlem’s numbers king (who now would be known as a “bookie”), Randolph Pettijohn. When Pettijohn is killed in a nightclub by a Harlem “sheik,” who is also angry at his taking Lasca away from him, Byron is circumstantially linked to the murder. Seeing no way out of this turn of events, Byron unloads his own pistol into the corpse of Pettijohn and succumbs to the law and his own fate. Thus ends the story of an idealistic young black man who comes to the Big City and is destroyed by its callous indifference. 
The responses to the book culminated in perhaps one of the most contentious debates over black representation in American history and demonstrated the deep divisions within the community and among the cultural leaders. Alain Locke, Rudolph Fisher, James Weldon Johnson, and Charles S. Johnson gave the book high praise. Wallace Thurman, who offered faint acclaim for the book as a work of literature, spoke out against the damnation heaped upon the novel. In “Fire Burns,” an editorial printed in the first and only edition of the literary magazine FIRE!!, Thurman wrote: Group criticism of current writings, morals, life, politics, or religion is always ridiculous, but what could be more ridiculous than the wholesale condemnation of a book which only one-tenth of the condemnators have or will read. And even if the book was as vile, as degrading, and as defamatory to the character of the Harlem Negro as the Harlem Negro now declares, his criticisms would not be considered valid by an intelligent person as long as the critic had had no reading contact with the book.7 A large vocal black contingent, however, was incensed by the book’s publication even though many, as Thurman and others indicated, never got past the title page. This outcry did not, however, stop people from reading the novel, and more likely added to its success. Robert F. Worth surmises that the novel sold more copies “than all the books by black writers of the Harlem Renaissance combined.”8 Many Harlemites, though, believed their community had been betrayed and exploited by Van Vechten, whom they had treated with the greatest hospitality or at least quiet tolerance as he did his “research.”9 Andy Razaf poked fun at Van Vechten’s methodological explorations in his song, “Go Harlem.” The lyric includes the line: “So, like Van Vechten, / Start inspectin’, / Go, Harlem, go Harlem, go.”10 Many in the community scorned Van Vechten’s sensationalized portrait of their community, and unsuccessfully tried to ban him from visiting Harlem. The title was especially offensive to some, but Van Vechten vociferously claimed that his use of the term was not intended to offend—perhaps he wished for it to shock—but he used the term nigger heaven ironically, both as a theatrical allusion and as a metaphor for Harlem. On a literal level, it refers to the second balcony in downtown theaters, where black audiences were relegated when they attended a Broadway show. The packing of black people into the gallery, requiring them to use separate doors and unadorned stairways, which contrasted with the ornate passageways leading to the orchestra and mezzanine sections of Broadway theaters, was a powerful social reminder of their status. (Incidentally, these characteristics are still evident in the existing Broadway theaters built around the turn of the century.) Even when the whites in the orchestra and mezzanine below were joyously applauding an all-black show like *Shuffle Along*, the theatrical spaces dictated, or better yet, “disciplined” in Foucauldian parlance, the great racial divide.\textsuperscript{11} Metaphorically, Van Vechten’s title refers to Harlem itself, pointing to the neighborhood as a segregated section for Blacks, situated geographically at the top of Manhattan Island. Although the title suggests a paradise-like quality of this community and its separation, in Van Vechten’s intended usage, the novel ironically presents Harlem as an overcrowded enclave for its black residents. 
The central character of the novel, Byron, articulates this view in an oft-quoted passage: > We sit in our places in the gallery of this New York theatre and watch the white world sitting down below in the good seats in the orchestra. Occasionally they turn their faces up towards us, their hard, cruel faces, to laugh or sneer, but they never beckon. It doesn’t seem to occur to them that Nigger Heaven is crowded, that there isn’t another seat, that something has to be done.\textsuperscript{12} Unfortunately, Van Vechten’s social commentary is lost within the melodramatic proceedings of the book. Overpowered by the exciting and vibrant nightclub scenes, which include the exploits of black gangsters, loose women, and dedicated revelers, Byron’s rant seems more like sour grapes than a social indictment. In his defense, Van Vechten never intended to exploit or insult his black hosts; in fact he had envisioned “taking up the Chinese and the Jews” in future fictional exposés (he never did).\textsuperscript{13} He championed black causes in his *Vanity Fair* columns and was a patron to several black artists, including Langston Hughes. He was a tremendous supporter of many black artists and entertainers, and his renowned parties included numerous African American guests at a time when New York’s high society was strictly segregated. In an era when black identity was being forged, and positive images were at a premium, Van Vechten seemed to be more interested in rebelling against white middle-class ideals and intent on sending a cultural shockwave through New York’s elite. Van Vechten’s book had an even more profound effect, and it touched a nerve among African Americans when racial tensions were especially high across the nation. In 1926, news of lynchings from the South continued to seep into Harlem, and there was still not a Senate-passed antilynching bill that would at the very least reflect a modicum of white concern. In a highly theatrical protest in December 1926, two political organizations, the National Negro Development Union and the National Negro Centre Political Party, gathered in Harlem in response to the lynching of Bertha Lowman and her two brothers in Aiken, South Carolina. Demanding that President Coolidge take action to halt the activity of the Ku Klux Klan, S. R. Williams, a Wilberforce College professor, used Van Vechten’s novel as evidence of white culpability. After denouncing *Nigger Heaven* and reading excerpts from the novel, he tore two pages from the book and asked the energized crowd what should be done with the pages “to show proper resentment of their contents.” As the crowd responded “Burn ’em up!” Williams lit the pages on fire and held them over his head until they were completely incinerated. There might be another ceremony, Williams told the crowd, to burn the rest of the book.14 In addition to showing *inter*racial divisions in the 1920s, the controversy surrounding *Nigger Heaven* reflects *intra*racial splits and fragmentation. While critics and reporters of the era attempted to depict Harlem as a community united by racial commonalities, the response to *Nigger Heaven* attested to the depth of the fissures with which it was bisected. Class divisions, varying national origins, political affinities, and religion were just some of the ways in which the community was divided, and Van Vechten created a call to arms. 
Apart from the occasional political protest, the battle over *Nigger Heaven* was mostly academic, and the theater of operations was the black mainstream and scholarly press, the black intelligentsia and religious figures its main warriors. James Weldon Johnson, a good friend of Van Vechten’s, championed the novel in the black journal *Opportunity*, and he pointed to its multifaceted presentation of Harlem. In his review, he applauds Van Vechten as the first white novelist to portray Harlem life not as a single experience, and he says the author presents “the components of that life from the dregs to the froth.” Johnson sees the book as a truthful, nonmanipulative narrative and a genuine documentary of Harlem, but at the same time, one that is literary and artful. Commenting on Van Vechten’s treatment of Harlem’s less wholesome elements, Johnson focuses on the universalism of the love story at the novel’s heart:
> The scenes of gay life, of night life, the glimpses of the underworld, with all their tinsel, their licentiousness, their depravity serve actually to set off in sharper relief the decent, cultured, intellectual life of Negro Harlem. But all these phases of life, good and bad, are merely the background for the story, and the story is the love life of Byron Kasson and Mary Love.15
Johnson maintains that the book is surely going to be “widely read,” and will undoubtedly “arouse much discussion.” Understanding that some people will have difficulty getting beyond the title but will try to talk knowingly about the book anyway, he concludes: “This reviewer would suggest reading the book before discussing it.”16 In his scathing review in *The Crisis* (also a black journal) several months after the novel’s publication, Du Bois never mentions James Weldon Johnson by name, but he responds to Johnson’s appraisal point by point. He refers to the book as “a blow in the face” to the black community. Although he objects to the title, he says that that is the least of the novel’s offenses, asserting, “after all, a title is only a title.” In particular, Du Bois condemns the book for being an unflattering and false representation of Harlem. Assuming the opposite of Johnson’s position, he calls the work’s portrait of black life a “caricature,” which, he explains, “is worse than untruths because it is a mass of half-truths.” He writes: “Probably some time and somewhere in Harlem every incident of the book has happened; and yet the resultant picture built out of these parts is ludicrously out of focus and undeniably misleading.”17 He defiantly refutes any allegation that the depiction of Harlem is fair and balanced, and he posits a critique of the white, one-sided perception of Harlem, which focuses only on its scandalous images. He writes:
> [Van Vechten] is an authority on dives and cabarets. But he masses this knowledge without rule or reason and seeks to express all of Harlem life in its cabarets. To him the black cabaret is Harlem; around it all his characters gravitate. . . . Such a theory of Harlem is nonsense. The overwhelming majority of black folk never go to cabarets. The average colored man in Harlem is an everyday laborer, attending church, lodge and movie and as conservative and as conventional as ordinary working folk everywhere.18
In a conclusion that seems to answer Johnson’s appeal for people to read the book, Du Bois says: “I read Nigger Heaven and read it through because I had to.
But I advise others who are impelled by a sense of duty or curiosity to drop the book gently in the grate and to try the Police Gazette.” Du Bois’s argument that the title was not a metaphor for Harlem, as Van Vechten posited, but rather a synecdochical archetype, was reiterated by community and religious leaders, who mourned the adverse effect it had on the neighborhood. They viewed with dismay such works as *Lulu Belle* and *Nigger Heaven* and their depiction of Harlem as a “paradise for cheap sport.” This was a small element of Harlem life, they argued, and the more dominant “good” and “decent” side of their neighborhood was ignored. Reverend William Lloyd Imes, a pastor of St. James’ Church, asked:
> Would white folk like to be judged by their cheapest and vilest products of society? Do they feel flattered by the sordid, degrading life brought out in our courts? Those who really know Negro Harlem find its good, decent homes, its schools, its churches, its beginning of business enterprises, artists, musicians, poets, and scholars, influential civic organizations, modern newspapers and magazines published and controlled by the race, all of which is a veritable romance in itself.
And in a tongue-in-cheek, ironic piece for the *Messenger*, George S. Schuyler wrote that Harlem had very recently earned a degree of respect for its growing number of intellectuals, writers, and poets. But he claims that these achievements have been nearly forgotten due to the interest in the vulgar nightlife. Facetiously, he states that Carl Van Vechten and Broadway impresario David Belasco would soon be participating in a public debate to determine who is “most entitled to be known as the Santa Claus of Black Harlem, a community described as the Mecca of the New Negro but lately called ‘Nigger Heaven.’” Poking fun at Belasco and Van Vechten’s capitalization on black life and their self-serving “support” of black literary and cultural life, he concludes, “Both contestants are well known for their contributions to the Fund for the Relief of Starving Negro Intelligentsia and for their frequent explorations of the underground life north of 125th Street.” Within a year, *Nigger Heaven* became an integral part of popular culture and was synonymous with Harlem entertainment. Its representations of black cabaret performers, singers, and dancers were replicated in the nightclubs, musical shows, and plays in New York and other cities across the United States. A blunt example of the circulation of the title and its images can be found in George S. Oppenheimer and Alfred Nathan, Jr.’s song “Nigger Heaven Blues,” which appeared in *The Manhatters*, a musical revue that opened in Greenwich Village in the late spring of 1927 and moved to the Selwyn Theatre in August of the same year. The song was set in a cabaret scene and performed by whites in blackface, and the lyric attempts to capture the rag-tag, sexual spirit of the novel and includes the verse, “High yaller girls, choc’late and buff, / Doing their stuff, doing it rough / Oh boy, I got the Nigger Heaven Blues.” As critics warned, the original socially and politically ironic intentions of the title were consumed by the depictions of salacious dancing and unending jazz music. Even more than being a cultural marker, the novel became a travelogue, a tourist’s guidebook for visiting Harlem.
The book was deemed a work of fiction, but people wanted an unmediated experience of the scenes from the novel because they seemed so “real” and “authentic.” An article from 1929 printed in the Jamaican Mail, a Kingston, Jamaica, newspaper, reflects this desire to experience the real, untainted Harlem. The author of the piece, Viscountess Weymouth, writes that since reading Nigger Heaven, she has wanted to experience Harlem, “this colourful Mecca of jazz, high spirits and drama.” Fortuitously, she met Carl Van Vechten at her first party in New York, and he “promised that he himself would unlock the ebony gates of Nigger Heaven” to her and her unidentified traveling companions. Their first stop was Connie’s Inn, where she saw a not very satisfying musical revue. Her disillusionment with Connie’s arose from the fact that except for the waiters and entertainers (she was quite impressed in particular by “a beautiful negress” who performed “an exotically barbaric dance”), there were nearly no “coloured people in the room.” She states sadly: “I was disappointed; the whole atmosphere was so obviously faked to lure the tourist.” The club lacked the authentic environment that typified her reading of the novel. Her spirits rose, however, with their arrival at the Sugar Cane, which figures prominently as the model for Van Vechten’s fictional “Black Venus” speakeasy in Nigger Heaven. Upon entering, she thought the place empty, but then “realized that black faces were beginning to extricate themselves from the dark background.” She recounts the scene with a cinematic detachment, almost as an ethnographer recording her observations on the behavior of her black subjects: “All of them dance beautifully, but violently, keeping quite still about the shoulders and swaying from the hips. When the band stopped they again faded quietly into darkness.” All in all, she was more than satisfied by her trip to this club because the speakeasy lived up to the expectations established by Van Vechten’s novel. Her evening concluded at an unnamed, carefully secluded pub. At first she was anxious and afraid as she entered the dimly lit club. She notes, “It was crowded with dusky faces; ours were conspicuous as the only white ones. I do not think we should have been admitted had Mr. Van Vechten not been there.” Her initial fright at the sense of impending danger and overall sense of foreboding recalls the Black Mass scene from *Nigger Heaven*. And like that unnamed space, she regarded this club as so covert and genuine, she was careful not to disclose its name or exact location. Publicizing it in her account would destroy the ineffable dark secrets she had learned. The sense of excitement and lawlessness of the scene was heightened by the “well-stocked bar” that greeted her upon entering, for as she reminded her readers, the United States at this time was “the land of prohibition.” Her fear finally dissipated and her sense of security returned later in the nighttime when a white policeman strolled in, “had a drink,” and left “happy.” To Weymouth, this club was the most educational and pleasing of all her stops, as she could also watch black people interacting in an environment untainted by white intrusion (except for Van Vechten’s guided party, of course). She recalled listening to “St. 
Louis Blues” “wailing” around her, and she described the music as “the broken, melancholy chant of a race of slaves, alive with a throbbing rhythm running through it, and breaking free at the close, dominant and virile.” Her tour concluded with a breakfast of waffles and fried chicken at the speakeasy, and she and her small party of whites left the club after dawn. Cynical observers, as well as a significant segment of the black community, referred to this particular version of Harlem as “Van Vechtenland,” one that was created and strengthened in the white imagination. Thurman and Rapp’s *Harlem* was originally intended as an antidote to this vision with a more accurate delineation. **“CITY OF REFUGE, CITY OF REFUSE”** When *Harlem* opened on Broadway, Whitney Bolton, a critic for the *New York Telegraph*, called the play “the most unretouched and, therefore, the most accurate of the photographs made at Seventh avenue and 132d street.” To Bolton, the photographic accuracy of the play extended to the treatment of its socially realistic characters: “The dark man of Manhattan Island and his girl of tantalizin’ tan receive here the consideration and study that no play which touched them has had before this work of William Jourdan Rapp and Wallace Thurman was written.” Other New York critics also praised the production’s veracity within its dramatic framework. One critic found the muddled melodramatic plot rather contrived, but said that “it is the many bits of authentic [N]egro life and Harlem color that make it humanly novel and interesting.” Similarly Alison Smith pointed out that even when the “feeble and disjointed” plot lagged in spots, “There [was] always the sense of an authentic picture” of black life. And Brooks Atkinson of the New York Times wrote, “As [N]egro melodrama, Harlem has a ring of authenticity that comes from the [N]egro influence in its authorship.” The generally mixed reviews of the play notwithstanding, most of the responses in the press pointed to the impressive skill with which the neophyte, white director Chester Erskin and the playwrights, one black and one white, re-created Harlem life on the Apollo stage on 43rd Street (which is not to be confused with Harlem’s Apollo Theatre on 125th Street, which opened in 1934). Although it was not the phenomenal success that Lulu Belle had been in 1926, Harlem managed to turn a small profit during its brief run on Broadway. Produced by Edward A. Blatt (who, several decades later, was the company manager of the Broadway play The Great White Hope, starring James Earl Jones), Harlem opened on Broadway on February 20, 1929, and played 93 performances (just shy of the 100-performance mark deemed necessary to be considered an unqualified hit within the industry). A few months later, a national tour of the play opened in Chicago, and while some members of the African American community petitioned to close the show, proclaiming that it offered a distorted view of black life, the production did quite well. In June of that year, the Broadway version closed rather abruptly after some financial rancor—the cast demanded they be paid the equivalent rates of other Broadway performers. The press reported that Erskin publicly called the actors “a bunch of crafty niggers” and that he vowed to shut down the show “not withstanding crowded houses.” Thurman spoke out against the reports and asserted Erskin’s innocence. 
After reassembling the cast, which included just five members of the original Broadway company along with most of the actors from the touring cast, the producers transferred the show to the Eltinge Theatre on Forty-second Street on October 21, 1929. The timing could not have been worse. The stock market crashed exactly one week later, and the reopened Harlem closed after sixteen performances. The play was the brainchild of Thurman, a major literary voice in the Harlem Renaissance and best known today for his novels The Blacker the Berry . . . (1929) and Infants of the Spring (1932), both of which also depict Harlem life. Iconoclastic and caustic, Thurman riled the old guard of the Harlem Renaissance with his “lukewarm interest in promoting African American identity.” Contemporary accounts by people who knew him, including Langston Hughes, Richard Bruce Nugent, and Dorothy West, describe him as self-loathing, morose, and extremely bitter. These qualities, Thurman’s early critics claimed, were evident in his writing. In his review of *The Blacker the Berry*, W. E. B. Du Bois said that Thurman appeared to “deride blackness.” Recent scholarship, especially by Eleonore Van Notten, David R. Jarraway, Amritjit Singh, and Daniel M. Scott III, paints a different picture. Thurman’s characters are far more varied than earlier thought. Rather than focusing on images of racial uplift or forwarding propaganda, Thurman created much more complex views of black life. He eschewed racial and sexual boundaries, and his work reflects this orientation. For example, in his novels he presents black characters who successfully pass for white (*Berry*) and ones who engage in both heterosexual and homosexual affairs (*Infants*). Thurman was intent on breaking down the barriers between the races, an effort best articulated by Raymond Taylor, the protagonist of *Infants of the Spring*: “Anything that will make white people and colored people come to the conclusion that after all they are all human . . . the sooner amalgamation can take place and the Negro problem will cease to be a blot on American civilization.” It is probably safe to surmise that this “amalgamation” was what Thurman had in mind when he enlisted the help of writer and friend William Jourdan Rapp to write a three-act play about the experiences of a representative black family in Harlem. Rapp, a former feature writer for the *New York Times* and editor for *True Story Magazine*, had written the scripts for numerous radio soap operas and was a burgeoning playwright in his own right. By the time Rapp died in 1942 at age forty-seven, he had coauthored three other Broadway plays, including *Whirlpool* (1929), *Substitute for Murder* (1935), and *The Holmeses of Baker Street* (1936). None of these was as successful as *Harlem*. Rapp and Thurman collaborated on two other plays, *Jeremiah the Magnificent* (1929), which received just one performance in 1933, and *Black Cinderella* (1929), which was apparently never completed. The basis for *Harlem* is Thurman’s short story “Cordelia the Crude,” which he wrote for the 1926 black literary magazine *Fire!!*, and which focuses on a young woman’s descent into prostitution after the sexually reticent narrator gives her two dollars after their first tryst. The climax of the story takes place at a Harlem rent party and offers a sensationalized view of Harlem after dark. This depiction of Harlem became the raison d’être for the play and the backdrop for Rapp and Thurman’s collaboration. 
Their partnership was, by all accounts, a felicitous one, and they established a strong, lasting friendship. Thurman’s correspondence with Rapp from 1929, the year *Harlem* opened, to 1934, the year of Thurman’s death, shows a strong professional and personal bond between the two men. Thurman entrusted Rapp with managing his financial affairs during his divorce from Louise Thompson and asked that Rapp be the first to be notified of Thurman’s death by the officials of the tuberculosis sanitarium where he died. In addition, Thurman confided in Rapp about the basis of the divorce suit, a sexual incident that occurred in the bathroom of the 135th Street subway station. In a narrative that has a great deal in common with “Cordelia the Crude,” twenty-three-year-old Wallace Thurman was broke, hungry, and without prospects, and he accepted two dollars from a man in exchange for sexual favors. As soon as he did, two plainclothes police officers emerged from the mop closet and took them both to jail. Thurman gave a phony name and address, spent two days in jail, and scrounged up $25 for the fine. The other man, a repeat offender, received a six-month sentence. The level of trust between Thurman and Rapp is also evident in the numerous articles they wrote in conjunction with the play’s opening. In an essay unpublished in his lifetime, “My Collaborator,” Thurman offers a glimpse of their working relationship:
> I have often wished for a movietone camera during our play writing sessions. Posterity should not be deprived of the picture of Bill Rapp, excited over the possibilities or difficulties of a scene, leaping from his chair, pacing the floor, frantically gesturing the while he shouts Negro dialect with decided East Side overtone.
The essays also suggest why the final version of the playscript seems to be a jumble of different artistic perspectives. The play attempts to integrate Thurman’s expertise in recording realistic scenes from Harlem nightlife with Rapp’s experience writing radio soap opera. Even the onstage rent party, the high point of the show, seems tacked on. Most likely this impression has to do with the fact that it was a rather late addition to the play, the “wow” that producers claimed the script lacked in its earliest incarnation. In “Detouring *Harlem* to Times Square,” Rapp and Thurman said that there were several versions of the play as they tried to “wow” the third act. They finally succeeded, and Chester Erskin and Edward A. Blatt came aboard. When *Harlem* was finally produced, the problems with the script did not go unnoticed by the critics. The physical production received generally very favorable reviews, but the script was faulted for its disjointed craftsmanship. Many critics remarked that it was serviceable, but its tone and style were inconsistent and seemed to go in several different directions at once. Indeed, as indicated by the snippets from the reviews already quoted, *Harlem* is a “cobbling together” of familiar dramatic genres, including melodrama, social realism, and the black folk play. As evidenced by the reactions in the popular press, however, in between the structural junctures of these dramaturgical forms there were flashes—or ephemeral snapshots—of presumably “natural” black behavior, “authentic” Harlem sights and sounds, and “real” black Harlemites (as opposed to actors) at work and play.
The effects of this reconstruction reaffirmed the “truth” of those images for Broadway theatergoers, but at the same time, they also pointed to the constructedness of those images in the “real” Harlem. In brief, the plot of *Harlem* centers on the Williamses, a poor and struggling black family in Harlem, and the tumultuous events that arise from a raucous rent party in their home one Saturday in late November. The play also includes a hard-boiled, young black woman who will stop at nothing in her quest for wealth and fame, gun-shooting gangsters, the murder of an oily gambler, the subsequent frame-up of a hardworking, young black man, and proper justice dispensed by a shrewd white detective. But at the core of the melodramatic maelstrom and musical mayhem is a modest black family trying to eke out a life in this strange new neighborhood. The audience learns within the first few minutes of the play that the family is new to Harlem, having only recently come north. The idealistic oldest son, Jasper, had recognized the numerous job opportunities that New York’s industrial center promised, moved there with his own wife and children, and shortly afterward summoned his extended family to this “City of Refuge” from their economic and racial oppression in the Deep South. However, the promises of a better life have been unfulfilled, as articulated by the family’s matriarch, referred to only as “Mother Williams,” who calls Harlem a “City of Refuse.” She proclaims:
> City of Refuge! Dat’s what you wrote an’ told us. Harlem is de City of Refuge. Is yo’ shure you don’ mean City of Refuse? Dat’s all dere is heah. De people! Dese dark houses made out of de devil’s brick, piled up high an’ crowdin’ one another an’ smellin’ worse dan our pig pen did back home in summer. City of Refuge! You—I—God, have mercy on our souls.
From the outset, this ambivalence toward Harlem is at the heart of the play and recalls the situation in the real-life neighborhood. But the tension between the “actual” conditions and the presumed conditions, or those associated with images of Harlem from popular culture, is defused onstage for theatergoers as it was for tourists visiting the district after dark. On the one hand, the economic and social situation of the family is rather miserable, but on the other, the sensational and riotous atmosphere belies the play’s ameliorative attitude toward their poverty. In its various drafts prior to opening on Broadway, the play was called *Black Mecca*, *City of Refuge*, and *Black Belt*, but in all cases, *Harlem* intended to present an authentic view of the neighborhood from an insider’s perspective. As responses in the black press confirmed, however, this “view” catered to that of its mostly white spectators. According to a report in the *New York Age*, an African American publication, the play’s press representative said that no advance publicity or opening night tickets were sent to the black press because the “show was primarily for ‘white consumption.’” It was presumably intended to give whites a privileged view of Harlem that black people would not need to see since they lived it. The black press did attend, however, and the criticism surrounding the play echoed that which greeted *Nigger Heaven* three years before. Reactions to *Harlem* in the black press once again stimulated the debate over visibility-at-all-costs versus the propagation of positive black images.
For example, Theophilus Lewis remarked on the equality of the play’s black representations, presented within a dramatic form typically reserved for whites. That is, the play presents melodramatic black characters the same way in which white characters would be presented in a similar kind of play. Rather than addressing an essential black difference in the drama, which plays about “exotic” black life tended to do, Lewis believed that the playwrights fashioned a play around “ordinary” individuals. He wrote, “Its characters are not abnormal people presented in an appealing light but everyday people exaggerated and pointed up for the purpose of melodrama.” Salem Tutt Whitney of the *Chicago Defender*, on the other hand, argued that the exaggerated images were particularly harmful to developing racial attitudes. In an argument similar to Du Bois’s about Van Vechten’s novel, he said:
> There is no denying the fact that “Harlem” possesses dramatic value. It moves swiftly. Events take place in rapid succession that sometimes thrill and always entertain. But it is impossible for us to like the story. It is the Race situation that furnishes the ground for my objection. Most of the white people who see “Harlem” say, and are anxious to say, that it is a true portrayal of Race life. They do not say one phase of our Race life. To me it is not realism, it is exaggeration. And thereby we are condemned as a race.
Yet as these reviews depict, the most fascinating aspect of the play is the way in which it combines both “exaggerated” and “realistic” images of black life. The play’s varied dramaturgical approaches reflect the constantly transforming terrain of Harlem and the futility of defining an “authentic” blackness. Thurman’s utopian vision of an “amalgamation” of the races is only occasionally successful in the final product, and it more strongly points to the fragmentation and hybridity of a black identity shifting and buckling under the weight of excessive conflicting representations. The pressure of accommodating the demands of a popular theater apparatus—intent on confirming racial stereotypes—all but makes the work of two artists trying to transcend racial categorization burst at its seams. If we employ Homi Bhabha’s terminology, examining the “in-between spaces” of the extremes of “realism” and “exaggeration” shows the impossibility of claiming a “truth” for a particular race of people, and this in itself is a form of transcending racial categorization.41
“GO, HARLEM, GO HARLEM, GO”
Though framed by a rather hackneyed melodramatic structure, the play’s underlying motive is undoubtedly its presumed presentation of naturalness and unfettered scenes from black life. To this end, the play celebrated pluralism, but one could argue that it also reaffirmed attitudes of white superiority. This was accomplished in a few subtle ways. Most obviously, it recapitulated the exoticizing white gaze. Unlike those attending an actual Harlem nightclub or rent party, white theatergoers could sit in their orchestra seats and study the customs and behavior of the Blacks onstage, whom the publicists went to great lengths to say came directly from Harlem. The play allowed audiences an opportunity to penetrate black life, in a manner similar to Viscountess Weymouth’s Van Vechten–escorted excursion, while maintaining a comfortable social distance, which is not guaranteed in an integrated club or party. The segregated theater conditions contributed to the separation of the races.
The irony of this is evident in a letter that Wallace Thurman wrote to his *Harlem* collaborator William Jourdan Rapp: “Five different times I have bought seats for myself to see Harlem—including opening night—and tho I asked for center aisle seats (as much as a week in advance) not yet have I succeeded in not being put on the side in a little section where any other Negro who happened to buy an orchestra seat was also placed.”42 Audiences could gawk at the black actors on stage, but they were not compelled to come into contact with them from their unobstructed and comfortable positions in the socially hierarchical Broadway theaters. Under these circumstances, *Harlem* on Broadway offered a view of Harlem that few audience members would have had the opportunity to see in real life. Most of the play occurs in the Williamses’ household, a five-room, 132nd Street railroad flat, which the family shares with several tenants. The careful attention to physical and atmospheric detail in the setting, as described in the stage directions, pictures from the production, and critical responses, demonstrates the way in which the production strove to re-create a Harlem flat with photographic realism. Reconstructed in a highly naturalistic manner, the apartment is in need of repair, “feebly lit,” and constantly assaulted by outside noises such as the screeches of clothesline pulleys, screaming and cursing neighbors, and the “salacious moans of a deep toned blues singer” emanating from a nearby Victrola. The audience is constantly reminded that the Williamses’ home is cramped and the rest of the neighborhood is closing in on it, evoking the crowded living conditions of the community. The careful attention to details of the environment (within the confines of the Williamses’ home as well as its relationship to the “real” Harlem) is indicative of the play’s claim to naturalism. The description of the set, for example, seems to be a direct imitation of Strindberg’s “backdrop-at-an-angle” design that enhanced the naturalistic effect of *Miss Julie*. In the stage directions, the playwrights say that the living room of the Williamses’ home is to be constructed “on a slant in relation to the footlights, so that the end of the rear wall on the right is nearer the front wall on the left.” This design would reduce the playing space and give the sense that the walls are literally closing in on the characters (from the audience’s standpoint anyway), causing the flat to appear crowded and too small for the family and the several lodgers. More important, however, is the sense that the play offered a wholly different view of Harlem. The effect of this slanted depiction of the Harlem home would be what Strindberg called “an unfamiliar perspective” for the audience. The play’s naturalistic setting offered a perspective of Harlem seldom seen by tourists—the private, domestic lives of Harlem residents. Through this heightened realism and overt claims of “authenticity,” Thurman and Rapp wanted to galvanize new images of African Americans and the neighborhood. Previously, works using the neighborhood as their setting tended to depict Harlem’s public spaces, such as the streets, nightclubs, and speakeasies. But *Harlem* not only offered an after-hours view of the neighborhood, it also depicted a domestic side of the community.
As Una Chaudhuri says in her discussion of stage naturalism, this manner of disclosure of the private within a public sphere allows for a theater of “total visibility,” or one that promises to “deliver the whole truth” of the world it unmasks. Even though the play’s exposure of a private realm pointed to the dire economic and social situation of the neighborhood’s residents, its emphasis on crime, jazz, and sultry dancing also revealed the depths of the presumed mysterious, exotic world of lower-class black people. The realistic scenic design and staging exposed the peripheries of the primitive, unrestrained behavior of black people in their natural setting. The play’s heightened realism and presumed authenticity also stemmed from the careful attention paid to the dialogue. According to press reports, the playwrights attempted to capture the speech patterns and singular phrases of the neighborhood and to further portray the foreignness of Harlem. To this end, they liberally peppered the script with “genuine” bits of dialogue supposedly spoken by native Harlemites. The “Glossary of Harlemisms” (an authenticating device Carl Van Vechten also employed in *Nigger Heaven*) listed in the playbill included twenty-four terms, defined so white audiences would not feel alienated by the language. A few examples include:
**Sweetback.** A colored gigolo, or man who lives off women.
**Dicty.** Highbrow.
**Monkey-hip-eater.** A derisive name applied to a Barbados Negro; supposed to have originated with the myth that Barbados Negroes are passionately fond of monkey meat, particularly “monkey hips with dumplings.”
**Chippy.** A tart; a fly, undiscriminating young wench.
**Mess-around.** A whirling dance; a part of the Charleston.
**38 and 2.** That’s fine.
**Forty.** Okay.
The use of these terms and the printed translation may have provided local color and a level of verity to the play, but there is also a potential parodic element in their inclusion. In the play, language is used in a manner similar to the black folklore recorded by Zora Neale Hurston. In the introduction to *Mules and Men* (1935), Hurston describes rural black folk’s use of language as a method of resistance. That is, they will speak only in “pleasantries” and superficialities and not divulge what they truly think and feel to meddlesome whites. According to Hurston, Blacks’ language to strangers is evasive, and while white strangers may think they understand black speech, they really don’t: “He can read my writing but he sho’ can’t read my mind. I’ll put this play toy in his hand, and he will seize it and go away. Then I’ll say my say and sing my song.” While white Broadway audiences assumed that the glossary was provided as a tool for cracking the code created by idiomatic expressions and regional dialect, this may have been Thurman’s elaborate play toy for the audiences. Parodic or not, Thurman and Rapp took great pains in the press to argue that the value of *Harlem* was not simply as a form of entertainment. In a jointly written article called “Few Know Harlem, the City of Surprises,” they state that the play highlights the differences between black and white people, which boil down to class distinctions.
They point out, for example, that there is a steadily increasing black middle class, who similar to their white counterparts “go for vacations in Europe, Atlantic City, the Maine woods and Southern California.” But on the other hand, they state, “There are some phenomena peculiar to Harlem alone, phenomena which are inherently expressions of the Negro character before it was conditioned by the white world that now surrounds him.” These main differences include the numbers game, which they call “Harlem’s most popular indoor sport and the outlet for the Negro’s craving for gambling,” and the house rent parties. They report, “Some people have found rent parties so profitable that they have become professional givers of house rent parties, getting their whole income from them.” Although the playwrights insist that the community is marked by its economic and ethnic diversity, it is the last two “institutions peculiar to Harlem” and not the hobbies of the “Americanized” black middle class that are given life in their play. The comments reinforce the notion that class, as Martin Favor explains, is “a primary marker of racial difference.” Du Bois indicated as much when he invited *Crisis* readers to respond to a questionnaire about appropriate representations of black people in art and literature. Among other questions associated with class differences, he asked: “Can publishers be criticized for refusing to handle novels that portray Negroes of education and accomplishment, on the grounds that these characters are no different from white folk and therefore not interesting?” The question itself points to the conflation of middle-classness with whiteness (and bland normalcy). “Authentic blackness,” then, is not determined by the color of one’s skin but primarily by the (lower) class status of the black individual. Within this conceptual framework, Thurman and Rapp attempted to present a more complex portrayal of familiar character types. In another joint essay, for instance, they claim that their play earns the right to be called “educational theatre” because *Harlem* “presents the [N]egro as he is,” rather than reasserting the age-old images of the “stage Negros,” or as they bluntly call them, “white folks’ niggers.” The latter images, according to the authors, consist of “the old servant or mammy type known derisively among Harlemites as ‘Uncle Toms’ and ‘handkerchiefs,’ the lazy slowfoot type typified by such vaudevillians as Bert Williams and [the Shuffle Along creators] Miller and Lyles, and the superstitious, praying type who is always thrown into abject fear by darkness, lightning and thunder.” In the same article, they quote an unnamed black critic who praises the play for making black people “understandable” to white audiences and for “educating the theater-going public.” The critic writes: “The [white] man in the orchestra seat may not sympathize with [the black characters’] motives, but he can readily understand them. And understanding these characters helps him to better comprehend the concrete Negroes he has seen in the subway or reads about in the crime columns of the newspapers.” Of course, as the critic implies, these two nonsegregated arenas would have been the most common places for whites to encounter black people directly. To Thurman and Rapp, Harlem would offer a different version of the incomprehensible, scandal-driven image propagated in the press and in literature. 
Therefore, in order to make the “inhabitants” of Harlem’s Black Belt understandable, they presented a cross-section of “concrete Negroes,” reflecting the multiple, often conflicting, and sometimes derogatory representations of Blacks in Harlem. The play and its Broadway production, however, were constantly at odds with this objective. The goal of redefining Blacks on the Broadway stage was a noble one, but it nevertheless often perpetuated “exotic” and “primitive” images of African Americans. For example, a publicity handbill hailed the play for those very images: “Harlem! The City that Never Sleeps! A Strange, Exotic Island in the Heart of New York! Rent Parties! Number Runners! Chippies! Jazz Love! Primitive Passion!” The “educational” intentions of Thurman and Rapp were pitted against the desires of Broadway theatergoers, who expected to see a version of the “real” as perpetuated by *Nigger Heaven* or *Lulu Belle*. The public relations campaign helped to ensure that these expectations would be met, and it often reconfirmed the worst possible stereotypes of black people in its effort to demonstrate the “naturalness” of the performances on stage. One of the most egregious examples of this appears in a *New York Times* profile of the twenty-five-year-old director Chester Erskin two weeks after the show opened. Erskin, according to the article, understood “that good [N]egro dramatic players are rare,” so he “visited dives, speakeasies, rent parties, restaurants, cabarets and private homes” to find suitable, authentic “personalities” for his production.
FIG. 2. *Harlem* program cover for the touring production at the Majestic Theatre in Chicago circa 1930. Artist unknown. (Billy Rose Theatre Division, The New York Public Library for the Performing Arts, Astor, Lenox and Tilden Foundations.)
The young director accumulated his cast in this manner, and with the patience that “could give Job a tussle,” Erskin “instructed” his cast on the fine points of acting. Reconfirming a stereotypical notion that black people are naturally inferior to whites, the article explains the procedure by which Erskin staged the play:
> [Erskin’s] first direction was to make his players repeat the lines after him, word for word, until they could recite them from memory. Then he permitted a few gestures and later he taught them the art of entrances and exits and how to ignore the audience. When they proved a bit slow in grasping things, their great lament was: “You know, Misto’ Erskin, we’se colored people. We cain’t think as fast as white folks.” When the play actually opened and they were praised for their individual performances they replied, “Misto’ Erskin done it.”
While Thurman and Rapp took great pains in their attempts to banish the “Uncle Tom” and “the lazy slowfoot” types from their play, as well as from the white cultural imagination, the publicity reinserted them. The article concludes with another instance of the childlike image associated with African Americans in a tribute to Erskin’s paternal patience and kindness: “[The black actors] at first insisted that he sit in the front row and watch them during every performance and often he still does. Whenever they are applauded they look in his direction for his approval.” The playwrights were evidently powerless to halt the Broadway publicity machinery that relied on such tactics to make a “black play” sell to its mostly white audiences.
Yet the conflicting images, which combined those based on elements left over from minstrelsy with more progressive representations, enacted the struggle to form a fully integrated black identity. In this regard, the play Harlem mirrored the racial complexities that characterized the neighborhood. “THE DOOMED CHILDREN OF HAM” The characters of the play are from the poor working class, and the neighborhood is certainly taking its toll, especially on the older characters. They are being gradually subsumed by the effects of modernization. On one level, the exposure of the social and economic conditions of the characters was not unlike other Broadway plays of the era that theatrically realized the lives of the urban poor. Although contemporary descriptions of the play highlighted the racy rent party dancing and the melodramatic hijinks of the gangsters and detectives who appear prominently in the play, *Harlem* also evokes the social realism of such plays as DuBose and Dorothy Heyward’s *Porgy* (1927) and Elmer Rice’s *Street Scene* (1929). The genre was a familiar one on Broadway in the 1920s, and the plays within the category tended to address the distressing results of “an oppressive urban environment.” As with these plays, *Harlem* stresses the tragic dehumanization of its characters as a result of city living, and points to the personal and familial rifts that the corrupting environment causes. In Thurman and Rapp’s play, for instance, several of the characters pine for a simpler (though far from idyllic) southern lifestyle, which they have recently left, and they repudiate the northern urban environment, which now consumes them. One of the most caustic and darkly comic expressions of this urban discontent is Father’s response to another character’s complaint about the crowded subway conditions. He answers, “Dey may lynch you down home, but dey shure don’t squeeze you to death on no subway.” Whereas the South has its share of random misery, the North’s modern conditions are much more stifling and suffocating (both physically and socially). According to Father, there is, ironically, far less freedom for black people in this new environment than there had been in the South. It is certainly not the “City of Refuge” black migrants had been promised. For Broadway audiences, however, *Harlem*’s constricting backdrop seems little more than a mere gripe for party poopers like Mother and Father who complain nonstop about the living conditions and who refuse to enjoy the raucous rent party. In addition to the Broadway realism of the play, there are characteristics of other genres that were also prevalent in the 1920s. These variant dramaturgical components, as several critics pointed out, do not always successfully meld in *Harlem*. Brooks Atkinson, for instance, called the play “a rag-bag drama and high pressure blow-out all in one,” and Richard Lockridge described it as “a play which at its least is sudden melodrama, broken by pistol shots, and at its best a colorful, changing picture of the dark civilization within our lighter one.” Arthur Ruhl saw a dramatic structural divide based on the supposed logical outcome of its racially divergent authors. 
He writes that the play “was composed of two different strains, and one of these what might be described as the white or Broadway element overlaid the black.” Judging from the critics’ reactions, one can see that the familiar conceits of the melodrama and social realism (forms associated with white playwrights) did not integrate well with the “authentic” pictures of black life (identified with Thurman’s contribution). The opening of the play, for example, juxtaposes the expectations of the urban social realism drama, and its tawdry, tragic implications, with a kinder, gentler form. Aside from the laments about the ill effects of urbanization, Harlem later gives the impression that it is closer in form to a folk drama, which tended to employ provincial settings. For instance, the first act begins in the Williams household as the family prepares for the rent party, and the act concludes with the party itself. Little else happens between. The characters clean, discuss burned bread, and debate whether or not they are better off in Harlem than they were down South. New York World critic Alison Smith praised this slice-of-life aspect of the play, stating, “It has the deep, half unconscious thrill of compassion which the Negro actors give to a study of nostalgia, the bewildered, inarticulate homesickness of a little family, lured from their North Carolina cabin into the smouldering jungle of Harlem.” The domestic setting and the leisurely unfolding of the action bear the hallmarks of black folk drama form, especially in its presentation of a family faced with adversity. This form, incidentally, would not have been a completely unfamiliar one to many in the audience at Harlem. The black folk drama was primarily a staple of church groups and playwriting competitions in black journals, and the plays occasionally appeared in commercial theaters. In fact, the first nonmusical play written by an African American to appear on Broadway, Willis Richardson’s The Chip Woman’s Fortune (1923), fit this genre. Historically, the folk drama form, to which Richardson subscribed, was consciously modeled after the Irish folk plays of writers such as J. M. Synge and Lady Gregory—a comparison echoed by Heywood Broun’s remarks. Just as Thurman and Rapp intended to banish the “white folks’ niggers” from their play, the Irish authors intended to banish the stereotypically sentimental, drunk, and pugnacious “stage Irishman” and instead depict honestly the provincial Irish. Similarly, the African American folk playwrights attempted to capture, in James Hatch and Leo Hamalian’s description, “the everyday life of ordinary black people during hard times.” The handwringing, destitute Mother of Harlem, for instance, who continually prays for the souls of her family, seems to be the direct descendant of the keening Maurya in Synge’s Riders to the Sea (1904). An indication of this background occurs midway through the first act, when Mother, overwhelmed by the family’s misfortunes and their propensities for rent parties, “buries her head in her hands and sways the upper part of her body,” beseeching: “Father in heaven! Father in heaven! Forgive dis sinful household. Lawd, fo’give dem. Save my poor wicked children. Watch over dem. Show dem de light. Guide dem, Father. Shield dem from de devil and cleanse der bodies with de Holy Spirit. Amen! Father! Amen!” Yet pitted against the urban realities of this play, the folk characteristics come off as quaint, nostalgic, and outdated. 
The two oldest family members, Mother and Father, for instance, are particularly emblematic of the folk drama form. They represent bucolic domesticity, but they are subsumed by urban industry. The stage directions, for example, describe Mother as a “typical southern woman, ready to moan and pray at the slightest provocation,” but she has no control over her children. About Father, a large, gruff man, the stage directions say, “The North has rendered him helpless. He is just a big hulk being pushed around by economic necessity.” Displaced and discontented, Mother and Father represent what Alain Locke in 1925 called the “Old Negro.” That is, as opposed to the “New Negro,” who is “inevitably moving forward under the control largely of his own objectives,” Mother and Father represent the previous generation of Blacks who lack autonomy, consciousness, and self-respect. These characters are bereft of proper names in the play perhaps because, as Locke also explains, the Old Negro “was more of a formula than a human being—a something to be argued about, condemned or defended, to be ‘kept down,’ or ‘in his place,’ or patronized, a social bogey or a social burden.” Even more significantly, the parents lack control over their family as well as the rent party in their home. The parental roles actually belong to Jasper, who brought the family to Harlem, and his sister Cordelia, who runs the household. Mother and Father have succumbed to what Cornel West describes as the “white world’s view” of themselves and their condition. They have little or no agency and do not foresee that black people will improve their conditions; in short, they have accepted the circumstances of white supremacy. Mother places all of her hope for progress in religion, and Father has simply lost hope that black people will endure in a white world. As Father despairingly explains, “Dey ain’t nothin’ for a nigger nowhere. We’s de doomed children of Ham.” Their “devaluation” and “degradation” have essentially left them ineffective in the environment in which they are placed. As West argues in relation to Ellison’s *Invisible Man* (1952), when total submission or hopelessness saturates a black individual, the situation renders him or her invisible and without humanity, hence “nameless.” Mother’s and Father’s own namelessness corresponds with their lack of connection to a community, and as West also writes, the “theme of black rootlessness and homelessness is inseparable from black namelessness.” For Father, the sense of eternal displacement, no matter where he is placed, has turned in on itself to become a racial hatred, which is evident in an exchange with Jasper:
**Father**: You know what’s wrong wid’ Harlem? Dey’s too many niggers! Dat’s it—too many niggers.
**Jasper**: You said the same thing ’bout down home.
The exchange also shows the suffocating effects of segregation. The lack of diversity in a ghetto produces frustration and dissatisfaction among the clustered masses. Whereas Mother and Father appear antiquated and ineffectual in this environment, and the hope of a new homeland where industrious African Americans may establish roots goes unrealized in them, the promise of social betterment is rendered through their oldest son, twenty-eight-year-old Jasper. He represents the epitome of Locke’s definition of the “New Negro” and is the model of racial uplift that Du Bois and others advocated in the black arts.
Unlike his parents, Jasper is forward thinking, hardworking, and optimistic about improved social conditions for Blacks. More importantly, rather than being subsumed by Harlem, he is empowered by it. He says about his environment, “Why, Harlem is the greatest place in the world for Negroes. You can be a man here. You can ride in the subway and go anywhere your money an’ sense can carry you.” In direct contrast to his father’s unmanly inability to lead the family, hold a job, or secure self-respect, Jasper is autonomous, driven, and self-reliant. He also represents the powerful synthesizing of the black split subjectivity as articulated in W. E. B. Du Bois’s definition of “double consciousness.” In Du Bois’s system of black empowerment, Jasper represents the fulfillment of the desire to integrate the fractionated black (male) subject, which Du Bois describes as the “longing to attain self-conscious manhood, to merge the double self into a better and truer self” and ultimately “make it possible for a man to be both a Negro and an American, without being cursed and spit upon by his fellows, without having the doors of Opportunity closed roughly in his face.” In the first act of the play, the Williams home becomes a battleground for the opposing forces of the Old and New Negro, and Locke’s ideas are given dramatic immediacy. These dialectical representations personify the transformational black cultural identity of the 1920s. As Stuart Hall articulates, “Cultural identities come from somewhere, have histories. But, like everything that is historical, they undergo constant transformation. Far from being eternally fixed in some essentialized past, they are subject to the continuous ‘play’ of history, culture and power.” The Williams home symbolizes the nexus of black culture. Past and present collide here, and black cultural identity is (to reiterate Bhabha) “in the process of being formed.” But this process is certainly not without resistance. If Mother and Father represent what Blacks used to be, and Jasper represents what Blacks are “becoming” according to Alain Locke’s specifications, then thrown into this atmosphere is the menace to that cultural identity, Cordelia Williams, Harlem’s Pandora, Lulu Belle, and Lasca Sartoris all rolled into one. “**SUGAR FOOT MISTERS AN’ SUN DODGIN’ SISTERS**” Cordelia, the oldest Williams daughter, is the central character of the play and the cause of the sensational events that occur. Her madcap machinations threaten to bring down the entire house and throw the dramaturgical structure off-kilter. In fact, by the beginning of the rent party, it is clear that the quaint black folk drama form combined with the urban social realism cannot repress the divisive, unrestrained, and explosive energy that Cordelia has unleashed on this vision of the Harlem neighborhood. Near the end of act 1, the play has veered off from the picturesque realism and into full-blown melodrama, reminiscent of the white-concocted *Lulu Belle*. Similar to the title character of that play, and also like Lasca Sartoris in *Nigger Heaven* (comparisons several critics invoked), Cordelia is a brazen, hard-hearted, young black woman. 
Walter Winchell referred to her in his review as a “chippie off the old block,” and throughout *Harlem*, she is variously referred to as a “chippie” (or a loose woman), a “hincty [or “snooty”] wench,” and a “good-for-nothin’ strumpet.” While Mother, Father, and Jasper evoke issues of race associated with class, Cordelia is defined by her alluring, but dangerous, sexuality. From her initial appearance, the stage directions make this perfectly clear: > [Cordelia] is about eighteen years old and has dark brown skin and bobbed hair. She is an overmatured, southern girl, selfish, lazy, and sullen. She is inspired by activity or joy only when some erotic adventure confronts her or a good time is in view. She has no feeling for her parents or for her brothers and sisters. Considering herself a woman of the world, she holds their opinions and advice in contempt. She is extremely sensual and has an abundance of sex appeal. Her body is softly rounded and graceful. Her every movement and gesture is calculated to arouse a man’s eroticism. Cordelia’s uninhibited sexuality and uncontrollable need for excitement explode the conventions of the outmoded folk drama form, and she sets the melodramatic apparatus into play. The backdrop for this modern morality play is the sexually charged onstage rent party (or as the playbill’s glossary defines it, “A Saturday night orgy staged to raise money to pay the landlord”), which Cordelia commandeers. By the end of the first act, the guests and musicians have all arrived, and the party is in full swing. Robert Littell referred to this scene as “a queer, sordid, good-natured orgy, with fifteen or more couples hugging each other in the most extraordinary dances.” The scene was particularly significant in that it re-created the Harlem that audiences wanted to see: A Harlem infused with sultry jazz music and torrid dancing. According to the responses in the press, the dancing in this scene was “sensual,” “barbaric,” and “anything but lovely” (one critic described it as “grizzly bear dancing”). The stage directions confirm that its blatant allusion to sexual activity was the intended result. The playwrights describe the staging in the following manner: Body calls to body. They cement themselves together with limbs lewdly intertwined. Another couple is dipping to the floor and slowly shimmying belly to belly as they come back to an upright position. A slender, dark girl with wild eyes and wilder hair stands in the center of the room supported by the strong lithe arms of a longshoreman. Her eyes are closed. Her teeth bite into her lower lip. Her trunk is bent backward until her head hangs below her waist, and all the while the lower portion of her body is quivering like so much agitated Jell-O. As evidenced by the critical responses, the erotic, “quivering” black bodies on display in this scene delivered the third act “wow” that the playwrights so desperately sought. For some critics, the scene underscored the supposed cultural and instinctual differences between black people and white people. Richard Lockridge, for example, referred to the black dancers as “unself-conscious and barbaric,” and in the rent party scene “the members of the cast seem to forget they are acting and . . . 
give themselves over to rhythms which the [N]egro has brought to the white man and which the white man, however he may try, is always a little too self-conscious to accept.” The seemingly “natural” and spontaneous dancing on view in the rent party scene reiterated the entrenched view of an undeniable black primitivism. For Broadway audiences accustomed to seeing the energetic, precisely choreographed dances of musical comedies and revues, the undulating, groping black dancers offered a physicality that seemed unrehearsed, unrestrained, and unconscious. That is, the scene authenticated the romantic and popular notion that black people are naturally “exotic” and “primitive.” Lockridge, for example, went even further in his review to argue that the overtly sexual dancing actually made the melodramatic murders in the play’s plot frighteningly believable. The glimpses of “actual” black behavior provided a backdrop for the formulaic aspects of the play, which gave the production a layer of truth and authenticity. He states that the actors “dance lustily, swayingly, shamelessly and reveal the simplicity and deep earthiness of their race’s hold on life. And the melodrama of murder is made the more real and plausible by the revelation which the dancing gives of their uncerebral directness. Men and women who dance like that have the strength for violence.” To this particular critic, the primal movement of the black dancers, framed within the proscenium at the Apollo Theatre on Broadway, pointed to a presumed historical and biological primitiveness and barbarism associated with black bodies. Similarly, Whitney Bolton wrote that he was “not at all sure that many of the players didn’t forget they were on a stage and believed themselves actually participants in a rent party.” Therefore, the enactment of the rent party potentially granted what Barbara Kirshenblatt-Gimblett describes as an “unmediated encounter” for the Broadway audiences, or one in which the “performances . . . create the illusion that the activities one watches are being done rather than represented, a practice that creates the illusion of authenticity, or realness.” Separate from the contrivances of the play’s plotting, the rent party scene offered not just an image of the “real,” but an interaction with it and moments of complicity in the illusion. As Robert Littell wrote about this sensation, “Stage parties are as a rule pretty terrible, but the [N]egro rent-paying guests throw themselves into it with such spontaneous go and enthusiasm that one feels as if one was there.” The unrestrained sexual behavior that characterized this appreciation for Harlem, however, was not completely at home on the notoriously conservative Broadway. Activities tolerated and applauded in Harlem were cause for arrest on Broadway as a result of the Wales Padlock Law established in 1927. 
As Brooks Atkinson explains in his 1970 book *Broadway*, this law “empowered the police to arrest the producers, authors, and actors of plays that the police disapproved of, and to padlock the theater for a year if the courts brought in a verdict of guilty.” About *Harlem* and its salacious rent party scene, Burns Mantle of the *Daily News* cautioned that some theatergoers might be offended by the erotic “animalistic exhibitions” of the “‘Harlem’ realists” because “unfortunately there are likely to be those in the audience who are a bit sensitive about learning the facts of life in mixed company.” Some of the other critics feared as well that the overly suggestive dancing by the fifty-or-so supernumeraries might cause the police to halt the show and close it down. Atkinson predicted in the *Times* review that the show would have a good run, “Or will if the police censors, who were in the audience last evening do not clang down Forty-second Street with their patrol wagons.” Like Atkinson, Bide Dudley of the *Evening World* implied that the censor might forcefully tone down the “exaggerated dancing” a bit, but Whitney Bolton said that “such dancing is on view in any [N]egro cabaret and if the police interfere with this, they ought, in fairness, to interfere uptown.”84 There were, however, no raids upon *Harlem*. Although chiefly a gimmick to attract audiences who craved the exuberant and sensational side of Harlem, the rent party also figured rather importantly in the plot. Cordelia, who represents this image of the devil-may-care Harlemite, uses the party as an opportunity to seduce one of the guests, the “shy and slippery” Roy, a “numbers runner,”85 and impetuously, she agrees to move in with him without the benefit of marriage. And just as Lulu Belle tormented the upstanding and faithful George and led him to ruin with her own wily ways, and Lasca Sartoris brought about the destruction of Byron Kason in *Nigger Heaven*, Cordelia leads the young man who thought he could domesticate her, the love-struck Basil, to the brink of a murder he is later accused of committing. As the curtain descends on the first act, and as the dancing at the rent party becomes more intense, Basil vows to “slit” Roy’s “dirty guts” while Cordelia exits with “loud mocking laughter.”86 The slice-of-life portrait of Harlem all but dissipates, and the high-speed melodramatic antics precipitated at the end of act 1 continue into act 2. The second act takes place in Roy’s apartment, where he and Cordelia have begun to make a home for themselves (in time sequence, it takes place almost immediately after the first act). Whereas the previous act takes its time in building the momentum that culminates in the rent party, in this, the shortest of the acts, the events unfurl at a breathless pace. First we meet Kid Vamp, Roy’s dashing but insidious “banker.” When Cordelia goes out for cigarettes, the “Kid” kills Roy for withholding money from him and hides the body behind an arras. By the end of the act, and after several dramatic twists and turns, Cordelia, not knowing that the “Kid” is a murderer, promises to move in with him. In addition, Basil, who has followed Cordelia to Roy’s apartment, gets into a fight with the Kid. (Cordelia has exited again and does not witness it.) Basil is knocked out in the tussle, and the Kid seizes the opportunity to place the gun in Basil’s hand, framing him for Roy’s murder.
And in nail-biting melodramatic fashion, Basil resumes consciousness as the police are banging on the door, and he flees out the bathroom window to safety. By the third act, Cordelia has returned home where the rent party continues, and she has implicated her entire family in the swirl of disorder she initiated. It will take an outside (white) presence to sort things out. In this act, the various theatrical genres crash together and create an atmosphere of combustible energy. Once again, returning to the Williamses’ home, the play reverts to its previous social realism and folk drama forms. For example, there are two rather lengthy bits in which Dr. Voodeo, a dealer of spiritual powders and herbs, and the Hot-Stuff Man, a dealer in stolen clothing, ply their wares. Neither character advances the plot, but they provide local color and offer a glimpse into particular aspects of black life. The Hot-Stuff Man explains, for instance, that he does such strong business in Harlem because black people cannot appear to be poor if they are to be accepted by white society. He says: “Folks in Harlem has to dress. They gotta’ look as good or betta than white folks and they don’ have as much money to spend. It takes fellows like me to fix ’em up—see?” The scenes with these characters give way to the obligatory unraveling of the melodramatic crime, which is the central feature of the act. The tension builds steadily, and the act includes a shoot-out, the death of the villain (Kid Vamp), and the vindication of the hero (Basil). The troubles wrought upon the house by Cordelia are sorted out by Detective Sergeant Palmer (named Donohue in the original script)—the sole white character in the play. His presence, even in this predominantly black neighborhood, serves as a palpable reminder of the social hierarchy of the 1920s and affirms what many race theorists argue: Race as a legal construct cannot be denied. In this hot pot of lawlessness and social unrest, the white patriarchal figure is on the scene almost immediately to solve the problems among the black residents and restore order to this very public domestic space. The hope for an autonomous, independent black (male) leader, as embodied by Jasper, is dashed. It turns out Jasper is powerless to control his sister, and a white deus ex machina is necessary to settle the chaos. As Daniel Gerould explains, this reinscription of the social status quo is typical of melodrama, and according to C. W. E. Bigsby, early-twentieth-century realism is characterized by “a faith in social and metaphysical order which remained curiously untroubled.” The play ends as Cordelia, rebellious as ever, exits the Harlem flat with one of the party’s musicians, Ippy (for those who are keeping count, he is her fourth lover in the play), vowing to be a star on the stage. Mother, on the other hand, is overwhelmed by the events of the evening, and defeatedly cries, “Lawd! Lawd! Tell me! Tell me! Dis ain’t de City of Refuge?” The plaintive sigh of Mother is overshadowed by the sensational exit of Cordelia and the possibilities that lie ahead for her. As Ippy explains: “She don’ have to stay in Harlem. Look at Josephine Baker—makin’ all Paris stand on its head! Look what Florence Mills did! Look at Ethel Waters! Why Delia got more than all of them—more voice, more shape, more pep to her dancing! Given a chance and someone to coach her, she’d set the world on fire.” According to the reviews in the popular press, this assessment was not too much of an exaggeration.
Isabel Washington apparently played the role to the hilt in the original New York production and received mostly raves. Alison Smith described her performance as “almost fatally realistic.” Robert Garland referred to her as “Vivid, cheap as cheap can be, you believe in her and her tawdry affairs.” Robert Littell wrote, “The wild, raucous, hard-boiled, sensuous abandon of Isabel Washington is worth going a long way to see,” and “Miss Washington’s inexhaustible natural pep, and a gorgeous hoarse voice, which blows out of her like a factory whistle when she is angry, makes this character something quite new and fascinating.” On the other hand, Whitney Bolton found her performance offensive in its unrestrained physical exhibition of sexuality, and in his review said that the producer, Edward A. Blatt, should “urge Miss Washington to curb her dislocations in the interest of peace and prosperity.” Likewise, Bide Dudley of the *Evening World* suggested that she “pipe down a bit” and rein in her unseemly lewdness.\(^{92}\) Paradoxically, the excessiveness of Washington’s performance was hailed, or disparaged in a few cases, because of its remarkable “naturalness.” The reactions to the performance recall similar points that Alisa Solomon makes in her discussion of Nora in Ibsen’s *A Doll’s House*. Just as actresses playing Nora created a stir in their offensive portrayals of “inappropriate behavior” for upstanding women, Washington’s performance as Cordelia registers as “naturalistic” precisely because it is “unbecoming.” \(^{93}\) This “unladylike,” predatory manner was indeed not strictly a “new and fascinating” creation, as Littell writes. To a large extent, expectations of black femininity had already been conditioned by what people had read about or seen in other Broadway shows and in the nightclubs uptown. Lasca Sartoris from *Nigger Heaven* and Lulu Belle from Sheldon and MacArthur’s play, for example, were well-known representations of the trope of the female, black, sexual snare. Isabel Washington, however, supplied an additional layer of authenticity to her performance that may qualify it as “new and fascinating”: Unlike the stage incarnations of the aforementioned black characters, Washington was actually an African American. The few times that *Nigger Heaven* had been represented in musical revues the performers were in blackface, and Lenore Ulric, a white actress, likewise played Lulu Belle in blackface. Therefore, the representation was certainly not new, but the chippie of Rapp and Thurman’s *Harlem* was at least played by a black woman. The fate of Cordelia in the play represents an even more transgressive dramaturgical act. Rapp and Thurman may have given the Broadway backers the third act “wow” they demanded, but the playwrights did not budge on the fate of Cordelia. In typical melodramatic structure, decadent and dangerous Cordelia, along with the gangsters and murderers, should have been punished (or destroyed) in the end. Accordingly, good must win out in the moralistic framework of the well-made play. In fact, Rapp and Thurman were advised to rewrite the ending of *Harlem* to make it more palatable for Broadway audiences and the New York censor. Ben Hecht and Charles MacArthur, the playwrights of the smash hit *The Front Page* (1928), and the latter the cowriter of *Lulu Belle*, offered a detailed scenario for the recommended revision.
According to Rapp and Thurman, Hecht and MacArthur suggested “the play should show Cordelia Williams going on and on along her sinful career and finally ending up disastrously, say, in Paris.”94 This is exactly the way, perhaps not surprisingly, that *Lulu Belle* ends, and Rapp and Thurman politely declined the advice. Wallace Thurman’s tendency to avoid literary moralizing is evident in much of his work (and incensed many of his contemporary critics), and perhaps this is why he and Rapp left Cordelia’s future uncertain. She is the portrait of a true individual, not bound by gender, race, or sexuality, and in her final renunciation, she claims that she is “gonna’ be livin’ high, standin’ in de lights above deir heads, makin’ de whole world look up at me.”95 She is the embodiment of youthful dreams and creative expression, a utopian view of the black artist. It is also tempting to read a little of Ibsen’s Nora into Rapp and Thurman’s Cordelia. Both characters are defiant in the end, leaving their confining domestic spheres for journeys of self-discovery. *A Doll’s House* ends with a distraught Torvald, all alone, questioning his own moral beliefs. Similarly, Rapp and Thurman’s play concludes with a keening Mother Williams reconsidering Harlem as a place where African Americans can live freely and morally. Her entreaty is drowned out, however, by the throbbing sounds of partying and jazz music. Throughout *Harlem*, there are moments when the play threatens to collapse under the weight of the musical underscoring, metatheatricality, and overlaid dramatic forms. The strain caused by these different aspects of the play is a result of the dramaturgical “hybridity,” to apply Homi Bhabha’s term, and its uneasy mixture of several dramatic genres.96 Between the gaps of the melodramatic and naturalistic forms, critics believed they detected the “bits of authentic [N]egro life,” or photographic glimpses of a “real” Harlem. Within these rifts, such as during the first-act rent party scene, they argued, genuine black behavior could be observed, for as Solomon poetically explains in relation to *A Doll’s House*, “realism trembles to life in the tension between melodrama and metaphor.”97 The play’s moments of presumed “naturalness” were therefore the ironic result of the very visible seams of the theatrical forms. The dramaturgical forms and character representations shift and turn back on themselves in *Harlem* and make the “real” purely conjectural. Plumbing the depths of the play for a putative black authenticity reveals not a fixed cultural identity but one that is constantly transforming. The merging of the distinct forms, and the presumptions surrounding the combination of black and white elements, reflect the neighborhood’s own manufactured authenticity. Harlem in the 1920s was a mass of contradictions: Determining its essential character is a foolhardy venture, for as one character says in Thurman and Rapp’s play, “Harlem is sho’ one funny place.” Yet examining the neighborhood as a contested space of racial images, weighing the varying notions of a unified definition of “African American,” and sifting through the differing claims of a “real” Harlem, one exposes the fluid nature of an identity, presumed to be fixed, that is nonetheless elusive, deceptive, and fantastically mutable.
Magnetic phases of skyrmion-hosting GaV$_4$S$_{8-y}$Se$_y$ ($y$ = 0, 2, 4, 8) probed with muon spectroscopy

Kévin J. A. Franke,1 Benjamin M. Huddart,1 Thomas J. Hicken,1 Fan Xiao,2,3 Stephen J. Blundell,4 Francis L. Pratt,5 Marta Crisanti,6,8 Joel A. T. Barker,6 Stewart J. Clark,1 Aleš Štefančič,8 Monica Ciomaga Hatnean,6 Geetha Balakrishnan,6 and Tom Lancaster1

1Centre for Materials Physics, Durham University, Durham, DH1 3LE, United Kingdom
2Laboratory for Neutron Scattering, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
3Department of Chemistry and Biochemistry, University of Bern, CH-3012 Bern, Switzerland
4Oxford University Department of Physics, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, United Kingdom
5ISIS Facility, STFC Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire, OX11 0QX, United Kingdom
6Department of Physics, University of Warwick, Coventry, CV4 7AL, United Kingdom
7Institut Laue-Langevin, CS 20156, 38042 Grenoble Cedex 9, France
8Laboratory for Muon Spin Spectroscopy, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland

I. INTRODUCTION

In recent years, a number of spectacular advances have demonstrated the existence, not only of magnetic skyrmions, but also their ordering into a skyrmion lattice (SkL). The potential of skyrmions as high-density, low-power data-storage devices is driving their exploration [1,2].
Both GaV$_4$S$_8$ and GaV$_4$Se$_8$ have a noncentrosymmetric cubic crystal structure, but at around 40 K a Jahn-Teller transition in both materials changes this to a rhombohedral polar C$_{3v}$ symmetry by stretching the lattice along one of the four (111) cubic axes [15,21]. Below the respective magnetic ordering temperatures ($T_c$) [15,16], in contrast to other bulk SkL-hosting systems, the SkL in GaV$_4$S$_8$ was observed over an increased temperature region, persisting from $T_c$ down to 9 K ($\approx 0.7 T_c$). In GaV$_4$Se$_8$ the SkL was reported to extend down to zero kelvin. The large temperature range over which the SkL is observed in both of these compounds, and the fact that substituting Se for S appears to further increase the stability region of the SkL, motivate an investigation of intermediate compositions, where one might hope to interpolate between both end compounds, or even find a further increase in the extent of the SkL region in the phase diagram. The use of transverse field (TF) muon-spin rotation ($\mu^+$SR) to investigate the SkL was motivated by its use in probing the vortex lattice in type-II superconductors, where the technique provides measurements of the internal field distribution caused by the magnetic field texture. It has been used to probe the SkL region in bulk Cu$_2$OSeO$_3$ [27] and thin films of MnSi [28] and FeGe [29]. In contrast, the use of longitudinal field (LF) $\mu^+$SR on skyrmion-hosting materials has not been reported in detail, despite initial hints that it was effective at probing the emergent dynamics that accompany the skyrmion phase [30]. Here, we present the results of a $\mu^+$SR investigation of the skyrmion-hosting materials GaV$_4$S$_8$ and GaV$_4$Se$_8$ and intermediate compounds from the GaV$_4$S$_{8-y}$Se$_y$ series, with $y = 2$ and 4. We present measurements in the LF, TF, and zero-field (ZF) geometries, allowing access to static and dynamic local magnetic properties. While GaV$_4$S$_8$ and GaV$_4$Se$_8$ are very similar in exhibiting a SkL phase, we show below that their magnetic behavior is quite different. This is likely attributable to subtle (but significant) differences in the electronic ground states of the two systems. We find that the intermediate compounds with $y = 2$ and 4 do not undergo a transition to a state of long-range magnetic order, but instead show a glassy freezing of dynamics as the temperature is reduced. Our LF $\mu^+$SR measurements reveal that the skyrmion phases in GaV$_4$S$_8$ and GaV$_4$Se$_8$ give rise to emergent dynamics on the muon (microsecond) time scale, reflecting the slowing of magnetic fluctuations perpendicular to the applied field. In GaV$_4$S$_8$ these dynamics are observed over the temperature and field range where the SkL was observed in single crystals [15,16]. For GaV$_4$Se$_8$, the observed dynamics are more limited in their extent in the $H$-$T$ phase diagram than the reported extent of the SkL [17]. It is thus plausible that the SkL region of GaV$_4$Se$_8$ is less extensive than previously suggested. This paper is structured as follows: In Sec. II we describe our methods; in Sec. III we discuss the characterization of our samples using AC and DC magnetometry; in Sec. IV we describe ZF measurements on the $y = 0$, 2, 4, and 8 materials; in Sec. V we turn to the dynamics of the materials and reveal the dynamic signature of the SkL seen in LF measurements, while in Sec. VI we present TF measurements investigating static local field distributions. In Sec. VII we present the results of calculations of the nature of the muon sites in this system. We discuss our findings and their consequences in Sec.
VIII and finally present our conclusions in Sec. IX.

II. EXPERIMENTAL

Polycrystalline samples of GaV$_4$S$_{8-y}$Se$_y$ ($y = 0, 2, 4, 8$) were prepared by reacting stoichiometric amounts of high-purity elements (Ga, V, S and Se) in evacuated silica tubes. The samples were heated at a rate of 10 °C/h to 810–830 °C, kept at this temperature for 300 h, and afterwards water quenched. DC magnetization measurements were performed using a Quantum Design MPMS. AC susceptibility measurements were performed on the same instrument at an excitation frequency of 10 Hz with an amplitude of 0.1 mT. In a ZF $\mu^+$SR measurement [31,32], spin-polarized muons are implanted in a magnetic material. Muons stop at random positions on the length scale of the field texture, where they precess about the total local magnetic field $B$ at the muon site with frequency $\omega = \gamma_\mu B$, where $\gamma_\mu = 2\pi \times 135.5$ MHz T$^{-1}$ is the muon gyromagnetic ratio. The observed property of the experiment is the time evolution of the asymmetry $A(t)$, which is proportional to the average muon spin polarization $P_z(t)$. In a TF $\mu^+$SR experiment [31,32], a magnetic field $H$ is applied perpendicular to the initial muon spin direction. In a LF $\mu^+$SR experiment, the external field $H$ is applied in the direction of the initial muon-spin polarization, suppressing the contribution from static magnetic fields at the muon site. This allows us to probe the dynamics of the system, as time-varying magnetic fields at the muon site are able to flip muon spins and therefore to relax the average muon polarization. We performed ZF and TF $\mu^+$SR measurements using the general purpose surface-muon instrument (GPS) at the Swiss Muon Source (S$\mu$S). LF and ZF measurements were carried out using the HiFi spectrometer at the ISIS muon source. Polycrystalline samples were packed into Ag foil packets (foil thickness 25 $\mu$m) covering approximately 2 $\times$ 2 cm$^2$ in area and mounted in a He-4 cryostat. For measurements on HiFi, the sample was mounted on a silver backing plate, while on GPS the packet was suspended in the beam on a fork. Data analysis was carried out using the WiMDA analysis program [33]. For all measurements, the sample was warmed above the magnetic ordering temperature $T_c$ and cooled in a fixed applied magnetic field $H$. Measurements were made on warming. To understand the differences in the response of the muon in GaV$_4$S$_8$ and GaV$_4$Se$_8$ we carried out density functional theory (DFT) calculations using the plane-wave basis set electronic structure code CASTEP [34–37]. In order to identify muon stopping sites in this material, we carried out spin-polarized DFT calculations with the generalized gradient approximation (GGA) [38]. Further details of these calculations can be found in the Supplemental Material (SM) [39].
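As a quick numerical check of the precession relation above, the following sketch converts a local field at the muon site into a precession frequency. The 0.1 T field value is an arbitrary illustration, not a value taken from this work.

```python
import numpy as np

GAMMA_MU = 2 * np.pi * 135.5e6  # muon gyromagnetic ratio, rad s^-1 T^-1

def precession_frequency(B_local):
    """Return the muon precession frequency (Hz) for a local field B_local (T)."""
    omega = GAMMA_MU * B_local       # angular frequency omega = gamma_mu * B
    return omega / (2 * np.pi)       # convert to ordinary frequency

# Example: a 0.1 T local field gives ~13.55 MHz, i.e. a precession period of ~74 ns,
# comfortably shorter than the 2.2 us muon lifetime.
print(f"{precession_frequency(0.1) / 1e6:.2f} MHz")
```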
III. MAGNETIZATION MEASUREMENTS

Magnetization measurements made in an applied field of 10 mT as a function of increasing temperature are shown in Fig. 1. The transition from a magnetically ordered to the paramagnetic phase is observed as a peak in the magnetization and is indicated by a vertical line [40]. While the transition is observed at $T_c = 12.5$ K for GaV$_4$S$_8$ and $T_c = 17.5$ K for GaV$_4$Se$_8$, it drops to $T_c = 2.3$ K and $T_c = 2.5$ K for GaV$_4$S$_6$Se$_2$ and GaV$_4$S$_4$Se$_4$, respectively. This constitutes a dramatic decrease in the critical temperature for the intermediate compositions in the GaV$_4$S$_{8-y}$Se$_y$ series.

FIG. 1. Magnetization $M$ of (a) GaV$_4$S$_8$, (b) GaV$_4$Se$_8$, (c) GaV$_4$S$_4$Se$_4$, and (d) GaV$_4$S$_6$Se$_2$ as a function of temperature in an applied magnetic field of 10 mT. Vertical lines indicate the location of the transition from a magnetically ordered to the paramagnetic phase.

Figure 2(a) shows the real part $\chi'$ of the AC susceptibility from measurements on GaV$_4$S$_8$. Whereas $\chi'$ is commonly observed to decrease in the SkL phase relative to the conical phase in materials such as B20 compounds and Cu$_2$OSeO$_3$ [7,41,42], we see an increase in $\chi'$ in the region of the phase diagram where the SkL phase has been observed [15]. This suggests that the magnetization in this system more closely follows the excitation field in the SkL than in the surrounding phases. Broad susceptibility peaks [Fig. 2(b)] indicate slow dynamics at the boundaries between the C, SkL, and FP phases. The same increase in $\chi'$ surrounded by peaks in the susceptibility was reported for GaV$_4$S$_8$ single crystals by Butykai et al. [43]. As a consequence, we attribute the enhancement of susceptibility coinciding with the SkL phase directly to the SkL and not to an angular average over peaks marking phase transitions. As noted previously [15], the SkL phase is located between the C, FM, and PM phases, which is different from typical skyrmion-hosting materials, where the SkL occurs in a small region of the phase diagram.

FIG. 2. Top: Mapping of the phase diagrams of (a) GaV$_4$S$_8$ and (c) GaV$_4$Se$_8$ using the real part $\chi'$ of the AC susceptibility. White lines show the GaV$_4$S$_8$ phase diagram for comparison. Bottom: Real part $\chi'$ of the AC susceptibility as a function of applied field for selected temperatures for (b) GaV$_4$S$_8$ and (d) GaV$_4$Se$_8$, respectively. Curves have been offset for clarity.

FIG. 3. Typical time-domain spectra measured for GaV$_4$S$_8$ (left) and GaV$_4$Se$_8$ (right) in ZF in the FM, C, and PM phases. Lines are fits described in the text and the curves have been offset vertically for clarity.

IV. ZF $\mu^+$SR MEASUREMENTS

Example time-domain spectra from ZF $\mu^+$SR measurements are shown in Fig. 3. Oscillations are observed for measurements in the FM and C phases, characteristic of the presence of quasistatic long-range magnetic order. Above $T_c$, in the PM region, the oscillations vanish and the asymmetry $A(t)$ relaxes only weakly relative to the relaxation of the signal in the LRO regime. To parametrize the behavior of both GaV$_4$S$_8$ and GaV$_4$Se$_8$, the spectra were fitted to a function $$A(t) = A_1 e^{-\lambda_1 t} \cos(\gamma_\mu B_1 t + \psi) + A_2 e^{-\lambda_2 t} \cos(\gamma_\mu B_2 t) + A_3 e^{-\lambda_3 t} + A_{\mathrm{bg}},$$ where $A_{\mathrm{bg}}$ accounts for those muons that stop in the sample holder or cryostat tails. The third component exhibits a constant amplitude $A_3$ and relaxation rate $\lambda_3 \ll \lambda_{1,2}$ throughout the investigated temperature ranges. The observation of two oscillatory components corresponds to the occurrence of two magnetically distinct muon sites in each material. Best fits were obtained with the second field component constrained such that $B_2 = a B_1$, with scaling constant $a_{\mathrm{GaV_4S_8}} = 0.25$ and $a_{\mathrm{GaV_4Se_8}} = 0.66$ for all temperatures. Fitted parameters are plotted in Fig. 4. For GaV$_4$Se$_8$ the internal field shows the expected decrease with increasing temperature [Fig. 4(d)]. The transition from the C to the PM phase is observed as a sharp peak in $\lambda_1$ and as an overall drop in the magnitude of both relaxation rates [Figs. 4(e) and 4(f)]. However, the behavior for GaV$_4$S$_8$ is more complex. We observe the expected decrease in the internal field with increasing temperature only above $T = 10$ K [Fig. 4(a)]. However, below 10 K the internal field is found to increase approximately linearly with increasing temperature. This unusual trend does not seem to be affected by the transition from the FM to the C phase that can be observed as a peak in the relaxation rate $\lambda_2$ [Fig. 4(c)]. The form of the oscillations is observed to change in the FM phase, with the phase $\psi$ taking different values in the FM ($53^\circ$) and C ($31^\circ$) phases. Note that our data cannot be fitted with a Bessel function (which is often approximated by a damped cosine with $\psi = -45^\circ$), as might be expected for uniform sampling of an incommensurate magnetic texture [32].
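To make the fitting function above concrete, the sketch below implements the same functional form (two damped precession components with $B_2 = aB_1$, a slowly relaxing term, and a constant background) in a form that could be passed to a least-squares fitter. The parameter names, starting values, and the commented-out curve_fit call are purely illustrative; they are not the WiMDA analysis used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_MU = 2 * np.pi * 135.5  # rad us^-1 T^-1 (time in microseconds, field in tesla)

def zf_asymmetry(t, A1, A2, A3, Abg, lam1, lam2, lam3, B1, a, psi_deg):
    """Two damped precession components plus a slowly relaxing term and a constant
    background, with the second field constrained as B2 = a * B1."""
    psi = np.deg2rad(psi_deg)
    B2 = a * B1
    return (A1 * np.exp(-lam1 * t) * np.cos(GAMMA_MU * B1 * t + psi)
            + A2 * np.exp(-lam2 * t) * np.cos(GAMMA_MU * B2 * t)
            + A3 * np.exp(-lam3 * t)
            + Abg)

# Illustrative use with hypothetical arrays t (us) and asym (measured asymmetry):
# popt, pcov = curve_fit(zf_asymmetry, t, asym,
#                        p0=[0.1, 0.05, 0.05, 0.02, 5.0, 5.0, 0.1, 0.1, 0.25, 30.0])
```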
The need for a phase factor is unusual for the uniform magnetization expected in the FM region, and in conjunction with the increase in $B_1$ with temperature, is suggestive of the magnetic state in this system being more complicated than simple ferromagnetism. Turning to the intermediate compositions, the ZF spectra of the $y = 2$ and 4 materials show no oscillatory component; instead they are best described by the sum of a Gaussian and an exponential relaxation (Fig. 5). Note that it is not possible to fit the data with the product of a Gaussian and an exponential relaxation, which would model the separation between static and dynamic contributions at a single muon site. The need for two components thus suggests the presence of two distinct environments for muons stopped in these materials. Typically, a Gaussian relaxation approximately describes relaxation due to a static array of disordered spins (with some residual dynamics). If the spin configuration results in a normal internal field distribution, we would expect such relaxation with $\sigma = \gamma_\mu \sqrt{\langle B^2 \rangle}$. In contrast, exponential relaxation reflects dynamic fluctuations in the magnetic field distribution. In a system of dense local moments in the fast fluctuation limit, dynamics will lead to relaxation at a rate in ZF given approximately by $\lambda = 2\gamma_\mu^2 \Delta^2 \tau$, where $\tau$ is the correlation time and $\Delta$ the width of the internal field distribution. The addition of these two components suggests that a fraction of the muons experience static disorder, while the remainder experience dynamic field fluctuations. In both materials a gradual decrease in $\sigma$ with increasing temperature is suggestive of a magnetic field distribution whose width grows as the temperature is reduced [Figs. 5(b) and 5(e)]. A peak in the exponential relaxation rate $\lambda$ around $T = 2.5$ K [Figs. 5(c) and 5(f)] coincides with the critical temperature established in DC magnetization measurements but is not reflected by any marked change in $\sigma$. It is notable that this characteristic temperature is significantly lower than the ordering temperatures of $T_c = 12.5$ K and $T_c = 17.5$ K observed for GaV$_4$S$_8$ and GaV$_4$Se$_8$, respectively. These narrow peaks in $\lambda$ are suggestive of a sudden freezing out of dynamics below 2.5 K. We reason that, upon cooling, dynamics slow down and pass through the window of excitation frequencies probed by $\mu^+$SR (MHz–GHz). This assessment is supported by the observation of a peak in the ratio $A_2/A_1$ close to this characteristic temperature, indicating a dominant contribution of dynamic fluctuations on the muon time scale. However, it is likely that the material retains a degree of disorder down to the lowest temperatures. This provides a picture of a glassy freezing out of spins and an at least partially ordered ground state for these two materials. Our results show that although GaV$_4$S$_8$ and GaV$_4$Se$_8$ are structurally very similar and both exhibit a SkL phase, they are magnetically very different. Moreover, their magnetic phase diagrams cannot be continuously transformed from one into the other using random substitutions of S with Se, since the disorder introduced leads to a glassy ground state. This behavior is reminiscent of that shown by GaV$_{4-x}$Mo$_x$S$_8$ ($0 \leq x \leq 4$), where Mo is substituted for V [44].
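The fast-fluctuation expression quoted above can be turned into a quick order-of-magnitude estimate. The field-distribution width and correlation time used below are arbitrary illustrative numbers, chosen only to show how the relaxation rate grows into the muon (microsecond) time window as fluctuations slow down.

```python
import numpy as np

GAMMA_MU = 2 * np.pi * 135.5e6  # rad s^-1 T^-1

def zf_fast_fluctuation_rate(delta_T, tau_s):
    """Fast-fluctuation-limit ZF rate lambda = 2 * gamma_mu^2 * Delta^2 * tau,
    with Delta the field-distribution width (T) and tau the correlation time (s)."""
    return 2.0 * GAMMA_MU**2 * delta_T**2 * tau_s

# Illustrative numbers (not taken from the paper): a 10 mT wide field distribution
# with a 1 ns correlation time gives lambda ~ 0.14 us^-1; longer correlation times
# (slower fluctuations) increase lambda proportionally.
lam = zf_fast_fluctuation_rate(10e-3, 1e-9)
print(f"lambda = {lam * 1e-6:.3f} us^-1")
```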
V. LF $\mu^+$SR MEASUREMENTS

To probe dynamics across the phase diagram, LF measurements were made on GaV$_4$S$_8$ and GaV$_4$Se$_8$. In polycrystalline samples we expect 2/3 of the muon spin components to lie perpendicular to the local field and give rise to oscillations, whereas the remaining 1/3 align along the direction of the local magnetic fields. We utilize the long time window at ISIS to measure slow relaxation in the 1/3 tail. This asymmetry can only be relaxed by dynamic fluctuations of the magnetic field distribution at the muon site, which can cause transitions between the muon spin-up and -down states, split in energy by the applied magnetic field. ZF and LF $\mu^+$SR time-domain spectra measured at ISIS were found to decay following a simple exponential relaxation and the asymmetry was thus fitted to a function of the form $$A(t) = A e^{-\lambda t} + A_{\mathrm{bg}}. \quad (3)$$ Taking the applied field to define the $z$ direction, the relaxation rate in the fast fluctuation limit is approximated by $\lambda = \gamma_\mu^2 (\Delta_x^2 + \Delta_y^2) \tau$, where $\tau$ is the correlation time for fluctuations in the distributions (of widths $\Delta_{x,y}$) of magnetic field components in the $x$ and $y$ directions (expected to be equivalent in our polycrystalline samples). Results from fitting ZF and LF data are presented in Fig. 6. ZF data on GaV$_4$S$_8$ and GaV$_4$Se$_8$ show a narrow peak in $\lambda$ at the transition between the PM and C phases. For GaV$_4$S$_8$, the peak in $\lambda$ that marks the transition between magnetically ordered and disordered phases is larger and significantly broadened for applied fields of 50 mT and 90 mT. The temperature and field region of this sizable feature coincides with where the SkL phase has been observed previously [15] and is consistent with our AC susceptibility measurements (Fig. 2) and our TF $\mu^+$SR (see Sec. VI). We conclude that the SkL can be detected via a sizable contribution to the relaxation rate in LF $\mu^+$SR measurements. For measurements on GaV$_4$Se$_8$, the transition between the magnetically ordered and disordered phases is again seen as a narrow peak in the relaxation rates. However, we do not observe a significant qualitative change in dynamics in the magnetically ordered phase for measurements made with $\mu_0 H = 150$ mT where the SkL was previously observed. Instead, a roughly linear increase of $\lambda$ is observed with temperature, as is the case for GaV$_4$S$_8$ at 180 mT. (Above the phase transition at $T_c = 14$ K, there is a very gradual decrease in the relaxation rate, suggesting an intermediate region of behavior; cf. our TF data.) However, for measurements made at $\mu_0 H = 95$ mT, we observe the characteristic enhancement of magnitude and broadened peak in $\lambda$, similar to that observed in the SkL region of GaV$_4$S$_8$ [Fig. 6(g)]. A slightly increased $\lambda$ can even be observed down to 6.5 K, below which the relaxation rate decreases. Following our interpretation of the results obtained on GaV$_4$S$_8$, we ascribe this increase in $\lambda$ to the presence of the SkL phase.

VI. TF $\mu^+$SR MEASUREMENTS

To investigate the local field distributions across the phase diagrams of the $y = 0$ and 8 materials, TF measurements were made. Time-domain spectra from these measurements were found to be best fitted to the function $$A(t) = A_1 e^{-\lambda_1 t} \cos(\gamma_\mu B_1 t) + A_2 e^{-\lambda_2 t} \cos(\gamma_\mu B_2 t) + A_{\mathrm{bg}}. \quad (4)$$ Figures 7(a)–7(c) show the result of this fitting for GaV$_4$S$_8$ in an external field $\mu_0 H = 50$ mT (corresponding Fourier transform spectra are shown in the SM [39]).
In the PM phase two internal fields are observed, most likely due to the existence of two inequivalent muon sites, consistent with the ZF results. The internal fields decrease as $T_c$ is approached from above. The inset of Fig. 7(a) shows the corresponding muon Knight shifts $K_i = (B_i - \mu_0 H)/\mu_0 H$ of both field components $B_i$ as a function of susceptibility $\chi$ above $T_c$ [45]. The Knight shifts are negative with their magnitude increasing linearly with the susceptibility. This indicates significant hyperfine coupling, especially at the second muon site. In the magnetically ordered phase the second component of the internal field $B_2$ is not resolved. The relaxation rate $\lambda_2$, corresponding to the width of the distribution of the internal field $B_2$, shows a discontinuity at $T_c$ and exhibits very large values in the magnetically ordered phases. The most likely explanation for this behavior is a very large, fluctuating local field in the magnetically ordered phase at the second muon site. The relaxation rate is too large compared to the muon precession frequency, and thus the $B_2$ component is not observed. As a result, the fluctuations in $B_2$ result in a purely relaxing component with a large relaxation rate $\lambda_2$. Turning to the first component, a notable feature is an increase in the internal field component $B_1$ in the temperature range where the SkL phase is stabilized. This effect appears to provide a signature of the SkL phase in TF $\mu^+$SR measurements on this system. The relaxation rate $\lambda_1$ increases continuously upon cooling, and we do not observe a resolvable influence of the SkL on the relaxation rate. We thus conclude that the dynamics emerging with the SkL seen in the LF measurements do not have a strong effect along the direction of the applied magnetic field (as it is correlations along this direction that principally determine the relaxation in TF measurements). Results from fitting TF $\mu^+$SR measurements made on GaV$_4$Se$_8$ in $\mu_0 H = 150$ mT are presented in Figs. 7(d)–7(f). As for GaV$_4$S$_8$, two internal fields are observed at all applied magnetic fields, consistent with two inequivalent muon sites. No increase in $B_1$ can be observed in any part of the magnetically ordered phase, which was the signature of the SkL for GaV$_4$S$_8$.
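A minimal sketch of the Knight-shift definition used in the TF analysis is given below; the applied and internal field values are hypothetical examples, not fitted values from Fig. 7.

```python
def knight_shift(B_internal_T, mu0_H_T):
    """Muon Knight shift K = (B_i - mu0*H) / (mu0*H), as used for the TF components."""
    return (B_internal_T - mu0_H_T) / mu0_H_T

# Illustrative only: in a 50 mT applied field, a fitted internal field of 49.5 mT
# corresponds to a Knight shift of -1%.
print(f"K = {knight_shift(49.5e-3, 50e-3) * 100:.1f} %")
```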
VII. MUON SITE CALCULATIONS

To understand the details of the muon’s interaction with the system, candidate muon sites were determined using DFT. Structural relaxations of a periodic supercell of GaV$_4$S$_8$ using DFT reveal four distinct candidate muon stopping sites (Fig. 8), listed in Table I. Three of these (labeled I–III) involve the muon sitting close to a single S atom, which makes sense on chemical grounds, given the electronegativity of S. A fourth site (site IV) has the muon closer to V atoms and is the highest energy site. In the lowest energy site (site I) the muon sits between two S atoms in the plane defined by three S atoms within the V$_4$S$_4$ units [Fig. 9(I)]. The two $\mu^+$-S distances are unequal (1.4 Å and 2.0 Å) with greater electron density found between the muon and the nearest S atom. (The S-$\mu^+$-S angle is 160°.) This site is therefore best described in terms of the muon forming a $\mu^+$-S bond (rather than an S-$\mu^+$-S state by analogy to the commonly observed F-$\mu^+$-F complex [46]), though the presence of a second nearby S atom does seem to stabilize this geometry. Two further sites involve the muon sitting close to a single S atom. In site II the muon sits along an edge of one of the V$_4$S$_4$ cubane-like units [Fig. 9(II)]. This site is 0.137 eV higher in energy than site I. The muon sits 1.5 Å from a S atom, which is similar to the shortest $\mu^+$-S distance for site I. In site III [Fig. 9(III)], the muon again sits 1.4 Å from an S atom, but this time the S atom belongs to a GaS$_4$ tetrahedron. Despite the similar coordination of the muon by S, this site is 0.288 eV higher in energy than site I. Unlike sites I–III, site IV does not involve the formation of a $\mu^+$-S bond. The muon sits above a face of one of the V$_4$S$_4$ cubane-like units, with the nearest S atom just over 2 Å away. The energy of this site is the highest, close to that of site III: 0.293 eV higher in energy than site I.

FIG. 8. Four classes of muon stopping site determined for GaV$_4$S$_8$. The sites are numbered in order of increasing energy.

FIG. 9. Local geometry around the muon for each of the four classes of muon stopping site in GaV$_4$S$_8$.

The results of analogous calculations for GaV$_4$Se$_8$ are presented in Table I. Muon stopping sites (labeled 1–4 in order of ascending energy) are similar to those calculated for GaV$_4$S$_8$, with three of the four sites involving the muon sitting close to a Se atom (sites 2 to 4) and a site in which the muon sits above a face of a V$_4$Se$_4$ unit (site 1). However, the ordering of sites is inverted in this case. In particular, the cube face site (site 1), which corresponds to the highest energy stopping site for GaV$_4$S$_8$, is the lowest energy site for GaV$_4$Se$_8$. The three sites in which the muon sits near an Se atom have analogous sites in GaV$_4$S$_8$. Site 2 is 0.145 eV higher in energy than the lowest energy site, and similar to site III in GaV$_4$S$_8$, with the muon bonded to a Se atom at the top of a GaSe$_4$ tetrahedron. Site 3 is 0.190 eV higher in energy than the lowest energy site and is similar to site II in GaV$_4$S$_8$, but with a longer $\mu^+$-Se distance (1.7 Å). Site 4 (0.381 eV higher in energy than site 1) is similar to site I in GaV$_4$S$_8$. We conclude that, although the sites for the two systems are similar owing to their similar structures, the energetic ordering of the sites in the two cases is rather different. In particular, the lowest energy site, where we expect muons to stop, is different for the two materials despite their structural resemblance.

VIII. DISCUSSION

ZF and TF $\mu^+$SR measurements suggest at least two magnetically inequivalent muon sites in GaV$_4$S$_8$ and GaV$_4$Se$_8$. Our DFT calculations reveal four crystallographically distinct candidate muon sites with different energies. They are, however, not necessarily all occupied, and in fact we expect the one with the lowest energy to be the most probable muon stopping site. The two magnetically inequivalent sites could thus be due to crystallographically different sites, or more likely structurally equivalent ones experiencing different internal fields due to the complex incommensurate spin textures in these systems. Nonetheless, the magnetic spin structures of these materials are based on an underlying length scale which is long compared to the size of the unit cell, making the details of exactly where the muon localizes within the unit cell relatively unimportant in comparing the different materials.
We note that the location of the magnetic transitions, temperature- and field-dependent details of the phase diagram, and collective properties of the system match those measured using other techniques. Furthermore, we do not observe any significant structural distortions due to implanted muons in our DFT calculations [39]. We conclude, therefore, that the probe formed by the muon and any local distortion to the electronic structure appears to faithfully reflect the undistorted magnetism in this system. ZF $\mu^+$SR on the S-containing material reveals an unusual increase of the total local magnetic field $B_1$ with temperature. This effect is not observed in the Se-containing analog, suggesting that the ground-state magnetism in these systems is significantly different. Recent observations of the S-containing material suggest that, on cooling, the periodicity of the magnetic cycloid spin structure increases, eventually transforming into a soliton lattice with periodically arranged domain walls in the FM phase [16,47]. This continuous change in magnetic structure might provide a mechanism for the unusual $T$ dependence of the local $B_1$ field. It would be expected to alter the sum of the dipolar fields at the muon sites, which could produce sufficient cancellation to cause the decrease in $B_1$ with decreasing temperature that we observe. Perhaps more significantly, as the cycloid unwinds and the structure becomes more ferromagnetic, there would be an increase in the size of the negative contribution from the demagnetizing field, further reducing $B_1$. It is notable that $B_1$ continues to decrease smoothly through the C-FM phase transition at 4 K, marked in our data by a peak in the relaxation rate. This suggests that although some dynamic relaxation channels might freeze out at 4 K, the magnetic structure evolves continuously from the point of view of the muon ensemble. However, the need for a different phase offset in fitting the muon data in the FM region (often reflecting an incommensurate field distribution) is suggestive of a more complicated spin structure than simple ferromagnetism. The other possible contribution to the unusual temperature evolution of $B_1$ is the evolution of the hyperfine field at the muon site, arising from electronic spin density at the position of the implanted $\mu^+$ [48]. The observation of significant Knight shifts in GaV$_4$S$_8$ but not GaV$_4$Se$_8$, along with the unusual temperature dependence of the energy gap in GaV$_4$S$_8$ [49], means we might expect this contribution to be important. (We note that a hyperfine field is likely required to reconcile the calculated dipole field at the muon site with that seen in ZF measurements [39].) Specifically, in GaV$_4$S$_{8-y}$Se$_y$ compounds the magnetic units are metallically bonded V$_4$ tetrahedral clusters carrying an effective $S = 1/2$ spin. Electronic conduction occurs by electron hopping between clusters and thus the electronic properties depend strongly on details of the V$_4$ tetrahedra. While GaV$_4$Se$_8$ exhibits a constant energy gap over a wide temperature range, the energy gap of GaV$_4$S$_8$ decreases upon cooling and may even reach zero close to the magnetic phase transition [49]. It may be that both the continuous change in magnetic structure and that of the energy gap reflect the same underlying $T$-dependent changes in the electronic structure.
A further notable feature of ZF measurements on GaV$_4$S$_8$ is the somewhat surprising difference in the temperature evolution of the relaxation rates $\lambda_1$ and $\lambda_2$. The relaxation rate $\lambda_1$ peaks at $T_c$, as expected for the slowing of dynamics at a critical point, whereas $\lambda_2$ seems to reflect the behavior of the magnitude of the local fields close to the transition, while also peaking at the FM to C transition. This is suggestive of the two muon sites coupling differently to the dynamics in the system and might suggest that the local environment at the two sites is more distinct than would typically be expected for two positions in the unit cell that are magnetically inequivalent. This could also explain why the two components in the TF spectra behave quite differently. Based on our measurements on GaV$_4$S$_8$ we concluded that the SkL can be detected via a sizable contribution to the relaxation rate in LF $\mu^+$SR measurements. An enhancement in $\lambda$ corresponds to an increase in the widths of the components of the field distribution, or an increase in the correlation time $\tau$ of the dynamic fluctuations (i.e., a decrease in their fluctuation rate) perpendicular to the applied field direction, or to both. From our TF measurements we conclude that we do not observe a change in dynamics or field distribution specific to the SkL along the applied magnetic field direction. An increase in the relaxation rate in LF $\mu^+$SR measurements thus most likely reflects a contribution of the SkL to dynamics perpendicular to the applied field direction. As the dynamics probed by $\mu^+$SR can reach the GHz regime, our observation is attributable to the emergent excitation modes of individual skyrmions that have been observed in the GHz regime, where clockwise, counterclockwise, and breathing modes occur in the skyrmion plane, but not perpendicular to it [24]. It is notable that dynamics coinciding with the SkL occur over a broad spectral range, with our AC susceptibility (on the kHz frequency scale) showing an increase in the imaginary component $\chi''$ in the SkL phases, indicating an increase in dissipation [39]. This increase is consistent with the increase in $\lambda$ seen in the muon results, since the fluctuation-dissipation theorem predicts that the spin correlation function $S(q, \omega = 0) \propto T \lim_{\omega \to 0} \chi''(q, \omega)/\omega$ and we expect that the muon-spin relaxation $\lambda \propto \sum_q A^2(q) S(q, 0)$, where $A(q)$ is the coupling of the muon to the spin system. The LF and TF $\mu^+$SR results for GaV$_4$Se$_8$ presented in Figs. 6(h) and 7(e), respectively, do not show the signature of a SkL phase for $\mu_0 H = 150$ mT. However, for $\mu_0 H = 95$ mT we observe a broadened peak in $\lambda$ down to 13.5 K [Fig. 6(g)]. Following our interpretation of the LF results obtained on GaV$_4$S$_8$ we ascribe this increase in dynamics to the presence of the SkL phase. From AC susceptibility measurements [shown in Fig. 2(d)] we identify an increase in $\chi'$ around 13 K and 100 mT as the location of the SkL phase, whereas the SkL was previously reported to extend down to the lowest measured temperatures (2 K) [17,23]. Our muon results are therefore consistent with our AC susceptibility measurements, but do not match the phase diagram reported in [17,23].
The discrepancy could be due to the use of polycrystalline samples instead of single crystals, but we note that this does not lead to such a significant difference in the location and extent of the SkL phase in GaV$_4$S$_8$, where dynamics are observed over the temperature and field range where the SkL was seen in single crystals [15,16]. It is thus plausible that the SkL region of GaV$_4$Se$_8$ is less extensive (at least in our polycrystalline sample) than previously suggested.

IX. CONCLUSIONS

We have used $\mu^+$SR to investigate the skyrmion-lattice (SkL) phase in GaV$_4$S$_8$ and GaV$_4$Se$_8$. While GaV$_4$S$_8$ and GaV$_4$Se$_8$ are structurally very similar and both exhibit a SkL phase, we have shown that their magnetic phase diagrams and ground states are significantly different and that the intermediate $y = 2$ and 4 materials are glassy in their magnetic character. We have established the signature of the SkL in LF $\mu^+$SR measurements, which has allowed us to observe characteristic dynamics on the MHz to GHz timescale. Our results suggest a phase diagram for polycrystalline GaV$_4$Se$_8$ in which the skyrmion phase appears substantially less extensive than reported in single-crystal samples. Data presented in this paper will be made available via [50].

ACKNOWLEDGMENTS

Part of this work was carried out at the Science and Technology Facilities Council (STFC) ISIS Facility, Rutherford Appleton Laboratory, UK, and at S$\mu$S, Paul Scherrer Institut, Switzerland. We gratefully acknowledge access to the MPMS in the Materials Characterisation Laboratory at ISIS. We are grateful for the provision of beamtime and to A. Amato, H. Luetkens, and J. S. Lord for experimental assistance. DFT calculations were carried out using computing resources provided by the STFC Scientific Computing Department’s SCARF cluster and the Durham HPC Hamilton cluster. We would like to thank M. N. Wilson and M. Gomišek for fruitful discussion. This work was supported by the Engineering and Physical Sciences Research Council (EP/N032128/1 and EP/N024028/1).

[50] http://dx.doi.org/10.15128/r1fq977t79n
Comparison of effects of the tyrosine kinase inhibitors AG957, AG490, and STI571 on BCR-ABL–expressing cells, demonstrating synergy between AG490 and STI571

Xuemei Sun, Judith E. Layton, Andrew Elefanty, and Graham J. Lieschke

STI571 (formerly CGP57148) and AG957 are small molecule inhibitors of the protein tyrosine kinase (PTK) p145abl and its oncogenic derivative p210bcr-abl. AG490 is an inhibitor of the PTK Janus kinase 2 (JAK2). No direct comparison of these inhibitors has previously been reported, so this study compared their effects on factor-dependent FDC-P1, 32D, and MO7e cells and their p210bcr-abl-expressing factor-independent derivatives. STI571 was a more potent inhibitor of 3H-thymidine incorporation in p210bcr-abl-expressing cells than was AG957, and it showed superior discrimination between inhibitory effects on parental cell lines and effects on their p210bcr-abl-expressing derivatives. Assays performed with and without growth factor demonstrated that STI571 but not AG957 reversed the p210bcr-abl–driven factor independence of cell lines. p210bcr-abl–expressing cells were less sensitive to AG490 than to AG957 or STI571. However, for p210bcr-abl–expressing clones from all 3 cell lines, synergistic inhibition was demonstrated between STI571 and concentrations of AG490 with no independent inhibitory effect. Inhibition of nucleic acid synthesis with AG957 treatment was associated with reduced cell numbers, reduced viability, and small pyknotic apoptotic cells. At concentrations of STI571 that reversed the p210bcr-abl factor-independent phenotype, STI571 treatment and growth factor deprivation together were sufficient to induce apoptosis. This study concludes that, for the cell lines studied, (1) STI571 is a more potent and more selective inhibitor of a p210bcr-abl–dependent phenotype than AG957; (2) AG490 synergizes with STI571 to enhance its inhibitory effect on p210bcr-abl–driven proliferation; and (3) the combination of p210bcr-abl–tyrosine kinase inhibition and growth factor signal withdrawal can be sufficient to induce apoptotic death of transformed cells. (Blood. 2001;97:2008-2015) © 2001 by The American Society of Hematology

Introduction

The protein tyrosine kinase (PTK) product of the BCR-ABL fusion gene that results from the t(9;22) translocation1-5 of chronic myelogenous leukemia (CML) and some acute leukemias is an attractive therapeutic target. This has been particularly so since the demonstration that a myeloproliferative syndrome results from the overexpression of the commonest fusion protein resulting from this translocation (p210bcr-abl) in the bone marrow cells of mice,6,7 although the degree to which the murine model recapitulates human CML is somewhat dependent on the murine genetic background.8 That the overexpression of p210bcr-abl is sufficient to drive the disease phenotype suggests that a selective inhibitor of p210bcr-abl, or even an inhibitor active against both wild-type p145abl and its oncogenic variants such as p210bcr-abl, may be able to suppress or reverse the CML disease phenotype. Because the kinase activity of the BCR-ABL fusion protein is integral to its transforming ability,9 this has led to the development of several small molecules with inhibitory activity against p145abl and/or p210bcr-abl.
One particularly promising inhibitor is STI571 (formerly known as CGP57148), a 2-phenylaminopyrimidine class molecule that was designed based on the structure of the adenosine triphosphate (ATP)-binding site of PTKs and was selected for its specificity for the ABL tyrosine kinase.10 STI571 is equipotent at inhibiting p145abl and p210bcr-abl,11 but it is not completely selective for ABL kinases and shows similar inhibitory potency in biochemical evaluations on the platelet-derived growth factor receptor (PDGF-R)12 and c-kit, the receptor for stem cell factor.11 In vitro cellular studies confirmed the ability of STI571 to reverse the p210bcr-abl–driven conversion of a cell line from hematopoietic growth factor dependence to factor independence and showed selective in vivo antitumor activity against tumor-forming p210bcr-abl–positive cell lines in murine models.10 Similar in vitro effects of STI571 have been shown for a PDGF-R–driven cellular phenotype.12 STI571 has now entered early phase clinical studies, and preliminary data indicate it to be safe and to have clinical activity.13,14

Another ABL inhibitor is AG957, a tyrphostin identified in a large-scale evaluation of molecules designed as competitive antagonists of ATP binding to PTKs.15 Unlike STI571, it is a more potent inhibitor of p210bcr-abl than of p145abl (50% inhibitory concentrations [IC50s], 1 and 7.1 μM, respectively),15 although this is less potent than that reported for STI571 in similar noncellular biochemical assays (0.025 μM).10 AG957 inhibits proliferation of the BCR-ABL–positive cell line K562 (derived from a CML patient),16 and related tyrphostin inhibitors promote differentiation of this cell line.17

From the Cytokine Biology Laboratory, Ludwig Institute for Cancer Research, Melbourne Tumor Biology Branch, The Royal Melbourne Hospital, Victoria, Australia; and the Walter and Eliza Hall Institute of Medical Research, The Royal Melbourne Hospital, Parkville, Victoria, Australia. Submitted July 21, 2000; accepted November 22, 2000. Supported by a World Health Organization fellowship (X. Sun). Reprints: Graham J. Lieschke, Ludwig Institute for Cancer Research, PO Box 2008, The Royal Melbourne Hospital, Victoria, 3050, Australia; e-mail: graham.lieschke@ludwig.edu.au.

Recently, AG957 was shown to have greater potency against some subpopulations of BCR-ABL-positive hematopoietic progenitors than genotypically normal progenitors isolated from CML patients.\textsuperscript{18} Despite its action on the ABL kinases, AG957 is not totally specific; for example, it is a more potent inhibitor of the epidermal growth factor receptor (EGF-R) (IC\textsubscript{50} 0.25 \textmu M).\textsuperscript{15} Although these semi-selective ABL inhibitors have each been studied individually, no direct side-by-side comparison of them has been reported, even though the separately published data cited above suggest that STI571 is the more potent ABL kinase inhibitor. Because both STI571 and AG957 have assumed prominence as relatively "specific" BCR-ABL inhibitors, we undertook these studies to directly compare their effects on a p210\textsuperscript{bcr-abl}-dependent cellular phenotype.
AG490 is not an effective ABL inhibitor; rather, it is a tyrphostin that has received attention because of its inhibitory effects on the nonreceptor PTK Janus kinase 2 (JAK2),\textsuperscript{19} which is critical in signaling from many hematopoietic growth factors.\textsuperscript{20} JAK2 activity is implicated in the pathogenesis of some leukemias,\textsuperscript{21} its expression is significantly increased in others,\textsuperscript{19} and a signaling interaction between the JAK2 and BCR-ABL proteins is recognized.\textsuperscript{22} AG490 is not, however, totally specific for JAK2 (eg, it also inhibits JAK3).\textsuperscript{23} We were interested in including AG490 in our comparison as a non-ABL-inhibiting tyrphostin control, but we also hypothesized that its actions to interfere with JAK signaling pathways might provide additive or even synergistic inhibitory effects on leukemic cells.

### Methods and materials

#### Cell lines

The following cell lines were all used as previously described: FDC-P1\textsuperscript{24} and its derivative cell lines transfected to overexpress human p210\textsuperscript{bcr-abl} under control of either a retroviral promoter (FDrv210, 3 independently derived clones C, F, and H) or a weaker human BCR promoter (FDbcr210, 3 independently derived clones A, E, and G),\textsuperscript{25} 32D cells\textsuperscript{26} and their human p210\textsuperscript{bcr-abl}-overexpressing derivative 32Dp210, a gift of Dr B. Druker (Portland, OR),\textsuperscript{27} MO7e cells\textsuperscript{27} and their human p210\textsuperscript{bcr-abl}-overexpressing derivative (MO7p210), a gift of Dr B. Druker,\textsuperscript{28} and K562 cells.\textsuperscript{29,30} All cell lines were propagated in RPMI 1640 medium (Gibco BRL, Grand Island, NY) supplemented with 10% fetal calf serum (FCS) (CSL Ltd, Parkville, Victoria, Australia) at 37°C in an atmosphere of 5% CO\textsubscript{2} in air. Cultures of the factor-dependent FDC-P1 and 32D cell lines were further supplemented with 10% WEHI-3B-conditioned medium as a source of interleukin-3 (IL-3), and of MO7e with 10 ng/mL human granulocyte-macrophage colony-stimulating factor (GM-CSF), a gift from Amgen (Thousand Oaks, CA). All BCR-ABL-expressing transfectants were maintained in medium without growth factor.

#### Reagents

The tyrphostin inhibitors AG957\textsuperscript{17} and AG490\textsuperscript{19} were provided as white powders by Dr A. Levitski (Hebrew University of Jerusalem, Jerusalem, Israel). Mass spectrometric analysis by electrospray (Quattro II, Micromass, Manchester, United Kingdom) confirmed these to be of the calculated molecular mass (273 Da and 294 Da, respectively). Stock solutions at 50 mM in dimethyl sulfoxide (DMSO) were prepared and stored as aliquots at −20°C, from which fresh working solutions were prepared in RPMI 1640 for each experiment. STI571\textsuperscript{18} was a gift of Dr E. Buchdunger (Novartis, Basel, Switzerland). A 10-mM stock solution in phosphate-buffered saline (PBS) was prepared and stored at −20°C, from which fresh working solutions were prepared in RPMI 1640 for each experiment.
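The stock-to-working-solution step described above is simple C1V1 = C2V2 dilution arithmetic. The sketch below illustrates that calculation; the target concentration and volume are invented for illustration and are not values reported in this paper.

```python
def stock_volume_ul(stock_mM, target_uM, final_volume_ul):
    """Volume of stock (uL) to dilute so that C1 * V1 = C2 * V2."""
    stock_uM = stock_mM * 1000.0  # convert mM to uM
    return target_uM * final_volume_ul / stock_uM

# Illustrative numbers only (not taken from the paper): reaching a 40 uM
# starting concentration in a 200 uL well from the 50 mM DMSO stock.
print(stock_volume_ul(stock_mM=50, target_uM=40, final_volume_ul=200))  # 0.16 uL
# Such small volumes are why an intermediate working dilution in RPMI 1640
# is prepared first, as described above.
```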
#### Immunofluorescence and flow cytometry

p210\textsuperscript{bcr-abl} and p145\textsuperscript{abl} expression in cell lines was detected by indirect immunofluorescence FACS profiling.\textsuperscript{22} Briefly, cells were fixed for 10 minutes at room temperature in 1% paraformaldehyde in PBS and permeabilized in 0.3% saponin and 0.5% Triton X-100 in PBS with 1% FCS at 4°C for 15 minutes prior to staining with the monoclonal anti-ABL antibody 24-21 (Oncogene Science, Manhasset, NY), using a fluorescein isothiocyanate-conjugated goat anti-mouse immunoglobulin G (IgG) (Pharmingen, San Diego, CA) as secondary antibody. Cells were analyzed in a FACScan (Becton Dickinson, San Jose, CA).

#### Cell proliferation assays

Assays were performed in duplicate across 96-well microtiter plates and set up with robotic assistance (Biomek 2000, Beckman, Fullerton, CA). Pre-diluted inhibitor from frozen stocks was added to the first well to achieve the desired starting concentration in 200 μL and titrated as serial 2-fold dilutions across the plate, leaving 100 μL of medium with inhibitor per well. Cells were harvested and washed twice, and aliquots of 4 × 10\textsuperscript{4} cells in 100 μL were added to each medium-containing well to make a final assay volume of 200 μL. The assay medium was RPMI 1640 and 0.5% FCS with or without the appropriate growth factor and contained 1 μCi of \textsuperscript{3}H-thymidine. Cells were exposed to the inhibitors for either 30 minutes or 18 hours. For 30-minute exposures, cells were washed 2 times after a 30-minute incubation in inhibitor-containing medium, and the medium was replaced with fresh medium containing 1 μCi of \textsuperscript{3}H-thymidine for 18 hours but lacking inhibitor. In assays testing for synergistic effects between inhibitors, the second inhibitor was added in a fixed volume of 5 μL, making a total final assay volume of 205 μL. Cells were harvested onto a glass fiber filter, and \textsuperscript{3}H-thymidine incorporation was counted on a microplate scintillation counter (TopCount NXT; Canberra Packard, Meriden, CT). In some experiments, cell number was determined by setting up the assay under identical conditions except that the final assay volume was 2 mL, but the concentration of cells remained at 2 × 10\textsuperscript{5}/mL at the start of the assay. After 18-hour exposure, total cell number was determined, and proportional cell viability was measured by trypan blue dye exclusion with the use of a hemocytometer. In parallel, cytospin preparations of cells were prepared and stained with May-Grünwald-Giemsa and examined at ×100 to ×1000 magnification.

#### DNA integrity analysis

Cytosolic and nuclear DNA was prepared from 5 × 10\textsuperscript{4} cells. Briefly, to prepare cytosolic DNA, cells were lysed in 500 μL of lysis buffer (0.5% Triton X-100, 20 mM Tris-HCl, 1 mM EDTA, pH = 7.4) for 5 minutes on ice. Lysates were spun at 13 000 rpm in a bench-top Eppendorf centrifuge (Heraeus "Biofuge pico," Osterode, Germany) for 20 minutes, the supernatant was transferred to a fresh tube, and DNA was precipitated with NaCl/isopropanol. To prepare nuclear DNA, the pellets remaining from cytosolic DNA preparations were washed twice in PBS and incubated overnight at 55°C in 750 μL lysis buffer (50 mM Tris-HCl, 0.1 M EDTA, 0.1 M NaCl, 1% SDS, pH = 8.0) to which 40 μL Pronase 20 mg/mL had been added.
Then, 310 μL 5 M NaCl was added, samples were spun as above, 800 μL of the supernatant was transferred to a new tube, and DNA was precipitated with 500 μL isopropanol. DNA was pelleted, washed with 70% ethanol, air-dried, electrophoresed through a 0.8% agarose gel, and viewed by ethidium bromide staining and UV illumination.

#### Statistics

Unless otherwise stated, data presented in figures are means of duplicate assays. Figures present data generated simultaneously in a representative experiment. Experiments were replicated 3 times or more except for the data shown in Figures 5 and 6 (2 replicates) and in Figure 3C (1 experiment). The effect of IL-3 on p210\textsuperscript{bcr-abl}-expressing cell lines (Table 1) was evaluated with a 2-sided sign test.

Table 1. IC50 (μM) for the tyrosine kinase inhibitors AG957, AG490, and STI571 in proliferation assays

<table> <thead> <tr> <th>Cell line</th> <th>AG957 (+GF)</th> <th>AG957 (−GF)</th> <th>AG490 (+GF)</th> <th>AG490 (−GF)</th> <th>STI571 (+GF)</th> <th>STI571 (−GF)</th> </tr> </thead> <tbody> <tr> <td>FDC-P1</td> <td>5.0 (5)</td> <td>N/E</td> <td>12.5 (5)</td> <td>N/E</td> <td>&gt; 10 (3)</td> <td>N/E</td> </tr> <tr> <td>FDbcr210A</td> <td>2.0 (3)</td> <td>2.5 (3)</td> <td>10.0 (3)</td> <td>10.0 (3)</td> <td>—</td> <td>—</td> </tr> <tr> <td>FDbcr210E</td> <td>3.5 (2)</td> <td>3.5 (2)</td> <td>12.5 (3)</td> <td>11.0 (3)</td> <td>0.85* (5)</td> <td>0.08* (5)</td> </tr> <tr> <td>FDbcr210G</td> <td>3.2 (2)</td> <td>3.2 (2)</td> <td>12.5 (2)</td> <td>12.5 (2)</td> <td>—</td> <td>—</td> </tr> <tr> <td>FDrv210C</td> <td>2.7 (3)</td> <td>2.5 (3)</td> <td>10.0 (3)</td> <td>11.2 (3)</td> <td>—</td> <td>—</td> </tr> <tr> <td>FDrv210F</td> <td>2.0 (2)</td> <td>2.5 (2)</td> <td>11.3 (2)</td> <td>12.5 (2)</td> <td>—</td> <td>—</td> </tr> <tr> <td>FDrv210H</td> <td>3.5 (2)</td> <td>3.5 (2)</td> <td>15.0 (3)</td> <td>15.0 (3)</td> <td>2.0* (4)</td> <td>0.25* (6)</td> </tr> <tr> <td>32D</td> <td>1.0 (2)</td> <td>N/E</td> <td>12.5 (3)</td> <td>N/E</td> <td>25.0 (4)</td> <td>N/E</td> </tr> <tr> <td>32Dp210</td> <td>—</td> <td>—</td> <td>12.5 (3)</td> <td>10.0 (3)</td> <td>4.0* (3)</td> <td>0.08* (3)</td> </tr> <tr> <td>MO7e</td> <td>1.0 (3)</td> <td>N/E</td> <td>&gt; 40 (2)</td> <td>N/E</td> <td>12.5 (4)</td> <td>N/E</td> </tr> <tr> <td>MO7p210</td> <td>4.0 (2)</td> <td>3.2 (2)</td> <td>26.7 (2)</td> <td>26.7 (2)</td> <td>0.7* (4)</td> <td>0.2* (4)</td> </tr> <tr> <td>K562</td> <td>8.0 (3)</td> <td>8.0 (3)</td> <td>&gt; 40 (1)</td> <td>&gt; 40 (1)</td> <td>0.8† (3)</td> <td>0.8† (3)</td> </tr> </tbody> </table>

Data are median values for (n) experiments. For MO7e and MO7p210, the growth factor (GF) was granulocyte-macrophage colony-stimulating factor. In all other cases, the growth factor was interleukin 3 (IL-3) supplied by WEHI-3BD-conditioned medium. IC50 indicates 50% inhibitory concentration; N/E, nonevaluable (the nontransfected cell lines were factor dependent).

*Analysis by 2-sided sign test. There were 16 tests across these cell lines, each providing a test of the effect of adding IL-3. In all 16 tests, the + IL-3 result was greater than the − IL-3 result. The chance of this occurring at random is only 1 in 32,768.

†Analysis by 2-sided sign test. There were 3 tests for the K562 cell line. In only 1 of 3 tests was the + IL-3 result noticeably greater than the − IL-3 result; there was no discernible difference in the other 2 tests.
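The "1 in 32,768" figure in the table footnote follows directly from the 2-sided sign test described in the Statistics section; the short check below is just that arithmetic, not the authors' analysis code.

```python
# Two-sided sign test behind the asterisked footnote of Table 1: 16 paired
# (+IL-3 vs -IL-3) comparisons, all favouring +IL-3. Under the null hypothesis
# each comparison is a fair coin flip, so the one-sided probability is (1/2)^16
# and the two-sided test doubles it.
n_tests = 16
p_two_sided = 2 * 0.5 ** n_tests

print(p_two_sided)       # 3.0517578125e-05
print(1 / p_two_sided)   # 32768.0, i.e. the "1 in 32,768" quoted in the footnote
```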
### Results

#### Confirmation of BCR-ABL overexpression

The FDC-P1 cell line and its derivative clones FDbcr210A, FDbcr210E, FDbcr210G, FDrv210C, FDrv210H, and FDrv210F were studied to confirm p210bcr-abl overexpression because they formed the basis of all initial observations in these studies. Flow cytometric detection of ABL protein expression by indirect immunofluorescence was used to parallel the previous characterizations of these cell lines, employing the anti-ABL antibody 24-21 directed against the C-terminus of ABL, which hence recognizes both p145abl and p210bcr-abl. This process demonstrated that FDbcr clones and FDrv clones showed higher fluorescence intensity than parental FDC-P1 cells (Figure 1). The clones in which the BCR-ABL p210 complementary DNA (cDNA) was driven from the stronger retroviral promoter showed higher-level expression (clone H > F > C) than those driven from the weaker BCR promoter (clone A < E and G), confirming retention of transduced p210bcr-abl expression despite the passage and storage of these cell lines since previous studies. For some studies (eg, those comparing AG957 and STI571), the clones FDrv210H and FDbcr210E were selected as being representative of each group, because they showed the highest level of BCR-ABL expression. K562 cells, which have previously been shown to overexpress BCR-ABL, were confirmed to overexpress ABL, based on higher fluorescence intensity observed compared to negative controls omitting one or both detection antibodies (data not shown).

#### Effects of inhibitors in proliferation assays

To evaluate the effects of inhibitors on proliferation of p210bcr-abl-expressing cell lines, we measured 3H-thymidine incorporation; the resulting IC50 values are summarized in Table 1.
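As a rough illustration of how an IC50 such as those in Table 1 can be read off a serial 2-fold titration of 3H-thymidine counts, the sketch below interpolates the 50%-of-control point on a log-concentration scale. The concentrations and counts are invented for illustration; this is not the authors' analysis method.

```python
import math

def ic50_from_titration(conc_uM, cpm):
    """Estimate IC50 by log-linear interpolation of the 50%-of-control point.

    conc_uM : inhibitor concentrations in uM, ascending, with 0.0 as the
              no-inhibitor control well
    cpm     : matching 3H-thymidine counts per minute
    """
    control = cpm[conc_uM.index(0.0)]
    half = 0.5 * control
    points = [(c, y) for c, y in zip(conc_uM, cpm) if c > 0]
    for (c_lo, y_lo), (c_hi, y_hi) in zip(points, points[1:]):
        if y_lo >= half >= y_hi:  # the 50% point lies between these two wells
            frac = (y_lo - half) / (y_lo - y_hi)
            log_ic50 = math.log(c_lo) + frac * (math.log(c_hi) - math.log(c_lo))
            return math.exp(log_ic50)
    return None  # never falls below 50% over the tested range (reported as "> top dose")

# Invented example data: a 2-fold dilution series as in the Methods, top dose 10 uM.
concs = [0.0, 0.3125, 0.625, 1.25, 2.5, 5.0, 10.0]
counts = [20000, 19000, 17500, 14000, 9000, 4000, 1500]
print(round(ic50_from_titration(concs, counts), 2))  # about 2.2 uM (illustrative only)
```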
The superiority of STI571 compared with AG957 was replicated for other cell lines and their p210<sup>bcr-abl</sup>-expressing derivatives. Parental 32D cells were 25-fold more sensitive to AG957 than to STI571 (consistent with AG957's nonspecific effect on parental FDC-P1 cells), and significant STI571-mediated reversion of p210<sup>bcr-abl</sup>-expressing 32Dp210 cells to IL-3 responsiveness was demonstrated (Table 1). Similar observations applied to MO7e and its GM-CSF-independent p210<sup>bcr-abl</sup>-expressing derivative MO7p210 (Table 1).

The initial assays all involved 18-hour incubations in the presence of inhibitor. We also evaluated the effect of a 30-minute exposure to inhibitor (Figure 2D). There was a significant inhibition by AG957 (IC<sub>50</sub> = 10 μM) but again little rescue of this effect by IL-3. In contrast, although the cells were now less sensitive to STI571, IL-3 rescue of STI571 inhibition was demonstrated.

We compared the effects of AG957 and STI571 on K562 cells, a human BCR-ABL-positive cell line that has been employed previously in studies of these 2 and related inhibitors. Parental K562 cells were 10 times more sensitive to STI571 than to AG957 (Figure 2E and Table 1). Although K562 cells are growth factor independent, the exact role of BCR-ABL in conferring the independence of this cell line to any particular growth factor is not known, and so comparative experiments with and without growth factor were not performed.

#### Synergistic inhibition by STI571 and AG490

Because AG490 is known to inhibit JAK2 and JAK2 is a component of signaling mediated by many cytokines, including IL-3 and GM-CSF, we tested whether there was any synergistic effect between STI571-mediated inhibition and AG490. Data are presented for the FDC-P1-derived cell line expressing the highest level of p210<sup>bcr-abl</sup> (FDrv210H) and for 18-hour assay incubations (Figure 3A). STI571-mediated, IL-3-rescuable inhibition is displayed for this cell line (Figure 3A). AG490 alone had no effect on proliferation of this cell line at a concentration of 5 μM. However, when this no-effect concentration of AG490 was added to STI571, significant synergy occurred, with suppression of maximal <sup>3</sup>H-thymidine incorporation by 42%. AG490 (5 μM) did not suppress all IL-3-mediated signaling, and IL-3 rescue could still be demonstrated in its presence. Similar observations were obtained for studies with the cell line FDbcr210E (data not shown). Studies with 32Dp210 cells replicated this phenomenon (Figure 3B). For this cell line also, 5 μM AG490 was a no-effect concentration for this agent alone. Figure 3B displays the typical effect of STI571, with an IC<sub>50</sub> of 0.03 μM in the absence of IL-3 and more than 2 μM with IL-3. Addition of 5 μM AG490 suppressed maximal proliferation by approximately 31%. Figure 3C demonstrates this synergistic effect for a third cell line, MO7p210.
Furthermore, this titration demonstrates that, at a no-effect concentration of STI571 for this cell line (0.01 μM), the cells are sensitized to the synergistic inhibitory effect of increasing, individually no-effect doses of AG490. Consistent with the notion that this synergy was dependent on inhibition of p210<sup>bcr-abl</sup> by STI571, synergy was not detected between STI571 and AG490 for nontransfected FDC-P1, 32D, and MO7e cell lines.

#### Effects of inhibitors on cell numbers and viability

Because 3H-thymidine incorporation only indirectly measures cellular proliferation by the surrogate of nucleic acid synthesis, we determined whether cell numbers and viability were changing over the duration of these relatively short proliferation assays. Even in 10 μM STI571, FDC-P1 cells in IL-3 proliferated (because the total cell number exceeded the starting cell number by 3.1-fold) (Figure 4A), although there was an increased proportion of dead cells, correlating with the 20% below-maximal 3H-thymidine incorporation observed for these conditions. Approximately 80% to 85% of FDC-P1 cells were viable over noninhibitory concentrations of STI571, but there was increased death (54% viable) at 10 μM STI571. For both AG957 and AG490, FDC-P1 cell numbers were 26% and 18% lower than control, respectively, at 5 μM inhibitor, and there was increased death. No FDC-P1 cells were viable after 18 hours in 50 μM AG957. FDbcr210E cells in IL-3 also proliferated under all assay conditions (Figure 4B), with total cell number increasing in the assay period by 2.5- to 4-fold, except in high tyrphostin concentrations that also suppressed 3H-thymidine incorporation (Figure 2A,C). At 50 μM, AG957 killed all cells in 18 hours, whereas 65% of cells remained viable in AG490. In the absence of IL-3, the profile of final cell numbers was essentially similar, except that the fold increase overall approximated 3-fold rather than 3- to 4-fold, consistent with the previously documented residual IL-3 responsiveness of this particular FDC-P1-derived p210<sup>bcr-abl</sup>-expressing clone<sup>25</sup> (Figure 4C). The different total cell yields in STI571 with and without IL-3 (comparing Figure 4B,C) reflect the effect of the restoration of IL-3 dependence for this particular clone.

#### Induction of apoptosis by inhibitors

There were significant morphologic changes in cells under the various incubation conditions. Reducing the concentration of FCS from 10% to 0.5% for the assay itself resulted in FDC-P1 cells assuming a vacuolated appearance and increased diameter with increased nuclear pleomorphism (Figure 5A,B), but mitotic figures indicated proliferation continued (Figure 5B). Adding STI571 at a high but noninhibitory concentration (10 μM) did not further affect cell appearance (Figure 5C). A toxic concentration of AG957 (50 μM) resulted in shrunken cells with pyknotic nuclei (Figure 5D), consistent with the nonviability of these cells (Figure 4A), but AG957 concentrations as low as 5 μM resulted in a similar appearance (data not shown).

### Discussion

Both AG957 and STI571 have been extensively studied because of their inhibition of p145<sup>abl</sup> and its oncogenic derivative p210<sup>bcr-abl</sup>. Within the context of the cell lines we have employed, our studies demonstrate the superiority of STI571 over AG957 as a selective p210<sup>bcr-abl</sup> inhibitor. Despite the 7-fold superior potency of AG957 for p210<sup>bcr-abl</sup> over p145<sup>abl</sup>,<sup>14</sup> AG957 was still a much less effective inhibitor of a p210<sup>bcr-abl</sup>-dependent cellular phenotype (factor independence) than STI571.
The similar IC<sub>50</sub> of AG957 on nontransfected cell lines and their p210<sup>bcr-abl</sup>-expressing derivatives suggests the inhibitory effects of AG957 are either unrelated to p210<sup>bcr-abl</sup> inhibition or are not restricted to p210<sup>bcr-abl</sup> inhibition. AG957 also appeared more toxic than STI571 because even the lowest inhibitory concentrations induced rapid cell death by apoptosis, whereas, in the presence of growth factor, inhibitory concentrations of STI571 suppressed nuclear DNA replication measured by ³H-thymidine incorporation without significant cell death. Although the previously reported effects of AG957 on K562 cell proliferation were associated with inhibition of p210<sup>bcr-abl</sup> kinase activity,<sup>16</sup> this association does not prove that the AG957 effects were primarily by its action on p210<sup>bcr-abl</sup>. Our data suggest that AG957 has other inhibitory effects in its spectrum of activity that also need to be considered in analyses such as these. In a comparison of AG957 effects on normal and CML patient-derived hematopoietic stem cells, although statistically significant differences were seen for some cell types, the discrimination was not biologically great: IC<sub>50</sub> values were in the range of 12 to 181 μM and differed by no more than 5.3-fold for any progenitor cell type.<sup>18</sup> Indeed, AG957 displays an IC<sub>50</sub> for the epidermal growth factor receptor (EGF-R) of 0.25 μM, making it a better inhibitor of the EGF-R than of p210<sup>bcr-abl</sup>, and leaving considerable scope for it to exert inhibitory effects on other kinase molecules. It is even possible that the spectrum of activity of AG957 includes inhibition of kinases critical for signaling from the IL-3 and GM-CSF receptors, which would have masked the recognition of its effect on p210<sup>bcr-abl</sup> in the assays we performed.

In contrast, the inhibitory effect observed in these and other<sup>10</sup> assays strongly implicates an effect of STI571 on the p210<sup>bcr-abl</sup> kinase. In our own experiments, in the presence of growth factor, STI571 showed up to 17-fold greater potency for p210<sup>bcr-abl</sup>-expressing derivatives of cell lines than for parental cell lines (Table 1). As a biological test of biochemical specificity, we based our assays on cell lines with a p210<sup>bcr-abl</sup>-driven specific phenotype—the acquisition of factor independence—and, as has been previously observed,<sup>10</sup> STI571 reversed this back to the factor-dependent phenotype of nontransfected cells. To determine its spectrum of activity, STI571 has been specifically tested against a broad range of protein kinases.<sup>10,12</sup> Although our experiments and those of others<sup>10</sup> indicate that STI571 acts on cellular p210<sup>bcr-abl</sup>, these surveys of its activity indicate that it is not totally specific. It is equipotent against the kinase activity of PDGF-R<sup>10</sup> and c-kit.<sup>11</sup> This is unlikely to be of relevance in the assays on which these present studies are based, because we did not supplement media with ligands for these receptors, and FDC-P1 and 32D cells are not known to express activated forms of them, although MO7e cells express c-kit.<sup>30</sup> However, this may be important in clinical situations, because, for example in CML, BCR-ABL-expressing early hematopoietic stem cells are likely to express c-kit and to be responsive to its ligand.
Even in the case of the PDGF-R, aberrant activation in hematopoietic cells drives a leukemic phenotype,<sup>12,31</sup> and this may contribute to the clinical effectiveness of STI571 in some circumstances.

We had expected that AG490, as a JAK2 inhibitor,<sup>19</sup> would have no significant effect on factor-independent p210<sup>bcr-abl</sup>-expressing cells. Indeed, in the presence of IL-3, the IC<sub>50</sub> values observed for transfected p210<sup>bcr-abl</sup>-expressing cells and their respective parental cell lines were similar for all 3 cell lines evaluated (Table 1). Such a comparison cannot be made in the absence of IL-3, because nontransfected cells do not grow without growth factor. However, once the growth factor dependence of p210<sup>bcr-abl</sup>-expressing cells was restored by STI571 treatment, addition of AG490 at a dose having no or minimal effect in its own right resulted in a significant further suppression of proliferation. Noting that a combination of growth factor deprivation and p210<sup>bcr-abl</sup> inhibition by STI571 provided an initiating signal for cell death to proceed by apoptosis, it is tempting to attribute the synergy between STI571 and AG490 to the known ability of AG490 to inhibit JAK2, which is utilized in IL-3 signaling.<sup>20</sup> However, AG490's profile of activity has only been tested against a very small number of kinases,<sup>19</sup> and recently it was shown to also inhibit JAK3.<sup>23</sup> There is ample scope for this effect to be mediated by inhibition of another kinase or enzyme yet to be identified as susceptible to inhibition by AG490. Nonetheless, the combination of an effective p210<sup>bcr-abl</sup> inhibitor such as STI571 and inhibition or antagonism of growth factor signaling might form an efficacious and highly novel signaling-based combination therapy for diseases such as CML. In this regard, it would be of interest to evaluate this synergistic inhibitor combination using cell lines with acquired resistance to inhibition by STI571.<sup>32,34</sup>

As expected, AG957 killed cells by apoptosis;<sup>17</sup> p210<sup>bcr-abl</sup>-expressing FDC-P1 cells in AG957 showed characteristic apoptotic morphologic changes, although we did not detect endonuclease-mediated apoptotic fragmentation of DNA. It is possible, because apoptosis is energy dependent and tyrphostins such as AG957 were designed as competitors of ATP binding to ATP-dependent enzymes, that the apparently broad spectrum of inhibitory activities of AG957 includes paralysis of ATP-dependent steps of the apoptotic enzymic cascade.

STI571 has previously been reported to induce apoptosis in BCR-ABL-positive CML cells.<sup>35,36</sup> We have shown that for p210<sup>bcr-abl</sup>-expressing FDC-P1 cells, the combination of p210<sup>bcr-abl</sup> inhibition by STI571 and growth factor deprivation led not just to arrest of cell growth but also to apoptotic cell death. At STI571 concentrations specifically reversing the p210<sup>bcr-abl</sup>-dependent phenotype, the restoration of growth factor signaling largely reversed apoptotic death. These observations provide evidence that, although specific inhibition of p210<sup>bcr-abl</sup> alone may not necessarily kill a p210<sup>bcr-abl</sup>-driven leukemia, a rationally designed combination of interruptions to signaling pathways may indeed induce death of such cells.
In this regard, the nonspecificity of STI571 may be advantageous—its potency against the c-kit receptor (mediating growth factor signals from its ligand stem cell factor) may be contributing to its efficacy in hematologic disease. Our studies have demonstrated in a head-to-head comparison the superior specificity and potency of STI571 over AG957 as a p210<sup>bcr-abl</sup> inhibitor. Additionally, our studies of synergy between STI571 and AG490 or growth factor deprivation support the view that there is scope for synergistic combinations of rationally selected signaling-based therapies to provide novel approaches of increased anti-leukemic potency for the treatment of kinase-driven malignancies such as p210<sup>bcr-abl</sup>-driven CML.

### Acknowledgments

We thank Dr A. Levitski for synthesizing and providing AG957 and AG490 for these studies; Dr B. Druker for providing cell lines and some helpful comments; Dr E. Buchdunger and staff at Novartis for providing STI571; Dr F. Walker for assistance with the proliferation assays; Ms M. Nerrie and Ms D. McPhee for technical assistance; Mr G. Rennie for help with the statistical analysis; Ms E. Passmore for secretarial assistance; and Professor A. W. Burgess, Professor W. Robinson, Dr A. Scott, Dr P. Ekert, and Dr B. Brady for helpful discussions. G. Lieschke is the recipient of a Wellcome Senior Research Fellowship in Medical Sciences in Australia.
Exposing the pain and celebrating the triumphs of the black female spirit: An analysis of Alice Walker's *In Love and Trouble*

by Brittan Nelisa Swanagan

A thesis submitted to the graduate faculty in partial fulfillment of the requirements for the degree of MASTER OF ARTS

Department: English
Major: English (Literature)
Major Professor: Kathleen Hickok

Iowa State University
Ames, Iowa
1996

This is to certify that the Master's thesis of Brittan Nelisa Swanagan has met the thesis requirements of Iowa State University.

DEDICATION

To my mother, Patricia Ann Swanagan, who, with a lot of help from the Infinite Spirit, continues to be the best mama and daddy anyone could ever wish for. Your never-ending encouraging words pushed me through when I could have easily given up. I love you! To those who taught me that there is always room for improvement: Kathy Hickok, You'll never know how much those two little words helped my tiring spirit: "Carry On!" Neil Nakadate, I'll never forget the way you stared in my eyes, pulling out what you knew I knew, and saying: "Good! Right! Now go further with that thought!" Brenda Daly, God bless your gift of knowing exactly what I was trying to say. Thank You for all your support and encouragement. And last but certainly not least, to Roselily: If I see you in Chicago, you've got a friend in me!

### TABLE OF CONTENTS

**ACKNOWLEDGMENTS**

<table> <thead> <tr> <th>Chapter</th> <th>Title</th> </tr> </thead> <tbody> <tr> <td>I</td> <td>SELFHOOD DENIED: THE HISTORICAL BACKGROUND OF THE BLACK FEMALE SPIRIT</td> </tr> <tr> <td>II</td> <td>THE CONSTRAINTS OF SOCIETAL CONVENTIONS</td> </tr> <tr> <td>III</td> <td>THE RESTRICTION OF BLACK FEMALENESS WITHIN SOUTHERN TRADITION</td> </tr> <tr> <td>IV</td> <td>REVOLUTION AS AN OBSTACLE TO UNSUPPRESSED AGWU</td> </tr> <tr> <td>V</td> <td>FROM VICTIM TO SURVIVOR: APPROACHING AFRICAN AMERICAN LITERATURE WITH ALTERED EXPECTATIONS</td> </tr> </tbody> </table>

**BIBLIOGRAPHY**

ACKNOWLEDGMENTS

First of all, I must thank the Higher Spirit for blessing me with my mother, Pat. Between You and her, I've managed to do the impossible. You brought me through some very tough times, and made me stand when sitting was much, much easier. Your patience with my disbelief made me a true believer. Please continue to lead me in the right direction ...
I also want to thank all those who always had an encouraging word for me: Brenda Daly, Kathy Hickok, Neal Bowers, Neil Nakadate, Carol David, Celia Naylor-Ojuronge and Madama Labode. We must always thank the ones who paved the way and lent a helping hand: DeRionne Pollard and Petrina Jackson. I hope I can be an inspiration to others, just as you have been to me. To all my Sisters: LaVonne, Abi, Val, Cherisse, Janea, Karla, Vauncy, Tanisha, Dashan, and all the rest of you guys! Please know how much your sisterhood has meant to me. To Ife: My Ship, My Captain, My Best Friend. I know I don't see you much, but I'm counting on you to deliver my first baby. To ALL of you: I'll see you in the nuthouse! And finally, I must thank my support system: Aunt Bessie, Janie, Johnny, Oscar, John and family, Pixie-Poo and family, Pat and family, Frannie and family, Aunt Ruby and family, Rob and family, and everyone else who said, "You can do this, girl!" I Love You All!

CHAPTER I. SELFHOOD DENIED: THE HISTORICAL BACKGROUND OF THE BLACK FEMALE SPIRIT

An evaluation of the Black female spirit as present in Alice Walker's *In Love and Trouble: Stories of Black Women* (1973) must begin by exposing the adversities Black women have been forced to overcome in America; despite undying efforts to nourish their selfhood, they have battled a society that continuously uses race and gender as suppressors of their identity. Existing as a Black woman in the twentieth century involves dismantling the "double-negative stigma," a term used to describe the towering obstacles placed before the individual who is both Black and female. In addition, little nourishment has been given to those labeled powerless and unimportant, whereas our society has encouraged white males to embrace their race and gender as symbols of authority. Unfortunately, their maleness has meant a history of oppression for minority men and women. I believe the internal spirits of Black women have been devalued, ignored, and in the most horrible situations, annihilated.

The Black woman has had to conquer enormous odds in her attempt to sustain a sense of self, and this has been evident both during slavery and in today's society. On slave ships African women and men were considered inhuman and received inhumane treatment, which included rape, lashings, the eradication of dignity, the removal of any observable heritage, and daily terrorization. However, although the slave experience was horrific for both men and women, historians have given minimal recognition to the effect slavery had on the emotional health of Black females. In fact, enslaved women endured abuse equal to that of enslaved men, and in *Ain't I A Woman?* bell hooks asserts that the experiences of the Black female slave have been unfairly devalued:

Scholars have been reluctant to discuss the oppression of black women during slavery because of an unwillingness to seriously examine the impact of sexist and racist oppression on their social status. Unfortunately this lack of interest and concern leads them to deliberately minimize the black female slave experience. (22)

For instance, although enslaved women were forced to assume a "masculine" role on the plantation field, enslaved men were rarely forced to perform labor as domestics in white households.
In this respect, the Black woman's femininity was confiscated whereas the male slave was, to some extent, able to maintain his masculinity; this issue of femaleness will be discussed further in the chapter entitled "The Restriction of Black Femaleness Within Southern Tradition." My point is that women felt a sense of loss when unable to sustain their womanly essence. However, the greatest negative impact on the Black woman's sense of self during slavery was her continued sexual exploitation. More specifically, the most profound violation of self was undoubtedly the frequent occurrence of rape and the violent removal of children from their mothers. To be black and female, slave or free, elicits feelings of pride in the ability to share one's sexuality at will and reproduce in the name of "family." These are physical abilities, but their true roots thrive in the soul within the female body. When white, lewd slave owners preyed on the vulnerability of Black women, they simultaneously etched out a great portion of the Black female spirit. Moreover, when they snatched babies from their mothers' breasts, they invaded yet another one of these same spirits. On the issue of rape, Black activist Angela Davis adds that "the rape of black female slaves was not, as other scholars have suggested, a case of white men satisfying their sexual lust, but was in fact an institutionalized method of terrorism which had as its goal the demoralization and dehumanization of black women" (hooks 27). The connection is easy to ascertain: When a slave woman was demoralized and dehumanized, the effects were felt in her emotional, internal self.

Over hundreds of years, the Black female has made continuous attempts at freeing her troubled spirit, and in doing so she has battled with her longtime foes: racism and sexism. For example, even in the twentieth century, more than a century after slavery was abolished, Black women have fought various societal conventions that discourage them from being their spontaneous, unique, artistic selves. Evidence can be found in the media, where physical beauty is still dependent upon the existence of traits similar to those of the European woman. In the 1960's, "Black is Beautiful" was an expression on the lips of almost every Black man and woman, and pride in the natural, physical beauty of the African American was at its peak. However, it didn't obliterate for Blacks, especially women, the memory of being labeled "ugly" and "unattractive" because of dark skin and kinky hair. As the twenty-first century approaches, significant progress has been made in assuring the non-European female of her unique splendor, but a white, male-dominated society still promotes the white female counterpart as the ideal symbol of beauty.

As mentioned earlier, in the discussion of slavery, rape had a ruinous effect on the Black woman's emotional self, and its effects have manifested in nations around the world. Obviously, rape has a disastrous impact on any woman, and can happen to women of different ages, cultures, races and classes. Many women have said that although their physical selves may have been brutally beaten and abused, the pain of rape runs deep into their hearts and has a lasting effect on their psyches. In "Surviving Rape: A Morning/Mourning Ritual" Andrea Benton Rushing, a Black English professor, shares the specifics of her actual rape experience as well as her quest for spiritual healing.
A successful university professor, Rushing lived alone with her college-aged daughter and seemed "like a woman accustomed to being in control" (130). However, on October 16, 1988, a strange man crept into her bedroom in the middle of the night, straddled and raped her, demanded her cash, and left. The entire incident took place in the dark, and when asked to describe the perpetrator she could only recall the absence of a "beard, sideburns, or goatee," his "clammy imitation leather jacket and heavyish gloves," and his "short, flabby penis" (127-128). After the police came to secure her safety, she claimed the "strong-Black-wonder-woman" impersonation and drove herself to the hospital; her daughter and her best friend (who entered the apartment just after the rape) were too distressed to drive. At the hospital Rushing received the routine post-rape medical exam (physically, she was perfectly sound) and when all was over, returned to her apartment. When she told friends and family of her ordeal, she assured them her "body" was "fine" (132). Rushing was certain that since the night of the rape was over and in the past, and she had not been physically affected, her life could pick up where it left off. However, her assumption was far from the truth. Three years after the incident she said:

If you'd told me way back then that I'd still be recovering from rape now, I wouldn't have laughed in your face, but I wouldn't have believed you either. I'd faced traumas before - tenure review, major surgery, heartshattering divorce - stumbled through some and transcended others, so I expect[ed] rape to slip from me like a boiled beet's rough skin ... Since I'd felt so good and been so clear-headed and capable in the immediate aftermath of being raped - no signs of the physical exhaustion, disorientation, anxiety, or amnesia that, later, become my almost constant companions - I was as unprepared as everyone else when shock's soft shawl slipped from my shoulders. (128, 134)

What Rushing realized, then, is that her internal spirit had taken the majority of abuse; the rapist irritated the inner part of her that once rested comfortably and with ease. I use this example of rape because it accurately demonstrates my belief that damage to the spiritual self is extremely destructive. Similarly, in Walker's short stories, racism, sexism, and male domination (among other societal factors) have a traumatic effect that undoubtedly wounds the Black woman's spirit. As Rushing found, mental health is an important ingredient in maintaining a healthy spirit. In "Health, Social Class and African-American Women" by Evelyn Barbee and Marilyn Little, we learn that Black men, white men and white women have a level of mental well-being higher than that of African American women (190). Their essay focuses on the limited health care received due to "membership in two subordinate groups, African American and women" (183), and recognizes that Black women have a difficult time attaining happiness when their emotional self is tattered.

**Literary Background**

A historical background of the Black female spirit is necessary if one is to recognize the various ways Black women writers, both past and present, address this same spirit. With this in mind, I would like to focus on the contributions Black women have made to the short story genre; by revisiting their own experiences and those of their sisters, they have expressed the pain, frustration and joy felt by women from different backgrounds, age groups and regions.
Once I have investigated this literary tradition, I will then explain how six stories in Alice Walker's *In Love and Trouble* expose women trying their absolute best to save their individual sense of self without further damaging their inner spirits. Many Black, female writers have addressed, at least indirectly, the repercussions of harboring an injured spirit. This is certainly the primary focus of Walker's collection of short stories which "probes the extent to which black women have the freedom to pursue their selfhood within the confines of a sexist and racist society" (Christian, "Wayward" 92). Further, Walker writes within a tradition of Black women who feel an obligation to voice their anger and frustration at a society that turns a blind eye to psyches damaged by abusive relationships, racism, sexism, male-domination, and a myriad of other troubling issues.

Contemporary novelists like Toni Morrison and Terry McMillan, along with Alice Walker, are the names that immediately come to most minds when one is asked to recall powerful African American women writers. Certainly they deserve this recognition, for in 1992 they each had a book on the best-seller list; Morrison's *Jazz*, McMillan's *Waiting to Exhale*, and Walker's *Possessing the Secret of Joy* collectively sold millions of copies and impacted the lives of all women, especially African American women. At the same time, as with Walker's collection of short stories, Black women writers have used this genre to address issues relevant to the Black woman. In the introduction to their anthology of short stories, *Centers of the Self*, Judith Hamer and Martin Hamer assert that the short story "more readily reflects the moods and attitudes of black people" because it is the form generally published by African American magazines and journals (5). Because the short story as a genre has existed for decades, the "moods and attitudes" alluded to above can be found in the early writings of historical authors like Frances Ellen Watkins Harper. In fact, her short story "The Two Offers," often considered the first tale published by an African American, "raised questions that were probably central to the lives of many women" (Hamer and Hamer 8). The story questions whether the union with a man is a commitment more important than spinsterhood, and if a woman's place is only in the home. Harper's story is about choices a woman must make; in making life-altering decisions, a woman undoubtedly has to weigh the ratio of self-sacrifice to happiness. That is, how much of her own personal spirit is compromised when her purpose in life moves from the internal to the external?

At the turn of the century, Black women writers became more visible, partly because of the emergence of two new magazines, The Colored American Magazine (1900) and The Crisis (1910). However, many of these writers, like many African Americans, still held to the notion that they could become complete citizens by embracing the values of middle-class White Americans. In turn, domestic allegories emerged that "created an instructive but fictive world, where art did not imitate reality" (Hamer and Hamer 9). Fortunately, the attention to social realism was reborn in the literature of Black women and can be found to this day; many of their writings return to issues that have a direct impact on the individual woman.
This is illustrated most clearly in the late 1960's when the racial oppression challenged in Black women's writings took a back seat to explorations of gender maltreatment. More specifically, "women who protest their treatment by men" (Hamer and Hamer 13) are the subjects of vocal writers like Ntozake Shange, who in her short story "comin to terms" (1979) examines the relationship between a young woman and her boyfriend. Here, the protagonist refuses to have sex on demand with her live-in mate, and in doing so "attempts to free herself from male domination ... sadly, written over one hundred years after slavery, the Black woman is still fighting for control over her body" (Hamer and Hamer 13). And, as mentioned earlier, the physical body is the gate to the spiritual self; when the former is distressed, the latter is as well, oftentimes to a higher degree.

"After Saturday Night Comes Sunday" (1971) by Sonia Sanchez is an example of the strength with which women endure adversity. In this story, Sandy, mother of twins, is patient with her abusive, drug-addicted lover who is overly absorbed in his own self-satisfaction. Her nakedness at the close of the story represents the extent to which she has given herself while receiving little in return. Ironically, during the story Sandy often feeds her little boys and her lover, but Sanchez makes little mention of her taking food for herself; this can certainly be seen as representative of her unfulfilled, desolate spirit.

A striking resemblance to Walker's "Her Sweet Jerome" is seen in Tina McElroy Ansa's short story entitled "Willie Bea and Jaybird." In her story, Ansa portrays Willie Bea as a young wife "five feet tall, ninety pounds, stick legs, Coke-bottle glasses" (15). Her strength is tested when she discovers a dreadful truth about her husband, Jaybird. When the story comes to a close, Willie Bea is uncertain what to do with what she has just learned, much like Mrs. Jerome Washington, the protagonist of Walker's tale. In short, she is unsure how to handle her troubled spirit and comes up empty handed when she relies on her husband to do this for her.

What, then, can be ascertained when we look at the progression of the short stories of Black writers? And more specifically, how does Alice Walker fit into this literary tradition? If it is true that this genre "is a way to celebrate our fantasies, to mark our presence in time, to pass down our loves, fears, and foibles from one generation to the next" (Hamer and Hamer 5), then we know its presence is both essential and therapeutic. For over a century, Black women writers have taken on the role of "therapist" by giving voice to the female forced to sacrifice and compromise self because of her race and gender. But at the same time, Walker adds a new component to the race/gender theme by revealing the troubles of the Southern, Black woman who insists on "challenging convention, on being herself, sometimes in spite of herself" (Christian, "Wayward" 87). Born and bred in Eatonton, Georgia, the eighth child of sharecropping parents, Walker undoubtedly has a pronounced connection to her Southern heritage and uses her experiences to give voice to women characters that might otherwise have remained silenced. In "The Black Writer and the Southern Experience" Walker says the "black Southern writer inherits as a natural right ... a sense of community" (17). This may very well be an explanation for Walker's ability to capture the essence of the communities in which her protagonists live and die.
Further, Walker says Black women writers have a "clarity of vision," and it is certainly this "vision" that allows Walker to understand the forces that restrict and prohibit many of her female characters. Clearly, Walker's Southern connection is present in *In Love and Trouble*; as she investigates the Black male-female relationship, she uses the South to intensify the importance of these varied situations. Bettye J. Parker-Smith proclaims in "Alice Walker's Women: In Search of Some Peace of Mind" that the South is an integral factor in the total success of these stories:

What is clear ... is her articulation of the complete Black male-female dialogue in all of her fiction. She captures the exactness of their experiences by using the South as a backdrop. She draws upon the language: a quick, choppy, picturesque recipe of words and phrases. She plays upon the land: open, swallowing, birthrighted, but for the most part unattainable. (480)

Parker-Smith eloquently summarizes the extent to which Walker has used the South, and examples of each of the above descriptors can easily be found. It is, for instance, the "open, swallowing" land that Roselily is eager to leave; in the midst of her wedding, she "thinks of cemeteries and the long sleep of grandparents mingling in the dirt" (*Trouble* 6). In addition, the "recipe of words and phrases" in Myrna's journals serves as our only insight into the mind of this closet-writer. There are even times when the South's beauty is so engrossing that it distracts the reader from the trouble lingering in many of the stories. For example, in "The Child Who Favored Daughter," "the rows of cotton that stretch on one side of her (Child) from the mailbox to the house in long green hedges" (*Trouble* 35) create a pleasant picture that fails to prepare us for the horror waiting at the close of this tale. Parker-Smith adds that for Alice Walker, "the South provides a spiritual balance and an ideological base from which to construct her characters" (478).

Alice Walker herself is as complex as her characters; she wears many hats at the same time, such as Black, female, Southern, and feminist. Thus far in my discussion, I have explored her identity as an African American, Southern woman writer, but her position as a feminist deserves some attention. Although she is dedicated to the survival of women in general, Walker's primary concern is undoubtedly the Black woman. Because of this, she prefers the term "womanist" to "feminist," and as a preface to *In Search of Our Mothers' Gardens* Walker defines the characteristics of a "womanist":

1. **From womanish.** (Opp. of "girlish," i.e., frivolous, irresponsible, not serious.) A black feminist or feminist of color. From the black folk expression of mothers to female children, "You acting womanish," i.e., like a woman. Usually referring to outrageous, audacious, courageous or willful behavior. Wanting to know more and in greater depth than is considered "good" for one. Interested in grown-up doings. Acting grown-up. Being grown up. Interchangeable with another black folk expression: "You trying to be grown." Responsible. In charge. Serious.

2. Also: A woman who loves other women, sexually and/or nonsexually. Appreciates and prefers women's culture, women's emotional flexibility (values tears as natural counterbalance of laughter), and women's strength. Sometimes loves individual men, sexually and/or nonsexually. Committed to survival and wholeness of entire people, male and female.
Not a separatist, except periodically, for health. Traditionally universalist, as in: "Mama, why are we brown, pink, and yellow, and our cousins are white, beige, and black?" Ans.: "Well, you know the colored race is just like a flower garden, with every color flower represented." Traditionally capable, as in: "Mama, I'm walking to Canada and I'm taking you and a bunch of other slaves with me." Reply: "It wouldn't be the first time."

4. Womanist is to feminist as purple to lavender. (xi-xii)

So, because Walker identifies herself as a "womanist" who appreciates the strength of Black women as well as their spiritual wholeness, it is natural that she would probe the extent to which that same strength - that sense of self - is tested under adverse conditions. Walker herself adds she is "preoccupied with the spiritual, the survival whole of my people. But, beyond that, I am committed to exploring the oppressions, the insanities, the loyalties, and the triumphs of Black women" (Christian, "Wayward" 82). In "A Womanist Response to the Afrocentric Idea," womanist preacher Lorine L. Cummings discusses the relationship between Afrocentrism and womanism, and asserts that Afrocentrism, although beneficial in theory, fails to acknowledge the needs of the Black woman. To Cummings, Black women must depend on themselves for understanding and self-nourishment:

No one can accurately reflect and/or speak about African American women better than ourselves. Others attempt to discuss their understanding of our experience, but they cannot tell the entire story ... Womanists are voicing concerns of African American women which are often very different from those articulated by their white female and African American male counterparts. (58)

"To what extent does one expose the pain of being a black woman?" This is the question Barbara Christian asks in "The Contrary Women of Alice Walker: A Study of Female Protagonists in In Love and Trouble." Walker, willing to expose the shortcomings of the Southern, Black community at the expense of tainting the image of that same community, reveals the insanities of her women, as in "Really, Doesn't Crime Pay?" where the restrictions on Myrna nearly drive her to murder. In addition, we become part of their daydreams, where the onset of a more respected, secure life is foreshadowed by entrapping thoughts for Roselily on her wedding day. And in the most desperate situation, we see the Black woman, after being deceived by the Black male, destroy her own life as her only means of escape. In short, we witness the tortured, abused spirits of these women as they try to seek freedom from situations that hinder self-betterment and happiness.

Because we, as removed readers, have the privilege of being judgmental, it's easy to question the actions of these women. We wonder why Mrs. Jerome Washington burns herself as well as "the other woman" in "Her Sweet Jerome." We fail to understand why the young girl in "The Child Who Favored Daughter" doesn't deny her white lover at the beginning of the story; if she had done this, her fate may have been different. And we are confused when Maggie in "Everyday Use" is willing to give her grandmother's antique quilt to her Afrocentric sister. Nevertheless, it's our responsibility to empathize with their situations and understand that they do what they must to free their spirits from the conventions of marriage, racism and sexism.
In "Boundaries of Self", a chapter from Alice Walker, Donna Haisty Winchell says: One of the dragons that threatens these women is racism in its various individual and institutional forms. Another is their love of black men who use and abuse them. In the stories being in love often means being in trouble. (29) Above all, we see Walker's women trying to have hope, trying to find a safe harbor for their individual selves that want to be "characteristically and spontaneously themselves" (Christian, "Contrary" 34). In "Alice Walker: The Achievement of the Short Fiction" Alice Petry says, "Walker manages to counterbalance the oppressive subject matter of virtually all these thirteen stories by maintaining the undercurrent of hope" (13). For instance, in "The Revenge of Hannah Kemhuff," Hannah's sole reason for living (after the death of all her children) is her belief that the white woman who denied her food during the Depression will be repaid for the grief her uncharitable heart brought on this poor, needy woman. For most, using retribution as a primary motivation for living is quite unhealthy. However, I would argue that if close attention is paid to the external factors that irritate the spirits of each of Walker's women, it will undoubtedly become clear why these protagonists take their chosen course of actions. In addition, this close analysis will also reveal that even in the most tragic situation where death is the end result, we should celebrate that some form of escape is attained. Winchell adds that although Black women are eager to embrace "the invincibility of the strong women of color ... they seemed to have little sympathy for women whose personal struggles ended in defeat" (29). Thus far I have used the word "spirit" to describe the innermost self of Walker's women - the foundation of true happiness and the source of most of their anguish. With this in mind, Walker prefaces this collection of short stories with two excerpts, one of which is from Nigerian author Elechi Amadi's book The Concubine (1966). Amadi describes the troubled spirit of a young girl whose marriage had been arranged since she was an infant: Wonuma soothed her daughter, but not without some trouble. Ahurole has unconsciously been looking for a chance to cry. For the past year or so her frequent unprovoked sobbing had disturbed her mother. When asked why she cried, she either sobbed the more or tried to quarrel with everybody at once. She was otherwise very intelligent and dutiful. Between her weeping sessions she was cheerful, even boisterous, and her practical jokes were a bane on the lives of her friends ... But though intelligent, Ahurole could sometimes take alarmingly irrational lines of argument and refuse to listen to any contrary views, at least for a time. From all this her parents easily guessed that she was being unduly influenced by agwu, her personal spirit. Anyika did this best but of course the influence of agwu could not be nullified overnight. In fact it would never be completely eliminated. Everyone was mildly influenced now and then by his personal spirit. A few like Ahurole were particularly unlucky in having troublesome spirits. Ahurole was engaged to Ekwueme when she was eight days old [my emphasis]. (128) The use of the word agwu in Amadi's book seems appropriate considering Ahurole's marital arrangement; an agwu is a state of "mental derangement; madness, which, though not very acute, makes the victim very quarrelsome" (Williamson 16). 
The word derives from the Igbo language, used primarily by those from Igboland, Nigeria. The Igbo people are those from the states of Anambra and Imo, a society that has continuously struggled to become more urbanized (Ofoegbu 203). The Igbo language itself, especially in literature, is not as prevalent as Yoruba because "creative literature that has been published in Igbo, about the Igbo and by the Igbo is nothing to compare in quantity and in quality with what is to be found in Yoruba" (Emenanjo 47). Walker's reference to Amadi's book is significant because it is this same agwu that troubles most of her women. More importantly, just as "the influence of agwu could not be nullified overnight," neither do Walker's women attain a sense of peace instantaneously. For most, years of pain and grief must pass before they are able to reach the state of freedom that is right and appropriate for them. Christian adds, "these stories are about the most natural law of all, that all living beings must love themselves, must try to be free - that spirit will eventually triumph over convention, no matter what the cost" ("Contrary" 46). With this in mind, I will frequently refer to the troubled personal spirits of Walker's protagonists as their agwus, just as Amadi does in The Concubine. **In Love and Trouble** I chose Alice Walker's *In Love and Trouble: Stories of Black Women* (1973) for many reasons. First, I wanted to concentrate on the emotional well-being of the African American female in a male-dominated society. Second, I wanted to focus on this issue in the short story because although a great deal has been written on Walker's novels, little attention has been given to her contribution to this genre. And third, Walker's variety of settings and circumstances helped me arrive at the fact that women react to their troubled spirits differently, depending on a myriad of external factors. For centuries the Black female spirit has been ignored, and although many will attest to her strength, she has still been seen as victimized by circumstances beyond her control. A true understanding of the spirit of the Black woman cannot occur until she is taken out of this victimized position. Further, before she is judged or condemned for her actions, a critical eye should be set upon the conventions she must battle. In doing this, it will become more evident that although she may appear to have been defeated, she has really triumphed in a manner that unleashes her own unique, individual self. I would like to address the Black woman's spiritual freedom by looking at three interrelated issues: the constraints of societal convention, the restriction of black femaleness within Southern tradition, and the ideology of revolution as an obstacle to an unsuppressed agwu. To examine how the constraints of societal convention affect the Black female spirit, I chose Walker's "Roselily" and "The Revenge of Hannah Kemhuff" because the protagonists in these stories are clearly at odds with themselves in a racist, sexist society that limits their ability to reach happiness. As they exist in a world that functions for the benefit of white males, their struggle to survive is that much more difficult. Societal conventions have restricted most people of color, but in America their effect on African Americans is clear. Not only has financial security been threatened, but so has the overall state of Black Americans' physical and mental health. 
Other stories in In Love and Trouble address this same issue; in fact, all of Walker's women battle the conventions of a racist, sexist, male-dominated society. However, "Roselily" and "The Revenge of Hannah Kemhuff" are particularly interesting because of the unique perspective we have on the lives of these women. At the same time, I wanted to spend some time on the effect these societal conventions have on Southern women, and "Really, Doesn't Crime Pay?" and "The Child Who Favored Daughter" seemed to portray the most critical situations involving the restricted Black female spirit. The first story examines the woman who is discouraged from doing anything more than fostering her femininity, while the second looks at the fate of the young Black woman who is determined to explore her womanly self with anyone she pleases, including the white male. In reading these two tales, I was especially enthralled by their endings, with one resulting in death and the other an opportunity for escape. Recognizing that both conclusions are means of escape removes the Black woman from a traditional, victimized role. Again, Walker addresses this same issue in other stories as well, including "We Drink the Wine in France" where a young Black girl and her white professor of French fantasize about each other without ever acting on their hidden desires. Although the tone of this story is different from "The Child Who Favored Daughter," it still demonstrates how the Black female, because of societal conventions, is not free to explore her sexuality with the male of her choice. Finally, I chose the last topic, revolution as an obstacle to the unsuppressed spirit, or agwu, for two primary reasons. First, I was curious how the issue of change is interdependent upon racism and sexism; this was most evident in "Her Sweet Jerome" and "Everyday Use." Similar to the two stories I've selected for my analysis of the restriction of Black femaleness within Southern tradition, these two stories end quite differently. However, both examine the various ways the Black female spirit manifests when revolution has an impact on self-identity. In addition, these two stories are complemented by other tales in Walker's collection of short stories, such as "Entertaining God," the tale of a mother who uses the Black revolution as a means of escaping her flawed history. Further, because "Everyday Use" is the most widely anthologized story, I wanted to look at revolution as it pertains to the internal self, an angle few critics have taken. Although I have selected only six stories from Walker's *In Love and Trouble* for my analysis of the Black female spirit under varied conditions and circumstances, this theme is prevalent in most of the stories in this collection; looking at a few selected stories allows for a closer examination of the issues at hand. CHAPTER II. THE CONSTRAINTS OF SOCIETAL CONVENTIONS For many Black women, past and present, finding a safe harbor against the perils of racism and sexism has been strenuous, demanding, and at times, virtually impossible. Our prejudiced society places potholes in the pavement on which Black women walk, and, at the same time, a nation that is dominated by men transforms these same potholes into ditches most women have a difficult time avoiding. Walker's treatment of this issue is enhanced by her concentration on the Southern Black woman who, in addition to being both Black and female, must exist in an atmosphere that is both rewarding and detrimental. 
First, the South offers the Black individual a sense of history that connects her to her ancestry; this sense of belonging is beneficial to any person. However, the South is also viewed as the birthplace of American racism, with many violent acts against Blacks taking place on the very land where many were enslaved. Christian adds: Focal to Walker's presentation is the point of view of individual black southern girls or women who must act out their lives in the web of conventions that is the South, conventions that they may or may not believe in, may or may not feel at ease in, conventions that may or not help them grow. ("Contrary" 33) Christian's description of the Black, Southern female is an appropriate portrayal of the women in Walker's collection of short stories, but appreciating their struggle depends heavily on the level of optimism we, as readers, bring to these tales. What, then, does a Black woman do when her progression from one society to another yields similar restrictive conventions? This is the question that comes to mind when reading "Roselily," the first short story in In Love and Trouble. Young Roselily, the unmarried mother of three (the fourth given to the child's wealthy father) stands before her friends and family as she is wedded to a Chicago Muslim. The "stiff severity of his plain black suit" (Trouble 5) frightens and comforts Roselily, who, captured in the thoughts of her daydream, pays little attention to the words of her own wedding ceremony. Her marriage, based on necessity as opposed to love, is certain to render "respect, a chance to build ... A chance to be on top" (Trouble 4), but reflections on this impending happiness are diverted by images of entrapment. Roselily is depending on her husband to "free her ... A new life! Respectable, reclaimed, renewed. Free!" (Trouble 7). But at the same time, this young Southern woman's exhilaration is stifled by the climactic fall of her thought patterns; she is aware that by voluntarily becoming a part of this new life, she must abandon any traces of freedom she once had in her single, Southern life. In other words, Walker suggests that Roselily must trade freedom for freedom, a generous price to pay for that which should be, in and of itself, free. This ambiguity in "Roselily" is characteristic of Walker's work; rarely are the specifics of life in her stories wholly beneficial or detrimental, and this makes it difficult to conclude whether, in young Roselily's situation, her marriage is indeed a blessing. In fact, the title character's name itself elicits mixed emotions in readers of this story. That is, a rose has prickly thorns and a lily is considered pure and fair in appearance, just as Roselily's marriage to the Muslim is going to be both pleasant and unpleasant as she tries "to obtain for herself some measure of social and economic security" (Petry 13). In addition, other conventional associations of the rose are love, sexuality, and beauty, all factors to be compromised once Roselily returns to Chicago with her new husband; the desire to obtain a "better" life becomes more important than love, repeated pregnancies hinder her sexuality, and Roselily's beauty will be hidden behind the veil she must wear. So to what extent is she sacrificing self for the safety net her new marriage will provide? Throughout the story, it is obvious Roselily is entering a societal convention just as entrapping as the one she is leaving, and we know the conventions offered to her prior to her marriage have afforded minimal freedom. 
We learn early on that she is unhappy and that the life she is living in the South has dampened her spirit. Her agwu, weakened by single parenthood and tiring hours in a sewing plant, is so distressed that her daydream conjures memories of her fourth child given away to his biological father. This relationship, just like the one with the Chicago Muslim, suppressed Roselily's agwu: Her fourth child she gave away to the child's father who had some money. Certainly a good job. Had gone to Harvard. Was a good man but weak because good language meant so much to him he could not live with Roselily. (Trouble 4-5) When Roselily was herself, uneducated and cultured only by the South, this man could not be with her. Oddly enough, it is he that Walker reveals as the weaker of the two, for he "cried off and on throughout her pregnancy. Went to skin and bones. Suffered nightmares, retching and falling out of bed. Tried to kill himself" (Trouble 5). Nevertheless, her fourth child's father had the luxury of relocating to New England where his agwu could be at peace with "Bach" and "chess," while Roselily, already weighed down with the responsibility of three young children, must stay in the South, among her people who will, if nothing else, accept her for the Southerner she is. Sadly, their acceptance is due to their similarities to her; she feels linked to them because of this, especially to her dead mother with whom Roselily feels a "confusing" bond. The Southern conventions which Roselily battles prevent her from having a free inner spirit; she does not have the same freedom as her fourth child's father (as the maternal figure, she cannot relocate with the same ease he can), and her life in the country is unlikely to offer escape from the "detrimental wheel" of the South. What we have, then, is a young woman desperate to create a better life for herself and her children; understanding her situation is essential in understanding why she is willing to commit to "a lifetime of black and white" (Trouble 5). As wife to a Muslim, Roselily knows what her role will be, and she fears that any freedom her spirit had in the South will be eradicated once she moves to the North with her husband. The narrator reveals that "even now her body itches to be free of satin and voile, organdy and lily of the valley. Memories crash against her. Memories of being bare to the sun" (Trouble 6). But Roselily, conscious of the sacrifices she must make, chooses her Chicago life because her options are limited. In "To Marry or Not to Marry," Carol Nadelson and Malkah Notman explain society's role in a woman's decision to marry: In the past, few women chose not to marry, because remaining unmarried carried with it a strong social stigma, as well as economic problems. An unmarried woman was seen as unattractive, unworthy, and unwanted. Women also felt this way about themselves. (111) The man standing beside Roselily offers no comfort, but she still "presses her worried fingers into his palm" because although he is not the soothing image of love she wishes she had, he is a tangible, safe figure to hold on to. And what of Roselily's new husband? His motivations for marrying her are just as disturbing as Roselily's reasons for marrying him. Little is known about this Northerner except that "he blames Mississippi for the respectful way the men turn their heads up in the yard, the women stand waiting and knowledgeable, their children held from mischief by teachings from the wrong God (emphasis mine)" (Trouble 3). 
However, this excerpt reveals two important elements. First, it makes plain his antagonism toward the South, and second, it exposes the severity of his convictions within the Islamic faith. And still, this information both clarifies Roselily's husband and raises additional questions about him. For example, why does he choose to marry a girl from the South if the South itself, and all it represents for Black people, enrages him to the extent it does? I believe when he "glares ... to the occupants of the cars, white faces glued to promises beyond a country wedding" (Trouble 3), Walker is sending covert signals regarding race and racism. Roselily's soon-to-be-husband deals with his Blackness in a white society by using Roselily as a symbol of his acquired uprightness. Christian adds that "for him a veiled black woman in his home is a sign of his righteousness, and in marrying Roselily he is redeeming her from her backward values. With him, she will have black babies to people the nation" ("Contrary" 35). For the Muslim, Roselily's presence sends a hidden message to the white Southerners who pass by this country wedding: His people, his God and his religion will be strengthened by one (Roselily, that is) who was once in their controlling grasp. And, although it is true that Roselily's new life will be built on Northern territory, Walker makes clear that her condition is not necessarily bettered, for she is moving to the "South Side" of Chicago, where his efforts to "redo her into what he truly wants" (Trouble 8) will undoubtedly further diminish Roselily's individuality and sense of independence. Furthermore, his marrying the Christian Roselily demonstrates his ability to strengthen and expand the Black race; to wed a Muslim woman would not bring him closer to these goals, and would fail to make him feel as if he has stolen something from the White man. In *The Black Muslims in America*, C. Eric Lincoln explains the strong animosity many Muslims have towards Blacks' involvement in Christianity:

> It would be difficult, probably impossible, to separate the Black Muslim teachings on Christianity from those on race. A fundamental tenet of the sect is that all blacks are Muslims by nature and that Christianity is a white religion. Thus there is not even a possibility that an awakened black would accept Christianity. (72)

Roselily's husband considers it his obligation to "awaken" her, and his union with her is the most effective method he has for transforming her into the Black woman he feels she should be. So, between the societal conventions of the North and the South, single parenthood and married life, and Christianity and the Islamic faith, Roselily cannot escape racism and sexism. Again I question how free her agwu is, and I am tempted to see her as a victim. Her husband will always stand "in front of her" and his hand will remain "like the clasp of an iron gate" (*Trouble* 8-9). But if some consideration is given to Roselily's limited options, it becomes easier to believe that she is freeing her troubled agwu. Roselily is certainly aware that the life she is entering will not be perfect, and she is mindful that she will not have control of her life in the North; her husband will be dominating and authoritative. But I strongly believe that for Roselily, this is an acceptable price to pay for a "brand-new life." 
In short, the fact that "his love of her makes her completely conscious of how unloved she was before" (*Trouble* 8) allows Roselily to bear the adversarial factors of her life as a Muslim wife. And, although I am concerned about Roselily's future with her new husband, I am more fearful of her hurtful past; because of this I cannot pass judgement on her choices. Winchell says "the life she foresees in Chicago promises to be a nightmare; the marriage veil will merge with the veil she will have to wear as the wife of a Muslim" (30), but I do not see her future life as any more horrific than her past life. I feel strongly that readers of this tale, as with the majority of Walker's short stories in this collection, must redefine the convention of "happily ever after." **Hannah Kemhuff** Of all the stories in *In Love and Trouble*, racism as a restrictive convention is most clearly seen in "The Revenge of Hannah Kemhuff." Petry says Walker's women are "in love and trouble" because of the "roles, relationships, and self-images imposed upon them by a society which knows little and cares less about them as individuals" (13), and such is the case with Hannah Kemhuff. On the other hand, Petry continues by suggesting that the success of Walker's collection of short stories is due to the fact that although the women are downtrodden and often subjects of victimization, an undercurrent of hope in her characters keeps the book upbeat and pleasurable. Perhaps this is why Hannah Kemhuff's story is not the depressing, desolate tale one might expect it to be. Hannah's *agwu* is kept alive by the optimism she has in a rootworker's ability to punish the white woman who set her life on the path of destruction. The story's title alone tells the plot of the tale, with Hannah seeking the assistance of Tante Rosie, a master of the rootworking trade. Rosie, able to "see" Hannah's past by looking in a water-filled tank, quickly comes to the conclusion "that although the woman looked old, she was not" (*Trouble* 61). Tante Rosie also learns that Hannah was once a young, beautiful girl who married while she herself was still a child and, only shortly thereafter, became the mother of four small children. Her husband, however, did not love Hannah, and instead gave his love and attention to other women. It was during the Depression that Hannah's pride had to take a backseat to necessity; the mill where she worked as a cook closed down, her husband was jobless, and there was little money to feed herself and the children. In her initial meeting with the rootworker she explains her desperate situation: We were on the point of starvation. We was so hungry, and the children were getting so weak, that after I had crapped off the last leaves from the collard stalks I couldn't wait for new leaves to grow back. I dug up the collards, roots and all. After we ate that there was nothing else. (Trouble 62) With no food and no money, Hannah dressed herself and her children in warm winter clothes sent to her by her sister, Carrie Mae; she worked for "good white people" in the North and they gave Carrie Mae clothes to send down to Hannah and her family. Once they reached the charity line, Mrs. Kemhuff came face to face with a young, white woman who refused her food because of the way she and her children were dressed. It is this woman, named by Hannah Kemhuff as "the little moppet," who becomes the object of Hannah's sought revenge. 
In "The Case for Revenge," Andrew Oldenquist asserts that "we cannot have a moral community unless its members are personally accountable for what they do" (76). Perhaps it is true that morals have been compromised when Hannah takes on the role of avenger, but how ethical was it for "the little moppet" to give Hannah's meal tickets to someone else (a gambler, at that) simply because the Kemhuffs were not begging in ragged clothes with their heads bowed and bodies stooped? Hannah recalls for Tante Rosie the incident that changed her life forever: I want you to know that that little slip of a woman, all big blue eyes and yellow hair ... took my stamps and then one long look at me and my children and across at my husband all of us dressed to kill I guess she thought - and she took my stamps in her hand and looked at them like they was dirty ... (Trouble 65) In short, why weren't the supervisors of the relief program present to take responsibility for making sure all those in line, dressed in rags or fine clothes, received the government's goods? Oldenquist's point is well taken, but because of racism and prejudices, the moral community itself failed to serve and support its members. Hannah's agwu, then, is troubled by the racist conventions she had to withstand, and perhaps, deep within Hannah's subconscious, the target of her vengeance was not simply "the little moppet" (who became an adult Mrs. Holley), but a racist, prejudiced society as well. After all, she is aware that whites in need of assistance received rations greater than Blacks, and she tells Tante Rosie that she "later heard, by the by, that the white folks in the line got bacon and grits, as well as meal, but that is neither here nor there" (Trouble 63). Or is it? It seems extremely important that Hannah, when telling her story to the rootworker, feels the need to include this observation. In addition, years later, when Tante Rosie's young apprentice (I believe this is Alice Walker herself because of Walker's interest in Zora Hurston's preoccupation with voodoo and rootworking) confronts Mrs. Holley, she notes that "no shaft of remembrance probed the depths of what she had done to colored people more than twenty years ago" (Trouble 76-7). Can the cause of Hannah's troubled spirit be pinpointed? Is it the racism that brought about the young Mrs. Holley's behavior, or should we look to sexism as the cause of her irritated inner agwu? After all, Hannah's husband was free to abandon his family at a time when they needed him the most. Hannah describes for Tante Rosie the woman her husband left her for: I could see my husband over talking to the woman he was going with on the sly. She was dressed like a flysweep! Not only was she raggedy, she was dirty! Filthy dirty, and with her filthy slip showing. She looked so awful she disgusted me. And yet there was my husband hanging over her while I stood in the line holding on to all four of our children. (Trouble 64) I believe Hannah's inner spirit receives a double blow: A racist society prevented her from receiving the help she was entitled to, and a sexist society allowed her husband to forsake his family. As readers of this story, it is easy for us to view Hannah as a victim of societal conventions just as we saw Roselily in this same role. And when Hannah dies, we may be quick to say she took a troubled agwu to her grave with her. It is true that through a series of unfortunate events, she loses her husband, her children, her health, and in the very end, her life. 
Of all the women in In Love and Trouble, Hannah's spirit remains in a troubled state the longest, but it is set free once she is certain Tante Rosie will seek the revenge that she, tired and old, cannot seek herself. Further, when she dies, Hannah's agwu is at peace because she believes Tante Rosie's rootworking skills will be successful: It is enough that I have endured my shame all these years and that my children and my husband were taken from me by one who knew nothing about us. I can survive as long as I need with the bitterness that has laid every day in my soul. But I could die easier if I knew something, after all these years, had been done to the little moppet (my emphasis). (Trouble 67) And something was done to the little moppet; it was a revenge tactic that would, according to Tante Rosie, right the terrible wrong Mrs. Holley rendered to Hannah some twenty years ago. The "potion," consisting of "a mixture of hair and nail parings," "goober dust" and a portion of Mrs. Holley's "water and feces," guaranteed that she would not live more than six months longer than Mrs. Kemhuff (Trouble 89). Prior to Walker's visit to "the little moppet," Mrs. Holley was an extremely social, outgoing individual who could often be found "shopping for antiques, gossiping with colored women, discussing her husband's health and her children's babies, and making spoon bread" (Trouble 73). Her life, in sum, was carefree and nonchalant, and she felt that she controlled her destiny. She had an abundance of confidence in herself and her abilities, and viewed rootworking as a ridiculously superstitious practice. To the apprentice she asserts her disbelief in voodooism:

> I been hearing about Tante Rosie since I was a little bitty child, but everybody always said that rootworking was just a whole lot of n____, I mean colored foolishness. Of course, we don't believe in that kind of thing, do we, Caroline? (Trouble 75)

Caroline, the young Black friend of Mrs. Holley, replies with an emphatic "No," but as the story progresses, we find Mrs. Holley unable to convince herself of the absurdity of Tante Rosie's powers; by the end of the story, she dies not from the voodoo itself, but from her dangerously paranoid demeanor. In "Paranoia and the Structure of Powerlessness," John Mirowsky and Catherine Ross assert that "belief in external control, mistrust, and paranoia form a stairway of deepening alienation" (238), and although Mrs. Holley said she didn't believe in Tante Rosie's powers, she obviously did because she ultimately lost charge of her life. Walker writes of Mrs. Holley's eventual demise:

> She collected stray hairs from her head and comb ... She ate her fingernails. But the most bizarre of all was her response to Mrs. Kemhuff's petition for a specimen of feces and water. Not trusting any longer the earthen secrecy of the water mains, she no longer flushed. (Trouble 79-80)

What did Mrs. Holley do with her excretions and how does her death contribute to Mrs. Kemhuff's freed spirit? Well, "the little moppet" stored her feces and water in bags and other large containers and kept them in the upstairs closet. She died alone, probably from starvation because she refused to eat anything for fear of being poisoned. Mrs. Kemhuff, then, rests easily in her grave, perhaps with a smile on her face because she knew Tante Rosie would "handle" Mrs. Holley. It matters not to Mrs. Kemhuff's agwu if Tante Rosie's rootworking attempt was successful; what is important is that Mrs. Holley suffered just as Hannah herself had. 
Again, the common reader may look upon the pitfalls of Hannah's life with pity, but the sense of freedom and relief felt by Hannah at the end of her life indicates she was released from her irritated personal spirit. In the dedication of "The Revenge," Walker says the story is written "In grateful memory of Zora Neale Hurston," and if one is familiar with Hurston's Mules and Men the similarities in plot and theme cannot be mistaken. Alma Freeman's essay "Zora Neale Hurston and Alice Walker: A Spiritual Kinship" examines the relationship between these two African American writers: Not only do both women stand as exemplary representatives of the achievement of the American Black woman as writer, but their fiction reveals a strong spiritual kinship. Though separated by place and by time, these two Black women writers, inevitably it seems, were drawn together, and Zora Hurston became an important influence in Alice Walker's life. (37) The "spiritual kinship" that Freeman refers to is most evident in "The Revenge" and Hurston's Mules and Men. For example, Walker weaves material on voodoo practices from Hurston's book of folklore, as when Walker as an apprentice quotes a "curse prayer" taught and used by rootworkers like Tante Rosie. It is quite odd that two people who existed in separate lifetimes could have such a strong relationship, but in the case of Walker and Hurston this bond seems to be unmistakably natural. As many of their protagonists are "fighting against both racial and sexual oppression, they choose either a life of continued subservience, anguish, and pain, or they opt to become growing, emergent women who seek to take control of their own lives" (Freeman 38). The latter is the essence of Hannah Kemhuff's story; a woman has overcome a great deal of torment and, in the end, takes control of her own life in the only way she knows how. CHAPTER III. THE RESTRICTION OF BLACK FEMALENESS WITHIN SOUTHERN TRADITION For centuries, American patriarchal society has been detrimental to the spirits of Black women in a variety of ways. In our male-dominated world, women of all races and cultures are, to a large degree, discouraged from setting foot outside of their female-identified arenas. In Engendering the Subject: Gender and Self-Representation in Contemporary Women's Fiction, Sally Robinson maintains that women who attempt to defy the norms of our society experience great turmoil: The fact that women remain subject to normative representations - of Woman, the feminine, the biologically female - reminds us that such representations continue to exert a great deal of pressure on any attempt to represent women as the subjects of feminism, or indeed, as the subjects of any discourse or social practice. (8) In "Roselily" and "The Revenge of Hannah Kemhuff," Walker presents the Black woman who cannot explore her creative self because her foremost concern is simple survival. We do not know Roselily's talents, for example, because she has not the capability of expressing them; her strength is spent on maintaining the existence of herself and her children. Further, we cannot visualize Hannah Kemhuff as a sexual being because when we meet her, we see only the ragged, old woman underneath layers of shawls. But at the same time, Walker writes of women whose spirits are irritated by more than the struggle to attain the necessities of life. In one tale the protagonist attempts to adopt a role typically held by white men: the American writer. 
And yet, my argument remains that these women are not victims, but survivors of troubled agwus.

**Myrna**

In "Really, Doesn't Crime Pay?" Walker addresses the minority woman who must restrict her creative expression to journal-writing, similar to the way Roselily can express her feelings only through her daydream. Myrna, wife to Ruel and lover to Mordecai, has tired of having her writing ridiculed by Ruel, who, as an alternative, "brings up having a baby or going shopping, as if these things are the same. Just something to occupy ... time" (Trouble 15). Because her husband rejects her writing, "Myrna is open, both sexually and artistically, to Mordecai, an artiste" (Christian, "Contrary" 37). But this relationship is just as detrimental as the one with her husband, who insists that she represent the perfect Southern belle; instead, Mordecai steals the stories she willingly shares with him. Because of her relationship with these two men, Myrna, a beautiful, talented young woman, is subjected to abuse in two different forms, both detrimental to her inner self. When we are introduced to Myrna, in September of 1961, we learn about her life through a series of journal entries. In these, she is able to record everything from her present circumstances to her planned escape from them, not to be executed until she is ready. From the first entry alone a great deal is learned of Myrna's anger at the role she is portraying; she is sarcastic when referring to her "Helena Rubenstein hands" that are "sweet-smelling, small, and soft..." (Trouble 10). Most may wonder why Myrna complains. After all, unlike Hannah Kemhuff's hands that are calloused by life's cruelties, Myrna's have the luxury of doing nothing at all. Unfortunately, this is the very thing that troubles an agwu that functions differently from Myrna's flower-scented physical self. "'I have a surprise for you,' Ruel said, the first time he brought me here. And you know how sick he makes me now when he grins" (Trouble 11). This is an excerpt from the notebook Myrna allows Mordecai to read; immediately, we learn of Myrna's dislike for her husband who insists she enjoy what he wants her to enjoy, such as a beautiful house with new furniture, frequent trips to the shopping mall, and designer creams and perfumes. And still, what is most troubling about Ruel is his inability to accept what Myrna herself wants; his response to her desire to write is simply, "No wife of mine is going to embarrass me with a lot of foolish, vulgar stuff" (Trouble 15). In The Same River Twice: Honoring the Difficult, Alice Walker says she is disheartened by men's inability to appreciate women's writing: It was painful to realize that many men rarely consider what women write, or bother to listen to what women are saying about how we feel. How we perceive life. How we think things should be. That they cannot honor our struggles or our pain. That they see our stories as meaningless to them, or assume they are absent from them, or distorted. Or think they must own or control our expressions. And us. (39) Moreover, once Myrna's true passion in life, as well as her husband's reaction and response to it, has been established, a portion of her troubled agwu becomes unmistakably clear. Further, both of the above factors lead her to her lover, Mordecai. Mordecai encourages Myrna's writing, and this, coupled with her need to find refuge from Ruel's expectations of her as his wife, causes Myrna to willingly expose her creative ideas (an extension of her self). 
But Myrna is not a naive woman; she perceives the genuine nature of both her husband and Mordecai. Of the latter she says: I think Mordecai Rich has about as much heart as a dirt-eating toad. Even when he makes me laugh I know that nobody ought to look on other people's confusion with that cold an eye. (Trouble 14) Nevertheless, she welcomes Mordecai into her life and, thinking she has nothing to lose from sharing her work with him, allows him to read a character sketch about a woman who kills herself because, after being crippled, she can no longer satisfy her husband sexually. Myrna shares more than her writing. She says, "Under Mordecai's fingers my body opened like a flower and carefully bloomed" (Trouble 17); in addition, it comes without surprise that she is totally vulnerable after he compares her to Zora Neale Hurston, Walker's (and Myrna's) major literary influence. After Myrna shares her story idea with her lover, she says she is "nearly strangled" by her fear, which escalates when her visits with Mordecai come to a halt. All of this, in conjunction with her unhappy marriage, drives Myrna near the edge, but while she is sitting in a fertility clinic waiting to see why she and Ruel cannot conceive a child, she sees a story in a magazine that eradicates any sanity she had. She writes in her journal: Today at the doctor's office the magazine I was reading fell open at a story about a one-legged woman. They had a picture of her ... not black and heavy like she was in the story I had in mind. But it is still my story, filled out and switched about as things are. The author is said to be Mordecai Rich. (Trouble 21) Four days later, after washing "the prints of his (Ruel's) hands off (her) body," Myrna tries to kill her husband by slicing his head off with a chain saw. Fortunately, or perhaps (for Myrna) unfortunately, "this failed because of the noise. Ruel woke up right in the nick of time" (Trouble 21). This latter event sends Myrna to an asylum. In the last two entries, dated three years after she attempted to decapitate Ruel, Myrna is back in her home with her husband, new clothes, and her carefully manicured hands that write only in her journals. How, then, has Myrna's troubled agwu been freed? It appears she has made little progress in this story, for her writing has been stolen and she is still under the careful eye of Ruel. In short, for Myrna, her Blackness and her femaleness are not fulfilled because she cannot write freely; that is, she is prohibited from using her experiences as a Black woman to write about that which she deems important. However, the ill behavior of Ruel and Mordecai teaches Myrna two valuable lessons that allow readers to be optimistic about the story's conclusion and Myrna's future. First, although Mordecai capitalizes on the "story about a one-legged woman" (Trouble 21), Myrna still receives the encouragement she needs to feel her writing is worthwhile. For once, she is complimented for traits beyond the physical: Mordecai praised me for my intelligence, my sensitivity, the depth of the work he had seen - and naturally I showed him everything I had ... Already I see myself as he sees me. A famous authoress, miles from Ruel, miles away from anybody. I am dressed in dungarees, my hands are a mess. I smell of sweat. I glow with happiness. (Trouble 18) Although this vision does not immediately become Myrna's reality, it materializes in the form of a tangible possibility. 
Mordecai's presence would be eliminated, but the substitution of sweat for Helena Rubenstein hands is enough to pacify Myrna's irritated spirit. Second, Myrna has learned a valuable lesson that makes her triumphant at the story's conclusion: She realizes she has the capability to transform her feminine silence into a powerful, internal force. During her internment at the mental hospital, Myrna realizes the one thing that converts her from victim to survivor. That is, she learns how to manipulate the spoken word even though, once back in Ruel's home and freshly manicured, her written words are discouraged. Christian explains: Like countless Southern belles, she has found that directness based on self-autonomy is ineffectual and that successful strategies must be covert. Such strategies demand patience, self-abnegation, falsehood. ("Contrary") By realizing what she can't do (outwardly express her anger at her husband) to escape her unhappy situation, she places herself in a powerful position which will allow her to leave Ruel once he has "tired of the sweet, sweet smell" of her body (Trouble 23). The power she gains derives from her ability to control words, making yes mean a firm no. Specifically, Myrna decides to become (superficially) the wife Ruel wants, but this is merely part of her plan to eventually release her inner self: I wait, beautiful and perfect in every limb, cooking supper as if my life depended on it. Lying unresisting on his bed like a drowned body washed to shore. But he is not happy. For he knows that I intend to do nothing but say yes until he is completely exhausted. (Trouble 23) So, in the end, Myrna's agwu appears, similar to that of Roselily and Hannah Kemhuff, to be unhappy because of societal conventions and expectations that suppress her femaleness (this, for her, is partly identified by her writing), but in fact she has taken control of her situation; this pleases Myrna and in turn makes her a patient, strong-willed woman. Winchell says, "Seldom in In Love and Trouble do we see Walker's women fighting back successfully against preconceived, stultifying, and restrictive notions of women's roles" (31). However, I disagree because Winchell ignores the spiritual triumphs of these women; just as Roselily and Hannah had to find their own way of expressing their troubled spirits, Myrna must do the same: extreme circumstances call for bizarre reactions. The role the South itself plays in suppressing Myrna's agwu can be seen most clearly by looking at the regional differences of the men in her life. Ruel, a Southern man who "has never left Hancock County, except once, when he gallantly went off to war" (Trouble 12), disapproves of Myrna's writing, and wants her to live as other (white?) Southern belles do. As the man, he wants to show that his long hours in the peanut field pay off, and his dainty, beautiful wife is a symbol of his success. But simultaneously, Walker chooses to make Mordecai a Northern man who "never saw a wooden house with a toilet in the yard" (Trouble 14), and also a man who accepts (and steals) her writing unconditionally. There is also a difference in the way these men accept Myrna's physical self. She says, "He [Ruel] married me because although my skin is brown he thinks I look like a Frenchwoman. Sometimes he tells me I look Oriental: Korean or Japanese" (Trouble 13). 
Although Ruel wants a Southern, Black woman, he can only accept Myrna’s Blackness if she is physically unlike other Southern Black women; Myrna’s Black femaleness is restricted by her own identity as a Southern, Black woman. Mordecai, on the other hand, cherishes her "heavy, sexy hair" and accepts her as the beautiful, Black woman she is. Of course in the end, her acceptance or unacceptance by either man becomes irrelevant because she has accepted herself and her ability to take control of her life. **Child** In all the stories I have looked at thus far, the protagonists’ agwus are irritated by both internal and external factors. That is, Roselily, Hannah Kemhuff, and Myrna are not at peace with their inner selves; they cannot look within for the support they are not receiving from their husbands, societal conventions, and other outward factors. However, in the most troublesome story in *In Love and Trouble*, we meet a young woman whose spirit is threatened externally, but who is otherwise at peace. And, unlike Myrna's femaleness that restricts her from expressing her love for the written word, the protagonist in "The Child Who Favored Daughter" is literally detached from her femaleness by her crazed, jealous father. We know her only as Child, daughter of a man who destroys the lives of three women including his wife, who committed suicide "while she was still young enough and strong enough to escape him," a sister, who was found "impaled on one of the steel-spike fence posts near the house," and Child herself, who dies at the brutal hands of her father (Trouble 39-40). Even so, when we meet Child we immediately know that she is comfortable with her existence, including her sexuality, for she is in love with a Southern white man, a forbidden love. Child sees her father sitting "tensely in the chair" as she walks the path leading from her school bus to her front porch, and she also knows he has found the letter she wrote to the unnamed white lover. Although the day is hot, the air is dry, and she sees the anger on her father's face, Child appreciates the warmth of the sun's rays and is not rushed by her father's impatience; instead, she gazes "intently at a small wild patch of black-eyed Susans and a few stray buttercups. Her fingers caress lightly the frail petals and she stands a moment wondering" (Trouble 36). At the same time, and in direct contrast to his daughter, the father's thoughts are impure: She is near enough for him to see clearly the casual slope of her arm that holds the schoolbooks against her hip. The long dark hair curls in bits about her ears and runs in corded plainness down her back. Soon he will be able to see her eyes, perfect black-eyed Susans. Flashing back fragmented bits of himself. Reflecting his mind. (Trouble 38) The thoughts in her father's mind send him back to his own youth when he had a sister, "tawny, wild, and sweet," and known only as "Daughter." Similar to the incestuous thoughts he has for his own daughter, he remembers Daughter's life and death, which was caused by the punishment she received after her family discovered she had an affair with a white man. His sister, who had "chosen to give her love to the very man in whose cruel, hot and lonely fields he, her brother, worked" (Trouble 38), flirted with her brother by batting her eyelashes and stroking his cheek. Walker says the hurt he felt throughout his life "poisoned" him, making him blind to the beauty of love and "weary of living as though all the world were out to trick him" (Trouble 40). 
Some critics say it is this "poison," coupled with his rage and insanity, that drives him to his vicious act of cruelty. Even when she is face to face with her father, Child, refusing to let his fierce gaze and his shotgun disrupt her tranquil spirit, "sways back against the porch post, looking at him and from time to time looking over his head at the brilliant afternoon sky" (Trouble 41). He leads her to the shed, where he beats her and leaves her wet, bloody and alone. The next morning, after looking at old photographs of his dead sister Daughter, he returns to the shed and finds her "dark eyes reflecting the sky through the open door" (Trouble 44). He begs her to deny the letter he has found and never to see her white lover again, but she refuses. It is here where she is, I believe, aware of his sexual thoughts toward her, and it is also here where she speaks the only words she needs to say: "No ... Going" (Trouble 44). I agree that "what the daughter sees in his eyes is more terrifying than the darkness in the shed where she waited alone overnight. What she sees is his desire for her" (Winchell 37), and I would add that this same desire is the only circumstance that could totally disrupt Child's otherwise peaceful agwu. This incest-driven type of story is familiar to Alice Walker, and "The Child Who Favored Daughter" is not her only tale that exposes such a sensitive topic. For example, *The Color Purple* (1982) "drew a plethora of emotional responses from black communities as well as from black academics" (Harris 903). Many questioned how Blacks could rise as a nation when Walker, as well as others, exposed disaster within this same community. But Walker, willing to expose the pain and celebrate the triumphs of Black people, especially women, feels that only injury can come from silence. Of *The Color Purple* she says, "I have been glad to see how the issues of incest and domestic violence were opened up by the book" (*River* 41). At any rate, when Child continues to maintain her position, her father restricts her femaleness in the most horrendous way: She gazes up at him over her bruises and he sees her blouse, wet and slippery from the rain, has slipped completely off her shoulders and her high young breasts are bare. He gathers their fullness in his fingers and begins a slow twisting. The barking of the dogs creates a frenzy in his ears and he is suddenly burning with unnamable desire. In his agony he draws the girl away from him as one pulling off his own arm and with quick slashes of his knife leaves two bleeding craters the size of grapefruits on her bare bronze chest and flings what he finds in his hands to the yelping dogs. (*Trouble* 45) At this point in the story, Child is dead and her blossoming spirit has been stopped in its tracks, forever locked in the murderous hands of her father. Or has it? In "Tiptoeing through Taboo: Incest in 'The Child Who Favored Daughter'," Trudier Harris explores the effect incest has on Child, and she makes some interesting comments about Child's father and the way in which his troubles destroy his daughter's life: He tries to free himself from what he cannot name, what he cannot express, and once again destroys a woman in his life. Daughter, his wife, and his daughter are dead because he cannot face the image of the nonbrotherly love he wanted to bestow upon his sister. (502) This may be true, but I can render little sympathy for Child's father, for in the end he has his life (however pathetic), whereas Child's life has been literally cut short. 
"By killing his daughter, he has at once shut out the image of Daughter which haunts him, he has murdered his own incest, and he has eliminated the last woman who has the power to haunt him" (Washington 94). He has, in a sense, freed his own troubled agwu, but at the close of the story Walker suggests he has grown closer to the insanity that lurks inside his twisted mind, for "if he stirs he might take up the heavy empty shotgun and rock it back and forth on his knees, like a baby" (Trouble 46). Again I return to Child's agwu, and discover a hint of irony in the story that demonstrates my belief that Child's spirit, as inconceivable as it may seem, is freed by the close of this tale. Although, as critic Mary Helen Washington asserts, Child's father has supposedly freed himself from the haunting images and memories of the three women in his life, these same images may very well reappear if he moves in any way. That is, the summer wasps and the red dust of the South pose a threat to his continued existence. However, Child, unable to ever again enjoy the beauty of life, no longer has to use her inner self to battle the wrongs of the outside world, specifically her father. Parker-Smith says Walker's "modern women accept every challenge necessary to protect their mental and physical selves" (486). If we accept this assumption, then it can be said that Child's peace of mind cannot be attained until she is removed from her external atmosphere. In short, her agwu cannot be freed until she escapes, by any means necessary, the negativity surrounding her; death is her ultimate means of escape. Chapter IV. Revolution as an Obstacle to Unsuppressed AGWU I have not labeled myself yet. I would like to call myself revolutionary, for I am always changing, and growing, it is hoped for the good of more black people. I do call myself black when it seems necessary to call myself anything, especially since I believe one's work rather than one's appearance adequately labels one. -Alice Walker Alice Walker's above definition of "revolutionary" fosters betterment and improvement regarding the societal conditions of Black people, but at the same time, she is committed to investigating the effect revolution has on Black women. "Her Sweet Jerome" and "Everyday Use" are stories in In Love and Trouble that examine the notion of change and advancement, but she is wary of the repercussions the inner spirit of the Black woman must endure in exchange for the progression of Black people as a whole. Evidence of this is present in her novels, essays, and poems as well as the short stories mentioned above, particularly in Meridian, Alice Walker's second novel published three years after In Love and Trouble. This novel "chronicles the sexual and racial politics of the civil rights movement" ("Alice Walker" 903); the title character jeopardizes her own physical and emotional well-being, breathing the air from her frail body into the lungs of the Black revolution. The love Meridian has for "the cause" is greater than that for herself, but what does she gain from her commitment to Black people and Black power? As the novel journeys through Meridian's failed marriage, abandoned baby, and relationship with a Jewish woman, Walker portrays a woman whose agwu is in a continuous state of turmoil. 
In "Meridian: Alice Walker's Critique of Revolution," Karen Stein asserts that Walker uses this novel to show that "the Movement failed to acknowledge women's selfhood and thus perpetuated the counterrevolutionary values of a destructive society" (129). I agree, but when looking at the protagonists in In Love and Trouble, I refuse to position these Black women as victims of various societal conventions such as sexism, racism, and in the case of Meridian, a human liberation movement that leaves her looking "like death eating a soda cracker" (Meridian 25).

**Mrs. Jerome Franklin Washington, III**

Similar to Walker's Meridian, the inner self of the protagonist in "Her Sweet Jerome" suffers at the hands of the Black revolution; however, Mrs. Jerome isn't a civil rights activist; in fact, she is totally unaware of the movement and her husband's participation in it. She suffers because of her naïveté, and her death at the close of the story serves as her only means of escape. In realizing and accepting this, we can understand that her troubled agwu is eternally released. Like Child in "The Child Who Favored Daughter," the protagonist in this tale is known only by her relationship to someone else. But, unlike Child's, Mrs. Jerome's agwu is irritated by a myriad of internal and external factors, and this makes it very difficult to perceive her death as beneficial to her inner spirit. In other words, Child's inner self is already tranquil and calm, but Mrs. Jerome's is chaotic and unsettled. And yet, I continue to believe that as readers we must approach this story, as with most literary works that do not rest comfortably in the American canon, with adjusted expectations. Although "scholars of women's studies have accepted the work and lives of black women as their subject matter in a manner unprecedented in the American academy" (Gates 92), we have still been trained, by a male-dominated discourse, to anticipate certain actions (and reactions) of these same Black women to correspond with those of their white, female counterparts. Appreciating the plight of Mrs. Jerome requires that we approach her story in an unconventional fashion. Mrs. Jerome Franklin Washington, III is truly in love and trouble because of an unhealthy relationship with a man ten years younger than herself. Critic Mary Helen Washington labels her as a "suspended" woman because she cannot attain happiness until she removes herself from a "marriage that destroys her little by little" (92). When we first meet Mrs. Jerome she is rummaging through her husband's clothes, looking for some clue that will explain why he has further distanced himself from her. Financially, she does not need his support, and was "proud to say that she could make her own way ... she was fond of telling schoolteachers (women schoolteachers) that she didn't miss her 'eddicashion' as much as some did who had no learning and no money together" (Trouble 26). Nevertheless, her attention is diverted when she meets Jerome, a cute young schoolteacher, "dapper, every inch of a gentleman" (Trouble 26), but unfortunately, attracted to her only because of her father's money. Baffled by their relationship, the townspeople, especially those sitting in Mrs. Jerome's beauty salon, assume Jerome is "'sticking his finger into somebody else's pie'" (Trouble 28). When she hears this, she lets her otherwise bulky figure turn flabby as she tramps around town looking for the reason her husband doesn't want to be intimate with her. 
"She turned the whole town upside down, looking at white girls, black women, brown beauties, ugly hags of all shades" (Trouble 29). To her total surprise, she discovers that the object of Jerome's affections is a stack of paperback books on Black power and revolution, and "with a sob she realized she didn't even know what the word 'revolution' meant, unless it meant to go round and round, the way her head was going" (Trouble 34). Walker intentionally alters the connotation of revolution to illustrate Mrs. Jerome's non-progressive state of being; confused and frustrated, Mrs. Jerome burns her husband's books, ultimately setting fire to herself. Because of an "inherent weakness," she is unable to remove herself from the "denigrating and immoral situation in any other way" (Parker-Smith 486). What is it that shatters Mrs. Jerome's agwu? Specifically, why does suicide become her sole option once she learns of her husband's true love? In order to answer these questions, some attention must be given to her oppressive surroundings, including her abusive husband. The effects of a racist society are not explored as deeply in "Her Sweet Jerome" as in other stories in In Love and Trouble, but Walker clearly attempts to expose sexism and its repercussions by painting for her reader a picture of Jerome that is different from the angelic one seen by his wife. Although he was "studiously quiet," he made a habit of "beating her black and blue," which she continuously denied. But this physical abuse, harmful to her exterior, wasn't as distressing as Jerome's belittling gestures and comments: She could not open her mouth without him wincing and pretending he couldn't stand it... Other times, when he didn't bother to look up from his books and only muttered curses if she tried to kiss him good-bye, she did not know whether to laugh or cry. (Trouble 26-27) When Walker reveals this, we realize the extent to which Mrs. Jerome's agwu is troubled, and we also understand more thoroughly why she is adamantly determined to find the "woman" her husband spends so much time with: She cannot accept that she is the problem, and by spending all her energy looking for an external excuse, she doesn't have to acknowledge her own shortcomings. For Mrs. Jerome, the truth is as painful as her death: Whether Jerome's passion burns for another woman or for the Black revolution itself, he doesn't want her. When she burns their marriage bed and "the bits of words transformed themselves into luscious figures of smoke" (*Trouble* 34), she is expressing the sexuality that Jerome detested. It is extremely difficult to free Mrs. Jerome's *agwu*. After all, unlike Child, she ends her own life, for once the flames and smoke in the bedroom have become unbearable, she backed "enraged and trembling into a darkened corner of the room, not near the open door [my emphasis]" (*Trouble* 34). The same revolution that praised Jerome as a "scholar" and an "intellectual" ridiculed and alienated Mrs. Jerome. In "Afrocentrism and Male-Female Relations in Church and Society," Delores S. Williams asserts that Afrocentrism and Black power do not serve the needs of Black women. She explains: So what specifically is this Afrocentricism besides woman-exclusive? According to its main proponent, [Molefi Kete] Asante, Afrocentrism is a spiritual and philosophical ideology (a state of living, thinking, and knowing) that places African American history, culture and African heritage at the center of black people's lives ... 
[but] Women are invisible in Afrocentrism until Asante begins to define the nature of male-female relationships within Afrocentric thinking. (46-7) I agree, and it is dispiriting that Mrs. Jerome's marriage exists in name only; if the revolution and Afrocentrism were functioning at a level that encouraged the male-female relationships Williams speaks of, both Jerome and the community itself would have viewed the Washingtons' union as more than a cruel joke. And yet, I believe Walker's strategic use of particular words and phrases indicates that Mrs. Jerome's life did not end without some gain on the part of her *agwu*. Careful insight reveals that Mrs. Jerome's death allows her to eliminate the cause of her unhappiness, thus liberating her internal spirit. For instance, it appears as though Mrs. Jerome's suicide only eradicates her existence, but as she screams "I kill you! I kill you!" whom or what is she addressing? If we interpret this literally, we can say with certainty that she has destroyed the books that "ignorantly amused" her. But the books represent both her unloving husband and the revolution, and I would argue that although she allows herself to die, she takes with her that which brought her misery and anguish. Her situation is similar to that of Hannah Kemhuff, who dies knowing "the little moppet's" life is also drawing to a close. In short, there is a sense of satisfaction, for both Mrs. Jerome and Hannah, in knowing that their lives did not end in vain. In addition, Mrs. Jerome's final moments disclose that her death, although physically painful, brought her gratification; Walker's manipulation of the concept of agony and ecstasy illustrates my point. Walker says, "... the fire and the words rumbled against her together, overwhelming her with pain and enlightenment" (*Trouble* 34). Mrs. Jerome discovered she had ultimately found a means of expressing her anger, and her endless attempts to find Jerome's "other woman" had proven successful. Her life was the only possession she had to render, a meager price to pay for the freedom of her agwu. In "The Civil Rights Movement: What Good Was It?" Walker comments on the Black revolution and its ability (or inability) to advance Blacks' position through education and knowledge. She says, "Man only truly lives by knowing; otherwise he simply performs ... accepting someone else's superiority and his own misery" (121-22). I feel that Mrs. Jerome doesn't truly live until her naïveté has been eradicated. Simultaneously, her agwu, fed by truth, acceptance and self-worth, is not freed until she dies. Parker-Smith says that for Walker's characters "death is never lamented. There is no jumping up and shouting and falling out after her deaths. Rather, one feels a calmness, a hush prevailing [my emphasis]" (489). Although it is indeed a challenge, this "calmness" noticed by Parker-Smith can be found once Hannah Kemhuff, Child and Mrs. Jerome's physical beings have ceased. All one has to do is tune in to the "hush prevailing" of their (Walker's protagonists) liberated internal spirits, and one, anyone, can hear the resonant sound of silence.

Lost my voice? Of course. You said "Poems of love and flowers are a luxury the Revolution cannot afford." Here are the warm and juicy vocal cords, slithery, from my throat. Allow me to press them upon your fingers, as you have pressed that bloody voice of yours in places it could not know to speak, nor how to trust.
-Alice Walker, "Lost My Voice? Of Course." 
Mama, Dee and Maggie

"Everyday Use" is probably Alice Walker's most widely anthologized short story, and much has been written about the tale and its relevance to family, cultural identity and the art of quilting as an African American pastime. However, in continuing with my theme I want to examine the personal spirits of the three women in the story, including Mama, Dee and Maggie. When the story opens, Mama and her daughter Maggie have just cleaned the front yard of their Southern home. Maggie is "homely and ashamed of the burn scars down her arms and legs," Mama is "a big-boned woman with rough, man-working hands" who watches protectively over Maggie, and they are both waiting for Dee, who is "lighter than Maggie, with nicer hair and a fuller figure" (Trouble 47-9), to return to her country home for a visit. She has been away at a school in Augusta, and during previous trips home Mama says she "washed us in a river of make-believe, burned us with a lot of knowledge we didn't necessarily need to know" (Trouble 50). When Dee arrives, she is with "a short, stocky man" with hair "a foot long and hanging from his chin like a kinky mule tail" (Trouble 52). However, it is her appearance, extravagant, bright and Afrocentric, that grasps the attention of Mama and Maggie. Everything about her defines her presence as culturally hip, including the first words out of her mouth: "Wa-su-zo-Tean-o!" As readers, to some extent we accept Dee's cultural awareness as beneficial, and we are not wary when she snaps numerous pictures of her family and her meager house. For a moment, Walker even tricks us into thinking Dee is proud of her home, and we are not terribly bothered when she demands to be called by her Afrocentric name, "Wangero Lee-wanika Kemanjo," proclaiming she has abandoned her slave name. Christian says, "she has returned to her black roots because now they are fashionable ... Ironically, in keeping with the times, Dee has changed her name to Wangero, denying the existence of her namesake, even as she covets the quilts ... [she] made" ("Wayward" 86). The quilts Christian refers to become the element that reveals Dee's true intention, which is to lay "claim to various homemade items faddishly valued as decorations" (Richards 447). These quilts, made by Dee and Maggie's grandmother and great-grandmother, are cherished and appreciated by Maggie, who has learned the skill itself and treasures the cultural significance of each piece of the quilts. Mama says of these quilts:

After dinner Dee (Wangero) went to the trunk at the foot of my bed and started rifling through it. Maggie hung back in the kitchen over the dishpan. Out came Wangero with two quilts. They had been pieced by Grandma Dee and then Big Dee and me had hung them on the quilt frames on the front porch and quilted them. One was in Lone Star pattern. The other was Walk Around the Mountain. In both of them were scraps of dresses Grandma Dee had worn fifty and more years ago. (Trouble 56)

Up to this point in this tale, Mama, Dee and Maggie's spiritual states are relatively obvious. For example, we know Mama is a strong maternal figure, accepting of the faults and attributes of both her daughters. And we know Maggie is timid and lacks assertiveness, yet is tied closely to her family background. (Ironically, it is her unappreciative sister who receives the family's name.) Dee, we believe, is sound and comfortable with her agwu, one fed by education and experiences beyond a dirt-covered front yard. 
But, when Mama denies Dee the quilts and gives them to Maggie (who, culturally, has more of a right to them than Dee), Wangero becomes angry and yells that, "Maggie can't appreciate these quilts! ... She'd probably be backward enough to put them to everyday use" (Trouble 57). Immediately we know that Dee's inner spirit is off-balance because she has abandoned community, the root of any culture. On the other hand, "Maggie is not aware of the word heritage. But she loves her grandma and cherishes her memory in the quilts she made. Maggie has accepted the spirit that was passed on to her" (Christian, "Wayward" 87). Essentially, I believe the above is the difference between Dee and Maggie, for although Dee looks, in appearance, like one in touch with the true meaning of culture and tradition, she is more in tune with how others will react to the antique quilt she plans to hang on her wall. In short, Dee doesn't realize that by redefining her family's inventions as mere cultural artifacts, she is forfeiting her ability to truly connect with her heritage. In addition, Maggie, in contrast to Dee, is dressed plainly and has a simple name, but her ability to connect spiritually with her ancestry demonstrates that because she is timid, soft-spoken, and stays close to her family's history, she is more in tune with her inner spirit, her agwu, than Dee could ever be. When Maggie tells Mama that Dee can have the quilts because she "can 'member Grandma Dee without the quilts,'" Maggie confirms for us, as readers, that she draws strength from her sense of community and family and doesn't need the idealized concept of Afrocentrism to define or reinforce her Black-ness. Further, from the reactions of Dee and Maggie, I believe Walker is again questioning the notion that change is equivalent to advancement and betterment. Instead, she is asserting that culture can be found within one's self, and is easily lost when one, such as Dee, is blind to her true identity. Christian adds, "Walker challenged the idealistic view of Africa as an image, a beautiful artifact to be used by Afro-Americans in their pursuit of racial pride" ("Wayward" 83). Pride in one's race, I believe, cannot be demonstrated by physical elements, and Maggie, unlike Dee, realizes this as truth. In From Civil Rights to Black Liberation, William W. Sales, Jr., adds that the Civil Rights Movement's failure to nourish the relationship between self and community was one of its major flaws:

While the opportunity to participate in nonviolent direct action was an important part of the process of psychologically redeeming southern Blacks, the Civil Rights Movement generated no specific demands relevant to the protection and enhancement of the cultural identity of the African American. (45)

If Sales' assertion is plausible, then it is logical that Maggie and Mama have identified, to an extent surpassing Dee, with their culture and sense of family and community. And finally, the condition of Mama's spirit is healthy and solid because of her self-acceptance. For instance, sometimes she daydreams she is on a television show where she and Dee are reunited, but she knows her reality consists of her daughter constantly wishing she was "a hundred pounds lighter" with "skin like an uncooked barley pancake" (Trouble 48). Still, Mama's agwu is further developed when she takes a stand against Dee, giving the quilts to her plain, quiet Maggie. In "Patches: Quilts and Community in Alice Walker's 'Everyday Use'," Houston Baker, Jr. 
and Charlotte Pierce-Baker contend that "Maggie is the arisen goddess of Walker's story; she is the sacred figure who bears the scarifications of experience and knows how to convert patches into robustly patterned and beautifully quilted wholes" (162). I believe that Mama, not blinded by the unnecessaries of life, recognizes this wonderful characteristic of her daughter, and acts out of a spiritual love when she "snatched the quilts out of Miss Wangero's hands and dumped them into Maggie's lap" (Trouble 58). Walker summarizes the peaceful agwus of Maggie and Mama at the close of the story, just as Dee, irritated by both of them, returns to a heritage that has little to do with the true meaning of the word. Walker writes:

Maggie smiled ... a real smile, not scared. After we watched the car dust settle I asked Maggie to bring me a dip of snuff. And then the two of us sat there just enjoying, until it was time to go in the house and go to bed. (Trouble 59)

Walker's concept of revolution, that of change and growth, manifests in the final glimpse we have of the lives of Maggie and Mama. They have both undergone a type of transformation: Maggie has learned that her sense of self gives her the strength to face the world with bravery, and Mama realizes how precious her Maggie, scarred hands and all, truly is. In "Her Sweet Jerome" and "Everyday Use" Walker explores revolution, a controversial issue for many living in the 1960's and early 1970's. But she is ambivalent about the effect "change" has on Black women. In reading her stories, we must question if change is genuinely beneficial. Obviously, illustrated through Wangero, the inner spirits of Black women are sometimes injured as activists rally for "the good of more black people" (Walker, Same River). But if you're Mrs. Jerome, what does the revolution get you? Death? Ridicule? Again, Walker's ambivalence and complex understanding require a patient probing on the part of her readers to seek answers to questions she deems fundamental and critical to the spiritual growth of Black women.

CHAPTER V. FROM VICTIM TO SURVIVOR: APPROACHING AFRICAN AMERICAN LITERATURE WITH ALTERED EXPECTATIONS

Alice Walker's *In Love and Trouble: Stories of Black Women* gives voice to women who might, without the venue provided by Walker, remain silenced and forgotten. Reading this collection of short stories is by no means a comfortable experience. It is difficult to read the trials and tribulations of Walker's protagonists without feeling anger at the racist, sexist societal conventions they must battle. In addition, we are saddened by their troubles, and we wish we could somehow help these women. However, this leads to our most difficult experience: We feel hopeless because we know we cannot help these women out of their predicaments, nor can we pretend their circumstances are different or nonexistent. As a Black woman, I feel a connection to Walker's women that is eerie in nature; yet, at the same time, I believe this sensation allows me to be optimistic about their fates. Perhaps this comes from my own experiences, where the end results of various circumstances may have appeared to be disastrous to others, but were the only way out for myself. In short, the choices we all make in life depend heavily on our personal situations; no one has an obligation (or right) to place judgment on the decisions we make in life. 
Because "the women in this volume truly are 'in love and trouble' due in large measure to the roles, relationships, and self-images imposed upon them by a society which knows little and cares less about them as individuals" (Petry 13), this same society should avoid labeling their actions or placing them in the same victimized roles Black women have struggled to escape for centuries. Walker's women, as difficult as it may be to believe, do what is necessary to free their inner spirits, and sometimes that means the destruction of their physical selves. This is difficult to accept, but as Petry concurs, *In Love and Trouble was necessary if Walker was to follow it with her second collection of short stories, *You Can't Keep A Good Woman Down* (1981). The difference between the two is easy to ascertain from the titles alone. In *Good Woman*, Walker portrays the Black woman who is, to a large extent, victorious over her male counterpart. In fact, it may be said that the roles are reversed. That is, in *In Love and Trouble*, the Black men are the oppressors, and they keep the good women in these thirteen stories down (and sometimes out). This may very well be the reason why the personal spirits, or *agwus*, of the women in the 1981 collection are less troubled. Nevertheless, when examining the spiritual health of Walker’s protagonists, it is imperative that we examine closely their environments as well as the obstacles they must battle. In doing this, we can better understand their *agwus*, and this, in turn, allows us to appreciate the various ways these same *agwus* seek freedom. Through my analysis I have concluded that based on the circumstances of Roselily, Hannah Kemhuff, Myrna, Child, Mrs. Jerome, Dee, Maggie and Mama, each of them does what is necessary to attain the spiritual peace we are all, in our individual ways, constantly striving for. I have analyzed only six of the thirteen stories in *Trouble*, but I would encourage readers of the remaining seven tales to approach their protagonists in the same manner as myself. For instance, when reading "Strong Horse Tea," a story about a young mother who believes horse urine will cure her ailing son, remember the spirit of Hannah Kemhuff, who died believing a rootworker would avenge her death. Rannie Toomer, the mother in "Strong Horse Tea," believed the urine would serve as the cure. And, even though her son died, *her agwu* was, to some extent, released because she had the courage to take matters into her own hands; Rannie Toomer realized that only she, not the white, racist postman or the town doctor, had enough compassion and determination to do anything necessary to save her baby boy. Readers of the works of Black women writers must remember that just as the stories themselves are culturally, uniquely different, so are the choices the characters have in life equally different from what we have been exposed to in various "canonized" works. BIBLIOGRAPHY
Finding a Vector Orthogonal to Roughly Half a Collection of Vectors

Pierre Charbit\textsuperscript{1}, Emmanuel Jeandel\textsuperscript{2}, Pascal Koiran\textsuperscript{2}, Sylvain Perifel\textsuperscript{2}, and Stéphan Thomassé\textsuperscript{1}

\textsuperscript{1} LAPCS, Université Claude Bernard – Lyon 1, [Pierre.Charbit,Stephan.Thomasse]@univ-lyon1.fr
\textsuperscript{2} LIP, École Normale Supérieure de Lyon, [Emmanuel.Jeandel,Pascal.Koiran,Sylvain.Perifel]@ens-lyon.fr

HAL Id: ensl-00153736, https://hal-ens-lyon.archives-ouvertes.fr/ensl-00153736 (preprint submitted on 11 Jun 2007).

Abstract. Dimitri Grigoriev has shown that for any family of $N$ vectors in the $d$-dimensional linear space $E = (\mathbb{F}_2)^d$, there exists a vector in $E$ which is orthogonal to at least $N/3$ and at most $2N/3$ vectors of the family. We show that the range $[N/3, 2N/3]$ can be replaced by the much smaller range $[N/2 - \sqrt{N}/2, N/2 + \sqrt{N}/2]$ and we give an efficient, deterministic parallel algorithm which finds a vector achieving this bound. The optimality of the bound is also investigated.

Keywords: algebraic complexity, decision trees, parallel algorithms, derandomization.

1 Introduction

Dimitri Grigoriev \cite{Grigoriev} has shown that the point location problem\footnote{It is misleadingly called “range searching problem” in \cite{Koiran} and \cite{Koiran2}.} in arrangements of $m$ algebraic hypersurfaces of degree $D$ in $\mathbb{R}^n$ can be solved by topological decision trees of depth $O(n \log(mD))$. In topological decision trees \cite{Koiran, Koiran2} nodes are labelled by arbitrary polynomials, i.e., the cost of their evaluation is ignored. The key ingredient in his nonconstructive proof is the following combinatorial lemma. Let $\mathbb{F}_2$ be the two-element field. For any family of $N$ vectors in the $d$-dimensional linear space $E = (\mathbb{F}_2)^d$, there exists a vector in $E$ which is orthogonal to at least $N/3$ and at most $2N/3$ vectors of the family. Orthogonality is defined with respect to the $\mathbb{F}_2$-valued “inner product” $u \cdot v = \sum_{i=1}^{d} u_i v_i$ (strictly speaking, this is of course not an “honest” inner product since for instance a vector can be orthogonal to itself). In order to explore the constructive aspects of Grigoriev’s point location theorem it is useful to have a constructive version of this combinatorial lemma. Here one main goal is to obtain new transfer theorems for algebraic versions of the P vs. NP problem. It is well known that the point location problem in arrangements of hyperplanes can be solved efficiently by linear decision trees \cite{Koiran} and \cite{Koiran2}. This was the main technical tool in the proof that the P vs. 
NP problem for the real numbers with addition and order is equivalent to the classical problem \cite{Cook}. As suggested in \cite{Koiran} and \cite{Koiran2}, a better understanding of point location in arrangements of hypersurfaces will make it possible to obtain transfer theorems for a richer model of computation in which multiplication is allowed. Precise statements and proofs can be found in \cite{Koiran}.

The goals of the present paper are to improve Grigoriev’s lemma and to give a constructive version of it. Namely, we show that the range $[N/3, 2N/3]$ can be replaced by the much smaller range $[N/2 - \sqrt{N}/2, N/2 + \sqrt{N}/2]$ and we give an efficient, deterministic parallel algorithm which finds a vector achieving this bound. Our algorithm is logspace uniform NC, i.e., it can be implemented by a family of logspace uniform boolean circuits of polynomial size and polylogarithmic depth.

Organization of the paper. Grigoriev’s lemma is stated in \cite{Grigoriev} and at the beginning of this introduction in the language of linear algebra. There is an equivalent formulation in a purely set-theoretic language. Namely, we are given a set $\mathcal{F}$ of $N$ distinct subsets of a finite set $X$. The goal is to find a subset $F$ of $X$ such that roughly $N/2$ elements of $\mathcal{F}$ have an intersection with $F$ of even cardinality. This set-theoretic point of view is developed in Section 2. In Section 2.1 we give a probabilistic proof of the combinatorial lemma which yields the improved range $[N/2 - \sqrt{N}/2, N/2 + \sqrt{N}/2]$. Moreover, we show that a random subset $F \subseteq X$ will fall in the slightly bigger range $[N/2 - \sqrt{N}, N/2 + \sqrt{N}]$ with probability at least $3/4$, so there is a quite simple randomized algorithm for our problem. We then show that a deterministic algorithm can be obtained by derandomizing the probabilistic proof of the combinatorial lemma. In Section 2.2 we give another (graph-theoretic) proof of the lemma which achieves the same bound as the probabilistic proof. This yields another deterministic sequential algorithm given in Section 2.3. The optimality of the bound is then discussed in Section 2.4. We return to the language of linear algebra in Section 3 to describe our parallel algorithm. Note that this algorithm relies on elementary facts about extensions of finite fields. Field extensions seem to be of an intrinsically algebraic nature, so the linear algebraic point of view seems most appropriate to state and prove the results of that section. It would be interesting to find out whether the probabilistic proof of Section 2.1 can be derandomized to yield not only an efficient sequential algorithm, but also an efficient parallel algorithm (more on this at the end of Section 2.1).

We conclude this introduction with a long quote from [12]: “A natural approach towards derandomizing algorithms is to find a method for searching the associated sample \( \Omega \) for a good point \( w \) with respect to a given input instance \( I \). Given such a point \( w \), the algorithm \( A(I, w) \) is now a deterministic algorithm and it is guaranteed to find a correct solution. The problem faced in searching the sample space is that it is generally exponential in size. The result of Adleman showing that \( RP \subseteq P/poly \) implies that the sample space \( \Omega \) associated with a randomized algorithm always contains a polynomial-sized subspace which has a good point for each possible input instance. 
However, this result is highly non-constructive and it appears that it cannot be used to actually de-randomize algorithms.” Our paper gives an example of a problem for which this “Adlemanian” approach to derandomization is actually feasible. Indeed, our parallel algorithm constructs a polynomial-size list of “candidate vectors” which for any set of \( N \) input vectors is guaranteed to contain a vector orthogonal to roughly \( N/2 \) input vectors. This list is made up of all vectors in a polynomial-size family of “candidate subspaces” of small (logarithmic) dimension. Once the list is constructed we only have to solve an exhaustive search problem, and this can be done quite easily in parallel.

2 The set theoretic point of view

In this section we study the set theoretic formulation of our problem: \( X \) is a finite set and \( \mathcal{F} \) a set of \( N \) nonempty distinct subsets of \( X \). The goal is to find a subset \( F \) of \( X \) such that the number of elements of \( \mathcal{F} \) which have an odd intersection with \( F \) is as close as possible to \( |\mathcal{F}|/2 \).

2.1 A probabilistic proof

The first natural idea for this problem is to take for \( F \) a random subset of \( X \).

**Theorem 1.** Let \( X \) be a finite set and \( \mathcal{F} \) be a set of \( N \) nonempty subsets of \( X \). There is a subset \( F \subseteq X \) such that
\[ -\frac{\sqrt{N}}{2} \leq |\{i : |F \cap F_i| \text{ even}\}| - \frac{N}{2} \leq \frac{\sqrt{N}}{2}. \] (1)

**Proof.** Call \( F_1, \ldots, F_N \) the elements of \( \mathcal{F} \). We choose a random subset \( F \) of \( X \) obtained by selecting or not every element of \( X \) with probability \( 1/2 \). Let \( Y_i \) be the random variable defined by:
\[ Y_i = 1 \text{ if } |F \cap F_i| \text{ is even, and } Y_i = -1 \text{ otherwise.} \]
Therefore we are interested in the random variable
\[ Y = \sum_{i=1}^{N} Y_i = |\{i : |F \cap F_i| \text{ even}\}| - |\{i : |F \cap F_i| \text{ odd}\}| = 2|\{i : |F \cap F_i| \text{ even}\}| - N. \]
We want to show that there exists an \( F \) for which \( |Y| \leq \sqrt{N} \), i.e. \( Y^2 \leq N \). First, let us prove that \( P(Y_i = 1) = 1/2 \). This follows immediately from the facts that every subset \( F \) occurs with the same probability and that, \( F_i \) being nonempty, there are as many subsets of \( F_i \) of odd cardinality as of even cardinality. Thus \( E(Y_i) = 0 \). Then we prove that the events \( \{ Y_i = 1 \} \) are pairwise independent (it can be shown that these events are not always 3-wise independent). For this let us consider two elements \( F_1 \) and \( F_2 \) of \( \mathcal{F} \). We have to prove that
\[ P(Y_1 = 1 \cap Y_2 = 1) = P(Y_1 = 1)P(Y_2 = 1) = 1/4. \]
There are three cases:
- \( F_1 \) and \( F_2 \) are disjoint. In this case, it is clear that the events are independent.
- \( F_1 \subseteq F_2 \). This case can be reduced to the previous one for \( F_1 \) and \( F_2 \setminus F_1 \), and we still have \( P(Y_1 = 1 \cap Y_2 = 1) = 1/4 \).
- The three sets \( A = F_1 \setminus F_2 \), \( B = F_1 \cap F_2 \) and \( C = F_2 \setminus F_1 \) are nonempty. Then \( Y_1 = 1 \) and \( Y_2 = 1 \) is equivalent to \( |A \cap F| \equiv |B \cap F| \equiv |C \cap F| \mod 2 \). But since these three sets are disjoint, we have a probability 1/8 to be in the case even-even-even and 1/8 to be in the case odd-odd-odd. Hence, in this case too, \( P(Y_1 = 1 \cap Y_2 = 1) = 1/4 \).
Since the events are pairwise independent we have \( E(Y_i Y_j) = E(Y_i)E(Y_j) = 0 \) if \( i \neq j \). Furthermore, \( E(Y_i^2) = 1 \) so by linearity of the expectation we have
\[ E(Y^2) = E\left(\sum_{i=1}^{N} Y_i^2 + \sum_{i \neq j} Y_i Y_j\right) = N. \]
Hence there exists \( F \) for which \( Y^2 \leq N \): this is the desired set. \( \square \)

Remark 1. In the above proof, taking into account the fact that \( Y^2 = N^2 \) for \( F = \emptyset \), we obtain \( E(Y^2|F \neq \emptyset) < N \). Thus there exists a set \( F \) for which the inequality is strict, i.e. \( Y^2 < N \). In other words, there exists a set \( F \) satisfying the stronger inequality:
\[ -\frac{\sqrt{N}}{2} < |\{ F_i \in \mathcal{F} : |F \cap F_i| \text{ even} \}| - \frac{N}{2} < \frac{\sqrt{N}}{2}. \]

Remark 2. The pairwise independence of the \( Y_i \) enables us to evaluate the variance of \( Y \): \( Var(Y) = \sum_{i=1}^{N} Var(Y_i) = N \). By Tchebycheff’s inequality, we have:
\[ P(|Y - E(Y)| > 2\sqrt{N}) = P(|Y| > 2\sqrt{N}) < Var(Y)/(2\sqrt{N})^2 = 1/4. \]
This ensures that at least 3/4 of the subsets \( F \) fall within the range \([N/2 - \sqrt{N}, N/2 + \sqrt{N}]\), and yields a trivial randomized algorithm for finding such a set. The deterministic algorithms of Proposition 1, Section 2.3 and Section 3 achieve however the better range \([N/2 - \sqrt{N}/2, N/2 + \sqrt{N}/2]\) obtained in the theorem.

We now show how to derandomize the proof of Theorem 1 by the method of conditional expectations, in order to obtain a deterministic algorithm. Note that a simpler deterministic algorithm will be presented in Section 2.3.

Proposition 1. The proof of Theorem 1 can be derandomized using the method of conditional expectations. This yields a polynomial-time deterministic algorithm for finding a set having an even intersection with at least \( N/2 - \sqrt{N}/2 \) and at most \( N/2 + \sqrt{N}/2 \) of the \( F_i \)'s.

Proof. Following the proof of Theorem 1, this amounts to finding a set \( F \) for which \( Y^2 \leq N \). We build such a set by enumerating the elements of \( X \) and deciding in turn for each \( x \in X \) whether it must belong to \( F \). Along the way, we keep \( E(Y^2) \) bounded above by \( N \), thus giving a guarantee that the final set \( F \) will have the expected property. At the beginning, we know from the proof of Theorem 1 that \( E(Y^2) \leq N \). At each subsequent step, we have already determined for some elements whether they belong to \( F \): let us call \( C \) this condition (for example, $C \equiv (x_1 \in F) \land (x_2 \not\in F)$). By induction hypothesis we have $E(Y^2|C) \leq N$. The next step is to determine whether an element $x \in X$ is in $F$. We have: $$E(Y^2|C) = \frac{1}{2}\left(E(Y^2|C \land (x \in F)) + E(Y^2|C \land (x \not\in F))\right).$$ Therefore there exists a choice $c$ (either $c \equiv (x \in F)$ or $c \equiv (x \not\in F)$) for which $E(Y^2|C \land c) \leq E(Y^2|C) \leq N$. We then move on to the next step according to this choice: this will ensure that the induction hypothesis is satisfied at the next step. At the end of the algorithm, i.e., when every element $x \in X$ has been considered, the condition determines $F$ completely and the set obtained satisfies $Y^2 \leq N$, hence the inequality of Theorem 1. The only remaining point to settle is how to compute $E(Y^2|C \land (x \in F))$ and $E(Y^2|C \land (x \not\in F))$: we need these values in order to make our choice. 
More generally, we want to be able to compute $E(Y^2|C)$ for an arbitrary condition $C$:
$$C \equiv \bigwedge_{x \in A} (x \in F) \land \bigwedge_{x \in B} (x \not\in F).$$
Let $T_i$ be the random variable defined by
$$T_i = 1 \text{ if } |(F_i \setminus (A \cup B)) \cap F| \text{ is even, and } T_i = -1 \text{ otherwise.}$$
We have $E(Y_i|C) = (-1)^{|F_i \cap A|}E(T_i)$. Note that some sets $F_i \setminus (A \cup B)$ can be equal (even if by assumption the $F_i$'s are different), and can even be empty, thus evaluating $E(Y^2|C)$ amounts to computing the expectation of $Z^2$ where $Z = \sum_i \alpha_i Y_i$ for a set $\{F_1, \ldots, F_k\}$ of (possibly empty) subsets of $X$ together with weights $\alpha_1, \ldots, \alpha_k \in \mathbb{Z}$. As in the proof of Theorem 1, the events $\{Y_i = 1\}$ are pairwise independent. Furthermore, if $F_i = \emptyset$ then of course $Y_i \equiv 1$ and $E(Y_i) = 1$, while $E(Y_i) = 0$ for every nonempty $F_i$. Finally, $E(Y_i^2) = 1$ for any $i$. Thus computing $E(Z^2)$ is easy, because
$$E(Z^2) = E\left(\sum_i \alpha_i^2 Y_i^2 + \sum_{i \neq j} \alpha_i \alpha_j Y_i Y_j\right) = \sum_i \alpha_i^2 E(Y_i^2) + \sum_{i \neq j} \alpha_i \alpha_j E(Y_i)E(Y_j).$$
This implies that in polynomial time one can compute $E(Y^2|C \land (x \in F))$ and $E(Y^2|C \land (x \not\in F))$, and decide whether $x$ should be taken in $F$ or not. The construction of $F$ thus requires $|X|$ steps, each computable in polynomial time: the overall deterministic algorithm finds a set with the expected property in polynomial time. $\square$

As explained in the introduction, the main goal of Section 3 is to obtain a deterministic parallel algorithm for our problem. It would be interesting to obtain such an algorithm from a different derandomization of Theorem 1. The main derandomization method that yields efficient parallel algorithms is the method of bounded independence, as described for instance in Section 15.2 of [3]. At first sight it looks like this method might be applicable since the proof of Theorem 1 is based on the pairwise independence of the random variables $Y_i$. Unfortunately, the method is not applicable directly because $Y_i$ is defined only indirectly through the formula
$$Y_i = 1 \text{ if } |F \cap F_i| \text{ is even, and } Y_i = -1 \text{ otherwise.}$$
One must therefore construct a small sample space not for the $Y_i$ but for the random set $F$. This is achieved in Section 3 through an ad-hoc method.

### 2.2 A graph-theoretic proof

Here we model the problem as a cut problem in a bipartite graph. We want to find a subset $F$ that minimizes the gap between $|\{i : |F \cap F_i| \text{ even}\}|$ and $|\{i : |F \cap F_i| \text{ odd}\}|$. But this means exactly finding $F$ that maximizes the number of pairs $\{F_i, F_j\}$ with $|F \cap F_i| \not\equiv |F \cap F_j| \mod 2$. Indeed, if $t$ denotes $|\{i : |F \cap F_i| \text{ odd}\}| - \frac{N}{2}$, the number of such pairs is exactly $(N/2 - t)(N/2 + t) = N^2/4 - t^2$. The crucial fact is that if $F \subseteq X$ and $F_i, F_j$ are two elements of $\mathcal{F}$:
$$|F \cap F_i| \not\equiv |F \cap F_j| \mod 2 \iff |F \cap (F_i \Delta F_j)| \equiv 1 \mod 2.$$
Thus, finding $F$ that minimizes the gap between $|\{i : |F \cap F_i| \text{ even}\}|$ and $|\{i : |F \cap F_i| \text{ odd}\}|$ is exactly finding $F$ that maximizes $|\{\{i,j\} : |F \cap (F_i \triangle F_j)| \text{ odd}\}|$, as the short sketch below illustrates on a toy example. We consider the following bipartite graph $(V,E)$:
- $V = V_1 \cup V_2$ where $V_1 = \{(i,j) : 1 \leq i < j \leq N\}$, and $V_2 = \mathcal{P}(X)$;
- $\{(i,j), F\} \in E$ iff $|F \cap (F_i \triangle F_j)$| is odd. 
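The following minimal Python sketch (a hypothetical toy family; all data below are made up for illustration and are not taken from the paper) checks the parity equivalence above and the identity relating the number of "odd" pairs to $t$:

```python
from itertools import combinations

# Toy instance: F_sets plays the role of {F_1, ..., F_N}, F is a candidate subset of X = {0, ..., 5}.
F_sets = [frozenset({0}), frozenset({1, 2}), frozenset({0, 3}), frozenset({2, 4, 5})]
F = {0, 2, 5}

N = len(F_sets)
t = sum(1 for Fi in F_sets if len(F & Fi) % 2 == 1) - N / 2

# "Crucial fact": |F ∩ F_i| and |F ∩ F_j| have different parities
# exactly when |F ∩ (F_i Δ F_j)| is odd.
for Fi, Fj in combinations(F_sets, 2):
    assert (len(F & Fi) % 2 != len(F & Fj) % 2) == (len(F & (Fi ^ Fj)) % 2 == 1)

# Degree of F in the bipartite graph equals N^2/4 - t^2.
deg = sum(1 for Fi, Fj in combinations(F_sets, 2) if len(F & (Fi ^ Fj)) % 2 == 1)
assert deg == N * N / 4 - t * t
```

Any $F$ whose degree is at least $|V_1|/2$ therefore satisfies $|t| \leq \sqrt{N}/2$, which is how Corollary 2 below recovers the bound of Theorem 1.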
What we are looking for is a vertex of $V_2$ of maximum degree. Let $N(x)$ denote the set of neighbours of $x$. We will only need to apply the following lemma for $A = V_2$, as provided by Lemma 2. However it turns out that we can characterize in Lemma 1 all the subsets $A \subseteq V_2$ for which the proof still holds (see also Remark 3 in Section 3).

**Lemma 1.** Let $A \subseteq V_2$ be such that $\emptyset \in A$ and $\forall F,F' \in A$, $(F \triangle F') \in A$. Assume moreover that $\forall x \in V_1$, $N(x) \cap A \neq \emptyset$. Then
\[ \forall x \in V_1, \quad |N(x) \cap A| = \frac{|A|}{2}. \]

**Proof.** Let $x \in V_1$. By hypothesis, there exists $F \in A$ such that $(x,F)$ is an edge of the graph. By the closure hypothesis on $A$ the following map is well-defined,
\[ \phi : A \rightarrow A, \quad F' \mapsto (F \triangle F'), \]
and is a bijection of $N(x) \cap A$ onto $A \setminus N(x)$, which proves the result. \hfill \Box

**Lemma 2.** There exists a subset $A \subseteq V_2$ satisfying the hypothesis of Lemma 1.

**Proof.** It suffices to take $A = V_2$. \hfill \Box

**Corollary 1.** There exists $F \in V_2$ such that $|N(F)| \geq \frac{|V_1|}{2}$.

**Proof.** By Lemmas 1 and 2 every $x \in V_1$ has $|N(x)| = \frac{|V_2|}{2}$ neighbours. By double counting, there exists an $F \in V_2$ satisfying the conclusion of the corollary. \hfill \Box

**Corollary 2.** There exists $F \subseteq X$ such that
\[ \left|\, |\{i : |F \cap F_i| \text{ even}\}| - \frac{N}{2} \,\right| \leq \frac{\sqrt{N}}{2}. \]

**Proof.** Take the set $F$ given by Corollary 1 and let $t = |\{i : |F \cap F_i| \text{ odd}\}| - \frac{N}{2}$. The number of pairs $\{F_i, F_j\}$ with $|F \cap F_i| \not\equiv |F \cap F_j| \bmod 2$ is exactly $\frac{N^2}{4} - t^2$, and by hypothesis on $F$:
\[ \frac{N^2}{4} - t^2 \geq \frac{|V_1|}{2} = \frac{N(N-1)}{4} = \frac{N^2 - N}{4}, \]
which implies $|t| \leq \frac{\sqrt{N}}{2}$. \hfill \Box

2.3 **A simple deterministic polynomial time algorithm**

We now present a very simple polynomial algorithm which finds a subset $F$ achieving inequality (1) from Theorem 1. We work from the point of view described in Subsection 2.2: given the subsets $F_i$, we need to find a subset $F$ that has an odd intersection with at least half of the $F_i \triangle F_j$ (taking multiplicities into account). Note that these symmetric differences are all nonempty since the $F_i$ are distinct. The algorithm goes this way.
1. We construct all the sets $F_i \triangle F_j$ and denote by $G$ the multiset obtained.
2. Let $x \in X$. Let $G'$ be the multiset of all elements of $G$ not containing $x$.
- Apply recursively the algorithm to $X \setminus \{x\}$ and $G'$.
- Thus we get a subset $F'$ of $X \setminus \{x\}$ that has an odd intersection with at least half of the elements of $G'$. Now there are two cases:
- $F'$ has an odd intersection with at least half of the elements of $G \setminus G'$. In this case $F = F'$ is a solution to our problem.
- Otherwise, since $x$ belongs to all elements of $G \setminus G'$, taking $F = F' \cup \{x\}$ gives a solution.

2.4 Discussion of the bounds

With the help of Theorem 1, we know that it is possible to reach the expected value within a range of order $\sqrt{N}$. One can wonder whether it is possible to ensure a constant range. The following examples prove that this is impossible. Let us consider a set $X$ with $n = 4k^2 + 1$ elements and $\mathcal{F}$ be the set of all subsets of $X$ of size 2. Let $N = |\mathcal{F}| = n(n-1)/2$. In this context, the problem is to partition $X$ into two parts and count the number of edges through the cut, which are precisely the sets of $\mathcal{F}$ with odd intersection. We want to find $0 \leq a \leq n/2$ such that $a(n-a)$ is as close as possible to $N/2 = k^2(4k^2 + 1)$. 
But:
$$(2k^2 - k)(2k^2 + k + 1) = 4k^4 + k^2 - k$$
and
$$(2k^2 - k + 1)(2k^2 + k) = 4k^4 + k^2 + k.$$
The function $a \mapsto a(n-a)$ being increasing on $[0, n/2]$, this proves that these are the two best values and that the error is at least $k$, which is of the order of $N^{1/4}$. It is possible to refine this argument further. For instance, the consideration of subsets with 3 elements instead of 2 yields the following result.

**Proposition 2.** Let $\mathcal{F}_n$ be the family of subsets of three elements of $\{1, \ldots, n\}$. There exists a constant $c > 0$ such that for infinitely many $n$, for any subset $G$ of $\{1, \ldots, n\}$,
$$\left|\, \left|\left\{F \in \mathcal{F}_n : |F \cap G| \text{ even}\right\}\right| - \frac{|\mathcal{F}_n|}{2} \,\right| \geq c\,|\mathcal{F}_n|^{1/3}.$$

**Proof.** Let $F \subseteq \{1, \ldots, n\}$ be a subset of cardinality $a$. The number of elements of $\mathcal{F}_n$ whose intersection with $F$ is of odd cardinality is then $a \binom{n-a}{2} + \binom{a}{3}$. Therefore, let
$$f(a) = \frac{a(n-a)(n-a-1)}{2} + \frac{a(a-1)(a-2)}{6} - \frac{n(n-1)(n-2)}{12}$$
be the difference with $|\mathcal{F}_n|/2$. We aim at showing that $f$ is far from zero on integer values, when $n$ is well chosen. The zeros of $f$ are $n/2$ and $n/2 \pm \sqrt{3n-2}/2$. From the variations of $f$, we see that the integers $i$ so that $|f(i)|$ is minimal are among the six integers around the zeros. Intuitively, these values should be maximized if the zeros are far from integers (that is, if they are near half-integers). This requires $n$ to be odd and $\sqrt{3n-2}/2$ to be near an integer (i.e. $3n-2 \approx 4k^2$ for some $k$). These considerations lead to the choice $n = 4k^2/3 + 1$ where $k \equiv 0 \mod 3$. The integer $n$ is then odd, so
$$f(\lfloor n/2 \rfloor) = f(n/2 - 1/2) = n/4 - 1/4,$$
$$f(\lceil n/2 \rceil) = f(n/2 + 1/2) = -n/4 + 1/4.$$
Furthermore, if $k \geq 2$ then $\sqrt{3n-2} = \sqrt{4k^2 + 1}$ is at most $1/8$ away from $2k$, so that
$$f(\lfloor n/2 + \sqrt{3n-2}/2 \rfloor) = f(n/2 - 1/2 + k) = -n/2 + O(\sqrt{n}).$$
Similarly, the other three integers around the zeros have $\Omega(n)$ as image. Since the total number $N$ of subsets of three elements among $n$ is $O(n^3)$, the error is at least $\Omega(N^{1/3})$. □

The same kind of calculations for subsets with 5 elements yields an $\Omega(|\mathcal{F}_n|^{2/5})$ lower bound. The best lower bound that we have obtained is $\Omega\left(\sqrt{|\mathcal{F}_n|}/(\log |\mathcal{F}_n|)^{1/4}\right)$. As shown below, this almost optimal lower bound is achieved by taking for $\mathcal{F}_n$ the set of all subsets of size $(n-1)/2$.

**Theorem 2.** Let $\mathcal{F}_n$ be the family of subsets of $(n-1)/2$ elements of $\{1, \ldots, n\}$, where $n$ is an odd integer. There exists a constant $c > 0$ such that for infinitely many $n$, for any subset $G$ of $\{1, \ldots, n\}$,
$$\left|\, \left| \{ F \in \mathcal{F}_n : |F \cap G| \text{ even} \} \right| - \frac{|\mathcal{F}_n|}{2} \,\right| \geq c\, \frac{\sqrt{|\mathcal{F}_n|}}{(\log |\mathcal{F}_n|)^{1/4}}.$$

**Proof.** Recall the definition of the binomial coefficient: for $x \in \mathbb{R}$ and $k \in \mathbb{N}$,
$$\binom{x}{k} = \frac{\prod_{i=0}^{k-1} (x-i)}{k!}.$$
The special case when $x$ is half an integer will be useful. Namely, for $n < k - 1$ we have
$$\binom{n+1/2}{k} = \frac{\prod_{i=0}^{k-1} (n+1/2-i)}{k!} = \frac{(-1)^{k-n+1}(2n+1)!\,(2k-2n-3)!}{2^{2k-2}\,n!\,(k-n-2)!\,k!}. \quad (3)$$
Now, let us consider a set $X$ with $n = 4k+1$ elements and let $\mathcal{F}$ be the set of all subsets of $X$ of size $2k$. 
The number of sets in $\mathcal{F}$ that a set $Y$ of cardinality $j$ intersects an even number of times is:
$$f(j) = \sum_{p \text{ even}} \binom{j}{p} \binom{n-j}{2k-p}.$$
The total number of sets is
$$|\mathcal{F}| = \binom{n}{2k} = \sum_{p} \binom{j}{p} \binom{n-j}{2k-p},$$
so that we are interested in the quantity
$$g(j) = f(j) - \frac{|\mathcal{F}|}{2} = \frac{1}{2} \sum_{p} (-1)^p \binom{j}{p} \binom{n-j}{2k-p}.$$
Our immediate goal is to prove that
$$g(j) = -4^{2k} \left( \binom{(j-1)/2}{2k+1} - \binom{j/2}{2k+1} \right). \quad (4)$$
We start from the following identity ([13], identity 3.42):
$$\sum_{p} (-1)^p \binom{j}{p} \binom{2m-j}{m-p} = (-4)^m \binom{(j-1)/2}{m}.$$
It is not difficult to check that
$$\binom{j}{p} \binom{4k+1-j}{2k-p} - \binom{j}{p-1} \binom{4k+1-j}{2k-(p-1)} = \binom{j}{p} \binom{4k+2-j}{2k+1-p} - \binom{j+1}{p} \binom{4k+2-(j+1)}{2k+1-p}.$$
As a consequence,
$$2 \sum_{p} (-1)^p \binom{j}{p} \binom{4k+1-j}{2k-p} = \sum_{p} (-1)^p \left[ \binom{j}{p} \binom{4k+1-j}{2k-p} - \binom{j}{p-1} \binom{4k+1-j}{2k-(p-1)} \right]$$
$$= \sum_{p} (-1)^p \left[ \binom{j}{p} \binom{4k+2-j}{2k+1-p} - \binom{j+1}{p} \binom{4k+2-(j+1)}{2k+1-p} \right]$$
$$= (-4)^{2k+1} \left( \binom{(j-1)/2}{2k+1} - \binom{j/2}{2k+1} \right),$$
which proves (4). When $j$ is even, $\binom{j/2}{2k+1} = 0$ (since $j/2 \leq 2k$), so $g(j)$ reduces to
$$g(j) = -4^{2k} \binom{(j-1)/2}{2k+1}.$$
This is a product of half integers. This product is therefore minimal in absolute value when it is centered around 0, that is when $j = 2k$ or $j = 2k + 2$. In both cases, we have
$$|g(j)| = 4^{2k} \left| \binom{k - 1/2}{2k + 1} \right|.$$
When $j$ is odd, $\binom{(j-1)/2}{2k+1} = 0$, so $|g(j)|$ reduces to
$$|g(j)| = 4^{2k} \left| \binom{j/2}{2k + 1} \right|,$$
which is minimal when $j = 2k - 1$ or $j = 2k + 1$. The minimum is the same as in the even case. By (3), the minimal absolute value that $g$ takes is therefore
$$\mu = 4^{2k} \left| \binom{k - 1/2}{2k + 1} \right| = \binom{2k - 1}{k} \sim \frac{2^{2k-1}}{\sqrt{\pi k}},$$
whereas
$$|\mathcal{F}| = \binom{4k + 1}{2k} \sim \frac{2^{4k+1}}{\sqrt{2\pi k}}.$$
Hence $\mu = \Omega\left(\sqrt{|\mathcal{F}|/\sqrt{\log|\mathcal{F}|}}\right)$. □

3 The linear algebraic point of view

In this section we are concerned with a parallel algorithm for our problem. More precisely, we shall build a logspace-uniform family of circuits of polylogarithmic depth for our problem. In the meantime we are led to exhibit another polynomial-time sequential algorithm, which is a first step towards the parallel one. We use here techniques of linear algebra, dealing now with 0-1 vectors instead of sets. Let us first formulate Theorem 1 in these terms.

**Corollary 3.** Let $u_1, \ldots, u_N \in E = (\mathbb{F}_2)^d$ be distinct nonzero vectors. There exists a vector $v \in E$ such that
$$-\frac{\sqrt{N}}{2} \leq \left| \{1 \leq i \leq N : u_i \cdot v = 0\} \right| - \frac{N}{2} \leq \frac{\sqrt{N}}{2}.$$
In what follows, a vector $v \in E$ as in the corollary is called “good” for $u_1, \ldots, u_N$. We now turn to two algorithms for finding a good vector. As input we have $N$ distinct nonzero vectors $u_1, \ldots, u_N$ of $E$, given by their coordinates (hence the size of the input is of order $Nd$). The output will be a good vector for $u_1, \ldots, u_N$. The principle of the algorithms is to restrict the search to a small set $V$ where a suitable vector $v$ is guaranteed to exist. 
If this “sample space” is small enough, we will then be able to find the vector by exhaustive search.

3.1 Existence of a small sample space

**Lemma 3.** Let $V$ be a subspace which is orthogonal to none of the $u_i - u_j$ (i.e. for all $1 \leq i < j \leq N$, there is $v \in V$ so that $v \cdot (u_i - u_j) = 1$) and to none of the $u_i$. Then there exists a good vector $v \in V$ for $u_1, \ldots, u_N$.

Proof. Let \( v_1, \ldots, v_k \) be a basis of \( V \). The condition that \( V \) is orthogonal to none of the \( u_i - u_j \) implies that the new vectors \( u'_i \) defined by \( u'_i = \sum_{l=1}^k (u_i \cdot v_l)\, v_l \) are pairwise distinct. This is because for all \( i \neq j \), there exists \( l \) such that \( u_i \cdot v_l \neq u_j \cdot v_l \). Moreover, the condition that \( V \) is orthogonal to none of the \( u_i \) implies that none of the \( u'_i \) is equal to zero. In geometric terms, \( u'_i \) may be thought of as the projection of \( u_i \) onto \( V \). We now define on \( V \) a new product \( \circ \) by the formula \( (\sum_{l=1}^k \lambda_l v_l) \circ (\sum_{l=1}^k \mu_l v_l) = \sum_{l=1}^k \lambda_l \mu_l \). For this new product (which comes just from a change of basis compared to the original inner product), Corollary 3 asserts the existence of \( w = \sum_{l} \lambda_l v_l \) which is \( \circ \)-orthogonal to at least \( N/2 - \sqrt{N}/2 \) and at most \( N/2 + \sqrt{N}/2 \) of the vectors \( u'_i \). But \( w \circ u'_i = \sum_{l} \lambda_l (u_i \cdot v_l) = w \cdot u_i \), and thus \( w \) is also good for \( u_1, \ldots, u_N \) (with the usual inner product on \( E \)). \( \square \)

Remark 3. The above lemma can also be derived from the set theoretic point of view as a consequence of Lemma 1.

We now show that the subspace of Lemma 3 can have small dimension. Recall that \( E \) is a vector space over \( \mathbb{F}_2 \) of dimension \( d \).

**Lemma 4.** Let \( U \) be a subset of \( E \) not containing 0. Then there exists a subspace \( W \) of \( E \), of dimension \( \geq d - \log(|U| + 1) \), which does not intersect \( U \).

Proof. By induction on the dimension \( d \) of \( E \). For \( d = 0 \), \( |U| = 0 \) and the result trivially follows. Assume \( d > 0 \). If \( |U| = 2^d - 1 \), i.e. \( U = E \setminus \{0\} \), we can choose \( W = \{0\} \). Hence we shall assume that there exists a nonzero vector \( w_0 \) in \( E \setminus U \). Let \( W_0 \) be the subspace (with two elements) generated by \( w_0 \). If \( |U| \geq 2^{d-1} - 1 \), then \( W_0 \) suits our needs. Otherwise, \( E/W_0 \) is a vector space of dimension \( d - 1 \) and we can apply the induction hypothesis to the set \( \bar{U} \) of the classes of elements of \( U \), which are all different from zero. This set satisfies \( |\bar{U}| \leq |U| \), hence there exists a subspace \( \bar{W} \) of \( E/W_0 \) of dimension \( \geq d - 1 - \log(|\bar{U}| + 1) \), which does not intersect \( \bar{U} \). Call \( W_1 \) the subspace of \( E \) of dimension \( 1 + \dim(\bar{W}) \), consisting of all elements of all classes of \( \bar{W} \). By definition of \( E/W_0 \), \( W_1 \) does not intersect \( U \), and is of dimension \( \geq d - \log(|U| + 1) \). \( \square \)

We now apply Lemma 4 to \( U = \{u_i - u_j \mid 1 \leq i < j \leq N\} \cup \{u_i \mid 1 \leq i \leq N\} \); we have \( |U| \leq N(N+1)/2 \). 
Hence there exists a subspace \( W \) of \( E \) of dimension at least \( d - 2 \log N \) that does not contain any of the \( u_i - u_j \) and of the \( u_i \) (this follows from the inequality \( \log(N(N+1)/2 + 1) \leq 2 \log N \), which holds true for \( N \geq 2 \); there is no loss of generality in assuming that \( N \geq 2 \) since any vector \( v \in E \) will satisfy Corollary 3 for \( N = 1 \)). The orthogonal space \( V \) of \( W \) is then of dimension \( \leq 2 \log N \) and is orthogonal to none of the \( u_i - u_j \) and to none of the \( u_i \) (because \( V^\perp = (W^\perp)^\perp = W \), as is easily verified). Note that \( V \) contains at most \( N^2 \) elements. This gives a polynomial sequential algorithm for finding a good vector (we only sketch it since we have already described a simpler sequential algorithm in Section 2.3):
1. Find a basis \( e_1, \ldots, e_b \) of a subspace \( W \) of dimension \( \geq d - 2 \log N \) which does not contain any of the \( u_i - u_j \) and of the \( u_i \) (for \( 1 \leq i < j \leq N \)). This is done by induction, taking the quotient space at each step as in the proof of Lemma 4.
2. Find the orthogonal space \( V \) of \( W \). This is done by solving the linear system \( (e_i \cdot x = 0)_{1 \leq i \leq b} \).
3. Find a good vector \( v \) in \( V \). This is done by exhaustive search.

### 3.2 A parallel algorithm

As in the sequential algorithm sketched above, we plan to perform an exhaustive search in a small sample space. The use of Lemma 4 for finding a sample space is unfortunately intrinsically sequential, since the proof works inductively in a quotient space. In fact, there is no reason to restrict the search to only one subspace: an exhaustive search can also be performed in polynomially many subspaces of small dimension in parallel. An idea to overcome the difficulty of using Lemma 4 then consists in the following. At the beginning of the algorithm, we build a family of subspaces of large dimension \( \mathcal{W} = \{W_1, \ldots, W_k\} \) that is “generic” in the sense that for all subsets \( U \subseteq E \setminus \{0\} \) of cardinality \( N(N+1)/2 \), there exists \( W_i \in \mathcal{W} \) for which \( U \cap W_i = \emptyset \). For the particular choice \( U = \{u_i - u_j : 1 \leq i < j \leq N\} \cup \{u_i : 1 \leq i \leq N\} \) we see that at least one \( W_i \) contains none of the \( u_i - u_j \) or of the \( u_i \), so by Lemma 3 the orthogonal space \( W_i^\perp \) must contain a good vector. If the \( W_i \)'s are of sufficiently large dimension, \( W_i^\perp \) has only polynomially many elements and can be searched efficiently. This yields the following theorem, which is proved in the sequel.

**Theorem 3.** There is a parallel algorithm which, given two positive integers \( N \) and \( d \) with \( N \leq 2^d \), builds in time \( O(\log N + \log d \log \log(dN)) \) a family \( \mathcal{F} \) of \( d^2N^2(N+1)^2 \) elements of \( \mathbb{F}_2^d \) that contains, for any distinct nonzero vectors \( u_1, \ldots, u_N \in \mathbb{F}_2^d \), a vector \( v \) such that
\[ N/2 - \sqrt{N}/2 \leq |\{1 \leq i \leq N : u_i \cdot v = 0\}| \leq N/2 + \sqrt{N}/2. \]
An exhaustive search in this family can therefore be performed in \( O(\log(dN)) \) parallel time, enabling us to find a good vector \( v \) on input \( u_1, \ldots, u_N \) in polylogarithmic parallel time \( O(\log N + \log d \log \log(dN)) \).

In Section 3.3, we show that a generic family \( \mathcal{W} = \{W_1, \ldots, W_k\} \) for sets \( U \) of size \( N(N+1)/2 \) indeed exists and can be built efficiently. 
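The last step of both algorithms (step 3 above, and the exhaustive search mentioned in Theorem 3) is simply a minimization over an explicit candidate list. The following minimal sequential Python sketch of that step is only an illustration under assumed names and toy data, not the NC circuit of the theorem; here a brute-force list of all nonzero vectors stands in for the small candidate family constructed in Section 3.3:

```python
def dot_f2(u: int, v: int) -> int:
    """Inner product over F_2: parity of the number of common 1-bits
    (vectors of (F_2)^d are encoded as d-bit integers)."""
    return bin(u & v).count("1") & 1

def best_candidate(inputs, candidates):
    """Return the candidate v minimizing | #{i : u_i . v = 0} - N/2 |."""
    N = len(inputs)
    def imbalance(v):
        zeros = sum(1 for u in inputs if dot_f2(u, v) == 0)
        return abs(zeros - N / 2)
    return min(candidates, key=imbalance)

# Hypothetical toy usage with d = 4: search all nonzero vectors of (F_2)^4.
inputs = [0b0001, 0b0011, 0b0101, 0b1110, 0b1001]
v = best_candidate(inputs, candidates=list(range(1, 16)))
```

In the parallel algorithm the candidate list is instead the union of the spaces $W_i^\perp$, and both the inner products and the minimization are computed by shallow circuits rather than by a sequential loop.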
Our family is of cardinality \( k \leq 2d|U| \) and each subspace \( W_i \) is of dimension at least \( d - 1 - \log(d|U|) \). The \( W_i \)'s will be given as intersections of hyperplanes, so that a spanning family of \( W_i^\perp \) is immediately found. In Section 3.3, \( U \) denotes an arbitrary subset of \( E \setminus \{0\} \). As explained above, a typical choice for \( U \) will be \( \{u_i - u_j : 1 \leq i < j \leq N\} \cup \{u_i : 1 \leq i \leq N\} \).

### 3.3 A generic family of subspaces

To allow more room, we first work in a field extension of \( \mathbb{F}_2 \). More precisely, we fix an extension \( K \) of degree \( e > \log((d-1)|U|) \), so that there are more than \( (d-1)|U| \) elements in \( K \). Note that for \( |U| = N(N+1)/2 \), a suitable choice is \( e = \lfloor \log(dN(N+1)) \rfloor \). This is the choice which will be made in Section 3.4. We look at \( K = \mathbb{F}_2[X]/(P(X)) \) where \( P(X) \) is an irreducible polynomial of \( \mathbb{F}_2[X] \) of degree \( e \). Thus the elements of \( K \) will be viewed as classes of polynomials modulo \( P \). Once the polynomial \( P \) is found, it is easy to calculate in \( K \) by manipulating polynomials of degree less than \( e \) with coefficients in \( \mathbb{F}_2 \) (details will be given in Section 3.5).

In \( K^d \), we are able to find \( |K| \) hyperplanes so that every set of cardinality \( |U| \) has an empty intersection with at least one of them. For every \( \theta \in K \), let us indeed consider the hyperplane \( H_\theta \) of \( K^d \) defined by the equation \( x_1 + \theta x_2 + \theta^2 x_3 + \cdots + \theta^{d-1} x_d = 0 \). There are \( |K| > (d-1)|U| \) different hyperplanes in this family \( \{H_\theta\}_{\theta \in K} \), and a point \( a \in K^d \setminus \{0\} \) belongs to at most \( d-1 \) distinct hyperplanes: this is due to the fact that the nonzero polynomial \( a_1 + a_2 \theta + \cdots + a_d \theta^{d-1} \) has at most \( d-1 \) distinct roots \( \theta \). Thus among these hyperplanes, at least one does not intersect \( U \).

To obtain our family over \( \mathbb{F}_2 \) (instead of \( K \)), we now consider the trace of \( H_\theta \) on \( \mathbb{F}_2^d \). For \( (x_1, \ldots, x_d) \in \mathbb{F}_2^d \), the equation of the hyperplane \( H_\theta \) can be rewritten according to the powers of \( X \):
\[ x_1 + \theta x_2 + \cdots + \theta^{d-1} x_d \equiv \sum_{i=0}^{e-1} \mu_i(x_1, \ldots, x_d) X^i \pmod{P} \]
where the \( \mu_i \) are \( \mathbb{F}_2 \)-linear combinations of the \( x_j \) (the coefficient of \( x_j \) in \( \mu_i \) is equal to the \( X^i \)-coordinate of \( \theta^{j-1} \) in the \( \mathbb{F}_2 \)-basis \( 1, X, \ldots, X^{e-1} \) of \( K \)). The intersection \( W_\theta = H_\theta \cap \mathbb{F}_2^d \) is then defined by the system of equations \( \mu_i(x) = 0 \) where \( i \) ranges over \( \{0, 1, \ldots, e-1\} \). It is therefore a subspace of \( E = (\mathbb{F}_2)^d \) of codimension at most \( e \). This construction yields a family \( \mathcal{W} = \{W_1, \ldots, W_k\} \) of \( k \leq 2^e \) subspaces with the expected genericity property: for all subsets \( U \subseteq E \setminus \{0\} \) of cardinality \( N(N+1)/2 \), there exists \( W_i \in \mathcal{W} \) for which \( U \cap W_i = \emptyset \). Since \( e \) can be taken as \( 1 + \log(d|U|) \), we get at most \( 2d|U| \) subspaces, of dimension at least \( d - 1 - \log(d|U|) \) each. As promised, these subspaces are given as intersections of hyperplanes. 
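As a concrete, purely sequential illustration of this construction, the following Python sketch builds, for given $d$, $e$ and an irreducible polynomial $P$ of degree $e$, the coefficient rows of the forms $\mu_i$ for every $\theta \in K$ and enumerates the union of the spaces $W_\theta^{\perp}$, i.e. the candidate vectors searched at the last step of the algorithm. All parameter values and identifier names are illustrative assumptions; the sketch mirrors only the algebra of the construction, not the parallel implementation analysed in Sections 3.4 and 3.5.

```python
from itertools import product

def polymul_mod(a: int, b: int, P: int, e: int) -> int:
    """Multiply a and b in K = F_2[X]/(P); polynomials are bit masks (bit i = coefficient of X^i)."""
    r = 0
    while b:                                          # carry-less multiplication
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
    for i in range(r.bit_length() - 1, e - 1, -1):    # reduction modulo P (deg P = e)
        if (r >> i) & 1:
            r ^= P << (i - e)
    return r

def candidate_vectors(d: int, e: int, P: int):
    """Vectors of (F_2)^d (as bit masks) lying in some W_theta^perp.

    For each theta in K, rows[i] is the coefficient vector of the linear
    form mu_i: its bit j is the X^i-coordinate of theta^j.  These rows
    span the orthogonal complement of W_theta, so their F_2-linear
    combinations are exactly the vectors to be searched.
    """
    candidates = set()
    for theta in range(1 << e):                       # all elements of K
        rows = [0] * e
        power = 1                                     # theta^0
        for j in range(d):
            for i in range(e):
                if (power >> i) & 1:
                    rows[i] |= 1 << j
            power = polymul_mod(power, theta, P, e)
        for coeffs in product((0, 1), repeat=e):      # enumerate the row space
            v = 0
            for c, row in zip(coeffs, rows):
                if c:
                    v ^= row
            candidates.add(v)
    return candidates

# Hypothetical toy parameters: d = 6, e = 3, P = X^3 + X + 1 (irreducible over F_2).
cands = candidate_vectors(6, 3, 0b1011)
```

For the parameters of Theorem 3 one would take $e = \lfloor \log(dN(N+1)) \rfloor$, so that the resulting list has at most $d^2N^2(N+1)^2$ vectors, and then perform the exhaustive search of step 3 of Section 3.4 over this list.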
### 3.4 High level description of the algorithm

Let us sum up the main steps of this parallel algorithm. Its implementation and analysis are discussed in the next section. The input is a set \( \{u_1, \ldots, u_N\} \) of \( N \) distinct nonzero vectors of \( E = (\mathbb{F}_2)^d \), and the output is a vector orthogonal to at least \( N/2 - \sqrt{N}/2 \) and at most \( N/2 + \sqrt{N}/2 \) of them.

1. Let \( e = \lfloor \log(dN(N+1)) \rfloor \). By enumerating in parallel all the polynomials of \( \mathbb{F}_2[X] \) of degree \( e \), find an irreducible polynomial \( P \). Let \( K = \mathbb{F}_2[X]/(P(X)) \).
2. Consider the family \( F \) of hyperplanes in \( K^d \) consisting of the \( |K| = 2^e \) hyperplanes \( (H_\theta)_{\theta \in K} \) described in Section 3.3. Rewrite the equation of each hyperplane of \( F \) as a system of \( e \) equations in \( \mathbb{F}_2 \). This is only a rearrangement of terms. We obtain one subspace \( W_\theta \) of \( (\mathbb{F}_2)^d \) of codimension at most \( e \) for each hyperplane \( H_\theta \). As a whole, this generic family thus contains at most \( 2^e \) subspaces of \( (\mathbb{F}_2)^d \).
3. Search in parallel in \( W^\perp \), for all \( W \) in the generic family. A good vector must exist in at least one of them (note that it is only this third step which actually depends on the input).

As explained in the next section, the execution time of this algorithm is polylogarithmic in the size \( dN \) of the input.

### 3.5 Implementation and analysis

We now explain how to perform this procedure quickly in parallel. First, in order to find an irreducible polynomial \( P \in \mathbb{F}_2[X] \) of degree \( e \), we merely enumerate in parallel all polynomials \( A \in \mathbb{F}_2[X] \) of degree \( e \) and test their irreducibility. There are \( 2^e \leq dN(N+1) \) such polynomials. The polynomial \( A \) is irreducible if and only if it is not divisible by any non-constant polynomial of degree \( \leq e/2 \). This yields a straightforward irreducibility test: compute in parallel the division with remainder of \( A \) by all non-constant polynomials \( B \) of degree \( \leq e/2 \) and test whether one of the remainders is zero. Finding \( P \) therefore takes parallel time \( O(e) + T(e) \), where \( T(e) \) is the cost of a division of polynomials of degree at most \( e \) over \( \mathbb{F}_2 \). Hence we only need to use a division algorithm of parallel complexity \( O(e) \). Within that generous time bound we may even try in parallel all possible quotients \( Q \) and check whether \( A = BQ \). Some parallel division algorithms are of course much faster (but overcomplicated for the problem at hand); see for instance [3]. One could also use Berlekamp’s algorithm in order to find an irreducible polynomial.

We now proceed to the second step of the algorithm, which we begin with a preliminary computation. Let \( P \) be the irreducible polynomial found at the first step, and let \( K = \mathbb{F}_2[X]/(P(X)) \) be the field with \( 2^e \) elements. We first compute \( X^i \bmod P \) for all \( i \in [e, 2(e-1)] \). The first element of this sequence is obtained immediately from \( P \), and \( X^{i+1} \bmod P \) can be obtained in constant parallel time from \( X^i \bmod P \) and \( P \) (basically by a shift of coefficients followed by at most one addition of \( P \)). The whole sequence can therefore be constructed in time \( O(e) \). At step 2, our main task is to compute \( \theta^i \) for all \( i = 0, \ldots, d-1 \) and all \( \theta \in K \).
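The trial-division irreducibility test of step 1 is easy to render explicitly. The following sequential sketch (ours, not from the text) uses the same bitmask encoding of \( \mathbb{F}_2[X] \) as above; the paper performs the enumeration in parallel.

```python
# Irreducibility over F_2 by trial division, and enumeration of degree-e candidates.
def poly_rem_f2(a, b):
    """Remainder of a modulo b in F_2[X] (bitmask encoding)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible_f2(a):
    """a is irreducible iff no non-constant polynomial of degree <= deg(a)/2 divides it."""
    deg = a.bit_length() - 1
    if deg < 1:
        return False
    # all polynomials of degree 1 .. deg//2, i.e. bitmasks 2 .. 2^(deg//2 + 1) - 1
    for b in range(2, 1 << (deg // 2 + 1)):
        if poly_rem_f2(a, b) == 0:
            return False
    return True

def find_irreducible_f2(e):
    """Return some irreducible polynomial of degree e over F_2 by enumeration."""
    for low in range(1 << e):
        candidate = (1 << e) | low             # degree exactly e
        if is_irreducible_f2(candidate):
            return candidate
    raise ValueError("no irreducible polynomial of degree %d found" % e)
```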
By fast exponentiation, \( \theta^i \) can be obtained from \( \theta \) by \( O(\log d) \) multiplications in \( K \), each of parallel cost \( O(\log e) \). Indeed, to perform such a multiplication we multiply two polynomials of degree \( \leq e-1 \) with coefficients in \( \mathbb{F}_2 \) and take the remainder modulo \( P(X) \). The cost of the multiplication in \( \mathbb{F}_2[X] \) is \( O(\log e) \), and it yields a polynomial of degree at most \( 2(e-1) \). At the beginning of step 2 we have precomputed a representation modulo \( P(X) \) of all the monomials which can possibly occur in this polynomial. Hence it simply remains to add up at most \( e \) polynomials of degree \( \leq e-1 \). This can be done in parallel time \( O(\log e) \). The parallel cost of generating our generic family of \( 2^e \) subspaces is therefore \( O(\log d \log e) \), which is \( O(\log d \log\log(dN)) \).

The orthogonal space of each subspace \( W_\theta \) contains at most \( 2^e \) points since it is of dimension at most \( e \). Altogether, we have at most \( (2^e)^2 \) points in the union of all orthogonal spaces. Since \( 2^e \leq dN(N+1) \), this yields the bound \( d^2N^2(N+1)^2 \) of Theorem 3. The additional cost of the explicit enumeration of all those points is \( O(\log e) \) since each point is the sum of at most \( e \) spanning vectors of some orthogonal space.

Finally, we can find a good vector among the \( d^2N^2(N+1)^2 \) candidates in time \( O(\log(dN)) \) by exhaustive search. First, we compute in parallel the inner products \( u_i \cdot v \) for all inputs \( u_i \) and all candidate vectors \( v \). This is done in depth \( O(\log d) \). Then, for fixed \( v \), we have to sum over all \( u_i \) to obtain the number of \( i \) such that \( u_i \cdot v = 1 \). It is well known that such an iterated addition can be performed in depth \( O(\log N) \) (see for instance [2], proof of Theorem 1.7.2). From that sum we subtract \( N/2 \) and take the absolute value, so that for every candidate \( v \) we have computed \( \big|\, |\{1 \leq i \leq N : u_i \cdot v = 1\}| - N/2 \,\big| \). We now have to find the minimum among the \( d^2 N^2 (N+1)^2 \) values; this can be done in depth \( O(\log(d^2 N^2 (N+1)^2)) = O(\log(dN)) \) since computing the minimum is an \( AC^0 \) problem (see for instance [2], example 6.2.2). Thus the exhaustive search requires parallel time \( O(\log(dN)) \) as claimed in Theorem 3. The overall parallel execution time of our algorithm is therefore \( O(\log N + \log d \log \log(dN)) \), which proves the theorem.

Remark 4. This parallel algorithm can be implemented by a family of logspace uniform boolean circuits of polynomial size and polylogarithmic depth, since each of the three steps of the algorithm can be implemented by such circuits (note that there is some redundancy in this statement since a logspace bounded Turing machine can only construct circuit families of polynomial size).

### 3.6 Logarithmic space

This section proves that the problem at hand is also in the complexity class \( L \) of problems decided by a Turing machine using \( O(\log n) \) work space. The three steps of the algorithm in Section 3.4 can indeed be performed in logarithmic space:

- Step 1 first consists in an enumeration of polynomials of logarithmic degree \( e \) with coefficients in \( \mathbb{F}_2 \). This takes \( O(e) \) work space. Then there is another enumeration of polynomials together with a divisibility test. This still requires \( O(e) \) work space.
- Step 2 consists in arithmetic operations in \( K \) in order to compute \( \theta^i \) for \( i \) from \( 0 \) to \( d-1 \).
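The final exhaustive search is likewise short to spell out. Below is a sequential sketch of ours (the paper performs both the enumeration of the orthogonal spaces and the minimization in parallel); vectors are again integer bitmasks and the names are illustrative.

```python
# Enumerate an orthogonal space from its spanning vectors and pick the best candidate.
def span_f2(spanning_vectors):
    """All 2^e linear combinations of the spanning vectors of some W_theta^perp."""
    space = {0}
    for w in spanning_vectors:
        space |= {s ^ w for s in space}
    return space

def best_candidate(candidates, us):
    """Return the candidate v minimizing | #{i : u_i . v = 1} - N/2 |."""
    n = len(us)
    def imbalance(v):
        ones = sum(bin(u & v).count("1") & 1 for u in us)
        return abs(ones - n / 2)
    return min(candidates, key=imbalance)
```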
This amounts to multiplications of polynomials of degree \( e \) and reductions modulo \( P(X) \); logarithmic space is again enough.
- Step 3 computes the orthogonal space of a vector space \( W \) given by its \( e \) equations \( (w_i \cdot x = 0)_{1 \leq i \leq e} \). The orthogonal space \( W^\perp \) is merely the vector space spanned by the \( w_i \)’s. Enumerating all \( 2^e \) choices of \( \mathbb{F}_2 \)-coefficients for these spanning vectors therefore suffices to enumerate all the vectors of \( W^\perp \). This is done in work space \( O(e) \). Now, checking whether a vector is good can once again be performed in logarithmic space.

This proves the following result.

**Theorem 4.** There is an algorithm working in space \( O(\log(dN)) \) which, when given a family of \( N \) vectors of \( (\mathbb{F}_2)^d \), outputs a good vector \( v \) for this family.

Remark 5. Considering again circuit complexity, we see that the circuit depth obtained in Theorem 3 is by no means optimal. We have indeed chosen to describe the construction of the list of all \( d^2 N^2 (N+1)^2 \) candidate vectors explicitly as a part of our parallel algorithm, but if we work with logspace uniform circuits any precomputation requiring only logspace uniformity is allowed. We have seen that we can construct in logarithmic space the whole list of candidate vectors. After that one simply has to perform an exhaustive search, which can be realized in depth \( O(\log(dN)) \) as explained above. This shows that our problem is in logspace uniform \( NC^1 \) (it can be argued, however, that logspace uniformity is not the right uniformity condition for \( NC^1 \); see for instance [3], chapter 4).

References
Abstract Modern mobile platforms like Android enable applications to read aggregate power usage on the phone. This information is considered harmless and reading it requires no user permission or notification. We show that by simply reading the phone’s aggregate power consumption over a period of a few minutes an application can learn information about the user’s location. Aggregate phone power consumption data is extremely noisy due to the multitude of components and applications that simultaneously consume power. Nevertheless, by using machine learning algorithms we are able to successfully infer the phone’s location. We discuss several ways in which this privacy leak can be remedied. 1 Introduction Our phones are always within reach and their location is mostly the same as our location. In effect, tracking the location of a phone is practically the same as tracking the location of its owner. Since users generally prefer that their location not be tracked by arbitrary 3rd parties, all mobile platforms consider the device’s location as sensitive information and go to considerable lengths to protect it: applications need explicit user permission to access the phone’s GPS and even reading coarse location data based on cellular and WiFi connectivity requires explicit user permission. In this work we show that despite these restrictions applications can covertly learn the phone’s location. They can do so using a seemingly benign sensor: the phone’s power meter that measures the phone’s power consumption over a period of time. Our work is based on the observation that the phone’s location significantly affects the power consumed by the phone’s cellular radio. The power consumption is affected both by the distance to the cellular base station to which the phone is currently attached (free-space path loss) and by obstacles, such as buildings and trees, between them (shadowing). The closer the phone is to the base station and the fewer obstacles between them the less power the phone consumes. The strength of the cellular signal is a major factor affecting the power used by the cellular radio [29]. Moreover, the cellular radio is one of the most dominant power consumers on the phone [14]. Suppose an attacker measures in advance the power profile consumed by a phone as it moves along a set of known routes or in a predetermined area such as a city. We show that this enables the attacker to infer the target phone’s location over those routes or areas by simply analyzing the target phone’s power consumption over a period of time. This can be done with no knowledge of the base stations to which the phone is attached. A major technical challenge is that power is consumed simultaneously by many components and applications on the phone in addition to the cellular radio. A user may launch applications, listen to music, turn the screen on and off, receive a phone call, and so on. All these activities affect the phone’s power consumption and result in a very noisy approximation of the cellular radio’s power usage. Moreover, the cellular radio’s power consumption itself depends on the phone’s activity, as well as the distance to the base-station: during a voice call or data transmission the cellular radio consumes more power than when it is idle. All of these factors contribute to the phone’s power consumption variability and add noise to the attacker’s view: the power meter only provides aggregate power usage and cannot be used to measure the power used by an individual component such as the cellular radio. 
Nevertheless, using machine learning, we show that the phone’s aggregate power consumption over time completely reveals the phone’s location and movement. Intuitively, the reason why all this noise does not mislead our algorithms is that the noise is not correlated with the phone’s location. Therefore, a sufficiently long power measurement (several minutes) enables the learning algorithm to “see” through the noise. We refer to power consumption measurements as time-series and use methods for comparing time-series to obtain classification and pattern matching algorithms for power consumption profiles. In this work we use machine learning to identify the routes taken by the victim based on previously collected power consumption data. We study three types of user tracking goals:

1. **Route distinguishability**: First, we ask whether an attacker can tell what route the user is taking among a fixed set of possible routes.

2. **Real-time motion tracking**: Assuming the user is taking a certain known route, we ask whether an attacker can identify her location along the route and track the device’s position on the route in real-time.

3. **New route inference**: Finally, suppose a user is moving along an arbitrary (long) route. We ask if an attacker can learn the user’s route using the previously measured power profile of many (short) road segments in that area. The attacker composes the power profile of the short road segments to identify the user’s route and location at the end of the route.

We emphasize that our approach is based on measuring the phone’s aggregate power consumption and nothing else. In particular, we do not use the phone’s signal strength, as this data is protected on Android and iOS devices and reading it requires user permission. In contrast, reading the phone’s power meter requires no special permissions. On Android, reading the phone’s aggregate power meter is done by repeatedly reading the following two files:

`/sys/class/power_supply/battery/voltage_now`
`/sys/class/power_supply/battery/current_now`

Over a hundred applications in the Play Store access these files. While most of these simply monitor battery usage, our work shows that all of them can also easily track the user’s location.

**Our contributions.** Our work makes the following contributions:

- We show that the power meter available on modern phones can reveal potentially private information.

- We develop the machine learning techniques needed to use data collected from the power meter to infer location information. The technical details of our algorithms are presented in sections 4, 5 and 6, followed by experimental results.

- In sections 8 and 9 we discuss potential continuations of this work, as well as defenses to prevent this type of information leakage.

## 2 Threat Models

We assume a malicious application is installed on the victim’s device and runs in the background. The application has no permission to access the GPS or any other location data such as the cellular or WiFi components. In particular, the application has no permission to query the identity of visible cellular base stations or the SSID of visible WiFi networks. We only assume access to power data (which requires no special permissions on Android) and permission to communicate with a remote server. Network connectivity is needed to generate dummy low-rate traffic to prevent the cellular radio from going into a low power state. In our setup we also use network connectivity to send data to a central server for processing.
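To illustrate what “access to power data” amounts to, the sketch below polls the two sysfs files named above and converts the readings to watts. It is only an illustration of the computation: a real collector would run as a native Android app, and the units and sign conventions of `current_now` vary across device models (an assumption of this sketch, not something stated in the text).

```python
import time

# Sysfs nodes named above; commonly microvolts and microamperes.
VOLTAGE_NOW = "/sys/class/power_supply/battery/voltage_now"
CURRENT_NOW = "/sys/class/power_supply/battery/current_now"

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

def sample_power(duration_s=300, period_s=0.1):
    """Collect an aggregate power profile (in watts) for duration_s seconds."""
    samples = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        volts = read_int(VOLTAGE_NOW) * 1e-6
        amps = read_int(CURRENT_NOW) * 1e-6
        samples.append(abs(volts * amps))       # instantaneous power estimate
        time.sleep(period_s)
    return samples
```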
However, it may be possible to do all processing on the phone.\(^1\)

As noted earlier, the application can only read the aggregate power consumed by the phone. It cannot measure the power consumed by the cellular radio alone. This presents a significant challenge since many components on the phone consume variable amounts of power at any given time. Consequently, all the measurements are extremely noisy and we need a way to “see” through the noise.

To locate the phone, we assume the attacker has prior knowledge of the area or routes through which the victim is traveling. This knowledge allows the attacker to measure the power consumption profile of different routes in that area in advance. Our system correlates this data with the phone’s measured power usage and we show that, despite the noisy measurements, we are able to correctly locate the phone. Alternatively, as in many other machine learning settings, the training data can also be collected after obtaining the unlabeled query data. For instance, suppose an attacker has obtained the power consumption profile of a user whose past location it is extremely important to determine. She can still collect, after the fact, reference profiles for a limited area in which the user has likely been driving and carry out the attack.

For this to work we need the tracked phone to be moving in a car or a bus while being tracked. Our system cannot locate a phone that is standing still since that only provides the power profile for a single location. We need multiple adjacent locations for the attack to work.

Given the resources at our disposal, the focus of this work is on locating a phone among a set of local routes in a pre-determined area. A larger effort is needed to scale the system to cover the entire world by pre-measuring the power profile of all road segments worldwide. Nevertheless, our localized experiments already show that tracking users who follow a daily routine is quite possible. For example, a mobile device owner might choose one of a small number of routes to get from home to work. The system correctly identifies what route was chosen and, in real time, identifies where the phone is along that route. This already serves as a cautionary note about the type of information that can be leaked by a seemingly innocuous sensor like the power meter.

We note that scaling the system to cover worldwide road segments can be done by crowd-sourcing: a popular app, or perhaps even the core OS, can record the power profile of streets traveled by different users and report the results to a central server. Over time the resulting dataset will cover a significant fraction of the world. On the positive side, our work shows that service providers can legitimately use this dataset to improve the accuracy of location services. On the negative side, tracking apps can use it to covertly locate users. Given that all that is required is one widespread application, many actors in the mobile space are in a position to build the required dataset of power profiles and use it as they will.

\(^1\) It is important to mention here that while a network access permission will appear in the permission list for an installed application, it does not currently appear in the list of required permissions prior to application installation.

3 Background

In this section we provide technical background on the relation between a phone’s location and its cellular power consumption.
We start with a description of how location is related to signal strength, and then we describe how signal strength is related to power consumption. Finally, we present examples of this phenomenon and demonstrate how access to power measurements could leak information about a phone’s location.

3.1 Location affects signal strength and power consumption

Distance to the base station is the primary factor that determines a phone’s signal strength. The reason is that, for signals propagating in free space, the signal’s power loss is proportional to the square of the distance it travels [11]. Signal strength is not only determined by path loss; it is also affected by objects in the signal path, such as trees and buildings, that attenuate the signal. Finally, signal strength also depends on multi-path interference caused by objects that reflect the radio signal back to the phone through various paths having different lengths. In wireless communication theory signal strength is often modeled as random variation (e.g., log-normal shadowing [11]) to simulate many different environments. However, in one location signal strength can be fairly consistent as base stations, attenuators, and reflectors are mostly stationary.

A phone’s received signal strength to its base station affects its cellular modem power consumption. Namely, phone cellular modems consume less instantaneous power when transmitting and receiving at high signal strength compared to low signal strength. Schulman et al. [29] observed this phenomenon on several different cellular devices operating on different cellular protocols. They showed that communication at a poor signal location can result in a device power draw that is 50% higher than at a good signal location. The primary reason for this phenomenon is the phone’s power amplifier used for transmission, which increases its gain as signal strength drops [11]. This effect also occurs when a phone is only receiving packets. The reason is that cellular protocols require constant transmission of channel quality reports and acknowledgments to base stations.

3.2 Power usage can reveal location

The following results from driving experiments demonstrate the potential of leaking location from power measurements. We first demonstrate that signal strength at each location on a drive can be stable over the course of several days. We collected signal strength measurements from a smartphone once, and again several days later. In Figure 1 we plot the signal strength observed on these two drives. In this figure it is apparent that (1) the segments of the drive where signal strength is high (green) and low (red) are in the same locations across both days, and (2) the progression of signal strength along the drive appears to be a unique irregular pattern.

Next, we demonstrate that, just like signal strength, power measurements of a smartphone while it communicates can reveal a stable, unique pattern for a particular drive. Unlike signal strength, power measurements are less likely to be stable across drives because power depends on how the cellular modem reacts to changing signal strength: a small difference in signal strength between two drives may put the cellular modem in a mode that has a large difference in power consumption. For example, a small difference in signal strength may cause a phone to hand off to a different cellular base station and stay attached to it for some time (Section 3.3).
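For reference, the inverse-square behaviour mentioned above corresponds to the standard free-space (Friis) relation between transmitted and received power; this textbook formula is added here for context and is not quoted from the text:
\[
P_r \;=\; P_t \, G_t \, G_r \left(\frac{\lambda}{4\pi d}\right)^{2},
\]
so that, all else being equal, received power falls off as \( 1/d^2 \), i.e., the free-space path loss in dB grows as \( 20\log_{10} d \) plus constants.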
Figure 2 shows power measurements for two Nexus 4 phones in the same vehicle, transmitting packets over their cellular link, while driving on the same path. The power consumption variations of the two Nexus 4 phones are similar, indicating that power measurements can be mostly stable across devices. Finally, we demonstrate that power measurements can be stable across different models of smartphones. This stability would allow an attacker to obtain a reference power measurement for a drive without using the same phone as the victim’s. We recorded power measurements, while transmitting packets over cellular, using two different smartphone models (Nexus 4 and Nexus 5) during the same ride, and we aligned the power samples according to absolute time. The results presented in Figure 3 indicate that there is similarity between different models that could allow one model to be used as a reference for another. This experiment serves as a proof of concept: we leave further evaluation of such an attack scenario, where the attacker and victim use different phone models, to future work. In this paper, we assume that the attacker can obtain reference power measurements using the same phone model as the victim.

### 3.3 Hysteresis

A phone attaches to the base station having the strongest signal. Therefore, one might expect that the base station to which a phone is attached and the signal strength will be the same in one location. Nonetheless, it is shown in [29] that signal strength can be significantly different at a location based on how the device arrived there, for example, the direction of arrival. This is due to the hysteresis algorithm used to decide when to hand off to a new base station. A phone hands off from its base station only when its received signal strength dips below the signal strength from the next base station by more than a given threshold [26]. Thus, two phones that reside in the same location can be attached to two different base stations. Hysteresis has two implications for determining a victim’s location from power measurements: (1) an attacker can only use a reference power measurement taken in the same direction of travel, and (2) it complicates inferring new routes from power measurements collected from individual road segments (Section 6).

### 3.4 Background summary and challenges

The initial measurements in this section suggest that the power consumed by the cellular radio is a side channel that leaks information about the location of a smartphone. However, there are four significant challenges that must be overcome to infer location from the power meter. First, during the pre-measurement phase the attacker may have traveled at a different speed and encountered different stops than the target phone. Second, the attacker will have to identify the target’s power profile from among many pre-collected power profiles along different routes. Third, once the attacker determines the target’s path, the exact location of the target on the path may be ambiguous because of similarities in the path’s power profile. Finally, the target may travel along a path that the attacker only partially covered during the pre-measurement phase: the attacker may have only pre-collected measurements for a subset of segments in the target’s route. In the following sections we describe techniques that address each of these challenges and experiment with their accuracy.
4 Route distinguishability

As a warm-up we show how the phone’s power profile can be used to identify which route the user is taking from among a small set of possible routes (say, 30 routes). Although we view it as a warm-up, building towards our main results, route distinguishability is still quite useful. For example, if the attacker is familiar with the user’s routine, then the attacker can pre-measure all the user’s normal routes and then repeatedly locate the user among those routes.

Route distinguishability is a classification problem: we collected power profiles associated with known routes and want to classify new samples based on this training set. We treat each power profile as a time series which needs to be compared to other time series. A score is assigned after each comparison, and based on these scores we select the most likely matching route. Because different rides along the same route can vary in speed at different locations along the ride, and because routes having the same label can vary slightly at certain points (especially before getting to a highway and after exiting it), we need to compare profile features that can vary in time and length and allow for a certain amount of difference. We also have to compensate for different baselines in power consumption due to constant components that depend on the running applications and on differences in device models.

We use a classification method based on Dynamic Time Warping (DTW) [23], an algorithm for measuring similarity between temporal sequences that are misaligned and vary in time or speed. We compute the DTW distance\(^3\) between the new power profile and all reference profiles associated with known routes, selecting the known route that yields the minimal distance. More formally, if the reference profiles are given by sequences \(\{X_i\}_{i=1}^n\) and the unclassified profile is given by sequence \(Y\), we choose the route
\[
\hat{i} = \arg\min_i \text{DTW}(Y, X_i),
\]
which is equivalent to 1-NN classification under the DTW metric. Because the profiles might have different baselines and variability, we perform the following normalization for each profile prior to computing the DTW distance: we calculate the mean and subtract it, and divide the result by the standard deviation. We also apply some preprocessing in the form of smoothing the profiles using a moving average (MA) filter in order to reduce noise and obtain the general power consumption trend, and we downsample by a factor of 10 to reduce computational complexity.

5 Real-time mobile device tracking

In this section we consider the following task: the attacker knows that a mobile user is traveling along a particular route, and our objective is to track the mobile device as it is moving along the route. We do not assume a particular starting point along the route, meaning, in probabilistic terms, that our prior on the initial location is uniform.
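Before turning to real-time tracking, the following sketch illustrates the DTW machinery that both the route classifier of Section 4 and the tracking of Section 5 rely on: smoothing, downsampling, z-normalization and 1-NN matching under a length-normalized DTW distance. The moving-average window and the exact normalization by path length are illustrative choices of ours, not parameters quoted from the text (the downsampling factor of 10 is).

```python
import numpy as np

def preprocess(profile, ma_window=50, factor=10):
    """Moving-average smoothing, downsampling and z-normalization."""
    smoothed = np.convolve(profile, np.ones(ma_window) / ma_window, mode="valid")
    reduced = smoothed[::factor]
    return (reduced - reduced.mean()) / reduced.std()

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost,
    crudely normalized so routes of different lengths stay comparable."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

def classify_route(query, references):
    """1-NN classification; references maps route label -> preprocessed profiles."""
    q = preprocess(query)
    return min(
        ((label, dtw_distance(q, ref))
         for label, profiles in references.items() for ref in profiles),
        key=lambda pair: pair[1],
    )
```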
The attacker has reference power profiles collected in advance for the target route, and constantly receives new power measurements from an application installed on the target phone. Its goal is to locate the device along the route, and continue tracking it in real-time as it travels along the route.

5.1 Tracking via Dynamic Time Warping

This approach is similar to that of route distinguishability, but we use only the measurements collected up to this point, which comprise a sub-sequence of the entire route profile. We use the Subsequence DTW algorithm [23], rather than the classic DTW, to search for a sub-sequence in a larger sequence; it returns a distance measure as well as the corresponding start and end offsets. We search for the sequence of measurements we have accumulated since the beginning of the drive in all our reference profiles and select the profile that yields the minimal DTW distance. The location estimate corresponds to the location associated with the end offset returned by the algorithm.

\(^3\)In fact we compute a normalized DTW distance, as we have to compensate for differences in the lengths of different routes – a longer route might yield a larger DTW distance despite being more similar to the tested sequence.

5.2 Improved tracking via a motion model

While the previous approach can make mistakes in location estimation due to a match with an incorrect location, we can further improve the estimation by imposing rules based on a sensible motion model. We first need to know when we are “locked” on the target. For this purpose we define a similarity threshold: if the similarity of the best match (which is inversely related to the minimal DTW distance) is above this threshold, we are in a locked state. Once we are locked on the target, we perform a simple sanity check at each iteration: “Has the target displaced by more than X?” If the sanity check does not pass we consider the estimate unlikely to be accurate, and simply output the previous estimate as the new estimated location. If the similarity is below the threshold, we switch to an unlocked state, and stop performing this sanity check until we are “locked” again. Algorithm 1 presents this logic as pseudocode.

Algorithm 1 Improved tracking using a simple motion model

    locked ← false                          ▷ Are we locked on the target?
    while target moving do
        loc[i], score ← estimateLocation()
        d ← getDistance(loc[i], loc[i−1])
        if locked and d > MAX_DISP then
            loc[i] ← loc[i−1]               ▷ Reuse previous estimate
        end if
        if score > THRESHOLD then
            locked ← true
        else
            locked ← false
        end if
    end while

5.3 Tracking using Optimal Subsequence Bijection

Optimal Subsequence Bijection (OSB) [17] is a technique, similar to DTW, that enables aligning two sequences. In DTW, we align the query sequence with the target sequence without skipping elements in the query sequence, thereby assuming that the query sequence contains no noise. OSB, on the other hand, copes with noise in both sequences by allowing elements to be skipped. A fixed jump-cost is incurred with every skip in either the query or the target sequence. This extra degree of freedom has the potential to align noisy subsequences more effectively in our case. In the evaluation section we present results obtained by using OSB and compare them to those obtained using DTW.
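The motion-model correction of Algorithm 1 can be rendered compactly in Python. In this sketch, `estimate_location` stands in for the subsequence-DTW or OSB matcher of Sections 5.1 and 5.3 and returns a (location, similarity score) pair; the distance function, threshold and displacement bound are illustrative assumptions, not values from the text.

```python
def track(prefixes, estimate_location, distance, max_disp=500.0, threshold=0.8):
    """Online tracking over growing prefixes of the power profile."""
    locked, prev_loc, estimates = False, None, []
    for prefix in prefixes:
        loc, score = estimate_location(prefix)
        if locked and prev_loc is not None and distance(loc, prev_loc) > max_disp:
            loc = prev_loc                   # sanity check failed: reuse previous estimate
        locked = score > threshold           # unlock when similarity drops
        estimates.append(loc)
        prev_loc = loc
    return estimates
```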
6 Inference of new routes

In Section 4 we addressed the problem of identifying the route traversed by the phone, assuming the potential routes are known in advance. This assumption allowed us to train our algorithm specifically for the potential routes. As previously mentioned, there are indeed many real-world scenarios where it is applicable. Nevertheless, in this section we set out to tackle a broader tracking problem, where the future potential routes are not explicitly known. Here we specifically aim to identify the final location of the phone after it traversed an unknown route. We assume that the area in which the mobile device owner moves is known; however, the number of all possible routes in that area may be too large to practically pre-record each one. Such an area can be, for instance, a university campus, a neighborhood, a small town or a highway network.

We address this problem by pre-recording the power profiles of all the road segments within the given area. Each possible route a mobile device may take is a concatenation of some subset of these road segments. Given a power profile of the tracked device, we will reconstruct the unknown route using the reference power profiles corresponding to the road segments. The reconstructed route will enable us to estimate the phone’s final location. Note that, due to the hysteresis of hand-offs between cellular base stations, the power consumption is not only dependent on the traveled road segment, but also on the previous road segment the device came from. In Appendix A we formalize this problem as a hidden Markov model (HMM) [27]. In the following we describe a method to solve the problem using a particle filter. The performance of the algorithm will be examined in the next section.

6.1 Particle Filter

A particle filter [1] is a method that estimates the state of an HMM at each step based on observations up to that step. The estimation is done using a Monte Carlo approximation where a set of samples (particles) is generated at each step that approximates the probability distribution of the states at the corresponding steps. A comprehensive introduction to particle filters and their relation to general state-space models is provided in [28].

We implement the particle filter as follows. We denote \( O = \{ o^{w}_{xy} \} \), where \( o^{w}_{xy} \) is a power profile prerecorded over segment \((x, y)\) while segment \((w, x)\) had been traversed just before it. We use a discrete time resolution \( \tau = 3 \) seconds. We denote by \( \Delta_{\text{min}} \) and \( \Delta_{\text{max}} \) the minimum and maximum time duration to traverse road segment \((y, z)\), respectively. We assume these bounds can be derived from prerecordings of the segments. At each iteration \( i \) we have a sample set of \( N \) routes \( P_i = \{(Q, T)\} \). The initial set of routes \( P_0 \) is chosen according to \( \Pi \). At each step, we execute the following algorithm:

**Algorithm 2** Particle filter for new routes estimation
```plaintext
for all route p in P do
    t_end ← end time of p
    (x, y) ← last segment of p
    z ← next intersection to traverse (distributed by A)
    W_p ← min over t in [Δ_min, Δ_max] of DTW( O[t_end, t_end + t], o^x_{yz} )
    p ← p || {(y, z)}
    Update the end time of p
end for
Resample P according to the weights W_p
```

At each iteration, we append a new segment, chosen according to the prior \( A \), to each possible route (represented by a particle).
Then, the traversal time of the new segment is chosen so that it will have a minimal DTW distance to the respective time interval of the tracked power profile. We take this minimal distance as the weight of the new route. After normalizing the weights of all routes, a resampling phase takes place: \( N \) routes are chosen from the existing set of routes according to the particle weights distribution\(^4\). The new resampled set of routes is the input to the next iteration of the particle filter.

The total number of iterations should not exceed an upper bound on the number of segments that the tracked device can traverse. Note however that a route may exhaust the examined power profile before the last iteration (namely, the end time of that route reached \( t_{max} \)). In such a case we do not update the route in all subsequent iterations (this case is not described in Algorithm 2 to facilitate fluency of exposition).

Before calculating the DTW distance of a pair of power profiles, the profiles are preprocessed to remove as much noise as possible. We first normalize the power profile by subtracting its mean and dividing by the standard deviation of all values included in that profile. Then, we zero out all power values below a threshold percentile. This last step allows us to focus only on the peaks in power consumption, where the radio’s power consumption is dominant, while ignoring the lower power values for which the radio’s power has a lesser effect. The percentile threshold we use in this paper is 90%.

Upon its completion, the particle filter outputs a set of \( N \) routes of various lengths. To select the best estimate route, the simple approach is to choose the route that appears the most times in the output set, as it has the highest probability to occur. Nonetheless, since a route is composed of multiple segments chosen at separate steps, at each step the weight of a route is determined solely based on the last segment added to the route. Therefore, the output route set is biased in favor of routes ending with segments that were given higher weights, while the weights of the initial segments have a diminishing effect on the route distribution with every new iteration. To counter this bias, we choose another estimate route using a procedure we call iterative majority vote, described in Appendix B.

7 Experiments

7.1 Data collection

Our experiments required collecting real power consumption data from smartphone devices along different routes. We developed the PowerSpy Android application\(^5\) that collects various measurements including signal strength, voltage, current, GPS coordinates, temperature, state of discharge (battery level) and cell identifier. The recordings were performed using Nexus 4, Nexus 5 and HTC mobile devices.

\(^4\) Note that the resampling of the new routes can have repetitions; namely, the same route can be chosen more than once.

\(^5\) Source code can be obtained from https://bitbucket.org/ymrcat/powerspy.

7.2 Assumptions and limitations

Exploring the limits of our attack, i.e. establishing the minimal necessary conditions for it to work, is beyond our resources. For this reason, we state the assumptions on which we rely in our methods. We assume there is enough variability in power consumption along a route to exhibit unique features. Lack of variability may be due to a high density of cellular antennas that flattens the signal strength profile. We also assume that enough communication is occurring for the signal strength to have an effect on power consumption. This is a reasonable assumption, since background synchronization of data happens frequently in smartphone devices. Moreover, the driver might be using navigation software or streaming music.
However, at this stage, it is difficult to determine how inconsistent phone usage across different rides will affect our attacks. Identifying which route the user took involves understanding which power measurements collected from her mobile device occurred during driving activity. Here we simply assume that we can identify driving activity. Other works (e.g., [22]) address this question by using data from other sensors that require no permissions, such as gyroscopes and accelerometers.

Some events that occur while driving, such as an incoming phone call, can have a significant effect on power consumption. Figure 4 shows the power profile of a device at rest when a phone call takes place (the part marked in red). The peak immediately after the phone call is caused by using the phone to terminate the phone call and turn off the display. We can see that this event appears prominently in the power profile, and we can cope with such transient effects by identifying and truncating peaks that stand out in the profile. In addition, smoothing the profile with a moving average should mitigate these transient effects.

7.3 Route distinguishability

To evaluate the algorithm for distinguishing routes (Section 4) we recorded reference profiles for multiple different routes. The profiles include measurements from both Nexus 4 and Nexus 5 models. In total we had a dataset of 294 profiles, representing 36 unique routes. Driving in different directions along the same roads (from point A to B vs. from point B to A) is considered as two different routes. We perform cross-validation using multiple iterations (100 iterations), each time using a random portion of the profiles as a training set, and requiring an equal number of samples for each possible class. The sizes of the training and test sets depend on how many reference profiles per route we require each time. Naturally, the more reference profiles we have, the higher the identification rate.

One evaluation round included 29 unique routes, with only 1 reference profile per route in the training set, and 211 test routes. It resulted in a correct identification rate of 40%. That is compared to the random guess probability of only 3%. Another round included 25 unique routes, with 2 reference profiles per route in the training set and 182 routes in the test set, and resulted in a correct identification rate of 53% (compared to the random guess probability of only 4%). Having 5 reference profiles per route (for 17 unique routes) raises the identification rate to 71%, compared to the random guess probability of 5.8%. And finally, with 10 reference profiles per route (for 8 unique routes) we get 85% correct identification. The results are summarized in Table 1. We can see that an attacker can have a significant advantage in guessing the route taken by a user.

7.4 Real-time mobile device tracking

We evaluate the algorithm for real-time mobile device tracking (Section 5) using a set of 10 training profiles and an additional test profile. The evaluation simulates the conditions of real-time tracking by serially feeding samples to the algorithm as if they are received from an application installed on the device. We calculate the estimation error, i.e.
the distance between the estimated coordinates and the true location of the mobile device at each step of the simulation. We are interested in the convergence time, i.e. the number of samples it takes until the location estimate is close enough to the true location, as well as in the distribution of the estimation errors, given by a histogram of the absolute values of the distances.

Figure 5 illustrates the performance of our tracking algorithm for one of the routes, which was about 19 kilometers long. At the beginning, when there are very few power samples, the location estimation is extremely inaccurate, but after two minutes we lock on the true location. We obtained a precise estimate from 2 minutes up until 20 minutes into the route, where our estimate slightly diverges due to increased velocity on a freeway segment. Around 26 minutes (in Figure 5a) we have a large estimation error, but as we mentioned earlier, these kinds of errors are easy to prevent by imposing a simple motion model (Section 5.2). Most of the errors are small compared to the length of the route: 80% of the estimation errors are less than 1 km. We also tested the improved tracking algorithm explained in Section 5.2. Figure 5b presents the estimation error over time, and we can see that the big errors towards the end of the route that appeared in Figure 5a are not present in Figure 5b. Moreover, now almost 90% of the estimation errors are below 1 km (Figure 6).

We provide animations visualizing our results for real-time tracking at the following links. The animations, generated using our estimations of the target’s location, depict a moving target along the route and our estimation of its location. The first one corresponds to the method described in Section 5.1, and the second to the one described in Section 5.2 that uses the motion-model based correction:

crypto.stanford.edu/powerspy/tracking1.mov
crypto.stanford.edu/powerspy/tracking2.mov

| # Unique Routes | # Ref. Profiles/Route | # Test Routes | Correct Identification % | Random Guess % |
|---|---|---|---|---|
| 8 | 10 | 55 | 85 | 13 |
| 17 | 5 | 119 | 71 | 6 |
| 17 | 4 | 136 | 68 | 6 |
| 21 | 3 | 157 | 61 | 5 |
| 25 | 2 | 182 | 53 | 4 |
| 29 | 1 | 211 | 40 | 3 |

Table 1: Route distinguishability evaluation results. The first column indicates the number of unique routes in the training set. The second column indicates the number of training samples per route at the attacker’s disposal. The number of test routes indicates the number of power profiles the attacker is trying to classify. The correct identification percentage indicates the percentage of correctly identified routes as a fraction of the third column (test set size), which can then be compared to the expected success of random guessing in the last column.

Figure 5: Location estimation error for online tracking. (a) Convergence to true location. (b) Location estimation error for the improved tracking algorithm.

Figure 6: Estimation error distribution for motion-model tracking. (a) Error histogram: almost 90% of the errors are less than 1 km. (b) Error cumulative distribution.

7.4.1 OSB vs. DTW

We compare the performance of Dynamic Time Warping to that of Optimal Subsequence Bijection (Section 5.3).
Figure 7 presents such a comparison for the same route, using two different recordings. The tracking was performed without compensating for errors using a motion model, to evaluate the performance of the subsequence matching algorithms as they are. We can see that, in both cases, Optimal Subsequence Bijection outperforms the standard Subsequence-DTW most of the time. Therefore, we suggest that further experimentation with OSB could potentially be beneficial for this task.

7.5 Inference of new routes

7.5.1 Setup

For the evaluation of the particle filter presented in Section 6 we considered an area depicted in Figure 8. The area has 13 intersections connected by 35 road segments. The average length of a road segment is about 400 meters. The average travel time over the segments is around 70 seconds. The area is located in the center of Haifa, a city located in northern Israel, having a population density comparable to Philadelphia or Miami. Traffic congestion in this area varies across segments and time of day. For each power recording, the track traversed at least one congested segment. Most of the 13 intersections have traffic lights, and about a quarter of the road segments pass through them. We had three pre-recording sessions which in total covered all segments. Each road segment was entered from every possible direction to account for the hysteresis effects. The pre-recording sessions were done using the same Nexus 4 phone.

We set the following parameters of the HMM (as they are defined in Appendix A):

1. \( A \) – This set defines the transition probabilities between the road segments. We set these probabilities to be uniformly distributed over all possible transitions. Namely, \( a_{yz} = \frac{1}{|I_y|} \), where \( I_y = \{ w \mid (y, w) \in R,\ w \neq x \} \) is the set of intersections reachable from \( y \) other than the intersection \( x \) the device came from.

2. \( B \) – This set defines the distribution of power profile observations over each state. These probabilities depend on the road segments and their location relative to the nearby base stations. We do not need an explicit formulation of these probabilities to employ the particle filter. The likelihood of a power profile to be associated with a road segment is estimated by the DTW distance of the power profile to prerecorded power profiles of that segment.

3. \( \Pi \) – This set defines the initial state distribution. We assume that the starting intersection of the tracked device is known. This applies to scenarios where the tracking begins from well-known locations, such as the user’s home, office, or another location the attacker knows in advance.

For testing, we used 4 phones: two Nexus 4 devices (different from the one used for the pre-recordings), a Nexus 5, and an HTC Desire.

Tables 3 to 5 summarize the results of route estimation for each of the four phones. For each route we have two alternatives for picking an estimate: (1) the most frequent route in the particle set as output by Algorithm 2; (2) the route output by Algorithm 3. For each alternative we note the road segment in which the phone is estimated to be after the completion of its track and compare it with the final road segment of the true route. This allows us to measure the accuracy of the algorithm for estimating the location of the user’s destination (the end of the track). This is the most important metric for many attack scenarios where the attacker wishes to learn the destination of the victim. In some cases it may also be beneficial for the attacker to know the actual route the victim traversed on the way to the destination.
For this purpose, we also calculate for each alternative estimate the Levenshtein distance between it and the true route. The Levenshtein distance is a standard metric for measuring the difference between two sequences [18]. It equals the minimum number of edits required to change one sequence into the other. In this context, we treat a route as a sequence of intersections. The distance is normalized by the length of the longer route of the two. This allows us to measure the accuracy of the algorithm for estimating the full track the user traversed. For each estimate we also note whether it is an exact fit with the true route (i.e., zero distance). The percentage of successful localizations of the destination, the average Levenshtein distance and the percentage of exact full route fits are calculated for each type of estimated route. We also calculate these metrics for both estimates combined, taking into account for each track the best of the two estimates. To benchmark the results we note in each table the performance of a random estimation algorithm which simply outputs a random, albeit feasible, route.

The results in Table 3 show the accuracy of destination identification. It is evident that the performance of the most frequent route output by the particle filter is comparable to the performance of the best estimate output by Algorithm 3. However, their combined performance is significantly better than either estimate alone and predicts the final destination of the phone more accurately. This result suggests that Algorithm 3 extracts a significant amount of information from the routes output by the particle filter beyond the information gleaned from the most frequent route.

Table 3 indicates that for Nexus 4 #1 the combined route estimates were able to identify the final road segment in 80% of all scenarios. For Nexus 4 #2, which was running many applications, the final destination estimates are somewhat less accurate (72%). This is attributed to the noisier measurements of the aggregate power consumption. The accuracy for the two other models – Nexus 5 and HTC Desire – is lower than the accuracy achieved for Nexus 4. Remember that all our pre-recordings were done using a Nexus 4. These results may indicate that the power consumption profile of the cellular radio is dependent on the phone’s model. Nonetheless, for both phones we achieve significantly higher accuracy of destination localization (55% and 65%) as compared to the random case (about 20%).

Tables 4 and 5 present measures – Levenshtein distance and exact full route fit – of the accuracy of the estimates for the full route the phone took to its destination. Here, again, the algorithm performed best for Nexus 4 #1: it was able to exactly estimate 45% of the full routes to the destination.

Table 4: Levenshtein distance

| | random | frequent | Alg. 3 | combined |
|---|---|---|---|---|
| Nexus 4 #1 | 0.61 | 0.38 | 0.27 | 0.24 |
| Nexus 4 #2 | 0.63 | 0.61 | 0.59 | 0.52 |
| Nexus 5 | 0.68 | 0.6 | 0.55 | 0.45 |
| HTC Desire | 0.65 | 0.59 | 0.5 | 0.45 |

Table 5: Exact full route fit

| | random | frequent | Alg. 3 | combined |
|---|---|---|---|---|
| Nexus 4 #1 | 4% | 38% | 22% | 45% |
| Nexus 4 #2 | 5% | 8.5% | 5% | 15% |
| Nexus 5 | 3% | 15% | 9% | 20% |
| HTC Desire | 5% | 10% | 12% | 17% |

On the other hand, for the busier Nexus 4 #2 and the other phone models the performance was worse. It is evident from the results that for these three phones the algorithm had difficulties producing an accurate estimate of the full route. Nonetheless, in all cases the accuracy is always markedly higher than that of the random case. To give a better sense of the distance metric used to evaluate the quality of the estimated routes, Figure 9 depicts three cases of estimation errors and their corresponding distance values in increasing order. It can be seen that even estimation errors with relatively high distances can retain a significant amount of information about the true route.

8 Future directions

In this section we discuss ideas for further research, improvements, and additions to our method.

8.1 Power consumption inference

While new (yet very common) smartphone models contain an internal ampere-meter and provide access to current data, other models (for instance the Galaxy S III) supply voltage but not current measurements. Therefore on these models we cannot directly calculate the power consumption. V-edge [31] proposes using voltage dynamics to model a mobile device’s power consumption. That and any other similar technique would extend our method and make it applicable to additional smartphone models. Ref. [33] presents PowerTutor, an application that estimates power consumption by different components of the smartphone device based on voltage and state of discharge measurements. Isolating the power consumed by the cellular connectivity would improve our method by eliminating the noise introduced by other components, such as audio, Bluetooth and WiFi, that do not directly depend on the route.

8.2 State of Discharge (SOD)

The time derivative of the State of Discharge (the battery level) is basically a very coarse indicator of power consumption. While it seemed to be too inaccurate for our purpose, there is a chance that extracting better features from it, or having few possible routes, may render distinguishing routes based on SOD profiles feasible. Putting it to the test is even more interesting given the HTML5 Battery API that enables obtaining certain battery statistics from a web page via JavaScript. Our findings demonstrate how future increases in the sampling resolution of the battery stats may make this API even more dangerous, allowing web-based attacks.

8.3 Choice of reference routes

Successful classification depends, among other factors, on good matching between the power profile we want to classify and the reference power profiles. Optimal matching might be a matter of the month, time of day, traffic on the road, and more. We can possibly improve our classification if we tag the reference profiles with those associated conditions and select reference profiles matching the current conditions when trying to distinguish a route. That of course requires collecting many more reference profiles.
8.4 Collecting a massive dataset Collecting a massive dataset of power profiles associated with GPS coordinates is a feasible task given vendors’ capability to legally collect analytics about users’ use of their smartphones. Obtaining such big dataset will enable us to better understand how well our approach can scale and whether it can be used with much less prior knowledge about the users. 9 Defenses 9.1 Non-defenses One might think that by adding noise or limiting the sampling rate or the resolution of the voltage and current measurements one could protect location privacy. However, our method does not rely on high sampling frequency or resolution. In fact, our method works well with profiles much coarser than what we can directly get from the raw power data, and for the route distinguishing task we actually performed smoothing and downsampling of the data yet obtained good results. Our method also works well with signal strength, which is provided with much lower resolution and sampling frequency\textsuperscript{7}. 9.2 Risky combination of power data and network access One way of reporting voltage and current measurements to the attacker is via a network connection to the attacker’s server. Warning the user of this risky combination may somewhat raise the bar for this attack. There are of course other ways to leak this information. For instance, a malicious application disguised as a diagnostic software can access power data and log it to a file, without attempting to make a network connection, while another, seemingly unrelated, application reads the data from that file and sends it over the network. 9.3 Secure hardware design The problem with access to total power consumption is that it leaks the power consumed by the transceiver circuitry and communication related tasks that indicate signal strength. While power measurements can be useful for profiling applications, in many cases, examining the power consumed by the processors executing the software logic might be enough. We therefore suggest that supplying only measurements of the power consumed by the processors (excluding the power consumed by the TX/RX chain) could be a reasonable trade-off between functionality and privacy. 9.4 Requiring superuser privileges A simple yet effective prevention may be requiring superuser privileges (or being root) to access power supply data on the phone. Thus, developers and power-users can install diagnostic software or run a version of their application that collects power data on a rooted phone, whereas the release version of the software excludes this functionality. This would of course prevent the collection of anonymous performance statistics from the install-base, but as we have shown, such data can indicate much more than performance. 9.5 Power consumption as a coarse location indicator Same as the cell identifier is defined as a coarse location indicator, and requires appropriate permissions to be accessed, power consumption data can also be defined as one. The user will then be aware, when installing applications that access voltage and current data, of the application’s potential capabilities, and the risk potentially posed to her privacy. This defense may actually be the most consistent with the current security policies of smartphone operating systems like Android and iOS, and their current permission schemes. 10 Related work Power analysis is known to be a powerful side-channel. 
The most well-known example is the use of high sample rate (~20 MHz) power traces from externally connected power monitors to recover private encryption keys from a cryptographic system [15]. Prior work has also established the relationship between signal strength and power consumption in smartphones [6,29]. Further, Bartendr [29] demonstrated that paths of signal strength measurements are stable across several drives. PowerSpy combines these insights on power analysis and improving smartphone energy efficiency to reveal a new privacy attack. Specifically, we demonstrate that an attacker can determine a user’s location simply by monitoring the cellular modem’s changes in power consumption with the smartphone’s alarmingly unprotected ~100 Hz internal power monitor. 10.1 Many sensors can leak location Prior work has demonstrated that data from cellular modems can be used to localize a mobile device (an extensive overview appears in Gentile et al. [10]). Similar to PowerSpy, these works fingerprint the area of interest with pre-recorded radio maps. Others use signal strength to calculate distances to base stations at known locations. All of these methods [16, 24, 25, 30] require signal strength measurements and base station ID or WiFi network name (SSID), which is now protected on Android and iOS. Our work does not rely on the signal strength, cell ID, or SSID. PowerSpy only requires access to power measurements, which are currently unprotected on Android. PowerSpy builds on a large body of work that has shown how a variety of unprotected sensors can leak location information. Zhou et al. [34] reveal that audio on/off status is a side-channel for location tracking without permissions. In particular, they extract a sequence of intervals where audio is on and off while driving instructions are being played by Google’s navigation application. By comparing these intervals with reference sequences, the authors were able to identify routes taken by the user. SurroundSense [3] demonstrates that ambient sound and light can be used for mobile phone localization. They focus on legitimate use-cases, but the same methods could be leveraged for breaching privacy. AccelPrint [9] shows that smartphones can be fingerprinted by tracking imperfections in their accelerometer measurements. Fingerprinting of mobile devices by the characteristics of their loudspeakers is proposed in [7, 8]. Further, Bojinov et al. [4] showed that various sensors in smartphones can be used to identify a mobile device by its unique hardware characteristics. Lukas et al. [20] proposed a method for digital camera fingerprinting by noise patterns present in the images. [19] enhances the method enabling identification of not only the model but also particular cameras. Sensors can also reveal a user’s input such as speech and touch gestures. The Gyrophone study [21] showed that gyroscopes on smartphones can be used for eavesdropping on a conversation in the vicinity of the phone and identifying the speakers. Several works [2, 5, 32] have shown that the accelerometer and gyroscope can leak information about touch and swipe inputs to a foreground application. 11 Conclusion PowerSpy shows that applications with access to a smartphone’s power monitor can gain information about the location of a mobile device – without accessing the GPS or any other coarse location indicators. Our approach enables known route identification, real-time tracking, and identification of a new route by only analyzing the phone’s power consumption. 
We evaluated PowerSpy on real-world data collected from popular smartphones that have a significant mobile market share, and demonstrated its effectiveness. We believe that with more data, our approach can be made more accurate and reveal more information about the phone's location. Our work is an example of the unintended consequences that result from giving 3rd party applications access to sensors. It suggests that even seemingly benign sensors need to be protected by permissions, or at the very least, that more security modeling needs to be done before giving 3rd party applications access to sensors.

Acknowledgments

We would like to thank Gil Shotan and Yoav Shechtman for helping to collect the data used for evaluation, Prof. Mykel J. Kochenderfer from Stanford University for providing advice regarding location tracking techniques, Roy Frostig for providing advice regarding classification and inference on graphs, and finally Katharina Roesler for proofreading the paper. This work was supported by NSF and the DARPA SAFER program. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSF or DARPA.

References

A Formal model of new route inference

In this section we formalize the problem of new route inference (Section 6) as a hidden Markov model (HMM) [27]. Let $I$ denote the set of intersections in an area in which we wish to track a mobile device. A road segment is given by an ordered pair of intersections $(x, y)$, defined to be a continuous road between intersection $x$ and intersection $y$. We denote the set of road segments as $R$. We assume that once a device starts to traverse a road segment it does not change the direction of its movement until it reaches the end of the segment. We define a state for each road segment. We say that the tracked device is in state \( s_{xy} \) if the device is currently traversing the road segment \((x,y)\), where \(x,y \in I\). We denote the route of the tracked device as a pair \((Q,T)\), where
\[ Q = \{ q_1 = s_{x_1y_1}, q_2 = s_{x_2y_2}, \ldots \}, \qquad T = \{ t_1, t_2, \ldots \}. \]
For such a route, the device traversed segment \( q_i = s_{x_iy_i} \) during the time interval \([t_{i-1}, t_i]\) \((t_0 = 0,\ t_{i-1} < t_i\ \forall i > 0)\). Let \( A = \{ a_{xyz} \mid \forall x,y,z \in I \} \) be the state transition probability distribution, where
\[ a_{xyz} = p\{q_{i+1} = s_{yz} \mid q_i = s_{xy}\}. \]
Note that \( a_{xyz} = 0 \) if there is no road between intersections \(x\) and \(y\) or no road between intersections \(y\) and \(z\). A traversal of the device over a road segment yields a power consumption profile of length equal to the duration of that movement. We denote a power consumption profile as an observation \( o \). Let \( B \) be the probability distribution of yielding a given power profile while the device traversed a given segment. Due to the hysteresis of hand-offs between cellular base stations, this probability depends on the previous segment the device traversed. Finally, let \( \Pi = \{ \pi_{xy} \} \) be the initial state distribution, where \( \pi_{xy} \) is the probability that the device initially traversed segment \((x,y)\). If there is no road segment between intersections \(x\) and \(y\), then \( \pi_{xy} = 0 \). In our model we treat this initial state as the state of the device before the start of the observed power profile. We need to take this state into account due to the hysteresis effect.
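As a minimal illustration of the model structure defined above, the following Python sketch builds the state space and the distributions \( A \) and \( \Pi \) for a toy road graph; the graph, the uniform probabilities and all identifiers are illustrative assumptions rather than quantities estimated by our system, and the emission model \( B \) is only indicated by a comment.

```python
from collections import defaultdict

# A toy road graph: adjacency over intersections (purely illustrative).
roads = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c"],
}

# States of the HMM: directed road segments s_xy.
states = [(x, y) for x, neighbours in roads.items() for y in neighbours]

# Transition distribution A: from segment (x, y) the device can only continue
# onto a segment (y, z); a uniform choice among those segments is assumed here
# purely for illustration.
A = defaultdict(float)
for (x, y) in states:
    successors = [(y, z) for z in roads[y]]
    for s in successors:
        A[((x, y), s)] = 1.0 / len(successors)

# Initial distribution Pi: uniform over all feasible segments (again an
# illustrative assumption).
Pi = {s: 1.0 / len(states) for s in states}

# The emission model B -- the probability of a given power profile while a
# segment is traversed, conditioned on the previous segment to capture the
# hand-off hysteresis -- would be estimated from pre-recorded reference
# profiles and is omitted from this sketch.
```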
Note that an HMM is characterized by \( A, B, \) and \( \Pi \). The route inference problem is defined as follows. Given an observation of a power profile \( O \) over the time interval \([0,t_{\text{max}}]\), and given a model \( A, B, \) and \( \Pi \), we need to find a route \((Q,T)\) such that \( p\{(Q,T)\mid O\} \) is maximized. In the following we denote the part of \( O \) which begins at time \( t' \) and ends at time \( t'' \) by \( O[t',t''] \). Note that \( O = O[0,t_{\text{max}}] \). We consider the time interval \([0,t_{\text{max}}]\) as having a discrete resolution of \( \tau \).

B Choosing the best inferred route

Upon its completion, the particle filter described in Section 6.1 outputs a set of \( N \) routes of various lengths. We denote this set by \( P_{\text{final}} \). This set represents an estimate of the distribution of routes given the power profile of the tracked device. The simplest approach to selecting the best estimate is to choose the route that appears most frequently in \( P_{\text{final}} \), as it has the highest probability of occurring. Nonetheless, since a route is composed of multiple segments chosen at separate steps, at each step the weight of a route is determined solely based on the last segment added to the route. Therefore, in \( P_{\text{final}} \) there is a bias in favor of routes ending with segments that were given higher weights, while the weights of the initial segments have a diminishing effect on the route distribution with every new iteration. To counter this bias, we choose another estimate using a procedure we call iterative majority vote. This procedure ranks the routes based on the prevalence of their prefixes. At each iteration \( i \) the procedure calculates Prefix[i], a list of prefixes of length \( i \) ranked by their prevalence out of all the routes that have a prefix in Prefix[i-1]. Prefix[i][n] denotes the prefix of rank \( n \). The operation \( p\,\|\,j \) – where \( p \) is a route and \( j \) is an intersection – denotes the appending of \( j \) to \( p \). At each iteration \( i \), Algorithm 3 is executed. In the following we denote by RoutePrefixed(R, p) the subset of routes out of the set \( R \) having \( p \) as their prefix.

Algorithm 3 Iterative majority vote

\( I' \leftarrow I \)
while not all prefixes found do
  Prf \( \leftarrow \) next prefix from Prefix[i]
  Find \( j \in I' \) that maximizes \( |\text{RoutePrefixed}(P_{\text{final}}, \text{Prf}\,\|\,j)| \)
  if no such \( j \) is found then
    \( I' \leftarrow I \)
    continue loop
  end if
  Prefix[i+1] \( \leftarrow \) Prefix[i+1] \( \cup \) \{Prf\,\|\,j\}
  \( I' \leftarrow I' \setminus \{j\} \)
end while

At each iteration \( i \) we rank the prefixes based on the ranks of the previous iteration. Namely, prefixes which are extensions of a shorter prefix having a higher rank in the previous iteration will always be ranked higher than prefixes which are extensions of a lower-ranked prefix. At each iteration we first find the most common prefixes of length \( i + 1 \) which start with the most common prefix of length \( i \) found in the previous iteration, and rank them according to their prevalence. Then we look for common prefixes of length \( i + 1 \) that start with the second most common prefix of length \( i \) found in the previous iteration, and so on until all prefixes of length \( i + 1 \) are found. The intuition is as follows. The procedure prefers routes traversing segments that are commonly traversed by other routes.
Those segments received a high score when they were chosen. Since we cannot pick the most common segments independently at each step (a continuous route would probably not emerge), we iteratively pick the most common segment out of the routes that are prefixed with the segments already chosen.
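The following simplified Python sketch conveys the intuition behind the iterative majority vote on a toy set of particle-filter routes; unlike Algorithm 3, it follows only the single most prevalent prefix at each step, and the route data are illustrative.

```python
from collections import Counter

def routes_prefixed(routes, prefix):
    """Subset of routes having `prefix` as their prefix."""
    return [r for r in routes if tuple(r[:len(prefix)]) == tuple(prefix)]

def iterative_majority_vote(p_final, max_len):
    """Build a route estimate by repeatedly extending the current prefix with
    the most common next intersection among the routes sharing that prefix
    (a simplified, single-prefix variant of the procedure described above)."""
    prefix = []
    for _ in range(max_len):
        candidates = routes_prefixed(p_final, prefix)
        extensions = [r[len(prefix)] for r in candidates if len(r) > len(prefix)]
        if not extensions:
            break
        nxt, _count = Counter(extensions).most_common(1)[0]
        prefix.append(nxt)
    return prefix

# Toy particle-filter output: each route is a list of intersection IDs.
p_final = [
    ["a", "b", "d"],
    ["a", "b", "d"],
    ["a", "c", "d"],
    ["a", "b", "e"],
]
print(iterative_majority_vote(p_final, max_len=3))  # ['a', 'b', 'd']
```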
A model for availability growth with application to new generation offshore wind farms

Athena Zitrou *, Tim Bedford, Lesley Walls
University of Strathclyde, Department of Management Science, Graham Hills Building, 40 George Street, Glasgow G1 1QE, Scotland

Article history: Received 31 January 2014; Received in revised form 8 December 2015; Accepted 10 December 2015; Available online 12 February 2016.

Keywords: Availability growth; Systemic risk; Offshore wind farm; Condition monitoring

Abstract

A model for availability growth is developed to capture the effect of systemic risk prior to construction of a complex system. The model has been motivated by new generation offshore wind farms where investment decisions need to be taken before test and operational data are available. We develop a generic model to capture the systemic risks arising from innovation in evolutionary system designs. By modelling the impact of major and minor interventions to mitigate weaknesses and to improve the failure and restoration processes of subassemblies, we are able to measure the growth in availability performance of the system. We describe the choices made in modelling our particular industrial setting using an example for a typical UK Round III offshore wind farm. We obtain point estimates of the expected availability having populated the simulated model using appropriate judgemental and empirical data. We show the relative impact of modelling systemic risk on system availability performance in comparison with estimates obtained from typical system availability modelling assumptions used in offshore wind applications. While modelling growth in availability is necessary for meaningful decision support in developing complex systems such as offshore wind farms, we also discuss the relative value of explicitly articulating epistemic uncertainties.

© 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

1. Introduction

Our model is motivated by the need to support risk management decisions in offshore wind, where there is considerable innovation as the industry expands [20]. Empirical evidence indicates that availability performance of new farms has been below expectations during early operational life, with operating targets only being achieved after growing availability through the implementation of effective fixes over, typically, the first four years of operation [2].
However, responsive remedial action to improve availability not only impacts on income generation, but it also implies that extra capital expenditure is being incurred during periods when only operational expenditure had been planned. This contributes to the problem of lack of equity in the UK offshore wind energy market [40,11] since projects are in competition for capital with other investment opportunities, and hence have to be competitive in terms of risk and return. In a bid to increase capacity and reduce Operation & Maintenance (O&M) costs, the Cost Reduction Task Force [20] recommends the use of innovative designs of high-yield, high-reliability turbines. However, new generation turbines are technically immature systems that are to operate further from the UK shore and in deeper waters than earlier versions. Hence, these new systems are subject to high physical stresses and are potentially vulnerable to systemic weaknesses in design, operation, installation and manufacturing. Therefore, paradoxically, the bid to decrease cost and accelerate offshore wind deployment actually increases some investor risks. Of course, as manufacturers and operators gain a better understanding of operation and the environment, technical issues can be resolved through a series of interventions such as design upgrades, modified operational processes or changes in maintenance activities. However, commercial organisations, private investors and governments are required to make investment decisions prior to construction, before operating experience is accumulated. Our model is designed to be used in this setting. By modelling the availability growth process, we are positioned to inform the modelling of future income streams and capital and maintenance costs.

The value of growing reliability during system design and development is widely acknowledged [51]. Nevertheless, there has been no reported use of reliability growth analysis in an offshore context. Instead, modelling effort has focussed upon estimating availability performance under operational and maintenance strategies assuming that the wind farm is operating in steady state [43,39,41,12,25,16]. Only [3] and [16] consider departures from steady state by considering ageing; that is, late rather than early life. It is not possible to investigate growth using the existing availability models through sensitivity analysis since the models' structures do not allow for this. Hence, the existing models used in offshore wind do not address the issue of growing performance, which is an important modelling challenge if effective and efficient risk management decisions are to be made. Here we develop a model for availability growth to address the particular challenges that the offshore wind sector faces, though our model has a general formulation, meaning it should be applicable to other systems for which availability, rather than just reliability, is a key performance measure. We formulate a model to represent systematic failures triggered by weaknesses in, for example, design, manufacture and/or installation. The model, when appropriately combined with stochastic processes representing random failure and restoration events, provides a measure of availability. We assume that major interventions to address systemic weaknesses are made at discrete time points associated with what we term an innovation.
By the term innovation we include, for example, re-design of system parts, major changes to installation processes, new vessel options for routine maintenance. In an offshore wind context, such innovations are likely to be scheduled to allow for the logistic delays in accessing the farm. Between innovations we allow for learning effects, since it is not unreasonable to expect maintainers and operators to continuously adapt their procedures and processes to improve the execution of routine tasks. The creation of an availability growth model allows us to explore the impact of different scenarios arising from systemic weaknesses in equipment, and to examine the cost-effectiveness of mitigation strategies. In offshore wind, as for many other system development processes, the design is evolutionary implying that the current generation is related to the previous one [37,53]. For example, technology is largely based on modified onshore and early offshore wind turbines. In some areas, such as cable installation, there has been significant learning through method adaptation [1]. Offshore wind foundations are designed on the basis of principles applied in oil and gas, and installation of these structures is performed using mainly oil and gas vessels and procedures [47]. Nevertheless, innovation is necessary for new generation farms – such as the UK Round III sites – to deal with increased water depth and distance from shore [1]. Innovation is the driver of change between generations of product or process design, but is also of itself a major risk to future performance. Typically a new system evolutionary design needs to meet an availability performance target at least equal to that achieved by the previous generation. On the basis of operational experience from earlier generation systems and analogous systems, it is possible for suitably qualified experts to make assessments of potential failure modes, make useful assessments of their impact (e.g. in terms of shortening lifetimes), and advise on potential mitigation strategies. By using the existing methodologies for expert judgement processes for this type of problem [49,23,7], we have structured our model through discussion with domain experts and practitioners. In developing our model we draw upon the existing body of knowledge for reliability growth modelling and the limited consideration of availability growth. For example [9,10,26,42,17] are amongst many authors who propose models for reliability growth that is typically positioned during product development, where the goal is to improve reliability by identifying and removing weaknesses. The effect of modifications in such models is represented as a learning curve [15,10] but models also exist that allow for the representation of a series of discrete modifications through, for example, structural changes in the failure intensity [44,17]. Beyond the classical reliability growth methods for both hardware and software systems in development, there are also models proposed for supporting reliability growth during design [51] and through life [50]. These models tend to be framed from the perspective of the owner of the design blueprint. To model availability – rather than reliability – growth, the premise of modelling needs to be extended to represent interventions that intend not only to remove the sources of potential failures, but also to reduce the restoration time. There is limited mention of such models in the literature. 
For example, the models found in [48,29] assess availability growth for software rather than hardware, but this is achieved exclusively through a fault removal process – implying that there are no interventions associated with the restoration process. Hence, these papers essentially apply reliability growth models to situations where restoration durations are assumed constant. Our context requires us to draw on existing thinking about reliability growth to develop a model for availability growth that can be used not only by those with design responsibility, but also by those involved in financing and operating the system. We seek to model availability during the early operational life of a system because this is the period during which many teething problems surface in use and because, given the limited duration of Original Equipment Manufacturer (OEM) warranties, unavailability in early life has an impact on both the OEM and the system operator. Our modelling approach is distinctive because we provide a single framework which integrates the effect of interventions intended to improve reliability with the effect of interventions intended to reduce restoration times, in order to estimate availability during specified time horizons. We explicitly include in the model the effect of condition monitoring, as this allows us to predict the likely impact of investing in this type of maintenance strategy on system availability. The model output is an indicator of availability-informed capability that captures the effect of partially operating turbines on farm energy generation. Reduced output might occur, for example, when operators de-rate degraded turbines to accommodate logistic delays in gaining access for maintenance.

In this paper we describe the formulation of the growth model and illustrate its application to an offshore wind farm example. We believe this paper makes both a methodological and a contextual contribution. Methodologically we introduce a new model for system availability growth that extends current knowledge of reliability growth modelling. Contextually we show the effects of systemic risk on offshore wind farm availability, thereby addressing a shortcoming of the existing availability models proposed for operational and maintenance decision support in this industry. As presented in this paper, our model only considers aleatory uncertainty; that is, natural variability between different systems, for example the stochastic time to failure of each wind turbine. When considering the behaviour of future systems, which is when this model will be particularly useful for decision support, there are clearly also state-of-knowledge (i.e. epistemic) uncertainties. For example, in the application example given here, the design modifications are modelled as perfectly removing anticipated weaknesses. But assuming perfect fixes can be naive, and by extending the model to include a representation of state-of-knowledge uncertainty, we can better model the efficacy of innovations on performance. The modelling required to represent state-of-knowledge uncertainty in this setting is quite substantial and goes beyond the objectives of the present paper. In [54] we explain how the availability growth model can include representation of state-of-knowledge uncertainty, as well as aleatory uncertainty, and examine the implications of uncertainty assessment for more effective systemic risk reduction to better support dialogue between the financial and engineering stakeholders in the offshore wind sector.
This paper is structured as follows: Section 2 introduces our general rationale for availability growth modelling, while Section 3 presents the mathematical foundations of our model. Section 4 provides an example that explains how we might scope, populate and use the model for a real context based on a typical UK Round III wind farm and examines the impact of appropriately modelling growth. Section 5 concludes by reviewing the limitations as well as benefits of our approach and identifies areas of further work, including a discussion of the relative value of modelling state-of-knowledge uncertainties. 2. Modelling rationale Technical availability is the key modelling criterion of the system (i.e. the offshore wind farm). The system is assumed to be operating fully or partially (i.e. uptime performance) or not (i.e. downtime performance). System performance depends on the performance of constituent subassemblies. Uptime performance reaches target levels when the actual reliability of subassemblies is as planned. Likewise, target downtime performance is achieved when there are no prolonged downtimes of subassemblies due to, for example, logistics or weather-induced delays. Fig. 1 presents a visual representation of our modelling rationale showing the factors that may increase the chance of below-target uptime and/or downtime performance and subsequently impact on system availability. The factors have been identified through conversations with relevant engineers and categorised according to their effect on failure or restoration processes. 2.1. Factors influencing uptime Inadequacies in the design, manufacturing defects or operational errors are factors that can lead to premature wear-out, increased vulnerability to external shocks, or both. Collectively we call these factors Triggers since they are sources of systemic risk that can reduce subassembly reliability. We define three classes of trigger as follows. - **Design inadequacies** are issues with system design caused either by an inappropriate blueprint for the specified operating conditions, or by design environmental parameters that poorly reflect actual operating conditions. Consider offshore wind transformers which can be placed in the bedplate exposing them to vibration. Levels of vibration are not fully understood because new generation turbines are larger and operate further from shore. This introduces risk of design inadequacy. We anticipate that upscaling offshore wind subassemblies can introduce more general issues with the design. For example, it has been observed that larger gearboxes tend to be less reliable than smaller ones [46]. - **Manufacturing faults** occur when a shortcoming in the production process control and quality management of the manufacturer allows for defects to remain and be realised in operation. For example, offshore wind turbine blades are prone to manufacturer faults as they require a particularly labour-intensive manufacturing process, increasing the potential for human error during manufacturing. - **Operational errors** relate to human error during repair or installation. For offshore wind farms in particular, installation error can be an important driver of early life reliability. Activities such as the connection of transmission cables, for example, are prone to this type of issue: a combination of tight deadlines, schedule pressures and task complexity introduce the potential for faults and errors during installation that can lead to decreased cable reliability. 2.2. 
Factors influencing downtime In general, restoration depends on factors such as difficulties in acquiring resources and gaining access to site. For example, harsh wind and wave conditions can render an offshore wind farm site inaccessible for extended periods of time delaying maintenance activities and extending restoration times. Offshore wind sites can also experience considerable logistic delays. Operations like gearbox replacement require expensive specialised jack-up vessels which are typically hired. So, repair is associated with procedures... such as booking and transferring the vessel between sites, which can result in additional delay. We model such weather-induced delay as a random variable, which we call waiting time. Waiting time represents the period between when maintenance crew and resources are ready and when the trip to the site commences. The uncertainty on waiting time is determined conditionally on the failed subassembly, since waiting times are longer in the winter months – at least in the UK. We estimate the waiting time distributions using historical wind and wave data using an algorithm developed in [13]. 2.3. Interventions The model aims to capture the integrated effect of all factors affecting subassemblies on system availability, and to predict the evolution of availability as technical, operational, and organisational interventions are implemented. We classify Interventions in terms of their effect on availability. As in Ansell et al. [5], we separate interventions into innovations, which have a major effect on performance, and minor adjustments, which result in less radical improvement. We define Innovations to be radical actions that change the basic underlying properties of the system. For example, redesigns to address design issues typify an innovation that affects subassembly reliability. We allow for the chance of achieving target reliability to differ between a new generation design and an upgrade. Innovations also relate to asset-management decisions where, for example, employing different operational strategies, such as fix on failure or charter contracts, might result in different logistic delays. Equally purchasing a new vessel might affect weather waiting times. We define Minor Adaptations to be interventions that impact on the system in a more gradual manner relative to the effect of innovations. Typically, Minor Adaptations are related to learning and the accumulation of experience with the system and its operation. For example, as time progresses, maintenance crews can become more effective conducting low-level maintenance activities such as inspections and calibrations, and so may be less likely to make an error during large-scale maintenance operations such as replacements. We also identify a third class of intervention that requires separate consideration in our model. We name this third class Maintenance Strategy. It represents the influence of maintenance on the condition of subassemblies and, thus, on the pattern of failures. Maintenance Strategy encompasses both the type of intervention (i.e. preventive maintenance, corrective maintenance or condition monitoring) and the effect of intervention on the system condition (i.e. perfect or imperfect repair). For example, maintenance actions such as carbon brush replacement have a minor effect on turbine condition and are modelled as imperfect repair, implying the subassembly state after maintenance is either as it was just before failure, or somewhere in between this and as good as new. 
Major maintenance activities, such as hub replacements, restore the subassembly to its original condition, and are modelled as perfect repairs. Our model allows for the modelling of different levels of imperfect maintenance; however, we note that it is not primarily designed to optimise maintenance logistics, as this would go beyond the level of discrimination of the model.

3. Availability growth model mathematical formulation

3.1. A parametric model for the hazard rate of a subassembly

To represent subassembly failure behaviour we classify underlying failure mechanisms broadly into shocks and wear-out. Shocks are external single stress events whereas wear-out relates to accumulated damage. We assume that subassemblies initially go through a wear-out-free period where shocks dominate, which ends when wear-out begins. Subassemblies are not expected to age prematurely, and target reliability profiles assume that wear-out occurs after early life. We refer to the initial shock-dominated period as Stage 1, and to the succeeding wear-out and shock period as Stage 2. Let \( S_j \) be the time the subassembly leaves Stage \( j \), for now considered fixed. The lifetime of the system is broken down into the distinct intervals \( [S_0, S_1) \) and \( [S_1, S_2) \), where \( S_0 = 0 \) and \( S_2 = \infty \). Let \( U(t) \) denote the system stage at time \( t \), viz:
\[ U(t) = j \iff S_{j-1} \leq t < S_j, \quad \text{for } j = 1, 2. \]
First, we define the failure behaviour of the subassembly distinctly over the different lifetime stages. For \( j = 1, 2 \), let \( T_j \) be the elapsed time from \( S_{j-1} \), the time the subassembly leaves Stage \( j - 1 \), until its first failure from a mechanism relevant to Stage \( j \). We assume \( T_j \) is a continuous random variable with cumulative distribution function \( F_j \). Given that \( U(t) = j \), the system has (conditional) hazard rate function, or Force of Mortality (FOM), given by
\[ m_j(t_j) = \lim_{\Delta t_j \to 0} \frac{P(t_j \leq T_j < t_j + \Delta t_j \mid T_j \geq t_j)}{\Delta t_j} = \frac{f_j(t_j)}{1 - F_j(t_j)}, \quad \text{where } t_j = t - S_{j-1}. \quad (2) \]
Furthermore, let random variable \( W_1 \) with distribution function \( G_1 \) represent the time when wear-out starts having an effect. A subassembly enters Stage 2 only if the onset of wear-out precedes a shock failure. Fig. 2 presents a visual representation of this reasoning. Let random variable \( T \) with distribution function \( F \) represent the lifetime of the system, measured from the start of operation until the first failure. Assuming shocks and the onset of wear-out act as independent competing risks, we can write
\[ T = \min\{T_1, W_1\} + T_2\, I_{\{T_1 > W_1\}} \quad (3) \]
where \( I_A \) is the indicator variable of the event \( A \). Now, the (unconditional) hazard rate of the subassembly, given by
\[ h(t) = \lim_{\Delta t \to 0} \frac{P(t \leq T < t + \Delta t \mid T \geq t)}{\Delta t} = \frac{f(t)}{1 - F(t)}, \]
can be defined conditionally as
\[ h(t) = h(t \mid \mathcal{H}_t) = m_j(t - S_{j-1}), \]
where \( \mathcal{H}_t \) is the relevant system data observed until just before time \( t \), such as the lifetime stage, as well as wider operation and maintenance information. Later this will be specified in more detail. Shock failures, which dominate Stage 1 of the subassembly lifetime, occur at random and are represented by a constant hazard rate. Using an exponential distribution for \( F_1 \) implies that \( m_1(t_1) = \rho \) is constant. Wear-out mechanisms appear when the subassembly enters Stage 2, in addition to shock failures, implying that \( m_2(t_2) = \rho + h(t_2) \), where \( h(t_2) \) is the wear-out hazard and can be represented by a monotonically increasing function of time, or of any other proxy of damage accumulation. The choice of an increasing hazard rate function to represent wear-out depends on the level of knowledge of the underlying degradation mechanisms and the available data. Our model structure allows degradation to be modelled explicitly or implicitly, depending on the application. When degradation data are available allowing internal failure mechanisms to be traced, then a degradation model can be used; see, for example, [32]. If sufficient degradation data to allow model specification are not available, we represent wear-out failure using parametric models for the lifetime distribution. For illustration in this paper we assume a Weibull model to represent wear-out failures, implying that \( m_2(t_2) = \rho + \eta \beta\, t_2^{\beta - 1} \).
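As a minimal numerical sketch of this two-stage hazard, the Python snippet below evaluates \( h(t) = \rho \) before the onset of wear-out and \( h(t) = \rho + \eta\beta (t - s_1)^{\beta-1} \) afterwards, for an on-target and a below-target (trigger-present) parameter set; all parameter values are illustrative and are not taken from the expert elicitation.

```python
def hazard(t, rho, s1, eta, beta):
    """Subassembly hazard at time t (years): a constant shock rate rho in
    Stage 1, plus a Weibull wear-out term once the onset of wear-out s1
    has been passed (Stage 2)."""
    wear = eta * beta * (t - s1) ** (beta - 1) if t > s1 else 0.0
    return rho + wear

times = [0.5 * k for k in range(0, 21)]          # 0 to 10 years

# On-target profile: few shocks, late and slow wear-out (illustrative values).
h_target = [hazard(t, rho=0.05, s1=8.0, eta=0.02, beta=2.0) for t in times]

# Below-target profile with a trigger present: more frequent shocks and
# premature, more severe wear-out (illustrative values).
h_trigger = [hazard(t, rho=0.15, s1=3.0, eta=0.08, beta=2.5) for t in times]
```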
Our parametric model bears similarities with other approaches. For example, we break down the time to signal into smaller segments (i.e. shock- and wear-out-dominated periods) to model the system lifetime in more detail than the Delay Time model [8,52], and we relax the assumption made by [6] that the times at which the system enters a lifetime stage are always observable by the operator. Fig. 3 illustrates the hazard rate for a subassembly entering Stage 2 at time \( S_1 = s_1 \). A subassembly achieving at least target reliability will have a relatively lower rate of shock failures \( \rho \), an onset of wear-out \( s_1 \) outside the early life window, and a relatively slower rate of increase in the wear-out hazard rate, as shown in Fig. 3(a). If the subassembly performs below target then it is subject to more frequent random failures (\( \rho' > \rho \)) throughout the whole early life and to premature, more severe wear-out (\( s'_1 < s_1 \)); see Fig. 3(b).

3.2. Condition monitoring of subassemblies subject to wear-out

Condition Monitoring (CM) can indicate incipient failure by tracking measurable wear-out indicators associated with the underlying degradation process and releasing a signal prior to failure; see Fig. 4. For example, wear-out of offshore wind turbine gears and bearings can increase the generation rate of particles above a certain size in gearbox oil [24]. Upon observation of the CM signal, operators can respond by, for example, de-rating a damaged turbine to extend its residual life and allow time to plan maintenance actions. We include CM explicitly within the availability growth model because it allows us to predict the likely impact of investing in CM on farm availability. To capture the effect of CM on a subassembly's failure behaviour, we extend the hazard model presented in Section 3.1 to include the wear-out indicator. We assume the CM indicator starts evolving when the subassembly enters Stage 2 at time \( S_1 \) (i.e. it begins to wear). Given that the signal threshold is passed after time \( W_2 \), counted from \( S_1 \), time \( S_2 = S_1 + W_2 \) is when the subassembly enters Stage 3. \( T_3 \) denotes the subassembly's lifetime given that a CM signal is observed. Therefore, the CM signal further partitions the subassembly lifetime, as shown in Fig. 5, into
\[ 0 = S_0 < S_1 < S_2 < S_3 = \infty. \]
Since the degradation and indicator processes are associated, the time to the CM signal, \( W_2 \), and the conditional lifetime of the subassembly in Stage 2, \( T_2 \), should both depend on the same underlying degradation process. Let \( W_2 \) have distribution \( G_2 \). We can write \( F_2(t_2) = F_2(t_2 \mid \theta) \) and \( G_2(w_2) = G_2(w_2 \mid \theta) \), where \( \theta \) is the vector of the degradation model parameters. Given \( \theta \), \( T_2 \) and \( W_2 \) are conditionally independent random variables, so within an independent competing risks framework the subassembly lifetime in (3) can be written as
\[ T = \min(T_1, W_1) + \min(T_2, W_2)\, I_{\{T_1 > W_1\}} + T_3\, I_{\{T_2 > W_2\}} \quad (7) \]
where \( I_A \) is the indicator variable of the event \( A \). Note that if, upon observation of the CM signal at time \( S_2 \), an operator chooses not to act (e.g. not to de-rate the turbine comprising the degrading subassembly), then the random variable \( T_3 \) has the same distribution as \( T_2 \). To apply the availability model, the anticipated effectiveness of CM (i.e. the more correlated \( F_2(\cdot) \) and \( G_2(\cdot) \), the more effective the CM) and the operating practice in response to the CM signal should be specified. For example, a particular CM system may give a signal far in advance of failure, upon which operating performance is reduced to partial operation through some planned intervention.
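A minimal Monte Carlo sketch of the lifetime construction in (7) is given below; it assumes an exponential shock time, a normally distributed onset of wear-out and Weibull wear-out and CM-signal times (the distribution families used later in Table 2), with illustrative parameter values, and it assumes the operator does not de-rate after a signal, so that \( T_3 \) is drawn from the same distribution as \( T_2 \).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lifetime(rho, w1_mean, w1_sd, eta, beta, cm_scale, cm_shape):
    """Draw one subassembly lifetime following Eq. (7): Stage 1 ends at the
    earlier of a shock failure (T1) or the onset of wear-out (W1); if wear-out
    starts first, a competing-risks period follows until either a Stage-2
    failure (T2) or a CM signal (W2); after a signal the residual life T3 is
    drawn from the same distribution as T2 (no de-rating). All parameter
    values used below are illustrative assumptions."""
    t1 = rng.exponential(1.0 / rho)                    # shock failure time
    w1 = max(0.0, rng.normal(w1_mean, w1_sd))          # onset of wear-out
    if t1 <= w1:
        return t1                                      # shock failure in Stage 1
    weibull_scale = eta ** (-1.0 / beta)               # survival exp(-eta * t^beta)
    t2 = rng.weibull(beta) * weibull_scale             # wear-out failure, from S1
    w2 = rng.weibull(cm_shape) * cm_scale              # time to CM signal, from S1
    if t2 <= w2:
        return w1 + t2                                 # failure before the signal
    t3 = rng.weibull(beta) * weibull_scale             # residual life after signal
    return w1 + w2 + t3

lifetimes = [sample_lifetime(rho=0.1, w1_mean=5.0, w1_sd=1.0, eta=0.05,
                             beta=2.0, cm_scale=1.0, cm_shape=1.5)
             for _ in range(10_000)]
print(sum(lifetimes) / len(lifetimes))  # mean lifetime under these assumptions
```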
3.3. Intensity of events

The hazard rate given in (2) describes the subassembly lifetime in terms of its time to first failure. Since offshore wind subassemblies are repairable systems, we use a marked point process \(\{(T_n, J_n)\}_{n \geq 1}\) to describe their alternating behaviour between failure and repair, where \(J_n = 1\) when a failure occurs at \(T_n\) and \(J_n = 0\) otherwise \((n = 1, 2, \ldots)\). Let \(N(t)\) and \(M(t)\) be the number of failures and restorations in \((0, t]\) respectively, where \(t\) is calendar time. The conditional intensity of the marked point process is defined as
\[ \lambda(t \mid \mathcal{H}_t) = \begin{cases} \dot{\lambda}(t \mid \mathcal{H}_t) & \text{if the subassembly operates just before time } t, \\ \mu(t \mid \mathcal{H}_t) & \text{if the subassembly does not operate just before time } t, \end{cases} \]
where
\[ \dot{\lambda}(t \mid \mathcal{H}_t) = \lim_{\Delta t \to 0} \frac{\Pr(\text{failure in } [t, t + \Delta t) \mid \mathcal{H}_t)}{\Delta t} \quad (8) \]
and
\[ \mu(t \mid \mathcal{H}_t) = \lim_{\Delta t \to 0} \frac{\Pr(\text{restoration in } [t, t + \Delta t) \mid \mathcal{H}_t)}{\Delta t}. \quad (9) \]
\(\mathcal{H}_t\) is the history of the subassembly until, but not including, time \(t\). History represents the information about a subassembly's past life that needs to be captured to support model computations. For simplicity, from this point forward we use \(\lambda(t)\), \(\dot{\lambda}(t)\) and \(\mu(t)\) instead of \(\lambda(t \mid \mathcal{H}_t)\), \(\dot{\lambda}(t \mid \mathcal{H}_t)\) and \(\mu(t \mid \mathcal{H}_t)\) respectively. The intensity \(\lambda(t)\), or the Rate of Occurrence of Failures (ROCOF), is the outcome of the interaction of the inherent reliability characteristics of the subassembly, described by the hazard \(h(t)\), with the maintenance type (i.e. corrective or preventive) and effect (i.e. perfect or imperfect repair). The hazard defines the baseline condition of the subassembly, while the maintenance type and effect determine how this is controlled during operation. In our model, the effect of maintenance is captured via the concept of virtual age \(v(t)\) [27]. We have
\[ \lambda(t) = h(v(t)), \quad t > 0, \quad (10) \]
where \(v(t)\) is equal to the cumulative uptime, denoted by \(x(t)\):
\[ x(t) = \sum_{\{k \,:\, T_k \leq t,\ J_k = 1\}} (T_k - T_{k-1}). \quad (11) \]
For a new system \(v(t) = 0\). Therefore, perfect maintenance essentially resets the virtual age of the turbine to zero, whereas minimal repair sets its value to the one it had just before failure. Several models have been developed for cases where the repair effect lies between perfect and minimal, e.g. [14], and these might provide alternative formulations for the availability model. Whereas the virtual age \(v(t)\) describes the effect of maintenance actions and repair, the effect of routine maintenance, such as oil changes, cleaning and lubrication, is captured implicitly by assuming that the intensity in (10) already reflects such actions being undertaken properly. It is interesting to note that under the assumption of minimal repair, the hazard rate \(h(t)\) and the failure intensity \(\dot{\lambda}(t)\) have the same mathematical formulation, even though they represent different quantities. It also emerges that the history \(\mathcal{H}_t\) in (10) not only includes a subassembly's lifetime stage, but also its virtual age, as defined on the basis of information on the time and type of the last maintenance. The repair intensity \(\mu(t)\) can be expressed by a relationship similar to (10), where \(h(t)\) relates to the maintenance time distribution and \(v(t)\) accounts for the amount of continuous time the system has been under repair (i.e. cumulative downtime) as measured from the last failure event and excluding any logistic or weather delays.
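The sketch below illustrates the virtual-age bookkeeping of (10) and (11) for a subassembly under minimal repair, where the virtual age equals the cumulative uptime and the failure intensity is obtained by evaluating a hazard function at that age; the event log and the hazard used in the example are illustrative.

```python
def virtual_age(event_log, t):
    """Cumulative uptime x(t) of a subassembly under minimal repair, computed
    from an event log of (time, kind) pairs with kind in {"failure",
    "restored"}; the unit is assumed to be operating at time 0."""
    age, up_since, operating = 0.0, 0.0, True
    for time, kind in sorted(event_log):
        if time > t:
            break
        if kind == "failure" and operating:
            age += time - up_since        # close the current up-period
            operating = False
        elif kind == "restored" and not operating:
            up_since = time               # a new up-period starts
            operating = True
    if operating:
        age += t - up_since               # account for the open up-period
    return age

def failure_intensity(hazard, event_log, t):
    """lambda(t) = h(v(t)) with v(t) equal to the cumulative uptime."""
    return hazard(virtual_age(event_log, t))

# Illustrative event log (years) and a simple two-stage hazard as before.
log = [(1.0, "failure"), (1.2, "restored"), (4.0, "failure"), (4.5, "restored")]
two_stage = lambda v: 0.1 + (0.04 * 2.0 * (v - 3.0) if v > 3.0 else 0.0)
print(failure_intensity(two_stage, log, t=6.0))
```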
3.4. Effect of interventions

3.4.1. Innovations

Since innovations are planned large-scale operations intended to have a radical effect on system performance, we model them discretely at times \( R_1, R_2, \ldots, R_m \), which are assumed to be known a priori. Within the UK offshore wind energy context this is a reasonable assumption, since interventions such as design upscaling and subassembly re-fitting typically take place during the summer months, to take advantage of the relatively less severe weather conditions on site. Therefore, innovations partition the early life \((0, T)\) of the system as
\[ 0 = R_0 < R_1 < R_2 < \cdots < R_m = T. \]
Let \( \lambda^i(t) \) and \( h^i(t) \), \( t > R_i \), denote the conditional failure intensity and hazard function, respectively, of a system after the \( i \)-th innovation \((i = 0, 1, \ldots)\). Similarly to (10), the failure intensity and hazard function are associated through the equation
\[ \lambda^i(t) = h^i(v(t)), \quad t > R_i. \quad (12) \]
We assume the \((i+1)\)-th innovation has an effect on the basic behaviour of the subassembly, as expressed via \( h^i(t) \). Innovations intend to bring below-target reliability back to the target level and shift the subassembly profile from the one portrayed in Fig. 3(b) to the one in Fig. 3(a). This is achieved by making modelling choices to either reduce the shock failure rate \((\rho^{i+1} < \rho^{i})\), delay the onset of wear-out \((s_1^{i+1} > s_1^{i})\), or decrease the wear-out rate. For the latter case, the wear-out rate can be modified by modulating the scale parameter of the lifetime distribution. For example, [38] makes a similar assumption when capturing enhancements in a software reliability context. Ref. [17] assumes that innovations impact the scale parameter of the Non-Homogeneous Poisson Process model, whereas the shape parameter after intervention remains the same. In the context of accelerated life testing, [32,35] allow a change in stress level to impact the location of the log-lifetime (i.e. the scale parameter of the lifetime distribution), rather than the failure mechanism as expressed via the shape parameter of the lifetime distribution. However, these assertions are typically formed on the basis of statistical analysis, and the assumption that increased stress impacts only one parameter is not always appropriate – see [31,33] and references therein. To impose the orderings implied by the effect of innovations on shocks and wear-out, we intentionally use a simple version of the model and assume the following mathematical relationships:
\[ \rho^{i+1} = \phi_i\, \rho^{i}, \qquad s_1^{i+1} = (1 + \phi_i)\, s_1^{i} \qquad \text{and} \qquad \eta^{i+1} = \phi_i\, \eta^{i}, \]
where \( \eta^i \) is the scale parameter of the lifetime distribution of a system subjected to \( i \) innovations and \( 0 < \phi_i \leq 1 \) is a fix-effectiveness parameter. One can produce a more elaborate model by defining as many fix-effectiveness parameters as the number of parameters affected by the innovation, or simplify the model further by assuming that \( \phi_i = \phi \) for every \( i \). Regardless of the choice, determining the intensity in (10) requires information on the number of innovations undertaken to be included in the history \( \mathcal{H}_t \). As an example, consider a subassembly subject only to wear-out with hazard rate
\[ h^0(t) = \eta \beta t^{\beta-1}, \]
and suppose that the subassembly is subject only to corrective maintenance with minimal repair and negligible restoration times. These assumptions imply that \( v(t) = t \) and that, taking \( \phi_i = \phi \) for every \( i \),
\[ \lambda(t) = h^{i-1}(t) = \phi^{\,i-1} \eta \beta t^{\beta-1} \quad \text{for } R_{i-1} < t \leq R_i,\ 1 \leq i \leq m. \]
Fig. 6 shows that these assumptions result in a stepwise change in the subassembly intensity.
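A small sketch of the innovation update assumed above: each innovation multiplies the shock rate and the wear-out scale parameter by the fix-effectiveness parameter \( \phi \) and delays the onset of wear-out by the factor \( (1 + \phi) \); the numerical values are illustrative.

```python
def apply_innovation(params, phi):
    """One innovation with fix-effectiveness phi (0 < phi <= 1): the shock
    rate rho and the wear-out scale eta are multiplied by phi, and the
    onset of wear-out s1 is delayed by the factor (1 + phi)."""
    return {"rho": phi * params["rho"],
            "s1": (1.0 + phi) * params["s1"],
            "eta": phi * params["eta"]}

# Below-target (trigger-present) parameters and two successive innovations,
# e.g. design upgrades rolled out in consecutive summers (illustrative values).
params = {"rho": 0.15, "s1": 3.0, "eta": 0.08}
for _ in range(2):
    params = apply_innovation(params, phi=0.6)
print(params)  # shock rate and wear-out scale reduced; wear-out onset delayed
```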
3.4.2. Minor adaptations

Recall that the hazard in (10) expresses the failure behaviour of a subassembly subject to routine maintenance. As experience accumulates and operators learn, maintenance practices are adjusted and procedures are improved. These changes, referred to as minor adaptations, can have an almost continuous positive effect on system performance as expressed by the failure or restoration intensities. We model this effect in terms of a function \( q_i(t) \). The failure intensity of a subassembly after the \( i \)-th innovation and subject to minor adaptations is now given by
\[ \lambda(t) = h^{i}(t)\, q_i(t). \]
A number of formulations for \( q_i(t) \) can be used to represent this 'learning effect' due to minor adaptations. Here, we choose a bounded, non-increasing function of \( t \) to represent the decreasing chance of failure resulting from learning,
\[ q_i(t) = \frac{1}{t + \gamma}, \]
and we have
\[ \lambda(t) = \frac{h^{i}(t)}{t + \gamma}. \]
Since learning is the result of accumulated operating experience, it is reasonable to assume that minor adaptations depend on calendar time \( t \), and the history \( \mathcal{H}_t \) should include this information to allow determination of the failure intensity. In Fig. 7 one can see how the failure intensity of the subassembly used in the simple example described previously is modified due to minor adaptations, before any innovations take place.
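The following sketch combines the two-stage hazard with the learning factor \( q_i(t) = 1/(t + \gamma) \) to give a failure intensity under minimal repair (so the hazard is evaluated at calendar time); the parameter values are again illustrative.

```python
def learning_factor(t, gamma):
    """Minor-adaptation (learning) multiplier q_i(t) = 1 / (t + gamma)."""
    return 1.0 / (t + gamma)

def intensity_with_learning(t, rho, s1, eta, beta, gamma):
    """Failure intensity h(t) * q(t) under minimal repair (v(t) = t),
    combining the two-stage hazard sketched earlier with the learning
    factor; the parameter values below are illustrative."""
    wear = eta * beta * (t - s1) ** (beta - 1) if t > s1 else 0.0
    return (rho + wear) * learning_factor(t, gamma)

# Intensity over the first five years, evaluated quarterly (illustrative).
profile = [intensity_with_learning(0.25 * k, rho=0.15, s1=3.0,
                                   eta=0.08, beta=2.5, gamma=2.0)
           for k in range(1, 21)]
```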
3.5. Estimation of system availability performance

A performance indicator we call availability-informed capability is derived as an output of the mathematical model. Our capability measure aims to capture the effect of partial performance of subassemblies on the system output, in particular the effect of partial operation of wind turbines on the energy output from the farm. Since the power generated by a farm is the aggregate of the power generated by individual turbines, the availability-informed capability is defined as the fraction
\[ C_{\text{farm}}(t) = \frac{\sum_{i=1}^{n} P_i(t)}{\sum_{i=1}^{n} PO_i(t)}, \]
where \( n \) is the number of turbines in the farm, \( P_i(t) \) is the average output power of turbine \( i \) at time \( t \) (calculated by applying the power curve of a turbine to a reference wind speed distribution at hub height), given the turbine's operating condition, and \( PO_i(t) \) is the average output power of turbine \( i \) at time \( t \) assuming it is fully operational. Therefore the average farm availability-informed capability over some interval \((\tau_1, \tau_2)\) is given by
\[ C_{\text{farm}}(\tau_1, \tau_2) = \frac{\int_{\tau_1}^{\tau_2} C_{\text{farm}}(t)\, dt}{\tau_2 - \tau_1}. \]
A full explanation of this performance indicator and a discussion of why it is regarded as a meaningful measure of production capability in the context of financial analysis of offshore wind farms is given in [54]. A capability estimate is computed by representing the mathematical model as a point process simulation. The flowchart in Fig. 8 provides the high-level logic of the simulation of events through time and shows the types of events simulated and the relationships between them. For example, exposure to the triggers of systemic risk, shown by the shaded nodes, influences the failure events of subassemblies by modulating the hazard function, as does the condition of the subassembly (i.e. the virtual age), which is modulated by maintenance. History represents the combined information about, for example, the number of past innovations and calendar time.

4. Example

We now illustrate the application of the availability growth model for a new generation offshore wind farm. Unlike other availability modelling approaches used in an offshore wind context, our model allows for the representation of both the gradual effect of minor adaptations, introduced through the accumulation of operating experience, and the more radical effect of innovations, such as the replacement of subassemblies with inherent weaknesses by improved versions. In our example we compare model outputs under two scenarios: when systemic risk due to design weaknesses is considered (i.e. growth is explicitly modelled) and when this type of risk is omitted (i.e. as in current availability models for offshore wind). The aim of this comparison is to demonstrate the consequence of failing to represent systemic risks, as well as the subsequent availability growth resulting from restorative action, in estimating farm technical performance, energy output and hence expected financial return. Our example is based on a typical large-scale Round III UK offshore wind farm and our modelling has been developed in collaboration with wind energy experts. Specifically, we translated the conceptual framework shown in Fig.
1 into a process to support the customisation of the general model for the particular context as follows: firstly, we defined the system and its critical subassemblies, for which the model was to be built and scoped the availability growth model; secondly, we articulated the reliability and restoration targets for the system subassemblies based upon the achievable performance of similar relevant parts which have accrued operational experience; thirdly, we considered the causes and effects of failure so that we appropriately model the triggers on the uptime performance, as well as the impact of interventions on uptime and downtime performance. 4.1. Scoping the wind farm system model Our UK round III wind farm, currently at pre-construction stage, will comprise 150 5MW turbines. The turbines have novel design features and are larger scale than earlier versions. Eight subassemblies (i.e. gearbox, generator, frequency converter, transformer, main shaft bearing, blades, tower, foundations, collection cable and transmission cable) have been identified as critical through discussion with subject experts, because they are considered to be subject to high technical and physical risk. We model each of the critical subassemblies explicitly and treat the remaining non-critical subassemblies as one modelling group. Availability-informed capability is to be estimated for the first five years of operation, which is the UK warranty period. The farm is intended to start operation in the summer months. Engineering experts have identified the gearbox and the frequency converter as being at high risk because these are the subassemblies more likely to have design weaknesses. Therefore, in the modelling we examine scenarios associated with the prevalence of systemic risk associated with such design weaknesses and the impact of intervention strategies both on availability levels and financially in terms of energy production loss. We set the target reliability for offshore turbines to equal that achieved by mature onshore turbines since this is consistent with engineering requirements. Analysis of relevant data shows that onshore turbines achieve a failure rate of 3.81 failures/year. This failure rate includes failures of any subassembly and severity, and can be broken down to rates for specific subassemblies [22]. We use a turbine breakdown similar to that used in onshore analyses, which allows us to set the target reliability for each offshore subassembly equal to the level achieved by its onshore counterpart. Table 1 gives values for the target failure rates for the critical subassemblies, whereas the target failure rate for the non-critical group is the sum of the rates of the non-critical subassemblies comprising the group [22]. Following [41,18], we categorise the effects of failure into minor, moderate and major. Restoration durations depend on the failure severity and are taken to be 6 h, 1 day and 2 days for a minor, moderate and major failure respectively. The proportion of failures of different severities for each of the critical subassemblies is also shown in Table 1 and, again, is based on the experience from onshore farms which is considered requisite for our offshore context in this example. Our farm maintenance strategy includes preventive and corrective actions. The turbines will be subject to bi-annual overhauls during which subassemblies are refurbished and for modelling purposes we treat this as re-setting the subassembly virtual age to 50% of its value prior to the refurbishment. 
Condition monitoring (CM) will be installed on the gearboxes and will provide continuous data giving information about the state of the subassembly, with an average run length between signal and occurrence of failure of approximately 1.5 months. Finally, minor adaptations are assumed to improve subassembly reliability in a gradual manner. The minor adaptation parameter \( \gamma \) has been chosen on the basis of providing a reasonable learning curve effect based on historical experience from related farms. Observation of the CM signal will allow operators to de-rate the turbine to limit its output in order to extend its life until the next scheduled maintenance and to reduce the chance of a hard failure. If the fault signalled by the CM cannot be rectified remotely, then the affected subassemblies join the list of jobs awaiting repair. More generally, corrective repair will be conducted on a first come, first served basis and will be constrained by the available maintenance resources and the logistical accessibility. Weather delays are determined as described in [13] for each subassembly failure type. For example, the average waiting time for a major gearbox failure is 9 days during the summer months and 18 days during the winter months. The condition to which an affected subassembly returns after maintenance depends on the severity of failure determined previously. A minor failure is treated with minimal repair and the subassembly is returned to an as-bad-as-old condition, while moderate and major failures result in repairs that are believed to return the subassembly to 85% and 60% of its condition just before failure, respectively. As mentioned, the major concerns about the new turbine to be installed in our wind farm are the design weaknesses in the gearbox and the frequency converter. These weaknesses, should they exist, will be prevalent in all turbines in the farm; they will therefore trigger all similar subassemblies to wear prematurely and will be a source of systemic risk. To represent systemic weaknesses in the model, it is necessary to determine the reliability of subassemblies, in terms of hazard functions, given the presence of triggers. In our example we used a structured expert judgement elicitation process to obtain point value estimates of the parameters of the trigger-induced hazard functions of each critical subassembly. Note that the expert judgement information was obtained as part of a larger exercise reported in [55]. Table 2 shows the point values used in this application for the scenario where systemic risk due to design weaknesses is to be explicitly modelled. Our example aims to highlight the importance of representing systemic risks in farm availability performance, which is a novel feature of our growth model. Therefore we now examine the scenario where upgrades intended to address the design weaknesses of the gearbox and frequency converter are rolled out across the turbines in the farm in Year 2 (i.e. a trigger exists) and compare it to a baseline scenario where there are no systemic weaknesses (i.e. the trigger does not exist).
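The repair and refurbishment effects described above can be illustrated with a small virtual-age sketch. This is not the authors' implementation: mapping "returned to 85% (or 60%) of its pre-failure condition" onto a simple scaling of the virtual age, and the Weibull hazard used for illustration, are assumptions made purely to show the mechanics.

```python
def update_virtual_age(v_age_years, event):
    """Illustrative virtual-age update after a maintenance event.

    Factors follow the example in the text: a bi-annual overhaul re-sets the
    virtual age to 50% of its prior value; a minor failure receives minimal
    (as-bad-as-old) repair; moderate and major repairs are assumed, for
    illustration only, to scale the virtual age so that the subassembly is
    returned to 85% and 60% of its pre-failure condition.
    """
    factor = {
        "overhaul": 0.50,
        "minor": 1.00,      # minimal repair, as-bad-as-old
        "moderate": 0.85,
        "major": 0.60,
    }[event]
    return v_age_years * factor

def weibull_hazard(v_age_years, beta, eta):
    """Weibull hazard evaluated at the virtual age; a trigger-induced hazard
    elicited from experts (cf. Table 2) could be substituted here."""
    return (beta / eta) * (v_age_years / eta) ** (beta - 1)

age = 3.0
age = update_virtual_age(age, "major")                    # 1.8 years
print(age, weibull_hazard(age, beta=1.5, eta=5.0))        # illustrative Weibull values
```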
Table 1. Target failure rates and failure apportionment (proportion of major, moderate and minor failures) for the critical subassemblies and the non-critical group.

| Subassembly | Target failure rate | Major | Moderate | Minor |
| --- | --- | --- | --- | --- |
| Gearbox | 0.228 f/yr | 0.09 | 0.27 | 0.64 |
| Generator | 0.266 f/yr | 0.30 | 0.26 | 0.64 |
| Frequency converter | 0.456 f/yr | 0.04 | 0.18 | 0.78 |
| Transformer | 0.076 f/yr | 0.04 | 0.16 | 0.8 |
| Main shaft bearing | 0.038 f/yr | 0.25 | 0.15 | 0.6 |
| Blades | 0.114 f/yr | 0.04 | 0.21 | 0.75 |
| Tower | 0.114 f/yr | 0.01 | 0.19 | 0.8 |
| Foundations | 0.038 f/yr | 0.01 | 0.19 | 0.8 |
| Non-critical group | 2.47 f/yr | 0.01 | 0.19 | 0.8 |
| Collection cable | \(1 \times 10^{-6}\) f/km/yr | 0.01 | 0.19 | 0.8 |
| Transmission cable | \(1 \times 10^{-6}\) f/km/yr | 0.01 | 0.19 | 0.8 |

Table 2. Point values for the distributions used for the gearbox and frequency converter in the scenario where systemic risk due to design weaknesses is explicitly modelled.

| Distribution | Gearbox / frequency converter |
| --- | --- |
| Shocks (Exponential) | \( \lambda = 0.019 \) |
| Wear-out onset (Normal) | \( \mu = 0.335 \), \( \sigma = 0.01 \) |
| Signal (Weibull) | \( \eta = 15 \) |
| Full operation (Weibull) | \( \beta = 1.5 \), \( \eta = 5 \) |
| Partial operation (Weibull) | \( \beta = 1.5 \), \( \eta = 5 \) |

4.2. Findings

Our modelling provides performance profiles for the farm over the first five years of operation, starting in summer of Year 0, for both scenarios. The model has been developed as a modular simulation in Matlab, making it feasible to replace or to extend modelling features. Monte Carlo simulations based on the computational model logic shown in Fig. 8 are used to calculate the aleatory uncertainty on the availability-informed capability on a two-weekly basis using \( N = 100 \) runs. This is a limited number of simulation runs but the choice was made as a practical trade-off between simulation runtime and estimation accuracy. Further, since our primary goal here is to examine patterns in availability performance profiles, we have shown only the 50% quantiles in the model output plots. Fig. 9 illustrates the 50% quantiles of bi-weekly availability-informed capability profiles under the two scenarios. When systemic design weaknesses are not considered explicitly in the analysis, Fig. 9(a) shows that performance is below the typical target of 97% capability for the first quarter of Year 1, before gradually improving due to the effects of minor adaptations to achieve an availability of around 99%. However, as Fig. 9(b) shows, the systemic effects of design inadequacies can reduce early farm performance to a level below 90% capability. Our results show that the predicted farm performance deteriorates prematurely during the first two years of operation until innovations in the form of the design upgrades are undertaken during the summer months of Year 2.
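As a minimal illustration of how the bi-weekly capability quantiles described above could be extracted, the sketch below computes the availability-informed capability for one period and then the 50% quantile across N = 100 Monte Carlo runs. The capability matrix here is a random placeholder, not output from the actual simulation.

```python
import numpy as np

def farm_capability(p_actual_mw, p_full_mw):
    """Availability-informed capability for one period: summed average power of
    the turbines in their current condition divided by the summed power
    assuming every turbine is fully operational."""
    return np.sum(p_actual_mw) / np.sum(p_full_mw)

# Placeholder Monte Carlo output: capability of the farm for each of
# N = 100 runs over roughly five years of two-week periods.
rng = np.random.default_rng(42)
n_runs, n_periods = 100, 130
capability = rng.uniform(0.85, 1.0, size=(n_runs, n_periods))

median_profile = np.quantile(capability, 0.5, axis=0)   # 50% quantile per period
print(median_profile[:5])
```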
Following the successful mitigation of systemic risk, performance increases gradually. Fig. 10 shows the equivalent estimated failure intensity rate for the farm for our two scenarios. The common learning effects due to minor adaptations of, for example, procedures lead to a pattern of reduction in the failure intensity under Scenario 1. The impact of systemic risk due to the design weaknesses appears as an increasing failure intensity over the first two years of operation before decreasing substantially over the last half of Year 3, when the full effects of the design modification combined with the minor adaptations are achieved across the farm. By applying the wind speed distribution to the power curve of a turbine, the total farm energy production and associated revenue can be estimated. Table 3 provides the results under our two scenarios. If the energy price is £155 per MWh, and without modelling triggers of systemic risk, then the expected revenue over the first 5 years of operation is computed to be £1760 million. However, when systemic risk is properly accounted for in the analysis, the farm generates a revenue of £1722 million over the same period. This implies that failing to model growth in availability, but instead assuming that steady-state performance can be achieved from the outset, can lead to an overestimation of farm revenue of around £38 million even before taking into account the cost of innovations. In this example these costs would be those accrued in the re-design and re-fitting of 150 problematic frequency converters and gearboxes. The example shows clearly what kind of impact systemic risks can have on wind farm financial performance. Current modelling of offshore wind farm availability does not take account of growth due to the risks associated with innovation, leading to over-optimistic planning and high costs of mitigation. Simply having awareness of this type of problem during planning and contracting can focus attention on maintaining options to deal with this issue.

5. Conclusions and further work

We have presented an availability growth model for a system, such as an offshore wind farm, where innovations might be made during early operation to improve performance and estimates of availability are required prior to entry into service. Importantly, this includes exploration of mitigation strategies for the initial period of operation, should availability problems emerge, and should influence logistics planning and options on service provision. While our availability growth model has been motivated by, and its application illustrated for, the offshore wind problem, the generic structure of the model means that it can be adapted to other domains where commercially unproven technology or processes are used. The model presented is designed to provide insight into the effectiveness of interventions on growth in system performance by providing availability estimates under different scenarios. Our example for a typical UK Round III wind farm highlights the importance of being able to meaningfully assess farm performance over early life when systemic risks due to design, maintenance and operational weaknesses may still exist. The model provides a means of measuring the impact of systemic risk on availability performance and can be used to quantify the financial implications of the farm underperforming relative to target.
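The revenue figures quoted above can be checked directly from the energy outputs reported in Table 3 (below), assuming the early-life output is expressed in GWh and the energy price is £155 per MWh:

```python
price_per_mwh = 155.0                                       # £/MWh
energy_gwh = {"no triggers": 11_355, "triggers": 11_109}    # Table 3, early life output

revenue_m = {k: v * 1_000 * price_per_mwh / 1e6 for k, v in energy_gwh.items()}  # £ million
print(revenue_m)                                             # ≈ {'no triggers': 1760, 'triggers': 1722}
print(round(revenue_m["no triggers"] - revenue_m["triggers"], 1))  # ≈ 38 £ million overestimate
```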
The model, as presented here, considers only aleatory uncertainties and allows the exploration of different scenarios with decision makers. This is useful for dealing with managers in industry as it allows them to explore the implications of issues that they are aware of, but are not currently modelling. A more sophisticated mathematical approach, which uses epistemic uncertainties to create a more formal rational decision-making model framework, is developed in a further paper [54]. However, this further approach inevitably requires that decision makers 'buy in' to the expert uncertainty assessments, which have to be gathered from a variety of different stakeholders. Since the availability growth scenario approach presented here already enables decision makers to explore key problems without having to commit to a more conceptually sophisticated and complex approach, it is genuinely useful both for those problems where the more complex approach would probably not make a difference and for motivating decision makers to move on to the more complex approach when it is needed. Our point of view in this regard is consistent with that expressed by I.J. Good [21], who said that a rational decision maker should take account of the cost of the decision analysis (to all parties) as well as the direct costs and benefits of the decision. Our current model code is based on the set of assumptions described. While these assumptions are reasonable for our example domain, they might need to be adapted for other application areas. Further, the implementation of sensitivity or uncertainty analysis would require further consideration of the simulation model computation so that appropriate numbers of simulation runs can be efficiently generated to provide suitably accurate results. For example, future work could involve the use of metamodels such as emulators [36,28] to approximate the simulation model and to speed up computation. The representation of the systemic risk triggers has been developed in collaboration with wind energy experts, and can be modified to reflect the systemic risks relevant to a particular situation. Similarly, the condition monitoring characteristics, which we represented by the timing of the signal relevant to failure and the operational response, can be modified to represent actual maintenance of a given system. To build a meaningful model for decision makers requires engagement with relevant engineering experts both to qualitatively structure the model and to quantify selected parameters. We have developed a scientific protocol to support the collection and preparation of data, details of which are provided in [55].

Table 3. Expected farm output over early life assuming average wind speed under two scenarios and an energy price of £155 per MWh.

| Scenario | Energy output (GWh) | Revenue (£ million) |
| --- | --- | --- |
| No triggers | 11,355 | 1760 |
| Triggers | 11,109 | 1722 |

Fig. 10. Estimated early life failure intensity rate (ROCOF) for the simulated scenarios: (a) no recognised gearbox and frequency converter design weaknesses; (b) design weaknesses result in an increasing rate over the first two years before reducing after the effect of innovation.
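As a pointer to the metamodelling idea mentioned above, the following sketch fits a Gaussian-process emulator to a handful of (input, output) pairs from hypothetical simulation runs using scikit-learn. The inputs, outputs and kernel choice are illustrative assumptions and not part of the model described in this paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Illustrative training data: each row is a model input (e.g. the minor
# adaptation parameter gamma and the year of the design upgrade), and y is
# the mean 5-year capability returned by the full simulation for that input.
X = np.array([[0.1, 1.0], [0.1, 2.0], [0.2, 1.0], [0.2, 3.0], [0.3, 2.0]])
y = np.array([0.955, 0.948, 0.962, 0.940, 0.966])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.1, 1.0]),
                              normalize_y=True)
gp.fit(X, y)

mean, std = gp.predict(np.array([[0.15, 2.0]]), return_std=True)
print(mean, std)   # emulator prediction with uncertainty, far cheaper than a full simulation run
```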
Ongoing work includes further engagement with stakeholders experienced in offshore wind farm engineering, technology and operations to conduct validation studies of the availability growth model and supporting data management processes.

Acknowledgements

This research was supported by EPSRC Grant EP/I017380/1. We gratefully acknowledge the support from Mr. Graeme Hawker, Dr. Keith Bell, Prof. David Infield and Dr. Julian Feuchtang and thank them for their help in understanding and modelling the offshore wind farm problem. Special thanks go to Dr. Kevin Wilson for supporting the construction of the MATLAB code. We would also like to thank the engineers, farm operators and developers that shared their valuable insights and experiences with us. Datasets associated with the simulations discussed here are available from http://dx.doi.org/10.15129/bbc056e4-b73a-4ac4-a4fe-af8e5bd54804.

References
Evaluation of composition and mineral structure of callus tissue in rat femoral fracture

Mikael J. Turunen,a,* Sebastian Lages,b Ana Labrador,c Ulf Olsson,b Magnus Tägil,d Jukka S. Jurvelin,a and Hanna Isakssona,d,e

aUniversity of Eastern Finland, Department of Applied Physics, POB 1627, FIN-70211 Kuopio, Finland
bLund University, Division of Physical Chemistry, POB 124, SE-22100 Lund, Sweden
cLund University, MAX IV Laboratory, POB 118, SE-22100 Lund, Sweden
dLund University, Department of Orthopaedics, Clinical Sciences, POB 118, SE-22100 Lund, Sweden
eLund University, Division of Solid Mechanics, POB 118, SE-22100 Lund, Sweden

Published in: Journal of Biomedical Optics, 2014. DOI: 10.1117/1.JBO.19.2.025003

Abstract. Callus formation is a critical step for successful fracture healing. Little is known about the molecular composition and mineral structure of the newly formed tissue in the callus. The aim was to evaluate the feasibility of small angle x-ray scattering (SAXS) to assess the mineral structure of callus and cortical bone and whether it could provide complementary information to the compositional analyses from Fourier transform infrared (FTIR) microspectroscopy. Femurs of 12 male Sprague–Dawley rats at 9 weeks of age were fractured and fixed with an intramedullary 1.1 mm K-wire. Fractures were treated with combinations of bone morphogenetic protein-7 and/or zoledronate. Rats were sacrificed after 6 weeks and both femurs were prepared for FTIR and SAXS analysis. Significant differences were found in the molecular composition and mineral structure between the fracture callus, fracture cortex, and control cortex. The degree of mineralization, collagen maturity, and degree of orientation of the mineral plates were lower in the callus tissue than in the cortices. The results indicate the feasibility of SAXS in the investigation of the mineral structure of bone fracture callus and provide complementary information to the composition analyzed with FTIR. Moreover, this study contributes to the limited FTIR and SAXS data in the field.
© 2014 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JBO.19.2.025003]

Keywords: cortical bone; fracture healing; composition; mineral structure; Fourier transform infrared microspectroscopy; small angle x-ray scattering.

Paper 130670RR received Sep. 15, 2013; revised manuscript received Jan. 15, 2014; accepted for publication Jan. 17, 2014; published online Feb. 12, 2014.

1 Introduction

Callus formation is a critical step for successful fracture healing.1 About 5% to 10% of all fractures suffer from delayed healing or lead to a nonunion.2 Bone morphogenetic proteins (BMPs) increase callus formation3 and have been shown to be valuable in the treatment of fracture nonunions. BMPs, however, increase the speed of remodeling and thereby also induce resorption via the RANK pathway. On the other hand, bisphosphonates can be used to reduce the resorption of bone.5 Little is known, however, about the effect of BMPs and bisphosphonates on the molecular composition and mineral structure of the callus. Fourier transform infrared (FTIR) microspectroscopy and small angle x-ray scattering (SAXS) can be used to evaluate the molecular composition and mineral structure, respectively, of trabecular9–13 and cortical bone.5,10–13 However, the composition and mineral structure of the callus tissue during fracture healing has received little attention.14–16 Each molecular structure has its own specific IR absorption spectrum. When coupled to a microscope, FTIR imaging microspectroscopy provides a tool for fast measurements of the spatial composition of bone.10,17,18 Several compositional parameters can be calculated from the bone IR spectra, e.g., degree of mineralization, carbonate substitution, collagen maturity, crystallinity, and acid phosphate substitution (APS).19–22 In SAXS, a sample is irradiated by a narrow, well-collimated x-ray beam and the intensity of x-rays scattered by the sample at small angles is measured. In bone, the scattering is related to the bone mineral structure, i.e., the spatial arrangement of mineral crystals and collagen fibrils. When coupled with an automated scanning stage, spatial maps can be recorded where each pixel contains a two-dimensional (2-D) scattering pattern.13 Several parameters can be determined from the scattering pattern of bone, e.g., the mineral plate thickness, predominant orientation, and degree of orientation.5,12,13,22,23 Spatial distributions of these parameters provide a good insight into the variation in mineral structure in cortical bone and newly formed callus tissue. Different methods have been used to determine the mineral plate thickness. Fratzl et al. proposed a method in which a mineral phase fraction of 50% is assumed.5,13,23,24 This assumption might not be valid for, e.g., newly formed bone where the degree of mineralization may vary substantially. Another approach, introduced by Bünger et al., evaluates the mineral plate thickness through an iterative curve fitting method where no assumption of the mineral phase fraction is needed.13 The latter was used in this study. This study used FTIR imaging microspectroscopy and scanning SAXS to assess the molecular composition and mineral structure of newly formed callus tissue, the adjacent cortical bone, and the nonfractured contralateral cortical bone, during long bone fracture healing in a rat nonunion model. The first aim was to evaluate the feasibility of SAXS to assess the mineral structure of callus tissue and whether it could provide complementary information to the compositional analyses from FTIR.
The subsequent aim was to study the differences in molecular composition and mineral structure among fracture callus, fracture cortex, and control cortex.

2 Materials and Methods

2.1 Experimental Overview

Twelve male Sprague–Dawley rats at 9 weeks of age were anaesthetized with ketamine HCl (75 mg/mL, Parnell Laboratories, Rosebery, Australia) and xylazine (10 mg/mL, Ilium, Smithfield, Australia).25,26 The right femurs of the rats were osteotomized, stripped of periosteum and muscle, and fixed with an intramedullary 1.1 mm Kirschner wire. BMP-7 was placed locally around the fracture and zoledronate (ZO) or saline (NaCl) was injected after 2 weeks. Thus, four groups were created: (A) NaCl, (B) BMP-7 + NaCl, (C) BMP-7 + ZO, and (D) ZO. After the operation, the rats received subcutaneous physiologic saline and buprenorphine (Temgesic, Reckitt and Colemann, Hull, UK) at 0.05 mg/kg twice a day. Rats were sacrificed after 6 weeks and both femurs, fractured (right) and control (left), were stripped of the soft tissue, defatted, dehydrated in an ascending series of ethanol solutions, and embedded in polymethylmethacrylate (PMMA). The experimental protocol was approved by the local animal ethics committee. The experimental model results in 52% nonunions when the femur is left untreated.25

2.2 Imaging of Callus Size

The femurs from the fracture side were imaged with a micro-CT system (Skyscan 1172, Aartselaar, Belgium) to visualize the fracture callus and cortex in each animal (Fig. 1). Images were acquired with an isotropic voxel size of 36 μm using 100 kV, 100 μA with a 0.5 mm aluminum filter, and 10 repeated scans. Image reconstruction was performed (NRecon, Skyscan, v. 1.5.1.4) by correcting for ring artifacts and beam hardening. Following reconstruction, the individual fracture lines were identified by simultaneously viewing multiple orthogonal slices (DataViewer, Skyscan, v. 1.4). The data were used to qualitatively assess and evaluate the status of healing of the fractures and the size of the calluses.

Fig. 1 Micro-CT images reveal the state of healing of the bone fractures after 6 weeks in (a) NaCl, (b) BMP-7 + NaCl, (c) BMP-7 + ZO, and (d) ZO treatment groups. The images were used to qualitatively assess callus size and healing state.

2.3 Small Angle x-ray Scattering

From the PMMA embedded specimens, 300 μm sections were sawed (EXAKT 400 CS, Cutting Grinding System, Hamburg, Germany). Two samples from each treatment group and one section from each sample were measured and analyzed. SAXS measurements were conducted at the 1911-SAXS beamline at the 1.5 GeV ring (MAX II) of the MAX IV Laboratory (Lund University, Lund, Sweden).27 The wavelength of the monochromatic radiation obtained from a Si(111) crystal was 0.91 Å and the size of the collimated synchrotron x-ray beam at the sample position was approximately 0.2 × 0.2 mm. The detector used was a MarCCD (Rayonix, L.L.C.) with a 165 mm active area and 79 μm pixel size. It was placed 1911 mm behind the sample and the exposure time to collect each SAXS pattern was 5 s. The q-range measured at each measurement point was 0.01 to 0.30 Å⁻¹. Bone sections were mounted in a sample holder which was placed on a motorized x–y scanning stage to map the sample with a step size of 0.2 mm in both directions. About 23 mm² areas of the sections including the fracture callus and cortex and 10 mm² areas of the control cortex were measured [Fig. 2(b)].
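As a quick consistency check on the stated geometry, the sketch below converts the detector edge position into a scattering vector magnitude using q = (4π/λ) sin(θ/2). It assumes the direct beam sits at the centre of the detector, which is an assumption made purely for illustration.

```python
import numpy as np

wavelength = 0.91          # Angstrom
distance = 1911.0          # mm, sample-to-detector
radius_max = 165.0 / 2.0   # mm, half of the active detector width

def q_at_radius(r_mm):
    """Scattering vector magnitude for a point at radial distance r on the detector."""
    theta = np.arctan(r_mm / distance)              # full scattering angle
    return 4.0 * np.pi / wavelength * np.sin(theta / 2.0)

print(q_at_radius(radius_max))   # ≈ 0.30 1/Angstrom, matching the quoted upper q limit
```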
From the anisotropic (noncircularly symmetric) 2-D SAXS intensity pattern, I(q, θ), the size, shape, and orientation of the mineral crystals were analyzed for each measurement point [Fig. 3(a)]. Analysis of the SAXS data was done by averaging the scattering pattern over the 360 deg azimuthal range after masking of the beamstop and the area outside the detector [Fig. 3(a)]. This results in a one-dimensional (1-D) scattering pattern I(q). From this pattern, the mineral plate thickness was evaluated following the approach suggested by Bünger et al.13 Briefly, it is based on curve fitting where the mineral crystals are assumed to be plates with a finite thickness, T, in one dimension and infinite extent in the other two dimensions. The scattering from one plate is

\[ P(q) = \frac{1}{q^4} \left| \frac{\sin(qT/2)}{qT/2} \right|^2. \] (1)

Here, \( q = (4\pi/\lambda) \sin(\theta/2) \) is the scattering vector magnitude, where \( \lambda \) is the x-ray wavelength and \( \theta \) is the scattering angle. The variation of the thickness of the mineral plates is taken into account by assuming that the thickness variation follows a Schultz–Zimm distribution \( D(T, T_{av}) \). The average scattering then reads

\[ P_{av}(q) = \frac{\int_0^\infty T^2 P(q) D(T, T_{av})\, dT}{\int_0^\infty T^2 D(T, T_{av})\, dT}. \] (2)

Equation (1) represents the single particle scattering function. In bone, the mineral content is high and we also need to take into account the spatial arrangement of the mineral platelets, through a structure factor. Bünger et al.13 suggested a structure factor that includes both short range and long range interparticle correlations. To describe the short range repulsive interactions, they considered the random phase approximation (RPA), where the effective RPA structure factor can be written as

\[ S_{RPA}(q) = \frac{1}{1 + \nu P_{av}(q)}, \] (3)

where \( \nu \) is an adjustable parameter that depends on the strength of the interactions and typically increases with increasing concentration. In addition, the increasing scattering intensity at lower \( q \) suggests a long range fractal arrangement of the mineral particles that is described by

\[ S_{\text{frac}}(q) = 1 + Aq^{-\alpha}, \] (4)

where \( \alpha \) is the fractal dimension of the fluctuations. The total intensity \( I(q) \), i.e., the model curve, is then given by

\[ I(q) = C\, S_{\text{frac}}(q)\, S_{\text{RPA}}(q)\, P_{av}(q), \] (5)

where \( C \) is a variable scale factor that depends on the sample thickness, the mineral content, and the scattering contrast. Finally, the model curve [Eq. (5)] is fitted to the measured \( I(q) \) data by iterative weighted nonlinear least squares, adjusting the mineral plate thickness \( T \), the width of the mineral plate thickness distribution, the RPA parameter \( \nu \), the fractal parameters \( A \) and \( \alpha \), and the scale factor \( C \). The model curves were fitted by automated custom-made scripts using MATLAB (MATLAB R2011b, The MathWorks, Inc., Natick, Massachusetts). Predominant orientation and degree of orientation were determined by calculating the \( q \)-averaged scattering intensity as a function of the azimuthal angle \( \chi \) [Fig. 3(b)]. The predominant orientation was calculated as \( \Psi + 90 \) deg, where \( \Psi \) is the azimuthal angle \( \chi \) at which the intensity reaches its maximum.8,28 Gaussian curves were fitted to both peaks [Fig. 3(b)].
Degree of orientation \( \phi \) was calculated as

\[ \phi = \frac{A_1}{A_0 + A_1}, \] (6)

where \( A_1 \) is the area under the two Gaussian curves and \( A_0 \) is the background area.8 Thus, \( \phi \) takes values between 0 and 1, where 0 means that there is no predominant orientation within the plane of the section and 1 means that all mineral crystals are aligned perfectly in the same direction. These parameters were calculated for each measurement point (Fig. 4). A threshold of the integrated intensity of the 1-D scattering pattern \( I(q) \) was used to mask the background (PMMA), and each analyzed area contained the whole measured cortex or callus.

2.4 Fourier Transform Infrared Microspectroscopy

Three-micrometer-thick longitudinal sections of bone were cut (Polycut S, Reichert-Jung, Germany) from the PMMA plugs and placed on ZnSe windows. Three samples of each treatment group and two sections of each sample from the control cortex, fracture cortex, and callus tissue were measured and analyzed. Analysis of bone composition was conducted with a PerkinElmer instrument (Spectrum Spotlight 300, Perkin Elmer Inc., Wellesley, Massachusetts) in transmission mode. A 25 μm spatial resolution and a spectral resolution of 4 cm⁻¹ with eight repeated scans per pixel were used. The background spectrum was recorded on a clean area of the ZnSe window by using the same measurement parameters but with an average of 75 scans. Data were collected over the wavenumber range of 2000 to 800 cm⁻¹. Areal measurements (~30 mm²) were performed on the fractured femurs, including both the cortex and callus [Fig. 2(c)], and on the control cortex of the intact femur. In addition, one areal measurement of pure PMMA was acquired and averaged. The bone spectra were normalized by using the averaged PMMA spectrum.29 Subsequently, the PMMA spectrum was subtracted from the bone spectra. From the preprocessed bone spectra, the areas of the linearly baseline corrected amide I (1720 to 1585 cm⁻¹), phosphate (1200 to 900 cm⁻¹) and carbonate (890 to 850 cm⁻¹) peaks were calculated.17,19-21 These peak areas were used to determine the mineral/matrix (phosphate/amide I) and carbonate/phosphate ratios. The mineral/matrix ratio is an indicator of the degree of mineralization, whereas the carbonate/phosphate ratio describes the carbonate substitution.19 Collagen maturity and crystallinity of the hydroxyapatite (HA) crystals were evaluated through second derivative peak fitting. The ratio of the areas of the subpeaks under the amide I peak at 1660 and 1690 cm⁻¹ was used to describe the collagen maturity (collagen cross-linking ratio, XLR),21 whereas the ratio of the areas of the subpeaks at 1030 and 1020 cm⁻¹ under the phosphate peak was used as an indicator of crystallinity,20 i.e., the size and perfection of the HA crystals. APS, which is associated with new mineral deposition, was determined as a ratio of the intensities at 1127 and 1096 cm⁻¹.22 The total absorption of the background corrected (PMMA subtracted) spectra was used for masking out the bone, and each analyzed area contained the whole measured cortex or callus. All preprocessing and analyses of the FTIR and SAXS data were performed using MATLAB (MATLAB R2011b, The MathWorks, Inc., Natick, Massachusetts).
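For readers who want to reproduce the shape of the SAXS analysis, the following is a minimal sketch of the curve-fitting model of Eqs. (1) to (5), written in Python rather than the MATLAB scripts used in the study. The Schultz–Zimm distribution is implemented as a gamma density with mean T_av, and the quadrature grid, synthetic data, initial guesses and bounds are illustrative assumptions rather than the settings used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

def plate_form_factor(q, T):
    # Eq. (1): scattering from a single plate of thickness T
    x = q * T / 2.0
    return (np.sin(x) / x) ** 2 / q ** 4

def schultz_zimm_average(q, T_av, z):
    # Eq. (2): thickness-distribution average of the form factor, with the
    # Schultz-Zimm distribution written as a gamma density of mean T_av.
    T = np.linspace(0.2 * T_av, 3.0 * T_av, 200)
    D = gamma.pdf(T, a=z + 1, scale=T_av / (z + 1))
    num = np.trapz(T[None, :] ** 2 * plate_form_factor(q[:, None], T[None, :]) * D, T, axis=1)
    den = np.trapz(T ** 2 * D, T)
    return num / den

def model_intensity(q, C, T_av, z, nu, A, alpha):
    # Eqs. (3)-(5): RPA structure factor, fractal term, and total model curve.
    P_av = schultz_zimm_average(q, T_av, z)
    S_rpa = 1.0 / (1.0 + nu * P_av)
    S_frac = 1.0 + A * q ** (-alpha)
    return C * S_frac * S_rpa * P_av

# Synthetic demonstration: generate a noisy curve and recover the parameters.
q = np.linspace(0.01, 0.30, 150)                       # 1/Angstrom, as in the measured range
true = [1e-3, 30.0, 20.0, 1e3, 1e-4, 2.5]              # C, T_av (Angstrom), z, nu, A, alpha
I_meas = model_intensity(q, *true) * (1 + 0.02 * np.random.default_rng(0).standard_normal(q.size))
popt, _ = curve_fit(model_intensity, q, I_meas, p0=true, bounds=(0, np.inf), maxfev=20000)
print(popt)
```

Similarly, a hedged sketch of the FTIR band-area ratios described above, assuming a PMMA-corrected spectrum sampled on an ascending wavenumber axis; the band limits are those quoted in the text, while the helper names are hypothetical.

```python
import numpy as np

def peak_area(wavenumber, absorbance, lo, hi):
    """Linearly baseline-corrected area of a spectral band between lo and hi cm^-1.
    The wavenumber axis is assumed to be sorted in ascending order."""
    m = (wavenumber >= lo) & (wavenumber <= hi)
    x, y = wavenumber[m], absorbance[m]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
    return np.trapz(y - baseline, x)

def ftir_ratios(wavenumber, absorbance):
    """Mineral/matrix and carbonate/phosphate ratios for one (PMMA-corrected)
    pixel spectrum, using the band limits quoted in the text."""
    amide_i   = peak_area(wavenumber, absorbance, 1585, 1720)
    phosphate = peak_area(wavenumber, absorbance,  900, 1200)
    carbonate = peak_area(wavenumber, absorbance,  850,  890)
    return {"mineral/matrix": phosphate / amide_i,
            "carbonate/phosphate": carbonate / phosphate}
```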
2.5 Statistics

Micro-CT images were only evaluated qualitatively due to the small number of samples per group. The Wilcoxon signed rank test was used to compare the molecular composition and mineral structure between paired samples (each rat) from the control cortex, fracture cortex, and fracture callus. For this test, all the rats were pooled into one group. A paired t-test was performed by comparing all the data points from the callus tissue and the cortices in the same animal.30

3 Results

Based on the micro-CT images, only small amounts of callus tissue were found in the NaCl group. Some mineralized callus tissue was forming between the fracture ends but union was not completed [Fig. 1(a)]. All fractures in the BMP-7 groups were judged to be completely healed based on the micro-CT images [Figs. 1(b) and 1(c)]. Callus size was larger in the BMP-7 + ZO group than in the other groups (Fig. 1). Based on the FTIR data with all samples pooled, the mineral/matrix ratio [Fig. 5(a)] and XLR [Fig. 5(c)] were significantly lower in fracture callus than in fracture cortex (p < 0.01 and p < 0.05, respectively) and control cortex (p < 0.05 and p < 0.01, respectively), whereas crystallinity [Fig. 5(d)] of the control cortex was significantly lower than that in the fracture cortex and callus (p < 0.01 in both). APS [Fig. 5(e)] was significantly higher in the fracture callus compared to the control and fracture cortices (p < 0.05 and p < 0.01, respectively). When comparing all data points within one animal in each sample, the mineral/matrix ratio was lower in the callus tissue than in the cortices (p < 0.01) [Table 1 and Fig. 5(a)]. The carbonate/phosphate ratio was significantly higher in callus tissue compared to cortices in all samples in the BMP-7 + NaCl and ZO groups (p < 0.01) [Table 1 and Fig. 5(b)], but not in the other two groups. XLR was significantly lower in callus tissue compared to cortices in all the individual samples from all treatment groups [Table 1 and Fig. 5(c)]. Significantly higher crystallinity values were observed in fracture callus compared to cortices in the NaCl and ZO groups (p < 0.01), but not in the BMP-7 or BMP-7 + ZO treated samples [Table 1 and Fig. 5(d)]. APS was significantly higher in callus tissue than in fracture cortex in all samples from all treatment groups (p < 0.01) [Table 1 and Fig. 5(e)].

Fig. 5 FTIR compositional analyses of (a) mineral/matrix ratio, (b) carbonate/phosphate ratio, (c) collagen cross-linking ratio, (d) crystallinity, and (e) acid phosphate substitution in control cortex, fracture cortex, and fracture callus for all samples together and in different treatment groups. Nonparametric Wilcoxon signed rank test for all samples: *p < 0.01 and **p < 0.05.

Table 1 Comparison of composition and mineral structure parameters in callus tissue and fracture cortex when comparing all data points within one animal. Significantly decreased (↓), increased (↑), or no systematic change (—) in values from all samples are indicated, when callus tissue is compared to fracture cortex tissue. Alterations in mineral/matrix ratio (M/M), carbonate/phosphate ratio (C/P), collagen maturity (XLR), crystallinity (Cryst), acid phosphate substitution (APS), mineral plate thickness (T), orientation, and degree of orientation (DoO) are indicated.
| Group | M/M | C/P | XLR | Cryst | APS | T | Orientation | DoO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NaCl | ↓ | — | ↓ | ↑ | — | ↑ | — | ↓ |
| BMP-7 + NaCl | ↓ | ↑ | ↓ | — | ↑ | — | — | ↓ |
| BMP-7 + ZO | ↓ | — | ↑ | — | ↑ | ↑ | ↑ | ↓ |
| ZO | ↓ | — | ↑ | ↑ | ↑ | ↑ | ↑ | ↓ |

Based on the SAXS data with all samples pooled, the fracture cortex had a lower mineral plate thickness than the control cortex and the callus tissue [Fig. 6(a)]. The difference between the fracture cortex and callus tissue was significant (p < 0.05). The predominant orientation was significantly different in the callus tissue compared to the fracture and control cortices (p < 0.05 in both) [Fig. 6(b)]. The predominant orientation of the mineral crystals in the cortices was clearly along the long bone axis, whereas the orientation was more random in the calluses. The degree of orientation was higher in the control cortex than in the fracture cortex and callus (p < 0.05 in both) [Fig. 6(c)]. When comparing all data points within one animal, the mineral plate thickness was significantly higher in callus tissue than in the fracture cortex in the BMP-7 + ZO treated groups [Table 1 and Fig. 6(a)]. In NaCl and BMP-7 + NaCl treated samples, no systematic differences between callus tissue and fracture cortex were observed. The mineral plate thickness was significantly higher in control cortex than in callus tissue in samples in the NaCl and ZO groups. Significantly higher predominant orientation values were evident in callus tissue compared to cortices in all samples (p < 0.01) [Table 1 and Fig. 6(b)]. Additionally, the degree of orientation was significantly higher in cortices than in callus tissue (p < 0.01) in all samples regardless of the treatment [Table 1 and Fig. 6(c)].

4 Discussion

In this study, we demonstrate that FTIR and SAXS combined can be used to identify the differences in composition and mineral structure between the newly formed callus tissue and cortical bone tissue. Moreover, we initiated an investigation of the effect of treatment with BMPs and bisphosphonates on the composition and mineral structure of newly formed callus tissue. FTIR has been widely used to evaluate the composition of bone. However, studies on the composition of newly formed fracture callus bone are limited. Yang et al. used FTIR to study the composition of callus and cortex during fracture healing in wild type and interleukin-6 knockout mice. They found a higher mineral/matrix ratio in the cortex compared to callus, whereas crystallinity did not differ between the callus and cortex. The authors did not report other FTIR parameters. Their finding regarding the mineral/matrix ratio is consistent with our study. However, in contrast to their results, we found a higher crystallinity in the callus and fracture cortex compared to the control cortex. The difference might originate from the different analysis methods, as they calculated the crystallinity from the intensity ratio in the spectra whereas we used the peak fitting method. Ouyang et al. studied the effects of estrogen and estrogen deficiency in fracture callus in rat femurs.
They found a lower mineral/matrix ratio, higher carbonate/phosphate ratio, lower XLR, and slightly reduced crystallinity in the fracture callus when compared to the fracture cortex at a distance from the fracture site in the estrogen deficient rats. After 8 weeks from the start of the treatment, the estrogen sufficient rats showed a similar mineral/matrix ratio and crystallinity as the estrogen deficient rats, and a slightly higher XLR and lower carbonate/phosphate ratio near the fracture. These results are generally consistent with the data presented in this study. However, they did not report statistical significance. APS is associated with new mineral deposition and a high amount of APS has been shown to indicate areas of new bone formation. However, it has not been reported earlier in fracture callus tissue. Our results showed a higher APS in the fracture callus compared to the cortices, which seems highly reasonable.

Fig. 6 SAXS analyses of (a) mineral plate thickness, (b) predominant orientation, and (c) degree of orientation in control cortex, fracture cortex, and fracture callus for all samples together and in different treatment groups. Nonparametric Wilcoxon signed rank test for all samples: *p < 0.05.

The mineral plate thickness has traditionally been evaluated using a method by Fratzl et al. It is based on a two-phase assumption, in which the mineral plate thickness is determined from the ratio of the integrated intensity and the Porod constant. Further, it is assumed that the fraction of the mineral phase is 50%. This approach has been successful when a relatively constant mineral phase fraction could be assumed in bone. The calculated mineral plate thickness increases with increasing mineral phase fraction. Thus, the method may underestimate the mineral plate thickness if the real mineral phase fraction is greater than 0.5, and in newly formed bone with a lower mineral phase fraction, it might overestimate it. Moreover, the mineral fraction has been found to vary within dentin. Similar variation could be expected also in bone. Bünger et al. introduced a curve fitting method in which no assumption of the fraction of the mineral phase is needed, which may be more accurate especially when determining the mineral plate thickness in bones with clearly different degrees of mineralization. SAXS has been used to determine the mineral structure of trabecular and cortical bone. However, fracture callus tissue has received only minor attention in SAXS studies. Liu et al. evaluated the mineral plate thicknesses in fracture callus and fracture cortex in sheep undergoing fracture healing. Using the method proposed by Fratzl et al., they found a lower mineral plate thickness in fracture callus than in fracture cortex after 2 to 6 weeks, but no difference at 9 weeks after the fracture. Our study evaluated the healing after 6 weeks in a rat model. That is roughly comparable to 9 weeks of healing time in a sheep model. In the present study, the mineral plate thickness tended to be higher in control cortex compared to the fracture cortex and callus; however, no significant differences were observed. Hypothetically, with a higher number of samples, the control cortex might show a significantly higher mineral plate thickness than the fracture cortex and callus even at this stage of fracture healing. The number of studies combining measurements of composition and mineral structure of bone in the same samples is highly limited. Pleshko-Camacho et al.
used FTIR and SAXS to study the composition and mineral structure of cortical and trabecular bone of one L-4 vertebra of a 14-month-old girl. They mapped the trabeculae with both techniques and compared the results between trabecular and cortical bone. They also compared the parameter outcomes from the two techniques. They found that the total mineral content and mineral/matrix ratio were lower in trabecular bone than in cortical bone, whereas the mineral plate thickness and degree of orientation were lower in cortical bone than in trabecular bone. Trabecular bone has a more irregular orientation than cortical bone, and in some stages of fracture healing, callus tissue can be described as trabecular-like. Therefore, we compare these findings to those in the present study. We also found a higher mineral/matrix ratio in cortical bone than in callus. Degree of orientation was lower in callus tissue than in cortex. In the study by Pleshko-Camacho et al., they also found a significant correlation between crystallinity, determined from IR spectra, and mineral plate thickness, determined by SAXS. In our study, no correlation was found. However, this is not surprising, since crystallinity is suggested to be related to the HA crystal length. The thickness of the crystals is different from their length and may thus diverge from crystallinity without contradiction. A lower degree of mineralization (mineral/matrix ratio), collagen maturity, and degree of orientation of the mineral crystals, together with a higher APS, are indicators of new, immature, and less organized bone. As expected, this was found in our study when the callus tissue was compared with the cortices. Also, the larger spread of angles in the callus tissue reflects the disordered nature of the forming bone. This would be expected, since the collagen fibers and mineral crystals in cortical lamellar bone are oriented primarily in the direction of the long axis of the bone, i.e., the main loading direction. During bone healing, new bone formation occurs primarily through endochondral ossification, resulting in rapid formation of immature and less organized bone. Some calcified cartilage in the callus area could still be present at this stage of fracture healing, but its classification would require histology, which was not available in this study. From the compositional data, the separation of woven bone and calcified cartilage is difficult, or even impossible. Immature bone has a more random orientation, which is reflected by the orientation parameters. With time, this immature and less organized bone tissue remodels into lamellar bone with a more organized structure, and finally, the cortex is restored. In our study, only one time point of 6 weeks was chosen. In our previous studies, this time point has corresponded to bridging of the fracture but not yet full remodeling. The amount of unremodeled callus depends both on the chosen time point and on the treatment given to the rats, which in this case was different combinations of BMP-7 and the bisphosphonate zoledronate. Based on the findings from our previous studies using the same open fracture healing model and time point, we assume that no or very little calcified cartilage remains in the callus areas. To confirm this, histological assessment would be necessary, which unfortunately was not available. In the NaCl group, the calluses were smaller compared to, e.g., BMP-treated calluses.
This seems to be reflected in the orientation parameters, where the NaCl treated samples showed an orientation more similar to that of the cortical bone than did the large callus formations in the BMP-treated samples. When comparing all data points within one animal, significant differences were found between the callus tissue and the cortices. These differences were identified in some treatment groups, but not all. In all treatment groups, the mineral/matrix ratio, collagen maturity, and degree of orientation of the mineral plates were lower in the callus tissue than in the cortices. Additionally, the APS and orientation were higher in callus tissue than in the cortices in all samples and groups. However, when comparing the treatments in this manner, some differences between the treatments could also be detected. In BMP-7 + ZO and ZO treated samples, the mineral plate thickness was higher in callus tissue than in the fracture cortex. Also, crystallinity was higher in callus tissue compared to the fracture cortex in ZO treated samples. It might be postulated that, since bisphosphonates inhibit osteoclast activity and the degree of bone resorption decreases, the mineral crystals are allowed to grow more freely and become larger, which could be seen in the increasing mineral plate thickness and crystallinity. We found that union was not achieved in the NaCl group, despite some sparse bone formation in the gap area. On the contrary, all samples in the BMP-7 + ZO groups were judged to be completely healed. These findings are consistent with our previous studies using the same animal model with a larger number of animals per group.5,26,40 In this study, the most noticeable limitation is the small number of samples. However, the current results are encouraging and suggest clear and significant differences in composition and mineral structure between the control cortex and fracture callus. A small number of samples is generally used in SAXS studies of bone.5,13,16 For example, Pleshko-Camacho et al. had one sample in their study,5 whereas Rinnerthaler et al. used two samples9 and, in a more recent study, Bünger et al. had 12 samples divided into four treatment groups.15 In the study by Liu et al., the callus tissue of four different aged sheep was investigated using SAXS.16 The main reason for the small number of samples is that the measurements are time consuming and that synchrotron beam time is highly limited. Another limitation is the spatial resolution of the SAXS scanning, which was 200 μm in this study. A more focused beam would allow better spatial resolution, which in turn would be beneficial when measuring the finely structured callus tissue. Additionally, the data were averaged over the sample thickness. The optimal thickness of SAXS samples ranges from below 100 up to 600 μm.24 The thickness in this study was 300 μm and, although it is somewhat thicker than the beam size used (200 μm), it is still within these limits.

5 Conclusion

The current study contributes to the highly limited data on FTIR and SAXS on fracture healing and hard callus development and discusses important aspects of the use of SAXS on the newly formed bone. The methods indeed provide complementary information.
Although FTIR enables reporting of the composition, the SAXS data enable assessment of the mineral plate orientation and thickness, as well as the orientation of the collagen fibers. In conclusion, significant differences were found in the molecular composition and mineral structure between the fracture callus, fracture cortex, and control cortex. These techniques may also prove to be useful for the evaluation of the effects of therapies on fracture healing. Taken together, the results indicate that SAXS is feasible when evaluating the mineral structure of the bone fracture callus and highlight the complementary nature of the two techniques.

Acknowledgments

This work was supported by the European Commission (FRACQUAL-293434), the Swedish Agency for Innovation Systems, the Swedish Research Council through the Linnaeus Center Organizing Molecular Matter, MAX IV Laboratory, the Foundation of Greta and Johan Kock, the strategic funding of the University of Eastern Finland, and the National Doctoral Programme of Musculoskeletal Disorders and Biomaterials (TBDP).

References

**Mikael J. Turunen** is a postdoctoral researcher at the University of Eastern Finland, Kuopio, Finland, where he also received his PhD degree in medical physics in 2013. He has studied the composition, microstructure, and mineral structure of bone using a variety of analytical methods including spectroscopy, microscopy, tomography, and X-ray scattering.

**Sebastian Lages** received his PhD degree from the University of Paderborn, Paderborn, Germany, and was a postdoctoral researcher at the Department of Physical Chemistry, Lund University, Lund, Sweden. He studies the behavior of polymer, colloid, and nanoparticle formulations using scattering, calorimetric, and rheological methods.

**Ana Labrador** is a researcher at the SAXS beamline at the MAX IV Laboratory, Lund, Sweden. She received her PhD in physics from Aarhus University, Denmark, in 2001. Since then she has worked on synchrotron diagnostic systems in Barcelona and on the construction and operation of the BM16 beamline at ESRF, France, and has been involved in Spanish synchrotron science covering small molecules, cultural heritage, metallic glasses, and high pressure.

**Ulf Olsson** is a professor in physical chemistry at Lund University. His research is focused on soft matter, mainly applying various scattering methods to the study of the structure, interactions, and dynamics of self-assembling colloids.

**Magnus Tägil** is a consultant and professor at the Department of Orthopedic Surgery at Lund University, Lund, Sweden. He has been active in bone research, both clinically and experimentally, with the main focus on fractures and how to improve fracture healing.

**Jukka S. Jurvelin** is a professor of medical physics at the Department of Applied Physics, University of Eastern Finland. He received his PhD degree in 1993 from the University of Kuopio, Finland, and worked as a postdoctoral researcher from 1993 to 1995 at the Mueller Institute for Biomechanics, Bern, Switzerland. His research interests include the development of quantitative biomechanical and imaging methods for sensitive diagnostics of osteoporosis and osteoarthritis.

**Hanna Isaksson** is an associate professor in biomechanics at Lund University, Lund, Sweden.
She received her PhD degree in biomedical engineering from Eindhoven University of Technology in 2007, and was thereafter a postdoctoral researcher at the University of Eastern Finland, Kuopio, Finland (2008 to 2011). Her research focuses on the assessment of bone quality using imaging, spectroscopic, and diffraction methods in combination with computational modeling.
The Regional Data Platform (RDP): The data provided in this report was generated through the SoCal Atlas application of SCAG’s new Regional Data Platform (RDP). The RDP represents a revolutionary approach for facilitating collaborative interagency data sharing and for supporting local planning activities. The RDP is intended to enhance transparency in the local and regional planning processes, while also serving to promote inter-jurisdictional collaboration and data standardization. The RDP is designed to facilitate more equitable, efficient, and sustainable planning at all levels. The RDP may be accessed through the SCAG website at: [hub.scag.ca.gov](http://hub.scag.ca.gov) SoCal Atlas: SCAG’s SoCal Atlas application is an interactive, web-based mapping tool, integrated as an element of the RDP, that is specifically focused on the development and analysis of local and regional datasets. SoCal Atlas allows local agency planners, businesses, and members of the public to easily visualize data over a variety of geographies and topics through a collection of maps, graphics, and statistics. The tool leverages data available through various sources, including SCAG, the U.S. Census, and Esri. SoCal Atlas enables users to explore a wide range of local and regional indicators including housing, employment, transportation, and demographics at various geographical scales. SoCal Atlas may be accessed at: [https://rdp.scag.ca.gov/socal-atlas](https://rdp.scag.ca.gov/socal-atlas) Local Profiles Reports: Since 2011, SCAG has produced individualized ‘Local Profiles’ reports every two years for each of our member jurisdictions. While the reports will not be produced this year, SCAG is providing access to the 2021 Local Profiles dataset, featuring current information for each local jurisdiction, through the RDP Regional Hub Content Library: [hub.scag.ca.gov](http://hub.scag.ca.gov) Local Information Services Team (LIST): For more information regarding the RDP or SoCal Atlas, contact the SCAG LIST at: [list@scag.ca.gov](mailto:list@scag.ca.gov) THE REGIONAL DATA PLATFORM IS HERE! WHAT IS THE REGIONAL DATA PLATFORM? The RDP is a revolutionary system designed to facilitate interagency collaboration in the sharing of data and to support local and regional planning activities at all jurisdictional levels. WHAT ARE THE GOALS OF THE RDP? - To strengthen local planning practices through the provision of modern planning tools and the sharing of best practices to support the local General Plan update process - To enhance the regional planning process by streamlining the collection and integration of data between local agencies and SCAG - To promote transparency and interagency collaboration to foster a more inclusive, equitable, and sustainable regional planning practice WHO IS THE RDP FOR? - Local Jurisdiction Planners - Consultants, Academia - Developers, Other Data Stakeholders - Public, Partners, Regional Stakeholders BENEFITS TO LOCAL JURISDICTIONS TODAY! 
- Access standardized regional datasets, information, and resources for planning via the Regional Hub - Access to web-based tools and templates to support specific planning workflows, including Housing Element updates - Access to out-of-the-box Esri tools including ArcGIS Urban, Hub, Pro, and Business Analyst to support a wide range of common planning and resident workflows - Ability to submit feedback or request one-on-one technical assistance through SCAG’s Local Information Services Team (LIST) - A more streamlined experience for contributing, validating, and updating local jurisdiction data shared with SCAG - Ability to engage and collaborate with other local jurisdictions across the region to share best practices - Participate in a growing community of planners throughout Southern California in the sharing of data, resources, and best practices RDP FEATURES TWO KEY COMPONENTS 1. Planning and engagement tools for local jurisdictions 2. Local data exchange (LDX) system for data sharing and enhanced workflows ACCESS THE RDP TO TAKE ADVANTAGE OF: - Regional Hub – one-stop access to data, tools, and information as well as a platform for two-way engagement - SoCal Atlas – explore spatial data, statistics, and maps across topics and geographies from live databases - Parcel Locator – find and discover information about specific parcels in your community - HELPR 2.0 – evaluate parcels for potential residential development (e.g. to support local Housing Element Updates) - General Plan Update Site Templates – preconfigured and ready-to-use templates for General Plan updates - LDX Tools & Workflows – contribute to and directly edit regionally significant datasets for Connect SoCal (RTP/SCS) 2024 - Technical Assistance Request – provide feedback and request one-on-one technical assistance CONTACT US For questions, please reach out to our Local Information Services Team (LIST) at list@scag.ca.gov. COMPLIMENTARY SOFTWARE LICENSES As part of the RDP, SCAG is providing all member agencies with a complimentary suite of Esri licenses to support planning activities and build technological capacity throughout the region. Licenses are available for assignment to one designated staff member per jurisdiction.
The suite includes:
- ArcGIS Online
- ArcGIS Pro
- ArcGIS Urban
- Business Analyst Web

TIMELINE
SUMMER 2021
- Complimentary RDP software licenses available at: https://license-rdp.scag.ca.gov
- Housing Element Parcel Tool (HELPR) 2.0 available
- Collaborative tool testing with pilot jurisdictions
EARLY 2022
- RDP 1.0 launch at hub.scag.ca.gov
- LDX soft launch at hub.scag.ca.gov/pages/LDX
- One-on-one technical assistance available
SUMMER 2022
- RDP 1.5 launch with enhanced functionalities and tools
- LDX complete launch with improved data and system
- One-on-one technical assistance available

SOUTHERN CALIFORNIA ASSOCIATION OF GOVERNMENTS REGIONAL DATA PLATFORM

**SCAG Key Indicators**
**SCAG Region**
- **18,824,382** 2020 Total Population
- **487** 2020 Persons Per Square Mile
- **6,651,919** 2020 Total Housing Units
- **2.99** 2020 Average Household Size
- **47%** 2021 Renter Occupied Housing Units (%)
- **8%** 2021 Population Age 25+ No Diploma (%)
- **33%** 2021 Population Age 25+ Bachelor’s Degree or Higher Education (%)
- **13%** 2019 Households Below the Poverty Level (%)
- **$77,430** 2021 Median Household Income
- **$628,825** 2021 Median Home Value

*Data Source: 2020 data are from the 2020 Decennial Census PL-94 redistricting file and have been processed by the California Department of Finance. 2021 data are Esri estimates (additional information on Esri demographics can be found [here]). 2019 data are from the American Community Survey (ACS) and have been processed and published by Esri.*

Key Indicators
Big Bear Lake, San Bernardino County
2020 Total Population: 5,046
2020 Persons Per Square Mile: 809
2020 Total Housing Units: 9,452
2020 Average Household Size: 2.14
2021 Renter Occupied Housing Units (%): 44%
2021 Population Age 25+: No Diploma (%): 6%
2021 Population Age 25+: Bachelor’s Degree or Higher Education (%): 29%
2019 Households Below the Poverty Level (%): 14%
2021 Median Household Income: $60,590
2021 Median Home Value: $594,570

Data Source: 2020 data are from the 2020 Decennial Census PL-94 redistricting file and have been processed by the California Department of Finance. 2021 data are Esri estimates (additional information on Esri demographics can be found here). 2019 data are from the American Community Survey (ACS) and have been processed and published by Esri.

## Housing Statistics
### Big Bear Lake, San Bernardino County
<table> <tbody> <tr> <td>Total Acres</td> <td>4,116</td> </tr> <tr> <td>2020 Total Population</td> <td>5,046</td> </tr> <tr> <td>2020 Total Housing Units</td> <td>9,452</td> </tr> <tr> <td>2021 Owner Occupied Housing Units</td> <td>55%</td> </tr> <tr> <td>2021 Renter Occupied Housing Units</td> <td>45%</td> </tr> </tbody> </table>

### Final 6th Cycle Regional Housing Needs Assessment (RHNA) Allocation
<table> <tbody> <tr> <td>Total RHNA Allocation</td> <td>212</td> </tr> <tr> <td>Very Low Income (&lt;50% of AMI)</td> <td>50</td> </tr> <tr> <td>Low Income (50-80% of AMI)</td> <td>33</td> </tr> <tr> <td>Moderate Income (80-120% of AMI)</td> <td>37</td> </tr> <tr> <td>Above Moderate Income (&gt;120% of AMI)</td> <td>92</td> </tr> </tbody> </table>

Data Source: 2020 data are from the 2020 Decennial Census PL-94 redistricting file which has been processed by the California Department of Finance. 2021 data are Esri estimates (additional information on Esri demographics can be found [here](#)). RHNA information is from SCAG.
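The RHNA figures above are internally consistent: the four income-category allocations sum to the jurisdiction’s total allocation. The following minimal Python sketch is an illustrative check only (not part of the SCAG report); the figures are copied from the table above.

```python
# Illustrative consistency check for the Final 6th Cycle RHNA Allocation table.
# The income-category allocations should add up to the total RHNA allocation.
rhna_by_income_category = {
    "Very Low Income (<50% of AMI)": 50,
    "Low Income (50-80% of AMI)": 33,
    "Moderate Income (80-120% of AMI)": 37,
    "Above Moderate Income (>120% of AMI)": 92,
}
total_rhna_allocation = 212  # Total RHNA Allocation reported for Big Bear Lake

category_sum = sum(rhna_by_income_category.values())
assert category_sum == total_rhna_allocation  # 50 + 33 + 37 + 92 == 212
print(f"Income categories sum to {category_sum} units, matching the reported total.")
```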
For additional housing data for SCAG jurisdictions, please refer to the Pre-certified Local Housing Data Reports developed for 6th cycle housing element updates.

**Employment Statistics**
**Big Bear Lake, San Bernardino County**

**Employment**
- **96%** - 2021 Employed Civilian Population Age 16+
- **27%** - 2021 Blue Collar Workers
- **55%** - 2021 White Collar Workers
- **18%** - 2021 Service Workers
- **4.3%** - 2021 Unemployment Rate

**Commute**
- **11%** - 2021 Workers who commute 7+ hours weekly
- **79.4%** - 2019 Workers who drive alone to work

**Business**
- **592** - 2021 Total Business Establishments
- **7,994** - 2021 Total Employees

**Daytime Population**
- **5,302** - 2021 Workers
- **2,614** - 2021 Residents

Data Source: 2021 data are Esri estimates (additional information on Esri demographics can be found [here](#)). Specific information on the categorization of White Collar, Blue Collar, and Service employees can be found [here](#), along with additional information on Daytime population counts [here](#). 2019 data are from the American Community Survey (ACS) and have been processed and published by Esri.

COVID-19 Vulnerability Indicators: Social and Health
Big Bear Lake, San Bernardino County

Population by Age
- Under 5: 859
- 6 To 17: 2,954
- 18 To 64: 1,169
- 65+: 247

Insurance Coverage Population
- Without Insurance: 4,535
- With Insurance: 673

Disability Population
- Not in Labor Force: 130
- Unemployed: 214
- Employed: 20

Likelihood of Health Conditions of Adults and Seniors
- Senior Diabetes Type 2: 16.01%
- Adult Diabetes Type 2: 11.00%
- Senior Heart Disease: 22.80%
- Adult Heart Disease: 5.95%
- Senior High Blood Pressure: 60.05%
- Adult High Blood Pressure: 34.21%
- Senior Obese: 22.55%
- Adult Obese: 31.42%

Race/Ethnicity (NH = Non-Hispanic)
- Hispanic: 3,568
- NH White: 50
- NH African American: 63
- NH Asian: 14
- NH Native Hawaiian: 14
- NH Two or More Races: 0
- NH Some Other Race: 1,475

Data Source: SCAG, Census ACS 2013-2017 and 2014-2018, TCAC/HCD 2018, InfoUSA 2016, and State Controller 2018. 2020 data are from the Decennial Census PL-94 redistricting file which has been processed by the California Department of Finance. Homeless population data processed by SCAG from county/city data sources conducting point-in-time counts, e.g. social services departments or homeless services authorities. COVID-19 has exposed health disparities and work on health equity is critical; California’s Healthy Place Index application is a great tool to assess a community’s existing conditions. For questions, please contact Tom Vo at Vo@scag.ca.gov.
## Residence-Based Employee Occupations
<table> <thead> <tr> <th>Occupation</th> <th>Employees</th> </tr> </thead> <tbody> <tr> <td>Protective Service</td> <td>28</td> </tr> <tr> <td>Farming, Fishing, and Forestry</td> <td>40</td> </tr> <tr> <td>Production</td> <td>42</td> </tr> <tr> <td>Personal Care and Service</td> <td>64</td> </tr> <tr> <td>Building and Grounds Cleaning and Maintenance</td> <td>93</td> </tr> <tr> <td>Healthcare Support</td> <td>123</td> </tr> <tr> <td>Office and Administrative Support</td> <td>194</td> </tr> <tr> <td>Construction, Extraction, and Maintenance</td> <td>222</td> </tr> <tr> <td>Transportation and Material Moving</td> <td>243</td> </tr> <tr> <td>Food Preparation and Serving Related</td> <td>258</td> </tr> <tr> <td>Sales and Related</td> <td>277</td> </tr> <tr> <td>Professional and Related</td> <td>280</td> </tr> <tr> <td>Management, Business, and Financial Operations</td> <td>342</td> </tr> </tbody> </table>

## Workplace-Based Employees
<table> <thead> <tr> <th>Industry</th> <th>Employees</th> </tr> </thead> <tbody> <tr> <td>Transportation and Warehousing</td> <td>127</td> </tr> <tr> <td>Other Services (w/o Public Administration)</td> <td>164</td> </tr> <tr> <td>Arts, Entertainment, and Recreation</td> <td>173</td> </tr> <tr> <td>Retail Trade</td> <td>345</td> </tr> <tr> <td>Accommodation and Food Services</td> <td>1,200</td> </tr> </tbody> </table>

## Tax Revenues
<table> <thead> <tr> <th>Source</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>Total Tax Revenues</td> <td>$18,780,564</td> </tr> <tr> <td>Sales and Use Tax</td> <td>$7,073,552</td> </tr> <tr> <td>Transient Occupancy Taxes</td> <td>$567,336</td> </tr> <tr> <td>Secured and Unsecured Property Tax</td> <td>$2,279,260</td> </tr> <tr> <td>Property Tax In-Lieu of Vehicle License Fees</td> <td>$5,350,348</td> </tr> </tbody> </table>

## Tax Revenues (%)
<table> <thead> <tr> <th>Source</th> <th>Percentage</th> </tr> </thead> <tbody> <tr> <td>Sales and Use Tax</td> <td>28.49%</td> </tr> <tr> <td>Transient Occupancy Tax</td> <td>37.66%</td> </tr> <tr> <td>Secured and Unsecured Property Tax</td> <td>3.02%</td> </tr> <tr> <td>Property Tax In-Lieu of Vehicle License Fees</td> <td>3.02%</td> </tr> </tbody> </table>

## 2020 Total Population and Housing Units
<table> <thead> <tr> <th>Category</th> <th>Number</th> </tr> </thead> <tbody> <tr> <td>Population</td> <td>5,046</td> </tr> <tr> <td>Housing Units</td> <td>9,452</td> </tr> </tbody> </table>

## 2020 Median HH Income
<table> <thead> <tr> <th>Category</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>Median HH Income</td> <td>$51,014</td> </tr> </tbody> </table>

## 2020 Total Senior Population
<table> <thead> <tr> <th>Category</th> <th>Number</th> </tr> </thead> <tbody> <tr> <td>Senior Population</td> <td>1,169</td> </tr> </tbody> </table>

## 2020 Total Sales and Use Taxes
<table> <thead> <tr> <th>Category</th> <th>Amount</th> </tr> </thead> <tbody> <tr> <td>Sales and Use Taxes</td> <td>$2,279,260</td> </tr> </tbody> </table>

## 2020 Food Stamps/ SNAP
<table> <thead> <tr> <th>Category</th> <th>Number</th> </tr> </thead> <tbody> <tr> <td>Food Stamps/SNAP</td> <td>208</td> </tr> </tbody> </table>

## Data Source
Data Source: SCAG, Census ACS 2013-2017 and 2014-2018, TCAC/HCD 2018, InfoUSA 2016, and State Controller 2018.
2020 data are from the Decennial Census PL-94 redistricting file which has been processed by the California Department of Finance. Homeless population data processed by SCAG from county/city data sources conducting point-in-time counts, e.g. social services departments or homeless services authorities. COVID-19 has exposed health disparities and work on health equity is critical; California’s Healthy Place Index application is a great tool to assess a community’s existing conditions. For questions, please contact Tom Vo at Vo@scag.ca.gov.

COVID-19 Vulnerability Indicators: Housing and Transportation
Big Bear Lake, San Bernardino County
5,046 2020 Total Population
9,452 2020 Total Housing Units
$51,014 Median HH Income
1,169 Senior Population
208 Food Stamps/SNAP
$2,279,260 Sales and Use Taxes

[Charts: Occupied Housing Units; Overcrowded Units (>2 Persons/Room); Cost-Burdened Units (Pay >50%); Means of Transportation; High Segregation & Poverty Area; Below Poverty Level Householder; Tenure by Vehicles Available (Owner); Tenure by Vehicles Available (Renter); Household Income Distribution Based on County AMI]

Data Source: SCAG, Census ACS 2013-2017 and 2014-2018, TCAC/HCD 2018, InfoUSA 2016, and State Controller 2018. 2020 data are from the Decennial Census PL-94 redistricting file which has been processed by the California Department of Finance. Homeless population data processed by SCAG from county/city data sources conducting point-in-time counts, e.g., social services departments or homeless services authorities. COVID-19 has exposed health disparities and work on health equity is critical; California’s Healthy Place Index application is a great tool to assess a community’s existing conditions. For questions, please contact Tom Vo at Vo@scag.ca.gov.

SCAG SoCal Atlas, version 1.0
Released February 2022

About this Tool
SoCal Atlas is an interactive web-mapping tool developed by SCAG that allows planners, residents, and other users to explore data across geographies and topics through a collection of web maps and statistics. The tool leverages data primarily from the American Community Survey (ACS), the U.S. Census Bureau’s Decennial Census, and Esri. Additional related data is made available through the Regional Hub, and additional information regarding the platform can be found on the SCAG website. This tool was developed in collaboration with SCAG’s RDP, an ongoing “system of systems” effort to promote regional data sharing and collaboration and to provide long-range planning tools to all SCAG local jurisdictions (see https://scag.ca.gov/rdp for RDP details). Please submit a request for technical assistance for additional help or to schedule a detailed one-on-one technical assistance session.

Using SoCal Atlas
This tool allows users to explore a range of indicators—from housing to transportation—at different geographical levels. Three components make up the SoCal Atlas application: dynamic maps, infographics, and data. As with other applications (e.g., HELPR, Parcel Locator, etc.), users can select the local jurisdiction they wish to review.

1. **Maps** Users can change the collection of maps they are viewing by selecting the ‘Maps’ button of the category of interest in the left-hand panel. Additional maps in the collection can be explored by selecting the map name in the toolbar above the map. Additional information about each map, including the source, vintage, and a link to learn more, can be found in the right-hand ‘About the Map’ panel.
2.
**Infographics** Users can also explore statistics, presented as interactive infographics, for each of the categories by selecting the ‘Statistics’ button from the left-hand panel. Each infographic has interactive elements; users can click on the various icons to learn more, as well as print or export the infographic using the controls in the upper right corner.
3. **Data** The data and maps used in the SoCal Atlas (along with additional related data where applicable) can also be accessed by clicking the ‘Data’ button for each category. This will bring users to the Regional Hub, where they can download or make further use of the data.

About the Maps

Key Stats Maps

**Census Population** Where are the population centers? This map shows population counts from the 2020 Decennial Census for counties, tracts, blocks, and block groups in the SCAG region. The data is sourced from the U.S. Census Bureau PL-94 redistricting file (processed and published by Esri) and the vintage is 2020. Learn more about this map.

**Household Income** What is the predominant household income? This map shows the predominant household income for counties, tracts and block groups in the SCAG region. The data is sourced from Esri’s 2018 Demographic estimates. Learn more about this map.

**Educational Attainment** What is the predominant educational attainment level? This map shows the predominant educational attainment for counties and tracts in the region. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

**Poverty Ratio** Where are there people living in poverty? This map compares the number of people living above the poverty line to those living below it for counties and tracts in the SCAG region. Data is from the 2018 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

Housing Maps

**Tenure and Vacancy** What is the prevalence of owner-occupied, renter-occupied, and vacant housing units? This map shows whether owner-occupied, renter-occupied, or vacant housing is more prevalent at the county and tract level for the SCAG region. Data is from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

**Renter Cost** Where are people affected by high rent costs? This map shows the percent of renter households who are spending more than 50% of their income on rent. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

**Owner Occupancy and Cost** Where do people own homes and what is the home value? This map shows a comparison of owner-occupied housing and the median home value for counties, tracts, and block groups in the region. The data is sourced from Esri’s 2018 Demographic estimates. Learn more about this map.

**Housing Structure Type** What is the predominant type of housing structure? This map shows the predominant structure type of housing units, and the number of housing units in structure, for counties and tracts in the region. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

**Housing Age** When was the housing stock built? This map shows the predominant time period in which housing units were built for counties and tracts within the region. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.
**Employment Maps**

**Predominant Industry** What is the predominant industry? This map shows the predominant industry for counties and tracts in the SCAG region. Industry classifications are based on business NAICS codes. The data is sourced from Esri’s 2018 Demographic estimates with business data from Data Axle. Learn more about this map.

**Business Risk** Where are businesses at risk in an economic downturn? This map shows where there are concentrations of businesses at risk in the event of an economic downturn for counties, tracts, and block groups in the SCAG region. Businesses at risk are those with a NAICS code in one or more of the following categories: Clothing/Accessory stores, General Merchandise stores, Arts/Entertainment/Recreation, Accommodation, and Food Service/Drinking Places. The data is sourced from Esri’s 2019 Demographic estimates with business data from Data Axle. Learn more about this map.

**Means of Transportation** What is the predominant means of transportation to work? This map shows the most popular alternatives to driving alone to work by county and tract within the SCAG region. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

**Commute Time** What is the predominant commute time? This map shows the predominant commute time ranges by county and tract for the region. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

**COVID-19 & At-Risk Maps**

**COVID-19 Trends** What are the COVID-19 trends in my area? This map shows COVID-19 trends and active cases by county within the SCAG region. Additional information on methodology can be found [here](#). Data is sourced from the Johns Hopkins University CSSE US Cases by County dashboard (with USAFacts providing Utah county-level data) and is updated weekly. Learn more about this map.

**Uninsured Populations** Where are the uninsured? This map shows the percentage of the population with no health insurance by county and tract for the SCAG region. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

**Senior Populations** Where do seniors live? This map shows the count and population share of seniors (age 65+) for counties and tracts within the region. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

**Disability Prevalence** What is the prevalence of people with a disability in my area? This map shows the count and percent of individuals with a disability for counties and tracts within the region. The data is sourced from the 2019 American Community Survey (ACS) and has been processed and published by Esri. Learn more about this map.

Additional Features
1.) Map Capture A screenshot of each map can be quickly captured from within the application by clicking the Map Capture icon in the upper left corner of each map.
2.) Request Technical Assistance If you need help with using the app or have additional questions, click this button to request technical assistance.
3.) App Switcher Quickly access other RDP applications by selecting the app switcher icon. A list of all available applications will appear.
4.)
Jurisdiction Specific URL Parameters for Embedding Users can embed or share a version of SoCal Atlas with the jurisdiction pre-selected by specifying the jurisdiction name through a URL parameter. To do this, simply take the base URL of SoCal Atlas (https://rdp.scag.ca.gov/socal-atlas) and add the parameter '?jurisdiction=' followed by the jurisdiction name without any prefixes (e.g., Town of or City of). An example of the pattern is below for the City of Fullerton: https://rdp.scag.ca.gov/socal-atlas/?jurisdiction=Fullerton If the jurisdiction name is multiple words, add ‘%20’ in place of a space, such as below: https://rdp.scag.ca.gov/socal-atlas/?jurisdiction=San%20Bernardino
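For convenience, this pattern can also be generated programmatically. Below is a minimal, illustrative Python sketch (not an official SCAG utility): it assumes only the base URL and the '?jurisdiction=' parameter described above, and URL-encodes the jurisdiction name so that multi-word names receive the ‘%20’ encoding automatically.

```python
from urllib.parse import quote

# Base URL of SoCal Atlas, as given above.
BASE_URL = "https://rdp.scag.ca.gov/socal-atlas"

def socal_atlas_url(jurisdiction: str) -> str:
    """Build a SoCal Atlas link with the given jurisdiction pre-selected.

    Pass the jurisdiction name without prefixes such as 'City of' or
    'Town of'; quote() replaces each space with '%20'.
    """
    return f"{BASE_URL}/?jurisdiction={quote(jurisdiction)}"

print(socal_atlas_url("Fullerton"))
# -> https://rdp.scag.ca.gov/socal-atlas/?jurisdiction=Fullerton
print(socal_atlas_url("San Bernardino"))
# -> https://rdp.scag.ca.gov/socal-atlas/?jurisdiction=San%20Bernardino
```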
Alexander A. Chulok, Dmitry V. Suslov, Evgeny Ia. Moiseichev A CONTEMPORARY FRAMEWORK FOR NATIONAL SECURITY RELATED TECHNOLOGICAL RISKS MINIMISATION BASIC RESEARCH PROGRAM WORKING PAPERS SERIES: SCIENCE, TECHNOLOGY AND INNOVATION WP BRP 34/STI/2015 This Working Paper is an output of a research project implemented within NRU HSE’s Annual Thematic Plan for Basic and Applied Research. Any opinions or claims contained in this Working Paper do not necessarily reflect the views of HSE.

Alexander A. Chulok,\textsuperscript{1} Dmitry V. Suslov,\textsuperscript{2} Evgeny Ia. Moiseichev\textsuperscript{3} A CONTEMPORARY FRAMEWORK FOR NATIONAL SECURITY RELATED TECHNOLOGICAL RISKS MINIMISATION\textsuperscript{4}

The last several decades have seen a steady broadening of the spectrum of national security issues, along with an intensification of international competition driven by advances in high technologies and other factors. The national security agenda nowadays comprises not only defence issues per se, but also economic, social, cultural and other aspects. All of this is strongly influenced by technological trends, and the very possession of critical technologies has become a pressing national security issue. Thus, we are witnessing a gradual convergence of the national security and technological agendas. Advocating a proactive approach to tackling national security risks of a technological nature, the authors attempt to outline a contemporary, innovative methodology for assessing, harnessing and counteracting such risks. Their key recommendation is to join the forces of theorists and practitioners in the fields of both national security and science, technology and innovation (STI) policy to overcome the corresponding challenges.

Keywords: national security, economic development, balance of power, science, technology and innovation policy, Russia. JEL codes: O3

\textsuperscript{1} National Research University Higher School of Economics. Institute for Statistical Studies and Economics of Knowledge. International Research and Educational Foresight Centre. Deputy Director. E-mail: achulok@hse.ru \textsuperscript{2} National Research University Higher School of Economics. School of World Economy and International Affairs. Center for Comprehensive European and International Studies. Deputy Director. E-mail: dsuslov@hse.ru \textsuperscript{3} National Research University Higher School of Economics. Institute for Statistical Studies and Economics of Knowledge. OECD – HSE Partnership Centre. Analyst. E-mail: emoiseichev@hse.ru \textsuperscript{4} This Working Paper is an output of a research project implemented within NRU HSE’s Annual Thematic Plan for Basic and Applied Research in 2014. Any opinions or claims contained in this Working Paper do not necessarily reflect the views of HSE.

Introduction

When analysing technologies’ effect on national and international security, particular attention is frequently paid to their impact on a country’s military and economic strength – and, as a result, on the changing balance of power in the world\(^\text{1-4}\). Fully mature technologies are the primary subject of such analysis, for they are the ones that most affect society and its overall security in the short run. But as soon as a technology becomes mature, it becomes less susceptible to change and to external impact – which increases the political controversy between those who wish to maintain the status quo and the proponents of more active application of innovations\(^\text{5}\).
Such controversy appears on the agenda in the form of challenges and threats associated with the use of already existing technologies, and of potential challenges and threats posed by emerging technologies. A sectio aurea (golden mean) principle should therefore be kept in mind: overestimating potential threats from emerging technologies, for instance, may negatively affect a country’s innovation potential\(^\text{6}\). Evidence shows that this can occasionally result in countries lagging behind in developing new technologies, with innovative breakthroughs happening only when external challenges outweigh the internal controversy\(^\text{7}\). Correctly assessing the emerging challenges posed by advanced technologies requires moving on from analysing their revolutionary effect on society to studying more complex, not always linear, and chronologically extended technology development trajectories, and their impact on society\(^\text{8}\). At the same time, analysis of the challenges and threats which result from inadequate application of existing technologies, and from unsatisfactory development of innovative breakthrough technologies, must be integrated into the overall framework for economic, political, and social development.

During the previous decades a trend towards relocation of production facilities to emerging Asian nations became apparent. According to the International Monetary Fund (IMF), the contribution of Asian economies (except Japan) to the global economy in 2014 will amount to 29.6%\(^\text{9}\). It should be noted that in 1820 Asia (except Japan) generated 56.2% of the global GDP\(^\text{10}\). The accelerating industrial revolution in Western Europe became a key factor in the Asian economies’ declining role. By the middle of the 20th century their share had declined to 15.5%\(^\text{10}\). Asia started to reclaim its positions in the 1970s, when productivity growth in developed economies began to slow down. While in 1891-1972 (the period of accelerated development of the second industrial wave’s technologies, such as electric energy, communications, chemistry, etc.) labour productivity in the USA grew by 2.36% a year, in 1972-2013 the relevant figure was only 1.39%\(^\text{11}\). Automated production technologies developed in the 1940s – 1950s did not have a significant effect on productivity growth during the second half of the 20\(^{\text{th}}\) century, due to their immaturity\(^\text{12}\). In 1987 Robert Solow, Nobel prize winner for developing economic growth theory, noted that “you can see the computer age everywhere but in the productivity statistics”\(^\text{13}\). As early as the 1960s, American companies started to relocate labour-intensive production to Asian countries, followed in subsequent decades by more complex production processes such as plate manufacturing and certain R&D operations\(^\text{14}\). But proximity to production processes promotes the emergence of new ideas to further develop these processes and products\(^\text{15}\). E.g. in 2009 American production companies’ R&D expenditures amounted to 70% of all American R&D costs\(^\text{16}\). Innovations followed the production facilities. While in the 1990s Asia’s share in global R&D expenditures (except Japan) was 13.2%, according to the Economist Intelligence Unit’s forecast it will reach 33.8% in 2016\(^\text{17}\). Relocating production abroad allows companies to achieve tactical objectives quickly. E.g. companies that had already experienced declining revenues were more willing to move offshore\(^\text{18}\).
Still, in the longer term offshoring is not always the most efficient solution: (1) savings on cheap labour have their limits; (2) increased distances between production and R&D facilities negatively affect intellectual property; and (3) the middle class emerging in developing countries creates new markets\(^\text{18}\). Companies now wish to enter new markets quickly, make personalised products, and react quickly to changing demand on local markets. These days companies have new requirements for products and their distribution techniques. In the 1960s – 1970s, against the background of the first wave of emerging information technologies (IT), companies automated specific processes of their value creation chains; with the progress of the internet they merged these processes into integrated systems, and now IT are becoming integral components of products\(^\text{19}\). Traditional product components – hardware and software – are now supplemented with connectivity. Connectivity allows information to be exchanged between the product and its manufacturer, user, and other systems. The need to process data quickly forces companies to organise their production and management processes in a different way. The “lean production system”, whose roots lie in Toyota’s production system, is becoming increasingly popular; it allows labour and capital productivity to be increased. Software products reduce the complexity of organisational processes and open opportunities for integrating people and robots into production. Closer integration between the production of equipment and software makes it possible to reduce the complexity of the physical world and to deal with emerging problems “with atoms or bits”. Productivity is increased by optimising product teams’ work “bottom-up”. Another source of competitiveness is the more efficient use of natural resources. Their production costs are growing, and the localisation of production processes coupled with increasing environmental pollution is accelerating this trend even further. Companies must reconsider their approaches to optimising their use of natural resources.

Thus three areas connected with technological development were identified (see Attachment A), which will significantly affect national security: (1) development and application of advanced production and organisation technologies; (2) technologies for efficient utilisation of resources; and (3) development of human capital. For each area, the top 10 technologies were selected, to be validated through situational analysis – which will make it possible to formulate specific policy recommendations to minimise threats to Russia’s national security connected with insufficiently developed advanced technological structures, including possible “asymmetrical responses”.

**Technological gap and Russia’s national security**

The National Security Strategy for the Russian Federation until 2020 defines national security as “a state of affairs when individuals, the society, and the country are protected from internal and external threats, which allows to provide constitutional rights, liberties, decent quality and level of life, ensure sovereignty, territorial integrity and sustainable development for the Russian Federation, defence and security of the country”. This secure state primarily depends on the ability of the country and society to adapt to external (and partially to internal) challenges, and to make use of the opportunities these environments provide in order to protect the vital interests of individuals, the society, and the country mentioned in the above definition – i.e.
rights and liberties, decent quality and level of life, sovereignty, territorial integrity and sustainable development for Russia, its defence and security. Thus it is the ability to adapt to the changing external environment, and to the challenges and opportunities it creates, that ultimately defines any country’s national security, including Russia’s. Of utmost importance here are timely monitoring, analysis, and forecasting of the nature and direction of challenges and opportunities (which change rapidly, especially in recent times), and the country’s ability to shape policies oriented towards current and future – as opposed to past – challenges. Preparing successfully for the last war guarantees losing the next one. If government policy targets the challenges of yesterday, which by now have disappeared or for various reasons become irrelevant, that would at the very least result in wasting increasingly scarce financial and human resources, and make the country less secure against current and future threats.

At the same time, the nature of challenges is directly connected with the development of technologies. Firstly, technological progress creates new tools, formats, and platforms for international competition; note that as S&T progress accelerates, the areas of more intense competition also change more rapidly, and countries frequently turn out to be unprepared for that. Technological development (evolutionary and abrupt alike) is the main driver of changes in the external environment – adapting to which, and transforming which according to the country’s interests, are the main goals of national security policy. Therefore having various technological gaps with other nations dooms the country to lose in the relevant competitive areas, and hinders its adaptation to the changing environment – thus leaving it exposed to new risks and threats. Particularly dramatic consequences arise if the country “oversleeps” not just the emergence of a specific new competitive area, but a new round of S&T revolution, the emergence of a new technology structure. An example is the collapse of the USSR, which competed relatively successfully with the West in the arms race, but had hopelessly lagged behind in cybernetics and ICT.

Secondly, technological development in itself increases countries’ “muscle” potential, being the main driver of changes in the balance of forces in the world; thus it creates risks for countries whose international positions deteriorate as a result of this process. Radical changes in the global balance of forces frequently result from the emergence of certain “breakthrough” technologies, which for a certain period of time remain available only to a single country, or to a limited group of nations. Examples include the emergence in the 15th century of long-term meat storage technology, which made long sea voyages possible; the emergence of steamships in the 19th century; the emergence of nuclear weapons in the middle of the 20th century, and of high-precision conventional weapons in the late 20th century; shale oil and gas production technologies in the early 21st century; etc. Despite frequent protestations to the contrary, the “zero sum game” law still applies to international relations, and the relative strengthening of some countries (due to technological progress) leads to the relative weakening of others. At the same time the interconnection between technologies and national security risks is non-linear, and frequently controversial.
New technologies may both strengthen and weaken countries – including the ones where such technologies have been developed. There are numerous examples of new technologies changing people’s behavioural models, extending their opportunities and skills, and increasing the demands people make of the government – which is not always able to meet them (not by a long shot). This results in a crisis of confidence in the government and in the so-called “loyalty deficit”, occasionally leading to a political crisis. The country’s (and its institutions’) ability to adapt plays a crucial role here – the ability to change quickly enough and to produce the social benefits the society begins to require as a result of technological progress.

Thirdly, there is a group of technologies which themselves significantly affect specific segments of national security. These are the technologies the heads of the RF national security agencies should pay the most attention to. Particular risks to national security emerge if the country starts lagging behind in developing these technologies. Such gaps sharply reduce its potential in the areas which will define its international position, and increase the country’s vulnerability to the external environment. Meanwhile its competitors get new opportunities to weaken the country even further. Relevant examples again are plentiful. The USSR’s lagging behind in ICT and cybernetics – which were among the highest-growth sectors of the global economy in the 1980s and 1990s – determined the overall crisis and then the downfall of the centrally planned economy, the socialist economies’ increasing gap with the West, and, as a consequence, the collapse of the country. Many energy exporting countries’ (including Russia’s) lagging behind in high-technology energy production (such as the development of shale and hard-to-reach fields, “green” energy, etc.) simultaneously creates risks of reduced production of Russian energy resources and of shrinking markets for Russian energy exports, with increased competition in the global energy sector. This in turn poses a challenge to the stability of Russian energy exports, and can potentially lead to a significant reduction of the country’s revenues. Russia’s gap with other countries in agriculture increases its dependency on imports of more competitive agricultural products – thus undermining the nation’s food security and generally making it more vulnerable to external challenges. An example of such dependency is the situation which has emerged on the domestic Russian food market, and in Russia’s relations with its Eurasian integration partners, after Moscow limited imports of various agricultural products from Western countries.

Accordingly, the following seems to be quite important for achieving national security: a) identifying the technologies and technology groups most relevant to national security (in this case Russia’s); b) out of them, identifying areas where Russia lags behind the leaders; c) identifying related risks and threats; and d) identifying steps to be taken to overcome the risks (see Annex 1).

**The changing nature of security**

The nature of challenges and threats to Russia’s security has lately been undergoing a fundamental transformation – which in turn is a result of a radical transformation of international competition. Firstly, the competition – between everybody and everybody else – is becoming significantly tougher.
It’s not just competition between the “old” Western and “new” non-Western centres of gravity, but also between the great powers – members of the same group. The sharp aggravation of relations between Russia and the West in 2014, which has already deteriorated into a systemic confrontation (especially with the USA), makes this competition particularly severe and raises the sides’ stakes (especially for Russia). Secondly, and most importantly, the spheres where competition becomes particularly tough and crucial for the international situation and security are also changing. From the military sphere, competition now flows into the economic, humanitarian, and information spheres. The military sphere and the need to maintain the defence potential of course remain important. Military power is still the main factor which ultimately ensures countries’ survival. However, due to such factors as nuclear weapons (which minimise the probability of war between the great powers) and the political awakening of peoples (which reduces the ability to control other countries and peoples through the use of military force, or to achieve sustainable political results using it), military power has ceased to be the factor which primarily determines countries’ position in the international arena, their political influence, and their resilience against external challenges. The rate and quality of economic growth, the ability to present an attractive image and to lead in information rivalry, countries’ stability in the areas most important for survival (such as food and energy), and the quality of human capital are increasingly becoming the determining factors of that kind. Therefore they predominantly define countries’ security, and gaps in these areas create – and will continue to create in the foreseeable future – the biggest threats and challenges to Russia’s security. That is where the toughest competition is taking place.

A vivid illustration of the role economic growth and its quality play in achieving security is the major shift in the balance of forces which took place in the early 21st century: the rise of non-Western centres of power and the relative weakening of the West – ultimately of an economic, not military, nature. The USA’s lead in the military area – far ahead of all other centres of military force – was at its greatest at the end of George Bush Jr.’s presidency – and that was also the period when, overall, the country was at its weakest since the end of the “cold war”. It was no coincidence that the Obama administration and the most influential American experts alike saw the recovery of dynamic economic growth as the key to restoring the USA’s international positions and strengthening its security. Richard Haass, president of the Council on Foreign Relations, speaking about the main threats to the USA’s national security, named neither Russia, China, nor Iran, nor international terrorism or the proliferation of weapons of mass destruction, but America’s foreign debt and budget deficit. Obama’s National Security Strategy of 2010 was largely devoted to internal issues – economic recovery, correcting economic imbalances, and, most importantly, improving the quality of the American education and health care systems as the key factors affecting the quality of human capital. The document clearly stresses the idea that reviving American innovation, inventiveness, and dynamism would make the biggest contribution to achieving the USA’s security and leadership in the 21st century – as opposed to military expenditures, wars, or military presence.
At the same time the USA and the West still dominate the information and media sphere. Their ability to create and present an attractive image is one of the main factors of their international influence in a world which is now much more polycentric than it used to be. This potential allows the USA and the West to set the global political agenda, to establish rules and a media mood favourable to them, artificially boosting the potential of certain players (including themselves) while presenting others as weaker than they are, and to pursue their political and economic interests through information campaigns and warfare – by setting appropriate agendas for the world’s mass media. An example of the latter is the political crisis and the coup d’état in Ukraine in 2014, which was largely caused by a targeted media campaign to support “Euromaidan” and to discredit and blackmail the then Ukrainian authorities. Another example of countries’ vulnerability to media campaigns and to narratives thrust upon people by international mass media – and of how that vulnerability may turn out to be fatal to national security – are the events of the “Arab spring”. The regimes in Tunisia, Egypt, Yemen, Libya, and Syria were simply unable to counter the information warfare which mobilised their populations for mass protests and riots. Note that mass protests have recently become an almost universal phenomenon, common to developing countries (both the weak ones and the new emerging centres of power) and developed nations alike.

A good illustration of the shift of the toughest competition from the military to the economic, information, and humanitarian areas is the pressure the West is currently applying to Russia because of the Ukrainian crisis, and the nature of the unilateral sanctions imposed against it. These are not of a military nature. The actual scale of the measures the USA and NATO have taken in response to the RF’s actions towards Ukraine was minuscule. The main blow was dealt in the economic and information spheres. The economic and financial sanctions imposed by the West against Russia in 2014, as well as other (even more painful) steps, such as the accelerated reduction of oil prices, are such that they cannot force Russia to change its foreign policy in the short term, but they are quite efficient in weakening the RF in the medium and long term – pushing it out of the global competition and, in an even longer term, creating in Russia the preconditions for a profound internal political crisis, and even a breakdown of the country. At the same time the recent events showed that Russia remains most vulnerable in the economic and financial areas. It has already suffered significant economic losses (and there are more to come), and almost immediately lost the information war with the West – having allowed it to create in the global media space an image of Russia as an “unhinged” revisionist imperial power – a kind of big rogue country. The Ukrainian crisis clearly demonstrated that the main threats to Russia’s security are concentrated not in the military area – far from it – but in the economic and information spheres.

An important factor in any country’s national security in today’s world is a relatively new element of information security – cybersecurity: the protection of information and data, and the security of management and control systems for automated production and technological processes.
Cyber-attacks and cyber warfare – when foreign states or other players obtain unauthorised access to databases, information, and control systems for the purposes of stealing them or gaining control over them – are becoming increasingly important threats to the security of most countries in the world, including Russia, and especially of industrially developed ones which possess advanced infrastructures and powerful, technologically advanced armed forces. Such actions may result in a technological disaster, the use of various kinds of weapons including nuclear ones, the collapse or paralysis of certain sectors of the economy or industries, or the malfunctioning of crucial infrastructures or production facilities. Taking into account the high vulnerability of individuals, societies, and states to such cyber-attacks, and the fact that these attacks (performed by countries, unofficial players, and individuals) are becoming increasingly common in international relations, putting in place a security system against them becomes one of the highest priorities of national security policy. And there is an obvious connection between this area and technology, first of all ICT.

But ultimately, the most important factor which ensures national security in the new competitive international environment is the quality of human capital. High-quality human capital makes it possible to: a) create “breakthrough” technologies capable of taking over and extending markets, find new sources of growth and thus ensure competitiveness; b) promote diversification of the economy by discovering new grounds for growth and development, thus providing economic stability (which is particularly relevant in the sanctions situation); c) make the society less vulnerable to the effects of radical destructive ideas and factors (political and religious radicalism, populism, etc.); d) ensure the overall adaptability of the society, economy, and public institutions to the changing external environment. In today’s world, the development and stability of civil society (which is yet another direct function of human capital) are much more important to national security than the government’s ability to control societal processes.

**Components of Russia’s national security**

In line with the changes in the nature of security threats described above, five areas can be identified which most significantly affect Russia’s security, and whose state is in turn defined by Russia’s technological development or lack thereof.

*Rate and quality of Russia’s economic growth*

This area includes the following components:
- **Economic growth rate.** Today, leadership and influence in the world, and countries’ ability to present an attractive growth model, directly depend on their economic growth rate. A high growth rate also enables countries to accumulate financial reserves, solve socio-economic problems, and implement important infrastructural projects.
- **Quality of economic growth.** No less important are the factors behind economic growth. If a high growth rate is achieved by the extensive exploitation of resources (be they natural or labour), then despite its short-term attractiveness, in the longer term such a growth model cannot support sustainable economic growth – and therefore economic security.
A much stronger effect is produced by growth achieved through increasing productivity, applying innovations, creating new sectors of the economy and new markets, producing innovative products capable of conquering and holding markets, and leading in the development of new technological structures. This is the model which most depends on technological development.
- **Diversification of the economy.** An economy which has a number of sectors, and thus doesn’t depend on the situation in any particular one, is much more sustainable. Economic diversification is also closely connected with technological progress.
- **Availability of all relevant production inputs (natural, human, and financial resources, a developed R&D system, access to external markets and a high-capacity domestic market, transport infrastructure).**

Thus the rate and quality of economic growth directly depend on the development and application of advanced production and organisation technologies – on Russia’s leading or, on the contrary, lagging behind the world leaders in developing the technologies which are now, and will continue to be in the short to medium term, the most important ones for achieving high quantitative and qualitative parameters of economic growth.

*Energy security*

According to the Russian Energy Strategy Until 2030, approved on 13 November 2009, energy security is defined as “a state of affairs when the country, its citizens, the society, the state, and the economy are protected from threats to reliable fuel and energy supply”. The document also describes the main energy security parameters: sufficient supply of resources, economic availability of resources, and the environmental and technological admissibility of their exploitation. Each of these parameters is directly relevant to technological development, and to Russia’s lagging behind the world leaders in certain technological areas important for these parameters. It would make sense to add to these parameters: retaining existing, and entering new, markets for Russian fuel and energy products; adopting a management regime for energy markets which would best reflect Russian interests; and energy efficiency. Adding these seems important considering the role the fuel and energy sector plays in the Russian economy and in the structure of federal budget revenues. Losing some of the energy export markets, or accepting an unfavourable model of Russia’s energy relations with partners, would create major risks for the country’s economic stability and development. In turn, energy efficiency directly affects the survival, sustainability, and competitiveness of the fuel and energy sector, especially in the medium to long term; the ability to increase the supply of energy resources through their more efficient production and usage; and the ability to increase exports through more efficient use of energy inside the country. According to the strategy, the unrealised potential for energy saving through organisational and technological measures amounts to as much as 40% of total internal Russian energy consumption. The ability to achieve these parameters also directly depends on the Russian fuel and energy sector’s technological development.
The latter affects such factors as the ability to maintain production volumes when old mineral deposits are exhausted; the efficient exploitation of mineral reserves; the competitiveness of Russian energy products given the overall increase of competition on the global energy markets; the reliability of supply by the Russian fuel and energy companies; Russia’s ability to build the relevant infrastructure; energy efficiency; etc. The Russian Energy Strategy highlights the following main problems with the country’s energy security: the high wear rate of the fuel and energy sector’s capital assets (in the electric power and gas industries it is almost 60%, in the oil refinery industry – 80%); low investment in the development of the fuel and energy industries (during the last five years investment in this sector amounted to about 60% of the level indicated in the Russian Energy Strategy Until 2020); the over-dependence of the Russian economy and energy industry on natural gas, whose share in the internal consumption of energy resources is about 53%; the mismatch between the fuel and energy sector’s production capacity and the global S&T level, including environmental standards; and the insufficient development of energy infrastructure in Eastern Siberia and the Far East. Note that the document stresses the gap between the Russian fuel and energy sector and the global S&T development level as one of the biggest challenges and problems.

*Food security*

By its very nature, the availability of food is a key national security factor. The chronic shortage of food in the USSR in the 1970s – 1980s played the primary role in the crisis of the socialist development model, the loss of the competition with the capitalist world, and the union’s demise and collapse. The population took the USSR’s breakdown quite lightly not because it was against the idea of a union state within the 1991 borders, but because the breakdown was perceived as a natural “by-product” of the move to a market economy – which people expected to “fill the shelves in shops”. The Russian Federation’s Food Security Doctrine of 30 January 2010\(^\text{26}\) defines food security as “a state of affairs in the country and in the economy when the Russian Federation’s food independence is assured; physical and economic availability of food products matching the requirements of the Russian Federation laws on technical regulation is guaranteed to each citizen, in the amounts no less than reasonably required for active and healthy lifestyle”. The document clearly states that food security is one of the most important national security aspects, a major element of the country’s demographic and social policy, and a factor in maintaining national sovereignty. The Doctrine indicates that to ensure its food security, Russia must maintain the following threshold shares of domestically produced agricultural, fishery, and food products in the total turnover on the relevant domestic markets: grain at least 95%; sugar at least 80%; vegetable oil at least 80%; meat and meat products (in meat equivalents) at least 85%; milk and dairy products (in milk equivalents) at least 90%; fishery products at least 80%; potatoes at least 95%; table salt at least 85%. Food security directly depends on technological development.
The Doctrine clearly states that one group of risks to Russia’s food security is “technological risks caused by lagging behind developed countries in terms of technological level of the Russian production capacity; different requirements to food safety, and to a system for monitoring and controlling compliance with them”.

*Information security*

The current understanding of information security includes two major aspects. The first is the ability to set the media agenda, to present images and dominate the global media, and to successfully wage information warfare and counter information campaigns by the opposition. The second is cybersecurity: the ability to protect information and data (ranging from individuals’ personal bank cards to national-level information constituting state secrets), and the security of management and control systems for automated production and technological processes at crucial infrastructures. Both of these components are becoming extremely relevant and important for national security as a whole. The information sphere is becoming a major area of countries’ competition with each other; information campaigns and warfare are turning into efficient tools for the internal destabilisation and even destruction of countries (as in the case of Syria), and for weakening them internationally (the media campaign against Russia which began even before the coup d’état in Ukraine). And cyber-attacks today are a much more reliable way to bleed and disarm countries than “classic” military action.

According to the Russian Information Security Doctrine approved on 9 September 2000, information security is broadly defined as “the state of affairs when the country’s national interests in the information sphere, which are determined by the sum of balanced interests of individuals, the society, and the state, are protected”\(^\text{27}\). The interests of the state in this sphere are defined as “creating conditions for harmonious development of Russian information infrastructure, for exercising human and civil constitutional rights and liberties concerning accessing and using information, in order to ensure stability of Russia’s constitutional system, sovereignty, and territorial integrity, its political, economic, and social stability, unconditionally ensure the rule of law, law and order, and promote mutually beneficial international cooperation”. According to this document, Russian national interests in the information security area include, among other things, “information support of the Russian Federation’s national policy, to provide to Russian and international public reliable information about the Russian Federation’s national policy”; “development of advanced information technologies and the Russian information industry, including informatisation and telecommunication technologies”; and “protection of information resources from unauthorised access; ensuring security of information and telecommunication systems”. Accordingly, the threats to Russia’s information security mentioned include the disruption of Russian state media’s activities aimed at informing Russian and international audiences about Russian policy and its assessment of international events; the low efficiency of such activities due to various reasons; and unauthorised access to Russian information and communication networks and databases. Securing these interests and minimising these threats directly involves technologies, primarily ICT.
Various cybersecurity-related issues are described in more detail in the document entitled "The main aspects of the national policy on security of automated production and technological processes' management and control systems for crucial infrastructures in the Russian Federation", approved on 3 February 2012. Security of crucial information infrastructures is defined there as "the overall state of crucial information infrastructure components, under which subjecting it to computer attacks does not result in heavy adverse consequences". Practically every way to increase this security is technological in nature; the measures include the development of Russian ICTs and minimising (as far as possible) dependence on foreign technologies.

Quality of human capital

Quality of human capital is a fundamental factor which ultimately determines countries' national security and their place in the international system. It includes several components, the most important being the population's health (which in turn is determined by environmental risks and the level of the healthcare system) and education, which affects not just the population's skill level but its morals, civil conscience, legal culture, and other factors vitally important to the country's security and wealth. Thus the factors which most profoundly affect the quality of human capital as the foundation of national security include: the development level and efficiency of the healthcare system; the ability to minimise environmental risks to people's health; and the level of education. All three of these areas are directly connected with technological progress, and each of them poses risks in case the RF lags behind the world leaders in developing critical technologies. Thus the quality of healthcare is directly affected by technological development, and a gap in this sphere not only creates additional risks to the population's health (due to the inability to treat certain diseases) but also increases the country's dependency on imported foreign-made drugs and equipment, which, if the international situation deteriorates and such supplies are terminated, may put the stable operation of the healthcare system at risk. The country's ability to minimise environmental risks to people's health is also directly determined by the level of technologies required to minimise harmful emissions, manage industrial waste, and make production and transport systems more environmentally friendly. Finally, the level and quality of education are also closely interconnected with technological development. On the one hand, certain technologies, first of all ICT, create new opportunities for education (distance learning, increased access to literature, etc.). On the other hand, the country's ability to develop, apply, or adopt certain advanced or cutting-edge technologies, including the ones which significantly affect its national security, depends on the level of education and on the country's potential to train professionals capable of maintaining these technologies. Where such abilities and potential are absent, acquiring technologies may weaken rather than strengthen the security of such a country, because it will become completely dependent on other nations for technologies critical to its security.

Conclusions and discussion

Currently, all countries, including Russia, are faced with brand new global challenges related to fundamental changes in production processes, the transformation of socio-economic processes and cultural values, and the redistribution of profit in global value chains.
New technologies and their corresponding markets are constantly emerging and evolving. At the same time, the scope of national security issues has broadened significantly and now also includes economic, social, cultural, moral, ethical, ecological and other issues. These changes drive the adaptation of states to the changing global economic and political environment. On the one hand, technologies determine to an extent the emergence of new centres of power. On the other hand, technology issues are beginning to penetrate the political agenda as states try to tackle national security risks related to technological development. Thus, as this study has shown, the science, technology and innovation (STI) agenda and the national security agenda are progressively converging. In order to find practical and theoretically grounded solutions to technology-related risks to national security, one needs to bring together both policy-makers and scholars in the corresponding areas. This is not about treating technologies as a hostile element and trying to get rid of their impact. Quite the opposite: they need to be tamed, harnessed and exploited to the full for the maximal benefit of society. Ultimately, it is the responsibility of policy-makers and scholars to work together to solve challenges arising in the sphere of national security using STI policy instruments.

REFERENCES

27. Доктрина информационной безопасности Российской Федерации [Information Security Doctrine of the Russian Federation]. Available at: <http://www.scrf.gov.ru/documents/6/5.html> (date of access: 02.03.2015).

ANNEX 1
Situational Analysis Methodology

The first step in the situational analysis, after the object of study has been determined (identifying and describing the main components of national security), is to identify their connections with technologies. This implies identifying the technologies which most significantly affect each of the five national security areas. In the course of the situational analysis, the Scenario Group developed a special methodology, based on the nested matrices technique, for identifying technology groups which have a major effect on the rate and quality of economic growth, energy security, food security, information security, and quality of human capital. Using this methodology, the Scenario Group obtained the following results. The second step is to identify, out of the technologies selected and presented above, those in respect of which Russia is, firstly, already lagging behind the world leaders, and secondly, those where it potentially may begin to lag behind during the period until 2030. Participants of the situational analysis are asked to comment on this selection and, if necessary, indicate which technologies should be crossed out and which, on the contrary, should be added. Participants of the situational analysis are also asked to identify technologies, out of those most significantly affecting the five national security components presented, in respect of which Russia may lag behind the leaders during the period until 2030. Please name potential gap areas for each national security area, in the following format:
- for economic development, Russia's potential lagging behind in developing the following technologies seems to be particularly important:
- for energy security, Russia's potential lagging behind in developing the following technologies seems to be particularly important:
- etc.
The third stage of the situational analysis is to identify and describe the risks which Russia's lagging behind (both actual and potential) in developing certain technologies creates for the relevant national security segment (each of the five). The risks should be identified for each national security area individually, i.e. their consideration should start from "national security segments", not from "technologies". Since Russia's lagging behind in various technologies probably poses similar risks, participants of the situational analysis are not asked to identify risks for each specific technology. Rather, it would make sense to identify the risks typical of Russia's lagging behind in respect of the technology groups relevant for particular national security areas (risks for economic development posed by Russia's lagging behind in ICT; risks for education posed by Russia's lagging behind in ICT; etc.). Identifying the risks posed by Russia's lagging behind in developing specific technologies would be useful if such risks were different from the ones typical for the respective technology group and could potentially significantly damage Russia's national security. Risks should be identified and analysed in the following format (Tab. 1):

Tab. 1 - Questionnaire for expert panel (situational analysis)

1. Risks for Russia's economic development posed by technology gap in:
- Technology group 1: definition, description, and analysis of the risk
- Technology group 2: definition, description, and analysis of the risk
- Technology group 3: definition, description, and analysis of the risk
2. Risks for Russia's energy security posed by technology gap in:
- Technology group 1: definition, description, and analysis of the risk
- Technology group 2: definition, description, and analysis of the risk
3. Risks for Russia's food security posed by technology gap in:
- Technology group 1: definition, description, and analysis of the risk
- Technology group 2: definition, description, and analysis of the risk
4. Risks for Russia's information security posed by technology gap in:
- Technology group 1: definition, description, and analysis of the risk
- Technology group 2: definition, description, and analysis of the risk
5. Risks for Russia's healthcare system posed by technology gap in:
- Technology group 1: definition, description, and analysis of the risk
- Technology group 2: definition, description, and analysis of the risk
6. Risks for Russia's ability to minimise the negative effect of environmental degradation on people's health posed by technology gap in:
- Technology group 1: definition, description, and analysis of the risk
- Technology group 2: definition, description, and analysis of the risk
7. Risks for Russian education posed by technology gap in:
- Technology group 1: definition, description, and analysis of the risk
- Technology group 2: definition, description, and analysis of the risk

The fourth stage of the situational analysis is to prepare recommendations on minimising and (ideally) overcoming the risks to the relevant national security segments posed by the technological gaps identified at the previous stage, and on using the opportunities available to Russia in the various national security areas. The experts are asked to answer the following questions:
- Which technologies must be developed first of all to minimise the risks (including those where Russia is already lagging behind, and those where it potentially may lag behind during the period until 2030)?
- Which technological opportunities are available to Russia to improve the relevant national security segment?
- Which technologies or technology groups must Russia develop (or which technology gaps must it overcome) independently, on its own?
- Which technologies or technology groups should be acquired abroad (through purchasing or commercial espionage)?
- Which risks should be overcome not by developing or acquiring technologies and/or closing the technology gap, but asymmetrically? What exactly should this asymmetrical response be?
As at the previous stages, the recommendations should be developed for each specific national security area individually. The last stage of the situational analysis is to identify the effect of the Western countries' (USA, EU) unilateral sanctions on Russia's lagging behind in technologies which are particularly important for its national security. As at the previous stages, the analysis will be conducted for each national security area individually. Specifically, the situational analysis participants will be asked to answer the following questions:
- How can the unilateral sanctions already imposed on Russia by Western countries affect technological gaps in the relevant national security areas?
- What new sanctions against Russia can be expected if the objective is to maximise Russia's technological gap with the advanced countries in the relevant national security areas?
- What are the ways to minimise damage from the sanctions (both current and potential) concerning Russia's technological gap with advanced countries?

Alexander A. Chulok
National Research University Higher School of Economics. Institute for Statistical Studies and Economics of Knowledge. International Research and Educational Foresight Centre. Deputy Director; E-mail: achulok@hse.ru

Dmitry V. Suslov
National Research University Higher School of Economics. School of World Economy and International Affairs. Center for Comprehensive European and International Studies. Deputy Director; E-mail: dsuslov@hse.ru

Evgeny IA. Moiseichev
E-mail: emoiseichev@hse.ru

Any opinions or claims contained in this Working Paper do not necessarily reflect the views of HSE.

© Chulok, Suslov, Moiseichev, 2015
ENVIRONMENT DIRECTORATE
ENVIRONMENT POLICY COMMITTEE
Working Party on Economic and Environmental Policy Integration
Working Group on Economic Aspects of Biodiversity

LOSS OF VALUE OF THE SZIGETKÖZ WETLAND DUE TO THE GABČÍKOVO-NAGYMAROS BARRAGE SYSTEM DEVELOPMENT: APPLICATION OF BENEFIT TRANSFER IN HUNGARY

CASE STUDY: HUNGARY

FOREWORD

This report was contributed as a national case study to an OECD project on the Applied Evaluation of Biodiversity, being carried out by the Working Group on the Economic Aspects of Biodiversity. It does not necessarily reflect the views of individual OECD Member countries or of the OECD Secretariat. It is published under the responsibility of the Secretary General.

### TABLE OF CONTENTS

Executive Summary
1. General Description
1.1 Description of the ecosystem
1.2 Sources of the degradation of the ecosystem in Szigetköz
1.3 Main objective of the evaluation of the deterioration of the Szigetköz wetland
2. Cause and Source of Changes that have occurred in the Ecosystem
3. Impacts exercised on the Ecosystem
4. Impacts exercised on the Economy and Social Welfare
4.1 The role of information and uncertainty in the valuation process
4.1.1 Uncertainty inherent in environmental impacts
4.1.2 Uncertainty inherent in the valuation method
4.2 The method of assessment: Transposition of CVM research results by benefit transfer
4.2.1 Benefit transfer
4.2.2 Surveys available for the application of benefit transfer
4.2.3 Calculations relating to benefit transfer valuation
5. Environmental Policy Steps on the Basis of the Findings
6. Conclusions Relevant from the Aspect of Environmental Policy
References

LOSS OF VALUE OF THE SZIGETKÖZ WETLAND DUE TO THE GABČÍKOVO-NAGYMAROS BARRAGE SYSTEM DEVELOPMENT\textsuperscript{1}: APPLICATION OF BENEFIT TRANSFER IN HUNGARY

by Marjainé Zsuzsanna Szerényi, Eszter Kovács, Sándor Kerekes and Mónika Kék

Executive Summary

This case study describes the calculation of the loss of value of an important wetland area in Hungary using the method of benefit transfer. The Szigetköz wetland is situated in the north-western part of Hungary along the River Danube.
The construction of a barrage system between Hungary and Slovakia, and especially the diversion of water from the main riverbed, caused considerable damage to the valuable ecosystem of the Szigetköz wetland. An Austrian contingent valuation study was used as the basis for the benefit transfer. The Austrian survey estimated the willingness to pay for a wetland (similar to Szigetköz) as the area of a future national park, with and without a hydroelectric power station. The findings were adjusted to the Hungarian case, taking into account the differences between the two situations (e.g. the GDP of the two countries, the territory of the wetland area and the estimated rate of degradation). The NPV (net present value) of the loss of value was calculated using discount rates of 2% and 3.5%. As a result, the value of the loss is between HUF 42 and 252 billion at a 2% discount rate and between HUF 24 and 144 billion at a 3.5% discount rate.

Ecosystem or species studied: riverbank wetland
Valuation method(s) used: benefit transfer

\textsuperscript{1} The authors thank Mr. Ståle Navrud for all his valuable comments. The last phase of research was implemented through the funds provided by the Government Commissioner's Secretariat for the Danube, the Prime Minister's Office. We would like to thank the Secretariat for allowing us to publish the data contained in this study.

Main lessons learned: The findings of the benefit transfer were used in cost-benefit analyses of different possible alternatives for the Gabčíkovo-Nagymaros barrage system. This is a major step toward a much broader economic analysis, including the estimation of the natural capital invested. The results showed that although Hungary's investment in the project was of a smaller scale in terms of material assets, a rather significant part of its natural capital was invested. Due to the known deficiencies of the benefit transfer method, the results might be tested with an original WTP survey.

Contact details of delegate: Ms. Eszter Kovács, Hungarian Ministry for Environment, Authority for Nature Conservation, Költő u. 21, Budapest, H-1121, Hungary, Tel: (36-1) 395-2605/122, Fax: (36-1) 200-8880, email: kovi@mail2.ktm.hu.

1. General Description

1.1 Description of the ecosystem

Szigetköz is the largest, almost natural flood-plain area left in the entire Upper Danube Valley, and a wetland habitat of outstanding importance. Various habitats have formed here owing to its geological, geomorphological, climatic, water-balance and soil-related endowments. These diverse habitats are responsible for the high levels of biodiversity in the studied area (Szabó et al., 1997, p. 1). As a wetland, Szigetköz is significant not only for the preservation of different levels of biodiversity, but also because it is suitable for the filtration of anthropogenic environmental loads, such as nitrogen and heavy metal contamination. Wetland habitats are being protected all over the world because they can provide these types of valuable ecological services (Szabó et al., 1997; Mészáros, 1996). Throughout Europe, governments are directing significant intellectual and financial resources to maintaining existing wetlands and restoring degraded or nearly destroyed ones (Szabó et al., 1997). Variety is provided by the diversity of plant communities. The flood plain is fragmented by the branches of former valleys, with some ridges and hills built from the sand of the valleys by the dominant north-westerly winds.
The features of this terrain and the valley branches helped create prairie, forest steppe, forest, marsh-water, marshy meadow and meadow ecosystems (Szabó et al., 1997). Table 1 shows the distribution of terrestrial habitats in Szigetköz prior to the intervention. The most significant values of the Szigetköz ecosystem are the following:
- high biological diversity;
- unique flora and fauna;
- characteristic mosaic arrangement of habitats (Mészáros, 1996).

Table 1 Review of terrestrial habitats in Szigetköz
<table> <thead> <tr> <th>Habitats</th> <th>Area size (ha)</th> </tr> </thead> <tbody> <tr> <td>Habitats of the riparian corridors, willow and poplar groves on the active flood area</td> <td>6,500</td> </tr> <tr> <td>Hardwood groves on the active flood area</td> <td>200</td> </tr> <tr> <td>Hardwood fringing forests on the areas protected by dikes</td> <td>1,500</td> </tr> <tr> <td>Wetland on the active flood area</td> <td>2,800</td> </tr> <tr> <td>Wet meadows and hayfields</td> <td>2,600</td> </tr> <tr> <td>Dry forests and meadows</td> <td>1,100</td> </tr> <tr> <td><strong>Total:</strong></td> <td><strong>14,700</strong></td> </tr> </tbody> </table>

2. Our general description of the ecosystem under examination and of the impact produced by the Gabčíkovo-Nagymaros Barrage System was originally based on personal consultation with Dr. Mária Szabó, assistant professor, Dept. of Natural Geography, ELTE University of Budapest. We have also used studies on the monitoring activities supervised by Dr. Szabó. Our expert in zoology was Dr. Ferenc Mészáros, director of the Zoological Department, Hungarian Museum of Natural Sciences. Dr. Zoltán Alexay (assistant professor, Széchenyi István College), the local expert on Szigetköz, also provided tremendous help.

The flora of Szigetköz includes 1,010 species and 80 plant communities, of which 60 are natural and 15 are of relic nature of outstanding value (Mészáros, 1996). The significance of the ecosystem of Szigetköz is also proved by the fact that 20% of all plant species and 30% of all animal species under protection in Hungary can be found here (Mészáros, 1996). The most characteristic representatives of the fauna include, in the first place, worms, arthropods, aquatic mollusks, amphibians and birds (Alexay, 1999).

1.2 Sources of the degradation of the ecosystem in Szigetköz

The source of the damage and degradation of the wetland area in Szigetköz is indisputably the water barrage project on the Danube jointly launched by Hungary and Czechoslovakia. The key events of the construction project are described below; a detailed description is to be found in Chapter 2.
− The bilateral contract on the construction of the Gabčíkovo-Nagymaros Barrage System (GNBS) was signed in 1977 by the Prime Ministers of Hungary and Czechoslovakia. Construction was started in 1978. Two elements of this system, the dam at Dunakiliti and the power station at Gabčíkovo, are located in Szigetköz or in its immediate neighbourhood. In order to mitigate the foreseen damaging consequences of the decreased water supply during future operation, a water supplementation system was built in 1987-89 (Alexay, 1999).
− In 1989, the Hungarian Party stopped the construction of the Nagymaros barrage system and later demolished the built elements. The Slovakian Party insisted on the completion of the project and in October 1992 blocked the Danube riverbed at Dunacsúny (west of Dunakiliti). The fill-up of the reservoir began and the river was diverted into an operational channel.
By applying the so-called variant "C", the role of the Dunakiliti dam was substituted and the facility at Gabčíkovo was put into operation (iid, 1999). As a consequence, the water level dropped in the old Danube riverbed and the wetland area started to dry out.
− In June 1995 an underwater weir was constructed in the large Danube riverbed in order to dam up the water coming from the reservoir and to facilitate the water supply of the branch system by gravitation (Alexay, 1999).
Certain steps of the dam project have brought about significant changes in the wetland ecosystems of Szigetköz that can only slightly be mitigated by the aforementioned supplementary measures.
---
3 The river Danube is a border river between Hungary and Slovakia, where the barrage system was planned. The system, according to the original plan, included on the upper part of the Danube a dam and reservoir at Dunakiliti (Hungary), a hydropower plant at Gabčíkovo (Czechoslovakia) and a connecting operational channel, and on the lower part of the Danube another hydropower plant at Nagymaros (Hungary). The main aim was the generation of electrical power, together with the improvement of flood control safety and provisions for international waterways.
4 The so-called variant "C", in essence, means that the Gabčíkovo project would be implemented only on Slovakian territory, thus offering a substitution for the dam to be built at Dunakiliti; this alternative does not include the barrage at Nagymaros either (iid, 1999).
5 The underwater weir is a simple underwater dam made of gravel, which raises the water level by 3-4 meters and provides more water for the branch system, and thus for the wetlands.

1.3 **Main objective of the evaluation of the deterioration of the Szigetköz wetland**

In 1998 the research team of the Department for Environmental Economics and Technology of the Budapest University of Economic Sciences was commissioned by the Government Commissioner for the Danube of the Prime Minister's Office to prepare a study on the loss of natural capital in Szigetköz. The task included the further specification of the earlier estimation of the capital value of Szigetköz and the creation of its methodological foundation. By that time, water management and ecological experts had concluded that a desirable compromise would be the so-called meandering alternative, which would make the Szigetköz wetland habitats most similar to their status before the dam construction. In our analyses, the impact of two alternatives was considered: a) how great is the loss of value generated by the implementation of variant "C", and b) how great a loss of value would occur if the meandering alternative were implemented. In the present study, the findings of the most recent estimation are presented, but, where necessary, reference is made to former findings.

2. **Cause and Source of Changes that have occurred in the Ecosystem**

The obvious source of recent damage and degradation of the wetland areas in Szigetköz is the barrage system project on the Danube jointly launched by Hungary and Czechoslovakia. The bilateral contract on the construction of the Gabčíkovo-Nagymaros Barrage System (GNBS) was signed in September 1977 by the Prime Ministers of Hungary and Czechoslovakia.
---
6. The research team has been commissioned many times to participate in preparing economic calculations on the barrage system.
The first occasion was in 1994 (see: Kerekes et al., 1994), when the Ministry of Foreign Affairs commissioned an estimation of the Hungarian claim for damages in preparation for the lawsuit at the Court in The Hague. It quickly became clear that the traditional economic and legal concept of damage cannot be used in relation to the GNBS. For that reason, we tried to give an estimation of the loss of value of natural capital within the limits of the scientific knowledge available at the time. After the Court in The Hague announced its decision, the claim to measure the environmental damage suffered by the Hungarian Party was raised again in the negotiations with the Slovak Party. The Assistant Secretary of State of the Ministry for Environment commissioned us to prepare the background material for the estimation. The study was completed in March 1998 (Kerekes et al., 1998), and its findings were discussed with Slovak environmental experts in two rounds of negotiation.
7. The so-called "meandering" version, in essence, means that, although artificial structures would be placed, the natural river branches would be used to reach the appropriate water level, on the one hand, and a water flow relation close to the former status, on the other. The water dammed up at suitable points of the main riverbed by blocks and dams is conducted into a branch, then returned into the main riverbed, and then again into the flood plain branches. From an ecological standpoint this alternative seems to be the best solution, as it creates constant contact between the main stream and the flood plain branches, provides water flow in the branches, and allows free movement for aquatic animals over almost the entire territory (Alexay, 1999).
8. Changes in the status of the Danube and the Szigetköz date back 100 years. They began with river regulation, followed by the building of barrages on the upper section of the Danube, resulting in a significantly deeper main riverbed (Alexay, 1999). The unfavourable processes affecting the wetland habitats thus began long before the project currently under investigation. Yet the Gabčíkovo-Nagymaros project is playing a significant role in this negative change. From an ecological standpoint, drastic changes have taken place within an extremely short period of time.

Two elements of this system, the dam at Dunakiliti and the power station at Gabčíkovo, are located in Szigetköz or nearby; thus the construction had a direct impact on the region (Alexay, 1999). Most of the river branches and islands northwest of Dunakiliti were eliminated during the construction, and forests were cut down on the site of the preparatory buildings set up for construction purposes and on the territory of the dam. In order to mitigate the detrimental consequences of decreased water supplies after the barrage system was put into operation, a water substitution system was built in 1987-89. On the protected side and in Lower Szigetköz, there was no change due to the construction of the dam. In 1989, Hungary stopped the construction of the Nagymaros barrage system and later demolished what had been built up to that point. The situation of the original investment plan was further worsened by the fact that the Hungarian government did not approve the completion of the dam at Dunakiliti and the fill-up of its reservoir, which made it impossible to put the Gabčíkovo barrage into operation.
In response, Slovakia blocked the riverbed of the Danube at Dunacsúny in October 1992 and began to dam up the reservoir and divert the river into an operational channel. By this so-called variant "C", the role that Dunakiliti was to play was substituted, and the operation of the Gabčíkovo project was started (iid, 1999). After the riverbed was blocked, the rate of flow decreased to 10% of the median discharge, and the open subsidiary branches emptied. On the protected side, channels became totally dry, and the water table decreased by 2-3 meters. In order to prevent any further damage, it seemed advisable to put into operation the water substitution system built earlier. Owing to the disputes over the different water substitution plans, the flood plain branches could not receive the necessary quantity of water for 2.5 years, and the dry status of the area became constant, increasing the ecological damage (Alexay, 1999). In June 1995, the underwater weir constructed at river km 1843 in the large Danube riverbed was put into operation to dam the water coming from the reservoir and to facilitate the water supply of the branch system by gravitation (Alexay, 1999). The construction of the underwater weir, however, cannot produce high flood levels; it therefore did not solve the problem of water supply for the marshes and the moorish meadows on the protected side, and the flora and fauna of these areas consequently suffered the highest rate of damage (Alexay, 1999). In 1998, a special fish channel was built at Cikolasziget with the intention of restoring the relationship between the branch system of Szigetköz and the main branch. In the planning phase of the project, until 1994, it was not considered important how and to what extent these changes affected the natural environment. It is then perhaps not surprising that the initial economic calculations, which were not well-founded in other respects either, did not even mention the impacts exercised on natural capital.

### 3. Impacts exercised on the Ecosystem

Based on the findings of the monitoring conducted in the region since 1987, variant "C" of the Gabčíkovo-Nagymaros Barrage System resulted in a rapid and drastic deterioration of the habitats in some places. Due to desiccation, the flood area's ecological potential has undergone significant changes.
---
9. Originally, phytocoenology, flora and vegetation level monitoring was conducted in the area, and ecological monitoring was added to this in 1993, through which anatomical and population level indications of sensitive indicator species were examined. In practice, this facilitated the follow-up of ecological changes occurring with varied "speed" on the different biological organisation levels (Szabó et al., 1997).
10. The ecological potential is a performance potential arising from the individuals of the plant and animal species, from the totality of these individuals (the population), and from the features of their habitats. In the course of evolution, the ecological potential is changed by the environment, but it can also be changed in the course of ontogenesis (Environment Protection Lexicon, Akadémia Kiadó, Budapest, 1993, p. II/147).

The large habitat diversity began to show signs of becoming homogeneous, and the most valuable communities and populations of nearly undisturbed vegetation began to deteriorate, become overgrown with weeds, and die (Szabó et al., 1997). Water-demanding species started to decrease in number, while the coverage of drought-resistant species began to increase (Szabó et al., 1997).
These negative impacts affected the habitats and vegetation units tied to water in Middle Szigetköz. Due to farming, forestry and an increasingly arid climate, the flood area ecosystem in Szigetköz has been partly degraded. Yet, in a number of habitats, especially in the area between the dams, the so-called foreshore, it has been preserved in a nearly natural status. Annual monitoring conducted since 1987 gives straightforward evidence that Szigetköz had most of its ecological potential preserved until the time when the bulk of the water of the Danube was diverted into an operational channel. Due to the construction of the underwater weir, the water supply of the habitats in its vicinity (within about 3-4 km) was slightly improved, but in essence this was true only for a relatively small area falling outside the foreshore. The ecological status of the flood plain, which carries the most significant natural values, has not improved at all in spite of the permanent water substitution. This conclusion is also supported by the data of the sample areas in Middle Szigetköz in 1995-96. Improvement can only be expected from a significantly higher discharge of the Great Danube (approx. 800 m³/s) (Szabó et al., 1997). A significant consequence of the dam project is the lack of regular flooding of the foreshore, which is highly important in the flood plains of both the Great Danube and the Mosoni Danube, primarily in Middle Szigetköz. Water substitution through the construction of the underwater weir has brought about some improvement in areas far from the Great Danube, but as we approach the riverbed of the Danube this favourable impact decreases or even ceases, and this is what has brought about the destruction of the white willow and black poplar groves along the river bank. The lack of floods has also resulted in a significant restructuring of the soft-stem vegetation in the flood plains. At the same time, the number of original protected plant species characteristic of flood plain forests (e.g. Senecio paludosus, Leucojum aestivum) has decreased, and certain species (e.g. Cardamine amara) have completely disappeared (personal information from Szabó). The bush willow and white willow groves on the Great Danube flood plain in Middle Szigetköz have suffered lasting damage, the rate of destruction reaching 50-60% in some places. In response to the construction of the underwater weir, only the white willow has resprouted, and this was of little help to the other tree species (personal information from Szabó). The wet meadows and forests along the Danube have not received sufficient quantities of water since the Danube was diverted, and as a result they have been largely damaged. The lack of mowing also has an unfavourable impact, converting the wet meadow, rich in species before the diversion of the Danube, into an area fully covered by weeds. Water-demanding species characteristic of the meadow are pushed into the background (e.g. the occurrence of Plantago altissima is only sporadic) (personal information from Szabó). The protected side, falling outside the dam, was also affected unfavourably by the diversion of the river: it affected mainly the willow and poplar fringing forests in the lower-lying areas. The wet meadows, flood area moorish meadows and fresh hayfields have become covered by weeds and homogenised, although prior to the diversion of the river they were treasuries of rare species.
Most of the reed marshes exhibit poor growth, and some have been overtaken by weeds (personal information from Szabó). In the restoration of Szigetköz to its original status, the guiding factors are water supply and water dynamics. It is not enough to provide sufficient supplies of water; the river must also reach flow speeds and water level fluctuations like those prior to the diversion (Alexay, 1999). Apart from the unfavourable changes in the flora, negative processes can also be observed in the fauna. The decrease in the number of species and individuals of mollusks on the diverted part of the Old Danube is obvious. Drought-resistant bird species from the protected side continue to establish colonies on the flood plain (Mészáros, 1996). According to Mészáros (1996), two protected dragonfly species under the scope of the Bern Convention (Aeshna viridis, Leucorrhinia pectoralis) have disappeared from this area (owing to the short time scale, we cannot yet speak about extinction). Due to the diversion, the number of different water types decreases, and this variety may even be fully eliminated, which would make most of the aquatic fauna uniform (Mészáros, 1996).

4. **Impacts exercised on the Economy and Social Welfare**

As has been shown, the Szigetköz wetland is one of the types of habitat that are becoming more and more rare not only in Hungary, but all over Europe. Consequently, any negative changes occurring in its state are a loss to both the Hungarian population and the population of Europe. In our assessment, the impacts on the welfare of the Hungarian population have primarily been quantified on the basis of the data and information available, bearing in mind the high degree of uncertainty involved. In the assessment of Szigetköz, both the methods applied and the rate and irreversibility of the changes occurring in the ecosystem carry a high degree of uncertainty. First these uncertainties are explored, and the assessment itself is described only afterwards.

4.1 **The role of information and uncertainty in the valuation process**

4.1.1 **Uncertainty inherent in environmental impacts**

In the valuation of the changes in the value of the flora and fauna, our starting point is the transformation and degradation of the plant communities and the fauna characteristic of the region. It is especially important to point out the relevant uncertainty factors, which are the following (personal information from Mária Szabó):
− The decrease in the number of individuals in the population of a species should not lead us to the conclusion that this species will become extinct in that place in the future. Every species has a specific population cycle characteristic only of that species, which means that the number of individuals may decrease (or increase) even if the environmental factors are unchanged. That is, a species showing a decreasing tendency does not necessarily mean that extinction is imminent or will happen. **But:** if the number of individuals of a plant species with high water demand decreases due to habitat desiccation, the extinction of that species from this habitat can almost be taken for granted.
− The international rule is that a species can be considered extinct in an area if not a single individual is found for 50 years (in Mészáros' view (1996), it is 15 years).
− The number of individuals of any species in the future depends on the original conditions, such as the initial number of individuals, the genetic variability of the species, changes in the ecological factors, and the competitive relations with other species.
− The rate of irreversibility of the changes cannot be easily estimated, because it depends on a number of factors:
− In the case of plant species strongly tied to water, the future changes in the process and their reversibility are fundamentally defined by how long the given habitat is left in a desiccated status. The longer the duration, the higher the probability of irreversibility.
− It also depends on the number of individuals of the population affected by the unfavourable change. The higher the number of individuals, the higher the probability of adaptation.
− An essential aspect is the plasticity of the plant, that is, to what extent it is able to allocate its energies to root growth to seek out the decreasing ground water. With respect to flood plain plants and wetland species, their root system plasticity is not known, because it has never been necessary to examine where the water supply for the vegetation can be considered optimal. A river diversion of this scale has caused unprecedentedly rapid and large-scale desiccation of the habitat.

4.1.2 Uncertainty inherent in the valuation method

Research in recent years shows that none of the benefit transfer methods is able to produce results that would satisfactorily substitute for the findings of a primary assessment (first of all, those of a contingent valuation). Ready et al. (1999) compare people's willingness to pay for similar problems in six European countries using identical tools (questionnaire, survey environment, time). They believe that the differences in willingness to pay cannot simply be explained by the differences in the populations; their final conclusion, however, is not a full rejection of benefit transfer, rather they point out that the application of benefit transfer may introduce considerable distortions into the results. Bergland et al. (1999) were the first to examine directly the potential to transfer benefits appraised in a given area to other areas. An identical problem was examined in two regions of Norway at the same point in time by the contingent valuation method. Their findings show that the transfer of neither the estimated WTP nor the so-called valuation function is reliable, and both may produce distorted results. Owing to the deficiencies of the benefit transfer method and the disputable nature of the applied assumptions, the estimation of the loss of value of the flora and fauna of Szigetköz contains a high rate of uncertainty. Beyond applying the benefit transfer presented below, the valuation of the ecosystem changes generated by the barrage system was also conducted on the basis of the summary valuation contained in the study published in Nature (Costanza et al., 1997). The process of calculation and the detailed results are not discussed here; it is only noted that the results of that assessment were fairly similar to the results of the benefit transfer assessment (for details see Kerekes et al., 1998, 1999). Below, the benefit transfer valuation and its findings are presented.

4.2 The method of assessment: Transposition of CVM research results by benefit transfer

4.2.1 Benefit transfer

In the estimation of the loss of value of Szigetköz, the so-called benefit transfer method\(^\text{11}\) has been used.
The benefit transfer method refers to the transfer of available analysis results - in our case from a contingent valuation\(^\text{12}\) - to an area that can be considered similar to the one the original analyses refer to. We presume that the available results give some sort of assessment of the characteristics of the area to be examined. In general, the benefit transfer method can be used if the following conditions are met:
- the policy site intended to be examined is similar to the study site for which the available findings were obtained;
- the presumed consequences of the change intended to be examined are similar to the consequences of the changes considered in the available findings;
- the valuation methods have been applied with appropriate accuracy and care in the available studies;
- the necessary personnel and material conditions are not available, and there is not enough time to conduct an original examination.
The application of the method requires thorough consideration. It is especially important that the study site and the policy site should be as similar as possible, e.g. in their geographical and ecological characteristics, the changes that have occurred or might occur, the causes of the changes and the policy situation. Lack of time and money was characteristic of the entire research phase. As a result, there was no way to conduct a primary assessment. Consequently, although it was a forced solution, it seemed advisable and inevitable to apply the benefit transfer method.\(^\text{13}\)
---
\(^{11}\) According to Navrud (2000) there are two main approaches to the benefit transfer method: a) the transfer of a unit value (simple unit transfer and unit transfer with income adjustment) and b) function transfer (benefit function transfer and meta-analysis). Since ecological impacts are rather cause and site specific, we used the method of direct benefit transfer, where we transferred the unit value from the study site to the policy site.
\(^{12}\) The contingent valuation method is a direct environmental valuation technique where people are directly approached and asked about their willingness to pay for a natural resource or an environmental policy change (for details, see e.g. Mitchell and Carson, 1989). This method has an especially important role in the preservation of habitats and biodiversity, as it is also able to give valuations of these assets independent of their use.
\(^{13}\) The Prime Minister's Office - probably due to the proposal emphasised in this research - commissioned, in spring 2000, a study to survey, by contingent valuation, the willingness to pay for the conservation of the natural capital in Szigetköz.

4.2.2 **Surveys available for the application of benefit transfer**

Unfortunately, valuation by contingent valuation of freshwater wetlands similar to Szigetköz has been documented in only a few case studies in the international technical literature. Examination of saltwater areas is much more frequent, but they cannot be compared to freshwater wetlands. Two assessments are available on European wetland areas: a) a survey in East England, carried out in the early 1990s, on the conservation of "Broadland", a complex freshwater wetland area (Turner et al., 1993)\(^{14}\), and b) a survey in Austria on saving a wetland area along the Danube by the creation of a national park (Kosz, 1996). The Austrian survey has been chosen as the basis for the benefit transfer.
**Survey in Austria (according to Kosz (1996))**

A survey was conducted in Austria in 1993 to measure Austrian citizens' willingness to pay for the creation of a national park along the Danube. Perhaps one of the largest riparian wetlands of Europe can be found on the Vienna-Bratislava section of the Danube in Austria, and it provides habitats for a large number of threatened and special species. The plans for the creation of an internationally recognised national park (in accordance with IUCN recommendations) on 11,500 ha of the wetland date back several decades. In parallel with the national park plans, however, plans appeared for the construction of a hydroelectric power station in the area, whose implementation would cause irreversible damage to the wetland and would also decrease the area of the national park to be created. Two plans were available concerning the site of the power station: one at Wolfsthal, the other close to Wildungsmauer, the latter exactly in the middle of the wetland. The survey on the willingness to pay was conducted by the method of contingent valuation, and focused on the Austrian citizens' willingness to pay for three different implementation plans. In our opinion, the findings of the study conducted in Austria can be applied in the valuation of Szigetköz for the following reasons:
- it evaluates a wetland situated along the Danube, of outstanding importance in Europe, that is very similar to the wetland habitats in Szigetköz;
- the survey conducted in Austria focused on the willingness to pay for the conservation of the wetland area as a national park (that is, the conservation function is at the forefront);
- since the problem in Austria was also associated with the construction of a hydro-power plant, it can be considered similar to the Hungarian decision-making situation;
---
\(^{14}\) Broadland is considered an outstanding wetland in East England, having three nature reserves on its territory, two of which are included in the Ramsar list. The objective of the WTP survey was to state the monetary value of the conservation of the wetland in the event that a programme is realised to decrease the danger of flooding (for more details see Turner et al., 1993). The method of contingent valuation was used on two separate samples: users and non-users. Although the case in England also regards a wetland like Szigetköz, we believe that the application of benefit transfer in that case would generate distortions, because a) the survey is relevant to the structure and characteristics of a freshwater wetland, but not to a riparian wetland along the Danube; b) there are similarities in the land-use patterns of the two areas, but in England recreation and agricultural use are dominant, compared to conservation being the most important aspect in Szigetköz; c) in England, the conservation strategy focuses on protection against the inflow of sea water, which, in our opinion, cannot be adapted well to the Hungarian situation.
− with respect to the time of the survey (1993), it is close to the present;
− the Austrian economic indicators are probably close to the Hungarian conditions.
For the above reasons, we have used the findings of the Austrian survey in the Szigetköz case, by the method of benefit transfer.
The major characteristics concerning the willingness to pay for the national park in Austria can be summarised as follows:
− In the framework of the full project, 952 Austrians over the age of 14 were surveyed (a representative sample obtained by a random-quota procedure).
− Three alternatives were examined: one was the creation of a national park; the other two were combinations of a national park of a much smaller area and a hydroelectric power station.
− The question asked was the following: How much are you willing to sacrifice annually for the implementation of the different alternatives?\(^{15}\)
Table 2 contains a summary of the willingness to pay for the three alternatives.

Table 2 **Willingness of Austrian citizens to pay for the three alternatives**
<table> <thead> <tr> <th>Project</th> <th>average WTP (1993 ATS/person/year)</th> </tr> </thead> <tbody> <tr> <td>Creation of the Donau-Auen national park on 11,500 ha territory</td> <td>329.25</td> </tr> <tr> <td>Hydro-electric power station and a 9,700 ha national park</td> <td>122.21</td> </tr> <tr> <td>Hydro-electric power station and a 2,700 ha national park</td> <td>69.63</td> </tr> </tbody> </table>
*Source:* Kosz (1996), p. 120

The findings show that the Austrian population most strongly supported the creation of the national park with the largest territory, as willingness to pay was highest there. The creation of a national park together with a hydroelectric power station was less preferred. In Szigetköz we are examining the conservation of the area's natural, pre-diversion status and the loss of value due to the conditions created by the current and the meandering versions. It is therefore justified to consider, from among the above instances of willingness to pay, the plan with no hydroelectric power plant and with the largest territory. Thus, ATS 329.25/person/year\textsuperscript{16} can be considered the average willingness to pay for the creation of the Donau-Auen national park.

\(^{15}\) In the Austrian study open-ended questions were asked, which usually result in lower WTP than close-ended questionnaire surveys. In our case this means that the WTP most probably will not be overestimated. Another feature of the study is that an earmarked yearly tax was used as the payment vehicle.

4.2.3 Calculations relating to benefit transfer valuation

In order to transfer the Austrian willingness to pay to the Hungarian population, we must first make use of certain assumptions\textsuperscript{17}, which are as follows:
− the environmental sensitivity of the Austrian and Hungarian citizens is identical;
− the Austrian citizens' willingness to pay has not changed since 1993;
− any differences in the willingness to pay can mostly be explained by the different size of GDP per person and can be considered proportionate to that;
− willingness to pay in Hungary is also affected by the income generated in the black (grey) economy, therefore that should also be added to the GDP value calculated for the legal economy (data from the National Bank of Hungary show that it is min. 12%);\textsuperscript{18}
− willingness to pay changes in proportion to the size of the area (larger territory - higher WTP);
− willingness to pay decreases proportionately with the rate of degradation.
With attention to the above, the basis of the calculation is given by the percentage proportion of the average WTP in Austria in 1993 to the GDP per person (see Table 3). We assume that this proportion is valid for Hungary in 1999.
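In formula terms, this proportionality rule amounts to a unit value transfer with income adjustment (see note 11) in which the income elasticity is implicitly set to one; the notation below is ours and is only meant to make the assumption explicit:
\[ WTP_{HU} = WTP_{AT} \cdot \left( \frac{GDP^{pc}_{HU}}{GDP^{pc}_{AT}} \right)^{\beta}, \qquad \beta = 1, \]
where \( GDP^{pc} \) denotes GDP per person; with \( \beta = 1 \), WTP remains a constant share of GDP per person in both countries.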
This assumption, however, can only be accepted with reservations: Kriström and Riera (1996), comparing the findings of research conducted by the contingent valuation method, came to the conclusion that the income elasticity of WTP is smaller than 1. This means that, according to experience, citizens of countries with lower incomes are willing to spend a higher proportion of their income on environmental protection objectives. On that basis we may say that the proportional transfer of the Austrian results would definitely result in an underestimation.
\textsuperscript{16} This average contains the 0 bids, but does not contain the unrealistically high bids. If the highest bid were considered (ATS 36,000), the average WTP would change to ATS 414/person/year.
\textsuperscript{17} These assumptions might lead to distortions and simplifications, but the available time and data made only the calculations shown possible.
\textsuperscript{18} Source: Internet, March 1999, home page of the National Bank of Hungary.

Table 3 Austrian results and the starting points calculated on the basis of those results
<table> <thead> <tr> <th></th> <th>1993</th> </tr> </thead> <tbody> <tr> <td>GDP/person (ATS)</td> <td>265,812*</td> </tr> <tr> <td>WTP/person/year (ATS)</td> <td>329.25</td> </tr> <tr> <td>WTP in % of GDP/person</td> <td>0.12%</td> </tr> </tbody> </table>

The table shows that the ratio of the average WTP per person to the GDP per person in Austria in 1993 is 0.12%. For Hungary, the base figure to be used is the GDP per person in 1999, which is needed to calculate the WTP per person valid for 1999. The GDP of Hungary was estimated at HUF 11,565 bn. Further on, however, we shall not use the official GDP/person value (HUF 1,146,000), but a figure corrected with the estimated income from the grey and black economy. The estimate of the National Bank of Hungary shows that the size of the illegal economy is a minimum of 15% of the legal economy. Thus, the corrected value of GDP per person is: 1,146,000 x 1.15 ≈ HUF 1,318,000. By considering the corrected GDP/person value and the accepted 0.12% ratio of WTP/person to GDP/person, the WTP per person per year can be calculated for Hungary. The results are summarised in Table 4.

Table 4 Process of WTP estimation in Hungary and its results
<table> <thead> <tr> <th></th> <th>1999</th> </tr> </thead> <tbody> <tr> <td>official GDP/person (HUF)</td> <td>1,146 thousand</td> </tr> <tr> <td>corrected GDP/person (HUF)</td> <td>1,318 thousand</td> </tr> <tr> <td>WTP in the % of GDP/person</td> <td>0.12%</td> </tr> <tr> <td>WTP/person/year (HUF)</td> <td>1,581*</td> </tr> </tbody> </table>

The aggregate willingness to pay (applied to the whole of Hungary) was calculated considering the population above the age of 14. Multiplying the annual WTP by the size of the solvent population above the age of 14, we find that: HUF 1,581 x 8,326,000 persons = HUF 13.17 bn. It should also be considered that the Hungarian area under examination is somewhat larger than the area in Austria (the wetland in Szigetköz: 14,700 ha, as against the area in Austria: 11,500 ha). If we assume that WTP changes proportionately with the size of the area, then the HUF 13.17 bn can be corrected in accordance with the difference of the areas.
19. Value calculated from the ratio of the annual GDP provided by ÖSTAT and the actual population.
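The arithmetic of Tables 3 and 4 and of the aggregation can be re-traced step by step. The following is a minimal sketch in Python, with variable names of our own choosing; it is not part of the original study and simply reproduces the reported figures, including the area adjustment described above:

```python
# Unit value transfer of the Austrian WTP to Hungary, re-tracing the study's figures.

wtp_austria = 329.25            # ATS/person/year for the Donau-Auen national park (Kosz, 1996)
gdp_pc_austria = 265_812        # ATS/person, Austria, 1993

# The study rounds the WTP share of GDP per person to 0.12%.
wtp_share = round(wtp_austria / gdp_pc_austria, 4)           # -> 0.0012

gdp_pc_hu_official = 1_146_000                                # HUF/person, Hungary, 1999
gdp_pc_hu = gdp_pc_hu_official * 1.15                         # grey/black economy correction -> ~1,318,000

wtp_hu_per_person = gdp_pc_hu * wtp_share                     # -> ~HUF 1,581/person/year

population_over_14 = 8_326_000                                # Hungarians above the age of 14, 1999
aggregate_wtp = wtp_hu_per_person * population_over_14        # -> ~HUF 13.17 bn/year

area_ratio = 14_700 / 11_500                                  # Szigetköz vs. Austrian wetland area (ha)
aggregate_wtp_area_adjusted = aggregate_wtp * area_ratio      # -> ~HUF 16.83 bn/year

print(f"per-person WTP: HUF {wtp_hu_per_person:,.0f}/year")
print(f"aggregate WTP: HUF {aggregate_wtp / 1e9:.2f} bn/year")
print(f"area-adjusted aggregate WTP: HUF {aggregate_wtp_area_adjusted / 1e9:.2f} bn/year")
```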
20. Value of the bid calculated with attention to the highest bid (ATS 36,000), converted to Hungarian conditions: ATS 414 is 0.15% of the GDP/person, and 1,318,000 x 0.0015 = HUF 1,977.

21. The size of the population over the age of 14 in 1999 was estimated to be approx. 8,326,000 (Hungarian Central Statistical Office). Although the number of households could also be used for the aggregation, we followed the calculation method of the Austrian study.

Thus, the aggregated Hungarian WTP for the conservation of Szigetköz in its original status is:

$$\frac{14,700}{11,500} \times 13.17 = \text{HUF 16.83 bn}$$

Loss of value can be estimated on the basis of the conversion of wetland areas into valueless associations of weeds, for which the rate of degradation can be calculated. Here, only the changes in acreage proportions can be considered; the restructuring of the habitats cannot be shown in our calculations, that is, we cannot show how the composition of species has changed within the individual habitats compared to the status prior to the diversion. We cannot tell whether the old or the new composition represents the higher value, but it is clear that what we have today is not identical with the old composition. Since we assume that WTP decreases proportionately with degradation, Table 5 gives a review of the degradation indicators for the alternatives examined.

Table 5 **Rate of degradation that has occurred or is presumed to occur in the habitats of Szigetköz**

<table> <thead> <tr> <th></th> <th>Before 1992</th> <th>Variant “C” (current state)</th> <th>Meandering, small and medium water level</th> <th>Meandering, high water level</th> </tr> </thead> <tbody> <tr> <td>Size of wetland areas (ha)</td> <td>13,600</td> <td>10,300-10,500</td> <td>10,800-11,000</td> <td>–</td> </tr> <tr> <td>Decrease of wetland areas compared to the pre-1992 extent (%)</td> <td>–</td> <td>23-25</td> <td>20-21</td> <td>8</td> </tr> </tbody> </table>

The percentage values for the decrease of wetland areas clearly show that the currently operating variant “C” and the small and medium water level case of the meandering solution are rather close to each other. The difference between the two would be even smaller if we could take into account the significant ecological potential of the current variant “C”, which could be further exploited by additional technical solutions on the section between Ásványráró and Szap. Instead of using the actual figures of Table 5 on the actual and estimated rates of degradation, we use ranges, through which we can decrease the uncertainty of the estimation of the loss of value. Thus, the rate of degradation generated by the impact of variant “C” is placed between 20-30%, the eventual implementation of the meandering version is considered to cause a 15-25% rate of degradation at small and medium water level, while at high water level a 5-15% range is considered (see: Table 6).
Table 6 **Change in value estimated for the different alternatives**

<table> <thead> <tr> <th></th> <th>Variant “C”</th> <th>Meandering with small and medium water level</th> <th>Meandering with high water level</th> </tr> </thead> <tbody> <tr> <td>Rate of degradation (%)</td> <td>20-30</td> <td>15-25</td> <td>5-15</td> </tr> <tr> <td>WTP decrease due to degradation (bn HUF)</td> <td>3.4-5.1</td> <td>2.5-4.2</td> <td>0.8-2.5</td> </tr> <tr> <td>Loss of value by using a 2% discount rate (bn HUF)</td> <td>168.3-252.5</td> <td>126.2-210.4</td> <td>42.1-126.2</td> </tr> <tr> <td>Loss of value by using a 3.5% discount rate (bn HUF)</td> <td>96.2-144.3</td> <td>72.1-120.2</td> <td>24.0-72.1</td> </tr> </tbody> </table>

The detailed process of calculation is illustrated for the 20% rate of degradation of variant “C”. The initial aggregate WTP is HUF 16.83 bn. It is assumed that WTP decreases linearly with degradation; therefore a 20% rate of degradation results in the following WTP decrease:

\[ 16.83 \times 0.2 = \text{HUF 3.37 bn} \]

The WTP in Austria was understood for an indefinite time scale, meaning that every citizen is willing to sacrifice the given sum, in all further years, for the maintenance of a national park. Thus, by applying the formula of the perpetual annuity\(^{22}\) with 2% and 3.5% discount rates, the present values of the loss in value are as follows:

\[ d=2\% \quad \text{present value: HUF 3.37 bn} / 0.02 = \text{HUF 168.31 bn} \]

\[ d=3.5\% \quad \text{present value: HUF 3.37 bn} / 0.035 = \text{HUF 96.17 bn} \]

Table 7 **Summary of value loss calculated by benefit transfer**

<table> <thead> <tr> <th></th> <th>Variant “C”</th> <th>Meandering version, at small and medium water level</th> <th>Meandering version, at high water level</th> </tr> </thead> <tbody> <tr> <td>at 2% discount rate (HUF bn)</td> <td>168 - 252</td> <td>126 - 210</td> <td>42 - 126</td> </tr> <tr> <td>at 3.5% discount rate (HUF bn)</td> <td>96 - 144</td> <td>72 - 120</td> <td>24 - 72</td> </tr> </tbody> </table>

---

\(^{22}\) If a project is intended to be maintained in the long run (as in the case of a hydroelectric power plant), the benefits and costs accrued can be considered on a long time horizon. If the earnings (and expenses) are even, the project cash flow can be calculated with the formula applicable to a perpetual annuity: \( PV = \frac{C}{r} \), where \( C \) stands for the earnings (expenses) and \( r \) for the discount rate (see: Brealey-Myers (1993), p. 33). If the wetland is damaged, we are deprived of a certain proportion of the earnings, proportionately to the rate of degradation. This is why we used the formula applicable to a perpetual annuity.

\(^{23}\) In the article used as the basis of the benefit transfer (Kosz, 1996), the author takes a position in favour of a 2% discount rate, as all variants contain components characterised by long-term ecological sensitivity, whose change can be considered irreversible.
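For readers who wish to retrace the arithmetic behind Tables 3, 4, 6 and 7, the short script below recomputes the benefit-transfer chain for the 20% degradation scenario of variant “C”. It is only an illustrative sketch: the variable names are ad hoc, and the rounded 0.12% WTP/GDP ratio is used, as in the text above.

```r
# Benefit transfer from the Austrian WTP study to Hungary (illustrative recomputation;
# figures and rounding follow the text and tables above).
wtp_gdp_ratio <- 0.0012                               # Austrian WTP as a share of GDP/person (Table 3, rounded to 0.12%)

gdp_hu_corrected <- 1146000 * 1.15                    # official 1999 GDP/person corrected for the grey/black economy (HUF)
wtp_per_person <- gdp_hu_corrected * wtp_gdp_ratio    # ~HUF 1,581 per person per year (Table 4)

population_over_14 <- 8326000                         # Hungarians over the age of 14 in 1999
wtp_aggregate <- wtp_per_person * population_over_14  # ~HUF 13.17 bn per year

wtp_szigetkoz <- wtp_aggregate * 14700 / 11500        # adjustment for the larger Hungarian area: ~HUF 16.83 bn per year

# Loss of value under variant "C" with a 20% rate of degradation,
# capitalised with the perpetuity formula PV = C / r:
annual_loss <- 0.20 * wtp_szigetkoz                   # ~HUF 3.37 bn per year
annual_loss / 0.02                                    # ~HUF 168 bn present value at a 2% discount rate
annual_loss / 0.035                                   # ~HUF 96 bn present value at a 3.5% discount rate
```

The other cells of Tables 6 and 7 are obtained in the same way by substituting the corresponding degradation rates.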
The question may be raised whether Hungarian people would really be willing to pay for the conservation of Szigetköz, or whether the sum derived from the GDP is excessive. We try to answer these questions on the basis of Hungarian examples. CSERGE\textsuperscript{24} and the Department of Environmental Economics, Budapest University of Economics, conducted a large-scale survey on WTP (with close to 2,000 interviewees) in 1995-1997 (the interviewing was done in 1995). They applied the contingent valuation method in a survey of the water quality improvement of Lake Balaton (Mourato et al., 1997). The findings of the survey clearly show that the Hungarian adult population is sensitive to environmental issues and is willing to pay HUF 3,900 per annum to preserve the environment. The WTP sum derived through the application of the benefit transfer method is far below this sum of HUF 3,900 per annum. Szigetköz may have smaller significance than Lake Balaton, and it is not as well known as Lake Balaton, which is often referred to as a national symbol. Yet we still believe that the sum of HUF 1,581 is below the presumable WTP (not to mention that transferring the 1995 WTP for Balaton to the year 1999, with an adjustment for inflation or an increase in proportion to the higher GDP, would result in an even higher figure). Another survey, conducted in the Bükk National Park in Hungary in 1996, provided similar results. The findings show (for details see: Szerényi, 1998) that Hungarian people visiting the park would be willing to pay HUF 1,426 per year for the conservation of the third largest national park, the Bükk National Park. (This survey captured only the WTP of those using the park, and its result was lower than that of the survey conducted among users and non-users alike, that is, the Hungarian population, for Lake Balaton.) Thus, the conclusions that we can draw are similar to the ones described above. The WTP surveys conducted so far in Hungary give clear evidence that the WTP of HUF 1,581 calculated by the method of benefit transfer is probably below the actual WTP. Therefore, the calculation of the loss of value of Szigetköz on the basis of this WTP results in an underestimation.\textsuperscript{25}

5. Environmental Policy Steps on the Basis of the Findings

The findings of the estimations have been used in cost-benefit analyses. The primary objective was to create a well-founded tool encompassing the costs and benefits, in a broad sense, of all the alternatives of the Gabčíkovo-Nagymaros Barrage System, to enable us to find solutions for disputed issues in the negotiations with the Slovak experts and to help us in an eventual second court case in The Hague. Compared to the former economic calculations, the inclusion of changes in natural values that had previously not been considered is a great step forward. As a result, it became clearly evident that the incorporation of natural capital depreciation in cost-benefit analysis brings forth utterly different results than when only the costs and benefits in a narrower sense are included. Although in terms of material assets (concrete, structures, labour) Hungary's investment in the project was small, the sacrifice of part of the wetland areas in Szigetköz meant that a significant part of our natural capital has been "invested".

6. Conclusions Relevant from the Aspect of Environmental Policy

The changes that the Gabčíkovo-Nagymaros Barrage System has generated in the natural capital of Szigetköz, and the loss-of-value estimation produced as a result, are by all means a milestone in the area of environment and nature protection policies. The monetary valuation of the changes in the flora and fauna, used in the course of the cost-benefit analyses and in the foundation of our position in the negotiations with the Slovak Party, significantly changes our position in reaching a settlement.

\textsuperscript{24} The Centre for Social and Economic Research on the Global Environment, U.K.

\textsuperscript{25} This statement, made in 1999, may be verified by the CVM research on Szigetköz launched in summer 2000.
This is a warning pointing to the fact that changes in natural capital must be included both on the cost side and on the benefit side, as they represent high value. Perhaps this case will launch a favourable process in Hungary that calls attention to the importance of factors and impacts that have so far been considered "non-quantifiable", and to the necessity of considering changes in the value of natural capital in specific decision-making.
Neurophysiological Correlates of Frequency, Concreteness, and Iconicity in American Sign Language Karen Emmorey1, Kurt Winsler2, Katherine J. Midgley3, Jonathan Grainger4, and Phillip J. Holcomb3 1School of Speech, Language and Hearing Sciences, San Diego State University 2Department of Psychology, University of California, Davis 3Department of Psychology, San Diego State University 4Laboratoire de Psychologie Cognitive, Aix-Marseille University, Centre National de la Recherche Scientifique Keywords: American Sign Language, event-related potentials, frequency, concreteness, iconicity, lexical access ABSTRACT To investigate possible universal and modality-specific factors that influence the neurophysiological response during lexical processing, we recorded event-related potentials while a large group of deaf adults (n = 40) viewed 404 signs in American Sign Language (ASL) that varied in ASL frequency, concreteness, and iconicity. Participants performed a go/no-go semantic categorization task (does the sign refer to people?) to videoclips of ASL signs (clips began with the signer’s hands at rest). Linear mixed-effects regression models were fit with per-participant, per-trial, and per-electrode data, allowing us to identify unique effects of each lexical variable. We observed an early effect of frequency (greater negativity for less frequent signs) beginning at 400 ms postvideo onset at anterior sites, which we interpreted as reflecting form-based lexical processing. This effect was followed by a more widely distributed posterior response that we interpreted as reflecting lexical-semantic processing. Paralleling spoken language, more concrete signs elicited greater negativities, beginning 600 ms postvideo onset with a wide scalp distribution. Finally, there were no effects of iconicity (except for a weak effect in the latest epochs; 1,000–1,200 ms), suggesting that iconicity does not modulate the neural response during sign recognition. Despite the perceptual and sensorimotoric differences between signed and spoken languages, the overall results indicate very similar neurophysiological processes underlie lexical access for both signs and words. INTRODUCTION Current theories in linguistics, psychology, and cognitive neuroscience have all been developed primarily from investigations of spoken languages. This focus has led theories to ignore or downplay phenomena that are limited in speech but are pervasive in sign languages, such as iconicity (a nonarbitrary relation between phonological form and meaning) and observable linguistic articulators (the vocal articulators for speech are largely hidden from view). By widening our scientific lens to include sign languages, we can distinguish neurobiological principles that are universal to human language processing from those that are modulated by the specific sensorimotor systems within which language is instantiated. To investigate possible universal factors in language processing, the present study used event-related potentials... (ERPs) to determine the impact of lexical frequency and concreteness on the brain’s response to a large set of signs (n = ~400) from American Sign Language (ASL) in a large group of deaf signers (n = 40). The frequency and semantic properties of lexical forms are likely to be represented and processed similarly for signed and spoken languages, although the time-course and scalp distribution of these effects could differ due to differences between the visual-manual and auditory-vocal modalities. 
In addition, we investigated whether iconicity, a phenomenon influenced by the modality of sign language, affects the time course or amplitude of neural responses when signers comprehend ASL signs. One challenge to investigating the effects of frequency on language processing is that currently there are no ASL corpora available from which frequency counts can be obtained. Psycholinguistic research has thus relied on sign familiarity ratings by deaf signers to estimate lexical frequency (e.g., Carreiras, Gutiérrez-Sigut, Baquero, & Corina, 2008; Emmorey, 1991; Emmorey, Petrich, & Gollan, 2013). Recently, a database of ~1,000 ASL signs (ASL-LEX) was created that contains frequency ratings from 25 to 30 deaf signers per sign (Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2017; Sevcikova Sehyr & Emmorey, 2019). For this database, signers rated how often they felt a sign appears in everyday conversation on a scale of 1 (very infrequently) to 7 (very frequently). The sign videos for the present study were selected from this database. For spoken language, familiarity ratings are highly correlated with corpora-based frequency counts (Gilhooly & Logie, 1980) and are consistent across different groups of participants (Balota, Piloti, & Cortese, 2001). For sign languages, Fenlon, Schembri, Rentelis, Vinson, and Cormier (2014) found that subjective frequency ratings of British Sign Language (BSL) from Vinson, Cormier, Denmark, Schembri, and Vigliocco (2008) were positively correlated with objective frequency counts from the BSL Corpus, although the sample size for this analysis was much smaller than for spoken languages. Parallel to spoken languages, faster lexical decision times are reported for signs that are rated as very frequent than for signs rated as infrequent (e.g., Carreiras et al., 2008; Caselli, 2015; Emmorey, 2002). Further, high-frequency signs are retrieved faster than low-frequency signs in picture-naming tasks (Baus & Costa, 2015; Emmorey, Petrich, & Gollan, 2012; Emmorey et al., 2013). Higher frequency signs are also acquired earlier by deaf children (Caselli & Pyers, 2017), and later acquired signs tend to be lower frequency (Vinson et al. 2008). In addition, high-frequency signs, like high-frequency words, tend to be shorter in duration (e.g., Borstell, Hörberg, & Östling, 2016) and are more likely to undergo coarticulation processes, such as sign lowering (e.g., Russell, Wilkinson, & Janzen, 2011). To date, similar linguistic and behavioral effects of lexical frequency have been found for signed and spoken languages. In the present study, we utilized ERPs to investigate the impact of lexical frequency on sign comprehension. One limitation of using reaction times (RTs) to assess linguistic factors that affect lexical processing is that RTs reflect the final outcome of lexical access, including decision processes (e.g., Grainger & Jacobs, 1996). In contrast, ERPs continuously reflect information processing in real time, providing insight into the temporal neural dynamics of phonological (form) processing, lexical access, and sign comprehension. No study to our knowledge has examined how lexical frequency impacts the neural response in sign comprehension; however, Baus and Costa (2015) investigated frequency effects in a sign-production ERP study in which hearing bilinguals fluent in spoken Spanish or Catalan and Catalan Sign Language (LSC) named pictures in either Spanish/Catalan or LSC. 
The authors reported that high-frequency signs elicited more negative amplitudes than low-frequency signs in a 280–350-ms time window over occipital sites. This pattern parallels the frequency effect for spoken word production, although the difference between high- and low-frequency words may emerge earlier for speech (e.g., Strijkers, Costa, & Thierry, 2009). In contrast, for visual and auditory word comprehension, low-frequency words tend to elicit more negative amplitudes than high-frequency words (e.g., Dufau, Grainger, Midgley, & Holcomb, 2015; Dufour, Brunellièrè, & Frauenfelder, 2013; Kutas & Federmeier, 2011; Winsler, Midgley, Grainger, & Holcomb, 2018). Winsler et al. (2018) conducted a large megastudy of spoken word recognition (~1,000 words; 50 participants) using both a lexical decision task and a semantic categorization task similar to the one used in the present study. For the semantic decision task (detect an occasional animal word), Winsler et al. (2018) reported greater ERP negativities for low-frequency words at frontal and central sites, beginning 500 ms after word onset, which persisted into the final analyzed epoch, 800–900 ms. Here, we investigated whether the effects of sign frequency are parallel to the effects of word frequency with respect to the polarity, scalp distribution, and timing of ERPs to visual-manual signs. We note that auditory word recognition is more parallel to sign recognition than visual word recognition because both speech and sign unfold over time and written words are a secondary code derived from speech and acquired later through instruction. An early study by Kutas, Neville, and Holcomb (1987) compared ERP responses to semantic anomalies in written, auditory, and signed sentences and found a strong similarity in the N400 component across modalities (greater negativity for anomalous than expected lexical items; see also Capek et al., 2009), but there were also differences, with a more prolonged ERP response for both auditory words and signs compared to written words. Grosvald, Gutierrez, Hafer, and Corina (2012) found that pseudosigns elicited a larger sentence-final N400 response compared to semantically appropriate signs, while nonlinguistic grooming gestures (e.g., scratching one’s nose) elicited a large positivity. This result highlights the linguistic specificity of the N400 component for signs. Further, Meade, Lee, Midgley, Holcomb, and Emmorey (2018) reported both semantic and phonological priming effects in the N400 window for single signs. Together, these results indicate that the N400 elicited by signs is sensitive to both phonological structure and lexical semantics. Based on these findings, we predict that lexical frequency will modulate ERPs in the N400 window, with low-frequency signs eliciting greater negativity than high-frequency signs, as found for spoken languages. In addition to lexical frequency, the parallel megastudy by Winsler et al. (2018) examined the effect of relative concreteness on ERPs during auditory word recognition. Concrete spoken words elicited larger negativities than abstract words, with robust effects emerging after 400 ms that were widely distributed around central sites. 
Greater negativity for concrete than abstract words within the N400 window has been interpreted as reflecting richer semantic representations for concrete words that arise from associations with imagistic and sensorimotor representations and from larger semantic networks (e.g., Holcomb, Kounios, Anderson, & West, 1999; Kutas & Federmeier, 2011). Behaviorally, concrete words are typically recognized faster than abstract words (e.g., Kroll & Merves, 1986), possibly due to their semantic richness. For sign language, Emmorey and Corina (1993) found that concrete ASL signs were recognized faster than abstract signs in a lexical decision task. No study to our knowledge has investigated the effect of concreteness on ERPs during sign recognition. However, given the behavioral concreteness effects found by Emmorey and Corina (1993) and the sensitivity of the N400 to semantic manipulations in sign language (Capek et al., 2009; Kutas et al., 1987; Meade et al., 2018; Neville et al., 1997), we anticipate that concreteness effects within the N400 window will pattern like spoken language, with greater negativity associated with more concrete signs. In addition to lexical frequency and concreteness, we examined the effect of iconicity on ERPs to signs. Iconicity values were obtained from deaf signers who rated the iconicity of the signs in the ASL-LEX database (Caselli et al., 2017; Sevcikova Sehyr & Emmorey, 2019). The number of deaf participants rating each sign varied between 26 and 31. Parallel to the subjective frequency ratings, participants were asked to rate each sign video on a 7-point scale based on how much the sign looks like what it means (1 = not iconic at all, 7 = very iconic). Several behavioral studies have used this type of rating to investigate the effect of iconicity on sign comprehension and production. In picture-naming tasks, several studies have now found that highly iconic signs are retrieved faster than noniconic signs (Baus & Costa, 2015; McGarry, Mott, Midgley, Holcomb, & Emmorey, 2018; Navarrete, Peressotti, Lerose, & Miozzo, 2017; Vinson, Thompson, Skinner, & Vigliocco, 2015). However, for comprehension the effects of iconicity have been mixed. Bosworth and Emmorey (2010) found that iconic signs were not recognized more quickly than noniconic signs in a lexical decision task. In a translation task, Baus, Carreiras, and Emmorey (2013) found that for proficient signers, iconic signs were actually recognized more slowly than noniconic signs. In a picture-sign matching task, Thompson, Vinson, and Vigliocco (2009) and Vinson et al. (2015) reported faster decision times when the iconic properties of the sign were aligned with visual features in the picture (e.g., the ASL sign BIRD depicts a bird’s beak and matches a picture of a bird with a prominent beak) compared to nonaligned pictures (e.g., a bird in flight where the beak is not visible). Thompson, Vinson, and Vigliocco (2010) found that form decisions about handshape (straight or curved fingers) were slower for more iconic signs, while Vinson et al. (2015) found that decisions about movement direction (up or down) were faster for more iconic signs. To date, the data suggest that iconicity does not have a clear, consistent impact on sign recognition. To our knowledge, the only ERP study to explicitly manipulate iconicity in a sign comprehension task with deaf signers is Mott, Midgley, Holcomb, and Emmorey (2020). Mott et al. 
used ERPs and a translation priming paradigm (English word prime–ASL sign target) to investigate the effects of iconicity on sign recognition in proficient deaf signers and hearing L2 learners. Participants decided whether word-sign pairs were translation equivalents or not. For hearing learners, iconic signs elicited earlier and more robust priming effects (i.e., greater negativities for target signs preceded by unrelated word primes than by translation primes) compared to noniconic signs. In contrast, for deaf signers, iconicity did not modulate translation priming effects either in RTs or in the ERPs within the N400 window. The fact that priming effects did not begin earlier for iconic than noniconic signs suggests that iconicity does not facilitate lexical access for deaf signers, in contrast to L2 learners. Here we explore whether iconicity modulates ERPs to signs in a comprehension paradigm that does not involve priming or a translation task. In sum, the purpose of the present study was to use ERPs to investigate how lexical frequency, concreteness, and iconicity affect the temporal neural dynamics of sign recognition. Following the “megastudies” of auditory and visual word recognition by Winsler et al. (2018) and Dufau et al. (2015), we gathered data from a large number of items and participants and treated these lexical variables as continuous measures, rather than categorizing and factorially manipulating them. This method avoids potential experimenter bias in selecting cut-off boundaries when categorizing continuous variables and allows for statistical analyses that control for the effects of other variables, such that results can clearly be attributed to the variable of interest (see Balota, Yap, Hutchison, & Cortese, 2012, for a discussion of the advantages of this type of “megastudy”). Following Winsler et al. (2018) and Emmorey, Midgley, Kohen, Sevcikova Sehyr, and Holcomb (2017), we used linear mixed-effects regression (LMER) techniques, rather than more traditional... analyses, which allowed us to use single trial EEG data, rather than averaged ERP data. We also used linear mixed-effects (LME) models to visualize the ERP effects by computing an LME equivalent to scalp voltage maps using the t statistics at each electrode (see Data Analysis). MATERIALS AND METHODS Participants Forty deaf ASL signers participated in this study (22 females; mean age = 28.9 years; SD = 7.2 years; range = 19–46 years). Thirty-one participants were native signers who were born into a deaf signing family, eight participants had hearing parents and were exposed to ASL before three years of age, and one participant learned ASL at age 12 years. Four participants were left-handed. Most participants were from San Diego or Riverside, California, and were compensated $15 per hour of participation. An additional eight participants were run but were not included in the analyses due to high-artifact rejection rates, very noisy EEG data, or failure to perform the task. Informed consent was obtained from all participants in accordance with the institutional review board at San Diego State University. Materials The critical stimuli were 404 ASL sign videos from the ASL-LEX database (Caselli et al., 2017). An additional 26 probe sign videos (also from ASL-LEX) that referred to people were also presented (e.g., MAN, NURSE, MOTHER). Each sign occurred twice for a total of 52 probe signs. 
The critical stimuli varied in lexical class: nouns = 50%, verbs = 25%, adjectives = 16%, adverbs = 2%, and other (“minor” closed class signs) = 7%. All stimuli can be viewed on the ASL-LEX website (http://asl-lex.org). The Entry IDs (English glosses) for the signs are provided in the Supporting Information. For frequency measures, we used the subjective frequency ratings from ASL-LEX, which used a scale of 1 (very infrequent) to 7 (very frequent). For the sample of critical signs, frequency ratings ranged from 1.63 to 6.84, with a mean of 4.50 (SD = 1.07). Because no database with concreteness ratings is available for ASL signs, we used ratings from Brysbaert, Warriner, and Kuperman (2014) based on the English translations of the ASL signs. However, there were 13 signs that did not have translation equivalents in Brysbaert et al. (e.g., STARBUCKS, MCDONALDS, EUROPE), and therefore we gathered additional concreteness ratings for these words from 39 students at San Diego State University, using the same 5-point scale as Brysbaert et al. and mixing these 13 words in with 37 other words that varied in concreteness. Concreteness ratings for the English translation equivalents of the ASL signs ranged from 1.22 to 5.0, with a mean of 3.42 (SD = 1.60). Finally, iconicity ratings were collected from deaf ASL signers (Sevcikova Sehyr & Emmorey, 2019) on a scale of 1 (not iconic) to 7 (very iconic), as part of ASL-LEX 2.0 (the ratings will be publicly available on the website in 2020). Iconicity ratings ranged from 1.0 to 7.0, with a mean of 3.03 (SD = 1.60).

The mean length of the videos was 1,770 ms (SD = 260 ms; range = 934–2,903 ms). The mean onset of the sign was 497 ms after the start of the video (SD = 122 ms; range = 200–1,168 ms). Sign onset is typically defined as when the hand(s) makes contact with the target location on the body (see Caselli et al., 2017, for details on how sign onset is determined). Sign offset is typically defined as the last video frame when the hand contacts the body before moving back to a resting position (see Caselli et al., 2017). The mean sign length was 506 ms (SD = 161 ms; range = 134–1,301 ms). We note that sign onset and length were related at least in part to this particular model’s sign production (rather than inherent to the signs themselves). Thus, they were not analyzed as experimental variables. However, given the variability of timing in the videos, sign length and sign onset were used as covariates in all analyses to control for some of the possible differences in EEG signal due to timing differences in the videos.

**Procedure** Participants were seated in a comfortable chair, 150 cm from a 24-inch LCD stimulus monitor in a sound-attenuating darkened room while engaging in a go/no-go semantic categorization task. The testing session began with a short practice block of 15 trials, followed by two experimental blocks of 259 trials each. On each trial an ASL sign was presented as a video clip that was centered on the LCD monitor. Trials were of varying duration depending on the length of the individual video clips. Regardless of clip duration, a fixed blank-screen inter-stimulus-interval of 620 ms was interspersed between the offset of one clip and the onset of the next (see Figure 1 for a schematic of the paradigm). Each experimental block contained 202 critical target signs and 26 randomly intermixed probe signs (so-called people signs [e.g., BOY, NURSE], 12% of trials).
Participants were instructed to press a button resting in their lap whenever they detected a people sign (i.e., a “go” stimulus) and to passively view all other “no-go” signs. On average every 12 trials a visual “blink” stimulus was presented for 2.5 s. This indicated that the participant could blink/rest their eyes, thus reducing the tendency for participants to blink during the critical sign ERP epochs.

**EEG Recording** The EEG was collected using a 29-channel electrode cap containing tin electrodes (Electro-Cap International, Inc., Eaton, OH), arranged in the International 10–20 system (see Figure 2). Electrodes were also placed next to the right eye to monitor horizontal eye movements (HE) and below the left eye (LE) to monitor vertical eye movements and blinks. Finally, two electrodes were placed behind the ears over the mastoid bones. The left mastoid site was used as an online reference for the other electrodes, and the right mastoid site was used to evaluate differential mastoid activity. Impedances were kept below 2.5 kΩ for all scalp and mastoid electrode sites and below 5 kΩ for the two eye channels. The EEG signal was amplified by a SynAmpsRT amplifier (Neuroscan-Compumedics, Charlotte, NC) with a bandpass of DC to 200 Hz and was continuously sampled at 500 Hz. Prior to data analysis the raw EEG data were corrected for blink and horizontal eye artifact using ICA (EEGLAB, Jung et al., 2000). Single-trial ERPs were formed from artifact-free trials, starting 100 ms prior to the onset of each ASL sign video and continuing for 1,200 ms. The 100 ms pre-video-onset period was used as a baseline.

**Data Analysis** The data were analyzed using LMER, a relatively new approach to analyzing EEG data. LMER modeling is particularly advantageous for designs such as the current one, where there are multiple, potentially collinear, continuous variables. The correlation matrix for the variables included in our analysis is given in the Supporting Information (Appendix A1). Further, LMER allows the model to simultaneously include random effects for both participant and item (see Baayen, Davidson, & Bates, 2008; Barr, Levy, Scheepers, & Tily, 2013). The models described below were fit using the lme4 package (Bates, Mächler, Bolker, & Walker, 2015) in R (R Core Team, 2014), and were structured based on the models of Winsler et al. (2018). EEG data were measured per participant, per item, and per electrode as average voltage over 100-ms epochs, starting with a 100–200-ms epoch and continuing through an 1,100–1,200-ms epoch. Identical models were fit to predict mean amplitude for each of the 11 time windows. These models contained main effects for the three experimental variables, Frequency, Concreteness, and Iconicity, as well as for the two covariates, Sign Length and Sign Onset. Each of these five variables was standardized prior to analysis. Interactions between experimental variables were not included in the model. Although it is likely that these variables interact in ways that are detectable in the EEG signal, a full analysis of interactions would greatly increase the complexity of the models and is outside the scope of the present article. The questions of interest here relate to probing the broad patterns of effects related to sign-level variables. Future, less exploratory experiments will be necessary to adequately answer questions about how these variables interact with each other.

Figure 1. A schematic for two typical trials.
Top is a critical sign trial and bottom is a “probe” trial (people sign) requiring a button press. Note that in this figure the images are still frames extracted from actual sign videos shown to participants.

To analyze the distribution of the effects in addition to their overall effects, all electrodes were included in the models separately, each with three distributional variables corresponding to the spatial location of the electrode. These dimensions (X-position, Y-position, and Z-position) were included as interaction terms with each of the experimental variables and the covariates, as well as included as covariates themselves. Thus, the models had four parameters for each variable: an overall effect across all electrode sites, one for how the effect differs from left to right (X-position), one for how the effect differs between anterior and posterior sites (Y-position), and one for how the effect differs across electrodes lower on the scalp (e.g., T3, Oz) versus higher, central sites (e.g., Cz). See the Supporting Information (Appendix A2) for model code. This was the strategy used by Winsler et al. (2018) and was shown to appropriately analyze broad patterns of ERP distributions. Given the exploratory nature of the present experiment and the low spatial resolution of EEG signals, this approach was adopted to identify the general pattern of effects, without strong a priori hypotheses. But it should be noted that this strategy has limited power to detect especially focal or nonlinear interactions between effects and their distributions over the scalp. The random-effect structure included random intercepts for participant, item, and electrode. Additionally, there were by-participant random slopes for the effect of each experimental variable (Frequency, Concreteness, and Iconicity), as well as Sign Length and Sign Onset. To assess significance of each effect, confidence intervals were generated for each parameter. Additionally, p values were obtained for each parameter using type-two Wald tests, which allowed us to test the partial effect (unique variance) of each variable of interest. These p values were FDR (false discovery rate) corrected using the Mass Univariate Analysis Toolbox (Groppe, Urbach, & Kutas, 2011). Effects were only interpreted as significant if they were significant by confidence interval (interval not containing 0) and by FDR-corrected p value (p < 0.05).
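The exact model code is given in the Supporting Information (Appendix A2). As an illustration of the model structure just described, a minimal lme4-style sketch might look as follows; the variable, factor, and data-frame names (amplitude, frequency, x_pos, epoch_data, etc.) are placeholders and need not match the authors’ actual code.

```r
library(lme4)

# One model per 100-ms epoch: mean amplitude predicted by the three standardized
# lexical variables and the two covariates, each crossed with the three
# electrode-position dimensions (which also enter as covariates themselves).
fit <- lmer(
  amplitude ~ (frequency + concreteness + iconicity + sign_length + sign_onset) *
              (x_pos + y_pos + z_pos) +
              # random intercepts for participant, item, and electrode, with
              # by-participant random slopes for the five predictors
              (1 + frequency + concreteness + iconicity + sign_length + sign_onset | participant) +
              (1 | item) + (1 | electrode),
  data = epoch_data  # one row per participant x item x electrode for a given epoch
)

confint(fit, method = "Wald")  # confidence intervals for each parameter
car::Anova(fit, type = 2)      # type-II Wald tests for the partial effect of each variable
```

The crossing term `(frequency + ... + sign_onset) * (x_pos + y_pos + z_pos)` yields, for each predictor, an overall effect plus its three interactions with the electrode-position dimensions, without introducing interactions among the lexical variables themselves.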
**Data visualization** The confidence interval and t statistic for each parameter of interest are presented for each time window in Figures 3A–5A. The effect is highlighted if it was significant with both the confidence interval and the FDR-corrected \( p \) value. To visualize the distribution of the effects, models were constructed for each time epoch and electrode separately, and \( t \) values for the effect of each variable of interest were plotted across the scalp as topographic maps (maps in Figures 3A–5A). These models included the overall effects of Frequency, Concreteness, and Iconicity, as well as Sign Length and Sign Onset as covariates. The electrode-specific models also contained random intercepts for participants and items. Additionally, for visualization, traditional ERPs were plotted by averaging EEG data from 50 representative signs for the high and low conditions of each of the three experimental variables (see Figures 3B–5B). These averages controlled for the other experimental variables such that each comparison differed significantly only by the variable of interest, but not by the other two experimental variables. Note that these ERPs are for visual reference only and have not been analyzed statistically.

Figure 3. American Sign Language frequency effects. (A) Linear mixed-effect \( t \) statistics, confidence intervals, and topographical \( t \)-statistic maps for frequency effects. Effects are only highlighted if results were significant with both confidence intervals and false discovery rate corrected \( p \) values; trend (\( p < .06 \)) indicated by (*). (B) Frequency ERP plots made using the top and bottom quartiles of items sorted by frequency.

RESULTS

Behavioral Results

Participants correctly detected an average of 87% of the probe signs that referred to people, with an average false alarm rate of 4%. The mean RT for detecting the probe signs was 1,392 ms ($SD = 82$ ms).

Linear Mixed-Effect Regression Results

Confidence intervals and $t$ statistics for each parameter estimate are presented in a table for each variable of interest (Frequency, Concreteness, and Iconicity) in Figures 3A–5A. Table cells are highlighted if the effect is statistically significant both by its confidence interval and by the FDR-corrected \( p \) value. To aid the visualization of the effects, for each time point there is a topographical map made from \( t \) values obtained from per-electrode LMER models. Additionally, Figures 3B–5B present averaged ERPs comparing each variable with 50 items per average, balancing for the other two variables.

**Frequency Effects** In the first epoch from 100–200 ms, there was a Frequency by Y-position interaction. As shown in Figure 3A, this indicates that lower frequency signs tended to produce more negativity in frontal sites, and less negativity in posterior sites. In the following two time windows, there were no significant effects. For the next four epochs, between 400 and 800 ms, there was again an interaction with the Y dimension, with low-frequency signs generating greater negativity in anterior electrode sites (see Figure 3A). In the following 800–900-ms epoch there were no significant effects. Based on the topographic maps in Figure 3A, the frequency effect seems to be transitioning from the previous anterior distribution to a more posterior distribution in the following epochs. In the 900–1,000-ms time window there was again a Frequency by Y-position interaction, but now in the opposite direction from before (note the flipped t statistic), indicating that lower frequency items elicited more negativity in posterior sites.
This effect remained through the final epoch (1,100–1,200 ms). Additionally, in the final epoch there was a Frequency by Z-position interaction, showing more negativity to low-frequency signs in peripheral sites, and a slight positivity to low-frequency signs in central sites. Concreteness Effects In the 100–200-ms epoch there was a Concreteness by X-position interaction, with more concrete signs producing greater negativity on the left side of the montage, and less negativity on the right side. However, for the next four epochs (200–600 ms), there were no significant effects of concreteness. Beginning in the 600–800-ms epoch and continuing through the final epoch (1,100–1,200 ms), there was a Concreteness by Z-position interaction. This interaction indicates that concrete items elicited more negativity than abstract items, and this effect was distributed in the center of the scalp (see Figure 4A and Figure 4B). Additionally, in the four time windows between 700 and 1,100 ms, there was an overall effect of concreteness, showing that in these epochs the concreteness effect is distributed across the entire scalp. Iconicity Effects In the first nine time windows analyzed, there were no effects of Iconicity. In the final two epochs (1,000–1,100 ms and 1,100–1,200 ms) there were Iconicity by Y-position interactions showing greater negativity to low-iconicity signs in posterior sites, and less negativity in anterior sites (see Figure 5A and Figure 5B). DISCUSSION This is the first ERP study to investigate the effects of lexical frequency, concreteness, and iconicity on the temporal neural dynamics of sign comprehension. LMER models were fit in 100 ms-time epochs with per-participant, per-trial, per-electrode data to analyze the electrophysiological effects of these lexical variables on sign recognition. The results revealed both universal properties of lexical processing that are shared across signed and spoken languages, as well as different patterns that may be attributable to characteristics of the auditory-oral and visual-manual modalities. As predicted, lexical frequency and concreteness exhibited similar electrophysiological effects for sign recognition as previously found for spoken word recognition, but the time-course and scalp distribution of these effects were somewhat different for signs. No significant effects of iconicity were found, except for a weak effect in the late epochs. Frequency ERPs were time-locked to video onset, and sign onset occurred approximately 500 ms later. Therefore, the very early effects of frequency observed in the first epoch (100–200 ms and 200–300 ms) are most likely associated with the transitional movement of the signer’s hand(s) from the resting position on her lap to the target location of the sign (see Figure 1). In these early epochs, lower frequency signs produced greater negativities than higher frequency signs at frontal and central sites. Unlike spoken languages, the linguistic articulators for sign languages are fully visible, and psycholinguistic research has shown that signers are sensitive to early linguistic cues that are visible in the transitional movement from a resting position to sign onset, as well as in the transitional movement between signs. For example, in gating studies signers can often identify the handshape and location of the sign prior to the onset of the sign, both when signs are presented in isolation (Emmorey & Corina, 1990; Grosjean, 1981) and when presented in a sentence context (Clark & Grosjean, 1982). 
Further, in an ERP study of sentence processing in German Sign Language, Hosemann, Herrmann, Steinbach, Bornkessel-Schlesewsky, and Schlesewsky (2013) found that the onset of the N400 response to sentence-final anomalous signs occurred prior to sign onset and thus had to be elicited by information present during the transition phase. We suggest that the very early effect of lexical frequency observed during the transition phase for isolated signs in the present study may reflect sensitivity to the frequency of sublexical properties, particularly handshape. Caselli et al. (2017) reported that handshape frequency was positively correlated with lexical frequency (i.e., higher frequency handshapes occurred in more frequent signs), but location frequency was not correlated with lexical frequency. If signers recognize a sign’s handshape during the transition phase, then it is possible that less frequent handshapes elicit a more negative response compared to more frequent handshapes (that occur in more frequent signs). Frequency effects next emerged in the 400–500-ms epoch at frontal sites (slightly left-lateralized), and then there was a later, more central-posterior frequency effect that began to emerge in the 800–900-ms epoch. We suggest that the different timing and distribution of these two effects may reflect sensitivity to frequency at two distinct levels: phonological form and lexical-semantics. For spoken languages, frequency effects are known to occur at multiple levels, including phonological encoding and lexical-semantic processing (e.g., Knobel, Finkbeiner, & Caramazza, 2008; Winsler et al., 2018). Previous ERP studies investigating implicit and explicit phonological priming in ASL indicate a frontal distribution for form priming, with smaller negativities over anterior sites for sign pairs that overlap in form (e.g., share the same handshape and location) compared to unrelated sign pairs (Meade, Midgley, Sevcikova Sehyr, Holcomb, & Emmorey, 2017; Meade et al., 2018). These results lead us to hypothesize that this earlier anteriorly distributed effect is related to accessing the phonological form of signs. The later central-posterior distribution is more typical of the frequency effect observed in the N400 window for spoken language, which is usually associated with lexical-semantic processes. Note that this later effect is significant in the 900–1,000-ms epoch, which is 400 ms after the average sign onset (i.e., 500 ms after stimulus onset). It may be possible to observe separate effects of phonological frequency and lexical-semantic frequency in the ERPs to signs because phonological form encoding involves recognition of large movements of the hands and arms and distinct body configurations. The neural regions involved in form processing may be more neurally segregated from regions involved in lexical-semantic processing for sign language compared to spoken language. For spoken language, temporal cortex is involved in both phonological and lexical-semantic processing (e.g., Hickok & Poeppel, 2007), whereas for sign language more distinct neural regions appear to be involved in phonological processing (parietal cortex) and lexical-semantic processing (temporal cortex; see MacSweeney & Emmorey, 2020). In addition, the timing of these processes may be more segregated for sign language because the articulators are visible during the transition to sign onset. 
For speech, word onset coincides with stimulus onset, whereas there is ~500-ms delay between stimulus (video) onset and sign onset that contains form information about the upcoming sign. Future work that separately manipulates phonological and semantic variables will help to determine whether the distinct timing and distribution of the frequency effects observed here are linked to different processing levels involved in sign recognition. Concreteness A robust effect of concreteness began to emerge 700–800 ms after video onset (~200 ms after sign onset) and continued throughout all analyzed epochs. The polarity of the effect (more negative for more concrete signs) and the wide distribution around central electrode sites parallel what has been found for spoken languages (e.g., Holcomb et al., 1999; Winsler et al., 2018). ERP effects of concreteness on word recognition are typically interpreted as reflecting richer semantic representations and denser links to associated semantic representations for more concrete words compared to more abstract words (Holcomb et al., 1999; West & Holcomb, 2000). Larger negativities to concrete words may result from increased neural activation arising from the more extensive semantic networks of these words, although greater N400 negativity does not appear to be monotonically associated with an increasing number of semantic features (Amsel & Cree, 2013; Kounios et al., 2009). Abstract words presented in isolation (as in the current study) may receive less semantic processing because they activate a smaller number of associations that may not be easily integrated into a unified concept (Barsalou & Wiemer-Hastings, 2005). In addition, semantic processing of concrete words engages a larger number of neural networks that are linked to sensorimotor properties of the concept (e.g., Barber, Otten, Kousta, & Vigliocco, 2013; Binder, Desai, Graves, & Conant, 2009). The parallel ERP results for signs and words indicate that language modality does not impact the neural networks that underlie processing of concrete vs. abstract concepts. The time course of the concreteness effect likely reflects how the perception of single signs (produced in isolation) unfolds over time. A robust effect of concreteness emerges in the 700–800-ms time window (see Figure 4A), which is ~200 ms after the average sign onset (i.e., when the hand reaches the target location on the face/body or in neutral space). We suggest that some signs have already been recognized at this time window (Emmorey & Corina, 1990), giving rise to the concreteness effect. There is a main effect of concreteness for the next four epochs, and we suggest that this timing is consistent with the N400 concreteness effect observed for spoken and written word recognition. Iconicity There were no significant effects of iconicity on ERPs until the final two epochs (1,000–1,200 ms). In these late epochs, the effect of iconicity was relatively weak (compared to the effects of frequency and concreteness) and consisted of a more negative response for less iconic signs at posterior sites. This finding is consistent with the results of Mott et al. (2020) who reported a late effect of iconicity when deaf signers performed a word-sign translation task. 
Specifically, noniconic signs exhibited a weaker translation priming effect (i.e., larger negativity for signs preceded by unrelated than by related English primes) compared to iconic signs in time windows that followed lexical by access (i.e., after the N400 window where translation priming was observed, but there was no interaction with iconicity). Following Mott et al., we suggest that the weak, late effect of iconicity reflects postlexical sensitivity to sign iconicity, perhaps reflecting a strategic effect when making the semantic categorization judgment. The (weak) iconicity effect emerged about 200–300 ms prior to the average RT for the “go” probe decision. Our results indicate that the degree of form-meaning mapping does not impact the temporal neural dynamics of sign recognition and lexical access. In contrast to lexical frequency and concreteness, there does not appear to be a neural response that is modulated by lexical variation in iconicity during sign comprehension. Although sign iconicity may impact... performance on tasks such as picture naming (e.g., Navarrete et al., 2017) or picture-sign matching (e.g., Thompson et al., 2009), iconicity does not appear to have a general impact on the neural networks that support sign recognition (see also Bosworth & Emmorey, 2010). For spoken language, Lockwood, Hagoort, and Dingemanse (2016) found that iconicity (sound symbolism) impacted ERPs for new learners, specifically Dutch speakers who learned Japanese ideophones (marked words that depict sensory imagery; Dingemanse, 2012) in either a “real” condition (the correct Dutch translation) or in an “opposite” condition (the Dutch translation had the opposite meaning of the ideophone). Ideophones (auditorily presented) in the real condition elicited a larger P3 component and late positive complex compared to ideophones in the opposite condition. Further, these effects were greater for individuals who were more sensitive to sound symbolism (as assessed in a separate task). For native Japanese speakers, Lockwood and Tuomainen (2015) compared ERPs to iconic adverbs (adverbial ideophones) and arbitrary adverbs while participants made sensibility judgments to visually presented sentences that differed only in the type of adverb. Iconic adverbs elicited a greater P2 response than arbitrary adverbs, and there was a long-lasting late effect of iconicity, which the authors interpreted as a late positive complex. The authors speculated that the P2 effect arises from the integration of sound and sensory information associated with the distinctive phonology of ideophones and the later effect may reflect facilitated lexical access for arbitrary adverbs compared to ideophones. However, ideophones differ from iconic signs because ideophones occur in sparse phonological neighborhoods (due to their distinctive phonology; Dingemanse, 2012), whereas iconic signs tend to be found in dense phonological neighborhoods (Caselli et al., 2017) and are not phonologically marked. In addition, highly iconic ASL signs tend to be found in dense semantic neighborhoods, whereas highly iconic English words are associated with sparser semantic neighborhoods (Thompson, Perlman, Lupyan, Sevcikova Sehyr, & Emmorey, 2020). Thus, the effect of iconicity on ERPs does not appear to be parallel for signed and spoken languages. 
However, no study that we know of has investigated whether continuous lexical variation in iconicity as measured by iconicity ratings of spoken words (e.g., Perry, Perlman, & Lupyan, 2015) modulates ERP components associated with spoken or written word recognition. The Temporal Neural Dynamics of Sign Recognition: Neurobiological Effects on Lexical Access Our results revealed neurobiological principles that hold for both signed and spoken languages, as well as neural patterns that are modulated by language modality. The early waveforms shown in Figures 3B–5B (100–300 ms postvideo onset) reveal that signs elicit an occipital P1 response followed by an N1 response—both components are typically elicited by visual stimuli, including written words (Luck, 2014). Within these two early epochs, we observed effects of sign frequency (Figure 3A), which we attributed to signers’ sensitivity to the frequency of handshapes that are perceived during early transitional movements. This interpretation is consistent with the results of a MEG study by Almeida, Poeppel, and Corina (2016) in which deaf signers and hearing nonsigners were asked to discriminate between still images of possible signs and anatomically impossible signs. The earliest visual cortical responses (M100 and M130) were sensitive to this distinction only for deaf signers who also outperformed the nonsigners on the discrimination task. The authors concluded that extensive sign language experience (and/or deafness) can “shape early neuronal mechanisms that underlie the analysis of visual communication, likely on the basis of highly articulated, predictive internal models of gesture and language processing” (p. 372). As can be seen in Figures 3B–5B, the N1 component was followed by the N300, a component that has been observed in studies using pictures or gestures (rather than written or spoken words) and is hypothesized to be involved in processing early visual semantic features (e.g., Hamm, Johnson, & Kirk, 2002; McPherson & Holcomb, 1999; Wu & Coulson, 2007, 2011). As found for picture and gesture processing, the N300 to signs has an anterior distribution. Meade et al. (2018) found both phonological and semantic priming effects on the N300 (reduced negativities for target signs preceded by related versus unrelated prime signs). Here, we observed frequency effects emerging during this component (400–500 ms postvideo onset), and we interpreted this early anterior effect as reflecting form-based lexical frequency, that is, accessing visual-manual phonological representations. It is possible that the N300, like the N250 for visual words, indexes the mapping between sublexical and lexical representations. Further research is needed to determine the functional significance of the N300 component for sign recognition and the factors that modulate this response. As found for spoken word recognition, the N400 response to signs tends to be prolonged, compared to the N400 elicited by visually presented words. This is likely due to the fact that both spoken words and signs are dynamic and unfold over time. During these later epochs (~400 ms postsign onset), the anterior frequency effect shifted to a more widely distributed posterior effect that we interpreted as reflecting lexical-semantic frequency. Concreteness effects were also observed during these later epochs with the same polarity (i.e., more negative for more concrete signs) and the same distribution as observed for spoken languages. 
Our findings support the consensus that the N400 component is associated with amodal lexical-semantic processing (Kutas & Federmeier, 2011). The results are also consistent with ERP studies demonstrating N400 effects for lexical-level semantic violations in signed sentences (Capek et al., 2009; Grosvald et al., 2012; Gutierrez, Williams, Grosvald, & Corina, 2012; Kutas et al., 1987; Neville et al., 1997). Finally, lexical variation in iconicity did not modulate the neural response during sign recognition, suggesting that this lexical variable is not represented in the brain in a manner that is parallel to either frequency or concreteness. Despite the pervasiveness of iconicity in ASL (Thompson et al., 2020), there does not appear to be a general neural response that is associated with variation in iconicity. However, the present study was only designed to identify general patterns of effects and may not have been able to detect particularly focal effects or nonlinear interactions with iconicity. Thus, further work is necessary to determine under what conditions (if any) sign iconicity impacts lexical access and sign recognition and/or if there are particular types of iconicity that might modulate the neural response to signs, such as perceptual or motor iconicity (Perniss, Thompson, & Vigliocco, 2010) or highly transparent signs that are “manual cognates” with gestures (Ortega, Özyürek, & Peeters, 2019; Sevcikova Sehyr & Emmorey, 2019). In sum, we used a large-scale, item-based analysis with LMER models which controlled for the colinearity of lexical variables, and this approach allowed us to identify ERP effects that were specific to the continuous variables of lexical frequency, concreteness, and iconicity. Despite the perceptual and motoric differences between signed and spoken languages, the overall results indicate that very similar electrophysiological processes underlie lexical access for signs and words. The findings provide a better understanding of the timing and distribution of these lexical effects on sign recognition such that future studies can analyze them more precisely. We expect that future studies will be able to uncover nuances in the temporal neural dynamics of sign recognition based on the broad pattern of lexical effects presented here. ACKNOWLEDGMENTS The authors would like to thank Cindy O’Grady Farnady for help carrying out this study. We also thank all of the participants without whom this research would not be possible. FUNDING INFORMATION Karen Emmorey, National Institute on Deafness and Other Communication Disorders (http://dx.doi.org/10.13039/10000055), Award ID: DC010997. Phillip J. Holcomb, National Institute of Child Health and Human Development (http://dx.doi.org/10.13039/100000071), Award ID: HD25889. AUTHOR CONTRIBUTIONS Karen Emmorey: Conceptualization; Supervision - participant recruitment & data collection; Writing - original draft; Writing - review & editing. Phillip J. Holcomb: Conceptualization; Writing - review & editing. Katherine J. Midgley: Supervision - participant recruitment & data collection; Writing - review & editing. Kurt WInsler: Data analysis; Writing - review & editing. Jonathan Grainger: Writing - reviewing & editing. REFERENCES Neurophysiological correlates of sign recognition
Math212a1412 Constructing outer measures. Shlomo Sternberg October 23, 2014 1. Constructing outer measures, Method I. - Metric outer measures. 2. Constructing outer measures, Method II. 3. Hausdorff measure. 4. Hausdorff dimension. Review: outer measures. An **outer measure** on a set $X$ is a map $m^*$ to $[0, \infty]$ defined on the collection of *all* subsets of $X$ which satisfies - $m^*(\emptyset) = 0$, - **Monotonicity**: If $A \subset B$ then $m^*(A) \leq m^*(B)$, and - **Countable subadditivity**: $m^*(\bigcup_n A_n) \leq \sum_n m^*(A_n)$. Measures from outer measures via Caratheodory. Given an outer measure, $m^*$, we defined a set $E$ to be measurable (relative to $m^*$) if $$m^*(A) = m^*(A \cap E) + m^*(A \cap E^c)$$ for all sets $A$. Then Caratheodory's theorem that we proved last time asserts that the collection of measurable sets is a $\sigma$-field, and the restriction of $m^*$ to the collection of measurable sets is a measure which we shall usually denote by $m$. Let $\mathcal{C}$ be a collection of sets which cover $X$. For any subset $A$ of $X$ let $\text{ccc}(A)$ denote the set of (finite or) countable covers of $A$ by sets belonging to $\mathcal{C}$. In other words, an element of $\text{ccc}(A)$ is a finite or countable collection of elements of $\mathcal{C}$ whose union contains $A$. Suppose we are given a function $$\ell : \mathcal{C} \to [0, \infty].$$ **Theorem** There exists a unique outer measure $m^*$ on $X$ such that - $m^*(A) \leq \ell(A)$ for all $A \in \mathcal{C}$ and - If $n^*$ is any outer measure satisfying the preceding condition then $n^*(A) \leq m^*(A)$ for all subsets $A$ of $X$. This unique outer measure is given by $$m^*(A) = \inf_{\mathcal{D} \in \text{ccc}(A)} \sum_{D \in \mathcal{D}} \ell(D). \quad (1)$$ In other words, for each countable cover of $A$ by elements of $\mathcal{C}$ we compute the sum above, and then minimize over all such covers of $A$. Proof: uniqueness, $m^*(\emptyset) = 0$, monotonicity. If we had two outer measures satisfying both conditions then each would have to be $\leq$ the other, so uniqueness is obvious. To check that the $m^*$ defined by (1) is an outer measure, observe that for the empty set we may take the empty cover, and the convention about an empty sum is that it is zero, so $m^*(\emptyset) = 0$. If $A \subset B$ then any cover of $B$ is a cover of $A$, so $m^*(A) \leq m^*(B)$. Subadditivity. To check countable subadditivity we use the usual $\epsilon/2^n$ trick: if $m^*(A_n) = \infty$ for some $A_n$ the subadditivity condition is obviously satisfied. Otherwise, we can find a $\mathcal{D}_n \in \text{ccc}(A_n)$ with $$ \sum_{D \in \mathcal{D}_n} \ell(D) \leq m^*(A_n) + \frac{\epsilon}{2^n}. $$ Then we can collect all the $\mathcal{D}_n$ together into a countable cover of $A = \bigcup_n A_n$, so $$ m^*(A) \leq \sum_n m^*(A_n) + \epsilon, $$ and since this is true for all $\epsilon > 0$ we conclude that $m^*$ is countably subadditive. Checking condition 1: $m^*(A) \leq \ell(A)$ for all $A \in \mathcal{C}$. We have verified that $m^*$ defined by (1) is an outer measure. We must check that it satisfies the two conditions in the theorem. If $A \in \mathcal{C}$ then the single element collection $\{A\} \in \text{ccc}(A)$, so $m^*(A) \leq \ell(A)$, and the first condition is obvious. Checking condition 2: if $n^*$ is any outer measure satisfying $n^*(D) \leq \ell(D)$ then $n^*(A) \leq m^*(A)$ for all subsets $A$ of $X$. As to condition 2, suppose $n^*$ is an outer measure with $n^*(D) \leq \ell(D)$ for all $D \in \mathcal{C}$. 
Then for any set $A$ and any countable cover $\mathcal{D}$ of $A$ by elements of $\mathcal{C}$ we have $$\sum_{D \in \mathcal{D}} \ell(D) \geq \sum_{D \in \mathcal{D}} n^*(D) \geq n^*\left(\bigcup_{D \in \mathcal{D}} D\right) \geq n^*(A),$$ where in the second inequality we used the countable subadditivity of $n^*$ and in the last inequality we used the monotonicity of $n^*$. Minimizing over all $\mathcal{D} \in \text{ccc}(A)$ shows that $m^*(A) \geq n^*(A)$. $\square$ This argument is basically a repeat of the construction of Lebesgue measure of the last lecture. However there is some trouble: A pathological example. Suppose we take $X = \mathbb{R}$, and let $\mathcal{C}$ consist of all *half open* intervals of the form $[a, b)$. However, instead of taking $\ell$ to be the length of the interval, we take it to be the square root of the length: $$\ell([a, b)) := (b - a)^{\frac{1}{2}}.$$ I claim that any half open interval of length one, say $[0, 1)$, has $m^*$ equal to 1. (Since $\ell$ is translation invariant, it does not matter which interval we choose.) Claim: $m^*([0, 1)) = 1$. Proof. $m^*([0, 1)) \leq 1$ by the first condition in the theorem, since $\ell([0, 1)) = 1$. On the other hand, if $$[0, 1) \subset \bigcup_i [a_i, b_i)$$ then we know from Heine-Borel that $\sum (b_i - a_i) \geq 1$, and expanding the square gives $$\left( \sum_i (b_i - a_i)^{\frac{1}{2}} \right)^2 = \sum_i (b_i - a_i) + \sum_{i \neq j} (b_i - a_i)^{\frac{1}{2}}(b_j - a_j)^{\frac{1}{2}} \geq 1,$$ so $\sum_i (b_i - a_i)^{\frac{1}{2}} \geq 1$. So $m^*([0, 1)) = 1$. The closed unit interval is not measurable in this example. On the other hand, consider an interval $[a, b)$ of length 2. Since it covers itself, $m^*([a, b)) \leq \sqrt{2}$. Consider the closed interval $I = [0, 1]$. Then $$I \cap [-1, 1) = [0, 1) \quad \text{and} \quad I^c \cap [-1, 1) = [-1, 0),$$ so $$m^*(I \cap [-1, 1)) + m^*(I^c \cap [-1, 1)) = 2 > \sqrt{2} \geq m^*([-1, 1)).$$ In other words, the closed unit interval is *not measurable* relative to the outer measure $m^*$ determined by the theorem. A desideratum. We would like Borel sets to be measurable, and the above computation shows that the measure produced by Method I as above does not have this desirable property. In fact, if we consider two half open intervals $I_1$ and $I_2$ of length one separated by a small distance of size $\epsilon$, say, then their union $I_1 \cup I_2$ is covered by an interval of length $2 + \epsilon$, and hence $$m^*(I_1 \cup I_2) \leq \sqrt{2 + \epsilon} < m^*(I_1) + m^*(I_2).$$ In other words, $m^*$ is not additive even on intervals separated by a finite distance. It turns out that this is the crucial property that is missing: let $X$ be a metric space. An outer measure on $X$ is called a **metric outer measure** if $$m^*(A \cup B) = m^*(A) + m^*(B) \text{ whenever } d(A, B) > 0. \quad (2)$$ The condition $d(A, B) > 0$ means that there is an $\epsilon > 0$ (depending on $A$ and $B$) so that $d(x, y) > \epsilon$ for all $x \in A$, $y \in B$. The main result here is due to Caratheodory: Caratheodory's theorem on metric outer measures. **Theorem** If $m^*$ is a metric outer measure on a metric space $X$, then all Borel sets of $X$ are $m^*$ measurable. Since the $\sigma$-field of Borel sets is generated by the closed sets, it is enough to prove that every closed set $F$ is measurable in the sense of Caratheodory, i.e. that $$m^*(A) \geq m^*(A \cap F) + m^*(A \setminus F)$$ for any set $A$ and any closed set $F$. This will require a clever argument due to Caratheodory. 
Let $$A_j := \{ x \in A \mid d(x, F) \geq \frac{1}{j} \}.$$ We have $d(A_j, A \cap F) \geq 1/j$ so, since $m^*$ is a metric outer measure, we have $$m^*(A \cap F) + m^*(A_j) = m^*((A \cap F) \cup A_j) \leq m^*(A) \tag{3}$$ since $(A \cap F) \cup A_j \subset A$. Now $$A \setminus F = \bigcup A_j$$ since $F$ is closed, and hence every point of $A$ not belonging to $F$ must be at a positive distance from $F$. We would like to be able to pass to the limit in (3). If the limit on the left is infinite, there is nothing to prove. So we may assume it is finite. If $x \in A \setminus (F \cup A_{j+1})$ there is a $z \in F$ with $d(x, z) < 1/(j + 1)$, while if $y \in A_j$ we have $d(y, z) \geq 1/j$, so $$d(x, y) \geq d(y, z) - d(x, z) \geq \frac{1}{j} - \frac{1}{j+1} > 0.$$ Let $B_1 := A_1$ and $B_2 := A_2 \setminus A_1$, $B_3 := A_3 \setminus A_2$, etc. Thus if $i \geq j + 2$, then $B_j \subset A_j$ and $$B_i \subset A \setminus (F \cup A_{i-1}) \subset A \setminus (F \cup A_{j+1}),$$ and so $d(B_i, B_j) > 0$. So $m^*$ is additive on finite unions of even or odd $B$'s: $$m^* \left( \bigcup_{k=1}^{n} B_{2k-1} \right) = \sum_{k=1}^{n} m^*(B_{2k-1}), \quad m^* \left( \bigcup_{k=1}^{n} B_{2k} \right) = \sum_{k=1}^{n} m^*(B_{2k}).$$ Both of these are $\leq m^*(A_{2n})$, since the union of the sets involved is contained in $A_{2n}$. Since $m^*(A_{2n})$ is increasing, and assumed bounded, both of the above series converge. Thus $$m^*(A \setminus F) = m^*\left(\bigcup A_i\right) = m^* \left( A_j \cup \bigcup_{k \geq j+1} B_k \right) \leq m^*(A_j) + \sum_{k=j+1}^{\infty} m^*(B_k) \leq \lim_{n \to \infty} m^*(A_n) + \sum_{k=j+1}^{\infty} m^*(B_k).$$ But the sum on the right can be made as small as we like by choosing $j$ large, since the series converges. Hence $$m^*(A \setminus F) \leq \lim_{n \to \infty} m^*(A_n),$$ and combining this with (3) gives $m^*(A \cap F) + m^*(A \setminus F) \leq m^*(A)$, as required. $\square$ Comparing the method I construction for two covers. Let $\mathcal{C} \subset \mathcal{E}$ be two covers, and suppose that $\ell$ is defined on $\mathcal{E}$, and hence, by restriction, on $\mathcal{C}$. In the definition (1) of the outer measure $m^*_{\ell,\mathcal{C}}$ associated to $\ell$ and $\mathcal{C}$, we are minimizing over a smaller collection of covers than in computing the outer measure $m^*_{\ell,\mathcal{E}}$ using all the sets of $\mathcal{E}$. Hence $$m^*_{\ell,\mathcal{C}}(A) \geq m^*_{\ell,\mathcal{E}}(A)$$ for any set $A$. 
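Formula (1) is easy to experiment with in the toy setting where $X$ and the cover collection are finite: every countable cover then reduces to a finite subcollection, so the infimum is a minimum over subsets. The following Python sketch (my own illustration, not part of the notes; the function name and the example data are made up) computes $m^*_{\ell,\mathcal{C}}$ by brute force, which also makes the comparison inequality $m^*_{\ell,\mathcal{C}} \geq m^*_{\ell,\mathcal{E}}$ for $\mathcal{C} \subset \mathcal{E}$ easy to check on small examples.

```python
from itertools import combinations

def method_one_outer_measure(cover_sets, ell, A):
    """Brute-force Method I on a finite space: minimize sum(ell) over
    subcollections of cover_sets whose union contains A."""
    A = frozenset(A)
    if not A:
        return 0.0  # the empty cover works, and an empty sum is 0
    best = float("inf")
    for k in range(1, len(cover_sets) + 1):
        for combo in combinations(range(len(cover_sets)), k):
            if A <= set().union(*(cover_sets[i] for i in combo)):
                best = min(best, sum(ell[i] for i in combo))
    return best

# Toy example: X = {0, 1, 2, 3} with four covering sets and weights ell.
C = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3}), frozenset({0, 1, 2, 3})]
ell = [1.0, 1.0, 1.0, 2.5]
print(method_one_outer_measure(C, ell, {0, 2}))  # 2.0, e.g. via {0,1} and {1,2}
print(method_one_outer_measure(C, ell, set()))   # 0.0
```

Restricting the same weights to a subcollection of C and rerunning can only raise the returned value, which is exactly the inequality above.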
Applying this remark: the method II construction. We want to apply this remark to the case where $X$ is a metric space, and we have a cover $\mathcal{C}$ with the property that for every $x \in X$ and every $\epsilon > 0$ there is a $C \in \mathcal{C}$ with $x \in C$ and $\text{diam}(C) < \epsilon$. In other words, we are assuming that the $$\mathcal{C}_\epsilon := \{ C \in \mathcal{C} \mid \text{diam}(C) < \epsilon \}$$ are covers of $X$ for every $\epsilon > 0$. Then for every set $A$ the $m^*_{\ell, \mathcal{C}_\epsilon}(A)$ are increasing as $\epsilon$ decreases to zero, so we can consider the function on sets given by $$m^*_{\mathrm{II}}(A) := \lim_{\epsilon \to 0} m^*_{\ell, \mathcal{C}_\epsilon}(A).$$ The axioms for an outer measure are preserved by this limit operation, so $m^*_{\mathrm{II}}$ is an outer measure. If $A$ and $B$ are such that $d(A, B) > 2\epsilon$, then any set of $\mathcal{C}_\epsilon$ which intersects $A$ does not intersect $B$ and vice versa, so throwing away the extraneous sets in a cover of $A \cup B$ which intersect neither, we see that $m^*_{\mathrm{II}}(A \cup B) = m^*_{\mathrm{II}}(A) + m^*_{\mathrm{II}}(B)$. The method II construction always yields a metric outer measure. Binary sequence space. Let $X$ be the set of all (one sided) infinite sequences of 0's and 1's. So a point of $X$ is an expression of the form $$a_1a_2a_3\cdots$$ where each $a_i$ is 0 or 1. For any finite sequence $\alpha$ of 0's or 1's, let $[\alpha]$ denote the set of all sequences which begin with $\alpha$. We also let $|\alpha|$ denote the length of $\alpha$, that is, the number of bits in $\alpha$. The metrics $d_r$ on binary sequence space. For each $0 < r < 1$ we define a metric $d_r$ on $X$ by: if $$x = \alpha x', \quad y = \alpha y'$$ where the first bit in $x'$ is different from the first bit in $y'$, then $$d_r(x, y) := r^{|\alpha|}.$$ In other words, the distance between two sequences is $r^k$ where $k$ is the length of the longest initial segment on which they agree. Proof that $d_r$ is a metric. Clearly $d_r(x, y) \geq 0$ and $= 0$ if and only if $x = y$, and $d_r(y, x) = d_r(x, y)$. Also, for three points $x, y,$ and $z$ we claim that $$d_r(x, z) \leq \max\{d_r(x, y), d_r(y, z)\}.$$ Indeed, if two of the three points are equal this is obvious. Otherwise, let $j$ denote the length of the longest common prefix of $x$ and $y$, and let $k$ denote the length of the longest common prefix of $y$ and $z$. Let $m = \min(j, k)$. Then the first $m$ bits of $x$ agree with the first $m$ bits of $z$, and so $d_r(x, z) \leq r^m = \max(r^j, r^k)$. $\square$ A metric with this property (which is much stronger than the triangle inequality) is called an ultrametric. The diameter of $[\alpha]$. The spaces $(X, d_r)$ are homeomorphic. Notice that $$\text{diam } [\alpha] = r^{|\alpha|}. \quad (4)$$ The metrics for different $r$ are different, and we will make use of this fact shortly. But **Proposition.** The spaces $(X, d_r)$ are all homeomorphic under the identity map. Proof. It is enough to show that the identity map is a continuous map from $(X, d_r)$ to $(X, d_s)$, since it is one to one and we can interchange the roles of $r$ and $s$. So, given $\epsilon > 0$, we must find a $\delta > 0$ such that if $d_r(x, y) < \delta$ then $d_s(x, y) < \epsilon$. So choose $k$ so that $s^k < \epsilon$. Then letting $\delta = r^k$ will do. 
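The ultrametric inequality just proved is easy to spot-check numerically. Here is a small sketch of mine (not from the notes), truncating sequences to finitely many bits, which does not change any distance once the sequences differ within the truncation:

```python
import random

def d_r(x, y, r):
    """Distance r**k, where k is the length of the longest common prefix.
    x and y are equal-length tuples of bits; equal sequences get distance 0."""
    if x == y:
        return 0.0
    k = 0
    while x[k] == y[k]:
        k += 1
    return r ** k

random.seed(0)
r = 1 / 3
for _ in range(1000):
    x, y, z = (tuple(random.randint(0, 1) for _ in range(20)) for _ in range(3))
    # ultrametric property: d(x, z) <= max(d(x, y), d(y, z))
    assert d_r(x, z, r) <= max(d_r(x, y, r), d_r(y, z, r)) + 1e-12
print("ultrametric inequality holds on 1000 random triples")
```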
So although the metrics are different, the topologies they define are the same. The case $r = \frac{1}{2}$. Let $C$ be the collection of all sets of the form $[\alpha]$ and let $\ell$ be defined on $C$ by $$\ell([\alpha]) = \left(\frac{1}{2}\right)^{|\alpha|}.$$ We can construct the method II outer measure associated with this function, which will satisfy $$m^*_{\mathrm{II}}([\alpha]) \geq m^*_{\mathrm{I}}([\alpha]),$$ where $m^*_{\mathrm{I}}$ denotes the method I outer measure associated with $\ell$. What is special about the value $\frac{1}{2}$ is that if $k = |\alpha|$ then $$\ell([\alpha]) = \left(\frac{1}{2}\right)^k = \left(\frac{1}{2}\right)^{k+1} + \left(\frac{1}{2}\right)^{k+1} = \ell([\alpha 0]) + \ell([\alpha 1]).$$ So if we use the metric $d_{\frac{1}{2}}$, we see, by repeating the above, that every $[\alpha]$ can be written as the disjoint union $C_1 \cup \cdots \cup C_n$ of sets in $\mathcal{C}_\epsilon$ with $\ell([\alpha]) = \sum \ell(C_i)$. Thus $m^*_{\ell, \mathcal{C}_\epsilon}([\alpha]) \leq \ell([\alpha])$, and so, by the maximality assertion in the method I construction theorem, $m^*_{\ell, \mathcal{C}_\epsilon}(A) \leq m^*_{\mathrm{I}}(A)$ for every set $A$; hence $m^*_{\mathrm{II}} = m^*_{\mathrm{I}}$. It also follows from the above computation that $$m([\alpha]) = m^*([\alpha]) = \ell([\alpha]).$$ In particular $$m(X) = 1.$$ The case $r = \frac{1}{3}$. There is also something special about the value $r = \frac{1}{3}$: recall that one of the definitions of the Cantor set $\mathcal{C}$ is that it consists of all points $x \in [0, 1]$ which have a base 3 expansion involving only the symbols 0 and 2. Let $$h : X \to \mathcal{C}$$ where $h$ sends the bit 1 into the symbol 2, e.g. $$h(011001\ldots) = .022002\ldots.$$ In other words, for any sequence $z$ $$h(0z) = \frac{h(z)}{3}, \quad h(1z) = \frac{h(z) + 2}{3}. \quad (5)$$ I claim that: $$\frac{1}{3} d_{\frac{1}{3}}(x, y) \leq |h(x) - h(y)| \leq d_{\frac{1}{3}}(x, y). \quad (6)$$ **Proof.** If $x$ and $y$ start with different bits, say $x = 0x'$ and $y = 1y'$, then $d_{\frac{1}{3}}(x, y) = 1$ while $h(x)$ lies in the interval $[0, \frac{1}{3}]$ and $h(y)$ lies in the interval $[\frac{2}{3}, 1]$ on the real line. So $h(x)$ and $h(y)$ are at least a distance $\frac{1}{3}$ and at most a distance 1 apart, which is what (6) says. So we proceed by induction. Suppose we know that (6) is true when $x = \alpha x'$ and $y = \alpha y'$ with $x'$, $y'$ starting with different digits, and $|\alpha| \leq n$. (The above case was where $|\alpha| = 0$.) So if $|\alpha| = n + 1$ then either $\alpha = 0\beta$ or $\alpha = 1\beta$, and the argument for either case is similar: we know that (6) holds for $\beta x'$ and $\beta y'$, and $$d_{\frac{1}{3}}(x, y) = \frac{1}{3} d_{\frac{1}{3}}(\beta x', \beta y'),$$ while $|h(x) - h(y)| = \frac{1}{3} |h(\beta x') - h(\beta y')|$ by (5). Hence (6) holds by induction. $\square$ In other words, the map $h$ is a Lipschitz map with Lipschitz inverse from $(X, d_{\frac{1}{3}})$ to the Cantor set $\mathcal{C}$. 
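A quick numerical sanity check of the two-sided bound (6), again my own sketch using finite truncations: a truncated bit string is treated as the infinite sequence obtained by padding with zeros, so (5) and (6) apply exactly to the padded sequences.

```python
import random

def h(bits):
    """Map a finite bit string (padded with zeros) to the Cantor set via
    the recursion (5): h(0z) = h(z)/3, h(1z) = (h(z) + 2)/3."""
    value = 0.0
    for b in reversed(bits):
        value = (value + 2 * b) / 3
    return value

def d13(x, y):
    if x == y:
        return 0.0
    k = next(i for i in range(len(x)) if x[i] != y[i])
    return (1 / 3) ** k

random.seed(1)
for _ in range(1000):
    x = tuple(random.randint(0, 1) for _ in range(30))
    y = tuple(random.randint(0, 1) for _ in range(30))
    d, gap = d13(x, y), abs(h(x) - h(y))
    assert d / 3 - 1e-12 <= gap <= d + 1e-12   # the bound (6)
print("bound (6) verified on 1000 random pairs")
```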
In a short while, after making the appropriate definitions, these two computations, one with the measure associated to $\ell([\alpha]) = \left(\frac{1}{2}\right)^{|\alpha|}$ and the other associated with $d_{\frac{1}{3}}$, will show that the “Hausdorff dimension” of the Cantor set is $\log 2 / \log 3$. Hausdorff measures on a metric space. Let $X$ be a metric space. Recall that if $A$ is any subset of $X$, the **diameter** of $A$ is defined as $\text{diam}(A) = \sup_{x, y \in A} d(x, y)$. Take $\mathcal{C}$ to be the collection of “all” subsets of $X$, and for any positive real number $s$ define $$\ell_s(A) = \text{diam}(A)^s$$ (with $0^s = 0$). The associated method II outer measure is called the **$s$-dimensional Hausdorff outer measure**, and its restriction to the associated $\sigma$-field of (Caratheodory) measurable sets is called the **$s$-dimensional Hausdorff measure**. We will let $m^*_{s,\epsilon}$ denote the method I outer measure associated to $\ell_s$ and the cover $\mathcal{C}_\epsilon$ (the sets of diameter less than $\epsilon$), and let $\mathcal{H}^*_s$ denote the Hausdorff outer measure of dimension $s$, so that $$\mathcal{H}^*_s(A) = \lim_{\epsilon \to 0} m^*_{s,\epsilon}(A).$$ For example, we claim that for $X = \mathbb{R}$, $\mathcal{H}^*_1$ is Lebesgue outer measure, which we will denote here by $L^*$. Indeed, if $A$ has diameter $r$, then $A$ is contained in a closed interval of length $r$. Hence $L^*(A) \leq r$. The Method I construction theorem says that $m^*_{1,\epsilon}$ is the largest outer measure satisfying $m^*(A) \leq \text{diam } A$ for sets of diameter less than $\epsilon$. Hence $m^*_{1,\epsilon}(A) \geq L^*(A)$ for all sets $A$, and this is true for all $\epsilon$. So $$\mathcal{H}^*_1 \geq L^*.$$ On the other hand, any bounded half open interval $[a, b)$ can be broken up into a finite union of half open intervals of length $< \epsilon$, whose sum of diameters is $b - a$. So $m^*_{1,\epsilon}([a, b)) \leq b - a$. But the method I construction theorem says that $L^*$ is the largest outer measure satisfying $$m^*([a, b)) \leq b - a.$$ Hence $\mathcal{H}^*_1 \leq L^*$. So they are equal. In two or more dimensions, the Hausdorff measure $\mathcal{H}_k$ on $\mathbb{R}^k$ differs from Lebesgue measure by a constant. This is essentially because they assign different values to the ball of diameter one. In two dimensions for example, the Hausdorff measure $\mathcal{H}_2$ assigns the value one to the disk of diameter one, while its Lebesgue measure is $\pi/4$. For this reason, some authors prefer to put this “correction factor” into the definition of the Hausdorff measure, which would involve the Gamma function for non-integral $s$. I am following the convention that finds it simpler to drop this factor. Back to the general theory: the main theorem in the theory. **Theorem** *Let $F \subset X$ be a Borel set. Let $0 < s < t$. Then* $$\mathcal{H}_s(F) < \infty \implies \mathcal{H}_t(F) = 0$$ *and* $$\mathcal{H}_t(F) > 0 \implies \mathcal{H}_s(F) = \infty.$$ Proof. If $\text{diam } A \leq \epsilon$, then $$m^*_{t,\epsilon}(A) \leq (\text{diam } A)^t \leq \epsilon^{t-s}(\text{diam } A)^s,$$ so by the method I construction theorem we have $$m^*_{t,\epsilon}(B) \leq \epsilon^{t-s} m^*_{s,\epsilon}(B)$$ for all $B$. If we take $B = F$ in this inequality, then the assumption $\mathcal{H}_s(F) < \infty$ implies that the right hand side tends to 0 as $\epsilon \to 0$, so $\mathcal{H}_t(F) = 0$. 
The second assertion in the theorem is the contrapositive of the first. Definition of the Hausdorff dimension. This last theorem implies that for any Borel set $F$, there is a unique value $s_0$ (which might be 0 or $\infty$) such that $\mathcal{H}_t(F) = \infty$ for all $t < s_0$ and $\mathcal{H}_s(F) = 0$ for all $s > s_0$. This value is called the **Hausdorff dimension** of $F$. It is one of many competing (and non-equivalent) definitions of dimension. Notice that it is a metric invariant, and in fact is the same for two spaces differing by a Lipschitz homeomorphism with Lipschitz inverse. But it is not a topological invariant. In fact, we shall show that the space $X$ of all sequences of zeros and ones studied above has Hausdorff dimension 1 relative to the metric $d_{\frac{1}{2}}$, while it has Hausdorff dimension $\log 2 / \log 3$ if we use the metric $d_{\frac{1}{3}}$. Since we have shown that $(X, d_{\frac{1}{3}})$ is Lipschitz equivalent to the Cantor set $\mathcal{C}$, this will also prove that $\mathcal{C}$ has Hausdorff dimension $\log 2 / \log 3$. The Hausdorff dimension of $(X, d_{\frac{1}{2}})$. **Lemma** *If $\text{diam}(A) > 0$, then there is an $\alpha$ such that $A \subset [\alpha]$ and $\text{diam}([\alpha]) = \text{diam}\, A$.* **Proof.** Given any set $A$, it has a “longest common prefix”. Indeed, consider the set of lengths of common prefixes of elements of $A$. This is a finite set of non-negative integers, since $A$ has at least two distinct elements and so these lengths are bounded. Let $n$ be the largest of these, and let $\alpha$ be a common prefix of this length. Then it is clearly the longest common prefix of $A$. Hence $A \subset [\alpha]$ and $\text{diam}([\alpha]) = \text{diam}\, A$. $\square$ Let $C$ denote the collection of all sets of the form $[\alpha]$ and let $\ell$ be the function on $C$ given by $$\ell([\alpha]) = \left(\frac{1}{2}\right)^{|\alpha|},$$ and let $\ell^*$ be the associated method I outer measure, and $m$ the associated measure; all these as we introduced above. We have $$\ell^*(A) \leq \ell^*([\alpha]) = \text{diam}([\alpha]) = \text{diam}(A).$$ By the method I construction theorem, $m^*_{1,\epsilon}$ is the largest outer measure with the property that $n^*(A) \leq \text{diam}\, A$ for sets of diameter $< \epsilon$. Hence $\ell^* \leq m^*_{1,\epsilon}$, and since this is true for all $\epsilon > 0$, we conclude that $$\ell^* \leq \mathcal{H}^*_1.$$ On the other hand, for any $\alpha$ and any $\epsilon > 0$, there is an $n$ such that $2^{-n} < \epsilon$ and $n \geq |\alpha|$. The set $[\alpha]$ is the disjoint union of all sets $[\beta] \subset [\alpha]$ with $|\beta| = n$, and there are $2^{n-|\alpha|}$ of these subsets, each having diameter $2^{-n}$. So $$m^*_{1,\epsilon}([\alpha]) \leq 2^{-|\alpha|}.$$ However $\ell^*$ is the largest outer measure satisfying this inequality for all $[\alpha]$. Hence $m^*_{1,\epsilon} \leq \ell^*$ for all $\epsilon$, so $\mathcal{H}^*_1 \leq \ell^*$. In other words $$\mathcal{H}_1 = m.$$ But since we computed that $m(X) = 1$, we conclude that **the Hausdorff dimension of $(X, d_{\frac{1}{2}})$ is 1.** The Hausdorff dimension of $(X, d_{\frac{1}{3}})$. 
The diameter $\text{diam}_{\frac{1}{2}}$ relative to the metric $d_{\frac{1}{2}}$ and the diameter $\text{diam}_{\frac{1}{3}}$ relative to the metric $d_{\frac{1}{3}}$ are given by $$\text{diam}_{\frac{1}{2}}([\alpha]) = \left(\frac{1}{2}\right)^k, \quad \text{diam}_{\frac{1}{3}}([\alpha]) = \left(\frac{1}{3}\right)^k, \quad k = |\alpha|.$$ If we choose $s$ so that $2^{-k} = (3^{-k})^s$, that is, $s = \log 2 / \log 3$, then $$\text{diam}_{\frac{1}{2}}([\alpha]) = (\text{diam}_{\frac{1}{3}}([\alpha]))^s.$$ This says that relative to the metric $d_{\frac{1}{3}}$, the previous computation yields $$\mathcal{H}_s(X) = 1$$ for this value of $s$. Hence $\log 2 / \log 3$ is the Hausdorff dimension of $(X, d_{\frac{1}{3}})$, and therefore also of the Cantor set. Felix Hausdorff Born: 8 Nov 1868 in Breslau, Germany (now Wroclaw, Poland) Died: 26 Jan 1942 in Bonn, Germany by suicide, to avoid being sent to an extermination camp.
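As a closing numerical illustration, the special role of $s = \log 2/\log 3$ is already visible from the standard stage-$n$ covers of the Cantor set by $2^n$ intervals of length $3^{-n}$. The small sketch below is mine, not from the notes; these particular covers only bound the Hausdorff sums from above, so the output illustrates rather than proves the dimension statement.

```python
import math

def cantor_cover_sum(n, s):
    """Sum of (diam)^s over the 2**n stage-n intervals of length 3**(-n)."""
    return (2 ** n) * (3 ** n) ** (-s)

s_star = math.log(2) / math.log(3)
for s in (0.5, s_star, 0.7):
    print(round(s, 4), [round(cantor_cover_sum(n, s), 4) for n in (5, 10, 20)])
# s = 0.5          : the sums grow without bound
# s = log 2 / log 3: the sums stay equal to 1, consistent with H_s(C) = 1
# s = 0.7          : the sums tend to 0
```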
The structure of steady axisymmetric force-free magnetosphere of a Kerr black hole (BH) is governed by a second-order partial differential equation of $A_\phi$ depending on two “free” functions $\Omega(A_\phi)$ and $I(A_\phi)$, where $A_\phi$ is the $\phi$ component of the vector potential of the electromagnetic field, $\Omega$ is the angular velocity of the magnetic field lines and $I$ is the poloidal electric current. In this paper, we investigate the solution uniqueness. Taking asymptotically uniform field as an example, analytic studies imply that there are infinitely many solutions approaching uniform field at infinity, while only a unique one is found in general relativistic magnetohydrodynamic simulations. To settle down the disagreement, we reinvestigate the structure of the governing equation and numerically solve it with given constraint condition and boundary condition. We find that the constraint condition (field lines smoothly crossing the light surface (LS)) and boundary conditions at horizon and at infinity are connected via radiation conditions at horizon and at infinity, rather than being independent. With appropriate constraint condition and boundary condition, we numerically solve the governing equation and find a unique solution. Contrary to naive expectation, our numerical solution yields a discontinuity in the angular velocity of the field lines and a current sheet along the last field line crossing the event horizon. We also briefly discuss the applicability of the perturbation approach to solving the governing equation. Subject headings: gravitation – magnetic field – magnetohydrodynamics 1. INTRODUCTION The Blandford-Znajek (BZ) mechanism (Blandford & Znajek 1977) is believed to be one of most efficient ways to extract rotation energy from spinning black holes (BHs), which operate in BH systems on all mass scales, from the stellar-mass BHs of gamma ray bursts to the supermassive BHs of active galactic nuclei. In the past decade, we have gained better understanding of the BZ mechanism from general relativistic magnetohydrodynamic (GRMHD) simulations (e.g. Komissarov 2001, 2004a,b, 2005; Semenov et al. 2004; McKinney & Gammie 2004; McKinney 2005; McKinney & Narayan 2007a,b; Komissarov & McKinney 2007; Tchekhovskoy et al. 2008, 2010, 2011; Palenzuela et al. 2011; Alic et al. 2012; Tchekhovskoy & McKinney 2012; Penna et al. 2013; McKinney et al. 2013), numerical solutions (e.g. Fendt 1997; Uzdensky 2004, 2005; Palenzuela et al. 2010; Contopoulos et al. 2013; Nathaniel & Contopoulos 2014) and analytic perturbation solutions (e.g. Tanabe & Nagataki 2008; Beskin & Zheltoukhov 2013; Pan & Yu 2014, 2015a,b, 2016; Gralla & Jacobson 2014; Gralla et al. 2015, 2016b; Yang et al. 2015; Penna 2015) to the steady axisymmetric force-free electrodynamics in the Kerr spacetime. Various studies converge to a common picture of how the BZ mechanism operates: The spinning BH distorts the poloidal magnetic field $B_P$, and induces the poloidal electric field $E_P$ and toroidal magnetic field $B_T$, which generate an outward Poynting flux $E_P \times B_T$ along the magnetic field lines threading the spinning BH. The rotation energy of the spinning BHs is extracted in the form of Poynting flux (Komissarov 2009; Beskin 2010). 
To step further, it is natural to ask whether these different approaches give qualitatively and quantitatively consistent descriptions of the BH magnetosphere structure, e.g., the topology of magnetic fields, the electric current distributions, the angular velocities of the magnetic field lines and the energy extraction rates. The answer is yes and no. The axisymmetric, steady-state, force-free magnetosphere around Kerr BHs is governed by the general relativistic Grad-Shafranov (GS) equation. [Footnote 1: A few families of exact solutions (e.g. Menon & Dermer 2005, 2007, 2011; Brennan et al. 2013; Menon 2015; Compère et al. 2016) to the equations of force-free electrodynamics in the Kerr spacetime have been found in the past decade. But these solutions have various limitations, e.g., not allowing energy extraction from the BH, being electrically dominated instead of magnetically dominated, lacking clear physical interpretation, or being time-dependent and not axisymmetric, which make it difficult to compare these exact solutions with simulations and numerical solutions.] For the simplest magnetic field configuration, the split monopole field, both analytic (Pan & Yu 2015a) and numerical solutions (Nathanail & Contopoulos 2014) reproduce the simulated angular velocity of field lines $\Omega$, poloidal electric current $I$ and energy extraction rate $\dot{E}$ to high precision (Tchekhovskoy et al. 2010). But for the asymptotically uniform field, different approaches do not even reach a consensus on the solution uniqueness. Time-dependent simulations (e.g. Komissarov 2005; Komissarov & McKinney 2007; Yang et al. 2015) seem to converge to a unique solution. Previous analytic studies (Beskin & Zheltoukhov 2013; Pan & Yu 2014; Gralla et al. 2016b) seem to find a unique perturbation solution which roughly agrees with GRMHD simulations. But in this paper, we will show that there are actually many of them, due to the superposition of a monopole component (and other possible components). According to the argument of Nathanail & Contopoulos (2014), solving the GS equation is actually an eigenvalue problem, with two eigenvalues $\Omega(A_\phi)$ and $I(A_\phi)$ to be determined by requiring field lines to smoothly cross the light surfaces (LSs). For common field configurations, there usually exist two LSs, which suffice to determine the two eigenvalues. With only one LS for the uniform field configuration and one more boundary condition, Nathanail & Contopoulos (2014) numerically found a unique solution, which however shows features distinct from previous GRMHD simulations (Komissarov 2005). How do we explain the relationship between the unique solution and the infinitely many possible candidates, and the discrepancy between the previous numerical solution and GRMHD simulations? Does the plasma inertia make a difference? The force-free condition is assumed in both analytic and numerical solutions, but the inertia cannot be completely ignored in simulations. Taking account of the plasma inertia, Takahashi et al. (1990) proposed the so-called MHD Penrose process, in which plasma particles within the ergosphere are projected onto negative-energy orbits by the magnetic field and eventually are captured by the central BH. As a result, Alfvén waves are generated along the magnetic field lines, and BH rotation energy is carried away by these Alfvén waves. Koide et al. (2002) and Koide (2003) found the MHD Penrose process was operating in GRMHD simulations (see e.g. Lasota et al. 
2014; Koide & Baba 2014; Toma & Takahara 2014; Kojima 2015; Toma & Takahara 2016 for recent discussions on this issue). If the MHD Penrose process is the dominant energy extraction process, the unique solution found in simulations actually describes the MHD Penrose process instead of the BZ mechanism. However, later simulations showed that the MHD Penrose process is only a transient state, after which the Alfvén waves decay, the system settles down into a steady state, and the BZ mechanism takes over (e.g. Komissarov 2005). Therefore the plasma inertia seems to make little difference after the system settles down into the steady state. Another possible explanation is that, among all these mathematically possible solutions, only the one found in simulations is stable. Yang & Zhang (2014) and Yang et al. (2015) analyzed the stability of these solutions, and no unstable mode was found at order $O(a)$, where $a$ is the dimensionless BH spin. Therefore any unstable mode can grow at a rate of at most $\sim O(a^2)$. But the corresponding timescale is much longer than the transient timescales observed in simulations. Therefore they concluded that the selection rule is unlikely to come from instability. In this paper, we show that the uniform field solution is unique, as strongly implied by previous GRMHD simulations and pointed out by Nathanail & Contopoulos (2014). Following the algorithm proposed by Contopoulos et al. (2013) and Nathanail & Contopoulos (2014), we numerically find a unique combination of $\Omega$ and $I$, ensuring both that field lines cross the LS smoothly and that the field is uniform at infinity. Contrary to Nathanail & Contopoulos (2014), our numerical solution yields a discontinuity in the angular velocity of field lines and a current sheet along the last field line crossing the event horizon, which are features found in previous simulations. We also investigate the applicability of the analytic perturbation approach to the GS equation, which relies on a fixed unperturbed solution and a priori known asymptotic behavior of the magnetic field. The analytic approach breaks down if either of these two ingredients is missing. Both are satisfied for the monopole field in the Kerr spacetime, which is why we see the excellent match between high-order perturbation solutions and the results from simulations and numerical solutions. But for the uniform field, the unperturbed background field is not fixed, due to the possible superposition of a monopole component, and therefore the perturbation approach cannot predict a unique solution. The paper is organized as follows. In Section 2, we summarize the basic equations governing steady axisymmetric force-free magnetospheres. In Section 3, we clarify the relation between constraint conditions, radiation conditions and boundary conditions, and describe our numerical method for solving the GS equation. We apply the perturbation approach to the uniform field problem and clarify the applicability of the analytic perturbation approach in Section 4. Summary and discussions are given in Section 5. In the Appendix, we present a robust solver for the horizon regularity condition and discuss its implication for the existence of an electric current sheet. 2. BASIC EQUATIONS In the force-free approximation, the electromagnetic energy greatly exceeds that of matter. Consequently, the force-free magnetosphere is governed by the energy conservation equation of the electromagnetic field, conventionally called the GS equation. 
[Footnote 2: It is definitely worthwhile examining the solution uniqueness problem of the BZ mechanism, considering its significance in modern relativistic astrophysics. For example, rotating BHs are believed to be described by the Kerr solution, but there is no solid evidence for it. Some efforts have been made to detect possible deviations from the Kerr solution via BZ mechanism powered jet emission (e.g. Bambi 2012a,b, 2015; Pei et al. 2016). A prerequisite for these BZ applications is its solution uniqueness.] [Footnote 3: The stability problem of general force-free jets is a complicated story. Early studies implied that force-free jets are vulnerable to various instabilities (e.g. Begelman 1998; Lyubarskii 1999; Li 2000; Wang et al. 2004), while later studies showed that these instabilities are strongly suppressed by field rotation, poloidal field curvature, etc. (e.g. Tomimatsu et al. 2001; McKinney & Blandford 2009; Narayan et al. 2009, and references therein).] In the Kerr spacetime, the axisymmetric and steady GS equation is written as $$-\Omega \left[ (\sqrt{-g}F^{rt})_{,r} + (\sqrt{-g}F^{\theta t})_{,\theta} \right] + F_{r\theta}\, I'(A_\phi) + \left[ (\sqrt{-g}F^{r\phi})_{,r} + (\sqrt{-g}F^{\theta\phi})_{,\theta} \right] = 0 \, , \quad (1)$$ which, when expanded (see also e.g. Contopoulos et al. 2013; Nathanail & Contopoulos 2014; Pan & Yu 2016, in slightly different forms), becomes a second-order equation for $A_\phi$ in which the prefactor of $A_{\phi,rr}$ is $$\mathcal{K}(r, \theta; \Omega) = \frac{\beta}{\Sigma}\, \Omega^2 \sin^2 \theta - \frac{4 r a \Omega}{\Sigma} \sin^2 \theta - \left( 1 - \frac{2r}{\Sigma} \right) \, , \quad (2)$$ where $\Sigma = r^2 + a^2 \mu^2$, $\Delta = r^2 - 2r + a^2$, $\beta = \Delta \Sigma + 2r(r^2 + a^2)$, $\mu \equiv \cos \theta$ and the primes designate derivatives with respect to $A_\phi$. For clarity, we may write the GS equation in a more illustrating form $$\left[ A_{\phi,rr} + \frac{\sin^2 \theta}{\Delta} A_{\phi,\mu\mu} \right] \mathcal{K}(r, \theta; \Omega) + \left[ A_{\phi,r}\, \partial_r^\Omega + \frac{\sin^2 \theta}{\Delta} A_{\phi,\mu}\, \partial_\mu^\Omega \right] \mathcal{K}(r, \theta; \Omega) + \frac{1}{2} \left[ A_{\phi,r}^2 + \frac{\sin^2 \theta}{\Delta} A_{\phi,\mu}^2 \right] \Omega'\, \partial_\Omega \mathcal{K}(r, \theta; \Omega) - \frac{\Sigma}{\Delta}\, II' = 0 \, , \quad (3)$$ where $\mathcal{K}(r, \theta; \Omega)$ is the prefactor given in Equation (2), $\partial_i^\Omega$ ($i = r, \mu$) denotes the partial derivative with respect to coordinate $i$ with $\Omega$ fixed, and $\partial_\Omega$ is the derivative with respect to $\Omega$. The GS equation written in this compact form manifests a clear symmetry, and is therefore convenient in various respects. [Footnote 4: For example, in flat spacetime, the classical pulsar equation (Scharlemann & Wagoner 1973) is recovered by plugging $\mathcal{K}(r, \theta; \Omega) = \Omega^2 r^2 \sin^2 \theta - 1$ into Equation (3).] 
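As a quick consistency check on the prefactor (2), the short sketch below (my own illustration, not from the paper) evaluates $\mathcal{K}$ and confirms that for $a = 0$ and large $r$ it approaches the flat-spacetime pulsar prefactor $\Omega^2 r^2 \sin^2\theta - 1$ quoted in Footnote 4; units with $G = c = M = 1$ are assumed.

```python
import numpy as np

def K(r, theta, Omega, a):
    """Prefactor of A_phi,rr as written in Equation (2), G = c = M = 1."""
    mu = np.cos(theta)
    Sigma = r ** 2 + a ** 2 * mu ** 2
    Delta = r ** 2 - 2 * r + a ** 2
    beta = Delta * Sigma + 2 * r * (r ** 2 + a ** 2)
    s2 = np.sin(theta) ** 2
    return (beta / Sigma) * Omega ** 2 * s2 - (4 * r * a * Omega / Sigma) * s2 - (1 - 2 * r / Sigma)

r, theta, Omega = 1.0e3, 1.2, 0.02
print(K(r, theta, Omega, a=0.0))                       # close to the pulsar value below
print(Omega ** 2 * r ** 2 * np.sin(theta) ** 2 - 1.0)  # Omega^2 r^2 sin^2(theta) - 1
```

For $a = 0$ the two numbers differ only by the residual $2/r$ term, which vanishes as $r \to \infty$.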
3. THE SOLUTION UNIQUENESS PROBLEM In this section, we first clarify all the constraint conditions that the GS equation satisfies, and their relation with the boundary conditions at the horizon and at infinity. We find that the constraint conditions and boundary conditions are not independent. For a given $\Omega(A_\phi)$, we can numerically find an $I(A_\phi)$ ensuring that field lines smoothly cross the LS, but the combination of $\Omega(A_\phi)$ and $I(A_\phi)$ obtained this way is usually in conflict with the uniform field boundary condition at infinity. To be consistent with this boundary condition, $\Omega(A_\phi)$ and $I(A_\phi)$ must satisfy one more constraint. Then, we numerically find the unique combination of $\Omega(A_\phi)$ and $I(A_\phi)$ ensuring that field lines smoothly cross the LS and that the boundary condition at infinity is satisfied. Finally, we compare our numerical solution with previous studies. 3.1. Constraint Conditions and Boundary Conditions We want physically allowed solutions to be finite and smooth everywhere. At the LS, where $\mathcal{K} = 0$, the second-order GS equation degenerates into a first-order equation $$\left[ A_{\phi,r}\, \partial_r^\Omega + \frac{\sin^2 \theta}{\Delta} A_{\phi,\mu}\, \partial_\mu^\Omega \right] \mathcal{K}(r, \theta; \Omega) + \frac{1}{2} \left[ A_{\phi,r}^2 + \frac{\sin^2 \theta}{\Delta} A_{\phi,\mu}^2 \right] \Omega'\, \partial_\Omega \mathcal{K}(r, \theta; \Omega) = \frac{\Sigma}{\Delta}\, II' \, . \quad (4)$$ Field lines smoothly crossing the LS must satisfy the above constraint, which we call the LS crossing constraint condition. At the horizon and at infinity, the requirement of solution finiteness leads to the radiation conditions (e.g. Pan & Yu 2016), which read $$I = \frac{2r(\Omega - \Omega_H)\sin^2 \theta}{\Sigma} A_{\phi,\mu} \bigg|_{r=r_+} \, , \quad (5)$$ and $$I = -\Omega \sin^2 \theta\, A_{\phi,\mu} \bigg|_{r=\infty} \, , \quad (6)$$ where $\Omega_H$ is the angular velocity of the central BH. But the radiation conditions and boundary conditions are not independent. For example, the radiation condition (5) uniquely determines the boundary values at the horizon if $I$ and $\Omega$ are specified, and we will use it as the inner boundary condition in our numerical calculation. In the same way, the radiation condition (6) uniquely determines the boundary values at infinity, if $\Omega$ and $I$ are specified; or the radiation condition (6) enforces a constraint on $\Omega$ and $I$, if the boundary condition at infinity is given. In our working example, the boundary condition at infinity $$A_\phi (r \to \infty) = r^2 \sin^2 \theta \quad (7)$$ is given. Plugging it into the radiation condition (6), we find that $\Omega$ and $I$ must satisfy a new constraint (Nathanail & Contopoulos 2014; Pan & Yu 2014, 2016) $$I = 2\Omega A_\phi \, . \quad (8)$$ Note that conditions (6, 7, 8) are not independent, and we will use two of them (7, 8) to close the GS equation. Now we have the two constraint conditions (4, 8) and the two boundary conditions in the $r$ direction (5, 7) ready (where the inner boundary condition (5) is nontrivial; see the Appendix for details). The next step is to specify proper boundary conditions in the $\mu$ direction. 
According to the claim proved in paper II, “In the steady axisymmetric force-free magnetosphere around a Kerr BH, all magnetic field lines that cross the infinite-redshift surface must intersect the event horizon,”[5] the possible field configuration in the steady state is shown in Figure 1. [Footnote 5: The claim depends on the relation $I \propto \Omega$, which can be derived from the radiation condition at infinity [Equation (6)]. But there are some debates about whether the radiation condition holds for vertical field configurations; see paper II.] Consequently, we write the boundary conditions in the $\mu$ direction as follows, $$A_\phi(\mu = 1) = 0, \quad A_\phi(\mu = 0,\ r_+ \leq r \leq 2) = A^H_\phi, \quad A_{\phi,\mu}(\mu = 0,\ r \geq 2) = 0, \quad (9)$$ where the horizon enclosed magnetic flux $A^H_\phi$ is to be determined self-consistently. 3.2. Numerical Method and Results The algorithm for numerically solving the GS equation was proposed by Contopoulos et al. (2013) and was optimized by Nathanail & Contopoulos (2014). We slightly tailor their algorithm to accommodate the problem we are working on. We define a new radial coordinate $R = r/(1 + r)$, confine our computational domain in $R \times \mu$ to the region $[R(r_+), 1] \times [0, 1]$, and implement a uniform $512 \times 64$ grid. The detailed numerical steps are as follows: 1. We choose an initial guess $A_\phi$ and trial functions $\Omega$ and $I$ as follows, $$A_\phi = r^2 \sin^2 \theta, \quad \Omega = \frac{\Omega_H}{2} \cos \left(\frac{\pi}{2} \frac{A_\phi}{A^H_\phi}\right), \quad I = \Omega_H A_\phi \cos \left(\frac{\pi}{2} \frac{A_\phi}{A^H_\phi}\right).$$ 2. We evolve $A_\phi$ using the relaxation method (Press et al. 1987); this method does not work properly at the LS, where the coefficient of the second-order derivatives vanishes. Fortunately, the directional derivative of $A_\phi$ is known there as a function of $II'$ (see Equation (4)). We instead update $A_\phi$ at the LS using the neighboring grid points and the directional derivative. From the directional derivative and the grid points on the left/right side of the LS, we obtain two values of $A_\phi$ at the LS, $A_\phi^L$ and $A_\phi^R$. Usually the two are not equal and the field lines are broken there. To smooth the field lines, we update $II'$ and $A_\phi$ at the LS as follows: $$II'_{\text{new}}(A_{\phi,\text{new}}) = II'_{\text{old}}(A_{\phi,\text{old}}) - 0.02\,(A_\phi^L - A_\phi^R),$$ where $$A_{\phi,\text{new}} = 0.5\,(A_\phi^L + A_\phi^R).$$ 3. Repeat step 2 ten times, then update $\Omega(A_\phi)$ according to the constraint (8). Starting from the initial guess, we iterate the above steps until the field lines smoothly cross the LS and the boundary conditions are satisfied. The numerical results are shown in Figure 1. In the left panel, we show the convergent field configuration, which as expected matches those of simulations (e.g. Komissarov & McKinney 2007). In the right panel, we show the functions $\Omega(A_\phi)$ and $II'(A_\phi)$. From the plot, we see that the angular velocity of the last field line crossing the event horizon, $\Omega(A^H_\phi)$, is not vanishing, i.e., $\Omega(A^H_\phi) \approx 0.28\, \Omega_H$, while we expect the angular velocity of field lines not crossing the BH to vanish, i.e., $\Omega(A_\phi > A^H_\phi) = 0$. 
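To make the bookkeeping of steps 1-3 concrete, here is a schematic Python sketch of the relaxation loop on the compactified $(R, \mu)$ grid. It is not the authors' code: the interior update is only a Laplace-type stand-in for the discretized GS equation, the light-surface treatment and the $\Omega$, $II'$ updates are reduced to comments, and the grid limits are placeholders.

```python
import numpy as np

NR, NMU = 512, 64                        # grid size used in the text
R = np.linspace(0.50, 0.99, NR)          # placeholder range standing in for [R(r_+), 1)
mu = np.linspace(0.0, 1.0, NMU)
r = R / (1.0 - R)                        # invert the compactification R = r / (1 + r)
A = np.outer(r ** 2, 1.0 - mu ** 2)      # initial guess A_phi = r^2 sin^2(theta)

for sweep in range(2000):
    A_new = A.copy()
    # Interior Jacobi update: a stand-in for the finite-differenced GS equation;
    # at grid points on the LS the first-order relation (4) would be used instead.
    A_new[1:-1, 1:-1] = 0.25 * (A[2:, 1:-1] + A[:-2, 1:-1] + A[1:-1, 2:] + A[1:-1, :-2])
    A_new[:, -1] = 0.0                   # axis (mu = 1): A_phi = 0, cf. conditions (9)
    A_new[:, 0] = A_new[:, 1]            # equator (mu = 0): reflecting stand-in, cf. (9)
    A_new[0, :] = A[0, :]                # inner boundary held fixed here; the real scheme uses (5)
    A_new[-1, :] = r[-1] ** 2 * (1.0 - mu ** 2)   # outer boundary: uniform field (7)
    A = A_new
    # Every 10 sweeps the real scheme smooths A across the LS, updates II' via
    # II'_new = II'_old - 0.02 (A_L - A_R), and then updates Omega from I = 2 Omega A_phi.
```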
3.3. Comparison with Previous Studies Nathanail & Contopoulos (2014) also studied the BH magnetosphere structure of the uniform field and concluded that both $\Omega(A_\phi)$ and $I(A_\phi)$ must approach zero along the last field line crossing the event horizon, and therefore the inner light surface (ILS) coincides with the infinite-redshift surface (IRS) along the equator, and no electric current sheet appears. But as shown in the Appendix, the horizon regularity condition requires the existence of a current sheet along the equator. And our numerical solution shows that there is a discontinuity in $\Omega(A_\phi)$ at $A^H_\phi$, and therefore the ILS lies between the event horizon and the IRS, and there exists a current sheet. The discrepancies here can be settled by the previous GRMHD simulations of Komissarov (2005), in which a sharp transition in $\Omega(A_\phi)$ at $A^H_\phi$ was observed and interpreted as a discontinuity smeared by numerical viscosity. It is worth noting that the discontinuity in $\Omega(A_\phi)$ does not lead to any physical difficulties, e.g., the continuity of $\mathbf{B}^2 - \mathbf{E}^2$ across the LS is not affected. In addition, we do not explicitly show the BZ power of the uniform field configuration here, because it is sensitive to the magnetic flux trapped by the event horizon, $A^H_\phi$, which is boundary condition dependent. In a real astrophysical environment, it is mainly determined by the accretion process onto the central BH (e.g. Garofalo 2009). 4. APPLICABILITY OF ANALYTIC PERTURBATION APPROACH In this section, we first recap the analytic perturbation approach to the GS equation, then apply it to the uniform field problem (Pan & Yu 2014, 2015a) and explain why this approach actually yields many solutions. Finally, we discuss the applicability of the perturbation method. We start with an unperturbed solution $A_0$ in the Schwarzschild spacetime, $$\mathcal{L}A_0 = 0, \quad (14)$$ where the operator $$\mathcal{L} \equiv \frac{\partial}{\partial r} \left(1 - \frac{2}{r}\right) \frac{\partial}{\partial r} + \frac{\sin^2 \theta}{r^2} \frac{\partial^2}{\partial \mu^2}. \quad (15)$$ For the corresponding Kerr metric solution $(A_\phi, I(A_\phi), \Omega(A_\phi))$, we assume $A_\phi|_{r \to \infty} = A_0$, define $i = I|_{r \to \infty}$, $\omega = \Omega|_{r \to \infty}$, and expand the solution to leading order, $$A_\phi = A_0 + a^2 A_2, \quad \omega = a\omega_1, \quad i = ai_1. \quad (16)$$ Then we linearize the GS equation (3) as $$\mathcal{L}A_2(r, \theta) = S_2(r, \theta; i_1, \omega_1), \quad (17)$$ by keeping only the leading order perturbation terms, where the source function $S_2$ depends on $i_1$ and $\omega_1$, which can be figured out from the radiation conditions at the horizon and at infinity (5-6). The solution to the linearized GS equation is written as $$A_2(r, \theta) = \int \int dr_0\, d\theta_0 \, S_2(r_0, \theta_0)\, G(r, \theta; r_0, \theta_0), \quad (18)$$ where $G(r, \theta; r_0, \theta_0)$ is the Green's function of the operator $\mathcal{L}$ (Petterson 1974; Blandford & Znajek 1977), $$\mathcal{L}G(r, \theta; r_0, \theta_0) = \delta(r - r_0)\delta(\theta - \theta_0). \quad (19)$$ In this way, for a given Schwarzschild metric solution $A_0$, the corresponding Kerr metric solution $\{A_\phi(a; r, \theta), I(a, A_\phi), \Omega(a, A_\phi)\}$ is uniquely determined order by order. Applying the method to the uniform field problem $A_0 = r^2 \sin^2 \theta$, we find (Beskin & Zheltoukhov 2013; Pan & Yu 2014; Gralla et al. 
2015), $$\Omega = I = 0 \quad (A_\phi > A_\phi^H),$$ (20) $$\Omega = \frac{\Omega_H}{2 \left(1 + \sqrt{1 - A_\phi/A_\phi^H}\right)}, \quad I = 2\Omega A_\phi \quad (A_\phi < A_\phi^H),$$ where $A_\phi^H$ is the magnetic flux trapped by the event horizon ($A_\phi^H = 4$ for the lowest-order perturbation solution, and generally depends on BH spins and boundary conditions). It seems that we have found the unique solution (at leading order) approaching uniform field at infinity, but this is not the case. The Schwarzschild spacetime GS equation (14) is linear. Both uniform field and monopole field are its solutions, so do their linear combinations $$A_0(\epsilon) = r^2 \sin^2 \theta + \epsilon(1 - \cos \theta),$$ (21) where $\epsilon$ is some constant coefficient. The mixture of monopole component generates a family of Schwarzschild metric solutions, $A_0(\epsilon)$, and all these solutions approach uniform field at infinity. For each solution, the corresponding Kerr metric solution $\{A_\phi(\epsilon), I(\epsilon), \Omega(\epsilon)\}$ can be obtained using the above perturbation method. To summarize, the perturbation method depends on two main ingredients: the known asymptotic field behavior at infinity $A_\phi|_{r\to\infty}$ and the fixed underlying unperturbed field configuration $A_0$. But for the uniform field problem, there are many mathematically allowed unperturbed solutions due to the additional monopole component. That’s why the perturbation approach cannot predict the unique solution. 5. SUMMARY AND DISCUSSIONS The GS equation is a second-order differential equation with two to-be-determined functions $\Omega$ and $I$. Generally speaking, we need two constraint conditions to determine $\Omega$ and $I$, two boundary conditions in the $r$ directions and two boundary conditions in the $\mu$ direction to fix $A_\phi(r, \mu)$. For asymptotically uniform field, we use constraint conditions (4,8), boundary conditions in the $r$ direction (5,7) and boundary conditions in the $\mu$ direction (9) to close the GS equation. Our numerical solution of the uniform field yields a discontinuity in the $\Omega(A_\phi)$ at $A_\phi^H$, therefore the ILS lies between the event horizon and the IRS, and there exists a current sheet along the last field line crossing the event horizon (Figure 1), which is as expected from the horizon regularity condition. Following the same logic, let us reexamine other well studied field configurations: monopole field in the Kerr spacetime and dipole field in the flat spacetime (classical pulsars). For both field configurations, the number of LSs equals the number of to-be-determined functions $I$ and $\Omega$. For monopole field in the Kerr space- time, there are two LS crossing conditions and two radiation conditions. The former two determine $\Omega$ and $I$, and the latter two determine the inner and the outer boundary. Hence, there is no more freedom for specifying a boundary condition at infinity, i.e., we actually do not know the solution at infinity before we really solve the GS equation. Previous simulations and numerical solutions indeed confirmed the asymptotic monopole field configuration $A_\phi \propto 1 - \cos \theta$. For pulsar dipole field, $\Omega$ is equal to the angular velocity of the central star and $I$ is the only function to determine. The only one LS uniquely determines $I$ (Contopoulos et al. 1999), and two radiation conditions automatically determine boundary conditions. 
In the same way, there is no more freedom to impose a boundary condition at infinity. And previous numerical studies found that the field at infinity deviates from $A_\phi \propto 1 - \cos \theta$ (e.g. Gralla et al. 2016a). We also discuss the perturbation approach for solving the GS equation, whose applicability depends on two main ingredients: the known asymptotic field behavior at infinity $A_\phi \mid_{r \to \infty}$ and the fixed underlying unperturbed field configuration $A_0$. For the monopole field, both of them are satisfied, therefore the perturbation approach is applicable and the high-order perturbation solutions match the results from simulations well (Pan & Yu 2015a). For both the uniform field in the Schwarzschild spacetime and the dipole field surrounding static stars, the superposition of a monopole component (and possibly other components) generates many unperturbed solutions; as a result, the perturbation approach cannot predict a unique uniform field solution in the Kerr spacetime or a unique dipole field solution surrounding spinning stars. We thank our referee Prof. I. Contopoulos for his insightful comments on the validity of the radiation condition for vertical field configurations and on the possible difficulties arising from the discontinuity in the angular velocity of field lines. We also thank A. Nathanail for clearly explaining his numerical algorithm. C.Y. is grateful for the support by the National Natural Science Foundation of China (grants 11173057, 11521303), the Yunnan Natural Science Foundation (grants 2012FB187, 2014HB048), and the Youth Innovation Promotion Association, CAS. Part of the computation was performed at the HPC Center, Yunnan Observatories, CAS, China. L.H. thanks the support by the National Natural Science Foundation of China (grant 11203055). This paper is supported in part by the Strategic Priority Research Program “The Emergence of Cosmological Structures” of the Chinese Academy of Sciences, grant No. XDB09000000, and the Key Laboratory for Radio Astronomy, CAS. This work made extensive use of the NASA Astrophysics Data System and of the astro-ph preprint archive at arxiv.org. REFERENCES Bambi, C. 2012a, Phys. Rev. D, 86, 123013 —. 2012b, Phys. Rev. D, 85, 043002 —. 2015, arXiv:1509.03884 Beskin, V. S. 2010, Physics-Uspekhi, 53, 1199 Brennan, T. D., Gralla, S. E., & Jacobson, T. 2013, Class. Quantum Gravity, 30, 195012 —. 2016b, Phys. Rev. D, 93, 044038 Menon, G. 2015, Phys. Rev. D, 024054 —. 2016, Gen. Relativ. Gravit., 39, 785 Palenzuela, C., Bona, C., Lehner, L., & Reula, O. 2011, Class. Quantum Gravity, 28, 134007 —. 2015a, Astrophys. J., 801, 57 —. 2015b, Phys. Rev. D, 91, 064067 Penna, R. F. 2015, Phys. Rev. D, 92, 084017 Petterson, J. 1974, Phys. Rev. D, 10, 3106 APPENDIX For a given combination of $I(A_\phi)$ and $\Omega(A_\phi)$, the horizon regularity condition (Equation (5)) uniquely determines the boundary condition at the horizon, $A_\phi(r = r_+, \mu)$. We find that its numerical solution is not trivial, due to the nonlinearity. To construct a robust solver, we first rewrite Equation (5) as $$\mathcal{I} = \frac{2r(\Omega - \Omega_H)\sin^2 \theta}{\Sigma}\, A_{,\mu} \bigg|_{r=r_+} \, , \tag{1}$$ where we have defined two normalized variables, $\mathcal{I} = I/A^H_\phi$ and $A = A_\phi/A^H_\phi$. Here $A$ runs from 0 to 1, and its boundary values are $A(\mu = 0) = 1$ and $A(\mu = 1) = 0$. 
Furthermore, we define \( f(A) \equiv \mathcal{I}/\left[2(\Omega_H - \Omega)\right] \), so that the above equation can be written in variable-separated form,
\[ \frac{A_{,\mu}}{f(A)} = -\frac{r_+ \sin^2 \theta}{r_+^2 + a^2 \mu^2}, \tag{2} \]
which has the formal solution
\[ e^{\int_{A(\mu=0)}^{A(\mu)} \frac{dA}{f(A)}} = \frac{1 - \mu}{1 + \mu} \times e^{\frac{a^2}{r_+^2} \mu}. \tag{3} \]
In this form, solving numerically for \( A(\mu) \) is stable. Here we point out a general property of force-free magnetospheres that can be read off from the formal solution (3): the horizon condition requires the existence of a current sheet at the equator. To allow a non-singular solution, the integral \( \int_1^{A(\mu)} \frac{dA}{f(A)} \) must be finite, except at \( A = 0 \) (\( \mu = 1 \)), where \( f(A) = 0 \) due to vanishing \( \mathcal{I}(\mu = 1) \). At \( A = 1 \) (\( \mu = 0 \)), a finite integral requires either a nonzero \( f(A = 1) \) or an \( f(A) \) that decreases quickly to zero (e.g., \( \sim \sqrt{1-A} \)). Usually \( \Omega \leq \Omega_H/2 \); therefore \( I(A_\phi^H) \) must either be nonzero or decrease quickly to zero (e.g., \( \sim \sqrt{A_\phi^H - A_\phi} \)), where the former corresponds to a current sheet at the equator and the latter to a divergent current density (but weaker than the former).
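The root-finding step implied by the formal solution (3) can be sketched as follows. This is only an illustration: the toy profile $f(A) \propto A$ and the spin value are assumptions made here so that the snippet is self-contained, not choices taken from the paper; the right-hand side simply follows Equation (3) as written above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative parameters (assumptions, not values from the paper): M = 1 units
a = 0.5
r_plus = 1.0 + np.sqrt(1.0 - a**2)            # horizon radius

def f(A):
    """Assumed toy model for f(A) = I / [2(Omega_H - Omega)]: vanishes at A = 0
    and is finite at A = 1, qualitatively consistent with the constraints above."""
    return 0.3 * A

def lhs(A):
    # integral from A(mu = 0) = 1 down to A of dA'/f(A')
    val, _ = quad(lambda x: 1.0 / f(x), 1.0, A)
    return val

def rhs(mu):
    # logarithm of the right-hand side of the formal solution (3)
    return np.log((1.0 - mu) / (1.0 + mu)) + (a**2 / r_plus**2) * mu

# solve lhs(A) = rhs(mu) for A on a grid of mu in (0, 1)
mus = np.linspace(0.01, 0.99, 25)
A_of_mu = [brentq(lambda A: lhs(A) - rhs(m), 1e-8, 1.0) for m in mus]
print(list(zip(mus[:3], A_of_mu[:3])))
```

With a realistic $f(A)$ extracted from the chosen $I(A_\phi)$ and $\Omega(A_\phi)$, the same bracketed root search recovers the horizon boundary profile $A(\mu)$ without iterating on the nonlinear condition directly.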
Modeling cross-contamination during poultry processing: Dynamics in the chiller tank Daniel Munther a, *, Xiaodan Sun b, Yanni Xiao b, Sanyi Tang c, Helio Shimozako d, Jianhong Wu d, Ben A. Smith e, Aamir Fazil e a Department of Mathematics, Cleveland State University, Cleveland, OH, USA b Department of Applied Mathematics, School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, 710049, China c Department of Applied Mathematics, School of Mathematics and Information Science, Shaanxi Normal University, Xi’an, 710062, China d York Institute for Health Research, Center for Disease Modeling, Department of Mathematics and Statistics, York University, Toronto, ON, Canada e Public Health Risk Sciences Division, Laboratory for Foodborne Zoonoses, Public Health Agency of Canada, Guelph, ON, Canada A R T I C L E I N F O Article history: Received 30 December 2014 Received in revised form 6 May 2015 Accepted 7 May 2015 Available online 20 May 2015 Keywords: Mathematical modeling Cross-contamination Poultry chilling Escherichia coli Red water A B S T R A C T Understanding mechanisms of cross-contamination during poultry processing is vital for effective pathogen control. As an initial step toward this goal, we develop a mathematical model of the chilling process in a typical high speed Canadian processing plant. An important attribute of our model is that it provides quantifiable links between processing control parameters and microbial levels, simplifying the complexity of these relationships for implementation into risk assessment models. We apply our model to generic, non-pathogenic Escherichia coli contamination on broiler carcasses, connecting microbial control with chlorine sanitation, organic load in the water, and pre-chiller E. coli levels on broiler carcasses. In particular, our results suggest that while chlorine control is important for reducing E. coli levels during chilling, it plays a less significant role in the management of cross-contamination issues. 1. Introduction Poultry contamination by bacterial pathogens such as Salmonella, Campylobacter and Escherichia coli O157:H7, continues to pose a serious threat to public health both in Canada and on the global scale. According to the World Health Organization, 25% of foodborne outbreaks are closely associated with cross-contamination events involving deficient hygiene practices, contaminated equipment, contamination via food handlers, processing, or inadequate storage (Carrasco, Morales-Rueda, & García-Gimeno et al., 2012). As processing has been highlighted as a pivotal juncture in the supply chain, both for preventing and potentially promoting cross-contamination, researchers have conducted numerous studies, attempting to determine pathogen prevalence and concentration at various processing stages. However, the underlying mechanisms of cross contamination are still poorly understood and, furthermore, many studies evaluating the efficacy of intervention strategies during processing have presented inconsistent and even contradictory results. One reason for such issues is that studies were conducted at the lab or pilot scale under specific conditions that leave their results difficult to synthesize (Bucher et al., 2012). In this work, part one of a series of studies, we develop a mathematical model to gain insight into the main mechanisms of chlorine decay and cross-contamination during the chilling process. 
This approach is important because of its ability to test mechanistic hypotheses as well as to help streamline experiments that would otherwise be expensive both financially and temporally. More specifically, modeling-informed insights can be used as cost-effective tools to help describe the mechanisms driving cross-contamination, and to establish unambiguous, quantifiable links between processing control parameters (such as chiller water temperature, wash time, chlorine concentration, carcass to water volume ratio, etc.) and pathogen prevalence and concentration. In turn, the quantified connections between control parameters and pathogen dynamics can provide invaluable information in terms of testing control strategies to keep pathogen levels below thresholds. While our focus is the chiller process of a typical modernized Canadian poultry inspection program plant (high speed), our model can be easily generalized to chiller processes in other locales. Also, the modeling framework and techniques can be modified to describe similar mechanisms in the defeathering, evisceration and scalding processes.

We describe the background and modeling formulation in Section 2. In Section 3 we apply our model to generic, non-pathogenic E. coli contamination of broiler carcasses, discuss detailed parameter estimation, and perform sensitivity analysis. Using the results of the sensitivity analysis, we discuss thresholds within which cross-contamination and chlorine control play a lesser role, as well as when cross-contamination may pose a more significant risk. Also, in Section 3, we compare model predictions for E. coli levels on poultry exiting the chiller tank when free chlorine (FC) input is used at 50 mg/l or not at all. These results are given in terms of USDA baseline values. In addition, we examine the dynamics of FC inactivation via the organic load in chiller red water, i.e., chiller water that has been exposed to poultry carcasses, organic material and possibly pathogens. In the final section, we suggest some quantified rules of thumb for managing cross-contamination issues and discuss the feasibility of developing more complex models and of simplifying cross-contamination models for relatively easy implementation.

2. Background and chiller model

Canada has a variety of poultry processing operations, ranging from smaller traditional type processing to state of the art, high speed operations. In this work, we consider a typical modernized poultry processing plant (high speed), which covers most of the Canadian slaughter production (based on personal communication with CFIA officers, which we will reference from now on as [P]). Essentially, our processing framework involves a poultry slaughter establishment which operates under the CFIA approved Modernized Poultry Inspection Program (MIP); see CFIA (2014) for more information. This perspective leads to several assumptions that guide our model formulation.
These include (1) the typical weight of a carcass is 2 kg; (2) the typical processing speed is 180 carcasses/min; (3) the average dwell time of carcasses in the chiller tank is 45 min; (4) red water is not recycled, rather the set up involves fresh water intake at the beginning of the chiller tank, with overflow at the end; (5) a maximum of 50 ppm (mg/l) of free chlorine (FC) is added (if any) at the beginning of the chiller tank, and mixed with incoming fresh water, and (6) due to model simplification and a lack of data, we assume that organic matter and microbes do not bind/attach to the tank surfaces.

Our model is built around two main types of mechanisms: (i) those that involve typical processing procedures for immersion chilling in high speed poultry processing facilities in Canada and (ii) bacteria transfer, bacteria inactivation, and water chemistry dynamics during the chilling process. Refer to Table 2 for a list of parameters corresponding to types (i) and (ii). To be clear, the parameters involved with the particular processing assumptions and dynamics, as in (i), are what specifies our model for Canadian poultry programs. The mechanisms under type (ii) are general mechanisms that are expected in a typical large-scale immersion chilling procedure that is utilized during poultry processing in many locations, not just Canada. Therefore, in this section as well as Section 3, where we apply our model to generic E. coli contamination, data used to quantify the type (ii) mechanisms need not necessarily be Canadian. We now formulate the chiller model in several steps.

2.1. The carcass dynamics and total suspended solids

We assume that the incoming rate of chicken carcasses to the tank is \( N \) (kg/min) and the chickens spend on average \( 1/d_p \) (min) in the tank. These two assumptions lead to the following equation for \( P \), the total kg of chicken carcasses in the tank at time \( t \geq 0 \) (min):
\[ P'(t) = N - \epsilon d_p P(t), \tag{1} \]
where
\[ \epsilon = \begin{cases} 0, & t \leq \frac{1}{d_p} \\ 1, & t > \frac{1}{d_p}. \end{cases} \]
Note that \( ' \) denotes the derivative with respect to time and the function \( \epsilon \) ensures that no carcasses will leave the tank before the "average" wash time \( 1/d_p \) has elapsed. As the chickens enter and move through the chiller tank, they release high amounts of organic material (in the form of blood, fat, protein, etc.) into the water. Such material is important because it alters chiller water chemistry as well as microbial counts (Russell, 2012). We represent the organic material in the chiller tank at time \( t > 0 \) by \( J \) (kg); in order to relate this to the total suspended solids (concentration), we consider \( J/T_v \), where \( T_v \) is the total tank volume in ml. For simplicity, we assume that the amount of organic material coming in to the water is proportional to the incoming rate of chicken carcasses \( N \) (kg/min), and this proportion is represented by \( q \in (0,1) \). Note that in reality, the amount of organic material shed from individual carcasses may be independent of one another. Also, we assume, via the flow through the tank, that the organic material spends on average \( 1/d_p \) minutes in the tank. Therefore we build the following equation for \( J \):
\[ J' = qN - \epsilon d_p J. \tag{2} \]
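As a quick check of these two equations, they can be integrated directly. The sketch below uses the baseline processing values quoted later in Table 2; the empty-tank initial condition and the integrator settings are assumptions made for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Baseline processing values quoted in Table 2
N, d_p, q = 360.0, 1.0 / 45.0, 0.011   # kg/min, 1/min, dimensionless

def eps(t):
    # switch: no carcasses (or organic material) leave before the average dwell time
    return 0.0 if t <= 1.0 / d_p else 1.0

def rhs(t, y):
    P, J = y
    return [N - eps(t) * d_p * P,        # Eq. (1): carcass mass in the tank
            q * N - eps(t) * d_p * J]    # Eq. (2): organic material in the tank

sol = solve_ivp(rhs, (0.0, 480.0), [0.0, 0.0], max_step=1.0)  # one 8-h shift
print(sol.y[0, -1], sol.y[1, -1])   # approaches P* = N/d_p = 16200 kg and J* = qN/d_p ≈ 178 kg
```

The small `max_step` keeps the integrator from stepping over the discontinuity of $\epsilon$ at $t = 1/d_p$.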
2.2. Average microbial load on carcasses and organic material in the tank

One of the key purposes of the model is to understand the dynamics of the average microbial load on both the poultry and the organic material in the chiller tank. To do so, we represent the average microbial load (CFU/(kg ml)) on the chicken and on the organic material in the tank at time \( t > 0 \) by \( v_p \) and \( v_J \), respectively. Notice that the units for \( v_p \) and \( v_J \) are (CFU/(kg ml)) since we scale the average bacteria load per kg by the tank volume \( T_v \). For modeling purposes, it is convenient to scale by the tank volume, and this scaling should not be confused with bacterial concentration measurements taken from typical rinse procedures used to quantify the microbial load on a pre- or post-chill carcass. For instance, the USDA conducted studies using a 400 ml carcass rinse in order to determine E. coli levels on individual poultry carcasses during processing and reported their results in units of CFU/ml (USDA, 2012).

We assume that the chickens enter the chiller process with an average level of \( \sigma \) CFU/kg. Upon entering the tank, a certain fraction of this contamination level initially sheds into the chiller water. Let this fraction be \( \rho \), so \( 0 < \rho < 1 \). Also, as the carcasses move through the chiller tank, we suppose that continued microbial shedding occurs at a rate \( b v_p \), where \( b \) (1/min) is the shedding parameter (i.e., the shedding rate is proportional to the current average contamination level on the poultry). In addition, bacterial attachment occurs via contact between a carcass and microbes in the chiller water. If we let \( W \) (CFU/ml) be the microbial concentration in the chiller water at time \( t \), then we assume this attachment occurs at a rate \( \beta W \), where \( \beta \) (1/(kg min)) is the binding parameter.

In addition to shedding/binding, we consider the inactivation of microbes on carcass surfaces via free chlorine (FC) contact during the chiller process. While the effective contact of FC with carcass surfaces (and therefore with microbes attached to carcass surfaces) during immersion chilling is, to our knowledge, not well documented in the literature, there are multiple studies quantifying inactivation rates of microbes in solution via FC; see (Helbling & VanBriesen, 2007; Zhang, Luo, Zhou, Wang, & Millner, 2015) and references therein. If we let $k_w > 0$ be the inactivation rate of microbes in the chiller water, then we argue that the inactivation rate via FC of microbes on carcass surfaces can be written as $ak_w$, where $a \in (0,1)$. For instance, in the fresh produce industry, studies have concluded that surface characteristics can reduce effective contact of chemical sanitizers during wash cycle protocols (Adams, Hartley, & Cox, 1989; Gil, Selma, López-Gálvez, & Allende, 2009). Since carcass surfaces are irregular and this is an important factor in determining contamination levels (Thomas & McMeekin, 1980), similar to the results from fresh produce studies, FC contact with microbes attached to carcass surfaces should be significantly less than FC contact with microbes in the chiller water. Combining these ideas, the average decrease of the microbial load on carcasses is given by $a k_w v_p C$, where we assume that this decrease is proportional to the product of the current microbial load and the FC concentration $C$ (mg/l).
Finally, taking into account the fact that $1/d_p$ is the average wash time (and assuming that the natural death rate of the microbes attached to the poultry and organic material is zero (Russell, 2012)), our equation for $v_p$ becomes:
$$v_p' = \frac{(1 - \rho)\sigma N}{P T_v} + \beta W - b v_p - d_p v_p - a k_w v_p C. \tag{3}$$
In a similar manner, we can construct an equation for $v_J$ as follows:
$$v_J' = \frac{(1 - \rho)\sigma N}{P T_v} + \beta W - b v_J - d_p v_J - a k_w v_J C. \tag{4}$$
Notice that the carcass surface temperatures during the initial cooling phase of chilling may promote microbial growth on carcass surfaces. Please refer to Section 4 for reasons as to why we do not include this dynamic in Equations (3) and (4) for the specific application to E. coli contamination.

2.3. Dynamics in the chiller water

The two main variables we examine include $W$, the microbial concentration in the chiller water, and $C$ (mg/l), the FC concentration in the water. Assuming that red water is not filtered or recycled, bacteria do not multiply in the water because of the low temperature, $<4^\circ$C as per Canadian Food Inspection Agency (CFIA) regulations, but bacterial survival in the water is expected (Ratkowsky, Olley, McMeekin, & Ball, 1982; Wang & Doyle, 1998; CFIA, 2014). $W$ depends on four things: (i) microbes shed into the water (due to shear forces in the water, etc.), (ii) microbes in the water attaching to poultry or organic material, (iii) microbes inactivated by FC and (iv) the flow rate of water in/out of the tank. While there may be concentration differences along the length of the tank, for simplicity we assume complete mixing for the dynamics in (i)-(iv). Observe that injured bacterial cells might also be represented in this model, given the assumption that they shed from/adhere to carcasses at the same rates as other intact/viable cells. However, from a modeling standpoint we do not differentiate between injured and viable cells.

In terms of quantifying the above mechanisms, the inactivation rate for microbes in water due to FC is given by $W' = -k_wCW$, where $k_w$ (l/(mg min)) is the inactivation rate parameter. Notice that the change in $W$ at time $t$ is proportional to $CW$, illustrating the common mass action assumption for such chemical reactions (Deborde & von Gunten, 2008). Putting together this idea with (i)-(iv), the equation for $W$ is:
$$W' = (1 + q)\frac{\rho\sigma N}{T_v} + b v_J J + b v_p P - \beta P W - \beta J W - k_w C W - \frac{g W}{10^{-3}T_v}, \tag{5}$$
where the first term $(1 + q)\rho\sigma N/T_v$ reflects the amount of contamination that initially sheds from the poultry into the water and the last term $gW/(10^{-3}T_v)$ reflects the concentration of microbes that exit the tank with the outflow. Notice that we assume the tank volume $T_v$ is constant in time, so inflow = outflow. Assuming that $l$ liters of fresh water are added to the tank per carcass and each carcass on average weighs $m$ kg, the addition rate of fresh water is $g = N\,(\text{kg/min}) \times \frac{1\ \text{carcass}}{m\ \text{kg}} \times l\ (\text{l/carcass}) = Nl/m$ (l/min). In terms of the equation for $C$, we have:
$$C' = \frac{c_1 g}{10^{-3}T_v} - d_c C - k_c W C - a k_c v_J J C - a k_c v_p P C - h J C - \frac{g C}{10^{-3}T_v}, \tag{6}$$
where $c_1$ (mg/l) is the FC concentration of the input water, $g$ is as above, and so $c_1 g/(10^{-3}T_v)$ (mg/l/min) measures the rate of increase of FC in the water. The natural decay rate of FC in the tank is represented by $d_c$ (1/min).
Also, $k_c$ (ml/(CFU min)) reflects the rate at which chlorine is oxidized or degraded due to inactivating microbes in the water, $ak_c$ is the rate at which FC is degraded due to inactivating microbes on carcass surfaces, and $h$ (l/(kg min)) is the rate at which the organic material in the tank decreases the FC through chemical binding. For the terms involving $k_c$ and $h$ we assume that the decrease in FC concentration at time $t$ is proportional to the product of the respective interacting "species", and therefore $k_c$, $ak_c$, and $h$ are types of second order rate constants. Finally, $gC/(10^{-3}T_v)$ illustrates the loss of FC due to outflow of water from the tank.

2.4. Complete model

Putting together the six equations above, our model becomes:
$$
\begin{aligned}
J' &= qN - \epsilon d_p J,\\
v_J' &= \frac{(1 - \rho)\sigma N}{P T_v} + \beta W - b v_J - d_p v_J - a k_w v_J C,\\
P' &= N - \epsilon d_p P,\\
v_p' &= \frac{(1 - \rho)\sigma N}{P T_v} + \beta W - b v_p - d_p v_p - a k_w v_p C,\\
W' &= (1 + q)\frac{\rho\sigma N}{T_v} + b v_J J + b v_p P - \beta P W - \beta J W - k_w C W - \frac{g W}{10^{-3}T_v},\\
C' &= \frac{c_1 g}{10^{-3}T_v} - d_c C - k_c W C - a k_c v_J J C - a k_c v_p P C - h J C - \frac{g C}{10^{-3}T_v}.
\end{aligned} \tag{7}
$$
See Table 1 for a concise description of the model (7) variables.

2.5. Model properties

Note that system (7) is positively invariant for all non-negative, not identically zero initial conditions. This essentially means that for each set of non-negative, not identically zero initial conditions, there exists a unique solution to (7) which stays positive in each component for all positive time (i.e. we can meaningfully ascribe physical units to each component of the solution). Furthermore, such solutions to (7) do not become unbounded in finite time. Combining these ideas, we see that system (7) can unambiguously describe the variables we have associated with a continuous poultry chilling process with potential cross-contamination dynamics. In addition, our system has a positive equilibrium state which we denote by
$$\Sigma = \left(J^*, v_J^*, P^*, v_p^*, W^*, C^*\right).$$
This equilibrium state $\Sigma$ is independent of time and, according to numerical calculations (not shown), it attracts all solutions with non-negative, not identically zero initial conditions. This means that as time increases, biologically relevant solutions (as described above) move closer in value to $\Sigma$. The equations for each coordinate of $\Sigma$, in terms of model parameters, are given below (note that some are implicitly given for the sake of clarity).
$$J^* = \frac{qN}{d_p} \tag{8}$$
$$P^* = \frac{N}{d_p} \tag{9}$$
$$v_J^* = v_p^* = \frac{(1 - \rho)\sigma d_p}{T_v\left(b + d_p + a k_w C^*\right)} + \frac{\beta W^*}{b + d_p + a k_w C^*} \tag{10}$$
$$W^* = \frac{(1 + q)\frac{\rho\sigma N}{T_v} + b\left(v_J^* J^* + v_p^* P^*\right)}{\beta\left(P^* + J^*\right) + k_w C^* + \frac{g}{10^{-3}T_v}} \tag{11}$$
$$C^* = \frac{c_1 g}{\left(10^{-3}T_v\right)\left(d_c + k_c W^* + a k_c v_J^* J^* + a k_c v_p^* P^* + h J^* + \frac{g}{10^{-3}T_v}\right)} \tag{12}$$
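To make the structure of system (7) and its equilibrium concrete, the following sketch integrates the model numerically with the baseline values collected in Tables 2 and 3. The near-empty initial tank, the small guard on $P$ to avoid dividing by zero, and the choice of integrator are assumptions made purely for illustration, not part of the model itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Baseline values from Tables 2 and 3 (g = N*l/m as derived in Section 2.3)
T_v, N, d_p, q = 5e7, 360.0, 1 / 45, 0.011
sigma, rho, b  = 2e4, 0.15, 0.077
beta, a, k_w   = 0.01, 0.05, 216.0
k_c, h, d_c    = 0.0069, 0.0017, 4.1e-5
m, l, c1       = 2.0, 1.7, 25.0
g = N * l / m                                    # 306 l/min of fresh water

def eps(t):                                      # no outflow of carcasses before 1/d_p
    return 0.0 if t <= 1.0 / d_p else 1.0

def rhs(t, y):
    J, vJ, P, vp, W, C = y
    P = max(P, 1e-6)                             # guard against the empty-tank singularity
    shed_in = (1 - rho) * sigma * N / (P * T_v)
    dJ  = q * N - eps(t) * d_p * J
    dvJ = shed_in + beta * W - (b + d_p + a * k_w * C) * vJ
    dP  = N - eps(t) * d_p * P
    dvp = shed_in + beta * W - (b + d_p + a * k_w * C) * vp
    dW  = (1 + q) * rho * sigma * N / T_v + b * (vJ * J + vp * P) \
          - (beta * (P + J) + k_w * C + g / (1e-3 * T_v)) * W
    dC  = c1 * g / (1e-3 * T_v) \
          - (d_c + k_c * W + a * k_c * (vJ * J + vp * P) + h * J + g / (1e-3 * T_v)) * C
    return [dJ, dvJ, dP, dvp, dW, dC]

y0 = [0.0, 0.0, 1e-3, 0.0, 0.0, 0.0]             # (near-)empty tank at the start of a shift
sol = solve_ivp(rhs, (0.0, 480.0), y0, method="LSODA", max_step=0.5)
print("P*, J* :", sol.y[2, -1], sol.y[0, -1])    # ~16200 kg and ~178 kg
print("W*, C* :", sol.y[4, -1], sol.y[5, -1])    # near-equilibrium values by shift end
```

Tracking the trajectories rather than only the final values is one way to check the equilibration time scale discussed in the next section.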
### 3. Application of model to E. coli cross-contamination during immersion chilling

Due to the fact that we have a relatively complete data set for generic E. coli, both bacteria levels and transfer during industrial chiller processes (Cavani, Schocken-Iturrino, Garcia, & de Oliveira, 2010; Northcutt, Smith, Huezo, & Ingram, 2008; Tsai, Schade, & Molyneux, 1992) and chlorine inactivation (Helbling & VanBriesen, 2007), we tailor our model to address the specific dynamics associated with the chiller water chemistry and cross-contamination of broiler carcasses contaminated with non-pathogenic E. coli. For the parameter ranges specific to this situation, the solutions of system (7) approach $\Sigma$ on the order of 200–250 min, which means that during a typical 8 h shift, if there is little variation in the average E. coli load on carcasses, i.e. $\sigma = \text{constant}$, the model predicts that contamination levels in the water and on the poultry in the tank will equilibrate. Practically, this gives us mathematical justification to simplify the dynamics in the tank and consider only the equilibrium solution $\Sigma$ of (7). However, since the parameters involved with E. coli contamination are not precisely known, we want to understand how sensitive $\Sigma$ is to the model parameters. This sensitivity analysis is vital for making informed conclusions for E. coli control, as we illustrate in the following sections.

Note that $\sigma$ may vary significantly during an 8 h shift and therefore, depending on certain time intervals, contamination levels in the chiller water and on the poultry may vary instead of equilibrating. Realistically, then, $\sigma$ should depend on time. However, in order to build control strategies and rules of thumb for treating such cases, we first seek results that act as a reference point. That is, we first determine a range for $\sigma$ in which cross-contamination plays a significant role and gain an understanding of which parameters play dominant roles in contributing to cross-contamination at equilibrium. We do this via sensitivity analysis, assuming $\sigma$ is constant but randomly selected from within its range. Please see Section 3.1 for details concerning this analysis, as well as Section 3.2.2 and Section 4 for situations where $\sigma$ may vary as a function of time and for how the model (7) can be applied to quantitative microbial risk assessment (QMRA).

#### 3.1. Parameter baseline and range estimation for E. coli contamination

##### 3.1.1. Parameters specific to Canadian processing

Referring to Table 2, specific processing parameter values were obtained from personal communication with CFIA officers, as referenced by [P]. Refer to the beginning of Section 2 for the details of Canadian high speed chilling specifications. Note that the other studies referenced for these same parameters in Table 2 confirm the uniformity of some of these assumptions for immersion chilling in other locations such as the U.S. and Brazil.

##### 3.1.2. Average E. coli load on incoming carcasses $\sigma$

Following (Northcutt et al., 2008), we can set a baseline value for $\sigma$, the average microbial load on incoming carcasses (CFU/kg). From (Northcutt et al., 2008), the rinsing procedure used to quantify the bacterial load on poultry prior to the chilling process indicates that the average E. coli level for incoming carcasses is $10^{2.6}$ CFU/ml in the rinsate. Given a 100 ml rinse, this translates to roughly $10^{4.6}$ CFU on the average carcass. Assuming the average carcass weight is 2 kg, $\sigma = 10^{4.6}/2 \approx 2 \times 10^4$ CFU/kg. For sensitivity analysis, we establish the following range for $\sigma$, $10^4$ to $10^6$ CFU/kg, based on E. coli data in Cavani et al. (2010).
##### 3.1.3. Shed rate of E. coli from carcasses to chiller water $b$

From Northcutt et al. (2008), we can estimate $b$ by comparing the pre-chill bacteria load on a carcass and the post-chill bacteria load. Following the rinse procedure in (Northcutt et al., 2008), the pre-chill E. coli load recovered from an average single carcass was $10^{2.6}$ CFU/ml and the post-chill load was about $10^{1.1}$ CFU/ml. By conservation of the E. coli population, and considering a 45 min average wash time, we estimate the shed rate to be:
$$b = \frac{\ln\left(10^{2.6}/10^{1.1}\right)}{45} \approx 0.077 \quad \text{1/min}.$$
In terms of a range for $b$, we use 0.04 to 0.1. Considering that a carcass undergoes an average chilling time of 45 min, this corresponds to a 1 to 2 log reduction on the poultry.

Table 2
Baseline parameter values for the application to E. coli. Parameters marked [P] correspond to information obtained from personal communication with CFIA officers; currently there is no documented data for these values. Parameters $l$ and $g$ are extrapolated from guidelines in CFIA (2014).

<table>
<thead>
<tr><th>Type</th><th>Parameter</th><th>Description</th><th>Value/Units</th><th>Source</th></tr>
</thead>
<tbody>
<tr><td>(i)</td><td>$T_v$</td><td>Tank volume</td><td>$5 \times 10^7$ ml</td><td>[P]</td></tr>
<tr><td></td><td>$N$</td><td>Carcass processing rate</td><td>360 kg/min</td><td>[P]</td></tr>
<tr><td></td><td>$1/d_p$</td><td>Average wash time</td><td>45 min</td><td>[P]; Northcutt et al. (2008)</td></tr>
<tr><td></td><td>$l$</td><td>Fresh water per carcass</td><td>1.7 l/carcass</td><td>CFIA (2014)</td></tr>
<tr><td></td><td>$m$</td><td>Average carcass weight</td><td>2 kg</td><td>[P]</td></tr>
<tr><td></td><td>$g$</td><td>Input water rate</td><td>306 l/min</td><td>CFIA (2014)</td></tr>
<tr><td></td><td>$c_1$</td><td>Input FC concentration</td><td>0–50 mg/l</td><td>CFIA (2014)</td></tr>
<tr><td></td><td>$d_c$</td><td>FC natural decay rate</td><td>$4.1 \times 10^{-5}$ min$^{-1}$</td><td></td></tr>
<tr><td>(ii)</td><td>$\sigma$</td><td>Pre-chill carcass load</td><td>$2 \times 10^4$ CFU/kg</td><td>Northcutt et al. (2008)</td></tr>
<tr><td></td><td>$\rho$</td><td>Initial shedding fraction</td><td>0.15</td><td>Li et al. (2003)</td></tr>
<tr><td></td><td>$q$</td><td>Organic material fraction</td><td>0.011</td><td>Tsai et al. (1992)</td></tr>
<tr><td></td><td>$\beta$</td><td>Microbial attachment rate</td><td>0.01 (kg min)$^{-1}$</td><td>estimated (Section 3.1.7)</td></tr>
<tr><td></td><td>$b$</td><td>Microbial shed rate</td><td>0.077 min$^{-1}$</td><td>Northcutt et al. (2008)</td></tr>
<tr><td></td><td>$a$</td><td>Fraction for FC kill rate on carcass</td><td>0.001–0.1</td><td>estimated (Section 2.2)</td></tr>
<tr><td></td><td>$k_w$</td><td>FC kill rate in water</td><td>216 l/(mg min)</td><td>Helbling &amp; VanBriesen (2007)</td></tr>
<tr><td></td><td>$k_c$</td><td>FC decay rate via killing</td><td>0.0069 ml/(CFU min)</td><td>Helbling &amp; VanBriesen (2007)</td></tr>
<tr><td></td><td>$h$</td><td>FC oxidation rate via organic material</td><td>0.0017 (kg min)$^{-1}$</td><td>Tsai et al. (1992)</td></tr>
</tbody>
</table>

3.1.4. FC inactivation kinetics $k_c$ and $k_w$. From Helbling and VanBriesen (2007), we have that the "3-log" CFU/ml inactivation contact time is given by $0.032 \pm 0.009$ (mg/l) min. That is, it takes 0.032 mg/l of FC concentration to inactivate $10^3$ CFU/ml of E. coli in solution in one minute. The study in Helbling and VanBriesen (2007) indicates that E.
coli is very reactive with FC and the "contact time" is calculated by integrating the FC concentration curve over the time interval $[t_0, t_2]$, i.e., the time interval it takes to reduce the microbial concentration by $10^3$. Considering the units of $k_w$ and $k_c$, and the fact that it takes 0.032 mg/l of FC to eliminate $10^3$ CFU/ml of E. coli in one minute, we calculate:
$$k_w = 3.125 \times 10^4\, k_c.$$
As bacteria are organic substances, we can model their inactivation by FC using a second-order rate reaction (Deborde & von Gunten, 2008): $W' = -k_w C W$. Considering this equation on the time interval $[t_0, t_2]$, we can solve for $k_w$ as follows:
$$k_w = \frac{\ln\left(W(t_0)/W(t_2)\right)}{\int_{t_0}^{t_2} C(s)\,ds} \approx \frac{\ln\left(10^3\right)}{0.032} \approx 216.$$
Using the relationship above, we see that $k_c = 0.0069$. From the range given for the contact time above, we find that $k_w \in [150, 300]$, while $k_c$ ranges from about 0.0048 to 0.0096. Performing similar calculations with inactivation of E. coli O157:H7 data from (Zhang et al., 2015), we find that $k_w = 276$ and $k_c = 0.02$. Here we use the result that a 5 log reduction is achieved in 0.25 s with 10 mg/l of FC (Zhang et al., 2015). Because $k_c$ barely affects model outputs as it varies across its range, for simplicity we fix $k_c = 0.0069$ and do not include it in the sensitivity analysis.

In terms of inactivation of E. coli on carcass surfaces, we assume that the rate is $ak_w$ and, since $k_c$ is proportional to $k_w$, the loss of FC due to this inactivation is $ak_c$. While $a$ is not precisely known, considering the discussion in Section 2.2, we assume that it is at most 0.1. For the sensitivity analysis, we assume that $a$ ranges from 0.001 to 0.1.

3.1.5. Organic material in the chiller water $q$: From Tsai et al. (1992), the total suspended solids in the chiller water is 0.35% (i.e. about 3500 mg/l); this value is similar to that at the initial measuring station in the tank (Northcutt et al., 2008). With our tank volume given by $T_v = 5 \times 10^7$ ml, this translates to about 175 kg. Using the total suspended solids to estimate the organic material in the tank, $J(t)$ should equilibrate to about 175 kg. We know that the positive equilibrium value is $J^* = qN/d_p$ without filtering. This implies that $q = 175/(45 \times 360) \approx 0.011$. Since $N$ and $d_p$ are fixed from our processing assumptions, we allow $q$ to vary from 0.005 to 0.03, which means that $J^*$ varies from about 80 to 490 kg.

3.1.6. FC oxidation rate via organic material in tank $h$: We want to estimate the rate of the chlorine reaction with the organic material in the chiller water. From our model, we use the following equation:
$$C' = -h J C.$$
From Tsai et al. (1992), the chiller water is assumed to have total suspended solids at equilibrium of 3500 mg/l (or 0.35%). Assuming, as above, $T_v = 5 \times 10^7$ ml, we have that for large enough $t$, $J \approx J^* = 175$ kg. Substituting this into the model, we solve to get
$$C(t) = C_0 e^{-h J^* t}. \quad (13)$$
Referring to the data in Table 5 of Tsai et al. (1992), we see that chlorine depletion from organic material has both a "fast" and a "slow" kinetic. For our purposes, we consider only the fast kinetic, as we have a continuous flow of chlorine and organic material entering the chill tank. From Tsai et al. (1992), the average of this fast kinetic is 0.35/min with standard deviation 0.10. Combining this with the rate in (13) leads to $h J^* = 0.29 \pm 0.10$.
Since $J^* = 175$ kg, our baseline value is $h = 0.0017$ and the range is $h \in [0.0011, 0.0022]$. Note that the residual chlorine data from Tsai et al. (1992) are not the same as FC; however, we assume that the residual chlorine is proportional to the FC and therefore the decay rate for both types is given by $h$.

3.1.7. Binding rate of E. coli to poultry in tank $\beta$: To estimate the binding rate of E. coli in suspension in the process water to the poultry during chilling, we adopt the "transmission rate" perspective that is common to disease models. In a disease model with a well mixed population, this rate is based on the number of successful contacts an infective individual makes with the susceptible population (Brauer, 2008). For the chilling process, the number of contacts depends on (a) the poultry to water ratio, (b) the average dwell time of the poultry in the tank and (c) the "path" the carcasses take through the tank. We suppose that the tank is 9 m long (Northcutt et al., 2008) and its volume is $T_v = 5 \times 10^7$ ml ($5 \times 10^4$ l). Because the equilibrium amount of poultry is $P^* = N/d_p = 16200$ kg, the poultry to water ratio is $P^*/T_v = 0.324$ kg/l. We want to know how many liter "cubes" of contaminated water this 0.324 kg of poultry "hits" as it travels through the tank. Assuming the 0.324 kg poultry unit travels straight through the tank, and because a liter cube of water has a side dimension of 0.1 m, the poultry unit "hits" about 90 cubes of water. Therefore, 1 kg of poultry "hits" about 270 liter cubes during its 45 min trip through the tank. We describe $\beta$ as follows:
$$\beta = \frac{270\ \text{hits}}{45\ \text{min}}\,\mu = 6\mu\ (\text{kg min})^{-1},$$
where $\mu$ is the probability of successful E. coli attachment. Currently we have no data for $\mu$, but we estimate it to be between 0.02% and 2% success. See Munther & Wu (2013) for a discussion on the probability of E. coli attachment to lettuce during a commercial produce wash. In that context, $\mu = 1\%$. Putting these ideas together indicates that roughly $\beta \in [0.001, 0.1]$.

3.2. Results from sensitivity analysis

In order to understand how parameter variations affect $W^*$, $v_p^*$ and $C^*$ (i.e. the equilibrium E. coli levels in the chiller water and on the poultry, and the equilibrium FC level), we use Latin hypercube sampling to build a matrix of parameter input values (see Tables 2 and 3). These input values are then fed into our model (7) and linked with the corresponding outputs for $W^*$, $v_p^*$ and $C^*$. Using a sample size of $n = 1000$, we calculate the partial rank correlation coefficients (PRCCs) corresponding to each parameter. Briefly, the PRCC values quantify the degree of monotonicity between the respective parameters and outputs. For more details concerning this analysis, please refer to (Marino, Hogue, Ray, & Kirschner, 2008). Observe that for a complete uncertainty and sensitivity analysis we should also perform an extended Fourier amplitude sensitivity test (eFAST); however, more relevant data for certain parameter ranges is needed in order to justify such an extensive sensitivity analysis.
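A minimal sketch of this sampling-plus-PRCC workflow is given below. The parameter ranges shown are illustrative placeholders (Table 3 lists the ones actually used), and `model_output` is a stand-in for the equilibrium quantities computed from system (7), so the printed values have no quantitative meaning here.

```python
import numpy as np
from scipy.stats import qmc, rankdata

# Illustrative parameter ranges (see Table 3 for the actual baselines and ranges)
ranges = {"sigma": (1e3, 1e6), "rho": (0.05, 0.30), "beta": (0.001, 0.1),
          "b": (0.04, 0.1), "a": (0.001, 0.1), "c1": (0.0, 50.0)}
names = list(ranges)
lo = np.array([ranges[k][0] for k in names])
hi = np.array([ranges[k][1] for k in names])

n = 1000
X = qmc.scale(qmc.LatinHypercube(d=len(names), seed=0).random(n), lo, hi)

def model_output(row):
    # placeholder for an equilibrium output (e.g. W* or v_p*) computed from system (7)
    p = dict(zip(names, row))
    return p["sigma"] * (1 - p["rho"]) / (1 + 50 * p["a"] * p["c1"]) + p["beta"]

Y = np.apply_along_axis(model_output, 1, X)

def prcc(X, Y):
    """Partial rank correlation coefficient of each column of X with Y."""
    R = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    ry = rankdata(Y)
    out = []
    for j in range(R.shape[1]):
        # regress out the ranks of the other parameters, then correlate residuals
        Z = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        res_x = R[:, j] - Z @ np.linalg.lstsq(Z, R[:, j], rcond=None)[0]
        res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return dict(zip(names, out))

print(prcc(X, Y))
```

Replacing `model_output` with the equilibrium expressions (8)–(12) reproduces the kind of PRCC ranking reported in Fig. 1.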
Fig. 1 illustrates the PRCC values using the baseline and range values for the corresponding parameters coming from Tables 2 and 3. From Fig. 1(A) and (C), we first notice that $W^*$ and $v_p^*$ are strongly influenced by $\sigma$ (CFU/kg), the average E. coli load on pre-chilled poultry. This is a logical result, as increasing the pre-chiller microbial load on the poultry will in general lead to an increase in microbial concentration in the chiller water as well as on carcasses during chilling. We quantify the average E. coli load (during chilling) on the poultry, $v_p^*$ (CFU/(kg ml)), as the time independent expression:
$$v_p^* = \frac{(1 - \rho) d_p}{T_v\left(b + d_p + a k_w C^*\right)}\,\sigma + \frac{\beta W^*}{b + d_p + a k_w C^*}. \tag{14}$$
Equation (14) can be understood as follows: the first term corresponds to the E. coli load that remains on the poultry during the chilling process. That is, $(1 - \rho)d_p/\left(T_v(b + d_p + a k_w C^*)\right)$ quantifies the fraction of the incoming E. coli load on the poultry that does not shed during chilling and is not inactivated by FC contact with carcass surfaces. The second term quantifies the E. coli load gained via cross-contamination from contaminated chiller water. Recall that $\beta$ is the water to carcass transmission parameter, $1/b$ is the characteristic time scale for E. coli shedding from carcasses into the chiller water (it is proportional to the time it takes to shed $1-2\ \log_{10}$ CFU) and $1/d_p$ is the average dwell time of a carcass in the chiller tank. Notice that $1/(a k_w C^*)$ is the characteristic time scale for FC to inactivate E. coli on carcass surfaces when the FC has reached the equilibrium amount $C^*$. Combining these three time scales, we observe that $1/(b + d_p + a k_w C^*)$ is the characteristic time scale of cross-contamination during an 8-h shift of continuous processing. In other words, some of the E. coli gained from cross-contamination may be shed or inactivated before the carcass leaves the chiller tank, and the model accounts for this by shortening the effective cross-contamination time scale from $1/d_p$ to $1/(b + d_p + a k_w C^*)$.

While $v_p^*$ is sensitive to many parameters, referring to Fig. 1(C), we see that $c_1$, $\sigma$, $l$ and $q$ play more influential roles. In terms of chlorine efficacy, Fig. 1 indicates that $c_1$ (the input FC concentration) has a strong effect on reducing the E. coli load on carcasses during chilling. This effect is coupled with the impact of $l$ (the input of fresh water per carcass), as increasing $l$ increases the addition rate of FC to the tank (see formula (6)). From an industry standpoint, it is important to note that both $c_1$ and $l$ can be directly controlled during the chilling process. Additionally, from a regulatory perspective, $c_1$ has an upper bound. Considering this limitation, we will give a more detailed discussion of FC control as well as discuss the role of $\sigma$ in Section 3.2.3. In terms of water usage and the parameter $l$, an interesting study would be to use the model (7) predictions to compare the tradeoffs between simultaneously minimizing E. coli loads in the red water and on carcass surfaces and the cost associated with water consumption. Fig. 1(C) suggests that an increase in $q$ (the fraction of organic material that sheds from carcasses into the water) leads to an increase in $v_p^*$. While $q$ cannot be directly controlled during chilling, as is the case for $c_1$ and $l$, it can be indirectly regulated during pre-chiller processing. See Section 3.2.3 for more details on the relationship between $q$, FC control and pre-chiller processing interventions.

Table 3
Parameters and ranges for sensitivity analysis.
<table>
<thead>
<tr><th>Parameter</th><th>Baseline</th><th>Range</th></tr>
</thead>
<tbody>
<tr><td>$\sigma$</td><td>$2 \times 10^4$ CFU/kg</td><td>$10^3$–$10^6$</td></tr>
<tr><td>$\rho$</td><td>0.15</td><td>0.05–0.30</td></tr>
<tr><td>$q$ (estimated)</td><td>0.011</td><td>0.005–0.03</td></tr>
<tr><td>$\beta$</td><td>0.01 (kg min)$^{-1}$</td><td>0.001–0.1</td></tr>
<tr><td>$b$ (estimated)</td><td>0.077 min$^{-1}$</td><td>0.04–0.1</td></tr>
<tr><td>$a$ (estimated)</td><td>0.05</td><td>0.001–0.1</td></tr>
<tr><td>$k_w$</td><td>216 l/(mg min)</td><td>150–300</td></tr>
<tr><td>$l$</td><td>1.7 l/carcass</td><td>1–4</td></tr>
<tr><td>$c_1$</td><td>25 mg/l</td><td>0–50</td></tr>
<tr><td>$h$</td><td>0.0017 (kg min)$^{-1}$</td><td>0.0011–0.0022</td></tr>
</tbody>
</table>

Fig. 1. PRCC values for parameters with the respective outputs: (A) $W^*$, (B) $C^*$, (C) $v_p^*$, and (D) $\beta W^*/(b + d_p + a k_w C^*)$. Parameters that are significant ($p < 0.05$) are marked with a star.

3.2.1. Rules of thumb for E. coli cross-contamination

Under what conditions should we worry about cross-contamination? In terms of Equation (14), this question translates into comparing the magnitudes of the first term and the cross-contamination term. Since $c_1$ (the input FC concentration) can be controlled during processing, we estimate the magnitude of the fraction $(1 - \rho)d_p/\left(T_v(b + d_p + a k_w C^*)\right)$, the cross-contamination term $\beta W^*/(b + d_p + a k_w C^*)$, and the E. coli level in the water $W^*$ as follows: for each fixed $c_1$ in [0, 50] mg/l, we perform Monte Carlo simulations to calculate the respective values of these terms. Then, fitting a normal distribution to the range of respective outputs, we calculate the 95% confidence interval. The results are illustrated in Fig. 2.

Examining Fig. 2(A), we see that the 95% confidence interval for $(1 - \rho)d_p/\left(T_v(b + d_p + a k_w C^*)\right)$ ranges over approximately $[10^{-10.4}, 10^{-8.4}]$ as $c_1$ varies over [0, 50]. Similarly, from Fig. 2(B) and (C), the cross-contamination term ranges over $[10^{-6.7}, 10^{-3.3}]$ and $W^*$ ranges over $[10^{-4.4}, 10^{-3}]$, respectively. These results provide quantifiable evidence that FC plays a significant role in reducing the E. coli load on poultry, both by directly inactivating the bacteria on carcass surfaces and by inactivating the bacteria in the chiller water, which reduces the load gained via cross-contamination. However, the question remains, when should we worry about cross-contamination? Using the results from Fig. 2(A) and (B), we can compare the expressions for $v_p^*$ when $c_1 = 0$ (no FC input) and when $c_1 = 50$ (maximum FC input). First, for $c_1 = 0$, we have that
$$v_p^* = 10^{-8.4}\,\sigma + 10^{-3.3}. \tag{15}$$
Next, for $c_1 = 50$, we see that
$$v_p^* = 10^{-10.4}\,\sigma + 10^{-6.75}. \tag{16}$$
From Equation (15), if $\sigma \leq 10^{5.1}$, the magnitude of the cross-contamination term plays a dominant role in determining the overall order of $v_p^*$. That is, during a typical 8-h shift, if no FC is used, cross-contamination has a primary effect in determining the E. coli level on post-chiller poultry when the average E. coli load on pre-chiller carcasses is on the order of 5 $\log_{10}$ CFU or less.
Following the same reasoning, Equation (16) indicates that if $\sigma \leq 10^{3.7}$, then the cross-contamination dynamic is again significant. In other words, during a typical 8-h shift, if maximum FC input is used, cross-contamination plays a leading role in determining the E. coli level on post-chiller poultry when the average E. coli load on pre-chiller carcasses is on the order of 4 $\log_{10}$ CFU or less. Therefore, while FC input significantly reduces E. coli levels in the water and on poultry during chilling, from a management perspective it plays a lesser role in ensuring that cross-contamination will not be an issue. That is, using maximum FC input, compared to zero FC input, reduces the range of $\sigma$ within which cross-contamination dictates the magnitude of $v_p^*$ only by about 1 $\log_{10}$ CFU.

3.2.2. Cross-contamination and flock to flock transmission

Our analysis also indicates that $W^*$ is sensitive to $\beta$, $\rho$, and $b$, and that $\beta W^*/(b + d_p + a k_w C^*)$ is sensitive to $\rho$ and $b$ (see Fig. 1(A) and (D)). This information, coupled with our discussion in Section 3.2.1, points to potential cross-contamination issues as described in the following archetypal situation: Suppose chickens are processed from a variety of farms at one particular processing center, but farm (A), at some juncture, delivers chickens that carry a significantly higher E. coli load as compared with the chickens from the other farms. It is likely then that the E. coli load in the chiller water, $W^*$, will dramatically increase via shedding from farm (A) chickens during chilling. If the magnitude of $W^*$ is comparable to or higher than the magnitude of $\sigma$ from carcasses now entering the chiller tank, and the chiller water has yet to be replaced, then cross-contamination (flock-to-flock) may be significant. To obtain rules of thumb for such scenarios that are backed by scientific rigor at the mechanistic scale, our model suggests the need for specific experiments to capture the components of shedding ($\rho$ and $b$) and cross-contamination ($\beta$) more precisely. Refer to Section 4 for a more detailed discussion.

3.2.3. FC control and inactivation

While our findings in Section 3.2.1 show that cross-contamination can strongly influence the resulting E. coli load on chilled poultry, both in the presence and absence of FC input, the results in Figs. 1 and 2 demonstrate that FC input is pivotal as a control.

Fig. 2. 95% confidence intervals for (A) the fraction of unshed E. coli level on poultry vs FC input $c_1$, (B) the level of cross-contamination on poultry vs FC input $c_1$, and (C) the E. coli level in red water $W^*$ vs FC input $c_1$.

To quantify this control on E. coli levels, we again consider Equation (14). Rescaling $v_p^*$ to have units CFU/carcass and rescaling $\sigma$ to have units CFU/carcass, we let $\tilde{v}_p^* = 2T_v v_p^*$ (CFU/carcass) and $\sigma^* = 2\sigma$ (CFU/carcass), where $T_v$ is the tank volume and each carcass is assumed to be 2 kg on average. Therefore, (14) becomes:
$$\tilde{v}_p^* = \frac{(1 - \rho) d_p}{b + d_p + a k_w C^*}\,\sigma^* + \frac{2T_v \beta W^*}{b + d_p + a k_w C^*}. \tag{17}$$
Recall from Section 2.3 that we assume FC is mixed with the fresh water input at the entrance to the chiller tank.
Connecting to our model, we want to understand how the FC input, captured by the parameter $c_1$, affects the microbial level on outgoing poultry, given by $\tilde{v}_p^*$ in (17). Note that because solutions equilibrate well before a typical 8-h shift ends, we focus on the time independent value $\tilde{v}_p^*$. The USDA and CFIA both specify that at most 50 mg/l of FC input can be used during chilling. Also, the USDA (USDA, 2012) examined E. coli test levels from national baseline studies concerning poultry slaughter establishments and has categorized these levels into three control ranges: Acceptable (<100 CFU/ml), Marginal (100–1000 CFU/ml) and Unacceptable (>1000 CFU/ml), where a 400 ml solution is used in the rinse procedure to remove microbes from the carcass surface. For the units CFU/carcass, these ranges translate to:

- Acceptable (<$4 \times 10^4$ CFU/carcass)
- Marginal ($4 \times 10^4$ to $4 \times 10^5$ CFU/carcass)
- Unacceptable (>$4 \times 10^5$ CFU/carcass).

To illustrate our model results in the context of these ranges, we run simulations for two scenarios: (1) when $c_1 = 0$ (i.e., no FC input) and (2) when $c_1 = 50$ mg/l. Specifically, for each $\sigma^*$ in $[2 \times 10^3, 2 \times 10^6]$ CFU/carcass, we perform Monte Carlo simulations for $\tilde{v}_p^*$, fit a normal distribution to the outputs and calculate the 95% confidence interval. Fig. 3 displays the model outcomes. Notice that the confidence intervals for all scenarios are quite narrow (this is partly because of the log-scale on both axes). The region below the green line A illustrates Acceptable E. coli levels, the region between the green line A and the red line M illustrates Marginal levels and the region above the red line represents Unacceptable levels. The C0 "line" represents $\tilde{v}_p^*$ for no FC input and the C50 "lines" represent $\tilde{v}_p^*$ under 50 mg/l of FC input, corresponding to the respective values of $a$.

Fig. 3 illustrates the sensitivity of $\tilde{v}_p^*$ to $a$, corresponding to Fig. 1(C). Recall that $a$ comes from the term $ak_w$, which captures the rate of E. coli inactivation via FC contact with carcass surfaces. Because we have no data to determine this rate, we estimated $a \in [0.001, 0.1]$ (see the discussion in Section 2.2 for more details). While the uncertainty associated with $a$ leads to uncertainty in the direct quantification of $\tilde{v}_p^*$, the results in Fig. 3 ensure that, no matter the value of $a$, with maximum FC input the E. coli load on poultry during chilling may be reduced by about 1–3 $\log_{10}$ CFU/carcass. To narrow this range, experiments are needed to estimate the rate $ak_w$. However, in terms of the USDA control ranges, Fig. 3 confirms that whether we know the exact value of $a$ or not, maximum FC input will keep the E. coli level on post-chiller carcasses below the Unacceptable range and most likely within the Acceptable range when $\sigma^* < 2 \times 10^6$ CFU/carcass. This can be seen in Fig. 3, where the C50 line remains below the red line M for all values of $a$, and remains below the green line A for $a = 0.1$ and 0.01.

In line with FC control, it is worth noting that $C^*$, the FC level in the chiller water, is sensitive to a number of parameters (see Fig. 1(B)). The freshwater input rate $l$ and $c_1$ clearly have a significant positive impact on $C^*$.
What is noteworthy is that $q$ and $h$ both significantly decrease the FC level. Recall that
$$C^* = \frac{c_1 g}{\left(10^{-3} T_v\right)\left(d_c + k_c W^* + a k_c v_J^* J^* + a k_c v_p^* P^* + h J^* + \frac{g}{10^{-3} T_v}\right)}. \tag{18}$$
Since the relative magnitude of $d_c$ is small (i.e. FC degrades relatively slowly), the magnitude of all the terms involving $k_c$ is small, as a limited amount of FC is needed to neutralize E. coli in solution and on carcass surfaces, and the order of $g/(10^{-3} T_v)$ is small (flow out), the effective magnitude of $C^*$ is given by:
$$C^* \approx \frac{c_1 g}{\left(10^{-3} T_v\right) h J^*}.$$

4. Discussion and future directions

Although cross-contamination during immersion chilling involves complex phenomena, our model (7) is able to simplify these dynamics for relatively easy assessment. In the case of generic *E. coli* contamination, solutions to our model reach an equilibrium state on the order of 200–250 min. This means that during a typical 8-h shift, if the average *E. coli* load on poultry entering the chill tank (σ) is relatively constant, we can use the equilibrium solutions (see Equations (8)–(12)) to predict *E. coli* levels, for instance on poultry exiting the chiller, $v_p^*$. The advantage here is that $v_p^*$ is given by an analytic expression in terms of model parameters. Thus, our model provides a pragmatic, quantified description of *E. coli* cross-contamination in terms of processing and control parameters. As mentioned in Section 3.2.1, we find that if σ has a magnitude of at least 4 log10 CFU, cross-contamination may not affect the concentration of *E. coli* on post-chill carcasses as significantly as when the incoming concentration is less than 4 log10 CFU. This suggests that maximum FC input may be unsuccessful in preventing cross-contamination, placing the emphasis on surveillance of pre-chiller contamination. On the other hand, Figs. 1–3 reveal that FC input is still able to significantly reduce the *E. coli* load on poultry during chilling.

However, if the model (7) is to capture, for instance, significant flock-to-flock cross-contamination, σ should be a function of time, determined by appropriate data. In particular, by allowing σ to vary in time, we can extend the model (7), which we plan to explore in a future paper, so that it can be used as a reference point to inform strategies for flock processing throughout a given day. Results from such a model can inform logistical slaughter, a processing strategy that orders flocks with greater incoming concentrations of *E. coli* or other bacteria to be processed last. In contrast to the model (7), solutions of the extended model may not settle to equilibrium. While this situation is more complicated, the model will still mechanistically link processing and control parameters to bacterial contamination in the chiller water and on chilled carcasses. This implies that model parameters can be tuned in order to keep contamination levels within certain bounds, and this interplay will offer insight towards control strategies. In line with this, we stress the importance of the results from the model (7). It is critical to note that even if σ varies in time, if σ(t) ≥ 4 log10 CFU or σ(t) < 4 log10 CFU throughout the course of an 8-h shift, our rules of thumb for generic *E. coli* cross-contamination (e.g. in the case of maximum FC input in Section 3.2.1) still hold. Thus, this illustrates how the model (7) can provide a key management threshold for addressing cross-contamination issues.
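The 4 and 5 log10 thresholds referenced above follow from comparing the two terms of Equations (15) and (16); a small sketch of that arithmetic, using only the coefficients quoted in those equations, is shown below.

```python
import math

# Coefficients of v_p* = A*sigma + B from Equations (15) (no FC) and (16) (50 mg/l FC)
scenarios = {"c1 = 0":  (10**-8.4,  10**-3.3),
             "c1 = 50": (10**-10.4, 10**-6.75)}

for label, (A, B) in scenarios.items():
    sigma_cross = B / A                 # CFU/kg at which both terms are equal
    per_carcass = 2.0 * sigma_cross     # 2 kg per carcass, as assumed in Section 3.2.3
    print(f"{label}: cross-contamination dominates below "
          f"{math.log10(sigma_cross):.1f} log10 CFU/kg "
          f"(~{math.log10(per_carcass):.1f} log10 CFU/carcass)")
```

The two crossover values land near 5 log10 CFU/carcass without FC input and near 4 log10 CFU/carcass at maximum FC input, which is the roughly one-log shift in the management threshold discussed in Section 3.2.1.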
In addition to providing insight toward cross-contamination management, our model (7) and the extended model are useful for QMRA during poultry processing. Ideally, control strategies should be built on knowledge of both the prevalence and the concentration of contamination during processing. While stochastic models are the typical players used to address these concerns, microbial transfer coefficients at various steps may be unknown or loosely estimated, limiting the confidence in predictive results. The advantage of our model (7) and the extended model is that both can be used in QMRA to set baseline parameter values and functional forms for bacterial transfer coefficients that are data informed, rooted in mechanistic foundations, and that may be hard to precisely measure via experiments. Thus our modeling approach can bolster the confidence in the risk predictions from such analyses. Furthermore, these models, as opposed to expensive experiments, can provide quantitative evidence as to which assumptions should or should not be included in large scale risk models during poultry processing. For example, should a risk model of poultry processing include, at the chill step, the effects of the organic material ($J$) in both neutralizing FC and in determining bacteria levels on carcasses and in the process water during chilling? Rather than conducting multiple experiments to estimate the probabilities of how these mechanisms affect contamination levels in the tank, our model (7) outputs based on selected inputs can be used to estimate microbial transfer with or without the consideration of the organic load (i.e. with $h > 0$ or $h = 0$, respectively). Also, our approach allows for testing the sensitivity of our model outputs relative to specific model parameters, and therefore provides guidance for specific future experimentation.

In terms of bacterial transfer, our model (7) results indicate (see Fig. 1(A) and (D)) that the *E. coli* level in the red water and the *E. coli* load gained by poultry via cross-contamination are sensitive to ρ and b (carcass to water shed rates) and β (water to carcass transmission rate); therefore, in the case where σ may vary over multiple orders of magnitude during an 8-h shift, the parameters ρ, b, and β need to be examined in more detail before being applied to the extended model. For instance, data for the probability of microbial attachment μ, on which β depends, as well as details connecting water flow through the tank and shear forces in the red water to ρ and b, would be necessary to have greater accuracy and predictability in understanding the dynamics of the chiller tank. Furthermore, if the carcass to water ratio is sufficiently high, the model should include carcass to carcass type transmission. Again, in order to quantify such transmission during chilling, further experiments are needed. Another mechanism that may contribute to the *E. coli* load on carcasses during chilling is carcass surface temperature. Results from Carciofi and Laurindo (2007) indicate that the average temperature just under the skin of a 2 kg carcass, subject to water temperatures similar to our modeling assumptions, takes between 5 and 10 min on average to cool from its prechill value (between 33 and 40°C) to 4°C. While bacterial growth could presumably occur during this time, data from the study in Northcutt et al. (2008), subject to similar chill tank operating conditions as in our model (7), indicate that this growth most likely is not significant. For instance, for *E.
coli*, they found that the average prechill level on carcasses was 2.6 log10 CFU/ml, the average postchill level was at most 1.2 log10 CFU/ml, and the level in the chiller water was at most 1.2 log10 CFU/ml (Northcutt et al., 2008). This suggests that there is no significant growth of bacteria on the carcass surface during the cooling phase, and therefore, we do not include this mechanism in the model (7). While we have discussed contamination mechanisms that need more exploration and mechanisms that may be ignored, an important feature of our model is that it can quantify the effect of indirect mechanisms involved with cross-contamination. For example, it is known that organic material such as blood, fat, protein, and digesta react with FC in the red water, reducing its efficacy to eliminate microbes (Russell, 2012). However, from the model (7) we can obtain the FC concentration in the red water explicitly in terms of model parameters. In particular, we have developed an expression (Equation (18)) which links FC reduction to the *E. coli* and organic material load of the poultry entering the tank and the rate at which organic material reacts with FC. This reinforces the importance of reducing the organic material on poultry during the pre-chilling stages of processing. For instance, spraying procedures along the evisceration line, pre-scald mechanisms and the scalding process, are typical practices that affect the organic load in the chill tank (Russell, 2012). Generally speaking, we have shown how microbiological data for generic *E. coli* can inform the model (7) in order to understand cross-contamination mechanisms as well as quantified control limits. In terms of future directions, the model (7) can be used for similar analyses involving human pathogens such as *Campylobacter*, *Salmonella*, and *E. coli* O157:H7, using relevant data for pathogen specific parameters. In addition, our model framework can be adapted to describe industrial scale immersion chilling operations in the U.S. and other locations by modifying certain processing parameters. Referring to Table 2, this may involve adjusting type (i) parameters as well as type (ii). Even if the immersion chilling process involves slightly different practices, such as recycling of red water, our modeling approach can be adjusted to account for these mechanisms. Finally, it would be desirable to stratify poultry into different categories based on pathogen loads (perhaps in terms of thresholds highlighted by the USDA baseline studies). From this perspective, stochastic/agent based simulations (discrete models) can be used to more precisely examine flock-to-flock cross-contamination and derive rules of thumb for logistical slaughter in terms of both prevalence and level of contamination. The model (7) is a vital tool for parameterizing these new discrete models in the context of poultry contamination with human pathogens. Acknowledgments This work was partially supported by the Natural Sciences and Engineering Research Council of Canada (Grant number: 105588-2011), the Canada Research Chairs Program (Grant number: 230720), Mitacs and the Mprime Centre for Disease Modelling. The authors sincerely thank the two anonymous reviewers for their careful reading and helpful suggestions for improving the exposition of the article. Finally, the authors want to thank the CFIA for their support and details concerning poultry slaughter establishments which operate under the CFIA approved Modernized Poultry Inspection Program. References
United States Department of the Interior National Park Service National Register of Historic Places Registration Form This form is for use in nominating or requesting determinations of eligibility for individual properties or districts. See instructions in Guidelines for Completing National Register Forms (National Register Bulletin 16). Complete each item by marking "x" in the appropriate box or by entering the requested information. If an item does not apply to the property being documented, enter "N/A" for "not applicable." For functions, styles, materials, and areas of significance, enter only the categories and subcategories listed in the instructions. For additional space use continuation sheets (Form 10-600). Type all entries. 1. Name of Property historic name: Samuel Gwinn Plantation other name/site number: Old Brick Farm 2. Location street & number: County Route 15 city, town: Lowell state: West Virginia code: 54 county: Summers code: 089 zip code: 24962 3. Classification Ownership of Property: Category of Property Number of Resources within Property private building(s) Contributing building(s) Noncontributing building(s) site 1 sites structure 1 structures object 8 objects Total Name of related multiple property listing: N/A Number of contributing resources previously listed in the National Register: N/A 4. State/Federal Agency Certification As the designated authority under the National Historic Preservation Act of 1966, as amended, I hereby certify that this nomination request for determination of eligibility meets the documentation standards for registering properties in the National Register of Historic Places and meets the procedural and professional requirements set forth in 36 CFR Part 60. In my opinion, the property meets the National Register criteria. See continuation sheet. Date Signature of certifying official State or Federal agency and bureau 5. National Park Service Certification I hereby certify that this property is: √ entered in the National Register. See continuation sheet. determined eligible for the National Register. See continuation sheet. determined not eligible for the National Register. removed from the National Register. other, (explain): Signature of the Keeper Date of Action ### 6. Function or Use <table> <thead> <tr> <th>Historic Functions (enter categories from instructions)</th> <th>Current Functions (enter categories from instructions)</th> </tr> </thead> <tbody> <tr> <td>Domestic: Single dwelling</td> <td>Domestic: Single dwelling</td> </tr> <tr> <td>Agriculture: Agriculture field</td> <td>Agriculture: Agricultural outbuildings</td> </tr> </tbody> </table> ### 7. Description <table> <thead> <tr> <th>Architectural Classification (enter categories from instructions)</th> <th>Materials (enter categories from instructions)</th> </tr> </thead> <tbody> <tr> <td>Mid 19th Century: Greek Revival with Italianate Influence</td> <td>foundation: Rock-faced hard limestone</td> </tr> <tr> <td></td> <td>walls: solid brick - interior &amp; exterior</td> </tr> <tr> <td></td> <td>roof: asphalt/fiber glass shingles</td> </tr> <tr> <td></td> <td>other: wood side and front piazza</td> </tr> </tbody> </table> Describe present and historic physical appearance. 
- **Domestic/single dwelling:** Manor House, Key #1 (circa 1868) C
- **Secondary structures:** Meat curing house, Key #6 (circa 1770) C; Necessary house, Key #2 (circa 1925) NC; Carriage House, Key #5 (circa 1868) C; Delco House, Key #3 (circa 1910) C; Ice House, Key #4 (circa 1868) C
- **Agricultural** - Storage: Granary, Key #7 (circa 1910) NC; Granary foundation, Key #11 (circa 1910) NC
- **Agricultural field:** Pasture, Key #8 (circa 1800 to present) C
- **Agricultural Building:** Machinery Barn, Key #9 (circa 1900) C; Forge, Key #10 (circa 1868) C

See continuation sheet

(A) Summary

The property is comprised of a large brick Manor House with eleven rooms and three porches, constructed circa 1868, and eight outbuildings dating from circa 1770 to 1910. It encompasses approximately four acres on a promontory overlooking the Greenbrier River, 75 feet below. A local landmark, it is set in a scenic rural location amidst a general and cattle agricultural area. The property is at the western terminus of the Piedmont country before the ascendancy of the more mountainous plateau of central southern West Virginia. The original land grant of this property was to Samuel Gwinn Sr., who, along with James Graham (who settled just across the river), was one of the first permanent settlers of Summers County in 1770. Samuel constructed a large 20 x 25 two-story log house which was located 300 yards south of the subject Manor House. The existing Manor House was built by his grandson, "Long Andy" Gwinn, the county's wealthiest man. The log house was dismantled in the 1960s by the Gwinns and utilized in the construction of the Savannah Inn at Lewisburg. The site is still visible, but it is not included in this property nomination. Due to successful tobacco, and later cattle, farming, the Gwinn holdings were expanded in the 18th and early 19th centuries to include a 2,000-acre plantation. This acreage is contiguous and currently surrounds the property. In the 20th century the family divided their land holdings into two large farms, which they currently operate, and the subject property was sold outside the family for the first time in over 200 years. The term "Manor House" is used for two reasons: (1) it was the residence of the owner of the largest cash crop plantation (tobacco) and the richest man in the county, and (2) its substantial appearance, size, and prominent location led the local population to refer to it as a "mansion" or "Manor House".

(B) Present Description: Manor House (Sketch-Sheet Key #1; dimensions: 40' x 50')

The Manor House is a large (11-room), two-story brick Greek Revival dwelling with Italianate influence and is a fine example of the transition between the two styles. It is a five-bay, two-story structure with an unusual jerkinhead, or clipped-gable, roof (believed to be original). Each level has an identical four-room central-hall floor plan. The overall layout is "L" shaped, with a one-story kitchen/great room at the rear. The construction date is circa 1868. The structure is of solid brick, common bond with every 6th course a straight header. All exterior and interior walls are load-bearing brick. The bricks were fired on the property from clay removed from the cellar. The foundation is rock-faced hard limestone. The house was commissioned by "Long Andy" Gwinn and was constructed by Silas F. Taylor, designer and brick mason. Victorian porches (piazzas) run the full length of the front of the house at both the first and second stories. There are six rectangular columns embellished with pierced brackets. The five bays include a single "Greek Revival" entrance with transom, side, and corner lights. The same design is repeated on the second story but at a slightly smaller scale. The 21 windows, each with a segmental arch, include the original shutter hardware. The present owner has secured period shutters which have not as yet been hung (paneled on the 1st level; adjustable louvered on the 2nd). There is a one-story open side porch on the west, or river, side of the home. The Manor House faces north, with beautiful views of the Greenbrier River to the west and north. The central halls include a straight staircase rising to the upper level along the west inner wall. The open-string staircase has turned balusters and a handsome walnut handrail. The interior woodwork is generally plain, with architrave door trim and four-paneled doors. The lower hall includes closed pediments over each doorway and large crown molding in the hallways and several rooms. All doors have iron box locks, but one door has an elaborate carved-relief brass box lock. Several of the interior doors retain their original hand-painted grain finish in excellent condition.

(C) Historic Description - Manor House (Contributing)

The house still appears much as it did when constructed circa 1868. A change was made in the Manor House's front appearance about 1895. Originally the front porch was a single-bay, one-story structure with a deck and railing above, somewhat Federal in style. The existing double front porches across the entire front are a relatively common vernacular adaptation. An interesting feature still exists under these porches. At the time the porches were added, the entire structure's brickwork was painted red. To cause the mortar joints to stand out visually, they were hand-painted with straight white lines. Because of the protection from weathering afforded by the porches, these paint lines are still well preserved today. When the house was converted to central heat, about 1930, several changes were made, one of which also affects the external appearance. Two inside-end chimneys (each with four flues) were removed at a level just below the attic floor. In the two parlors on the first floor and in two bedrooms on the second, the fireplaces were covered with plaster. The fireplaces in the other rooms and their original mantels remain, although they are no longer functional. At about this same time, the original metal roof was converted to asphalt shingles. Originally the house included five bedrooms, four on the upper floor and one for "Long Andy" on the lower. The lower bedroom has become a library, and its two original "clothes presses" at each side of the chimney breast were converted to bookcases and enclosed cabinets. One of the upper bedrooms was converted to a bathroom in 1930.

(D) Outbuildings: Historic and Present Usage (see Sketch Map)

(Key #2) Necessary House (Non-contributing) Dimensions 4' x 4'
This is of wood-frame construction with plain board-and-batten sheathing and is a "one seater" with unusual side vents. It was constructed circa 1925. Indoor plumbing was installed in the house in 1930, and it is assumed from its location that this building was used by the "hands". Currently it is in occasional use. It has a shed-style roof with asphalt shingles. Falls outside the period of significance.

(Key #3) "Delco House" - Contributing 12'.2" x 10'.4"
A private electric generating capability was added to the property circa 1910.
The machinery included a motor, generator and batteries. The engine pads are still in existence. It has horizontal weatherboard sheathing with a tin raised-seam roof. It currently serves as a storage for garden tools and supplies. (Key #4) Ice House - Contributing 14'.8" x 14'.2" Construction circa 1900 utilizing round head nails. Has tin raised seam roof and horizontal weatherboard sheathing. The Gwinn's deepened the channel in the Greenbrier River directly below with dynamite to slow its flow and facilitate freezing (also served as their "swimming hole" in summer). May be one of few remaining ice houses in area. Currently used to store lumber. (Key #5) Carriage House - Contributing 17' x 26' Frame construction with cut nails circa 1860's. Sheathed in plain boards and batten sides with a tin raised seam roof. Has wooden floor and remnants of harness, side saddle, etc. Currently used as a garage. (Key #6) Meat House - Contributing 14' x 14' Constructed of squared logs with "full saddle notches". Use of hand forged nails (clinched on the door) lead to the conclusion that this building was part of the original Samuel Gwinn Sr. homestead of 1770 - 1839. Door also has cast iron butt hinges and a Norfolk lock (1800-1830). Has tin raised seam roof. Currently used for storage. (Key #7) Granary - Contributing 10'.5" diameter This building is all metal with poured concrete floor in very good condition. Ruins of a second one of identical size are located close to the machinery barn. Construction estimated to be circa 1920's. Currently used for storage of period shutters to be installed on Manor House at a future date. (Key #8) Agricultural fields. - Contributing 3 acres The Manor House is surrounded by a plank fence which itself is surrounded by approximately 3 acres of pasture. Current use is pasture for several horses. (Key #9) Machinery Barn - Contributing 26' x 17' Circa 1900 this building with pole construction, plain board and batten siding and tin raised seam roof was added. It is currently used for its original function. (Key #10) Forge - Contributing 18'.5" x 16'.5" Frame constructed circa 1860's with cut nails. Original heavy workbenches and supplies remain. Has raised-seam tin roof and is sheathed in plain board and batten siding. (Key #11) Granary Foundation - non contributing At least two granary's were constructed. In the 1960's, possibly when the Gwinn's sold Sam Sr's log house, the metal structure was removed and sold. The Samuel Gwinn Plantation is significant because it consists of locally important associated resources that date from the period of settlement of the region, c. 1770, to the death of A. Gwinn, 1913, a span of time witnessing the agricultural development of the valley and growth of the operation to include a variety of important buildings, such as the Gwinn "manor house." The complex meets Criterion A because it is closely associated with the local pattern of agricultural land use, particularly tobacco farming. Criterion B also applies because the district is associated with the lives of persons, in this instance several patriarchs of the Gwinn family, who contributed to the first permanent settlement and general prosperity of Summers County and its section of the Greenbrier River Valley. The Gwinn Plantation meets Criterion C inasmuch as its buildings reflect distinctive characteristics of a type, such as the early log building, and work of a master. The large brick farmhouse was constructed by one Silas F. Taylor, a local master builder. types in the county. 
It is the largest, and the only one of brick construction with the vernacular wood piazza. **Broader Patterns of History/Culture** One of the significant aspects of this property is the contribution it makes to the broader patterns of history and culture. The James Graham House (NRHP) just across the river represents the earliest pioneer homestead (1770) of the frontier settlement period of the second half of the 18th Century. On the other hand, the Gwinn Plantation represents other, and perhaps ultimate phases in the development of rural life in the period. The two properties are very closely linked historically, culturally, and socially. The Gwinn property represents the phase that evolved from (1) limited to expanded land holdings, (2) subsistence to commercial agriculture, and (3) pioneer to more sophisticated dwelling places. Viewed from the perspective of their common heritage and close familial and geographic associations, the two properties contribute much to our knowledge and understanding of the first 150 years of rural history in this region. **Archeology:** Prehistoric, Historic-Aboriginal, and Historic-Non-Aboriginal artifacts have been discovered on this property. The site, located on a promontory above the river, is surrounded by gently sloping lands in all directions, yet is only 150 yards from the Greenbrier River. It would seem to be a convenient and defensible location for a Native American campsite. Tradition holds that in fact it served that function. Numerous stone artifacts have been found. An amateur archeologist did some very limited diggings about 15 years ago and found a death mask. From a historic standpoint, the property has been an agriculture site for approximately 218 years. The two Samuel Gwinn's farmed there and "Long Andy" located the plantation buildings there when he built the Manor House in 1868. Both prehistoric and historic archeology potential on the nominated property appears outstanding. Historical Events: This property was settled circa 1770 by Samuel Gwinn Sr., who along with James Graham established the first permanent settlement in the Summers County area. This immediate vicinity was the focus of numerous and significant Indian attacks during the period 1763-1780. Individuals: Samuel Gwinn Sr., during the period 1771-1776, was a scout and spy on the frontier in the Indian wars and fought in the Battle of Point Pleasant. Beyond his original holdings, over the years he obtained and cleared additional tracts of land and established the beginnings of a successful frontier plantation. His will included nine slaves. Samuel Gwinn Jr. continued the agricultural development and expansion of the acreage. He lived in his father's original log house initially accepting from his father a gift of half of a 300 acre parcel. Several years later, he purchased the remainder from a brother. More acreage was subsequently added. Tobacco was the cash crop. Andrew Gwinn ("Long Andy"), Samuel Jr's son carried on the traditions adding contiguous lands to a total of 2000 acres. He became the "wealthiest farmer" in Summers County and its first Sheriff. Silas F. Taylor was a notable regional designer/builder of fine brick homes and buildings. "Long Andy" retained him to build the Manor House. Architecture: The property's existent "Manor House" built by "Long Andy" in 1868, was the largest and most elegant in the area. 
It was constructed by the area's "Master Builder" Silas Taylor who utilized bricks made from clay removed from the cellar excavation and baked on the premises. The architecture was originally Greek Revival with Federal influences. Circa 1895, Italianate piazza were added across the front, at the first and second levels. The home is one of the more unique HISTORICAL CONTEXT (A) Historical Events: Along with James Graham, Samuel Gwinn Sr. was the first permanent settler in the area (Lowell/Pence Springs today). "It was generally believed that this settlement, when made by Col. Graham was one of the first made in this immediate region, if not the very first." (Miller p. 49). "About the year 1770 or possibly a little later, James Graham with his family moved to the Greenbrier River and settled in what is now Summers County" (Graham p. 41). "About the same time that Graham settled near Lowell, Samuel and James Gwinn, two brothers settled in the same section. The Grahams and Gwinns were neighbors on the Calf Pasture River in Virginia before they emigrated, and both had sailed from Ireland together." (Miller p. 50). In his book, David Graham states that his ancestors came from noble stock and that seems somewhat evident from his genealogy research and descriptions in his early chapters. (Graham p. 10). But at least in this country, it would seem more probable that they and the Gwinns, in their successful pursuit of land, followed a more typical pattern. "Thus from early times there existed in Virginia, side by side with the planter class, and sharply distinct from it, a servile laboring class which forms a large part of the total population. Many of these in time, acquired capital, bought land, and joined the planter class. Others, following their terms of indenture, moved to the frontier." (Ambler p. 63). "Robert Gwinn Sr., Samuel Sr.'s father, owned land adjoining the land of John Graham on the Calf Pasture, bought on July 17, 1745." (Long, Nov 15, 1979). When they moved to the frontier, James Graham settled on the "North" side of the Greenbrier River and his friend Samuel Gwinn Sr. settled just across from him. Each side of the Greenbrier at these points were blessed with bottom lands, fertile flood plains, and rolling meadows, very suitable for eventual clearing and planting. This immediate vicinity, due to the increasing "concentration" of settlers was the focus of Indian attacks during the period 1763 - 1780: "There was a first farm" (James Hill Farm) down river from Alderson (toward Lowell) on the Greenbrier which was attacked by Indians circa 1763 - all lives were lost save one very small girl." (Miller p 41). "In the 1770's Jarretts Fort was constructed in Summers County on the Greenbrier at Newman's Ferry near what is now Alderson." (Miller p 41). "There was a fort erected on the opposite side of the river from the Graham House (or on the Gwinn side) where Spotts Hotel now stands (raised in 1985), known as Graham's Fort." (Graham p 49). This was built between 1770 and 1777 since it figured in the latter dated attack by Indians. The Graham homestead was attacked by Indians in the Spring of 1777. A man from the Fort, Mr. McDonald or Caldwell, a negro servant named Sharp, and 10 year old John Graham were killed. Seven year old Elizabeth Graham was captured. (Graham p 89). She was a hostage for 8 years and ransomed for the equivalent of $300 in silver. Indians killed a Ms. Butler or Massey in 1778 or 1779, across the Greenbrier from Talcott. (Formerly Rollynsburg Ferry Miller p 736). 
This is less than a mile from the Samuel Gwinn Plantation. The history of Patents or Grants and sale and exchange of eventual Gwinn Plantation land is also of significance: "The land titles of the whole of the county were derived from the Commonwealth by these grants, commonly known as "patents", issued by the governor. Prior to the date of the Revolutionary War the titles were derived from the Crown of England by grants from the king, but there are no Crown grants in Summers County, unless the 100,000 acres granted to the Greenbrier Company lies in this county. This grant was prior to 1776. (Miller p 252). "The earliest land grant of which we have knowledge is for a tract of land on the mountain between the mouth of Greenbrier River and Wolf Creek. It was issued by Thomas Jefferson in 1779. The claim for the land was laid in 1772, four years before the Declaration of Independence.” (Miller “The first grant of land to Samuel Gwinn was by the Governor of the Commonwealth of Virginia, in the year 1796, and was for a tract of land on Greenbrier River, on which Andrew Gwinn, with his son James, now resides. I have examined a number of patents for lands in that neighborhood to Samuel Gwinn and others, which are all ancient documents, written in elegant handwriting on the dressed skin of some animal, and are in a perfect state of preservation. Mr. Andrew Gwinn has some eight or ten of these old documents, which he prizes very highly.” (Miller p. 657). An earlier patent was issued by Edmund Randolph, dated 10th day of December, 1787: "Edmund Randolph, Esq., Governor of the Commonwealth of Virginia. To all whom these presents shall come (the 's' being in the shape of an 'f') greeting: Know ye, that by virtue of a certificate in of settlement given by the commissioners for adjusting the titles to unpatented lands, in the District of Augusta, Botetourt and Greenbrier, and by consideration of the ancient composition of 2 Pounds sterling, paid by Samuel Gwinn, into the treasury of the Commonwealth, there is granted by the said Commonwealth unto the said Samuel Gwinn, assignee of James Henderson, a certain tract or parcel of land containing 400 acres by survey, bearing the date the first day of June 1784, lying and being in the county of Greenbrier, beginning, etc." (Miller p 661). "Another of these patents is to James Gwinn, and is issued by Edmund Randolph, on the 8th day of November 1787, and of the Commonwealth the twelfth, which is consideration of the ancient composition of 2 pounds sterling, paid by James Gwinn into the treasury. He was granted 400 acres by survey, living in the county of Greenbrier, on Little Wolf Creek, adjoining the lands of John Dickinson." (Miller p 661). "Thomas M. Randolph, Governor of Virginia, granted to Samuel Gwinn November 1, 1821, thirty-one acres. James Monroe, Governor of Virginia, and afterwards President of the United States, granted to Samuel Gwinn, December 2, 1800 five acres. John M. Gregory Lieutenant Governor of Virginia, granted to Ephraim J. Gwinn, August 30, 1842, twenty-one acres. On July 31, 1779, John Osborne conveyed to Samuel Gwinn, for five shillings, 245 acres." (Miller p 658). "Edmond Randolph, Governor of said State, on the 10th of December 1787, issued his patent unto Samuel Gwinn for 400 acres by virtue of survey made on the 1st of June, 1784, on the south side of Greenbrier River adjoining Henry Jones and John Van Bibber". (Miller p 256). 
"Another tract of 95 acres was patented to Samuel Gwinn by Governor James Pleasant, on the 2d day of April, 1824. Governor Edmond Randolph issued his patent to Samuel Gwinn for one of the tracts near Lowell, on the 18th day of March 1789." (Miller p 658). "Claypool seems to have been an original patentee, John Osborne and others conveying the property to said Samuel Gwinn of Monroe County, and the price paid was five shillings. The Claypool patent was dated in 1793, for 250 acres." (Miller p 657). "James Wood, Governor of said Commonwealth, on the 20th day of January, 1798, issued his grant unto Samuel Gwinn for 220 acres adjoining William Graham." (Miller p 256). There are also records of land exchanges involving the Gwinn Plantation through sale or exchange: "Deed between Samuel Gwinn, Sr., and Samuel Gwinn, Jr., dated on the 26th day of October, 1807, by which is conveyed three different tracts of land on Greenbrier River near Lowell. The signature of Samuel Gwinn is witnessed by O. Tolles, Joseph Alderson, John Gwinn and George Alderson; was admitted to record at the December court of Monroe County, 1807; attested by Isaac Hutchinson, C.T." (Miller p 661). "I have another deed which is signed by Samuel Gwinn, the father of Andrew Gwinn, when he was eighty-four years old, and it is well-written. It is witnessed by Joseph Alderson, George Alderson, John Gwinn, and O. Towles, and bears date the 26th day of October 1811, and is a conveyance from Samuel Gwinn, Sr., to his son, Samuel Gwinn, Jr., who was a brother of Andrew Gwinn, Jr., and died only a short time ago." (Miller p 663). "On the 31st day of July, 1779, John Osborne sold to Samuel Gwinn for five shillings, 245 acres at Green Sulphur. Samuel Gwinn conveyed these lands to his son, E. J. Gwinn, as a gift on the 20th of October 1829." (Miller p 253). "All of these are title papers and are as in good state of preservation as when issued; are written on parchment, some kind of skin; the writing is excellent, plain and legible. One of the patents especially, I notice, came from an animal, and the holes made by taking out the legs still remain on the margin, and another hole in it made by taking the hide off the animal. It is unevenly trimmed, but all of them are finely dressed. We seldom see in these days and times better handwriting than that exhibited on these wonderful old documents." (Miller p 662). "These lands are now principally, if not altogether, owned by Andrew Gwinn (Builder of the 1868 Manor House) of Lowell." (Miller p 256). All these lands and others accumulated by the Gwinn's are comprised of 2000 acres surrounding the nominated property and are currently farmed by direct Gwinn heirs. **TOBACCO PLANTATION** It is clear from the foregoing the Gwinn's, Samuel Sr., through the lifetime of Andrew, more than any other family in the region, prospered in the acquisition of lands and the development of agriculture. Records indicate Samuel Sr., acquired some 1000 acres and these were subsequently extended by Samuel Jr., and Andrew during their lifetimes. Mary Gwinn Nelson was born on the plantation in the Manor House and lived on it her entire 94 years. She told the author that her father "Long Andy" had often and at length described the early plantation days, and the size of the operation. "It was not unusual," he had said, "that 40 hands would be engaged in their agricultural and related operations." The Gwinn's pulled far ahead of their friends and neighbors, the Grahams in prosperity. Col. 
James Graham's will of 1812 lists only one slave. We know from Samuel Sr.'s will that he owned 9 slaves at his death. The Gwinn's acquired large land holdings. Mary indicated that "the Gwinn's brought in the first horse drawn machinery." Over the years and up until this day they have been innovators in agricultural technology. In Col. Graham's will, he makes several references to his "plantation". (Miller p 789). Finally, Miller, in describing the origin of the name of the mountain behind the property states, "...Gwinns Mountain behind the property, after Andrew Gwinn who owned a magnificent plantation at its base." (Miller p 352). The documentation on tobacco culture in the immediate area is significant. "There was in the early days quite a profitable industry from which farmers and merchants derived quite a considerable income - that of raising and transporting tobacco, which was cultivated quite extensively and successfully in Forest Hill and Pipestem and a part of Talcott (includes Lowell)." (Miller, p 58). Ambler, in his 1933 History of West Virginia devotes a section to tobacco-culture (p 61-64.) "But of greater importance than any of these elements was the discovery of a crop that could be depended upon as a source of assured income. This was tobacco. Henceforth, for a century and a half, tobacco was the determining factor in the economic, social and political life of the colony. With lands practically free but uncleared, and with the demand for Virginia tobacco increasing annually, the greatest concern of the Virginia landowner was labor." He continues indicating the solution to obtaining this labor (i.e., clearing land and tobacco culture) was to engage indentured servants and subsequently slaves. "By 1750, the total Negro population had increased to more than thirty thousand, and Negro slavery had become an essential of the plantation system". (Ambler p 64). "Because of the westward extension of tobacco culture and consequent increase of the slave population all parts of this area had tended toward economic unity." (Ambler p 212). That "unity" continued up until and through the Civil War with this section of Virginia/West Virginia maintaining strong Confederate sympathies. No less than four Gwinn's: Harrison, Augustus, Marion, and Samuel (the former with Edgars Battalion and the other three with Thurmond's Rangers) fighting for the Confederacy (Miller p 215). (One of their Confederate muskets is in the possession of the author, the current owner of Samuel Gwinn Plantation.) Long refers to "Long Andy's" Confederate sympathies: "During the Civil War, he (Long Andy) had accumulated several thousand dollars of Confederate money, rolls and rolls of fifty and hundred dollar bills." Long goes on to explain "that thereafter, Long Andy would deal only in gold and silver coin!" (Long, April 10, 1980). In an interview of the Honorable C. A. (Bud) Dunn, Lester Lively writes in the Hinton News about the Red Sulphur Turnpike which ran from that area, down Gwinn Mountain (now called Wind Creek) to Lowell. "In addition it was the avenue by which timber products and the products of a rich agricultural region were brought to the shipping point at Lowell. Tobacco was raised extensively in the area in years past which was packed in hogsheads and transported to the C&O for shipment to the markets." (Lively) "Times change and the value of tobacco declined in the mid 19th Century." (Amber p 213). 
The Gwinn's gradually shifted their agricultural production to general dairy and cattle farming, especially as the C&O Railroad was extended to Lowell and beyond in the later part of the century. A picture of an old log tobacco barn in Miller's Book is the only known remnant of that era. This is from the Talcott/Lowell district. (Miller p 9). HISTORICAL INDIVIDUALS Samuel Gwinn, Sr. - First settler (with James Graham). He was born in either 1745 or 1751, the former consistent with his headstone, the latter with his affidavit for a military pension. His father Robert Gwinn (or Gwynn) emigrated prior to 1742 from Ireland becoming a landowner and raising six sons. (Long, April 10, 1980). "Robert Gwinn owned land adjoining the land of John Graham on the Calf Pasture, bought on July 17, 1749. (Long, November 14, 1979). Samuel Gwinn, Sr. married Robert Graham's widow Elizabeth Lockridge Graham Gwinn (died 1794 and brother to Col. James) Before moving his wife to the frontier; (she already had three children before R. Graham’s death), Mr. Gwinn took two slaves to the new property, cleared land and built a cabin (300 yards to the south of the 1868 Manor House.) (Long, November 15, 1979). They eventually had 9 children who lived to marry, in addition to two surviving who remained in the custody of Andrew Lockridge, her father. Samuel Sr's land acquisition and farming accomplishments were documented above. In addition, as might be expected, Samuel Sr. was heavily involved in the Indian Wars prior to and during the Revolution. His military record is documented in the National Archives. "Service: served as a guard on the frontiers against the Indian, at various times prior to the Revolutionary War from 1771 to 1774... was in the Battle of Point Pleasant under General Andrew Lewis and from 1776 served different times as scout and spy on the frontier." (Long, April 10, 1980). In his own words he continues, "I was born in Augusta County, Virginia, I am in my 83rd year; was in the War of 1774; enlisted under Captain Gwinn and went to Point Pleasant where we had a severe battle with the Indians in 1776. I moved to Monroe County (Virginia, now Summers County, West Virginia) with my wife and children for a year or two and then moved to a block house because of the Indians. I returned to my cabin and to my hunting, although people of the settlement took their families to the fort in the summer months." (Long, April 10, 1980). He received a military pension on March 22, 1814, but a year later it was revoked on the basis that he did not serve in "an embodied military corps." (U.S. Archives March 9, 1835). Samuel Gwinn, Sr. died on March 25, 1839 at age 94. His will, dated June 4, 1832, is on record at the Greenbrier County Courthouse, West Virginia and lists in addition to other possessions, nine slaves: Tecumsey, Hiram, Jingo, David, Liews, Lisa Ann, Norris, Grisy and Lewis." (Long, November 22, 1979). Elizabeth, his wife had preceded him in death on January 25, 1832. Samuel Gwinn, Jr. - Prosperous and Innovative Plantation Owner. Samuel Jr. was born to Samuel Sr. and Elizabeth on October 22, 1777. He married Elizabeth Taylor on June 20, 1803. They had thirteen children. Samuel Jr. was born in the original log cabin (as was his son, "Long Andy"). Samuel Jr. was also a prosperous farmer receiving 150 acres from his father and later buying out at least one of his brothers and subsequently "moved into the large home his brother Andrew had built just after his service in the war of 1812". (Long March 22, 1980). 
The fields were needing enhancement when Samuel Jr. had two large lime kilns build on the property about 200 yards to the west of the original log house. Limestone was blasted out of Wind Creek (below the log house site - several remaining drilled explosive holes may still be seen in the creek bed). The rocks were hauled to the kilns, about fifteen feet high and 5 feet across, and cooked to a powder consistency whereupon it was applied to the fields as fertilizer. The kilns still exist today about 300 yards south and 200 yards west of the Manor House. Sometime during this period the Red Sulphur Spring to Lowell road was built across Gwinn Mountain (now Wind Creek Mountain) to carry produce and the Stage Line. The Stage Line was utilized to bring wealthy families to the mountainous (cooling altitude before the days of air conditioning) areas where their social and health needs could be met at the various spas in the area (i.e. White, Green, Red, Blue Sulphur Springs, etc.). Roads were built and maintained by people in the vicinity. At one early time "hands in the neighborhood were compelled, between the ages of twenty one and forty five years, to work such numbers of days as appointed... not exceeding six." (Miller p 344). This road was hewn in the rock on Gwinn Mountain and can be clearly seen in fall and winter from the Manor House. It is no longer used but can be easily traversed. "The Stage was true to its type one sees in the movies today, plush seats and ornate doors, the driver cracking the whip over four horses from a high seat on the top front; baggage piled on a rack behind him. The drivers were "Shug Spangler, the Dunns, Eb Dillon... "Shug" seemed to be the most colorful of the drivers, his "Whoopie" when he topped Gwinn mountain could be heard in Lowell. Shug was drown in Greenbrier River near Lowell after a long and exciting career." (Faulkner from K. J. "Kent Kessler). Samuel Jr. died September 8, 1863 and his wife on February 20, 1872. Andrew Gwinn, (Long Andy) Because of his great height, the oldest son of Samuel Jr. was nicknamed "Long Andy". He was born in the log house his Grandfather had built and in which his father was born, on December 3, 1821. He married Elizabeth Keller on October 18, 1857. They had only one child, James. In the 1840's several families then living in the area moved west. Taking advantage of the opportunities, Long Andy purchased about 1000 more acres adjoining his property. After the construction of the C&O Railroad in the 1870's, a stock yard was built at Pence Springs. "It became the big shipping point for beef cattle that were brought in huge cattle drives by the thousand, even as far away as Giles County, Virginia; and there was "Long Andy" with around 2000 acres in one body. It wasn't long before he was dealing mostly in beef cattle." (Long, April 3, 1980). In 1868 he built the Manor House. "No one knows how many times he looked down the hill where the cabin stood, a constant memory of his humble beginnings; but there stood the big house, its wall twelve inches thick." (Long, April 3, 1980). James H. Miller in his "History of Summers County" says that at the time he was the "wealthiest farmer in the county." **Silas F. Taylor - Designer/Builder** Silas F. Taylor designed and built the Manor House. He was born in Bedford County, Virginia in 1820. He emigrated to Monroe County (now Summers and Monroe) with his father when he was 16. He married Sobina Nutter in 1842 and settled on Lick Creek in 1855. "Silas F. 
Taylor, the ancestor, was a brick mason by trade, and had a reputation throughout all this section of the county for his honest work and ability in his occupation. He built the brick house of Captain A. A. Miller on Lick Creek, also one for Augustus Gwinn near Alderson (still extant), one for Andrew Gwinn at Lowell, (nomination property) and also the Ephraim J. Gwinn brick house at Green Sulphur Springs ... and other old, substantial brick buildings of the county. At the outbreak of the War (Civil) he was captain of the militia, and became a soldier of the Confederacy, being captured in 1862, confined in Johnson's Island prison, and after his discharge entered the service under Captain Philip Thurmond, and was again captured and confined in the same prison..." (Miller p. 420-21). Interestingly enough, his son William J. Taylor was also a Confederate, was captured, and was confined in Camp Chase, Ohio, Prison No. 1, Mess 5. In his voluminous book, Miller includes a copy of a letter from son to father dated August 1, 1862, from prisoner to prisoner! (Miller p. 421). Each of Taylor's homes and buildings are architecturally quite distinctive in their own right. ARCHITECTURE-BUILDINGS The property's architectural significance has been detailed in item 7. The Manor House and contributing outbuildings are important examples of their unique functions and period and their association with the important events and persons in the history of the frontier and its early settlement and development. It should be noted that only the log meat house remains from the original 1770 "settlement" period. The Manor House from its exterior views appears much as it did when constructed in 1868. With the later addition of the front piazza, it is of particular "vernacular" interest. A number of homes in this region possess them. The outbuildings were placed and utilized by Long Andy for his agricultural and domestic needs. They represent generic agricultural functions from the period 1770 up until his death in 1913. The overall view depicts what an affluent family's home looked like in the mid 19th Century. With few exceptions, the Manor House is "period"; admittedly it is less elegant and of smaller scale than the typical "Southern" plantation house. Samuel Sr. and Samuel Jr. cleared, lived on, and cultivated this property from the 1770's. The meat house was Samuel Sr's. Samuel Jr. lived in his father's log house and expanded the land holdings and agricultural operation. Long Andy built the Manor House, lived in it his entire life and died there. It represents the "best" of its period in this region. The Manor House is in an excellent state of repair. HISTORIC SETTING The site is as beautiful and functional as the day Samuel Sr. purchased it and began the pursuit of his dreams. It appears today much as it would have in the early period; rolling agricultural fields and pasture along the side of and overlooking the Greenbrier River Valley. The events and the people and the functions associated with this property are most certainly significant in the patterns of American History. Previous documentation on file (NPS): - Preliminary determination of individual listing (36 CFR 67) has been requested - Previously listed in the National Register - Previously determined eligible by the National Register - Designated a National Historic Landmark - Recorded by Historic American Buildings - Recorded by Historic American Engineering Record # 10. 
Geographical Data Acreage of property: 3.75 acres UTM References <table> <thead> <tr> <th>A</th> <th>Zone</th> <th>Easting</th> <th>Northing</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>313.4</td> <td>14.0</td> </tr> </tbody> </table> <table> <thead> <tr> <th>B</th> <th>Zone</th> <th>Easting</th> <th>Northing</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>5.1</td> <td>6.5</td> </tr> </tbody> </table> <table> <thead> <tr> <th>C</th> <th>Zone</th> <th>Easting</th> <th>Northing</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>9.6</td> <td>5.0</td> </tr> </tbody> </table> <table> <thead> <tr> <th>D</th> <th>Zone</th> <th>Easting</th> <th>Northing</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>11.4</td> <td>16.1</td> </tr> </tbody> </table> Verbal Boundary Description See continuation sheet Boundary Justification The property's approximately 3.75 acres includes a complex of surviving plantation buildings and Manor House surrounded by pasture. The boundaries include a concentration of structures in a setting which conveys a strong sense of historical relationship as to location, setting and design. See continuation sheet 11. Form Prepared By name/title: Schmauss, David C. - owner organization: NA date: July 28, 1988 street & number: Box 165A Route 3 telephone: 666-4421 city or town: Pence Springs, West Virginia state: WV zip code: 26962 MAJOR BIBLIOGRAPHICAL REFERENCES: Graham, A.R.; *Gwinn Family Sketch*; Unpublished manuscript. 1907. Gwinn, Samuel Sr.; "Pension Certificate No. 23706" For the Revolutionary War; March 12, 1834; National Archives, Wash. D.C. Lively, Lester; "Bud Dunn Recalls Red Sulphur Turnpike Days;" 19; *Hinton Daily News* - date unknown. Long, Frederick D; "The Gwinn's"; Nineteen Article Series; November 15, 1979 to April 24, 1980; *Hinton Daily News*. Miller, James H.; *History of Summers County - From the Earliest Settlement to the Present Time*; Privately Published Book, 1908. BEGINNING at a metal fence post on the southerly right of way line of W. Va. Secondary Route 15 near C&P Telephone Company pole No. 182.1, being a corner to Mrs. M. E. Nelson and proceeding thence leaving said right of way and with said Nelson line S. 17 degrees 11' 15" W. 278.20 feet with the fence to the northeast corner of an old wooden shed; thence with the easterly side of said shed and continuing past said shed S. 14 degrees 24' W. 37.03 feet to a metal fence post set by J. E. Gwinn; thence N. 76 degrees 13' 17" W. 179.92 feet with a fence to a metal fence post corner; thence N. 15 degrees 16' 17" E. 392.55 feet with a fence to a fence post near a gate on the southerly right of way line of W. Va. secondary route 15; thence with said right of way S. 72 degrees 53' E. 471.02 feet to the point of Beginning, containing 3.758 acres, more or less. (Recorded in the Office of the County Clerk of Summers County, West Virginia, Deed Book Number 115 page 211.)
CULTURAL RESOURCES SURVEY OF THE LAKE RIDGE 115kV TRANSMISSION PROJECT, HORRY COUNTY, SOUTH CAROLINA Prepared By: Michael Trinkley, Ph.D., RPA and Nicole Southerland Prepared For: Mr. Tommy L. Jackson Central Electric Power Cooperative P.O. Box 1455 Columbia, SC 29202 CHICORA RESEARCH CONTRIBUTION 505 Chicora Foundation, Inc. PO Box 8664 Columbia, SC 29202-8664 803/787-6910 www.chicora.org December 22, 2008 This report is printed on permanent paper ∞ This study reports on an intensive cultural resources survey of an approximately 0.6 mile corridor and 1.98 acre substation in Horry County, South Carolina. The work was conducted to assist Central Electric Power Cooperative comply with Section 106 of the National Historic Preservation Act and the regulations codified in 36CFR800. The corridor is to be used by Central Electric Power Cooperative for the construction of a transmission line, which will connect a Santee Cooper line to a new substation. The topography is low and flat with no distinct ridge tops. The proposed route will require the clearing of the corridor, followed by construction of the proposed transmission line and substation. These activities have the potential to affect archaeological and historical sites that may be in the project corridor. For this study an area of potential effect (APE) 0.5 mile around the proposed transmission project was assumed. An investigation of the archaeological site files at the S.C. Institute of Archaeology and Anthropology identified two previously recorded sites (38HR172 and 38HR175) in the project APE. Site 38HR172 is an Early Woodland and late nineteenth to twentieth century scatter while 38HR175 is a twentieth century scatter. Both sites have been determined not eligible for the National Register of Historic Places. The Archsite GIS was consulted for any previously recorded architectural sites. Two sites (060-0063 and 060-0064) were identified. Even though a 1988 county-wide architectural survey has been performed, the GIS showed these two structures as not evaluated and having no information. The archaeological survey of the corridor incorporated shovel testing at 100-foot intervals along the center line of the 75-foot right-of-way, which was marked by stakes. All shovel test fill was screened through ¼-inch mesh with a total of 32 shovel tests excavated along the corridor with four shovel tests excavated in the substation area (which had already been cleared and filled at the time of the survey). As a result of these investigations no sites were identified. This is likely the result of the lack of any ridge tops and the distance from a permanent water source. A survey of public roads within a 0.5 mile of the proposed undertaking was conducted in an effort to identify any architectural sites over 50 years old that also retained their integrity. No such sites were found. The previously identified structures were revisited and rephotographed. Structure 060-0063 is no longer present and 060-0064 is recommended not eligible for the National Register of Historic Places. Another structure (060-0503), which had been recorded during a 2000 survey but not placed on the GIS, was also revisited. Finally, it is possible that archaeological remains may be encountered in the project area during clearing activities. 
Crews should be advised to report any discoveries of concentrations of artifacts (such as bottles, ceramics, or projectile points) or brick rubble to the project engineer, who should in turn report the material to the State Historic Preservation Office or to Chicora Foundation (the process of dealing with late discoveries is discussed in 36CFR800.13(b)(3)). No construction should take place in the vicinity of these late discoveries until they have been examined by an archaeologist and, if necessary, have been processed according to 36CFR800.13(b)(3).

TABLE OF CONTENTS
List of Figures
Introduction
Natural Environment
  Physiography
  Geology and Soils
  Floristics
  Climate
Prehistoric and Historic Synopsis
  Previous Research
  Prehistoric Overview
  Historic Synopsis
Research Methods and Findings
  Archaeological Field Methods and Findings
  Architectural Survey
  Site Evaluation and Findings
Conclusions
Sources Cited

LIST OF FIGURES
1. Project vicinity in Horry County
2. Project corridor and previously identified archaeological and architectural sites
3. Portion of the corridor adjacent to a field and ditch
4. Portion of a mixed pine and hardwood forest along the corridor
5. Generalized cultural sequence for South Carolina
6. Portion of Mills’ Atlas showing the project vicinity
7. Portion of the 1918 Horry County Soil Survey showing the project area
8. Portion of the 1939 General Highway and Transportation Map of Horry County
9. View of shovel testing at the substation lot, which had already been cleared
10. View of the transmission route through the landscaped area of a new CVS
11. Topo map showing the three recorded architectural sites
12. View of Structure 060-0064
13. View of Structure 060-0503

INTRODUCTION
This investigation was conducted by Dr. Michael Trinkley of Chicora Foundation, Inc. for Mr. Tommy L. Jackson of Central Electric Power Cooperative. The work was conducted to assist Central Electric Power Cooperative in complying with Section 106 of the National Historic Preservation Act and the regulations codified in 36CFR800. The project consists of a 0.6 mile corridor and a 1.98 acre lot to be used for a 115kV transmission line and substation in southeastern Horry County (Figure 1). The project runs approximately east-west between the proposed substation and a proposed Santee Cooper transmission line on the east side of SC544. The proposed corridor, as previously mentioned, is intended to be used as a transmission line. Landscape alteration, primarily clearing, and construction, including erection of poles, will damage the ground surface and any archaeological resources that may be present in the survey area. Construction and maintenance of the transmission line and substation may also have an impact on historic resources in the project area.
The project will not directly affect any historic structures (since none are located on the survey corridor), but the completed facility may detract from the visual integrity of historic properties, creating what many consider discordant surroundings. As a result, this architectural survey uses an area of potential effect (APE) about 0.5 mile radius around the proposed survey corridor. This study, however, does not consider... any future secondary impact of the project, including increased or expanded development of this portion of Horry County. We were requested by Mr. Tommy L. Jackson of Central Electric Power Cooperative to conduct a cultural resources survey for the project on November 5, 2008. These investigations incorporated a review of the site files at the South Carolina Institute of Archaeology and Anthropology. As a result of that work, two archaeological sites (38HR172 and 38HR175) were found within a 0.5 mile area of potential effect (APE). Site 38HR172 is an Early Woodland and late nineteenth to twentieth century scatter while 38HR175 is a twentieth century scatter. Both site have been determined not eligible for the National Register of Historic Places. The Archisite GIS was consulted to check for any NRHP buildings, districts, structures, sites, or objects in the study area. Two sites (060-0063 and 060-0064) were identified. Even though a 1988 county-wide architectural survey (Utterback 1988) has been performed, the GIS showed these two structures as not evaluated and having no information. Archival and historical research was limited to a review of secondary sources available in the Chicora Foundation files. The archaeological survey was conducted on December 18, 2008 by Ms. Nicole Southerland and Ms. Ashley Guba under the direction of Dr. Michael Trinkley. Figure 2. Project corridor and previously identified archaeological and architectural sites (basemap is USGS Bucksville 7.5'). The architectural survey of the APE, designed to identify any structures over 50 years in age that retain their integrity and were potentially eligible for the National Register of Historic Places, revealed no such structures. The two previously recorded sites were examined with 060-0063 no longer standing. Another structure within the APE, recorded as 060-0503, was identified during a 2000 survey by Chicora Foundation (Trinkley 2000), but was recommended not eligible for the National Register. This structure was not listed on the Archsite GIS. Report production was conducted at Chicora’s laboratories in Columbia, South Carolina from December 22-23, 2008. The only photographic materials associated with this project are digital images, which are not archival, and will be retained for only 90 days. NATURAL ENVIRONMENT Physiography The project area is situated in southeastern Horry County, less than 0.5 mile east of the Waccamaw River, which dominates the landscape, meandering to form large cutoffs or lakes, as well as much swamp. The level topography in the region is interrupted by only occasional marsh sloughs and small wetland depressions. In general, the topography of the study tract is level, with only a slight elevation change toward the small drainage on the property. The Waccamaw essentially bisects the county into east and west halves and drains numerous swamps between the river and the Atlantic Ocean. On a regional scale the topography slopes either southeast toward the Waccamaw or northwest toward smaller drainages such as Maple Swamp. 
Horry County is bounded to the north by Brunswick and Columbus counties, North Carolina, to the east by the Atlantic Ocean, to the south by Georgetown County, and to the west by Dillon and Marion counties. It lies within the Lower Coastal Plain, which is made up of fluvial deposits that contain varying amounts of sand, silt, and clay (Dudley 1986). This is also the area known as the Atlantic Coast Flatwoods, which extends inland from the seashore about 30 to 70 miles. The area is characterized by broad flats and depressions. While there are areas of well drained soils, much of the flatwoods consists primarily of poorly drained soils with clay subsoils, especially near the coast (Ellerbe 1974:18). Elevations may range from sea level to about 100 feet above mean sea level in the Lower Coastal Plain. In the project area there are no areas where the land is higher than about 20 feet above mean sea level (AMSL), and some of the area is lower (around 10 feet) toward the drainage at the western end of the corridor. A noticeable characteristic of this physiographic area is how gradually the flat lands seem to grade into freshwater marshes, savannahs, or swamps.

Geology and Soils

The geology of the Lower Coastal Plain has been well described by Cooke (1936), who notes that from the Cape Fear River in North Carolina to Winyah Bay in South Carolina, the coast forms a “great arc scooped out by waves” (Cooke 1936:4). This area has been described by Brown (1975) as being an arcuate strand. In this area salt marshes are poorly developed or absent and few tidal inlets breach the coast (Smith 1933:20-21). The situation is the result of an erosional history dating to about 100,000 years ago. In general, however, the geology of the Lower Coastal Plain is less complex than that of other sections of the state. As previously mentioned, the area is dominated by fluvial deposits of unconsolidated sands and clays. Rocks are almost totally absent from the area, although Mills (1972[1826]:584) does note that some compact shell limestone was found on the Waccamaw between Gaul’s Ferry and Bear Bluff.

Figure 3. Portion of the corridor adjacent to a field and ditch.

Soils were primarily formed during the Pleistocene epoch, and several terraces were deposited (Dudley 1986:85). The project vicinity is characterized by the Yauhannah-Ogeechee-Bladen Association (Dudley 1986). This association, which occurs on nearly level to gently sloping land, consists of moderately well drained and poorly drained soils with a loamy or sandy surface and a loamy to clayey subsoil. The survey area includes three soil series – Yauhannah fine sandy loam, Yemassee loamy fine sand, and Ogeechee loamy fine sand. The moderately well drained Yauhannah soils comprise about 74% of the project area. This soil has an Ap horizon of very dark grayish brown (10YR3/2) loamy fine sand to a depth of 0.5 foot over a yellowish brown (10YR5/4) loamy fine sand that extends to about 0.8 foot in depth. The somewhat poorly drained Yemassee Series accounts for 21% of the project area. This soil has an A horizon of black (10YR2/1) loamy fine sand to 0.6 foot in depth over a pale brown (10YR5/3) loamy fine sand to 1.0 foot in depth. The poorly drained Ogeechee soils, which account for 5% of the survey area, have an A horizon of very dark gray (10YR3/1) loamy fine sand to 0.7 foot in depth over a dark grayish brown (10YR4/2) sandy clay loam to 1.9 feet in depth. In 1826 Robert Mills commented that the soil was rich and productive adjacent to Horry’s rivers.
Even the uplands were well suited for cotton with their light sandy soil underlaid by clay. But he commented that a great deal of swamp land was found in the district, “fit only for cattle ranges” (Mills 1972[1826]:585). Edmund Ruffin, who managed to visit much of South Carolina’s coast in the mid-1840s, never sought to go to Horry, commenting that: I would have gone to Horry, which is called the “dark corner” of the state, but for having no expectation of finding anyone acquainted with or feeling interested in the objects of explorations (Mathew 1992:215). Floristics Vegetation in Horry County is characterized in relation to the previously broad topographic patterns of poorly drained floodplains and lowlands, and the well drained uplands. The vegetation in Horry County has been classified by Küchler (1964) as part of the Oak-Hickory-Pine forest, based on potential natural vegetation. This would consist of medium tall to tall forests of broadleaf deciduous and needleleaf ever-green trees. More specifically, however, the floodplains are covered by mixed hardwoods, including bald cypress, tupelo gum, and black gum. Less water tolerant trees, such as pines, occur on the uplands or on better drained slopes. Also found in the bottomlands, floodplains, and Carolina bays are red maple, ash, water oak, elm, and sweet gum. On the better drained uplands pine dominates, with loblolly and longleaf pines being indigenous and the slash pine introduced. In 1826 Mills in describing the Horry District vegetation, notes: The long leaf pine abounds, also the cypress, live oak, water oak, white oak, &c. The fruit trees are, peaches, apples, pears, plums, cherries, figs; besides strawberries, which grow wild, whortleberries, &c. The forest trees begin to bud in the latter part of March, and the fruit trees in April. The pine and cypress are mostly used for buildings (Mills 1972[1826]:582). Climate Elevation, latitude, and distance from the coast work close together to affect the climate of South Carolina, although Horry is clearly dominated by its maritime location. Much of the weather is controlled by the proximity of the Gulf Stream, about 50 miles offshore. In addition, the more westerly mountains block or moderate many of the cold air masses that flow across the state from west to east. Even the very cold air masses that cross the mountains are warmed by compression before the descent on the Coast. As a result, the climate of Horry County is temperate. The winters are relatively mild with a mean temperature of 48°F and the summers are very warm and humid, with a mean temperature of 79°F and average humidity of 60%. Rainfall in the amount of about 51 inches is good for a broad range of crops. About 31 inches (or 60% of the total) occurs during the growing season. Until recently, periods of drought have not been common. Of course, there have been statewide droughts, such as the one in 1845, but more often the threat to Horry crops was flooding. Major floods have occurred in 1855, 1924, 1928, 1959, 1961, and 1973, with the September 1928 flood the largest known, reaching a stage of 12.75 feet above mean sea level (U.S. Army Corps of Engineers 1973:9). The average growing season is about 234 days, although early freezes in the fall and late frosts in the spring can reduce this period by as much as 30 or more days (Dudley 1986:97). Consequently, most cotton planting did not take place until early May, avoiding the possibility that a late frost would damage the young seedlings. 
PREHISTORIC AND HISTORIC SYNOPSIS Previous Research Horry has received rather spotty archaeological attention. Derting and his colleagues, for example, list only 67 reports associated with the county, with 41 of these (or 61%) representing highway or sewer surveys (Derting et al. 1991). Although dated, this indicates that the attention has been focused on relatively narrow, contained corridors, with only minor attention devoted to the area’s rich prehistoric and protohistoric resources. Considerable, primarily unpublished, research took place in the Myrtle Beach area during the 1960s at the Ellsworth Site by Erika Fogg-Amed, then a student of Reinhold Englemeyer at USC-Conway. Several test units were placed within the site which yielded Stallings, Thom’s Creek, Hanover, and Cape Fear sherds, as well as a Morrow Mountain component (Fogg-Amed n.d. a). No site boundaries were established and, in fact, no site form has ever been filed. Fogg-Amed also tested the “Coates Site,” located about 10 miles north of Myrtle Beach on a high bluff overlooking a freshwater pond. Testing at this site yielded a dense shell midden that produced only lithic debitage (Fogg-Amed n.d. b). Again, no site form was filed. Closer to the survey corridor at least two project areas have been surveyed. These are compliance reports on road improvements and a school (Martin et al. 1987; Trinkley 2000). Prehistoric Overview The Paleoindian period, lasting from 12,000 to 8,000 B.C., is evidenced by basally thinned, side-notched projectile points; fluted, lanceolate projectile points; side scrapers; end scrapers; and drills (Coe 1964; Michie 1977; Williams 1968). The Paleoindian occupation, while widespread, does not appear to have been intensive. Artifacts are most frequently found along major river drainages, which Michie interprets to support the concept of an economy “oriented towards the exploitation of now extinct mega-fauna” (Michie 1977:124). Unfortunately, little is known about Paleoindian subsistence strategies, settlement systems, or social organization. Generally, archaeologists agree that the Paleoindian groups were at a band level of society (see Service 1966), were nomadic, and were both hunters and foragers. While population density, based on the isolated finds, is thought to have been low, Walthall suggests that toward the end of the period, “there was an increase in population density and in territoriality and that a number of new resource areas were beginning to be exploited” (Walthall 1980:30). The Archaic period, which dates from 8000 to 2000 B.C., does not form a sharp break with the Paleoindian period, but is a slow transition characterized by a modern climate and an increase in the diversity of material culture. Associated with this is a reliance on a broad spectrum of small mammals, although the white tailed deer was likely the most commonly exploited mammal. The chronology established by Coe (1964) for the North Carolina Piedmont may be applied with little modification to the South Carolina coastal plain and piedmont. Archaic period assemblages, characterized by corner-notched and broad stemmed projectile points, are fairly common, perhaps because the swamps and drainages offered especially attractive ecotones. In the Coastal Plain of the South Carolina, there is an increase in the quantity of Early Archaic remains, probably associated with an increase in population and associated increase in the intensity of occupation. 
While Hardaway and Dalton points are typically found as isolated specimens along riverine environments, remains from the following Palmer phase are not only more common, but are also found in both riverine and interriverine settings. Kirks are likewise common in the coastal plain (Goodyear et al., 1979). The two primary Middle Archaic phases found in the coastal plain are the Morrow Mountain and Guilford (the Stanly and Halifax complexes identified by Coe are rarely encountered). Our best information on the Middle Woodland comes from sites investigated west of the Appalachian Mountains, such as the work in the Little Tennessee River Valley. The work at Middle Archaic river valley sites, with their evidence of a diverse floral and faunal subsistence base, seems to stand in stark contrast to Caldwell’s Middle Archaic “Old Quartz Industry” of Georgia and South Carolina, where axes, choppers, and ground and polished stone tools are very rare. The Late Archaic is characterized by the appearance of large, square stemmed Savannah River projectile points (Coe 1964). These people continued the intensive exploitation of the uplands much like earlier Archaic groups. The bulk of our data for this period, however, comes from work in the Uwharrie region of North Carolina. The Woodland period begins, by definition, with the introduction of fired clay pottery about 2000 B.C. along the South Carolina coast (the introduction of pottery, and hence the beginning of the Woodland period, occurs much later in the Piedmont of South Carolina). It should be noted that many researchers call the period from about 2500 to 1000 B.C. the Late Archaic because of a perceived continuation of the Archaic lifestyle in spite of the manufacture of pottery. Regardless of terminology, the period from 2500 to 1000 B.C. is well documented on the South Carolina coast and is characterized by Stallings (fiber-tempered) pottery. The subsistence economy during this early period was based primarily on deer hunting and fishing, with supplemental inclusions of small mammals, birds, reptiles, and shellfish. Like the Stallings settlement pattern, Thom’s Creek sites are found in a variety of environmental zones and take on several forms. Thom’s Creek sites are found throughout the South Carolina Coastal Zone, Coastal Plain, and up to the Fall Line. The sites are found into the North Carolina Coastal Plain, but do not appear to extend southward into Georgia. In the Coastal Plain drainage of the Savannah River there is a change of settlement, and probably subsistence, away from the riverine focus found in the Stallings Phase (Hanson 1982:13; Stoltman 1974:235-236). Thom’s Creek sites are more commonly found in the upland areas and lack evidence of intensive shellfish collection. In the Coastal Zone large, irregular shell middens; small, sparse shell middens; and large “shell rings” are found in the Thom’s Creek settlement system. The Deptford phase, which dates from 1100 B.C. to A.D. 600, is best characterized by fine to coarse sandy paste pottery with a check stamped surface treatment. The Deptford settlement pattern involves both coastal and inland sites. Inland sites such as 38AK228-W, 38LX5, 38RD60, and 38BM40 indicate the presence of an extensive Deptford occupation on the Fall Line and the Coastal Plain, although sandy, acidic soils preclude statements on the subsistence base (Anderson 1979; Ryan 1972; Trinkley 1980b). 
These interior or upland Deptford sites, however, are strongly associated with the swamp terrace edge, and this environment is productive not only in nut masts, but also in large mammals such as deer. Perhaps the best data concerning Deptford “base camps” comes from the Lewis-West site (38AK228-W), where evidence of abundant food remains, storage pit features, elaborate material culture, mortuary behavior, and craft specialization has been reported (Sassaman et al. 1990:96-98). Throughout much of the Coastal Zone and Coastal Plain north of Charleston, a somewhat different cultural manifestation is observed, related to the “Northern Tradition” (e.g., Caldwell 1958). This recently identified assemblage has been termed Deep Creek and was first identified from northern North Carolina sites (Phelps 1983). The Deep Creek assemblage is characterized by pottery with medium to coarse sand inclusions and surface treatments of cord marking, fabric impressing, simple stamping, and net impressing. Much of this material has been previously designated as the Middle Woodland “Cape Fear” pottery originally typed by South (1976). The Deep Creek wares date from about 1000 B.C. to A.D. 1 in North Carolina, but may date later in South Carolina. The Deep Creek settlement and subsistence systems are poorly known, but appear to be very similar to those identified with the Deptford phase. The Deep Creek assemblage strongly resembles Deptford both typologically and temporally. It appears this northern tradition of cord and fabric impressions was introduced and gradually accepted by indigenous South Carolina populations. During this time, some groups continued making only the older carved paddle stamped pottery, while others mixed the two styles, and still others (and later all) made exclusively cord and fabric stamped wares. The Middle Woodland in South Carolina is characterized by a pattern of settlement mobility and short-term occupation. On the southern coast it is associated with the Wilmington phase, while on the northern coast it is recognized by the presence of Hanover, McClellanville or Santee, and Mount Pleasant assemblages. The best data concerning Middle Woodland Coastal Zone assemblages comes from Phelps' (1983:32-33) work in North Carolina. Associated items include a small variety of the Roanoke Large Triangular points (Coe 1964:110-111), sandstone abraders, shell pendants, polished stone gorgets, celts, and woven marsh mats. Significantly, both primary inhumation and cremations are found. On the Coastal Plain of South Carolina, researchers are finding evidence of a Middle Woodland Yadkin assemblage, best known from Coe’s work at the Doerschuk site in North Carolina (Coe 1964:25-26). Yadkin pottery is characterized by a crushed quartz temper and cord marked, fabric impressed, and linear check stamped surface treatments. The Yadkin ceramics are associated with medium-sized triangular points, although Oliver (1981) suggests that a continuation of the Piedmont Stemmed Tradition to at least A.D. 300 coexisted with this Triangular Tradition. The Yadkin series in South Carolina was first observed by Ward (1978, 1983) from the White’s Creek drainage in Marlboro County, South Carolina. Since then, a large Yadkin village has been identified by DePratter at the Dunlap site (38DA66) in Darlington County, South Carolina (Chester DePratter, personal communication 1985) and Blanton et al. (1986) and have excavated a small Yadkin site (389SU83) in Sumter County, South Carolina. 
Research at 38FL249 on the Roche Carolina tract in northern Florence County revealed an assemblage including Badin, Yadkin, and Wilmington wares (Trinkley et al. 1993:85-102). Anderson et al. (1982:299-302) offer additional typological assessments of the Yadkin wares in South Carolina. Over the years, the suggestion that Cape Fear might be replaced by such types as Deep Creek and Mount Pleasant has raised considerable controversy. Taylor, for example, rejects the use of the North Carolina types in favor of those developed by Anderson et al. (1982) from their work at Mattassee Lake in Berkeley County (Taylor 1984:80). Cable (1991) is even less generous in his denouncement of ceramic constructs developed nearly a decade ago, also favoring adoption of the Mattassee Lake typology and chronology. This construct, recognizing five phases (Deptford I-III, McClellanville, and Santee I), uses a type variety system. Regardless of terminology, these Middle Woodland Coastal Plain and Coastal Zone phases continue the Early Woodland Deptford pattern of mobility. While sites are found all along the coast and inland to the Fall Line, shell midden sites evidence sparse shell and artifacts. Gone are the abundant shell tools, worked bone items, and clay balls. Recent investigations at Coastal Zone sites such as 38BU747 and 38BU1214, however, have provided some evidence of worked bone and shell items at Deptford phase middens (see Trinkley 1990). In many respects, the South Carolina Late Woodland may be characterized as a continuation of previous Middle Woodland cultural assemblages. While outside the Carolinas there were major cultural changes, such as the continued development and elaboration of agriculture, the Carolina groups settled into a lifeway not appreciably different from that observed for the previous 500 to 700 years (cf. Sassaman et al. 1990:14-15). This situation would remain unchanged until the development of the South Appalachian Mississippian complex (see Ferguson 1971). The South Appalachian Mississippian period, from about A.D. 1100 to A.D. 1640, is the most elaborate level of culture attained by the native inhabitants and is followed by cultural disintegration brought about largely by European disease. The period is characterized by complicated stamped pottery, complex social organization, agriculture, and the construction of temple mounds and ceremonial centers. The earliest phases include the Savannah and Pee Dee (A.D. 1200 to 1550). Historic Synopsis The earliest activity in the Horry County area may have been the Spanish Ayllon movement from Rio Jordon (Cape Fear River) to San Miguel de Gualdape, 45 leagues distant. Some have argued that Fort San Miguel may have been at the mouth of Winyah Bay, although Paul Hoffman has recently suggested the fort was in Beaufort County, South Carolina or Chatham County, Georgia. While the English settled Charleston in 1670, the northern frontier was ignored, except for the Indian trade, until 1731, when the first Royal Governor of Carolina, Robert Johnson, directed 11 townships to be laid out, including Kingston on the west bank of the Waccamaw. Kingston covered much of Georgetown and Horry counties and by 1734 the town of Kingston, later known as Conwayboro and eventually Conway, was founded. The township, however, was never elevated to a parish, but remained part of the Parish of Prince George, Winyah until 1785. 
In that year Prince George was divided into four districts and by 1801 Horry District was formally separated from Georgetown (Rogers 1972:9). The designation of “county” was not used until 1868. A variety of townships were established, including Simpson Creek and Little River on the south side of the Waccamaw River. Prior to the Revolution there were few residents in Kingston and it was not until the late eighteenth century that English, French, Scotch, and Irish settlers began coming into the area. Many settlers in the early nineteenth century came from North Carolina and the northern seaboard states. In spite of Horry’s coastal plain situation, the area developed along vastly different lines than its southern neighbors Georgetown and Charleston. Horry District was always isolated from the remainder of South Carolina and had much stronger connections with North Carolina (Rogers 1972:3). The major traffic artery was the Waccamaw River and this reliance on river transport did not change until the highway development of the 1930s. Subsistence farming was the main occupation in the early 1800s and the farms were small, specializing in peas, wheat, rice, cotton, and corn, most for home consumption (Rogers 1972:5). Mills notes that the population was, mostly engaged in cultivating the soil. There are a few mechanics, such as blacksmiths, shoemakers, taylors [sic], halters, etc. (Mills 1972[1826]:583). For Mills’ Atlas of 1826, the Horry District was surveyed by Harlee in 1820. No settlements are shown in the project corridor (Figure 6). The settlement of Larrimore is located to the south of the project area. The absence of houses surrounding the project area may not so much indicate sparse settlement as it may reflect the subscription basis of Mills’ Atlas. The subsistence farmers of Horry District may either have been unable to subscribe or may have had no need to let others know their location. The 1860 census for Horry District indicates that many of the farmers in Kingston, for example, could neither read nor write, further reducing the benefits of listing in an atlas. The emphasis on subsistence farming appears to be the result of topography. Only 20% of the land is subject to the type of tidal overflow necessary for wet cultivation of rice. Mills (1972[1826]:581) notes that the river floodplain soil was productive where it could be reclaimed by drainage, while the upland soils were much less productive. This difference in quality is reflected in the prices for the land. Mills states that, the low land swamps, when secured from the freshets, will sell for 40 or $50 an acre. The uplands are valued at from $4 down to 25 cents per acre (Mills 1972[1826]:581). Interestingly, the price of “improved farms” ranged from $20 to $50 an acre as late as 1918 (Tillman et al. 1919:340). The few plantations found in Horry District were primarily located in All Saints Parish, east and south of the Waccamaw River. It was from this area that a small quantity of rice was exported throughout the nineteenth century (Rogers 1972:13). Because the soils of Horry District were not able to support plantation agriculture a unique distribution of population and a very low percentage of slaves were found in the region. Horry County also continued to play a minor role in state politics. The area, prior to the Civil War, was oriented to smaller farmers and never developed an aristocratic plantation society with political and economic power. 
Most of the farms, including the larger ones, were situated in Kingston Township. The 1860 census indicates that of the 782 farms, 560 were in Kingston (Rogers 1972:12). In 1860, the population was 2,606 and there were only 708 slaves. This ratio of 70% white and 30% black has not only remained stable into the twentieth century, but also stands in contrast to Georgetown District where about 12% of the population was white and 88% was black until the 1880 census, when the white population increased to about 20% (Rogers 1972). By the 1830s, a new industry was competing with farming in the Horry area. Northern immigrants from Maine, coupled with “pine woods speculators” from North Carolina, began to exploit the forest products of both the uplands and swamp areas (Tillman et al. 1919:330; Berry 1970; Rogers 1972:14). The Horry District was the leading turpentine producer in South Carolina by 1860, producing products valued at $392,643. The lumber and turpentine industry continued to grow rapidly after the Civil War. Tobacco was introduced about 1850, but was not an important crop until after the Civil War, led by the Green Sea Township. Horry District never sided with the radical secessionists, possibly because of the influence of northern immigrants or because of the resentment of the political and economic power of slave owners. In any event, Horry County responded “enthusiastically” to the call for volunteers at the outbreak of the Civil War (Rogers 1972:35). Horry District saw little involvement in the Civil War, although 925 of the 1,000 men in the voting population volunteered for duty and served (Rogers 1972:35). Fort Randell was established at Clardy’s Point on the Little River and saw skirmishes in 1863 and 1865. The salt works of Peter Vaught, Sr. at Singleton Swash were raided in April 1864, and in 1865 a Union expedition was led up the Waccamaw to destroy ferries at Bull Creek and Yahannah (Rogers 1972:35-38). After the Civil War, Horry was part of the Military District of Eastern South Carolina, but the Federal stay was short and by 1866 military troops had left Horry County. This absence of Federal troops continued throughout Reconstruction and the Democrats maintained political control throughout the period. Further, there was no land distribution in Horry County, possibly because there was really no land worth distributing (Rogers 1972:47). Following the Civil War a number of changes began to affect the Horry area. Tobacco began to be a more important crop, the first county bank was organized in 1880, the railroad and telegraph arrived in 1887, and in 1889 a regular weekly county newspaper appeared (the Horry Weekly News, which published until 1877). Conwayboro was changed to Conway in 1883 and the only other “major” town continued to be Little River. The turpentine business boomed in the 1870s and by 1880 there were 21 operators in the county, producing $181,400 annually (Rogers 1972:50). Farming, however, continued to be important. In 1870 there were 1,300 farms averaging 50 acres in size. The major crops were still subsistence items such as corn, sweet potatoes, and rice. Few wage employees were found in Horry (Rogers 1972:58). The Socastee and Little River townships had the richest farms and the five largest farms also produced turpentine in 1870 (Rogers 1972:60). The Grange movement arrived in Horry County relatively late, never organized in many areas, and failed by the late 1870s. 
By 1910, the County population had increased to almost 27,000, but there was no town, including Conway, with a population of even 2,500. Conway continued, however, to have strong lumbering and mercantile interests. With the gradual decline of lumbering and the turpentine industry, farming was once again the dominant activity in the county. The period from 1880 to 1910 saw corn acreage increase 140%, cotton acreage increase 90%, and tobacco acreage increase from 19 to 5,347 acres. During the same time rice production fell from 747,689 to 1,210 pounds (Tillman et al. 1919:333). By 1919 the chief money crops were corn, cotton, and tobacco, although corn was largely used to supply the home and fatten stock. After 1895, tobacco began to replace cotton as a prime money crop and by 1910 was “grown more or less generally over a county by small farmers who live on their farms and superintend the work” (Tillman et al. 1919:335). The 1918 soil survey map shows one structure along the survey corridor (Figure 7). No artifacts were found in this area. Several modern houses have been built along this stretch of road and, in addition, the road has been improved with a ditch excavated for drainage. The yard areas have been altered for pasture, cultivation, erection of fences, construction, and an existing transmission line. In the early twentieth century, hogs were the principal source of livestock income. These animals were usually slaughtered in the fall for home use or sale on the local market. Cattle were mostly scrub stock and dairying was neglected. Farm equipment was largely inadequate in the early 1900s and most of the plowing was done with one ox or mule. On many small farms the adequacy of farm equipment did not appreciably improve into the 1940s, when the probate inventory for one small Horry farmer listed only one mule, a one-horse wagon, one disc, four plows, one lot hoes, one guano distributor, a tobacco sprayer, and a corn planter (Trinkley and Caballero 1983:8). Tillman et al. (1919:338) indicate that in the early 1900s plowing was seldom more than 2 to 3 inches deep because of the poor machinery. It is suggested that this lack of equipment was not entirely related to a lack of prosperity, but rather was largely the result of cheap labor. Tillman et al. report that, “negro men receive 75 cents to $1.25 a day . . . while negro women are paid 50 to 65 cents a day” (Tillman et al. 1919:340). Horry County, in 1910, had a relatively low rate of farm tenancy. The 1939 General Highway and Transportation Map of Horry County (Figure 8) fails to show any houses on the corridor. In fact, the road that the corridor follows, as shown on the 1918 map, fails to appear on this 1939 map. The area is shown to be in wetland. Tillman et al. (1919:340) indicate that 72.9% of the farms were operated by owners and 27% by tenants. The average size of such farms (each tenancy is classified as a farm) was 117.8 acres. This is contrasted with piedmont Spartanburg, where in 1920 32.1% of the farms were operated by their owners and 67.7% were operated by tenants. In Spartanburg, where cotton was still king, the average farm size was 49.4 acres (Latimer et al. 1924:419). This dichotomy documents the differences between tenancy in the Atlantic Coastal Plain, where there was a low “devotion” to cotton, and in the Black Belt and Upper Piedmont, where cotton was more important, tenancy rates higher, and farm size smaller (see Woofter et al. 1936). 
Archaeological Field Methods and Findings The initially proposed field techniques for the substation lot involved the placement of shovel tests at the four corners of the property. The transmission corridor incorporated shovel testing at 100 foot intervals along the center line of the corridor, which had a right-of-way of 75 feet. All soil would be screened through ¼-inch mesh, with each test numbered sequentially. Each test would measure about 1 foot square and would normally be taken to a depth of at least 1.0 foot or until subsoil was encountered. All cultural remains would be collected, except for mortar and brick, which would be quantitatively noted in the field and discarded. Notes would be maintained for profiles at any sites encountered. Should sites (defined by the presence of three or more artifacts from either surface survey or shovel tests within a 50 foot area) be identified, further tests would be used to obtain data on site boundaries, artifact quantity and diversity, site integrity, and temporal affiliation. These tests would be placed at 25 to 50 foot intervals in a simple cruciform pattern until two consecutive negative shovel tests were encountered. The information required for completion of South Carolina Institute of Archaeology and Anthropology site forms would be collected and photographs would be taken, if warranted in the opinion of the field investigators. A total of four shovel tests were excavated within the substation lot. A total of 32 shovel tests were excavated along the corridor. Analysis of collections would follow professionally accepted standards with a level of intensity suitable to the quantity and quality of the remains. Nevertheless, the archaeological survey of the substation lot and transmission corridor failed to identify any remains. This is most likely due to the lack of high land suitable for habitation and the distance from a permanent water source. In addition, the land has been altered by road improvements, creation of pasture and agricultural lands, and construction (Figure 10). Architectural Survey As previously discussed, we elected to use a 0.5 mile area of potential effect (APE). The architectural survey would record buildings, sites, structures, and objects that appeared to have been constructed before 1950. Typical of such projects, this survey recorded only those which have retained “some measure of its historic integrity” (Vivian n.d.:5) and which were visible from public roads. For each identified resource we would complete a Statewide Survey Site Form and take at least two representative photographs. Permanent control numbers would be assigned by the Survey Staff of the S.C. Department of Archives and History at the conclusion of the study. The Site Forms for the resources identified during this study would be submitted to the S.C. Department of Archives and History. Site Evaluation and Findings Archaeological sites would be evaluated for further work based on the eligibility criteria for the National Register of Historic Places. Chicora Foundation only provides an opinion of National Register eligibility and the final determination is made by the lead federal agency, in consultation with the State Historic Preservation Officer at the South Carolina Department of Archives and History. 
The criteria for eligibility to the National Register of Historic Places is described by 36CFR60.4, which states: the quality of significance in American history, architecture, archaeology, engineering, and culture is present in districts, sites, buildings, structures, and objects that possess integrity of location, design, setting, materials, workmanship, feeling, and association, and a. that are associated with events that have made a significant contribution to the broad patterns of our history; or b. that are associated with the lives of persons significant in our past; or c. that embody the distinctive characteristics of a type, period, or method of construction or that represent the work of a master, or that possess high artistic values, or that represent a significant and distinguishable entity whose components may lack individual distinction; or d. that have yielded, or may be likely to yield, information important in prehistory or history. National Register Bulletin 36 (Townsend et al. 1993) provides an evaluative process that contains five steps for forming a clearly defined explicit rationale for either the site’s eligibility or lack of eligibility. Briefly, these steps are: • identification of the site’s data sets or categories of archaeological information such as ceramics, lithics, subsistence remains, architectural remains, or sub-surface features; • identification of the historic context applicable to the site, providing a framework for the evaluative process; • identification of the important research questions the site might be able to address, given the data sets and the context; • evaluation of the site’s archaeological integrity to ensure that the data sets were sufficiently well preserved to address the research questions; and • identification of important research questions among all of those which might be asked and answered at the site. This approach, of course, has been developed for use documenting eligibility of sites being nominated to the National Register of Historic Places where the evaluative process must stand alone, with relatively little reference to other documentation and where typically only one site is being considered. As a result, some aspects of the evaluative process have been summarized, but we have tried to focus on an archaeological site’s ability to address significant research topics within the context of its available data sets. The two previously identified resources (060-0063 and 060-0064) were revisited and rephotographed. While the Archsite GIS failed to record any information on these structures, we were able to locate a 2000 compliance report for the school located to the east across SC 544 (Trinkley 2000). This report describes these two structures and, in addition, records an additional structure (060-0503) in the project APE. All three structures were recommended not eligible for the National Register. Structure 060-0063 was described as a “ca. 1940 massed hall-and-parlor side-gabled structure with a full-façade engaged shed porch” (Trinkley 2000:21). At the time, the structure was recommended not eligible because it had “been sold and [would] be moved off-site for use as a movie prop” (Trinkley 2000:21). The current survey was unable to locate the structure, so it is likely that the house has long been removed from the site. No additional research was done to see where the structure was removed. Structure 060-0064 was described as a “ca. 1955 structure with extensive modifications, likely ca. 1975” (Trinkley 2000: 22). 
This structure was recommended not eligible for the National Register of Historic Places “both because of its recent age and also because of the extensive modifications” (Trinkley 2000:22). This structure was revisited during the current survey and we agree with the not eligible recommendation (Figure 12). Since the 2000 survey, a completely new porch has been added to the house. No additional resources that may be potentially eligible for the National Register were identified during the survey. The 1988 county-wide architectural survey (Utterback 1988) failed to identify any resources in the project APE. The 2000 survey also identified another structure, 060-0503, within the APE. This structure is described as being a massed plan side-gabled structure. It is 1 ½ stories with a porch which originally extended across the front and left facades. Today the side porch has been enclosed, significantly altering its appearance. Other modifications include storm windows and doors, as well as a rear addition. While a structure is shown in this location on the 1918 soil survey map, we believe that the extant house is likely a replacement of an earlier one (which is probably shown on the 1939 highway map). This structure is recommended not eligible for inclusion on the National Register (Trinkley 2000:22). CONCLUSIONS This study involved the examination of a 0.6 mile corridor for a transmission line and a 0.98 acre lot for a substation in Horry County. This work, conducted for Mr. Tommy L. Jackson of Central Electric Power Cooperative, examined archaeological sites and cultural resources found in the proposed project area and is intended to assist this company in complying with their historic preservation responsibilities. As a result of this investigation, no archaeological sites were found in the survey area. This is likely the result of the lack of high, habitable ground and the distance from a permanent water source. In addition, construction activities including a new CVS pharmacy, road improvements including a ditch, and landscape alteration for pasture and agriculture have damaged the ground surface. A survey of public roads within 0.5 mile revealed no structures that retain sufficient integrity for the National Register of Historic Places. The two previously identified structures (060-0063 and 060-0064) from the Archsite GIS and the one structure (060-0503) from a previous compliance survey were revisited during the current project. Structure 060-0063 is no longer on the property. The remaining two structures are both recommended not eligible for the National Register of Historic Places. It is possible that archaeological remains may be encountered during construction activities. As always, contractors should be advised to report any discoveries of concentrations of artifacts (such as bottles, ceramics, or projectile points) or brick rubble to the project engineer, who should in turn report the material to the State Historic Preservation Office, or Chicora Foundation (the process of dealing with late discoveries is discussed in 36CFR800.13(b)(3)). No further land altering activities should take place in the vicinity of these discoveries until they have been examined by an archaeologist and, if necessary, have been processed according to 36CFR800.13(b)(3). SOURCES CITED Anderson, David G. Anderson, David G., Charles E. Cantley, and A. Lee Novick Berry, C.B. Blanton, Dennis B., Christopher T. Espenshade, and Paul E. 
Brockington, Jr. Brown, Paul J. 1975 *Coastal Morphology of South Carolina.* Unpublished M.S. Thesis, Department of Geology, University of South Carolina, Columbia. Cable, John Caldwell, Joseph R. Coe, Joffre L. Cooke, C. Wythe Derting, Keith M., Sharon L. Pekrul, and Charles J. Rinehart 1991 *A Comprehensive Bibliography of South Carolina Archaeology.* Research Manuscript 211. South Carolina Institute of Archaeology and Anthropology, University of South Carolina, Columbia. Dudley, Travis A. Ellerbe, Clarence M. Ferguson, Leland G. 1971 *South Appalachian Mississippian*. Ph.D. dissertation, University of North Carolina, Chapel Hill. University Microfilms, Ann Arbor, Michigan. Fogg-Amed, Erika 1974 Site notes, dig Summer of 1964 (and Fall, 1963), Summer of 1965. Field notes on file, S.C. Institute of Archaeology and Anthropology, University of South Carolina, Columbia. Goodyear, Albert C., John H. House, and Neal W. Ackerly Anthropological Studies 4, Occasional Papers of the Institute of Archaeology and Anthropology, University of South Carolina, Columbia. Hanson, Glen T., Jr. Küchler, A.W. 1964 *Potential Natural Vegetation of the Conterminous United States*. Martin, Debra K., Lesley Drucker, and Susan Jackson 1987 *An Archaeological Inventory Survey of S.C. Highway 544 Improvements, Horry County, South Carolina*. Mathew, William M., editor 1992 *Agriculture, Geology, and Society in Antebellum South Carolina: The Private Diary of Edmund Ruffin, 1843*. University of Georgia Press, Athens. Michie, James L. 1977 *The Late Pleistocene Human Occupation of South Carolina*. Unpublished Honor's Thesis, Department of Anthropology, University of South Carolina, Columbia. Mills, Robert 1972 [1826] *Statistics of South Carolina*. Oliver, Billy L. 1981 *The Piedmont Tradition: Refinement of the Savannah River Stemmed Point Type*. Unpublished Master's thesis, Department of Anthropology, University of North Carolina, Chapel Hill. Phelps, David A. Crow, pp. 1-52. North Carolina Division of Archives and History, Department of Cultural Resources, Raleigh. Rogers, James S., III Ryan, Thomas M. Sassaman, Kenneth E., Mark J. Brooks, Glen T. Hanson, and David G. Anderson 1990 *Native American Prehistory of the Middle Savannah River Valley.* Savannah River Archaeological Research Papers 1. South Carolina Institute of Archaeology and Anthropology, University of South Carolina, Columbia. Service, E.M. Smith, Lynwood 1933 *Physiography of South Carolina.* Unpublished M.S. Thesis, Department of Geology, University of South Carolina, Columbia. South, Stanley A. 1976 *An Archaeological Survey of Southeastern North Carolina.* South Carolina Institute of Archaeology and Anthropology Stoltman, James B. Taylor, Richard L., editor Tillman, B.W., W.E. McLendon, H.H. Krusehoff, A.C. Anderson, Cornelius Van Duyne, and W.J. Latimer Townsend, Jan, John H. Sprinkle, Jr., and John Knoerl Trinkley, Michael 1980a *Additional Investigations at 38LX5.* South Carolina Department of Highways and Public Transportation, Columbia. 1990 *An Archaeological Context for the South Carolina Woodland Period.* Chicora Foundation Research Trinkley, Michael and Olga M. Caballero Trinkley, Michael, Debi Hacker, and Natalie Adams U.S. Army Corps of Engineers 1973 Flood Plain Information – Waccamaw River, Kingston Lake Swamp, Crab Tree Swamp, City of Conway, South Carolina. Charleston District, Corps of Engineers, Charleston, South Carolina. Utterback, J. 
David Vivian, Daniel J. Waccamaw Regional Planning and Development Council Walthall, John A. Ward, Trawick 1978 The Archaeology of Whites Creek, Marlboro County, South Carolina. Research Laboratories of Anthropology, University of North Carolina, Chapel Hill. Williams, Stephen B., editor Woofter, T.J., Jr.
REGULARITY AND GRÖBNER BASES OF THE REES ALGEBRA OF EDGE IDEALS OF BIPARTITE GRAPHS YAIRON CID-RUIZ ABSTRACT. Let \( G \) be a bipartite graph and \( I = I(G) \) be its edge ideal. The aim of this note is to investigate different aspects of the Rees algebra \( \mathcal{R}(I) \) of \( I \). We compute its regularity and the universal Gröbner basis of its defining equations; interestingly, both of them are described in terms of the combinatorics of \( G \). We apply these ideas to study the regularity of the powers of \( I \). For any \( s \geq \text{match}(G) + |E(G)| + 1 \) we prove that \( \text{reg}(I^{s+1}) = \text{reg}(I^s) + 2 \). 1. INTRODUCTION Let \( G = (V(G), E(G)) \) be a bipartite graph on the vertex set \( V(G) = X \cup Y \) with bipartition \( X = \{x_1, \ldots, x_n\} \) and \( Y = \{y_1, \ldots, y_m\} \). Let \( \mathbb{K} \) be a field and let \( R \) be the polynomial ring \( R = \mathbb{K}[x_1, \ldots, x_n, y_1, \ldots, y_m] \). The edge ideal \( I = I(G) \), associated to \( G \), is the ideal of \( R \) generated by the set of monomials \( x_iy_j \) such that \( x_i \) is adjacent to \( y_j \). One can find a vast literature on the Rees algebra of edge ideals of bipartite graphs (see [28], [22], [11], [26], [25], [27], [10]); nevertheless, in this note we study several properties that might have been overlooked. From a computational point of view we first focus on the universal Gröbner basis of its defining equations, and from a more algebraic standpoint we focus on its total and partial regularities as a bigraded \( S \)-module. Applying these ideas, we give an estimation of when \( \text{reg}(I^s) \) starts to be a linear function and we find upper bounds for the regularity of the powers of \( I \). Let \( \mathcal{R}(I) = \bigoplus_{i=0}^{\infty} I^i t^i \subset R[t] \) be the Rees algebra of the edge ideal \( I \). Let \( f_1, \ldots, f_q \) be the square free monomials of degree two generating \( I \). We can see \( \mathcal{R}(I) \) as a quotient of the polynomial ring \( S = R[T_1, \ldots, T_q] \) via the map \[ (1) \qquad S = \mathbb{K}[x_1, \ldots, x_n, y_1, \ldots, y_m, T_1, \ldots, T_q] \xrightarrow{\psi} \mathcal{R}(I) \subset R[t], \qquad \psi(x_i) = x_i, \quad \psi(y_i) = y_i, \quad \psi(T_i) = f_it. \] Then the presentation of \( \mathcal{R}(I) \) is given by \( S/\mathcal{K} \) where \( \mathcal{K} = \text{Ker}(\psi) \). We give a bigraded structure to \( S = \mathbb{K}[x_1, \ldots, x_n, y_1, \ldots, y_m] \otimes_{\mathbb{K}} \mathbb{K}[T_1, \ldots, T_q] \), where \( \text{bideg}(x_i) = \text{bideg}(y_i) = (1, 0) \) and \( \text{bideg}(T_i) = (0, 1) \). The map \( \psi \) from (1) becomes bihomogeneous when we declare \( \text{bideg}(t) = (-2, 1) \); then \( S/\mathcal{K} \) and \( \mathcal{K} \) have natural bigraded structures as \( S \)-modules. The universal Gröbner basis of the ideal \( \mathcal{K} \) is defined as the union of all the reduced Gröbner bases \( \mathcal{G}_{\prec} \) of the ideal \( \mathcal{K} \) as \( \prec \) runs over all possible monomial orders (see [23]). In our first main result we compute the universal Gröbner basis of the defining equations \( \mathcal{K} \) of the Rees algebra \( \mathcal{R}(I) \). 
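As a quick check of these conventions, the map \( \psi \) is indeed bihomogeneous: if \( f_i = x_jy_k \) is an edge of \( G \), then
\[ \text{bideg}(\psi(T_i)) = \text{bideg}(f_it) = (2, 0) + (-2, 1) = (0, 1) = \text{bideg}(T_i), \]
while \( \psi \) clearly preserves the bidegree \( (1,0) \) of each variable \( x_i \) and \( y_i \).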
\( \text{2010 Mathematics Subject Classification.} \quad 13D02, 13A30, 05E40.\) \( \text{Key words and phrases.} \quad \text{bipartite graphs, Rees algebra, Gröbner bases, regularity, canonical module, edge ideals, toric ideals.} \) The author was funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 675789. The author acknowledges financial support from the Spanish Ministry of Economy and Competitiveness, through the “María de Maeztu” Programme for Units of Excellence in R&D (MDM-2014-0445). Theorem 1.1 (Theorem 2.5). Let $G$ be a bipartite graph and $\mathcal{K}$ be the defining equations of the Rees algebra $\mathcal{R}(I(G))$. The universal Gröbner basis $\mathcal{U}$ of $\mathcal{K}$ is given by $$\mathcal{U} = \{ T_w \mid w \text{ is an even cycle} \} \cup \{ v_0 T_{w^+} - v_a T_{w^-} \mid w = (v_0, \ldots, v_a) \text{ is an even path} \} \cup \{ u_0 u_a T_{(w_1,w_2)^+} - v_0 v_b T_{(w_1,w_2)^-} \mid w_1 = (u_0, \ldots, u_a) \text{ and } w_2 = (v_0, \ldots, v_b) \text{ are disjoint odd paths} \}.$$ From [25, Theorem 3.1, Proposition 3.1] we have a precise description of $\mathcal{K}$ given by the syzygies of $I$ and the set of even closed walks in the graph $G$. The algebra $\mathcal{R}(I)$, as a bigraded $S$-module, has a minimal bigraded free resolution $$(2) \quad 0 \rightarrow F_p \rightarrow \cdots \rightarrow F_1 \rightarrow F_0 \rightarrow \mathcal{R}(I) \rightarrow 0,$$ where $F_i = \oplus_j S(-a_{ij}, -b_{ij})$. In the same way as in [19], we can define the $xy$-regularity of $\mathcal{R}(I)$ by the integer $$\text{reg}_{xy}(\mathcal{R}(I)) = \max_{i,j} \{ a_{ij} - i \},$$ or equivalently by $$\text{reg}_{xy}(\mathcal{R}(I)) = \max \{ a \in \mathbb{Z} \mid \beta^S_{i,(a+i,b)}(\mathcal{R}(I)) \neq 0 \text{ for some } i, b \in \mathbb{Z} \},$$ where $\beta^S_{i,(a,b)}(\mathcal{R}(I)) = \dim_{\mathbb{K}}\big(\text{Tor}^S_i(\mathcal{R}(I), \mathbb{K})_{(a,b)}\big).$ Similarly, we can define the $T$-regularity $$\text{reg}_T(\mathcal{R}(I)) = \max_{i,j} \{ b_{ij} - i \}$$ and the total regularity $$\text{reg}(\mathcal{R}(I)) = \max_{i,j} \{ a_{ij} + b_{ij} - i \}.$$ Our second main result computes the total regularity and gives upper bounds for both partial regularities. Theorem 1.2 (Theorem 4.2). Let $G$ be a bipartite graph. Then we have: (i) $\text{reg}(\mathcal{R}(I(G))) = \text{match}(G)$, (ii) $\text{reg}_{xy}(\mathcal{R}(I(G))) \leq \text{match}(G) - 1$, (iii) $\text{reg}_T(\mathcal{R}(I(G))) \leq \text{match}(G)$, where match$(G)$ denotes the matching number of $G$. Finally, we apply these results in order to study the regularity of the powers of the edge ideal $I = I(G)$. The asymptotic linearity of $\text{reg}(I^s)$ for $s \gg 0$ is a famous result, valid for a general ideal in a polynomial ring (see [8] and [18]). However, the exact form of this linear function and the exact point where $\text{reg}(I^s)$ starts to be linear remain wide open problems, even in the case of monomial ideals. In recent years, a number of researchers have focused on computing the regularity of powers of edge ideals and on relating these values to combinatorial invariants of the graph (see e.g. [4], [1], [2], [3], [5], [17]). Most of the upper bounds given in these papers use the concept of even-connection introduced in [3]. Actually, using this idea as a central tool, the upper bound $$\text{reg}(I^s) \leq 2s + \text{co-chord}(G) - 1$$ was proved in [17] for any bipartite graph $G$, where co-chord$(G)$ represents the co-chordal number of $G$ (see [17, Definition 3.1]). 
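To see how the three regularities above are read off from the shifts in (2), consider a purely illustrative computation (the shifts below are made up and do not come from any particular graph): if a minimal bigraded free resolution had $F_0 = S$ and $F_1 = S(-2,-1) \oplus S(-1,-2)$, then
\[ \text{reg}_{xy} = \max\{0,\, 2-1,\, 1-1\} = 1, \qquad \text{reg}_T = \max\{0,\, 1-1,\, 2-1\} = 1, \qquad \text{reg} = \max\{0,\, 2+1-1,\, 1+2-1\} = 2. \]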
As a consequence of our study of the Rees algebra $\mathcal{R}(I)$, we make an estimation of when $\text{reg}(I^s)$ starts to be a linear function, and we obtain weaker upper bounds for the regularity of the powers of $I$ (see Remark 3.9, Corollary 4.3, Corollary 3.8). Perhaps this could give new tools and fresh ideas to pursue the stronger upper bound \begin{equation} \text{reg}(I^s) \leq 2s + \text{reg}(I) - 2, \end{equation} which has been conjectured by Alilooee, Banerjee, Beyarslan and Hà ([4, Conjecture 7.11]). Using the upper bound for the partial $T$-regularity of $\mathcal{R}(I)$, we can get the following estimation. **Corollary 1.3 (Corollary 4.4).** Let $G$ be a bipartite graph. Then, for all $s \geq \text{match}(G) + |E(G)| + 1$ we have \[ \text{reg}(I(G)^{s+1}) = \text{reg}(I(G)^s) + 2. \] The basic outline of this note is as follows. In Section 2, we compute the universal Gröbner basis of $\mathcal{K}$ (Theorem 1.1). In Section 3, we consider a specific monomial order that allows us to get upper bounds for the $xy$-regularity of $\mathcal{R}(I)$. In Section 4 we exploit the canonical module of $\mathcal{R}(I)$ in order to prove Theorem 1.2 and Corollary 1.3. Finally, in Section 5 we give some general ideas about the conjectured upper bound (3). ## 2. The universal Gröbner basis of $\mathcal{K}$ In this section we will give an explicit description of the universal Gröbner basis $\mathcal{U}$ of $\mathcal{K}$. Our approach is the following: first we compute the set of circuits of the incidence matrix of the cone graph, and then we translate this set of circuits into a description of $\mathcal{U}$. The following notation will be assumed in most of this note. **Notation 2.1.** Let $G$ be a bipartite graph with bipartition $X = \{x_1, \ldots, x_n\}$ and $Y = \{y_1, \ldots, y_m\}$, and $R$ be the polynomial ring $R = \mathbb{K}[x_1, \ldots, x_n, y_1, \ldots, y_m]$. Let $I$ be the edge ideal $I(G) = (f_1, \ldots, f_q)$ of $G$. We consider the Rees algebra $\mathcal{R}(I)$ as a quotient of $S = R[T_1, \ldots, T_q]$ by using (1). Let $\mathcal{K}$ be the defining equations of the Rees algebra $\mathcal{R}(I)$. Let $A = (a_{i,j}) \in \mathbb{R}^{(n+m) \times q}$ be the incidence matrix of the graph $G$. Then we construct the matrix $M$ of the following form \begin{equation} M = \begin{pmatrix} a_{1,1} & \cdots & a_{1,q} & & & \\ \vdots & & \vdots & e_1 & \cdots & e_{n+m} \\ a_{n+m,1} & \cdots & a_{n+m,q} & & & \\ 1 & \cdots & 1 & & & \end{pmatrix}, \end{equation} where $e_1, \ldots, e_{n+m}$ are the first $n+m$ unit vectors in $\mathbb{R}^{n+m+1}$ (see [11, Section 3] for more details). This matrix corresponds to the presentation of $\mathcal{R}(I)$ given in (1). For any vector $\beta \in \mathbb{Z}^{n+m+q}$ with nonnegative coordinates we shall use the notation \[ \text{xy}T^\beta = x_1^{\beta_1} \cdots x_n^{\beta_n}\, y_1^{\beta_{n+1}} \cdots y_m^{\beta_{n+m}}\, T_1^{\beta_{n+m+1}} \cdots T_q^{\beta_{n+m+q}}. \] A given vector $\alpha \in \text{Ker}(M) \cap \mathbb{Z}^{n+m+q}$ can be written as $\alpha = \alpha^+ - \alpha^-$ where $\alpha^+$ and $\alpha^-$ are nonnegative and have disjoint support. **Definition 2.2 ([23]).** A vector $\alpha \in \text{Ker}(M) \cap \mathbb{Z}^{n+m+q}$ is called a circuit if it has minimal support $\text{supp}(\alpha)$ with respect to inclusion and its coordinates are relatively prime. Notation 2.3. 
Given a walk $w = \{v_0, \ldots, v_a\}$, each edge $\{v_{j-1}, v_j\}$ corresponds to a variable $T_{i_j}$, and we set $T_{w^+} = \prod_{j \text{ even}} T_{i_j}$ and $T_{w^-} = \prod_{j \text{ odd}} T_{i_j}$ (in case $a = 1$ we make $T_{w^+} = 1$). We adopt the following notations: (i) Let $w = \{v_0, \ldots, v_a\}$ be an even cycle in $G$. Then by $T_w$ we will denote the binomial $T_{w^+} - T_{w^-} \in \mathcal{K}$. (ii) Let $w = \{v_0, \ldots, v_a\}$ be an even path in $G$; since $G$ is bipartite, both endpoints of $w$ belong to the same side of the bipartition, i.e. either $v_0 = x_i$, $v_a = x_j$ or $v_0 = y_i$, $v_a = y_j$. Then the path $w$ determines the binomial $v_0 T_{w^+} - v_a T_{w^-} \in \mathcal{K}$. (iii) Let $w_1 = \{u_0, \ldots, u_a\}$, $w_2 = \{v_0, \ldots, v_b\}$ be two disjoint odd paths; then the endpoints of $w_1$ and $w_2$ belong to different sides of the bipartition. Let $T_{(w_1, w_2)^+} = T_{w_1^+} T_{w_2^-}$ and $T_{(w_1, w_2)^-} = T_{w_1^-} T_{w_2^+}$, then $w_1$ and $w_2$ determine the binomial $u_0 u_a T_{(w_1, w_2)^+} - v_0 v_b T_{(w_1, w_2)^-} \in \mathcal{K}$. Example 2.4. In the bipartite graph shown below (figure omitted; it has vertices $x_1, x_2, x_3, y_1, y_2, y_3$ and edges $T_1 = \{x_1, y_1\}$, $T_2 = \{x_2, y_2\}$, $T_3 = \{x_3, y_2\}$, $T_4 = \{x_3, y_3\}$) we have that the odd paths $w_1 = (x_1, y_1)$ and $w_2 = (x_2, y_2, x_3, y_3)$ determine the binomial $x_1 y_1 T_2 T_4 - x_2 y_3 T_1 T_3$. Let $\mathcal{U}$ be the universal Gröbner basis of $\mathcal{K}$. In general we have that the set of circuits is contained in $\mathcal{U}$ ([23, Proposition 4.11]). But from the fact that $M$ is totally unimodular ([11, Theorem 3.1]), we can use [23, Proposition 8.11] and obtain the equality $$\mathcal{U} = \{\text{xy}T^{\alpha^+} - \text{xy}T^{\alpha^-} \mid \alpha \text{ is a circuit of } M\}.$$ Therefore we shall focus on determining the circuits of $M$, and for this we will need to introduce the concept of the cone graph $C(G)$. The vertex set of the graph $C(G)$ is obtained by adding a new vertex $z$ to $G$, and its edge set consists of the edges in $E(G)$ together with the edges $\{x_1, z\}, \ldots, \{x_n, z\}, \{y_1, z\}, \ldots, \{y_m, z\}$. Theorem 2.5. Let $G$ be a bipartite graph and $I = I(G)$ be its edge ideal. The universal Gröbner basis $\mathcal{U}$ of $\mathcal{K}$ is given by $$\mathcal{U} = \{T_w \mid w \text{ is an even cycle}\} \cup \{v_0 T_{w^+} - v_a T_{w^-} \mid w = \{v_0, \ldots, v_a\} \text{ is an even path}\} \cup \{u_0 u_a T_{(w_1, w_2)^+} - v_0 v_b T_{(w_1, w_2)^-} \mid w_1 = \{u_0, \ldots, u_a\} \text{ and } w_2 = \{v_0, \ldots, v_b\} \text{ are disjoint odd paths}\}.$$ Proof. Let $\mathbb{K}[C(G)]$ be the monomial subring of the graph $C(G)$, which is generated by the monomials corresponding to the edges of $C(G)$: $$\mathbb{K}[C(G)] = \mathbb{K}[\{x_iy_j \mid \{x_i, y_j\} \in E(G)\} \cup \{x_iz \mid i = 1, \ldots, n\} \cup \{y_iz \mid i = 1, \ldots, m\}].$$ As we did for the Rees algebra $\mathcal{R}(I)$, we can define a similar surjective homomorphism $$ \pi : S \rightarrow \mathbb{K}[C(G)] \subset R[z], $$ $$ \pi(x_i) = x_i z, \quad \pi(y_i) = y_i z, \quad \pi(T_i) = f_i. $$ We have a natural isomorphism between $\mathcal{R}(I)$ and $\mathbb{K}[C(G)]$ (see [24, Exercise 7.3.3]). 
For instance, we can define the homomorphism $\varphi : R[t] \rightarrow R[z, z^{-1}]$ given by $\varphi(x_i) = x_i z$, $\varphi(y_i) = y_i z$ and $\varphi(t) = 1/z^2$; then the restriction $\varphi|_{\mathcal{R}(I)}$ of $\varphi$ to $\mathcal{R}(I)$ will give us the required isomorphism because both algebras are integral domains of the same dimension (see Proposition 4.1 (i)). Hence we will identify the ideal $\mathcal{K}$ with the kernel of $\pi$. Let $N$ be the incidence matrix of the cone graph $C(G)$. From [25, Proposition 4.2], we have that a vector $\alpha \in \text{Ker}(N) \cap \mathbb{Z}^{n+m+q}$ is a circuit of $N$ if and only if the monomial walk defined by $\alpha$ corresponds to an even cycle or to two edge disjoint odd cycles joined by a path. Since the graph $G$ is bipartite, an odd cycle in $C(G)$ will necessarily contain the vertex $z$. Therefore the monomial walks defined by the circuits of $N$ are of the following types: (i) An even cycle in $C(G)$ that does not contain the vertex $z$. (ii) An even cycle in $C(G)$ that contains the vertex $z$. (iii) Two odd cycles in $C(G)$ whose intersection is exactly the vertex $z$. The corresponding figure (omitted here) shows how the cases (ii) and (iii) may look: (a) the two possible cycles of (ii); (b) the graph of (iii). Since the circuits of the matrices $M$ and $N$ coincide, we now translate these monomial walks in $C(G)$ into binomials of $\mathcal{K}$. An even cycle in $C(G)$ not containing $z$ is also an even cycle in $G$, and it determines a binomial in $\mathcal{K}$ using Notation 2.3. In the cases (ii) and (iii), we delete the vertex $z$ in order to get a subgraph $H$ of $G$. Thus we have that $H$ is either an even path or two disjoint odd paths, and we translate these into binomials in $\mathcal{K}$ using Notation 2.3. **Remark 2.6.** Alternatively in Theorem 2.5, we can see that the matrices $M$ and $N$ have the same kernel because they are row equivalent. We multiply the last row of $M$ by $-2$ and then we successively add the rows $1, \ldots, n+m$ to the last row; with these elementary row operations we transform $M$ into $N$. **Example 2.7.** Using Theorem 2.5, the universal Gröbner basis of the defining equations of the Rees algebra of the graph in Example 2.4 is given by $$ \{x_2y_2T_1 - x_1y_1T_2, \ x_2y_3T_1T_3 - x_1y_1T_2T_4, \ x_3T_2 - x_2T_3, \ x_3y_2T_1 - x_1y_1T_3, \\ \quad \quad \quad \quad \ x_3y_3T_1 - x_1y_1T_4, \ y_3T_3 - y_2T_4, \ x_3y_3T_2 - x_2y_2T_4\}. $$ It can also be checked in [12] using the command `universalGroebnerBasis`. **Corollary 2.8.** Let $G$ be a bipartite graph and $I = I(G)$ be its edge ideal. The universal Gröbner basis $\mathcal{U}$ of $\mathcal{K}$ consists of square free binomials that are at most linear in the variables $x_i$ and at most linear in the variables $y_i$. 3. Upper bound for the $xy$-regularity In this section we get an upper bound for the $xy$-regularity of $\mathcal{R}(I)$; the important point is that we will choose a special monomial order. Using the $xy$-regularity we can find an upper bound for the regularity of all the powers of the edge ideal $I$. Since most of the upper bounds for the regularity of the powers of edge ideals are based on the technique of even-connection [3], a strong motivation for this section is to give new tools for the following challenging conjecture: **Conjecture 3.1** (Alilooee, Banerjee, Beyarslan and Hà). Let $G$ be an arbitrary graph. Then $$\text{reg}(I(G)^s) \leq 2s + \text{reg}(I(G)) - 2$$ for all $s \geq 1$. 
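Returning briefly to the combinatorial description of $\mathcal{U}$ in Theorem 2.5: on very small graphs it can be enumerated by brute force. The following minimal sketch (plain Python, no external libraries) assumes the edge set read off above from Examples 2.4 and 2.7, namely $T_1 = \{x_1,y_1\}$, $T_2 = \{x_2,y_2\}$, $T_3 = \{x_3,y_2\}$, $T_4 = \{x_3,y_3\}$; all names in it are ad hoc. Up to signs it reproduces the seven binomials of Example 2.7.

```python
from itertools import combinations

# Edge labels and endpoints assumed from Examples 2.4 and 2.7 (the figure is not reproduced here).
edges = {"T1": ("x1", "y1"), "T2": ("x2", "y2"), "T3": ("x3", "y2"), "T4": ("x3", "y3")}

adj = {}
for T, (a, b) in edges.items():
    adj.setdefault(a, []).append((b, T))
    adj.setdefault(b, []).append((a, T))

def simple_paths():
    """All simple paths with at least one edge, as (vertex list, edge-label list)."""
    found = []
    def extend(verts, labs):
        if labs:
            found.append((list(verts), list(labs)))
        for nxt, T in adj[verts[-1]]:
            if nxt not in verts:
                extend(verts + [nxt], labs + [T])
    for v in adj:
        extend([v], [])
    return found

def split(labs):
    """T_{w^+} (edges in even positions) and T_{w^-} (odd positions), positions 1-indexed."""
    plus  = [T for j, T in enumerate(labs, 1) if j % 2 == 0]
    minus = [T for j, T in enumerate(labs, 1) if j % 2 == 1]
    return plus, minus

def binomial(m1, m2):
    """Sign-insensitive normal form of the binomial m1 - m2 (monomials as lists of variables)."""
    return tuple(sorted((tuple(sorted(m1)), tuple(sorted(m2)))))

U = set()
paths = simple_paths()
# Type (ii) of Notation 2.3: even paths w = (v_0, ..., v_a) give v_0*T_{w^+} - v_a*T_{w^-}.
for verts, labs in paths:
    if len(labs) % 2 == 0:
        plus, minus = split(labs)
        U.add(binomial([verts[0]] + plus, [verts[-1]] + minus))
# Type (iii): disjoint odd paths w_1, w_2 give u_0*u_a*T_{w1^+}*T_{w2^-} - v_0*v_b*T_{w1^-}*T_{w2^+}.
odd = [(v, l) for v, l in paths if len(l) % 2 == 1]
for (v1, l1), (v2, l2) in combinations(odd, 2):
    if set(v1).isdisjoint(v2):
        p1, m1 = split(l1)
        p2, m2 = split(l2)
        U.add(binomial([v1[0], v1[-1]] + p1 + m2, [v2[0], v2[-1]] + m1 + p2))
# Type (i): even cycles would contribute T_{w^+} - T_{w^-}; this particular graph has no cycles.

for lhs, rhs in sorted(U):
    print(" * ".join(lhs), " - ", " * ".join(rhs))
```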
The following theorem will be crucial in our treatment.

**Theorem 3.2.** ([19, Theorem 5.3], [14, Proposition 10.1.6]) The regularity of each power $I^s$ is bounded by
$$\text{reg}(I^s) \leq 2s + \text{reg}_{xy}(\mathcal{R}(I)).$$

By fixing a particular monomial order $<$ in $S$, we can see the initial ideal $\text{in}_<(\mathcal{K})$ as the special fibre of a flat family whose general fibre is $\mathcal{K}$ (see e.g. [14, Chapter 3] or [9, Chapter 15]), and we can get a bigraded version of [14, Theorem 3.3.4, (6)].

**Theorem 3.3.** Let $<$ be a monomial order in $S$. Then we have
$$\text{reg}_{xy}(\mathcal{R}(I)) \leq \text{reg}_{xy}(S/\text{in}_<(\mathcal{K})).$$

Let $M$ be an arbitrary maximal matching in $G$ with $|M| = r$. We assume that the vertices of $G$ are numbered in such a way that $M$ consists of the edges
$$M = \{\{x_1, y_1\}, \{x_2, y_2\}, \ldots, \{x_r, y_r\}\},$$
and also we assume that $n = |X| \leq |Y| = m$. In $R = \mathbb{K}[x_1, \ldots, x_n, y_1, \ldots, y_m]$ we consider the lexicographic monomial order induced by
$$x_n > \ldots > x_2 > x_1 > y_m > \ldots > y_2 > y_1.$$
We choose an arbitrary monomial order $<_\#$ on $\mathbb{K}[T_1, \ldots, T_q]$, and we define the following monomial order $<_M$ on $S = \mathbb{K}[x_1, \ldots, x_n, y_1, \ldots, y_m, T_1, \ldots, T_q]$: for two monomials $\mathbf{x}^{a_1}\mathbf{y}^{b_1}\mathbf{T}^{\gamma_1}$ and $\mathbf{x}^{a_2}\mathbf{y}^{b_2}\mathbf{T}^{\gamma_2}$ we have
$$\mathbf{x}^{a_1}\mathbf{y}^{b_1}\mathbf{T}^{\gamma_1} <_M \mathbf{x}^{a_2}\mathbf{y}^{b_2}\mathbf{T}^{\gamma_2}$$
if either (i) $\mathbf{x}^{a_1}\mathbf{y}^{b_1} < \mathbf{x}^{a_2}\mathbf{y}^{b_2}$ in the lexicographic order above, or (ii) $\mathbf{x}^{a_1}\mathbf{y}^{b_1} = \mathbf{x}^{a_2}\mathbf{y}^{b_2}$ and $\mathbf{T}^{\gamma_1} <_\# \mathbf{T}^{\gamma_2}$.

Let $\mathcal{G}_{<_M}(\mathcal{K})$ be the reduced Gröbner basis of $\mathcal{K}$ with respect to $<_M$. The possible types of binomials inside $\mathcal{G}_{<_M}(\mathcal{K})$ were described in Theorem 2.5; we now focus on obtaining more refined information about the binomials of type (iii) in Notation 2.3.

**Notation 3.4.** In this section, for notational purposes (and without loss of generality) we shall assume that $w_1$ and $w_2$ are disjoint odd paths of the form
$$w_1 = (x_e, u_1, \ldots, u_{2a}, y_f), \qquad w_2 = (x_g, v_1, \ldots, v_{2b}, y_h).$$
Then we analyze the binomial $x_e y_f T_{(w_1,w_2)^+} - x_g y_h T_{(w_1,w_2)^-}$.

Lemma 3.5. Let $x_e y_f T_{(w_1,w_2)^+} - x_g y_h T_{(w_1,w_2)^-} \in \mathcal{G}_{<_M}(\mathcal{K})$. Then we have
(i) at least one of the vertices $x_e, y_f$ is in the matching $M$, i.e. $e \leq r$ or $f \leq r$;
(ii) at least one of the vertices $x_g, y_h$ is in the matching $M$, i.e. $g \leq r$ or $h \leq r$.

Proof. (i) First, assume that $a = 0$, i.e. $w_1$ has length one. Since $M$ is a maximal matching, we necessarily get that $e \leq r$ or $f \leq r$. Now let $a > 0$, and by contradiction assume that $e > r$ and $f > r$. From the maximality of $M$, we get that $u_1 = y_j$ with $j \leq r$. We consider the even path
$$w_3 = (y_j, u_2, \ldots, u_{2a}, y_f);$$
then using Notation 2.3 we get the binomial
$$F = y_j T_{w_3^+} - y_f T_{w_3^-} \in \mathcal{K}.$$
We have $\mathrm{in}_{<_M}(F) = y_f T_{w_3^-}$ because $f > j$. Since $T_{w_3^-} = T_{w_1^+}$, we obtain that $\mathrm{in}_{<_M}(F)$ divides $x_e y_f T_{(w_1,w_2)^+}$, and this contradicts the fact that $\mathcal{G}_{<_M}(\mathcal{K})$ is reduced.
(ii) Follows identically. □

In the rest of this note we assume the following.

Notation 3.6. $b(G)$ denotes the minimum cardinality of the maximal matchings of $G$ and $\text{match}(G)$ denotes the maximum cardinality of the matchings of $G$.

Theorem 3.7. Let $G$ be a bipartite graph and $I = I(G)$ be its edge ideal.
The $xy$-regularity of $\mathcal{R}(I)$ is bounded by
$$\text{reg}_{xy}(\mathcal{R}(I)) \leq \min \{ |X| - 1, \; |Y| - 1, \; 2b(G) - 1 \}.$$

Proof. From Theorem 3.3, it is enough to prove that
$$\text{reg}_{xy}(S/\mathrm{in}_{<_M}(\mathcal{K})) \leq \min \{ |X| - 1, \; |Y| - 1, \; 2r - 1 \}.$$
Let $\{m_1, \ldots, m_c\}$ be the monomials obtained as the initial terms of the elements of $\mathcal{G}_{<_M}(\mathcal{K})$. We consider the Taylor resolution (see e.g. [14, Section 7.1])
$$0 \longrightarrow T_c \longrightarrow \cdots \longrightarrow T_1 \longrightarrow T_0 \longrightarrow S/\mathrm{in}_{<_M}(\mathcal{K}) \longrightarrow 0,$$
where each $T_i$ as a bigraded $S$-module has the structure
$$T_i = \bigoplus_{1 \leq j_1 < \cdots < j_i \leq c} S\bigl(-\deg_{xy}(\mathrm{lcm}(m_{j_1}, \ldots, m_{j_i})),\, -\deg_T(\mathrm{lcm}(m_{j_1}, \ldots, m_{j_i}))\bigr).$$
From it, we get the upper bound
$$\text{reg}_{xy}(S/\mathrm{in}_{<_M}(\mathcal{K})) \leq \max \{ \deg_{xy}(\mathrm{lcm}(m_{j_1}, \ldots, m_{j_i})) - i \mid \{j_1, \ldots, j_i\} \subset \{1, \ldots, c\} \}.$$
When $\deg_{xy}(m_{j_i}) \leq 1$, we have
$$\deg_{xy}(\mathrm{lcm}(m_{j_1}, \ldots, m_{j_i})) - i \leq \deg_{xy}(\mathrm{lcm}(m_{j_1}, \ldots, m_{j_{i-1}})) - (i - 1). \tag{5}$$
So, according to Theorem 2.5, we only need to consider subsets $\{j_1, \ldots, j_i\}$ such that for each $1 \leq k \leq i$ we have $m_{j_k} = \mathrm{in}_{<_M}(F_k)$, where $F_k$ is a binomial as in Notation 3.4. We use the notation $\mathrm{in}_{<_M}(F_k) = x_{e_k}y_{f_k}B_k$, where $B_k$ is a monomial in the $T_i$'s. Also, we can assume that $x_{e_1}y_{f_1}, x_{e_2}y_{f_2}, \ldots, x_{e_i}y_{f_i}$ are pairwise relatively prime, because otherwise we can make a reduction like in (5).

Thus, in order to finish the proof, we only need to show that under the two previous conditions we necessarily have $i \leq \min\{|X| - 1, |Y| - 1, 2r - 1\}$. Since the two paths that define each $F_k$ are disjoint, by the monomial order chosen we have $e_k > 1$ for each $k$, and by a "pigeonhole" argument it follows that $i \leq |X| - 1 \leq |Y| - 1$. Also, from Lemma 3.5 there are at most $2r - 1$ available positions to satisfy the condition of being pairwise relatively prime. Thus we have $i \leq 2r - 1$, and the result of the theorem follows because $M$ is an arbitrary maximal matching. □

Corollary 3.8. Let $G$ be a bipartite graph and $I = I(G)$ be its edge ideal. For all $s \geq 1$ we have
\[ \text{reg}(I^s) \leq 2s + \min \{ |X| - 1, |Y| - 1, 2b(G) - 1 \}. \]

Proof. It follows from Theorem 3.7 and Theorem 3.2. \qed

Remark 3.9. From the fact that co-chord$(G) \leq \text{match}(G) \leq \min\{|X|, |Y|\}$ (see [17]) and match$(G) \leq 2b(G)$ (see [15, Proposition 2.1]), we have the following relations
\[ \text{co-chord}(G) - 1 \leq \text{match}(G) - 1 \leq \min \{ |X| - 1, |Y| - 1, 2b(G) - 1 \}. \]
Although the last upper bound is the weaker one, it is interesting that an approach based on Gröbner bases can give a sharp answer in several cases.

In the last part of this section we deal with the case of a complete bipartite graph. The Rees algebra of these graphs was studied in [26].

Notation 3.10. By $G$ we will denote a complete bipartite graph with bipartition $X = \{x_1, \ldots, x_n\}$ and $Y = \{y_1, \ldots, y_m\}$. Let $I = (x_iy_j \mid 1 \leq i \leq n,\, 1 \leq j \leq m)$ be the edge ideal of $G$ and let $T_{ij}$ be the variable that corresponds to the edge $x_iy_j$.
Thus we have a canonical map
\begin{equation}
S = \mathbb{K}[x_i\text{'s}, y_j\text{'s}, T_{ij}\text{'s}] \xrightarrow{\;\psi\;} \mathcal{R}(I) \subset R[t],
\end{equation}
\[ \psi(x_i) = x_i, \quad \psi(y_j) = y_j, \quad \psi(T_{ij}) = x_iy_jt. \]
Let $\mathcal{K}$ be the kernel of this map. For simplicity of notation we keep the same monomial order $<_M$. Exploiting our characterization of the universal Gröbner basis of $\mathcal{K}$, we shall prove that all the powers of the edge ideal of $G$ have a linear free resolution.

Lemma 3.11. Let $G$ be a complete bipartite graph. The reduced Gröbner basis $\mathcal{G}_{<_M}(\mathcal{K})$ consists of binomials with linear $xy$-degree.

Proof. From Theorem 2.5 we only need to show that no binomial determined by two disjoint odd paths is contained in $\mathcal{G}_{<_M}(\mathcal{K})$. Let $x_e y_f T_{(w_1,w_2)^+} - x_g y_h T_{(w_1,w_2)^-}$ be a binomial as in Notation 3.4. By contradiction assume that $x_e y_f T_{(w_1,w_2)^+} - x_g y_h T_{(w_1,w_2)^-} \in \mathcal{G}_{<_M}(\mathcal{K})$. Without loss of generality we assume that $e > g$. Since $G$ is complete bipartite, we choose the edge $\{x_e, y_h\}$ and append it to $w_2$, that is
\[ w_3 = (x_g, v_1, \ldots, v_{2b}, y_h, x_e). \]
Using Notation 2.3 we get the binomial
\[ F = x_g T_{w_3^+} - x_e T_{w_3^-} \in \mathcal{K}, \]
with initial term $\mathrm{in}_{<_M}(F) = x_e T_{w_3^-}$ because $e > g$. Since $T_{w_3^-} = T_{w_2^-}$, we get that $\mathrm{in}_{<_M}(F)$ divides $x_e y_f T_{(w_1,w_2)^+}$, a contradiction. \qed

Corollary 3.12. Let $G$ be a complete bipartite graph and $I = I(G)$ be its edge ideal. For all $s \geq 1$ we have $\text{reg}(I^s) = 2s$.

Proof. Using Lemma 3.11 and repeating the same argument of Theorem 3.7 we get $\text{reg}_{xy}(\mathcal{R}(I)) = 0$. Again, the result follows from Theorem 3.2 (the reverse inequality $\text{reg}(I^s) \geq 2s$ holds because $I^s$ is generated in degree $2s$). \qed

We remark that this previous result also follows from [17], since it is easy to check that co-chord$(G) = 1$ (i.e. $G$ is a co-chordal graph) in the case of complete bipartite graphs.

4. The total regularity of $\mathcal{R}(I)$

In the previous sections we heavily exploited the fact that the matrix $M$ (corresponding to $\mathcal{R}(I)$) is totally unimodular in the case of a bipartite graph $G$. From [11, Theorem 2.1] we have that $\mathcal{R}(I)$ is a normal domain; then a famous theorem of Hochster [16] (see e.g. [6, Theorem 6.10] or [7, Theorem 6.3.5]) implies that $\mathcal{R}(I)$ is Cohen-Macaulay. So, the Rees algebra $\mathcal{R}(I)$ of a bipartite graph $G$ is also special from a more algebraic point of view (see [22]).

For notational purposes we set $N = n + m$. It is well known that the canonical module of $S$ (with respect to our bigrading) is given by $S(-N, -q)$ (see e.g. [6, Proposition 6.26], or [7, Example 3.6.10] in the $\mathbb{Z}$-graded case). The Rees cone is the polyhedral cone of $\mathbb{R}^{N+1}$ generated by the set of vectors
$$\mathcal{A} = \{ v \mid v \text{ is a column of } M \text{ in (4)} \},$$
and we will denote it by $\mathbb{R}_+\mathcal{A}$. The irreducible representation of the Rees cone for a bipartite graph was given in [11, Section 4].

**Proposition 4.1.** Adopt Notation 2.1. The following statements hold:
(i) The Krull dimension of $\mathcal{R}(I)$ is $\dim(\mathcal{R}(I)) = N + 1$.
(ii) The projective dimension of $\mathcal{R}(I)$ as an $S$-module is equal to the number of edges minus one, that is, $p = \mathrm{pd}_S(\mathcal{R}(I)) = q - 1$.
(iii) The canonical module of $\mathcal{R}(I)$ is given by
$$\omega_{\mathcal{R}(I)} = \text{Ext}^{p}_S(\mathcal{R}(I), S(-N, -q)).$$
(iv) The bigraded Betti numbers of $\mathcal{R}(I)$ and $\omega_{\mathcal{R}(I)}$ are related by
$$\beta_{i,(a,b)}^{S}(\mathcal{R}(I)) = \beta_{p-i,(N-a,\,q-b)}^{S}(\omega_{\mathcal{R}(I)}).$$

**Proof.** (i) The Rees cone $\mathbb{R}_+\mathcal{A}$ has dimension $N + 1$ and the Krull dimension of $\mathcal{R}(I)$ is equal to it (see e.g. [23, Lemma 4.2]). More generally, it also follows from [21, Proposition 2.2]. Since clearly $\mathcal{R}(I)$ is a finitely generated $S$-module, the statements (ii) and (iii) follow from [6, Theorem 6.28] (see [7, Proposition 3.6.12] for the $\mathbb{Z}$-graded case). The statement (iv) follows from [6, Theorem 6.18]; also, see [6, page 224, equation 6.6]. □

Due to a formula of Danilov and Stanley (see e.g. [6, Theorem 6.31] or [7, Theorem 6.3.5]), the canonical module of $\mathcal{R}(I)$ is the ideal given by
$$\omega_{\mathcal{R}(I)} = \bigl(\{\, x_1^{a_1} \cdots x_n^{a_n}\, y_1^{a_{n+1}} \cdots y_m^{a_{N}}\, t^{a_{N+1}} \mid a = (a_i) \in (\mathbb{R}_+\mathcal{A})^\circ \cap \mathbb{Z}^{N+1} \,\}\bigr),$$
where $(\mathbb{R}_+\mathcal{A})^\circ$ denotes the topological interior of $\mathbb{R}_+\mathcal{A}$. Now we can compute the total regularity of $\mathcal{R}(I)$.

**Theorem 4.2.** Let $G$ be a bipartite graph and $I = I(G)$ be its edge ideal. The total regularity of $\mathcal{R}(I)$ is given by
$$\text{reg}(\mathcal{R}(I)) = \text{match}(G).$$

**Proof.** In the case of the total regularity, we can see $\mathcal{R}(I)$ as a standard graded $S$-module (i.e. $\deg(x_i) = \deg(y_j) = \deg(T_i) = 1$), and since $\mathcal{R}(I)$ is a Cohen-Macaulay $S$-module the regularity can be computed from the last Betti numbers (see e.g. [20, page 283] or [9, Exercise 20.19]). Thus, from Proposition 4.1 we get
$$\text{reg}(\mathcal{R}(I)) = \max \{a + b - p \mid \beta_{p,(a,b)}^{S}(\mathcal{R}(I)) \neq 0\}
= \max \{a + b - p \mid \beta_{0,(N-a,\,q-b)}^{S}(\omega_{\mathcal{R}(I)}) \neq 0\}
= N + 1 - \min \{a + b \mid \beta_{0,(a,b)}^{S}(\omega_{\mathcal{R}(I)}) \neq 0\},$$
and by the bigrading that we are using ($\deg(x_i) = \deg(y_j) = (1, 0)$ and $\deg(t) = (-2, 1)$) we obtain
\[ \text{reg}(\mathcal{R}(I)) = N + 1 - \min \{ a_1 + \cdots + a_N - a_{N+1} \mid a = (a_i) \in (\mathbb{R}_+\mathcal{A})^\circ \cap \mathbb{Z}^{N+1} \}. \]
One can check that the number
\[ - \min \{ a_1 + \cdots + a_N - a_{N+1} \mid a = (a_i) \in (\mathbb{R}_+\mathcal{A})^\circ \cap \mathbb{Z}^{N+1} \} \]
coincides with the $a$-invariant of $\mathcal{R}(I)$ with respect to the $\mathbb{Z}$-grading induced by $\deg(x_i) = \deg(y_j) = 1$ and $\deg(t) = -1$. This last formula can be evaluated with the irreducible representation of the Rees cone [11, Corollary 4.3]; this was done in [11, Proposition 4.5], and from it we get
\[ \text{reg}(\mathcal{R}(I)) = N - \beta_0, \]
where $\beta_0$ denotes the maximal size of an independent set of $G$. The minimal size of a vertex cover of $G$ is equal to $N - \beta_0$, and we finally get
\[ \text{reg}(\mathcal{R}(I)) = \text{match}(G) \]
from König's theorem. □

The following bound was obtained for the first power of the edge ideal in [13, Theorem 6.7].

**Corollary 4.3.** Let $G$ be a bipartite graph and $I = I(G)$ be its edge ideal. For all $s \geq 1$ we have
\[ \text{reg}(I^s) \leq 2s + \text{match}(G) - 1. \]

**Proof.** It is enough to prove that $\text{reg}_{xy}(\mathcal{R}(I)) \leq \text{reg}(\mathcal{R}(I)) - 1$. In the minimal bigraded free resolution (2) of $\mathcal{R}(I)$, suppose that $\text{reg}_{xy}(\mathcal{R}(I)) = a_{ij} - i$ for some $i, j \in \mathbb{N}$.
Since necessarily $b_{ij} \geq 1$ and
\[ a_{ij} + b_{ij} - i \leq \text{reg}(\mathcal{R}(I)), \]
we get the expected inequality. ■

This previous upper bound is sharp in some cases (see [5, Lemma 4.4]). In the following corollary we get information about the eventual linearity.

**Corollary 4.4.** Let $G$ be a bipartite graph and $I = I(G)$ be its edge ideal. For all $s \geq \text{match}(G) + q + 1$ we have
\[ \text{reg}(I^{s+1}) = \text{reg}(I^s) + 2. \]

**Proof.** With the same argument of Corollary 4.3 we can prove that $\text{reg}_T(\mathcal{R}(I)) \leq \text{reg}(\mathcal{R}(I))$; here the difference is that in the minimal bigraded free resolution (2) we can have free modules of the type $S(0, -b_{ij})$ (for instance, in the syzygies of $\mathcal{R}(I)$, the ones that come from even cycles). Then the statement of the corollary follows from [8, Proposition 3.7]. ■

5. Some final thoughts

In the last part of this note we give some ideas and digressions about Conjecture 3.1. Using a "refined Rees approach" with respect to the one of this note, one might get an answer to this conjecture for general graphs or perhaps for special families of graphs:

• Restricting the minimal bigraded free resolution (2) of $\mathcal{R}(I)$ to a graded $T$-part gives an exact sequence
$$0 \to (F_p)_k \to \cdots \to (F_1)_k \to (F_0)_k \to (\mathcal{R}(I))_k \to 0$$
for all $k$. This gives a (possibly non-minimal) graded free $R$-resolution of $(\mathcal{R}(I))_k \cong I^k(2k)$. But in the case $k = 1$ one can check that
$$0 \to (F_p)_1 \to \cdots \to (F_1)_1 \to (F_0)_1 \to (\mathcal{R}(I))_1 \to 0$$
is indeed the minimal free resolution of $I(2)$. Thus, one can read the regularity of $I$ from (2), and a solution to Conjecture 3.1 can be given by proving that
$$\max_{i,j} \{a_{ij} - i \} = \max_{i,j} \{a_{ij} - i \mid b_{ij} = 1 \}.$$

• For bipartite graphs, Gröbner bases techniques can give very good results (for instance, in the case of complete bipartite graphs). Perhaps, for special families of bipartite graphs one can give "good" monomial orders.

• The existence of a canonical module in the case of bipartite graphs could give more information about the minimal bigraded free resolution of $\mathcal{R}(I)$. From [6, Theorem 7.26] we have that the maximal $xy$-degree and the maximal $T$-degree on each $F_i$ of (2) form weakly increasing sequences of integers, that is,
$$\max_j \{a_{ij} \} \leq \max_j \{a_{i+1,j} \} \quad \text{and} \quad \max_j \{b_{ij} \} \leq \max_j \{b_{i+1,j} \}$$
(see e.g. [9, Exercise 20.19] for the $\mathbb{Z}$-graded case). Thus a more detailed analysis of the polyhedral geometry of the Rees cone $\mathbb{R}_+\mathcal{A}$ could give better results.

ACKNOWLEDGMENTS

The work presented in this note started thanks to the PRAGMATIC 2017 Research School in Algebraic Geometry and Commutative Algebra "Powers of ideals and ideals of powers" held in Catania, Italy, in June 2017. The author is very grateful to the organizers of the event, and to the professors Brian Harbourne, Adam Van Tuyl, Enrico Carlini and Tài Huy Hà who gave the lectures. The author is specially grateful to Tài Huy Hà for his support and for insisting on a "Rees approach". The author is grateful to Carlos D'Andrea and Aron Simis for useful suggestions. The use of Macaulay2 [12] was very important in the preparation of this note. The author wishes to thank the referee for numerous suggestions to improve the exposition.
REFERENCES Department de Matemàtiques i Informàtica, Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585; 08007 Barcelona, Spain. E-mail address: ycid@ub.edu URL: http://www.ub.edu/arcades/ycid.html
Basophils regulate the recruitment of eosinophils in a murine model of irritant contact dermatitis. Author(s): Nakashima, Chisa Citation: Kyoto University (京都大学) Issue Date: 2014-07-23 URL: https://doi.org/10.14989/doctor.k18509 This dissertation is author version of following the journal article. Chisa Nakashima, Atsushi Otsuka, Akihiko Kitoh, Tetsuya Honda, Gyohei Egawa, Saeko Nakajima, Satoshi Nakamizo, Makoto Arita, Masato Kubo, Yoshiki Miyachi, Kenji Kabashima, Basophils regulate the recruitment of eosinophils in a murine model of irritant contact dermatitis, Journal of Allergy and Clinical Immunology, Volume 134, Issue 1, July 2014, Pages 100-107.e12, ISSN 0091-6749, http://dx.doi.org/10.1016/j.jaci.2014.02.026. Type: Thesis or Dissertation Textversion: ETD Kyoto University Basophils regulate the recruitment of eosinophils in a murine model of irritant contact dermatitis Chisa Nakashima MD¹, Atsushi Otsuka MD, PhD¹, Akihiko Kitoh MD, PhD¹, Tetsuya Honda MD, PhD¹, Gyohei Egawa MD, PhD¹, Saeko Nakajima MD, PhD¹, Satoshi Nakamizo MD¹, Makoto Arita, PhD², Masato Kubo PhD³,⁴, Yoshiki Miyachi MD, PhD¹, and Kenji Kabashima MD, PhD¹ ¹Department of Dermatology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan ²Department of Health Chemistry, Graduate School of Pharmaceutical Sciences, University of Tokyo, Tokyo, 113-0033, Japan ³Laboratory for Cytokine Regulation, Integrative Medical Science (IMS), RIKEN Yokohama Institute, Suehiro-cho 1-7-22, Tsurumi, Yokohama, Kanagawa 230-0045, Japan ⁴Division of Molecular Pathology, Research Institute for Biomedical Science, Tokyo University of Science 2669 Yamazaki, Noda-shi, Chiba 278-0022, Japan. Correspondence should be addressed to Atsushi Otsuka (AO), MD, PhD and Kenji Kabashima (KK), MD, PhD Department of Dermatology, Kyoto University Graduate School of Medicine 54 Shogoin Kawara, Sakyo-ku, Kyoto 606-8507, Japan Tel: +81-75-751-3310, Fax: +81-75-761-3002 Email: otsukamn@kuhp.kyoto-u.ac.jp (AO) and kaba@kuhp.kyoto-u.ac.jp (KK) This work was supported in part by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology and the Ministry of Health, Labor and Welfare of Japan. ABSTRACT Background: Although eosinophils have been detected in several human skin diseases in the vicinity of basophils, how eosinophils infiltrate the skin and the role of eosinophils in the development of skin inflammation has yet to be examined. Objective: Using a murine irritant contact dermatitis (ICD) as a model, we sought to clarify the roles of eosinophils in ICD and the underlying mechanism of eosinophil infiltration of the skin. Methods: We induced croton oil-induced ICD in eosinophil-deficient ΔdblGATA mice with or without a reactive oxygen species (ROS) inhibitor. We performed co-cultivation using fibroblasts and bone marrow-derived basophils and evaluated eosinophil migration using a chemotaxis assay. Results: ICD responses were significantly attenuated in the absence of eosinophils or by treatment with the ROS inhibitor. ROS was produced abundantly by eosinophils and that both basophils and eosinophils were detected in human and murine ICD skin lesions. In co-culture experiments, basophils attracted eosinophils especially in the presence of fibroblasts. Moreover, basophils produced IL-4 and TNF-α in contact with fibroblasts and promoted the expression of eotaxin/CCL11 from fibroblasts in vitro. Conclusion: Eosinophils mediated the development of murine ICD possibly via ROS production. 
Recruitment of eosinophils into the skin was induced by basophils in cooperation with fibroblasts. Our findings raise a novel concept that basophils promote the recruitment of eosinophils into the skin via fibroblast in the development of skin inflammation. **Key Messages:** Basophils initiate the recruitment of eosinophils into the skin via fibroblasts. Interference of this system may control several skin inflammations. **Capsule Summary:** Eosinophil infiltration into the skin by basophils in cooperation with fibroblasts promotes irritant contact dermatitis via ROS production, which raises a new concept that interaction between basophils and fibroblasts induces skin inflammation via eosinophil recruitment. **Key Words:** Eosinophil, basophil, fibroblast, eotaxin/CCL11, RANTES/CCL5, irritant contact dermatitis, reactive oxygen species, tumor necrosis factor **Abbreviations:** BM, bone marrow; BMBa, bone marrow-derived basophil; BMEo, bone marrow-derived eosinophil; CBA, cytometric bead array; CHS, contact hypersensitivity; CM-H$_2$DCFDA, 5-(and-6)-chloromethyl-2',7'-dichlorodihydrofluorescein diacetate; cRPMI, complete RPMI; DCF, 2',7'-dichlorofluorescein; DT, diphtheria toxin; FCS, fetal calf serum; Flt3-L, fms-related tyrosine kinase 3 ligand; IgE-CAI, IgE-mediated chronic allergic inflammation; IL, interleukin; MCP-8, mast cell serine protease-8; MEF, mouse embryonic fibroblast; NAC, N-acetylcysteine; RANTES, regulated on activation, normal T cell expressed and secreted; ROS, reactive oxygen species; SCF, stem cell factor; Tg, transgenic; WT, wild-type INTRODUCTION Contact dermatitis is one of the most common inflammatory skin diseases and comprises both irritant contact dermatitis (ICD) and allergic contact dermatitis.\(^1\) ICD is more common than allergic contact dermatitis and is responsible for approximately 80% of all contact dermatitis.\(^2\) It is defined as a locally arising reaction that appears after chemical irritant exposure.\(^2\) The chemical agents are directly responsible for cutaneous inflammation because of their inherent toxic properties, which cause tissue injury.\(^3\), \(^4\) This inflammatory response is known to activate innate immune system cells, but the precise mechanism of ICD remains largely unknown. Eosinophils are one of the bone marrow (BM)-derived innate immune leukocytes that normally represent less than 5% of leukocytes in the blood, but are frequently detected in the connective tissues and BM.\(^5\) Eosinophils regulate local immune and inflammatory responses, and their accumulation in the blood and tissue is associated with several inflammatory and infectious diseases.\(^6\), \(^7\), \(^8\) The recruitment of activated eosinophils from the blood stream into tissues occurs under numerous conditions, and leads to the release of preformed and synthesized products such as cytokines, chemokines, lipid mediators, cytotoxic granule proteins, and reactive oxygen species (ROS).\(^7\), \(^9\) ROS are mainly produced by NADPH oxidase and lead to tissue injury at the inflamed site during allergic inflammation.\(^10\) The differentiation, migration, and activation of eosinophils are mainly enhanced by interleukin (IL)-5.\(^11\) It has been reported that the IL-5-targeted therapy can reduce airway and blood eosinophils and prevent asthma exacerbations\(^12\); however, the roles of eosinophils in the development of cutaneous immune responses remain largely unknown. 
It has been recently reported that basophils have been detected in skin diseases including contact dermatitis where eosinophils were present.\textsuperscript{13,14} Basophils are one of the least abundant granulocytes, representing less than 1\% of peripheral blood leukocytes.\textsuperscript{15} Their specific physiological functions during immune responses have been ignored until recently. Basophils play key roles in the development of acute and chronic allergic responses, protective immunity against parasites, and regulation of acquired immunity, including the augmentation of humoral memory responses.\textsuperscript{16,17} In this study, we observed the infiltration of eosinophils in human and murine ICD. Murine ICD responses were attenuated in eosinophil-deficient mice or in mice treated with an ROS inhibitor. ROS was produced by eosinophils, which were attracted by chemokines produced via interaction between basophils and fibroblasts. Our findings may raise an important concept that the interaction between basophils and mesenchymal fibroblasts induces the development of ICD via recruitment of eosinophils. \textbf{Methods} \textbf{Mice} \textit{ΔdblGATA} mice on BALB/c background were purchased from the Jackson Laboratory (West Grove, PA, USA). IL-5 transgenic (Tg) mice on BALB/c background\textsuperscript{18} were kindly provided by Dr. K. Takatsu (University of Toyama, Toyama, Japan). Basophil-specific enhancer-mediated toxin receptor-mediated conditional cell knock-out (Bas TRECK) mice on BALB/c background were generated as reported previously. Briefly, basophils use a specific 4 kb enhancer fragment containing the 3’-untranslated region and HS4 elements to regulate Il4 gene expression. Utilizing this system, we generated mice that express human diphtheria toxin (DT) receptor under the control of HS4. Using these mice, basophils have been reported to play an essential role for the induction and promotion of Th2 immunity. C57BL/6N and BALB/c wild-type (WT) mice were purchased from Japan SLC (Shizuoka, Japan). Eight-to-ten-week-old female mice were used for all the experiments and bred in specific pathogen-free facilities at Kyoto University. All experimental procedures were approved by the Institutional Animal Care and Use Committee of Kyoto University Graduate School of Medicine (Kyoto, Japan). Reagents, antibodies, and flow cytometry We purchased croton oil and N-acetylcysteine (NAC) from Sigma-Aldrich (St. Louis MO). 5-(and-6)-chloromethyl-2’,7’-dichlorodihydrofluorescein diacetate (CM-H$_2$DCFDA) was purchased from Invitrogen (Carlsbad, CA, USA). Recombinant murine stem cell factor (SCF), fms-related tyrosine kinase 3 ligand (Flt3-L), and IL-3 were purchased from PeproTech (Rocky Hill, NJ, USA). Recombinant mouse IL-5 was purchased from R&D Systems (Minneapolis, MN, USA). FITC-, PE-, PE-Cy7, APC, APC-Cy7, Pacific Blue-conjugated , anti-Gr-1 (RB6-8C5), anti-CD117 (c-Kit) (2B8), anti-FcεRIα (MAR-1), anti-CD49b (Dx5), anti-CD69 (H1.2F3), anti-CD86 (GL1), anti-CD11b (M1/70), and anti-CD45.1 (A20) mAbs were purchased from eBioscience (San Diego, CA, USA). APC-, and PE-conjugated anti-Siglec F (E50-2440) mAbs were purchased from BD Biosciences (San Jose, CA). FITC-conjugated anti-intercellular adhesion molecule-1 (ICAM-1(CD54)) (3E2) mAb was purchased from BD Biosciences (Franklin Lakes, NJ, USA). BV-conjugated anti-CD45 (30-F11), purified anti-CD200R3 (Ba13) mAbs, and rat anti-mast cell serine protease-8 (MCP)-8 (TUG8), were purchased from BioLegend (San Diego, CA, USA). 
For fluorescence labeling, purified anti-CD200R3 mAb was labeled with the HiLyte Fluor 647 Labeling Kit (Dojindo, Kumamoto, Japan). Functional grade purified anti-FcεRIα (MAR-1), anti-tumor necrosis factor (TNF)-α (MP6-XT22), and anti-IL-4 (11B11) mAbs were purchased from eBioscience. Single-cell suspensions from skin were prepared for flow cytometric analysis as follows. Skin/ear samples were collected using 8 mm skin biopsy (= ~100 mm²), cut into pieces and then digested for 1 h at 37 °C in 1.6 mg/ml collagenase type II (Worthington Biochemical Corp., Freehold, NJ) and 0.1 mg/ml DNase I (Sigma-Aldrich) in complete RPMI medium (cRPMI; RPMI 1640 medium (Sigma-Aldrich) containing 10% heat-inactivated fetal calf serum (FCS) (Invitrogen), 0.05 mmol/L 2-mercaptoethanol, 2 mmol/L L-glutamine, 25 mmol/L N-2-hydroxyethylpiperazine-N’-2-ethanesulfonic acid, 1 mmol/L nonessential amino acids, 1 mmol/L sodium pyruvate, 100 U/mL penicillin, and 100 µg/mL streptomycin). Samples were passed through a 40-µm pore size nylon mesh, and cells were stained for the indicated markers. Samples were acquired on a FACSFortessa system (BD) and analyzed with FlowJo software (Tree Star, San Carlos, CA). The numbers of each cell subset were calculated by flow cytometry and presented the numbers per mm² of the skin surface. ICD and basophil-depletion models Mice were anesthetized with diethyl ether and 20 μl of 1% (v/v) croton oil in acetone was applied to ear skin. To deplete basophils in vivo, mice were injected twice daily for 3 days with anti-FcεRIα (MAR-1). The efficiency of basophil depletion was analyzed in peripheral blood on day 4. To block ROS production, mice were intraperitoneally injected with NAC (500 mg per kg body weight) and given 20 μl of 50 mM NAC in 100% ethanol on ear skin 1 h before application of croton oil. Bas TRECK Tg mice were treated with DT for basophil-depletion. BALB/c mice with DT were used as control. For DT treatment, mice were injected intraperitoneally with 100 ng of DT per mouse. Histology and immunohistochemistry Skin samples for hematoxylin-eosin (HE) staining were collected from ICD patients (n=10) and healthy control subjects (n=6). The number of eosinophils was counted in five fields (× 20 objective). HE staining and histological scoring were evaluated as reported. In brief, samples were scored for the severity and character of the inflammatory response on a subjective grading scale. Responses were graded as follows: 0, no response; 1, minimal response; 2, mild response; 3, moderate response; 4, marked response. The slides were blinded, randomized, and reread to determine the histology score. All studies were read by the same pathologist using the same subjective grading scale. The total histology score was calculated as the sum of scores, including inflammation, neutrophils, mononuclear cells, edema, and epithelial hyperplasia. The evaluation of eosinophils was performed with Papanicolaou staining. For the identification of basophils by immunohistochemistry, tissue sections were immunostained as previously reported.\textsuperscript{24} **Staining of ROS in ear skin** Mice were applied with 1% croton oil and cells from the ear skin were isolated 6 h later, and incubated for 30 min at 37°C with a solution of 1 μM CM-H₂DCFDA in phosphate buffered saline (PBS). After being washed twice with PBS, cells were labeled with anti-SiglecF and anti-CD11b. We detected production of ROS as indicated by an increase in 2',7'-dichlorofluorescein (DCF) fluorescence. 
**Preparation of BM-derived basophils (BMBas), BM-derived eosinophils (BMEos) and mouse embryonic fibroblasts (MEFs)** Complete RPMI medium (cRPMI) was used as culture medium. For BMBas induction, 5 × 10⁶ BM cells were cultured in cRPMI supplemented with 20% FCS, in the presence of 10 ng/ml recombinant mouse IL-3 (PeproTech) for approximately 9 to 14 days. For BMEos induction, 1 × 10⁶ BM cells of Ly5.1 mice were cultured in cRPMI supplemented with 20% FCS, in the presence of 100 ng/ml recombinant mouse SCF and 100 ng/ml rmFlt3-L (PeproTech) from days 0 to 4. On day 4, the medium containing SCF and Flt3-L was replaced with medium containing 10 ng/ml recombinant mouse IL-5 (R&D) thereafter.\textsuperscript{25} MEFs were obtained from embryos on embryonic day 15 by using standard methods in complete DMEM medium (Sigma-Aldrich). \textsuperscript{26} **Chemotaxis assay and cell culture** Cells were tested for transmigration to the lower chamber across uncoated 5-μm transwell filters (Corning Costar Corp., Corning, NY, USA) for 3 h and were enumerated by flow cytometry. BM cells of IL-5 Tg mice and starved BMBas were co-cultured at a density of 2 × 10^5 cells in 200 μl per well in a 96-well microplate at a BM: BMBa ratio 1:4 in cRPMI supplemented with 10 ng/ml recombinant mouse IL-3 for 24 h. Separation of BM cells and BMBas was performed by using transwell culture plates with a 3-μm pore size. MEFs were cultured in 24-well plates to 80% confluences. For co-culture, the medium of MEFs was replaced with cRPMI and the co-culture was performed supplemented with 10 ng/ml recombinant mouse IL-3 for 24 h. For inhibition assays, BMBas and MEFs were co-cultured with or without 5 μg/ml isotype control Ab (Rat IgG2b, eBioscience), 10 μg/ml anti-IL-4 mAb (11B11, eBioscience) or 5 μg/ml anti-TNF-α mAb (MP6-XT22, eBioscience) for 24 h. For chemotaxis toward the supernatant of co-culture of BMBas and MEFs, 1 × 10^6 BM cells were transferred into the upper chamber of transwell containing 5-μm pore filters. The supernatant of cultivation of MEFs with or without BMBas was added to the lower chamber and incubated for 3 h at 37°C. Gr-1\textsuperscript{int+} SiglecF\textsuperscript{+} CD11b\textsuperscript{+} eosinophils which migrated to the lower chambers were counted by flow cytometry. **ELISA and cytometric beads array** The amount of eotaxin/CCL11 in the culture medium was measured by ELISA (eBioscience). The amount of regulated on activation, normal T cell expressed and secreted (RANTES)/CCL5 was measured using a cytometric bead array (CBA) system according to the manufacturer’s instructions (BD Biosciences). For the measurement of eotaxin and RANTES, a total of $3 \times 10^5$ BMBas and $1 \times 10^5$ MEFs were cultured with recombinant mouse IL-3 (10 ng/ml) for 24 h, and the supernatants were collected for ELISA assay and CBA. **Statistical analysis** Unless otherwise indicated, data are presented as the means ± standard error of the mean and a representative of at least three independent experiments. P-values were calculated with Wilcoxon signed-rank test. P-values < 0.05 are considered to be significantly different and are marked by an asterisk in the figures. **Results** **Eosinophils play some roles in the development of ICD** We first evaluated whether eosinophils were detected in the lesional skin of patients with ICD. In comparison with healthy donors, the number of eosinophils was significantly higher in patients with ICD (Fig. 1A). 
To further investigate the role of eosinophils in ICD, we used eosinophil-deficient ΔdblGATA mice. In a croton oil-induced ICD model, the ear swelling response in ΔdblGATA mice was significantly attenuated compared with that in WT mice 6, 24, and 48 h after application (Fig. 1B). To confirm the role of eosinophils in ICD, we used IL-5 Tg mice, which demonstrate eosinophilia in peripheral blood as well as infiltration of eosinophils into various tissues. The ICD response in IL-5 Tg mice was significantly enhanced compared with that in WT mice at 1, 3, 6, 24, and 48 h after application (Fig. 1C). Consistent with the ear swelling responses, lymphocyte infiltration including eosinophils and edema in the dermis 6 h after application were lower in ΔdblGATA mice and higher in IL-5 Tg mice than in WT mice (Fig. 1D, Table E1 and 2). In addition, major eosinophil chemoattractants, such as RANTES and eotaxin, were detected in the skin after croton oil application (Fig. E1). **Eosinophils produce ROS in ICD** ROS is known to induce the development of some inflammatory conditions. To assess the role of ROS in the ICD model, we used an ROS-inhibitor NAC. Ear swelling significantly decreased after the NAC treatment in both WT and ΔdblGATA mice (Fig. 2A). Of note, ear swelling in NAC-treated WT mice was comparable to that in NAC-treated ΔdblGATA mice, suggesting that ROS produced from eosinophils play a major role in the induction of ICD. In addition, using the ROS-sensitive fluorescent dye CM-H₂DCFDA, we detected a significant amount of ROS production by eosinophils in the skin under steady states. In addition, eosinophils in the ICD lesional skin expressed higher amounts of ROS, which was also higher than infiltrated CD4+ T cells\textsuperscript{29} (Fig. 2B, C). **Basophils enhance eosinophil recruitment into the skin** Basophils tended to be detected in skin diseases where eosinophils were present.\textsuperscript{30} We next analyzed the distribution of eosinophils and basophils using Papanicolaou staining and immunohistochemistry, respectively, in the ICD model. Mcp8+ basophils were localized in the vicinity of eosinophils in the inflamed skin (Fig. 3A). We further evaluated whether basophils were detected in the lesional skin of patients with ICD and demonstrated the coincidental presence of basophils and eosinophils in inflamed skin of human ICD (Fig. E2). In addition, the numbers of neutrophils and basophils in ΔdblGATA mice were comparable to those in WT mice 6 h after croton oil application (Fig. 3B, C), which suggests that eosinophils do not affect the recruitment of neutrophils and basophils into the skin. We further analyzed the kinetics of recruitment of eosinophils and basophils in the lesional skin in ICD. The number of basophils increased 3 h after croton oil application. On the other hand, eosinophils increased 24 h after croton oil application, the timing of which was later than that of basophils (Fig. 3D). Therefore, we hypothesized that basophils affect eosinophil infiltration during ICD. To address this hypothesis, we depleted basophils using anti-FceRI\(\alpha\) (MAR-1) antibody.\textsuperscript{22} The administration of anti-FceRI\(\alpha\) antibodies significantly suppressed ear swelling and infiltration of eosinophils but not neutrophils or mast cells into the skin (Fig. 3E, F, and Fig. E3). 
To confirm these results, we next used Bas TRECK Tg mice to deplete basophils conditionally.\textsuperscript{17, 20} Consistently, the ear swelling response and the number of infiltration of eosinophils but not neutrophils in DT-treated Bas TRECK Tg mice were significantly attenuated compared with those in DT-treated WT mice (Fig. 3G, H, and Fig. E4). Similar findings were observed in mast cell-deficient WBB6F1-Kit\textsuperscript{W/Wv} (W/Wv) mice (Fig. E5). These findings suggest the potential overlap of mast cells and basophils. **Basophils augment eosinophil activation** Impaired eosinophil recruitment as a result of depletion of basophils suggests that basophils promote eosinophil infiltration into the skin. To address this issue, we prepared BM cells from IL-5 Tg mice that included numerous eosinophils and incubated them with or without BMBAs for 24 h. Co-cultivation of BM cells with BMBAs significantly enhanced the expression levels of activation markers, CD69, CD86, and ICAM-1, on eosinophils among BM cells (Fig. 4A, B).\textsuperscript{31-33} On the other hand, the incubation of BMBAs and BM cells of IL-5 Tg mice separately using transwells did not induce up-regulation of the above activation markers on eosinophils (Fig. 4A, B). These findings suggest that basophils require direct cell-to-cell interaction to activate eosinophils. In cooperation with fibroblasts, basophils promote recruitment of eosinophils Next we evaluated whether basophils were capable of attracting eosinophils. We prepared BMBas and BMEos for chemotaxis assay. The chemotaxis of BMEos applied to the upper chamber was significantly enhanced when BMBas were added to the lower chamber (Fig. 5A). On the other hand, BMBas applied to the upper chamber did not migrate to the lower chamber where BMEos were added (Fig. 5A). These results suggest that basophils attract eosinophils, but not vice versa. We then sought to identify how basophils recruit eosinophils. CCR3 is known to mediate eosinophil chemotaxis in response to eotaxin and RANTES. The number of eosinophils in anti-CCR3 antibody-treated mice was attenuated compared to that in control antibody-treated mice 24 h after croton oil application (Fig. E6A). In addition, the amounts of RANTES and eotaxin in the skin after croton oil application were reduced in basophil-depleted mice using a Bas TRECK transgenic system (Fig. E6B). RANTES was detected in the supernatant medium of IL-3-stimulated BMBa cultures (Fig. 5B), but eotaxin was not detected therein (data not shown). Fibroblasts are known to produce chemoattractants such as RANTES and eotaxin. We observed that BMBas expressed only RANTES mRNA, which was consistent with the findings in Fig. 5B, and that MEFs expressed RANTES and eotaxin mRNA by quantitative PCR (Fig. E7). Since basophils infiltrated into the dermis where mesenchymal fibroblasts localize abundantly, we then evaluated the effect of the interaction between basophils and fibroblasts on chemokine production. Although MEFs expressed marginal RANTES in the culture supernatant, BMBAs expressed pronounced RANTES. Co-cultivation of MEFs significantly increased RANTES levels in the culture supernatant of BMBAs (Fig. 5C). It has been reported that TNF-α promotes migration of immune cells, such as dendritic cells and mast cells, and that IL-4 is an inducer for several chemokines. We next hypothesized that TNF-α or IL-4 might mediate the production of RANTES by basophils. 
To address this issue, we examined whether enhanced RANTES production in co-cultivation of BMBAs and MEFs was inhibited by anti-TNF-α or IL-4 antibody. Although anti-IL-4 antibody did not inhibit RANTES production, anti-TNF-α antibody inhibited RANTES production in the culture supernatant of both basophils and co-cultivation of basophils and MEFs (Fig. 5C). Next, we sought to reveal the mechanism by which eotaxin is induced in the skin. In contrast to RANTES, eotaxin mRNA was strongly detected in MEFs (Fig. E7), whereas eotaxin protein levels in the culture supernatant of MEFs were marginal (Fig. 5D). Interestingly, eotaxin protein was induced by co-cultivation of BMBAs and MEFs (Fig. 5D). Differently from RANTES induction, both anti-IL-4 antibody and anti-TNF-α antibody inhibited the induction of eotaxin. We found that the main producer of TNF-α and IL-4 was IL-3-stimulated BMBAs (Fig. 5E). Consistently, the amounts of IL-4 and TNF-α in the skin were reduced by depletion of basophils after croton oil application. In addition, co-cultivation of BMBas and MEFs in the presence of IL-3 to the lower chamber attracted eosinophils applied to the upper chamber, when compared to only BMBas or only MEF incubation to the lower chamber (Fig. 5F). **DISCUSSION** In this study, we have demonstrated that eosinophils mediate the development of the ICD reactions possibly via ROS production. Eosinophils accumulate into the human and murine ICD skin lesions in the vicinity of basophils. Basophils are detected in the skin lesion prior to the infiltration of eosinophils, suggesting that basophils promote eosinophil accumulation into the skin. Consistently, BMBas promote the migration and activation of eosinophils *in vitro*. RANTES is produced by IL-3-stimulated basophils and even more by co-cultivation of MEFs in a TNF-α dependent manner. On the other hand, eotaxin is produced by co-cultivation of BMBas and MEFs, which is inhibited by anti-TNF-α and anti-IL-4 antibody. Basophils attracted eosinophils via CCR3. And direct cell-to-cell interaction was required for activation of eosinophils by basophils. Taken together, our findings suggest that basophils infiltrating into the skin attract eosinophils directly or indirectly via interaction of mesenchymal fibroblasts, and that basophils activate eosinophils in situ, which contributes to the development of skin inflammation (Fig. E9). Basophils are thought to be major early producers of IL-4 and IL-13, which are critical for triggering and maintaining allergic responses. In this report, we have demonstrated that basophils rapidly infiltrate into the inflamed skin and subsequently attract eosinophil therein for the development of ICD. IgE-mediated chronic allergic inflammation (IgE-CAI) is a long-lasting inflammation that follows immediate-type reactions and late-phase responses. It is histopathologically characterized by massive eosinophil infiltration into the skin. Basophils are considered as cells responsible for initiating inflammation of IgE-CAI. Consistently with our results using ICD, the number of eosinophils increased in the lesional skin after basophil infiltration in IgE-CAI. In addition, basophils co-localize with eosinophils in human skin diseases such as atopic dermatitis and eosinophilic pustular folliculitis. These findings suggest that our novel findings might be applicable to more general skin inflammatory diseases both in mice and in human. In this study, we have clarified that basophils recruit eosinophils into the skin. 
The next question is how basophils infiltrate into the lesional skin. It has been reported that \( \alpha(1,3) \) fucosyltransferases IV and VII are essential for the initial recruitment of basophils in chronic allergic inflammation. We are currently working to understand the underlying mechanism of how basophils infiltrate into the lesion as an inducer of skin inflammation in our model. Both basophils and eosinophils express the common chemokine receptor CCR3. Ligands for CCR3, such as eotaxin, are produced by dermal fibroblasts in response to Th2-type cytokines in humans. We demonstrated herein a new network for eosinophil infiltration into the skin as summarized in Fig. E9. Activated basophils produced RANTES, which was dependent on TNF-\( \alpha \) that was possibly produced by basophils themselves. In addition, basophils that have infiltrated into the lesional skin were activated via contact with dermal fibroblasts and produced IL-4 and TNF-α, which promoted eotaxin expression from fibroblasts. These cytokine-chemokine networks may support recruitment of eosinophils from the blood stream into the skin. It has been reported that ICD is IgE-independent. In addition, recent studies showed that thymic stromal lymphopoietin (TSLP), which is produced by keratinocytes and fibroblasts in the skin, activated basophils. We have demonstrated that croton oil application promoted the induction of TSLP in ICD (data not shown). Therefore, we assume that TSLP is one of the candidates for the activator in this assay. In this study, we also examined the role of mast cells in ICD using mast cell-deficient Kit<sup>W/W<sup>v</sup> mice. Interestingly, similar phenotypes, such as the attenuation of ICD and eosinophil infiltration, were found in a mast cell-deficient model (Fig. E5). These findings suggest the potential overlap of mast cells and basophils, which seems to be intriguing. We demonstrated that TNF-α was decreased partially but IL-4 was almost completely diminished by basophil depletion (Fig. E8). These findings suggest that TNF-α and IL-4 might be released by mast cells and basophils, respectively, for the development of ICD. TNF-α is a potent pro-inflammatory cytokine and immunomodulatory cytokine implicated in inflammatory conditions. Treatment with anti-TNF-α antibody is effective for several diseases including psoriasis, Crohn’s disease, and rheumatoid arthritis (RA). On the other hand, peripheral blood eosinophilia can be observed in patients with active inflammatory RA. We demonstrated herein that anti-TNF-α antibody inhibited the production of eosinophil chemoattractant such as RANTES and eotaxin from basophils and fibroblasts (Fig. 5C, D). Since synovial fibroblasts and basophils have been reported to play important roles in the pathogenesis of RA, anti-TNF-α might also block the interaction of basophils, eosinophils and fibroblasts to regulate RA activity. Further understanding of the relationship between basophils, eosinophils, and fibroblasts in the immune organs may lead to the development of new therapeutic strategies to control eosinophil-associated diseases, such as ICD, atopic dermatitis, and allergic asthma. ACKNOWLEDGEMENTS We thank Dr. Hideaki Tanizaki, Dr. Kazunari Sugita, Ms. Kaori Tomari, Ms. Kiiko Kumagai, Ms. Natsuki Ishizawa, and Ms. Hiromi Doi for technical assistance. No additional external funding was received for this study. REFERENCES Figure Legends Figure 1. Eosinophils play some role in the development of ICD. 
(A) Histology of the skin of ICD (I; n=10) and healthy donors (H; n=6). Scale bars, 50 μm. The number of eosinophils per field (left panel). (B, C) Ear swelling of WT and ΔdblGATA mice (n=9 per group; B) and of WT (n=10) and IL-5 Tg (n=7) mice (C) after application of croton oil. (D) HE staining of ears 6 h after application. Histology scores of the skin before and 6 h after the application. Scale bars, 100 μm.

Figure 2. ICD is mediated by eosinophil-derived ROS
(A) Ear swelling of WT (n=11) and ΔdblGATA mice (n=7) pretreated with or without the antioxidant NAC, measured 24 h after application. (B, C) Histogram (B) and MFI of DCF (C) on SiglecF⁺ CD11b⁺ eosinophils and CD4⁺ T cells before (i.e., steady state; 0 h) or 6 h after application.

**Figure 3. Basophils accumulate into the skin prior to eosinophil infiltration in ICD.**
(A) Mcp8⁺ basophils (black arrowhead) and Papanicolaou staining⁺ eosinophils (red arrowhead) 24 h after croton-oil application. Scale bars, 50 μm. (B, C) FACS plots (B) and numbers (C) of infiltrating eosinophils (Eo), basophils (Baso), and neutrophils (Neu) of WT and ΔdblGATA mice per mm² of skin surface 6 h after croton-oil application. (D) Kinetics of the numbers of eosinophils and basophils in the skin. (E) Ear swelling and (F) cell infiltration in ICD upon depletion of basophils with MAR-1 antibody (n=5 per group). (G, H) Ear swelling in ICD (G) and cell infiltration (H) of WT (n=5) and Bas TRECK Tg (n=4) mice 24 h after croton-oil application.

**Figure 4. Basophils promote the activation of eosinophils via direct cell interaction.**
Histogram (A) and MFI (B) of CD69, CD86, and ICAM-1 expression on the eosinophil subset in BM cells of IL-5 Tg mice cultured with or without BMBas directly or indirectly.

**Figure 5. Basophils promote eosinophil recruitment directly or indirectly via fibroblasts.**
(A) Migration of eosinophils and basophils. BMBas or BMEos were applied to the upper or lower chambers with or without IL-3, and cell numbers were evaluated. (B) Amount of RANTES in the supernatants of BMBa cultures with or without IL-3. (C) Amount of RANTES and (D) eotaxin in supernatants of MEF, BMBas (Baso), or MEF plus BMBas cultured for 24 h with or without neutralizing anti-IL-4 or anti-TNF-α antibodies. (E) Amount of TNF-α and IL-4 in supernatants of BMBa or MEF cultures. (F) The number of migrating eosinophils. Chemotaxis of eosinophils toward the lower chamber incubated with MEFs, BMBas, or MEF plus BMBas in the presence or absence of IL-3 was evaluated.

[Figures 1–5: graphical panels; see the figure legends above.]
SUPPLEMENTAL MATERIALS AND METHODS

Mice
Genetically mast cell-deficient WBB6F1-Kit<sup>W/Wv</sup> mice and congenic normal WBB6F1-Kit<sup>+/+</sup> mice were purchased from Japan SLC (Shizuoka, Japan).

Reagents, antibodies, and flow cytometry
Monoclonal anti-mouse CCR3 antibody (83103) was purchased from R&D Systems. Purified anti-human basophil (2D7) antibody was purchased from BioLegend.

Immunohistochemistry to detect basophils in human skin
Human skin samples were collected from ICD patients (n=10). For the identification of basophils by immunohistochemistry, formalin-fixed, paraffin-embedded tissue sections were deparaffinized and rehydrated through graded ethanol solutions. To enhance antigen retrieval, the slides were treated with 0.4 mg/ml proteinase K (Dako, Carpinteria, CA) for 5 min at room temperature. Samples were blocked with 10% goat serum for 30 min at room temperature and incubated for 16 h at 4°C with primary antibody (human 2D7, 1:50), followed by an Envision kit (Dako). They were lightly counterstained with hematoxylin and eosin.

ELISA and beads array
The amounts of several cytokines in the skin were measured by ELISA and a cytometric bead array (CBA) system (BD Biosciences) according to the manufacturer's instructions. For measurement of RANTES and eotaxin in the skin, ear skin was collected 24 h after applying 1% croton oil and homogenized in 150 μl PBS. The supernatants were collected for ELISA assay and CBA. We also measured the amounts of RANTES, eotaxin, IL-4, and TNF-α using an organ culture method. Ear skin of WT or Bas TRECK Tg mice was collected 24 h after applying 1% croton oil and split into dorsal and ventral halves. The dorsal (i.e., cartilage-free) halves were cultured in 300 μl PBS at 37°C for 3 h. The culture medium was collected for an ELISA assay and CBA.

**Quantitative PCR analysis**
Total RNAs were isolated with RNeasy kits and digested with DNase I (Qiagen, Hilden, Germany). cDNA was reverse transcribed from total RNA samples using PrimeScript RT Master Mix (Takara Bio, Otsu, Japan). Quantitative RT-PCR was performed by monitoring synthesis of dsDNA during the various PCR cycles using SYBR Green I (Roche, Basel, Switzerland) and a LightCycler real-time PCR apparatus (Roche) according to the manufacturer's instructions.
Primer for eotaxin and RANTES were obtained from Greiner Bio-One (Tokyo, Japan), and the primer sequences were *Eotaxin*, 5’-GAA TCA CCA ACA ACA GAT GCA C-3’ (forward) and 5’-ATC CTG GAC CCA CTT CTT CTT-3’ (reverse); and *RANTES*, 5’-TTT GCC TAC CTC TCC CTC CTC G-3’ (forward) and 5’-CGA CTG CAA GAT TGG AGC ACT-3’ (reverse). For each sample, triplicate test reactions and a control reaction lacking reverse transcriptase were analyzed for expression of genes, and results were normalized to those of levels. Expression of mRNA (relative) was normalized to the ‘housekeeping’ glyceraldehyde-3-phosphate dehydrogenase (*Gapdh*) mRNA by the change in cycling threshold (ΔC_T) method and calculated based on 2^{-ΔCT}. SUPPLEMENTAL FIGURE LEGENDS Figure E1. Increased levels of RANTES and eotaxin protein in ICD For measurement of RANTES and eotaxin in the skin, ear skin was collected and homogenized in PBS. RANTES and eotaxin protein levels were measured before (0 h) or 24 h after croton oil application. Figure E2. Coincidental presence of eosinophils and basophils in inflamed skin of human ICD Immunohistochemistry with 2D7 and HE staining were performed using inflamed skin of human ICD. 2D7+ basophils (black arrowhead) and eosinophils (red arrowhead) were present coincidently. The representative histological findings of two ICD patients were shown. Scale bars, 50 μm. Figure E3. Intact mast cells in the skin by anti-FcεRIα (MAR-1) antibody treatment (A) Basophils can be efficiently depleted by anti-FcεRIα (MAR-1) antibody treatment. BALB/c mice were injected twice daily for 3 days with 5 μg isotype-matched control or anti-FcεRIα (MAR-1) antibody. On day 4 the depletion efficiency in blood was examined by FACS. (B) Preservation of mast cell population in the skin detected by toluidine blue staining after MAR-1 treatment. Figure E4. Basophils were efficiently depleted by DT-treated Bas TRECK Tg mice. (A) Control BALB/c mice and Bas TRECK Tg mice (BALB/c background) were injected intraperitoneally with 100 ng of DT per mouse. On day 2 the depletion efficiency in blood was examined by FACS. (B) The number of eosinophils in bone marrow was preserved equivalently in DT-treated Bas TRECK Tg mice. **Figure E5. Impaired ICD in the absence of mast cells** (A) Ear swelling of WBB6F1-Kit<sup>W/Wv</sup> (W/Wv) mice and congenic normal WBB6F1-Kit<sup>+/+</sup> (+/+) mice (n=5 per group). (B) Numbers of infiltrating eosinophils (Eo), basophils (Baso), and neutrophils (Neu) of congenic normal WBB6F1-Kit<sup>+/+</sup> (+/+) mice and WBB6F1-Kit<sup>W/Wv</sup> mice. **Figure E6. Eosinophils were recruited via CCR3 and CCR3 ligands were produced in the skin in a basophil-dependent manner** (A) Administration of anti-CCR3 antibodies significantly suppressed the infiltration of eosinophils in the skin. (B) Production of CCR3 ligands, RANTES and eotaxin, in the skin were reduced in DT-treated Bas TRECK Tg mice using skin organ culture. **Figure E7. RANTES and eotaxin mRNA expression in BMBas and MEFs** Quantitative RT-PCR analysis of RANTES and eotaxin mRNA in BMBas and MEFs. **Figure E8. Depletion of basophils reduced TNF-α and IL-4 protein levels in the skin.** The amounts of TNF-α and IL-4 were reduced in DT-treated Bas TRECK Tg mice using organ culture. Figure E9. Roles for basophils and eosinophils in ICD When chemical agents, including irritants, are exposed to the skin, basophils are recruited into the skin as an early phase (Step 1). 
Activated basophils secrete cytokines including TNF-α and IL-4 (Step 2), which in turn act on dermal fibroblasts and induce them to produce chemokines such as eotaxin (Step 3). In addition, basophils themselves produce RANTES in a TNF-α-dependent manner (Step 3). These chemokines recruit eosinophils into the skin (Step 4). Accumulated eosinophils are activated through interaction with basophils and release inflammatory mediators, including ROS, establishing skin inflammation (Step 5).

SUPPLEMENTAL TABLES

Table E1. Histological findings of skin 6 h after application of croton oil
<table> <thead> <tr> <th>Histological finding</th> <th>ΔdblGATA</th> <th>WT</th> <th>IL-5 Tg</th> </tr> </thead> <tbody> <tr> <td>Inflammation</td> <td>2.3 ± 0.5</td> <td>2.4 ± 0.5</td> <td>3.0 ± 0.0</td> </tr> <tr> <td>Neutrophils</td> <td>1.6 ± 0.5</td> <td>2.2 ± 0.4</td> <td>3.0 ± 0.0</td> </tr> <tr> <td>Mononuclear cells</td> <td>1.5 ± 0.5</td> <td>1.8 ± 0.8</td> <td>2.8 ± 0.4</td> </tr> <tr> <td>Edema</td> <td>1.6 ± 0.8</td> <td>2.4 ± 0.9</td> <td>2.3 ± 0.5</td> </tr> <tr> <td>Epithelial hyperplasia</td> <td>1.0 ± 0.0</td> <td>2.6 ± 0.5</td> <td>2.1 ± 0.4</td> </tr> </tbody> </table>
Samples were scored for the severity and character of the inflammatory response using a subjective grading scale.

Table E2. Numbers of eosinophils in the skin lesions
<table> <thead> <tr> <th>Mouse strain</th> <th>ΔdblGATA</th> <th>WT</th> <th>IL-5 Tg</th> </tr> </thead> <tbody> <tr> <td>Eosinophils</td> <td>0.0 ± 0.0</td> <td>5.0 ± 1.4</td> <td>17.6 ± 6.5</td> </tr> </tbody> </table>
Eosinophils in the skin lesions of irritant contact dermatitis of ΔdblGATA, WT, and IL-5 Tg mice (n=5) were counted by microscopic analysis (averaged from 5 high-power fields, ×400 magnification) using Papanicolaou staining.

[Figure E1: bar graphs of RANTES and eotaxin levels (pg/ml) at 0 and 24 h; RANTES is undetectable (ND) at 0 h and increases markedly at 24 h, while eotaxin is elevated at 0 h with a slight further increase at 24 h.]

[Figure E3: CD49b (DX5) vs. CD200R3 flow cytometry after isotype or MAR-1 treatment (1.2% vs. 0.0% basophils); number of mast cells per field for isotype vs. MAR-1.]

[Figure E4: CD49b (DX5) vs. CD200R3 plots for WT and DT-treated Bas TRECK mice (1.2% vs. 0.0%); eosinophil counts (×10^5) in bone marrow.]

[Figure E5: ear swelling (μm) and infiltrating cell counts (Eo, Baso, Neu; ×10/mm²) at 1, 3, 6, and 24 h after 1% croton oil in +/+ vs. W/Wv mice; * indicates a significant difference.]

[Figure E6: eosinophil counts (×10/mm²) with control vs. anti-CCR3 antibody; RANTES and eotaxin (pg/ml) at 24 h in WT vs. Bas TRECK mice.]

[Figure E7: RANTES and eotaxin mRNA expression relative to GAPDH (×10^-2 and ×10^-4) in Baso and MEF; ND, not detected.]

[Figure E8.]

[Figure E9: schematic of basophil and eosinophil roles in ICD, showing chemicals (irritants, allergens), basophils, eosinophils, IL-4, TNF-α, eotaxin, RANTES, ROS, cytokines, dermal fibroblasts, blood vessels, epidermis, and dermis (Steps 1–5).]
Abstract—Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in their own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active. 1. INTRODUCTION Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models that capture the underlying physical phenomena [1–3]. Component wear is driven by several different degradation phenomena. Each of these degradation phenomena results in its own damage progression path, which all combine to contribute to the overall degradation of the component. Due to manufacturing variances and differences in usage and environmental conditions, the damage progression rates for the different damage mechanisms vary among components of the same type. This poses considerable challenges to data-driven (model-free) approaches, which use run-to-failure data to train machine learning algorithms to make end of life and remaining useful life predictions [4], because often the training data to cover a sufficient portion of such cases is lacking. In the absence of such data, model-based approaches are better-suited, since they use underlying physical models to help estimate the amount of damage and the rates of damage progression. Extending previous work in [1], we develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. In particle filter-based parameter estimation, an artificial random walk evolution is assigned to the parameters, which is necessary for convergence of the estimates and proper tracking afterwards. But, the optimal variance of the random walk depends on the unknown parameter value. To reduce the amount of this artificial uncertainty, we introduce a novel variance control mechanism that maintains an uncertainty bound around an unknown parameter being estimated. We demonstrate our prognostics methodology on a centrifugal pump. Centrifugal pumps are used in a wide range of applications, from water supply to spacecraft fueling systems. 
Because pumps typically see high usage, they can particularly benefit from prognostics and health management solutions to ensure satisfactory system performance, extended component lifetime, and limited downtime. Model-based diagnosis has been investigated previously with centrifugal pumps [5–7]. However, most prognostics approaches for pumps have been data-driven, usually based on pump vibration signals. A principal component analysis method is applied for condition monitoring of a pump using vibration signals in [8]. A model-based approach is presented in [9], however it considers only a single degradation mode. We illustrate here our model-based prognostic approach for centrifugal pumps using a number of simulation-based experiments when multiple damage mechanisms are active. We evaluate algorithm performance using established prognostics metrics [10]. The paper is organized as follows. Section 2 formally defines the prognostics problem and describes the prognostics architecture. Section 3 describes the modeling methodology and develops the centrifugal pump model for prognostics. Section 4 describes the particle filter-based damage estimation method and develops the variance control scheme. Section 5 discusses the prediction methodology. Section 6 provides results from a number of simulation-based experiments and evaluates the approach. Section 7 concludes the paper. 2. PROGNOSTICS APPROACH The problem of prognostics is to predict the EOL and/or the RUL of a component. In this section, we first formally define the problem of prognostics. We then describe a general model-based architecture for prognostics. Problem Formulation In general, a system model may be defined as \[ \dot{x}(t) = f(t, x(t), \theta(t), u(t), v(t)) \] \[ y(t) = h(t, x(t), \theta(t), u(t), n(t)), \] where \( x(t) \in \mathbb{R}^n \) is the state vector, \( \theta(t) \in \mathbb{R}^{n_\theta} \) is the parameter vector, \( u(t) \in \mathbb{R}^{n_u} \) is the input vector, \( v(t) \in \mathbb{R}^{n_v} \) is the process noise vector, \( f \) is the state equation, \( y(t) \in \mathbb{R}^{n_y} \) is the output vector, \( n(t) \in \mathbb{R}^{n_n} \) is the measurement noise vector, and \( h \) is the output equation. This form represents a general nonlinear model with no restrictions on the functional forms of \( f \) or \( h \). Further, the noise terms may be coupled in a nonlinear way with the states and parameters. The parameters \( \theta(t) \) evolve in an unknown way, but are typically considered to be constant in practice. The goal is to predict EOL (and/or RUL) at a given time point \( t_p \) using the discrete sequence of observations up to time \( t_p \), denoted as \( y_{0:t_p} \). EOL is defined as the time point at which the component no longer meets a functional requirement (e.g., a pump is overheated). This point is often linked to a damage threshold, beyond which the component fails to function properly. In general, we may express this threshold as a function of the system state and parameters, \( T_{EOL}(x(t), \theta(t)) \), which determines whether EOL has been reached, where \[ T_{EOL}(x(t), \theta(t)) = \begin{cases} 1, & \text{if EOL is reached} \\ 0, & \text{otherwise.} \end{cases} \] The EOL threshold is linked to a boundary in the multi-dimensional damage space. Inside the boundary, \( T_{EOL}(x(t), \theta(t)) = 0 \), and outside the boundary, \( T_{EOL}(x(t), \theta(t)) = 1 \). Fig. 1 illustrates this concept with a two-dimensional example, with damage dimensions \( d_1 \) and \( d_2 \). 
The dimensions are normalized such that \( d_1 = 1 \) corresponds to the maximum allowable damage for \( d_1 \) when \( d_2 = 0 \), and \( d_2 = 1 \) corresponds to the maximum allowable damage for \( d_2 \) when \( d_1 = 0 \). If the different damage mechanisms are considered independently, then the space where \( T_{EOL}(x(t), \theta(t)) = 0 \) would be defined by the space within the dashed lines in the figure. In higher dimensions, this space forms a hypercube. However, in general, the different damage mechanisms cannot be considered independently in defining EOL, because increased damage along one dimension may either allow a greater amount of damage or restrict the allowable amount of damage along another damage dimension. For example, in a normally-closed valve, where EOL is defined by opening and closing times, friction damage will cause the valve to open more slowly, but a weakening of the return spring will allow the valve to open more quickly. So, the actual EOL threshold may take on a more complex form, as shown by the shaded area in Fig. 1. In the regions of the space where \( T_{EOL}(x(t), \theta(t)) = 0 \) that extend beyond the hypercube, more damage is allowed, and in the regions that fall within the hypercube, damage is restricted further. Using \( T_{EOL} \), we can formally define EOL as
\[ EOL(t_P) \equiv \inf \{ t \in \mathbb{R} : t \geq t_P \text{ and } T_{EOL}(x(t), \theta(t)) = 1 \}, \]
i.e., EOL is the earliest time point at which the damage threshold is met. RUL may then be defined as
\[ RUL(t_P) \equiv EOL(t_P) - t_P. \]
Note that we are interested in the EOL formed by the combined effects of all damage progression paths, so they must be considered simultaneously, rather than independently. In practice, many sources of uncertainty exist that affect the prediction. Noise is inherent in the process and the measurements, represented by the noise terms \( v(t) \) and \( n(t) \), respectively. Further, the future inputs of the system, which affect the evolution of the state, and therefore the progression of damage, are not always known. Certain input profiles may also excite some damage mechanisms more than others. Thus, it is much more useful to compute a probability distribution of the EOL or RUL, rather than a single prediction point. The goal, then, is to compute, at time $t_P$, $p(EOL(t_P)|y_{0:t_P})$ or $p(RUL(t_P)|y_{0:t_P})$.

**Prognostics Architecture**

In our model-based approach, we develop detailed physics-based models of components and systems that include descriptions of how fault parameters evolve in time. These models depend on unknown and possibly time-varying wear parameters, $\theta$. Therefore, our solution to the prognostics problem takes the perspective of joint state-parameter estimation. In discrete time $k$, we estimate $x_k$ and $\theta_k$, and use these estimates to predict EOL and RUL at desired time points. We employ the prognostics architecture in Fig. 2. The system is provided with inputs $u_k$ and provides measured outputs $y_k$. Prognostics may begin at $t = 0$, with the damage estimation module determining estimates of the states and unknown parameters, represented as a probability distribution $p(x_k, \theta_k|y_{0:k})$. In parallel, a fault detection, isolation, and identification (FDII) module may be used to determine which damage mechanisms are active, represented as a fault set $F$. The damage estimation module may then use this result to limit the space of parameters that must be estimated.
Alternatively, prognostics may begin only after diagnosis has completed. The prediction module uses the joint state-parameter distribution, along with hypothesized future inputs, to compute EOL and RUL as probability distributions $p(EOL|y_{0:k_P})$ and $p(RUL|y_{0:k_P})$ at given prediction times $k_P$. In this paper, we focus on the damage estimation and prediction modules, and assume that the FDII module does not inform the prognostics, i.e., all possible damage progression paths must be tracked starting from $t = 0$.

Figure 2. Prognostics architecture.

**3. PUMP MODELING**

We apply our prognostics approach to a centrifugal pump, and develop a physics-based model of its nominal and faulty behavior. Centrifugal pumps are used in a variety of domains for fluid delivery. A schematic of a typical centrifugal pump is shown in Fig. 3. Fluid enters the inlet, and the rotation of the impeller forces fluid through the outlet. The impeller is driven by an electric motor, typically a three-phase alternating-current induction motor. The radial and thrust bearings help to minimize friction along the pump shaft. The bearing housing contains oil which lubricates the bearings. A seal prevents fluid flow into the bearing housing. Wear rings prevent internal pump leakage from the outlet to the inlet side of the impeller, but a small clearance is typically allowed to minimize friction (a small internal leakage is normal).

Figure 3. Centrifugal pump.

The state of the pump is given by
$$x(t) = \begin{bmatrix} \omega(t) & T_t(t) & T_r(t) & T_o(t) \end{bmatrix}^T,$$
where $\omega(t)$ is the rotational velocity of the pump, $T_t(t)$ is the thrust bearing temperature, $T_r(t)$ is the radial bearing temperature, and $T_o(t)$ is the oil temperature. The rotational velocity of the pump is described using a torque balance,
$$\dot{\omega} = \frac{1}{J} \left( \tau_e(t) - r\omega(t) - \tau_L(t) \right),$$
where $J$ is the lumped motor/pump inertia, $\tau_e$ is the electromagnetic torque provided by the motor, $r$ is the lumped friction parameter, and $\tau_L$ is the load torque. In an induction motor, a voltage is applied to the stationary part, the stator, which creates a current through the stator coils. With a polyphase supply, this creates a rotating magnetic field which induces a current in the rotating part, the rotor, causing it to turn. A torque is produced on the rotor only when there is a difference between the synchronous speed of the supply voltage, $\omega_s$, and the mechanical rotation, $\omega$. This difference, called slip, is defined as
\[ s = \frac{\omega_s - \omega}{\omega_s}. \]
The expression for the torque \( \tau_e \) is derived from an equivalent circuit representation for the three-phase induction motor, shown in Fig. 4, based on rotor and stator resistances and inductances, and the slip \( s \) [11]:
\[ \tau_e = \frac{n p R_2}{s \omega_s} \left( \frac{V_{rms}^2}{(R_1 + R_2/s)^2 + (\omega_s L_1 + \omega_s L_2)^2} \right), \]
where \( R_1 \) is the stator resistance, \( L_1 \) is the stator inductance, \( R_2 \) is the rotor resistance, \( L_2 \) is the rotor inductance, \( n \) is the number of phases (typically 3), and \( p \) is the number of magnetic pole pairs. For a 3600 rpm motor, \( p = 1 \). The dependence of torque on slip creates a feedback loop that causes the rotor to follow the rotation of the magnetic field. The rotor speed may be controlled by changing the input frequency \( \omega_s \), e.g., through the use of a variable-frequency drive.
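As a minimal sketch (not the authors' implementation) of the slip, electromagnetic-torque, and torque-balance relations above, the following Python fragment may help fix the notation; all parameter values are placeholders, and the torque expression is undefined at zero slip.

```python
def slip(omega, omega_s):
    """Slip between the synchronous speed omega_s and the mechanical rotation omega."""
    return (omega_s - omega) / omega_s

def electromagnetic_torque(omega, omega_s, V_rms, R1, R2, L1, L2, n=3, p=1):
    """Equivalent-circuit torque of the three-phase induction motor (undefined at s = 0)."""
    s = slip(omega, omega_s)
    return (n * p * R2 / (s * omega_s)) * V_rms**2 / (
        (R1 + R2 / s) ** 2 + (omega_s * L1 + omega_s * L2) ** 2)

def omega_dot(omega, tau_e, tau_L, J, r):
    """Torque balance: J * d(omega)/dt = tau_e - r*omega - tau_L."""
    return (tau_e - r * omega - tau_L) / J
```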
The load torque \( \tau_L \) is a polynomial function of the flow rate through the pump and the impeller rotational velocity [5, 6]:
\[ \tau_L = a_0 \omega^2 + a_1 \omega Q - a_2 Q^2, \]
where \( Q \) is the flow, and \( a_0, a_1, \) and \( a_2 \) are coefficients derived from the pump geometry [6]. The rotation of the impeller creates a pressure difference from the inlet to the outlet of the pump, which drives the pump flow, \( Q \). The pump pressure is computed as
\[ p_p = A \omega^2 + b_1 \omega Q - b_2 Q^2, \]
where \( A \) is the impeller area, and \( b_1 \) and \( b_2 \) are coefficients derived from the pump geometry. Flow through the impeller, \( Q_i \), is computed using the pressure differences:
\[ Q_i = c \sqrt{|p_s + p_p - p_d|}\, \text{sign}(p_s + p_p - p_d), \]
where \( c \) is a flow coefficient, \( p_s \) is the suction pressure, and \( p_d \) is the discharge pressure. The small (normal) leakage flow from the discharge end to the suction end due to the clearance between the wear rings and the impeller is described by
\[ Q_l = c_l \sqrt{|p_d - p_s|}\, \text{sign}(p_d - p_s), \]
where \( c_l \) is a flow coefficient. The discharge flow, \( Q \), is then
\[ Q = Q_i - Q_l. \]
Pump temperatures are often monitored as indicators of pump condition. The oil heats up due to the radial and thrust bearings and cools to the environment:
\[ \dot{T}_o = \frac{1}{J_o} \left( H_{o,1}(T_t - T_o) + H_{o,2}(T_r - T_o) - H_{o,3}(T_o - T_a) \right), \]
where \( J_o \) is the thermal inertia of the oil, and the \( H_{o,i} \) terms are heat transfer coefficients. The thrust bearings heat up due to the friction between the pump shaft and the bearings, and cool to the oil and the environment:
\[ \dot{T}_t = \frac{1}{J_t} \left( r_t \omega^2 - H_{t,1}(T_t - T_o) - H_{t,2}(T_t - T_a) \right), \]
where \( J_t \) is the thermal inertia of the thrust bearings, \( r_t \) is the friction coefficient for the thrust bearings, and the \( H_{t,i} \) terms are heat transfer coefficients. The radial bearings behave similarly:
\[ \dot{T}_r = \frac{1}{J_r} \left( r_r \omega^2 - H_{r,1}(T_r - T_o) - H_{r,2}(T_r - T_a) \right), \]
where \( J_r \) is the thermal inertia of the radial bearings, \( r_r \) is the friction coefficient for the radial bearings, and the \( H_{r,i} \) terms are heat transfer coefficients. Note that \( r_t \) and \( r_r \) contribute to the overall friction coefficient \( r \). The overall input vector \( u \) is given by
\[ u(t) = \begin{bmatrix} p_s(t) & p_d(t) & T_a(t) & V(t) & \omega_s(t) \end{bmatrix}^T. \]
The measurement vector \( y \) is given by
\[ y(t) = \begin{bmatrix} \omega(t) & Q(t) & T_t(t) & T_r(t) & T_o(t) \end{bmatrix}^T. \]
Fig. 5 shows nominal pump operation. The input voltage (and frequency) is varied to control the pump speed. Electromagnetic torque is produced initially because the slip is 1. This causes the rotation of the motor to match the rotation of the magnetic field, with a small amount of slip remaining, depending on how large the load torque is. As the pump rotates, fluid flow is created. The bearings heat up as the pump rotates and cool when the pump rotation slows.

**Damage Modeling**

The most significant forms of damage for pumps are impeller wear, caused by cavitation and erosion by the flow, and bearing failure, caused by friction-induced wear of the bearings. In each case, we map the damage to a particular parameter in the nominal model, and this parameter becomes a state variable in \( x(t) \) that evolves by a damage progression function.
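The hydraulic and thermal relations above can likewise be sketched in code (illustrative only; in particular, the algebraic coupling between $p_p$ and $Q$ is resolved here by a simple fixed-point iteration, which is an assumption about the implementation, not the paper's method).

```python
import math

def discharge_flow(omega, p_s, p_d, A, b1, b2, c, c_l, n_iter=50):
    """Resolve the p_p(Q) / Q_i coupling by fixed-point iteration and return Q."""
    Q = 0.0
    for _ in range(n_iter):
        p_p = A * omega**2 + b1 * omega * Q - b2 * Q**2
        dp = p_s + p_p - p_d
        Q_i = c * math.copysign(math.sqrt(abs(dp)), dp)                   # impeller flow
        Q_l = c_l * math.copysign(math.sqrt(abs(p_d - p_s)), p_d - p_s)   # wear-ring leakage
        Q = Q_i - Q_l
    return Q

def temperature_rates(omega, T_t, T_r, T_o, T_a, p):
    """Oil and bearing temperature ODEs: friction heating, cooling to oil and ambient."""
    dT_o = (p['Ho1'] * (T_t - T_o) + p['Ho2'] * (T_r - T_o)
            - p['Ho3'] * (T_o - T_a)) / p['Jo']
    dT_t = (p['rt'] * omega**2 - p['Ht1'] * (T_t - T_o)
            - p['Ht2'] * (T_t - T_a)) / p['Jt']
    dT_r = (p['rr'] * omega**2 - p['Hr1'] * (T_r - T_o)
            - p['Hr2'] * (T_r - T_a)) / p['Jr']
    return dT_t, dT_r, dT_o
```

The damage progression functions introduced next act on $A$, $r_t$, and $r_r$ on top of these nominal dynamics.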
These functions are parameterized by a set of unknown wear parameters, forming the unknown parameter vector \( \theta(t) \). Impeller wear is represented as a decrease in impeller area $A$ [7, 9]. We use the erosive wear equation [12]. The erosive wear rate is proportional to fluid velocity times friction force. Fluid velocity is proportional to volumetric flow rate, and friction force is proportional to fluid velocity. We lump the proportionality constants into the wear coefficient $w_A$ to obtain
$$ \dot{A} = -w_A Q^2. $$
A decrease in the impeller area will decrease the pump pressure, which, in turn, reduces the delivered flow and, therefore, pump efficiency. The pump must operate at a certain minimal efficiency. This requirement defines an EOL criterion. We define $A^-$ as the minimum value of the impeller area at which this requirement is met; hence, $T_{EOL} = 1$ if $A(t) < A^-$. Bearing wear is captured as an increase in friction. Sliding and rolling friction generate wear of material which increases the coefficient of friction [1, 12]:
$$ \begin{align*} \dot{r}_t(t) &= w_t r_t \omega^2 \\ \dot{r}_r(t) &= w_r r_r \omega^2, \end{align*} $$
where $w_t$ and $w_r$ are the wear coefficients. The slip compensation provided by the electromagnetic torque generation masks small changes in friction, so it is only with very large increases that a change in $\omega$ will be observed. These changes can be observed much more readily through the bearing temperatures. Limits on the maximum values of these temperatures define EOL for bearing wear. We define $r^+_t$ and $r^+_r$ as the maximum permissible values of the friction coefficients before the temperature limits are exceeded over a typical usage cycle. So, $T_{EOL} = 1$ if $r_t(t) > r^+_t$ or $r_r(t) > r^+_r$. Vibration and acceleration sensors have also been used in pumps for bearing monitoring, e.g., in [8]; however, with such methods it is difficult to map changes in vibration back to changes in the thrust bearings, radial bearings, or both, while also quantifying the amount of damage.

4. DAMAGE ESTIMATION

In model-based prognostics, damage estimation reduces to joint state-parameter estimation, i.e., computation of $p(x_k, \theta_k | y_{0:k})$. A general solution to this problem is the particle filter, which may be directly applied to nonlinear systems with non-Gaussian noise terms [13]. In particle filters, the state distribution is approximated by a set of discrete weighted samples, called particles. With particle filters, the particle approximation to the state distribution is given by
$$ \{ (x^i_k, \theta^i_k), w^i_k \}_{i=1}^N, $$
where $N$ denotes the number of particles, and for particle $i$, $x^i_k$ denotes the state vector estimate, $\theta^i_k$ denotes the parameter vector estimate, and $w^i_k$ denotes the weight. The posterior density is approximated by
$$ p(x_k, \theta_k \mid y_{0:k}) \approx \sum_{i=1}^N w^i_k\, \delta_{(x^i_k, \theta^i_k)}(dx_k\, d\theta_k), $$
where $\delta_{(x^i_k, \theta^i_k)}(dx_k\, d\theta_k)$ denotes the Dirac delta measure located at $(x^i_k, \theta^i_k)$. We use the sampling importance resampling (SIR) particle filter, using systematic resampling [14]. The pseudocode for a single step of the SIR filter is shown as Algorithm 1. Each particle is propagated forward to time $k$ by first sampling new parameter values, and then sampling new states using the model. The particle weight is assigned using $y_k$. The weights are then normalized, followed by the resampling step [13].
Algorithm 1 SIR Filter
Inputs: $\{(x^i_{k-1}, \theta^i_{k-1}), w^i_{k-1}\}_{i=1}^N$, $u_{k-1:k}$, $y_k$
Outputs: $\{(x^i_k, \theta^i_k), w^i_k\}_{i=1}^N$
for $i = 1$ to $N$ do
  $\theta^i_k \sim p(\theta_k \mid \theta^i_{k-1})$
  $x^i_k \sim p(x_k \mid x^i_{k-1}, \theta^i_{k-1}, u_{k-1:k})$
  $w^i_k \leftarrow p(y_k \mid x^i_k, \theta^i_k, u_k)$
end for
$W \leftarrow \sum_{i=1}^N w^i_k$
for $i = 1$ to $N$ do
  $w^i_k \leftarrow w^i_k / W$
end for
$\{(x^i_k, \theta^i_k), w^i_k\}_{i=1}^N \leftarrow \text{Resample}\left(\{(x^i_k, \theta^i_k), w^i_k\}_{i=1}^N\right)$

In order for the particle filter to estimate the parameters, we must assign some type of evolution to the parameters. The typical solution is to use a random walk, i.e., for parameter $\theta$, $\theta_k = \theta_{k-1} + \xi_{k-1}$, where $\xi_{k-1}$ is sampled from some distribution (e.g., zero-mean Gaussian). With this type of evolution, the particles generated with parameter values closest to the true values should be assigned higher weight, thus allowing the particle filter to converge to the true values. The selected variance of the random walk noise determines both the rate of this convergence and the estimation performance once convergence is achieved. Therefore, it is very desirable to tune this parameter to obtain the best possible performance. A large random walk variance yields quick convergence but subsequent tracking with a wide variance, whereas too small a random walk variance yields very slow convergence, if any, but, once convergence is achieved, tracking proceeds with a very small variance. One approach is to use kernel shrinkage, in which the random walk noise is diminished over time [15]. This approach assumes that the parameter is constant, but in reality, this may not be the case, so some amount of noise should still be included to account for unmodeled deviations in the parameter value over time. In [16], this noise (viewed as a hyper-parameter) is tuned using outer correction loops based on prediction error. In that work, the underlying prognostic model is assumed to contain only a single fault dimension, so the method cannot be applied directly in our case. We develop a $\xi$ adaptation method similar to [16], but with some key distinguishing features. First, we consider a multi-dimensional damage space; therefore, we must simultaneously adapt the random walk noise for multiple parameter values. Second, we cannot use prediction error to drive the adaptation, because we cannot, in general, map errors in prediction to specific wear parameters, since each output is dependent on multiple damage mechanisms. Instead, we try to control the variance of the hidden wear parameter estimate to a user-specified range by modifying the random walk noise variance. Since the random walk noise is artificial, we should reduce it as much as possible, because this uncertainty propagates into the EOL predictions. So, controlling this uncertainty helps to control the uncertainty of the EOL prediction. Reducing the variance of the wear parameter can reduce the variance of the EOL prediction by several factors, and the improvement is substantial over long time horizons.
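As a compact, illustrative sketch (not the authors' implementation) of Algorithm 1 together with the random-walk parameter evolution just described, the following Python fragment uses systematic resampling; the state-transition function `f`, measurement function `h`, noise levels, and array layout are assumptions made for the example.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: returns indices of the particles to keep."""
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    return np.searchsorted(np.cumsum(weights), positions)

def sir_step(x, theta, y_k, u_k, f, h, xi_std, meas_std, rng):
    """One SIR step. x: (N, nx) states, theta: (N, ntheta) parameters."""
    theta = theta + rng.normal(0.0, xi_std, size=theta.shape)   # random-walk parameter evolution
    x = f(x, theta, u_k) + rng.normal(0.0, 1e-3, size=x.shape)  # state propagation + process noise
    innov = y_k - h(x, theta, u_k)                              # residuals against the measurement
    w = np.exp(-0.5 * np.sum((innov / meas_std) ** 2, axis=1))  # Gaussian likelihood weights
    w /= w.sum()                                                # normalize
    idx = systematic_resample(w, rng)                           # resample
    return x[idx], theta[idx], np.full(len(w), 1.0 / len(w))
```

Here `xi_std` is the standard deviation of the random-walk noise $\xi$; it is exactly the quantity that the variance-control scheme described next adjusts online.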
The algorithm for the adaptation of the $\xi$ vector is given as Algorithm 2, and Fig. 6 shows how it interacts with the particle filter.

Algorithm 2 $\xi$ Adaptation
Inputs: $\{(x^i_k, \theta^i_k), w^i_k\}_{i=1}^N$, $\xi_{k-1}$
State: $a$
Outputs: $\xi_k$
if $k = 0$ then
  $a \leftarrow 0$
end if
for all $j \in \{1, 2, \ldots, n_\theta\}$ do
  $v_j \leftarrow \text{RMAD}(\{\theta^i_k(j)\}_{i=1}^N)$
  if $a(j) = 0$ and $v_j < T$ then
    $a(j) \leftarrow 1$
  end if
  if $a(j) = 0$ then
    $v_j^* \leftarrow v_j^0$
  else
    $v_j^* \leftarrow v_j^\infty$
  end if
  $\xi_k(j) \leftarrow \xi_{k-1}(j) \left(1 + P \dfrac{v_j - v_j^*}{v_j}\right)$
end for

We assume that the $\xi$ values are tuned initially based on the maximum expected wear rates, e.g., if the pump is expected to fail no earlier than 100 hours, then this corresponds to particular maximum wear rate values. The initial wear rate estimate values may start at 0. We use the relative median absolute deviation (RMAD) as the measure of variance:
$$\text{RMAD}(X) = 100\, \frac{\text{Median}_i(|X_i - \text{Median}_j(X_j)|)}{\text{Median}_j(X_j)},$$
where $X$ is a data set and $X_i$ is an element of that set. We use RMAD because it is statistically robust, and, since it is a relative measure of spread, it can be treated equally for any wear parameter value. The adaptation scheme resembles a proportional control law, where the error between the actual RMAD of a parameter $\theta(j)$, denoted as $v_j$ in the algorithm, and the desired RMAD setpoint, denoted as $v_j^*$ in the algorithm, is normalized by $v_j$. The error is then multiplied by a factor $P$ (e.g., $1 \times 10^{-3}$), and the corresponding variance $\xi(j)$ is increased or decreased by that percentage. We utilize two different setpoints. First, we allow for a convergence period, with setpoint $v_j^0$ (e.g., 50%). Once $v_j$ reaches $T$ (e.g., $1.2\, v_j^0$), we mark it using the $a(j)$ flag, and begin to control it to a new setpoint $v_j^\infty$ (e.g., 10%). Because there is some inertia to the process of $v_j$ changing in response to a new value of $\xi(j)$, the gain $P$ cannot be too large; otherwise $v_j$ will not converge to the desired value and will instead continually shrink and expand. In our experiments, $P = 1 \times 10^{-3}$ worked well over the entire range of values considered for each wear parameter. Ideally, the wear parameter variance would be zero, but the particle filter needs some amount of noise to accurately track the parameter. So, $v_j^\infty$ cannot be too small, and we have found that controlling to an RMAD of 10% introduces an acceptable amount of uncertainty while allowing for accurate tracking.

Algorithm 3 EOL Prediction
Inputs: $\{(x^i_{k_P}, \theta^i_{k_P}), w^i_{k_P}\}_{i=1}^N$
Outputs: $\{EOL^i_{k_P}, w^i_{k_P}\}_{i=1}^N$
for $i = 1$ to $N$ do
  $k \leftarrow k_P$
  $x^i_k \leftarrow x^i_{k_P}$
  $\theta^i_k \leftarrow \theta^i_{k_P}$
  while $T_{EOL}(x^i_k, \theta^i_k) = 0$ do
    Predict $\hat{u}_k$
    $\theta^i_{k+1} \sim p(\theta_{k+1} \mid \theta^i_k)$
    $x^i_{k+1} \sim p(x_{k+1} \mid x^i_k, \theta^i_k, \hat{u}_k)$
    $k \leftarrow k + 1$
  end while
  $EOL^i_{k_P} \leftarrow k$
end for

5. PREDICTION

Prediction is initiated at a given time $k_P$. Using the current joint state-parameter estimate, $p(x_{k_P}, \theta_{k_P}|y_{0:k_P})$, which represents the most up-to-date knowledge of the system at time $k_P$, the goal is to compute $p(EOL_{k_P}|y_{0:k_P})$ and $p(RUL_{k_P}|y_{0:k_P})$.
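As an illustration of the two procedures just listed, the sketch below implements an RMAD-based $\xi$ update in the spirit of Algorithm 2 and the forward simulation of Algorithm 3. The propagation function `step`, threshold function `t_eol`, input profile `u_hat`, and the default setpoints are illustrative assumptions (for the pump, `step` would integrate the Section 3 model together with the wear equations, and `t_eol` would check $A < A^-$, $r_t > r_t^+$, or $r_r > r_r^+$).

```python
import numpy as np

def rmad(x):
    """Relative median absolute deviation, in percent."""
    med = np.median(x)
    return 100.0 * np.median(np.abs(x - med)) / med

def adapt_xi(theta, xi, converged, T=60.0, v0=50.0, v_inf=10.0, P=1e-3):
    """Algorithm 2 sketch. theta: (N, n_theta) particle parameters; xi and
    converged are length-n_theta arrays (random-walk std's and convergence flags)."""
    for j in range(theta.shape[1]):
        v_j = rmad(theta[:, j])
        if not converged[j] and v_j < T:         # convergence period has ended
            converged[j] = True
        v_star = v_inf if converged[j] else v0   # active RMAD setpoint
        xi[j] *= 1.0 + P * (v_j - v_star) / v_j  # proportional adjustment of the noise
    return xi, converged

def predict_eol(particles, weights, k_P, step, t_eol, u_hat, dt=1.0, max_steps=100_000):
    """Algorithm 3 sketch: propagate each particle forward (no new data) until the
    EOL threshold fires; weights at k_P carry over to the EOL samples."""
    eol = np.empty(len(particles))
    for i, (x, theta) in enumerate(particles):
        k = k_P
        while not t_eol(x, theta) and (k - k_P) < max_steps:
            x = step(x, theta, u_hat(k), dt)     # hypothesized future input u_hat(k)
            k += 1
        eol[i] = k * dt
    return eol, eol - k_P * dt, weights          # EOL, RUL = EOL - prediction time, weights
```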
As discussed in Section 4, the particle filter computes
\[ p(x_{k_P}, \theta_{k_P} \mid y_{0:k_P}) \approx \sum_{i=1}^{N} w^i_{k_P}\, \delta_{(x^i_{k_P}, \theta^i_{k_P})}(dx_{k_P}\, d\theta_{k_P}). \]
We can approximate a prediction distribution \(n\) steps forward as [17]
\[ p(x_{k_P+n}, \theta_{k_P+n} \mid y_{0:k_P}) \approx \sum_{i=1}^{N} w^i_{k_P}\, \delta_{(x^i_{k_P+n}, \theta^i_{k_P+n})}(dx_{k_P+n}\, d\theta_{k_P+n}). \]
So, for a particle \(i\) propagated \(n\) steps forward without new data, we may take its weight as \(w^i_{k_P}\). Similarly, we can approximate the EOL as
\[ p(EOL_{k_P} \mid y_{0:k_P}) \approx \sum_{i=1}^{N} w^i_{k_P}\, \delta_{EOL^i_{k_P}}(dEOL_{k_P}). \]
To compute EOL, then, we propagate each particle forward to its own EOL and use that particle's weight at \(k_P\) for the weight of its EOL prediction. If an analytic solution exists for the prediction, this may be directly used to obtain the prediction from the state-parameter distribution. An analytical solution is rarely available, so the general approach to solving the prediction problem is through simulation. Each particle is simulated forward to EOL to obtain the complete EOL distribution. The pseudocode for the prediction procedure is given as Algorithm 3 [1]. Each particle \(i\) is propagated forward until \(T_{EOL}(x^i_k, \theta^i_k)\) evaluates to 1; at this point EOL has been reached for this particle. Note that prediction requires hypothesizing future inputs of the system, \(\hat{u}_k\), because damage progression is dependent on the operational conditions. For example, in the pump, an increased rotation speed will cause bearing friction to increase at a faster rate, and will cause an increased pump flow, which, in turn, will cause impeller wear to increase at a faster rate. The choice of expected future inputs depends on the knowledge about operational settings and the type of information the user is interested in, e.g., for a worst-case scenario, one would consider the pump running at its maximum rotation. Fig. 7 shows results from the simultaneous prediction of impeller wear and thrust bearing wear for \(N = 100\) (not all trajectories are shown in the lower plot). Initially, the particles have a very tight distribution of friction and impeller area damage values, but the distribution of the wear parameters, \(w_A\) and \(w_t\), is relatively large. As a result, the individual trajectories are easily distinguishable as EOL is approached. Because the damage threshold is multi-dimensional, we also show the projections of the trajectories onto the damage-time planes. The projection onto the \(A\)–\(t\) plane (right) shows the progression of \(A\) towards the \(A^-\) threshold as a function of time. The projections stop when EOL is reached, and the vertical dotted lines connecting the projections to the time axis indicate individual EOL predictions. Similarly, the projection onto the \(r_t\)–\(t\) plane (bottom) shows the progression of \(r_t\) towards the \(r_t^+\) threshold as a function of time. The dotted lines connecting to the time axis indicate EOL predictions. For some particles, \(A^-\) is reached first, while for others, \(r_t^+\) is reached first. The different EOL values, along with the particle weights, form an EOL distribution approximated by the probability mass function shown in the upper plot.

6. RESULTS

In this section, we present simulation-based experiments to analyze the performance of the prognostics algorithm in the case of multiple damage progression paths.
We first define the metrics used to evaluate the algorithm performance. We then provide detailed results for a single experiment to demonstrate the approach, followed by results summarized over a large number of experiments.

**Evaluation Metrics**

We evaluate the performance of the wear parameter estimation by quantifying estimation accuracy and spread. Accuracy is calculated using the percentage root mean square error (PRMSE), which expresses relative estimation accuracy of \( w \) as a percentage:
\[ \text{PRMSE}_w = 100 \sqrt{\text{Mean}_k \left( \left( \frac{\hat{w}_k - w_k^*}{w_k^*} \right)^2 \right)}, \]
where \( \hat{w}_k \) denotes the estimated wear parameter value at time \( k \), \( w_k^* \) denotes the true wear parameter value at \( k \), and \( \text{Mean}_k \) denotes the mean over all values of \( k \). In computing PRMSE, we ignore the initial time frame associated with convergence of the wear parameter estimate (from 0 hours up to 30% of the true EOL). We calculate the spread using RMAD as defined in Section 4. For estimation spread, for time \( k \), we compute for wear parameter \( w \), \( \text{RMAD}_{w,k} \), using the distribution of wear parameter values given by the particle set at \( k \) as the data set. We denote the average RMAD over multiple \( k \) using
\[ \overline{\text{RMAD}}_w = \text{Mean}_k(\text{RMAD}_{w,k}). \]
In computing estimation spread, we also ignore the initial time frame associated with convergence of the wear parameter estimate. For a particular prediction point \( k_P \), we compute measures of accuracy and spread for the prediction. For accuracy, we use the relative accuracy (RA) metric [10]:
\[ \text{RA}_{k_P} = 100 \left( 1 - \frac{|\text{RUL}^*_{k_P} - \text{Mean}_i(\text{RUL}^i_{k_P})|}{\text{RUL}^*_{k_P}} \right). \]
RA is averaged over each prediction point to obtain a single value that characterizes the overall accuracy, denoted as \( \overline{\text{RA}} \). We calculate prediction spread using RMAD, which we denote as \( \text{RMAD}_{\text{RUL}} \) for the RUL prediction. To obtain a single value for overall spread, RMAD is averaged over all prediction points, starting from the prediction at which a prognostics horizon (where RA is within a specified bound) is first reached, denoted using \( \overline{\text{RMAD}}_{\text{RUL}} \). Prognostics performance is summarized using the \( \alpha \)-\( \lambda \) metric, which requires that, for a given prediction time \( \lambda \), at least \( \beta \) of the RUL probability mass lies within \( \alpha \) of the true value [10].

Figure 8. Simultaneous estimation of pump wear parameters for \( N = 500 \), \( T = 60\% \), \( v_0^* = 50\% \), \( v_\infty^* = 10\% \), and \( P = 1 \times 10^{-3} \).

**Demonstration of Approach**

We first provide an example scenario to illustrate the approach. Fig. 8 shows the estimation results for the hidden wear parameters, with \( w_A^* = 2 \times 10^{-3} \), \( w_t^* = 4 \times 10^{-11} \), and \( w_r^* = 2 \times 10^{-11} \). Initially, the estimate bounds are very large; however, as the estimates begin to converge, the RMAD of each is reduced to 50% through the adaptation scheme, and then to 10%. Once convergence has occurred, tracking proceeds very well. The RMAD is maintained around 10% to the end of the experiment. The PRMSEs of the different wear parameters are correspondingly low, with \( \text{PRMSE}_{w_A} = 4.36 \), \( \text{PRMSE}_{w_t} = 3.60 \), and \( \text{PRMSE}_{w_r} = 5.51 \).
The mean RMADs of the wear parameters are \( \overline{\text{RMAD}}_{w_A} = 8.60 \), \( \overline{\text{RMAD}}_{w_t} = 8.42 \), and \( \overline{\text{RMAD}}_{w_r} = 8.29 \), which are less than the controlled value of 10%. Prediction performance is shown by the \( \alpha \)-\( \lambda \) plot of Fig. 9. Impeller wear damage dominates the EOL prediction. The accurate and precise wear parameter estimates yield correspondingly accurate and precise RUL predictions. Here, \( \alpha = 0.1 \) and \( \beta = 0.5 \), so the \( \alpha \)-\( \lambda \) test requires that 50% of the probability mass lies within 10% of the true value at each prediction point. The test succeeds at all but the last prediction point, although the probability mass contained within the \( \alpha \)-bounds there, 49.6%, is very close to the requirement of 50%. The average RA is 97.16%. The average RMAD of the RUL distribution is 9.14%. Maintaining the variance of the wear parameter estimates also maintains the RMAD of the RUL (though not necessarily at the same setpoint).

Table 1. Estimation and Prediction Performance
<table> <thead> <tr> <th>Noise factor</th> <th>PRMSE_{w_A}</th> <th>PRMSE_{w_t}</th> <th>PRMSE_{w_r}</th> <th>RMAD_{w_A}</th> <th>RMAD_{w_t}</th> <th>RMAD_{w_r}</th> <th>RA</th> <th>RMAD_{RUL}</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>6.44</td> <td>6.64</td> <td>4.45</td> <td>8.44</td> <td>8.38</td> <td>8.30</td> <td>96.17</td> <td>10.24</td> </tr> <tr> <td>10</td> <td>5.38</td> <td>2.64</td> <td>3.25</td> <td>8.55</td> <td>8.76</td> <td>8.53</td> <td>96.79</td> <td>10.68</td> </tr> <tr> <td>100</td> <td>4.60</td> <td>2.71</td> <td>2.40</td> <td>9.12</td> <td>8.82</td> <td>8.88</td> <td>93.99</td> <td>11.65</td> </tr> </tbody> </table>

**Simulation Results**

We performed a number of simulation experiments in which combinations of wear parameter values were selected randomly within a range, with \(N = 500\). We selected values in \([0.5 \times 10^{-3}, 4 \times 10^{-3}]\) at increments of \(0.5 \times 10^{-3}\) for \(w_A\), in \([0.5 \times 10^{-11}, 7 \times 10^{-11}]\) at increments of \(0.5 \times 10^{-11}\) for \(w_t\), and in \([0.5 \times 10^{-11}, 7 \times 10^{-11}]\) at increments of \(0.5 \times 10^{-11}\) for \(w_r\), such that the maximum wear rates corresponded to a minimum EOL of 20 hours. In order to confirm that the wear parameter variance could still be maintained with additional sensor noise, we increased the sensor noise variance by factors of 1, 10, and 100, and performed 20 experiments for each case. We considered the case where the future input of the pump is known, and it is always operated at a constant RPM. Hence, the only uncertainty present is that involved in the noise terms and that introduced by the particle filtering algorithm. The averaged estimation and prediction performance results are shown in Table 1. In all experiments, we used \(T = 60\%\), \(v_0^* = 50\%\), \(v_\infty^* = 10\%\), and \(P = 1 \times 10^{-3}\). In each of the cases, the PRMSE for the different wear parameter estimates remained at most around 6.6% for the normal amount of noise, and under 5% for increased noise. We attribute the higher PRMSE of the normal-noise cases to a couple of outlier scenarios where convergence was slower, throwing off the estimate early on. In these cases, the median PRMSEs were under 5%.
The PRMSE for \(w_A\) is on average higher than that for the bearing wear parameters because the flow measurement \(Q\) is relatively more noisy than the temperature measurements \(T_t\) and \(T_r\). The RMAD of each wear parameter was successfully controlled to 10%, averaging around 8 to 9%. This translated to good prediction performance, with the RA averaging around 96% and the RMAD of the RUL prediction averaging around 11%. Even as the noise increased, the variance control scheme was able to maintain the RMAD setpoint, and so \(\text{RMAD}_{\text{RUL}}\) increased only slightly as sensor noise increased. Fig. 10 shows the RMAD of the wear parameters as a function of wear parameter value. Here, it is clear that the RMAD can be controlled well independently of the wear parameter value. Performance is similar across different wear parameters and their values, translating to the similar prediction performance observed across different wear parameter values.

Figure 10. RMAD of the wear parameter as a function of wear parameter value.

7. CONCLUSIONS

We investigated the issues of multiple damage progression paths and developed a model-based prognostics methodology to accommodate them. Damage progression paths are characterized by a fault or damage variable and a set of wear parameters that describe how they evolve in time. Particle filters perform joint state-parameter estimation in order to estimate the health state of the component. The state-parameter distribution is then extrapolated to the EOL threshold to compute EOL and RUL predictions in the presence of multiple damage progression paths. A novel variance control mechanism keeps the uncertainty necessary for proper functioning of the particle filter in check, in order to maintain the uncertainty of the unknown wear parameters at a desired level. The framework was applied to a centrifugal pump, and the results demonstrated good performance over a range of wear parameter values and sensor noise levels. In higher-dimensional systems, the particle filter requires a very large number of particles to track successfully. Using only 500 particles was sufficient for good results here, but as the number of states or damage mechanisms that need to be tracked increases, the number of particles must increase as well. For large \(N\), the particle filter approach may not be efficient enough. In future work, we would like to investigate alternative approaches with reduced computational burden for high-dimensional state spaces. Also, the model-based approach presented here could possibly be complemented by data-driven methods that utilize pump vibration or acceleration sensors.

ACKNOWLEDGMENTS

The funding for this work was provided by the NASA Fault Detection, Isolation, and Recovery (FDIR) project under the Exploration Technology and Development Program (ETDP) of the Exploration Systems Mission Directorate (ESMD).

REFERENCES
Omics of endothelial cell dysfunction in sepsis Jordan C Langston¹, Michael T Rossi², Qingliang Yang³, William Ohley⁴, Edwin Perez⁴, Laurie E Kilpatrick⁵, Balabhaskar Prabhakarpandian⁶ and Mohammad F Kiani¹ ¹Department of Bioengineering, Temple University, Philadelphia, Pennsylvania, USA ²Illumina, San Diego, California, USA ³Department of Mechanical Engineering, Temple University, Philadelphia, Pennsylvania, USA ⁴Lewis Katz School of Medicine, Temple University, Philadelphia, Pennsylvania, USA ⁵Center for Inflammation and Lung Research, Department of Microbiology, Immunology and Inflammation, Lewis Katz School of Medicine, Temple University, Philadelphia, Pennsylvania, USA Correspondence should be addressed to M F Kiani: mkiani@temple.edu Abstract During sepsis, defined as life-threatening organ dysfunction due to dysregulated host response to infection, systemic inflammation activates endothelial cells and initiates a multifaceted cascade of pro-inflammatory signaling events, resulting in increased permeability and excessive recruitment of leukocytes. Vascular endothelial cells share many common properties but have organ-specific phenotypes with unique structure and function. Thus, therapies directed against endothelial cell phenotypes are needed to address organ-specific endothelial cell dysfunction. Omics allow for the study of expressed genes, proteins and/or metabolites in biological systems and provide insight on temporal and spatial evolution of signals during normal and diseased conditions. Proteomics quantifies protein expression, identifies protein–protein interactions and can reveal mechanistic changes in endothelial cells that would not be possible to study via reductionist methods alone. In this review, we provide an overview of how sepsis pathophysiology impacts omics with a focus on proteomic analysis of mouse endothelial cells during sepsis/inflammation and its relationship with the more clinically relevant omics of human endothelial cells. We discuss how omics has been used to define septic endotype signatures in different populations with a focus on proteomic analysis in organ-specific microvascular endothelial cells during sepsis or septic-like inflammation. We believe that studies defining septic endotypes based on proteomic expression in endothelial cell phenotypes are urgently needed to complement omic profiling of whole blood and better define sepsis subphenotypes. Lastly, we provide a discussion of how in silico modeling can be used to leverage the large volume of omics data to map response pathways in sepsis. Introduction Sepsis is a clinical syndrome defined as life-threatening organ dysfunction due to dysregulated host response to infection (1). It is a major health issue with the number of cases ranging from 19 to 50 million per year and is a leading cause of death globally (2). Sepsis can be caused by primary bacterial, fungal or viral infections or secondary infections that can develop following non-infectious insults such as burn or trauma (3). Sepsis is a heterogeneous syndrome... and diagnosis is complicated due to the broad spectrum of non-specific clinical features (3). In addition, the clinical course is impacted by individual factors relating to infection source, (epi)genetics, comorbidities or demographics (1, 4). 
Furthermore, there are a multitude of biological signals that play a role in interconnecting pathways, making it difficult to define clinically relevant endpoints besides mortality and to establish a clear understanding of the underlying disease. This wide array of factors determining sepsis onset and response diminishes the likelihood of creating one standard treatment for the heterogeneous cohort of patients. Thus, categorizing sepsis patients into distinct endotype classes should improve the prospects of finding efficacious drugs within each class (5). In sepsis, if organ function is not maintained, organ damage can develop, leading to increased morbidity and mortality (3, 6). Particularly, the microvascular endothelium plays a key role in the development and progression of sepsis (7), but the application of omics, specifically proteomics, towards defining endotypes and unraveling the mechanisms of dysfunction of endothelial cells (ECs) in multiple organs during sepsis is in its infancy. Therapeutic approaches for the treatment of sepsis are supportive, but there are no specific pharmacologic therapies to treat the underlying pathophysiology and maintain endothelial cell function (7). In the emerging field of omics of sepsis, we believe that this review will provide an initial summary of the literature in the field as a resource and also encourage further studies (8, 9, 10, 11). In particular, as sepsis is a complex process, proteomics provides a quantitative analysis of the protein changes that can help bridge the genotype–phenotype gap (12). Specifically, since proteins are involved in every biological phenomenon, unraveling protein–protein interactions (PPIs) is crucial for identifying pathways contributing to disease (12, 13, 14). In this regard, omics analysis can further our understanding of the subphenotypes of the disease and, in combination with laboratory and clinical variables, suggest future studies with clinical relevance. In this review, we summarize how omics of various ECs is leveraged to better describe sepsis progression, define sepsis subphenotypes and identify novel therapeutic targets. We not only discuss how genomics has been used to define septic endotype signatures in different populations but also focus on the application of proteomic analysis of organ-specific microvascular ECs during sepsis or septic-like inflammation which has not been reviewed before. Lastly, we provide a brief discussion of how in silico modeling can be used to leverage the large volume of omics data for mapping endothelial response pathways in sepsis. An overview of sepsis, endothelium and omics Sepsis and the role of the endothelium The vascular endothelium is a single layer of cells lining the tunica intima (inner layer) of blood vessels (15). The endothelium regulates several physiological functions including vascular tone, permeability and immune response (15). The endothelium of different organs shows heterogeneity in function and morphology, and organ-specific ECs exhibit distinct barrier properties and interactions with immune cells (16). During sepsis, an intense systemic inflammatory response develops in response to pathogen-associated molecular patterns (PAMPs) (7). 
This systemic inflammation activates a cascade of pro-inflammatory events that results in leukocyte dysregulation and an altered endothelial phenotype, producing increased barrier permeability, coagulation and neutrophil trafficking into critical organs; this results in host tissue damage and multiple organ dysfunction syndrome (MODS) (3, 7, 17). Specifically, neutrophils and ECs engage in crosstalk that leads to neutrophil rolling, adhesion and migration across ECs via a multifactorial process controlled by concurrent chemoattractant-dependent signals, hemodynamic shear forces and adhesive events (18). While neutrophils are crucial to host defense, neutrophil dysregulation plays a critical role in the early death of ECs through the release of proteases and the formation of neutrophil extracellular traps (NETs) (17, 18). Subsequently, EC dysfunction induces activation of the complement and coagulation cascades and disseminated intravascular coagulation (DIC) (19). To date, there are no gold-standard diagnostic measures for sepsis, which complicates hypothesis-driven studies searching for individual biomarkers or therapeutic targets (1). Mechanistic computational modeling based on multi-omic analysis can provide a rational basis for understanding the pathophysiology of sepsis, sepsis phenotypes and the design of clinically relevant therapeutics (20). Omics for understanding disease and developing therapeutics Since the development of the first genome sequencing method, technologies that further allow the quantification and identification of genes, RNA transcripts, proteins and metabolites (Fig. 1) have been instrumental for understanding disease mechanisms and identifying intervention targets for pathological conditions such as cancer (21). The field of systems biology represents a leap forward from reductionist methods by providing the ability to quantify the entire state of a biological system in the context of the four major classes of biomolecules (DNA, RNA, protein, metabolites) (14). Genomics focuses on whole-genome sequencing, while transcriptomics focuses on RNA-sequencing (RNA-seq) and analyzing differential RNA transcript expression patterns (14). Single-cell RNA-seq (scRNA-seq) is an emerging technology that captures differential transcript expression from individual cells. This technique permits the evaluation of biological events at a greater resolution compared to performing bulk RNA-seq (22). Thus, incorporating scRNA-seq in studying endothelial cell heterogeneity during sepsis would be beneficial in characterizing organ-specific omic expression patterns. Proteomics quantifies differentially expressed proteins (DEPs) in a biological sample, while metabolomics analyzes metabolites within a cell (23). In proteomics and metabolomics, liquid chromatography (LC) methodologies separate complex mixtures based on size, resin affinity or charge; mass spectrometry (MS) ionizes and fragments protein mixtures into peptides; and nuclear magnetic resonance (NMR) spectroscopy is used to determine molecular structure (14). Additionally, newer mass spectrometry technologies help capture the heterogeneity of cell response during sepsis by measuring low-abundance proteins in samples, improving detection of peptides and increasing sensitivity over traditional 2-D-LC proteomic assays (24). Once proteins are fragmented, protein databases are used to identify the targeted protein(s) of interest (23).
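To make the preceding description concrete, the short sketch below illustrates, with synthetic numbers, how differentially expressed proteins might be flagged from a protein-abundance matrix by log2 fold change and a two-sample t-test; real pipelines add normalization, missing-value handling and multiple-testing correction, and the thresholds here are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

# Synthetic protein-abundance matrix (e.g., LC-MS intensities): rows = samples, columns = proteins.
rng = np.random.default_rng(1)
n_proteins = 500
control = rng.lognormal(mean=10, sigma=1, size=(6, n_proteins))   # 6 control samples
septic  = rng.lognormal(mean=10, sigma=1, size=(6, n_proteins))   # 6 septic samples

# Log2 fold change of mean abundance, plus a per-protein two-sample t-test on log2 values.
log2fc = np.log2(septic.mean(axis=0) / control.mean(axis=0))
t_stat, p_val = stats.ttest_ind(np.log2(septic), np.log2(control), axis=0)

# Simple (illustrative) DEP cutoff: |log2FC| > 1 and p < 0.05.
deps = np.where((np.abs(log2fc) > 1.0) & (p_val < 0.05))[0]
print(f"{len(deps)} candidate DEPs out of {n_proteins}")
```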
Omics can help with molecular sub-typing of specific diseases and tailoring of treatment strategies for different patient groups by analyzing large amounts of data to characterize biomolecule expression (25), enabling the development of next-generation therapeutics for complex, poorly characterized diseases such as sepsis. Omics provide tools for the characterization of biomolecule expression in a tempo-spatial manner, thus allowing us to quantify the dynamics of pathway signaling during disease progression (25, 26). Furthermore, omics can generate hypothesis-driven experiments and identify pathways and biomolecules from samples a priori which can then be tested in experimental models to investigate the role of the identified biomolecules in signaling pathways (27). Omics in septic research and endothelial dysfunction Omics in sepsis research Given the dynamic nature of sepsis, omic analysis, combined with clinical input regarding the stage of the disease, can be used to characterize pathologically relevant biomarkers (4). For example, omics can be particularly useful in sepsis research for discovering (a) biomarkers to differentiate between infectious and non-infectious sources, (b) prognostic biomarkers, (c) biomarkers that aid in sepsis therapy and (d) biomarkers to predict individual patient response to therapy (28). Discovering these synergistic combinations of biomarkers is of high interest, given the fact that no single biomarker is sensitive and specific enough to capture the entirety of an individual’s septic condition (29). A recent study found 60 biomarkers that were able to distinguish between sepsis and systemic inflammatory response syndrome (SIRS), but only 7 of these contain sufficient data for further evaluation (30). One of them is PCT, which is the only FDA-approved sepsis biomarker; the other six are presepsin, CRP, IL6, sTREM1, LBP and CD64 (30, 31). PTX-3 is another biomarker that has been studied in septic shock (32). Limitations of these biomarkers include: low diagnostic and prognostic accuracy when used alone, lack of studies directly comparing one over another, variability of concentration during early or late-stage sepsis and lack of standardized diagnostic cut-off values (29, 30). Comprehensive reviews of sepsis biomarkers can be found elsewhere (29, 30). Several studies have proposed classification systems that stratify sepsis patients into unique endotypes based on genomic data and/or modeling approaches (5, 11, 33, 34, 35). The success in stratifying septic patients into endotypes and in associating these features with clinical outcomes illustrates the clinical relevance of the heterogeneous aspects of sepsis. The papers serve as blueprints for precision medicine to reconsider therapeutic approaches on a patient-by-patient basis depending on individual omic profiling. While research has elucidated clinical signs of these endotypes in septic patients, more investigation on the underlying biomolecules and pathways of disease is needed to establish the physiological basis for these endotypes (25). This is where the integration of systems biology and omics plays a major role. Since sepsis affects multiple cellular compartments and organs in an entropic manner, omics can capture patient-specific biomolecule expression in biological systems and, in combination with computational methods, decipher how underlying biological networks are dysregulated (36). 
Such analyses will then permit sub-typing of patients according to common clinical features (25, 26) and characterization of the underlying endotype. Table 1 shows a summary of different genomic and modeling studies that have stratified patients into sepsis endotypes with selected differential gene expression and corresponding outcomes. As shown in Table 1, a select number of genes was used to characterize the endotypes of interest in sepsis in several different population/demographic groups. The fact that different genes were identified across studies may in part be explained by the type of study (retrospective vs prospective), time of patient recruitment (months vs years), whether recruitment occurred before or after the Sepsis-3 definition (1), study population (children vs adults), demographics (country of origin, race) and time of assay. Wong et al. performed genome-wide expression profiling using whole-blood-derived RNA from 98 children with septic shock (33). Three subclasses were established via unsupervised hierarchical clustering: subclasses A, B and C. Subclass A had the highest mortality (36%), illness severity and degree of organ failure. Subclass A also showed repression of genes involved in adaptive immunity (44 genes, including LAT and TRAIT) and in zinc biology, which helps maintain homeostasis (181 genes, including ZnT family members); this cohort therefore exhibited lower adaptive immunity and increased mortality relative to the other subclasses (33). Additional pathway hits corresponding with these repressed genes included B-cell and glucocorticoid signaling, further indicating that immune-related genes were not expressed in the subclass A cohort during sepsis (33). These initial findings support the efforts to stratify patients into various endotypes based on differential omic expression in sepsis. In another study conducted within the Molecular Diagnosis and Risk Stratification of Sepsis (MARS) project (34), a clinical trial investigating sepsis endotypes in ICUs, eight genes were identified which, in specific combinations, could be used to systematically classify patients into the MARS 1, 2, 3 or 4 endotype. The MARS 1 group showed decreased innate and adaptive gene expression, MARS 2 exhibited increased cell motility and cytokine pathway expression, MARS 3 demonstrated increased adaptive immune gene expression and the MARS 4 group had increased IL6, NFKB and interferon gene expression (34). This is of particular importance: a large, complex omic signature that is sensitive enough to correctly classify patients into endotypes is impractical in the clinic, so a smaller signature would enable additional studies to evaluate its relevance in sepsis pathophysiology and to predict treatment responses on a larger scale (37). The comparison between these studies is further complicated by several factors.

Table 1 Examples of genomic and modeling studies to classify septic human patients into various endotypes. Gene definitions can be found in Supplementary Table 1.
<table> <thead> <tr> <th>Endotype classification</th> <th>Endotype outcome</th> <th>Genes</th> <th>Study population</th> <th>Reference</th> </tr> </thead> <tbody>
<tr> <td>A, B, C</td> <td></td> <td></td> <td></td> <td>(33)</td> </tr>
<tr> <td>Subclass A group</td> <td>Increased organ failure, highest mortality</td> <td>44 key adaptive immune genes (i.e. T/B-cell related such as KAT2B, SOS1, JAK2, G6, TAF1, PTPRC, MAP3K7, etc.) downregulated in subclass A compared to B and C; 181 key zinc biology-related genes (i.e. ZnT/SLC, etc.) downregulated in subclass A compared to B and C</td> <td></td> <td></td> </tr>
</tbody> </table> (Continued)
Table 1 Continued.
<table> <thead> <tr> <th>Endotype classification</th> <th>Endotype outcome</th> <th>Genes</th> <th>Study population</th> <th>Reference</th> </tr> </thead> <tbody>
<tr> <td>Subclasses B and C groups</td> <td>Decreased mortality</td> <td></td> <td></td> <td></td> </tr>
<tr> <td>SRS 1, 2</td> <td></td> <td></td> <td></td> <td>(11)</td> </tr>
<tr> <td>SRS1 group</td> <td>Higher mortality and T-cell exhaustion</td> <td>IRAK3, TOLLIP, CBL, PAG1, HIF1A, EPAS1, IL18RAP, CCR1, LDHA, GAPDH; LAT, CD247, HLA family, CIITA, RFX5, CCR3, MTOR, SIRT1, CD247</td> <td></td> <td></td> </tr>
<tr> <td>SRS2 group</td> <td>Increased cell response to infection, low mortality</td> <td>HLA family class II, T-cell and B-cell complexes</td> <td></td> <td></td> </tr>
<tr> <td>MARS 1–4</td> <td></td> <td></td> <td></td> <td>(34)</td> </tr>
<tr> <td>MARS 1 group</td> <td>Highest 28-day mortality, decreased immune gene expression</td> <td>BPGM, TAP2</td> <td></td> <td></td> </tr>
<tr> <td>MARS 2 group</td> <td>Increased cytokine pathway expression</td> <td>GADD45A, PCGF5</td> <td></td> <td></td> </tr>
<tr> <td>MARS 3 group</td> <td>Increased adaptive immunity expression, lowest 28-day mortality</td> <td>AHNAK nucleoprotein, PDCD10</td> <td></td> <td></td> </tr>
</tbody> </table> (Continued)

A further complication is that these patterns characterize differences in gene expression measured at different times. For example, in many studies outlined in Table 1, data are collected and profiled within the first 24–48 h of hospital admission; however, another study indicated that 50% of patients can change from one endotype to another within the first 5 days of hospital admission (4). Thus, tracking omic expression in a time-dependent manner is important. Although each study in Table 1 utilizes unique methods to categorize sepsis patients into its own endotype groups, there are commonalities across the different endotype groups which can be utilized to promote future therapeutic research. Many studies have performed genomic profiling of leukocytes or mononuclear cells (33, 38, 39, 40, 41, 42, 43); however, only a few studies focus on grouping patients into different endotypes. Focusing on endotype-dependent studies is critical for developing appropriate therapeutic intervention, since one needs to identify which pathway is critical to target in a particular patient, a goal of precision medicine. Among endotypes, low mortality groups (SRS 2 (11), MARS 3 (34), adaptive (35), α (5)) shared the common characterization of increased adaptive immune signaling, but high mortality groups (subclass A (33), SRS 1 (11), MARS 1 (34), inflammopathic (35), δ (5)) were not as uniformly characterized by immune status and had repressed immune function. Certain groups were characterized by hyperinflammation (inflammopathic (35), δ (5)), while others were linked to immunosuppression. Furthermore, emerging studies are beginning to evaluate different endotype signatures across populations (e.g. evaluating SRS endotype signatures in pediatric patients) to investigate their performance with respect to mortality (44).
Additional knowledge from the combination of endotypes can verify common biological targets between populations leading to an endotype, as described in other diseases such as acute respiratory distress syndrome (ARDS) (45).

Table 1 Continued.
<table> <thead> <tr> <th>Endotype classification</th> <th>Endotype outcome</th> <th>Genes</th> <th>Study population</th> <th>Reference</th> </tr> </thead> <tbody>
<tr> <td>MARS 4 group</td> <td>Increased interferon gene expression</td> <td>IFIT5, GLTSCR2/NOP53/NOL5A</td> <td></td> <td></td> </tr>
<tr> <td>Inflammopathic, adaptive, coagulopathic</td> <td></td> <td></td> <td>Retrospective study; total of 23 bacterial sepsis/inflammation datasets (12 in children, 11 in adults) were analyzed; majority of patients in the cohorts were males from first-world nations</td> <td>(35)</td> </tr>
<tr> <td>Inflammopathic group</td> <td>Highest mortality and innate immunity expression</td> <td>ARG1, LCN2, LTF, OLFM4; HLA-DMB</td> <td></td> <td></td> </tr>
<tr> <td>Adaptive group</td> <td>Lowest mortality and increased adaptive immunity expression</td> <td>YKT6, PDE4B, TWISTNB/POLR1F, BTN2A2; GADD45A, CD24, S100A12, STX1A</td> <td></td> <td></td> </tr>
<tr> <td>Coagulopathic group</td> <td>High mortality and coagulopathy</td> <td>KCNMB4, CRISP2, HTRA1, PPL; RHBDL2, ZCCHC4, YKT6, DDX6</td> <td></td> <td></td> </tr>
<tr> <td>α, β, γ, δ</td> <td></td> <td></td> <td></td> <td>(5)</td> </tr>
<tr> <td>α group</td> <td>Less organ dysfunction, normal blood tests and lowest mortality</td> <td>IL10; D-dimer, IL6, IL8, TNFa, procalcitonin, C-reactive protein</td> <td></td> <td></td> </tr>
<tr> <td>β group</td> <td>Chronic illness and renal dysfunction</td> <td>IGFBP7, COL4, TIMP2; IL10, IL6, procalcitonin, SELE, PAI1</td> <td></td> <td></td> </tr>
<tr> <td>γ group</td> <td>Increased inflammation and fever</td> <td>IL6, KIM1/HAVCR1, procalcitonin, PAI1, ICAM1, SELE; SELE, PAI1</td> <td></td> <td></td> </tr>
<tr> <td>δ group</td> <td>High coagulation and hypotension and the highest mortality</td> <td>IL10, IL6, IL8, procalcitonin, TNFa, COL4, D-dimer, PAI1, VCAM1, TAT complex; SELE, PAI1</td> <td></td> <td></td> </tr>
</tbody> </table>
MARS, Molecular Diagnosis and Risk Stratification of Sepsis; SRS, sepsis response signature.

However, there is currently a lack of standards by which common endotypes can be identified in different studies. Though biomarkers can provide valuable insight to guide therapeutic decisions and enhance patient management by preventing, for example, unnecessary antibiotic therapy (29), and commonalities between omic studies can be further validated experimentally, it is highly improbable that a universal endotype signature for sepsis can be developed, owing to the heterogeneity of the disease (37). It is therefore important to understand the different endotypes of the disease to allow the development of tailored therapeutics. It is important to note that the overall goal of endotyping is to unravel molecular subtypes of a disease, and it should not be used as a definitive prognostic tool (44). An international effort to form a standardized consensus on omic profiling procedures would be beneficial (37). Additionally, it would be useful to further validate the omic expression changes in each endotype across populations in time-lapse, multi-institutional prospective studies and to plan endotype studies that employ the current definition of sepsis (1), since many omic studies in Table 1 were conducted using the former consensus definition (33, 46, 47, 48).
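One simple way to look for the cross-study commonalities discussed above is to compare published endotype gene signatures directly as sets. The sketch below computes pairwise overlap (Jaccard index) between signatures, using a few gene symbols from Table 1 purely as placeholders rather than the full published lists; a real comparison would also need to reconcile gene identifiers and platforms across studies.

```python
def jaccard(a, b):
    """Jaccard index between two gene sets (0 = disjoint, 1 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Illustrative, deliberately incomplete signatures drawn from gene symbols in Table 1.
signatures = {
    "SRS1": {"IRAK3", "TOLLIP", "CBL", "PAG1", "HIF1A", "GAPDH"},
    "MARS1": {"BPGM", "TAP2"},
    "Inflammopathic": {"ARG1", "LCN2", "LTF", "OLFM4"},
}

studies = list(signatures)
for i, s1 in enumerate(studies):
    for s2 in studies[i + 1:]:
        shared = sorted(signatures[s1] & signatures[s2])
        print(f"{s1} vs {s2}: Jaccard = {jaccard(signatures[s1], signatures[s2]):.2f}, shared = {shared}")
```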
**Omics of microvascular endothelium in sepsis/inflammation**

Microvascular ECs play a central role in neutrophil-endothelial crosstalk, and excessive neutrophil migration leads to edema, shock and MODS (7, 17). Since sepsis progresses rapidly, and there are no standard diagnostic procedures to determine a patient's clinical condition between admission and the first course of 'treatment' (1), omics would be beneficial in determining how ECs of vital organs are impacted in the early phase of sepsis (7, 17, 27). Understanding organ-specific omics of ECs should be of high importance, due not only to their role in maintaining homeostasis and immunity but also to how their dysfunction can lead to organ failure (7, 17, 28). Furthermore, characterizing differential omic expression patterns of ECs phenotypes will help us better understand how each vascular bed responds to inflammatory insults and which gene ontologies (GO) and signaling pathways are unique to each bed or common across beds. A summary of the genomic sepsis/inflammation studies performed in mice is presented in Table 2. Though there have been concerns about whether results from mice can translate to human trials, data from mouse models are still needed to help understand the pathology of sepsis (49). Mouse models are also critical for establishing the response of ECs phenotypes to inflammatory stimuli and for the evaluation of genetic (e.g. knock-in or knock-out) or pharmacological effects in a living system, since these studies cannot be done in patients. In this section, we discuss genomic studies, followed by proteomic studies of ECs in 'Proteomics of ECs and in silico modeling of omics'. To our knowledge, there are no published metabolomic or epigenomic studies of mouse microvascular ECs challenged with an inflammatory insult. Organ-specific ECs have been stimulated with exogenous substances (e.g. LPS, bacteria) to induce septic-like conditions over different time points (e.g. 6 or 24 h) (50, 51, 52, 53) to identify unique genes, pathways or GO terms differentially expressed in various organs using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and the GO database. Overall, most of the KEGG signaling pathways, gene families (e.g. Cxcl, Tnfα, Sele, Selp) and GO terms overexpressed in the ECs beds correlate with the activation of the innate immune system (e.g. TLR signaling), leukocytes (e.g. leukocyte migration) and coagulation (54). These findings are consistent with our understanding that sepsis causes a dysregulated host response to infection, leading to activation of ECs and immunity pathways (7, 17). Additional organ-specific pathways based on endothelial-specific gene expression have also been reported (54). For example, upregulation of adipose tissue-specific ECs genes (e.g. Car3, Csf2rb) drives osteoclast differentiation, kidney-specific ECs genes (e.g. Dram1, Dkk2) aid in endocytosis, cardiac-specific ECs genes (e.g. Kcna5, Myadn) drive axon guidance and brain-specific ECs genes (e.g. Edn3, Foxj2) help maintain ErbB signaling (54). To date, most omic endothelial-based sepsis/inflammatory studies have focused on the lung, liver and brain, showing that a number of unique as well as common pathways and genes are associated with these different ECs (50, 51, 52, 53, 54). Even though common pathways are expressed among all ECs, differential gene expression still occurs. For example, Wnt signaling is a common pathway among ECs, but brain ECs show higher expression of Nkd1 or Fzd6 while liver ECs express Apc or Ep300 (54).
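The pathway and GO "hits" referred to throughout this section are typically obtained by over-representation analysis: testing whether a differentially expressed gene list contains more members of a pathway gene set than expected by chance. A minimal sketch using a hypergeometric test is shown below; the gene lists and background size are illustrative assumptions, and real analyses would use curated KEGG/GO gene sets plus multiple-testing correction.

```python
from scipy.stats import hypergeom

def enrichment_p(de_genes, pathway_genes, background_size):
    """Probability of seeing at least the observed number of pathway genes in a
    differentially expressed (DE) gene list, under a hypergeometric null."""
    de_genes, pathway_genes = set(de_genes), set(pathway_genes)
    k = len(de_genes & pathway_genes)
    # hypergeom.sf(k - 1, M, n, N): M = background genes, n = pathway size,
    # N = DE list size; sf gives P(X >= k).
    return k, hypergeom.sf(k - 1, background_size, len(pathway_genes), len(de_genes))

# Illustrative gene sets only (symbols borrowed from Table 2 for readability).
de_list = ["Sele", "Selp", "Cxcl5", "Il6", "Timp1", "Acod1"]
leukocyte_migration = ["Sele", "Selp", "Cxcl5", "Ccl3", "Il6", "Itgb2"]
k, p = enrichment_p(de_list, leukocyte_migration, background_size=20000)
print(f"Overlap = {k} genes, enrichment p = {p:.2e}")
```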
Though this initial study (54) was not done under inflammatory stimuli, these findings can provide an organ-specific understanding of the signaling mechanisms to examine during sepsis. Additionally, pathway changes in adipose tissue, mammary or adrenal glands or skeletal muscle ECs during normal or disease conditions have not been systematically studied and warrant further investigation. Furthermore, omics studies investigating intra-organ endothelial heterogeneity...

<table> <thead> <tr> <th>Reference</th> <th>Methodology</th> <th>Region</th> <th>KEGG pathway hits</th> <th>GO pathway hits</th> <th>Genes</th> </tr> </thead> <tbody>
<tr> <td>(54)</td> <td>Embryonic stem cells were differentiated into organ-specific ECs</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>AT</td> <td>Osteoclast differentiation, MAPK signaling, metabolism</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Brain</td> <td>ErbB signaling, PPAR signaling, MAPK signaling</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Diaphragm</td> <td>Toxoplasmosis, RIG-I-like signaling, apoptosis</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Heart</td> <td>Focal adhesion, axon guidance signaling, ECM-receptor interaction</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Kidney</td> <td>Endocytosis, hematopoietic cell lineage, calcium signaling</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Liver</td> <td>TGF-β signaling, complement and coagulation, hematopoietic cell lineage</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Lung</td> <td>Neuroactive ligand-receptor interaction, Wnt signaling, metabolism</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>MG</td> <td>JAK-STAT, NOD-like receptor signaling, MAPK signaling</td> <td></td> <td></td> </tr>
</tbody> </table> (Continued) Table 2 Continued.
<table> <thead> <tr> <th>Reference</th> <th>Methodology</th> <th>Region</th> <th>KEGG pathway hits</th> <th>GO pathway hits</th> <th>Genes</th> </tr> </thead> <tbody>
<tr> <td></td> <td></td> <td></td> <td></td> <td>Upregulated</td> <td>Downregulated</td> </tr>
<tr> <td></td> <td></td> <td>Pancreas</td> <td>Adherens junction, focal adhesion, MAPK signaling</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>SM</td> <td>JAK-STAT, TLR signaling, metabolism</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Trachea</td> <td>Gap junction, NOD-like receptor signaling, TLR signaling</td> <td></td> <td></td> </tr>
</tbody> </table>

Cultured brain, lung and heart ECs were stimulated with LPS for 6 and 24 h
<table> <thead> <tr> <th>Region</th> <th>Time</th> <th>GO pathway hits</th> <th>Genes</th> </tr> </thead> <tbody>
<tr> <td>Brain</td> <td>6 h</td> <td>Leukocyte migration, response to LPS</td> <td>Ccl11, Timp1, Tnfa, Il1a, Il1b, Sele, Selp</td> </tr>
<tr> <td></td> <td>24 h</td> <td>Response to chemokine, cell chemotaxis, leukocyte/neutrophil migration</td> <td>Ccl3, Timp1, Ccl11, Selp, Sele</td> </tr>
<tr> <td>Heart</td> <td>6 h</td> <td>Cell chemotaxis, leukocyte migration</td> <td>Ccl3, Sele, Selp, Cxcl5, Cxcl11, Il6, Cxcl3</td> </tr>
<tr> <td></td> <td>24 h</td> <td>Leukocyte migration, neutrophil/leukocyte chemotaxis, response to chemokine</td> <td>Cxcl5, Cxcl3, Sele, Selp, Acod1</td> </tr>
<tr> <td>Lung</td> <td>6 h</td> <td>Acute inflammatory response, cell chemotaxis</td> <td>Cxcl1, Cxcl9, Ilr2, Casp6, Il10, Ly96</td> </tr>
<tr> <td></td> <td>24 h</td> <td>Cell chemotaxis, leukocyte/neutrophil chemotaxis, leukocyte migration</td> <td>Mmp8, Il10, Acod1, Cxcl9</td> </tr>
</tbody> </table> (Continued)

<table> <thead> <tr> <th>Reference</th> <th>Methodology</th> <th>Region</th> <th>KEGG pathway hits</th> <th>GO pathway hits</th> <th>Genes</th> </tr> </thead> <tbody>
<tr> <td>(51)</td> <td>Mice were injected with LPS for 4 h prior to isolation of heart, brain, liver and lung ECs</td> <td>Kidney, brain, liver, lung, heart</td> <td></td> <td>Leukocyte migration, response to lipopolysaccharide, response to bacterium</td> <td>Sele, Selp, Cdh5, Ctnna1, Cldn5, Thb, Vwf, Jam2</td> </tr>
<tr> <td>(52)</td> <td>Mice were injected with LPS for 3 h prior to isolation of adrenal ECs</td> <td>Brain</td> <td></td> <td></td> <td>Sele, Selp, Cdh5, Ctnna1, Cldn5, Thb, Vwf, Jam2</td> </tr>
<tr> <td>(53)</td> <td>Mice were injected with influenza infection for 6 h prior to isolation of lung ECs</td> <td>Heart</td> <td></td> <td></td> <td>Sele, Selp, Cdh5, Ctnna1, Cldn5, Thb, Vwf, Jam2</td> </tr>
<tr> <td>(54)</td> <td>Mice were injected with influenza infection for 6 h prior to isolation of lung ECs</td> <td>Liver</td> <td></td> <td></td> <td>Sele, Selp, Cdh5, Ctnna1, Cldn5, Thb, Vwf, Jam2</td> </tr>
<tr> <td>(55)</td> <td>Cultured mouse brain ECs were stimulated with avian E. coli for 1–6 h</td>
<td>Lung</td> <td></td> <td>Blood vessel development, positive regulation of cell motility, sprouting angiogenesis</td> <td>Gpihbp1, Ifi47, Pivpa, Sox17, Aft3, Nr11, Nusap1, Birc5, Cd1k, Top2a, Hki1, Kdr, Aft3, Cd34</td> </tr>
<tr> <td></td> <td></td> <td>Brain</td> <td>Ribosome, legionellosis, TNF signaling, HIF-1 signaling</td> <td></td> <td>Nuclear part, intracellular part, intracellular organelle, cellular macromolecule metabolic process</td> </tr>
<tr> <td></td> <td></td> <td></td> <td>Biosynthesis of amino acids, glycolysis</td> <td></td> <td></td> </tr>
</tbody> </table>

are needed to understand how inflammatory stimuli impact different ECs of the same organ. These urgently needed studies will help in generating hypotheses that can be validated in experimental models, to enhance our understanding of how sepsis impacts ECs in various organs and to identify druggable targets. Most studies in Table 2 report genes that are upregulated, but all use either KEGG or GO to find the biological processes or signaling pathways these genes play a role in. However, only two report KEGG and/or GO hits that are downregulated (52, 55). More studies reporting which processes and pathways are downregulated are necessary to obtain a comprehensive understanding of how the functionality and structural properties of organ-specific ECs are altered in response to sepsis.

**Proteomics of ECs and in silico modeling of omics**

**Proteomics of mouse ECs**

A summary of proteomic organ-specific ECs studies in mice under septic/inflammatory conditions is outlined in Table 3. Most studies report up- and downregulated proteins and use either KEGG or GO to find the biological processes or signaling pathways in which these proteins play a role. If KEGG or GO hits were not reported, protein lists were submitted to these databases, and the hits are reported in Table 3. None of these studies report KEGG pathways and/or GO hits that are downregulated. Consistent with genomic studies, the proteins expressed correlate with the upregulation of KEGG pathways and GO hits related to coagulation, cell adhesion and immune response (56, 57, 58, 59, 60, 61, 62). Thus, protein expression corresponds with gene expression in determining the cellular pathways and biological processes overexpressed under septic-like conditions. Interestingly, studies have also shown that COVID-19, which has been described as a form of viral sepsis, significantly affects the endothelium (63). For example, the proposed KEGG COVID-19 pathway is shown to play a role in endothelial dysfunction (56), thus implicating the endothelium as a potential target for COVID-19 therapeutics. Though there have been emerging studies on the omics of COVID-19 pathogenesis and progression (64), there are currently no omic studies on endothelial dysfunction in COVID-19. Many studies discussed in Tables 2 and 3 do point to the molecular players and pathways already known in sepsis that produce a cellular phenotype. Nevertheless, differential omic analysis can provide insights for the design of future studies based on shared and unique proteins across organ-specific ECs. While certain pathways are highly upregulated among ECs (e.g. Wnt signaling), there are also organ-specific upregulated pathways such as axon guidance in cardiac ECs or endocytosis in kidney ECs (54).
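Because the proteomic studies summarized in Table 3 report both up- and downregulated proteins, a typical first processing step is to partition a DEP table by fold-change and adjusted p-value thresholds before pathway mapping. The sketch below shows that step with made-up values; the protein names (borrowed from Table 3 for readability), fold changes and cut-offs are illustrative only.

```python
def split_deps(records, lfc_cutoff=1.0, fdr_cutoff=0.05):
    """Split (protein, log2FC, FDR) records into up- and downregulated sets."""
    up = {p for p, lfc, fdr in records if lfc >= lfc_cutoff and fdr <= fdr_cutoff}
    down = {p for p, lfc, fdr in records if lfc <= -lfc_cutoff and fdr <= fdr_cutoff}
    return up, down

# Hypothetical quantification output: (protein, log2 fold change, FDR).
records = [
    ("SAA1", 3.2, 0.001),
    ("VCAM1", 1.8, 0.010),
    ("CLU", -1.5, 0.020),
    ("GSN", -0.4, 0.300),   # fails both cut-offs, so it is not called
]
up, down = split_deps(records)
print("Upregulated:", sorted(up))
print("Downregulated:", sorted(down))
```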
Additionally, organ-specific ECs express distinct cell surface proteins, and thus the variety of genes and proteins shown in Tables 2 and 3 could be classified as potential therapeutic targets to preserve the vasculature of the tissue and prevent downstream damage such as edema and MODS, which are hallmarks of sepsis damage to the endothelium (54). However, it should be noted that all of these studies were performed in mouse ECs, and further validation of these findings must be complemented with experimental models such as microphysiological systems (MPS) using human cells that recapitulate the 3-D geometry and physiologically relevant flow conditions of the microvasculature (18, 65). Other than causing differential regulation of coagulation, cell adhesion and immune response proteins in ECs, sepsis has been shown to affect the glycocalyx (a gel-like layer composed of proteoglycans coating ECs) and vascular smooth muscle cells (VSMCs) (7, 58). The synthesis of new peptides related to lipid transport (e.g. the Apo family), immunity or oxidative stress (C7) is downregulated in the glycocalyx in response to sepsis (58). Thus, designing studies investigating potential proteins that shed from the glycocalyx during sepsis would be beneficial. While proteomic studies in mouse microvascular ECs provide a better understanding of the basic mechanisms of sepsis, in vitro proteomic studies using human ECs, specifically ECs exposed to physiological and abnormal shear flow conditions (66), have been performed, providing potential relevance to clinical studies.

**Proteomics of human ECs**

ECs under shear stress convert mechanical stimuli into intracellular signals that affect cellular functions under both normal and diseased conditions. However, traditionally, proteomic expression patterns of ECs have been studied under static conditions, mostly in human umbilical vein ECs (HUVECs) (67, 68). For example, IL1B and IL33 are two cytokines released during inflammation; in HUVECs activated with these cytokines, inflammatory and cell adhesion proteins (e.g. RIPK2, SERPINB2, VCAM1) were upregulated (67, 68). Most proteins involved in molecular functions in ECs such as enzyme regulation and metabolic regulation of the cytoskeleton (e.g. cystatin-SN and profilin-1) are upregulated in inflammation (69). While HUVECs are well-established and easy-to-use in vitro models for studying ECs function, for the most part they are

Table 3 Summary of proteomic studies in mouse microvascular endothelial cells investigating differential protein expression after inflammatory/septic-like stimulation. Protein definitions can be found in Supplementary Table 3.
<table> <thead> <tr> <th>Reference</th> <th>Methodology</th> <th>Region</th> <th>KEGG pathway hits</th> <th>GO pathway hits</th> <th>Proteins</th> </tr> </thead> <tbody>
<tr> <td></td> <td></td> <td></td> <td></td> <td>Upregulated</td> <td>Downregulated</td> </tr>
<tr> <td>(56)</td> <td>Mice were injected with LPS over 48 h</td> <td>Lung</td> <td></td> <td></td> <td>SAA1, VCAM1, C4BP, C4, COL5A1, GNAI1, CFB, MLST8, ATP8, POSTN, TH, ST3GAL1, POLR2M</td> </tr>
<tr> <td></td> <td>Mice were injected with oleic acid for 6 h before lung ECs isolation</td> <td>Liver</td> <td></td> <td></td> <td></td> </tr>
<tr> <td></td> <td>MRSA was injected in mice for 24 h before kidney, liver, heart, brain and white adipose tissue ECs isolation</td> <td>Brain, kidney, heart, WAT</td> <td>Cell adhesion molecules, Staphylococcus aureus infection</td> <td>Leukocyte proliferation, cell–cell adhesion, positive regulation of cell death</td> <td>SAA1, VCAM1, CXCL9, SAA1, HPGD, APOE, MUP3</td> </tr>
</tbody> </table> (Continued) Table 3 Continued.
<table> <thead> <tr> <th>Reference</th> <th>Methodology</th> <th>Region</th> <th>KEGG pathway hits</th> <th>GO pathway hits</th> <th>Proteins</th> </tr> </thead> <tbody>
<tr> <td></td> <td></td> <td>Endothelial secretome</td> <td>Metabolic pathways, endocytosis, biosynthesis of antibiotics, complement and coagulation, viral carcinogenesis, cell adhesion; metabolic pathways, endocytosis, remodeling of epithelial adherens junction</td> <td>Inflammatory response, cell assembly and organization, DNA repair/replication</td> <td></td> </tr>
<tr> <td></td> <td></td> <td>EC</td> <td></td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Glycocalyx</td> <td></td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Vascular smooth muscle</td> <td>Metabolic pathways, endocytosis, pathways in cancer, PI3K-Akt signaling, cGAS/STING pathway</td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Heart</td> <td></td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Lung</td> <td></td> <td></td> <td></td> </tr>
</tbody> </table>
(59) Mice received a dose of cardiac radiation at 8 or 16 Gy before isolation of heart ECs 8 Gy | EIF2 signaling, remodeling of epithelial adherens junction | Inflammatory response, cell assembly and organization, DNA repair/replication | | | SAA1, HX and HPX | CLU, AZGP1, C6, CFD, TLN1, GSN, F10 | | Heart | WVF, ICAM1, LAMB, DLAT, NCL, VCP, FH1, HIST1HE, LMNB2 | (60) Mice were irradiated with a dose of 10 Gy at the thorax prior to the isolation of lung ECs 16 Gy | EIF2 signaling, actin cytoskeleton signaling | Energy production, cell–cell signaling, cell movement | | | CDH13, GDI2, LC25A4, DYN1C1H1, CLTC, GBP2, ISG15, H2D1, SERPINB2,
B2M, CASP7 | | Heart | ACADM, ACTB, CALD1, DES, EC11, MSN, PRKCDBP, TPM1, FADS1, GBE1, DNH1, MLYCD, FAM120C, ALDOC |

unsuitable models for sepsis research, since they come from large vessels of the umbilical cord, which are not considered early targets of sepsis. Also, HUVECs do not recapitulate the 3-D, morphological and functional microenvironment of organ-specific microvascular ECs (70). A study investigating the effect of a bacterial strain on human brain microvascular ECs was performed (71). Exposure to bacterial strains can disrupt the blood–brain barrier (BBB) and increase the likelihood of toxins entering the brain, resulting in sepsis-associated encephalopathy (SAE) (6, 72). Given the significant level of heterogeneity in ECs from different organs, studies such as these, together with omic analyses of EC phenotypes under flow conditions directly affected by inflammation, are urgently needed (70). Pioneering shear flow-based studies by McCormick in HUVECs and Chen in human aortic ECs in the early 2000s highlighted gene expression changes under flow conditions in a time-dependent manner during 6 or 24 h of flow (73, 74). Endothelial survival genes involved in angiogenesis or matrix remodeling (e.g. TIE2, FLK1) were upregulated during long-term shear exposure, which helps maintain an anti-inflammatory phenotype, while genes which switch the endothelial phenotype from anti- to pro-inflammatory, such as MYD88 and CD30, were downregulated during flow (73, 74). Following these initial studies, a number of other investigators have reported similar gene expression trends under laminar or abnormal shear flow (75, 76, 77). In particular, proteomic studies of ECs under shear flow, again in HUVECs, have identified proteins corresponding with the underlying genes. Proteomic analysis of the secretome, defined as the proteins secreted from cells, following exposure of ECs to shear stress could determine whether plasma proteins are altered in flow-dependent vascular diseases (66). Over 100 proteins were identified as secreted under control, laminar or abnormal shear stress conditions (66). Those identified under laminar (e.g. PTHR and LTBP4) or abnormal shear flow (e.g. endothelin-1 and insulin-like growth factor II) conditions are again proteins involved in conferring an anti- or pro-inflammatory ECs phenotype and thus correspond to endothelial genes identified under similar conditions (66). Thus, anti-inflammatory genes and proteins are upregulated by exposure to laminar flow, while pro-inflammatory proteins contributing to vascular inflammation (e.g. DKK1 and endothelin-1) are expressed on exposure to abnormal shear flow (78, 79). Specifically, during sepsis, the endothelium becomes dysfunctional due to abnormal shear stress, resulting in decreased oxidation in ECs and increased coagulation, among other outcomes (80). Pathogen-associated molecular patterns (PAMPs) and pro-inflammatory cytokines initiate sepsis, leading to the breakdown of ECs–ECs contact, ECs exhibiting a procoagulant phenotype and the shedding of the glycocalyx; these events can lead to hemorheological defects (increased red blood cell aggregation and viscosity) and subsequently reduced arterial pressure, hypotension and abnormal shear stress (80).
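For the flow-based experiments discussed here, the shear stress applied to cultured ECs is usually reported in dyn/cm² and, for a parallel-plate flow chamber, is commonly estimated with the standard relation τ = 6μQ/(wh²). The short calculation below is a generic worked example with assumed chamber dimensions and flow rates, not the conditions of any cited study.

```python
def wall_shear_stress(mu_poise, q_ml_per_min, width_cm, height_cm):
    """Wall shear stress (dyn/cm^2) in a parallel-plate flow chamber,
    tau = 6*mu*Q / (w*h^2), assuming laminar flow and height << width."""
    q_cm3_per_s = q_ml_per_min / 60.0
    return 6.0 * mu_poise * q_cm3_per_s / (width_cm * height_cm ** 2)

# Assumed values: medium viscosity ~0.01 P, channel 1 cm wide x 100 um (0.01 cm) tall.
for q in (0.3, 0.6, 3.0):  # flow rate in mL/min
    tau = wall_shear_stress(mu_poise=0.01, q_ml_per_min=q, width_cm=1.0, height_cm=0.01)
    print(f"Q = {q:.1f} mL/min -> tau = {tau:.1f} dyn/cm^2")
```

With these assumed dimensions the three flow rates span roughly 3 to 30 dyn/cm², i.e. the low venous-like to arterial-like range over which "laminar" versus "abnormal" shear responses are typically compared.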
One of the first proteins found to be impacted by shear stress in vascular inflammation was forkhead box P (a gatekeeper of vascular inflammation) (81), which is downregulated by Kruppel-like factor 2 (a protein involved in adipogenesis, inflammation and T-cell viability) and, in turn, is suppressed by abnormal shear stress (82); this same mechanism applies to sepsis (83). Low or absent shear stress decreases the activation of Prospero homeobox 1 (a protein regulating cell fate) and forkhead box C2 (a protein involved in mesenchymal tissue development) and thus decreases thrombomodulin (a glycoprotein that controls coagulation) and endothelial protein C receptor (another protein that regulates coagulation) expression, causing pro-inflammation and leukocyte adhesion/migration (80). Overall, in sepsis, abnormal shear stress significantly impacts endothelial function and glycocalyx shedding and contributes to MODS and death. Thus, additional studies investigating the detrimental effects of differential shear stress on ECs proteomic expression are needed to provide further insight into how sepsis progresses and causes tissue damage.

**In silico modeling of omics**

The large volume of data obtained in omic studies is inherently complex and requires special computational tools for mapping endothelial response pathways, understanding the evolution of inflammatory signaling during sepsis, identifying druggable targets and predicting how different therapeutics may impact the progression of inflammatory signaling. Thus, in addition to experimental approaches, *in silico* modeling can generate testable hypotheses, and simulations can provide new, non-intuitive knowledge on complex systems (20). These models can accelerate the process of discovering novel therapeutic candidates (20) and have been used to investigate how ECs interact with a pathological microenvironment or respond to stimuli. For example, *in silico* modeling has been used to examine how ECs interact with the tumor microenvironment in angiogenesis (84). Other *in silico* models study ECs interacting with other cell types such as hepatocytes (85) or responding to shear stress (86). More recent studies have used organ-specific ECs in pathway models to predict therapeutic targets for specific pathologies, such as diseases in the brain (87). Although these models provide further insight into how ECs are regulated under various conditions, they have not yet been applied to investigate the dysfunction of ECs in sepsis, which is urgently needed. In systems biology, several different methods of *in silico* modeling are implemented, including agent-based models, equation-based models and network models (20). Specifically, network models, based on Boolean logic, are constructed through the integration of the interactome (e.g. protein–protein interaction data) and omics data and do not require a priori quantitative knowledge of biological reactions, which is difficult to achieve (36, 88). Network models are initially constructed by submitting a gene/protein list to a database that maps the entities onto a global PPI model to illustrate their physical interactions or functional associations with other entities based on statistical parameters (e.g. confidence scores) (36). Figure 2 is a general workflow of biological network construction and the application of network algorithms. PPIs are constructed from large-scale experiments or computational predictions and maintained in databases such as BioGRID or STRING (36).
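A minimal sketch of this network-construction step is given below: a PPI edge list (which in practice would be exported from a database such as STRING or BioGRID and filtered by confidence score) is loaded into a graph, and simple topological features such as node degree and the diameter of the largest connected component are computed. The proteins and interaction scores are invented for illustration and do not come from any cited study or database export.

```python
import networkx as nx

# Hypothetical high-confidence PPI edges (protein A, protein B, confidence score);
# in practice these would come from a database such as STRING or BioGRID.
edges = [
    ("SELE", "SELPLG", 0.92), ("SELP", "SELPLG", 0.95), ("VCAM1", "ITGA4", 0.90),
    ("ICAM1", "ITGB2", 0.93), ("IL6", "IL6R", 0.99), ("IL6R", "IL6ST", 0.98),
    ("VWF", "F8", 0.97), ("THBD", "PROC", 0.96), ("F8", "F10", 0.70),
]

G = nx.Graph()
G.add_weighted_edges_from((a, b, w) for a, b, w in edges if w >= 0.9)  # confidence filter

# Topological characterization: highest-degree nodes and the diameter of the
# largest connected component, two of the features mentioned in the text.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:3]
largest = G.subgraph(max(nx.connected_components(G), key=len))
print("Nodes:", G.number_of_nodes(), "Edges:", G.number_of_edges())
print("Top-degree nodes:", hubs)
print("Diameter of largest connected component:", nx.diameter(largest))
```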
Additionally, there are tissue-specific PPI networks (such as GIANT and TISPIN), and since sepsis affects multiple tissues, the incorporation of omics into multiple tissue-specific *in silico* models for the comparison of differential disease-associated signaling would be beneficial (36). Once an *in silico* network model is generated, graph theory analyses are performed to characterize topological features of the network such as network diameter or node degree (36). Pathway databases such as KEGG can be used in the early stages of modeling to examine signaling pathways in omics data to evaluate the relationships between the data and disease pathways (36). There are numerous strategies to construct and validate *in silico* network models that are discussed elsewhere (88). Overall, *in silico* models have the potential to model drug–protein or protein–protein interactions for a specific pathology, and it is critical to develop such models to decipher omic changes in ECs during sepsis to determine how endothelial inflammatory signaling evolves within and between tissues affected in sepsis.

**Conclusions**

In this review, we discuss the importance of omic, and specifically proteomic, profiling of microvascular endothelial cells and leukocytes under septic and/or inflammatory conditions in humans and mice. In sepsis, microvascular ECs are key targets, and their dysfunction leads to edema, capillary leak syndrome and MODS... Since endothelial cells are early targets of sepsis, research focused on creating a systems biology, mechanistic understanding of microvascular endothelial cell dysfunction in sepsis, especially across organs damaged early on such as the lungs, liver and kidneys, is critical. A key area where future research should be directed is profiling the proteome of microvascular endothelial cells in microphysiological systems using cultured organ-specific human microvascular endothelial cells subjected to shear flow to identify differentially expressed proteins (DEPs) and protein–protein interactions that contribute to a disease subphenotype or endotype. Thus, incorporating physiologically relevant flow conditions and organ-specific primary endothelial cells would provide more realistic microenvironments to help identify how organ-specific endothelial proteomes are altered in response to shear and aid in the discovery of therapeutics targeting endothelial cells. Omics can not only characterize the underlying molecular mechanisms of diseases by identifying and quantifying all the biomolecular interactions in a biological system but also categorize patients into endotypes based on their omic expression patterns. Using bioinformatic tools to identify differentially expressed cellular pathways or GO terms in ECs could answer questions such as where genes or proteins are expressed in a system during different pathologies and what their corresponding molecular functions are. While genomics can identify causative variants that contribute to disease, complementary experimental and validation studies are often required to identify and characterize the functionality of the variant(s) in disease progression within a heterogeneous population. Proteomics, in particular, can yield novel insight since proteins are involved in every biological phenomenon, and thus unraveling the complex protein–protein interactions (PPIs) in a cell can identify DEPs between disease and control groups.
In addition, one needs to consider that proteins in vivo are subjected to post-translational modifications during and/or after synthesis, and thus future research should focus on identifying these modifications, their potential role in disease and how they can be targeted for therapy. Another critical area of research is the application of in silico network modeling incorporating proteomics data to screen and test therapeutics for a disease in a realistic model prior to in vitro and in vivo experimentation. Evaluating the effect of a therapeutic on protein(s) or protein complex(es) in silico, especially if the protein complex plays a role in multiple pathways or processes leading to disease, will generate novel hypotheses that (a) can be tested and validated experimentally, (b) complement and refine experimental testing and protocols and (c) potentially reduce the number of animal models needed for experimental studies. In silico models that enable pharmacological intervention on a target (e.g. in silico simulation of knock-out or pharmacological alteration of a biological pathway) would be beneficial (91). Additionally, models that can simulate therapeutic responses over time in multiple biological compartments will provide an even more physiologically relevant tool for drug screening and evaluation. Using in silico modeling to repurpose existing therapeutics for treating other diseases (92) is an emerging area of research interest, and its potential for screening therapeutics that target endothelial cells for treating sepsis should be further investigated. Further application of the emerging field of omics to treating sepsis is needed in studies comparing biomarkers used alone or in combination in a time-dependent manner to evaluate their impact on disease progression. A major hurdle for implementing omics in medical practice is a lack of consensus on standardization of methodologies in the scientific community (4). Validation in multicenter, diverse cohorts is urgently needed to effectively test these omic models in clinical trials before translation, and further investigation can assess their utility and cost effectiveness (4). Furthermore, since sepsis progresses rapidly, the quick turn-around time from omic testing to results in healthcare, which is necessary for effective, tailored therapy, has not yet been achieved. Addressing these issues will allow for the translation of omics from the bench to the bedside and, coupled with advances in in silico modeling, the establishment of scientific and clinical standards to utilize the potential of omic analyses for the clinical treatment of sepsis. Overall, this will significantly advance the goal of precision medicine of delivering the right therapeutic to the right patient at the right time.

**Supplementary materials**

This is linked to the online version of the paper at https://doi.org/10.1530/VB-22-0003.

**Declaration of interest**

The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.

**Funding**

J C L is an NIH NRSA F31 Predoctoral Fellow (1-F31AI164870-01). This work was supported by the National Institutes of Health (GM114359, GM134701) and the Defense Threat Reduction Agency (HDTRA11910012).

**Author contribution statement**

J C L, M T R, W O, E P and Q Y prepared the manuscript. M F K, L E K and B P developed the ideas and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
**References**

27 Langley RJ & Wong HR. Early diagnosis of sepsis: is an integrated omics approach the way forward? Molecular Diagnosis and Therapy 2017 21 525–537. (https://doi.org/10.1007/s40291-017-0282-z)

36 Zhang P & Itan Y. Biological network approaches and applications in rare disease studies. Genes 2019 10. (https://doi.org/10.3390/genes10010079)

67 Mohr T, Haudek-Prinz V, Slany A, Grillari J, Micksche M & Gerner C. Proteome profiling in IL-1 beta and VEGF-activated human umbilical vein endothelial cells delineates the interlink between inflammation and angiogenesis. PLoS ONE 2017 12 e0179065. (https://doi.org/10.1371/journal.pone.0179065)

68 Gautier V, Cayrol C, Farache D, Roga S, Monsarrat B, Burlet-Schiltz O, Gonzalez de Peredo A & Girard JP. Extracellular IL-33 cytokine, but not endogenous nuclear IL-33, regulates protein expression in endothelial cells. Scientific Reports 2016 6 34255. (https://doi.org/10.1038/srep34255)

Received in final form 15 March 2022
Accepted 7 April 2022
Accepted Manuscript published online 12 April 2022
[REMOVED]
[REMOVED]
[REMOVED]
Exploring cultural heritage repositories with creative intelligence. The Labyrinth 3D system

This is a preprint of an article published in Entertainment Computing (DOI: 10.1016/j.entcom.2016.05.002), available at http://hdl.handle.net/2318/1578514.

Rossana Damiano (a,b), Vincenzo Lombardo (a,b), Antonio Lieto (a)
(a) Dipartimento di Informatica, Università di Torino
(b) CIRMA, Università di Torino

Abstract

In cultural heritage, the use of ontologies makes the description of artworks clearer and self-explanatory, with advantages in terms of interoperability. The current shift towards semantic encoding opens the way to the creation of interfaces that allow the users to build personal paths in heritage collections by exploiting the relations over the artworks. In the attempt to leverage this multiplicity of paths, we designed and implemented a system, called Labyrinth 3D, which integrates the semantic annotation of cultural objects with the interaction style of 3D games. The system immerses the user into a virtual 3D labyrinth, where turning points and paths represent the semantic relations over cultural objects, with the goal of engaging the user in the exploration of the collection.

Keywords: 3D visualization, cultural heritage, computational ontologies

1. Introduction

In the last decade, the advent of connected, portable devices and the evolution of the Web towards a participatory model have prompted cultural institutions to pursue new communication strategies that leverage the Web [1, 2]. Cultural institutions have rushed to publish their collections online, with the goal of innovating their interaction with the audience through the help of personalization and social media [3, 4]. In parallel with this trend, digital archives have moved towards semantic annotation, a paradigm where the items in the archive are described with reference to a computational ontology. The use of ontologies, implemented through logic-based languages [5], makes the description of artworks clearer and unambiguous, with advantages in terms of interoperability among systems [6, 7]. Semantically annotated collections, then, lend themselves to personalization [4] and cross-media integration of data sources, following the paradigm of Linked Open Data [8, 9]. Despite the potential of the semantic representation, however, the search in heritage archives is still largely based on keywords and/or tags, through which users can filter the archive contents to find what they need. As exemplified by the well known Europeana initiative, which provides a unified interface to a set of national digital collections [10], the search typically returns a list of items (books, pictures, videos, etc.) accompanied by personalized recommendations, but it does not contain an explicit representation of the meaning relations over them.
In contrast with this approach, [11] argues that, in order to meet the needs of the general audience, tools for supporting exploratory search are needed besides the traditional keyword-based interfaces. In cultural heritage, search interfaces are typically based on the metaphor of the “archive”, which mirrors the actual fruition of the physical cultural objects (see, for example, the web interface of the above mentioned Europeana system), although the trend of the 3D “visit” has emerged in online museum collections, as demonstrated by the well known Google Art Project.¹ In this paper, we address the access to digital collections by proposing an approach that leverages semantic annotation to create a 3D environment where the user can explore the semantic relations over the items in a visual environment. Our approach combines the use of the 3D language, typical of new media – and video games in particular – with the capability of semantic annotation to connect entities that are distant in space and time but share some common features at the cultural level. The use of 3D for the interface is motivated by the goal of attaining a user experience characterized by a high level of engagement and a sense of immersion [12]. As shown by an established line of research in information visualization [13], in fact, visual metaphors can convey a conceptual model in an immediate and engaging way. The system we describe in this paper is part of a larger project, called Labyrinth², aimed at the dissemination of cultural heritage archives to the general audience. In order to mediate between the point of view of the user and the heterogeneity of the items in heritage repositories, which usually differ by features such as media type, age and purpose but share some narrative features like stories and characters, the project relies on the notion of “archetypes” of narrative nature. Mainly inspired by the research in iconology and narratology [14, 15], the term “archetype” is employed in Labyrinth to refer to a conceptual core set at the intersection of narrative motifs, iconological themes and classical mythology (the system itself is named after a well-known archetype). The plan of the paper is the following: after describing the background of the project and discussing its motivations (Section 2), in Section 3 we provide a brief overview of the live system with a navigation example. Section 4 describes the components of the system, namely the ontology (Section 4.1), the 3D environment (Section 4.2) and the core component of the system, i.e., the mapping of the ontology onto the 3D environment (Section 4.3). The system architecture, which combines these elements to create newer and newer paths through the repository, is described in Section 5. Discussion and conclusion end the paper. ¹https://www.google.com/culturalinstitute/u/0/project/art-project ²http://app.labyrinth-project.it:8080/LabyrinthTest/

2. Background

In the last decade, the use of ontologies for the access to cultural heritage collections has been investigated by several projects. A pioneering contribution was given by the Finnish CultureSampo project [16]. In this project, a number of domain ontologies provide the background against which cultural objects (including artworks, artists, traditional practices, etc.), encoded in different media formats (e.g., images and videos), can be explored, tracking the underlying relations over them.
In CultureSampo, once a certain artifact (e.g., a painting) has been retrieved, it is possible to explore the relations over the objects (and characters) represented therein. The system has recently evolved towards a linked data approach with the release of a new application, War Sampo, focused on the Second World War [9]. The Agora system [17] frames the exploration of a digital collection into historically relevant episodes, supported by a semantic account of the notion of event [18]. For example, the user can choose a historical episode (e.g., “German occupation of Poland in the Second World War”) and navigate among the cultural objects related to this event. A line of research in ontology-based systems has specifically explored the use of narrative models in cultural heritage dissemination. Stories not only represent an effective way to convey information in a compact format, as argued by [19], but, according to the research in cognitive psychology, they are a primary means for the conceptualization of reality [20]. In cultural heritage, many artworks have, by and large, some type of narrative content. In visual arts, for example, paintings often display story episodes while statues immortalize characters; even non-representational artworks often refer to narrative elements, despite the abstract nature of their visual content. Stories are narrated by textual media such as tales and novels, but also – though in nonverbal terms – by different kinds of musical works, from operas to symphonic poems. Narrative is the focus of the Bletchley Park Text system [21], a semantic system designed with the goal of supporting the users in the exploration of online museum collections. Designed with the notion of the “guided visit” in mind, the system encompasses an ontology of story, taken from the Story Fountain project [22]. The stories represented in the system are employed in a web interface to create relations over entities in online collections; based on this knowledge, the user can ask the system to find a narrative connection between different entities. More recently, the Decipher EU project leverages stories to address the curatorial side of cultural heritage dissemination [23]. In Decipher, a story ontology is the basis of a system that supports the creation of story-based collections by museum curators. Finally, Europeana also uses some simple narrative features to describe the items it contains. In Europeana, it is possible, for example, to navigate among the artifacts representing a given action or displaying a certain character, across a large number of indexed objects; the system does not provide, however, a story-level navigation. The Labyrinth project extends the approaches described above by integrating the use of a narrative model to connect the items in a collection with the use of a visual environment for the exploration of these connections. The system relies on an ontology of narrative archetypes to describe the items in the collection; the 3D interface of the system is inspired by a well-known narrative archetype, the “labyrinth”. The notion of labyrinth is not only deeply rooted in Western culture, dating back to Greek myths and witnessed by several archaeological locations across Europe [24], but also, thanks to the graph-like nature of the notion of labyrinth [25], it lends itself well to representing the many-to-many relations among artworks encoded in the ontology.
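To make this graph-like structure concrete, the following sketch derives, from a toy set of semantic annotations, a graph whose nodes are artworks and whose edges record which archetype element (character/"agent", story, location) two artworks share. The artwork titles and annotations are borrowed from the navigation example later in this paper purely for readability; they are an illustrative encoding, not the actual ontology or annotation format used by the system.

```python
from itertools import combinations
from collections import defaultdict

# Toy semantic annotations, artwork -> {relation: value}, loosely modeled on the
# archetype categories used by Labyrinth (character/"agent", story, location).
annotations = {
    "Minotauromachia": {"agent": "Theseus", "story": "Theseus kills the Minotaur"},
    "Greek vase": {"agent": "Theseus", "story": "Theseus kills the Minotaur"},
    "Sleeping Ariadne": {"agent": "Ariadne", "story": "Theseus kills the Minotaur"},
    "Frescos, Villa Imperiale": {"agent": "Theseus",
                                 "story": "Theseus kills the Minotaur",
                                 "location": "Pompei"},
}

# Derive the many-to-many relation graph: an edge between two artworks for every
# archetype element they share, labeled with the relation type.
edges = defaultdict(set)
for a, b in combinations(annotations, 2):
    for relation in annotations[a].keys() & annotations[b].keys():
        if annotations[a][relation] == annotations[b][relation]:
            edges[(a, b)].add(relation)

for (a, b), relations in edges.items():
    print(f"{a} <-> {b}: shared {sorted(relations)}")
```

In the 3D interface described next, each such edge corresponds, roughly, to a door and pathway that the user can traverse between two clearings of the maze.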
The goal of the visual design of the 3D interface is two-fold: on the one side, it is aimed at engaging the users to explore the repository through an immersive experience; on the other side, it is aimed at making the system usable by the large majority of users by integrating information giving and entertainment in a familiar environment. The labyrinth, or maze, is a genre of video games most users are familiar with, thanks to classic 2D games such as Atari’s Pacman and recent 3D titles such as Imangi’s Temple Run or PlayFirst’s Dream Chronicles. In cultural heritage, the use of 3D visualization is normally intended as a support for study and dissemination activities. 3D projects in cultural heritage can be roughly divided into two types: virtual equivalents of physically existing locations, such as museums and historical buildings, and reconstructions of physical environments that have disappeared, such as archaeological locations or temporary art works. Google Art Project and Arounder are examples of the first type, where 3D is often obtained through PMVR techniques that integrate high definition images of artworks in the 3D environments. Rome Reborn, the 3D reconstruction of Rome as it appeared in the IV century, is an example of the second type. In this project, the use of 3D is integrated with animated characters of ancient Romans, who interact with the users. A similar approach is proposed by [27], who present a framework for 3D real time applications in web browsers, employed to develop virtual reconstructions of Rome (Virtual Rome project) and other Italian locations [28]. Labyrinth differs from these approaches since the 3D representation is not employed to reconstruct real environments or to create virtual ones, but as a tool to convey semantic relations through a visual environment. For this reason, the system does not encompass a semantic model of the 3D environment: rather, it maps a semantic representation of the domain onto the 3D environment, as part of the interaction design process.

3. Live System

The Labyrinth project encompasses both a standard web based interface [29] and a 3D application [30]. Both interfaces allow the user to navigate a repository of cultural objects with the guidance of a set of archetypes of narrative nature. ³https://en.wikipedia.org/wiki/Pac-Man ⁴http://www.imangistudios.com ⁵http://www.playfirst.com/games/view/dream-chronicles ⁶www.google.com/culturalinstitute/project/art-project ⁷www.arounder.com The archetypes are contained in an ontology that describes each archetype in terms of its related stories, characters, objects, events, and locations, and stores the connections that relate these categories with the items in the repository. In both the hypertextual and the 3D interfaces, the interaction with the user starts with the selection of an archetype. In the hypertextual interface, the user continues by refining her/his search based on the inner articulation of the selected archetype into more specific categories (namely, stories, characters, objects, events, locations and epochs), then into single elements within the category (single story, character, etc.), by following a top-down strategy that ends with the selection of a specific artifact (for a detailed description, see [31]). Fig. 1 shows a screenshot of the interface (in Italian): after selecting the archetype of the labyrinth, the user has decided to explore the category of “stories”, then the specific story entitled “Theseus kills the Minotaur”.
As a result, the interface shows a record of the story (upper part of the main box, “Teseo uccide il Minotauro”), which includes the related stories, the characters and objects featured in the story, and the locations and epochs in which the story takes place. The user can click on them to navigate from the currently selected story to another, or to move to a different category of the archetype (for example, “characters” or “locations”). Below (bottom of Figure 1), the interface shows the thumbnails of the artifacts (or, better, of their digital copies) that refer to the currently selected element (here, the story “Theseus kills the Minotaur”); each thumbnail can be clicked on to get a record of the artifact. A slide show of the thumbnails is positioned on the left of the story record to provide a quick glance at the available contents for the current selection.

Figure 2: 3D interface: selection of the archetype (left) and assignment of initial and target artworks (right, labeled as “current” and “target”).

Figure 3: First step of the navigation: the initial artwork, Minotauromachia (left); right: some of the doors available from the initial artwork (same character and same story).

Unlike the hypertextual interface, the 3D interface is characterized by a bottom-up approach: here, the user navigates from artifact to artifact on the basis of the relations over them represented in the ontology, building her/his own personal path through the repository. The user is situated in a virtual maze where the artworks are located in the clearings and connected by pathways that represent the relations over the artworks. Immersed in the virtual maze in a first-person perspective, the user is encouraged to explore the repository in the same way as the visitor of a hedge maze explores the turns and twists of the maze on her/his way to the exit. In order to exemplify the user experience in the 3D labyrinth, we will describe a navigation example extracted from the system log, illustrated through the screenshots of the steps that compose it (Figures 4, 5, 6). After choosing the archetype of the “labyrinth” (Fig. 2, left), the user is assigned a start and a target artwork (Fig. 2, right), randomly extracted from the repository: in the example, they are, respectively, the “Minotauromachia” (a painting) and a novel, “Il labirinto greco”. When the user clicks on the Start button positioned below the start and target nodes, she/he is brought to the 3D environment (Fig. 3, left).

Figure 4: Second step of the navigation: a pathway (left) to the subsequent artwork; second artwork (right), with doors leading to the other artworks in the “same character” relation.

Figure 5: Third step of the navigation: a Greek vase representing Theseus killing the Minotaur (left); doors leading to the artworks referring to the same story (right).

The first location is the node containing the start artwork, the painting entitled “Minotauromachia” by Pablo Picasso, which shows the Greek hero Theseus fighting with a Minotaur. Fig. 3 (left) shows how the artwork (here, a picture of the painting) is displayed to the user in a 2D layer temporarily superimposed on the 3D scene; the artwork is accompanied by the information about its author, the place where it is hosted and the creation date. A longer description can be obtained by clicking on “description”, below the image; by clicking on “close”, the layer disappears.
Figure 3 (right) shows some of the connections available from the node, represented by the two doors labeled as “agent” and “story” (other doors are out of the view): the first door leads to artworks that feature the same character (named “agent” in the system) as the current artwork, the second door leads to artworks that relate to the same story. By choosing the door labeled as “agent”, the user is led through a pathway (Fig. 4, left) to an empty node (Fig. 4, right) that contains doors for the artworks that display the same character as the previous artwork, Theseus; the titles of the artworks are written above the doors, from left to right: “Monete rivenute a Cnosso” (“Coins found in Knossos”), “Teseo uccide il Minotauro” (“Theseus kills the Minotaur”), “Affreschi della villa imperiale a Pompei” (“Frescos, Villa Imperiale in Pompei”). The user selects the middle door, and is led to a node that contains a Greek vase displaying Theseus in the act of killing the Minotaur (Fig. 5, left). Notice that the console positioned in the bottom part of the interface contains, besides the controls for getting help, stopping the sound and exiting the application, a progress bar displaying the artworks visited by the user so far: by selecting a previously visited artwork, the user is brought back to the node containing it. After the Greek vase, the user follows the same story relation by clicking on the door labeled as “story” (not shown), and is brought to an empty node with doors for the artworks that refer to the same story: “Arianna dormiente” (“Sleeping Ariadne”) and “Affreschi della villa imperiale a Pompei” (“Frescos, Villa Imperiale in Pompei”; notice that, like in a true labyrinth, the same artwork can be reached by following different paths). By choosing the first door, “Arianna dormiente” (“Sleeping Ariadne”), the user will reach a node containing a Roman statue of Ariadne, the female character of the myth of the Minotaur (Fig. 6, left); from there, by backtracking to the previous node, the user may select the second door, “Affreschi della villa imperiale a Pompei” (“Frescos, Villa Imperiale in Pompei”), which leads to a node containing a painting that illustrates the myth of the Minotaur, located in a Roman villa in the archaeological site of Pompei (Fig. 6, right).

4. The tripartite core of Labyrinth

Given a collection of cultural objects, commonly represented by the digital resources that constitute the “digital equivalents” of the actual physical objects [32], the access to the collection in an ontology-based system such as Labyrinth 3D is the result of the interplay of three elements: the information about the objects, or metadata, contained in the ontology (Section 4.1), by which the objects are indexed; the visualization interface (Section 4.2), driven by the project-specific goals (dissemination, presentation, study, etc.); and the mapping of the objects onto the visualization interface (Section 4.3). Thanks to this tripartite relation, the system translates the information about the cultural objects into a visual representation where the semantic relations over the objects contained in the ontology are mapped onto the elements of a 3D environment.
4.1. The Archetype Ontology

The description of the artworks encoded in the metadata typically includes features such as date, authorship and title of the items, normally expressed according to standard vocabularies, such as ISO 8601 for dates\(^8\) or ULAN (Union List of Artist Names) for names.\(^9\) Besides authorship and editorial information, metadata usually contain also information about the management and preservation of cultural objects, such as responsibility for the preservation, digitization standards, etc. In the last decade, metadata have evolved towards semantic encodings describing the content of the artworks, with categories such as iconography, event types, etc. [33, 34]. An example of semantic description is provided by the Europeana Semantic Model (ESM)\(^{10}\). As exemplified by the navigation example provided in the previous section, the semantic annotation of the artworks in Labyrinth is mainly oriented to the representation of their content, and narrative content in particular. The narrative content of the artworks is expressed through a set of archetypes that characterize Western culture through the ages, a heritage of the Greek and Roman tradition [35]. The core of the Labyrinth system is the Archetype Ontology (AO), described in detail in [31]. The AO contains a number of archetypes (the journey, the labyrinth, the hero) and describes how the artworks relate to them via the representation of stories, characters, objects, events, locations and epochs. The AO contains 8 top classes: the Archetype class contains the archetypes; the Artifact class contains the artworks, organized according to the FRBR model [36]; Entity contains the characters and objects represented in artifacts; Story represents a collection of stories related with the archetypes; the Description Templates class contains a role-based schema for describing events and states that can be filled by characters and objects; the Format class encodes the format and type of media resources; Geographical Place and Temporal Collocation, finally, encode, respectively, the spatial and temporal information related to artifacts, stories and archetypes. The Archetype Ontology was manually built based on an extensive survey of the notion of archetype, spanning from Warburg’s Bilder Atlas [14] to folkloric studies [15] and contemporary accounts of tropes in media.\(^{11}\) The ontology was aligned with the conceptual reference model established by the International Council of Museums (ICOM), the CRM-FRBR model [36], a standard in the description of cultural heritage. In the Labyrinth system, the editing phase is conducted through a back-end web interface through which items can be added to the repository. The description of the items is accomplished through form filling and follows the Dublin Core initiative [37], a metadata element schema that has become a de facto standard in digital archives\textsuperscript{12}.

\(^8\)http://www.iso.org/iso/home/standards/iso8601.htm
\(^9\)http://www.getty.edu/research/tools/vocabularies/ulan/
\(^{10}\)http://pro.europeana.eu/ese-documentation
\(^{11}\)http://tvtropes.org

When a new item is added to the repository, the system imports the description of the item in the ontology through a built-in procedure that converts the input data into the ontology format (a set of RDF\textsuperscript{13} triples). First, the \textit{internalization} phase (described in detail in \cite{29}) translates the metadata of the resource (creator, date, etc.)
into the language of the ontology. Then, a \textit{mapping} procedure matches the imported description with the available archetypes. Both steps are achieved via if-then rules encoded in SWRL\textsuperscript{14}, the rule language designed for ontologies. As an example of how the mapping is accomplished, consider the rule that examines the “title” metadata element of an artwork in order to find a connection with the archetype of the “Labyrinth”: if words like “labyrinth” or “maze” are found in the title, the rule will add to the ontology the assertion that the artwork \textit{evokes} the archetype of the labyrinth. Finally, after the new item has been internalized in the system and mapped onto the ontology, a specific set of rules adds the narrative features to the artworks (\textit{narrative mapping}). For example, if an artwork represents a set of characters performing some action, the system searches for a story in which the same characters perform that action (see \cite{29} for a detailed description of how the narrative properties are added to the representation of the items). For instance, an artwork representing Ariadne in the act of giving the ball of thread to Theseus (a focal event in the myth of the Minotaur) would be recognized as having the myth of the Minotaur as a narrative component.
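To make the mapping step concrete, the fragment below is a minimal sketch of the title-based rule described above, re-expressed in Python over an rdflib graph rather than in the SWRL syntax actually used by the system; the namespace and the class and property names follow the examples in this section, but the hasTitle property and the fragment as a whole are illustrative assumptions, not the project's implementation.

```
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

AO = Namespace("http://www.AO.org/labyrinth#")  # ontology namespace used in Fig. 7

def map_title_to_labyrinth(graph):
    # Illustrative re-expression of the SWRL mapping rule: if an artifact's title
    # contains "labyrinth" or "maze", assert that it evokes the Labyrinth archetype.
    for artifact in graph.subjects(RDF.type, AO.Manifestation):
        for title in graph.objects(artifact, AO.hasTitle):  # hasTitle is assumed here
            if any(word in str(title).lower() for word in ("labyrinth", "maze")):
                graph.add((artifact, AO.evokes, AO.KnossosLabyrinth))

# Usage sketch with a hypothetical item whose title triggers the rule.
g = Graph()
g.add((AO.Garden_Maze, RDF.type, AO.Manifestation))
g.add((AO.Garden_Maze, AO.hasTitle, Literal("Garden Maze")))
map_title_to_labyrinth(g)
assert (AO.Garden_Maze, AO.evokes, AO.KnossosLabyrinth) in g
```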
In order to illustrate how the items in the repository are represented in the ontology as a result of the internalization process, we will resort to an example. Fig. 7 illustrates the description of the painting “Minotauromachia” by Pablo Picasso (the first step of the navigation example in Section 3), serialized in the RDF/XML format. Notice that each line represents an RDF triple, composed of a subject (all triples in this fragment have the same subject, i.e., the named individual “Minotauromachia” in line 2), a predicate (for example, hasResourceType, line 4) describing a property of the subject or a relation with another entity, and an object (here, the resource type, Image) which constitutes the value of the property – or the second term of the relation. All resources are characterized by a prefix given by the URI of the ontology. Fig. 7 outlines the role of each phase of the procedure described above. The annotation is divided into two groups of assertions: the first group contains the assertions extracted from the artwork metadata by the internalization procedure; the second group contains the properties added by the mapping procedure, which connects the artwork with the archetypes: this group contains specific annotations concerning the narrative relations among the artworks.

Properties added by the internalization phase:
1 <!-- http://www.AO.org/labyrinth#Minotauromachia -->
2 <owl:NamedIndividual rdf:about="http://www.AO.org/labyrinth#Minotauromachia">
3 <rdf:type rdf:resource="http://www.AO.org/labyrinth#Manifestation"/>
4 <hasResourceType rdf:resource="http://www.AO.org/labyrinth#Image"/>
5 <ma-ont:hasCreator rdf:resource="http://www.AO.org/labyrinth#Pablo_Picasso"/>
6 <hasGeographicalLocation rdf:resource="http://www.AO.org/labyrinth#NewYork"/>

Properties added by the mapping phase:
7 <evokes rdf:resource="http://www.AO.org/labyrinth#KnossosLabyrinth"/>
8 <displays rdf:resource="http://www.AO.org/labyrinth#Minotaur"/>
9 <displays rdf:resource="http://www.AO.org/labyrinth#Theseus"/>
10 <describesAction rdf:resource="http://www.AO.org/labyrinth#killing"/>
11 <hasPart rdf:resource="http://www.AO.org/labyrinth#Minotaur_Story"/>
12 </owl:NamedIndividual>

Figure 7: The description of the artwork “Minotauromachia” by Pablo Picasso in the AO ontology. The sections show the properties added by each phase of the internalization and mapping procedures.

\textsuperscript{13}https://www.w3.org/TR/rdf-concepts/
\textsuperscript{14}https://www.w3.org/Submission/SWRL/

Lines 3 to 6 are created by the internalization procedure and correspond to the artwork metadata, such as type, creator, etc. The hasResourceType property (line 4) describes the media type of the resource, i.e., image; the hasCreator property (line 5) connects the painting with its author, “Pablo Picasso”; the hasGeographicalLocation property (line 6) describes the location of the artwork. Lines 7 to 11 describe the relation of the artwork with the archetype: the property evokes (line 7) relates the painting with the archetype of the “Labyrinth”, while a set of specific properties describe the relation with the archetype in greater detail, focusing on its narrative aspects: displays (lines 8-9) refers to the characters which appear in it, i.e., Theseus and the Minotaur; describesAction (line 10) refers to the event type it depicts (“killing”). Finally, the property hasPart (line 11) states that the painting contains, as part of its narrative content, the Minotaur Story. Given this description, several relations can be detected with other artworks. Besides the standard relations based on author or resource type, the archetype of the labyrinth connects the artwork with other artworks that display the same characters (Theseus or the Minotaur), depict the same action type (killing), or refer to the same story (the myth of the Minotaur) and other related stories (e.g., Ariadne and the Thread).

4.2. Designing the 3D environment

The design of the 3D environment is inspired by the metaphor of the labyrinth. This metaphor was chosen for its ability to convey the graph-like nature of the relations over the artworks in a cultural heritage collection, and for its immediacy of use, since it provides an intuitive mapping for artworks (nodes of the labyrinth) and relations over them (connections among the nodes). The interaction metaphor underlying the navigation is “finding one’s way”: here, however, the goal of the user is not simply to reach the exit, but to create a personal path through the artworks’ meanings, represented by a virtual “red thread”. In order to make the experience more engaging, when the session begins, the user is given a target node.
When the user reaches the target node, or when the user decides to exit from the labyrinth, the session ends and the user is shown the statistics about her/his own path: number of visited nodes, elapsed time, backtrackings, etc. The visual design of the labyrinth is inspired by the classical hedge maze, with architectural elements that are intended to evoke some distant but indefinite past; this choice was primarily due to the constraint posed by the heterogeneity of the contents encompassed by the project. The floor is partly tiled, partly covered with grass, and the mood is inspired by a dark, Gothic style. The maze contains two types of nodes: some nodes (artwork nodes) contain artworks, while other nodes (relation nodes) are empty and only serve the function of connecting the artwork nodes, as exemplified in the navigation example provided in Section 3. The presence of an artwork in a node is signaled by a low circular balustrade in the middle, open in several points, which are intended as affordances inviting the user to step into the inner part of the node [38]. The entrance to pathways is marked by doors; each door corresponds to a semantic connection (e.g., same story), and is surmounted by the name of the type of connection (e.g., “story”). Each node has a fixed number of doors/pathways: depending on the number of semantic relations that connect the node with other nodes, some doors may be closed, or hidden by greenery. If the connection leads straight to a single artwork, the title of the artwork is displayed above the door. Pathways differ in length and form: some are short, some are longer and bend, so that their end is not visible, in order to add some thrill to the experience. The navigation in the system is inspired by the paradigm of constrained navigation [39], with the aim of making it usable also for non-expert users of 3D applications. The user moves by clicking on small circles of light positioned on the floor, in front of the doors of the nodes and along the pathways. Circles of the same type also mark the presence of an artwork in the middle of artwork nodes and must be clicked to get information about the artwork. Smaller circles of light appear inside the circles when they are clicked, so that they eventually form a sort of “red thread” that marks the path made by the user so far. The metaphor of the red thread, aimed at improving self-orientation, is enforced also by the console positioned in the lower part of the screen, which shows the list of the nodes visited by the user. By clicking on a node in the list, the user is brought back to that node. The console also contains buttons for ending the session and turning off the sound. The user is free to explore the labyrinth, going back to previous locations and clicking on the control positioned in artwork nodes to receive information and experience the artworks via the appropriate plugins: depending on the media type of the resource associated with the artwork, an image is displayed, a video is played, etc. A short description of the artwork, with title, date and creator, is always provided, as exemplified in Fig. 3 (Sect. 3). The 3D environment was implemented with the Unity 3D real-time engine\footnote{https://unity3d.com}, which supports several platforms, including mobile devices. Unity 3D offers default first-person gameplay assets, covering both camera motion control and mouse-tracking motion control.
In order to optimize the production time and cost of the 3D assets, a single model of the labyrinth node, with a predefined set of exits, was created: at run time, it is dynamically adapted to the semantic relations connecting the current artwork with the others by closing or opening the corresponding number of doors. To achieve our goals, we built an indexed database of 3D objects to be promptly displayed in real time by the 3D engine. In this way, we were able to produce several theme variations, combinatorially increasing the number of possible combinations. The standard 3D objects are: the octagonal square (3 variants), the open door (3 variants), the closed door (2 variants), the textual artwork viewer (1 variant), the pictorial artwork viewer (1 variant), and the movie viewer (1 variant). The pathways are a 3D object category of their own: they vary in shape in accordance with their length, which is measured in steps (2, 3 or 4 steps, each in three variants). Joined together, steps compose asymmetrical paths, which can also be used backwards, therefore multiplying the possible combinations of subsequent pathways. As a result, the maze, determined by the user’s choices, is never perceived as being the same.

4.3. Mapping the ontology onto the 3D environment

The mapping of the semantic relations onto the visual environment poses some problems that need to be addressed as part of the system design, and that constrain the architecture of the system. Formally, the labyrinth is an undirected graph [25], where vertices have a variable degree.\footnote{For usability reasons, the maximum number of edges per vertex has been limited to the arbitrary threshold of 7, given the well-known limitations of short-term memory first shown by Miller [40].} The nodes correspond to the graph vertices, the pathways to the edges. Notice that, as in a real maze, there are also nodes with only one incident edge, i.e. dead ends where the user has to backtrack. The direct transposition of the graph-like structure of the relationships over the artworks from the ontology to the 3D labyrinth, however, would lead to a proliferation of edges that would be confusing for the user. Take, for example, the similarity relation “displaying the same character” among artworks. Representing this relation as artwork-to-artwork relations implies that, for each artwork, an edge should be added from the artwork to every other artwork that displays the same character (and this should be done for each semantic relation). In order to alleviate this problem, in Labyrinth, we decided to represent semantic relations such as “displaying the same character” through special nodes that represent the relation itself, thus obtaining a more compact representation. These nodes do not correspond to artworks, but simply distribute the semantic relation over the pairs of artworks that are in the given relation. As a result of the constraint described above, there are two types of nodes (with different iconic elements) in the labyrinth, connected by the pathways: *artwork nodes* and *relation nodes*. Artwork nodes are connected to both relation nodes and artwork nodes. Relation nodes are connected only to artwork nodes.
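To illustrate the compacting effect of relation nodes, the following is a minimal Python sketch of how such a topology could be assembled from the semantic relations, with the degree cap mentioned in the footnote above; the data layout and the function name are illustrative assumptions, not the system's actual (Unity/Java) data structures.

```
from itertools import islice

MAX_DOORS = 7  # usability cap on edges per vertex, as discussed above

def build_topology(relations):
    # relations: mapping (relation_type, value) -> list of artwork ids,
    # e.g. ("character", "Theseus") -> ["Minotauromachia", "GreekVase", ...]
    artwork_nodes, relation_nodes, edges = set(), {}, []
    for (rel_type, value), artworks in relations.items():
        artwork_nodes.update(artworks)
        if len(artworks) < 2:
            continue  # single-artwork relations become direct pathways at navigation time
        rel_node = f"{rel_type}:{value}"          # one node stands for the relation itself
        relation_nodes[rel_node] = rel_type
        for artwork in islice(artworks, MAX_DOORS):
            edges.append((rel_node, artwork))     # undirected pathways
    return artwork_nodes, relation_nodes, edges
```

With a relation node, a relation shared by n artworks costs n edges instead of the n(n-1)/2 artwork-to-artwork edges that a direct transposition would require.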
The user navigation starts from an artwork node: the user has to choose one of the pathways exiting from the node, labeled either with the name of a different artwork (in this case, the pathway leads directly to an artwork node) or with the name of a semantic relation (in this case, the pathway leads to a relation node, which in turn leads to a set of different artwork nodes). Since the semantic relations are symmetric, pathways can be walked both ways.

5. The Labyrinth system

In this section, we describe the architecture of the Labyrinth system, which constructs the 3D environment as the interaction with the user progresses.

5.1. System architecture

The architecture of the Labyrinth system is structured according to a client–server schema. The ontology is stored in an ontology server; the information it contains is dynamically extracted from the ontology and made available to the visualization client, which manages the user interface. The system encompasses four main modules (see Fig. 8):

- **the Ontology Server** (Fig. 8, top) stores the AO ontology – where the cultural heritage objects are described – and provides the reasoning services that allow the system to establish the relations of each object with the archetypes, as exemplified in Section 4.1 (for example, inferring the relation between an artifact and a story given the characters displayed in the artifact). In the current implementation, the ontology server is provided by Owlim.\(^{17}\) The ontology server also supports the SWRL rule sets that implement the internalization and mapping procedures described in Section 4.1, by which new items are ingested into the system. The ontology server also provides the SPARQL\(^{18}\) endpoint for querying the ontology, necessary to extract from the ontology the data that will be visualized in the interface (i.e., the semantic relations over the artifacts, such as the same character or the same story relations exemplified in the navigation example in Sect. 3). Notice that this module is independent of the visualization type and serves both the web-based interface and the 3D environment.

- **the Media Repository** (Fig. 8, right) stores the media objects (the digital equivalents of the artworks) which constitute the repository of the system and is indexed by a relational database (a MySQL database);

- a set of **Web Services** (Fig. 8, left) implement the Application Programming Interface specific to each visualization client. This component extracts the data from the ontology in response to the requests of the clients. The web services, written in Java, are called by the visualization clients to respond to the actions of the user, and return the data in XML format. For example, in the 3D environment, when the user clicks on a door leading to a given artwork, the visualization client calls the API command that fetches the information about the artwork, needed to generate the node with the artwork in the 3D maze.

- **the Visualization Clients** (Fig. 8, bottom) support the interaction with the user through 3D navigation, as a standalone application (for the 3D system) or embedded in a web browser (for the web interface).

The core of the system consists of the APIs that fetch the data from the ontology to the visualization clients. The interplay among these components realizes the mapping of the objects and relations encoded in the ontology onto the environments where they can be visualized by the user.
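As an illustration of how a web service of this kind might pull data from the SPARQL endpoint, the sketch below uses the Python SPARQLWrapper library against a hypothetical endpoint URL; the endpoint address, prefix and function name are assumptions (the project's web services are written in Java), while the query mirrors the relation-extraction query shown in Section 5.2.

```
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical address of the SPARQL endpoint exposed by the ontology server.
ENDPOINT = "http://localhost:8080/labyrinth/sparql"

def artifacts_displaying(character):
    # Return the artifacts that evoke the Labyrinth archetype and display `character`.
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX : <http://www.AO.org/labyrinth#>
        SELECT ?a WHERE {{
            ?a :evokes :Labyrinth .
            ?a :displays :{character} .
        }}
    """)
    results = sparql.query().convert()
    return [b["a"]["value"] for b in results["results"]["bindings"]]

# Example: the character-based relation used in the navigation example.
# print(artifacts_displaying("Theseus"))
```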
In the following, we describe in detail the interaction between the Ontology-to-3D API and the 3D visualization client. The interaction between these two components achieves the computational creativity that the user can enjoy by navigating the repository in the virtual maze.

\(^{18}\)https://www.w3.org/TR/sparql11-overview/

5.2. The system at work

The topology of the maze is computed locally as the user progresses in her/his path. This choice is partly related to the user experience design and partly related to optimization issues. Concerning the user experience, the step-by-step generation of the maze provides room for the adaptive personalization of the navigation experience, which can be tailored to the typology and behavior of the user given the available relations over the artworks. Currently, the variability of the navigation is provided by a basic random mechanism. Since the system does not pose any constraints on the number of related artworks, the available relations over them may exceed the number of doors (set to 6 in the current implementation); when necessary, 6 artworks are randomly extracted. As a result, in different navigation sessions, different artworks may be extracted, thus generating slight variations in the user experience. Concerning the optimization issues, the step-by-step generation of the topology guarantees that the computation needed to generate the maze is not affected by the size and the connectivity degree of the repository\(^{20}\) (which only affect the execution of the queries). Moreover, if the repository changes, no initialization is needed. Notice also that this solution is made possible by the fact that the 3D environment does not encompass a top-down, map-like visualization of the maze, with the consequence that the user can only experience a subjective view of the environment. This choice, although debatable for the lack of orientation it may provoke in the user, is consistent with the actual experience of the real hedge maze by which the design of the 3D system is inspired. Basically, the maze is generated on the fly as follows: when the navigation begins, the system retrieves from the ontology the information about the first artwork, and generates only the portion of the maze which describes this artwork and its connections, namely a node containing the artwork and the pathways which represent its semantic relations with the other artworks. Each pathway represents a relation type (e.g., story, location or agent/character), as illustrated by Figure 3 (right), where the doors leading to the pathways for “story” and “agent” are visible. When the user makes the next choice by selecting the pathway she/he wants to take, the next portion of the maze is created.\(^{19}\) If the selected pathway leads to a group of artworks (i.e., the relation it represents contains multiple artworks), the system creates an empty node whose function is to redirect the user to the single artworks, each placed in a different node (see the example in Fig. 4, right, with doors for the single artworks). A direct pathway is generated only if the selected pathway leads to a single artwork.

\(^{19}\)Notice that, if the user backtracks in the same session, the nodes that have already been visited are not generated from scratch, to let the user orientate her/himself.

\(^{20}\)The prototype ontology currently contains 1211 triples, but a wholly functioning system would be much larger.
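The step-by-step generation just described can be sketched as follows; the helper names are hypothetical stand-ins for the getNodeInfo() call detailed in the next paragraphs and for the Unity-side node construction, and the fragment is illustrative rather than the system's code.

```
import random

MAX_DOORS = 6  # doors per node in the current implementation

def expand_node(artwork_id, get_node_info, visited):
    # Generate the portion of the maze around `artwork_id` on demand.
    # `get_node_info` stands in for the getNodeInfo() API call and is assumed
    # to return a mapping relation_type -> list of related artwork ids.
    if artwork_id in visited:
        return visited[artwork_id]        # already-visited nodes are reused, not rebuilt
    doors = {}
    for rel_type, related in get_node_info(artwork_id).items():
        if len(related) > MAX_DOORS:
            related = random.sample(related, MAX_DOORS)   # random extraction of 6 artworks
        doors[rel_type] = related         # multiple artworks: the door leads to an empty
                                          # relation node; a single artwork: direct pathway
    visited[artwork_id] = doors
    return doors
```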
The rationale behind this strategy is to enforce the 1:1 mapping between artworks and nodes, so that a node always contains a single artwork. In the following, we describe the algorithm executed by the 3D visualization module to manage the interaction with the user, supported by the ontology-to-3D API (see Figure 9). The visualization client (3D Labyrinth, bottom of figure) queries the ontology (top of the figure) through the ontology-to-3D API (see Section 5.1). The generation of the 3D labyrinth is accomplished through the following steps:

**Initialization.** The session begins when the user starts the application on the client device.

- **Session start.** First, the 3D client queries the ontology server to get the list of the available archetypes through the `startLabyrinth3D()` command (Fig. 2, left).

- **Archetype selection.** When the user chooses one of the available archetypes, the client sends the selected archetype to the server (`setArchetype()`).

- **Navigation initialization.** The client invokes an initialization command (`initialize()`) that selects a random pair of artworks: they provide, respectively, the initial and target nodes (Fig. 2, right). When the user clicks on the “start” command in the interface, the navigation begins.

**Node generation.** At this point, the next node is generated, until either the user reaches the target node or she/he exits the labyrinth (by clicking the exit button positioned in the navigation console). This loop is repeated each time the user selects the next artwork.

1. **Retrieval of relations.** The client queries the ontology to get the information about the chosen artwork (or the initial artwork at the beginning of the navigation) through the command `getNodeInfo()`. This command is the key to the mapping of the semantic relations encoded in the ontology onto the 3D labyrinth: given an artwork, it returns the identifier of the digital resource that represents the artwork in the media repository, its metadata (the information about its creator, title, etc.) and the list of its semantic relations (character-based, story-based, location-based relations, and so on) with the other artworks. The client employs the information about the artwork’s relations with the other artworks to build the node that will contain the artwork, and stores the digital resource and the information about the artwork to generate the 2D panel describing the artwork in case the user requires it (as exemplified in Fig. 3, left, Fig. 5, left, and Fig. 6). To retrieve these data from the ontology, `getNodeInfo()` executes a set of SPARQL queries on the ontology, one for each possible type of semantic relation. For example, the following query extracts from the ontology the set of artworks ?a that are evocative of the archetype of the labyrinth (:evokes) and display the character of Theseus (:displays):

```
SELECT ?a
WHERE {
  ?a :evokes :Labyrinth .
  ?a :displays :Theseus .
}
```

By executing similar queries for all the semantic relations embedded in the system (namely, story, character, event, location, epoch and object), the system collects all the available relations connecting the selected artwork with the rest of the repository.

2. **Computation of topology.** The method `getNodeInfo()` returns an XML fragment describing the selected artwork and its related artworks; in practice, the XML contains a section for each semantic relation type (agent, story, etc.). For example, the fragment returned by invoking `getNodeInfo()` on the artifact displayed in Fig.
5 (left), a Greek vase of the 5th century BC displaying Theseus killing the Minotaur, contains the set of artworks (or artifacts, as artworks are generically termed in the AO) related to the input artwork, indexed by relation type: the example shows the story relation (artworks tagged as \textit{rartifactstory}) and the character relation (artworks tagged as \textit{rartifactagent}). The example response contains (among other artifacts not listed in the example) two story-related paintings, “Minotauromachia” by Pablo Picasso and “Ariadne and the Thread” by the Italian painter Pelagio Palagi, and two character-related artworks, namely an anonymous statue representing a sleeping Ariadne (situated at the Vatican Museums) and the frescoes depicting the myth of the Minotaur situated at the “Villa Imperiale” in Pompei. Notice that the latter two artworks are displayed in Fig. 5 (right) as available alternatives after the user has chosen the story relation from the node containing the Greek vase (Fig. 5, left): in the figure, they are termed, respectively, “Arianna dormiente” (“Sleeping Ariadne”) and “Affreschi della Villa Imperiale” (“Frescoes, Villa Imperiale”). At this point, the topology of the labyrinth is computed. For each semantic relation (in the example response: \textit{<rartifactstory>} and \textit{<rartifactagent>}):

– if the relation contains a \textit{single artifact}, an artwork node is created to represent it and a pathway is added from the chosen node to the new artwork node;

– if the relation contains \textit{multiple artifacts} (as in the standard case), a relation node is created and a pathway is added from the chosen node to the relation node; for each artifact, then, an artwork node is created and a pathway is added from the relation node to each of the new artwork nodes (see Fig. 3, right).

3. **Generation of the labyrinth.** Based on the topology computed above, the next node of the 3D labyrinth is created and added to the 3D environment, together with its exiting pathways. When the user chooses a new artwork (either directly connected to the current one or indirectly, via a relation node), the loop is repeated.

**End of session.** When the user either reaches the target node or clicks on the exit button, the client executes the `endLabyrinth()` command to visualize the statistics of the session (time elapsed, visited nodes, etc.) and closes the session.

6. Lessons learned

We carried out an evaluation of the 3D interface of the system, in order to gather information about the users’ liking of the system and their expectations about its use. The evaluation took place at a scientific fair, with some users taking part in participatory demos and others freely interacting with the system. The experimentation is described in detail in [41]: here, we only summarize the most important results, which are relevant for discussing the potential and the possible applications of Labyrinth 3D. 41 testers took part in the evaluation, males and females, with ages ranging from 10 to 67 years old. The system was very well received by the visitors of the fair, in particular by students and teachers, who were enthusiastic about its potential for education and dissemination.
The ethnographic observation of the testers who interacted directly with the system showed that the navigation was generally easy, with some problems in clicking the navigation controls when they were located far away along the pathways, because the distant controls tend to be small due to the perspective. Users were sometimes bewildered at finding themselves in a node they had already visited, but were ready to accept the explanation that this is typical of labyrinths. The users tended to read carefully the information displayed about the single items, reasoning aloud about their connection with the archetype and with the previously visited nodes. A questionnaire was given to the users to assess their liking of the system and their preferences about the use contexts. The questions about the use of the system revealed that the users would prefer the PC and the tablet for using the system, a finding that is in line with the goal of the project of creating an immersive experience. When asked about similar media, the users selected the video game and the encyclopedia, also in line with the design goal of creating a tool for cultural dissemination. In particular, a group of 6 questions was aimed at investigating the general acceptance of the system: by using Likert scales (with 5 points from −2 to +2, mapped onto values from 1 (−2) to 5 (+2) in the subsequent data analysis), we asked testers to what degree the system was: i. intuitive, ii. interesting, iii. engaging, iv. useful, v. appealing, vi. straightforward to use. The average value of the answers to the questions concerning the acceptance was 4.5, with “interesting” as the highest average value (4.7) and “straightforward” as the lowest average value (4.32), indicating that the application was appealing but that its use was not entirely clear to some users. The values are illustrated in Table 1. As can be noticed, the standard deviation is not high, meaning that the testers generally agreed on a positive evaluation.

Table 1: Average values for the questions about perceived properties of the system, on a 5-point Likert scale.

| subquestion | SYSTEM PROPERTY | AVERAGE VALUE | ST. DEV. |
|---|---|---|---|
| i | intuitive | 4.35 | 0.72 |
| ii | interesting | 4.7 | 0.57 |
| iii | engaging | 4.41 | 0.74 |
| iv | useful | 4.48 | 0.61 |
| v | appealing | 4.5 | 0.74 |
| vi | straightforward | 4.32 | 0.68 |

The results of the evaluation suggest that the proposed approach works and open the way to reusing the architecture of the system for applications that leverage the creativity intrinsic to a cultural heritage archive to generate personalized paths in a 3D environment. A precondition to reusing the approach of the Labyrinth project is to abstract the experience gained in the design and implementation of the system into a pipeline for creating similar applications. Given our experience in the design and implementation of Labyrinth 3D, we propose the following pipeline, divided into three phases: **visual design**, **software development** and **editing**, each characterized by specific professional roles.
We skip the conceptual modeling phase, assuming that an annotated repository of cultural heritage objects is already available (for example, as part of some annotation project or as a by-product of a digitalization initiative). It is possible, however, that, for specific projects, an ad hoc ontology is developed to satisfy this requirement: if this is the case, an **ontology engineer** and a **domain expert** cooperate to design the ontology that will constitute the backbone of the system.

The **visual design** phase is aimed at bridging the gap between the conceptualization of heritage objects (the *archetypes* in Labyrinth 3D) and the users through the use of visual and spatial metaphors (the *maze* in Labyrinth 3D). As argued by [13], the choice of the metaphor is crucial to communicating the conceptual model. This phase should be conducted with the help of a sample repository where a few objects have been inserted to support the design process and the subsequent development phase. Given the annotated repository, the **interaction designer**, in cooperation with a **visual designer**, i) devises a suitable metaphor for conveying the description of the objects in the repository through the 3D environment (by mapping the object properties and their relations onto the features of the environment), ii) designs the interaction flow (specifying how the user can interact with the 3D environment and what responses he/she should get in each phase of the interaction) and iii) establishes the visual properties of the 3D environment, such as its mood and appearance. A **game designer** may be involved in this phase to insert elements of playability into the interaction. As shown by [42], in fact, the use of games in tandem with visual metaphors increases the levels of learning.

The **software development** phase translates the interaction design into 3D assets, staged and manipulated by a 3D engine. Once the interaction metaphor has been established, the 3D models that constitute the environment are created and arranged in a set of layouts by a 3D *production team*, together with animations and camera movements (in case the navigation is achieved by constraining the user to predefined movements, as in Labyrinth 3D). In parallel, the *semantic web developer* implements the queries that extract the object descriptions from the ontology (previously uploaded onto an ontology server) and makes them available by programming a web service accessible through an API. Finally, the *3D developer* programs the 3D environment so that it implements the interaction flow established in the interaction design phase.

In the **editing phase**, the cultural heritage objects are collected and annotated with the semantic metadata required by the conceptual model encoded in the ontology before being added to the repository. Metadata may include, for example, the relations of heritage objects with locations, artists, historical events, etc. Although professional *annotators* are preferred, metadata may also be contributed by amateurs through crowdsourcing, as recently proposed by [43].

7. Conclusion

In this paper, we described Labyrinth 3D, a system where the user can explore the semantic relations over a repository of cultural objects through a virtual maze where the objects are connected by pathways representing the meaning relations over them.
The approach of Labyrinth 3D leverages a systematic mapping of the conceptual model underlying the repository onto a virtual, 3D environment, to create an immersive and engaging experience for the user. Designed to provide an alternative to the standard approaches to archive navigation, Labyrinth 3D relies on the users’ curiosity to create personal paths in a cultural domain. In the next years, thanks to the advent of the paradigm of Linked Open Data [44], semantically encoded information about cultural heritage, including events, performances, collections, etc., will be available on the web from different sources, enabling the experimentation with new paradigms in the presentation and dissemination of cultural heritage. By applying the approach of Labyrinth 3D to the design of new applications, it will be possible to refine and improve the approach described in this paper through practical case studies. The ultimate goal of this research is to take full advantage of the whole range of the new media languages, such as 3D, to develop creative and innovative applications in the field of cultural heritage. As future work, we envisage the adoption of the software pipeline of Labyrinth 3D in educational projects. In this setting, in fact, our assumption is that the semantically guided narrative exploration of themes, characters, epochs, etc. can provide a ludic path to knowledge access for young students and may favour, from a gamification perspective, the process of knowledge acquisition. This would require both an extension of the current catalogue of the archetypes and an adaptation to the specific needs of the educational project considered.

8. Acknowledgements

The authors wish to thank Prof. Giulio Lughi for inspiration and discussion. Our thanks also go to Neos s.r.l. for bringing their insights and contributions to the Labyrinth project.

9. References

[38] D. A. Norman, Affordance, conventions, and design, Interactions 6 (3) (1999) 38–43.

[40] G. A. Miller, The magical number seven, plus or minus two: some limits on our capacity for processing information, Psychological Review 63 (2) (1956) 81.
(Re)framing built heritage through the machinic gaze

Vanicka Arora
University of Stirling, Stirling, United Kingdom of Great Britain and Northern Ireland

Liam Magee
Institute for Culture and Society, Western Sydney University, Penrith, NSW, Australia

Luke Munn
Research Fellow, Digital Cultures and Societies, University of Queensland, Saint Lucia, QLD, Australia

Abstract

Built heritage has been both subject and product of a gaze that has been sustained through moments of colonial fixation on ruins and monuments, technocratic examination and representation, and fetishisation by a global tourist industry. We argue that the recent proliferation of machine learning and vision technologies creates new scopic regimes for heritage: storing and retrieving existing images from vast digital archives, and further imparting their own distortions upon this gaze. We introduce the term ‘machinic gaze’ to conceptualise the reconfiguration of heritage representation via artificial intelligence (AI) models. To explore how this gaze reframes heritage, we deploy an image-text-image pipeline that reads, interprets, and resynthesizes images of several UNESCO World Heritage Sites. Employing two concepts from media studies—heteroscopia and anamorphosis—we describe the reoriented perspective that machine vision systems introduce. We propose that the machinic gaze highlights the artifice of the human gaze and its underlying assumptions and practices that combine to form established notions of heritage.

Corresponding author: Vanicka Arora, University of Stirling, D24, Pathfoot Building, Stirling FK9 4LA, United Kingdom of Great Britain and Northern Ireland. Email: vanicka.arora@stir.ac.uk

Keywords

Heritage photography, heritage gaze, machinic gaze, synthetic images, text-to-image models, generative AI

Introduction

Built heritage has a long, well-established relationship with visual representation, production, and consumption. Multiple scopic regimes have been in operation within the heritage industry and are continually evolving and diversifying, from careful artistic depictions of the romantic ruin, archaeological surveys, and cartographic representation through to photography, both as technical documentary evidence and as commercial tourist fantasy. Photography has long assisted in what Sterling (2019: 2) has termed the ‘mythic representation of heritage as ideology’, drawing attention to iconic or emblematic aspects of sites that reinforce narratives of power. The discussions around the ‘gaze’ in heritage have encompassed multiple ways of seeing. Chadha (2002: 380) suggests, for instance, with reference to the disciplinary project of archaeological photography in India, that multiple gazes are in operation simultaneously—the colonial, scientific, anthropological, and voyeuristic—while Wickstead (2009) considers the possibilities of moving beyond ideas of the male gaze and the Western gaze in archaeology, and instead approaching the gaze as diffused and ambiguous. For the purposes of our examination, however, we focus primarily on two established forms of viewing heritage—the tourist and expert gaze. Substantial work on heritage commodification and consumption builds on Urry’s (1990) conceptualisation of the tourist gaze (see for instance Watson and Waterton, 2016; Waterton, 2009; Santos, 2016), while discussions around the expert gaze (Bohrer, 2011; Moshenska, 2013; Smith, 2006; Winter, 2006) extend Foucauldian notions of the gaze to heritage.
Proliferating digital technologies and social media have intensified the heritage gaze and further complicated relationships between heritage and visual representation, especially in the context of photography. The introduction of machine learning technologies and generative artificial intelligence platforms that can now draw upon large archives of texts and images to resynthesise and produce ‘photographs’ in the absence of a ‘real’ object, temporality, or location are now positioned to substantially reconfigure these relationships. We argue that the emergence of computer vision and, recently, of machine learning systems trained on image corpora reproduce both forms of the heritage gaze, alongside other styles and subjects, retaining as they do so existing social biases (Offert and Phan, 2022). However, this reproduction is not pure. In their reconstitution of synthetic photographs of heritage sites, image generating systems such as Midjourney adhere to conventions with palettes and perspectives, but also at times inject the uncanny differences of an alien observer or subject (Parisi, 2019). The differences between these machinic outputs and human expectation seem to belong to a novel, *sui generis* mode of visual perception and production, which we describe in this paper, following Denicolai (2021), as the ‘machinic gaze’. ‘Gaze’ here serves a double purpose, referring to the technical algorithms that make up computer vision and to the general ‘way of seeing’ that shares and yet is distinguished from human forms of apprehension. By directing computer vision algorithms to interpret and resynthesise a controlled archive of images, we offer a partial response to the question of what, in relation to heritage, of the human gaze, in its tourist and expert orientations, is reproduced by the machine, and what if anything is instead introduced? More generally: What does the machine see when it looks at heritage? The expansive digitisation of vision has led to new possibilities in how machines consume and produce images (Azar et al., 2021). Social media image agglomerations have been systematised and organised into vast archives. With respect to these systems, two distinct kinds can be distinguished: image-to-text auto-captioning systems such as BLIP-2 (Li et al., 2023; Schuhmann et al., 2022; Zhang et al., 2023), and text-to-image generative systems such as Stable Diffusion, Midjourney, and DALL-E (Midjourney, 2022; Mostaque, 2022; Ramesh et al., 2022). We discuss the implications of both systems, though our focus is on the second, more novel system. With these generative AI models, the input of a text ‘prompt’, an instruction made up of typically English words that specify a subject, style, and format, generates synthetic images that, despite having no direct referent in their training sets or archives, can integrate parts of that prompt in often evocative and striking ways. We focus on how this apprehension works to reproduce visual representations of heritage sites that have been subject to the explicit focus of both tourist and expert gaze. After a discussion of how to conceptualise the gaze, we describe experiments with machine-generated text and images, based on a small sample of images from UNESCO’s World Heritage archive of sites. These experiments employ technical methods, using software libraries and machine learning systems, to read and decode these images into textual prompts, and then render those prompts as candidate reimaginings of the original images. 
We then comment on these machine-synthesised images and consider how these relate to both prompts and source images and conclude with implications of what the fast-moving field of machine learning might mean for the visual representation and production of heritage. We undertook this exercise with three objectives. Our first and central objective is to consider the ways in which the image model captures, ‘understands’, and recreates the heritage site and the specific gaze directed towards these sites. The second is gaze-directed exploration of the politics of visual representation of global sites of heritage through the medium of the synthetically produced image. Properties of this synthesis, we argue, can condense and refract highly disparate human representations of heritage, marking out more clearly its own preoccupations and ideological attachments. The third objective is to begin to set out some of the parameters of the emergent relationships between heritage and synthetic photography. Using a ‘textual’ prompt to produce an image, we highlight the presence and endurance of the heritage gaze embedded in both text and image archives, mediated, and intensified through the machine. Our goal is not to assess the fidelity of auto-captioning or image generation systems or investigate these systems’ capabilities to reproduce or extrapolate existing image archives. Rather, through our description of the machinic gaze, we hope to extend long-standing questions around the visual with respect to heritage—the heritage gaze, authenticity (or its absence), sense of place, and the commodification of sites for tourist consumption in the context of emerging forms of generative AI. **Conceptualising the machinic gaze** The gaze, often with attached qualifiers (‘male’, ‘colonial’), has an extensive history in heritage and adjacent fields of cultural studies (Wickstead, 2009). A common thread to distinct conceptualisations, from Mulvey’s (2013) seminal essay on the male gaze to Urry and Larsen’s (2011) discussion of the tourist gaze, is that *seeing* is never only a perceptual act, but is always informed by background assumptions, desires, prejudices, and power relations that inform interpretation of what is seen. Following work by media scholars (see Offert and Phan, 2022), we argue that despite the complexity of its datasets, training process, and software architecture, the machinic gaze as manifest in machine learning systems is similarly a social product. However, its relationship to diverse human gazes is not simply mimetic; rather it reproduces elements into representations that are often banal, and sometimes surreal and novel. While the trained machine has nothing to reference apart from its training set, at a certain scale and complexity mechanical *reproduction* can resemble an *introduction* of a novel palette, elements, and vision. To conceptualise this process of transformation in the context of heritage, we begin with a discussion of two dominant modes of the heritage gaze: the tourist and the expert. Boundaries between the two are not always clear-cut, particularly now, as consumer devices and services make expert visions more accessible. Yet the dichotomy identifies imagistic qualities that help to account for certain aspects of the machine gaze, and to characterise what also distinguishes that gaze from dominant human vision paradigms. 
In the context of archaeological monuments and sites, both tourist and expert ways of seeing have been further tied to forms of mechanical apprehension and capture since the inception of photography (Dicks, 2000; Sterling, 2016; Watson and Waterton, 2016). Shaped by a collectivised desire to witness scale and history, the polyvalent and complex tourist gaze (Urry and Larsen, 2011), alongside a supporting apparatus of travelogues, transport, and curation, has been stretched and magnified through the proliferation of social media platforms (Barauah, 2017; Oh, 2022). The expert gaze is similarly polyvalent, informed by disciplinary regimes ranging from archaeology and anthropology to architecture and conservation. The desire to document, authenticate, evaluate, and structure the object is central to this gaze, as is the construction of distance and objectivity (Beck and Sorensen, 2017; Bohrer, 2011; Wickstead, 2009). As other scholars have argued in relation to recent practice, this ‘distance’ is itself a multilayered phenomenon: one form of archaeological gaze reprises a positivist, scientific, and masculinist view of heritage observed, for instance, via top-down satellite imagery and GIS maps, while another—characterised as ‘critical GIS’ (Hacigüzeller, 2012) or even ‘gaze-critical’ (Wickstead, 2009)—looks back reflexively on the techniques of archaeological production. In discussing the ‘Europeanness’ of heritage, Niklasson (2017) suggests a similar, more politically inflected distinction between past-preserving conservation and a present-oriented openness towards flexible interpretation. Across these distinctions, the expert orientation is still distinguished from that of the tourist by a precise and particularist knowledge, which transfers to the preferred instruments, perspective, and types of attention directed toward heritage. Similarly, the tourist gaze has been theorised as layered and multifocal. MacCannell (2001) draws upon the Lacanian conception of the gaze that stresses the effect of viewing upon the heritage spectator themselves, a move which recuperates the agency of the heritage observer. Viewing heritage does not simply involve a consuming tourist or calculative expert state but may effect a transformation of the spectator into a subject aware of their own historicity. The tourist experiences for example the strange sense of becoming an object for some other, future viewer or visitor—and as this object, also becomes a proper subject. Resisting efforts to subsume all touristic appreciation to that of cliche, Sterling (2016) has similarly argued that the seeing tourist is also an embodied figure, one who apprehends their own materiality in heritage encounters, and to varying degrees is also managed through deliberately arranged scaffolds and signs by heritage site managers. The body, in Sterling’s account, in a certain sense anchors the otherwise cliched gaze within the singularity of the individual subject. Both ways of seeing belong to a history of apprehension entwined with developments in optical technology (Kittler, 2010). Tourists, archaeologists, and other forms of expert viewing coordinate within networks of technical visuality: observing via a camera, decomposing an image, studying a map (Hacigüzeller 2012; Sterling, 2019; Urry and Larsen, 2011), or constructing virtual and immersive environments (Champion, 2019; Forte, 2007). 
However, image-making AI systems do more than mediate, analyse, or mechanically reproduce (following Benjamin, 1986), and so seem to ask for a conceptual expansion of the machinic gaze. In generative systems, these patterns are mapped to words, so that when a prompt is submitted, the individual parts of the prompts serve as queries for finding these patterns; the patterns are then merged to produce a final image output. Data is however supplied as a closed set from which these patterns are learned. Unlike with photography or painting, there is no situated and embodied subject who encounters an object in what Crary (1990) terms the ‘real’ of human vision and perception. It is instead as though an image was produced by an artist forever trapped in a room, with only a captioned picture book for reference. We propose that two specific operations of the machinic gaze can be identified through its reading and synthesising of images of built heritage. The first of these operations is anamorphosis. An old term of the pictorial arts describing the deformation of an object under different perspectives, anamorphosis was refreshed by Lacan (1998) to illustrate the distorting effects of unconscious desires on visual perception and cognition. In an analogous way, we describe the machine gaze as ‘anamorphic’ when it suggests unusual or bizarre affinities, provoked by what for a human viewer appear as accidental and unintentional, rather than essential properties of a source image or description. Extending this Lacanian connection, as MacCannell (2001) has earlier done in relation to the tourist gaze, anamorphosis also details the moment of reflexive human surprise at the realisation, in the face of machinic interpretation, of the contingency of their own ways of seeing. What appears first as technical error of translating prompt into image invites further questions as to how and why we perceive it as an error. In other words, this specific operation of the machinic gaze allows us to reflect upon our expectations of heritage representation. Specifically, it allows us to interrogate our expectations of specific aesthetic values, forms, and style more closely in the outputs of generative AI. In the context of heritage, this offers up the possibility of querying specificities of the tourist and the expert gazes. The second operation of the machinic gaze we attribute to the composite and synthetic character of generative AI systems like Midjourney and Stable Diffusion, which we describe as heteroscopia, a term coined by Jaireth (2000) in the context of Indian cinema. Jaireth gives heteroscopia two meanings: the first refers to a historical scopic regime or visual culture, while the second refers to how a given image may incorporate or reference other images, and so be more or less heteroscopic. Our own use adapts this second sense to the context of computer vision. In image generating systems, all outputs are essentially heteroscopic: they come from nowhere other than from an archive of existing images. The technical act of ‘diffusion’ in models like Midjourney and Stable Diffusion involves a twin process of adding noise to and subtracting it from an image corpus to learn to discriminate forms, styles, and colour compositions (Croitoru et al., 2023). These visual elements are related to captions in the corpus, and the training process produces in effect a network between visual elements and caption terms. Once the model is trained, prompts function as queries that in combination produce a synthetic image. 
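To make this ‘twin process’ more concrete, the following sketch shows, in deliberately simplified Python, how a diffusion model is trained to predict and subtract noise that has been added to images, conditioned on an embedding of the caption. It is a minimal sketch of the general technique, not of Midjourney’s or Stable Diffusion’s actual code: the denoiser, noise schedule, and caption embedding are placeholders, and production systems add latent encoders, text encoders, and samplers not shown here.

```python
# Schematic of diffusion training: noise is added to an image (forward process)
# and a denoiser learns to predict that noise, conditioned on the caption.
# Placeholder denoiser and data; real systems use U-Nets and CLIP-style text encoders.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def add_noise(x0, t):
    """Forward diffusion: blend a clean image tensor x0 with Gaussian noise at timestep t."""
    noise = torch.randn_like(x0)
    return alphas_cumprod[t].sqrt() * x0 + (1.0 - alphas_cumprod[t]).sqrt() * noise, noise

def training_step(denoiser, x0, caption_embedding):
    """One schematic optimisation step: the denoiser is scored on how well it predicts the added noise."""
    t = torch.randint(0, T, (1,)).item()
    noisy, noise = add_noise(x0, t)
    return F.mse_loss(denoiser(noisy, t, caption_embedding), noise)

# At generation time the process runs in reverse: starting from pure noise, the
# trained denoiser is applied step by step, steered by the embedding of the prompt,
# so that the forms, styles, and colour compositions associated with its terms emerge.
```

It is this reverse, prompt-steered pass that constitutes the act of synthesis discussed next.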
This act of synthesis can sometimes reproduce a dominant gaze, and at others draw together disparate or incongruent elements into surrealistic montages or hybridised palimpsests. Heteroscopia here refers then to the extent and variation of gazes these systems render as outputs in response to prompts. These two terms enable a move from the general technical operations of machines to a characterisation of the machinic gaze as applied to the heritage image—as something modelled on a codified and deracinated human vision that equally, as Parisi has noted (2019), apprehends its world through an uncanny and alien lens. The heteroscopic property allows for exploration and partial explanation of how the eventual image output appears as some composite of tourist, archaeological, and other forms of gazing—it describes the relation of the generated image to its inferred image sources. The anamorphic property captures instead the situation where the machine output traverses human conventions and expectations in the relation of image to text. When directed towards heritage, the machinic gaze reveals, at different moments, a dominant heritage scopic regime as well as moments of divergence that elicit opportunities for re-engagement. Methods In this section we explore this conceptual understanding of the machinic gaze through production of a small dataset of synthetically generated images. The dataset was developed to enable contrast across three dimensions: (1) publicly available image models; (2) visually distinctive heritage sites; and (3) expert-authored text and text derived from analysis of source photographs. Though specific to our study here, aspects of the approach outlined below suggest other uses in archaeological research, from auto-captioning to novel forms of image archive analysis. The machine, as we note, pays attention to what is presented in images and texts differently, and while our purpose here is primarily to study that difference itself, we also acknowledge it can complement and correct the researcher gaze. To that end, we include in the Appendix links to code and datasets to allow replication and further exploration. We produced the dataset in a sequence (Figure 1). First, we selected digital photographs from the UNESCO World Heritage Sites online archive. We then ran these images with three algorithmic interpretations (BLIP-2, Google Vision API, image EXIF metadata) to assemble a brief textual description for each image. These assembled descriptions were submitted in turn to three image generation systems in the form of textual ‘prompts’ (Stable Diffusion, Realistic Vision, and Midjourney) to produce a series of image samples—120 in total. Finally, we interpreted these images in terms of subject, composition, and deviations from the source images. We briefly discuss each of these steps below. Image selection We used the archive of photographs from the official website of UNESCO’s World Heritage Sites as base images. Most of these photographs were taken by experts appointed directly by UNESCO’s World Heritage Programme Office or by individual State Parties and are intended to simultaneously serve as official visual documentation of the site and ostensibly communicate a sense of its ‘outstanding universal value’ for a general audience. 
In order to limit our search, we filtered images on two conditions: inclusion in the UNESCO World Heritage in Danger list and meeting criterion (iv), ‘to be an outstanding example of a type of building, architectural or technological ensemble or landscape which illustrates (a) significant stage(s) in human history’ (UNESCO, 2008). Of the 31 results returned, we then chose single Creative Commons-licensed photographs of five sites that contrasted with each other with respect to photo range and perspective, building typology, geographic region, historical style, and site description. This selection of sites was intended to highlight variations in the operationalisation of the heritage gaze and is not related to the sites’ individual histories or World Heritage trajectories. ![Figure 1. Machinic reading, prompt generation, and image synthesis pipeline.](image-url) Expert and machinic readings of heritage sites We employed two techniques to produce prompts. The first takes the UNESCO-supplied description as an ‘expert’ view of the site. The second applies three computational techniques to extract information from the selected site photos, and combines these into a synthetic, automated prompt. Technique one uses both the visual and textual cues reflecting the expert gaze, reflected in UNESCO’s WHS descriptions. The selected photographs are taken by UNESCO-appointed experts and a related group of expert group authors. Both privilege expert gaze of the sites themselves, attentive to what is most salient, distinguishing, or ‘outstanding,’ and what therefore must be compared against a repertoire of other built forms and sites. In technique two, we first extract captions via BLIP-2, a vision-language based model which uses pre-trained image encoders and large language models to extract image captions (Li et al., 2023). The generated captions are short and generic, often five to 10 words. To enrich the prompt text further, we combined the caption with comma-delimited labels extracted from Google Vision’s Application Programming Interface (API). These labels included computed image properties such as dominant colours, objects, locations, architectural features, geometric shapes, natural features, and colour schema, as well as individual objects. Each label includes a relevance probability, and we included the top 30 labels. In many cases, labels were redundant, misidentifying, or overly specific, and we pruned the list manually. Finally, we added metadata extracted from the digital image itself, including the type of camera, focal length, exposure time, and use of flash. Machinic synthesis and its interpretation The prompts or instructions produced through both methods were then submitted to three image generating systems. We selected Stable Diffusion and Midjourney, two systems widely discussed in 2023. Stable Diffusion is a text-to-image model that has been made open source by its developer, StabilityAI, and can be downloaded and operated on consumer devices. Midjourney is a service-based system that requires a subscription to operate, via the Discord social media platform. Both perform similar functions, converting a natural language prompt into one or more images that aim to ‘represent’ that prompt in a meaningful way. We used the latest versions of these two systems at time of writing: version XL in the case of Stable Diffusion and version 5.2 in the case of Midjourney. Stable Diffusion models can be adapted or ‘fine-tuned’ on much smaller data sets of images to produce styles or aesthetics. 
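To make the two steps just described more tangible, the sketch below strings together one possible implementation of the machinic reading (BLIP-2 caption, Google Vision labels, EXIF metadata) and of the synthesis step (a Stable Diffusion pipeline asked for four images per prompt, as in our sampling design). The checkpoint identifiers, thresholds, and helper names are illustrative assumptions rather than a record of our scripts, which are linked in the Note at the end of the article; Midjourney, as a closed service operated through Discord, has no equivalent programmatic interface.

```python
# Illustrative sketch of the reading-and-synthesis pipeline: caption (BLIP-2),
# labels (Google Vision), camera metadata (EXIF), assembled into a prompt and
# passed to a Stable Diffusion pipeline. Model IDs and settings are placeholders.
import torch
from transformers import Blip2Processor, Blip2ForConditionalGeneration
from google.cloud import vision
from diffusers import StableDiffusionXLPipeline
from PIL import Image
from PIL.ExifTags import TAGS

def blip2_caption(path):
    processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
    model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output[0], skip_special_tokens=True).strip()

def vision_labels(path, top_n=30):
    client = vision.ImageAnnotatorClient()  # requires Google Cloud credentials
    with open(path, "rb") as f:
        response = client.label_detection(image=vision.Image(content=f.read()),
                                          max_results=top_n)
    # Redundant or misidentifying labels were pruned by hand in our workflow.
    return [label.description for label in response.label_annotations]

def exif_fragment(path):
    exif = Image.open(path).getexif()
    tags = {TAGS.get(k, k): v for k, v in exif.items()}
    tags.update({TAGS.get(k, k): v for k, v in exif.get_ifd(0x8769).items()})  # Exif sub-IFD
    return (f"Shot with a {tags.get('Model', 'unknown camera')}, "
            f"exposure time of {tags.get('ExposureTime', 'n/a')}, "
            f"focal length of {tags.get('FocalLength', 'n/a')}")

def machinic_prompt(path):
    """Caption + comma-delimited labels + camera metadata, as in the prompts quoted below."""
    return f"photograph of {blip2_caption(path)}, {', '.join(vision_labels(path))}. {exif_fragment(path)}"

# Synthesis: four candidate images per prompt, as in our sampling design.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
prompt = machinic_prompt("djenne_unesco.jpg")  # hypothetical local copy of the source photo
for i, image in enumerate(pipe(prompt=prompt, num_images_per_prompt=4).images):
    image.save(f"djenne_machinic_{i}.png")
```

Fine-tuned checkpoints load through the same interface, only with a different pipeline class and model identifier; the dominant-colour hex codes visible in the quoted prompts come from Google Vision’s image-properties endpoint, omitted here for brevity.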
For further contrast we used an older version of Stable Diffusion (version 1.5) fine-tuned to generate photorealistic images, in a model named ‘Realistic Vision’. For each of the five sites, we applied the prompts generated through the approach described above to each of the three systems. We specified each system to generate four images for each prompt, producing a data set of 60 images (five sites x three systems x four images). For comparison, we also applied the UNESCO-supplied description for the selected site as a prompt to the same combination of site, system, and image variations, doubling the size of our data set to 120.1 Finally, we interpreted these sets of images in terms of their composition, form, subject selection, framing, colour palette, and aesthetic style. This interpretation, as we reflect upon in our findings and discussion, involves reflection upon the acts of seeing and reading of machine-generated images. It builds necessarily upon our own backgrounds in heritage and media studies, and consequently involves a specific form of what has been theorised as the expert gaze. Despite the limits of such interpretation, we look to avoid a specific judgement upon these machinic productions in terms of their approximation to some notion of ‘ground truth’ or as a quantitative exploration of bias within the underlying datasets of these systems (Salvaggio, 2022), instead focusing on unusual objects and style elements. **The machine imagines heritage** We discuss here three sets of images that contrast internally (across models and prompt) and externally (across sites). We use ‘H-M’ and ‘M-M’ to distinguish human-prompt-machine-generated from machine-prompt-machine-generated images. **Old towns of Djenné** Figure 2 shows a mid-distance elevational aspect of the mosque of Djenné, which is one the key structures identified in the description of the World Heritage Site. The adobe mosque appears in multiple photographs of the site, as one of the distinctive architectural landmarks within the urban ensemble of Djenné. The photograph frames the mosque tightly, editing out the immediate context of the marketplace or townscape that surrounds the mosque. Figure 3 shows results of the ‘H-M’ process: four outputs (in rows) of three image models (in columns), in response to the prompt that was extracted from the description of Djenné on the UNESCO World Heritage Site website, which included phrases like ‘typical African city’, ‘intensive and remarkable use of earth’, ‘mosque of great monumental and religious value’ (UNESCO, 2023). In the case of Stable Diffusion (both versions), while the colour scheme of the UNESCO image is retained, no version of the mosque is produced in any of the images. SDXL (left column) shows, at different resolutions, a grid-like configuration of mud brick structures that approximates the sub-Saharan vernacular, but without the specificities of Djenné’s architectural proportions or ornamentation. The Realistic Vision outputs (centre column) produce an approximation of a generic sub-Saharan settlement, small adobe buildings with thatched roofs—neither characteristic of the mosque nor of the general town. Midjourney (right column) produces images that are quite distinct from the reference image, but that resemble other images of the townscape of Djenné, showing markets, houses, and people in transit. 
This set of images shows the compositional nature of Midjourney’s generated images: in each case some version of the mosque is recognisable, but in the background, shot in shadow and occasionally at oblique angles. People in the foreground feature in a quasi-cinematic way: in two cases, one or two people appear close to the presumed camera, as though on a journey, while more distant figures appear as accidental subjects. In all cases, people appear in some variant of an assumed local dress. For the ‘M-M’ reading of the source image, we obtained the following: *photograph of a large sand castle with people walking in front of it, Building center, Sky, Cloud, Travel, Landscape, Sand, Aeolian landform, Facade, History, Ancient history, Archaeological site, Historic site, Art, Arch., Soil, Horizon, Singing sand, Tourism, Castle, Desert, Tourist attraction. Colors: #bb9667, #b7c1c3, #b78e5c, #9b7344, #977649, #6f4b1f, #aa9373, #694e26, #634d2d, #8d795a Shot with a E3700, at a resolution of 300 pixels per inch, year 2005, exposure time of 5/1806, Flash did not fire, auto mode, focal length of 27/5* The first part, in italics, represents the BLIP2 caption; the second, in bold, a textual representation of properties extracted from the Google analysis; and the third, metadata properties of the source image. The misrecognition of the mosque as a sandcastle in the machinic reading can be attributed to an alignment of the language of castles with similar visual patterns in the training data. Figure 4 represents the outputs, following the same pattern as Figure 3. In each case, the anchoring characteristic is the first part of the prompt, ‘large sandcastle’. In one case (Midjourney, bottom), there are recognisable aspects of the source image, including the mosque’s exterior ornamentation. But in most cases the ‘castle’ produced more closely resembles a Disneyfied castle caricature, diverting quite starkly from the rectilinear form of the mosque. A recurring similarity in most of the images produced is a lack of surrounding built context: both the mosque and the castle appear to be isolated monumental objects in the frame. In other respects, and despite the prompt specifying colours, camera type, and exposure time, the reference to a sandcastle appears to over-determine the colour palette and saturation level. Compared to Figure 3 (H-M), in Figure 4 (M-M) the sand, of both castle and foreground, is lighter, and the sky clear rather than hazy. Keywords such as ‘history’ and ‘archaeology’ also change the sense of scale and context, with the implied camera position being now more distant. The scene is also deracinated: the form of the ‘castle’ is drawn from a wide range of typological and stylistic references, and though diminutive, the ‘people’ referenced in the prompt are dressed in global rather than ‘local’ attire, tourists who apprehend the monumental structure rather than locals who live around it. The presumed holder of the gaze is, in other words, no longer solely a figure imagined as behind the camera, but firmly embedded within it. Old City of Sana’a Figure 5 shows the original UNESCO World Heritage Site image (top) of the Old City of Sana’a, along with two generated outputs from Midjourney 5.2: the first (middle, human-machine or ‘H-M’) is the result of the UNESCO, human-authored description used as a prompt, and the second (bottom, machine-machine or ‘M-M’) the output of the machine-generated prompt. 
In this case, the UNESCO description places emphasis on the cityscape, with phrases like ‘rammed earth and burnt brick towers’ and ‘densely packed houses, mosques’, but also specifies colours—‘white gypsum’, ‘bistre colored earth’, ‘green bustans’. The BLIP generated prompt correctly identifies the image subject—’old city of Yemen’—but also the frame: ‘an aerial view.’ Figure 5. Top: Old City of Sana’a (Yemen), author: Maria Gropa, copyright: © UNESCO, reproduced with a CC3 license. Middle: A synthesised image using the UNESCO World Heritage Site description as prompt (model: Midjourney). Bottom: A synthesised image using a machinic prompt (prompt uses BLIP2, Google Vision API, and metadata, and image generation uses Midjourney v5). Here the machine-generated prompt was: photograph of an aerial view of the old city of Yemen, Building center, Sky, Daytime, Window, Architecture, Landscape, City, Urban design, Landmark, Cityscape, Facade, Roof, Human settlement, Urban area, Medieval architecture, Metropolis, Arch, Mixed-use, Archaeological site, Ancient history, Historic site, History, Turret, Dome, Town, Monument, Bird’s-eye view, Tourism, Classical architecture, Holy places. Colors: #cdc3b8, #cfc2b1, #ab9b8a, #a69b93, #e7ddd2, #83766e, #887868, #e9dccb, #5f534d, #3f342e. Shot with a DSC-T9, at a resolution of 72 pixels per inch, year 2009, exposure time of 1/500, no flash, focal length of 1139/100 As with the Midjourney outputs for Djenné, both generated outputs show a tendency to emphasise geographic features identified in the textual description or source image. Mountains are exaggerated, and in the ‘H-M’ case parts of the city hug a cliff-face and overlook a river, in sharp contrast to the source photo. Tonally, the ‘H-M’ image also employs stronger use of contrast (brightly illuminated buildings on the left compared to those in shadow on the right), and a greater colour dynamic—browns, vivid blues, and varying greens—reflects the especially chromatic verbal description (‘spacious green bustans’). The ‘M-M’ image, on the other hand, is strikingly similar, both in broad elements of architectural form and image composition, to the source. Just as with ‘giant sandcastle’ in the case of Djenné, here both the identification of aspect (‘aerial view’) and location (‘old city of Yemen’) work to determine scale, perspective, and chromatism of images for all three models. In the case of the selected Midjourney image, the identification of an ‘arch’ object by the Google API—barely discernible in the source image—is brought into the fore as a photographic conceit, a ‘found’ frame for the distant cityscape. Though not evident in this source image, even another of the UNESCO images of Sana’a employs the same framing device—a convention of the ‘serious’ or expert photographer the machine has learned to reproduce. Despite the inclusion of a palette extracted from the source, though, the colours of the sky and buildings are once again more lurid and saturated than those that appear in the official ‘expert’ gaze—a kind of machinic equivalent to an Instagram filter designed to appeal instead to some imagined, would-be tourist to the city. 
This last feature is unsurprising for several reasons: the training sets include more ‘tourist’ than ‘expert’ images, reinforced by the very inclusion of the term ‘tourism’ alongside ‘archaeological history’ in the generated prompt; more contemporary images featured in those training sets also use a greater colour range than even those from the 2000s decade; the reference to a specific location; and finally, Midjourney itself is a commercial system that has been ‘fine-tuned’ to produce arresting images precisely through use of high contrast. And yet, in the final case we discuss here, this effect is in fact reversed. **Tombs of Buganda Kings at Kasubi** Figure 6, featuring representations of the Tombs of Buganda Kings at Kasubi, uses the same pattern as Figure 5: at the top is the original UNESCO World Heritage Site image, followed by two synthetic images, this time generated by Stable Diffusion XL, selected for the purpose of contrast. Figure 6. Top: Tombs of Buganda Kings at Kasubi (Uganda), author: Lazare Eloundou Assomo, copyright: © UNESCO, reproduced with a CC3 license. Middle: A synthesised image using the UNESCO World Heritage Site description as prompt (model: SDXL). Bottom: A synthesised image using a machinic prompt (prompt uses BLIP2, Google Vision API, and metadata, and image generation uses SDXL). The middle image (H-M) is again produced from the UNESCO textual prompt, while the bottom image (M-M) is from a prompt constructed from machine-generated captions and image metadata. The UNESCO description in this case emphasises the materiality of structures, with the phrases ‘organic materials’ and ‘wood, thatch, reed, wattle, and daub’, but also references form: ‘circular and surmounted by a dome’. The machinic prompt locates the structure and identifies the image as a ‘photograph of the roof (is) made of straw’. Machine-generated prompt: photograph of the roof is made of straw, Building center, Cloud, Sky, Land lot, Tree, Thatching, Shade, Grass, Tints and shades, Roof, Monument, Triangle, Soil, Historic site, Symmetry, Landscape, Building material, Hut, House. Colors: #83726b, #9a7360, #392d29, #d7b9a4, #f2f3f6, #7c685d, #211918, #bb9783, #645650, #6b584b. Shot with a DSC-W50, at a resolution of 72 pixels per inch, year 2007, exposure time of 1/80, Flash did not fire, auto mode, focal length of 47/5 The first photograph of the Kasubi tombs, representing a front elevational aspect to the main structure, focuses primarily on the structure’s symmetry, materiality (‘thatch and reed’ in particular), and form, while the tight framing of the camera angle and the relative absence of other objects and context add a sense of scale, creating a sense of monumentality in the fairly austere building. The photograph of the single structure devoid of context emphasises a monumentality that is not reflected in the UNESCO description, which instead identifies intangible aspects of the tomb, including the continuity of its use and its associated meaning. These non-visual cues acknowledge that the building’s aesthetics and form are not solely constitutive of its value as a heritage site. One of the generated images was a black and white photograph, which we speculate is in response to the specific mention of dates (1882/1884) in the prompt potentially directing the colour scheme. The tonality, frame, and context of the H-M image are closest to photographs of late 19th- and early 20th-century archaeological surveys. 
The subject of the M-M image is notionally closer to the original in terms of morphology, materials, and a focus on roof form. The foreground landscape echoes the materiality of the subject, while the background reproduces vegetation and tonality often depicted in images of the African savanna. The tight framing of the structure in the photograph and the difficulty in assigning a sense of scale mimic the original image, but the central difference between the two is in framing the subject, which shifts the emphasis from the monumental in the original to something more vernacular in the M-M image. **Heritage and the machinic gaze** The algorithmic reading and synthesis of the three UNESCO World Heritage Sites offers an interesting counterpoint to UNESCO’s own textual descriptions. All three site images, when read via BLIP-2, focused on the descriptions of form, scale, material, and composition, erasing any sense of aesthetic judgement or valuation and instead generating descriptions for precision and conciseness with varying levels of accuracy. For instance, while the caption generated for the historic centre of Sana’a accurately identified ‘an aerial view of the old city of Yemen’, the caption generated for the photograph of the Old Towns of Djenné was ‘a large sand castle with people walking in front of it’, while the photograph of the Tombs of Buganda Kings at Kasubi was ‘the roof is made out of straw’. The misrecognition of the Great Mosque of Djenné as a sandcastle reflects perhaps most clearly the distortion introduced by a machinic reading of this kind. However, even the simplification of the Kasubi tombs to essentially an image of a roof allows us to reflect upon our own interpretation of the images as sites of globally recognised heritage. The second layer of algorithmic reading of the image, via Google Vision’s API, followed a mathematical extraction based on probabilistic interpretation. In each of the images, elements such as ‘sky’, ‘grass’, and ‘building center’ were identified, alongside other identifying descriptors such as ‘medieval architecture’, ‘arch’, and ‘archaeological site’, but also specific descriptions such as ‘Classical architecture’ or ‘Byzantine architecture’. Occasionally seemingly contradictory descriptors would be generated for the same image, once again illustrating the slippage between image and text in the absence of a referent informing the machinic gaze. The anamorphic properties of the machinic gaze play out in all three cases, but especially so with Djenné. The mosque is interpreted as a ‘giant sand castle’, and subsequent image synthesis then renders this as an artefact that substantially deviates from the original object. However, the machine ignores cues in the image, focussing on the form of the dominant subject, and inferring the likely class of building based on a reading of pixels and profiles: castle rather than mosque. This is due to the way images are processed iteratively: first with coarse filters that aim to identify, for example, horizontal and vertical lines, then with finer filters that progressively distinguish more subtle gradations in form and colour. Buildings—as relatively geometrically regular objects of a certain scale—are likely to be seen as alike, regardless of functional distinctions between, for example, a place of worship and a playful structure designed to imitate a castle. Such distinctions, if they feature at all, depend in turn upon the relative mass of images and labels in the training set. 
Hence the apparent confusion between a certain type of mosque and a sandcastle reflects the proportionate mass of labelled images of Djenné, relative to other sites—and the corresponding value attributed, in the human (tourist and archaeological) gaze, to that site. To ‘correct’ this error would involve different practices of touristic attention (or modified weightings of the training data) to better ‘align’ this vision with human expectation. Conversely, it is precisely this orthogonal or anamorphic perspective that in turn reflects upon existing practices of human observation and perspective—the privileging of certain sites over others, the concentration of canonical representations of ‘mosques’ and ‘castles’, and the re-projection of localised settings into the global imaginary of tourism and heritage. In the context of the heritage image, how should we describe the process by which, for instance, the old towns of Djenné become instead whimsical sandcastles that impossibly dwarf human characters in the foreground? No existing heritage taxonomic overlay can quite work to make sense of these creations, and even existing artistic nomenclature would struggle to ‘locate’ these examples of machinic heteroscopia. This step of algorithmic reading, which is devoid of the human ‘expert’ or the ‘tourist’ gaze that relies on a constant referent to ideas of heritage value derived from architectural and/or archaeological aesthetics and classifications, but which instead focuses purely on pixels of an image, reveals the extent of meaning we implicitly attach to images of heritage sites. Deploying the machinic gaze towards photographs allows us to occupy a position of tourist or expert—or in some cases both—but in each case, we can reflect upon the presumed author/generator of the image. On the other hand, multiple historic and visual referents are embedded within each of the five UNESCO site descriptions. Read alongside the description, the image of Djenné is inscribed with multiple aesthetic judgements and associated ideas of heritage value. The privileging of the visuality and aesthetics of heritage sites in UNESCO is, we argue, distorted and refracted through the machinic gaze, and through the operations we identify as anamorphosis and heteroscopia. In highlighting elements of both similarity and difference, through visual representation, the fetishisation of architectural form, ornamentation, and material can be examined through both sets of images. In the first set, where the human textual description is used as visual description/reinscription, we observe a greater degree of diversity in both subject and framing, but consistent in the images produced is a privileging of a certain kind of aesthetic that aligns to the idea of heritage value being inscribed and prescribed visually and materially. In the second set, which is produced through a machinic reading and resynthesis, even though the subject of the image shifts substantially, the framing does not. We argue that heteroscopia and anamorphosis help to cluster and aggregate these features into refracted and concentrated delineations that otherwise exist as more diffused tendencies or proclivities: how the tourist and the expert see. These tendencies appear more or less evident across two of the site/model/prompt combinations. 
Sana’a (with Midjourney) is reproduced through something like the tourist gaze—imagined at a distance, with saturated colour—while outputs from the Kasubi prompts (with SDXL) appear closer to an expert’s view—muted palette, with the photographic subject brought to the foreground. The Djenné images share elements of both, but veer into alternative registers of the cinematic and fantastical. In calling for such interpretations, the machine here acts to bring these gazes themselves into focus. And with the act of interpretation itself, we move invariably away from attention to purely quantitative variances—inherent in the very mechanisms by which machine learning techniques aim to approximate a training set—to emphasise instead a process of human judgement and critique. We conclude on a speculative note about the effects of this process. In Lacan’s treatment of the gaze, which MacCannell (2001) draws upon, its significance is the drawing back in of the viewing subject into the picture or tableau (Lacan, 1998). It is the subject who, alongside the image under apprehension, at a critical moment perceives themselves as being also observed, as an object that appears in the eyes of others. The emergence of computer vision, machine learning, and generative AI exacerbates this reflexive moment. The human gaze—especially in its tourist or heritage genres—becomes aware of itself in its particularity, as a thing both distinctive and available in turn as object for consumption by other viewers. The combined operation of heteroscopia and anamorphosis here performs a kind of double act with respect to human ways of seeing. Firstly, it points to the impossibility of any privileged and original specular *morphosis*, or authoritative gaze. Secondly and conversely, it points to the possibility of any given form of gaze becoming itself treated as quotable reference material, and in that process, also becoming objectified. **Declaration of conflicting interests** The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. **Funding** The author(s) received no financial support for the research, authorship, and/or publication of this article. **ORCID iD** Vanicka Arora [https://orcid.org/0000-0001-8733-4510](https://orcid.org/0000-0001-8733-4510) **Supplemental Material** Supplemental material for this article is available online. **Note** 1. To aid reproduction, and to support similar studies, the scripts and datasets have been published on GitHub: [https://github.com/liammagee/reframing-built-heritage-through-the-machinic-gaze](https://github.com/liammagee/reframing-built-heritage-through-the-machinic-gaze) **Author biographies** Vanicka Arora is Lecturer in Heritage at University of Stirling. She looks at the intersections of heritage, disasters, urbanisation, and globalisation, with an empirical focus on South Asia. She has recently turned her attention towards the methodological possibilities of generative AI in visual studies and heritage. Liam Magee is Associate Professor at the School of Humanities, Communications & Arts, Western Sydney University. He works on topics of automation, digital media, urban sustainability, and social inclusion, and recently has focussed on the relationship between generative AI and subjectivity, aesthetics, and labour. Luke Munn is a Research Fellow in Digital Cultures & Societies at the University of Queensland. 
His wide-ranging work investigating digital cultures has been published in more than 40 articles in highly regarded journals and in six books, including most recently Automation Is a Myth (2022), Red Pilled (2023), and Technical Territories (2023).
At the beginning of the twentieth century, linguistic thought centered on the problem of studying the semantic aspects of syntactic structures, and different scientific schools appeared at that time. One of the most prominent scholars in the pragmatics of the middle of the 20th century is John Austin. His epoch-making book “How to Do Things with Words” (originally a series of lectures delivered at Harvard University in 1955) was first published in 1962. J. Austin’s conception is directed against an oversimplified view of language. A central tenet of his theory is that no philosophical school can afford to study language in itself, without paying any attention to pragmatic aspects: “it was for too long the assumption of philosophers that the business of a “statement” can only be to “describe” some state of affairs, or to “state some fact”, which it must do either truly or falsely” [10, 1]. To artificially abstract sentences from real everyday conversation and to confine one’s interest to them alone is to overlook the great complexity of linguistic communication. Austin stresses that in our everyday conversation we are attuned not primarily to the sentences we utter to one another, but to the speech acts that those utterances are used to perform. Such acts are staples of communicative life, but only became a topic of sustained investigation, at least in the English-speaking world, in the middle of the twentieth century. In the last decade, the problem of the speech act and its characteristics has become a focal point for researchers interested in speech activity. The interest in the functioning of a living language was stimulated by communication theories [3; 5; 12–14] as well as by speech act theory [1; 4; 6–11; 15–20]. This paper aims at a profound analysis of the term “speech act” and … its place in the terminological apparatus of Speech Act Theory. To achieve this aim, we are to fulfill the following tasks, namely to:
- present a short analysis of theoretical works on Speech Act Theory;
- single out the central terms of Speech Act Theory;
- clarify the term “speech act” and determine its aspects and its place among the terms of Speech Act Theory;
- describe the main speech act classifications and point out their advantages and disadvantages.
The relevance of investigation in the field of Speech Act Theory is supported by the following words of J. Searle: “A great deal can be said in the study of language without studying speech acts, but any such purely formal theory is necessarily incomplete. It would be as if baseball were studied only as a formal system of rules and not as a game” [19, 17]. As stated above, Speech Act Theory first took shape in the lectures of J. Austin, which were published as the book “How to Do Things with Words” [10]. Later on, J. Searle [7; 17–20] developed Austin’s ideas, but kept them within the framework of general linguistics. The same approach was taken by the Russian linguists Yu. Apresyan [1], V. Demiankov [4] and I. Shatunovsky [8]. Let us now pass on to the analysis of the central terms of Speech Act Theory. These terms are: the performative verb, the speech act (locutionary, illocutionary and perlocutionary acts), illocutionary force, and direct and indirect speech acts. 
The main idea of Speech Act Theory is that, when pronouncing a sentence in a communicative situation, we are performing some action or actions: moving our speech organs; mentioning people, places, objects; saying something to our interlocutor; enrapturing or annoying him/her; asking, promising, ordering, apologizing, censuring, etc. These actions are motivated by the intention of the speaker. The term “performative” (derived from the verb “to perform”) was first introduced by the English linguist J. Austin. He singled out sentences which were not used to describe or merely state something, but to perform an action by saying something, and named them “performative utterances” or, in short, “performatives”. He outlined the grammatical form of performatives as “verbs in the first person singular present indicative active” [11, 235]. However, the phenomenon itself was described earlier in the works of E. Benvenist [2] and E. Koshmider [5]. E. Koshmider named this phenomenon “coincidence” and described it as “the coincidence of word and action <...> in the sense that the uttered word is in itself the indicated action <...> it is obvious, that a speaker, uttering his request <...> is not trying to expose the action of the request in the process of that action. On the contrary, the speaker is concerned only with the performing of the act of request, and performing it only with uttering the word, so that the moment of uttering is a moment of performing a request itself, the moment of performing an action, indicated by the verb” [5, 163]. Let us now pass on to another important element of Speech Act Theory – speech acts. First of all, “speech act” as a linguistic term existed even before J. Austin began to deliver his lectures on Speech Act Theory. In his “Theory of Language” [3], K. Buhler borrows the term from the German philosopher Edmund Husserl. Like E. Husserl before him, K. Buhler views the speech act as a sum of speech situation, context and interpretation. In “Theory of Language” it is a far less developed notion than the other elements of K. Buhler’s “structure of language”. His speech act is connected with the language structure via the meaning devised by the speaker on the basis of the social context and the abstract meaning, which is an object of linguistic description [3, 83–88]. However, the speech act in this sense is of no interest to us, because it does not function within the terms of the Speech Act Theory presented by J. Austin. Although the basis of Speech Act Theory was laid in the early 1930s, it is not an easy task to give a correct definition of the term “speech act”, mainly due to the complexity of its structure. The term “speech act” includes the locutionary act, the illocutionary act and the perlocutionary act, which are themselves complex concepts. The first element of the speech act concept is the locutionary act. J. Austin subdivides the locutionary act into three constituent parts: the phonetic act, the phatic act and the rhetic act. The phonetic act is the act of uttering noises, the phatic act is the act of uttering certain vocables, and the rhetic act is the act of using these vocables with a certain meaning and reference [10, 95]. Thus, the locutionary act, in J. Austin’s terms, is the act of producing certain vocables or words with a certain meaning and reference. The illocutionary act, on the other hand, is a more complex concept. J. Austin notes that “to perform a locutionary act is in general, we may say, also and eo ipso to perform an illocutionary act” [10, 98]. 
However, he also adds that to determine the kind of illocutionary act we must take into account the way in which the locution is used in a particular sentence [10, 98–99]. This kind of distinction seems hazy at best, the weak point being the distinction between the meaning and the illocutionary force of the utterance. J. Searle, a follower and pupil of J. Austin, argues that the locutionary and the illocutionary act in Austin’s sense are simply two labels for one and the same phenomenon: “Uttering the sentence with a certain meaning is, Austin tells us, performing a certain locutionary act; uttering a sentence with a certain force is performing a certain illocutionary act; but where a certain force is part of the meaning, where the meaning uniquely determines a particular force, there are not two different acts but two different labels for the same act” [17, 407]. The scholar insists that, although the concepts of locutionary and illocutionary act are different, the class of illocutionary acts will contain many members of the class of locutionary acts. He continues with the idea that the meaning of the utterance necessarily determines its illocutionary force, and goes further, proposing to abolish the notions of the locutionary act and the rhetic act. J. Austin’s classification included the locutionary act (phonetic, phatic and rhetic acts) and the illocutionary act, whereas J. Searle’s proposed classification is as follows: the phonetic act, the phatic act, the propositional act and the illocutionary act, all of which are mutually dependent. J. Searle characterizes the propositional act as the content (or proposition, as it is called in philosophy) of a certain utterance [17, 420]. In this way, by separating the meaning (content) of the utterance from its illocutionary force, J. Searle hopes to escape the ambiguity of J. Austin’s original classification. The illocutionary force of an utterance can be roughly described as its purpose, the goal we wish to achieve by using that particular verb or phrase. Illocutionary force became the basis of J. Austin’s classification of speech acts, which he describes in the second half of his course of lectures on Speech Act Theory [10, 150]. The perlocutionary act, according to J. Austin, is a speech act “which is the achieving of certain effects by saying something” [10, 120]. The effects achieved may include persuading, convincing, intimidating, etc. In other words, these effects are consequences of a certain speech act. J. Austin’s distinction between the illocutionary and the perlocutionary act seems to us strained at best. The illocutionary act, the scholar states, is an act performed in saying something by means of an explicit (direct) performative. The perlocutionary act, on the other hand, is an act performed by saying something (or as a result of saying something). The perlocutionary act is a non-conventional, non-linguistic act and, as with all consequences, it is not within the power of the speaker. That is why J. Austin distinguishes between the perlocutionary object (the intended result) and the sequel (the unintended result). The main principle of the distinction between illocutionary and perlocutionary acts proposed by the scholar is that the latter cannot be expressed in the form of a direct performative verb. Such verbs as to persuade or to prevent are perlocutionary, not illocutionary. Thus, we may suggest that J. Austin knew there are not only explicit forms of performative verbs, but also hidden, indirectly expressed intentions – indirect speech acts. 
However, he failed to explain their nature and to find a place for them in his theory of speech acts, which can be explained by the sheer difficulty of the task or simply by a fear on his part that this new inclusion would upset the delicate structure of the new theory’s terminological apparatus. Not all speech acts are obvious in their illocutionary force. In fact, direct speech acts constitute the lesser part of all speech acts used in written and oral everyday speech. It is indirect speech acts that are used most frequently, and to determine their illocutionary force we must carefully consider the linguistic and extralinguistic contexts. The very existence of indirect speech acts was not realized by J. Austin (he only hints at the possibility in his lectures, as we have stated above), and even J. Searle at first did not acknowledge their existence. This can be explained by the absence of reliable methods for detecting the illocutionary force of indirect speech acts. Only some time later was J. Searle forced to include indirect speech acts in the terminological apparatus of Speech Act Theory. He first published the article “Indirect Speech Acts” [18] in 1975. The scholar defines them as “cases in which one illocutionary act is performed indirectly by way of performing another” [18, 30]. He continues with the thought that “in indirect speech acts the speaker communicates to the hearer more than he actually says by way of relying on their mutually shared background information, both linguistic and non-linguistic, together with the general powers of rationality and inference on the part of the hearer” [18, 31]. In other words, the difference between direct and indirect speech acts is that direct speech acts have an explicit performative in their structure, whereas indirect speech acts do not. The illocutionary force of the latter can only be inferred, or guessed at, by the hearer (audience). Only with the inclusion of concepts such as shared background knowledge (context), rationality (common sense) and inference can we hope to determine the purpose of an indirect speech act. The methods of determining the illocutionary force of indirect speech acts are, however, beyond the scope of our article, so we will abstain from elaborating on the issue any further. Let us pass on to the final point of our research: the main existing speech act classifications. There are many speech act classifications based on different principles [1; 7; 8; 10]. Some of them are based on grammatical and semantic differences between speech acts [1; 8], others on the illocutionary force and purpose of the utterance [7; 10]. Further on we will compare some of them, but for now it seems more suitable to dwell on a detailed description of one such classification. Since J. Austin is the founder of the theory, let us begin with his classification. He distinguished the following kinds of speech acts: expositives, verdictives, commissives, exercitives and behabitives. Expositives. Here the utterance often has the direct form of an assertion, but a performative is placed at its head which indicates how this assertion fits the context of the discourse (the exposition itself) [10, 85]. This class can be argued to include instances of verdictives, exercitives, behabitives and commissives, as J. Austin himself pointed out [10, 160]. Such ambiguity is inevitable, given the wide sphere of use of this class of speech acts in everyday speech, e.g. 
*I claim that the Moon has no opposite side. I allow that such fallacy should spread so wide.* The use of such verbs is also possible: *to predict, to allow (in the meaning ‘to reason’), to testify.* Verdictives. A verdictive is a conclusion based on facts, on an official or unofficial message or reasoning, or a judgemental evaluation of facts, insofar as these are evident. It is essentially (as the name suggests) a verdict. It may or may not be final (as in estimates, reckonings, etc.) [10, 150–152]. e.g. *I take it for granted.* The use of such verbs is also possible: *to convict, to interpret as, to rule, to estimate, to date, to rank (to evaluate), to find (as a matter of fact), to understand, etc.* Commissives are typified by promises or other obligations or commitments. They are also used for declarations or to state one’s intentions [10, 150]. The peculiar case of taking sides, for which commissives are also used, is less clear. Examples of such use of commissives are: *to espouse, to oppose, to champion, to side with, to declare for*, etc. [10, 157]. e.g. “No man will touch them, *I promise you*,” he said. *I pledge myself to fight evil.* The use of such verbs is also possible: *to undertake, to be determined to, to mean to, to propose to, to envisage, to guarantee, to vow, to dedicate oneself to, to espouse, to adopt, to covenant, to intend, to bind oneself, I shall, to plan, to engage, to agree, etc.* Exercitives are characterized by the exercising of one’s rights, influence or power. This class is somewhat similar to verdictives (as both types of speech acts are used by judges), but exercitives are an act of the will and power of the speaker, of his decision that a thing is *to be* so-and-so, instead of simply stating that the thing is so-and-so [10, 150–154]. e.g. *I appoint you a judge of the Supreme Court.* The use of such verbs is also possible: *to degrade, to demote, to name, to dismiss, to order, to command, to levy, to choose, to bequeath, to warn, to proclaim (in the sense “to issue”), to countermand, to enact, to dedicate, to vote for, to fine, to claim (in the sense “to state one’s ownership”), to pardon, etc.* Behabitives include the notion of reacting to other people’s behavior and fortunes, and of expressing one’s own attitudes towards other people’s past or anticipated behavior. In short, they are a varied class of performatives which have to do with the social side of human life [10, 151–159]. e.g. *I thank you for your generosity, sir. I regret to inform you that the plan did not succeed.* The use of such verbs is also possible: *to apologize, to regret, to thank, to congratulate, to sympathize, to praise, to ignore, to criticize, etc.* Within the framework of this work, we have also studied the classifications by J. Searle, Yu. Apresyan and I. Shatunovsky. They are somewhat different from the original scheme proposed by J. Austin, but they are still compatible with it, as can easily be shown in the following table. **Table 1. Speech Acts Classifications Comparison**
| J. Austin | J. Searle | Yu. Apresyan and I. Shatunovsky |
| --- | --- | --- |
| Expositives | Representatives | Specific messages and assertions |
| Verdictives | (Assertives) | Consents and objections |
| Commissives | Commissives | Promises |
| Exercitives | Directives | Requests |
| Verdictives | | Proposals and advice |
| | | Warnings and predictions |
| | | Demands and orders |
| | | Permissions and prohibitions |
| Verdictives | Declarations | Declarations |
| Exercitives | | Approvals |
| | | Convictions |
| | | Forgiveness |
| | | Specialized acts of alienation |
| | | Acts of nomination and promotion |
| Behabitives | Expressives | Speech rituals |
| | | Approvals |
| | | Forgiveness |
| | | Declarations |
| | | Specialized acts of alienation |

As can be seen from this table, all three speech act classifications are, in fact, classifications of illocutionary acts, because they are based solely or partially on the overt illocutionary force of the utterances. Since illocutionary force can only be clearly defined in direct speech acts, these classifications can be applied only loosely to indirect speech acts, which are less obvious in their purpose and means of expression. Thus, although indirect illocutionary acts can belong to the classes stated in those classifications, the class definitions given by their authors do not incorporate indirect speech acts. The first two classifications, that of J. Austin and that of J. Searle, are similar in their terminology. Both rely heavily on the illocutionary forces of the utterances in distinguishing classes. J. Austin did not acknowledge the existence of indirect speech acts in his theory; this addition was made later by J. Searle, who in his article on indirect speech acts gives their definition and provides guidelines for their identification. J. Austin’s classification is, in fact, a classification of illocutionary verbs, which he supposes to be a mark of illocutionary acts. However, illocutionary verbs do not always constitute different illocutionary acts, which is confirmed by J. Searle’s insight [7, 177–178]. Furthermore, in Austin’s classification the definitions of the classes of speech acts are ambiguous and generalized. Take, for example, behabitives, which include the notions of behavior, reaction, fortunes, attitudes and their expression (the social side of human life in general, it seems). A definition of this kind is considerably lacking in brevity and strictness of terms. The same problem persists with most of the other classes in Austin’s classification. Because there is no single clear principle on which this classification is based, this ambiguity causes a great deal of confusion and overlap between the notions inside the system. J. Searle proposed his own classification [7]; using J. Austin’s classification as a basis, he considerably improved it. 
According to Searle, the point of representatives is “to commit the speaker <...> to something's being the case, to the truth of the expressed proposition” [7, 181]. Directives he characterizes as attempts by the speaker to get the hearer to do something. They can have various degrees of “modesty”, however: a directive can be a simple invitation or an insistent urging. Searle’s definition of commissives does not differ significantly from the one Austin had applied before him. The point of expressives is “to express the psychological state specified in the sincerity condition about a state of affairs specified in the propositional content” [7, 183]. To be considered successful, declarations must achieve correspondence between their propositional content and reality. As we can observe, J. Searle’s definitions are much clearer, which precludes the overlapping of notions. The classification by Yu. Apresyan and I. Shatunovsky is directed mostly towards the semantic and grammatical differentiation of utterances, and less towards their illocutionary force. Thus, this particular classification seems to us more suitable for the practical selection and description of indirect illocutionary acts. Specific messages, consents and objections roughly coincide with Searle’s representatives, and their purpose is self-explanatory. Promises are, in fact, commissives. Requests, proposals and advice, warnings and predictions, demands and orders, permissions and prohibitions are for the most part directives. Declarations, approvals, convictions, forgiveness, specialized acts of alienation and acts of nomination and promotion are all declarations in essence. Searle’s expressives can be subdivided, according to this classification, into speech rituals, approvals, forgiveness, declarations and specialized acts of alienation. In general, Yu. Apresyan’s taxonomy simply breaks the classes proposed by J. Austin and J. Searle into smaller parts, the names of the new elements indicating the communicative purpose of the speech acts belonging to them. The speaker’s intention is the main principle of this taxonomy, which makes it more useful for the practical linguistic analysis of indirect speech acts in everyday speech. To conclude this part of our article, we would like to present a set of speech act characteristics which were generalized and developed by V. Demiankov in one of his articles on Speech Act Theory [4]: 1) the conditions of success of the speech act are rooted in what, in terms of the sentence, is usually called its modus (in the sense that it is a certain part of the sentence, its performative part); 2) a speech act is an atomic unit of speech, a sequence of language expressions which is uttered by a speaker and is intelligible to at least one of the many users of a certain language; 3) it can be larger than a sentence (an utterance) or smaller, i.e. 
it can be a constituent part of a sentence; in this way, a nominative word combination can be represented (although in classical Speech Act Theory this is forbidden) as a more or less successful speech act of description; 4) it establishes a connection between non-verbal and verbal behavior; 5) it allows us to interpret the text and its implied meaning; 6) it is connected with the term "frame" in some conceptions of modeling speech activity: there are "ritual" sequences of speech acts, which are interpreted on the basis of a mental picture of the world (which in its turn depends on the frame we have chosen) and rely on past, present and future (predicted) actions of the communicants; 7) the process of understanding an utterance, in which the speech act takes place, depends on deductive inference in everyday thinking, which brings to light a new aspect of the opposition between the grammar rules of a language on the one hand and mental processes on the other; 8) it is not sufficient to speak of the understanding of a sentence only in its literal meaning: we must also point out the purpose of the speech act. That is why the detection of the illocutionary force of a sentence is incorporated into the description of language [4, 226–228]. The first part of the article is mainly concerned with the analysis of theoretical works on the topic of Speech Act Theory. Here we have defined the basic notions of that theory, namely the speech act (locutionary, illocutionary, perlocutionary act), the performative verb, and the direct and indirect speech act. According to our research, the speech act is an abstract, complex concept. It is a separate act of speech that, in standard speech circumstances, represents a bilateral process of acoustic cognition and understanding. A performative verb is a verb which can be put in the first person singular present indicative active and whose utterance is equal to a one-time performance of the action named by the verb. In researching the subdivision of speech act elements proposed by J. Austin, we came to the conclusion that some of the elements of this structure are redundant, because they create unnecessary ambiguity of definition inside the system, which must be avoided if this taxonomy is to hold any practical value. Following J. Searle, we abolished the notion of the locutionary act and the rhetic act (as its element) altogether. In our subdivision we follow J. Searle's order. Our elements are: the phonetic act (sound production), the phatic act (production of vocables), the propositional act (content of the utterance), the illocutionary act and the perlocutionary act. J. Searle suggested that the meaning (content) of the utterance should be distinguished and separated from its illocutionary force. The second part of the article deals with the main speech act classifications, suggested by J. Austin, J. Searle and the Russian linguists Yu. Apresyan and I. Shatunovsky. These classifications were described and compared in order to single out their advantages and disadvantages. From the comparison it is clear that, among these classifications, only the one suggested by the Russian linguists tries to free itself from dependence on the illocutionary force of the speech act, relying instead on semantics and grammar, and can therefore to some degree be applied to the identification of indirect speech acts in speech. In conclusion, we suggest that further research into indirect speech acts and ways of identifying them is needed.
Although many methods of their detection exist in linguistics and the philosophy of language, we still lack a reliable practical toolset with which to further our research in the field of Speech Act Theory and the understanding of the mechanisms of language use in general.
Summary
Igoshev Kirill Michailovich. Speech Act in Terms of Speech Act Theory. The article investigates the terminological apparatus of Speech Act Theory, in particular its central concept – the speech act. The author considers the theoretical principles of Speech Act Theory, relying on the works of the founders of this linguistic theory and their followers. In the first part of the article definitions are provided of the components of the speech act: the locutionary, illocutionary and perlocutionary acts, as well as concepts such as the performative, illocutionary force, and the direct and indirect speech act. By means of an analytic review of sources on the problems of Speech Act Theory the author clarifies and complements the above-mentioned debatable concepts of Speech Act Theory.
In the second part of the article the three main classifications of speech acts, by the British linguist John Austin, the American linguist John Searle and the Russian linguists Yuri Apresyan and Ivan Shatunovsky, are described. It is noted that although the above-mentioned classifications of speech acts are based on different features, they still mostly take into account only the illocutionary force, which is far from always obvious in indirect speech acts. Therefore, the drawback of all these classifications is that they can only be used for the classification of direct speech acts. In conclusion, the author points out the necessity of further study of indirect speech acts and of methods of identifying them, for the future development of Speech Act Theory and the understanding of how language works. Key words: Speech Act Theory, speech act, locutionary act, illocutionary act, perlocutionary act, performative, illocutionary force.
Stochastic Watershed Hierarchies
Fernand Meyer
To cite this version: Fernand Meyer. Stochastic Watershed Hierarchies. ICAPR 2015; The Eighth International Conference on Advances in Pattern Recognition, Indian Statistical Institute, Jan 2015, Kolkata, India. hal-01111749. HAL Id: hal-01111749, https://minesparis-psl.hal.science/hal-01111749, submitted on 30 Jan 2015.
Stochastic Watershed Hierarchies. Fernand Meyer, MINES ParisTech, PSL Research University, Center for Mathematical Morphology (CMM), Fontainebleau, France. Email: fernand.meyer@mines-paristech.fr
Abstract—We present a segmentation strategy which first constructs a hierarchy, i.e. a series of nested partitions. A coarser partition is obtained by merging adjacent regions in a finer partition. The strength of a contour is then measured by the level of the hierarchy at which its two adjacent regions merge. Various strategies are presented for constructing hierarchies which highlight specific features of the image. The last part shows how the hierarchies lead to a final segmentation. Keywords: Mathematical Morphology, Segmentation, Stochastic Watershed, Waterfall, Hierarchies.
I. INTRODUCTION
If one considers an image as a topographic surface, the watershed transform partitions the domain of the function into catchment basins [4], [11]. Each catchment basin represents the attraction basin of a regional minimum of the function. The watershed partition counts numerous tiles, as many tiles as there are regional minima in the gradient image: the image is oversegmented and has to be regularized. For segmenting a scene, the watershed is applied to its gradient image. Fig.1 presents a grey tone image, its gradient and the watershed segmentation associated to the minima of the gradient.
Fig. 1. A gray tone image, its gradient and the watershed segmentation associated to the minima of the gradient.
II. FROM THE IMAGE TO A FINE PARTITION
A. The Region Adjacency Graph
To be efficient, we work at two resolutions. The lowest level is the pixel level: the initial image is segmented and a fine partition produced. The highest level is the level of regions, of partitions and of families of partitions. We suppose that the fine partition produced by an initial segmentation (for instance the watershed transform presented above) contains all contours making sense in the image. We define a dissimilarity measure between adjacent tiles of the fine partition. As an example, in the case of the watershed transform, it may be the lowest altitude or the mean altitude of the watershed line separating two adjacent basins. In the case of color images the dissimilarity may be derived from various color distances. Partition and dissimilarity between adjacent tiles are then modelled as an edge weighted graph, the region adjacency graph or RAG: each node represents a tile of the partition; an edge links two nodes if the corresponding regions are neighbors; the weight of the edge is equal to the dissimilarity between both regions.
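As a concrete illustration of this construction, the sketch below builds a RAG from a label image of the fine partition. The dissimilarity used here, the lowest altitude of the gradient along the shared boundary, is one of the choices mentioned above; the function and array names are mine and this is only a minimal sketch, not the implementation used for the figures.

```python
import numpy as np
from collections import defaultdict

def build_rag(labels, gradient):
    """Region adjacency graph of a fine partition.

    labels   : 2-D integer array, one label per tile of the fine partition.
    gradient : 2-D float array, the image on which the watershed was computed.
    Returns a dict mapping each edge (min_label, max_label) to its dissimilarity,
    here the lowest altitude of the gradient along the boundary shared by the two tiles.
    """
    edges = defaultdict(lambda: np.inf)
    # compare horizontal and vertical neighbours; a label change marks a boundary pixel pair
    pairs = [((slice(None), slice(None, -1)), (slice(None), slice(1, None))),
             ((slice(None, -1), slice(None)), (slice(1, None), slice(None)))]
    for sl_a, sl_b in pairs:
        la, lb = labels[sl_a], labels[sl_b]
        altitude = np.maximum(gradient[sl_a], gradient[sl_b])
        mask = la != lb
        for a, b, g in zip(la[mask], lb[mask], altitude[mask]):
            key = (min(a, b), max(a, b))
            edges[key] = min(edges[key], g)  # keep the lowest altitude of the separating line
    return dict(edges)
```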
Working on the graph is much more efficient than working on the image, as there are far fewer nodes in the graph than there are pixels in the image. B. Reminders On Node And/Or Edge Weighted Graphs A non oriented graph \( G = [N,E] \) contains a set \( N \) of vertices or nodes and a set \( E \) of edges; an edge being a pair of vertices. The nodes are designated with small letters: \( p, q, r, ... \). The edge linking the nodes \( p \) and \( q \) is designated by \( e_{pq} \). The partial graph associated to the edges \( E' \subset E \) is \( G' = [N,E'] \). Edges and/or nodes may be weighted. Denote by \( F_e \) and \( F_n \) the sets of non negative weight functions on the edges and on the nodes respectively. The function \( \eta \in F_e \) takes its value \( \eta_{pq} \) on the edge \( e_{pq} \). A path, \( \pi \), is a sequence of vertices and edges, interweaved in the following way: \( \pi \) starts with a vertex, say \( p \), followed by an edge \( e_{ps} \) incident to \( p \), followed by the other endpoint \( s \) of \( e_{ps} \), and so on. Fig.2A shows an edge weighted graph with its minimum spanning tree. C. The Ultrametric Hierarchy [3] 1) The Ultrametric Distance: We define the altitude of the path \( \pi \) as the weight of the highest edge along the path. Among all paths between two nodes \( p \) and \( q \), the paths with the lowest altitude are called critical paths; their altitude constitutes an ecart \( \delta_{pq} \) between \( p \) and \( q \) (an ecart and not a distance, as \( \delta_{pq} = 0 \) does not imply \( p = q \)): - \( \delta_{pp} = 0 \) - for \( (p,q,s) : \delta_{ps} \leq \delta_{pq} \vee \delta_{qs} \). (Proof: Concatenating a critical path between \( p \) and \( q \) of altitude \( \delta_{pq} \) and a critical path between \( q \) and \( s \) of altitude \( \delta_{qs} \) is a path between \( p \) and \( s \) whose altitude is higher or equal to the altitude of a critical path between \( p \) and \( s \).) This inequality is called the ultrametric inequality and the ecart \( \delta_{pq} \) the ultrametric distance. 2) The Minimum Spanning Tree: In a spanning tree, there exists one and only one path linking any two nodes. If this path is always a critical path, the spanning tree is a minimum spanning tree, i.e. the sum of the weights of its edges is minimal. Fig.2B represents a minimum spanning tree \( T \) of the graph of fig.2A. Any two nodes are linked by a unique path in this tree, and this path is a critical path. Cutting the highest edge in this path disconnects the MST in two trees, each of them containing one of the two nodes. Cutting the edges above a threshold \( \lambda \) in the RAG or in the MST disconnects the same nodes, as illustrated by fig.2C and D. 3) The Ultrametric Hierarchy: The open ball of center \( p \) and radius \( \rho \) is \( \text{Ball}(p,\rho) = \{q \mid \delta_{pq} < \rho\} \) and the closed ball is \( \overline{\text{Ball}}(p,\rho) = \{q \mid \delta_{pq} \leq \rho\} \). The following lemmas are easy to prove. - **Lemma 1**: Each element of a ball \( \text{Ball}(p,\rho) \) is a centre of this ball.
- **Lemma 2**: Two balls \( \text{Ball}(p,\rho) \) and \( \text{Ball}(q,\rho) \) with the same radius are either disjoint or identical. Since each node \( p \) belongs to one and only one ball, namely \( \text{Ball}(p,\rho) \), the balls with the same radius form a partition of the nodes. For increasing radii, the balls are increasing: \( \lambda < \mu \Rightarrow \text{Ball}(p,\lambda) \subset \text{Ball}(p,\mu) \). Hence the balls \( \text{Ball}(p,\mu) \) form a coarser partition than the balls \( \text{Ball}(p,\lambda) \), and \( \text{Ball}(p,\mu) \) is the union of all balls \( \text{Ball}(q,\lambda) \) for \( q \in \text{Ball}(p,\mu) \). A series of partitions in which the tiles of a coarse partition are obtained as unions of tiles of finer partitions is called a hierarchy. One would expect that the coarse levels of the hierarchy represent the most salient features of an image and the finer levels constitute minor details and refinements. If the fine partition represented by the RAG is the watershed partition associated to a gradient image, this is unfortunately not the case. Fig.3 presents an image followed by the watershed partition of its gradient image. The contour separating two tiles is weighted by the ultrametric distance between the tiles. The next 4 images show 4 partitions of this hierarchy, obtained for decreasing values of the radius of the balls \( \text{Ball}(p,\lambda) \). The coarsest levels of the hierarchy only contain small and contrasted objects of the initial image. The larger structures appear only later. This is due to the fact that the contour surrounding a large object is more likely to have a weaker portion with low values. For this reason this region is more likely to merge with neighboring regions; it appears as an isolated region only for relatively small values of \( \lambda \) in the balls \( \text{Ball}(p,\lambda) \). 4) A Minimum Spanning Forest Associated To Markers: Cutting all edges of the MST with a weight higher than \( \lambda \) creates a spanning forest. Among all forests with the same number of trees, this forest is a minimum spanning forest \( \text{MSF}_\lambda \), the sum of its weights being minimal. Two nodes \( p \) and \( q \) belonging to the same tree of the forest have a distance \( \delta_{pq} < \lambda \); they belong to the same ball \( \text{Ball}(p,\lambda) \). Hence the trees of the forest and the balls \( \text{Ball}(p, \lambda) \) induce the same partition of the nodes. As shown above, this partition often does not represent well the salient features of an image. More interesting partitions are obtained, with the same number of trees, if we choose the roots of the trees. We select a subfamily \((m_i)\) of nodes (also called markers) within \( N \) and construct a minimum spanning forest where each tree is rooted in a marker. Each minimum spanning forest is obtained by cutting some edges of the MST. Consider two consecutive markers \( m_1 \) and \( m_2 \) on the MST, such that there exists no other marker along the path of the MST joining both markers. In order to get a forest, one has to cut an edge along this path; in order to minimize the total weight of the edges, one cuts the highest edge. The same process applied to all pairs of consecutive markers produces the desired minimum spanning forest \([9]\). Consider two consecutive markers \( m_1 \) and \( m_2 \). Suppose that the highest edge \( e_{pq} \) on the path of the MST linking both markers has a weight \( \lambda \). If \( p \) and \( m_1 \) (resp. \( q \) and \( m_2 \)) are connected after cutting \( e_{pq} \), then the altitude of the path linking \( p \) with \( m_1 \) (resp.
\( q \) with \( m_2 \)) is lower than \( \lambda \). This gives us a criterion for recognizing whether a given edge of the MST belongs or not to the MSF associated to a family of markers: the edge \( e_{pq} \) with weight \( \lambda \) does not belong to the MSF if and only if there exist two paths with an altitude lower than \( \lambda \), one linking \( p \) with a marker and another linking \( q \) with another marker. Or, equivalently, if the balls \( \text{Ball}(p, \lambda) \) and \( \text{Ball}(q, \lambda) \) each contain at least one marker. This criterion will be used throughout this paper for deriving various feature driven hierarchies. 5) A Hierarchy Based On Prioritized Markers: The previous section has explained how to associate a partition to a family \((m_i)\) of the nodes taken as markers. Let \( F \) be the minimum spanning forest associated to these markers. Suppose that we add a new marker \( n \). A marker \( m_k \) of the family is a neighboring marker of \( n \) if there exists a path between \( n \) and \( m_k \) along the MST on which there is no other marker. Such a path belongs to the tree \( T_k \) rooted in \( m_k \). The highest edge along this path has to be cut. In this way the tree \( T_k \) is cut in two parts. Hence, by adding new markers, one obtains finer partitions \([9]\). Consider now a family of markers \((m_i)\) ranked according to some priority. We want to construct a hierarchy associated to this family. The coarsest level of the hierarchy is the partition associated to the markers with the highest priority. Every time we add a marker, we obtain a finer partition, as a tile of the coarser partition is cut in several parts. Our goal is to define new weights \( \theta_{pq} \) for the edges \( e_{pq} \) such that cutting all edges with a weight above \( k \) produces a minimum spanning forest associated to the \( k \) markers with the highest priorities. The edge \( e_{pq} \) with weight \( \eta_{pq} = \lambda \) does not belong to the MSF if the balls \( \text{Ball}(p, \lambda) \) and \( \text{Ball}(q, \lambda) \) each contain at least one marker. If there is no marker at all in one of the balls, \( \theta_{pq} = 0 \). If \( \mu_p \) and \( \mu_q \) are the highest priorities of the markers present respectively in \( \text{Ball}(p, \lambda) \) and \( \text{Ball}(q, \lambda) \), then, by choosing all markers with a priority higher than or equal to \( \mu_p \wedge \mu_q \), there will be a marker in each of the balls. If we assign to the edge \( e_{pq} \) the weight \( \mu_p \wedge \mu_q \), we obtain the desired result. The algorithm visits all edges of the MST in the order of increasing weights. Repeat until all edges are processed: let \( e_{pq} \) be the current edge to process, with weight \( \lambda \); if \( \mu_p \) and \( \mu_q \) are the highest priorities of the markers present respectively in \( \text{Ball}(p, \lambda) \) and \( \text{Ball}(q, \lambda) \), assign to the edge \( e_{pq} \) the new weight \( \mu_p \wedge \mu_q \). Illustration: In fig.4 a number of prioritized markers have been introduced; they appear in the second image as disks whose shade of grey is brighter for higher priorities. The saliency of the hierarchy is indicated in the same image. The boundary between two regions has a shade of grey proportional to the hierarchy level for which it disappears. The last 4 images represent 4 partitions of the associated hierarchy with decreasing coarseness.
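The reweighting pass just described can be implemented as a Kruskal-style sweep over the MST edges with a union-find structure: when an edge of weight λ is reached, the current components are exactly the balls Ball(·, λ), so it suffices to track, per component, the highest marker priority it contains. The sketch below is a minimal illustration; the data layout (edge list, priority dictionary) and the convention that larger numbers mean higher priority are assumptions of mine, not the paper's.

```python
def prioritized_marker_weights(n_nodes, mst_edges, marker_priority):
    """Re-weight the MST edges according to the prioritized-marker criterion.

    n_nodes        : number of nodes (regions) of the fine partition.
    mst_edges      : list of (eta, p, q) MST edges, eta being the original dissimilarity.
    marker_priority: dict {node: priority}, larger numbers meaning higher priority.
    Returns a dict {(p, q): theta}, theta being the minimum of the highest priorities
    found in Ball(p, eta) and Ball(q, eta), and 0 if one of the balls holds no marker.
    """
    parent = list(range(n_nodes))
    best = [marker_priority.get(i, 0) for i in range(n_nodes)]  # best marker per component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    theta = {}
    # visiting the edges by increasing weight, the current components are exactly
    # the balls Ball(., eta) just below the weight of the edge being processed
    for eta, p, q in sorted(mst_edges):
        rp, rq = find(p), find(q)
        theta[(p, q)] = min(best[rp], best[rq])
        parent[rp] = rq
        best[rq] = max(best[rp], best[rq])
    return theta
```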
III. THE STOCHASTIC WATERSHED
The last section has shown how ranking the markers generates a hierarchy. We now replace deterministic markers by stochastic markers. The seminal idea, introduced by Angulo \([1], [2]\), is to spread random germs all over the image and to use them as markers for the watershed segmentation. Large regions, separated by a low contrast gradient from neighboring regions, will be sampled more frequently than smaller regions and will be selected more often. On the other hand, strong contours will often be selected by the watershed construction, as there are many possible positions of markers which will select them. Evaluating the strength of the contours by simulation offers a great versatility: various laws for the implantation of point patterns and various shapes for the markers themselves may be used. The method suffers however from a serious handicap if the contour strength is evaluated through simulations, as each of them requires the construction of a watershed segmentation. We show below not only how simulations may be avoided, but also how to imagine scenarios which would be difficult or even impossible to simulate \([10]\). A. Principle Of The Method We imagine that we draw random germs on the domain where the image is defined and compute the probability of each piece of contour to appear in the associated segmentation. We have to assign to each edge $e_{pq}$ of the MST, with initial weight $\eta_{pq}$, a new weight $\theta_{pq}$ equal to its probability of appearing as a contour. In a first stage we only consider points as markers. Later we will also consider arbitrary, stochastic or deterministic, sets as markers. As shown above, the edge $e_{pq}$ with weight $\eta_{pq} = \lambda$ does not belong to the MSF if the balls $Ball(p, \lambda)$ and $Ball(q, \lambda)$ each contain at least one marker. Thus the probability $\theta_{pq}$ is equal to the probability that there is at least one random marker in each of the balls $Ball(p, \lambda)$ and $Ball(q, \lambda)$. For the sake of simplicity, we choose a Poisson distribution of germs over the domain. We fix the number of germs to be equal to $\omega$; the distribution is then uniform. Consider a set $X$ of area $A$ within a domain $D$ of area $S$. The probability that no germ falls within $X$ is then $(1 - \frac{A}{S})^\omega$, and the probability that there is at least one germ in $X$ is then $1 - (1 - \frac{A}{S})^\omega$. 1) Absorption Of The Smallest Region: a) Area Oriented Absorption: Consider the edge $e_{pq}$ with weight $\eta_{pq} = \lambda$ and the balls $Ball(p, \lambda)$ and $Ball(q, \lambda)$. Let $a_p$ and $a_q$ be the areas of these balls. We place a deterministic marker in the region with the largest area and rely on the random germs to mark the smallest. The probability $\theta_{pq}$ is then equal to the probability that there exists at least one random marker in the smallest region, of area $a_p \wedge a_q$, i.e. $1 - (1 - \frac{a_p \wedge a_q}{S})^\omega$. b) Volume Oriented Absorption: The previous criterion is based on the area of the balls $Ball(p, \lambda)$ and $Ball(q, \lambda)$. For high values of $\lambda$ this area is likely to be larger than for small values. However, in order to reinforce the influence of the contrast, one may multiply the areas of the balls $Ball(p, \lambda)$ and $Ball(q, \lambda)$ by the value $\lambda$. This product $\lambda a_p$ may be considered as a kind of volume.
Let $\lambda_{\max}$ be the highest weight of the edges of the MST. The probability that no marker falls within the volume $\lambda a_p$, within the total volume $\lambda_{\max} S$, is then $\left(1 - \frac{\lambda a_p}{\lambda_{\max} S}\right)^\omega$. Remark that whereas the absolute values of $\lambda$ depend upon the global contrast of the image, the evaluation of the contour strength is nevertheless relatively robust against a change of contrast, as it is based on the ratio $\lambda / \lambda_{\max}$. c) Contrast Oriented Absorption: Consider again the watershed segmentation. If the image on which the watershed is constructed is a gradient image, then the significant features are the levels of the pass points between adjacent regions; the level of the minima, often near to 0, has not much significance. In other situations one has to construct the watershed on images of a different type, for which the levels of the minima are significant. For instance, the micro-aneurisms in a retina appear as dark spots for which the level of the minima is significant. Another example is the segmentation of text on a document (see fig.9). In such situations, the noise also often appears as dark spots, with less contrast. With the stochastic watershed, less contrasted regions get absorbed by more contrasted regions. We measure the contrast of the ball $Ball(p, \lambda)$ as the difference between $\lambda$ and the deepest value $\zeta_p$ taken by the image in $Ball(p, \lambda)$. We put a hard marker in the most contrasted region and compute the probability that there is a marker in the less contrasted region, for $\omega$ markers uniformly distributed over the total contrast range, yielding $1 - \left(1 - \frac{\lambda - \zeta_p}{\lambda_{\max}} \right)^\omega$. 2) The Symmetrical Stochastic Watershed: a) The Area Based Stochastic Watershed: We now consider the distributions of markers in both balls $Ball(p, \lambda)$ and $Ball(q, \lambda)$. In short we write $B_p = Ball(p, \lambda)$ and $B_q = Ball(q, \lambda)$. The weight $\theta_{pq}$ of the edge $e_{pq}$ is then equal to the probability of the event: $$E = \{ \text{there is at least one marker in } B_p \} \text{ and } \{ \text{there is at least one marker in } B_q \}$$ The opposite event is the union of two non exclusive events: $\neg E = \{ \text{there is no marker in } B_p \} \text{ or } \{ \text{there is no marker in } B_q \}$. Its probability is: $$P(\neg E) = P \{ \text{there is no marker in } B_p \} + P \{ \text{there is no marker in } B_q \} - P \{ \text{there is no marker in } B_p \cup B_q \}$$ And $P(E) = 1 - P(\neg E)$. Fig.5 presents the surfacic stochastic watershed hierarchy: the initial image, the new saliency of the contours, followed by 4 levels of the hierarchy. 3) The Volume Based Stochastic Watershed: To stress more strongly the strength of the gradient separating both regions $Ball(p, \lambda)$ and $Ball(q, \lambda)$, we replace the measures of the areas $a_p$ and $a_q$ by the pseudo volumes $\lambda a_p$ and $\lambda a_q$, the markers being distributed in a total volume $S \times \lambda_{\max}$.
The probability that there exists at least one marker in each of the two "volumes" is then: $$P(E) = 1 - \left(1 - \frac{\lambda a_p}{S \times \lambda_{\max}} \right)^\omega - \left(1 - \frac{\lambda a_q}{S \times \lambda_{\max}} \right)^\omega + \left(1 - \frac{\lambda a_p + \lambda a_q}{S \times \lambda_{\max}} \right)^\omega$$ 4) The Symmetrical Stochastic Watershed Within Transformed Domains: Until now we considered the domains $Ball(p, \lambda)$ and $Ball(q, \lambda)$ only through their area or the deepest value taken by the image within the balls. In order now to take into account also their shape, we apply an anti-extensive morphological operator $\psi$ on the balls: $\psi(X) \subset X$. The area of $\psi(X)$ is thus smaller than the area of $X$. The most common operators are the erosion and the opening. This opens a large choice of possibilities: erosion or opening, type of structuring element (often disks or segments in various directions), size of the structuring element, etc. We define $\beta_p = \text{area}[\psi \text{Ball}(p, \lambda)]$ and $\beta_q = \text{area}[\psi \text{Ball}(q, \lambda)]$. The probability $\theta_{pq}$ to be assigned to the edge $e_{pq}$ is then, as above, $$\theta_{pq} = 1 - \left(1 - \frac{\beta_p}{S}\right)^\omega - \left(1 - \frac{\beta_q}{S}\right)^\omega + \left(1 - \frac{\beta_p + \beta_q}{S}\right)^\omega$$ It is noteworthy that this assignment of probabilities cannot be obtained by the simulation method used by Jesus Angulo, consisting in introducing real random germs in the image and constructing the watershed partition for each new simulation. 5) The Symmetrical Stochastic Watershed With Non Punctual Markers: The computation which follows corresponds to the experiment where one uses random markers which are not reduced to points. We suppose that $Z_x$ is a marker implanted at a random position $x$. For the sake of simplicity we suppose that $Z$ is the same marker everywhere, and only its implantation is random. It is possible to imagine and compute the probabilities using random markers (for instance disks with random radii, segments with random or regionalized length and orientation, etc.). Recall that the structuring element $Z_x$ hits a set $X$ if its center $x$ belongs to the dilation of $X$ by $Z$: $x \in X \oplus Z$. Taking the same notations as above, the edge $e_{pq}$ will be cut, for a random distribution of markers, if the 3 following events are verified: - $A_1 = \{\exists$ a random marker $Z$ hitting $B_p\} = \{\exists$ a random point marker belonging to $B_p \oplus Z\}$ - $A_2 = \{\exists$ a random marker $Z$ hitting $B_q\} = \{\exists$ a random point marker belonging to $B_q \oplus Z\}$ - $A_3 = \{$no random marker $Z$ hits $B_p$ and $B_q$ simultaneously$\} = \{$no random point marker belongs to $(B_p \oplus Z) \cap (B_q \oplus Z)\}$ The balls $B_p$ and $B_q$ before and after dilation by an horizontal segment, and the intersection $(B_p \oplus Z) \cap (B_q \oplus Z)$ of both dilated sets, are illustrated in fig.6. We have to compute $P(A_1 \text{ and } A_2 \text{ and } A_3) = P(A_1 \text{ and } A_2 \mid A_3) \times P(A_3)$. If $S_{pq}$ is the area of $(B_p \oplus Z) \cap (B_q \oplus Z)$, then $P(A_3) = \left(1 - \frac{S_{pq}}{S}\right)^\omega$. And $P(A_1 \text{ and } A_2 \mid A_3) = 1 - P(\text{not } A_1 \text{ or not } A_2 \mid A_3) = 1 - P(\text{not } A_1 \mid A_3) - P(\text{not } A_2 \mid A_3) + P(\text{not } A_1 \text{ and not } A_2 \mid A_3)$. The conditional probability $P(\cdot \mid A_3)$ means that all punctual germs have been distributed outside $(B_p \oplus Z) \cap (B_q \oplus Z)$, that is in an area $S - S_{pq}$.
The event $(\text{not } A_1)$, given $A_3$, means that there is no germ falling in $B_p \oplus Z$, knowing that there is also no germ falling in $(B_p \oplus Z) \cap (B_q \oplus Z)$, i.e. that there is no germ falling in $(B_p \oplus Z) \setminus (B_q \oplus Z)$, a domain whose area we denote $S_{p \setminus q}$. Thus $P(\text{not } A_1 \mid A_3) = \left(1 - \frac{S_{p \setminus q}}{S - S_{pq}}\right)^\omega$. Exchanging the roles of $p$ and $q$, we get $P(\text{not } A_2 \mid A_3) = \left(1 - \frac{S_{q \setminus p}}{S - S_{pq}}\right)^\omega$, where $S_{q \setminus p}$ is the area of $(B_q \oplus Z) \setminus (B_p \oplus Z)$. The event $\{\text{not } A_1 \text{ and not } A_2\}$, given $A_3$, means that there is no punctual germ in $(B_p \oplus Z) \setminus (B_q \oplus Z)$ nor in $(B_q \oplus Z) \setminus (B_p \oplus Z)$, a domain of area $S_{p \setminus q} + S_{q \setminus p}$; hence $P(\text{not } A_1 \text{ and not } A_2 \mid A_3) = \left(1 - \frac{S_{p \setminus q} + S_{q \setminus p}}{S - S_{pq}}\right)^\omega$. Putting everything together, we get the new weight $$\theta_{pq} = \left(1 - \frac{S_{pq}}{S}\right)^\omega \left[1 - \left(1 - \frac{S_{p \setminus q}}{S - S_{pq}}\right)^\omega - \left(1 - \frac{S_{q \setminus p}}{S - S_{pq}}\right)^\omega + \left(1 - \frac{S_{p \setminus q} + S_{q \setminus p}}{S - S_{pq}}\right)^\omega\right]$$ 6) Illustration: Fig.7A presents the partition and a minimum spanning tree derived from the dissimilarities between adjacent regions. We want to evaluate the strength of the blue edge $e_{pq}$, having a weight equal to 4. This edge will get a new weight $\theta_{pq}$ according to various scenarios: Fig.7B: Area stochastic watershed: All edges with a weight above or equal to 4 are cut, leaving two trees representing the regions $B_p = \text{Ball}(p, 4)$ and $B_q = \text{Ball}(q, 4)$. Two yellow polygons symbolize two random markers in these balls. $\theta_{pq}$ is the probability that at least one marker falls in each of the regions $B_p$ and $B_q$. Fig. 7C: Area oriented absorption: A non random marker (large red polygon) is placed in the largest ball $B_p$. $\theta_{pq}$ is the probability that a random marker (yellow polygon) falls in the smallest region $B_q$. Fig. 7D: Area stochastic watershed with transformed domains: Both balls $B_p$ and $B_q$ are submitted to an opening $\gamma$ by a segment in the direction $2\pi/3$. $\theta_{pq}$ is the probability that at least one marker falls in each of the opened regions $\gamma B_p$ and $\gamma B_q$. **B. More Hierarchies** 1) **The Waterfall Hierarchy:** Starting with the MST of the RAG, we keep for each node one and only one of its lowest neighboring edges. In this way we create a spanning forest. Assigning the same label to all nodes of each tree yields level 2 of a hierarchy. The next level is obtained by retaining for each tree one and only one of the edges linking this tree with a neighboring tree. A number of trees have merged, creating a forest with fewer trees, inducing the partition of level 3 of the hierarchy. The same process may go on, creating at each stage a new level of the hierarchy. This hierarchy was first described in the context of flooding a topographic surface, and called the waterfall hierarchy [5]. If the nodes represent the catchment basins of a topographic surface, and a basin is flooded, then it overflows into a neighboring basin, creating a waterfall; this overflow occurs along its lowest edge. It is possible to produce the waterfall hierarchy in one pass through the edges of the MST, with initial weights $\tau_{pq}$. We will assign to each edge $e_{pq}$ of the MST a new weight $\theta_{pq}$ expressing the level of the waterfall hierarchy.
The algorithm visits all edges of the MST in the order of increasing weights $\tau$. Repeat until all edges are processed: let $e_{pq}$ be the current edge to process, with weight $\lambda$. The initial weights $\tau$ of the edges of $B_p = \text{Ball}(p, \lambda)$ and $B_q = \text{Ball}(q, \lambda)$ are lower than $\lambda$. Hence the new weights $\theta$ of these edges have already been computed. The highest weight taken by the function $\theta$ in the ball $B_p$ is called the diameter of the ball and we write $\text{diam } B_p$. The waterfall level $\theta_{pq}$ is then equal to: $\theta_{pq} = 1 + \min(\text{diam } B_p, \text{diam } B_q)$. The diameter of the merged ball $B_p \cup B_q$, joined through $e_{pq}$, is then $\max(\text{diam } B_p, \theta_{pq}, \text{diam } B_q)$. Fig. 8 presents the waterfall hierarchy: the initial image, the waterfall saliency of the contours, followed by 4 levels of the waterfall hierarchy. 2) **Cascading And Combining Hierarchies:** All hierarchies described so far are fully characterized by their ultrametric distance (UD); these distances have as support the same MST spanning the nodes/regions of the same fine partition and differ only by the weights of the edges. Each operator described so far takes as input a set of weights of the MST and produces a new set of weights on the same MST; in some cases additional measurements taken in the image are needed. This new MST may then be submitted to the same process and a second hierarchy produced, taking into account different features of the image. 3) **The Lattice Of Hierarchies:** Two hierarchies $A$ and $B$ may be compared through their UD $\chi_A$ and $\chi_B$: $B \leq A \Leftrightarrow \forall p, q \in N \quad \chi_A(p, q) \leq \chi_B(p, q)$. As $\forall p \in N : \text{Ball}_B(p, \rho) \subset \text{Ball}_A(p, \rho)$, the hierarchy $A$ is coarser than the hierarchy $B$. Consider now a family of hierarchies $(A_i)_{i \in I}$, with the associated UD $\chi_i$. The infimum $\bigwedge A_i$ is the largest hierarchy which is smaller than each $A_i$, and its UD is $\chi_{\bigwedge A_i} = \bigvee \chi_i$. The infimum of hierarchies is particularly useful for dealing with color images, as the hierarchies produced for each color component may be combined. The supremum $\bigvee A_i$ is characterized by its UD, the largest UD below $\bigwedge \chi_i$. The supremum of hierarchies retains the contours which are present in the various hierarchies, and thus emphasizes the strength of these contours. IV. **CONCLUSION: TAILOR THE HIERARCHY WHICH IS BEST FOR YOUR PROBLEM** A hierarchy aims at proposing a reduced but sufficient set of contours in an image, ranked by their pertinence and importance. There is no such thing as an optimal hierarchy, adapted to all types of images or objects to segment. Each method presented above constructs a particular hierarchy, although they are all derived from the same fine partition. Hierarchy $A$ will highlight some contours of this fine partition and neglect others, whereas hierarchy $B$ makes another choice. For this reason, a particular hierarchy will inform us about the image content. For instance, if a contour appears strong in a stochastic watershed hierarchy based on openings with large horizontal structuring elements but is weak for vertical structuring elements, it informs us about the local orientations in the image. Combining the weights of the same piece of contour obtained for various hierarchies constitutes a powerful signature which may serve for object recognition or image matching.
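To make the link between re-weighted MST edges, ultrametric distances and partitions concrete, the following sketch extracts the partition at a given level from a saliency-weighted MST and combines two hierarchies defined on the same MST by taking the per-edge maximum of their weights. Since the UD between two nodes is the highest edge weight along their MST path, the per-edge maximum realizes the supremum of the UDs, i.e. the infimum of the hierarchies described above; the data layout and the strict/non-strict cutting convention are assumptions of this sketch.

```python
import numpy as np

def partition_at_level(n_nodes, mst_edges, weights, lam):
    """Partition induced by cutting, in the MST, all edges whose re-weighted
    saliency is >= lam (one possible convention); returns one label per node.

    mst_edges : list of (p, q) node pairs.
    weights   : list of saliencies, parallel to mst_edges.
    """
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (p, q), w in zip(mst_edges, weights):
        if w < lam:                 # edges below the threshold keep their two regions merged
            parent[find(p)] = find(q)
    return np.array([find(i) for i in range(n_nodes)])

def infimum_of_hierarchies(weights_a, weights_b):
    """Per-edge combination of two hierarchies sharing the same MST: the pointwise
    maximum of the edge saliencies yields the supremum of the two UDs, hence the
    infimum of the two hierarchies (useful e.g. for combining color components)."""
    return [max(wa, wb) for wa, wb in zip(weights_a, weights_b)]
```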
If we have to segment an image, we have to design a hierarchy which discards the structures of no interest and highlights those which are interesting for us. In order to design a useful hierarchy which will facilitate the further processing, we have to analyze the image carefully and determine which features best characterize the image and the objects to detect: size, contrast, orientation, color, texture, etc. We then have to design one or several hierarchies which highlight these features. Let us now give some cues on how to use hierarchies for extracting, among all weighted contours, the contours of the objects we want to detect. A. Marker Based Segmentation Marker based segmentation has been rephrased above as the construction of a hierarchy, in which the coarsest level of the hierarchy corresponds to the desired segmentation, each region containing one and only one marker. We have also shown how to use a family of prioritized markers. We have seen that the hierarchies may be cascaded. If the first hierarchy highlights correctly the contours of interest, it will be possible to extract the regions of interest with a reduced number of markers. The segmentation will be more robust and less sensitive to the shape or size of the markers. B. Robust And Parameter-Free Top-Hats The top hat, the residue of an opening or a closing, is a useful operator for detecting text on a non uniform background. The waterfall hierarchy is a hierarchy which does not depend on any parameter. Consider the image of fig.9: the text is dark on a brighter background. The watershed segmentation of this image (and not of its gradient) produces a fine segmentation. The waterfall hierarchy analyses how the structures of the image are nested. The ranking of the contours is an enumeration of these nested levels; it is completely independent of the contrast of the image and does not depend upon any parameter. We take the contours of the last but one level in the waterfall hierarchy and derive from them a ceiling function which is equal to the initial image along these contours and is white everywhere else. The highest flooding of the image under this function fills the text completely. The residue produces a bright text on a uniformly dark background. C. Interactive Segmentation With Hierarchies Interactive segmentation strategies are particularly efficient when applied on hierarchies. As the hierarchy is constructed beforehand, the computing time after each interaction is greatly reduced. Libraries of routines for interactive segmentation have been built on this principle, as well for segmenting multimedia images [14] as for segmenting medical images [13]. Local resegmentations or mergings: a first partition is chosen in the hierarchy and then adapted locally by resegmenting regions which are too large or, inversely, by merging adjacent regions. Both operations are simply obtained by going up or down in the hierarchy. Magic wand: To extract a region with uniform color, most drawing/painting software packages have a function called "magic wand". For each position of the mouse, the color is determined and the connected region composed of all pixels with more or less the same color, depending on some tolerance threshold, is selected. This procedure is often helpful, but fails in some situations, when there is a progressive change of color shade, as is the case with the yellow apple in fig.10. The darker part of the apple is not selected and an irregular contour is produced. On the contrary, using a hierarchy has the advantage of providing well defined contours.
The hierarchy-based magic wand selects the largest region in the hierarchy such that its mean color remains within some predefined limits. Fig. 10. On the left, the initial image; center: all pixels which are within a color tolerance of an initial pixel; on the right, the result of the magic wand. Lasso: Another classical interactive tool is the lasso: the user draws an approximate contour around the real contour, as shown in fig.11a. The classical solution consists in applying the magic wand defined above to each pixel belonging to the approximate contour. For each such position one gets a piece of the background. The union of all such pieces constitutes the background. As shown on fig.11b, the result is not very satisfactory. Using a hierarchy, one may select the union of all regions of the hierarchy contained in the contour, yielding a much better result, as shown on fig.11c. Intelligent brush: An intelligent brush segments an image by "painting" it: it first selects a zone of interest by painting. Contrary to conventional brushes, the brush adapts its shape to the contours of the image. The shape of the brush is given by the region of the hierarchy containing the cursor. Moving from one place to another changes the shape of the brush, when one goes from one tile of a partition to its neighboring tile. Going up and down the hierarchy modifies the shape of the brush. In fig.12, on the left, one shows the trajectory of the brush; in the centre, the result of a fixed-size brush; and on the right, the result of a self-adapting brush following the hierarchy. This method has been used with success in a package for interactive segmentation of organs in 3D medical images [13]. Fig. 12. Comparison of the drawing with a fixed-size brush and a self-adaptive brush.
D. Energy Minimization In A Hierarchy An additional way to construct hierarchies is through energy minimization, which becomes a tractable problem if it is applied on a hierarchy. Given a hierarchy $A$, one wants to extract a partition $\pi$ whose regions verify an optimality criterion. The regions all belong to the hierarchy, but not necessarily to the same hierarchical level. Philippe Salembier et al. proposed to construct optimal partitions in the context of image coding; the aim is to produce a partition where each region is described by a simplified model, under the constraint that the encoding cost is not too high [12]. Laurent Guigues [6] analyzed the types of energies which may be minimized within hierarchies. His work has been continued and extended by Ravi Kiran and Jean Serra [7]. The energies contain two terms, a data fidelity term and a regularization term; the value of the first increases and that of the second decreases by climbing in the hierarchy towards coarser levels. Both terms are linked by a scale parameter. As an example, consider the Mumford-Shah model, where $D(R_i)$ represents the total variance of the image in the region $R_i$ of the partition and the second term measures the length of the contours present in the partition; we obtain in this way a kind of energy: $E(\pi, \lambda) = \sum_{R_i \in \pi} D(R_i) + \lambda C(\pi)$, where $\lambda$ is a scale parameter. For each scale parameter an optimal partition is easily extracted from the hierarchy through dynamic programming. For increasing values of $\lambda$ one obtains a series of nested partitions, i.e. a new hierarchy. There are then various strategies for approximating or finding the global minimum. The following example in fig.13 is courtesy of Jean Stawiaski, from Philips Medical Systems. It presents the various steps for segmenting a tumor: initial image, gradient, fine segmentation, saliency of the initial contours, saliency of the surfacic stochastic watershed, extraction of the contours minimizing the Mumford-Shah functional. Fig. 13. Segmentation of a tumor in a liver: initial image, gradient, fine segmentation, saliency of the gradient, saliency of the stochastic watershed, final segmentation.
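As an illustration of the dynamic program mentioned just above, the sketch below computes, for a fixed scale parameter λ, the cut of minimal energy over all partitions that can be extracted from a hierarchy given as a tree of regions. The tree encoding and the precomputed per-region terms D and C are assumptions of this sketch (a generic Guigues-style dynamic program, not the authors' code); summing the per-region contour lengths is taken here as the measure of C(π).

```python
def optimal_cut(tree_children, D, C, root, lam):
    """Return the minimal-energy cut of a hierarchy for scale parameter lam.

    tree_children: dict mapping a region id to the list of its children ([] for leaves).
    D[r]: data-fidelity term of region r (e.g. colour variance inside r).
    C[r]: contour length of region r.
    Keeping r as a single region costs D[r] + lam * C[r]; otherwise r is replaced
    by the optimal cuts of its children (bottom-up dynamic programming)."""
    def solve(r):
        own = D[r] + lam * C[r]
        kids = tree_children.get(r, [])
        if not kids:
            return own, [r]
        child_cost, child_regions = 0.0, []
        for c in kids:
            cost, regions = solve(c)
            child_cost += cost
            child_regions += regions
        return (own, [r]) if own <= child_cost else (child_cost, child_regions)

    energy, regions = solve(root)
    return energy, regions
```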
[REMOVED]
Structure and dehydration mechanism of the proton conducting oxide $\text{Ba}_2\text{In}_2\text{O}_5(\text{H}_2\text{O})_x$
Johan Bielecki, Stewart F. Parker, Laura Mazzei, Lars Börjesson and Maths Karlsson
Published version information. Citation: Bielecki J, Parker SF, Mazzei L, Börjesson L and Karlsson M. "Structure and dehydration mechanism of the proton conducting oxide $\text{Ba}_2\text{In}_2\text{O}_5(\text{H}_2\text{O})_x$". Journal of Materials Chemistry A 4 (2016): 1224-1232. doi: 10.1039/C5TA05728K
Structure and dehydration mechanism of the proton conducting oxide \( \text{Ba}_2\text{In}_2\text{O}_5(\text{H}_2\text{O})_x \). Johan Bielecki\(^{1,2}\), Stewart F. Parker\(^3\), Laura Mazzei\(^1\), Lars Börjesson\(^1\), Maths Karlsson\(^{1,*}\)
\(^1\)Department of Applied Physics, Chalmers University of Technology, SE-412 96 Göteborg, Sweden. Fax: +46 31 772 2090; Tel: +46 31 772 8038; E-mail: maths.karlsson@chalmers.se (Maths Karlsson). \(^2\)Department of Cell and Molecular Biology, Uppsala University, Box 596, SE-75124 Uppsala, Sweden. \(^3\)ISIS Facility, STFC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX, UK.
The structure and dehydration mechanism of the proton conducting oxide \( \text{Ba}_2\text{In}_2\text{O}_5(\text{H}_2\text{O})_x \) are investigated by means of variable temperature (20–600 °C) Raman spectroscopy together with thermal gravimetric analysis and inelastic neutron scattering. At room temperature, \( \text{Ba}_2\text{In}_2\text{O}_5(\text{H}_2\text{O})_x \) is found to be fully hydrated (\( x = 1 \)) and to have a perovskite-like structure, which dehydrates gradually with increasing temperature; at around 600 °C the material is essentially dehydrated (\( x \approx 0.2 \)). The dehydrated material exhibits a brownmillerite structure, which is characterized by alternating layers of \( \text{InO}_6 \) octahedra and \( \text{InO}_4 \) tetrahedra. The transition from a perovskite-like to a brownmillerite-like structure upon increasing temperature occurs through the formation of an intermediate phase at ca. 370 °C, corresponding to a hydration degree of approximately 50%. The structure of the intermediate phase is similar to the structure of the dehydrated material, but with the difference that it exhibits a non-centrosymmetric distortion of the \( \text{InO}_6 \) octahedra that is not present in the dehydrated material. The dehydration process upon heating is a two-stage mechanism; for temperatures below the hydrated-to-intermediate phase transition, dehydration is characterized by a homogeneous release of protons over the entire oxide lattice, whereas above the transition a preferential desorption of protons originating in the nominally tetrahedral layers is observed. Furthermore, our spectroscopic results point towards the co-existence of two structural phases, which relate to the two lowest-energy proton configurations in the material. The relative contributions of the two proton configurations depend on how the sample is hydrated.
1 Introduction
Proton conducting oxides are currently the subject of considerable attention due to their significant potential as efficient proton conducting electrolytes in next-generation, intermediate-temperature (\( \approx 200–500 \) °C) solid oxide fuel cells (SOFCs).\(^{1,2}\)
Amongst the most studied and promising materials is barium indate, \( \text{Ba}_2\text{In}_2\text{O}_5 \), which has a brownmillerite type structure, named after the original \( \text{Ca}_2\text{FeAlO}_5 \) mineral.\(^3\) The brownmillerite structure may be described as an oxygen deficient variant of the more well-known perovskite structure and exhibits alternating layers of \( \text{InO}_6 \) octahedra and \( \text{InO}_4 \) tetrahedra; for recent structural studies of \( \text{Ba}_2\text{In}_2\text{O}_5 \) and its variants see refs.\(^{4-10}\) As shown in Fig. 1(a), the octahedral layers contain the \( \text{In}(1) \) and \( \text{O}(1) \) atomic positions and the tetrahedral layers contain the \( \text{In}(2) \) and \( \text{O}(3) \) atomic positions, with the two types of layers bridged by the apical oxygens, denoted \( \text{O}(2) \). There is no orientational order between successive layers.\(^4\) Like many other oxygen deficient oxides, \( \text{Ba}_2\text{In}_2\text{O}_5 \) transforms upon hydration into a hydrogen containing, proton conducting, material. Hydration is generally carried out by heat treatment in a humid atmosphere, a process during which the water molecules in the gaseous phase dissociate into hydroxyl groups (\( \text{OH}^- \)) and protons (\( \text{H}^+ \)) on the surface of the sample. The hydroxyl groups then occupy nearby oxygen vacancies, whilst the remaining protons bind to lattice oxygens of the oxide host lattice. The protons, however, are not stuck to any particular oxygen atoms, but are free to move from one oxygen to another and, with time, they will therefore diffuse into the bulk of the material. At the same time as protons diffuse into the bulk, the counter diffusion of oxygen vacancies from the bulk to the surface allows the dissociation of further water molecules on the surface of the sample. This leads to an increase of the proton concentration in the material, and it is believed that the process continues until the (bulk) oxygen vacancies are filled, leading, ideally, to a fully hydrated material of the form \( \text{BaInO}_3\text{H} \). The structure of \( \text{BaInO}_3\text{H} \) is not a brownmillerite, but may be described as a perovskite-like structure with successive, distinctly different, layers of \( \text{InO}_6 \) octahedra running along the \( c \)-direction of an orthorhombic unit cell, cf. Fig. 1(b-c). The orthorhombic arrangement can be expected to be due to proton ordering, as opposed to protons being randomly distributed over the oxide lattice. Neutron diffraction analysis has shown that the average structure contains two different proton sites: one lies on the midpoint between \( \text{O}(1) \) atoms within the octahedral layer, and the other refers to a position in the plane formed by the apical \( \text{O}(2) \) oxygens; these are described by the 2c and 16l Wyckoff positions, respectively.\(^{12}\) Using these results as a starting point for structural optimizations by means of first-principles calculations, Martinez et al.\(^{11}\) and
Dervişoğlu et al.\textsuperscript{5} both investigated the possible local proton configurations and found that the 16l protons are, in a more realistic proton arrangement, described by the 32y position and that the 2c protons are described by the 4h position, where 4h and 32y represent local deviations from the average 2c and 16l positions.\textsuperscript{11} Specifically, the 4h position refers to protons which we here denote as H(2) and which are bonded to in-plane oxygens, O(3), whereas the 32y position refers to protons which we denote as H(1) and which are bonded to the apical oxygens, O(2), cf. Fig. 1(b). Upon dehydration the O(3) octahedra transform into tetrahedra, while the O(1) octahedra remain as such. Both of the theoretical studies found two local structures (proton configurations), labeled Martinez1 and Martinez2, as shown in Fig. 1(b) and (c), with lower energies compared to a range of other proton configurations also considered in the structural optimizations. The two studies do not, however, agree on the ground-state structure: whereas Martinez et al.\textsuperscript{11} assigned the ground-state structure to the Martinez1 proton configuration, Dervişoğlu et al.\textsuperscript{5} found that Martinez2 was of lowest energy. The two local structures are conceptually similar, with equally many protons in the 4h and 32y positions, respectively, and the only difference between them relates to the hydrogen-bond pattern of the 32y protons. In the Martinez1 structure, the 32y protons are hydrogen bonded towards the O(1) layer, whereas in the Martinez2 structure the 32y protons are hydrogen bonded to the O(3) oxygens. Recently, it was shown that the hydrogen bonding of the 32y protons in the Martinez1 structure has the effect of pulling the O(1) oxygen towards the H(1) site, which gives rise to a long-range non-centrosymmetric distortion of the In(1)O\textsubscript{6} octahedra.\textsuperscript{7} Further, Dervişoğlu et al.\textsuperscript{5} measured the \textsuperscript{1}H NMR spectrum of BaInO\textsubscript{3}H, which suggested the presence of three distinct proton positions in the structure. The three positions correspond to one position within the O(3) layer, and two positions within the O(2) layer that hydrogen bond to either the O(3) or the O(1) layer, respectively. First-principles calculations could reproduce the \textsuperscript{1}H NMR experiments by including four low-energy proton configurations within the material.\textsuperscript{5} Each of these configurations, labeled I, J, K, and L, is a specific combination of proton occupations on the three positions mentioned above.\textsuperscript{5} Similarities in hydrogen-bond patterns and crystal distortions make it possible to associate, with regard to vibrational fingerprints, I and K with "Martinez1-like" proton configurations, whereas J and L can be regarded as "Martinez2-like". While the structures of the fully dehydrated and fully hydrated materials have emerged recently,\textsuperscript{7} little is known about the structure at intermediate proton loadings, \textit{i.e.} for partially hydrated samples, and in particular about the dehydration mechanism, which relates to the proton dynamics and therefore to the material's proton conducting properties.
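For reference, the dissociative water uptake described in the introduction, in which a water molecule fills an oxygen vacancy and protonates a lattice oxygen, is conventionally summarized in Kröger-Vink notation. Writing the reaction out here is an addition of this text, not taken from the paper, but it is the standard hydration reaction for oxygen-deficient perovskites and is consistent with the mechanism described above:

\[
\mathrm{H_2O\,(g)} \;+\; V_{\mathrm{O}}^{\bullet\bullet} \;+\; \mathrm{O}_{\mathrm{O}}^{\times} \;\rightleftharpoons\; 2\,\mathrm{OH}_{\mathrm{O}}^{\bullet}
\]

Filling all oxygen vacancies of \( \text{Ba}_2\text{In}_2\text{O}_5 \) in this way corresponds to the fully hydrated limit \( x = 1 \), i.e. \( \text{Ba}_2\text{In}_2\text{O}_5(\text{H}_2\text{O}) \) or, per formula unit, \( \text{BaInO}_3\text{H} \).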
In this context, it has been suggested recently that the full occupation of H(2) protons on the 4h site may hinder the diffusion of protons within the In(2)-O(3) plane containing the nearest oxygen neighbors to which the H(2) protons form strong hydrogen bonds, and therefore that the proton conductivity may be governed instead by the more weakly hydrogen bonded H(1) protons on the 32y site.\textsuperscript{7} However, upon dehydration with increasing temperature it might be that the diffusivity of the H(2) protons increases at a rate that is a function of the H(2) occupancy, and if so, the question arises whether there is an optimum occupancy. Such information is not only of purely academic interest, but can be expected to help in the development of new, more highly proton conducting oxide systems, which is critical for the further development of intermediate-temperature SOFC technology based on proton conducting electrolytes. Accordingly, this work focuses on structural studies of the technologically important material Ba$_2$In$_2$O$_5$(H$_2$O)$_x$, with the aim of obtaining information about its local structure and how it depends on temperature and degree of hydration, x. The investigations are performed by means of variable temperature (20–600 °C) Raman spectroscopy together with thermal gravimetric analysis and inelastic neutron scattering (INS). We also discuss our structural results in terms of the mobility of protons and plausible proton conduction mechanisms. 2 Experimental 2.1 Sample preparation A powder sample of Ba$_2$In$_2$O$_5$ was prepared by solid state sintering, by mixing stoichiometric amounts of the starting reagents (BaCO$_3$ and In$_2$O$_3$). The sintering process was divided into three heat treatments: 1000 °C for 8 h, 1200 °C for 72 h and 1325 °C for 48 h, with intermediate cooling, grinding and compacting of pellets between each heat treatment. The as-sintered Ba$_2$In$_2$O$_5$ powder was annealed in vacuum at high temperature (≈600 °C) in order to remove any protons that the sample may have taken up during its exposure to ambient conditions; this sample is referred to as dehydrated and exhibited essentially the same spectrum as a hydrated sample, BaInO$_3$H, after heating to 600 °C in air. A hydrated sample, BaInO$_3$H, was prepared by annealing a portion of the dehydrated sample at ≈300 °C under a flow of N$_2$ saturated with water vapor for a period of a few days. On the basis of a thermal gravimetric measurement upon heating from 25 to 950 °C (heating rate 1.5 °C/min), performed using an F1 Iris thermobalance from Netzsch, the degree of hydration was determined to be around 110%, i.e. the sample was found to be fully hydrated, see Fig. 2. The fact that the mass loss corresponds to a hydration level slightly higher than 100% may be related to the presence of a small amount of adsorbed surface water. In agreement with our previous measurements on the same materials, room temperature X-ray powder diffraction patterns for the Ba$_2$In$_2$O$_5$ and BaInO$_3$H samples suggest an orthorhombic crystal structure for Ba$_2$In$_2$O$_5$ and a tetragonal structure for BaInO$_3$H, with no significant amount of impurities present. 2.2 Raman spectroscopy The Raman spectroscopy experiments were performed in backscattering geometry using a DILOR XY800 spectrometer, equipped with a tunable Ar$^+$ laser, a long working distance 40x objective, and a liquid nitrogen cooled CCD detector.
The laser was tuned to the green 514 nm line and the laser power at the sample position was kept at 4 mW for all measurements. A comparison of the Stokes and anti-Stokes spectra showed negligible laser heating on the sample. All spectra were collected with linearly polarized light impinging on the sample and unpolarized light collected at the CCD, and we used three different experimental setups for our measurements. The 35–720 cm$^{-1}$ range, covering the vibrational modes of the oxide host lattice, was measured in a high resolution double subtractive mode with an 800 mm focal length. The higher-frequency region, 2500–4000 cm$^{-1}$, covering the O-H stretch vibrational modes, was measured using a single grating with a 300 mm focal length. Variable temperature measurements were performed by measuring in-situ at sequentially higher temperatures. The temperature was controlled by a Linkam heating stage over the range from 20 °C to 600 °C, with a small opening to prevent overpressure as the sample was dehydrated with increasing temperature. To ensure that the spectra were measured in thermodynamic equilibrium, the sample was held for 1 h at each temperature before measuring the Raman spectra and, in addition, successive measurements at the same temperature were performed in order to rule out further dehydration after this time. The Raman spectra have been corrected for the Bose-Einstein occupation factor and adjusted to a common baseline level. The O-H stretch region of the Raman spectra was further normalized according to the thermal gravimetric curve in order to accurately reflect the total hydrogen content in the sample. 2.3 Inelastic neutron scattering The INS experiment was performed on the fully hydrated sample, BaInO$_3$H, on MAPS$^{13}$ at 10 K with an incident energy of 650 meV, with the Fermi chopper at 600 and 500 Hz. The sample, approximately 15 grams, was loaded into an aluminium sachet and the sachet into an indium wire sealed thin-walled aluminium can. The measuring time was about one day. 3 Results and discussion 3.1 Structural variability While investigating the 20 °C Raman spectra of BaInO$_3$H samples from different sample batches, differing essentially in hydration conditions (hydration time and temperature), we observed significant spectral variations. In particular, we found that these variations can be ascribed to different ratios of two distinctly different proton configurations, or phases, which are here denoted as type1 and type2. A comparison of the spectra of samples for which either phase is predominant (Fig. 3) suggests that type2 is generally characterized by a smaller number of bands related to vibrations of the oxide host lattice [Fig. 3(a)], as well as a wider O-H stretch region [Fig. 3(b)]. The former characteristic suggests a more symmetric, although not necessarily ordered, structure, whereas the latter suggests a larger variability in O-H distances in the material. Conversely, the narrower O-H stretch region in phase type1 indicates less structural variability between unit cells, whereas the presence of the sharp, intense Raman bands at around 150 and 530 cm$^{-1}$, respectively, is a clear characteristic of a reduction of the symmetry of the local structure. Included in Fig. 3(b) are also the calculated Raman spectra according to Bielecki et al.$^{11}$, for the two lowest-energy proton configurations found by Martinez et al.$^{11}$ and Dervişoğlu et al.$^{5}$, i.e. the proton configurations that here are called Martinez1 and Martinez2, see Fig. 1. 
As can be seen, the Martinez1 configuration corresponds to O-H stretch modes in the relatively narrow range from 3100 to 3500 cm$^{-1}$, which is in agreement with the experimental spectrum of predominantly type1. In comparison, the Martinez2 proton configuration is characterized by O-H stretch modes at lower frequencies and agrees better with the experimental spectrum of predominantly type2. The association of type1 with Martinez1-like and type2 with Martinez2-like structures is consistent with the low-frequency Raman spectra [Fig. 3(a)], where the 150 cm$^{-1}$ and 530 cm$^{-1}$ bands, which are present only in the type1 spectrum, can be explained by the non-centrosymmetric In(1)O(2)$_2$O(1)$_4$ distortion induced by the 32y hydrogen-bond pattern in the Martinez1 proton configuration, as mentioned above.$^{5,11}$ In general, such a structural distortion activates previously inactive Raman modes; in this case an In(1) mode at 150 cm$^{-1}$ and an In(1)-O stretch mode at 530 cm$^{-1}$. One should note, however, that there is a small degree of intermixing of the two phases. This is reflected by the thin line in both Fig. 3(a) and Fig. 3(b), which illustrates the amount of the type2 phase found in the predominantly type1 sample. By comparing the relative contributions of the two phases to the total integrated intensity of the O-H stretch region, we estimate that the sample of predominantly type1 contains approximately 30% of type2. Lastly, note that the calculations were done in optimized, static, unit-cell geometries and hence cannot capture the unit-cell variations giving rise to the Gaussian-shaped broadenings, nor the finite vibrational lifetime giving rise to Lorentzian-shaped broadenings, in the experimental spectra. Thus, the calculated frequencies should be seen as indications of the frequency range expected from the different atomic positions in the experimental spectra. 3.2 Host-lattice region of the vibrational spectra Fig. 4 shows the 50–650 cm$^{-1}$ range of the 20 °C Raman spectra of the dehydrated (Ba$_2$In$_2$O$_5$) and hydrated (BaInO$_3$H) samples. Included in the figure [Fig. 4(b)] is also the spectrum for an intermediate proton loading, as will be discussed in detail below. Considering first the spectrum of the dehydrated material [Fig. 4(a)], we observe several well-defined bands in agreement with the literature. These bands are assigned as follows: (i) bands below 200 cm$^{-1}$ relate to vibrational modes involving the heavy Ba ions, (ii) bands between 200 and 350 cm$^{-1}$ relate to different tilt and bend modes of the InO$_6$ and InO$_4$ moieties, and (iii) bands between 350 and 650 cm$^{-1}$ relate to symmetric In-O stretch modes of the same moieties.$^{7}$ The only discrepancy from this classification concerns two In-related bands at approximately 60 cm$^{-1}$ and 130 cm$^{-1}$ (indicated by vertical lines). Considering next the spectrum of the hydrated material [Fig. 4(c)], we observe that the spectrum changes considerably upon hydration. This is expected since the overall structure changes from a brownmillerite to a perovskite-like structure. In particular, we observe that all Ba-related bands, except the one at 130 cm$^{-1}$, as well as the strong 600 cm$^{-1}$ band, which is assigned to In-O stretches of In(2)O(2)$_2$O(3)$_2$ tetrahedra, are now completely absent. 
Instead, a strong band at around 530 cm$^{-1}$, which is assigned to In-O stretches of InO$_6$ octahedra, and a band at 150 cm$^{-1}$, are now observable in the spectrum. The 150 cm$^{-1}$ band has previously been assigned to an In(1) related mode activated by the long-range non-centrosymmetric distortion of the In(1)O(2)$_2$O(1)$_4$ octahedra, as caused by the hydrogen bonding between H(1) protons and O(1) oxygens, which is a fingerprint of the Martinez1 proton configuration. Further information about the non-centrosymmetric In(1)O(2)$_2$O(1)$_4$ distortion can be found in the linewidths of the 150 and 530 cm$^{-1}$ bands as a function of temperature, as we shall see later, but first we discuss the overall spectral changes with increasing temperature. Figure 5 shows the Raman spectra measured upon increasing the temperature from 20 °C to 600 °C. For the 50–720 cm$^{-1}$ range of the spectra [Fig. 5(a)], which relates to the vibrational dynamics of the oxide lattice, we observe a general broadening of all bands as a function of increasing temperature from 20 °C to 370 °C. At a temperature of 370–380 °C the spectrum changes more markedly. Most noticeable is the appearance of new, rather strong, bands, at approximately 60, 82, 92, 102, 180 and 620 cm$^{-1}$, as well as of weaker bands in the range 215–243 cm$^{-1}$, suggesting a structural phase transition away from the structure of the (fully) hydrated material. In this context, the 60 cm$^{-1}$ and 620 cm$^{-1}$ bands are identified as tilt motions and symmetric In-O stretches of In(2)O(2)$_2$O(3)$_2$ tetrahedra, respectively. The appearance of these bands is in agreement with the concomitant transformation of InO$_6$ octahedra to InO$_4$ tetrahedra as the sample is dehydrated with increasing temperature. The other bands relate to vibrations involving mainly the Ba ions (82, 92, 102, and 180 cm$^{-1}$) and oxygen ions (215–243 cm$^{-1}$), respectively, further reflecting the structural change. Upon further temperature increase (from 370 °C to 600 °C), the spectrum changes smoothly towards the shape of the spectrum for the dehydrated material, although it should be noted that the fully dehydrated phase was not reached within the covered temperature range. In particular, the symmetric In-O stretch band at 620 cm$^{-1}$ downshifts gradually with increasing temperature to reach a position of 595 cm$^{-1}$ at 600 °C. Thermal broadening is responsible for reducing the spectral intensities at higher temperatures compared to the fully dehydrated 20 °C spectrum shown in Fig. 4(a). In this context, we now turn to the temperature dependence of the linewidths of the two In(1) related bands at 150 and 530 cm$^{-1}$, which are associated with the non-centrosymmetric In(1)O(2)$_2$O(1)$_4$ distortion in BaInO$_3$H. The thermal linewidth broadening is given by the Klemens model, which takes into account the anharmonic decay of one optical phonon into two acoustic phonons. By this process the linewidth, $\Gamma$, increases with temperature according to $\Gamma(T) \approx \Gamma(0)\,[1 + 2/(\exp(\hbar\omega_0/2k_B T) - 1)]$, where $\omega_0$ is the frequency of the optical phonon. Deviations from this rule are a sign of additional processes that decrease the phonon lifetime $\tau$ ($\tau \approx 1/\Gamma$) and broaden the vibrational linewidth. Such broadening commonly arises from increased disorder, and consequently anharmonicity, of the atomic species involved in the vibration at hand. In Fig. 
6(a-b) is shown the temperature evolution of the 150 and 530 cm$^{-1}$ linewidths, together with fits to the Klemens model (solid lines). As can be seen, the measured linewidths agree well with the Klemens model until a temperature of ca. 370 °C is reached, indicating no loss of coherence in the In(1)O(2)$_2$O(1)$_4$ distortion below 370 °C. Above 370 °C, however, both modes show an anomalous increase of $\Gamma$ with increasing temperature. This is a clear indication of decoherence in the In(1)O(2)$_2$O(1)$_4$ distortion, which we interpret as due to the gradual dehydration above 370 °C, which is illustrated in Fig. 6(c). Our results support the following interpretation of the hydrated-to-intermediate phase transition. Heating the sample from 20 °C gradually releases protons and oxygen atoms from the sample, a process during which In(2)O(2)$_2$O(3)$_4$ octahedra are transformed into In(2)O(2)$_2$O(3)$_2$ tetrahedra. However, enough In(2)O(2)$_2$O(3)$_4$ octahedra are still present in order to keep the overall symmetry of the hydrated phase. The sudden spectral changes at 370 °C suggest a change in symmetry throughout the sample, and indicate that the density of In(2)O(2)$_2$O(3)$_2$ tetrahedra has grown enough to transform the O(3) layer symmetry to that of the dehydrated phase. Thus, the intermediate structure is distinctly different from the dehydrated structure in that, even though the crystal structure approaches the symmetry of the dehydrated structure upon dehydration, the non-centrosymmetric In(1)O(2)$_2$O(1)$_4$ distortion and its associated vibrational modes at 150 and 530 cm\(^{-1}\) are still present. 3.3 O-H stretch region of the vibrational spectra The gradual dehydration upon increasing temperature is consistent with the spectral changes in the O-H stretch region [Fig. 5(b)], which reflect a change in the local coordination of protons in the material. In particular, one should note that the frequency of an O-H stretch mode is very sensitive to the degree of hydrogen bonding the proton may experience towards a neighboring oxygen and that such a hydrogen-bonding interaction generally softens the mode.\(^{17}\) Analysis of the O-H stretch band(s) therefore provides a spectroscopic means not only to identify, but also to distinguish between, different proton sites in the structure. The very broad, asymmetric O-H stretch band for \( \text{BaInO}_3 \text{H} \) suggests that several different proton sites are present in the material.
Fig. 6 (Color online) Temperature dependence of the spectral linewidth (full width at half maximum) of (a) the In(1) mode at 150 cm$^{-1}$ and (b) the In(1)-O(1) stretch mode at 530 cm$^{-1}$. The anomalous increase in linewidth above the hydrated-to-intermediate phase transition is attributed to a gradual decoherence of the non-centrosymmetric distortion of the In(1)O(2)$_2$O(1)$_4$ octahedra as H(1) protons are released; this is depicted schematically in (c). The non-centrosymmetric distortion is highlighted by the solid line that passes through the octahedral inversion plane. It is clear that the In(1) and O(1) positions in BaInO$_3$H (left) break the inversion symmetry, as opposed to the situation in Ba$_2$In$_2$O$_5$ (right). As the material is dehydrated, the inversion symmetry reappears gradually throughout the sample and follows the anomalous increase in vibrational linewidths (middle). 
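The Klemens analysis above translates directly into a small fitting routine. The following is a minimal sketch, not taken from this work: the temperature and linewidth arrays are hypothetical placeholders, and scipy's curve_fit is used for the least-squares fit.

```python
# Sketch: fitting the Klemens anharmonic-decay expression to Raman linewidths.
# The data arrays below are hypothetical placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

HC_OVER_KB = 1.4388  # cm*K; h*c/k_B, converts a wavenumber (cm^-1) into a temperature

def klemens(T, gamma0, omega0):
    """Gamma(T) = Gamma(0) * [1 + 2/(exp(hbar*omega0/(2*k_B*T)) - 1)],
    with omega0 given as a wavenumber in cm^-1 and T in K."""
    x = HC_OVER_KB * omega0 / (2.0 * T)
    return gamma0 * (1.0 + 2.0 / np.expm1(x))

T = np.array([293.0, 373.0, 473.0, 573.0, 643.0])       # K, hypothetical
fwhm = np.array([10.5, 12.8, 15.7, 18.7, 20.8])         # cm^-1, hypothetical

popt, _ = curve_fit(klemens, T, fwhm, p0=(6.0, 500.0))
print("Gamma(0) = %.2f cm^-1, omega0 = %.0f cm^-1" % tuple(popt))
```

An anomalous upturn of the measured linewidths above the fitted curve, as discussed above, would then signal additional decoherence beyond simple anharmonic decay.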
Although the intensities of the different O-H stretch components provide a direct indication of the relative occupation of protons in the different sites, a quantitative assessment of the integrated intensity of the O-H stretch band(s) is, generally, not straightforward, since the Raman scattering cross section may vary with the degree of hydrogen bonding, i.e. with the frequency of the vibration. In order to elucidate the possible frequency dependence of the Raman scattering cross section, we also measured the O-H stretch region using INS, for which the intensity of a particular vibration is directly proportional to the number of vibrating species, irrespective of their vibrational frequency.$^{18}$ A comparison of the Raman and INS spectra is shown in Fig. 7. As can be seen, the shapes of the O-H stretch band measured with the two techniques are indeed similar to each other, suggesting that there is no strong dependence of the Raman scattering cross section on the frequency of the O-H stretch vibrations in the material studied here. Consequently, we can directly translate the area under each O-H stretch component in Fig. 5(b) to a corresponding “band-resolved” hydration level. Fig. 8(a) shows the normalized hydration level for each O-H stretch component together with the overall hydration level as determined by thermal gravimetric measurements under similar conditions. As can be seen, the hydrated-to-intermediate phase transition corresponds to a hydration level of approximately 50%, for all six bands. That is, the protons are homogeneously desorbed over the oxide lattice, with essentially the same desorption rates for protons on the 4h and 32y sites and for both phases (type1 and type2). Above the phase transition temperature, however, protons on the 4h site are preferentially desorbed and hence the relative portion of protons on the 32y position in the material increases drastically. This is further illustrated in Fig. 8(b), where the temperature dependence of the relative amounts of the two phases is also presented. Importantly, we find that, below the phase transition temperature, our specific sample consists of roughly 3/4 of phase type1 and 1/4 of phase type2, whereas above the phase transition temperature there is a preferential desorption of protons in type2. Above 500 °C, all protons originating in phase type2 are gone. To summarise our findings, it is suggested that at room temperature the BaInO$_3$H sample is composed of two distinct phases (3/4 of phase type1 and 1/4 of phase type2), which are attributed to the two lowest-energy proton configurations Martinez1 and Martinez2. Both of the phases are characterised by two main proton positions (4h and 32y), which are equally occupied but differ in the way the protons on the 32y site are hydrogen bonded. Upon increasing the temperature to 370 °C, the material dehydrates essentially homogeneously, meaning that the two proton positions are gradually depleted at a rate that is almost the same in both phases. Upon further temperature increase, the dehydration process is more complicated as it is characterised by a dehydration rate that not only differs between the two phases but is also different for the two proton positions. For phase type1 the 4h proton position is depleted at a rate that is higher than for the 32y proton position. For phase type2, both proton sites dehydrate at an almost equal rate. This dehydration mechanism is summarized graphically in Fig. 
9, where we have combined thermal gravimetric and Raman data in order to extract individual dehydration curves for the type1 and type2 phases. Our new insight into the dehydration mechanism of BaInO$_3$H also provides ideas relevant to a more mechanistic understanding of the proton dynamics, i.e. of the proton conduction mechanism, in the material. In particular, we suggest that the H(1) protons on the 32y position in the energetically more stable phase type1 are less mobile than the protons in the other local structural configuration. This indicates that above the hydrated-to-intermediate phase transition at 370 °C, there is an inhomogeneous proton conduction mechanism in which the protons move more easily within the In(2)-O(3) planes of the material, whereas for lower temperatures, the larger occupation of H(2) protons on the 4h position may hinder significantly the proton diffusion within these planes. This picture is in agreement with the dehydration behaviour of phase type2, but here the different hydrogen-bond pattern of the H(1) protons on the 32y position appears to have a crucial effect as it makes these protons more mobile. A plausible reason for the energetically higher stability of the phase type1, and in particular for the corresponding 32y protons, may be that the non-centrosymmetric distortion of the In(1)O(2)$_2$O(1)$_4$ octahedra creates a well-defined local energy minimum for the H(1) protons. This is in agreement with the relatively narrow linewidth of band III, see Fig. 5(b). This does not necessarily mean that the phase type1 reflects the global ground state, but perhaps a metastable state whose portion depends on how the sample is hydrated. Although we are unable to determine precisely the factors determining the ratio of the two phases, our results provide some hints as to why the two phases can coexist. On the one hand, we have a spectrally well-defined proton configuration, type1, whereas on the other hand, we have the type2 configuration, which is characterised by generally broader spectroscopic features and thus a higher degree of structural variability in the material. This may be indicative of a competition between energy and entropy at play, where the parameters of the hydration (e.g. temperature and time) may tip the balance between the two. To this end, an investigation of the vibrational spectra as a function of systematic changes of the hydration conditions is likely to be beneficial for the clarification of the structure determining mechanisms involved, particularly if coupled to mechanistic studies of proton diffusion, using e.g. quasielastic neutron scattering.$^{19}$ 4 Conclusions To conclude, we find that the proton conducting oxide Ba$_2$In$_2$O$_5$(H$_2$O)$_x$ adopts three distinctly different local structures, depending on the level of hydration, x, and temperature, T. The structure evolves from a perovskite-like structure for the fully hydrated material (x = 1) at T = 20 °C, through a partially hydrated structure for 20 °C < T < 600 °C, to a brownmillerite-like, essentially proton-free, structure at even higher temperatures. The structure of the intermediate phase is similar to the structure of the dehydrated material, but with the difference that it is characterised by a non-centrosymmetric distortion of the InO$_6$ octahedra not present in the latter. The hydrated-to-intermediate phase transition occurs at approximately 370 °C, which corresponds to approximately 50% dehydration. 
Up to this temperature, the dehydration process progresses uniformly, with equal release of protons from the 4h and 32y proton positions, whereas upon further temperature increase protons on the 4h position are released at a higher rate. We also found that the O-H stretch region of the vibrational spectra is not consistent with a single-phase spectrum, but is in agreement with an intermixture of the spectra associated with the lowest-energy (type1) and next-lowest-energy (type2) proton configurations in the structure of the material. During dehydration we find protons in crystallographic sites associated with type1 to have higher thermal stability compared to those associated with type2. The amount of each phase is found to depend on how the material is hydrated, and it is thus possible that the hydration conditions influence the proton conductivity at intermediate temperatures. Acknowledgements Funding from the Swedish Research Council (grant Nos. 2010-3519 and 2011-4887) is gratefully acknowledged. The STFC Rutherford Appleton Laboratory is thanked for access to neutron beam facilities. We also thank S. M. H. Rahman at Chalmers University of Technology for the preparation of the sample.
Fig. 10 TOC. Dehydration mechanism of the proton conducting oxide Ba$_2$In$_2$O$_5$(H$_2$O)$_x$.
Motivic matching strategies for automated pattern extraction
OLIVIER LARTILLOT AND PETRI TOIVIAINEN
University of Jyväskylä, Department of Music, Finland
ABSTRACT This article proposes an approach to the problem of automated extraction of motivic patterns in monodies. Different musical dimensions, restricted in current approaches to the most prominent melodic and rhythmic features at the surface level, are defined. The proposed strategy of detection of repeated patterns consists of an exact matching of the successive parameters forming the motives. We suggest a generalization of the multiple-viewpoint approach that allows a variability of the types of parameters (melodic, rhythmic, etc.) defining each successive extension of these motives. This enables us to take into account a more general class of motives, called heterogeneous motives, which includes interesting motives beyond the scope of previous approaches. Besides, this heterogeneous representation of motives may offer more refined explanations concerning the impact of gross contour representation in motivic analysis. This article also shows that the main problem raised by the pattern extraction task is related to the control of the combinatorial redundancy of musical structures. Two main strategies are presented that ensure an adaptive filtering of the redundant structures, based on the notions of closed and cyclic patterns. The method is illustrated with the analysis of two pieces: a medieval Geisslerlied and a Bach Invention. 1. INTRODUCTION Motives are musical structures that constitute one of the most characteristic descriptions of music. The perception of the motivic structure is generally governed by two main heuristics. Firstly, discontinuities of the sequential structure of music along its different dimensions imply the inference of segmentations (Lerdahl & Jackendoff, 1983). The strength of each segmentation depends on the size of the corresponding discontinuities. A local maximum of inter-pitch and/or inter-onset interval amplitude, or the accentuation of one particular note, are common examples of such local discontinuities. These segmentations result in a rich structural configuration. The multiple principles ruling these segmentations, such as Lerdahl and Jackendoff's Grouping Preference Rules (Lerdahl & Jackendoff, 1983), can be ordered relative to their perceptive salience (Deliège, 1987). The second general principle, on which this article is focused, is motivic extraction based on the concept of pattern repetition. Contrary to local segmentation, the structures extracted as a result of the pattern heuristics are associated with concepts (the description of these repeated patterns) that form a lexicon of characteristic elements. The motivic structure is often highly complex. The most salient and characteristic motives define the themes. A more detailed analysis shows the existence of deeper motivic structures that proliferate throughout the work. Some of these cells are specific material created in the context of the piece, while others are common stylistic features, also known as “signatures”, that are used in a particular musical style (Cope, 1996). Detailed analysis of the deeper motivic structures contained in music has been undertaken during the twentieth century (Reti, 1951). 
In previous works, systematic approaches have been suggested, with a view to augmenting the analytic capabilities, both in quantitative and qualitative terms (Ruwet, 1966-1987; Nattiez, 1975-1990; Lerdahl and Jackendoff, 1983). Computational modelling offers the possibility to automate the process, enabling the fast annotation of large scores, and the extraction of complex and detailed structures without much effort. One major difficulty here is to ensure the musical interest of the computer-based analyses, and in particular their perceptual relevance. It is assumed here that analyses produced by alternative strategies or algorithms cannot all be considered as equally valuable, and should instead be evaluated according to their musical relevance. Yet no consensus seems to have been reached among musicologists as to the criteria by which this contested notion of musical relevance should be defined. On the contrary, the analysis of a single piece by different musicologists may show important variability, expressing the subjectivity of the musicologists' approaches. The aim of computational modelling here would be to make explicit the spectrum of strategies that musicologists may choose to use for their analysis. Due to the experimental aspect of current computational approaches, including the one presented in this article, this complex question cannot be answered for the moment. As a first approach, the analysis may focus mainly on the simplest and most evident musical structures, whose automated discovery remains a scientific challenge. This article proposes a solution to the problem of automated extraction of motivic patterns, restricted to the study of monodies and simple musical transformations. Section 2 presents different musical dimensions, restricted to the most prominent melodic and rhythmic features at the surface level. The strategy of detection of repeated patterns is explained in section 3. It consists of an exact matching of the successive parameters forming the motives. We suggest a generalization of the multiple-viewpoint approach by allowing a variability of the types of parameters (melodic, rhythmic, etc.) defining each successive extension of these motives. This enables us to take into account a more general class of motives, called heterogeneous motives, which include salient motives that have remained outside the scope of previous approaches. Besides, this heterogeneous representation of motives may offer more refined explanations of the impact of gross contour representation in motivic analysis. This article also shows that the main problem caused by the pattern extraction task is related to the control of the combinatorial redundancy of musical structures. Section 4 presents two main strategies, which enable an adaptive filtering of the redundant structures based on the notions of closed and cyclic patterns. Results offered by this model are presented in section 5, and compared with analyses by Nicolas Ruwet and Jeffrey Kresky. Current and future directions of research are discussed in section 6. (1) Structuralism-based approaches (such as serialism) will not be considered in this article. 2. Definition of the Parametric Space This section presents the different musical dimensions currently integrated into our model. The study is restricted to monodies and does not take into account more complex polyphonic relations between notes. 
Figure 1. Descriptions of a monody. Repeated sequences of values, forming patterns, are enclosed in boxes.
(2) See section 6.3 for a brief evocation of the generalization of the approach to polyphony.
The diatonic pitch representation indicates the height of each note with respect to the implicit tonal scale. This information can be directly obtained from the score when the tonality of a piece strictly follows the indication given by the key signature. In more general cases — not yet considered in our approach — local modulations need to be taken into account through a proper harmonic analysis. In particular, when analysing MIDI files where no tonality is specified explicitly, diatonic pitch representations need to be reconstructed using pitch-spelling algorithms (Cambouropoulos, 2003; Chew and Chen, 2005; Meredith, 2006). The result of the automated pitch spelling can be directly imported into the pattern extraction algorithm. *Absolute diatonic pitch* values are represented on a numeric scale whose origin (0) is set at the tonic of the scale (see figure 1). *Diatonic pitch class* values are obtained by applying a modulo 7 operation to the absolute diatonic pitch values. The integer obtained by dividing the absolute diatonic pitch values by 7 gives the *octave position*. Alternatively, *absolute chromatic pitch* values are represented on a chromatic scale, where, following the MIDI convention, the value of 60 is associated with pitch C4. Similarly to diatonic pitch, *chromatic pitch class* values are obtained by applying a modulo 12 operation to the absolute chromatic pitch values. In our system, due to the automated management of parametric dimensions according to their specificity relationships (as will be explained in section 4.2), the simple addition of the absolute pitch information automatically enables the discovery of transposition-invariant subclasses, such as pattern A in the example of section 5.2.2. Relative pitch configurations are modelled by defining the position of each successive pitch with respect to its direct neighbours within the monody, defining interval-based dimensions. Intervals can be defined either between absolute pitches or between pitch classes, resulting in two separate dimensions called respectively *absolute interval* and *interval class*. This distinction can be drawn for both diatonic and chromatic dimensions. For instance, the *chromatic interval class* dimension is used in Pitch-Class-Set theory (Forte, 1973), where interval classes are more important than absolute intervals. Absolute pitch intervals can be perceived more simply as *gross contours*, i.e., simple successions of ascending, descending or constant pitches. Studies have shown the perceptive importance of gross contour dimensions (White, 1960; Dowling and Harwood, 1986): distorted repetitions of the same motive can be recognised even if the interval values have been significantly changed, as long as the gross contour remains constant. On the other hand, due to the small alphabet of this dimension, repetition of gross contour motives cannot be perceptually detected if the occurrences are too distant in time (Dowling and Harwood, 1986). The impact of gross contour in pattern extraction will be further discussed in section 3.3. Metrical position indicates the phase of each note with respect to the metrical structure. 
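Before turning to the metrical and rhythmic dimensions, the pitch-derived dimensions defined above can be made concrete with a short sketch. This is our own illustration rather than the model's implementation; the (diatonic, MIDI) note encoding and all identifier names are assumptions.

```python
# Sketch: pitch-derived viewpoints for a monody, following the definitions above.
# Each note is an (absolute_diatonic_pitch, midi_pitch) pair; this encoding is assumed.
def pitch_viewpoints(notes):
    diat = [d for d, _ in notes]    # absolute diatonic pitch, 0 = the tonic
    chrom = [m for _, m in notes]   # absolute chromatic pitch (MIDI convention, 60 = C4)
    views = {
        "diat": diat,
        "diat_pc": [d % 7 for d in diat],          # diatonic pitch class
        "octave": [d // 7 for d in diat],          # octave position
        "chrom_pc": [m % 12 for m in chrom],       # chromatic pitch class
        "diat_int": [b - a for a, b in zip(diat, diat[1:])],     # absolute diatonic interval
        "chrom_int": [b - a for a, b in zip(chrom, chrom[1:])],  # absolute chromatic interval
        "contour": ["+" if b > a else "-" if b < a else "="      # gross contour
                    for a, b in zip(chrom, chrom[1:])],
    }
    views["diat_int_class"] = [i % 7 for i in views["diat_int"]]      # diatonic interval class
    views["chrom_int_class"] = [i % 12 for i in views["chrom_int"]]   # chromatic interval class
    return views

# Example: C4 E4 G4 E4 in C major.
print(pitch_viewpoints([(0, 60), (2, 64), (4, 67), (2, 64)])["contour"])  # ['+', '+', '-']
```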
In a first approach, the metrical structure is represented by a main pulsation (defining onbeats) subdivided into another pulsation (defining offbeats), which is generally either two or three times faster (corresponding to so-called binary and ternary rhythms). Onbeats are indicated by a value of 1, whereas offbeats are indicated by a value of 2 in binary rhythm, and 2 and 3 in ternary rhythm. This dimension is a first attempt to represent metric hierarchy. This information can be directly obtained from the score. However, when analysing MIDI files, the metrical structure is not specified explicitly, and needs to be reconstructed using beat-tracking (Toiviainen, 1998; Large and Kolen, 1994; Dannenberg and Mont-Reynaud, 1987), quantization (Desain and Honing, 1991; Cemgil and Kappen, 2003), and meter induction algorithms (Toiviainen and Eerola, 2006; Eck and Casagrande, 2005). The result of these algorithms can be directly integrated as input of the pattern extraction algorithm. The metrical position dimension plays an important role in rhythmic identification. In particular, a rhythmic pattern is generally neither detected nor recognized when its phase is altered with respect to the metrical structure (Povel and Essens, 1985; Ahlback, this issue). This constraint has been integrated into the model: a filter excludes any rhythmic repetition that does not agree with the metrical structure of the original motive. The rhythmic description of notes is generally expressed along two main distinct parameters: note duration and inter-onset intervals. Durations are rhythmic values explicitly associated with each note, whereas inter-onset intervals correspond to the temporal distance between successive note onsets in the monody. Inter-onset intervals might be considered more prominent than note durations because note onsets are perceptually more salient than offsets. For instance, in figure 1, the inequality between occurrences of bars 1 and 2 in terms of rhythmic value — the quarter note in bar 1 transformed into a succession of an eighth note and an eighth-note rest in bar 2 — is a detail that does not mask the inter-onset identity. For this reason, the duration parameter has been discarded from the analysis. Since, in this paper, we are dealing with monophonic sequences, the inter-onset interval is defined as the interval between successive note onsets. The specification of this parameter is more complex in polyphony as it requires a voice separation. The set of musical dimensions considered in the current version of the model is hierarchically ordered. In particular, the contour dimension is considered as more general than both the diatonic and chromatic representations. No logical relations have been set between the diatonic and chromatic representations due to ambiguities of translation between the two representations: for instance, the augmented fourth and diminished fifth intervals of a diatonic scale have identical chromatic values. New musical dimensions can be added to the framework, provided that the logical dependencies, if any, between the new dimensions and the previously defined representations are specified. 3. SPECIFICATIONS FOR MELODIC COMPARISON 3.1. FUZZY VS. EXACT MATCHING Once a range of musical parameters has been defined, the heuristics for motivic identification must be specified. The simplest strategy would consist of inferring identifications only when parameters of compared entities are strictly equal. 
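A minimal sketch of this strict-equality heuristic, reusing the hypothetical pitch_viewpoints helper from the previous sketch (again an illustration, not the model's code):

```python
# Sketch: strict-equality matching of two motives along a chosen set of viewpoints,
# reusing pitch_viewpoints() from the previous sketch.
def exact_match(views_a, views_b, dimensions):
    """Two motives are identified iff every selected viewpoint sequence is strictly equal."""
    return all(views_a[dim] == views_b[dim] for dim in dimensions)

# A motive and its transposition match on intervals and contour, not on absolute pitch.
a = pitch_viewpoints([(0, 60), (2, 64), (4, 67)])   # C4 E4 G4
b = pitch_viewpoints([(4, 67), (6, 71), (8, 74)])   # G4 B4 D5 (same shape, transposed)
print(exact_match(a, b, ["diat_int", "chrom_int", "contour"]))  # True
print(exact_match(a, b, ["diat"]))                              # False
```

Selecting which dimensions must agree is exactly the degree of freedom that the heterogeneous-pattern generalization discussed below exploits.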
An alternate strategy hypothesizes the existence of a large range of similarities that can be perceived between melodies, but that cannot be described through exact parametric identifications. A fuzzy definition of pattern matching can be used for that purpose: a numerical distance is defined, and a matching is made when the similarity distance is lower than a pre-specified threshold. The fuzzy approach offers a way of avoiding the integration of musical dimensions that require extensive and complex computation. In particular, the diatonic pitch-interval dimension might be avoided by adopting a fuzzy approach along the chromatic pitch-interval dimension, since a one-semitone threshold theoretically allows the merging of major and minor intervals (Cambouropoulos et al., 2002; Cope, 1996). However, this threshold also tolerates other transformations that are not directly related to the major/minor configuration: for instance, a category that contains major third and perfect fourth intervals, but that excludes fifth intervals, cannot be easily explained using traditional musical concepts. More generally, the fuzzy approach can be considered as a clustering method that allows new identifications between musical entities. For instance, in the melodic dimension, the use of numerical similarity enables the identification of motives whose respective intervals are similar but not identical. The size and content of each cluster are largely determined by the value of the dissimilarity threshold. Yet no heuristic for precisely fixing this value has been proposed. Hence, the determination of the threshold value relies entirely on the user's intuitive choices. Due to the difficulties created by the fuzzy approach, another solution consists, more simply, of restricting the comparison to exact matching along multiple musical dimensions (Conklin and Anagnostopoulou, 2001). For instance, concerning the melodic dimensions, patterns can be identified along their chromatic and diatonic pitch-interval, and contour dimensions. The computational model presented in this paper follows this exact matching heuristic. 3.2. ADAPTIVE MATCHING IN A MULTI-PARAMETRIC SPACE We propose a generalisation of the multiple viewpoint approach by allowing some variability in the set of musical dimensions used during the construction of each musical pattern. This enables us to take into consideration a more general type of pattern, called heterogeneous patterns, which despite their structural complexity seem to capture an important aspect of musical structure. An example of a heterogeneous pattern is the first theme of Mozart's Sonata in A, K. 331 (Fig. 2), which contains two phrases that repeat the same pattern. This pattern, enclosed in a solid box in the figure, is decomposed into two parts: a melodico-rhythmic antecedent, and a rhythmic consequent. The antecedent itself contains an exact repetition with transposition of a short cell (indicated with dotted lines). In the actual piece, the ending of the first phrase contains a little melodic ornamentation, which we have indicated in the score of Figure 2 by grace notes. Figure 2. Analysis of the first theme of Mozart's Sonata in A, K. 331, bars 1-8. The reduced melodic phrase in bar 4, where ornamentations are shown as grace notes, suggests a rhythmic similarity with bar 8. Another example is the finale theme of Beethoven's Ninth Symphony (Fig. 
3), which begins with an antecedent/consequent repetition of a phrase, with identities both in the pitch and time domains, except for a slight modification of the ending of each phrase. Figure 3. Analysis of Ode to Joy, from Beethoven's Ninth Symphony. Another famous example, which will be further discussed in the next section, is the four-note pattern of Beethoven's *Fifth Symphony*, shown in figure 4. This pattern is actually subdivided in our approach into a hierarchy of pattern descriptions of diverse levels of specificity. The most specific one is the complete melodico-rhythmic pattern *a* repeated twice at the very beginning of the Symphony. This specific pattern is progressively generalised during the piece through a disintegration of the different parameters constituting its description: the modification of the last descending third interval into a simple descending contour (pattern *b*), the modification of the third pitch (pattern *c*), the variation of the general contour pattern (pattern *d*), etc. Figure 4. The development of the famous four-note motive in the first movement of Beethoven's *Fifth Symphony*, in terms of a succession of patterns of descending order of specificity: *a*, *b*, *c* and *d*. In an alternate and more precise description of the metrical dimension, the last note of each pattern is located on a downbeat represented by a 0 value. 3.3. A SOLUTION FOR THE CONTOUR PARADOX As mentioned in section 2, due to the very limited degree of specificity of the gross contour parameter, patterns made of ascending and descending intervals are not easily recognised. It has been suggested, therefore, that repetition of gross contour sequences can be identified only when sufficiently close in time that, when the second occurrence is heard, the first one remains in short-term memory. Indeed, gross contour sequences can more easily be searched in short-term memory due to the limited size of this memory store, and the availability of its content. On the other hand, a search in long-term memory seems cognitively implausible because of this store's large size, the resulting combinatorial explosion of possible results, and the insufficient specificity of the query (cf. Dowling and Harwood, 1986). The contour dimension is all the more restricted to short-term memory since contour patterns are hardly perceived when they are long (15 notes, for instance) (Edworthy, 1985). However, this restriction leads to paradoxes (Dowling and Fujitani, 1971; Dowling and Harwood, 1986): if gross contour has no impact on long-term memory, how could the different occurrences of the familiar four-note theme throughout the first movement of Beethoven's *Fifth Symphony* (see for instance figure 4) actually be detected? One suggested explanation is that the numerous repetitions of the motive enable a memorisation of the contour pattern in long-term memory. Yet, could not this motive be detected, due to its intrinsic construction, even when repeated only a couple of times throughout the piece (as in figure 5)? The heterogeneous pattern representation may offer an answer to this question, by enabling, as we saw in section 3.2, a decoupling of the choice of musical dimensions and the construction of patterns. A full understanding of the perceptive properties of motivic patterns requires a chronological view of the construction of these structures, in terms of an incremental concatenation of successive intervals. 
The dependency of such constructions upon long- and short-term memory may be understood in this incremental approach. More precisely, the initiation of a new occurrence of a pattern requires, as previously, a matching in long-term memory along interval dimensions. However, in this framework, it may be suggested that the further extensions of a discovered new occurrence do not require such a demanding computational effort: once the first intervals have been initiated, the discovery of the progressive extensions simply requires a matching of the successive intervals with the corresponding successive intervals in the pattern. In other words, the proposed heuristic enables contour identification between temporally distant repetitions only when the contour value is related to the continuation of a pattern featuring more specific representations for the first intervals. This heuristic enables a selective filtering of non-salient patterns. The four-note pattern of Beethoven's *Fifth Symphony*, figure 5, may be considered in this respect as a concatenation of two specific unison intervals (or three repetitions of the same note), followed by a less specific descending contour. Each new occurrence of the pattern can be easily perceived due to the high specificity of its first three notes (leading to an interval-based matching in long-term memory). The integration of these principles in the model enables a reconstitution of this phenomenon. 4. PATTERN EXTRACTION This section deals with the core problem of motivic extraction: modelling the mechanisms which ensure the discovery of repeated structures. 4.1. RELATED WORKS Designing robust algorithms for automated motivic analysis is a very difficult problem. Cambouropoulos (2006) searched for exact pattern repetition, using Crochemore's (1981) approach, in different parametric descriptions of musical sequences. The resulting large set of extracted patterns was not taken into consideration directly. Instead, an estimation of the segmentation points was computed through a weighted average of the segmentations implied by the different patterns. In Conklin & Anagnostopoulou (2001), pattern discovery was performed by building a suffix tree data structure along several parametric dimensions. Once again, due to the large size of the set of discovered patterns, a subsequent step selected patterns that occurred in a specified minimum number of pieces, and that satisfied a statistical significance criterion. A further filtering step globally selected the longest significant patterns within the set of discovered patterns. Rolland (1999) defined a numerical similarity distance between sub-sequences based on edit distance. In order to extract patterns, similarity distances were computed between all possible pairs of sub-sequences of a certain range of lengths, and only similarities exceeding a user-defined, arbitrary threshold were selected. From the resulting similarity graph, patterns were extracted using a categorisation algorithm called Star center. The set of discovered patterns was reduced even further using offline filtering heuristics. In particular, only patterns repeated in a minimum number of musical sequences were selected. Meredith, Lemström and Wiggins (2002) generalised the pattern extraction task to polyphony. Notes of musical sequences were represented by points in a two-dimensional (pitch/time) space, and maximal repetitions of point sets were searched for. 
However, this geometrical strategy did not apply to melodic repetitions that presented rhythmic variations. Post-processing techniques were added that performed global selection in order to enhance the precision factor. In all of these approaches, in order to reduce the combinatorial explosion of the results obtained by the pattern extraction process, filtering heuristics are added that select a sub-class of the result based on global criteria such as pattern length, pattern frequency (within a piece or among different pieces), etc. The main limitation of this method comes from the lack of selectivity of these global criteria. Hence, by selecting the longest patterns, one may discard short motives (such as the 4-note Beethoven pattern) that may nevertheless be considered as highly relevant for listeners. On the other hand, patterns repeated only twice may be considered as highly relevant by listeners, as long as these repetitions are sufficiently close in time that the first occurrence remains available in the short-term memory when the second occurrence is heard\(^3\). The present study was primarily aimed at discovering the reasons for these failures, and at building as simple a model as possible that would be able to closely mimic the listeners' structural perception. We propose heuristics ensuring a compact representation of the pattern configurations without any loss of information, thanks to an adaptive and lossless selection of the most specific descriptions. 4.2. Closed pattern mining The problem of reducing the combinatorial complexity of pattern structure is also studied in current research in computer science, where several strategies have been tried. The frequent pattern mining approach is restricted to patterns that have a number of occurrences (or support) exceeding a given minimum threshold (Lin et al., 2002). We explained in the previous section the limitation of such heuristics for musical purposes. Another approach is based on the search for maximal patterns, i.e. patterns that are not included in any other pattern (Zaki, 2005; Agrawal and Srikant, 1995). This heuristic enables a more selective filtering of the redundancy. For instance, in figure 6, the suffix \(aij\) can be immediately discarded following this strategy, since it is included in the longer pattern \(abcde\); its properties can be directly induced from the long pattern itself. However, this approach still leads to an excessive filtering of important structures. For instance, in figure 7, the same 3-note pattern \(aij\) presents a specific configuration that cannot be directly deduced from the longer pattern \(abcde\), for the simple reason that its support (or number of occurrences) is higher than the support of \(abcde\). This corresponds to the concept of closed patterns, which are patterns whose support is strictly higher than the support of any pattern in which they are included (Zaki, 2005). A filtering of non-closed patterns is therefore more selective than a filtering of non-maximal patterns. In fact, it ensures a more compact representation of the pattern configuration without any loss of information.
(3) This method of pattern segmentation through simple repetition corresponds to a common compositional strategy, in particular by Claude Debussy (Ruwet, 1962).
Figure 8 shows another illustration of the closed pattern paradigm. Pattern $a$ is a maximal pattern, and therefore closed. Pattern $b$ is included in pattern $a$, but has a larger support (four) than pattern $a$ (two): it is therefore also a closed pattern. 
Pattern $c$, on the other hand, has a support equal to pattern $a$ (two). Pattern $c$ is therefore non-closed and should be discarded. The model presented in this article looks for closed patterns in musical sequences. For this purpose, the notion of inclusion relation between patterns, which founds the definition of closed patterns, is generalized to the multi-dimensional parametric space of music, defined in section 3.2. A mathematical description of this operation can be formalised using the Galois correspondence between pattern classes and pattern descriptions (Ganter and Wille, 1999; Lartillot, 2005a). For instance, pattern \textit{abcde} (in Figure 9) features melodic and rhythmical descriptions, whereas pattern \textit{afghi} only features the rhythmic part. Hence pattern \textit{abcde} can be considered as more specific than pattern \textit{afghi}, since its description contains more information. When only the first two occurrences are analyzed, both patterns have the same support, but only the more specific pattern \textit{abcde} should be explicitly represented. But the less specific pattern \textit{afghi} will be represented once the last occurrence is discovered, as it is not an occurrence of the more specific pattern \textit{abcde}. Figure 8. Beginning of the Geisslerlied “Maria muoter reinü maît”. A complete analysis will be presented in section 5.1. The little ornamentation displayed in grey is not taken into consideration. Patterns a and b are closed, whereas pattern c is non-closed and therefore discarded. Figure 9. The rhythmic pattern afghi is less specific than the melodico-rhythmic pattern abcde. 4.3 Cyclic patterns Combinatorial explosions can be caused by another common phenomenon provoked by successive repetitions of a single pattern (for instance, in figure 10, the simple rhythmic pattern \textit{abcd}, a succession of one quarter note and two eighth notes forming two ascending intervals and one descending interval). As each occurrence is followed by the beginning of a new occurrence, each pattern can be extended (leading to pattern \textit{e}) by a new interval whose description (an ascending quarter-note interval) is identical to the description of the first interval of the same pattern (i.e., between states \(a\) and \(b\)). This extension can be prolonged recursively (into \(f, g, h, i,\) etc.), leading to a combinatorial explosion of patterns that are not perceived due to their complex intertwining (Cambouropoulos, 1998). The graph-based representation (Figure 10) shows that the last state of each occurrence of pattern \(d\) is synchronised with the first state of the following occurrence. Listeners tend to fuse these two states, and to perceive a loop from the last state (\(d\)) to the first state (\(a\)) (Figure 11). The initial acyclic pattern \(d\) leads, therefore, to a cyclic pattern that oscillates between three phases \(b', c'\) and \(d'\). Indeed, when listening to the remainder of the musical sequence, we actually perceive this progressive cycling. Hence this cycle-based modelling seems to explain a common listening strategy, and resolves the problem of combinatorial redundancy. This cyclic pattern (with three phases \(b', c'\) and \(d'\) at the top of Figure 11) is considered as a continuation of the original acyclic pattern $abcd$. 
Indeed, the first repetition of the rhythmic period is not perceived as a period as such but rather as a simple pattern: its successive notes are simply linked to the progressive states $a$, $b$, $c$ and $d$ of the acyclic pattern. On the contrary, the following notes extend the occurrence, which cannot be associated with the acyclic pattern anymore, and are therefore linked to the successive states of the cyclic pattern $(b', c'$ and $d')$. The whole periodic sequence is therefore represented as a single chain of states representing the traversal of the acyclic pattern followed by multiple rotations in the cyclic pattern. This additional concept immediately solves the redundancy problem. Indeed, each type of redundant structure considered previously is a non-closed suffix of a prefix of the long and unique chain of states, and will therefore not be represented anymore. But this compact representation will be possible only if the initial period (corresponding to the acyclic pattern chain) is considered and extended before the other possible periods. This implies that scores need to be analysed in a chronological fashion. Heterogeneous descriptions, as presented in section 3.2, can be associated with cyclic patterns too. For instance, in Figure 12, the cyclic pattern is a little more specific than the cyclic pattern presented in Figure 11, since the first note of each period is always C, and the interval between the second and third notes is always an ascending third. This can therefore be added to the representation of the pattern, as shown in the figure. A mechanism has been added that unifies all the possible rotations of the periodic pattern $(b'c'd', c'd'b', d'b'c')$ into one single cyclic pattern. For instance, in Figure 13, the periodic sequence beginning in a different phase than previously (on an upbeat instead of a downbeat) is still identified with the same cyclic pattern. By construction of the cyclic pattern, no segmentation is explicitly represented between successive repetitions. Indeed, the listener may be inclined to segment at any phase of the cyclic pattern (or not to segment at all). Then it may be interesting to estimate the positions in the cycle where listeners would tend to segment. Figure 12. Heterogeneous cyclic pattern, including two complete layers of rhythmic and contour descriptions, plus two local descriptions: the absolute pitch value $C$ associated with the first note of each period ($\text{diat-pc} = 0$), and the constant pitch interval value of ascending major third between the second and the third note of each period ($\text{diat} = +2$). Figure 13. The periodic sequence is initiated with a different phase, since it begins on an upbeat instead of the downbeat. Due to this rotation, the first period of the cycle is built from a new pattern, called $ijkl$, whose prefix $ijk$ corresponds to the suffix $bcd$ of the period of the initial cycle $(abcd)^4$. The rotated cyclic sequence can be related to the same cyclic pattern $(b'c'd')$, which can also be denoted $(k')$. (4) Each pattern can accept multiple possible extensions forming a pattern tree (Lartillot, 2005). 
Thus a suffix of a pattern (such as $ijk$, in our example, which corresponds to the suffix “bcd” of the pattern $abcd$) is not designated with the same letters that are used in the original pattern $(abcd)$, because the multiple possible extensions of the new pattern might be different from those of the original pattern: patterns $ijk$ and $abc$ therefore form two distinct branches of the total pattern tree related to the whole piece (Lartillot, 2005). Several factors need to be taken into consideration, such as primacy, local segmentation (as defined in the introduction), and global context. For instance, a primacy-based segmentation will favour the period that appears first in the sequence, which depends on the phase at which the cyclic sequence begins. Global context corresponds to the general segmentation of the piece, based on the major motives and the metrical structure. This will be considered in future work. 4.4. A COMPLEX SYSTEM The general model is decomposed into modules dedicated to the different underlying problems, each of them further decomposed into basic building blocks focusing on specific sub-problems. All these blocks can easily be redesigned and articulated with each other in a flexible way, offering the possibility to test various hypotheses. The data representation itself has been designed with a view to offering maximum flexibility in the choice of structure representation. The main principle of the methodology consists of progressively building the computational system through a careful design of each sub-module. At each progressive step of the construction, the general behaviour of the system is controlled, and unwanted behaviours are listed. The overall results of the system are improved by determining the reasons for each unwanted behaviour identified, and subsequently fixing these problems, either through the modification of sub-modules, or the creation of new ones. These redundancy-filtering mechanisms ensure an optimal pattern description. Information is compressed without any loss, since all the discarded structures can be implicitly reconstructed. The filtering of redundant structures ensures clear results and at the same time decreases the combinatorial complexity of the process. Other rules have been integrated, based on cognitive heuristics. One rule in particular controls the combinatorial explosion that may be caused by the superposition of specific patterns on more general cyclic patterns, with the help of the Gestalt Figure/Ground principle (Lartillot, 2005b). 5. RESULTS This model, called kanthus, will be included in the next version of MIDItoolbox (Eerola & Toiviainen, 2004). The model can analyse monodic pieces, and highlights the discovered patterns on a score. Rhythmic values are obtained through simple quantization operations, and scale degree parameters are computed through a straightforward mapping between pitch values and scale degrees. In the current state of the model, only repetitions of patterns formed by series of strictly contiguous notes can be detected. The model has been tested using different musical sequences taken from several musical genres (classical music, pop, jazz, etc.), and featuring various levels of complexity. The experiment has been run using version 0.8 of kanthus. This section presents the analysis of two pieces: a medieval Geisslerlied and a Bach *Invention*. 
The whole set of musical parameters defined in section 2 has been taken into account, and all patterns longer than three notes are displayed.

5.1. **Analysis of a medieval *Geisslerlied***

We first present an analysis of a fourteenth-century German *Geisslerlied*, "Maria muoter reinú mait", proposed by the linguist Nicolas Ruwet (1966/1987) as a first application of his famous method of systematic motivic analysis. We then show, for comparison, the results offered by the computational modelling.

5.1.1. **Ruwet's analysis**

Figure 14 presents Ruwet's analysis of the piece, which offers a hierarchical decomposition into three successive levels, enumerated from I to III. At the highest level, the piece features two repetitions (with slight variation) of a I-level unit $A$ and two repetitions of another I-level unit $B$. Unit $A$ is decomposed into two phrases:(5) the first phrase is further decomposed into two II-level units $a$ and $b$, and the second phrase into two other II-level units $c$ and $b'$, which is a slight variation of unit $b$. The second occurrence $A'$ differs only by the fact that the two units $b$ are identical. The second I-level unit $B$ is another phrase formed by two II-level units: $d$ and $b'$. Each II-level unit is decomposed into a succession of two III-level units. For instance, $a$ is decomposed into two units $a_1$ and $a_2$. One particularity of $d$ is that its two III-level units are identical ($d_1$), and $c$ is decomposed into $c_1$ and $d_1$. Moreover, $a_1$, $b_1$, $b_1'$, $c_1$ and $a_2$ are considered as melodic transformations of the same rhythmic structure, and another similarity is proposed between $b_2$ and $d_1$. Finally, shorter units, composed of 2 to 5 notes, are suggested: three of them are shown in grey in the figure, another consists of two quarter notes forming a descending major third interval A–F, and a last one is composed of two quarter notes forming an ascending minor third interval. Ruwet's analysis concludes with a modal analysis.

Ruwet's methodology consists of a mostly top-down hierarchical segmentation of the piece: first, the two repetitions of $B$, being exactly identical, are discovered; then the leftover (bars 1 to 16) is segmented, leading to the extraction of the two units $A$–$A'$. This strategy would not have worked if there were slight variations between the two occurrences of pattern $B$, for instance $B = d + b$ and $B' = d + b'$. It can be shown that a systematic application of the methodology can produce, for the same musical piece, alternative analyses that are contradictory and counter-intuitive (Lartillot, 2004a). Hence Ruwet's analysis of the *Geisslerlied* is not strictly and uniquely guided by the systematic methodology he introduces in his paper, but is rather deeply influenced by his implicit intuitions.

(5) The decomposition into two phrases is not explicitly stated in Ruwet's representation.

5.1.2. Computational analysis of the piece

A complete motivic analysis of the *Geisslerlied* has been carried out with the computational model. Figure 15 shows the result of the analysis.(6) Unit $A$ has been retrieved by the computer due to its repetition. However, as the current model cannot take ornamentation into consideration, the eighth-note repetition of bar 3, displayed in grey in the figure, had to be removed from the score, as it prevents the detection of the complete pattern $A$.(7)
On the other hand, the model is able to take into account the slight variation concerning the varying pitch value, which is an A in the first repetition and a Bb in the second repetition. Pattern $A$ is heterogeneous in that it is described along all the musical dimensions of the parametric space, except for the varied note, which is described only by the gross contour parameter (for the preceding interval) and by the rhythmic parameters.

(6) The computational results have been filtered manually, as explained at the end of section 5.1.2.
(7) The consideration of ornamentation is the object of current work (see section 6.2).

Pattern $b$, which corresponded in Ruwet's analysis to the identical ending of each line of the score, has been extracted too. The aforementioned pitch variation, since it is also repeated several times (here, three times), is described by another pattern $b'$. Both pattern classes $b$ and $b'$ can be unified into a more general pattern that contains the two possible variations of the endings and, as a consequence, leaves the variable note undescribed, similarly to the description of pattern $a$. Patterns $a$ and $c$, on the contrary, are not explicitly represented, because they do not convey additional information concerning the pattern structure of the piece. Following the terminology introduced in section 4.2, patterns $a$ and $c$ are non-closed subsequences of pattern $A$. In Ruwet's analysis, the selection of these patterns is based on a segmentation process: patterns $a$ and $c$ are the leftovers after the extraction of the endings $b$ from the bigger phrase $A$. No segmentation process has been integrated into the model yet.

Pattern $B$ has not been correctly extracted by the system. This is due to the fact that pattern $A$ itself is concluded by a suffix of $B$ (shown by the $B'$ line in the figure). Following the incremental approach of the algorithm, the two complete repetitions of the $B'$ pattern are first discovered, leading to a cyclic pattern whose starting points are indicated by the $B'$ letters in the figure. The inference of the $B$ segmentation proposed by Ruwet would require the incorporation of new mechanisms, as explained in section 4.3. In the example, the initial cyclic pattern $A$ implies a segmentation at the beginning of the third stave and also at the beginning of the fifth stave. We may suppose that the prolongation of this first segmentation would be expected by listeners. This expectation is reinforced by the repetition of pattern $b$ in the last two staves, which seems to induce a generalisation of cycle $A$; this should be studied in future work. Pattern $d$, like patterns $a$ and $c$, is not explicitly represented in the computational analysis since it is a redundant subsequence of pattern $B$ (and $B'$). Its extraction would thus require segmentation heuristics.

Among the III-level units, the pattern extraction algorithm can only discover unit $d_1$, due to its intrinsic repetition. Other III-level units resulted either from segmentation processes — $c_1$ is the leftover after the extraction of $d_1$ from the unit $c$ — or from purely symmetrical considerations, relative to the relative size of each unit, and cannot therefore be detected by the algorithm.

The successive repetition of pattern $A$ leads to the creation of a cyclic pattern, each cycle corresponding to a new occurrence. The cyclic pattern implies the expectation of a third occurrence (indicated by the third $A$ graduation), which is finally aborted. As explained in section 4.3, the concept of cyclic pattern makes it possible to avoid the extension of each occurrence, which would otherwise lead to overlapping occurrences and to a combinatorial explosion of structures. For instance, the pattern obtained by shifting pattern $A$ to the right by one note is filtered out, as it is a non-closed suffix of one phase of the cyclic pattern $A$: the support of this candidate pattern is not higher than the support of the corresponding phase in the original cyclic pattern.

Among the shortest units proposed by Ruwet, the three-note conjunct lines (grey arrows in the figure) are formalised as cyclic successions of second intervals. On the other hand, the two other units displayed in grey in the figure cannot be detected because they are repeated through retrogradation, a musical transformation not yet taken into account in the model. The two last units proposed by Ruwet are composed of only two notes. But as a huge number of interval repetitions can be found in any musical piece, the selection of a particular interval requires further justification, not given by Ruwet for these particular structures. On the other hand, short patterns such as $e$ and $f$ are proposed by the algorithm that have no counterpart in Ruwet's analysis. The assessment of their perceptual or musical relevance will require further study.

Figure 15. Analysis of the *Geisslerlied* "Maria muoter reinú mait" using our approach. Each motivic repetition is represented by a line below the corresponding stave, each labelled with a letter. Each repetition of motive $A$ is represented by a line on the left of the stave.

The analysis shown in Figure 15 results from a manual filtering of the output of the computational analysis:

• The output of the algorithm catalogues the progressive construction of patterns during the incremental and chronological scanning of the score. This trace therefore contains much redundancy, since it shows for each pattern the list of successive extensions. In particular, prefixes of patterns are not discarded, even if they are non-closed, because they form the successive states of the chronological construction of the pattern (Lartillot, 2005a). Figure 15, on the contrary, shows the final state of the analysis, which simply consists of the set of all the motives that have been discovered. The transformation of the chronological analysis into this compact list is carried out manually for the moment.

• Some evident motivic structures have not been shown in the score: for instance, the simple succession of eighth notes, or the successive repetition of the same gross contour value.

• The mechanism based on the Gestalt rule of figure against ground, mentioned at the end of section 4, enables a filtering of a large set of redundant structures: it prevents each pattern (such as $b$) from being extended by a simple rhythmic succession of quarter notes if this succession already existed before the pattern; pattern $b$ is considered as a figure above the background formed by the succession of quarter notes. This rule does not currently work when the pattern is preceded by a succession of eighth notes instead of quarter notes, but would work if the model were able to infer the implicit succession of quarter notes hiding underneath. This will be implemented in future work.
• The alternation of series of quarter notes and series of eighth notes should be formalised as a cycle (the alternation) between two cycles (the series). This concept of a cycle of cycles is, however, not yet implemented in the model.

Hence the computational analysis reveals for the most part trivial motivic structures that can, in most cases, be easily perceived by listeners. The interest of this model, in its current state, lies in the automation of the process, which enables an exhaustive analysis. Moreover, these results show that the model is able to offer a compact and significantly relevant description of musical structures. The refinement of the results of the computational analyses may now be planned through an enrichment of the modelling process.

5.2. **Bach Invention in D minor**

5.2.1. **Kresky's analysis**

Jeffrey Kresky proposed a detailed analysis of Bach's *Invention* in D minor (Kresky, 1977). His approach is founded on a close interconnection between the tonal and motivic dimensions of music. As our study focuses solely on motivic analysis, the large part of Kresky's analysis related to the tonal evolution of the *Invention* is not considered in this review, the scope of which is further restricted to the first 15 bars of the piece. Figure 16 presents a tentative explicit reconstruction of the analysis, which Kresky originally presented in a mostly textual manner, without explicit categorisation of the motivic classes.

The piece begins with an "opening motive" (here $A$), composed of an "ascending line" (indicated by a grey arrow in the figure) followed by a descending interval Bb–C#, forming altogether a motive shape ($a$). This motive shape "reverses itself in the second measure" of motive $A$ (motive $a1$). A "direct imitation of the motive" ($A$) "first appears in the bass part" at bars 3 and 4, followed by an "octave-higher repeat" in the treble part at bars 5 and 6. During the repetitions of the opening motive in bars 3 to 6, the alternate voice presents an "accompanying figure" (denoted $B$ in the figure), containing two repetitions of a variant of motive shape $a$ (denoted $a2$). This "shape directly clarifies the hidden triadic structure of the first measure (by isolating the triad notes and eliminating the passing tones)", and "the shape of the accompaniment is just that of the motive — an ascent spanning a sixth". More precisely, most occurrences of motive $a2$ pick up the whole pattern of the motive shape $a$, "as an ascent through a sixth followed by a large skip in the opposite direction".

From measure 7 to measure 10 a first sequence is built, whose unit (denoted $A'$) can here also be considered as a variation of the opening motive $A$: the second half of $A'$ is identical to the ending of $A$ ($a1$), whereas the first half of $A'$ (denoted $a3$) is a variation of the first half of $A$ ($a$), as the scalar climb "starts one attack late, a minor third having been added before it".(9) Another matching between $a$ and $a3$ comes from the fact that they both feature "the same sequence of pitch classes (D-E-F-G-A-A-Bb)" but "reshaped". From measure 11 follows a second sequence, whose unit (denoted $a5$) "also derives from the original motive shape [$a$], for the rising six-note scale [grey arrows] is here represented by two adjacent three-note scales" (denoted $b$), which "ends with the skip in the opposite direction".
The three-note scale is formed by the "first three notes of the original scale form" (denoted $b$), and the octave position of $b$ in measure 11 is the same as in measure 5 (both denoted $b'$ for this reason). The first sequenced unit, when played at the treble voice between measures 7 and 10, is accompanied by two successive similar "bass lines" (denoted $a4$) that also show close similarity with the motive shape $a$, since each contains the same three-note shape $b'$ in the second half of the line, and the whole line can be identified with the six-note scale (grey arrow). We may notice that this scale here also ends with a large skip in the opposite direction (dotted line under the score), and that the six-note scale is prolonged in the next measure (dotted arrow). Finally, the pitches found at the downbeat of the first bar of each two-bar unit of the sequences between measures 7 and 14 form a descending scale F-E-D-C (circled in the figure) that is prolonged in the next measures of the piece and that plays an important tonal role. Hence Kresky's analysis precisely shows how "the opening motive [$a$], through various imitative means, seems to generate the entire fabric of the invention" (p. 63).

5.2.2. Computational analysis

Figure 17 shows the results of the computational analysis of the first 14 bars of the *Invention*, and Table 1 lists the set of discovered motives. The opening motive $A$, with all its occurrences throughout the piece, has been exactly retrieved by the machine. The ascending and descending lines within the motive (grey arrows) have been detected, too. On the other hand, the inversion $a1$ of the motive shape $a$ has not been detected, since inversions are not taken into account in the model yet. As the opening motive $A$ is not transposed in the pitch-class domain throughout the analysed extract, it can be expressed in this dimension, as indicated in Table 1 under the column titled "Melodic".

(9) Since this minor third interval can be found after the end of each occurrence of motive $A$, Kresky considers it as an extension of motive $A$ (represented in dotted lines in the figure).

The accompanying figure $B$ has been detected too, and is decomposed into a succession of two similar shapes. They are noted $e$ instead of $a2$, however, because the similarity of these shapes with motive $a$ has not been discovered. Indeed, the model is able neither to automatically retrieve the hidden triadic structure in motive $a$, nor to reduce the ascending figure to an ascending sixth interval. The first sequence unit $A'$ is detected and identified as a variation of the opening motive $A$, but not exactly for the same reasons: the scalar climb $a$ has not been detected as delayed by one sixteenth note, but rather as modified on its first interval (motive $a3'$). Contrary to the opening motive $A$, the sequence unit $A'$ is constantly transposed, and is therefore expressed along the pitch-interval dimension instead of the absolute-pitch description. The second sequence unit $a5$ has been detected and identified as a variation of the original motive shape $a$. The repeated three-note scale $b$ is also identified with the first three notes of the original scale form at the beginning of the motive shape $a$. The repeated bass line during the first sequence is detected, but cannot be identified with the motive shape $a$, and is therefore denoted $C$ instead of $a4$.
Indeed, in Kresky's analysis the three-note shape $b'''$ is extracted from the bass line following a heuristic, not implemented in our framework, based on a segmentation induced by the metrical structure. Besides, the six-note scale (grey arrow in the figure) has not been identified either, due to the irregularity of the second note. Finally, the descending scale F-E-D-C (circled in the figure) has not been detected, due to the incapacity of the algorithm to consider motivic configurations between distant notes.

The computational analysis also includes additional motivic structures that do not seem to offer much musical interest or perceptive relevance. Pattern $D$, featuring a succession of a descending second interval followed by a series of ascending second intervals, sounds poorly salient due to its weak position in the metrical structure and the limited size of its description. The second occurrence of pattern $D$ is also considered by the model as the beginning of a variant $a6$ of motive $a3$, which is, once again, not very salient due to the weak position of this motive in the metrical structure. Pattern $E$ shows similar limitations: it corresponds to a series of pitches (E5, F5, D5, E5, F5) starting on the offbeat. In order to filter patterns $D$ and $E$, a higher-level metrical representation may be integrated in future work that would show the position of each pattern in the bar structure and would, in particular cases, force pattern occurrences to conserve the same metrical information. The model also includes evident structural configurations, such as the oscillations between ascending and descending contour, and other patterns that are mostly redundant descriptions that were not correctly filtered out. This results partly from unsolved errors in the modelling process, but also shows the necessity of taking into account other heuristics. Finally, in order to obtain the compact representation displayed in Figure 15, some manual ordering of the computational results was required. For more results, and discussions about the complexity of the algorithm, see Lartillot (2005b).

(10) The little extension of motive $A$ with the following third interval (dotted lines in Figure 16), due to the variability of its contour, is not detected either.
(11) The motive spans the whole range of one single octave (from C# to Bb), therefore no indication of relative octave position is given here.
(12) In the two occurrences of the accompanying figure, the first shape is built on the same series of pitch classes (F, A, D), where D is one octave higher, and is therefore noted in the table (F, A, D+). On the other hand, the second shape is variable, and can be described only in the contour dimension as (+, +, -). The whole melodic description of the accompanying figure is thus (F, A, D+, -, +, +, -).

Figure 17. Automated motivic analysis of J.S. Bach's *Invention* in D minor BWV 775, first 14 bars. The representation of the motives follows the convention adopted in the previous figures. The motives for each voice are shown below the respective staves. The class of each motive is indicated on the left side of the lines.

Thus, the computational system has been able to retrieve the most salient structures of the piece, which are congruent with Kresky's analysis. However, the subtlest configurations discovered by the musicologist cannot be detected with the algorithm.
Indeed, the use of computers here is not aimed at replacing the musicologist's skills, but rather at experimenting with a formalisation of the basic principles of music understanding. Besides, this automation may enable the annotation of large music databases.

Table 1. Motives discovered in Bach's *Invention* in D minor BWV 775, both in Kresky's analysis and in the computational experiment.

| Name | Kresky's interpretation | Computational description |
| --- | --- | --- |
| $a1$ | Motive shape | (prefix of $A$) |
| $a2=c$ | Triadic structure | (+,+,-) |
| $a3=a3$ | Retarded scalar climb | (-2,+1,+1,+1,-6) |
| $a4=C$ | Bass line | (+7,-5,+1,+1,-6) |
| $a5$ | Second sequence unit | (+1,+1,-2,+1,+1,-6) |
| $a6$ | (not considered) | (-,+,+,+1) |
| $b$ | Three-note scale | (+1,+1) |
| $b'$ | Identifying $a5$ with $a3$ | (D5,E5,F5) |
| $b''$ | Same, transposed | (C5,D5,E5) |
| $b''''$ | Three-note shape in bass line | (not detected) |
| $D$ | (not considered) | (-1,+1,+1,+1) |
| $E$ | (not considered) | (E5,F5,D5,E5,F5) |

6. Future work

6.1. More Detailed Validation of the Results

The current state of this research project is restricted to the most evident structures of music, as explained in the introduction. The analyses produced by the computational model are evaluated for the moment in a purely qualitative manner: the results are searched for the most important motives of the piece, and the additional unexpected structures offered by the algorithm have been assessed in a purely intuitive manner. In future work, the computational results will be compared with analyses available in the music literature. A more precise and refined validation of the results will require the establishment of a "ground truth": a corpus of pieces of diverse styles will be collected, on which manual motivic analyses will be carried out by a board of musicologists. The comparison of the manual analyses and the computational results will enable a more precise determination of the precision and recall factors offered by the model.

6.2. Musical Transformations

One major limitation of the first version of the model, as presented in this article, is that only repetitions of sequences of immediately successive notes can be detected. In music in general, repeated patterns are often ornamented: secondary notes can be added in the neighbourhood of the primary notes, in both the time and pitch dimensions. Primary notes may be retrieved through an automated filtering of secondary notes, for instance by focusing primarily on notes at metrically strong positions (Eerola, Järvinen, Louhivuori, Toiviainen, 2001; Conklin and Anagnostopoulou, 2001, etc.). This heuristic does not, however, work correctly for appoggiaturas (where ornamenting secondary notes are placed on metrically strong positions). Other approaches are based on optimal alignments between approximate repetitions using dynamic programming and edit distances (Rolland, 1999).
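As a concrete illustration of the metrical-strength heuristic just mentioned, a filtering step might look like the sketch below. The strength profile, the threshold and the function names are assumptions made for this example; the published model does not include such a filter, and, as noted above, this kind of heuristic fails for appoggiaturas.

```python
from fractions import Fraction

def metrical_strength(onset, beats_per_bar=4):
    """Crude metrical weight of an onset expressed in quarter notes:
    downbeat, then other beats, then off-beat eighths, then finer subdivisions."""
    pos = Fraction(onset) % beats_per_bar
    if pos == 0:
        return 3          # downbeat
    if pos.denominator == 1:
        return 2          # other beats
    if pos.denominator == 2:
        return 1          # off-beat eighth
    return 0              # sixteenths and finer

def primary_notes(notes, threshold=2):
    """Keep only the metrically strongest onsets as candidate primary notes."""
    return [(onset, pitch) for onset, pitch in notes
            if metrical_strength(onset) >= threshold]

# (onset in quarter notes, MIDI pitch): an ornamented figure, invented for illustration.
ornamented = [(0, 62), (0.5, 63), (1, 64), (1.75, 65), (2, 65), (3, 67)]
print(primary_notes(ornamented))   # -> [(0, 62), (1, 64), (2, 65), (3, 67)]
```

A pattern search run on the filtered list would then treat the surviving notes as the primary ones.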
We are developing algorithms that automatically discover, from the rough surface level of musical sequences, musical transformations revealing the sequence of pivotal notes forming the deep structure of these sequences.

As with absolute pitches, absolute representations of temporal positions play only a minor role in motivic analysis: motives more generally result from local temporal configurations. Many studies in computer music (Conklin and Anagnostopoulou, 2001; Cambouropoulos, 2006; Lartillot, 2004b, etc.) integrate an additional temporal dimension based on inter-onset interval ratios, i.e., ratios between successive inter-onset intervals. This ratio would enable the detection of augmented or diminished rhythmic motives, i.e. motives that are globally dilated or contracted in time. Another strategy consists of modelling metre as a multi-levelled hierarchy of beat levels on the score, by which the rhythmic information is described. Hence, augmented or diminished repetitions can be detected as repetitions of the same rhythmic sequence on several different levels and at several different positions in the score. This strategy is more satisfying since it restricts the domain of augmentations and diminutions to the different possible metrical subdivisions, a restriction that seems to correspond more closely to the listener's capacities. These multi-level rhythmic layers will be considered in future work.

6.3. Polyphony

Our approach is limited to the detection of repeated monodic patterns. Music in general is polyphonic, where simultaneous notes form chords and parallel voices. Research has been carried out in this domain (Dovey, 2001; Meredith et al., 2002), focusing on the discovery of exact repetitions along different separate dimensions. We plan to generalise our approach to polyphony following the syntagmatic graph principle. We are developing algorithms that construct, from polyphonies, syntagmatic chains representing distinct monodic streams. These chains may be intertwined, forming complex graphs along which the pattern discovery algorithm will be applied. The additional factors of combinatorial explosion resulting from this generalised framework will require further adaptive filtering mechanisms. Patterns of chords may also be considered in future work.

6.4. Articulating global patterns and local discontinuities

As explained in the introduction, this study is focused on the discovery of repeated patterns, and does not take into account the second heuristic of motivic analysis, based on local discontinuities. Yet local discontinuities impose some important constraints on the pattern extraction process. The pattern extraction process needs, therefore, to be studied in interaction with local segmentation. Lerdahl and Jackendoff (1983) have proposed a coupling of the two principles, and Temperley (1988) has suggested a computational formalisation. But in these approaches, as acknowledged by their authors, pattern extraction (called here "parallelism") is theoretically considered without actual systematic modelling. Cambouropoulos (2006) proposed a way of modelling the interaction between the two principles. In a first step, both local segmentation and pattern extraction are performed in parallel, but only the boundaries of the segments and motives are taken into consideration. These boundaries are summed together, leading to a segmentation curve, and global segmentations are performed at local maxima of the curve.
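The boundary-summation step just described can be pictured with a small sketch; the profiles below are invented numbers and the two functions are illustrative only, not Cambouropoulos' actual implementation.

```python
def segmentation_curve(boundary_profiles):
    """Sum several boundary-strength profiles (one value per note position)
    into a single segmentation curve."""
    return [sum(values) for values in zip(*boundary_profiles)]

def local_maxima(curve):
    """Positions where the curve is higher than both neighbours."""
    return [i for i in range(1, len(curve) - 1)
            if curve[i] > curve[i - 1] and curve[i] > curve[i + 1]]

# Invented boundary strengths from local segmentation and from motive boundaries.
local_seg  = [0, 1, 0, 0, 3, 0, 0, 0, 0, 2, 0]
motive_bnd = [0, 0, 1, 0, 2, 0, 0, 0, 1, 2, 0]

curve = segmentation_curve([local_seg, motive_bnd])
print(curve)                # [0, 1, 1, 0, 5, 0, 0, 0, 1, 4, 0]
print(local_maxima(curve))  # segment boundaries at positions 4 and 9
```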
The resulting segments are then classified based on similarity measurement, following the paradigmatic analysis approach (Cambouropoulos & Widmer, 2000). The description of the local discontinuities will be integrated into our model as a separate layer of description. The interactions between pattern extraction and local discontinuities, and particularly the constraints imposed by local discontinuities on pattern extraction, will be studied.

ACKNOWLEDGMENTS

This work has been supported by the Academy of Finland (Project No. 102253). The authors would like to acknowledge the valuable insights, observations and suggestions contributed by Emilios Cambouropoulos and David Meredith, and express their gratitude to Geoff Luck for his useful help during the writing of the paper.

Address for correspondence:
Olivier Lartillot
Petri Toiviainen
University of Jyväskylä
Department of Music
PL 35(M)
40014 University of Jyväskylä, Finland
tel.: +358 14 260 1346
fax: +358 14 260 1331
e-mail: lartillo@campus.jyu.fi

• Motivic matching strategies for automated pattern extraction

This article proposes an approach to the problem of automated extraction of motivic patterns in monodies. Different musical dimensions are defined, restricted in the current approach to the most salient melodic and rhythmic aspects of the surface level. The proposed strategy for the detection of pattern repetitions consists of an exact matching of the successive parameters forming the motives. We suggest a generalisation of the multiple-viewpoint approach that allows a variability of the parameter types (melodic, rhythmic, etc.) defining each successive extension of these motives. This leads us to consider a more general class of motives, called heterogeneous motives, which includes interesting motives beyond the scope of previous approaches. Moreover, this heterogeneous representation of motives can offer more detailed explanations of the impact of the gross contour representation on motivic analysis. This article also shows that the main problem raised by the pattern extraction task is related to the control of the combinatorial redundancy of musical structures. Two main strategies are presented, which ensure an adaptive filtering of the redundant structures and are based on the notions of closed and cyclic patterns. The method is illustrated with the analysis of two pieces: a medieval Geisslerlied and an Invention by J. S. Bach.
Editors: Professor Dr. Hartmut Lehmann and Dr. Kenneth F. Ledford, in conjunction with the Research Fellows of the Institute.

Address: German Historical Institute, 1607 New Hampshire Avenue, N.W., Washington, D.C. 20009
Tel: (202) 387-3355
FAX: (202) 483-3430
Hours: Monday-Friday, 9:00 a.m.-5:00 p.m.
Library Hours: Monday, Wednesday, Friday, 10:00 a.m.-6:00 p.m.; Tuesday, Thursday, 10:00 a.m.-8:00 p.m.

© German Historical Institute, 1990

The BULLETIN appears twice a year and is available free upon request. ISSN 1048-9134. The next issue of the BULLETIN will appear in Fall 1990.

I. Preface
II. Archive Report: Using the National Archives
III. Descriptions of Research Projects
A. "German-Speaking Refugee Scholars of the Thirties at Historically Black Colleges"—Gabrielle Simon Edgcomb
E. "The Office of Strategic Services and the German Anti-Hitler Opposition During World War II"—Jürgen Heideking
H. "Gender and Social Stability. The Restructuring of West German Society 1945 to 1955"—Hanna Schissler
IV. Institute News
A. New Address
B. Alois Mertes Memorial Lecture
C. Research Fellowships for Visiting Scholars
D. Occasional Paper No. 1
E. Friends of the German Historical Institute
F. Supplement to Reference Guide No. 1, "German-American Scholarship Guide for Historians and Social Scientists"
G. GHI Library Report
H. New Staff Members
I. Scholarships
J. Spring Lecture Series
K. Miscellaneous

I. Preface.

In the first week of April, the Institute moved into its splendid new building at 1607 New Hampshire Avenue, N.W., which, much to our joy, has a reading room for the Library and a Lecture Hall. We are very grateful to the Volkswagen Foundation for providing us with premises that enable us to carry out our task. We are looking forward to welcoming students and colleagues to our new home. Also, on the first of April, Privatdozent Dr. Norbert Finzsch-Sprengel joined the Institute as Deputy Director. He comes from the Anglo-Amerikanische Abteilung of the Historisches Seminar of the University of Cologne, and he is a specialist in American and German social history. We are delighted that he is now part of our team. Both events indicate that, in its third year, the Institute is entering a new phase in its development. In the years ahead, we hope to expand our activities. In light of recent events, it seems even more important than in the past to ensure continuing cooperation and to promote the exchange of ideas between American and German historians.

Washington, D.C., April 1990
Hartmut Lehmann

II. Archive Report.

With Issue No. 6, the BULLETIN begins a regular pattern of publishing each spring a report on an archive, either in the United States or in Germany. The intent of the series of reports is to bring to the BULLETIN's readers a brief description of the holdings of the archive, but more importantly, to recount to the reader recent experiences of a user of that archive. The reports will provide tips on how best to prepare for a visit to the archive in question, useful information that will reduce start-up time upon arrival, suggestions as to how to avoid pitfalls both scholarly and bureaucratic, and practical advice as to how to make life in the archive more enjoyable. Because of the location of the Institute in Washington, it is only fitting to begin this series with a report on the National Archives here. This first report has been provided by Albert Diegmann of the University of Aachen. Mr.
Diegmann at present is working on his dissertation, "The United States and the Decartelization of the Ruhr Coal Industry 1947–55," and in December he will begin a six-month fellowship that he has received from the Institute in order to complete his research in archives in the United States. The Institute and the editors are grateful to Mr. Diegmann for his contribution. A "Guide" to the National Archives—Albert Diegmann This article is an attempt to introduce newcomers to the use of the National Archives in Washington. It is written completely from the personal experience of a young historian who has done research in the United States for two and a half years and has spent more than eighteen months in archives in the United States. Relying on personal experience means that this account is in no way exhaustive, but on the contrary rather selective according to my perceptions and, of course, my subject of research. However, it might be helpful by suggesting ways to avoid many initial difficulties. Preparation If you intend to do research on modern history dealing with German-U.S. relations or post-war German history, the National Archives is the right place to begin your search for documentation. But before you start, you should prepare carefully for your visit. It is crucial that you refine your subject until it is specific enough for efficient research, otherwise you might get lost in the vast amount of material on hand. One of your first impressions at the National Archives will certainly be the huge mass of papers and documents of all kinds. Before you go, you must read the pertinent secondary literature on your subject and acquaint yourself with the Department of State publication series "Foreign Relations of the United States" (FRUS). These volumes represent a selection of the holdings of the National Archives, and they can give you an idea of what to look for. At the top of each published document in FRUS is printed the file number of the collection from which the document is taken. Thus, even before you come to Washington you can determine which files in the National Archives might be of interest to you. **Getting started at the National Archives** The opening hours of the National Archives allow you to devote the greatest part of the day to research. The main National Archives building, located at Pennsylvania Avenue between 7th and 9th Streets, is open from 8:45 a.m. to 9:45 p.m., the National Records Center (NRC) in Suitland, Maryland, from 8:00 a.m. to 4:15 p.m. There is a shuttle bus service operating between the two locations, free of charge; you can get a schedule at the guard's desk at the main Archive on Pennsylvania Avenue. Once you are in the National Archives, the first thing to do is to apply for a research card. You do that on the second floor in room 207. You also need this identification at Suitland. Now sufficiently equipped, you can start your work. Most of your work will take place in the Main Research Room located on the second floor, room 203, or in the Microfilm Search Room, fourth floor, room 400. Here I will deal only with the Main Reading Room. **Ordering Records** The National Archives is divided basically into two sections: the diplomatic or civil branch and the military branch. To order records, you have to go into the stack areas; diplomatic papers can be requested in Stack 6E; for military files this would be Stack 13W. The Finding Aids are located in these stacks. Do not order anything unless you know exactly what you want. 
You should always consult the Finding Aids first, for they are of the utmost importance in locating specific records. I strongly recommend that you invest some time in acquainting yourself with the Finding Aids in order to get an overview of all the various holdings. After examining these books, binders, and perhaps card indexes, choose carefully what you want to see or really have to see. This procedure will pay off, for it might save you a lot of time in the end. Moreover, by doing so you ensure that afterwards no one can reproach you for skipping important documents pertinent to your subject. At Suitland, ask an archivist to show you the Finding Aids. The National Records Center has manuals for all collections which provide either box lists or, in many cases, even folder lists. If you need any assistance, whether at the National Archives or the National Records Center, do not hesitate to ask an archivist or any of the staff members who will guide you to the right person. These people are always glad to help, but be considerate about their lunch hours, for they are only human. Most diplomatic records and many others are classified under decimal systems. Ordering decimal file records is rather simple to do. Here is an example. Central decimal files are part of the General Records of the Department of State, RG (Record Group) 59, which are in the civil branch. They are divided into two time periods, 1945-49 and 1950-59, and then arranged by classes. [Note: the file numbers change in part from one time period to the next.] Suppose you are interested in the development of post-war German heavy industry, and you would like to see, for example, papers on coal. You would then enter on the request form the number "862.6362." Here is how to figure this out: first take a look into the "Classification Manual of Department of State Decimal File (1910–49)," valid for the period up to 1949. This tells you that "internal economic affairs" of states are to be found in Class 8. The next two digits are the country code; for Germany this is 62. The numbers to the right of the decimal point represent the specific file for the subject. The corresponding entry for the period 1950–59 would be "862.2552," which you can check in the "Classification Manual to State Department Decimal File, 1950–59". For some subjects you should also look at files that carry an extension to the regular number; for instance, papers relating to the rearmament of West Germany are filed among others under "762A.5," To learn about Special or Lot Files, consult Gerald K. Haines, A Reference Guide to U.S. Department of State Special Files (Westport, Conn., 1985) and the black binders in the stack area. **Working with the Records** Once you have requested records in the stack area, they will be made available to you in the research room. The material is stored in archive boxes and folders. Sometimes you will find titles written on the folders, which will ease your work considerably. The collections consist of various types of documents: generally there are memoranda, memoranda of conversation, minutes or summaries of meetings, letters, telegrams, and dispatches. Quite frequently you will encounter difficulties in identifying the author or the date of a document. For example, in many cases the writer of a memo is represented only by his initials. To solve this problem, I would suggest that you acquaint yourself to a certain extent with the organization of the agencies in question and with the relevant people. 
For the State Department or the Office of the U.S. High Commissioner for Germany (HICOG), consult the "Government Manual" and other pertinent listings such as the "Foreign Service List." A few examples might show you what I mean. If you see a memo of conversation signed "DA," that is likely to be Secretary of State Dean Acheson; a memo that bears the initials "LCB" came from Louis C. Boochever. Letters are often signed or addressed just with the first name, but in most cases, the letterhead shows where it came from. Knowing that will help to identify the author; a letter from the U.S. Embassy Paris to the State Department signed "David" is written by the Ambassador himself, David Bruce; a letter from HICOG Frankfurt signed "Jack" for "Dear Hank" is from John J. McCloy to Henry A. Byroade. Some of these abbreviations can only be deciphered with a little experience, as these examples show, but do not worry. If you spend enough time with the records, you will inevitably acquire this kind of experience.

A bigger problem is posed by undated (and incorrectly dated) papers. In a well-arranged collection, one might be able to figure out the exact date by looking at the surrounding documents. But sometimes you are forced to rely on your best guess; you can deduce it from the content of the document or from the context you find it in. (Caution: beware of misfiled documents!) This can lead you to a fairly well-based estimation of the approximate date, provided you have sufficient knowledge about the bureaucratic structures, personnel, and the course of events. This is one of the reasons why thorough preparation is so important. In many files you will come across withdrawal notices, which mean that these papers have been removed for security or other reasons. If you deem it important to see these documents, you can submit a Freedom of Information request to the appropriate agency identified on the withdrawal sheet. For classified material, you can put in a special review request. Ask your archivist for the proper procedures.

**Copies**

Many documents will be so valuable for your future work that you would like to be able to check the exact wording even after you have finished your visit to the National Archives; so you want to make photocopies. For this purpose you must have a debit card which is used with the copy machines. In order to get one, go to the Cashier's Office on the ground floor. Ask them to issue you a debit card with a certain monetary value; you can pay by cash or check. You can always add value to the card later. If you expect reimbursement for your copy expenses, ask for a receipt. The debit card can be used at the National Archives as well as in Suitland. To mark the documents that you want to copy, put tabs around them. Before you go to the copy machine, all documents must be declassified. If you have just a few pages, have them checked at the technicians' desk. For larger numbers of copies, sign in for a bulk copy appointment. You will then receive a declassification sticker and you can use the machine for one hour (half an hour at Suitland). I recommend a well-devised book-keeping system from the beginning that fits your needs. This could be done with file cards, lists, or optimally with a personal computer. The main purpose (and main advantage) of "book-keeping" is to avoid duplicates, since you often will glance over the same documents in different collections. I would also recommend that you mark your copies with their source, that is, name at least the collection and the box number.
**Holding records**

If you have ordered a truckload of boxes and you cannot finish the work that same day, tell the staff to hold your truck; your documents will be held for up to three working days in case you are absent from the Archives. You may work with the papers as long as you wish—to my knowledge there is no time limit within which you have to finish looking at them.

**Records in the National Archives**

Now I would like to give a general summary of the holdings of the National Archives and National Records Center relating to post-war German history. But please, do not expect too much from this survey, since this article is not a comprehensive archive report, and because it is based only on my experience with my special subjects. It cannot provide a detailed or even concise description of the collections; it will be a mere listing with a few comments here and there and in no way exhaustive. As you already know, the National Archives divides its holdings into civil and military branches. The diplomatic branch holds in the main the records of the Department of State in RG 59. Here are gathered all the decimal files plus a variety of special files, such as records of the Policy Planning Staff, the Assistant Secretary of State for Occupied Areas, European and European Regional Affairs, the Western European Division, the Central European Division, and many more. These special collections are called Lot Files. Some of these are not in RG 59 but in RG 353, Interdepartmental and Intradepartmental Committees, for example the records of the State-War-Navy Coordinating Committee (SWNCC) and the State-Army-Navy-Air Force Coordinating Committee (SANACC), and in RG 43, International Conferences. In RG 43, for instance, you find the records of the Allied Control Council for Germany and of the Council of Foreign Ministers, including records of the International Authority for the Ruhr. In the military branch, there are, among others, records of the Secretary of Defense (RG 330) and the Joint Chiefs of Staff (JCS) Files (RG 218).

The National Records Center holds a most interesting and wide variety of collections. For matters immediately relating to the Allied occupation of Germany, there are the Office of Military Government for Germany, United States (OMGUS) Records in RG 260, as well as RG 466, Office of the US High Commissioner for Germany (HICOG). To make good use of the OMGUS Records, you should ask for the Location Register. The HICOG Papers are arranged in several sections: among others, there are records of the High Commissioner (commonly referred to as the McCloy Papers), the Office of the General Counsel, and the Executive Director. The latter section is separated into four subsections: General Records 1949-52, Security Segregated General Records 1949-52, General Records 1953-55, and Security Segregated General Records 1953-55, all arranged according to a decimal file system that is different from the State Department system. You may find the key to this decimal file system in the "Records Classification Handbook" starting with 1949. This system is valid also for the Foreign Post Files of RG 84. Nevertheless, for the period up to 1948, you must use yet another file manual, entitled "Foreign Service of the United States of America, Classification of Correspondence." RG 84 includes, among others, the Paris Embassy Files and London Embassy Files, but also records of the U.S. Political Adviser to the Military Governor. The Paris Embassy Files contain such interesting special collections as ECSC and EDC records.
Furthermore, the NRC holds records of the US Foreign Assistance Agencies: RG 469 FAA, and RG 286 ECA. All of these record collections are vast in scope, so that you are actually forced to go through the finding manuals before you can order any material. Yet in doing so, you will ensure that you get to know all the important sources you need for your research and your writing later on—and that is what your visit to Washington is all about.

**Restaurants**

In conclusion, a few practical remarks about restaurants, for the inevitable need for food, or in case you just want a break from the records. This will be a short section, because the substance is poor: there are only a few places near the Archives where you can have lunch or dinner, and if you happen to be a European gourmet, you had better stay home. The biggest selection at reasonable prices is at the Old Post Office Pavilion, located on Pennsylvania Avenue between 12th and 13th Streets; here there are a variety of ethnic fast food stands ranging from Greek to Italian to Chinese, Japanese, Indian, Texas style, and Mexican food. Off Pennsylvania Avenue, heading north on 10th Street, you will first see a small place called "Au Bon Pain," a supposedly French bakery (which closes at 6 p.m.). Across E Street, there is the Lincoln House Restaurant, which serves fairly good American food at reasonable prices. If you like rock music with your dinner, you can stop by the Hard Rock Cafe just across the street, but you should put a few more dollars in your purse or wallet. (The quality of the music, though, as well as of the food, is disputable.) When you are at the National Archives at lunch-time, you can visit the cafeteria of the National Gallery of Art, which gives you also the opportunity to polish up your cultural experiences. (By the way, admission to all museums on the Mall is free.) It is hard to recommend any place in Suitland. First of all, you must walk at least 15 to 20 minutes east on Suitland Road before you reach one of the few smaller cafes and restaurants. I personally take a sandwich and a thermos bottle filled with coffee with me. The good news is that there will soon be a catering service at Suitland with hot and cold foods, on a trial basis. Whether this will still be operating by the time you come to Suitland is, of course, open to question. If this article is of any value to you in surmounting just a few adjustment problems, then it has served its purpose.

III. Descriptions of Research Projects.

In BULLETIN No. 4, Spring 1989, three members of the Institute presented Research Reports, describing their current research projects and their status. The Institute often receives requests as to what the research interests of its Fellows are, and Fellows frequently are asked about their projects by American colleagues at professional meetings. The editors of the BULLETIN have therefore extended the opportunity to all of the Research Fellows to include a brief description of their research in this issue. One of the main purposes of the Institute is to promote scholarly dialogue with the American historical profession. We therefore invite colleagues to call or write the appropriate Fellow here at the Institute with questions, suggestions, or in order to enter into a more detailed theoretical or bibliographical discussion about his or her project.

German-Speaking Refugee Scholars of the Thirties at Historically Black Colleges—Gabrielle Simon Edgcomb

My continued work on this subject from May to December 1988 (see BULLETIN No.
4) had to wait for renewed authorization. Renewal came in the summer of 1989, and I resumed work in October. While the research was "complete," with the caveat of the evidently endless possibilities for further investigation, I find new information coming my way. These data are included in the updated list below. In November 1989 I spent a week at the Rockefeller Archive Center at Pocantico Hills, New York, and a day at SUNY Albany with the archives of the American Council for Emigres in the Professions. A panel discussion, "A Fruitful Encounter—German Refugees at Historically Black Colleges," took place at Howard University on April 11, 1989, with Dr. Hartmut Lehmann, the Director of the German Historical Institute, Dr. Russell L. Adams, Chairman of Afro-American Studies at Howard University, and Dr. Max Ticktin, Chairman of Judaic Studies at George Washington University. I am now engaged in writing a manuscript. It will include history, analysis, and some illustrative stories to give life to the manifold experiences and interactions which show the significance of this episode in minority and immigration history. In the interest of the accuracy and completeness of my list of such scholars, additions and corrections will be welcome.

Atlanta University, Atlanta, Georgia
Ossip Flechtheim, 1940-43, History, Political Science
Hilda Weiss, 1941-43, German, Social Science

Bennett College, Greensboro, NC
Beate Berwin, 1942-50, German, Geography, Philosophy

Central State University, Wilberforce, Ohio
Gertrude Engel, 1951-55, English

Coppin State College, Baltimore, MD
Eric Fischer, 1965-69, Geography

Dillard University, New Orleans, LA
George Iggers, 1957-63, History
Wilma Iggers, 1957-63, French, German

Fisk University, Nashville, TN
Werner Cahnmann, 1943-45, Sociology
Elsbeth Einstein Treitel, 1943-46, German
Ferdinand Gowa, 1948-67, German
Otto Treitel, 1943-46, Mathematics, Physics

Hampton Institute (now University), Hampton, VA
Margaret Altman, 1941-56, Animal Husbandry, Genetics, Biology
Peter Kahn, 1953-57, Art
Karla Longree, 1941-50, Home Economics
Ernst Lothar, 1948-50, Art
Marianne Lothar, 1948-50, German
Viktor Lowenfeld, 1939-46, Art
Hans Mahler, 1941-43, Music
Fritz Neumann, 1946-47, History
Anna Stein, 1942-44, Mathematics

Howard University, Washington, D.C.
Ernest L. Abrahamson, 1939-41, Romance Languages, Latin
Kurt Braun, 1943-69, Economics
Johann Caspari, 1946-53, German
Karl Darmstadter, 1945-65, German Language and Literature, Russian
John Herz, 1941-43 & 1948-52, Political Science
Gerhard Ladner, 1951-52, Art History
Julius Ernst Lips, 1937-39, Anthropology
Erna Magnus, 1947-66, Social Work
Otto Nathan, 1946-52, Economics

History of the Prison System in the United States, 1776 to 1860—Norbert Finzsch

After Michel Foucault's *Surveiller et Punir* was published in English and German in the 1970s, a broad discussion on the emergence and nature of the prison system began, which was carried on by legal and social historians in France, England, the Netherlands, Switzerland, and West Germany, as well as those in the United States. Several questions arose from that discussion which are yet to be solved. While it is a commonplace that dungeons have existed since the beginning of written history, the focus now is on the question of whether the modernization of the state in the early modern era or at the beginning of the industrial age created a modern version of the prison.
If that is the case, a discussion of the prison cannot be understood without a clear grasp of the meaning and importance of the creation of a modern (central) state. Theorists of the law have contributed much to the scholarly discourse by pointing out that, in the context of a feudal society, corporal punishment persisted as the main mode of punishment, despite the existence of a few dungeons, because of the way criminality was perceived by those who held power. According to theory, criminals were more often punished not because of their harmful conduct against society as such, but because their deviance constituted a violation of the sovereign's rights and a defiance of his two bodies. Punishment therefore was a public affair, an exhibition of the vengeance of the sovereign and a public rite symbolizing the reinstitution of the prince's will over the criminal, as vividly described in Victor Hugo's *The Hunchback of Notre Dame*. Prisons did exist before the Enlightenment, but they were not means of punishment; rather, they had three purposes which had nothing to do with punishment. First, they served as jails for those who could not pay their debts; second, they served as a temporary place for keeping those awaiting trial or final punishment; and third, they functioned as a hiding place for those "public enemies" whom we would regard as political prisoners today, as in the case of Paris' Bastille. The fact that prisons existed before the Enlightenment does not prove that they were perceived as a practicable means of punishment. But during the Enlightenment, four currents of thought led away from a conception of punishment as public vengeance: 1) the humanizing effect of Christian religion, with its emphasis on repentance and betterment, called for a more humane treatment of convicted felons; 2) the enlightened discourse of the eighteenth century led to a conception of punishment which stressed the idea that a criminal could be rehabilitated and at the same time underlined the demand of the modern state to be the sole source of coercion; 3) Nonconformist Protestant thinking perceived the human mind as a store of associations learned by example, which could easily be unlearned and reconstructed; and 4) the emergent market society required a fundamental disciplining of the proto-workers. According to these sources, the modern prison was created as a system, i.e. as a relatively stable set of discourses, material organizations, and social roles with clearly defined actors, which by and large replaced the old system of public corporal punishment. This happened in England after 1775, in France after 1791, and in Germany and the United States no earlier than the 1820s. The new system stressed the importance of forced labor, silence, separation, and surveillance of the prisoner, as is best symbolized in Bentham's *Panopticon*. After doing research on the early German prisons in the Rhineland, which were instituted in the years of French administration on the left bank of the Rhine after 1794, it became clear to me that one must divide the ongoing discussion about the role of prisons in modern society into several schools, characterized by the source material that they use. First, there is a group of historians who deal with the theory of law and the theoretical texts on the penitentiary.
Second, there are those whom one might try to describe as administrative historians who explore material describing the implementation of the prison system in specific areas by the local authorities. Third, there are those historians who deal mostly with the internal organization of specific prisons and who therefore may be described as social historians. The scopes and methodologies of these different approaches differ greatly, but it is clear that one must try to combine all of these aspects if one does not want to fall prey to scholarly misconceptions based upon self-imposed limitations. I therefore want to deal with three different types of prisons in the U.S.: first with the penitentiary in Washington, D.C., where there was a strong influence of the national legislation on the organization and practice of law enforcement; second I want to look into the history of the prison in Pennsylvania, where there was a strong religious impact on the theory and organization of prisons; and third I want to deal with the prisons in Virginia, because slavery changes the whole conception of society and therefore must have had an impact on how punishment was perceived and conducted. The National Archives in Washington hold large sets of sources on the local penitentiary for the years 1820 to 1860 (RG 48), and the state archives of both Pennsylvania and Virginia have all the necessary records for a comparative study of the prison before the Civil War on all of the levels described above. The duration of the project will be three years, and I intend to publish my results as a monograph in English. Imperialism and Slavery. The Expansion of the Southern States, 1812-1860— Stig Förster In the first half of the nineteenth century, imperialism in Europe was largely a matter of interest for only a small minority of the population. Apart from occasional enthusiasm for famous victories abroad, such as the conquests of Mysore (1799) or of Algiers (1830), the general public in Europe's expansionist states tended to ignore the issue of imperialism. Only a few politicians, holders of special shares, adventurers, officers, and the men on the spot were directly involved. In the United States, all this was very different. Here, in a relatively democratic society, based on a tradition of colonial expansion, imperialism was potentially very popular among the general public. Expansionism into the American interior, the West and South, however, did not find support everywhere. Particularly in New York and the northeast, many people rejected the idea of reckless expansionism. In the South imperialism seems to have found its most enthusiastic support. Expansionist politicians such as Andrew Jackson, the penetration of U.S. influence into annexation of Texas, and the war against Mexico were widely popular. In fact, the commitment to expansion appears to have been quite often a precondition for the election of governors and senators. Democracy and popular demand for expansion combined in a characteristic mixture in the South. Southern expansionism therefore was an interesting special case in the international history of imperialism in the first half of the nineteenth century. It is the purpose of this project to investigate the social and economic origins of Southern imperialism as well as the nature of the policy. Particularly the internal conflict between large-scale plantations and small yeomanry deserves special interest, since this seem to have been one of the driving forces behind expansionism. 
The growth of the slave plantations forced small farmers to abandon the old South and to move west, only to be followed by more plantations. This process fed expansion and made imperialism popular among a land- hungry population. Therefore, there seems to have been a direct link between Southern imperialism and slavery. On the other hand, there is also the question of a separate Southern identity in the decades leading up to the Civil War. To determine whether the combination of imperialism and slavery helped to create such a separate identity is another purpose of this study. The research will concentrate on regional examples, major events (such as the war against Mexico), and their public perception. All this will be incorporated into a wide-ranging study of the long-term developments. It is expected that this study will be completed within five years. **Washington, Bonn, and the Problem of Nuclear Sharing and Nuclear Control in the 1950s—Axel Frohn** At the conclusion of the London Nine Power Conference in October 1954, the Federal Republic of Germany agreed "not to manufacture in its territory any atomic weapons ... [or] any part, device, assembly or material especially designed for ... any [such] weapon." Although described as the "first nonproliferation promise," this pledge did not prevent Germany from importing nuclear weapons or from achieving effective national control through bilateral or multilateral co-ownership arrangements. The project is designed to explore the extent to which the United States was prepared or anxious to share nuclear weapons with the Federal Republic in the 1950s and the concepts which were developed to control such a nuclear potential in German hands out of consideration for the perceived security requirements of Germany's West European neighbors. These concepts will be analyzed in the context of western defense policy, military doctrine, strategic planning, the diverging interests within the Western Alliance, and the changing degree of tension and detente in US-Soviet relations. **The Office of Strategic Services and the German Anti-Hitler Opposition During World War II—Jürgen Heideking** This study is part of a research project on German resistance sponsored by the Volkswagen Foundation and conducted at the University of Tübingen. The primary task consists of systematically gathering all the information about contacts made between members or emissaries of anti-Nazi opposition circles inside Germany and U.S. secret services. These contacts took place mainly in Switzerland, and to a lesser extent also in London, Stockholm, Madrid, Lisbon, Cairo, Algiers, and Istanbul. They can be traced in the files of the Office of Strategic Services (OSS), most of which have been transferred only recently from the CIA to the National Archives. Other important sources include various military intelligence reports and State Department records, especially the correspondence between Washington, D.C., and U.S. Embassies or Legations in neutral countries. In addition valuable material is to be found in the personal papers of prominent participants, such as William J. Donovan, Director of OSS, and Allen W. Dulles, OSS representative in Berne. On the basis of this information it will be possible to reconstruct the efforts of German opposition forces to establish links with U.S. government officials, together with the purposes that they sought to serve, as well as to ascertain the American knowledge and perception of the so-called "German underground." 
The sources also provide good insights into the U.S. administration's decision-making process, and they shed new light on American-British-Soviet cooperation and competition in intelligence matters. As a first result of the ongoing research it is anticipated that an edition of documents concerning American reactions to the attempt on Hitler's life on July 20, 1944, including the evaluation of its long-term political and military consequences, will be published in 1991. The German Bürgertum, 1750-1950—Kenneth F. Ledford An implicit or explicit component of all "Sonderweg" theories of modern German history has been that the middle classes, usually reified into "the" Bürgertum, "failed" to perform the role in society and politics assigned to it by admirers of the "English model" of progress toward a modern society and polity. Other theorists challenge the unspoken normative and comparative assumptions of the Sonderweg and argue that the German case was not so different from that of other Western industrializing states. The sharp debate about the German Sonderweg, however, has failed to describe and analyze the process of class formation and decline of the Bürgertum in its ecological niches. At present, I am completing one examination of the German Bürgertum and conceptualizing another. I began by scrutinizing closely an accessible and arguably representative group, lawyers in private practice (Rechtsanwälte), concentrating upon the example of lawyers in the Prussian province (formerly the Kingdom) of Hannover. Liberal reformers of the mid-nineteenth century placed great hope in the establishment of the private legal profession on a nationally-unified and liberal, that is, open to the free market, footing, and they seemed to have achieved their goal in the Imperial Justice Laws of 1877-79. In fact, however, lawyers suffered with the rest of the Bildungsbürgertum the dislocations of the Wilhelmine, war-time, and Weimar periods, and they concentrated their attentions primarily upon intra-professional struggles and the preservation of guild like standards of private behavior and professional comity. This project is in the final stages of revision prior to publication. For my next project, I propose a study of the German Bürgertum in its prime habitat, the city, over a sufficient time span, probably 1750-1950, to trace its "rise and fall." Did an identifiable "middle class" in fact emerge from the particularist and conflicting claims of guilds, Beamten, Gelehrten, and Unternehmer? If so, by what process; what held it together, and what tended to drive it apart? How did it manifest itself, defining its cohesion both to itself and to outside groups? How did its members behave in relation to each other and to outside groups? What were the relative roles of culture, material interest, and ideology in the emergence and in the decline of a unified Bürgertum? These are questions whose answers can be promoted by such a long-term secular study. My preference is for Prussian cities, for it is from Prussia that most of the evidence for a Sonderweg is drawn. Moreover, the Prussian Bürgertum has suffered neglect, particularly at the hands of American scholars who have tended to focus on the more liberal south. The targets of research should have been of some significance at the beginning of the period, but they should also have experienced appreciable growth in population, accelerating in the course of the nineteenth century, as well as rapid industrialization, after 1870 and especially after 1890. 
It would be best if they were not clearly identifiable either as a commercial, administrative, court, or academic city. A Protestant majority is preferable, because of the greater tendency of Protestants to pursue neo-humanistic academic study; and because of the anti-Catholic policies of the National Liberal party and the defensive particularism of the Zentrum. Also important in this regard is the Weberian thesis about the relationship of confession to entrepreneurial attitudes. Finally, of course, a key requirement is that the archival records both of city and state administration must have survived the vicissitudes of time and the destruction of war in sufficient measure to support a project of such a long time-span and comprehensive nature. **Enemy Aliens and the American Home Front in World War I—Jörg Nagler** Once the United States declared war on Imperial Germany in April 1917, more than half a million immigrants from Germany were declared enemy aliens—that term being defined as men and women born in Germany, over fourteen of age, and not naturalized. Within the prevailing climate of "one hundred percent Americanism," German-Americans in general and enemy aliens in particular became the targets and often victims of the American home front. The Wilson administration saw itself confronted with a virtually insoluble task. How could a population so large as the over half-million persons classified as German enemy aliens be politically evaluated and controlled? It appeared to be particularly difficult to ensure the loyalty of this group, and it seemed impossible to control this large population of non-naturalized Germans, thus rendering them particularly dangerous. The government attempted to control enemy aliens by requiring registration and restricting their movement; it placed them under surveillance, and once they were "proved" to be "disloyal" and "dangerous" to the national security, some of them were subsequently interned for the duration of hostilities. My hypothesis is that the U.S. government used these restrictive measures, which were aimed directly at enemy aliens, indirectly against German-Americans in general to ensure their loyalty. Although interest in the study of the social implications of war has increased in the recent past and immigration history has enlarged its agenda, the question of the treatment of ethnic minorities in wartime remains neglected. In particular the treatment of minorities during World War I has suffered a lack of attention, despite the fact that the treatment and the subsequent internment of Japanese-Americans during World War II was partly based upon this experience. My study attempts to answer the question of how American society, in the throes of total war, reacted toward an ethnic minority whose country of origin was in a state of war with their new homeland. Especially when loyalty became the primary psychological touchstone to establish national cohesiveness, aimed at transcending ethnic heritages, potential or presumed disloyalty on the part of the substantial number of German immigrants appeared as a threat to national security. In my study, enemy aliens serve as a looking glass in which the national experience of the home front is seen, how a government and population act and react toward threats to national security during time of war. 
The project should not be understood primarily as ethnic history of one particular national minority, but rather as the national experience of the home front, with all its implications in both official and private, everyday-life dimensions. The projection of the external German menace onto the internal (alien) enemy-the "fifth column" syndrome-reflected the strongly irrational characteristics of the prevailing xenophobia and threatened the cultural and often the economic survival of this ethnic group. The study is therefore also a history of mentality of government and people in wartime, and it is important to note in this regard that public opinion and popular pressure on the administration had a definitive impact upon the treatment of enemy aliens. To answer these questions sufficiently and to achieve a better understanding of the complex processes which took place in American society during the war requires a multidimensional, interdisciplinary approach involving various subspecialties, such as immigration and ethnic history, labor history, constitutional history, and the history of mentality. The primary sources reflect this multidimensional approach. The sources which reveal the governmental side include files of the several departments involved in enemy alien control, surveillance, and internment, the Justice Department, War Department, Immigration and Naturalization Service, as well as manuscript collections of the persons involved in this process. Newspapers, diaries, and letters sent to the Justice Department depict public perception and opinion. The self-perception of enemy aliens is documented by German-American newspapers, letters sent to the Justice Department and Swiss legation, internment camp publications (often censored), diaries, and oral history documents. The study has a nation-wide focus (mainly using the sources of the National Archives), treating the specific situations in individual states as well, contrasting their policy and attitudes toward enemy aliens. How did the midwestern states, for example, react toward their high percentage of German-Americans as opposed to states with lower percentages? A final major objective of the study is to examine the beginning of large-scale federal political surveillance operations aimed at enemy aliens and radicals. Our understanding of these origins is in fact still limited. Surveillance techniques as well as the different internment policies and operations will be discussed and placed in the overall political framework of wartime government and society. The prevailing conditions in the internment camps, their social composition and profile, and the public perception of these camps will also be examined. Gender and Social Stability. The Restructuring of West German Society 1945 to 1955.—Hanna Schissler Viewed from the angle of women of my generation, the question arises whether there were chances for more gender equality in Germany in the immediate post-war period that might have been missed. If this was the case, the reasons why the chances were missed must be examined. Yet to view gender politics in post-war (West) Germany solely from the perspective of "missed chances" seems as short-sighted as simply to presume the restructuring of traditional divisions of labor between men and women. The surface of the gendered society in the fifties was calm. A supposed "normalcy" had replaced the uprootings of the war and the immediate post-war period. 
But underneath this surface, tensions and contradictory life situations were on the increase, especially at first for women, but then as a consequence, also for men. Finally, since the late sixties the gender conflicts which had long been muted emerged and became visible. To this day they have not found satisfactory solutions. The project has several facets: It will examine the role that the American occupation force played in gender politics. How did the Americans' understanding of gender roles affect their politics? What role did Talcott Parsons' concept of the modern family, which was then the prevailing influence in American social sciences, play in the realities of political decision-making during the unique period in which "Americans as Proconsuls" could shape West German society? Not only were there pre-1945 plans which addressed the question of gender relations in post-war Germany, but in 1948 the American Occupation Force established a Women's Affairs Section, which explicitly dealt with gender relations (usually referred to as the "women's question"). Certainly all political and economic decisions also affected gender relations. In 1946 the Civil Code of 1900 was reestablished by the Americans, which re-codified the traditionally inferior legal status of women in property rights and family law. The Parliamentary Council devoted extensive debate to the question of full political and legal equality for women and finally, after much political struggle, codified that equality in Article 3 of the Basic Law. Nevertheless, an inherent uncertainty prevailed as to what was intended by Article 3. The West German unions' concept of a male "Leistungslohn," a family income tied to the male bread-winner, worked against the interests of women in seeking employment. A sex-segregated modern labor market emerged in the fifties, with the pattern of part-time work for women (or the famous "three phase model" in women's working lives), based upon the demand for female labor, its undervaluation, the establishment of mechanisms to guarantee the availability of female labor when the labor market demanded it and means to get rid of women's competition by campaigns against "double wage earners," if demand for female labor fell sharply and if women's labor endangered male "Besitzstände" in the labor market. The social realities of post-war Germany, therefore, differed sharply from any notion of gender equality. Men returned from the experience of war and destruction, experiencing guilt and shame at having at least witnessed, but often having perpetrated, war atrocities. The men who came home from the war or from prison camps had to face the fact that their roles in family and society had been deeply shaken, and it would be essential for them to redefine their roles (a process in which not only much denial took place, but in which men and many women all too often looked for old role models and traditional pseudo-safety in gender roles). On a cultural level, the inequalities between men and women were played down. Women were marginalized socially and economically—especially single women, all the more astonishing since the lack of men (until this day referred to as an "abundance of women") must have made it hard to ignore the fact that single women (war widows, single unmarried mothers, women who did not have a chance to find a male partner because of the war losses) were a decisive element in post-war West German society. 
A whole range of endeavors arose which tended to produce that social unconsciousness which is so essential for the functioning of power relations; the "social production of unconsciousness" in gender relations was amazingly efficient until the sixties and seventies, and it functions even today (although occasionally challenged). It is interesting to ask why this was and in many ways continues to be (although the change that took place can by no means be played down). Can the exhaustion of women and men after the war be held responsible for the reinforcement of traditional gender orientations? How did the longing for social stability affect decisions in gender politics? Gender politics in what became West Germany were deeply affected by the desire to distinguish that region from the Soviet zone of occupation and later East Germany, and thus it was linked to anti-communism and the Cold War. This is a striking example of how gender relations are influenced and sometimes determined by political developments which at first glance seem irrelevant to the ways in which men and women relate in a given society. On the economic level, it is interesting to examine whether the resurrection of the West German economy presupposed—but then in the long run also undermined—gender inequality. And finally, what are the political, social, economic, and psychological consequences of perpetuating basic inequalities between men and women? Who (men as well as women) profits from them, and who (again, women and men) pays what price for those inequalities? Gender relations point to fundamental problems of why human labor has historically been valued differently according to sex. It also highlights the question of why the upbringing of the next generation has been perceived as beyond the responsibility of the employer and to a considerable degree also beyond that of state and society, instead overwhelmingly being placed into the sphere of individual (female) responsibility. The result is a specific social placement of men and women in modern societies with many contradictory life situations, especially for women (but then, as a consequence, also for men), which need to be examined more thoroughly. Not the least important question is when are conditions more favorable for more gender equality: times of material scarcity, under conditions of war, in a free market economy, or—comparing West with East Germany—under conditions of strong state intervention? What kind of political decision-making is necessary in order to achieve more gender equality, and what can be learned from the post-war period in this regard? What groups in society share this goal, and what groups do not, and why? This project is clearly linked with a modern feminist approach to the question of gender inequality and tries to find answers in historical research which focuses on the (re)structuring of West German society. It also, however, will examine American society (Americans as the dominant occupation force in the western part of Germany as well as American notions of gender) and more fundamental problems of industrial societies in the second half of the twentieth century. IV. Institute News. A. New Address. As of April 9, the German Historical Institute has moved into its new quarters, the Woodbury-Blair Mansion, 1607 New Hampshire Avenue, N.W., Washington, D.C. 20009, purchased for its use by the Stiftung Volkswagenwerk. The telephone number has remained the same, (202) 387-3355, as has the FAX number, (202) 483-3430. 
The first event to be held in the Institute's new home will be the conference, "Max Weber's 'The Protestant Ethic and the Spirit of Capitalism' Reconsidered," May 3-5, 1990. Beginning in the fall, the Institute's regular lecture series will take place in our own lecture hall. The date of an official ceremony dedicating and celebrating the new building will be announced. B. Alois Mertes Memorial Lecture. The German Historical Institute in Washington has received a grant from the Association of Foundations for German Scholarship in Essen (Stifterverband für die Deutsche Wissenschaft) in order to hold an annual Alois Mertes Memorial Lecture, which will address one of the themes upon which Alois Mertes focused his life's work. Such themes include the German question in the context of German-American relations; the dialogue between American Jews and Germans; Central and South America as themes of European-North American dialogue; European integration and the Atlantic Alliance; and the role of churches in the ethics of war-prevention in the Federal Republic and the United States. The lecture will be held at the German Historical Institute in Washington, D.C., and will be published by the Institute. The scholar who delivers the Alois Mertes Memorial Lecture will receive a short-term stipend of DM 10,000. Selection will be made by the Director of the German Historical Institute, after an invitation for applications directed at younger German and American scholars, in consultation with two members of the Academic Advisory Council of the Institute and two representatives of the Association of Foundations for German Scholarship. Further information as to the invitation for applications for the first Alois Mertes Memorial Lecture will be announced in the Fall 1990 BULLETIN. C. Research Fellowships for Visiting Scholars. The Volkswagen Foundation has recently awarded a grant jointly to the German Historical Institute and the American Institute for Contemporary German Studies for three research fellowships annually in the field of post-World-War-II German history. In its first stage, the program will last for three years. Details about the fellowships and the application process will be announced in the next issue of the BULLETIN. The German Historical Institute has inaugurated a new series of publications with the appearance of Occasional Paper No. 1, Forty Years of the Grundgesetz (Basic Law), containing an essay by Peter Graf Kielmansegg of the University of Mannheim, entitled "The Basic Law—Response to the Past or Design for the Future?", and one by Gordon A. Craig of Stanford University, "Democratic Progress and Shadows of the Past." Copies of Occasional Paper No. 1 are available from the Institute upon request. E. Friends of the German Historical Institute. At the suggestion of Gerald D. Feldman, Konrad H. Jarausch, Michael H. Kater, and Ronald Smelser, a loosely-organized circle of friends of the German Historical Institute in Washington is being formed. This group will serve as a communication link between the North American academic community and the Institute, help articulate the needs of the North American constituency to the Institute, advise on research and help plan future activities with the Institute, and, when necessary, provide public support for the Institute. The Conference Group for Central European History has decided to participate in the formation of this group, and it is hoped that its inaugural meeting will be held this fall. 
The Institute is grateful to the initiators of this idea and to all others in the North American historical and scholarly community who have shown their interest in and support for the activities of the Institute. F. Supplement to Reference Guide No. 1, German-American Scholarship Guide for Historians and Social Scientists. The following is an expanded description of the "German Internship Programs," affiliated with the Conference Group on German Politics, included as entry B 7 in the scholarship guide: 1. TITLE - German Internship Programs - "German Internship Programs"-Affiliated with the Conference Group on German Politics 2. SCHOLARSHIP ADMINISTRATION - Selection Committee, German Internship Programs P.O. Box 345 Durham, NH 03824 Tel.: (603) 862-1778 3. PARTICIPANTS - All non-German citizens eligible; emphasis on North Americans 4. PROMOTED DISCIPLINES - Social Sciences, (especially Political Science, History, Economics); German or German Studies if a strong Social Science emphasis is included. 5. ELIGIBILITY - Advanced undergraduate students, graduate students, and occasionally others with undergraduate or graduate degrees. 6. LOCATION - Emphasis on programs in Berlin and Bonn, but occasionally also with Länder Parliaments or other governmental or quasi-public organizations. 7. SCHOLARSHIP DURATION - One to three months 8. APPLICATION DEADLINES - March 1 9. PREREQUISITES - Completed application forms as requested by the German Internship Programs, affiliated with the Conference Group on German Politics. 10. SELECTION PROCESS - The application is handled by the German Internship Programs, affiliated with the Conference Group on German Politics. - Eleven scholarships are awarded each year. 11. SCHOLARSHIP PROVISIONS - Work-study scholarships for advanced undergraduate and graduate students in German affairs - Stipends range from DM 1200,- to DM 2000,- per month, with travel subsidies customary. G. GHI Library Report—Gaby Müller-Oelrichs The collection-building of the library continues steadily, and we now hold approximately 8,000 titles and subscribe to 145 periodicals. A list of the periodicals to which the Institute subscribes was published in BULLETIN No. 2 and No.3, and an updated list will follow in one of the next issues. The Goethe Institute in Atlanta has generously donated a large number of out-of-print books from the early 1970s, primarily concerned with political science, sociology, and social history. We have also acquired a reprint edition of the Weltbühne. Thus, gaps in the collection are gradually being filled. We are in the process of purchasing a complete edition of the Fackel, which, together with the Weltbühne, will give our readers access to some of the most important cultural periodicals of the Weimar era. The library has purchased a great number of out-of-print books on the question of the relationship of the church to the Nazi regime. By early May, about a month after the move to the new building, the library will be able to offer its services to the public in a spacious new reading room. The collection, previously scattered all over the Institute, will be concentrated in three rooms and thus more easily usable. There will be better facilities for microfiche and microfilm use, as well as an additional copy machine. We appreciate the patience of our users and invite them anew to make use of our library holdings. H. New Staff Members. Two new staff members have joined the Institute since September 1989, and one former staff member has returned. 
Gabrielle Simon Edgcomb, whose association with the Institute was first announced in BULLETIN No. 3, has returned to the Institute as of October 1989 and is completing her study of German-speaking refugee historians who held faculty positions in historically black colleges and universities in the United States. **Norbert Finzsch**, Deputy Director, born in Cologne, 1951. Studied history and German literature at the University of Cologne; Dr. phil. Cologne, 1980; Lecturer/Assistant Professor at the Institute for Angloamerican History at the University of Cologne, 1981 to 1988; Privatdozent at the Department of History at the same university, 1989; replacement for Professor Jörn Rüsen at the Chair for Methodology and Didactics of History at the University of Bochum 1989/90. Member of *Verband der Historiker Deutschlands*, *Deutsche Gesellschaft für Amerikastudien*, European Association for American Studies, board of directors of QUANTUM. Married to Martina Sprengel, M.A., journalist. **Renate E. Solenberger**, Receptionist, born in Worms, Germany; B.A. *summa cum laude*, University of Maryland, 1979; graduate studies in psychology and art history, University of Heidelberg, 1979-81; German language instructor, U.S. Embassy to the European Common Market, Brussels, Belgium, 1982-85; Mental Health Program Grant Director, U.S. Embassy, Bonn, 1985-88. I. Scholarships. The Institute offers scholarships to doctoral students working on topics related to the Institute's general scope of interest. Applications should be sent to the Director, together with the following supporting information: - *curriculum vitae;* - study plan, including research proposal, time frame, and locations in the United States where research is to be carried out; and - letter of recommendation from the applicant's doctoral advisor. Applicants for scholarships to be taken up at any time during calendar year 1991 must send their letters of application, current *curriculum vitae,* and supporting letters of reference to the Institute no later than June 15, 1990. Americans who apply for these scholarships should be working on German history topics for which they need to evaluate source material located in the United States. Those who wish to do research in Germany should apply to the Fulbright Commission, the Deutscher Akademischer Austauschdienst, or some similar foundation. Copies of the German-American Scholarship Guide for Historians and Social Scientists are available from the German Historical Institute. The Guide, compiled by Jürgen Heideking, Anne Hope, and Ralf Stegner, includes information on some ninety-three scholarships, fifty-six of which provide funding for residents of the United States. J. Spring 1990 Lecture Series. - January 31: Claudia Koonz, Duke University, "Collaborators or Victims: Women in the Third Reich." - February 14: Rebecca Boehling, University of Maryland, Baltimore County, "From Trümmerfrauen to Hausfrauen: West German Women, 1945-1955." - March 21: Dirk Hoerder, University of Bremen, "The Image of America: Migrants' Hopes and Expectations." - April 24: Richard Bessel, The Open University, Milton Keynes, England, "Immortality' and Social Order in Germany after the First World War." - June 5: Jonathan Knudsen, Wellesley College, "Liberalism and Culture in Pre-1848 Berlin." The list of speakers for the Fall 1990 Lecture Series will be announced shortly. K. Miscellaneous. Dr. 
Jörg Nagler, Research Fellow at the German Historical Institute, has been appointed to the Editorial Board of the Yearbook of German-American Studies, published by the Society for German-American Studies. The Society for German-American Studies has issued a Call for Papers for its fifteenth Annual Symposium, to be held April 25-28, 1991 in Washington, D.C. Hosts will be the German Department of Georgetown University, the German Heritage Society of Greater Washington, D.C., and the German Historical Institute. Abstracts of scholarly papers should be submitted by October 15, 1990 to: Prof. Alfred Obernberger German Department Georgetown University Washington, D.C. 20057. For additional information, please call or write Professor Volker K. Schmeissner, (703) 845-6242. The Center for Immigration Research at the Balch Institute in Philadelphia is publishing a computerized data base on German Immigration to the United States between 1850 and 1893. The years 1850-1865 have been published: *Germans to America: Lists of Passengers Arriving at U.S. Ports*, edited by Ira A. Glazier and P. William Filby (Scholarly Resources, 1988- ). The remaining years are in preparation. Information may be obtained from the Director, Temple-Balch Center for Immigration Research or from Scholarly Resources, Inc., Wilmington, Delaware. Professors Frank W. Thackeray and John E. Findling of Indiana University Southeast wish to announce that they are seeking authors to write essays for a bio-bibliographical volume on the most significant international statesmen of the modern Western world, to be published by Greenwood Press. Among the some fifty to sixty subjects to be included are German statesmen such as Kaunitz, Adenauer, Metternich, Brandt, Hitler, Bismarck, Frederick the Great, and Wilhelm II. Those interested should send a letter stating their qualifications and a brief resume to Dr. Frank W. Thackeray, c/o Division of Social Sciences, Indiana University Southeast, 4201 Grantline Road, New Albany, Indiana 47510. The Center for Austrian Studies at the University of Minnesota announces a prize competition to identify the best recent book and Ph.D. dissertation in Austrian Studies. The field of Austrian Studies includes research on the cultural, political, and socio-economic links between modern Austria or the Habsburg lands and other European states; comparative studies involving modern Austria or the Habsburg lands; and analyses of literary, artistic, musical, philosophical, and scientific works by Austrian cultural figures, especially in their socio-economic or political setting. Regulations Book Prize 1. The author must be a citizen of the United States and the work must be in English. 2. The publication date must be between 1 May 1989 and 30 April 1990. 3. The book must involve original scholarship and make an important contribution to the field. Edited works and textbooks will not be considered. 4. The author, the publisher, or any other individual may submit the book. Submit three copies to: Chair, Austrian Prize Committee, Center for Austrian Studies, 712 Social Sciences Building, University of Minnesota, Minneapolis, MN 55455. 5. The prize carries a cash award of $1,000. The Center for Austrian Studies will announce the winner at the fall 1990 German Studies Association meeting in Buffalo, New York. 6. The deadline for submission is June 15, 1990. Dissertation Prize 1. The author must be a citizen of the United States studying at an American University. 2. 
The author must defend the dissertation successfully between 1 January 1989 and 31 May 1990. 3. The author or any other individual may submit the dissertation. Please send three copies to: Chair, Austrian Prize Committee, Center for Austrian Studies, 712 Social Sciences Building, University of Minnesota, Minneapolis, MN 55455. 4. The prize carries a cash award of $1,000. The Center for Austrian Studies will announce the winner at the fall 1990 German Studies Association meeting in Buffalo, New York. 5. The deadline for submission is June 15, 1990. L. Publications of the German Historical Institute in Washington. The following publications of the German Historical Institute in Washington, D.C., are available upon request: - **Bulletin**, Issue No. 2, Spring 1988-Issue No. 6, Spring 1990 (the supply of copies of Issue No. 1, Fall 1987, has been exhausted). - **ANNUAL LECTURE SERIES:** - **REFERENCE GUIDES:** - **OCCASIONAL PAPERS:** No. 1: *Forty Years of the Grundgesetz (Basic Law)*, with essays by Peter Graf Kielmansegg, "The Basic Law-Response to the Past or Design for the Future?", and Gordon A. Craig, "Democratic Progress and Shadows of the Past," German Historical Institute, 1990.
TOWARDS SAFE DEEP LEARNING: UNSUPERVISED DEFENSE AGAINST GENERIC ADVERSARIAL ATTACKS Anonymous authors Paper under double-blind review ABSTRACT Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against the state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and Carlini&Wagner-L2. Extensive proof-of-concept evaluations for analyzing various data collections including MNIST, CIFAR10, and ImageNet corroborate the effectiveness of our proposed defense mechanism against adversarial samples. 1 INTRODUCTION Security and safety consideration is a major obstacle to the wide-scale adoption of emerging learning algorithms in sensitive scenarios, such as intelligent transportation, healthcare, and video surveillance applications (McDaniel et al. (2016); Dahl et al. (2013); Knorr (2015)). While advanced learning technologies are essential for enabling coordination and interaction between the autonomous agents and the environment, a careful analysis of their decision reliability in the face of carefully crafted adversarial samples (Goodfellow et al. (2014); Papernot et al. (2016a); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017)) and thwarting their vulnerabilities are still in their infancy. In this paper, we aim to answer two open questions regarding the adversarial attacks. (i) Why are machine learning models vulnerable to adversarial samples? Our hypothesis is that the vulnerability of neural networks to adversarial samples originates from the existence of rarely explored sub-spaces in each feature map. This phenomenon is particularly caused by the limited access to the labeled data and/or inefficiency of regularization algorithms (Wang et al. (2016); Denil et al. (2013)). Figure 1 provides a simple illustration of the partially explored space in a two-dimensional setup. We analytically and empirically back up our hypothesis by extensive evaluations on the state-of-the-art attacks, including Fast-Gradient-Sign (Goodfellow et al. (2014)), Jacobian Saliency Map Attack (Papernot et al. (2016a)), Deepfool (Moosavi-Dezfooli et al. (2016)), and Carlini&Wagner-L2 (Carlini & Wagner (2017)). (ii) How can we characterize and thwart the underlying space for unsupervised model assurance as well as defend against the adversaries? 
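The hypothesis in (i) can be made concrete with a toy two-dimensional example in the spirit of Figures 1a and 1b. The snippet below is purely illustrative (the data and both boundary choices are invented for the illustration): two linear boundaries separate the training data equally well, but only the one that ignores the rarely explored (nuisance) dimension survives a perturbation along that dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of Figures 1a/1b: the label depends only on the first coordinate,
# while the second ("nuisance") coordinate is barely explored during training.
x1 = np.concatenate([rng.uniform(-1.0, -0.2, 100), rng.uniform(0.2, 1.0, 100)])
x2 = rng.uniform(-0.05, 0.05, 200)           # rarely explored dimension
X, y = np.stack([x1, x2], axis=1), (x1 > 0).astype(int)

def train_accuracy(w):
    return np.mean(((X @ w) > 0).astype(int) == y)

w_robust  = np.array([1.0, 0.0])   # vertical boundary, ignores the nuisance axis
w_fragile = np.array([1.0, 3.0])   # tilted boundary, also separates the training data

print(train_accuracy(w_robust), train_accuracy(w_fragile))   # 1.0 and 1.0

# "Adversarial" perturbation restricted to the rarely explored axis:
x_clean = np.array([0.3, 0.0])               # true class 1
x_adv = x_clean + np.array([0.0, -0.5])
print(x_adv @ w_robust > 0)    # True: prediction unchanged
print(x_adv @ w_fragile > 0)   # False: the tilted boundary is crossed
```

Both boundaries are indistinguishable on the training data, yet they differ sharply once the nuisance variable is manipulated, which is exactly the gap the defense below aims to close.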
A line of research has shown that there is a trade-off between the robustness of a model and its accuracy (Madry et al. (2017); Papernot et al. (2016b)). Taking this into account, instead of making a single model that is both robust and accurate, we introduce a new defense mechanism called Parallel Checkpointing Learners (PCL). In this setting, the victim model is kept as is while separate defender modules are trained to checkpoint the data abstractions and assess the reliability of the victim’s prediction. Each defender module characterizes the explored sub-space in the pertinent layer by learning the probability density function (pdf) of legitimate data points and marking the complement sub-spaces as rarely observed regions. Once such characterization is obtained, the checkpointing modules evaluate the input sample in parallel with the victim model and raise alarm flags for data points that lie within the rarely explored regions (Figure 1c). As we demonstrate in Section 4, adversarial samples created by various attack methods mostly lie within the sub-spaces marked as partially explored sectors. We consider a white-box attack model in which the attacker knows everything about the victim model, including its architecture, learning algorithm, and parameters. This threat model represents the most powerful attacker that can endanger real-world applications. We validate the security of our proposed approach for different DL benchmarks including MNIST, CIFAR10, and a subset of ImageNet data. Based on the results of our analysis, we provide new insights on the reason behind the existence of adversarial transferability. We open-source our API to ensure ease of use (the link is omitted for blind review purposes) and invite the community to attempt attacks against our provided benchmarks in the form of a challenge.

The explicit contributions of this paper are as follows: (i) Devising an automated end-to-end framework for unsupervised model assurance as well as defending against the adversaries. (ii) Incepting the idea of parallel checkpointing learners to validate the legitimacy of data abstractions at each intermediate DL layer. (iii) Performing extensive proof-of-concept evaluations against state-of-the-art attack methods. (iv) Providing new insights regarding the transferability of adversarial samples between different models.

Figure 1: (a) In this example, data points (denoted by green and blue squares) can be easily separated in one-dimensional space. Having extra dimensions adds ambiguity in choosing the pertinent decision boundaries. For instance, all the shown boundaries (dashed lines) are sufficient to classify the raw data with full accuracy in two-dimensional space but are not equivalent in terms of robustness to noise. (b) The rarely explored space (the region marked with diagonal stripes) in a learning model leaves room for adversaries to manipulate the nuisance (non-critical) variables and mislead the model by crossing the decision boundaries. (c) In the PCL methodology, a set of defender modules is trained to characterize the data density distribution in the space spanned by the victim model. The defender modules are then used in parallel to checkpoint the reliability of the ultimate prediction.
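The mechanism of Figure 1c can be illustrated with a small, self-contained sketch. The snippet below is not the paper's implementation: a full-covariance Gaussian per class stands in for the HDDA density model described in Section 2, the per-class cut-off plays the role of the security parameter also introduced in Section 2, and all function and variable names are invented for the illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_latent_densities(phi, labels, n_classes, reg=1e-3):
    """Fit one Gaussian per class on the latent features `phi` of legitimate
    training data (a stand-in for the HDDA model used in Section 2)."""
    densities = []
    for c in range(n_classes):
        fc = phi[labels == c]
        cov = np.cov(fc, rowvar=False) + reg * np.eye(fc.shape[1])  # regularize
        densities.append(multivariate_normal(mean=fc.mean(axis=0), cov=cov))
    return densities

def security_cutoffs(phi, labels, densities, sp=0.01):
    """Per-class cut-off chosen so that a fraction `sp` (the security
    parameter of Section 2) of legitimate training samples falls below it."""
    return np.array([
        np.percentile(d.logpdf(phi[labels == c]), 100.0 * sp)
        for c, d in enumerate(densities)
    ])

def pcl_alarm(phi_layers, y_pred, defenders):
    """Parallel checkpointing: `defenders` is a list of (densities, cutoffs)
    pairs, one per checkpointed layer; `phi_layers` holds the test input's
    latent feature vector at each of those layers. The input is flagged if
    any defender sees it in a rarely observed region."""
    for phi_x, (densities, cutoffs) in zip(phi_layers, defenders):
        if densities[y_pred].logpdf(phi_x) < cutoffs[y_pred]:
            return True
    return False
```

Because every checkpointed layer carries its own density model and cut-off, an adversary has to keep the perturbed sample in the well-explored region of all layers simultaneously, which is the core intuition behind the defense.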
2 TRAINING CHECKPOINTING MODULES FOR INTERMEDIATE LAYERS

The goal of each defender (checkpointing) module is to learn the pdf of the explored sub-spaces in a particular intermediate DL feature map. The learned density function is then used to identify the rarely observed regions as depicted in Figure 1b. We consider a Gaussian Mixture Model (GMM) as the prior probability to characterize the data distribution at each checkpoint location. To effectively characterize the explored sub-space as a GMM distribution, one is required to minimize the entanglement between every two Gaussian distributions (corresponding to two different classes) while decreasing the inner-class diversity. Figure 2 illustrates the high-level block diagram of the training procedure for devising a parallel checkpointing module. Training a defender module is a one-time offline process and is performed in three steps.

--- 1 We use the terms “checkpointing module” and “defender module” interchangeably throughout the paper. 2 It is worth noting that our proposed approach is rather generic and is not restricted to the GMM distribution. The GMM distribution can be replaced with any other prior depending on the application.

(1) Replicating the victim neural network and all its feature maps. An $L_2$ normalization layer is inserted at the desired checkpoint location. The normalization layer maps the latent feature variables, $\phi(x)$, into the Euclidean space such that the acquired data embeddings live on a $d$-dimensional hypersphere, i.e., $\|\phi(x)\|_2 = 1$. This normalization is crucial as it partially removes the effect of over-fitting to particular data samples that are highly correlated with the underlying DL parameters.

(2) Fine-tuning the replicated network to enforce disentanglement of data features (at a particular checkpoint location). To do so, we optimize the defender module by incorporating the following loss function with the conventional cross-entropy loss:

$$ \mathcal{L}^+ = \gamma \left[ \, \| C^{y^*} - \phi(x) \|_2^2 \; - \sum_{i \neq y^*} \| C^i - \phi(x) \|_2^2 \; + \sum_i \left( \| C^i \|_2 - 1 \right)^2 \right]. \qquad (1) $$

Here, $\gamma$ is a trade-off parameter that specifies the contribution of the additive loss term, $\phi(x)$ is the corresponding feature vector of input sample $x$ at the checkpoint location, $y^*$ is the ground-truth label, and $C^i$ denotes the center of all data abstractions ($\phi(x)$) corresponding to class $i$. The center values $C^i$ and intermediate feature vectors $\phi(x)$ are trainable variables that are learned by fine-tuning the defender module. In our experiments, we set the parameter $\gamma$ to 0.01 and retrain the defender model with the same optimizer used for training the victim model. The learning rate of the optimizer is set to $\frac{1}{10}$ of that of the victim model as the model is already in a relatively good local minimum. Figure 3a illustrates the optimization goal of each defender module per Eq. (1). The first term ($loss_1$) in Eq. (1) aims to condense the latent data features $\phi(x)$ that belong to the same class. Reducing the inner-class diversity, in turn, yields a sharper Gaussian distribution per class. The second term ($loss_2$) intends to increase the inter-class distance between different categories and promote separability. The composition of the first two terms in Eq. (1) can be made arbitrarily small by pushing the centers to infinity ($C^i \rightarrow \pm \infty$). We add the third term, $loss_3$, to ensure that the underlying centers lie on a unit $d$-dimensional hypersphere and avoid divergence in training the defender modules.
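A compact PyTorch sketch of the additive term in Eq. (1) is given below. It assumes, as in the reconstruction above, that $\gamma$ scales the whole bracket and that the centers $C^i$ are trainable parameters; the tensor names and the way the total objective is assembled are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def defender_additive_loss(phi_x, labels, centers, gamma=0.01):
    """Additive disentanglement loss of Eq. (1), batched (sketch).

    phi_x   : [B, d] L2-normalized latent features at the checkpoint location
    labels  : [B]    ground-truth class indices y*
    centers : [C, d] trainable per-class centers C^i (torch.nn.Parameter)
    """
    # loss_1: pull each sample toward the center of its own class
    c_true = centers[labels]                                   # [B, d]
    loss1 = ((phi_x - c_true) ** 2).sum(dim=1)                 # [B]

    # loss_2: push each sample away from the centers of all other classes
    d2_all = ((phi_x.unsqueeze(1) - centers.unsqueeze(0)) ** 2).sum(dim=2)  # [B, C]
    loss2 = d2_all.sum(dim=1) - loss1                          # exclude own class

    # loss_3: keep every center on the unit d-dimensional hypersphere
    loss3 = ((centers.norm(dim=1) - 1.0) ** 2).sum()

    return gamma * ((loss1 - loss2).mean() + loss3)

# The defender is fine-tuned with the conventional cross entropy plus the
# additive term, e.g. (illustrative):
#   total = F.cross_entropy(logits, labels) + defender_additive_loss(phi_x, labels, centers)
```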
Figures 3b and 3c demonstrate the distance of legitimate (blue) and adversarial (red) samples from the corresponding centers $C^i$ in a checkpoint module before and after retraining. As shown, fine-tuning the defender module with the proposed objective function can effectively separate the distribution of legitimate samples from malicious data points. Note that training the defender module is carried out in an unsupervised setting, meaning that no adversarial sample is included in the training phase.

--- 3 The $L_2$ norm is selected to be consistent with our assumption of a GMM prior distribution. This norm can be easily replaced by an arbitrary user-defined norm through our accompanying API. 4 The centers $C^i$ before fine-tuning the checkpoint (defender) module are equivalent to the mean of the data points in each class.

High dimensional real-world datasets can be represented as an ensemble of lower dimensional sub-spaces (Bouveyron et al. (2007); Mirhoseini et al. (2016); Rouhani et al. (2017)). As discussed in (Bouveyron et al. (2007)), under a GMM distribution assumption, the data points belonging to each class can be characterized as a spherical density in two sub-spaces: (i) the sub-space where the data actually lives and (ii) its orthogonal complementary space. We leverage the High Dimensional Discriminant Analysis (HDDA) algorithm (Bouveyron et al. (2007)) to learn the mean and the conditional covariance of each class as a composition of lower dimensional sub-spaces. The learned pdf variables (i.e., mean and conditional covariance) are used to compute the probability of a feature point $\phi(x)$ coming from a specific class. In particular, for each incoming test sample $x$, the probability $p(\phi(x)|y')$ is evaluated, where $y'$ is the predicted class (output of the victim neural network) and $\phi(x)$ is the corresponding data abstraction at the checkpoint location. The acquired likelihood is then compared against a user-defined cut-off threshold which we refer to as the security parameter. The Security Parameter (SP) is a constant number in the range of $[0\% - 100\%]$ that determines the hardness of the defender modules. Figure 4 illustrates how the SP can control the hardness of the pertinent decision boundaries. In this example, we have depicted the latent features of one category projected onto the first two Principal Component Analysis (PCA) components in the Euclidean space (each point corresponds to a single input image). The blue and black contours correspond to security parameters of 10% and 20%, respectively. For example, 10% of the legitimate training samples lie outside the contour specified with $SP = 10\%$.

One may speculate that an adversary can add structured noise to a legitimate sample such that the data point is moved from one cluster to the center of another cluster, thus fooling the defender modules (Figure 5a). The risk of such an attack is significantly reduced in our proposed PCL countermeasure for three main reasons: (i) Use of parallel checkpointing modules; the attacker is required to simultaneously deceive all the defender models in order to succeed. (ii) Increased inter-class distances in each checkpointing module; the latent defender modules are trained such that not only is the inner-class diversity decreased, but the distance between each pair of different classes is also increased (see Eq. (1)).
(iii) Learning a separate defender module in the input space to validate the Peak Signal-to-Noise Ratio (PSNR) level of the incoming samples, as discussed in Section 3. In the remainder of the paper, we refer to the defender modules operating on the input space as the input defenders. PCL modules that checkpoint the intermediate data features within the DL network are referred to as latent defenders.

2.1 Risk analysis

Detecting malicious samples can be cast as a two-category classification task. Let us refer to the category of the legitimate samples as $W_1$ and the category of adversarial samples as $W_2$. If we define $\eta_{ij} = \eta(\alpha_i|W_j)$ as the misclassification penalty incurred for deciding $W_i$ when the true state is $W_j$, the conditional risk in each of our checkpointing modules is equal to:

$$ \mathcal{R}(\alpha_1 | \phi(x)) = \eta_{11} P(W_1 | \phi(x)) + \eta_{12} P(W_2 | \phi(x)), $$
$$ \mathcal{R}(\alpha_2 | \phi(x)) = \eta_{21} P(W_1 | \phi(x)) + \eta_{22} P(W_2 | \phi(x)). \qquad (2) $$

The misclassification penalty is a constant value which determines the cost of each decision. The fundamental rule to express the minimum-risk decision is to decide $W_1$ if $\mathcal{R}(\alpha_1 | \phi(x)) < \mathcal{R}(\alpha_2 | \phi(x))$. In terms of the posterior probabilities, we decide $W_1$ if:

$$ (\eta_{21} - \eta_{11})\, P(W_1 | \phi(x)) > (\eta_{12} - \eta_{22})\, P(W_2 | \phi(x)). \qquad (3) $$

Generally speaking, the penalty incurred for making an error is greater than the cost incurred for being correct; thus both of the terms $\eta_{21} - \eta_{11}$ and $\eta_{12} - \eta_{22}$ are positive. Following Bayes' rule, we should select a sample as a legitimate one ($W_1$) if:

$$ (\eta_{21} - \eta_{11})\, P(\phi(x) | W_1)\, P(W_1) > (\eta_{12} - \eta_{22})\, P(\phi(x) | W_2)\, P(W_2), \qquad (4) $$

and select $W_2$ otherwise. By reordering the aforementioned decision criteria we have:

$$ \frac{P(\phi(x) | W_1)}{P(\phi(x) | W_2)} > \frac{(\eta_{12} - \eta_{22})\, P(W_2)}{(\eta_{21} - \eta_{11})\, P(W_1)}. \qquad (5) $$

Note that the right-hand term in Eq. (5) is application specific and is independent of the input data observation $\phi(x)$. In other words, the optimal decision criteria particularly rely on the cost of making a mistake in the given task and the risk of being attacked. This term is tightly correlated with the user-defined cut-off threshold (security parameter) depicted in Figure 4. Under the GMM assumption, the conditional probability $P(\phi(x) | W_1)$ in Eq. (5) is computed as:

$$ p(\phi(x) | y') = \frac{1}{(2\pi)^{N/2} |\Sigma_i|^{1/2}} \exp \left\{ -\frac{1}{2} (\phi(x) - \mu_i)^T \Sigma_i^{-1} (\phi(x) - \mu_i) \right\}, \qquad (6) $$

where $y'$ is the output of the victim neural network (predicted class), $\mu_i$ and $\Sigma_i$ are the output of the HDDA analysis for class $i = y'$, and $N$ is the dimension of the latent feature space at the pertinent checkpoint location. The conditional probability $P(\phi(x) | W_2)$ is, in turn, equivalent to $(1 - P(\phi(x) | W_1))$.

3 Training checkpointing modules for the input space

We leverage dictionary learning and sparse signal recovery techniques to measure the PSNR of each incoming sample and automatically filter out atypical samples in the input space. Figure 5b illustrates the high-level block diagram of an input defender module. As shown, devising an input checkpoint model is performed in two main steps: (i) dictionary learning, and (ii) characterizing the typical PSNR per class after sparse recovery.
1. Dictionary learning: we learn a separate dictionary for each class of data by solving:

$$
\arg\min_{D^i} \frac{1}{2} \|Z^i - D^i V^i\|^2_2 + \beta \|V^i\|_1 \quad \text{s.t.} \quad \|D^i_k\| = 1, \quad 0 \leq k \leq k_{\max}.
$$
(7)

$^5$ The misclassification penalty is a constant value which determines the cost of each decision.

Figure 5: An input defender module is devised based on robust dictionary learning techniques to automatically filter out test samples that highly deviate from the typical PSNR of data points within the corresponding predicted class (output of the victim model).

Here, $Z^i$ is a matrix whose columns are pixels extracted from different regions of input images belonging to category $i$. For instance, if we consider $8 \times 8$ patches of pixels, each column of $Z^i$ would be a vector of 64 elements. The goal of dictionary learning is to find a matrix $D^i$ that best represents the distribution of pixel patches from images belonging to class $i$. We denote the number of columns in $D^i$ by $k_{\max}$. For a certain $D^i$, the image patches $Z^i$ are represented with a sparse matrix $V^i$, and $D^i V^i$ is the reconstructed patches. In our experiments, we leveraged a dictionary of size $k_{\max} = 225$ for each class of data points. For an incoming sample, during the execution phase, the input defender module takes the output of the victim DL model (e.g., predicted class $i$) and uses the Orthogonal Matching Pursuit (OMP) routine (Tropp et al. (2007)) to sparsely reconstruct the input data with the corresponding dictionary $D^i$. The dictionary matrix $D^i$ contains a set of samples that commonly appear in the training data belonging to class $i$; as such, an input sample classified as class $i$ should be well-reconstructed as $D^i V^*$ with a high PSNR value, where $V^*$ is the optimal solution obtained by the OMP routine.

2. Characterizing typical PSNR in each category: we profile the PSNR of legitimate samples within each class. If the incoming sample does not incur a typical PSNR (e.g., it shows high perturbation after reconstruction by the corresponding dictionary), it will be regarded as a malicious data point.

Figure 6: Adversarial detection rate of the latent and input defender modules as a function of the perturbation level for (a) $SP = 0.1\%$, (b) $SP = 1\%$, and (c) $SP = 5\%$. In this experiment, the FGS attack is used to generate adversarial samples and the perturbation is adjusted by changing its specific attack parameter $\epsilon$.

Figure 6 demonstrates the impact of the perturbation level on the pertinent adversarial detection rate for three different security parameters (cut-off thresholds). In this experiment, we have considered the FGS attack with different $\epsilon$ values on the MNIST benchmark. As shown, the use of input dictionaries facilitates automated detection of adversarial samples with relatively high perturbation (e.g., $\epsilon > 0.25$), while the latent defender module is sufficient to effectively distinguish malicious samples even with very small perturbations. We extensively evaluate the impact of the security parameter on the ultimate system performance for various benchmarks in Section 4. Table 2 in Appendix A summarizes the DL model topology used in each benchmark. The latent defender module (checkpoint) is inserted at the second-to-last layer.

4 Experiments

We evaluate the proposed PCL methodology on three canonical machine learning datasets: MNIST (LeCun et al.
(1998b)), CIFAR10 (Krizhevsky & Hinton (2009)), and a sub-set of ImageNet (Deng et al. (2009)) consisting of 10 different classes. A detailed summary of the neural network architectures used in each benchmark, along with the specific parameters used for various attacks, is provided in Appendix A. We leveraged the attack benchmark sets available at (Nicolas Papernot (2017)) for evaluation of different state-of-the-art attacks, including the FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks.

Figure 7: Impact of the security parameter on the ultimate performance of the PCL module. The false positive rate is defined as the ratio of legitimate test samples that are mistaken for adversarial samples by the defender modules. The true positive rate is defined as the ratio of adversarial samples correctly classified as malicious data points over the total number of malicious samples. Note that the scales for the false positive and true positive axes are different. The false positive rate is computed by considering legitimate samples that are correctly classified by the victim model.

In our proposed countermeasure, the input and latent defenders are jointly considered to detect adversarial samples. In particular, we treat an input as an adversarial sample if either of the latent or input checkpointing modules raises an alarm signal. Figure 7 demonstrates the impact of the security parameter on the ultimate false positive and true positive rates for the MNIST benchmark. As shown, a higher security parameter results in a higher true positive detection rate, while it also increases the risk of labeling legitimate samples as possibly malicious ones.

Figure 8: ROC performance curve of the PCL methodology against the FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks. The diagonal line indicates the trajectory obtained by random prediction.

To consider the joint decision metric for each application and attack model, we evaluate the false positive and true positive rates and present the pertinent Receiver Operating Characteristic (ROC) curves in Figure 8. The ROC curves are established as follows: first, we consider a latent defender and change the security parameter (SP) in the range of $[0\% - 100\%]$ and evaluate the FP and TP rates for each security parameter, which gives us the dashed blue ROC curves. Next, we consider an input defender and modify the detection policy: a sample is considered to be malicious if either of the input or latent defenders raises an alarm flag. The ROC curve for this joint defense policy is shown as the green curves in Figure 8. The gap between the dashed blue curve and the green curve indicates the effect of the input defender on the overall decision policy; as can be seen, the input defender has more impact for the FGS attack. This is compatible with our intuition since, compared to the other three attack methods, the FGS algorithm induces more perturbation to generate adversarial samples.

Table 1: PCL performance against different attack methodologies for the MNIST, CIFAR10, and ImageNet benchmarks. The reported numbers correspond to the pertinent false positives for achieving particular detection rates in each scenario. The JSMA attack for the ImageNet benchmark is computationally expensive (e.g., it took more than 20min to generate one adversarial sample on an NVIDIA TITAN Xp GPU). As such, we could not generate the adversarial samples of this attack using the JSMA library provided by (Nicolas Papernot (2017)).
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>MNIST Detection Rate</th>
<th>CIFAR10 Detection Rate</th>
<th>ImageNet Detection Rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>Attack</td>
<td>90% 95% 98% 99%</td>
<td>90% 95% 98% 99%</td>
<td>90% 95% 98% 99%</td>
</tr>
<tr>
<td>FGS</td>
<td>1.1% 4.2% 12.4% 2.84%</td>
<td>8.1% 21.1% 62.9% 62.9%</td>
<td>14.2% 26.8% 60.7% 60.7%</td>
</tr>
<tr>
<td>JSMA</td>
<td>2.1% 4.2% 8.0% 12.4%</td>
<td>8.1% 14.9% 21.1% 33.0%</td>
<td>- - - -</td>
</tr>
<tr>
<td>Deepfool</td>
<td>2.8% 5.9% 8% 12.4%</td>
<td>12% 17.9% 33.0% 40.8%</td>
<td>8.1% 8.1% 14.2% 21.5%</td>
</tr>
<tr>
<td>Carlini&amp;WagnerL2</td>
<td>1.6% 2.1% 2.8% 4.2%</td>
<td>14.9% 27.3% 40.8% 62.9%</td>
<td>8.1% 9.6% 14.2% 21.5%</td>
</tr>
</tbody>
</table>

We summarize the performance of the PCL methodology against each of the FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks for MNIST, CIFAR10, and ImageNet in Table 1. The reported numbers in this table are gathered as follows: we consider a few points on the green ROC curve (marked on Figure 8), which correspond to certain TP rates (i.e., 90%, 95%, 98%, and 99%), then report the FP rates for these points. In all our experiments, the use of only one latent defender module to checkpoint the second-to-last layer of the pertinent victim model was enough to prevent adversarial samples generated by the existing state-of-the-art attacks. Please refer to Appendix B for the complete set of ROC curves for the CIFAR10 and ImageNet benchmarks.

5 Discussion

Figure 9 demonstrates an example of the adversarial confusion matrices for victim neural networks with and without using parallel checkpointing learners. In this example, we set the security parameter to only 1%. As shown, the adversarial samples generated for the victim model are not transferred to the checkpointing modules. In fact, the proposed PCL approach can effectively remove/detect adversarial samples by characterizing the rarely explored sub-spaces and looking into the statistical density of data points in the pertinent space.

Figure 9: Example adversarial confusion matrix (a) without the PCL defense mechanism, and (b) with the PCL defense and a security parameter of 1%. (c) Example adversarial samples for which accurate detection is hard due to the closeness of decision boundaries for the corresponding classes.

Note that the remaining adversarial samples that are not detected in this experiment are crafted from legitimate samples that are inherently hard to classify even by a human observer due to the closeness of the decision boundaries corresponding to such classes. For instance, in the MNIST application, such adversarial samples mostly belong to class 5 misclassified as class 3, or class 4 misclassified as class 9. Such misclassifications reflect the model approximation error, which is well understood to stem from the statistical nature of the models. As such, a more precise definition of adversarial samples is required to distinguish malicious samples from those that simply lie near the decision boundaries. We emphasize that the PCL defender models are trained in an unsupervised setting independent of the attack strategy, meaning that no adversarial sample is used to train the defender models. This is particularly important as it corroborates the effectiveness of the proposed countermeasure in the face of generic attack scenarios, including possible future adversarial DL algorithms.
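To make the per-checkpoint decision rule concrete, the sketch below scores a latent feature vector against a class-conditional Gaussian and flags it when its likelihood falls below the percentile cut-off implied by the security parameter. This is an illustrative Python sketch, not the paper's released API: all function and variable names are assumptions, and a plain full-covariance Gaussian fit stands in for the structured HDDA decomposition described above.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_model(latent_train, sp):
    """Fit a per-class Gaussian and an SP-percentile likelihood cut-off.

    latent_train: (n_samples, n_features) latent features of one class.
    sp: security parameter in [0, 100]; e.g. sp=10 lets roughly 10% of the
        legitimate training samples fall outside the accepted region.
    """
    mu = latent_train.mean(axis=0)
    cov = np.cov(latent_train, rowvar=False) + 1e-6 * np.eye(latent_train.shape[1])
    log_likelihoods = multivariate_normal(mu, cov).logpdf(latent_train)
    threshold = np.percentile(log_likelihoods, sp)  # sp% of training data scores below this
    return mu, cov, threshold

def is_adversarial(phi_x, mu, cov, threshold):
    """Raise an alarm for a feature vector whose likelihood is atypically low."""
    return multivariate_normal(mu, cov).logpdf(phi_x) < threshold
```

A higher `sp` tightens the accepted region, mirroring the trade-off between true positive and false positive rates shown in Figures 7 and 8.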
Nevertheless, one might question the effectiveness of the proposed approach against adaptive attack algorithms that target the defender modules. A comprehensive study of possible adaptive attack algorithms remains to be performed if such attacks are developed in the future. We emphasize that, thus far, we have been able to significantly thwart all the existing attacks with only one checkpoint model approximating the data distribution in the second-to-last layer of the corresponding models. Our proposed PCL methodology, however, provides a more generic approach that can be adapted/modified against potential future attacks by training parallel disjoint models (with diverse objectives/parameters) to further strengthen the defense.

6 Related work

In response to the various adversarial attack methodologies proposed in the literature (e.g., Goodfellow et al. (2014); Papernot et al. (2016a); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017)), several research attempts have been made to design DL strategies that are more robust in the face of adversarial examples. The existing countermeasures can be classified into two distinct categories:

(i) Supervised strategies, which aim to improve the generalization of the learning models by incorporating the noise-corrupted version of inputs as training samples (Jin et al. (2015); Gu & Rigazio (2014)) and/or injecting adversarial examples generated by different attacks into the DL training phase (Huang et al. (2015); Shaham et al. (2015); Goodfellow et al. (2014); Szegedy et al. (2013)). The proposed defense mechanisms in this category are particularly tailored for specific perturbation patterns and can only partially prevent adversarial samples generated by other attack scenarios (with different perturbation distributions) from being effective, as shown in (Gu & Rigazio (2014)).

(ii) Unsupervised approaches, which aim to smooth out the underlying gradient space (decision boundaries) by incorporating a smoothness penalty (Miyato et al. (2015); Carlini & Wagner (2017)) as a regularization term in the loss function or by compressing the neural network by removing the nuisance variables (Papernot et al. (2016b)). These works have largely remained oblivious to the pertinent data density in the latent space. In particular, they have been developed based on an implicit assumption that the existence of adversarial samples is due to the piece-wise linear behavior of decision boundaries (obtained by gradient descent) in the high-dimensional space. As such, their integrity can be jeopardized by considering different perturbations in the input space and evaluating the same attack on various perturbed data points so as to pass even the smoothed decision boundaries, as shown in (Carlini & Wagner (2016)).

To the best of our knowledge, our proposed PCL methodology is the first unsupervised countermeasure that is able to detect DL adversarial samples generated by the existing state-of-the-art attacks. The PCL method does not assume any particular attack strategy and/or perturbation pattern. This is particularly important as it demonstrates the generalizability of the proposed approach in the face of adversarial attacks.

7 Conclusion

This paper proposes a novel end-to-end methodology for characterizing and thwarting the adversarial space of DL models. We introduce the concept of parallel checkpointing learners as a viable countermeasure to significantly reduce the risk of integrity attacks.
The proposed PCL methodology explicitly characterizes statistical properties of the features within different layers of a neural network by learning a set of complementary dictionaries and corresponding probability density functions. The effectiveness of the PCL approach is evaluated against the state-of-the-art attack models including FGS, JSMA, Deepfool, and Carlini&WagnerL2. Proof-of-concept experiments for analyzing various data collections including MNIST, CIFAR10, and a subset of the ImageNet dataset corroborate successful detection of adversarial samples with relatively small false-positive rates. We devise an open-source API for the proposed countermeasure and invite the community to attempt attacks against the provided benchmarks in the form of a challenge. REFERENCES APPENDIX A Table 2 presents the neural network architectures for the victim models used in each benchmark. The network for MNIST is the popular LeNet-3 architecture, the CIFAR-10 architecture is taken from (Ciregan et al. (2012)), and the ImageNet model is inspired by the AlexNet architecture (Krizhevsky et al. (2012)). Table 2: Baseline (victim) network architectures for evaluated benchmarks. Here, $128C3(2)$ denotes a convolutional layer with 128 maps and $3 \times 3$ filters applied with a stride of 2, $MP3(2)$ indicates a max-pooling layer over regions of size $3 \times 3$ and stride of 2, and $300FC$ is a fully-connected layer consisting of 300 neurons. All convolution and fully connected layers (except the last layer) are followed by ReLU activation. A Softmax activation is applied to the last layer of each network. <table> <thead> <tr> <th>Benchmark</th> <th>Architecture</th> </tr> </thead> <tbody> <tr> <td>MNIST</td> <td>$784 - 300FC - 100FC - 10FC$</td> </tr> <tr> <td>CIFAR10</td> <td>$3 \times 32 \times 32 - 300C3(1) - MP2(2) - 300C2(1) - MP2(2) - 300C3(1) - MP2(2) - 300FC - 100FC - 10FC$</td> </tr> <tr> <td>ImageNet</td> <td>$3 \times 224 \times 224 - 96C11(4) - 256C5(1) - MP3(2) - 128C3(1) - MP3(2) - 128C3(1) - MP3(2) - 1024FC - 1024FC - 10FC$</td> </tr> </tbody> </table> We visually evaluate the perturbed examples to determine the attack parameters (e.g., perturbation level $\epsilon$ and $n_{iters}$) such that the perturbations cannot be recognized by a human observer. Table 3 details the parameters used for the realization of different attack algorithms. The JSMA attack for the ImageNet benchmark is computationally expensive (e.g., it took more than 20min to generate one adversarial sample on an NVIDIA TITAN Xp GPU). As such, we could not generate the adversarial samples of this attack using the JSMA library provided by (Nicolas Papernot (2017)). Table 3: Details of attack algorithms for each evaluated application. The FGS method (Goodfellow et al. (2014)) is characterized with a single $\epsilon$ parameter. The JSMA attack (Papernot et al. (2016a)) has two parameters: $\gamma$ specifies the maximum perturbation and $\theta$ denotes the value added to each selected feature. The Deepfool attack (Moosavi-Dezfooli et al. (2016)) is characterized by the number of iterative updates, which we denote by $n_{iters}$ in this table. The parameters for the Carlini&WagnerL2 attack (Carlini & Wagner (2017)) are set based on the experiments in the original paper. In this table, “C” denotes the confidence, “LR” is the learning rate, “steps” is the number of binary search steps, and “iterations” stands for the maximum number of iterations. 
<table>
<thead>
<tr>
<th>Application</th>
<th>Attack</th>
<th>Attack Parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>MNIST</td>
<td>FGS</td>
<td>$\epsilon \in \{0.05, 0.1, 0.2, 0.3, 0.4, 0.5\}$</td>
</tr>
<tr>
<td></td>
<td>JSMA</td>
<td>$\gamma = 5\%$</td>
</tr>
<tr>
<td></td>
<td></td>
<td>$\theta \in \{0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1\}$</td>
</tr>
<tr>
<td></td>
<td>Deepfool</td>
<td>$n_{iters} \in \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$</td>
</tr>
<tr>
<td></td>
<td>Carlini&amp;WagnerL2</td>
<td>$C = 0$, LR = 0.1, steps = 20, iterations = 40</td>
</tr>
<tr>
<td>CIFAR10</td>
<td>FGS</td>
<td>$\epsilon \in \{0.05, 0.1, 0.2, 0.3, 0.4, 0.5\}$</td>
</tr>
<tr>
<td></td>
<td>JSMA</td>
<td>$\gamma = 5\%$</td>
</tr>
<tr>
<td></td>
<td></td>
<td>$\theta \in \{0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1\}$</td>
</tr>
<tr>
<td></td>
<td>Deepfool</td>
<td>$n_{iters} \in \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$</td>
</tr>
<tr>
<td></td>
<td>Carlini&amp;WagnerL2</td>
<td>$C = 0$, LR = 0.1, steps = 20, iterations = 40</td>
</tr>
<tr>
<td>ImageNet</td>
<td>FGS</td>
<td>$\epsilon \in \{0.01, 0.05\}$</td>
</tr>
<tr>
<td></td>
<td>JSMA</td>
<td>Attack not successful</td>
</tr>
<tr>
<td></td>
<td>Deepfool</td>
<td>$n_{iters} \in \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$</td>
</tr>
<tr>
<td></td>
<td>Carlini&amp;WagnerL2</td>
<td>$C = 0$, LR = 0.1, steps = 20, iterations = 40</td>
</tr>
</tbody>
</table>

APPENDIX B

Corresponding ROC curves for PCL performance against FGS, JSMA, Deepfool, and Carlini&WagnerL2 attacks in the CIFAR10 and ImageNet benchmarks.

Figure 10: True positive versus false positive rates in CIFAR10 (top row) and ImageNet (bottom row) benchmarks for adversarial samples generated by FGS (Goodfellow et al. (2014)), JSMA (Papernot et al. (2016a)), Deepfool (Moosavi-Dezfooli et al. (2016)), and Carlini&WagnerL2 (Carlini & Wagner (2017)) attacks.
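For reference, the $\epsilon$ values listed in Table 3 scale the sign of the input gradient in the FGS method. The following is a minimal, framework-agnostic Python sketch of that perturbation; it assumes a differentiable loss and an input-gradient routine supplied by whatever framework hosts the model, and is not taken from the attack library cited above.

```python
import numpy as np

def fgs_perturb(x, input_gradient, epsilon):
    """Fast Gradient Sign perturbation.

    x: input image scaled to [0, 1].
    input_gradient: gradient of the training loss with respect to x
                    (computed by the hosting framework).
    epsilon: perturbation magnitude, e.g. one of the values in Table 3.
    """
    x_adv = x + epsilon * np.sign(input_gradient)
    # Keep the adversarial image in the valid pixel range.
    return np.clip(x_adv, 0.0, 1.0)
```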
It's time now to get into some real database programming with the .NET Framework components. In this chapter, you'll explore the ActiveX Data Objects (ADO).NET base classes. ADO.NET, along with the XML namespace, is a core part of Microsoft's standard for data access and storage. ADO.NET components can access a variety of data sources, including Access and SQL Server databases, as well as non-Microsoft databases such as Oracle.

Although ADO.NET is a lot different from classic ADO, you should be able to readily transfer your knowledge to the new .NET platform. Throughout this chapter, we make comparisons to ADO 2.x objects to help you make the distinction between the two technologies. For those of you who have programmed with ADO 2.x, the ADO.NET interfaces will not seem all that unfamiliar. Granted, a few mechanisms, such as navigation and storage, have changed, but you will quickly learn how to take advantage of these new elements.

ADO.NET opens up a whole new world of data access, giving you the power to control the changes you make to your data. Although native OLE DB/ADO provides a common interface for universal storage, a lot of the data activity is hidden from you. With client-side disconnected RecordSets, you can't control how your updates occur. They just happen "magically." ADO.NET opens that black box, giving you more granularity with your data manipulations. ADO 2.x is about common data access. ADO.NET extends this model and factors out data storage from common data access. Factoring out functionality makes it easier for you to understand how ADO.NET components work. Each ADO.NET component has its own specialty, unlike the RecordSet, which is a jack-of-all-trades. The RecordSet could be disconnected or stateful; it could be read-only or updateable; it could be stored on the client or on the server—it is multifaceted. Not only do all these mechanisms bloat the RecordSet with functionality you might never use, they also force you to write code to anticipate every possible chameleon-like metamorphosis of the RecordSet. In ADO.NET, you always know what to expect from your data access objects, and this lets you streamline your code with specific functionality and greater control.

Although other chapters are dedicated to XML (Chapter 18, "Using XML and VB .NET," and Chapter 19, "Using XML in Web Applications"), we must touch upon XML in our discussion of ADO.NET. In the .NET Framework, there is a strong synergy between ADO.NET and XML. Although the XML stack doesn't technically fall under ADO.NET, XML and ADO.NET belong to the same architecture. ADO.NET persists data as XML; there is no other native persistence mechanism for data and schema. ADO.NET stores data as XML files, and schema as XSD files.

There are many advantages to using XML. XML is optimized for disconnected data access, and ADO.NET leverages these optimizations to provide more scalability. To scale well, you can't maintain state and hold resources on your database server. The disconnected nature of ADO.NET and XML provides for high scalability. In addition, because XML is a text-based standard, it's simple to pass it over HTTP and through firewalls. Classic ADO uses a binary format to pass data. Because ADO.NET uses XML, a ubiquitous standard, more platforms and applications will be able to consume your data. By using the XML model, ADO.NET provides a complete separation between the data and the data presentation.
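To see this XML persistence in action, the short sketch below writes a DataSet's contents and schema to disk and loads them back. It is a minimal illustration rather than code from this chapter's sample solutions; the file names and the dsCustomers DataSet are assumed placeholders for a DataSet you have already filled.

```vbnet
' Assumes dsCustomers is a DataSet that has already been filled.
' Persist the data as an XML file and the schema as an XSD file.
dsCustomers.WriteXml("Customers.xml")
dsCustomers.WriteXmlSchema("Customers.xsd")

' Later, rebuild an equivalent DataSet from those same files.
Dim dsReloaded As New DataSet()
dsReloaded.ReadXmlSchema("Customers.xsd")
dsReloaded.ReadXml("Customers.xml")
```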
ADO.NET takes advantage of the way XML splits the data into an XML document, and the schema into an XSD file. By the end of this chapter, you should be able to answer the following questions:

- What are .NET data providers?
- What are the ADO.NET classes?
- What are the appropriate conditions for using a DataReader versus a DataSet?
- How does OLE DB fit into the picture?
- What are the advantages of using ADO.NET over classic ADO?
- How do you retrieve and update databases from ADO.NET?
- How does XML integration go beyond the simple representation of data as XML?

Let's begin by looking "under the hood" and examining the components of the ADO.NET stack.

**How Does ADO.NET Work?**

ADO.NET base classes enable you to manipulate data from many data sources, such as SQL Server, Exchange, and Active Directory. ADO.NET leverages .NET data providers to connect to a database, execute commands, and retrieve results. The ADO.NET object model exposes very flexible components, which in turn expose their own properties and methods, and recognize events. In this chapter, you'll explore the objects of the ADO.NET object model and the role of each object in establishing a connection to a database and manipulating its tables.

IS OLE DB DEAD?

Not quite. Although you can still use OLE DB data providers with ADO.NET, you should try to use the managed .NET data providers whenever possible. If you use native OLE DB, your .NET code will suffer because it's forced to go through the COM interoperability layer in order to get to OLE DB. This leads to performance degradation. Native .NET providers, such as the System.Data.SqlClient library, skip the OLE DB layer entirely, making their calls directly to the native API of the database server.

However, this doesn't mean that you should avoid the OLE DB .NET data providers completely. If you are using anything other than SQL Server 7 or 2000, you might not have another choice. Although you will experience performance gains with the SQL Server .NET data provider, the OLE DB .NET data provider compares favorably against the traditional ADO/OLE DB providers that you used with ADO 2.x. So don't hold back from migrating your non-managed applications to the .NET Framework over performance concerns.

In addition, there are other compelling reasons for using the OLE DB .NET providers. Many OLE DB providers are very mature and support a great deal more functionality than you would get from the newer SQL Server .NET data provider, which exposes only a subset of this full functionality. In addition, OLE DB is still the way to go for universal data access across disparate data sources. In fact, the SQL Server distributed process relies on OLE DB to manage joins across heterogeneous data sources.

Another caveat to the SQL Server .NET data provider is that it is tightly coupled to its data source. Although this enhances performance, it is somewhat limiting in terms of portability to other data sources. When you use the OLE DB providers, you can change the connection string on the fly, using declarative code such as COM+ constructor strings. This loose coupling enables you to easily port your application from an SQL Server back-end to an Oracle back-end without recompiling any of your code, just by swapping out the connection string in your COM+ catalog.

Keep in mind, the only native OLE DB provider types that are supported with ADO.NET are SQLOLEDB for SQL Server, MSDAORA for Oracle, and Microsoft.Jet.OLEDB.4.0 for the Microsoft Jet engine.
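For illustration, connection strings for two of those providers look roughly like the following when used through the OleDb .NET data provider. These are sketches only; the server name, catalog, and file path are placeholders you would adjust for your own environment.

```vbnet
' SQL Server through the SQLOLEDB provider (illustrative values).
Dim connSql As New OleDbConnection( _
    "Provider=SQLOLEDB;Data Source=(local);" & _
    "Initial Catalog=Northwind;Integrated Security=SSPI;")

' A Microsoft Access database through the Jet engine (illustrative path).
Dim connJet As New OleDbConnection( _
    "Provider=Microsoft.Jet.OLEDB.4.0;" & _
    "Data Source=C:\Data\Northwind.mdb;")
```

Both declarations assume an Imports System.Data.OleDb statement at the top of the module.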
If you are so inclined, you can write your own .NET data providers for any data source by inheriting from the System.Data namespace.

At this time, the .NET Framework ships with only the SQL Server .NET data provider for data access within the .NET runtime. Microsoft expects the support for .NET data providers and the number of .NET data providers to increase significantly. (In fact, the ODBC.NET data provider is available for download on Microsoft's website.) A major design goal of ADO.NET is to synergize the native and managed interfaces, advancing both models in tandem.

You can find the ADO.NET objects within the System.Data namespace. When you create a new VB .NET project, a reference to the System.Data namespace will be automatically added for you, as you can see in Figure 14.1.

**FIGURE 14.1:** To use ADO.NET, reference the System.Data namespace.

To comfortably use the ADO.NET objects in an application, you should use the `Imports` statement. By doing so, you can declare ADO.NET variables without having to fully qualify them. You could type the following `Imports` statement at the top of your solution:

```vbnet
Imports System.Data.SqlClient
```

After this, you can work with the SqlClient ADO.NET objects without having to fully qualify the class names. If you want to dimension the SqlDataAdapter, you would type the following short declaration:

```vbnet
Dim dsMyAdapter As New SqlDataAdapter
```

Otherwise, you would have to type the full namespace, as in:

```vbnet
Dim dsMyAdapter As New System.Data.SqlClient.SqlDataAdapter
```

Alternatively, you can use the visual database tools to automatically generate your ADO.NET code for you. The various wizards that come with VS .NET provide the easiest way to work with the ADO.NET objects. Nevertheless, before you use these tools to build production systems, you should understand how ADO.NET works programmatically. In this chapter, we don't focus too much on the visual database tools, but instead concentrate on the code behind the tools. By understanding how to program against the ADO.NET object model, you will have more power and flexibility with your data access code.

**Using the ADO.NET Object Model**

You can think of ADO.NET as being composed of two major parts: .NET data providers and data storage. Respectively, these fall under the connected and disconnected models for data access and presentation. .NET data providers, or managed providers, interact natively with the database. Managed providers are quite similar to the OLE DB providers or ODBC drivers that you most likely have worked with in the past.

The .NET data provider classes are optimized for fast, read-only, and forward-only retrieval of data. The managed providers talk to the database by using a fast data stream (similar to a file stream). This is the quickest way to pull read-only data off the wire, because you minimize buffering and memory overhead.

If you need to work with connections, transactions, or locks, you would use the managed providers, not the DataSet. The DataSet is completely disconnected from the database and has no knowledge of transactions, locks, or anything else that interacts with the database.

Five core objects form the foundation of the ADO.NET object model, as you see listed in Table 14.1. Microsoft moves as much of the provider model as possible into the managed space.
The Connection, Command, DataReader, and DataAdapter belong to the .NET data provider, whereas the DataSet is part of the disconnected data storage mechanism.

**TABLE 14.1: ADO.NET Core Components**

<table>
<thead>
<tr>
<th><strong>Object</strong></th>
<th><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Connection</td>
<td>Creates a connection to your data source.</td>
</tr>
<tr>
<td>Command</td>
<td>Provides access to commands to execute against your data source.</td>
</tr>
<tr>
<td>DataReader</td>
<td>Provides a read-only, forward-only stream containing your data.</td>
</tr>
<tr>
<td>DataSet</td>
<td>Provides an in-memory representation of your data source(s).</td>
</tr>
<tr>
<td>DataAdapter</td>
<td>Serves as an ambassador between your DataSet and data source, providing the mapping instructions between the two.</td>
</tr>
</tbody>
</table>

Figure 14.2 summarizes the ADO.NET object model. If you're familiar with classic ADO, you'll see that ADO.NET completely factors out the data source from the actual data. Each object exposes a large number of properties and methods, which are discussed in this and following chapters.

**The ADO.NET Framework**

**FIGURE 14.2:** The ADO.NET Framework

**NOTE** If you have worked with collection objects, this experience will be a bonus to programming with ADO.NET. ADO.NET contains a collection-centric object model, which makes programming easy if you already know how to work with collections.

Four core objects belong to .NET data providers, within the ADO.NET managed provider architecture: the Connection, Command, DataReader, and DataAdapter objects. The Connection object is the simplest one, because its role is to establish a connection to the database. The Command object exposes a Parameters collection, which contains information about the parameters of the command to be executed. If you've worked with ADO 2.x, the Connection and Command objects should seem familiar to you. The DataReader object provides fast access to read-only, forward-only data, which is reminiscent of a read-only, forward-only ADO RecordSet. The DataAdapter object contains Command objects that enable you to map specific actions to your data source. The DataAdapter is a mechanism for bridging the managed providers with the disconnected DataSets.

The DataSet object is not part of the ADO.NET managed provider architecture. The DataSet exposes a collection of DataTables, which in turn contain both DataColumn and DataRow collections. The DataTables collection can be used in conjunction with the DataRelation collection to create relational data structures.

First, you will learn about the connected layer by using the .NET data provider objects and touching briefly on the DataSet object. Next, you will explore the disconnected layer and examine the DataSet object in detail.

**NOTE** Although there are two different namespaces, one for OleDb and the other for the SqlClient, they are quite similar in terms of their classes and syntax. As we explain the object model, we use generic terms, such as Connection, rather than SqlConnection. Because this book focuses on SQL Server development, we gear our examples toward SQL Server data access and manipulation.

In the following sections, you'll look at the five major objects of ADO.NET in detail. You'll examine the basic properties and methods you'll need to manipulate databases, and you'll find examples of how to use each object. ADO.NET objects also recognize events.
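As a quick preview of how these five objects fit together, the sketch below shows the connected path (Connection, Command, DataReader) next to the disconnected path (DataAdapter filling a DataSet). It is an illustrative sketch rather than sample code from the book's solutions, and it assumes Imports statements for System.Data and System.Data.SqlClient at the top of the module; the following sections build up each piece in detail.

```vbnet
' Connected path: stream rows with a DataReader.
Dim conn As New SqlConnection("data source=(local);" & _
    "initial catalog=Northwind;integrated security=SSPI;")
Dim cmd As New SqlCommand("SELECT CustomerID, CompanyName FROM Customers", conn)
conn.Open()
Dim reader As SqlDataReader = cmd.ExecuteReader()
' ... read the rows here ...
reader.Close()
conn.Close()

' Disconnected path: cache rows in a DataSet through a DataAdapter.
Dim da As New SqlDataAdapter("SELECT CustomerID, CompanyName FROM Customers", _
    "data source=(local);initial catalog=Northwind;integrated security=SSPI;")
Dim ds As New DataSet()
da.Fill(ds, "Customers")   ' The DataAdapter opens and closes its connection for you.
```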
**The Connection Object**

Both the SqlConnection and OleDbConnection classes implement the IDbConnection interface. The Connection object establishes a connection to a database, which is then used to execute commands against the database or retrieve a DataReader. You use the SqlConnection object when you are working with SQL Server, and the OleDbConnection for all other data sources. The ConnectionString property is the most important property of the Connection object. This string uses name-value pairs to specify the database you want to connect to. To establish a connection through a Connection object, call its Open() method. When you no longer need the connection, call the Close() method to close it. To find out whether a Connection object is open, use its State property.

**WHAT HAPPENED TO YOUR ADO CURSORS?**

One big difference between classic ADO and ADO.NET is the way they handle cursors. In ADO 2.x, you have the option to create client- or server-side cursors, which you can set by using the CursorLocation property of the Connection object. ADO.NET no longer explicitly assigns cursors. This is a good thing. Under classic ADO, many times programmers accidentally specify expensive server-side cursors, when they really mean to use the client-side cursors. These mistakes occur because the cursors, which sit in the COM+ server, are also considered client-side cursors. Using server-side cursors is something you should never do under the disconnected, n-tier design.

You see, ADO 2.x wasn't originally designed for disconnected and remote data access. The CursorLocation property is used to handle disconnected and connected access within the same architecture. ADO.NET advances this concept by completely separating the connected and disconnected mechanisms into managed providers and DataSets, respectively.

In classic ADO, after you specify your cursor location, you have several choices in the type of cursor to create. You could create a static cursor, which is a disconnected, in-memory representation of your database. In addition, you could extend this static cursor into a forward-only, read-only cursor for quick database retrieval.

Under the ADO.NET architecture, there are no updateable server-side cursors. This prevents you from maintaining state for too long on your database server. Even though the DataReader does maintain state on the server, it retrieves the data rapidly as a stream. The ADO.NET DataReader works much like an ADO read-only, server-side cursor. You can think of an ADO.NET DataSet as analogous to an ADO client-side, static cursor. As you can see, you don't lose any of the ADO disconnected cursor functionality with ADO.NET; it's just architected differently.

**Connecting to a Database**

The first step to using ADO.NET is to connect to a data source, such as a database. Using the Connection object, you tell ADO.NET which database you want to contact, supply your username and password (so that the DBMS can grant you access to the database and set the appropriate privileges), and, possibly, set more options. The Connection object is your gateway to the database, and all the operations you perform against the database must go through this gateway. The Connection object encapsulates all the functionality of a data link and has the same properties. Unlike data links, however, Connection objects can be accessed from within your VB .NET code. They expose a number of properties and methods that enable you to manipulate your connection from within your code.
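Before walking through the Northwind example, here is a minimal sketch of the open-use-close pattern this section builds toward, including the State check and a Finally block so the connection is released even if an exception occurs. The connection string values are placeholders, and the sketch is not part of the chapter's sample solutions; the walkthrough that follows builds up the same pattern step by step.

```vbnet
Dim connNorthwind As New SqlConnection("data source=(local);" & _
    "initial catalog=Northwind;integrated security=SSPI;")
Try
    connNorthwind.Open()
    If connNorthwind.State = ConnectionState.Open Then
        ' ... execute commands against the database here ...
    End If
Finally
    ' Always close the connection so it returns to the pool.
    connNorthwind.Close()
End Try
```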
NOTE You don't have to type this code by hand. The code for all the examples in this chapter is located on the Sybex website. You can find many of this chapter's code examples in the solution file Working with ADO.NET.sln. Code related to the ADO.NET Connection object is listed behind the Connect To Northwind button on the startup form.

Let's experiment with creating a connection to the Northwind database. Create a new Windows Application solution and place a command button on the Form; name it Connect to Northwind. Add the Imports statement for the System.Data.SqlClient namespace at the top of the form module. Now you can declare a Connection object with the following statement:

```vbnet
Dim connNorthwind As New System.Data.SqlClient.SqlConnection()
```

As soon as you type the period after System.Data.SqlClient, you will see a list with all the objects exposed by the System.Data.SqlClient component, and you can select the one you want with the arrow keys. Declare the connNorthwind object in the button's Click event.

NOTE All projects available on the Sybex website use the setting (local) for the data source. In other words, we're assuming you have SQL Server installed on the local machine. Alternatively, you could use localhost for the data source value.

**The ConnectionString Property**

The ConnectionString property is a long string with several attributes separated by semicolons. Add the following line to your button's Click event to set the connection:

```vbnet
connNorthwind.ConnectionString = "data source=(local);" & _
    "initial catalog=Northwind;integrated security=SSPI;"
```

Replace the data source value with the name of your SQL Server, or keep the local setting if you are running SQL Server on the same machine. If you aren't using Windows NT integrated security, then set your user ID and password like so:

```vbnet
connNorthwind.ConnectionString = "data source=(local);" & _
    "initial catalog=Northwind;user ID=sa;password=xxx"
```

TIP Some of the names in the connection string also go by aliases. You can use Server instead of data source to specify your SQL Server. Instead of initial catalog, you can specify database.

Those of you who have worked with ADO 2.x might notice something missing from the connection string: the provider value. Because you are using the SqlClient namespace and the .NET Framework, you do not need to specify an OLE DB provider. If you were using the OleDb namespace, then you would specify your provider name-value pair, such as Provider=SQLOLEDB.1.

OVERLOADING THE CONNECTION OBJECT CONSTRUCTOR

One of the nice things about the .NET Framework is that it supports constructor arguments by using overloaded constructors. You might find this useful for creating your ADO.NET objects, such as your database Connection. As a shortcut, instead of using the ConnectionString property, you can pass the string right into the constructor, as such:

```vbnet
Dim connNorthwind As New SqlConnection( _
    "data source=localhost;initial catalog=Northwind;" & _
    "user ID=sa;password=xxx")
```

Or you could build the connection string ahead of time and pass it to the constructor, as in the following:

```vbnet
Dim myConnectString As String = "data source=localhost;" & _
    "initial catalog=Northwind;user ID=sa;password=xxx"
```

You have just established a connection to the SQL Server Northwind database. You can also do this visually from the Server Explorer in Visual Studio .NET. The ConnectionString property of the Connection object contains all the information required by the provider to establish a connection to the database.
As you can see, it contains all the information that you see in the Connection properties tab when you use the visual tools. Keep in mind that you can also create connections implicitly by using the DataAdapter object. You will learn how to do this when we discuss the DataAdapter later in this section. In practice, you'll never have to build connection strings from scratch. You can use the Server Explorer to add a new connection, or use the appropriate ADO.NET data component wizards. These visual tools will automatically build this string for you, which you can see in the Properties window of your Connection component.

**TIP** The connection pertains more to the database server than to the actual database itself. You can change the database for an open SqlConnection by passing the name of the new database to the ChangeDatabase() method.

**The Open() Method**

After you have specified the ConnectionString property of the Connection object, you must call the Open() method to establish a connection to the database. You must first specify the ConnectionString property and then call the Open() method without any arguments, as shown here (connNorthwind is the name of a Connection object):

```vbnet
connNorthwind.Open()
```

Unlike ADO 2.x, the Open() method doesn't take any optional parameters. You can't change this feature because the Open() method is not overridable.

**The Close() Method**

Use the Connection object's Close() method to close an open connection. Connection pooling provides the ability to improve your performance by reusing a connection from the pool if an appropriate one is available. The OleDbConnection object will automatically pool your connections for you. If you have connection pooling enabled, the connection is not actually released, but remains alive in memory and can be used again later. Any pending transactions are rolled back.

NOTE Alternatively, you could call the Dispose() method, which also closes the connection: connNorthwind.Dispose(). You must call the Close() or Dispose() method, or else the connection will not be released back to the connection pool.

The .NET garbage collector will periodically remove memory references for expired or invalid connections within a pool. This type of lifetime management improves the performance of your applications because you don't have to incur expensive shutdown costs. However, this mentality is dangerous with objects that tie down server resources. Generational garbage collection polls for objects that have been recently created, only periodically checking for those objects that have been around longer. Connections hold resources on your server, and because you don't get deterministic cleanup by the garbage collector, you must make sure you explicitly close the connections that you open. The same goes for the DataReader, which also holds resources on the database server.

**The Command Object**

After you instantiate your connection, you can use the Command object to execute commands that retrieve data from your data source. The Command object carries information about the command to be executed. This command is specified with the Command object's CommandText property. The CommandText property can specify a table name, an SQL statement, or the name of an SQL Server stored procedure. To specify how ADO.NET will interpret the command specified with the CommandText property, you must assign the proper constant to the CommandType property.
The CommandType property recognizes the enumerated values in the CommandType structure, as shown in Table 14.2. **TABLE 14.2: Settings of the CommandType Property** <table> <thead> <tr> <th>CONSTANT</th> <th>DESCRIPTION</th> </tr> </thead> <tbody> <tr> <td>Text</td> <td>The command is an SQL statement. This is the default CommandType.</td> </tr> <tr> <td>StoredProcedure</td> <td>The command is the name of a stored procedure.</td> </tr> <tr> <td>TableDirect</td> <td>The command is a table’s name. The Command object passes the name of the table to the server.</td> </tr> </tbody> </table> When you choose StoredProcedure as the CommandType, you can use the Parameters property to specify parameter values if the stored procedure requires one or more input parameters, or it returns one or more output parameters. The Parameters property works as a collection, storing the various attributes of your input and output parameters. **Executing a Command** After you have connected to the database, you must specify one or more commands to execute against the database. A command could be as simple as a table’s name, an SQL statement, or the name of a stored procedure. You can think of a Command object as a way of returning streams of data results to a DataReader object or caching them into a DataSet object. Command execution has been seriously refined since ADO 2.x., now supporting optimized execution based on the data you return. You can get many different results from executing a command: - If you specify the name of a table, the DBMS will return all the rows of the table. - If you specify an SQL statement, the DBMS will execute the statement and return a set of rows from one or more tables. If the SQL statement is an action query, some rows will be updated, and the DBMS will report the number of rows that were updated but will not return any data rows. The same is true for stored procedures: - If the stored procedure selects rows, these rows will be returned to the application. - If the stored procedure updates the database, it might not return any values. **TIP** As we have mentioned, you should prepare the commands you want to execute against the database ahead of time and, if possible, in the form of stored procedures. With all the commands in place, you can focus on your VB .NET code. In addition, if you are performing action queries and do not want results being returned, specify the NOCOUNT ON option in your stored procedure to turn off the “rows affected” result count. You specify the command to execute against the database with the Command object. The Command objects have several methods for execution: the ExecuteReader() method returns a forward-only, read-only DataReader, the ExecuteScalar() method retrieves a single result value, and the ExecuteNonQuery() method doesn’t return any results. There is also an ExecuteXmlReader() method, which returns the XML version of a DataReader. **NOTE** ADO.NET simplifies and streamlines the data access object model. You no longer have to choose whether to execute a query through a Connection, Command, or RecordSet object. In ADO.NET, you will always use the Command object to perform action queries. You can also use the Command object to specify any parameter values that must be passed to the DBMS (as in the case of a stored procedure), as well as specify the transaction in which the command executes. 
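To make these execution methods and the Parameters collection concrete, the following sketch runs a scalar query and an action query against Northwind. It assumes connNorthwind is an open SqlConnection, and the SQL statements and values are illustrative rather than taken from the chapter's sample solutions; ExecuteScalar() returns the first column of the first row, while ExecuteNonQuery() returns only the number of rows affected.

```vbnet
' Scalar query: how many customers are in a given country?
Dim cmdCount As New SqlCommand( _
    "SELECT COUNT(*) FROM Customers WHERE Country = @Country", connNorthwind)
cmdCount.Parameters.Add("@Country", SqlDbType.NVarChar, 15).Value = "Germany"
Dim customerCount As Integer = CInt(cmdCount.ExecuteScalar())

' Action query: update a single row and get back the rows-affected count.
Dim cmdUpdate As New SqlCommand( _
    "UPDATE Customers SET ContactTitle = 'Owner' WHERE CustomerID = @ID", connNorthwind)
cmdUpdate.Parameters.Add("@ID", SqlDbType.NChar, 5).Value = "ALFKI"
Dim rowsAffected As Integer = cmdUpdate.ExecuteNonQuery()
```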
One of the basic properties of the Command object is the Connection property, which specifies the Connection object through which the command will be submitted to the DBMS for execution. It is possible to have multiple connections to different databases and issue different commands to each one. You can even swap connections on the fly at runtime, using the same Command object with different connections. Depending on the database to which you want to submit a command, you must use the appropriate Connection object. Connection objects are a significant load on the server, so try to avoid using multiple connections to the same database in your code.

### WHY ARE THERE SO MANY METHODS TO EXECUTE A COMMAND?

Executing commands can return different types of data, or even no data at all. The reason why there are separate methods for executing commands is to optimize them for different types of return values. This way, you can get better performance if you can anticipate what your return data will look like. If you have an AddNewCustomer stored procedure that returns the primary key of the newly added record, you would use the `ExecuteScalar()` method. If you don't care about returning a primary key or an error code, you would use `ExecuteNonQuery()`. In fact, now that error raising, rather than return codes, has become the de facto standard for error handling, you should find yourself using the `ExecuteNonQuery()` method quite often.

Why not use a single overloaded `Execute()` method for all these different flavors of command execution? Initially, Microsoft wanted to overload the `Execute()` method with all the different versions, by using the DataReader as an optional output parameter. If you passed the DataReader in, then you would get data populated into your DataReader output parameter. If you didn't pass a DataReader in, you would get no results, just as the `ExecuteNonQuery()` works now. However, the overloaded `Execute()` method with the DataReader output parameter was a bit complicated to understand. In the end, Microsoft resorted to using completely separate methods and using the method names for clarification.

Selection queries return a set of rows from the database. The following SQL statement will return the company names for all customers in the Northwind database:

```sql
SELECT CompanyName FROM Customers
```

SQL is a universal language for manipulating databases. The same statement will work on any database (as long as the database contains a table called Customers and this table has a CompanyName column). Therefore, it is possible to execute this command against the SQL Server Northwind database to retrieve the company names.

**NOTE** For more information on the various versions of the sample databases used throughout this book, see the sections "Exploring the Northwind Database," and "Exploring the Pubs Database" in Chapter 13, "Basic Concepts of Relational Databases."

Let's execute a command against the database by using the connNorthwind object you've just created to retrieve all rows of the Customers table. The first step is to declare a Command object variable and set its properties accordingly. Use the following statement to declare the variable:

```vbnet
Dim cmdCustomers As New SqlCommand
```

**NOTE** If you do not want to type these code samples from scratch as you follow along, you can take a shortcut and download the code from the Sybex website.
The code in this walkthrough is listed in the Click event of the Create DataReader button located on the startup form for the Working with ADO.NET solution.

Alternatively, you can use the CreateCommand() method of the Connection object:

```vbnet
cmdCustomers = connNorthwind.CreateCommand()
```

**OVERLOADING THE COMMAND OBJECT CONSTRUCTOR**

Like the Connection object, the constructor for the Command object can also be overloaded. By overloading the constructor, you can pass in the SQL statement and connection, while instantiating the Command object—all at the same time. To retrieve data from the Customers table, you could type the following:

```vbnet
Dim cmdCustomers As OleDbCommand = New OleDbCommand( _
    "Customers", connNorthwind)
```

Then set its CommandType property to TableDirect:

```vbnet
cmdCustomers.CommandType = CommandType.TableDirect
```

The TableDirect command type is supported only by the OLE DB .NET data provider. TableDirect is equivalent to using a SELECT * FROM tablename SQL statement. Why doesn't the SqlCommand object support this? Microsoft feels that when using specific .NET data providers, programmers should have better knowledge and control of what their Command objects are doing. You can cater to your Command objects more efficiently when you explicitly return all the records in a table by using an SQL statement or stored procedure, rather than depending on TableDirect to do so for you. When you explicitly specify SQL, you have tighter rein on how the data is returned, especially considering that TableDirect might not choose the most efficient execution plan.

The CommandText property tells ADO.NET how to interpret the command. In this example, the command is the name of a table. You could have used an SQL statement to retrieve selected rows from the Customers table, such as the customers from Germany:

```vbnet
strCmdText = "SELECT * FROM Customers"
strCmdText = strCmdText & " WHERE Country = 'Germany'"
cmdCustomers.CommandText = strCmdText
cmdCustomers.CommandType = CommandType.Text
```

By setting the CommandType property to a different value, you can execute different types of commands against the database.

NOTE In previous versions of ADO, you are able to set the command to execute asynchronously and use the State property to poll for the current fetch status. In VB .NET, you now have full support of the threading model and can execute your commands on a separate thread with full control, by using the Threading namespace.

Regardless of what type of data you are returning with your specific Execute() method, the Command object exposes a ParameterCollection that you can use to access input and output parameters for a stored procedure or SQL statement. If you are using the `ExecuteReader()` method, you must first close your DataReader object before you are able to query the parameters collection.

**WARNING** For those of you who have experience working with parameters with OLE DB, keep in mind that you must use named parameters with the `System.Data.SqlClient` namespace. You can no longer use the question mark character (`?`) as an indicator for dynamic parameters, as you had to do with OLE DB.

---

**The `DataAdapter` Object**

The `DataAdapter` represents a completely new concept within Microsoft's data access architecture. The `DataAdapter` gives you free rein to coordinate between your in-memory data representation and your permanent data storage source.
In the OLE DB/ADO architecture, all this happened behind the scenes, preventing you from specifying how you wanted your synchronization to occur. The `DataAdapter` object works as the ambassador between your data and data-access mechanism. Its methods give you a way to retrieve and store data from the data source and the `DataSet` object. This way, the `DataSet` object can be completely agnostic of its data source. The `DataAdapter` also understands how to translate *deltagrams*, which are the `DataSet` changes made by a user, back to the data source. It does this by using different `Command` objects to reconcile the changes, as shown in Figure 14.3. We show how to work with these `Command` objects shortly. The `DataAdapter` implicitly works with `Connection` objects as well, via the `Command` object’s interface. Besides explicitly working with a `Connection` object, this is the only other way you can work with the `Connection` object. The `DataAdapter` object is very “polite,” always cleaning up after itself. When you create the `Connection` object implicitly through the `DataAdapter`, the `DataAdapter` will check the status of the connection. If it’s already open, it will go ahead and use the existing open connection. However, if it’s closed, it will quickly open and close the connection when it’s done with it, courteously restoring the connection back to the way the DataAdapter found it. The DataAdapter works with ADO.NET Command objects, mapping them to specific database update logic that you provide. Because all this logic is stored outside of the DataSet, your DataSet becomes much more liberated. The DataSet is free to collect data from many different data sources, relying on the DataAdapter to propagate any changes back to its appropriate source. **Populating a DataSet** Although we discuss the DataSet object in more detail later in this chapter, it is difficult to express the power of the DataAdapter without referring to the DataSet object. The DataAdapter contains one of the most important methods in ADO.NET: the Fill() method. The Fill() method populates a DataSet and is the only time that the DataSet touches a live database connection. Functionally, the Fill() method’s mechanism for populating a DataSet works much like creating a static, client-side cursor in classic ADO. In the end, you end up with a disconnected representation of your data. The Fill() method comes with many overloaded implementations. A notable version is the one that enables you to populate an ADO.NET DataSet from a classic ADO RecordSet. This makes interoperability between your existing native ADO/OLE DB code and ADO.NET a breeze. If you wanted to populate a DataSet from an existing ADO 2.x RecordSet called adoRS, the relevant segment of your code would read: ```vbnet Dim daFromRS As OleDbDataAdapter = New OleDbDataAdapter Dim dsFromRS As DataSet = New DataSet daFromRS.Fill(dsFromRS, adoRS) ``` **WARNING** You must use the OleDb implementation of the DataAdapter to populate your DataSet from a classic ADO RecordSet. Accordingly, you would need to import the `System.Data.OleDb` namespace. ### Updating a Data Source from a DataSet by Using the DataAdapter The DataAdapter uses the `Update()` method to perform the relevant SQL action commands against the data source from the deltagram in the DataSet. **TIP** The DataAdapter maps commands to the DataSet via the DataTable. 
Although the DataAdapter maps only one DataTable at a time, you can use multiple DataAdapters to fill your DataSet by using multiple DataTables.

### Using SqlCommand and SqlParameter Objects to Update the Northwind Database

**NOTE** The code for the walkthrough in this section can be found in the `Updating Data Using ADO.NET.sln` solution file. Listing 14.1 is contained within the Click event of the Inserting Data Using DataAdapters With Mapped Insert Commands button.

The DataAdapter gives you a simple way to map the commands by using its `SelectCommand`, `UpdateCommand`, `DeleteCommand`, and `InsertCommand` properties. When you call the `Update()` method, the DataAdapter maps the appropriate update, add, and delete SQL statements or stored procedures to their appropriate Command object. (Alternatively, if you use the SelectCommand property, this command would execute with the Fill() method.) If you want to perform an insert into the Customers table of the Northwind database, you could type the code in Listing 14.1.

**Listing 14.1: Insert Commands by Using the DataAdapter Object with Parameters**

```vbnet
Dim strSelectCustomers As String = "SELECT * FROM " & _
    "Customers ORDER BY CustomerID"
Dim strConnString As String = "data source=(local);" & _
    "initial catalog=Northwind;integrated security=SSPI;"

' We can't use the implicit connection created by the
' DataAdapter since our insert command requires a
' connection object in its constructor, rather than a
' connection string
Dim connNorthwind As New SqlConnection(strConnString)

' String to insert the customer record - it helps to
' specify this in advance so the CommandBuilder doesn't
' affect our performance at runtime
Dim strInsertCommand As String = _
    "INSERT INTO Customers(CustomerID, CompanyName) " & _
    "VALUES (@CustomerID, @CompanyName)"

Dim daCustomers As New SqlDataAdapter()
Dim dsCustomers As New DataSet()
Dim cmdSelectCustomer As New SqlCommand( _
    strSelectCustomers, connNorthwind)
Dim cmdInsertCustomer As New SqlCommand(strInsertCommand, _
    connNorthwind)

daCustomers.SelectCommand = cmdSelectCustomer
daCustomers.InsertCommand = cmdInsertCustomer
cmdSelectCustomer.Connection = connNorthwind

connNorthwind.Open()
daCustomers.Fill(dsCustomers, "dtCustomerTable")

' Add scalar parameter for CustomerID
cmdInsertCustomer.Parameters.Add( _
    New SqlParameter("@CustomerID", SqlDbType.NChar, 5)).Value = "ARHAN"

' Add scalar parameter for CompanyName
cmdInsertCustomer.Parameters.Add( _
    New SqlParameter("@CompanyName", SqlDbType.VarChar, 40)).Value = _
    "Amanda Aman Apak Merkez Inc."

cmdInsertCustomer.ExecuteNonQuery()
connNorthwind.Close()
```

This code sets up both the SelectCommand and InsertCommand for the DataAdapter and executes the Insert query with no results. To map the Insert command with the values you are inserting, you use the Parameters property of the appropriate SqlCommand objects. This example adds parameters to the InsertCommand of the DataAdapter. As you can see from the DataAdapter object model in Figure 14.3, each of the SqlCommand objects supports a ParameterCollection.

As you can see, the Insert statement need not contain all the fields in the parameters—and it usually doesn't. However, you must specify all the fields that can't accept Null values. If you don't, the DBMS will reject the operation with a trappable runtime error. In this example, only two of the new row's fields are set: the CustomerID and the CompanyName fields, because neither can be Null.
**WARNING** In this code, notice that you can't use the implicit connection created by the DataAdapter. This is because the InsertCommand object requires a Connection object in its constructor rather than a connection string. If you don't have an explicitly created Connection object, you won't have any variable to pass to the constructor.

**TIP** Because you create the connection explicitly, you must make sure to close your connection when you are finished with it. Although implicitly creating your connection takes care of cleanup for you, it's not a bad idea to explicitly open the connection, because you might want to leave it open so you can execute multiple fills and updates.

Each of the DataAdapter's Command objects has its own CommandType and Connection properties, which makes them very powerful. Consider how you can use them to combine different types of commands, such as stored procedures and SQL statements. In addition, you can combine commands from multiple data sources, by using one database for retrievals and another for updates. As you can see, the DataAdapter with its Command objects is an extremely powerful feature of ADO.NET. In classic ADO, you didn't have any control over how your selects, inserts, updates, and deletes were handled. What if you wanted to add some specific business logic to these actions? You would have to write custom stored procedures or SQL statements, which you would call separately from your VB code. You couldn't take advantage of the native ADO RecordSet updates, because ADO hides the logic from you.

In summary, you work with a DataAdapter by using the following steps:

1. Instantiate your DataAdapter object.
2. Specify the SQL statement or stored procedure for the SelectCommand object. This is the only Command object that the DataAdapter requires.
3. Specify the appropriate connection string for the SelectCommand's Connection object.
4. Specify the SQL statements or stored procedures for the InsertCommand, UpdateCommand, and DeleteCommand objects. Alternatively, you could use the CommandBuilder to dynamically map your actions at runtime. This step is not required.
5. Call the Fill() method to populate the DataSet with the results from the SelectCommand object.
6. If you used Step 4, call the appropriate Execute() method to execute your command objects against your data source.

**WARNING** Use the CommandBuilder sparingly, because it imposes a heavy performance overhead at runtime.

**The DataReader Object**

The DataReader object is a fast mechanism for retrieving forward-only, read-only streams of data. The SQL Server .NET provider has completely optimized this mechanism, so use it as often as you can for fast performance on read-only data. Unlike ADO RecordSets, which force you to load more in memory than you actually need, the DataReader is a toned-down, slender data stream, using only the necessary parts of the ADO.NET Framework. You can think of it as analogous to the server-side, read-only, forward-only cursor that you used in native OLE DB/ADO. Because of this server-side connection, you should use the DataReader cautiously, closing it as soon as you are finished with it. Otherwise, you will tie up your Connection object, allowing no other operations to execute against it (except for the Close() method, of course). As we mentioned earlier, you can create a DataReader object by using the ExecuteReader() method of the Command object. You would use DataReader objects when you need fast retrieval of read-only data, such as populating ComboBox lists.
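To make the ComboBox scenario concrete, here is a minimal sketch. The ComboBox name `cboCustomers` and the Windows Form hosting it are assumptions for illustration only; the sketch reuses the `connNorthwind` connection from this chapter, and the DataReader creation pattern itself is shown more fully in Listing 14.2, which follows.

```vbnet
' Hypothetical example: fill a ComboBox named cboCustomers from a DataReader.
Dim cmdNames As New SqlCommand("SELECT CompanyName FROM Customers", connNorthwind)
Dim drNames As SqlDataReader

connNorthwind.Open()
drNames = cmdNames.ExecuteReader()

' Read() advances one record at a time; only one row is ever held in memory.
While drNames.Read()
    cboCustomers.Items.Add(drNames.GetString(0))
End While

' Close the reader promptly so the connection is free for other work.
drNames.Close()
connNorthwind.Close()
```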
Listing 14.2 depicts an example of how you create the DataReader object, assuming you've already created the Connection object connNorthwind.

Listing 14.2: Creating the DataReader Object

```vbnet
Dim strCustomerSelect As String = "SELECT * FROM Customers"
Dim cmdCustomers As New SqlCommand(strCustomerSelect, connNorthwind)
Dim drCustomers As SqlDataReader

connNorthwind.Open()
drCustomers = cmdCustomers.ExecuteReader()
```

**NOTE** The code in Listing 14.2 can be found in the Click event of the Create DataReader button on the startup form for the Working with ADO.NET solution, which you can download from the Sybex website.

Notice that you can't directly instantiate the DataReader object, but must go through the Command object interface.

**WARNING** You cannot update data by using the DataReader object.

The DataReader absolves you from writing tedious MoveFirst() and MoveNext() navigation. The DataReader starts at a position prior to the first record of your stream, and each call to the Read() method moves forward to the next record, without any explicit calls to navigation methods such as MoveNext(). To continue our example from Listing 14.2, you could retrieve the first column from all the rows in your DataReader by typing in the following code:

```vbnet
While drCustomers.Read()
    Console.WriteLine(drCustomers.GetString(0))
End While
```

**NOTE** The `Console.WriteLine` statement is similar to the `Debug.Print()` method you used in VB6.

Because the DataReader stores only one record at a time in memory, your memory resource load is considerably lighter. Now if you wanted to scroll backward or make updates to this data, you would have to use the DataSet object, which we discuss in the next section. Alternatively, you can move the data out of the DataReader and into a structure that is updateable, such as the DataTable or DataRow objects.

**WARNING** By default, the DataReader navigates to a point prior to the first record. Therefore, you must always call the `Read()` method before you can retrieve any data from the DataReader object.

### The DataSet Object

There will come a time when the DataReader is not sufficient for your data manipulation needs. If you ever need to update your data, or store relational or hierarchical data, look no further than the DataSet object. Because the DataReader navigation mechanism is linear, you have no way of traversing between relational or hierarchical data structures. The DataSet provides a liberated way of navigating through both relational and hierarchical data, by using array-like indexing and tree walking, respectively.

Unlike the managed provider objects, the DataSet object and friends do not diverge between the `OleDb` and `SqlClient` .NET namespaces. You declare a DataSet object the same way regardless of which .NET data provider you are using:

```vbnet
Dim dsCustomer As DataSet
```

Realize that DataSets stand alone. A DataSet is not a part of the managed data providers and knows nothing of its data source. The DataSet has no clue about transactions, connections, or even a database. Because the DataSet is data source agnostic, it needs something to get the data to it. This is where the DataAdapter comes into play. Although the DataAdapter is not a part of the DataSet, it understands how to communicate with the DataSet in order to populate the DataSet with data.

**DataSets and XML**

The DataSet object is the nexus where ADO.NET and XML meet. The DataSet is persisted as XML, and only XML.
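Because the persistence format is nothing but XML, you can round-trip a DataSet through a file with a couple of method calls. The following is a minimal sketch; the file name is arbitrary, and `dsCustomers` is assumed to be a populated DataSet such as the one built later in Listing 14.3.

```vbnet
' Write the DataSet's contents (and, implicitly, its inferred schema) out as XML.
dsCustomers.WriteXml("Customers.xml")

' Reverse engineer the XML file back into a brand-new DataSet.
Dim dsFromFile As New DataSet()
dsFromFile.ReadXml("Customers.xml")
```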
You have several ways of populating a DataSet: You can traditionally load from a database or reverse engineer your XML files back into DataSets. You can even create your own customized application data without using XML or a database, by creating custom DataTables and DataRows. We show you how to create DataSets on the fly in this chapter in the section “Creating Custom DataSets.” DataSets are perfect for working with data transfer across Internet applications, especially when working with WebServices. Unlike native OLE DB/ADO, which uses a proprietary COM protocol, DataSets transfer data by using native XML serialization, which is a ubiquitous data format. This makes it easy to move data through firewalls over HTTP. Remoting becomes much simpler with XML over the wire, rather than the heavier binary formats you have with ADO RecordSets. We demonstrated how you do this in Chapter 12, “Developing Web Applications with ASP.NET.” As we mentioned earlier, DataSet objects take advantage of the XML model by separating the data storage from the data presentation. In addition, DataSets separate navigational data access from the traditional set-based data access. We show you how DataSet navigation differs from RecordSet navigation later in this chapter in Table 14.4. **DataSets versus RecordSets** As you can see in Figure 14.4, DataSets are much different from tabular RecordSets. You can see that they contain many types of nested collections, such as relations and tables, which you will explore throughout the examples in this chapter. What’s so great about DataSets? You’re happy with the ADO 2.x RecordSets. You want to know why you should migrate over to using ADO.NET DataSets. There are many compelling reasons. First, DataSet objects separate all the disconnected logic from the connected logic. This makes them easier to work with. For example, you could use a DataSet to store a web user’s order information for their online shopping cart, sending deltagrams to the server as they update their order information. In fact, almost any scenario where you collect application data based on user interaction is a good candidate for using DataSets. Using DataSets to manage your application data is much easier than working with arrays, and safer than working with connection-aware RecordSets. Another motivation for using DataSets lies in their capability to be safely cached with web applications. Caching on the web server helps alleviate the processing burden on your database servers. ASP caching is something you really can’t do safely with a RecordSet, because of the chance that the RecordSet might hold a connection and state. Because DataSets independently maintain their own state, you never have to worry about tying up resources on your servers. You can even safely store the DataSet object in your ASP.NET Session object, which you are warned never to do with RecordSets. RecordSets are dangerous in a Session object; they can crash in some versions of ADO because of issues with marshalling, especially when you use open client-side cursors that aren’t streamed. In addition, you can run into threading issues with ADO RecordSets, because they are apartment threaded, which causes your web server to run in the same thread. DataSets are great for remoting because they are easily understandable by both .NET and non-.NET applications. DataSets use XML as their storage and transfer mechanism. 
.NET applications don’t even have to deserialize the XML data, because you can pass the DataSet much like you would a RecordSet object. Non-.NET applications can also interpret the DataSet as XML, make modifications using XML, and return the final XML back to the .NET application. The .NET application takes the XML and automatically interprets it as a DataSet, once again. Last, DataSets work well with systems that require tight user interaction. DataSets integrate tightly with bound controls. You can easily display the data with DataViews, which enable scrolling, searching, editing, and filtering with nominal effort. Now that we’ve explained how the DataSet gives you more flexibility and power than using the ADO RecordSet, examine Table 14.3, which summarizes the differences between ADO and ADO.NET. <table> <thead> <tr> <th>Feature Set</th> <th>ADO</th> <th>ADO.NET</th> <th>ADO.NET’s Advantage</th> </tr> </thead> <tbody> <tr> <td>Data persistence format</td> <td>RecordSet</td> <td>Uses XML</td> <td>With ADO.NET, you don’t have datatype restrictions.</td> </tr> <tr> <td>Data transfer format</td> <td>COM marshalling</td> <td>Uses XML</td> <td>ADO.NET uses a ubiquitous format that is easily transferable and that multiple platforms and sites can readily translate. In addition, XML strings are much more manageable than binary COM objects.</td> </tr> <tr> <td>Web transfer protocol</td> <td>You would need to use DCOM to tunnel through Port 80 and pass proprietary COM data, which firewalls could filter out.</td> <td>Uses HTTP</td> <td>ADO.NET data is more readily transferable though firewalls.</td> </tr> </tbody> </table> Let’s explore how to work with the various members of the DataSet object to retrieve and manipulate data from your data source. Although the DataSet is designed for data access with any data source, in this chapter we focus on SQL Server as our data source. **Working with DataSets** Often you will work with the DataReader object when retrieving data, because it offers you the best performance. As we have explained, in some cases the DataSet’s powerful interface for data manipulation will be more practical for your needs. In this section, we discuss techniques you can use for working with data in your DataSet. The DataSet is an efficient storage mechanism. The DataSet object hosts multiple result sets stored in one or more DataTables. These DataTables are returned by the DBMS in response to the execution of a command. The DataTable object uses rows and columns to contain the structure of a result set. You use the properties and methods of the DataTable object to access the records of a table. Table 14.4 demonstrates the power and flexibility you get with ADO.NET when retrieving data versus classic ADO. **TABLE 14.4:** Why ADO.NET Is a Better Data Storage Mechanism than ADO <table> <thead> <tr> <th>Feature Set</th> <th>ADO</th> <th>ADO.NET</th> <th>ADO.NET’s Advantage</th> </tr> </thead> <tbody> <tr> <td>Disconnected</td> <td>Uses disconnected RecordSets, which store data into a single table.</td> <td>Uses DataSets that store one or many DataTables.</td> <td>Storing multiple result sets is simple in ADO.NET. The result sets can come from a variety of data sources. Navigating between these result sets is intuitive, using the standard collection navigation. 
DataSets never maintain connection state, unlike RecordSets, making them safer to use with n-tier, disconnected designs.</td> </tr> </tbody> </table>

TABLE 14.4 continued: Why ADO.NET Is a Better Data Storage Mechanism than ADO

<table> <thead> <tr> <th>Feature Set</th> <th>ADO</th> <th>ADO.NET</th> <th>ADO.NET's Advantage</th> </tr> </thead> <tbody> <tr> <td>Relationship management</td> <td>Uses JOINs, which pull data into a single result table. Alternatively, you can use the SHAPE syntax with the shaping OLE DB service provider.</td> <td>Uses the DataRelation object to associate multiple DataTables to one another.</td> <td>ADO.NET's DataTable collection sets the stage for more robust relationship management. With ADO, JOINs bring back only a single result table from multiple tables. You end up with redundant data. The SHAPE syntax is cumbersome and awkward. With ADO.NET, DataRelations provide an object-oriented, relational way to manage relations such as constraints and cascading referential integrity, all within the constructs of ADO.NET. The ADO shaping commands are in an SQL-like format, rather than being native to ADO objects.</td> </tr> <tr> <td>Navigation mechanism</td> <td>RecordSets give you the option to only view data sequentially.</td> <td>DataSets have a nonlinear navigation model.</td> <td>DataSets enable you to traverse the data among multiple DataTables, using the relevant DataRelations to skip from one table to another. In addition, you can view your relational data in a hierarchical fashion by using the tree-like structure of XML.</td> </tr> </tbody> </table>

There are three main ways to populate a DataSet:

- After establishing a connection to the database, you prepare the DataAdapter object, which will retrieve your results from your database as XML. You can use the DataAdapter to fill your DataSet.
- You can read an XML document into your DataSet. The .NET Framework provides an XMLDataDocument namespace, which is modeled parallel to the ADO.NET Framework.
- You can use DataTables to build your DataSet in memory without the use of XML files or a data source of any kind. You will explore this option in the section "Updating Your Database by Using DataSets" later in this chapter.

Let's work with retrieving data from the Northwind database. First, you must prepare the DataSet object, which can be instantiated with the following statement:

```vbnet
Dim dsCustomers As New DataSet()
```

Assuming you've prepared your DataAdapter object, all you would have to call is the `Fill()` method. Listing 14.3 shows you the code to populate your DataSet object with customer information.

### Listing 14.3: Creating the DataSet Object

```vbnet
Dim strSelectCustomers As String = "SELECT * FROM " & _
    "Customers ORDER BY CustomerID"
Dim strConnString As String = "data source=(local);" & _
    "initial catalog=Northwind;integrated security=SSPI;"
Dim daCustomers As New SqlDataAdapter(strSelectCustomers, _
    strConnString)
Dim dsCustomers As New DataSet()
Dim connNorthwind As New SqlConnection(strConnString)

daCustomers.Fill(dsCustomers, "dtCustomerTable")
MsgBox(dsCustomers.GetXml, , _
    "Results of Customer DataSet in XML")
```

**NOTE** The code in Listing 14.3 can be found in the Click event of the Create Single Table DataSet button on the startup form for the Working with ADO.NET solution, which you can download from the Sybex website.

This code uses the `GetXml()` method to return the results of your DataSet as XML. The rows of the Customers table are retrieved through the `dsCustomers` object variable.
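Once the DataSet is filled, you can pull individual values back out of it through its collections. The following is a minimal sketch, not part of the downloadable solution; the table name "dtCustomerTable" matches the name used in the Fill() call of Listing 14.3.

```vbnet
' Grab the single DataTable created by the Fill() call above.
Dim dtCustomer As DataTable = dsCustomers.Tables("dtCustomerTable")

' How many customer rows came back?
Console.WriteLine(dtCustomer.Rows.Count)

' Array-like access: the CompanyName column of the first row.
Console.WriteLine(dtCustomer.Rows(0)("CompanyName"))
```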
The DataTable object within the DataSet exposes a number of properties and methods for manipulating the data by using the DataRow and DataColumn collections. You will explore how to navigate through the DataSet in the upcoming section, "Navigating Through DataSets." However, first you must understand the main collections that comprise a DataSet: the DataTable and DataRelation collections.

**The DataTableCollection**

Unlike the ADO RecordSet, which contained only a single table object, the ADO.NET DataSet contains one or more tables, stored as a DataTableCollection. The DataTableCollection is what makes DataSets stand out from disconnected ADO RecordSets. You never could do something like this in classic ADO. The only choice you have with ADO is to nest RecordSets within RecordSets and use cumbersome navigation logic to move between parent and child RecordSets. ADO.NET provides a user-friendly navigation model for moving between DataTables. In ADO.NET, DataTables factor out different result sets that can come from different data sources. You can even dynamically relate these DataTables to one another by using DataRelations, which we discuss in the next section.

**NOTE** If you want, you can think of a DataTable as analogous to a disconnected RecordSet, and the DataSet as a collection of those disconnected RecordSets.

Let's go ahead and add another table to the DataSet created earlier in Listing 14.3. Adding tables is easy with ADO.NET, and navigating between the multiple DataTables in your DataSet is simple and straightforward. In the section "Creating Custom DataSets," we show you how to build DataSets on the fly by using multiple DataTables. The code in Listing 14.4 shows how to add another DataTable to the DataSet that you created in Listing 14.3.

**NOTE** The code in Listing 14.4 can be found in the Click event of the Create DataSet With Two Tables button on the startup form for the Working with ADO.NET solution, which you can download from the Sybex website.

Listing 14.4: Adding Another DataTable to a DataSet

```vbnet
Dim strSelectCustomers As String = "SELECT * FROM " & _
    "Customers ORDER BY CustomerID"
Dim strSelectOrders As String = "SELECT * FROM Orders"
Dim strConnString As String = "data source=(local);" & _
    "initial catalog=Northwind;integrated security=SSPI;"
Dim daCustomers As New SqlDataAdapter(strSelectCustomers, _
    strConnString)
Dim dsCustomers As New DataSet()
Dim daOrders As New SqlDataAdapter(strSelectOrders, _
    strConnString)

daCustomers.Fill(dsCustomers, "dtCustomerTable")
daOrders.Fill(dsCustomers, "dtOrderTable")
Console.WriteLine(dsCustomers.GetXml)
```

**WARNING** DataTables are conditionally case sensitive. In Listing 14.4, the DataTable is called dtCustomerTable. This would cause no conflicts when used alone, whether you referred to it as dtCustomerTable or dtCUSTOMERTABLE. However, if you had another DataTable called dtCUSTOMERTABLE, it would be treated as an object separate from dtCustomerTable.

As you can see, all you had to do was create a new DataAdapter to map to your Orders table, which you then filled into the DataSet object you had created earlier. This creates a collection of two DataTable objects within your DataSet. Now let's explore how to relate these DataTable objects together.

**The DataRelation Collection**

The DataSet object eliminates the cumbersome shaping syntax you had to use with ADO RecordSets, replacing it with a more robust relationship engine in the form of DataRelation objects.
The DataSet contains a collection of DataRelation objects within its Relations property. Each DataRelation object links disparate DataTables by using referential integrity such as primary keys, foreign keys, and constraints. The DataRelation doesn't have to use any joins or nested DataTables to do this, as you had to do with ADO RecordSets. In classic ADO, you created relationships by nesting your RecordSets into a single, hierarchical RecordSet. Aside from being clumsy to use, this mechanism also made it awkward to dynamically link disparate sets of data.

With ADO.NET, you can take advantage of new features such as cascading referential integrity. You can do this by adding a ForeignKeyConstraint object to the ConstraintCollection within a DataTable. The ForeignKeyConstraint object enforces referential integrity between a set of columns in multiple DataTables. As we explained in Chapter 13, in the "Database Integrity" section, this will prevent orphaned records. In addition, you can cascade your updates and deletes from the parent table down to the child table.

Listing 14.5 shows you how to link the CustomerID columns of your Customers and Orders DataTables. Using the code from Listing 14.4, all you have to do is add a new declaration for your DataRelation.

Listing 14.5: Using a Simple DataRelation

```vbnet
Dim drCustomerOrders As DataRelation = New DataRelation( _
    "CustomerOrderRelation", _
    dsCustomers.Tables("dtCustomerTable").Columns("CustomerID"), _
    dsCustomers.Tables("dtOrderTable").Columns("CustomerID"))
dsCustomers.Relations.Add(drCustomerOrders)
```

**NOTE** The code in Listing 14.5 can be found in the Click event of the Using Simple DataRelations button on the startup form for the Working with ADO.NET solution, which you can download from the Sybex website.

As you can with other ADO.NET objects, you can overload the DataRelation constructor. In this example, you pass in three parameters. The first parameter indicates the name of the relation. This is similar to how you would name a relationship within SQL Server. The next two parameters indicate the two columns that you wish to relate. After creating the DataRelation object, you add it to the Relations collection of the DataSet object. The datatype of the two columns you wish to relate must be identical.

Listing 14.6 shows you how to use a ForeignKeyConstraint between the Customers and Orders tables of the Northwind database to ensure that when a customer ID is deleted or updated, the change is reflected within the Orders table.

Listing 14.6: Using Cascading Updates

```vbnet
Dim fkCustomerID As ForeignKeyConstraint
fkCustomerID = New ForeignKeyConstraint("CustomerOrderConstraint", _
    dsCustomers.Tables("dtCustomerTable").Columns("CustomerID"), _
    dsCustomers.Tables("dtOrderTable").Columns("CustomerID"))

fkCustomerID.UpdateRule = Rule.Cascade
fkCustomerID.AcceptRejectRule = AcceptRejectRule.Cascade
dsCustomers.Tables("dtOrderTable").Constraints.Add(fkCustomerID)
dsCustomers.EnforceConstraints = True
```

**NOTE** The code in Listing 14.6 can be found in the Click event of the Using Cascading Updates button on the startup form for the Working with ADO.NET solution, which you can download from the Sybex website.

In this example, you create a foreign key constraint with cascading updates and add it to the ConstraintCollection of the child DataTable. First, you declare and instantiate a ForeignKeyConstraint object, as you did earlier when creating the DataRelation object.
Afterward, you set the properties of the ForeignKeyConstraint, such as the UpdateRule and AcceptRejectRule, finally adding it to your ConstraintCollection. You have to ensure that your constraints activate by setting the EnforceConstraints property to True.

### Navigating through DataSets

We already discussed navigation through a DataReader. To sum it up, as long as the DataReader's Read() method returns True, then you have successfully positioned yourself in the DataReader. Now let's discuss how you would navigate through a DataSet.

In classic ADO, to navigate through the rows of an ADO RecordSet, you use the Move() method and its variations. The MoveFirst(), MovePrevious(), MoveLast(), and MoveNext() methods take you to the first, previous, last, and next rows in the RecordSet, respectively. This forces you to deal with cursoring and absolute positioning. This makes navigation cumbersome because you have to first position yourself within a RecordSet and then read the data that you need. In ADO 2.x, a fundamental concept in programming for RecordSets is that of the current row: To read the fields of a row, you must first move to the desired row. The RecordSet object supports a number of navigational methods, which enable you to locate the desired row, and the Fields property, which enables you to access (read or modify) the current row's fields.

With ADO.NET, you no longer have to use fixed positioning to locate your records; instead, you can use array-like navigation. Unlike ADO RecordSets, the concept of the current row no longer matters with DataSets. DataSets work like other in-memory data representations, such as arrays and collections, and use familiar navigational behaviors. DataSets provide an explicit in-memory representation of data in the form of a collection-based model. This enables you to get rid of the infamous `Do While Not rs.EOF() And Not rs.BOF()` loop. With ADO.NET, you can use the friendly `For Each` loop to iterate through the DataTables of your DataSet. If you want to iterate through the rows and columns of the DataTables stored in a `dsCustomers` DataSet, you could use the loop in Listing 14.7.

**Listing 14.7: Navigating through a DataSet**

```vbnet
Dim tblCustomer As DataTable
For Each tblCustomer In dsCustomers.Tables
    Dim rowCustomer As DataRow
    For Each rowCustomer In tblCustomer.Rows
        Dim colCustomer As DataColumn
        For Each colCustomer In tblCustomer.Columns
            Console.WriteLine(rowCustomer(colCustomer))
        Next colCustomer
    Next rowCustomer
Next tblCustomer
```

This will print out the values in each column of the customers DataSet created in Listing 14.3. As you can see, the `For Each` logic saves you from having to monitor antiquated properties such as `EOF` and `BOF` of the ADO RecordSet. DataTables contain collections of DataRows and DataColumns, which also simplify your navigation mechanism. Instead of worrying about the `RecordCount` property of RecordSets, you can use the Count property of a DataTable's Rows collection to find the number of rows it contains. For the example in Listing 14.7, you can calculate the row count for the customer records by using the following statement:

```vbnet
dsCustomers.Tables("dtCustomerTable").Rows.Count
```

### DataTable Capacities

In classic ADO, you could specify *paged RecordSets*—the type of RecordSets displayed on web pages when the results of a query are too many to be displayed on a single page.
The web server displays 20 or so records and a number of buttons at the bottom of the page that enable you to move quickly to another group of 20 records. This technique is common in web applications, and ADO supports a few properties that simplify the creation of paged RecordSets, such as the AbsolutePage, PageSize, and PageCount properties.

With ADO.NET, you can use the MinimumCapacity property to specify the initial number of rows for which a DataTable reserves space. The default setting is 50 rows. Note that this is a performance hint rather than a limit on the number of rows a query returns. This setting is especially useful if you want to improve performance on your web pages in ASP.NET. If you expect roughly 50 customer records in the Customers DataTable, you could reserve room for them up front:

```vbnet
dtCustomers.MinimumCapacity = 50
```

If you have worked with paged RecordSets, you will realize that this performance technique is much less involved than the convoluted paging logic you had to use in ADO 2.x.

**Navigating a Relationship between Tables**

ADO.NET provides a navigation model for navigating through DataTables by using the relationships that connect them. Keep in mind that relations work as separate objects. When you create the relationship between the Customers and Orders tables, you can't directly jump from a customer DataRow to the related order DataRows. You must open the DataRelation separately and then pull the related rows. This is fine with one-to-many relationships; however, if you are using one-to-one relationships, you should stick with SQL JOIN statements.

You will explore the many techniques you can use with your retrieved data later in this chapter. First, let's review basic ways of updating your data sources by using DataSets.

**Updating Your Database by Using DataSets**

The connected and disconnected models of ADO.NET work very differently when updating the database. Connected, or managed, providers communicate with the database by using command-based updates. As we showed you in "The DataSet Object" section earlier, disconnected DataSets update the database by using a cached, batch-optimistic method. DataSets work independently from a connection, working with the deltagram of data on the disconnected DataSet and committing the changes only after you call the Update() method from the DataAdapter. The separation between the command-based model used with managed providers and the optimistic model carried out by the DataSet objects enables the programmer to make a distinction between server-side execution and cached execution.

**WARNING** In ADO 2.x, there was a good amount of confusion regarding client-side cursors. Some implementations mistakenly used server-side cursors when they meant to use client-side cursors on the application server. Don't mistake disconnected, cached DataSets for user-side data. The DataSets can also be stored on your middle tier, which you should consider as a client-side cache, even though it is stored on your application server. You'll explore how to use DataSets within your ASP.NET code in Part IV, "Database Programming."

To update data, you make changes to your DataSet and pass them up to the server. Obviously, you can't use the DataReader, because its forward-only, read-only stream can't be updated. There are many ways that you can make updates to a DataSet:

- Make changes to an existing DataSet that was retrieved from a query executed on your database server(s). Pass the changes to the data source via the DataAdapter.
- Load data from an XML file by using the ReadXml() method.
Map the resulting DataSet to your data source by using the DataAdapter. - Merge multiple DataSets by using the Merge() method, passing the results to the data source via the DataAdapter. - Create a new DataSet with new schema and data on the fly, mapping it to a data source by using the DataAdapter. As you can see, all these options have one thing in common: Your changes are not committed back to the server until the DataAdapter intervenes. DataSets are completely unaware of where their data comes from and how their changes relate back to the appropriate data source. The DataAdapter takes care of all this. Realize that updating a record is not always a straightforward process. What happens if a user changes the record after you have read it? And what will happen if the record you’re about to update has already been deleted by another user? In this chapter, you will learn the basics of updating databases through the ADO.NET DataSet, assuming no concurrency is involved. However, we discuss the implications of concurrency at the end of this chapter. In the meantime, let’s set up your ADO.NET objects to insert a customer row into the Northwind database. **Updating Your DataSet by Using the DataTable and DataRow Objects** Earlier in this chapter, we showed you how to update your database by using parameterized stored procedures. Although this is efficient for making single row changes, it isn’t quite useful when you have a significant number of changes to pass to the server. What happens when you want to apply changes in bulk? Consider an e-commerce application that uses an online shopping cart. The shopping cart could have multiple rows of data that would be inserted and updated as the user browsed through the site. When it comes time to push these changes to the server, it would be much easier to pass them in one single batch, rather than call the stored procedure multiple times for each row that’s modified. In ADO 2.x, you use disconnected RecordSets along with the `UpdateBatch()` method to pass your changes on to the server. In ADO.NET, you pass the disconnected deltagram from the DataSet object to the `DataAdapter.Update()` method. Once again, ADO.NET clearly draws the line between your data and your data source. The DataSet object doesn’t directly contact the data source. First, let’s see how you can manage changes within a DataSet. As the user edits the in-memory cache, the changes are stored into a buffer and not yet committed to the DataSet. You can commit modifications to a DataSet by using the `AcceptChanges()` method of the DataSet, DataTable, or DataRow objects. If you execute this method on the parent object, it will propagate down onto the children. For example, if you call `AcceptChanges()` on the DataSet object, it will cascade down onto the DataTables within the DataSet’s Table collection (likewise for a DataTable to its relevant DataRow collection). When you insert a row into a DataTable, you can monitor the “dirtiness” of a row by examining the RowState property. Let’s go ahead and add a new row to your dsCustomers DataSet. In Figure 14.5, we continue the logic that we used in Listing 14.3 to populate your dsCustomers DataSet. **NOTE** Until you call the `Update()` method, your DataSet changes will not be committed to your data source. First, let’s look at the code that pulls down the data that you want to work with from your database into a DataSet. 
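Figure 14.5 itself is not reproduced in the text, so the following is a minimal sketch of the kind of code it depicts, reusing the query and connection string from Listing 14.3; the CustomerID and CompanyName values are placeholders, not values from the original figure.

```vbnet
Dim strSelectCustomers As String = "SELECT * FROM Customers ORDER BY CustomerID"
Dim strConnString As String = "data source=(local);" & _
    "initial catalog=Northwind;integrated security=SSPI;"

' Pull the current customer rows down into the disconnected cache.
Dim daCustomers As New SqlDataAdapter(strSelectCustomers, strConnString)
Dim dsCustomers As New DataSet()
daCustomers.Fill(dsCustomers, "dtCustomerTable")

' Add a new row to the in-memory DataTable only.
Dim dtCustomer As DataTable = dsCustomers.Tables("dtCustomerTable")
Dim drNewCustomer As DataRow = dtCustomer.NewRow()
drNewCustomer("CustomerID") = "NEWCO"
drNewCustomer("CompanyName") = "New Company Inc."
dtCustomer.Rows.Add(drNewCustomer)

' Commit the change to the DataSet; the database is untouched until Update() is called.
dsCustomers.AcceptChanges()
```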
Using the existing DataSet, you will add a new row directly to the DataSet by using the DataTable and DataRow collections of the DataSet.

**NOTE** The code depicted in Figure 14.5 can be found in the `Updating Data using ADO.NET.sln` solution file, within the Click event of the Inserting Data With DataSets And DataTables button.

As you see in Figure 14.5, DataSet updates are very straightforward. All you have to do is fill your DataSet, as we've shown you earlier in the chapter. Then you set up a new DataRow object with the DataTable's `NewRow()` method. The `Add()` method of the Rows collection will add your new row to the collection. Finally, you call the `AcceptChanges()` method of the DataSet, which will automatically cascade all changes down to its inner DataTables and DataRows. Alternatively, you could call the `AcceptChanges()` method specifically on the inner object you wish to update, because the DataTable and DataRow also support the `AcceptChanges()` method.

As the note indicates, the source code for this example is available for download on the companion website. Go ahead and load the code into Visual Studio .NET and place a breakpoint on the `Add()` method. Execute the code by pressing F5. When you get to your breakpoint, type the following in the Command window:

```
?dtcustomer.rows.count
```

If you have difficulty working with the Command window, it might be because you are not in Immediate mode. If you see a > prompt, then this is most likely the case. Toggle the mode from Command mode to Immediate mode by typing `immed` at the prompt and pressing Enter. Now you should be able to debug your code.

You will see the number of rows in your Customers table, within your DataSet, prior to making changes. Hit F11 to step into the Add() method. This will update your DataSet with the newly added row. Go back to the Command window and hit the Up arrow key and Enter to re-execute the row count statement. The results will show that the Add() method increments the row count in your DataTable by one record. However, if you compare the result to the data in the database, you will see that your data still has the same number of original rows. This is an important point. None of your changes will be committed to the data source until you call the Update() method of the DataAdapter object. Finish the execution of the code to commit the changes in your DataSet.

In summary, all you have to do is execute the following steps to commit updates to your DataSet:

1. Instantiate your DataSet and DataAdapter objects.
2. Fill your DataSet object from the DataAdapter object.
3. Manipulate your DataSet by using the DataRow objects.
4. Call the AcceptChanges() method of the DataSet, DataTable, or DataRow object to commit your changes to your DataSet.

**Updating Your Data Source by Using the DataSet and DataAdapter**

In the previous section, we showed you how to insert a new row into your DataSet with the DataRow and DataTable objects and how to commit those changes to the DataSet. Committing changes to a DataSet doesn't mean that they are committed to the database. To commit your changes to the database, you use the Update() method, which is similar to the Fill() method, only it works in reverse, updating your data source with the deltagram from the DataSet. Listing 14.8 contains the code that enables you to update a database with changes from a DataSet object.
**NOTE** The code in Listing 14.8 can be found in the Updating Data Using ADO.NET solution, within the Click event of the Committing Changes From Your DataSet To Your Database button. You can download this solution from the companion website.

Although the Update() method is the only method you need to call to commit your changes back to the database, you must do some preparation work in advance. You must set up the appropriate action-based Command objects before you call the DataAdapter's Update() method. These Command objects map to the relevant insert, update, and delete stored procedures or SQL statements. Alternatively, you can use the CommandBuilder object to dynamically generate the appropriate SQL statements for you.

Listing 14.8: Committing DataSet Changes to a Database

```vbnet
Dim strSelectCustomers As String = "SELECT * FROM " & _
    "Customers ORDER BY CustomerID"
Dim strConnString As String = "data source=(local);" & _
    "initial catalog=Northwind;integrated security=SSPI;"
Dim connNorthwind As New SqlConnection(strConnString)
Dim daCustomers As New SqlDataAdapter(strSelectCustomers, _
    connNorthwind)
Dim dsCustomers As New DataSet()
Dim dtCustomer As DataTable
Dim drNewCustomer As DataRow
Dim custCB As SqlCommandBuilder = _
    New SqlCommandBuilder(daCustomers)

connNorthwind.Open()
daCustomers.Fill(dsCustomers, "dtCustomerTable")
connNorthwind.Close()

dtCustomer = dsCustomers.Tables("dtCustomerTable")

Try
    drNewCustomer = dtCustomer.NewRow()
    drNewCustomer(0) = "OTISP"
    drNewCustomer(1) = "Otis P. Wilson Spaghetti House."
    dtCustomer.Rows.Add(drNewCustomer)

    Dim drModified As DataRow() = _
        dsCustomers.Tables("dtCustomerTable").Select(Nothing, _
        Nothing, DataViewRowState.Added)

    connNorthwind.Open()
    daCustomers.Update(drModified)
Catch eInsertException As Exception
    MsgBox(eInsertException.Message)
    Throw eInsertException
Finally
    connNorthwind.Close()
End Try
```

In summary, all you have to do is execute the following steps to update your data source from your DataSet, after you've made your changes to the DataSet:

1. Create a DataRow array that contains all the modified rows. You can use the DataViewRowState enumeration to extract the appropriate rows. In our case, we used the DataViewRowState.Added value.
2. Call the `Update()` method of the `DataAdapter` object to send your changes back to the appropriate data source(s). Pass the `DataRow` array containing your changes.

That's it. As you see, it's quite simple to add new rows to your database. Updates and deletes work the same way.

**Managing DataSet Changes**

Because the `DataSet` is inherently disconnected from the data source, it must manage its changes by itself. The `DataSet` supports several "dirty" flags that indicate whether changes have occurred. These flags come in the form of the `GetChanges()` and `HasChanges()` methods, which enable the DataSet to reconcile changes back to its data source via the `DataAdapter` object. These methods are used in conjunction with the `RowState` property, which we discuss next.

**The RowState Property**

The `RowState` property enables you to track the condition of your rows. It works hand in hand with the `AcceptChanges()` method, which we discuss next. Until the `AcceptChanges()` method is called, the row state will be dirty. After `AcceptChanges()` has been called on the row, the row state will reflect a committed record that is no longer in flux. The `RowState` depends on what type of modification was made on the row, such as an insert, update, or delete.
Table 14.5 shows you the possible values that the `RowState` might contain and why. <table> <thead> <tr> <th><strong>Constant</strong></th> <th><strong>Description</strong></th> </tr> </thead> <tbody> <tr> <td>Added</td> <td>Occurs when a new row is first added to the <code>DataRowCollection</code></td> </tr> <tr> <td>Deleted</td> <td>Indicates that the row was marked for deletion</td> </tr> <tr> <td>Detached</td> <td>Indicates that the row is “floating” and not yet attached to a <code>DataRowCollection</code></td> </tr> <tr> <td>Modified</td> <td>Indicates that the row is &quot;dirty&quot;</td> </tr> <tr> <td>Unchanged</td> <td>Indicates that either the row was never touched in the first place, or the <code>AcceptChanges()</code> method was called, committing the changes to the row</td> </tr> </tbody> </table>

The **AcceptChanges()** Method

Until you call this method, all the modified rows in your DataSet will remain in edit mode. The `AcceptChanges()` method commits your modifications to a DataSet. The DataTable and DataRow objects also support this method. Keep in mind that this will not update your database, just your DataSet and friends. `AcceptChanges()` works incrementally, updating the DataSet with the modifications since the last time you called it. As we noted earlier, you can cascade your changes down to child objects. If you wanted to automatically accept changes for all the DataRows within a DataTable, you would need to call only the `AcceptChanges()` method on the DataTable, which automatically commits the changes for all its member DataRows.

The **RejectChanges()** Method

If you decide not to commit the new row to the DataSet, call the `RejectChanges()` method. This method doesn't require any arguments. It simply deletes the newly added row or reverses the changes you made to an existing row.

The **HasChanges()** Method

The `HasChanges()` method queries whether a DataSet contains "dirty" rows. Generally, you would call this method before you called the `GetChanges()` method, so you don't unnecessarily retrieve changes that might not exist. This method is overloaded: you can pass in a DataRowState value as a parameter. By doing this, you can filter out specific change types. If you only wanted to query whether the DataSet had any deletions, you would type:

```vbnet
If dsCustomers.HasChanges(DataRowState.Deleted) Then
    ' Do some logic to get the changes
End If
```

The **GetChanges()** Method

The `GetChanges()` method creates a DataSet containing the changes made since the last time you called the `AcceptChanges()` method. If you haven't called `AcceptChanges()`, then it will retrieve a copy of the DataSet with all your changes. You can optionally use the overloaded version of this method, which accepts the DataRowState as a parameter. This way, you can get only the changes based on a certain state. If you wanted to get only the deletions for a DataSet, you would first call the HasChanges() method to see if any deletions occurred and then retrieve the changes:

```vbnet
dsCustomers = dsCustomers.GetChanges(DataRowState.Deleted)
```

**Merging**

Another technique for working with DataSets uses the ability to merge results from multiple DataTables or DataSets. The merge operation can also combine multiple schemas together. The Merge() method enables you to extend one schema to support additional columns from the other, and vice versa. In the end, you end up with a union of both schemas and data.
This is useful when you want to bring together data from heterogeneous data sources, or to add a subset of data to an existing DataSet. The merge operation is quite simple:

```vbnet
dsCustomers.Merge(dsIncomingCustomers)
```

**Typed DataSets**

There are many data typing differences between ADO and ADO.NET. In classic ADO, you have more memory overhead than in ADO.NET because the fields in a RecordSet are late-bound, returning data as the Variant datatype. ADO.NET supports stricter data typing: it uses the Object datatype, rather than the Variant, for your data. Although Objects are more lightweight than Variants, your code will be even more efficient if you know the type ahead of time. You could use the GetString() method to convert your column values to strings. This way, you avoid boxing your variables to the generic Object type. You can use similar syntax for the other datatypes, such as GetBoolean() or GetGuid(). Try to convert your values to the native format to reduce your memory overhead.

When you work with classic ADO, you experience performance degradation when you refer to your fields by name. You would type the following:

```vbnet
strName = rsCustomers.Fields("CustomerName").Value
```

Now, with ADO.NET, you can use strong typing to reference the fields of a DataSet directly by name, like so:

```vbnet
strName = dsCustomers.CustomerName
```

Because the values are strictly typed in ADO.NET, you don't have to write type-checking code. ADO.NET will generate a compile-time error if you have a type mismatch, unlike the ADO runtime errors you get much too late. With ADO.NET, if you try to pass a string to an integer field, you will raise an error when you compile the code.

**Creating Custom DataSets**

You don't need a database to create a DataSet. In fact, you can create your own DataSet without any data at all. The ADO.NET DataSet enables you to create new tables, rows, and columns from scratch. You can use these objects to build relationships and constraints, ending up with a mini-database into which you can load your data. Listing 14.9 contains code that enables you to build a simple three-column online shopping cart DataSet on the fly. First, let's create a BuildShoppingCart() method that will create your table schema.

Listing 14.9: Creating a DataSet on the Fly

```vbnet
Public Function BuildShoppingCart() As DataTable
    Dim tblCart As DataTable = New DataTable("tblOrders")
    Dim dcOrderID As DataColumn = New DataColumn("OrderID", _
        Type.GetType("System.Int32"))
    Dim dcQty As DataColumn = New DataColumn("Quantity", _
        Type.GetType("System.Int32"))
    Dim dcCustomerName As DataColumn = New DataColumn("CustomerName", _
        Type.GetType("System.String"))

    tblCart.Columns.Add(dcOrderID)
    tblCart.Columns.Add(dcQty)
    tblCart.Columns.Add(dcCustomerName)

    Return tblCart
End Function
```

Now, all you have to do is set a DataTable variable to the results of your method and populate it. Place a breakpoint on the Add() method of the Rows collection, as shown in Figure 14.6. This way, you can use the Immediate mode of the Command window to see if your custom DataSet was successfully updated. With ADO.NET, it's easy to use array-like navigation to return the exact value you are looking for. In this example, you query the value of the customer name in the first row by using the `tblCart.Rows(0).Item(2)` statement. Figure 14.6 shows you the results. Again, you can see the power of constructors: in this sample, you initialize a DataTable variable with the result of a method call.
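Figure 14.6 itself is not reproduced in the text; a minimal sketch of the kind of calling code it shows might look like the following, where the order values are placeholders for illustration only.

```vbnet
' Build the empty cart schema defined in Listing 14.9 and add one row to it.
Dim tblCart As DataTable = BuildShoppingCart()

Dim drOrder As DataRow = tblCart.NewRow()
drOrder("OrderID") = 1
drOrder("Quantity") = 3
drOrder("CustomerName") = "Amanda Aman Apak"
tblCart.Rows.Add(drOrder)

' Array-like navigation: the third column (CustomerName) of the first row.
Console.WriteLine(tblCart.Rows(0).Item(2))
```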
Being able to create your own DataSet from within your code enables you to apply many of the techniques discussed in this book. You can use these custom DataSets to store application data, without incurring the cost of crossing your network until you need to commit your changes.

**MANAGING CONCURRENCY**

When you set up your DataSet, you should consider the type of locking, or concurrency control, that you will use. Concurrency control determines what will happen when two users attempt to update the same row. ADO.NET uses an optimistic architecture, rather than a pessimistic model. *Pessimistic locking* locks the database when a record is retrieved for editing. Be careful when you consider pessimistic locking. Pessimistic locking severely limits your scalability. You really can't use pessimistic locking in a system with a large number of users. Only certain types of designs can support this type of locking.

Consider an airline booking system. A passenger (let's call her Sam) makes a request to book a seat and retrieves a list of the available seats from the database. Sam selects a seat and updates the information in the database. Under optimistic locking, if someone else took her seat, she would see a message on her screen asking her to select a new one. Now let's consider what happens under pessimistic locking. After Sam makes a request for the list of available seats, she decides to go to lunch. Because pessimistic locking prevents other users from making changes while Sam is making edits, everyone else would be unable to book their seats. Of course, you could add some logic for lock timeouts, but the point is still the same. Pessimistic locking doesn't scale very well. In addition, a disconnected architecture cannot support pessimistic locking because connections attach to the database only long enough to read or update a row, not long enough to maintain an indefinite lock.

In classic ADO, you could choose between different flavors of optimistic and pessimistic locks. This is no longer the case. The .NET Framework supports only an optimistic lock type. An optimistic lock type assumes that the data source is locked only at the time the data update commitment occurs. This means changes could have occurred while you were updating the disconnected data cache. A user could have updated the same CompanyName while you were making changes to the disconnected DataSet. Under optimistic locking, when you try to commit your CompanyName changes to the data source, you will overwrite the changes made by the last user. The changes made by the last user could have been made after you had retrieved your disconnected DataSet. You could have updated the CompanyName for a customer after someone else had updated the Address. When you push your update to the server, the updated address information would be lost. If you expect concurrency conflicts of this nature, you must make sure that your logic detects and rejects conflicting updates.

If you have worked with ADO 2.x, you can think of the Update() method of the DataAdapter object as analogous to the UpdateBatch() method you used with the RecordSet object. Both models follow the concept of committing your deltagram to the data source by using an optimistic lock type. Understanding how locking works in ADO.NET is an essential part of building a solid architecture. ADO.NET makes great strides by advancing the locking mechanism. Let's take a look at how it changes from classic ADO in order to get an idea of how much power ADO.NET gives you.
In ADO 2.x, when you make changes to a disconnected RecordSet, you call the `UpdateBatch()` method to push your updates to the server. You really don't know what goes on under the covers, and you hope that your inserts, updates, and deletes will take. You can't control the SQL statements that modify the database.

When you use optimistic concurrency, you still need some way to determine whether your server data has been changed since the last read. You have three choices for managing concurrency: time-date stamps, version numbers, and storing the original values. Time-date stamps are a commonly used approach to tracking updates. The comparison logic checks to see if the time-date of the updated data matches the time-date stamp of the original data in the database. It's a simple yet effective technique. Your logic would sit in your SQL statements or stored procedures, such as:

```sql
UPDATE Customers
SET CustomerID = 'SHAMSI',
    CompanyName = 'Irish Twinkle SuperMart'
WHERE DateTimeStamp = olddatetimestamp
```

The second approach is to use version numbers, which is similar to using time-date stamps; this approach labels each row with a version number, which you can then compare. The last approach is to store the original values so that when you go back to the database with your updates, you can compare the stored values with what's in the database. If they match, you can safely update your data because no one else has touched it since your last retrieval.

ADO.NET does data reconciliation natively by using the `HasVersion()` method of your DataRow object. The `HasVersion()` method indicates whether a particular version of a row's data exists. The possible versions are `Current`, `Default`, `Original`, and `Proposed`; these values fall under the `DataRowVersion` enumeration. If you wanted to see whether a DataRow has a proposed value that hasn't yet been committed, you could check for that version by using the `HasVersion()` method:

```vbnet
If r.HasVersion(DataRowVersion.Proposed) Then
    ' Add logic
End If
```

This concludes our discussion of the basic properties of the ADO.NET objects. After reading this chapter, you should be able to answer the questions that we asked you in the beginning:

- What are .NET data providers?
- What are the ADO.NET classes?
- What are the appropriate conditions for using a DataReader versus a DataSet?
- How does OLE DB fit into the picture?
- What are the advantages of using ADO.NET over classic ADO?
- How do you retrieve and update databases from ADO.NET?
- How does XML integration go beyond the simple representation of data as XML?

Although you covered a lot of ground in this chapter, there is still a good amount of ADO.NET functionality we haven't discussed. We use this chapter as a building block for the next few chapters. In the next chapter, you will learn more about retrieving data using ADO.NET. Chapter 15 will deal primarily with retrieving data from a data source; that chapter is a good introduction to how you will use data in most of your ASP.NET applications. ASP.NET was covered in Part II. Part IV rounds out with a couple of chapters on editing data and using this data in your Windows Forms applications.
High Performance, Point-to-Point, Transmission Line Signaling André DeHon (andre@ai.mit.edu) and Thomas F. Knight, Jr. (tk@ai.mit.edu) MIT Artificial Intelligence Laboratory NE43-791, 545 Technology Sq., Cambridge, MA 02139 Phone: (617) 253-5868; FAX: (617) 253-5000 June 30, 1995 Abstract Inter-chip signaling latency and bandwidth can be key factors limiting the performance of large VLSI systems. We present a high-performance, transmission-line signaling scheme for point-to-point communication between VLSI components. In particular, we detail circuitry which allows a pad driver to sense the voltage level on the attached pad during signaling and adjust the drive impedance to match the external transmission line impedance. This allows clean, reflection-free signaling despite the wide range of variations common in IC device processing and interconnect fabrication. Further, we show how similar techniques can be used to adjust the arrival time of signals to allow high signaling bandwidth despite variations in interconnect delays. The scheme employed here for high-performance signaling is a specific embodiment of a more general technique. Conventional electronic systems must accommodate a range of system characteristics (e.g., delay, voltage, impedance). As a result, circuit designers traditionally build large operating margins into their circuits to guarantee proper operation across all possible ranges of these characteristics. These margins are generally added at the expense of performance. The alternative scheme exemplified here is to sample these system characteristics in the device's final operating environment and use this feedback to tune system operation around the observed characteristics. This tuning reduces the range of characteristics the system must accommodate, allowing increased performance. We briefly contrast this sampled, system-level feedback with the more conventional, fine-grained feedback employed on ICs (e.g., PLLs). 1 Introduction In this paper, we address the issue of high-performance, point-to-point transmission-line signaling. Our objective is to achieve low transmission latency and high signaling bandwidth with a design which is economical in real estate and power consumption while remaining compatible with commodity IC technology. For large VLSI systems, inter-chip signaling can account for a significant fraction of the operational cycle time. The delay between a pair of ICs can be decomposed into three components:
1. output delay — delay through the output pad to drive the large, external capacitance associated with any component pad
2. signal propagation delay — the time required for a signal to propagate across the interconnect media from the source to the destination
3. input delay — delay through the input pad while the signal is being sensed and level-restored for internal IC consumption
In this paper, we specifically address the issue of minimizing signal propagation delay across transmission-line interconnect. Note that interconnect is best modeled as a transmission line whenever the propagation delay across the interconnect is comparable to or greater than the rise or fall time of the signal.
That is:
\[ t_{pd} \geq t_r \] (1)
The transmission line propagation delay (\( t_{pd} \)) is determined by the materials in use and the physical interconnect length (\( l \)):
\[ v = \frac{1}{\sqrt{\mu \varepsilon}} = \frac{c}{\sqrt{\mu_r \varepsilon_r}} \] (2)
\[ t_{pd} = \frac{l}{v} \] (3)
For most available interconnect technologies \( \mu_r = 1 \) and \( 2 \leq \varepsilon_r \leq 5 \). With a relative dielectric constant \( \varepsilon_r \) of four \(\left( v = \frac{c}{\sqrt{\varepsilon_r}} = 15 \text{ cm/ns} \right)\), which is common among PCB technologies, and fast edge rates \(( t_r \leq 0.5 \text{ ns})\), traces over a few centimeters begin to exhibit transmission line effects. For long interconnects (tens of centimeters), propagation delay becomes the dominant fraction of inter-chip communication delay. The minimum transmission line propagation delay, however, is only achieved when the transmission line is properly terminated so that no reflections occur and the line settles to the desired voltage in one propagation time. Process variation in IC and PCB fabrication makes this termination matching difficult to achieve. Termination integrated on-chip is desirable to avoid the area and cost associated with external termination devices, but on-chip resistances vary widely with IC processing, operating voltage, and temperature. Further, the characteristics of the interconnect media itself can vary. To avoid these problems, we develop a technique which allows an output driver to examine the voltage on the line during signal propagation and servo the drive impedance to match the attached interconnect. This technique requires no external termination devices to achieve clean impedance matching and allows a single IC to match cleanly across a wide range of interconnect impedances. This paper also addresses the issue of increasing signaling bandwidth on our transmission line interconnect. In this domain, two key properties limit signaling bandwidth:
- **signal deformation** – limited rise and fall times along with dispersive effects in circuits and interconnect spread out data bits.
- **skew and delay variation** – uncertainty in the propagation delay through components or interconnect must be accommodated by spacing the data bits far enough apart to cover the entire range of possible transition timings.
To achieve reliable, high bandwidth signaling over transmission line interconnect, our techniques sense the delays actually seen on a fabricated component in-system. On-chip delay is then adjusted to make the total propagation delay between ICs conform to tighter timing constraints. Together, these techniques allow us to remove much of the uncertainty and variation associated with signal transmission and, consequently, significantly reduce the inter-bit separation necessary for reliable inter-chip signaling. As with the impedance matching, this technique also allows a system to tolerate a larger variance in interconnect characteristics while still achieving reliable, high bandwidth operation. Most CMOS circuit designers are familiar with the practice of designing circuitry to compensate for the wide variations associated with silicon processing. These adjustable termination and timing techniques take that strategy one step further, allowing the component to compensate for variations in its external environment. The key theme here is to measure system characteristics and then employ circuitry to bring the characteristics into a tight and favorable operating range.
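As a quick numeric check of Equations 1–3 above, the Python sketch below computes the propagation velocity and delay for a board-level trace and applies the transmission-line criterion of Equation 1. It is purely illustrative; the 10 cm trace length and 0.5 ns rise time are assumed example values, not figures taken from the text.

```python
import math

C_CM_PER_NS = 30.0  # speed of light, roughly 30 cm/ns

def propagation_velocity(eps_r, mu_r=1.0):
    """Equation 2: v = c / sqrt(mu_r * eps_r), in cm/ns."""
    return C_CM_PER_NS / math.sqrt(mu_r * eps_r)

def propagation_delay_ns(length_cm, eps_r, mu_r=1.0):
    """Equation 3: t_pd = l / v."""
    return length_cm / propagation_velocity(eps_r, mu_r)

# Assumed example: 10 cm PCB trace, eps_r = 4, 0.5 ns edge rate.
length_cm, eps_r, t_rise_ns = 10.0, 4.0, 0.5
t_pd = propagation_delay_ns(length_cm, eps_r)
print(f"v    = {propagation_velocity(eps_r):.1f} cm/ns")  # ~15 cm/ns
print(f"t_pd = {t_pd:.2f} ns")                            # ~0.67 ns
# Equation 1: treat the trace as a transmission line when t_pd >= t_r.
print("model as transmission line:", t_pd >= t_rise_ns)
```

With \( \varepsilon_r = 4 \) this reproduces the roughly 15 cm/ns velocity quoted above, and even a 10 cm trace fails the lumped-circuit assumption for 0.5 ns edges.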
Bringing these techniques together, we present a design for adaptable I/O pads engineered for high-performance, point-to-point transmission-line signaling. Section 2 presents an overview of the signaling strategy. We show a low-voltage swing, matched-impedance output pad for driving series-terminated transmission lines along with a complementary receiver in Section 3. We introduce a technique in Section 4 for capturing dynamic timing information in response to signaling events. Section 5 details how the series impedance is adjusted in-system so the source drive impedance matches the impedance of the attached transmission line. Section 6 reviews techniques for on-chip delay adjustment and Section 7 shows how the timing extraction and delay adjustment techniques facilitate the retime of inter-chip communications. Section 8 describes how these techniques impact the testing of high performance interconnect. In Section 9, we present highlights of a prototype I/O pad which incorporates many of the techniques described in this paper. In Section 10 we highlight limitations of the techniques presented here before concluding in Section 11. ### 2 Signaling Strategy To meet the needs of point-to-point signaling with high speed and acceptable power, we utilize a series-terminated, low-voltage swing signaling scheme which uses on-chip termination and employs feedback to match termination and transmission line impedances. For the purposes of the following discussion, we focus on a CMOS integrated circuit technology. Low-voltage swing signaling is motivated by the desire to drive the resistive transmission line load with acceptable power dissipation. We see in Equation 4 that power is quadratic in signaling voltage. In the designs which follow, we specifically consider signaling between zero and one-volt. Limiting the voltage swings to one-volt saves a factor of 25 in power over traditional five-volt signaling (i.e., with a 50Ω transmission line, \( P_{\text{drive}} = 250\text{mW} \) with five-volt signal swings and \( P_{\text{drive}} = 10\text{mW} \) with one-volt swings). \[ P_{\text{drive}} = \frac{(\Delta V_{\text{line}})^2}{R} \] (4) Recall that a properly series terminated transmission line will present a load of \( R = 2Z_0 \) to the driver since the transmission line impedance \( Z_0 \) occurs in series with the termination resistance. To achieve one-volt signaling, we provide components with a one-volt power supply for the purpose of signaling. This frees the individual components from converting between the logic supply voltage and the signaling voltage level. Any power which must be consumed in the process of generating the one-volt supply is dissipated in the power supply, and not in the individual ICs. Series termination offers several advantages over parallel termination for point-to-point signaling. First, we can integrate the termination impedance into the driver. In the parallel terminated case, we needed to drive the voltage on the transmission line close to the signaling supply rail. The effective resistance across the driver between the supply rails and the driven transmission line must be small compared to the transmission line impedance, \( Z_0 \), in order to drive the transmission line voltage close to the signaling supply (See Figure 1). In a CMOS implementation, this means that the size of the transistors implementing the final driver must have a large \( W/L \) ratio to make the resistance small. 
As a consequence, the final driver is large and, therefore, has considerable associated capacitance, \( C_{\text{driver}} \). The larger capacitance, of course, increases the delay required to buffer internal logic signals to drive the final pad output stage. It also means that the charging power, \( P_{\text{charge}} \) (Equation 5), will be large. In contrast, the series terminated driver can use a higher-impedance driver. The higher impedance of the series terminated driver allows it to have a smaller \( W/L \) ratio and hence smaller \( C_{\text{driver}} \), resulting in a lower output delay and requiring less power to drive the output. \[ P_{\text{charge}} = \frac{1}{2}C_{\text{driver}}(\Delta V_{\text{driver}})^2f \] (5) Additionally, the series terminated configuration gives us the opportunity to use voltage feedback to adjust the on-chip, series termination to match the transmission line impedance. We expect both the transmission line impedance and the conductance of the drive transistors to vary due to process variations. By monitoring the stable line voltage during the round-trip transit time between the initial transition at the source end of the transmission line and the arrival of the reflection, a controller can identify whether the driver termination is high, low, or matched to the transmission line impedance. With a properly terminated series transmission line, we expect the voltage to settle halfway between ground and the signaling supply during the first round-trip transit time. If the voltage settles much above the halfway point, the drive impedance is too low. If the voltage settles much below the halfway point, the driver impedance is too high (See Figure 2). By monitoring the voltage at the pad, the system can adjust the drive impedance until it matches the line impedance. This allows the integrated circuit to compensate for process variation in both the silicon processing and PCB manufacture. ### 3 Signaling Circuitry In this section we present driver and receiver circuitry which facilitate low voltage signaling and impedance adjustment. #### 3.1 Driver To control the output pad impedance, the output driver is connected to the high and low signaling supplies through adjustable impedance networks. As shown in Figure 3, a set of exponentially sized drive transistors form the adjustable impedance network. The impedance control drivers can be enabled via digital control lines from a scan-loaded control register and serve as a D-to-A network for the pad drive resistance. Gabara and Knauser suggest a similar scheme which places only the set of exponentially sized transistors between the signaling supplies and the output pad [6]. An AND gate preceding the D-to-A network serves to combine the logical drive value with the impedance selection. Shown here is the basic CMOS transmission line driver. Shown at right is the basic driver. Shown on the left is a simplified model of the driver making explicit the fact that each transistor, when enabled, can be modeled as a resistor of some resistance determined by the transistor's W/L ratio and process parameters. Figure 1: CMOS Transmission Line Driver Shown above is a digital controlled-impedance driver after [5]. The digital values on pu.impedance and pd.impedance enable the parallel impedance control transistors. Drive transistors are placed in series between the impedance control networks and the output pad. The desired signaling voltage is connected to the pad by enabling the appropriate drive transistor. 
The digital impedance controls remain static during normal operation. N.B. This design assume the signaling supply (V_{signal}) is lower than the high logic value on pu and the pu.impedance lines making it feasible to employ NMOS transistors for the pull-up impedance network. Figure 3: CMOS Driver with Adjustable Drive Impedance Shown above are three possibilities for the voltage waveform at the source of a series-terminated transmission line. At the top, we have a matched impedance situation. The middle diagram shows a case where the series terminating impedance is too large, and the bottom diagram shows a case where the series impedance is too small. N.B. T on the time axis is the unidirectional transit time, $t_{pd}$ from Equation 3. Figure 2: Series-Terminated Source Transitions 3.2 Receiver The receiver must convert the low-voltage swing input signal to a full-swing logic signal for use inside the component. In the interest of high-speed switching, we want a receiver which has high gain for small signal deviations around the mid-point between the signaling supplies. [2] and [9] introduce suitable differential receivers. Figure 4 shows one such receiver. Figure 4: CMOS Low-voltage Differential Receiver Circuitry 4 Sample Register This section introduces the sample register, the key circuitry for timing extraction. A sample register is a string of latches enabled at closely spaced time intervals. Each latch “samples” the binary value of the signal under test during the time it is enabled. By rippling the latch enables in rapid succession, the sample register captures a discrete representation of the time behavior of a signal. In this section, we develop sample register circuits starting from an “ideal” model and progressively refining the circuitry into practical implementations. Alternately, one could consider delaying the target input signal as seen by each latch and using a single, register-wide enable rather than delaying the enables. We focus our discussion around a delayed enable since we generally have more control over the timing of the enable pulses we generate than we do over a random signal whose timing we wish to capture. The sample register is compatible with scan-based Test-Access Ports (TAPs), such as the JTAG/IEEE 1149 standard [3] TAP. The TAP can be used to initiate events which the sample register captures and to offload the data captured in the sample register. An ideal sample register is composed of a sequence of latches each enabled at fixed delays. When a timing event occurs, a short enable pulse is driven into the sample register. Each sample latch records the value seen by the receiver when it was last enabled. After the enable pulse propagates through the sample register, the sample register holds a discrete-time sample of the target input. Figure 5: Ideal Sample Register 4.1 Ideal An ideal sample register would consist of an infinitely long string of latches, each enabled at uniform time intervals (See Figure 5). When the enable pulse fires at the beginning of the delay chain, the sample register captures a discrete-time representation of the attached input signal. We seek to approximate the ideal behavior with a reasonably small, finite-length sample register and employ simple circuitry to generate the sequence of enables necessary to capture timing samples. 4.2 Inverter Timing Chain For many applications, a pair of inverters will suffice to form the inter-sample-bit delay (See Figure 6). 
The delay through a single inverter is often the finest granularity of timing used in a design. With a little care during layout, the geometry and loading of each inverter in the delay chain can be made identical. Consequently, the only variation between delays will be due to process variation across the die. Since a single sample register will typically occupy only a small region of the die, the processing variation among inverters in a sample register will be minimal. 4.3 Sliding Window If we can cause the event we are timing to occur under scan control, we can tradeoff time for space. We do not have to capture the entire waveform in a single event. We can reuse a small sample register in time to capture a long waveform. The temporal placement of the sample register can be controlled via the TAP, and the composite waveform can be reconstructed off chip. Figure 7 shows the basic sliding window concept. Figure 8 shows one possible implementation for the sliding window. The sample pulse is recycled after rippling through several sample delays. A scan-loaded configuration is compared against a trip count to allow the sample pulse to be recycled for a predetermined number of times. After the enable pulse settles following a trigger event, the sample register will contain the values corresponding to the last time the pulse was allowed to ripple through the delay chain. Note that we depict the ripple pulse recycling before the end of the sample register. It is unlikely that the delay on the recycle path can be accurately matched to the inter-sample time defined by the inverters. By recycling the ripple from a point prior to the end of the sample register we can do two things: (1) provide overlap between sample windows and (2) make sure that there is always an inverter-pair delay between adjacent samples used to reconstruct the longer waveform. As we will see in the following section, with sufficient overlap, calibration can help us factor out any delay anomalies associated with the recycle path. The choice of how many bits to include in the sample register and recycle path will depend on the relative speed of operation of various logic functions in the target technology. In particular, the operational frequency of the counter-comparator combination will set a lower limit on the delay between successive ripple pulses. For example, a technology with 100 ps minimum inverter delays and a maximum counter operational frequency of 500 MHz A simple sample register can be implemented using a pair of inverters to provide the fixed, intersample-bit delays. Figure 6: Sample Register | Target Data | 00000000011111111... | Sample 1 | 00000000011111111... | Sample 2 | 00000000011111111... | Sample 3 | 00000000011111111... Using a small, fixed-size, sample register, we can capture a portion of the discrete-time waveform for a signal with each timing event. If we vary the placement of the capture window and repeatedly fire the timing event, we can capture the entire waveform over a series of such samples. Figure 7: Sliding Window We can implement the sliding window in our sample register by recycling the sample-enable pulse a configurable number of times. The values left in the sample register after a timing event will correspond to the data capture during the last cycle made by the enable pulse. Figure 8: Recycling Sample Register would require a minimum of 10 inverter pairs in the recycling portion of the sample delay chain. 
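The sizing rule behind that example is simple enough to capture in a couple of lines. The sketch below is a minimal illustration using the 100 ps inverter delay and 500 MHz counter frequency quoted above; the function name and structure are ours, not part of the design.

```python
import math

def min_recycle_pairs(inverter_delay_ps, counter_freq_mhz):
    """Minimum number of inverter pairs in the recycle loop so that the time
    between successive ripple pulses is no shorter than one period of the
    counter-comparator logic."""
    counter_period_ps = 1e6 / counter_freq_mhz   # MHz -> ps
    pair_delay_ps = 2 * inverter_delay_ps        # one sample bit = an inverter pair
    return math.ceil(counter_period_ps / pair_delay_ps)

print(min_recycle_pairs(100, 500))  # -> 10, matching the example above
```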
With slightly lower accuracy, it is possible to use only a single latch in the sample register. As shown in Figure 9, a mux can be used to select the fine timing delays while the counter-comparator combination selects the coarse-grain timing window. The delay between sample bits in this scheme will not generally be as accurate as the earlier circuits, but may be sufficient for many applications. 4.4 Calibration and Sharing With the circuits shown so far, we only know when the signal is occurring in units of inverter-pair delays. Since process and environmental variation can easily account for a factor of two variance in an inverter delay, inverter-pair delays alone are not sufficient to extract fine-grained timing information. If a known timing source, such as the component clock, is available we can mux the sample input between the sample register and the known timing source to calibrate the inter-sample-bit delay time (See Figure 10). A known-frequency clock will allow us to determine the timing of events on the sample register and reassemble the overlapped, sliding windows appropriately (See Appendix A). The mux can also be used to share a single sample register between several target signals. This muxing can be used simply to minimize the need for sample registers. It can also serve to acquire accurate, relative timing information for groups of related signals. For example, one might share a single sample register among the bits of an 8-bit data bus and its data strobe. This arrangement would provide accurate timing information on the relative occurrence of data bit transitions to each other, as well as, providing an indication of when data bit transitions occur in relation to the data strobe. 4.5 Tighter Timing In applications where timing accuracy tighter than a pair of inverter delays is required, a fine-resolution, variable-delay buffer can be placed at the front-end of the delay path (See Figure 11). Some variable delay buffers developed for CMOS, Phase-Locked Loop (PLL) circuitry are suitable for this application. Figure 12 shows a voltage-controlled, variable-delay buffer which operates by varying the capacitive load seen by each stage. Horowitz [7] details a phase interpolator which smoothly varies the phase in 15 steps between two references under digital control. Such a phase interpolator can be applied to an inverter-pair delay to provide a resolution of roughly one-eighth of an inverter delay. 4.6 Enable Pulse To acquire a “sample”, we must be able to initiate both the event under test and the sample-enable pulse. When we are using a scan-based TAP for offloading the sample registers, it will be most convenient to initiate these events under scan control. In practice, we generally want the enable pulse to fire synchronized to the event we wish to observe on the IC. Scan control is used to prime the enable pulse to trigger on the next synchronization event and to initiate the event for observation. For example, when the component is in a scan testing mode, the standard scan register load facilities can be used to cause signal transitions within the IC or at the IC boundary. Once fired, circuitry inhibits the enable pulse from firing again until we have had a chance to offload the sample register and configure it to capture the next timing event. 4.7 Summary Combining these techniques we can repeatedly fire an event we wish to time and sample its behavior in narrow, fixed-size windows. 
By integrating the information acquired across multiple samples at varying window offsets and calibrating to known frequency and phase sources, we can build up an accurate, discrete-time representation of a signal on the IC. We can easily achieve timing resolutions down to two inverter delays and, with care, can achieve even tighter resolutions. Data capture and acquisition can be completely controlled through a scan-based TAP.

5 Impedance Matching Each controlled impedance pad is constructed from:
1. Driver (Figure 3)
2. Receiver biased to trip halfway between the high and low signaling supplies (Figure 4)
3. TAP scan register as shown in Figure 13

With slightly less accuracy, a fine-grained, variable-delay buffer can be employed, reducing the number of sample latches required to one. **Figure 9: Sample Register with Single Latch**

To calibrate the sample register, we can mux in a known frequency and phase reference such as the system clock. **Figure 10: Calibrated Sample Register**

We can achieve higher resolution by adding a fine-grained, variable-delay buffer in the enable path preceding the inverter chain. This allows us to vary the timing of the sample taken by sub-inverter-pair quanta. Figure 11: Sample Register with Fine Prefix Delay

Shown here is a voltage-controlled delay line (VCDL) after [1] and [8]. Varying vctrl effectively controls the amount of capacitive load seen by the output of each inverter stage and hence the delay through each inverter stage. The number of stages one uses in the VCDL will depend on the range of delays required from the buffer. Figure 12: Voltage-Controlled Variable-Delay Buffer

The receiver plays the dual purpose of (1) bringing the signal onto the chip when the pad is acting as an input and (2) monitoring the source of the transmission line when calibrating driver impedance. The pad's sample register is enabled whenever the core logic toggles the value driven into the output pad. Following such transitions, the sample register records the value on the output pad as seen by the receiver. Since the receiver is biased to trip at the midpoint voltage, we can use the value recorded in the sample register to determine when the series-terminated transmission line is matched. When the drive impedance is too low, the driver will quickly drive the line past the half-way point and the receiver will capture the transition. If the impedance is too high, the driver will not drive the voltage past the half-way point and the receiver will not see a transition until subsequent reflections bring the voltage past the midpoint. By firing a series of test transitions and recovering the sampled result, an off-chip controller can select an impedance setting where the drive impedance is well matched to the transmission line impedance. The impedance selection time is almost entirely dictated by the bandwidth to the off-chip controller. Impedance setting using a scan-based TAP can take half a millisecond for a single pin. For an entire chip with hundreds of pins, scan-based impedance setting can take on the order of 100 ms (See Appendix B). Figure 14 shows data collected from a test chip (See Section 9) while scanning through impedance settings. Figure 15 shows both ends of a series-terminated transmission line after the driver has been automatically matched to the line impedance using the data collected in Figure 14. The sample register is important to this application for two key reasons: 1. The delays through the driver and receiver will depend on IC processing.
By capturing a window of the signal, we can be sure to capture the transition regardless of processing. 2. When we cannot control the length of the attached transmission line, it is difficult to know when the transmission line voltage corresponds to the initial drive or reflections. Coupled with process variation, we cannot simply sample the line at one predetermined time and know whether we are observing the initial drive or a later reflection.

“Boundary Cell” contains the standard boundary scan registers for a bidirectional I/O pad. “Impedance Register” holds the digital value which controls the pad drive impedance. “Sample Register” is a sample register as described in Section 4. Figure 13: Bidirectional, Controlled-Impedance Pad Scan Architecture

The data above accompanies a low-to-high transition of the output value. The dark areas indicate that the receiver saw a high value, while the light areas indicate a low value. Impedance setting 0x3F corresponds to all impedance transistors enabled, which is the lowest impedance setting, while 0x00 corresponds to all impedance transistors disabled, the highest setting. Figure 14: Sample Register Data Following Low-to-High Transition of Output Value

Scope traces show waveforms near both ends of a 50Ω PCB trace. Figure 15: Matched Impedance Transitions using Sample Register Data

The sample register allows us to see transitions occur. The transitions act as calibration marks, informing us when various events occur. In effect, we have built a crude Time-Domain Reflectometer (TDR) which we use to match the driver impedance to the line impedance. The discrete-time sample in the sample register is coarser than that of real TDRs; the rise time on the signals is much slower, and the length of the line monitored is limited by the window size captured by the sample register. For comparison, we connected one of the test pads to a small wire terminating in a short circuit and recorded the sample data for various impedance settings. Figure 16 compares the sample data to a real TDR waveform. The time range of the test pad is limited because we used a 16-bit sample register without the recycling technique (See Section 9). The sliding window (Section 4.3) allows us to extend the time range captured considerably for modest additional silicon real-estate. Section 9 summarizes the characteristics of the test pad used to acquire the data shown in Figures 14, 15, and 16. Section 10 elaborates on the limitations of this technique as well as the expected usage patterns.

6 Delay Adjustment 6.1 Mechanism We can use similar techniques to adjust the timing of key IC signals. Figure 17 shows a variable-delay buffer suitable for coarse-grain delay adjustment. Since this buffer uses inverter pairs as the basic, unit-delay element, it provides the same granularity of adjustment as most of the sample register designs presented in Section 4. For finer-grained delay adjustments, we can borrow variable-delay elements from PLL circuits such as the VCDL buffer (Figure 12) or Horowitz’s phase interpolator mentioned in Section 4.5. 6.2 Comparison with PLLs Phase-Locked Loops are commonly employed to match timing of on-chip clock signals to external references. In such cases, where the signal is periodic with fixed frequency, on-chip circuitry can close the feedback loop to adapt component timing to match system timing. However, PLL techniques cannot be applied to non-periodic control signals and data paths. Further, traditional PLLs cannot be used to guarantee the simultaneous arrival of the bits of a wide data bus.
Our TAP-based timing extraction and timing control can provide in-system adjustment of on-chip timing for these non-periodic signals. The TAP can be employed to force events to occur and capture their timing relationships. Through the TAP, we can adjust the delay controls to servo the on-chip delays until the proper timing relationships are achieved. The feedback loop using TAP-based timing extraction and control is, of course, much slower than the feedback loops in conventional PLLs and does not operate continuously. Coupled with generally coarser-grained timing information, this makes TAP-based timing control unsuitable for the fine-grained timing adjustment provided by competent PLLs in the same technology. The coarser control of TAP-based timing does, however, bring many of the advantages of feedback control to non-periodic signals and signal groups.

6.3 In-system Tuning On-chip delay adjustment allows us to tune the timing of events to the target system. Once a component is deployed in its final system, many of the variables which had to be considered during design are fixed and will remain effectively constant during operational epochs. Such variables include:
- IC processing of all ICs in the target system, including this one
- External interconnect characteristics (e.g., path length, line impedance, propagation delay, capacitive loading)
- Target system clock frequency
Other variables may vary during operational epochs, but do so relatively slowly (e.g., component temperature and operating voltage). If we can monitor changes in these parameters (e.g., with on-chip temperature sensors), we can often treat these parameters as constants, retuning whenever significant environmental changes make retuning necessary. Once system delays are fixed, and can be measured using our timing extraction techniques, in-system component operation can be specialized to these system characteristics. By specializing component timings around system characteristics, we can achieve higher performance than is possible when our design must allow for all possible variations in system parameters. In-system delay adjustment effectively gives us most of the advantages of self-timed logic without incurring the complexity and testability problems associated with asynchronous logic.

7 Transmission Line Timing Adjustment For high-bandwidth signaling over long transmission lines, we can pipeline multiple data bits on the transmission line. This wire pipelining requires:
1. We know how many clock cycles it takes to traverse each transmission line interconnect.
2. We guarantee that data transitions do not occur during the setup-to-hold time window of the receiving IC.
Computers such as the Cray-1 [10] and CM-5 [13] satisfy these criteria by carefully selecting the interconnect cable lengths and designing the basic system around the logical lengths of each interconnect. Using the techniques we have introduced here, it is possible to satisfy these two conditions by monitoring the transmission line reflections and adapting the output timing to the length of the connected transmission line. To handle long transmission lines, we add a tunable delay and a sliding-window sample register with a synchronized clock for calibration to the pad design described in Section 5. We can tune the output impedance to the transmission line as described previously. However, while scanning impedances, the longer effective sample window gives us an additional piece of information: the timing of the first reflection's arrival.
When the transmission line impedance is set slightly above the transition point, the receiver will trip when the first reflection arrives. Scanning through impedance settings thus tells us both when the source is driven and when the first reflection occurs. Assuming synchronous clock distribution and symmetry of transit times across the transmission line, this also allows us to determine when the signal arrives at the destination end of the transmission line. We can take this time and determine (1) how many clock cycles it requires to traverse this interconnect and (2) where during the clock cycle the signal is arriving at the far end of the transmission line. We can then tune the variable delay associated with the output signal so that the transition at the destination is guaranteed to occur outside of the setup to hold time window around the clock after taking into account any necessary uncertainties associated with the delay through the input receiver. By combining the sample register with series-terminated, transmission-line signaling, we can tune the arrival of a transition at the far end of a variable length interconnect. This kind of tuning is not possible with conventional PLL circuits. Figure 18 shows a suitable pad PLL circuit architecture including the tunable output delay. 8 Implications on Interconnect Testing It is worthwhile to note that techniques presented here allow us to test the dynamic properties of our interconnect media. Today, TAPs are commonly employed to test out the DC characteristics of ICs and interconnect integrity. Standard TAP techniques, however, cannot identify interconnect faults which only affect high-speed signals. For example, conventional TAP interconnect testing cannot identify the impedance discontinuity arising from a poorly seated connector or a short to some piece of foreign material. With the pad architecture presented here, the TAP can capture the dynamic profile of the voltage waveform resulting from signaling events. This allows the recovery of TDR-like data for the attached interconnect. Consequently, these techniques allow us to extend our TAP testing to identify the interconnect faults which affect high-speed signals. 9 Implementation We have implemented a prototype, matched impedance i/o pad which incorporates many of the techniques described here [5]. Table 19 summarizes the key characteristics of the prototype pad and Figure 20 shows the layout for the bidirectional i/o pad. This pad's scan architecture matches the one depicted in Figure 13. Note that the sample register occupies 350µ of the test pad length. In application, one could construct a single sample-register Shown above is the revised bidirectional pad architecture incorporating variable-delay buffers in the output path as well as a scannable register to control the delay buffers. **Figure 18: Adjustable Delay, Bidirectional Pad Scan Architecture** <table> <thead> <tr> <th>Process</th> <th>0.8μ CMOS (HP)</th> </tr> </thead> <tbody> <tr> <td>Latency (driver)</td> <td>2 ns</td> </tr> <tr> <td>Latency (receiver)</td> <td>1.5 ns</td> </tr> <tr> <td>Register Bits</td> <td></td> </tr> <tr> <td>(impedance control)</td> <td>6 per direction</td> </tr> <tr> <td>(sample register)</td> <td>16</td> </tr> <tr> <td>Impedance Range</td> <td>40 to 100 Ohms</td> </tr> <tr> <td>Area</td> <td>150μ x 930μ</td> </tr> <tr> <td>Power</td> <td>10mW+2mW/100MHz</td> </tr> </tbody> </table> **Figure 19: Prototype Matched Impedance I/O Pad Characteristics** With muxed inputs from many adjacent pads. 
In this way the area cost of the sample-register could be amortized across multiple pads as suggested in Section 4.4. For testing purposes, we drove the test component's TAP from the parallel port of a Personal Computer (PC). Software running on the PC would: 1. load impedance configurations 2. create output transitions using the component’s boundary scan cells 3. offload sample results 4. search through the sequence of recovered sample results similar to that shown in Figure 14 to select an impedance setting 5. install the selected setting in the impedance configuration and return the component to its normal, non-scan, operating mode The data shown in Figures 14, 15, and 16 came from this test component using the PC for off-chip control. In a deployed application, the off-chip control task would be handled by the same embedded controller which managed the Test-Access Port and device testing. Standard components, such as National Semiconductor’s SCNPSC100F [11] or Texas Instrument’s SN74ACT8900 [12], are now available for interfacing microprocessors to IEEE 1149.1 compatible scan chains. **10 Limitations** In this section we review some of the costs and limitations of these on-chip sensing and adjustment techniques. We briefly address the impact of these limitations on the pragmatic application of these techniques. **Point-to-Point Signaling** As noted, the specific technique described here is primarily applicable to single driver, single receiver, series-terminated signaling applications. The tuning behavior depends on the reflection profile of the series terminated transmission line for proper operation. **Single Impedance Media** The techniques described here assume the interconnection media, while varying in impedance, is characterized by a single, homogeneous impedance between the source and the driver. This is typically the case if the interconnect is a printed-circuit board or cable between ICs. However, if the ICs are connected through multiple media this might not be the case. For instance, if two ICs are communicating over a long cable and each IC has a long wire run between the cable and the IC on its attached printed circuit board, the intervening interconnect could be characterized by three distinct impedance regions. The techniques presented here will allow one to identify the impedance discontinuities, but not to compensate for them. Of course, one could use the techniques presented here to build an impedance matching buffer component to place at each potential impedance discontinuity. The impedance matching buffer could then separately match to each interconnect segment. Such a scheme would, however, add I/O delay to the signaling path for each such impedance matching buffer encountered. **Area** This technique does require dedicated, on-chip silicon area. As noted in the previous section, a 16-bit sample register occupied just under $350 \mu \text{m} \times 150 \mu \text{m}$ [5]. The prototype sample register included the inverter chain, sample latch, shift register, and a 16-bit configuration register, but did not include any recirculation or calibration circuitry. For comparison, the standard bidirectional I/O pad boundary-scan registers in the same design occupied $140 \mu \text{m} \times 150 \mu \text{m}$. Layout for the standard boundary-scan I/O registers was partially determined by control signal routing, while the the sample register contains local connections and is dominated more by the size of shift and configuration registers. 
Using the recirculation techniques suggested in Section 4, one could build a smaller, 8- to 10-bit sample register and then build a 4- to 5-bit counter and comparator in comparable space. Calibration support then requires the addition of an input mux along with an attached configuration register. Recall from Section 4.4 that the input mux can be expanded in order to share a single scan register among several signals. For bussed signals, a single sample register would typically be shared among a series of 4 to 8 adjacent lines to amortize the area cost required. **Configuration and Tuning Latency** Up front tuning latency using the sample register scheme can be moderately large. - Sample registers can collect many bits of data per timed signal per experiment, but must still offload such data via the low bandwidth, serial scan interface. - To keep the area requirements down, recirculating and shared sample registers reuse the sample register in time. Consequently, many timing experiments must be performed in sequence in order to reconstruct a single waveform. These effects make TAP-based timing extraction and configuration a moderately high-latency operation. For example, a 160-pin component using the prototype pad from Section 9 requires 50-60 ms to tune all pads. Appendix B shows how to estimate configuration latencies based on scan and component architecture. In practice tuning would occur initially at system startup time and thereafter only when environmental characteristics change. As long as the environmental characteristics change slowly, the tuning latency does not have an adverse effect on signaling operation. **Periodic Retiming Requirements** As noted in Section 6.3, tuned parameters such as delay or impedance will depend on some slowly-changing environmental characteristics such as temperature and attached hardware configuration. These parameters will need to be retuned whenever environmental characteristics drift significantly from the point of tuning. The way this retuning fits into system operation will vary considerably among applications. In systems with adequate error detection, the retuning can be recognized by the error detection mechanism, and retuning may serve as a primary response to excessive errors. In systems without this kind of error detection, more preventative measures may be required. For example, a crude, on-chip temperature sensor can serve as an early warning indicator so that retuning can compensate for changes in temperature. Off-Chip Controller These techniques will require a moderately complex off-chip controller for waveform extraction and reintegration. A personal computer or low-end workstation is both sufficient and economical for in-system testing and tuning. For impedance and delay tuning applications, the controller should be an inseparable part of the base system. In many systems, the task can be assumed by an existing processor in the system. Some systems may require an additional, embedded microcontroller to orchestrate tuning and configuration functions. 11 Summary By exploiting the information available at the source end of a series-terminated transmission line, we can identify important characteristics of our interconnect. Employing a TAP accessible sample register, timing control, and impedance control, we can match a series-terminated transmission-line driver to its attached transmission-line in system. Specifically, we take discrete-time, binary samples of the voltage seen by the driving pad at various impedance settings. 
Using the sample feedback, we can determine both: 1 the driver impedance setting which best matches to the transmission line impedance 2 the arrival time of the signal at the far end of the transmission line In effect, we get the capabilities of a crude, on-chip TDR. Using this information and fine-grained timing adjustment at the source end of the transmission line, we can reliably pipeline the transmission of data bits over variable length interconnect media. This pipelining allows high bandwidth signaling even when interconnect distances are long. In general, these techniques allow us to factor out process variation for the ICs and interconnect media and adapt to system specific parameters such as interconnect impedance and length. Additionally, these techniques allow us to use a TAP to determine the integrity of our interconnect for high-speed signaling. References Reassembling windows is easy in this case since we immediately discover the inter-sample-bit time and can align corresponding edges between samples (See Figure 22). To be in the fast clock case, we need: $$n_{bits} \geq 2 + \left\lfloor \frac{\min(t_{clklow}, t_{clkhigh})}{t_{sb}} \right\rfloor$$ (7) For a 500 MHz clock with a 50% duty cycle in a technology with a minimum inverter delay of 100 ps this means: $$n_{bits} \geq \left( 2 + \frac{1 \text{ ns}}{100 \text{ ps}} \right) = 12$$ (8) **Slow Clock Case** When the available calibration clock is slower it may not make sense to build a sample register long enough to cover half a clock period. In this mode we have to look for two pieces of information independently: - $T_{cycle}$ - the time between successive placements of the sample register window - $t_{sb}$ - the time between successive bits within the sample register Each edge occurs at some offset within a sample register ($n_{bp}$) and at some cycle offset ($n_{up}$). Given a pair of edges separated by time $T_{e-e}$: $$(n_{up2} - n_{up1})T_{cycle} + (n_{bp2} - n_{bp1})t_{sb} = T_{e-e}$$ (9) From here there are two ways we can solve for $t_{sb}$ and $T_{cycle}$. 1. If we can see an edge in two successive windows, we know: $$T_{cycle} + n_{bp2} \cdot t_{sb} = n_{bp1} \cdot t_{sb}$$ (10) The only time when this will never occur is when there is no overlap. That is: $$n_{bits} < \frac{T_{cycle}}{t_{sb}} + 1$$ (11) See Figure 23 for an example of this case. 2. We can also derive a relationship between $t_{sb}$ and $T_{cycle}$, if we can get two different sets of $(n_{up2} - n_{up1}, n_{bp2} - n_{bp1})$ pairs. An example of such differing pairs is shown in Figure 24. Either of these cases provide a second equation in two unknowns allowing us to solve for the bit and cycle times and calibrate the sample windows. --- --- **A Calibration** Sample register calibration is required for two reasons: 1. The time delay between sample bits varies with process variation. 2. The timing on the recycle path in the sliding window also varies and is generally different from the time delay between sample bits. As a result, when we recover a pair of adjacent samples, we do not immediately know the amount of overlap between samples. Figure 21 depicts an example of the alignment problem. As suggested in Section 4.4, if we can mux a known frequency source into the sample register, we can calibrate both bit times and sample overlap. This in turn allows us reconstruct composite waveforms from a collection of window samples and to determine coarse-grain, absolute timing between signals. 
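For readers who want to plug other numbers into the fast-clock calibration arithmetic (Equations 6 and 7), here is a minimal Python sketch. The figures used are the 500 MHz, 50% duty-cycle calibration clock and 100 ps sample-bit delay from the worked example above; the helper names are ours, introduced only for illustration.

```python
import math

def min_register_bits(t_clk_low_ps, t_clk_high_ps, t_sb_ps):
    """Equation 7: register length needed to guarantee that one full clock
    phase fits inside a single sample window."""
    return 2 + math.floor(min(t_clk_low_ps, t_clk_high_ps) / t_sb_ps)

def t_sb_from_phase(phase_time_ps, n_phase_samples):
    """Equation 6: inter-sample-bit time from a captured clock phase."""
    return phase_time_ps / n_phase_samples

# 500 MHz calibration clock, 50% duty cycle, 100 ps sample-bit delay.
print(min_register_bits(1000, 1000, 100))  # -> 12, as in Equation 8
# A captured 1 ns high phase spanning 10 samples implies t_sb = 100 ps.
print(t_sb_from_phase(1000, 10))           # -> 100.0
```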
**Fast Clock Case** If the sample register is long enough, relative to the clock frequency, we can capture an entire clock phase in one sample register. Once captured, we can divide the clock phase time by its duration in sample bits (the number of consecutive high, or low, samples) to get the inter-sample-bit time. That is
$$t_{sb} = \frac{t_{clkhigh}}{n_{high-samples}}$$
$$t_{sb} = \frac{t_{clklow}}{n_{low-samples}}$$ (6)
Given a pair of samples, 00001111 and 11111111, we do not immediately know how the sample bits overlap between samples. Shown above is the basic range of possibilities for this pair of samples, assuming that the sample register has been designed to guarantee at least one bit of overlap between a pair of adjacent sample windows. Sub-bit-time shifts are also possible. Figure 21: Example of Alignment Problem

(Figure 22 shows a clock signal and three successive clock samples, each reading 000011110000111100001111….) When the calibration clock is fast relative to the length of the sample register, we can capture an entire clock phase and directly determine the inter-sample-bit time. Shown here is a 7-bit sample register with 2 bits of overlap per window. The calibration clock has a period of 8 sample bit delays and a 50% duty cycle. Figure 22: Fast Clock Sample Register Calibration

If the calibration clock's edge-to-edge time \( T_{e-e} \) is not an exact multiple of \( T_{cycle} \), the edge positions within the sample windows will drift from window to window. This will guarantee that one of the above two cases will occur, allowing calibration. If \( T_{e-e} \) is an exact multiple of \( T_{cycle} \), \( n_{bp2} - n_{bp1} = 0 \), so we immediately know \( T_{cycle} \) but do not know \( t_{sb} \). There is also a potential problem if \( T_{cycle} \) is an exact multiple of \( t_{sb} \) for certain values of the multiplier. If we have no control over the calibration clock edges and wish to avoid these exceptional cases, it will be necessary to make the timing on the recycle path adjustable. For example, an optional, extra inverter-pair delay in the recycle path would allow us to change \( T_{cycle} \) so that it is no longer a proper divisor of \( T_{e-e} \). Of course, if \( T_{e-e} \) is close to being a multiple of \( T_{cycle} \), it may take many calibration clock periods, and hence windows, to provide enough positional shift to yield sufficient calibration data. The optional recycle delay can also be useful in this case to reduce the required time coverage for the sliding-window sample register.

B Impedance Tuning Time The most straightforward way to set the impedance involves a sequence of:
1. start with minimum impedance
2. load in current impedance value
3. force a transition
4. offload resulting sample register
5. increment current impedance value and repeat at step 2 until all impedance values have been tested
Once this sequence of data has been collected, the controller has a collection of information like that shown in Figure 14. The impedance setting can then be found from this data (See [4] for additional details). In practice, it is necessary to set the pull-up and pull-down impedance separately, and a few iterations are required to converge.
The subsequent iterations can almost certainly be done without scanning through the full range of possible impedance values.

Shown here is a slow clock case where a pair of calibration clock edges never occur in a single sample window. When we can catch a transition in two successive windows, we can immediately determine the relationship between the sample bit delay and the window cycle time (See Equation 10). Figure 23: Slow Clock Sample Register Calibration (Overlap Case)

Here again is a slow clock case where a pair of calibration clock edges never occur in a single sample window. In the two sequences shown, the pair of edges occur within a differing number of windows. The combined information from these two series of samples gives us enough information to solve for the sample bit delay and window cycle time using Equation 9. Figure 24: Slow Clock Sample Register Calibration (Differing Window Alignment)

From this basic algorithm, we can compute the bandwidth requirements for impedance tuning and get a basic estimate for tuning time:
- **Step #2** requires loading in an impedance value. With \( n_i \) impedance transistors on each supply network, this requires \( 2n_i \) bits per controlled impedance pad.
- **Step #3** can occur in parallel for all pads being tuned and can be done in tens of clock cycles per iteration.
- **Step #4** offloads the resulting sample register and requires \( n_{bit} \) bits per controlled impedance pad.
If we scan through all \( 2^{n_i} \) settings for an impedance network, and have to iterate the process four times to achieve reasonable convergence, the total number of bits transferred is:
\[ N_{bits} = 4 \cdot 2^{n_i} \cdot n_{pads} \cdot (n_{bit} + 2n_i) \] (12)
If we move data to and from the chip under serial TAP control, we get to move one bit per clock cycle. There will be some additional overhead for the TAP protocol, but it is small compared to the clock cycles required to move data on and off the chip. The tuning time can thus be approximated as:
\[ T_{tune} = N_{bits} \cdot t_{clk} = \left( 4 \cdot 2^{n_i} \cdot n_{pads} \cdot (n_{bit} + 2n_i) \right) \cdot t_{clk} \] (13)
To make this concrete, we can consider the test pad from Section 9. This pad had \( n_i = 6 \) and \( n_{bit} = 16 \). If we further consider a component with 160 impedance-controlled pins and a scan-based TAP with a 20 MHz scan clock (TCLK):
\[ T_{tune} = (4 \cdot 2^6 \cdot 160 \cdot (16 + 2 \cdot 6)) \cdot 50 \text{ ns} \approx 57 \text{ ms} \]
This time can be reduced by clever arrangement of the scan operations. For instance, if we know we are going to be tuning all 160 pins in parallel, a parallel load of all 160 pins from the same \( 2n_i \) impedance bits would make the cost of uploading test impedance values almost negligible. For the example above, such a change would reduce the time to roughly 33 ms. In the analysis above, we assumed that it was necessary to reload both the pull-up and pull-down impedance during step #2, while only one of these impedances generally varies during a scan iteration. Simply allowing the pull-up and pull-down networks to be loaded independently would allow us to tune the impedance in 45 ms. Of course, if a single pad's impedance and sample registers can be accessed independently, the offload time for a single pad during tuning would be:
$$T_{\text{tune,one-pad}} = (4 \cdot 2^5 \cdot 1 \cdot (16 + 2 \cdot 6)) \cdot 50 \text{ ns} \approx 300 \ \mu\text{s}$$
When tuning a single pad, the scan overhead will not be as negligible.
A more accurate estimate for single pad tuning time is on the order of 500 $\mu$s using a scan-based TAP.
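The Appendix B estimate is easy to rerun for other scan architectures. The short sketch below simply restates Equations 12 and 13; the function and its defaults are our own illustration, with the prototype-pad figures plugged in.

```python
def tuning_time_ms(n_i, n_bit, n_pads, tclk_mhz, iterations=4):
    """Equation 13: serial-scan impedance tuning time estimate, in ms.
    n_i   - impedance control bits per supply network
    n_bit - sample register length in bits
    """
    bits = iterations * (2 ** n_i) * n_pads * (n_bit + 2 * n_i)  # Equation 12
    t_clk_ms = 1e3 / (tclk_mhz * 1e6)                            # one scan clock, in ms
    return bits * t_clk_ms

# Prototype pad: n_i = 6, 16-bit sample register, 160 pads, 20 MHz TCLK.
print(f"{tuning_time_ms(6, 16, 160, 20):.0f} ms")  # ~57 ms, as in the text
```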
Saccade Target Selection Relies on Feedback Competitive Signal Integration Joke P. Kalisvaart, 1 André J. Noest, 2 Albert V. van den Berg, 1 and Jeroen Goossens 1 1 Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition and Behaviour, Department of Cognitive Neuroscience, Section Biophysics, 6500 HB Nijmegen, The Netherlands, and 2 Developmental Biology Department, Utrecht University, 3584 CH Utrecht, The Netherlands It is often assumed that decision making involves neural competition, accumulation of evidence “scores” over time, and commitment to a particular alternative once its scores reach a critical decision threshold first. So far, however, neither the first-to-threshold rule nor the nature of competition (feedforward or feedback inhibition) has been revealed by experiments. Here, we presented two simultaneously flashed targets that reversed their intensity difference during each presentation and instructed human subjects to make a saccade toward the brightest target. All subjects preferentially chose the target that was brightest during the first stimulus phase. Unless this first phase lasted only 40 ms, this primacy effect persisted even if the second, reversed-intensity phase lasted longer. This effect did not result from premature commitment to the initially dominant target, because a strong target imbalance in the opposite direction later drove nearly all responses toward that location. Moreover, there was a nonmonotonic relation between target imbalance and primacy: increasing the target imbalance beyond 40 cd/m 2 caused an attenuation of primacy. These are the hallmarks of hysteresis, predicted by models in which target representations compete through strong feedback. Reaction times were independent of the choice probability. This dissociation suggests that target selection and movement initiation are distinct phenomena. Introduction To explain how saccadic responses compete for selection and execution, current decision-making theories assume that sensory evidence is noisy and that it is accumulated over time to reach a decision bound (Ratcliff and McKoon, 2008). Different types of models make different assumptions about how evidence is combined (Smith and Ratcliff, 2004): “evidence scores” for two or more alternatives can accumulate independently (Cousineau, 2004), compete with feedforward inhibitory interaction (Laming, 1966; Link and Heath, 1975; Nosofsky and Palmeri, 1997; Palmeri, 1997), or compete via feedback inhibition (Usher and McClelland, 2001; Wang, 2002; Wong and Wang, 2006). So far, it has been impossible to single out one among the many types of visual decision-making models; despite different architectures, they predict similarly optimal choice behavior in static two-alternative forced choice tasks (model mimicry; van Zandt and Ratcliff, 1995; Bogacz et al., 2006). To distinguish between feedforward and feedback cross-inhibition, we introduced a new manipulation: we studied saccadic choices between pairs of briefly flashed targets the intensities of which were swapped during presentation (Fig. 1A,B). Under such conditions, circuits with feedback competition that exceed a certain critical strength are sensitive to the initial stimulus bias but relatively resistant to the later bias reversal. This characteristic behavior (hysteresis) occurs because the initial “winner” maintains its advantage by recurrent inhibition of its competitor (Noest et al., 2007; Furman and Wang, 2008). 
These models thus predict that subjects typically choose the target with the highest initial strength provided that the initial differences outlast the time constant of the feedback creating the hysteresis and the stimuli are not strong enough to overcome this hysteresis (Fig. 1C). In contrast, feedforward integration models predict that for balanced durations of the two stimulus epochs, early and late biases are equally effective (because the total evidence for both targets is the same), whereas leaky-integrator models even predict that later biases dominate the final evidence scores. This implies that subjects should either show no preference at all or show a preference for the target that is strongest at the end, unless they make an irreversible choice for the initially strongest target before the stimulus ends (premature choice commitment). Indeed, all accumulator models can produce primacy (i.e., a choice preference for the initially strongest target) if they include some thresholding mechanism that induces choice commitment. We probed the contribution of such thresholding mechanisms by appending strong intensity biases toward the initially weakest target. We found robust preferences for the initially strongest target, which reversed only if the first stimulus epoch was sufficiently short and the second epoch outlasted the first one by a sufficient amount. The basic primacy effect was attenuated at larger stimulus biases and could even be inverted completely if the stimuli were immediately followed by a strong intensity bias toward the initially weakest target. This near-complete transition from primacy to recency shows that the observed primacy cannot be attributed to premature commitment to the initially strongest target. Simulations showed that all these results were best described by a feedback cross-inhibition model. Reaction times were independent of the changes in choice probability and instead depended on the initial intensity of the selected target.

Materials and Methods
Subjects Ten adult human subjects (5 male, 5 female) participated in the experiments. All had normal or corrected-to-normal visual acuity. Subjects were informed about the experimental procedures and gave informed consent before the start of the experiments. Procedures were approved by the Radboud University Medical Centre. Subjects J.R., V.G., J.G., and J.K. were experimenters; all other subjects were kept unaware of the aim of the study.

Setup Subjects were seated in a darkened room at 80 cm from a projection screen onto which stimuli were back-projected. In the first three experiments, an LCD projector (model DLA-S10E; JVC) with a refresh rate of 75 Hz and a maximum luminance level of ~45 cd/m² was used. To present targets at higher contrast and luminance levels in Experiment 4, we used a digital light processing projector with a maximum luminance level of ~300 cd/m² and a refresh rate of 60 Hz (P1265 model DXN0702; Acer). Luminance levels were measured with a luminance meter (model LS-100; Minolta). A chinrest was used to minimize head movements. Eye movements were measured with the scleral search coil technique (Remmel Laboratories). Coils were inserted after one drop of topical anesthetic (oxybuprocaine hydrochloride 0.4%; Thea Pharma). Once the coil was in place, a drop of artificial tear (methylcellulose 0.5%; Thea Pharma) and a bandage lens (a large contact lens with a strength of zero diopters) were applied to minimize ocular discomfort.
Use of the bandage lens doubled the measuring time with the coil to ~1 h per session (Sprenger et al., 2008). Eye position signals were low-pass filtered, amplified, and sampled at 500 Hz per channel. The spatial resolution of the horizontal and vertical eye position signals was better than 0.1° (root mean square measure).

Paradigms In the first three experiments, each trial consisted of four epochs (Fig. 1). At the beginning of each trial, a fixation ring with a diameter of 0.5° was presented at the center of the screen. Then, after a random period of 400–1200 ms, the fixation ring disappeared and two filled, circular targets with a diameter of 0.5° were presented simultaneously at 10° to the left and right of the center. After a variable delay (D1) of 40, 80, or 120 ms, this first set of targets was replaced by a second set of targets at the same locations. In control trials, the intensity of both the left and right target remained unaltered. In reversal trials, however, the intensities of the left and right target were reversed compared with the first target epoch. In both cases, targets were then displayed for another 40, 80, or 120 ms (D2) until the screen turned black. For each reversal condition, control stimuli were presented with the same total duration (i.e., D1 + D2). Subjects were always instructed to first look at the fixation ring and then make a saccade to the most intense target as quickly and accurately as possible. They received no feedback about their performance. Target intensities and durations of the first and second target presentation epoch were manipulated systematically across trials and experimental sessions. Target intensities in Experiments 1–3 ranged from 4.5 to 36.6 cd/m². Background luminance was 0.273 cd/m². The intensity difference (∆I) between the left and the right target on any given trial could be large (4.5 vs 36.6 cd/m²), medium (7.8 vs 29.1 cd/m²), small (12.3 vs 22.4 cd/m²), or zero (both targets 16.7 cd/m²). Thus, in control trials, target luminance could be unambiguous or ambiguous. In reversal trials, the mean intensity of the left and right target was the same only when the durations of the first and second epoch were the same. High-to-low and low-to-high intensity changes were achieved with a time constant of ~10 ms, limiting the shortest practical presentation time to 40 ms. Experiments 1–3 established the conditions under which primacy arises, but could not discriminate between two very different mechanisms that can generate primacy: hysteresis due to strong feedback competition or premature commitment due to an absorbing bound (see Introduction). Testing whether primacy can still be reversed does discriminate between these different model types. Primacy due to absorbing bounds is by definition irreversible and should increase monotonically with increasing ∆I. Conversely, primacy due to hysteretic integration dynamics should be reversible by a sufficiently strong stimulus. In Experiment 4, we therefore presented reversal stimuli across an extended range of target contrasts (i.e., from 22.5 vs 33.6 cd/m² up to 1.5 vs 77.5 cd/m²). The durations D1 and D2 were kept fixed at 50 ms. Targets in the 100 ms ambiguous condition had an intensity of 27.8 cd/m². Control stimuli lasted 50 ms and target contrasts ranged from 7.2 versus 49.5 cd/m² to 26.0 versus 29.7 cd/m². Background luminance was 0.18 cd/m².

Figure 1. Intensity reversal paradigm and model predictions. A, In each trial, a central fixation ring was presented for 400–1200 ms. Upon disappearance of the fixation ring, two peripheral targets with different intensities (∆I) appeared. After a delay of 40–120 ms (D1), the intensities of the left and right target reversed (reversal trials) or remained unaltered (control trials, data not shown). In either case, both targets remained present for another 40–120 ms (D2) until they were extinguished simultaneously. Subjects were instructed to look at the fixation ring and make a saccade toward the most intense target as quickly and as accurately as possible. B, Time traces illustrating the intensity changes of the right target (T_R), the left target (T_L), and the fixation ring, as well as the ensuing saccade. Depicted traces show an initial intensity bias toward the right target, followed by an equally strong bias toward the left target. The intensity differences between the left and right target, the timing of the intensity reversals, and the overall stimulus durations were systematically varied. In a modified version of this paradigm, the D1 and D2 epochs were followed by a third epoch (D3) in which there was a strong intensity bias toward the initially weakest target. See Materials and Methods for details. C, Response of competing decision units. Models with sufficiently strong feedback competition predict a choice preference for targets with highest initial strength. Feedforward integration models predict equal choice probabilities because the early and late biases are equally effective. Feedforward models with leaky integration predict that later biases dominate, resulting in a choice preference for targets with highest final strength. Premature commitment induced by low decision thresholds (dashed horizontal lines) predicts primacy for any of these architectures, but in such bounded accumulator models, primacy cannot be undone by later stimuli.

To further probe the extent to which primacy can be undone by later stimulus evidence, Experiment 4 also included two forced reversal conditions in which the stimulus presentation ended with a bright pulse at the location of the initially weakest target. More specifically, in the 50-50-67 ms forced reversal condition, a 50-50 ms reversal stimulus was immediately followed by a third, 67 ms epoch in which there was a strong bias toward the initially weakest target. In the 83-67 ms forced reversal condition, 83 ms of initial bias was immediately followed by 67 ms of reversed extreme bias. In both cases, the initial bias was 49.5 versus 7.2 cd/m² and the final, reversed bias was 7.2 versus 283 cd/m². For subjects J.K. and J.G., we also tested forced reversal conditions with an initial bias of 13.7 versus 40.0 cd/m² and 3.6 versus 62.7 cd/m², respectively, because those intensity differences better matched the intensity differences that evoked the strongest primacy effect in the unforced condition. As in Experiments 1–3, we kept the stimulus durations short to ensure that the subjects would not consciously perceive the intensity manipulations. Data were collected in blocks of 120–138 trials in which all experimental conditions were presented five or six times in pseudorandom order. Subjects completed 10–12 blocks across different sessions until each unique condition was tested 60 times (Experiments 1–3) or at least 100 times (Experiment 4). This typically required three or four sessions per subject per experiment.
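To make the trial structure concrete, the sketch below builds the luminance time courses of the two targets for a reversal trial with an optional forcing epoch. It is a minimal illustration under simplifying assumptions (an idealized 1 ms grid and instantaneous intensity steps, whereas the real transitions had a ~10 ms time constant); the function name and defaults are ours, not the authors' stimulus code.

```python
import numpy as np

def reversal_trial(d1_ms, d2_ms, bright, dim, d3_ms=0, d3_bright=None, dt_ms=1.0):
    """Luminance time courses (cd/m^2) of the right and left targets.

    D1: right target bright, left target dim.  D2: intensities swapped.
    Optional D3 ("forced reversal"): an even stronger bias toward the left
    (initially weakest) target, with luminance d3_bright.
    """
    n1, n2, n3 = (int(round(d / dt_ms)) for d in (d1_ms, d2_ms, d3_ms))
    right = np.concatenate([np.full(n1, bright), np.full(n2 + n3, dim)])
    left = np.concatenate([np.full(n1, dim), np.full(n2, bright),
                           np.full(n3, d3_bright if n3 else 0.0)])
    t_ms = np.arange(right.size) * dt_ms
    return t_ms, right, left

# 50-50-67 ms forced reversal condition with the intensities quoted above
t, T_R, T_L = reversal_trial(50, 50, bright=49.5, dim=7.2, d3_ms=67, d3_bright=283.0)
```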
Data analysis Saccades were detected on the basis of calibrated eye position signals with custom software. Detection of saccade onsets and offsets was based on velocity and acceleration criteria. All saccade markers were examined by the experimenters and, if necessary, corrected. Further analysis was done with MATLAB (version 7.9, MathWorks) using custom software. To determine the subject's choice regarding the most intense target, we only considered the direction of his/her first saccadic eye movement after stimulus offset that exceeded the detection criteria of 0.5° amplitude and 30°/s velocity. Subsequent correction saccades in the opposite direction were discarded. Such corrections occurred only in ~1.5% of trials with equal probability across all stimulus conditions. Therefore, excluding trials in which corrections occurred did not change any of our conclusions.

Psychometric response functions. A generalized linear model with a logit link function was used to fit psychometric curves through the data points from Experiments 1–3 using the following logistic equation:
\[ P_R = \frac{1}{1 + e^{-Q}} \quad \text{with} \quad Q = a \cdot \Delta I + b \] (1)
where \( P_R \) is the probability of a rightward saccade and \( \Delta I \) the intensity difference (in cd/m²) between the right and the left target at stimulus onset. Positive values of \( \Delta I \) indicate that the (initial) intensity of the right target is larger than that of the left target. The parameters \( a \) and \( b \) represent the subject's sensitivity and bias, respectively. Student's \( t \) tests were used to test whether the fit parameters significantly differed from zero and whether the slopes \( a \) of the psychometric curves differed between experimental conditions. The simple logistic model from Equation 1 was no longer adequate to describe the nonmonotonic psychometric curves obtained in Experiment 4. We therefore extended the model in the following way:
\[ P_R = \frac{1}{1 + e^{-Q}} \quad \text{with} \quad Q = d + a \cdot (\Delta I - b) + c \cdot (\Delta I - b)^3 \] (2)
Note that \( Q \) is now a third-order polynomial where the additional parameter \( c \) represents the strength of the hysteresis reductions at higher \( \Delta I \)'s (see Results). The parameters \( b \) and \( d \) capture the subject-specific response biases. The horizontal shift \( b \) reflects an offset in the input to the choice stage. The vertical shift \( d \) reflects some bias at the output of that stage. Experiment 4 brings out the distinction between these two types of biases. In Experiments 1–3, they are confounded into one net bias parameter (i.e., \( b \) in Equation 1) because the applied range of target contrasts appeared to be too small in these experiments.
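As a concrete illustration of the psychometric fit of Equation 1, the sketch below fits a logit-link GLM with statsmodels to made-up trial data; the intensity levels, counts, and parameter values are placeholders, not the study's data. Equation 2, with its additional nonlinear parameters, would instead call for a general-purpose optimizer such as scipy.optimize.curve_fit.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-trial data: signed intensity difference at stimulus onset
# (cd/m^2) and whether the saccade went to the right target (1) or not (0).
dI = np.repeat([-32.1, -21.3, -10.1, 0.0, 10.1, 21.3, 32.1], 60)
rng = np.random.default_rng(0)
p_true = 1.0 / (1.0 + np.exp(-(0.12 * dI + 0.2)))   # Eq. 1 with a = 0.12, b = 0.2
went_right = rng.binomial(1, p_true)

# A binomial GLM with a logit link fits Q = a*dI + b; params return as [b, a].
X = sm.add_constant(dI)
fit = sm.GLM(went_right, X, family=sm.families.Binomial()).fit()
b_hat, a_hat = fit.params
print(f"sensitivity a = {a_hat:.3f}, bias b = {b_hat:.3f}")
```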
Absorbing bounds inequality test. To quantify how strongly the data from Experiment 4 challenge mechanisms that attribute our primacy effects to premature commitment caused by absorbing bounds, we consider the class of accumulator models that have absorbing bounds but no hysteresis in their evidence accumulation. This whole class of models can then be tested by comparing the forced reversal conditions with their unforced counterparts. Key in this analysis is the following notion: at the end of \( D_2 \) (which is either the end of the whole stimulus or the start of the forcing \( D_3 \) phase) a fraction of the accumulated-evidence trajectories already reached the boundary of the initially strongest target (unknown fraction \( A_i \)) or the boundary for the other target (unknown fraction \( A_f \)), leaving only a limited fraction \( 1 - A_i - A_f \) of formally “undecided” trajectories that can still be influenced by a later forcing stimulus (Fig. 5A). If we knew for sure that the bright \( D_3 \) forcing pulse were strong enough to drive all “surviving” trajectories to (or closest to) the boundary of the initially weakest target, the probability of still choosing the initially strongest target in the forced reversal condition would provide a direct measure of \( A_i \). Unfortunately, we cannot be sure that the forcing stimulus is indeed sufficiently strong to achieve this. We can, however, show that the paired probabilities from the two experimental conditions must satisfy an inequality that—if violated by our data—rejects this whole class of bounded accumulator models: Let \( F_i \) denote the probability of still choosing the initially strongest target in the forced reversal condition. This observed probability then sets the upper limit for the unknown fraction of trials, \( A_i \), in which the accumulated-evidence trajectories reached the absorbing bound of the initially strongest target before the end of \( D_2 \). Note that we can only conclude that \( F_i \geq A_i \) because, as we noted above, we cannot be sure that the \( D_3 \) forcing stimulus is strong enough. However, from the time- and polarity-antisymmetric nature of the reversal stimulus and from the observed behavior in the (unforced) reversal condition, we can now derive further constraints on \( A_i \). For simplicity, we first assume absence of response biases. This assumption is eventually lifted, because it can be shown to only make our test more conservative. Now let \( R_i \) denote the observed probability of choosing the initially dominant target in the (unforced) reversal condition. Next, keep in mind that \( R_i \) is the sum of \( A_i \) and the probability that an undecided trajectory ends up being closer to the bound of the initially strongest target (Fig. 5A). Next, we know that \( A_i \geq A_f \). This holds because at the end of \( D_1 \), the trajectories tend to be closer to the boundary of the initially brightest target (as verified by the control condition). Therefore, given that \( D_1 = D_2 \) and given the same (but reversed) \( \Delta I \), a drift to the other boundary during \( D_2 \) takes more time. This, in combination with the assumed lack of hysteresis and bias in this model class, implies that at most one half of the undecided trajectories can eventually contribute to choosing the initially strongest target. That gives us the starting point for some algebra which relates \( R_i \) to \( F_i \):
\[ R_i \leq A_i + \frac{1}{2} (1 - A_i - A_f) = \frac{1}{2} (1 + A_i - A_f) \leq \frac{1}{2} (1 + F_i) \] (3)
or:
\[ F_i \geq 2 R_i - 1 \] (4)
Therefore, we can test whether the measured \( R_i \) and \( F_i \) combinations violate this inequality. The only remaining step required for the inequality test of Equation 4 is to account for the effect of choice biases that were clearly present in our data. We can do this by averaging the “raw” \( R_i \) and \( F_i \) values plotted in Figure 4 for the two mirror-symmetric stimulus conditions per subject. This results in more bias-resistant measures of \( R_i \) and \( F_i \):
\[ \bar{R}_i = \frac{1}{2} \left( R_i^+ + 1 - R_i^- \right) \quad \text{and} \quad \bar{F}_i = \frac{1}{2} \left( F_i^+ + 1 - F_i^- \right) \] (5)
where the + and − superscripts refer to initial stimulus biases of +\( \Delta I \) and −\( \Delta I \), respectively. Using the modified \( R_i \) and \( F_i \) variables in this test not only reduces the effect of subject-dependent biases (which are difficult to estimate precisely), but the remaining “second-order” bias effects only make our test more conservative (at least in a wide regime of practical interest, \( F_i < 0.5 \), because of convexity properties of the psychometric functions). SEs for \( \bar{R}_i \) and \( \bar{F}_i \) were obtained from error-propagation rules:
\[ SE(\bar{R}_i) = \sqrt{\frac{R_i^+ (1 - R_i^+)}{4 N_R^+} + \frac{R_i^- (1 - R_i^-)}{4 N_R^-}} \quad \text{and} \quad SE(\bar{F}_i) = \sqrt{\frac{F_i^+ (1 - F_i^+)}{4 N_F^+} + \frac{F_i^- (1 - F_i^-)}{4 N_F^-}} \] (6)
where the \( N \)s denote the total numbers of trials per test condition, with the subscripts \( R \) and \( F \) referring to the reversal conditions and forced reversal conditions, respectively, and with the + and − superscripts referring to the two different intensity biases as before. Using normal approximation, we then tested the null hypothesis from Equation 4 against the alternative hypothesis (i.e., \( F_i < 2R_i - 1 \)) by applying the following one-sided z test:
\[ z = \frac{2\bar{R}_i - 1 - \bar{F}_i}{SE} \quad \text{with} \quad SE = \sqrt{4 \cdot SE(\bar{R}_i)^2 + SE(\bar{F}_i)^2} \] (7)
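A compact numerical version of this test, folding in the bias-averaging and error propagation spelled out above (Equations 5–7), might look as follows; the probabilities and trial counts are invented for illustration only.

```python
import numpy as np
from scipy.stats import norm

def inequality_test(Rp, Rm, Fp, Fm, N_Rp, N_Rm, N_Fp, N_Fm):
    """One-sided z test of the bound F_i >= 2*R_i - 1 (Eq. 4),
    using bias-averaged probabilities (Eq. 5) and propagated SEs (Eqs. 6-7)."""
    R = 0.5 * (Rp + 1.0 - Rm)
    F = 0.5 * (Fp + 1.0 - Fm)
    se_R = np.sqrt(Rp * (1 - Rp) / (4 * N_Rp) + Rm * (1 - Rm) / (4 * N_Rm))
    se_F = np.sqrt(Fp * (1 - Fp) / (4 * N_Fp) + Fm * (1 - Fm) / (4 * N_Fm))
    se = np.sqrt(4 * se_R**2 + se_F**2)
    z = (2 * R - 1 - F) / se
    return z, norm.sf(z)   # one-sided p value for the alternative F_i < 2R_i - 1

# Invented example: 100 unforced and 60 forced reversal trials per bias sign.
z, p = inequality_test(Rp=0.85, Rm=0.12, Fp=0.05, Fm=0.97,
                       N_Rp=100, N_Rm=100, N_Fp=60, N_Fm=60)
print(f"z = {z:.1f}, one-sided p = {p:.2e}")
```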
Chronometric response functions. Saccade latencies were also analyzed as a function of \( \Delta I \). For each subject, we first subtracted the average reaction time measured in the ambiguous control condition pooled across rightward and leftward saccades. This was done because the average latency relative to target onset differed greatly among subjects (range: 250–600 ms). The resulting chronometric response functions were quantified by fitting linear regression lines to the data:
\[ \Delta RT = \alpha \cdot \Delta I + \beta \] (8)
One-way ANCOVA, with \( \Delta I \) as the predictor and \( \Delta RT \) (in ms) as the response and condition as the grouping variable, was used to test for differences between the chronometric curves. Correction for multiple testing was achieved by using a Tukey–Kramer test for multiple comparisons.

Results
Balanced durations In the first experiment, two targets were presented simultaneously for 80 or 160 ms, and the initial intensity bias reversed halfway during stimulus presentation (i.e., \( D_1 = D_2 = 40 \) ms or \( D_1 = D_2 = 80 \) ms). In control trials, the intensities remained the same across \( D_1 \) and \( D_2 \). Figure 2 shows the probability of rightward saccades as a function of the initial rightward intensity bias (\( \Delta I \), positive if the right target was brightest at stimulus onset). In both control conditions (Fig. 2A, C), subjects discriminated target intensities easily; the psychometric curves had steep, positive slopes and saccades were almost always directed toward the brightest target at the largest intensity differences (\( \Delta I = \pm 36 \) cd/m\(^2\)). For purely ambiguous control stimuli (\( \Delta I = 0 \) cd/m\(^2\)), choice probabilities scattered around 50%. The psychometric curves obtained in the 80–80 ms reversal condition (Fig.
2B) also had steep, positive slopes, indicating a significant preference for the initially strongest target in all subjects ($t$ test, $p < 0.0001$). Subjects even maintained a clear preference for the initially brightest target in the 40–40 ms reversal condition (Fig. 2D; slopes > 0; $t$ test, $p < 0.001$ for all subjects). Compared with the control conditions, the slopes were somewhat reduced (80–80 ms condition: $t$ test, $p < 0.05$ for all subjects), or reduced considerably (40–40 ms reversal condition: $t$ test, $p < 0.01$ for all subjects except V.G.), indicating that target selection was not exclusively based on the initial intensity differences; subsequent stimulus information also had a significant influence on the eventual decision, even though the changes in target luminance were not consciously perceived.

Unbalanced durations To probe further the contributions of initial and later stimulus epochs, two additional experiments were performed in which durations $D_1$ and $D_2$ were manipulated. In Experiment 2, we first kept the total stimulus duration fixed at $D_1 + D_2 = 160$ ms and swapped the targets' intensities at $D_1 = 40, 80,$ or 120 ms after stimulus onset. Choices in the control condition (Fig. 3A) and 80–80 ms reversal condition (Fig. 3C) replicated the primacy effects of Experiment 1 with more subjects. In the new 120–40 ms reversal condition (Fig. 3B), that is, if $D_1$ lasted three times longer than $D_2$, choice behavior became almost indistinguishable from the control condition in all subjects. In contrast, if $D_1$ was three times shorter than $D_2$ in the 40–120 ms reversal condition, the subjects' choice behaviors diverged considerably (Fig. 3D). Three subjects (I.B., A.M., and J.K.) still showed a significant, albeit weaker primacy effect (slopes > 0; $t$ test, $p < 0.05$), but the other four subjects (J.R., V.G., D.B., and D.A.) now showed a significant preference for the target that was most intense at the end (slopes < 0; $t$ test, $p < 0.01$). However, even in these four subjects, the choice probabilities were not completely inverted compared with the 120–40 ms condition. To summarize these changes, Figure 3E shows the averaged slopes of the psychometric curves as a function of $\Delta D = D_1 - D_2$. Note the systematic increase in slopes with increasing $\Delta D$. The asymmetric effect of intensity differences in $D_1$ versus $D_2$ is reflected in the fact that the trend line does not pass through the origin. Because Experiment 2 kept the total duration constant, one cannot conclude that $\Delta D$ is the only or even the mechanistically relevant variable controlling the choice process. Indeed, strong hysteresis is only expected if $D_1$ exceeds the (probably subject-dependent) feedback time constant. To test this, Experiment 3 used the smallest practical $D_1$ (40 ms) and varied $D_2$ among 40, 80, and 120 ms to quantify the potential breakdown of the primacy we found at larger $D_1$. Figure 3F, G illustrates the behavior of two different subjects in this experiment. Note that their choices were consistently influenced by $D_2$, with a clear breakdown of primacy at $D_2 \geq 80$ ms in one of them (Fig. 3G). This behavior is quantified in Figure 3H for all four subjects. Note the systematic decrease in slope of the psychometric curves with increasing $D_2$: the longer the $D_2$, the weaker the preference for the initially most intense target.
For two subjects, the sign of the slopes even flipped from positive to negative if $D_2$ exceeded $D_1$, indicating a preference for the target with the highest final intensity. Thus, a brief (40-80 ms) initial bias consistently dominated the responses if $D_1$ lasted an equal amount of time or longer than $D_2$. However, stimulus information in $D_2$ was not simply ignored. In Experiment 2, the longer that $D_2$ exceeded $D_1$, the more often subjects responded to the target with the highest final intensity and, for the short $D_1$ durations in Experiment 3, some but not all subjects showed a transition from primacy to recency. Strong target imbalance As noted in the Introduction, observing primacy does not necessarily imply that the dynamics of evidence accumulation produces hysteresis. In fact, any decision process that can commit to a particular choice before the stimulus ends can produce primacy independently of its dynamic properties (Fig. 1C). The capability for such premature commitment is common to a wide range of decision models, which share the crucial assumption of becoming committed to a particular choice as soon as the evidence (accumulated via some model-dependent process) for that particular alternative hits a fixed bound or threshold. However, the crucial distinction is that dynamic hysteresis due to feedback cross-inhibition creates primacy without irreversible commitment: sufficiently strong input should be able to “override” hysteresis, even when such input arrives well after the internal dynamics has settled into one of its hysteretic states. We therefore decided to present equal-duration reversal stimuli across an extended range of target imbalances. This yields two clearly different predictions for models with or without strong feedback competition. In the absence of feedback competition or if the feedback is not strong enough to create hysteresis, primacy due to absorbing bounds should increase systematically with target imbalance, so the psychometric curves should increase monotonically from saturation at \( p = 0 \) to saturation at \( p = 1 \). Primacy due to strong feedback competition, however, should produce nonmonotonic curves because primacy will eventually decrease if the target imbalance becomes sufficiently large. This prediction holds because the input during \( D_2 \) will start to exceed the strength of the hysteresis induced during \( D_1 \), and the absence of commitment (absorbing boundaries) will allow the decision process to evolve for at least the full duration of the stimulus. Some of the psychometric curves from Experiments 1–3 may already hint toward such nonmonotonic choice behavior, but testing this prediction over a wider range of target imbalances is clearly called for. For this purpose, we switched to a different projector capable of producing higher contrasts and luminances (see Materials and Methods). Figure 4 shows the results from all six subjects who participated in this fourth experiment. Dashed curves are the psychometric curve fitted to the control data (Equation 1, circles). For these brief, 50 ms stimuli, all subjects correctly chose the most intense target in the condition with the highest intensity and correct responses decreased to \( \approx 50\% \) for very small contrast. In the 50-50 ms reversal condition (Equation 1, squares), subjects showed a consistent primacy effect that increased with target imbalance up to \( \approx 40 \text{ cd/m}^2 \). 
This regime corresponds to the behavior measured in the 40–40 ms reversal conditions of Experiments 1 and 3. At higher intensity biases, however, the choice probabilities clearly showed a reversed trend, in that the primacy effect started to decline. To determine whether this decline was statistically significant, we first compared the probabilities of the outermost two points on either side of the curve. We found a significant (Fisher's exact test, \( p < 0.05 \)) decrease in primacy on one side of the range of target contrasts for all subjects except J.G. and a significant decrease on both sides for subject J.E. To further quantify the strength of this trend reversal, we fitted a minimally extended generalized psychometric function model to the entire dataset (Equation 2, solid lines; see Materials and Methods). For all subjects, the coefficient of the third-order polynomial term was significantly negative (mean \( \pm \) SD across subjects: \( c = (-2.73 \pm 0.72) \times 10^{-6} \); \( t \) test, \( p < 0.05 \), for all individual subjects), indicating that the data indeed showed a significant decline of the basic primacy effect at higher target contrasts. These tests thus demonstrate that the observed primacy effects cannot be attributed purely to early choice commitment because, without sufficiently strong feedback competition, the latter would predict a monotonic increase in primacy.

**Forced reversal** Although the basic primacy effect could indeed be reduced by stronger target contrasts (approximately for \( |\Delta I| \geq 40 \text{ cd/m}^2 \)), these findings could not provide a strict upper limit on the possible contribution of premature choice commitment or a strict lower limit on the contribution of dynamic hysteresis. To obtain those limits, we investigated to what extent the primacy effect breaks down when later-arriving stimulus evidence in favor of the opposite target location is very strong. Experiment 4 therefore included “forced reversal” trials in which \( D_2 \) was immediately followed by a \( D_3 \) epoch in which the reversed target imbalance was made even stronger than the reversed imbalance during \( D_2 \). As explained in the Materials and Methods and illustrated in Figure 5A, any residual probability of choosing the initially brightest target in this forced reversal condition then sets an upper limit on how much of the primacy seen without the forcing $D_3$ phase is caused by premature choice commitment. Conversely, the amount of primacy in the unforced condition that can be attributed to hysteresis is at least as large as the reduction in primacy that occurs when appending the forcing $D_3$ phase to the stimulus. For efficiency, and to ensure that forced reversal trials were relatively rare occurrences in any given block of trials (17%), we only used a target contrast that produced a strong primacy effect, as measured in pilot experiments. The duration of the forcing $D_3$ phase was set at 67 ms. Note that if $D_3$ were too short to reach maximal choice forcing, it would merely reduce the test's power to reject the hypothesis of absorbing bounds being responsible for primacy effects. As shown in Figure 4 (isolated diamonds near $p = 0$ and $p = 1$), adding the $D_3$ forcing phase to the stimulus completely abolished primacy and drove all subjects to choose almost exclusively the target with final dominance. The same result was obtained with the 83–67 ms forcing condition (Fig. 4, triangles; see Materials and Methods).
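Returning to the shape of the psychometric curves quantified above, one can see why a negative third-order coefficient in Equation 2 captures the decline of primacy by evaluating the extended psychometric function over a range of intensity differences; the parameter values below are purely illustrative and are not the fitted values.

```python
import numpy as np

def p_right(dI, a, b, c, d):
    """Extended psychometric function (Eq. 2): P_R = 1 / (1 + exp(-Q))."""
    Q = d + a * (dI - b) + c * (dI - b) ** 3
    return 1.0 / (1.0 + np.exp(-Q))

dI = np.arange(-60, 61, 10.0)
p = p_right(dI, a=0.1, b=0.0, c=-2.5e-5, d=0.0)
print(np.round(p, 2))
# With c < 0, the curve rises with dI up to roughly +/-35-40 cd/m^2 and then
# bends back toward 0.5: primacy first grows with target imbalance, then declines.
```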
These findings strongly suggest that the contribution of premature choice commitment was extremely small. As outlined in the Materials and Methods, the simplest approach to quantifying how strongly these data challenge mechanisms that attribute primacy effects to premature commitment is to focus on a slightly more specific class of models with absorbing bounds, namely those without hysteresis in their evidence accumulation. This restriction still includes all “standard” decision models, such as “bounded drift-diffusion” (Gold and Shadlen, 2007), and indeed all models that we are aware of that use bounded integration, leaky or not, independently of their mutual interactions as long as those interactions are too weak to cause hysteresis. This whole class of models can then be tested quantitatively by comparing the probability $F_i$ of still choosing the initially strongest target in forced reversal conditions with the probability $R_i$ of choosing the initially strongest target in the corresponding unforced reversal conditions. More specifically, we could test whether the measured $R_i$ and $F_i$ combinations violate the inequality (see Materials and Methods for details): $F_i \geq 2R_i - 1$. If so, our data would provide strong evidence against the very wide class of models with absorbing bounds and no hysteresis. One can in fact derive less conservative inequalities for the $R_i, F_i$ combinations when focusing on specific models within this highly diverse class, but this is beyond the scope of the present paper and proved unnecessary for our present purposes. To visualize the inequality test, we plotted the averaged $R_i$ and $F_i$ data pairs from each subject (see Materials and Methods) in the scatter plot in Figure 5 (error bars indicate ±1 SEM). Note that the data points would have to fall within the gray area delineated by the line $F_i = 2R_i - 1$ to satisfy the predictions of absorbing bound models without hysteresis. It is quite clear, however, that nearly all data from the 50–50–67 ms forcing condition fell well outside of that region (Fig. 5B). A one-sided z test on each data pair (see Materials and Methods) indeed showed a highly significant violation of inequality 4 ($p < 0.00005$) for all but one of our subjects. When we performed the test with the $F_i$'s derived from the 83–67 ms forcing condition (Fig. 5C), we also found highly significant violations of inequality 4 ($p < 0.00005$), except for subject J.E. As can be seen in Figure 4, the lack of statistical significance for J.E. in these tests stems from a single $R_i$ outlier data point. Had we based the test for this subject on his full dataset (i.e., sampling from the fit-function based on all data points), this test too would have strongly violated inequality 4.

Model simulations The choice data from Experiment 4 provide strong evidence against the broad class of accumulator models with absorbing bounds.

Figure 6. Model properties. **A**, **B**, Diagram of the feedforward (**A**) and feedback (**B**) cross-inhibition model together with simulations of the 80-80 ms reversal condition from our experiments. Arrowheads denote excitation, bulletheads inhibition. Inputs represent the intensities of the left (L, green) and the right (R, red) target as a function of time. The inputs pass through a leaky integrator and a nonlinear Naka-Rushton compression stage \((s = x^2/(x^2 + c))\) for \(x > 0\), otherwise \(s = 0; c = 1\). The difference between the two models is in the type of cross-inhibition only. Choices were assigned by comparing the units' response magnitude at stimulus offset. An additional integration step (shown transparently) in the feedforward model can—for this reversal condition—“repair” its failure to produce primacy without absorbing bounds. **C**, Output of the left (green) and right (red) decision units from the feedback model as a function of time for leftward and rightward choices (columns) in the three different \((D_1, D_2)\) timing conditions of Experiment 3 (rows). Model parameters for the simulations in **A**–**C**: integration constant \(T_i = 80\) ms; strength of the cross-inhibition, \(\gamma = 3.33\) for feedback cross-inhibition, \(\lambda = 1.0\) for feedforward cross-inhibition; variance of the Gaussian white noise on the inputs was 20% of the mean.
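A minimal simulation sketch of the two-unit competition scheme described in this legend (leaky integration, Naka-Rushton compression, and either feedforward or feedback cross-inhibition) is given below. The Euler update, noise handling, and normalized inputs are our own simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def naka_rushton(x, c=1.0):
    """Compressive nonlinearity s = x^2 / (x^2 + c) for x > 0, otherwise 0."""
    return np.where(x > 0, x**2 / (x**2 + c), 0.0)

def simulate(right_in, left_in, dt=1.0, tau=80.0, w=3.33,
             feedback=True, noise_sd=0.2, seed=0):
    """Two leaky decision units competing via cross-inhibition.

    feedback=True : each unit is inhibited by the *output* of the other unit
                    (feedback cross-inhibition; can produce hysteresis).
    feedback=False: each unit is inhibited by the opposing *input*
                    (feedforward cross-inhibition).
    The choice is read out as the unit with the larger output at stimulus offset.
    """
    rng = np.random.default_rng(seed)
    yR = yL = 0.0
    outR = np.zeros(right_in.size)
    outL = np.zeros(left_in.size)
    for t, (iR, iL) in enumerate(zip(right_in, left_in)):
        iR = iR * (1 + noise_sd * rng.standard_normal())   # noisy inputs
        iL = iL * (1 + noise_sd * rng.standard_normal())
        sR, sL = naka_rushton(np.array([yR, yL]))
        inhR = w * (sL if feedback else iL)                 # inhibition onto R unit
        inhL = w * (sR if feedback else iR)                 # inhibition onto L unit
        yR += dt / tau * (-yR + iR - inhR)                  # leaky integration
        yL += dt / tau * (-yL + iL - inhL)
        outR[t], outL[t] = sR, sL
    return ("right" if outR[-1] > outL[-1] else "left"), outR, outL

# 80-80 ms reversal trial: right target bright first, then the left one.
d1 = d2 = 80
right = np.r_[np.full(d1, 1.0), np.full(d2, 0.2)]   # normalized target intensities
left = np.r_[np.full(d1, 0.2), np.full(d2, 1.0)]
choice, _, _ = simulate(right, left, feedback=True)
print("feedback model chooses:", choice)            # typically the initially brightest target
```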
Here, we simulated feedforward and feedback competition models without absorbing bounds and tested their performance against our observations. Figure 6 shows a schematic drawing of the feedforward (Fig. 6A) and feedback (Fig. 6B) architectures. The inputs, representing the intensities of the two targets, pass through a leaky-integrator and nonlinear compression stage. Decisions are based on the units' activity levels at target offset. The two models differ only in the type of cross-inhibition that mediates the competition. In the feedforward model, each decision unit receives cross-inhibition from the opposing input, whereas in the feedback model, each unit receives cross-inhibition from the output of the alternate decision unit. To show the effect of changes in stimulus intensity, we simulated reversal trials. For the feedback circuit, the nature of the response (degree of hysteresis) depends crucially on how \(D_1\) relates to the time constant of the overall feedback dynamics. For our first demonstration (Fig. 6B), we simulated the 80-80 ms reversal condition. The time constant of the decision units was chosen as just sufficient to allow the network to settle on a choice determined by the initially strongest target before the beginning of \(D_2\) (i.e., \(T_i = 80\) ms). This prevents the decision unit of the initially weaker target from responding significantly despite the fact that the target intensities have reversed (Fig. 6B, insets). Conversely, the output of the feedforward model more or less follows the sequence of input events (Fig. 6A, insets). Figure 6C illustrates how the behavior of the feedback model changes if \(D_1\) is shorter than the feedback time constant. Increasing \(D_2\) then results in a transition from a preference for the initially brightest target if \(D_2\) is short (top row) to a preference for the most intense target at the end if \(D_2\) becomes large (bottom rows). Figure 7 shows the choice behavior of the two models (same format as Figs. 2, 3). Five different conditions were simulated. The first three panels in each row show simulations of the reversal conditions from Experiment 2 (from left to right, 40-120 ms, 80-80 ms and 120-40 ms, respectively). The fourth panel shows the 40-40 ms reversal condition from Experiment 1. The last panel on the right shows the 50-50 ms reversal condition that was tested over a larger contrast range in Experiment 4. The two curves in each graph show the results obtained with two different parameter sets, representing two subjects that showed distinctly different choice behavior in the 40-120 ms reversal condition of Experiment 2 (gray: D.A.-like subject; black: J.K.-like subject). The feedback model (Fig.
7A) generally prefers the initially most intense target. However, in the 40-120 ms reversal condition (Fig. 7A, left), the presence of hysteresis depends critically on the integration constant \((T_i)\) and the cross-inhibition strength \((\gamma)\). With a longer \(T_i\) and a weaker \(\gamma\), the hysteresis disappears (note that the absolute values of \(T_i\) should not be taken literally because they scale with the shape parameter \(c\) of the sigmoid function). For high target contrasts (Fig. 7A, right), the model shows the observed decline of the primacy effect as such strong stimuli are able to “override” the hysteresis. Response biases as observed in the experiments could be induced, for example, by asymmetries in input gains for the left and right decision units (data not shown). For the feedforward model, a direct readout of the competition stage at stimulus offset (Fig. 7B) always predicts a preference for the target that is most intense at the end. Note, however, that this failure to replicate the basic primacy effect can be “repaired” by adding an accumulator stage that integrates the output of the competition stage without any further interactions or absorbing bounds (Fig. 6A). Indeed, when the choices are based on the total time integral of its competition stage output (Fig. 7C), primacy occurs when the two stimulus epochs are equal or the first epoch is longer. Nevertheless, with this augmented version of the feedforward model, it remains impossible to produce primacy in the 40–120 ms reversal condition (as seen in some of our subjects in Experiments 2 and 3). This is due to the absence of absorbing bounds and absence of hysteresis in the evidence accumulation. Both versions of the feedforward model also fail to account for the observed decline in primacy at large target contrasts (Experiment 4) regardless of their parameter settings. Thus, in short, neither the competition stage nor the added accumulator stage of the feedforward model was able to fully capture all key features of the saccadic choice behavior that were revealed across our different reversal conditions.

Reaction time The choice patterns in Experiments 2 and 3 showed remarkably different dependencies on the same stimulus set between our subjects. This offers a unique opportunity to investigate whether reaction times for the alternatives follow similarly different patterns in the same subjects or whether they are dissociated from choice probabilities altogether. We wondered, for example, if the relation between reaction time and choice probability followed some simple constraints such as: (1) the mean latencies of saccades toward the most frequently chosen target are systematically shorter than the mean latencies of saccades toward the competing target, and (2) increases in choice preference are associated with increases in this latency difference. The data from Experiments 1–3 showed, however, that there was no significant correlation between choice preference and mean latency difference of leftward versus rightward saccades in the control conditions (Fig. 8A; $r = 0.11$, $t$ test, $p = 0.28$ over all subjects and conditions). For most data from the reversal conditions (Fig. 8B), we did find a significant correlation between choice preference and latency difference ($r = 0.65$, $t$ test, $p < 0.0001$), but only for timing conditions that produced a preference for the initially brightest target (Fig. 8B, small symbols, thin regression line).
For timing conditions that produced a preference for the target with the highest final intensity (i.e., negative slopes of psychometric curves, ~20% of the data), we actually found an opposite relationship between latency and choice. Under these latter conditions (Fig. 8B, large symbols, thick regression line), the mean latencies of rightward saccades increased significantly compared with the mean latencies of leftward saccades as the preference for the right target increased and vice versa if preference for the left target increased ($r = -0.68$, $t$ test, $p < 0.0001$). This remarkable dissociation between choice probability and saccade latency is further illustrated in Figure 8C–E for the four subjects that showed nearly inverted choice preferences in the 80-80 ms (black) versus 40-120 ms (gray) reversal conditions in Experiment 2 (Fig. 3B,D). Note that for each saccade direction, the changes in reaction time as a function of initial target contrast were the same under both timing conditions (Fig. 8D,E; ANCOVA, $F = 0.24$, $p > 0.8$ and $F = 1.88$, $p > 0.1$, respectively), whereas the changes in choice probability were opposite (Fig. 8C, $t$ test, $p < 0.0001$). This shows that the reaction times depended strongly on the initial target contrast and on the direction of the ensuing saccade, but not on the probability of choosing either target. Saccade latency was short compared with the ambiguous control condition if the initially stronger target was chosen (i.e., $\Delta RT < 0$) but long if the initially weaker target was chosen (i.e., $\Delta RT > 0$), and these latency differences increased as a function of the initial intensity difference regardless of the choice probability. This relation between reaction time and the initial intensity of the selected target (or its initial intensity difference with the other target) was found across all experiments and all conditions (Fig. 9). In Experiments 1–3, the chronometric response functions obtained in reversal conditions (Fig. 9B) were actually quite similar to the ones obtained in control conditions (Fig. 9A). This is quantified in Figure 9C for all test conditions applied in Experiment 1 using the slopes of the chronometric response functions (see Materials and Methods, above; Eq. 8). Because the chronometric functions for rightward and leftward saccades were practically mirror images, we pooled these responses by inverting the slope of the rightward responses. Neither the 80-80 ms nor the 40-40 ms reversal condition produced significantly different latency effects compared with the control conditions (ANCOVA, $F = 1.18$, $p > 0.2$ and $F = 0.5$, $p > 0.4$, respectively). Similar results were obtained in Experiments 2 and 3 (Fig. 9D,E, respectively). In both experiments, changes in $D_1$ and $D_2$ had very little influence on the saccade latencies. This contrasts markedly with the robust influences on choice behavior (Fig. 3). In fact, the chronometric functions remained invariant to both the presence and timing of the reversals (ANCOVA: no significant differences in Experiment 2, $F = 0.28$, $p > 0.8$ and Experiment 3, $F = 1.25$, $p > 0.2$). This similarity between control and reversal data in Experiments 1–3 shows that a revision of the initial decision by later evidence did not lead to an extra delay in saccade reaction time.
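For concreteness, the chronometric response function of Equation 8 can be fit as in the sketch below, after subtracting the mean latency of the ambiguous control condition as described in the Materials and Methods; the latencies and intensity differences are invented, not the paper's data.

```python
import numpy as np

# Hypothetical mean latencies (ms) for one subject's leftward saccades as a
# function of the initial intensity difference (cd/m^2).
dI = np.array([-32.1, -21.3, -10.1, 0.0, 10.1, 21.3, 32.1])
rt = np.array([312.0, 320.0, 329.0, 340.0, 352.0, 361.0, 372.0])

baseline = rt[dI == 0].mean()   # mean latency in the ambiguous control condition
d_rt = rt - baseline            # Delta RT relative to that baseline

# Eq. 8: Delta RT = alpha * dI + beta, fit by ordinary least squares.
alpha, beta = np.polyfit(dI, d_rt, deg=1)
print(f"slope alpha = {alpha:.2f} ms per cd/m^2, intercept beta = {beta:.2f} ms")
```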
In fact, in Experiment 4, when we forced saccades in the opposite direction of the initially strongest target, thus breaking down the hysteresis built up during the initial part of the stimulus, the latencies were somewhat shorter than the ones in the corresponding unforced condition (mean ± SEM latency difference for movements to the same location: 26 ± 4 ms, paired $t$ test, $p < 0.0001$; and for movements to the opposite location: 17 ± 5 ms, $p < 0.01$; data not shown). This latency reduction by late stimulus evidence also refutes the notion that initial information alone might have been the determining factor in reaction times. The latter is further corroborated by the significant differences in slopes of the chronometric function for the control versus reversal conditions obtained in Experiment 4 (Fig. 9F; ANCOVA: $F = 47.53$, $p < 0.0001$), for which the mean target intensity levels were larger than in Experiments 1–3 (i.e., 27.8 cd/m$^2$ vs 16.7 cd/m$^2$, respectively; see Materials and Methods).

Discussion We studied saccadic decision making using two simultaneously (≈160 ms) presented targets with intensity reversals occurring at different moments in time. Using this novel approach, we found a robust primacy effect: subjects preferred the target that was brightest during the early part of each stimulus even if longer-lasting, opposite differences were present in the second part of the stimulus. This primacy effect collapsed when: (1) the duration of the early stimulus phase was reduced to ~40 ms, (2) the target intensity differences were large, or (3) the reversal stimulus was followed by a strong stimulus bias toward the initially weakest target. The latter two findings show that the basic primacy effect did not result from premature commitment to the initially dominant target. A decision model that assumes feedback cross-inhibition, however, fully described the observed choice behavior. There is some prior evidence that decision making is biased toward early visual information. In a motion discrimination task, Ludwig et al. (2005) found that subjects rely mostly on information provided in the beginning, a finding that they later explained with a time-varying decision threshold (Ludwig, 2009). Disturbing motion pulses also have more effect when they occur early in the stimulus […] initially strongest target and concluded that this could be due to feedback competition. One might argue that feedforward models do predict primacy if subjects adopted a low decision threshold reached shortly after stimulus onset (Ludwig, 2009) or if the integration of evidence were bounded (Kiani et al., 2008). However, our finding that the primacy effect decreases at higher target contrasts (Fig. 4) cannot be explained by any feedforward model with irreversible decision thresholds. Moreover, the very low probabilities of still choosing the initially strongest target in the forced reversal condition (Fig. 5) show that even for feedback models that do allow dynamic hysteresis, absorbing bounds play no significant role in controlling the well defined saccadic choice behavior that occurred in Experiment 4. These robust findings are corroborated by some of the results obtained by Tsetsos et al. (2012), who found that motion-direction discrimination in one of their subjects showed a primacy-to-recency transition for short- versus long-duration stimuli.
Given the evidence against choice commitment, dominance of the initial stimulus also contradicts many other model types: high-leakage models (Kiani et al., 2008) and most “urgency” models (Ditterich, 2006; Cisek et al., 2009) give more weight to the final stimulus (unless the urgency signal rose faster than our 40 ms minimal duration; Standage et al., 2011). Likewise, recent gating models (Purcell et al., 2010; Schall et al., 2011) prevent accumulation of the initial, low-amplitude part of the sensory input. In theory, it is possible that a different mechanism triggered a saccade to the bright forcing target even though the original mechanism had already committed to the other target. In particular, one might worry that, due to subsiding fixation activity, the late forcing pulses were able to elicit so-called express saccades (Dorris et al., 1997; Bell et al., 2006). There was, however, no evidence for this; the latencies of individual saccades were always >150 ms relative to the onset of the forcing pulse, which rules out any specific involvement of the express pathways. We also examined the actual eye movement traces. Prior commitment to the initially strongest target in a parallel pathway would predict that saccades toward the forcing target are substantially influenced by (preparatory) activity related to the other impending saccade. However, as shown in Figure 10, the metrics and kinematics of saccades in the forced condition were actually very similar to those in the control and unforced condition. This supports our assumption that they resulted from the same saccade mechanism. Interestingly, we found that reaction times depended on the initial stimulus contrast and on the direction of the ensuing saccade (Fig. 9), but not on the probability of choosing either target (Fig. 8). In fact, by manipulating the timing of the intensity swaps in Experiments 2 and 3, we could nearly invert the relation between reaction time and choice probability from faster to slower reactions for the more likely outcome in some of our subjects. Such a remarkable dissociation between reaction time and choice behavior has, to our knowledge, never been reported. Instead, it is typically found that changes in choice probability go hand in hand with opposite changes in reaction time (Palmer et al., 2005; Chittka et al., 2009, but see Niwa and Ditterich, 2008, who found another example of a dissociation: reaction times that varied across conditions without changes in choice probability and vice versa). Current decision-making theories explain these findings from the first-to-threshold principle, but this principle cannot account for the choice behaviors observed in Experiment 4 (Figs. 4, 5), the decoupling of choices and reaction times (Fig. 8), and the remarkable invariance of the chronometric functions to both the presence and timing of the reversals observed in Experiments 1–3 (Fig. 9A–E). We thus conclude that reaction time and choice are determined by separate mechanisms rather than by a single-stage competition process. This fits with previous conceptual schemes (Findlay and Walker, 1999), which propose that “when” and “where” are determined by parallel but hierarchically organized pathways. Physiological studies clearly support the notion that target selection and movement initiation are distinct phenomena.
In monkeys performing visual search, visually responsive cells in frontal eye fields, lateral intraparietal sulcus, and superior colliculus discriminate between target and distractors (frontal eye field: Schall and Hanes, 1993; lateral intraparietal sulcus: Schall and Hanes, 1993; Ipata et al., 2006; Thomas and Paré, 2007; superior colliculus: Basso and Wurtz, 1997; McPeek and Keller, 2002), but the latency with which these cells discriminate the target from distractors is unrelated to the timing of the ensuing movement. In fact, the selection takes place independently of movement execution (Schall et al., 1995; Thompson et al., 1996; Thompson et al., 1997; Murthy et al., 2001; Sato and Schall, 2003; Juan et al., 2004; Murthy et al., 2009). The chronometric functions from all of our experiments indicated that reaction times decreased systematically with increasing initial intensity of the selected target (Figs. 8, 9). One might speculate, therefore, that the reaction times might have been determined by the target contrasts at stimulus onset. Such a theory would be consistent, for example, with the contrast-dependent spike timing recently observed in primary visual cortex (Lee et al., 2010) and is further supported by Experiments 1–3, which showed that neither the presence nor the timing of the intensity reversals had a significant influence on the chronometric response curves (Fig. 9C–E). However, in Experiment 4, we found that the chronometric functions for the control and reversal conditions were clearly different (Fig. 9F) and that the later forcing stimulus actually shortened the saccade latencies, indicating that later stimulus information did affect the reaction times. A possible interpretation of these effects is that increases in stimulus intensity decrease the delays in visual processing (Bell et al., 2006). The present literature leaves some uncertainty about how the target-selection stage is read out. In our simulations, we tested two different approaches. Either the integrated output of the decision units was compared, choosing the channel with the highest value as the current winner, or the output levels of the decision units at stimulus offset were compared. The latter could rely, for example, on the activity levels of cells that keep the latest choice in working memory (Kojima and Goldman-Rakic, 1982; Chafee and Goldman-Rakic, 1998). For the feedforward model, the type of readout heavily influenced its choice behavior, but due to the absence of hysteresis in the evidence accumulation, neither type could be reconciled with the experimental data (Fig. 7B, C). For the feedback model, only a readout at stimulus offset produced predictions that were qualitatively consistent with the observed behavior (Fig. 7A). Results from the integrated output (data not shown) failed to account for the nonmonotonic nature of the psychometric curves in the 50–50 ms reversal conditions of Experiment 4. Bollimunta and Ditterich (2012) obtained physiological evidence from monkey lateral intraparietal sulcus for the presence of a feedforward inhibition component in the random-dot motion-direction discrimination task. At the same time, the presence of feedback inhibition could not be ruled out. Here, we did not simulate hybrid models. It is possible, and perhaps even likely, that feedforward inhibition also contributes to the visual target-selection process that we have studied here. 
However, having ruled out a significant contribution of premature choice commitment in Experiment 4, the essence of our findings is that the observed primacy effects can only be accounted for by a model that includes sufficiently strong feedback cross-inhibition. We conclude that saccadic responses to competing visual targets are best described by a model featuring a competitive choice mechanism based on feedback cross-inhibition that exerts executive control (possibly mediated by the substantia nigra; Hikosaka and Wurtz, 1983) over the initiation of upcoming saccades.
Abstract The Ancha Formation contains granite-bearing gravel, sand, and subordinate mud derived from the southwestern flank of the Sangre de Cristo Mountains and deposited on a streamflow-dominated piedmont. These strata compose a locally important, albeit thin (less than 45 m [148 ft] of saturated thickness), aquifer for domestic water wells south of Santa Fe, New Mexico. Spiegel and Baldwin (1963) originally defined a partial type section for the Ancha Formation using a 49-m-thick (161-ft-thick) exposed interval of weakly consolidated, subhorizontal, arkosic strata on the southwest slope of Cañada Ancha. However, new geologic mapping, sedimentologic field studies, and geochronologic data indicate that the Ancha Formation should be restricted to the upper 12 m (39 ft) of Spiegel and Baldwin's type section, with the underlying strata being correlative to the upper Tesuque Formation. Because this 12-m (39-ft) interval is not well exposed at the type section, we designate four new reference sections that illustrate the textural variability of the Ancha Formation and its stratigraphic relationship with other rock units. New 40Ar/39Ar data help constrain the age of the Ancha Formation. A bed of rhyolite lapilli (1.48 ± 0.02 Ma) that is temporally correlative to one of the Cerro Toledo events is recognized near the top of the section south of Santa Fe, New Mexico. A fluvial deposit interpreted to be inset against the Ancha Formation contains lapilli dated at 1.25 ± 0.06 Ma. These dates indicate that deposition of the Ancha Formation generally ended during early Pleistocene time. However, there is evidence suggesting that aggradation continued into the middle or late Pleistocene in mountain-front canyons east of the Santa Fe embayment. The age of the basal Ancha Formation is diachronous and ranges from ~2.7–3.5 Ma in the western Santa Fe embayment to 1.6 Ma in the eastern embayment near the Sangre de Cristo Mountains. The Pliocene–early Pleistocene aggradation that formed the Ancha Formation in the Santa Fe embayment occurred elsewhere in the Española and Albuquerque Basins, suggesting a regional climatic influence on deposition in the uppermost Santa Fe Group. A rise in base level due to late Pliocene volcanism and tectonism may be responsible for the significant differences in accommodation space, as reflected in deposit thickness, between the Santa Fe embayment and the piedmont regions to the south and the generally degraded upland regions to the north.

Introduction The Ancha Formation underlies the Santa Fe embayment and is a distinct lithostratigraphic unit in the synrift basin fill of the Rio Grande rift. We use the term Santa Fe embayment in the physiographic sense for the southern arm of the Española Basin that extends south of the city of Santa Fe, New Mexico (Fig. 1). There, a west-sloping piedmont along the southwestern flank of the Sangre de Cristo Mountains overlies a north-plunging syncline developed in pre-Pliocene strata (Biehler, 1999; Grant, 1999; Koning and Hallett, 2000). The Santa Fe embayment is bounded by the granite-dominated Sangre de Cristo Mountains to the east, Galisteo Creek to the south, the Cerrillos Hills to the southwest, basalt-capped mesas of the Cerros del Rio volcanic field to the northwest, and the dissected upland underlain by the Tesuque Formation (Miocene) north of the Santa Fe River, called here the Santa Fe uplands (Figs. 1, 2).
This piedmont ranges from approximately 1,830 to 2,190 m (6,004–7,185 ft) in elevation and has been incised by drainages associated with the Santa Fe River, Galisteo Creek, and the Rio Grande. The Ancha Formation was originally proposed by Spiegel and Baldwin (1963, pp. 45–50) for arkosic gravel, sand, and silt, inferred to be from late Pliocene to Pleistocene in age, that lie with angular unconformity upon moderately tilted Tesuque Formation in the regional vicinity of Santa Fe. Spiegel and Baldwin (1963) included the Ancha Formation as the uppermost unit of the Santa Fe Group in the Santa Fe area. The upper boundary of the Ancha Formation in most of the Santa Fe embayment was defined by the Plains surface, a formerly extensive constructional surface preserved on interfluves (Spiegel and Baldwin, 1963; Koning and Hallett, 2000). The Ancha Formation constitutes an important hydrogeologic unit in the Santa Fe embayment, where many communities locally rely on ground water. Although only about half of this unit is saturated with ground water, it is as much as ten times more permeable than the more consolidated and cemented Tesuque Formation (Fleming, 1991; Frost et al., 1994). Ground water generally flows west through this saturated zone from the Sangre de Cristo Mountains, but local recharge also occurs (Frost et al., 1994). Aquifer tests from various consultant reports listed in Koning and Hallett (2000) estimate hydraulic conductivity values from 2 to 130 ft/day (7.1 x 10^-4 to 4.6 x 10^-2 cm/s) in the Ancha Formation and from 1 to 20 ft/day (3.5 x 10^-4 to 7.1 x 10^-3 cm/s) in the underlying Tesuque Formation. This paper redefines the Ancha Formation type section in light of new mapping, sedimentologic, and geochronologic data. We also designate four reference stratigraphic sections and summarize sedimentologic, surficial, and stratigraphic characteristics. In closing, we discuss the age of the Ancha Formation and possible tectonic and climatic influences on its aggradation. Regional geology and previous work The southern Española Basin contains several lithostratigraphic units relevant to the Ancha Formation. The Tesuque Formation underlies most of the Ancha Formation and is a pinkish-tan, arkosic, silty sandstone with minor conglomerate and siltstone that is commonly exposed in the central and eastern parts of the Española Basin north of Santa Fe. Galusha and Blick (1971) proposed the Chamita Formation for Santa Fe Group strata of Miocene age that unconformably lie above the Tesuque Formation north of the Santa Fe embayment. The Chamita Formation is generally brown or gray sand with subordinate gravel of mixed provenance; the gravel contains significant quartzite and other metamorphic clasts mixed together with granite, sedimentary, and volcanic clasts (Galusha and Blick, 1971; Tedford and Barghoorn, 1993). The Puye Formation is a coarse-grained, volcaniclastic, alluvial sequence on the western margin of the Española Basin that was shed primarily from the Tschicoma volcanic center during the Pliocene–early Pleistocene (Manley, 1976b; Waresback). Spiegel and Baldwin (1963) correlated ridge-capping gravel deposits in the northeastern Española Basin to the Ancha Formation. However, Manley (1979a) recommended abandoning this term for high-level gravel deposits in the northeastern Española Basin, including the Oso, Entrañas, and Truchas surfaces. 
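The hydraulic conductivity conversions quoted above are straightforward unit arithmetic (1 ft = 30.48 cm; 1 day = 86,400 s). The short Python sketch below simply reproduces the quoted ranges; the only inputs taken from the text are the ft/day values, and the function name is our own.

```python
# Convert hydraulic conductivity from ft/day to cm/s.
# 1 ft = 30.48 cm exactly; 1 day = 86,400 s.
FT_PER_DAY_TO_CM_PER_S = 30.48 / 86_400.0

def ft_per_day_to_cm_per_s(k_ft_day):
    """Return hydraulic conductivity in cm/s given a value in ft/day."""
    return k_ft_day * FT_PER_DAY_TO_CM_PER_S

# Ranges compiled from consultant reports (Koning and Hallett, 2000):
for unit, low, high in [("Ancha Formation", 2.0, 130.0),
                        ("Tesuque Formation", 1.0, 20.0)]:
    print(f"{unit}: {ft_per_day_to_cm_per_s(low):.1e} to "
          f"{ft_per_day_to_cm_per_s(high):.1e} cm/s")
# Prints ~7.1e-04 to 4.6e-02 cm/s (Ancha) and ~3.5e-04 to 7.1e-03 cm/s
# (Tesuque), matching the converted values given in the text.
```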
Manley (1976a, 1979b) interpreted that the Ancha Formation underlies the Cerros del Rio basalts near Cañada Ancha, although Kelley (1978) did not delineate it there. ### Methods Exposures of late Cenozoic strata in or near the Santa Fe embayment (Fig. 1) were examined and described during the summers of 1999 and 2000. In addition, we examined outcrops along five stratigraphic sections. Sections were measured using a Jacob staff and Brunton compass or Abney level. Descriptions of the type and reference sections of the Ancha Formation and other Santa Fe Group deposits are in Connell et al. (in prep.); an abbreviated description of the Ancha Formation type section at Cañada Ancha is in Appendix 1. The base or top of stratigraphic sections was surveyed using a hand-held, non-differentially corrected GPS unit, except in cases where the section could be easily located on a topographic map (within an estimated ± 20 m [66 ft] horizontal distance). Colors were described using Munsell (1992) notation. We described sedimentary structures and textures using methods discussed in Compton (1985), Dutro et al. (1989), and Ehlers and Blatt (1982). Sandstone composition was visually estimated in the field and provisionally classified using the system of Folk (1974). Soil profiles were briefly described using methods and nomenclature of Birkeland (1999). We determined gravel clast composition from sieved pebble concentrates (>9.52-mm diameter clasts) at selected intervals or in situ at well-cemented outcrops. Paleocurrent directions were obtained from clast imbrications along with crossbed and channel orientations. Radioisotopic dating was performed using the 40Ar/39Ar method at the New Mexico Geochronological Research Laboratory in Socorro, New Mexico. The data are discussed and interpreted in Winick (1999) and Peters (2000). Older and less precise K/Ar and fission-track ages are from previously published studies (Bachman and Mehnert, 1978; Manley and Naeser, 1977; Manley, 1976a). ### Stratigraphy #### Type section at Cañada Ancha The Ancha Formation partial type section of Spiegel and Baldwin (1963) at Cañada Ancha was re-measured, re-described, and dated (Fig. 3). The lower part of these exposures (units 1–7, Appendix 1) contains more than 37 m (121 ft) of reddish-yellow to strong-brown, sandy pebble conglomerate to pebbly sandstone. Deposits are loose to moderately consolidated with local cementation by calcium carbonate. Beds are subhorizontal. Gravel generally contains 93–99% granite, ≤ 1% quartzite, ≤ 5% amphibolite and gneiss, and < 1% volcanic clasts. Only unit 7 contains significant cobbles and has as much as 3% quartzite. A 30-cm-thick (12-inch-thick) pumice bed (unit 5) lies approximately 46 m (151 ft) above the floor of Cañada Ancha. This bed is interpreted to be a primary fallout because it is composed of ≥ 95% white, clast-supported, lapilli-size pumice. In contrast, reworked pumice in overlying beds is subordinate to arkosic sediment and generally coarse ash in size. Pumice clasts in unit 5 were dated at 8.48 ± 0.14 Ma using 40Ar/39Ar methods (Table 1). To the east of Cañada Ancha, deposits assigned to the informal coarse upper unit of the Tesuque Formation are generally reddish-yellow to pink conglomerate and sandstone (Koning and Maldonado, 2001); these strata contain abundant granite with less than 1% quartzite, and paleocurrent data indicate westward paleoflow from the Sangre de Cristo Mountains. 
We assign units 1–7 of the Cañada Ancha section to the upper Tesuque Formation, principally because these units are lithologically similar to the coarse upper unit of the Tesuque Formation described in Koning and Maldonado (2001). Outcrops and clasts in hill-slope colluvium described between the Cañada Ancha section and good exposures of the coarse upper unit of the Tesuque Formation, located 3–4.5 km (2–3 mi) to the northeast, indicate physical continuity between the upper Tesuque Formation and the Cañada Ancha section (Koning and Maldonado, 2001). In addition, pumiceous beds that are similar to units 3, 5, and 6 of the Cañada Ancha section extend northward along strike from the Cañada Ancha section into sandstone and conglomerate strata of the coarse upper unit of the Tesuque Formation, where the average dip of the beds is 3° W (Fig. 2; Koning and Maldonado, 2001). Thus, mapping of these pumiceous beds supports our interpretation that units 1–7 correlate to the upper Tesuque Formation and are Miocene in age rather than correlating to the Ancha Formation. This is consistent with the 8.48 ± 0.14 Ma 40Ar/39Ar age for unit 5 at the Cañada Ancha section.

FIGURE 1—Shaded-relief map of the Santa Fe embayment, Hagan embayment, and Santo Domingo sub-basin of the Albuquerque Basin. Major geographic and physiographic features include the Cerros del Río volcanic field, Santa Fe uplands, Sangre de Cristo Mountains, Ortiz Mountains, and Cerrillos Hills. Abbreviations of stratigraphic sections (shown by short red dashes) include: Cañada Ancha (CA), La Ciénega (LC), Arroyo Hondo (AH), and Galisteo #1 and #2 (G1 and G2, respectively). Red x and label denote field exposure H-136. Red circles denote tephra sample localities (Table 1). Black line encloses the area shown in Figure 2. Base created from 30-m DEM data from the U.S. Geological Survey National Elevation Database (NED).

FIGURE 2—Simplified geologic map and northeast-southwest cross-section of the Santa Fe embayment and vicinity (modified from Koning and Maldonado, 2001; Read et al., 1999 and 2000; Koning and Hallett, 2000; Borchert and Read, 2002; Dethier, 1997; Kelley, 1978; Johnson, 1975; Bachman, 1975; Spiegel and Baldwin, 1963; Sawyer et al., 2001). Drainages and major highways shown for reference. Short vertical lines on the cross-section denote drill-holes (refer to Koning and Hallett, 2000; Read et al., 1999). On map, v-patterns denote strata in upper Tesuque Formation that contain sparse beds of poorly sorted, brownish to grayish, pumiceous sandstone and pebble conglomerate. Abbreviations of stratigraphic sections (shown by short green dashes) include: Cañada Ancha (CA), La Cienega (LC), Arroyo Hondo (AH), and Galisteo #1 and #2 (G1 and G2, respectively).

Significant lithologic and provenance differences are recognized between units 1–7 of the Cañada Ancha section and the Miocene Chamita Formation described by Galusha and Blick (1971) to the north. In the Cañada Ancha section, the preponderance of pink granitic clasts and paleocurrent data indicate westward paleoflow from the Sangre de Cristo Mountains. In contrast, the Chamita Formation contains a diverse assemblage of clasts, including abundant quartzite, which indicates derivation of sediment from the north and northeast (Galusha and Blick, 1971; Tedford and Barghoorn, 1993). These observations, in addition to map data (i.e. 
Galusha and Blick, 1971, and Koning and Maldonado, 2001), do not support correlation of units 1–7 of the Cañada Ancha section with the Chamita Formation of Galusha and Blick (1971). Overlying units 1–7 at the Cañada Ancha section is 6 m (20 ft) of loose and very poorly exposed, light yellowish-brown, gravelly arkosic sand containing 3–5% rounded quartzite cobbles and pebbles (unit 8, Appendix 1). A 2-m-thick (7-ft-thick) pumice bed underlying similar sediment at the same stratigraphic position as unit 8, exposed 200–250 m (656–820 ft) downstream of the Cañada Ancha section (Manley, 1976b, p. 22; K. Manley, pers. comm. 2002), returned a zircon fission-track age of 2.7 ± 0.4 Ma (Table 1; Manley and Naeser, 1977). This pumice is stratigraphically higher than unit 5 and provides a lower age constraint for the Pliocene sediment exposed near the type section. Units 9–11 do not contain quartzite clasts but have scattered pebbles and cobbles of granite and basalt within silty sand and sandy silt (Appendix 1). These strata possess very thin, tabular, even-bedded basaltic lapilli from the Cerros del Rio volcanic field that are interpreted to represent fallout phreatomagmatic tephra (Koning and Maldonado, 2001). A 14-m-thick (46-ft-thick) basalt flow caps the sediment at the Cañada Ancha section (unit 12). Although this flow has not been dated, mesa-capping basalt and basaltic andesite(?) flows near this site have yielded K/Ar and 40Ar/39Ar ages of 2.0–2.5 Ma (WoldeGabriel et al., 1996; Manley, 1976a). Units 8–9 are similar to deposits mapped to the north (Ta on the maps of Koning and Maldonado, 2001, and Dethier, 1997). These deposits are generally loose to weakly consolidated, brownish to yellowish, granitic sand and pebbles with 5–20% cobbles. Approximately half of the cobbles and 1–5% of the pebbles in these deposits are quartzite. Although the difference is subtle and not ubiquitous, these deposits may be distinguished locally from the underlying Tesuque Formation by their slightly more brownish color, lesser overall consistence, relatively higher cobble content, and their relatively higher (by at least a factor of 2) quartzite clast content. The contact between these units and sub-horizontal beds of the Tesuque Formation is a planar discontinuity that is not obvious in the coarse strata. These deposits underlie the flows and phreatomagmatic deposits of the Cerros del Rio volcanic field and in places are interbedded with Cerros del Rio phreatomagmatic deposits. Extensive basaltic talus and colluvium generally cover these sediment units, but one good exposure demonstrating this interbedded relationship is observed 3.1 km (2 mi) downstream along Cañada Ancha, across from the mouth of Cabalasa Arroyo (H-136 in Fig. 1). Along the western edge of the Santa Fe embayment to the south, the Ancha Formation is interbedded with fluvially reworked volcaniclastic deposits, probably phreatomagmatic in origin, of the Cerros del Rio volcanic field (see below). Thus, units 8 and 9 and the nearby correlative deposits of unit Ta (Koning and Maldonado, 2001; Dethier, 1997) occur in the same stratigraphic position and are of similar age to the Ancha Formation. We therefore interpret that units 8 and 9 and Ta represent a northern tongue of the Ancha Formation. Perhaps these units could correlate to one of the discontinuous, high-level, gravel-bearing terrace deposits mapped to the east in the Santa Fe uplands (QTg in Fig. 2). 
**Ancha Formation** The Ancha Formation is predominantly silty sand to pebbly sand with varying amounts of gravel derived from the Sangre de Cristo Mountains. Common colors range from brownish yellow, yellowish brown, and light yellowish brown to very pale brown (7.5–10YR hues). Strong-brown (7.5–10YR) clayey sand is subordinate and mostly found to the southwest. The sand is subangular to subrounded, poorly sorted, and arkosic. The sand is predominantly fine-grained in the southwest part of the embayment but is mostly medium to very coarse grained to the north and east near the front of the Sangre de Cristo Mountains. Gravel is generally clast supported, subrounded, and commonly contains 85–97% granite with sparse amphibolite, quartzite, and gneiss. Around the Cerrillos Hills, sparse Oligocene intrusive clasts derived from these hills are scattered in the Ancha Formation within 2 km (1.2 mi) of its contact with the Tuerto formation. Bedding is tabular (particularly for sand and mud) or lenticular (particularly for gravels), and very thin to thick. Sedimentary structures are sparse and only locally recognized. Buried soils in the Ancha Formation are generally uncommon. Where exposed, these buried intraformational soils commonly contain calcic horizon(s) with stage III carbonate morphology that are overlain by clayey Bt horizon(s). A veneer of sheetwash or colluvium commonly covers the Ancha Formation; good exposures tend to be limited to roadcuts and a few arroyo walls. Muddy volcaniclastic deposits containing altered and multicolored tephra and basalt clasts are present as discontinuous or channel-shaped bodies as much as 4 m (13 ft) thick. These are locally recognized in the western Santa Fe embayment along the lower reaches of the Santa Fe River, Arroyo Hondo, and La Cienega Creek (Figs. 2, 3). These volcaniclastic lithofacies probably represent fluvially reworked phreatomagmatic deposits derived from the Cerros del Rio volcanic field. The Ancha Formation is weakly to non-deformed, and its lower contact is an angular unconformity with tilted Miocene and Paleogene strata. The basal unconformity is locally well exposed in the bluffs north of Galisteo Creek and at the Arroyo Hondo reference section. Subsurface relief of the basal Ancha contact where it overlies the distinctive Espinaso Formation near the western margin of the embayment (Fig. 2; Koning and Hallett, 2000) indicates that the unconformity is not everywhere a planar, beveled surface. Bedding is subhorizontal or dips less than 2° west-southwest. Where exposed, the Ancha Formation is not obviously cut by faults except for a locality 900–950 m (2,953–3,117 ft) northeast of the town of Lamy (Lisenbee, 1999). **Reference sections.** We designate four reference sections (Fig. 3) to supplement the poorly exposed type section at Cañada Ancha. These sections also illustrate the textural variability of the Ancha Formation and its relationship with other rock units. Complete descriptive data are forthcoming (Connell et al., in prep.). The Arroyo Hondo section (AH) is located on the north slope of Arroyo Hondo, 1.1 km (0.7 mi) west of the exposed bedrock of the Sangre de Cristo Mountains (Figs. 1–3). The sediment is loose and composed primarily of lenticular-bedded, locally cross-stratified sand and gravel channel deposits that unconformably overlie redder, tilted sandstone beds of the Tesuque Formation. The gravel includes poorly sorted pebbles, cobbles, and boulders and is composed of granite with 1–3% amphibolite and gneiss. 
Fluvially recycled pumice is dispersed in the sand and gravel in the lower 9.5 m (31 ft) of the Ancha Formation. We correlate this pumice to an outcrop of pumice lapilli located 120 m (394 ft) to the southwest that was mapped by Read et al. (1999 and 2000) as the Guaje Pumice Bed (lower Bandelier Tuff). Near the top of the section is a well-developed soil possessing a calcic horizon with stage III carbonate morphology; this is overlain by 1.7 m (6 ft) of sandy gravel (unit 6) whose top has likely been eroded. The La Cienega (LC) section includes 28 m (92 ft) of Ancha Formation (Fig. 3). The lower 16 m (52 ft; units 2–6) is composed of sand, silt, and clay. One 3.7-m-thick (12-ft-thick) interval (unit 4) contains local thin to medium, wavy, well-cemented beds. This horizontally bedded sediment is not correlated to the Tesuque Formation because a nearby outcrop of Tesuque Formation, located 0.6 km (0.37 mi) to the northwest, has indurated gravel beds dipping 26° southeast. The upper 13 m (43 ft) of the Ancha Formation contains sandy pebbles... and cobbles composed of granite with 5–7% rounded quartzite and 1–2% amphibolite and gneiss. Locally within the upper Ancha Formation are 2–4-m-thick (7–13-ft-thick), non-laterally extensive, channelized deposits of reworked volcaniclastic sediment composed of massive muddy sand with 10–50% altered tephra and volcanic pebbles. Above the Ancha Formation, across a covered contact, lies approximately 3 m (10 ft) of Quaternary sand and gravel terrace deposits (Qao1 of Koning and Hallett, 2000). Two sections, Galisteo #1 and #2, were described along Galisteo Creek, where the Ancha Formation clearly overlies tilted Paleogene and Cretaceous strata with distinct angular unconformity (Fig. 3, G1 and G2). The basal 10–13 m (33–43 ft) is a well-cemented pebbly sandstone and subordinate pebble to cobble conglomerate. Clasts are composed primarily of granite with minor limestone, schist, fine-grained to porphyritic volcanic and intrusive rocks, sandstone, siltstone, and quartz. The porphyritic intrusive and volcanic rocks are probably derived from the Ortiz Mountains and related Oligocene igneous centers. At Galisteo #2, the upper 7 m (23 ft) of strata is weakly cemented and poorly exposed. At section Galisteo #1, granite-bearing conglomerate and sandstone of the Ancha Formation is overlain by 5 m (16 ft) of Tuerto formation containing sand with porphyritic andesite- and monzonite-bearing gravel. Paleocurrent directions measured from trough cross-stratification in the lower Ancha Formation range from 270° to 280°. Members. The texture of the Ancha Formation varies across the Santa Fe embayment, and this allows the differentiation of two informal members. The fine alluvial member (fa) is located south of I-25 in the western part of the Santa Fe embayment (Fig. 2). There, the Ancha Formation is generally composed of clayey to silty sand, with minor pebble lenses and coarse- to very coarse grained sand beds. This member is represented by units 2 through 6 of the lower La Cienega reference section (Fig. 3). Sediment coarsens to gravelly sand in the eastern part of the Santa Fe embayment and is informally called the coarse alluvial member (ca; Fig. 2), represented by the Arroyo Hondo reference section and unit 7 of the La Cienega reference section (Fig. 3). Between the Santa Fe River and Arroyo Hondo drainages, the coarse alluvial member extends to the western margin of the Santa Fe embayment and consists mostly of sandy gravel with subequal proportions of pebbles to cobbles. 
Boulders compose approximately 2–5% of the total sediment volume in the west and as much as 20% in the east. The westward extension of the coarse alluvial member north of the fine alluvial member (Fig. 2) may represent deposition by an ancestral Santa Fe River. Quartzite clasts compose 1–15% of the gravel in this alluvium but are significantly sparser to the south. Cementation zones. In the southern Santa Fe embayment, the Ancha Formation is divided into two zones based on cementation. Here, the lower Ancha Formation is called the well-cemented zone because it is well cemented with sparry calcite and typically forms ledges or cliffs (Fig. 3, G1 and G2). The upper unit is called the weakly cemented zone; it typically forms poorly exposed slopes and is mostly covered by colluvium. These zones are generally not mappable at a scale of 1:24,000 but are distinctive in outcrop. The well-cemented zone is commonly recognized in the southwestern Santa Fe embayment near Galisteo Creek, where the Ancha Formation rests upon the Espinaso Formation, Galisteo Formation, and older strata. The weakly cemented zone is more laterally extensive than the well-cemented zone, particularly in the north where it lies upon deposits of the Tesuque Formation. Plains surface. The top of the Ancha Formation is typically modified by erosion and is best preserved locally on broad interfluves between entrenched drainages in the Santa Fe embayment. The relatively flat surface preserved on these broad interfluves has been called the Plains surface by Spiegel and Baldwin (1963). Soils developed on the Plains surface locally exhibit < 25 cm (< 9.75 inches) thick, clayey Bt or Btk horizons underlain by 50 cm (20 inches) to over 100 cm (39 inches) thick calcic and siliceous Bk or Bkq horizons with stage II to III+ pedogenic carbonate morphology. Soil development is weaker where the Plains surface has been eroded or affected by younger deposition. Near major drainages, such as Arroyo Hondo, erosion and subsequent deposition of younger inset stream deposits have locally modified the Plains surface. For example, exposures in railroad cuts near upper Arroyo Hondo show loose granitic sand and gravel disconformably overlying the Ancha Formation. The upper 1.7 m (6 ft) of sandy gravel at the Arroyo Hondo reference section (unit 6), overlying a well-developed soil with a stage III calcic horizon, may also represent a younger inset stream deposit (Fig. 3, AH). These thin (< 4-m-thick [< 13-ft-thick]) deposits are lithologically identical to the underlying Ancha Formation, locally strip the Plains surface and associated soils, and can be differentiated from the Ancha Formation only locally because of the lack of exposure. Radiocarbon age control is not available for these younger inset deposits. Thickness. The Ancha Formation ranges from 10 to 90 m (33–295 ft) in thickness based on geologic map, drill-hole, and seismic data. Surface exposures of the Ancha Formation are commonly 10–40 m (33–131 ft) thick. Using drill cuttings and well logs to differentiate the Ancha and Tesuque Formations is difficult because of lithologic similarities between the two units. However, analysis of drill-hole data and surface geologic observations suggests that differentiating these formations may be possible locally based on slight textural changes, with the Tesuque Formation being slightly finer, and on pronounced local color changes, such as observed in the Arroyo Hondo reference section. 
In the upper reach of Gallina Arroyo, seismic refraction studies suggest that the Ancha Formation is approximately 30–90 m (98–295 ft) thick (S. Biehler, pers. comm. 1999). Ten to 11 km (6–7 mi) to the south, however, exposures along the bluffs north of Galisteo Creek near Lamy, New Mexico, are approximately 25–45 m (82–148 ft) thick (map data from Johnson, 1975, and Lisenbee, 1999). The difference between these two thickness estimates suggests a northward thinning of the Ancha Formation or may reflect the uncertainty in the geophysical modeling. Between the Cerrillos Hills and the Sangre de Cristo Mountains, the thickness of the Ancha Formation is relatively easy to constrain using well data because it overlies light- to dark-gray, lithic-rich sandstone and conglomerate of the Espinaso Formation; reddish-brown, clay-rich sandstone and mudstone of the Galisteo Formation; or limestone of the Madera Formation (Koning and Hallett, 2000; Grant Enterprises, Inc., 1998). In the southwestern Santa Fe embayment, east of the Cerrillos Hills, interpretation of well data indicates a thickness of 40–70 m (131–230 ft) for the Ancha Formation and approximately 30 m (98 ft) of relief on its lower contact with the Espinaso Formation (Koning and Hallett, 2000; American Groundwater Consultants, 1985). Similar subsurface relief is also interpreted for the western embayment east of La Cienega based on drill-hole data (Fig. 2). In the southeastern Santa Fe embayment near the Sangre de Cristo Mountains, well data suggest as much as 90 m (295 ft) of Ancha Formation (Grant Enterprises, Inc., 1998). The Tesuque Formation generally underlies the Ancha Formation in the northern Santa Fe embayment, and here it is difficult to confidently determine the thickness of the Ancha Formation and the underlying paleotopography.

FIGURE 3—Type and reference stratigraphic sections of the Ancha Formation in the Santa Fe embayment. Sedimentologic descriptions of the Cañada Ancha type section (CA) are in Appendix 1. Descriptions of the La Cienega (LC), Arroyo Hondo (AH), and Galisteo #1 and #2 (G1 and G2) sections are in Connell et al. (in prep.). Members labeled only for sections LC, AH, and G2; ca = coarse alluvial member; fa = fine alluvial member.

Stratigraphic position. South of Galisteo Creek, the Tuerto formation overlies approximately 13 m (43 ft) of granite-bearing gravel and sand of the Ancha Formation (Fig. 3, G1). Measurements of cross-stratification and channel margins indicate a westerly paleoflow, suggesting an ancestral Galisteo Creek deposited the Ancha Formation at the Galisteo #1 section. Southern tributaries of Galisteo Creek near Cerrillos, New Mexico, drain the Ortiz Mountains, and this was probably true during the time of Ancha Formation aggradation. The composition of the sediment delivered by these tributary drainages would be correlative to the Tuerto formation. Thus, the arkosic sand and granitic gravel deposited by the ancestral Galisteo Creek, as observed at the Galisteo #1 section, would approximate the southern limit of the Ancha Formation. Near the Santa Fe River, approximately 4–5.5 km (2.5–3.5 mi) north of La Cienega (Fig. 1), Ancha Formation strata with rounded quartzite pebbles are interbedded with basalt flows of the Cerros del Rio field. 
Furthermore, colluvial deposits overlying these basalt flows contain locally derived basaltic deposits mixed with fine- to coarse-grained sand composed of potassium feldspar and quartz interpreted to be derived from the Sangre de Cristo Mountains (Koning and Hallett, 2000). There is also local granitic lag gravel on basalt flows approximately 3 km (2 mi) west of La Cienega (D. Sawyer, pers. comm. 2001). These last two observations indicate that

TABLE 1—Geochronologic data for late Tertiary and early Pleistocene tephras.

<table> <thead> <tr> <th>Sample no. (Fig. 1)</th> <th>Map unit (Fig. 2)</th> <th>Location (UTM, NAD83, zone 13S; m)</th> <th>Description</th> <th>NMGRL No.</th> <th>WM K/Ca ratio ± σ</th> <th>Number of sanidine crystals (MSWD)</th> <th>WM ± 2σ age (Ma)</th> </tr> </thead> <tbody> <tr> <td>T-318</td> <td>Qa*</td> <td>N: 3,933,270; E: 400,520</td> <td>&lt; 10 cm lapilli bed interbedded with pebbly sand along south margin of Bonanza Creek.</td> <td>51220</td> <td>24.8 ± 1.3</td> <td>2 (n.a.)</td> <td>1.25 ± 0.06</td> </tr> <tr> <td>T-264</td> <td>QTa</td> <td>N: 3,940,320; E: 406,300</td> <td>White, 80 cm thick, pebbly pumice bed overlain by 50 cm of pumiceous sand in west roadcut just south of I-25.</td> <td>51222</td> <td>29.4 ± 6.7</td> <td>7 (0.88)</td> <td>1.48 ± 0.02</td> </tr> <tr> <td>BS-2*</td> <td>QTa</td> <td>N: 3,953,470; E: 404,420</td> <td>Intersection of I-25 and Richards Avenue. About 1–6 m below eroded Ancha Fm. surface.</td> <td>9777, 9779, 9700</td> <td>34.5 ± 12.4</td> <td>38 (2.66)</td> <td>1.61 ± 0.02</td> </tr> <tr> <td>BS-1*</td> <td>QTa</td> <td>N: 3,942,350; E: 415,900</td> <td>About 1 m below eroded Ancha Fm. surface north of Arroyo Hondo.</td> <td>9777</td> <td>31.2 ± 16.5</td> <td>13 (5.07)</td> <td>1.67 ± 0.03</td> </tr> <tr> <td>T-40</td> <td>QTa</td> <td>N: 3,941,940; E: 404,305</td> <td>120 cm thick, white ash and pumice mixed with sand and sandy gravel in roadcut by aqueduct.</td> <td>51221</td> <td>60.8 ± 12.7</td> <td>14 (0.82)</td> <td>1.63 ± 0.02</td> </tr> <tr> <td>DL-HR*</td> <td>Tt</td> <td>N: 3,956,680; E: 399,870</td> <td>White lapilli bed ~20 m below basalt flow and ~9 m below QTa(?) at Cañada Ancha section.</td> <td>6049</td> <td>0.06ψ</td> <td>n.a.</td> <td>8.48 ± 0.14</td> </tr> <tr> <td>73A2**</td> <td>QTa</td> <td>N: 3,956,893; E: 399,807</td> <td>2 m thick, white pumice interbedded in arkosic sand and granitic gravel near base of Ancha Fm. 200–250 m downstream of Cañada Ancha stratigraphic section.</td> <td>n.a.</td> <td>n.a.</td> <td>n.a.</td> <td>2.7 ± 0.4</td> </tr> </tbody> </table>

Tephras were analyzed by the New Mexico Geochronological Research Laboratory (NMGRL; Peters, 2000, and Winick, 1999) by the laser total fusion method, except for samples DL-HR* and 73A2** (see below). Locations of tephra sample localities are given in Figure 1. Geographic coordinates (Universal Transverse Mercator, 1983 North American Datum) are rounded to the nearest 5 m (16 ft).

The results of the 40Ar/39Ar analyses indicate that the samples are age equivalent to tephras derived from the neighboring Jemez volcanic field and are interpreted to include the Guaje Pumice Bed (1.61 Ma; Izett and Obradovich, 1994), Cerro Toledo Rhyolite (1.62–1.22 Ma; Spell et al., 1996), and Tsankawi Pumice Bed (1.22 Ma; Izett and Obradovich, 1994). These tephra ages and stratigraphic relationships indicate that the Ancha Formation was aggrading in the north part of the embayment during early Pleistocene time. 
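Table 1 reports weighted-mean (WM) ages with 2σ uncertainties and MSWD values computed from single sanidine crystal analyses. As a rough illustration of how such summary statistics are obtained from single-crystal ages, the sketch below applies generic inverse-variance weighting in Python; it is not the NMGRL data-reduction software, and the single-crystal ages and errors are invented for the example.

```python
import math

def weighted_mean_age(ages, sigmas):
    """Inverse-variance weighted mean, its 1-sigma error, and MSWD.

    ages, sigmas: single-crystal ages and 1-sigma uncertainties (Ma).
    """
    weights = [1.0 / s**2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * a for w, a in zip(weights, ages)) / wsum
    sigma_mean = math.sqrt(1.0 / wsum)
    # MSWD: chi-square of the residuals divided by degrees of freedom (n - 1).
    mswd = sum(((a - mean) / s)**2 for a, s in zip(ages, sigmas)) / (len(ages) - 1)
    return mean, sigma_mean, mswd

# Hypothetical single-crystal sanidine ages (Ma) and 1-sigma errors:
ages = [1.60, 1.62, 1.59, 1.63, 1.61]
sigmas = [0.02, 0.03, 0.02, 0.03, 0.02]
wm, sig, mswd = weighted_mean_age(ages, sigmas)
print(f"WM age = {wm:.2f} +/- {2 * sig:.2f} Ma (2-sigma), MSWD = {mswd:.2f}")
```

An MSWD near 1 suggests that the scatter among crystals is consistent with the analytical errors, whereas larger values (as for samples BS-1* and BS-2* in Table 1) indicate excess scatter, for example from recycled older grains.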
A bed of lapilli near the interchange of I-25 and NM-14 (sample T-264; Table 1, Figs. 1, 4) temporally correlates to one of the Cerro Toledo Rhyolite events. This tephra is approximately 7–11 m (23–36 ft) below the projection of the Plains surface. Located in the same general stratigraphic interval, tephra beds that are age equivalent to the Guaje Pumice Bed are recognized in at least two localities in the Santa Fe embayment (samples T-40, BS-2, and possibly BS-1; Table 1, Figs. 1, 4). These tephra are approximately 4–18 m (13–59 ft) below projections of the Plains surface. Other tephra in similar stratigraphic positions to these sampled beds, and thus probably Cerro Toledo Rhyolite or the Guaje Pumice Bed, are scattered throughout the northern part of the Santa Fe embayment. North of the middle reach of Arroyo Hondo, between samples T-264 and BS-2, these tephra are overlain by as much as 12–15 m (39–49 ft) of Ancha Formation, indicating significant deposition occurred at least locally after their emplacement. These tephra have not been recognized in the southern part of the embayment, suggesting that deposition ceased before 1.6 Ma in the south, that deposits younger than ~1.6 Ma have been eroded, or that tephra correlative to the Guaje Pumice Bed or Cerro Toledo Rhyolite were not deposited in the south. In the lower reach of Bonanza Creek near the west side of the Santa Fe embayment, pumice lapilli from a deposit of pebbly sand, interpreted to be inset against the Ancha Formation, is approximately 20–25 m (66–82 ft) below the projected Plains surface (sample T-318; Table 1, Figs. 1, 4). A sample from a single pumice lapilli bed here yields a range of ages between 1.25 and 2.6 Ma. Seventy-five percent of the analyzed sanidine crystals from this bed yield ages between 1.48 and 2.6 Ma, indicating that significant recycling of older Ancha Formation deposits was underway by 1.25 Ma. Thus, the Ancha Formation generally ceased aggrading between 1.48 Ma and 1.25 Ma. The cessation was probably a response to incision of this area by drainages associated with the Santa Fe River. The Santa Fe River may have experienced an increase in incision rate at this time because it had succeeded in cutting through relatively resistant rocks of the Cerros del Rio volcanic field, or because of incision of the Rio Grande at White Rock Canyon (Dethier, 1997; Dethier and Reneau, 1995). We assume that the southern Santa Fe embayment also ceased aggrading in response to incision during the early Pleistocene. However, this area is drained by tributaries to Galisteo Creek, such as Gallina Arroyo, which may have a different incision history than the Santa Fe River drainage system. Regional stratigraphic relationships and radioisotopic ages constrain the lower age limit of the Ancha Formation to approximately 2.7–3.5 Ma. The base of the Ancha Formation is locally older than 2.2–2.8 Ma flows of the Cerros del Rio volcanic field (Stearns, 1979; Bachman and Mehnert, 1978; Koning and Hallett, 2000; WoldeGabriel et al., 1996; Manley, 1976a; Sawyer et al., 2001) and younger than the 8.48 Ma tephra in the Tesuque Formation at the Cañada Ancha stratigraphic section. Near the mouth of Cañada Ancha, the Ancha Formation (unit Ta of Dethier, 1997) lies below a basalt flow that gave an 40Ar/39Ar age of 2.49 ± 0.03 Ma (Dethier, 1997; WoldeGabriel et al., 1996). 
Here, these deposits are also interbedded with phreatomagmatic deposits of the Cerros del Rio volcanic field, which was active following 2.8 Ma (Bachman and Mehnert, 1978; WoldeGabriel et al., 1996; Sawyer et al., 2001). A 2.7 ± 0.4 Ma zircon fission-track age (Manley and Naeser, 1977) from a pumice bed probably correlative to the basal Ancha Formation (K. Manley, pers. comm. 2002) is consistent with the above data and indicates that the Ancha Formation near Cañada Ancha began aggrading at about 2.7 Ma. In the northern Santa Fe embayment, there is evidence that the lower Ancha Formation is diachronous, with the base being older in the west and younger to the east. North of La Cienega, at least 25 m (82 ft) of Ancha Formation underlies a 2.3–2.8 Ma basalt flow of the Cerros del Rio volcanic field (Koning and Hallett, 2000), compared to 12 m (39 ft) at the Ancha Formation type section, so it is likely that the Ancha Formation near La Cienega is older than at Cañada Ancha, possibly extending back to 3.5(?) Ma. Near the mountain front at the Arroyo Hondo reference section, only 2 m (7 ft) of Ancha Formation underlies pumiceous sediment that correlates to the 1.6 Ma Guaje Pumice Bed (Fig. 3; Read et al., 2000). Thus, the lower age of the Ancha Formation probably ranges from 2.7 to 3.5(?) Ma in the northwestern Santa Fe embayment but may be as young as ~1.6 Ma to the east near the Sangre de Cristo Mountains. Discussion The stratigraphic relationships at the Cañada Ancha stratigraphic section, where Pleistocene deposits correlative to the Ancha Formation unconformably overlie Tesuque Formation strata containing 8–9 Ma tephra, are similar to the previously mentioned exposure in Bayo Canyon (see top of p. 78; Fig. 1). This similarity suggests that the associated unconformity is regional in extent and that aggradation of the Tesuque Formation continued to about 8 Ma in the southern Española Basin. We concur with most previous mapping of the Ancha Formation except for the inclusion of Oligocene monzonite or diorite gravels on the flanks of the Cerrillos Hills (e.g. Bachman, 1975). Rather, we propose that the Ancha Formation be restricted to deposits whose gravel contains more than 5% granite clasts. This serves to clearly differentiate the porphyry- and monzonite-bearing Tuerto formation, which was deposited by streams draining the Cerrillos Hills and Ortiz Mountains, from the granite-bearing Ancha Formation, which was deposited by streams draining the southern Sangre de Cristo Mountains. Within and north of the Santa Fe uplands, topographically high-level stream gravels are preserved as terraces. The highest terrace deposits probably predate the Ancha Formation (Koning and Maldonado, 2001), but lower terrace deposits may be partly concomitant with Ancha Formation aggradation (Koning and Maldonado, 2001). However, it is difficult to correlate these terrace deposits among different drainages and, aside from a few dated terraces, with the Ancha Formation. Thus, we concur with Manley (1979a) that these upland gravel deposits should not be assigned to the Ancha Formation. This flight of relatively thin terrace deposits indicates long-term, post-late Oligocene fluvial dissection of that landscape. In contrast, south of the Santa Fe River there appears to have been general aggradation, as manifested by the thicker Pliocene–early Pleistocene deposits preserved there. 
The Pliocene–early Pleistocene deposits in these two areas reflect markedly different environments, one of general aggradation and the other of landscape degradation, and this justifies the concept of keeping the Ancha Formation confined to the Santa Fe embayment and extending it northwest beneath the Cerros del Rio volcanic field. Aggradation of the Ancha Formation probably began around 2.7–3.5(?) Ma and continued into the early Pleistocene, when regional incision occurred over much of the Santa Fe embayment. We interpret that large-scale aggradation likely began as a response to a relative rise in base level because the western, distal part of the basin appears to have aggraded significantly before the eastern margin. Later aggradation near the Pliocene–Pleistocene boundary appears to have been concentrated in the eastern Santa Fe embayment and may have been more influenced by sediment supply and discharge factors in the Sangre de Cristo Mountains than by local base-level control. The relative rise in base level at the beginning of Ancha Formation aggradation was probably driven by a combination of Cerros del Rio volcanism and, to a lesser degree, Pliocene tectonism. Basalt flow thickness, geologic map relations, and 40Ar/39Ar ages in the Tetilla Peak quadrangle (Sawyer et al., 2001) suggest that particular Cerros del Rio basalt flows may not wholly account for Ancha aggradation, but the top of the basalt is higher than the projected Plains surface. Moreover, Cerros del Rio volcanism proba-

FIGURE 5—Correlation and comparison of stratigraphic units used in this report to stratigraphic sections in the Española and Albuquerque Basins. QTg = Quaternary and Tertiary high-level gravel deposits, Qbt = Tsankawi Pumice Bed, Qbg = Guaje Pumice Bed. Ages of Tsankawi and Guaje Pumice Beds from Izett and Obradovich (1994). Age followed by (*) from WoldeGabriel et al. (2001); ages followed by (**) from McIntosh and Quade (1995); age followed by (γ) from Smith (2000). Pattern composed of diagonal lines represents missing stratigraphic section and an associated unconformity.

Neogene tectonism appears to have restricted the aggradation and preservation of thick late Pliocene–early Pleistocene alluvial deposits to the Santa Fe embayment. The Santa Fe uplands to the north were likely a topographic high during Ancha Formation deposition, which explains why the Ancha Formation pinches out northward against them. The uplands may have formed by differential uplift of the rift hanging wall during the late Miocene–Pliocene (Koning and Maldonado, 2001; Smith and Roy, 2001). Between 2.7 and 0.5 Ma, as much as 500 m (1,640 ft) of offset occurred along the middle part of the La Bajada fault, and the footwall experienced relative uplift (Sawyer et al., 1999). Consequently, the Ancha Formation thins against the Espinaso and Galisteo Formations in the footwall west of La Cienega (Fig. 2, geologic cross section). The Pliocene–Pleistocene Ancha, Puye, and Tuerto formations are generally gravelly and unconformably overlie older, generally tilted, and finer strata of the Santa Fe Group (Fig. 5). Pliocene sand and gravel sequences are recognized in the upper Arroyo Ojito and Sierra Ladrones Formations in the adjacent Albuquerque Basin (Fig. 5; Connell et al., 1999; Smith and Kuhle, 1998; Maldonado et al., 1999). 
The coarse-grained character of these upper Santa Fe Group units, which are exposed over a wide area and in different structural basins, suggests that significant regional erosion and accompanying deposition may have been triggered by paleoclimatic factors that increased stream discharge and competence. However, we interpret that late Tertiary tectonic factors and emplacement of the Cerros del Rio volcanic field were largely responsible for the significant differences in stratal thickness between the Santa Fe embayment and the piedmont regions near the Ortiz Mountains south of the embayment and the generally degraded upland regions to the north of the embayment. Conclusions Recent mapping and sedimentologic study in and near the Santa Fe embayment, along with 40Ar/39Ar dating of tephra layers, demonstrate that the lower three-quarters of the Ancha Formation partial type section of Spiegel and Baldwin (1963) is correlative to the Tesuque Formation. Consequently, we restrict the type section to include 12 m (39 ft) of arkosic sediment located above the Tesuque Formation and below deposits and flows of the Cerros del Rio volcanic field. We also propose a granite clast content of more than 5% for differentiation of the granite-rich Ancha Formation from the granite-lacking Tuerto formation. Paleoclimatic influences may be responsible for deposition of regionally extensive, coarse-grained, Pliocene–early Pleistocene strata. However, preservation of thick deposits in the Santa Fe embayment may be attributed to a relative rise of local base level because of Cerros del Rio volcanic flows and Neogene tectonism. Based on 40Ar/39Ar dating of tephras, the Ancha Formation aggraded until regional incision occurred between 1.48 and 1.25 Ma. Outcrop and subsurface data suggest that the Ancha Formation is generally 10–90 m (33–295 ft) thick and pinches out against older and topographically higher Tesuque Formation deposits north of the Santa Fe River. The suite of thin terrace deposits in the uplands north of the Santa Fe River should not be included in the Ancha Formation because of correlation ambiguities. Instead, the Ancha Formation should be restricted to Pliocene–early Pleistocene, arkosic, basin-fill sediment within the Santa Fe embayment and extending beneath the Cerros del Rio volcanic field. Acknowledgments The authors thank Steve Maynard, John Hawley, John Rogers, Adam Read, Dave Love, Charles Stearns, David Dethier, Jack Frost, David Sawyer, Florian Maldonado, Ralph Shroba, and Charles Ferguson for discussions on the geology of the Santa Fe and Hagan embayments. Dave Love collected the tephra sample at the Cañada Ancha section. We especially thank Kim Manley and Shari Kelley for revisiting outcrops in Cañada Ancha. Discussions with Gary Smith greatly improved an early version of this paper. This study was funded, in part, by the New Mexico State Map Program of the National Cooperative Geologic Mapping Act of 1992 (F. W. Bauer, Program Manager), and New Mexico Bureau of Geology and Mineral Resources (P. A. Scholle, Director). We thank Mr. Gaylon Duke, Ms. Barbara Harnack, and the Bonanza Ranch for allowing access onto their lands for this study. We also thank Jack Frost for sharing drill-hole and aquifer data for the Santa Fe area, and Charles Hibner for soil-profile data. Glen Jones prepared the shaded relief base for Figure 1. Gary Smith and Spencer Lucas reviewed the manuscript. References Galusha, T., and Blick, J. C., 1971, Stratigraphy of the Santa Fe Group, New Mexico: Bulletin of the... 
Peters, L., 2000, 40Ar/39Ar geochronology results from tephra and basalt clasts: New Mexico Geochronological Research Laboratory (NMGRL), Internal Report NMGRL-IR-123, 4 pp. plus figures, tables, and appendices. Winick, J., 1999, 40Ar/39Ar geochronology results from the Seton Village quadrangle: New Mexico Geochronological Research Laboratory (NMGRL), Internal Report NMGRL-IR-78, 6 pp. plus figures, tables, and appendices.

Appendix 1 Measured section of Ancha Formation partial type section of Spiegel and Baldwin (1963). Measured and described along west slope of Cañada Ancha, from floor of arroyo to top of cliff-forming basalt of the Cerros del Rio volcanic field by D. J. Koning on October 17, 2000. Base at N: 3,956,720 ± 20 m, E: 399,955 ± 20 m (zone 13, NAD 83). Horcado Ranch 7.5-min quadrangle, Santa Fe County, New Mexico. Colors determined dry. Numerical unit designations established upsection, but listed in descending stratigraphic order.

<table> <thead> <tr> <th>Unit and description</th> <th>Thickness (m)</th> <th>Thickness (ft)</th> </tr> </thead> <tbody>
<tr> <td>6. Pebbly sandstone and sandy pebble conglomerate; similar to unit 4 except unit is mixed with varying amounts of sand-size pumice (&lt;20%) and volcanic lithic (5–7%) fragments. Unit contains 10% slightly (~5%) muddy and pebbly sand, pink to very pale brown (7.5–10YR 7/8–3); these beds are lenticular (over 10s of meters distance) and medium to thick (25–80 cm); mostly matrix supported; have 10–15% very fine to medium pebbles with ~60% granite and ~40% plagioclase; sand is poorly to moderately sorted and contains 20% pumice, 10% lithics, and 70% arkosic grains; sharp but relatively planar lower contact; generally loose to moderately consolidated.</td> <td>1.6</td> <td>5.2</td> </tr>
<tr> <td>7. Sandy silt, very pale brown (10YR 8/2); massive; contains 5% very fine to medium basaltic pebbles with 10% granitic pebbles; paleosol with a stage II calcium carbonate horizon is present 60–90 cm below top of unit; upper 60 cm is light brown (7.5YR 6/4) and contains a possible Bt soil horizon overprinted by contact thermal alteration from overlying basalt; sharp lower contact. Interpreted as a bioturbated eolian deposit.</td> <td>5.6</td> <td>18.3</td> </tr>
<tr> <td>8. Gravelly sand (very poorly exposed), light yellowish-brown (10YR 6/4); surface gravel consists of cobbles and pebbles comprising 3–5% quartzite, 25% granite, and 70% basalt clasts (some or all of the latter is sloughed from overlying units); sand is subangular to subrounded, arkosic, and loose with strong effervescence in dilute hydrochloric acid. Composition, stratigraphic position, and weakly consolidated nature suggest correlation to Ancha Formation of Spiegel and Baldwin (1963). Basal contact is generally covered, but presumably disconformable. Gravel shows a slight increase in quartzite content compared to underlying units.</td> <td>6.1</td> <td>20.0</td> </tr>
<tr> <td>9. Silty sand with 10–15% pebbles, pale-brown (10YR 6/3); massive and matrix-supported with 2–3% very thin to thin, planar beds or lenses of basaltic lapilli; pebbles are very fine to medium, subrounded, and consist of ~40% granitic and ~60% basaltic clasts; sand is very fine to very coarse grained, poorly sorted, lithic-rich (~4% feldspar/lithics ratio), and subrounded (lithic grains) to subangular (feldspar grains); abundance of granitic clasts decreases to &lt; 5% in the upper half of the unit; lower contact not well exposed; loose to moderately consolidated. The tops of two prominent buried paleosols are 89 cm and 200 cm below the top.</td> <td>14.0</td> <td>45.9</td> </tr>
<tr> <td>1. Covered; probably same as unit 2 except for one exposed bed of pumiceous sandstone about 11 ± 1 m above the base. This bed is a 30-cm-thick pebbly sand, pale-brown to light yellow-brown (10YR 6/3–4), poorly sorted, and matrix-supported; about 10–15% granitic pebbles; a lithic-rich feldspathic wacke sand with 10–15% pumice grains. Base not exposed.</td> <td>18.0</td> <td>59.1</td> </tr>
<tr> <td></td> <td>3.7</td> <td>12.0</td> </tr>
<tr> <td></td> <td>0.3</td> <td>1.0</td> </tr>
<tr> <td>4. Pebbly sandstone and sandy pebble conglomerate, reddish-yellow (5YR 6/6); lenticular and very thin to medium (2–35 cm) beds; gravel has about 5% cobbles and is clast supported, poorly to moderately sorted, and consists of 1% quartzite, 1–2% amphibolite, 97–98% granitic clasts, and trace dacite(?); sand is generally coarse to very coarse, moderately sorted, and has 5–10% lithic fragments (mostly basalt, pyroxene(?), or olivine), 10–15% quartz, 10–15% plagioclase, and 60–75% potassium feldspar; 3% of sediment is well cemented by 2–15-cm-thick, discontinuous layers of calcium carbonate (which are more common near the bottom of the unit); lower contact is scoured and slightly wavy (~5 cm of relief on lower contact); loose to weakly consolidated. Trace of sediment is yellowish-red (5YR 5/6) clayey sand.</td> <td>27.0</td> <td>88.6</td> </tr>
<tr> <td>5. Pebbly sandstone, very pale brown (10YR 7/4) and slightly (~5%) muddy; bed is lenticular and interbedded with arkosic clasts; clast and matrix supported; pebbles compose 5–10% of sediment, are poorly sorted, and consist of granitic clasts; sand consists of 3% pumice, 30–50% lithic fragments (basalt(?), pyroxene(?), olivine), 10–15% quartz, 20% potassium feldspar, and 15–40% plagioclase grains; moderately consolidated; lower contact slightly wavy and scoured (~5 cm of relief).</td> <td>3.7</td> <td>12.0</td> </tr>
<tr> <td></td> <td>0.4</td> <td>1.3</td> </tr>
<tr> <td>2. Pebbly sandstone and sandy pebble conglomerate, light-brown (7.5YR 6/4); beds are thin to medium (9–14 cm); gravel is clast supported, subangular to subrounded, poorly sorted, and consists of granitic clasts with trace basalt(?), trace amphibolite, and trace quartzite; arkosic sand; lower contact not exposed; loose.</td> <td>5.0</td> <td>16.4</td> </tr>
<tr> <td></td> <td>0.5</td> <td>1.6</td> </tr>
</tbody> </table>
CHAPTER 1 Relevance of heat transfer and heat exchangers for the development of sustainable energy systems B. Sundén1 & L. Wang2 1Division of Heat Transfer, Department of Energy Sciences, Lund University, Lund, Sweden. 2Siemens Industrial Turbines, Finspong, Sweden. Abstract There are many reasons why heat transfer and heat exchangers play a key role in the development of sustainable energy systems as well as in the reduction of emissions and pollutants. In general, all attempts to achieve processes and thermodynamic cycles with high efficiency, low emissions and low costs include heat transfer and heat exchangers to a large extent. It is known that sustainable energy development can be achieved by three parallel approaches: by reducing final energy consumption, by improving overall conversion efficiency and by making use of renewable energy sources. In these three areas, it is important to apply advanced heat transfer and heat exchanger technologies, which are explained extensively in this chapter. In addition, heat transfer and heat exchangers are important in protecting the environment by reducing emissions and pollutants. To illustrate this, several research examples from our group are used to demonstrate why heat transfer and heat exchangers are important in the development of sustainable energy systems. It can be concluded that the attempt to provide efficient, compact and cheap heat transfer methods and heat exchangers is a real challenge for research. To achieve this, both theoretical and experimental investigations must be conducted, and modern techniques must be adopted. 1 Introduction The concept of sustainable development dates back several decades, and it was brought to the international agenda by the World Commission on Environment and Development in 1987 [1]. At the same time, it also provided what has since become the most commonly used definition of sustainable development, describing it as development which meets the needs of the present without compromising the ability of future generations to meet their own needs. This concept has indeed expressed people’s concern about the growing fragility of the earth’s life support systems, i.e. the use of the available resources on our planet. Among the aspects concerned, energy is certainly a very important part, and sustainable energy systems have become a worldwide concern among scientific and political communities as well as among ordinary people. Today, the production of electricity and heat is mainly based on finite primary energy sources. Fossil fuels are combusted in such large amounts that flue gas emissions have affected the environment, e.g. the greenhouse effect and toxic pollutants. A general approach to improve the degree of sustainability of the energy supply lies in the following three aspects: reducing final energy consumption, improving overall conversion efficiency and making use of renewable sources [2]. To reduce final energy consumption is an obvious approach, which requires more energy-efficient process components and systems. The energy source requirement for the same energy output can be brought down by improving overall conversion efficiency. To use renewable energy sources other than fossil fuels, such as hydropower, biomass, wind and solar energy, is an attractive approach because they are sustainable in nature. In all three aspects, it was found that heat transfer and heat exchangers play an important role. 
For instance, increasing the efficiency in thermal processes for heat and power generation requires increasing the highest temperature in the process, and this temperature will have to be increased further in the future. To enable the materials of the equipment, e.g. in gas turbine units, to withstand such high temperatures, cooling is needed. In this chapter, several examples will be illustrated to stress the importance of the relevant heat transfer and heat exchangers in the development of sustainable energy systems. Examples will also be given to illustrate that heat transfer and heat exchanger technologies can bring down the emissions of greenhouse gases and other pollutants. It can be concluded that the attempt to provide efficient, compact and cheap heat transfer methods and heat exchangers is a real challenge for research, and that both theoretical and experimental investigations must be carried out and modern scientific techniques must be adopted to develop sustainable energy systems. 2 Reduction of energy consumption The process industry remains one of the biggest sectors in consuming energy. A typical process, shown in Fig. 1, consists of three parts: chemical plant, utility plant and heat recovery network [3]. The purpose of the chemical plant is to produce products from raw materials with the supply of energy from both the utility plant and the heat recovery network. The utility plant produces power, hot utility and cold utility. The heat recovery network, which consists of many heat exchangers, aims to recover heat from hot streams to heat cold streams. Maximizing heat recovery in the heat recovery network can bring down energy consumption and, consequently, flue gas emissions from the utility plant. Therefore, reduction in energy consumption requires the optimization of the heat recovery network, i.e. heat exchanger networks. Advanced heat exchanger technologies can improve the efficiency of heat exchanger networks. Such technologies include compact heat exchangers, multi-stream heat exchangers, heat transfer enhancement, mini- and micro-heat exchangers, etc. [4]. Using these technologies, current processes can be improved and the final energy demands can be reduced. Conventional heat exchangers in process industries are shell-and-tube heat exchangers. There are several disadvantages in using such units, e.g. low ratio of surface to volume, tendency of severe fouling, use of multi-pass design, low efficiency due to a relatively high pressure drop per unit of heat transfer in the shell side, etc. Most of these disadvantages are due to the relatively large hydraulic diameter. To overcome these disadvantages, compact heat exchangers have been developed. A compact heat exchanger is one which incorporates a heat transfer surface with area density (or compactness) of above 700 m²/m³ on at least one of the fluid sides [5]. The common types include plate heat exchangers (PHEs), plate-fin heat exchangers, tube-fin heat exchangers, etc. By using compact heat exchangers in the process industries, energy consumption can be reduced, in addition to reducing the capital cost and complexity of the plant. Compact heat exchangers usually have a small hydraulic diameter, which results in high heat transfer coefficients. This will reduce the unit size and weight, hence the unit capital cost. In addition, the high heat transfer coefficients permit compact heat exchangers to operate under conditions with small temperature differences. 
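The dependence of the heat transfer coefficient on hydraulic diameter mentioned above follows from the definition of the Nusselt number, Nu = hD_h/k: at a fixed Nu, the coefficient h scales as 1/D_h. The brief Python sketch below only illustrates this scaling; the fixed Nusselt number and the water thermal conductivity are assumed, illustrative values, not data from this chapter.

```python
K_WATER = 0.6    # W/(m K), approximate thermal conductivity of water
NU_FIXED = 10.0  # assumed, fixed Nusselt number used only to show the scaling

def h_from_nusselt(nu, d_h, k=K_WATER):
    """Convective coefficient from the Nusselt number definition Nu = h*D_h/k."""
    return nu * k / d_h

# Conventional tube versus increasingly compact channels:
for d_h_mm in (25.0, 10.0, 5.0, 2.0):
    h = h_from_nusselt(NU_FIXED, d_h_mm / 1000.0)
    print(f"D_h = {d_h_mm:4.0f} mm  ->  h = {h:6.0f} W/(m^2 K)")
# h rises from 240 to 3000 W/(m^2 K) as D_h shrinks from 25 mm to 2 mm.
```

In practice the Nusselt number itself depends on the flow regime and surface geometry, so the actual gain is geometry-dependent, but the inverse scaling with hydraulic diameter is the dominant effect behind smaller units and operation at small temperature differences.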
This is significant in the optimization of heat exchanger networks. (Figure 1: Total processing system.) In the pinch analysis method for the design of heat exchanger networks [6], the minimum temperature difference is the decisive parameter to construct the so-called composite curves, which are shown schematically in Fig. 2. (Figure 2: Composite curves.) By using compact heat exchangers, the minimum temperature difference can be reduced significantly compared to shell-and-tube heat exchangers. This makes the two lines in the composite curves approach very close to each other, which means that the heat recovery is enlarged, and at the same time, the external utility requirements are reduced. Therefore, the utility consumption in the entire plant is reduced. Due to the high heat transfer coefficients and low unit capital costs, the total capital cost for the heat recovery system can still be lower than that using shell-and-tube heat exchangers. A multi-stream heat exchanger is a good option when too many heat exchanger units are required. In the optimization of heat exchanger networks using the pinch technology, a large number of exchangers are often required when the network is designed in terms of two-stream exchangers. This not only increases the capital cost but also increases the complexity of the network. Therefore, it may compromise the optimal solution, and relaxation has to be made. Using multi-stream heat exchangers might be a good way to circumvent this problem, and it offers a number of potential benefits including large savings in capital and installation costs, reduction in physical weight and space, better integration of the process, etc. However, the streams connected to them should not be too far away in physical space, to save piping costs. Common multi-stream heat exchangers include multi-stream plate-fin heat exchangers, multi-stream PHEs, etc. [7]. Heat transfer enhancement for shell-and-tube heat exchangers should also be considered in the optimization of heat exchanger networks. It reduces the capital cost because of the smaller size needed for a given duty. It also reduces the temperature driving force, which reduces the entropy generation and increases the second law efficiency. In addition, heat transfer enhancement enables heat exchangers to operate at a smaller velocity but still achieve the same or even higher heat transfer coefficient. This means that a reduction in pressure drop, corresponding to less power utilization, may be achieved. All these advantages have made heat transfer enhancement technology attractive in heat exchanger applications. For the tube side, different geometries (e.g. low-finned tubes, twisted tubes, grooved tubes) and tube inserts (e.g. twisted tape inserts, wire coil inserts, extended surface inserts) have been developed [8]. For the shell side, improvements have also been made, e.g. helical baffles and twisted tube heat exchangers [4]. More heat transfer and heat exchanger technologies are available to improve the process, and consequently to reduce the final energy consumption. These may include micro- and mini-heat exchangers, integrated chemical reactor heat exchangers, etc. Due to the space constraints in this chapter, these technologies are not explored in detail. However, the possibilities of their application in process industries should not be underestimated. 
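The role of the minimum temperature difference in the composite curves can be made concrete with a small "problem table" sketch. The four streams below are hypothetical (they are not the pulp-mill streams of Section 6.1); the algorithm itself (shift temperatures by DTMIN/2, balance each temperature interval, cascade the surpluses) is the standard pinch procedure.

```python
# Sketch of the pinch problem-table calculation of minimum utilities for a given DTMIN.
# Stream data are invented for illustration; CP is the heat capacity flow rate in kW/K,
# temperatures in deg C.

DTMIN = 10.0
streams = [
    # (kind, T_supply, T_target, CP)
    ("hot",  180.0,  60.0, 3.0),
    ("hot",  150.0,  30.0, 1.5),
    ("cold",  20.0, 135.0, 2.0),
    ("cold",  80.0, 170.0, 4.0),
]

def shifted(kind, T):
    # Hot streams are shifted down and cold streams up by DTMIN/2
    return T - DTMIN / 2 if kind == "hot" else T + DTMIN / 2

# Interval boundaries in the shifted temperature scale
bounds = sorted({shifted(k, T) for k, Ts, Tt, CP in streams for T in (Ts, Tt)}, reverse=True)

heat_cascade, cumulative = [], 0.0
for T_hi, T_lo in zip(bounds[:-1], bounds[1:]):
    net_cp = 0.0
    for kind, Ts, Tt, CP in streams:
        lo, hi = sorted((shifted(kind, Ts), shifted(kind, Tt)))
        if lo <= T_lo and hi >= T_hi:          # stream spans this whole interval
            net_cp += CP if kind == "hot" else -CP
    cumulative += net_cp * (T_hi - T_lo)       # cumulative heat surplus (+) or deficit (-)
    heat_cascade.append(cumulative)

Q_hot_min = max(0.0, -min(heat_cascade))       # minimum hot utility
Q_cold_min = Q_hot_min + heat_cascade[-1]      # minimum cold utility
print(f"Minimum hot utility:  {Q_hot_min:.1f} kW")
print(f"Minimum cold utility: {Q_cold_min:.1f} kW")
```

Reducing DTMIN in this sketch moves the composite curves closer together and lowers both utility targets, which is exactly the benefit that small-temperature-difference compact exchangers offer.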
3 Improved efficiency of energy conversion There are many ways to improve the efficiency of thermal power plants, and heat transfer and heat exchangers play a significant role in all of them. This can be highlighted by considering as an example a power plant that uses gas turbines. The original Brayton cycle for the power plant only needs a compressor, a combustion chamber and a power turbine; this concept can be found in any textbook on thermodynamics, e.g. Cengel and Boles [9]. However, the thermal efficiency is usually very low in such systems, and improvements can be made by employing the concepts of intercooling, recuperation (regeneration) and reheating. Such a flow sheet is illustrated in Fig. 3, and the corresponding thermodynamic cycle is shown in Fig. 4. Two stages of gas compression are provided, with an intercooler, to reduce the power consumption for compression because the gas enters the second compression stage at a lower temperature. Because the compression power required is reduced, the net power output is certainly increased. The concept of recuperation is the utilization of energy in the turbine exhaust gases to heat the air entering the combustion chamber, thus saving a certain amount of fuel in the combustion process. This will certainly increase the overall thermal efficiency as well. In addition, the turbine output can be increased by dividing the expansion into two or more stages, and reheating the gas to the maximum permissible temperature between the stages. Although the power output is improved, the cost of additional fuel will be heavy unless a heat exchanger is also employed. These concepts can also be seen in the thermodynamic cycle in Fig. 4. The cycle 1-2-3-4-1 corresponds to the simple Brayton cycle. The cycle 9-11-12-2 represents the intercooling and the cycle 15-14-13-4 represents the reheating. The cycles 4-7-12-5 and 4-6-2-5 represent recuperation in the case of intercooling and no intercooling, respectively. This concept has already been incorporated in some real gas turbines, e.g. the LMS100 from GE makes use of an intercooler, the Mercury 50 from Solar Turbines makes use of a recuperator and the GT24/26 from Alstom uses sequential combustion. These features significantly increase the efficiency of gas turbines, and a great deal of work has been done on the design of reliable heat exchangers that are operated at higher temperatures. The thermal efficiency and power output of gas turbines will increase with increasing turbine rotor inlet temperature, which corresponds to the temperature at point 3 in Fig. 4. This is the reason why modern advanced gas turbine engines operate at high temperatures (ISO turbine inlet temperature in the range of 1200–1400°C), and the trend is to operate at even higher temperatures. To enable this, in addition to material innovation, cooling technologies must be developed for the combustion chamber, turbine blade, guide vane, etc. Over the years, film cooling, convection cooling and impingement cooling have been developed for both the combustion chamber (see Fig. 5) and the turbine blade (see Fig. 6), and the technique of transpiration cooling is still under development due to engineering difficulties. In addition, more advanced high temperature materials such as single-crystal superalloys and ceramic coatings significantly contribute to high turbine inlet temperature operation. With these advanced cooling technologies, reliable and high-efficiency power plants can be sustained. 
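A back-of-the-envelope air-standard calculation illustrates why recuperation raises the thermal efficiency. The pressure ratio, turbine inlet temperature and recuperator effectiveness below are assumed round numbers, and the ideal-gas, constant-cp treatment is a simplification of the real cycles discussed above.

```python
# Air-standard Brayton cycle with and without a recuperator of effectiveness eps.
# Assumed data (illustrative only): T1 = 300 K, T3 = 1400 K, pressure ratio 10, gamma = 1.4.
gamma, cp = 1.4, 1005.0          # J/(kg K), air treated as an ideal gas
T1, T3, rp, eps = 300.0, 1400.0, 10.0, 0.85

T2 = T1 * rp ** ((gamma - 1) / gamma)        # compressor outlet (isentropic)
T4 = T3 / rp ** ((gamma - 1) / gamma)        # turbine outlet (isentropic)
T5 = T2 + eps * (T4 - T2)                    # air temperature after the recuperator

w_net = cp * ((T3 - T4) - (T2 - T1))         # specific net work, J/kg
for label, q_in in [("simple cycle", cp * (T3 - T2)), ("with recuperator", cp * (T3 - T5))]:
    print(f"{label:17s}: thermal efficiency = {w_net / q_in:.3f}")
```

With these assumed numbers the recuperator raises the efficiency from roughly 0.48 to about 0.57, because part of the heat input is supplied by the exhaust gas instead of by fuel; the exchanger doing this must survive the turbine exhaust temperature, which is the design challenge noted above.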
The above blade cooling technologies are for air-cooled gas turbines, but a new turbine cooling concept is available, i.e. steam-cooled gas turbines. Steam provides several benefits over air as a cooling medium. First, steam provides higher heat transfer characteristics because its heat capacity is higher than that of air. Second, the use of steam as a cooling medium reduces the use of cooling air, which means that more air is available for the combustion process, which contributes to improved emissions performance. Third, a reduction in cooling air results in less temperature dilution of the hot gas caused by mixing with the cooling air. This increases the turbine inlet temperature, which results in more power availability. Finally, no ejection of cooling air to the main gas flow means that the aerodynamic loss is minimized. With this technology, the efficiency of the gas turbine is greatly enhanced; the best example of this is GE’s H class gas turbine, which is the first gas turbine to achieve 60% efficiency in combined cycle power plants. However, to design such turbines, the heat transfer characteristics of steam as a cooling medium must be thoroughly understood, which requires extensive research. For the gas side, the use of different fuels can lead to a significant change in properties for the gas in the turbine part. The integrated gasification combined cycle application operates on hydrogen-containing syngas, and consequently the syngas will increase the amount of heat transferred to rotating and stationary airfoils due to increased moisture content and mass flow. (Figure 5: Cooling concepts of combustion chamber: (a) Film cooling; (b) Transpiration cooling; (c) Enhanced convective cooling; (d) Impingement cooling. Figure 6: Cooling concepts of gas turbine blade: (a) convection cooling; (b) impingement cooling; (c) film cooling; (d) transpiration cooling.) Thus, research will provide a better understanding of heat transfer mechanisms in a syngas environment. Another way to improve energy conversion efficiency is to use combined cycles incorporating steam turbines, fuel cells, etc. A combined cycle with steam turbines is a relatively old but still very effective approach, and heat transfer and heat exchangers play a significant role in this approach without any doubt. Here, a brief discussion is given of the heat transfer issues associated with fuel cells. Figure 7 shows a typical configuration for a combined cycle using both a gas turbine and a fuel cell. As is well known, fuel cells can convert the chemical energy stored in the fuel into electrical and thermal energy through electrochemical processes. Because these processes are not subject to the Carnot cycle limitation, high electrical efficiencies can be obtained. Typical fuel cell types include phosphoric acid fuel cells, proton exchange membrane fuel cells, solid oxide fuel cells and molten carbonate fuel cells [10]. The operation principle indicates that heat and mass transfer play an important role in fuel cells [10]. One typical fuel cell construction is the flat plate design for solid oxide fuel cells, shown in Fig. 8. As can be seen, the fuel in the fuel ducts exchanges both heat and mass with the anode on the top wall, and the air in the air ducts exchanges both heat and mass with the cathode on the bottom wall. In addition, two-phase flows exist in the fuel ducts after a part of the fuel is consumed. 
Therefore, the conditions of fluid flow and heat transfer in the air and fuel ducts have great effects on the performance of fuel cells and consequently on the entire power cycle. Most of the current designs are based on constant values of the Nusselt number and friction factor. Such rough estimations cannot meet future developments, and considerable research efforts must be devoted to this complex heat and mass transfer problem. Figure 7: Reference fuel cell and gas turbine system layout [11]. The above analysis demonstrates that high efficiency of power conversion can be reached with the help of relevant heat transfer and heat exchanger technologies. Therefore, attempts to provide compact, efficient heat transfer methods and heat exchangers, while at the same time allowing a cheap and relatively simple manufacturing technique, are real challenges for research. 4 Use of renewable energy Hydropower, biomass, wind and solar energy are regarded as the most important renewable and sustainable energy sources. Hydropower is, of course, dependent on the earth’s contour, and it is not a substantial option for countries with a flat land surface. Biomass appears to be an attractive option for many countries, and technologies for the conversion of biomass into electricity and heat are highly similar to the technologies for other solid fossil fuels. Wind and solar energies are strongly fluctuating sources, but they are very clean, with no pollutant emissions, and have received great attention. In these renewable energy systems, heat transfer and heat exchangers play an important role as in those systems described earlier. Consider now a simple solar energy system as an example. Figure 9 is a schematic view of a typical domestic hot water heating system designed for residential applications. When there is sun, the photovoltaic (PV) module produces power, which runs a small circulating pump. Antifreeze is pumped through the solar collectors and is heated. The fluid then returns to a reservoir in the heat exchanger module. Water coils in the reservoir absorb the heat from the solar fluid. The domestic water flows through these heat exchanger coils by natural thermosiphon action. As the water is heated, it rises and returns to the top of the tank, drawing cold water from the bottom of the tank into the heat exchanger. It should be pointed out that no external heat exchanger as shown in Fig. 9 was used historically. Instead, the heat exchangers were coils of copper pipes located at the bottom of the solar storage tank. The current design shown in Fig. 9 has a number of advantages. First, the system performance is enhanced. External heat exchangers can be configured so that the potable water circulates by natural convection (i.e. it thermosiphons), which means that excellent temperature stratification can be achieved in the storage tank. With the hot water remaining at the top of the tank, usable hot water is available more rapidly with an external heat exchanger. Second, the thermodynamic efficiency is improved with the external heat exchanger configuration. The rate of heat transfer is directly proportional to the difference in temperature between the water being heated and the antifreeze from the solar collectors. With the external heat exchanger configuration, the heat exchanger coil is always surrounded by the coldest water, which means that the thermal efficiency is greatly improved. Third, low cost can be achieved due to the long lifetime of the external heat exchanger compared to the solar tank. 
The external heat exchanger can be saved when the solar tank develops a leak, and thus a cost saving is achieved. However, the heat transfer mechanism involved in the external heat exchanger is highly complex. Both forced convection and natural convection have important impacts. Shell-and-tube heat exchangers may serve well in this condition, but compact heat exchangers (such as PHEs) may also offer superior operating characteristics. Because this practical application is still in its infancy, more research is expected in the future. In addition, the solar collector using the PV module is a special heat exchanger. On one side of the surface, solar energy (radiant energy) is absorbed. This energy is transferred to the coolant on the second side. This is a quite complex heat transfer problem, not only because it involves both radiant and convection heat transfer but also because it is a time-dependent issue. The solar energy varies with time and location, and this must be taken into account in the use of this renewable energy. The importance of heat transfer and heat exchangers has been illustrated for the solar energy system. Similar conclusions can be reached when dealing with the other types of renewable energy systems. However, they are not fully explored here due to space constraints. 5 Reduction of emission and pollutant Heat transfer and heat exchangers are also important in reducing emissions and pollutants. As illustrated earlier, they play an important role in the development of sustainable energy systems. The reduction of final energy consumption means less prime energy (e.g. fossil fuels) consumption, which results in an overall reduction in emissions and pollutants. Improved efficiency of power plants certainly also reduces the primary energy consumption as well as the consequent emissions. Alternative fuels like biofuels (including biomass and waste utilization) are said to be neutral in terms of CO$_2$. The other renewable energy sources – solar, hydropower and wind – are simply clean and produce no emissions at all. In addition, by considering the pressure drop and associated pressure losses (work loss) in the heat transfer processes and attempting to reduce them, the consumption of electricity will be decreased, which is also beneficial. Therefore, heat transfer and heat exchangers are important for the protection of the environment, with regard to their role in the development of sustainable energy systems. The above influences on emissions and pollutants are obviously indirect effects. However, heat transfer and heat exchangers can also have a direct effect on reducing emissions and pollutants in many situations. One example is their presence in internal combustion engines. In diesel engines, exhaust gas recirculation (EGR) has been used for some time because it has been found to be an efficient method to reduce NO$_x$. However, particle emissions are increased and the engine performance is reduced. It has been recognized that if the exhaust gas is cooled in a heat exchanger, the above-mentioned problems can be overcome or at least partially avoided. In addition, the NO$_x$ emission will be further reduced, as shown in Fig. 10. In this situation, several factors must be considered. First, due to the limited space in automobiles, an EGR cooler must be compact and lightweight. Second, because the cooling water is taken from the total engine cooling water, the amount of cooling water for the EGR cooler is limited and must be kept as small as possible. 
This means that the EGR cooler must have high thermal efficiency. Third, the EGR cooler is always subject to unsteady or oscillatory operation and is also severely affected by fouling, which means that the operating reliability and lifetime are extremely important in selecting the heat exchanger type. Therefore, a compact heat exchanger (e.g. a brazed plate heat exchanger) may be a better option, although shell-and-tube heat exchangers are currently often used in automobiles. To design an EGR cooler giving very reliable performance and durability, further research must be carried out. Another example is the combustion chamber in gas turbine systems. It is well known that the production of NO$_x$ is related to high flame temperatures. One way to reduce the flame temperature is to use high air to fuel ratios [13]. This means that much more compressor air is needed for combustion and consequently less air is available for the cooling of combustion chambers and turbine blades. However, low temperature zones lead to unburned hydrocarbons. Thus, the emission control and the cooling system are coupled and need careful attention. This is further evidence that heat transfer design has a direct effect on reducing emissions and pollutants. 6 Some examples of recent research 6.1 Case study of a heat exchanger network design using the pinch technology A heat recovery system at a Swedish pulp mill has been investigated. At the mill, there is a large amount of hot water and thin liquor coming from the washing and bleaching process. These hot streams exchange heat with some cold streams, which will be used in the digesting plant. Since the hot streams labelled 2 and 3 contain a small content of fibres and some other substances, fouling may occur quite easily. Therefore, the process is very appropriate for PHEs because of their easy-cleaning characteristic. Specially designed PHEs, called wide gap PHEs, are used for streams 2 and 3. The network investigated contains three hot streams and five cold streams. The existing network is presented in grid form in Fig. 11. All the existing heat exchangers are PHEs and the total heat transfer area is 1436.5 m². The heat capacity flow rate, supply and target temperatures, physical properties and allowable pressure drop of each stream are given in Table 1. It should be pointed out that the allowable pressure drops are treated as the pressure drops available to promote heat transfer. 6.1.1 Grassroots design The composite curves are plotted in Fig. 12 for DTMIN = 6°C. The optimal hot and cold utilities as well as the estimated total heat transfer area are calculated. The optimal hot and cold utility requirements are 1788 and 6800 kW, respectively. By comparing these figures with those in Fig. 11, it is obvious that the hot and cold utility consumption in the existing network could be reduced by 44.5% and 17.4%, respectively. For the total heat transfer area, the estimation is carried out based on the proposed method. The total heat transfer area is a function of both DTMIN and the corrugation angle. It is obvious that there is an optimal corrugation angle corresponding to the minimum area once DTMIN is specified. In addition, it is easy to understand that the heat transfer area becomes larger for lower DTMIN for the same corrugation angle. For DTMIN = 6°C, the minimum total heat transfer area is 1095 m², and the optimal corrugation angle is about 60°. This value is much lower than the existing exchanger area. 
This is probably due to the fact that the existing exchangers do not fully use the allowable pressure drop. In addition, a small part of the exchangers have lower corrugation angles, which also increases the total heat transfer area. In the calculation, the fouling resistances for stream 1, streams 2 and 3, and the rest are taken as 0.0001, 0.0003 and 0.00008 m²·K/W, respectively. The hydraulic diameters for the wide gap PHEs and the normal PHEs are 0.022 and 0.008 m, respectively. The variation in DTMIN causes variations in utility consumption, heat transfer area and most likely in the structure of the network. The variation in the corrugation angle causes a variation in heat transfer area. Therefore, the optimal DTMIN and corrugation angle should be determined before any network generation. The annual costs for energy and exchanger area can be estimated by the following relationships: \[ \text{Capital cost} = 2700 \cdot \text{Area}^{0.85} \] \[ \text{Energy cost} = 1400 \cdot \text{Hot utility} + 400 \cdot \text{Cold utility} \] The estimation for the capital cost is based on the experience of a PHE manufacturer and the estimation for the energy cost is provided by the staff at the mill. The hot utility is live steam, and the cold utility is normal cold water. The units for cost, area and utility are Swedish Crowns (SEK), m² and kW, respectively. Now, it is possible to plot a graph of the total annualized cost versus DTMIN and the corrugation angle. The plot is given in Fig. 14, and the optimal DTMIN and corrugation angle are close to 1°C and 62°, respectively. The optimal DTMIN is quite small because the energy cost is the dominant part of the total cost. The pinch design method suggested by Linnhoff et al. [6] is employed to design the network. DTMIN is taken as 6°C after considering the minimum temperature difference achievable with PHEs. The optimal corrugation angle for this value of DTMIN is close to 60°. The final design is shown in Fig. 15. After the detailed calculation is carried out, the total heat transfer area is 1247 m². The deviation between predicted and calculated values is about 12%. Considering the fact that vertical alignment is assumed in the prediction while it is actually not satisfied in the network synthesis, this deviation is acceptable for the pre-optimization. Hence, it demonstrates that the suggested method is suitable for the optimization of heat exchanger networks using PHEs. As for the potential use of multi-stream PHEs, heat exchangers 2 and 4, 3 and 5, and 6 and 7 are likely candidates to be combined as three-stream PHEs. By doing so, the capital cost and installation cost are greatly reduced. The process is also made more integrated. The operability concern can be addressed in several ways, as suggested above. Although the detailed calculation is not carried out here, the potential for the use of multi-stream PHEs is obvious. 6.1.2 Retrofit design The reason for the excessive consumption of both hot and cold utility is that there is heat transfer (exchanger 7 in Fig. 11) across the pinch point. To reduce the utility cost to the optimal level, this cross-pinch heat transfer must be eliminated. The suggested retrofit design is shown in Fig. 16. As can be seen, exchanger 7 has been moved to another place and some of the plates are removed. In addition, two new exchangers 8 and 9 are added. By doing so, both the hot and cold utility consumption are reduced. The two new heat exchangers 8 and 9 have heat transfer areas of 122 and 82 m$^2$, respectively. 
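As a rough cross-check (not part of the original case study), the capital-cost correlation quoted above can be applied to the two new retrofit exchangers; whether the correlation includes installation costs is an assumption here, so the result is indicative only.

```python
# Indicative capital cost of the two new retrofit exchangers using the quoted correlation
# Capital cost = 2700 * Area^0.85 (SEK, area in m^2). Installation coverage is assumed.

areas_m2 = [122.0, 82.0]                               # exchangers 8 and 9
cost_sek = sum(2700.0 * A ** 0.85 for A in areas_m2)   # SEK
print(f"Estimated capital cost: {cost_sek/1e6:.2f} MSEK")   # roughly 0.27 MSEK
```

The estimate is of the same order as the investment figure quoted next for the retrofit, which suggests the cost correlation and the stated investment are mutually consistent.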
The investment for the new exchangers is 0.248 MSEK, and the payback period is only about 12.5 months. The payback period is very short because the running cost is much higher than the capital cost in this case. It also demonstrates why process integration is so important in industries where the energy cost is high. However, the utility consumption is still not at the minimum value because there is still a small amount of heat transferred across the pinch in exchanger 9. This can be eliminated if the low-temperature end of stream 8 is heated by stream 2. However, the energy reduction is small, whereas the cost of the exchanger, connecting pipes and other items is quite high. The structure of the network also becomes more complicated, which is not good for operability. Hence, the payback period would be rather long. 6.2 High temperature heat exchangers High temperature heat exchanger technology has become important for improving the performance of power generation. There is a need to develop various types of high temperature heat exchangers for different applications such as hydrogen production, the reforming process of solid oxide fuel cells, the generation of high temperature gas and low emission power plants. In this section, monolithic heat exchangers are considered and some specific problems are addressed. 6.2.1 Monolithic exchangers 6.2.1.1 Ceramic monolith structures Ceramic monolith structures are used in industry today and they are produced in large numbers by using the extrusion technique. They are unibody structures composed of interconnected repeating cells or channels (Figs. 17 and 18). They are increasingly under development and evaluation for many new reactor applications [14, 15], e.g. chemical process and refining industries, catalytic combustion, low emission power plants. However, monoliths are mainly used where only one fluid flows through all the channels. An example is the monolithic exhaust structure in automotive applications. In endothermic and slow reactions such as steam reforming of hydrocarbons, large amounts of heat are needed to maintain reaction rates. Compared with catalysts deposited on tubes, the use of monoliths would be more efficient, leading to greater reaction rates and a smaller reactor [16]. Additionally, there would be a great improvement in mechanical integrity. In particular, it would be advantageous if two fluids in monolithic channels could exchange heat and/or mass. The reason why monoliths are not widely used in these applications is the complex technique required for feeding and distributing the two fluids into and out of the channels. Figure 16: Grid structure of the retrofit design. Selimovic and Sunden [17] focused on the compact ceramic heat exchanger where two fluids are fed and distributed into individual channels in a multi-channel structure. Their study shows three different approaches of modelling: analytical, experimental and numerical modelling. The exchanger is of monolithic shape where heat and mass are transferred in rectangular channels. Usually, for the pressure drop calculations of standard channel shapes, different available correlations can be applied. However, when these channels are attached to a manifold and connected to other components, complex geometries are involved and then modelling with correlation parameters may be unsuccessful. Similar to PHEs, the pressure drop, as well as the thermal performance, depends on the distribution of the fluid. Therefore, it is important to investigate how good the flow distribution is from the main port pipe into the channels. 
The analytical investigation made here includes both U- and Z-type configurations. The monolithic ‘honeycomb’ structure has been manifolded by two-stage manifolds, where either U-type or Z-type manifolds can be used to distribute the flow rate uniformly through each branch. This stage manifold can be compared to the manifolding of PHEs. The main difference compared to PHEs is that each branch will further divide the flow to the monolithic structure with a specified channel arrangement. This stage of manifolding is called the I-type manifold here. A more detailed picture of the I-type manifold can be seen in Fig. 19. (Figure 19: I-type manifold assembly: 1 and 2 – manifold top, 3 – dividing plates, 4 – monolithic channels, 5 – collecting plate, 6 and 7 – manifold bottom, 8 – checkerboard channel arrangement.) Concerning the monolithic channels, two different gas distributions (channel arrangements) are investigated: the checkerboard and the linear arrangements (Figs. 18 and 19). The important physical characteristics are then the size of the channel through which the gaseous reactants and products traverse, the wall thickness, and the total compactness of the monolith. Rafidi and Blasiak [18] developed a two-dimensional simulation model to find the temperature distribution of the solid storage material and the flowing gases, as well as other thermal and flow parameters, for a heat regenerator, and compared the computed results with experiments. Because of the geometric symmetry of the honeycomb structure, the mathematical analysis was made on one honeycomb cell, or matrix, that formed a small part of the regenerator cross-section along the flow path. The regenerator is composed of two different materials along the heat exchanger: one is 0.2 m long alumina and the other is 0.1 m long cordierite. Figure 19 shows the dimensions of the heat regenerator used in a twin-type 100 kW HiTAC (high temperature air combustion) regenerative burner. The regenerator dimensions are 150 × 150 × 300 mm$^3$. The cell density is 100 cells/in$^2$ and hence the specific heat transfer area is 4200 m$^2$/m$^3$. All flue gases generated by combustion are sucked back through the burners and pass through the regenerators. The honeycomb compact heat regenerator has a relatively high effectiveness of about 88% and recovers 72% of the energy contained in the combustion flue gases at nominal operating conditions. Consequently, the energy storage and the pressure drop are calculated and the thermal performance of the honeycomb heat regenerator is evaluated at different switching times and loadings. The model takes into account the thermal conductivity parallel and perpendicular to the flow direction of the solid and the flowing gases. It considers the variation of all thermal properties of the solid materials and gases with temperature. Moreover, the radiation from the combustion flue gases to the storage materials was considered in the analysis. 6.3 Heat load prediction in combustors Different phenomena such as the complex flow field and heat release by combustion are involved in the heat transfer process in combustion chambers. This section concerns the prediction of heat load and wall temperature in a gas turbine combustor by taking different phenomena into account. Two-dimensional axisymmetric models were used to model the flow field and combustion in a premixed combustor with two different cooling schemes. The $k$–$\epsilon$ turbulence model and the Eddy Dissipation Concept were used for modelling turbulent flow and combustion, respectively. 
In the modelling of heat transfer through the walls, a conjugate heat transfer formulation was applied. The temperatures calculated by the models were compared with experimental data. The results showed that in the mid part of the liner, the prediction of the wall temperature is good, although worse agreement is found in other parts. In addition, radiative heat transfer has been studied. The results showed that radiative heat transfer in the simple and ribbed duct cooling schemes can increase the average inner wall temperature by up to 33 and 40 K, respectively. Here computational fluid dynamics (CFD) simulations are used to, first, predict the temperature and heat transfer rate to the combustor wall (called the liner wall hereafter) by using a conjugate heat transfer method and, second, study the quantities of convective and radiative heat transfer in this type of combustor. The analysis is carried out on a VT4400 LPP combustor developed by Volvo Aero Corporation. A slightly simplified geometry is used to simulate this combustor and some experimental data of inner and outer liner wall temperatures were provided to validate the simulation results. 6.3.1 Combustor description and its modelling The VT4400 LPP is a lean premixed combustor, which is fuelled by natural gas. In the case of the measured data, the equivalence ratio has been set to 0.59. The supplied air from the compressor is divided into two parts. The primary air, after passing through a cooling duct, enters the swirl system, mixes with the natural gas and is then burnt. The height of the cooling duct is 8 mm. The primary air flow rate and swirl number are about 1.57 kg/s and 0.6, respectively. By using the geometrical data and the definition of the swirl number (see eqn (1)), the axial and tangential velocities at the inlet of the combustor can be set. \[ S_n = \frac{\int_0^{R} U_z U_\theta \, r^2 \, \mathrm{d}r}{R \int_0^{R} U_z^2 \, r \, \mathrm{d}r} \tag{1} \] where $R$ is the swirler outer radius and $U_z$ and $U_\theta$ are the axial and tangential velocity components, respectively. At the second inlet, the secondary air is mixed with the burnt gases before the entrance to the turbine. In the experiments, the combustor was equipped with two different cooling schemes: a simple duct and a ribbed duct with a thermal barrier coating (TBC) on the inner side of the liner wall. The thickness of the liner wall is 1.5 mm and its thermal conductivity is about 25 W/m K. In the second scheme, a TBC layer with a thermal conductivity of 1.3 W/m K has been used. The inlet temperature from the compressor is 662 K and, according to the experiment, this temperature is increased by 48 K at the outlet of the channel. The described combustor was modelled by a two-dimensional geometry (see Fig. 20). The model was meshed by two-dimensional (three-dimensional with one cell thickness) multi-block axisymmetric grids. A grid dependence study was carried out for the simple cooling duct case and 42,580 cells showed satisfactory accuracy. Then this meshed model was adapted for the ribbed duct and TBC case and the number of cells reached 70,090. To capture the temperature distribution, the liner has been divided into 10 cells across its thickness. For boundary conditions, inlet and pressure boundaries were used for the inlet and outlet, while periodic and symmetry boundaries were used for the $r$–$z$ faces in the liner and cooling duct, respectively. 6.3.2 Governing equations and solution methods To model the flow field, the continuity and Navier–Stokes equations were solved. 
The turbulence was modelled by solving the transport equations for the turbulent kinetic energy and the turbulent dissipation rate, which are implemented in the standard \( k-\varepsilon \) model. The summarized governing equations are listed in Table 2. (Figure 20: The combustor model for the case of the ribbed duct.) The premixed turbulent combustion was modelled by a one-step reaction for burning methane. The reaction rate was approximated by using the Eddy Dissipation Concept [19] and implemented in the source term of the species transport equation. According to this model, the mean reaction rate is controlled by the turbulent mixing rate and by the locally deficient reactant: $$\bar{R}_{\mathrm{fu}} = -A\,\bar{\rho}\,\frac{\varepsilon}{k}\,\min\!\left(\bar{Y}_{\mathrm{fu}},\ \frac{\bar{Y}_{\mathrm{ox}}}{s}\right) \qquad (8)$$ where $\bar{R}_{\mathrm{fu}}$ is the mean fuel reaction rate, $A$ a model constant, $\bar{\rho}$ the mean density, $k/\varepsilon$ the turbulent mixing time scale, $\bar{Y}_{\mathrm{fu}}$ and $\bar{Y}_{\mathrm{ox}}$ the mean fuel and oxidizer mass fractions, and $s$ the stoichiometric oxidizer-to-fuel mass ratio. The solution domain was discretized by the finite volume method and the STAR-CD CFD code was used for all computational processes. The convective terms of the transport equations were handled by a second order scheme (MARS) [20] and the SIMPLE algorithm [21] was used for the pressure–velocity coupling. Convergence criteria for the solution of all equations were set to $1.0 \times 10^{-4}$ and, in addition, the temperature data at some boundaries were monitored. 6.3.2.1 Heat transfer Heat transfer through the liner wall was modelled by a conjugate heat transfer formulation, whereas the other walls were modelled by their thermal resistance and environment temperature. Convective heat transfer on both the hot and cold sides of the liner was modelled by the standard wall function relations [22], which are valid in the log-law region of the turbulent boundary layer. For this reason, the values of \( y^+ \) in the near-wall regions were kept in the range of 30–40. Also, to mitigate the effect of recirculation zones in the case of the ribbed duct channel, a non-equilibrium wall function has been imposed, which can take the effect of the pressure gradient into account. In the radiative heat transfer part, the radiative transfer equation (RTE) [22] in participating media has been solved by the S4 discrete ordinates method with 24 directions. This selection gives satisfactory accuracy [23] for absorption and emission and keeps the computational effort as low as possible. Also, because of the low temperature in the cooling duct, the radiative heat transfer was only considered on the inside of the liner. Combustion of methane generates \( \text{CO}_2 \) and \( \text{H}_2\text{O} \), and these are the most important participating gases in the absorption and emission of thermal radiation. To take their effects into account, the spectral line weighted sum of grey gases (SLW) method [24] has been used, which is an accurate model. For the calculation by the SLW method, five grey gases, optimised by a conjugate gradient method, were selected. For each grey gas, the blackbody weight is calculated and then the grey gas absorption coefficient and its blackbody weight are used in solving the RTE. For the radiative boundary conditions, the liner wall emissivities have been set to 0.7 and updated wall temperatures are used during the solution process. 6.3.3 Results and discussion 6.3.3.1 Flow and temperature fields The axial velocity and temperature fields inside the liner and before mixing with the secondary air are shown in Fig. 21. 
As can be seen, the velocity changes sharply close to the swirler, with negative values near the liner wall. The sharp variation is mitigated along the length of the liner. At \( z/R_0 = 1.5 \) (about 74 mm along the liner), the velocity profile starts to change direction near the wall and therefore a stagnation point is formed. With a further increase in \( z/R_0 \), the velocity is stabilized in the new direction and its profile in the radial direction is flattened. Similar to the velocity field, the temperature field varies sharply close to the swirler. The main reason for the variation is the combustion of fuel in this region and the negative velocities at the centre and wall regions of the liner. By increasing \( z/R_0 \) up to 1, the temperature does not change considerably near the wall regions; however, with a further increase, the temperature decreases. This can be due to the fact that the combustion process has been completed and convective heat transfer to the liner wall cools the gas. 6.3.3.2 Simple cooling duct Temperature distributions on the inside and outside of the liner wall are shown in Fig. 22. As can be seen, the peak of the wall temperature is predicted at a distance of 74 mm from the entrance. This is also the position of zero axial velocity (see Fig. 21). Radiation has increased the wall temperature both on the inner and on the outer sides, and its effect is stronger in the low temperature zones. Because of radiation, the average temperature has increased by about 33 K on the inner wall. In addition, a comparison of predictions and experimental data shows that at the beginning of the liner the agreement is better, whereas with increasing liner length the difference between predictions and experimental data becomes larger. Total heat loads to the wall with and without radiation are shown in Fig. 23. It can be clearly seen that the increase in radiative heat flux near the entrance of the cooling duct is large and that a low radiative heat flux occurs at the hot region of the wall. In the mid part, the heat load with radiation is almost constant, about 450 kW/m$^2$, which means that the temperature difference between the two sides of the wall is almost constant over a short distance. The average heat load without radiation is about 388 kW/m$^2$ and radiation increases this value by 8%. Because of the small wall thickness, the heat load is very sensitive to the temperature difference between the two sides of the liner wall, and it is noted that for a 1 K difference, the heat load changes by about 17 kW/m$^2$. 6.3.3.3 Ribbed cooling duct and TBC In Fig. 24, the temperature distributions on the inner and outer liner walls for the case of a ribbed duct with TBC are shown. In this case, the influence of radiation is higher than that for the simple duct case. The average temperature on the inner wall has increased by 40 K. The agreement between predictions and experimental data at the entrance of the liner is somewhat poor, but it is obvious that radiation is important. At the middle and end parts, the predictions for the outer face of the wall are very good, but the values for the inner face have been overpredicted. This might be due to errors in the experimental data, because the outer wall at the same position is well predicted. In that part of the liner, the flow and temperature near the wall are stabilized, so the sharp decrease in the inner wall temperature is doubtful. The predicted heat load is shown in Fig. 25. Similar to the simple cooling duct case, the radiative heat load is stronger near the entrance of the cooling duct. 
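The quoted sensitivity of the heat load to the wall temperature difference can be checked with a one-line conduction estimate. The sketch below simply evaluates q = (k/t)·ΔT using the liner data given earlier (1.5 mm wall, about 25 W/m K); the TBC thickness is an assumed value added only to illustrate the series-resistance effect.

```python
# Conduction check of the stated sensitivity (~17 kW/m^2 per K) across the bare liner,
# and the corresponding value when the 1.3 W/(m K) TBC layer is added in series.
# The TBC thickness below is assumed; it is not given in the text.

k_liner, t_liner = 25.0, 1.5e-3       # W/(m K), m  (from the combustor description)
k_tbc, t_tbc = 1.3, 0.3e-3            # W/(m K), assumed 0.3 mm coating thickness

q_per_K_bare = 1.0 / (t_liner / k_liner)                      # W/m^2 per K
q_per_K_tbc = 1.0 / (t_liner / k_liner + t_tbc / k_tbc)       # W/m^2 per K
print(f"Bare liner:  {q_per_K_bare/1e3:.1f} kW/m^2 per K")    # ~16.7, consistent with ~17
print(f"Liner + TBC: {q_per_K_tbc/1e3:.1f} kW/m^2 per K")
```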
In this case, the average convective heat load is about 393 kW/m$^2$ and the radiative heat transfer increases the heat load by about 7%. In summary, the wall temperatures and heat loads in a premixed combustor have been predicted. The results showed that in the mid part of the liner the prediction of the wall temperature is good, but poorer agreement exists in other parts. In addition, radiative heat transfer has been included in the study. The results showed that radiative heat transfer for the simple and ribbed duct cooling schemes can increase the average inner wall temperature by 33 and 40 K, respectively. As an extension of this study, the accuracy of the model in the prediction of wall temperature and heat loads to the walls can be investigated by using different wall treatments, such as a two-layer wall function approach, or by applying a low Reynolds number model. 6.4 CFD methods in analysis of thermal problems CFD can be applied to heat exchangers in quite different ways. In the first way, the entire heat exchanger or the heat transferring surface is modelled. This can be done by using large scale or relatively coarse computational meshes or by applying a local averaging or porous medium approach. For the latter case, volume porosities, surface permeabilities and flow and thermal resistances have to be introduced. The porous medium approach was first introduced by Patankar and Spalding [25] for shell-and-tube heat exchangers and was later followed by many others. Another way is to identify modules or groups of modules, which repeat themselves in a periodic or cyclic manner in the main flow direction. This will enable accurate calculations for the modules, but the entire heat exchanger, including manifolds and distribution areas, is not included. The idea of streamwise periodic flow and heat transfer was introduced by Patankar et al. [26]. The finite volume method is a popular method, particularly for convective flow and heat transfer. It is also applied in several commercial CFD codes. Further details can be found in [21, 27]. In heat transfer equipment like heat exchangers, both laminar and turbulent flows are of interest. While laminar convective flow and heat transfer can be simulated directly, turbulent flow and heat transfer normally require additional modelling approaches. In turbulence modelling, the goal is to account for the relevant physics by using as simple a mathematical model as possible. This section gives a brief introduction to the modelling of turbulent flows. The instantaneous mass conservation, momentum and energy equations form a closed set for the five unknowns \( u, v, w, p \) and \( T \). However, the computing requirements, in terms of resolution in space and time for direct solution of the time dependent equations of fully turbulent flows at high Reynolds numbers (so-called direct numerical simulation (DNS) calculations), are enormous and major developments in computer hardware are needed. Thus, DNS is viewed more as a research tool for relatively simple flows at moderate Reynolds numbers. In the meantime, practicing thermal engineers need computational procedures supplying information about the turbulent processes, while avoiding the need to predict the effect of every eddy in the flow. This calls for information about the time-averaged properties of the flow and temperature fields (e.g. mean velocities, mean stresses, mean temperature). Commonly, a time-averaging operation, called Reynolds decomposition, is carried out. 
Every variable is then written as the sum of a time-averaged value and a superimposed fluctuating value. In the governing equations, additional unknowns appear, six for the momentum equations and three for the temperature field equation. The additional terms in the differential equations are called turbulent stresses and turbulent heat fluxes, respectively. The task of turbulence modelling is to provide procedures to predict the additional unknowns, i.e. the turbulent stresses and turbulent heat fluxes, with sufficient generality and accuracy. Methods based on the Reynolds-averaged equations are commonly referred to as Reynolds-averaged Navier–Stokes (RANS) methods. 6.4.1 Types of models The most common turbulence models for industrial applications are classified as zero-equation models, one-equation models, two-equation models, Reynolds stress models, algebraic stress models and large eddy simulations (LES). The first three models in this list account for the turbulent stresses and heat fluxes by introducing a turbulent viscosity (eddy viscosity) and a turbulent diffusivity (eddy diffusivity). Linear and non-linear models exist [28–30]. The eddy viscosity is usually obtained from certain parameters representing the fluctuating motion. In two-equation models, these parameters are determined by solving two additional differential equations. However, one should remember that these equations are not exact, but approximate, and involve several adjustable constants. Models using the eddy viscosity and eddy diffusivity approach are isotropic in nature and cannot evaluate non-isotropic effects. Various modifications and alternative modelling concepts have been proposed. Examples of models of this category are the $k$–$\varepsilon$ and $k$–$\omega$ models in high or low Reynolds number versions as well as in linear and non-linear versions. A recently popular model is the so-called V2F model introduced by Durbin [31]. It extends the use of the $k$–$\varepsilon$ model by incorporating near-wall turbulence anisotropy and non-local pressure–strain effects, while retaining a linear eddy viscosity assumption. Two additional transport equations are solved, namely one for the velocity fluctuation normal to walls and another for a global relaxation factor. In Reynolds stress equation models, differential equations for the turbulent stresses (Reynolds stresses) are solved and directional effects are naturally accounted for. Six modelled equations (i.e. not exact equations) for the turbulent stress transport are solved together with a model equation for the turbulent scalar dissipation rate $\varepsilon$. Reynolds stress equation models are quite complex and require large computing efforts and, for this reason, are not widely used for industrial flow and heat transfer applications. Algebraic stress models (ASM) and explicit ones such as EASM present an economical way to account for the anisotropy of the turbulent stresses without solving the Reynolds stress transport equations. One idea is that the convective and diffusive terms are modelled or even neglected, and then the Reynolds stress equations reduce to a set of algebraic equations. For the calculation of the turbulent heat fluxes, a simple eddy diffusivity concept is most commonly applied. The turbulent diffusivity for heat transport is then obtained by dividing the turbulent viscosity by a turbulent Prandtl number. 
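A minimal sketch of this closure chain, with assumed representative values for k, ε, the turbulent Prandtl number and the mean temperature gradient, shows how the turbulent heat flux is obtained once the eddy viscosity is known.

```python
# Eddy viscosity/diffusivity closure as used in k-epsilon type models (illustrative values).
# nu_t = C_mu * k^2 / eps ;  alpha_t = nu_t / Pr_t ;  q/(rho*cp) = -alpha_t * dT/dy.

C_mu = 0.09           # standard k-epsilon constant
k = 1.5               # m^2/s^2, turbulent kinetic energy (assumed)
eps = 10.0            # m^2/s^3, dissipation rate (assumed)
Pr_t = 0.9            # turbulent Prandtl number (commonly used value)
dT_dy = -500.0        # K/m, mean temperature gradient normal to the wall (assumed)

nu_t = C_mu * k**2 / eps                 # kinematic eddy viscosity, m^2/s
alpha_t = nu_t / Pr_t                    # eddy diffusivity for heat, m^2/s
heat_flux_kinematic = -alpha_t * dT_dy   # K m/s; multiply by rho*cp to get W/m^2
print(f"nu_t = {nu_t:.4f} m^2/s, alpha_t = {alpha_t:.4f} m^2/s, "
      f"q/(rho*cp) = {heat_flux_kinematic:.2f} K m/s")
```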
Such a model cannot account for non-isotropic effects in the thermal field, but it is still frequently used in engineering applications. There are some models presented in the literature to account for non-isotropic heat transport, e.g. the generalized gradient diffusion hypothesis and the WET (wealth = earnings × time) method. These higher order models require that the Reynolds stresses are calculated accurately by taking non-isotropic effects into account. If not, the performance may not be improved. In addition, partial differential equations can be formulated for the three turbulent heat fluxes, but numerical solutions of these modelled equations are rarely found. Further details can be found in [32]. LES is a model where the time-dependent flow equations are solved for the mean flow and the largest eddies, while the effects of the smaller eddies are modelled. The LES model has been expected to emerge as the future model for industrial applications, but it is still limited to relatively low Reynolds numbers and simple geometries. Handling wall-bounded flows with a focus on near-wall phenomena like heat and mass transfer and shear at high Reynolds numbers presents a problem due to the near-wall resolution requirements. Complex wall topologies also present a problem for LES. Nowadays, approaches to combine LES and RANS based methods have been suggested. 6.4.2 Wall effects There are two standard procedures to account for wall effects in numerical calculations of turbulent flow and heat transfer. One is to employ low Reynolds number modelling procedures, and the other is to apply the wall function method. The wall function approach includes empirical formulas and functions linking the dependent variables at the near-wall cells to the corresponding parameters on the wall. The functions are composed of laws of the wall for the mean velocity and temperature, and formulae for the near-wall turbulent quantities. The accuracy of the wall function approach increases with increasing Reynolds number. In general, the wall function approach is efficient and requires less CPU time and memory size, but it becomes inaccurate at low Reynolds numbers. When low Reynolds number effects are important in the flow domain, the wall function approach ceases to be valid. The so-called low Reynolds number versions of the turbulence models are then introduced, in which the molecular viscosity appears in the diffusion terms. In addition, damping functions are introduced. Also, so-called two-layer models have been suggested, where the transport equation for the turbulent kinetic energy is solved, while an algebraic equation is used for, e.g., the turbulent dissipation rate. 6.4.3 CFD codes Several industries and companies worldwide are nowadays using commercially available, general-purpose, so-called CFD codes for the simulation of flow and heat transfer topics in heat exchangers, investigations on enhanced heat transfer, electronics cooling, gas turbine heat transfer and other application areas, e.g. fuel cells. Among these codes are FLUENT, CFX, STAR-CD, FIDAP, ADINA, CFD2000, PHOENICS and others. Many universities and research institutes worldwide also apply commercial codes besides in-house developed codes. However, to apply such codes successfully and to interpret the computed results, it is necessary to understand the fundamental concepts of computational methods. 6.4.4 Ducts with bumps Here a duct with bumps is considered; this type of duct appears in some rotary regenerative heat exchangers. 
The basic idea with the introduction of bumps is to design corrugated ducts, as indicated in Fig. 26 for ducts with a triangular cross section. The intention is that this corrugation should affect the flow field and introduce low Reynolds number turbulence and a swirling motion, as sketched in Fig. 26. At a certain distance downstream of the corrugation element, the turbulence and the swirling motion will be attenuated and gradually the intensity of the fluctuations will be reduced. Therefore, at a position downstream where the complex flow pattern (strong secondary cross-sectional flow and separated flow) has been significantly weakened or has disappeared, a new corrugation element is introduced to re-establish the violent, swirling motion. CFD calculations have been performed and a non-orthogonal structured grid was employed. Periodic conditions were imposed in the main flow direction. About 40,000 control volumes (CVs) were used, with 30 × 60 CVs in the cross-sectional plane. The existence of a secondary flow was revealed and a result is shown in Fig. 27. It is obvious that a swirling motion is created by the bumps and the triangular cross section. The Reynolds number corresponding to the flow in Fig. 27 is about 2000. In the simulations, a low Reynolds number $k$–$\varepsilon$ model was used. The secondary motion also exists for laminar cases, as it is partly geometry-driven. It is found that the heat transfer is enhanced compared to a smooth duct, but the pressure drop increase is high. Further details of this investigation can be found in [33]. Figure 26: Conjectured flow pattern in a duct with bumps. Figure 27: Secondary flow velocity vectors in a cross-sectional plane midway over a corrugation element. 6.5 Flow structures in ribbed ducts Ribbed duct flows are encountered in numerous engineering applications, e.g. the cooling of turbine blades and combustor walls. The flow behind a rib is typically characterized by flow separation and subsequent reattachment. Flow in a separated shear layer is complicated by the presence of reverse flow and a high level of turbulence intensity. Despite the substantial progress in experimental and numerical studies on turbulent flows with separation, our understanding of this phenomenon is far from complete. The first review of the experimental data for separated flow was provided by Bradshaw and Wong [34] for flow over a backward-facing step. On the basis of single point measurements, they concluded that the shear layer split into two parts at the reattachment point and that the bifurcation caused a rapid decrease in the turbulent shear stress. Troutt et al. [35] showed that the separated shear layer was dominated by large-scale vortices, which retained their organization far downstream of the reattachment region. Ruderich and Fernholz [36] indicated a self-similar behaviour for the mean and fluctuating quantities in a short region upstream of the reattachment point. The data of Castro and Haque [37] showed that the turbulent structure of the separated shear layer differed from that of a plane mixing layer between two streams. On the other hand, they argued that the flow close to the wall within the recirculation region had some features reminiscent of a laminar boundary layer in a favourable pressure gradient. Thereafter, Hasan [38] confirmed that the reattaching shear layer did split into two and that a low-frequency flapping motion of the shear layer was observed. In the numerical simulation of turbulent flow over a backward-facing step, Le et al. 
[39] pointed out that the turbulent kinetic energy budget in the recirculation region is similar to that of a turbulent mixing layer. Previous research mostly focused on the flow with separation induced by a backward-facing step, which is considered to be the benchmark for studying this phenomenon. The flow past a rib is more complicated because it involves an additional separation in the upstream region of the obstacle. On the other hand, the geometry of the rib also has an essential influence on the flow separation and reattachment. According to Fröhlich et al. [40], separation from continuous and curved surfaces displays a strong spatial and temporal fluctuation of the separation line; meanwhile, the mean location of reattachment is sensitively dependent on that of separation. These characteristics imply that the separation from contoured protrusions is more elusive than that from obstacles with sharp edges. In our recent work, square-shaped, transversely placed ribs were employed to investigate the separated flow in a square channel. To highlight the physical mechanism of flow separation, only one wall of the channel is fitted with periodic ribs. The ribs obstruct the channel by 15% of its height and are arranged 12 rib heights apart. The inter-rib spacing is set such that the reattachment is allowed to take place on the portion between consecutive ribs and a distinct redevelopment region is introduced prior to a re-separation over the next rib. Many numerical and analytical studies [41–44] were carried out to investigate the characteristics of flow separation in a ribbed channel based on DNS or LES techniques. In contrast to the numerous simulation works, however, very few experimental studies have been carried out to provide high-resolution velocity measurements and turbulence properties. On the basis of a literature review, Islam et al. [45] conducted an experimental study on the turbulent water flow in a rib-roughened rectangular channel by particle image velocimetry (PIV). Given the limited body of experimental data, experiments are performed to study the unsteady turbulent flow inside a square, ribbed channel. In this study, a two-dimensional PIV technique is implemented to measure the instantaneous velocity fields and turbulent statistical quantities. The research reported here was undertaken to fulfil two objectives, i.e. to gain insight into the physical process of separation and to provide experimental data on ribbed channel flows for the validation of CFD models. Wang et al. [46] examined experimentally the flow structures and turbulent properties associated with the flow separation in a square, ribbed channel by using PIV. The Reynolds number, based on the bulk-mean velocity and the channel hydraulic diameter, is fixed at 22,000. The ribs obstruct the channel by 15% of its height and are arranged 12 rib heights apart. Due to the flow periodicity, the investigated domain ranges from the sixth to the seventh rib. Two-dimensional velocity measurements are made in both the vertical symmetry plane and the horizontal planes. The instantaneous velocity fields give evidence that the separated shear layer is dominated by coherent vortices, which are generated by the Kelvin–Helmholtz instability. Similar to plane mixing layers, the growth rate of the separated shear layer is linear with respect to the streamwise direction. 
Moreover, it is noticed that the turbulence production, for both turbulent kinetic energy and shear stress, has a remarkable peak at \( y/e = 1 \), which approximately coincides with the inflexion point of the mean velocity profile (where \( d^2\langle u \rangle/dy^2 = 0 \)). Two distinct features near reattachment have been identified. First, the maximum shear stresses decrease rapidly just downstream of the reattachment. Second, the anisotropy parameters deviate to a small extent from unity near reattachment. Further downstream of the reattachment, the acceleration in the inner part coupled with deceleration in the outer part makes the redevelopment of the boundary layer different from the behaviour of an equilibrium boundary layer.

7 Conclusions

To develop sustainable energy systems, one must minimize the final use of energy, improve the efficiency of energy conversion and use renewable energy sources. In all these aspects, heat transfer and heat exchangers play a significant role, and this fact has been reviewed and illustrated throughout this article. In addition, heat transfer and heat exchangers also have a great influence, both directly and indirectly, on the reduction of emissions and pollutants. Therefore, the attempt to provide efficient, compact and cheap heat transfer methods and heat exchangers is a real challenge for research. Both theoretical and experimental investigations must be conducted, and modern scientific techniques must be adopted, such as CFD, laser techniques and liquid crystal thermography. By doing so, sustainable energy systems can be established, and this will contribute to global sustainable development.

Acknowledgement

Financial support has been received from the Swedish Energy Agency, the Swedish Scientific Council as well as the EU. Several graduate students have been involved in performing the various projects.

References
Partial coherent states in graphene

E Díaz-Bautista, J Negro and L M Nieto
Department of Theoretical Physics, Atomic Physics and Optics, University of Valladolid, 47011 Valladolid, Spain
E-mail: ediaz@fis.cinvestav.mx, jnegro@fta.uva.es, luismiguel.nieto.calzada@uva.es

Abstract. We employ a symmetric gauge to describe the interaction of electrons in graphene with a magnetic field which is orthogonal to the layer surface and to build the so-called partial and bidimensional coherent states for this system in the Barut-Girardello sense. We also evaluate the corresponding probability and current densities as well as the mean energy value.

1. Introduction

In 1925 Fock [1] solved the physical problem of a spinless particle moving in the $xy$-plane under the action of a uniform magnetic field $\vec{B}$ directed along the $z$-axis and an oscillator-like potential $V(x,y)$, employing the so-called symmetric gauge [2,3],
$$\vec{A} = \frac{1}{2} \vec{B} \times \vec{r} = \frac{B_0}{2} (-y, x, 0), \quad \vec{B} = \nabla \times \vec{A} = B_0 \hat{k}. \quad (1)$$
Since then, the motion of a charged particle in a magnetic field has become one of the most studied quantum systems. In particular, Man’ko and Malkin [4] were able to build the simplest coherent states for this system as 2-dimensional generalizations of the Glauber ones [5], taking as starting point the results obtained by Landau [6]. On the other hand, graphene is a material that consists of a sheet of carbon atoms arranged on a honeycomb lattice [7–9], in which the dynamics of the electrons close to the Dirac points is described by the $(2+1)$-dimensional massless Dirac-like equation with an effective velocity $v_F = c/300$, which results in many relativistic phenomena [7,9–13]. The interaction between electrons in graphene and magnetic fields has attracted interest in many branches of physics [14–23] with the purpose of controlling or confining such particles for the design of electronic devices. Thus, one can try to apply the coherent states formalism to this system considering e.g. homogeneous perpendicular magnetic fields. In [24] coherent states with a translational symmetry along the $y$ direction have been constructed assuming the Landau gauge $\vec{A} = B_0 x \hat{j}$, and the time-independent Dirac-Weyl (DW) equation
$$H_{DW} \Psi(x,y) = v_F \vec{\sigma} \cdot \left( \vec{p} + \frac{e}{c} \vec{A} \right) \Psi(x,y) = E \Psi(x,y), \quad (2)$$
where $\vec{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ are the Pauli matrices and $\Psi(x,y)$ are two-component wave functions. Our purpose here is to look for generalizations of those quantum states that preserve the rotational invariance through a symmetric gauge for describing the electron dynamics in graphene. This work is organized as follows. In Sect. 2, the Dirac-Weyl equation is solved for a symmetric gauge and the associated algebraic structure is discussed. In Sect. 3, spinorial families of partial coherent states are obtained as eigenstates of two independent generalized annihilation operators, $\mathbb{A}^-$ and $\mathbb{B}^-$. Also, the corresponding probability and current densities and the mean energy are evaluated. In Sect. 4, bidimensional coherent states for graphene are presented as common eigenstates of both operators $\mathbb{A}^-$ and $\mathbb{B}^-$. Our conclusions are presented in Sect. 5.

2. Dirac-Weyl Hamiltonian

By substituting the symmetric gauge (1) in Eq.
(2), we obtain
$$H_{DW} \Psi(x, y) = v_F \left( \sigma_x \left[ p_x - \frac{eB_0}{2c} y \right] + \sigma_y \left[ p_y + \frac{eB_0}{2c} x \right] \right) \Psi(x, y) = E \Psi(x, y), \quad (3)$$
which can be expressed as
$$H_{DW} \Psi(x, y) = \sqrt{\omega}\, \hbar v_F \left[ \begin{array}{cc} 0 & -iA^- \\ iA^+ & 0 \end{array} \right] \Psi(x, y) = E \Psi(x, y), \quad \Psi(x, y) = \left( \begin{array}{c} \psi_1(x, y) \\ i\psi_2(x, y) \end{array} \right),$$
by defining the first-order differential operators
$$A^\pm = \mp \frac{i}{\sqrt{\omega}\,\hbar} \left[ \left( p_x - \frac{\omega \hbar}{4}\, y \right) \pm i \left( p_y + \frac{\omega \hbar}{4}\, x \right) \right],$$
where $\omega = 2eB_0/c\hbar$ is the cyclotron frequency. The eigenvalue equation (3) encodes two coupled equations:
$$A^- \psi_2(x, y) = \epsilon \psi_1(x, y), \quad A^+ \psi_1(x, y) = \epsilon \psi_2(x, y), \quad \epsilon = \frac{E}{\sqrt{\omega}\, \hbar v_F},$$
which give rise to the following equations for each pseudo-spinor component:
$$\mathcal{H}^+ \psi_2(x, y) = A^+ A^- \psi_2(x, y) = \epsilon^2 \psi_2(x, y),$$
$$\mathcal{H}^- \psi_1(x, y) = A^- A^+ \psi_1(x, y) = \epsilon^2 \psi_1(x, y).$$
Since the problem has a geometrical rotational symmetry around the $z$-axis, it is convenient to express the Hamiltonians $\mathcal{H}^\pm$ in polar coordinates $(\rho, \theta)$. Thus, by introducing the dimensionless variable $\xi$ and the parameter $\mathcal{E}$, defined as follows:
$$\xi = \frac{\sqrt{\omega}}{2} \rho, \quad \mathcal{E} \equiv \epsilon^2 = \frac{E^2}{\omega \hbar^2 v_F^2},$$
the corresponding eigenvalue equations take the form
$$\mathcal{H}^+ \psi_2(\xi, \theta) = \frac{1}{4} \left[ -\left( \partial_\xi^2 + \frac{1}{\xi} \partial_\xi + \frac{1}{\xi^2} \partial_\theta^2 \right) - 2i \partial_\theta + \xi^2 - 2 \right] \psi_2(\xi, \theta) = \mathcal{E} \psi_2(\xi, \theta),$$
$$\mathcal{H}^- \psi_1(\xi, \theta) = \frac{1}{4} \left[ -\left( \partial_\xi^2 + \frac{1}{\xi} \partial_\xi + \frac{1}{\xi^2} \partial_\theta^2 \right) - 2i \partial_\theta + \xi^2 + 2 \right] \psi_1(\xi, \theta) = \mathcal{E} \psi_1(\xi, \theta).$$
This set of differential equations is reminiscent of the well-known Fock-Darwin system $[1, 3, 25]$. The above relations imply that the eigenvalues of the Hamiltonians $\mathcal{H}^\pm$ are related as $\mathcal{E}_{1,n} = \mathcal{E}_{2,n} = n$, and therefore the spectrum of the DW equation (3) is
$$E_n = \pm \hbar v_F \sqrt{n \omega}, \quad n = 0, 1, 2, \ldots,$$
where the negative energies correspond to holes in graphene.

2.1. Algebraic treatment and eigenstates

Both Hamiltonians $\mathcal{H}^\pm$ can be factorized in terms of two sets of differential operators as [25–27]:
$$\mathcal{H}^+ = A^+ A^- = B^+ B^- + L_z, \quad \mathcal{H}^- = \mathcal{H}^+ + 1,$$
where, in dimensionless polar coordinates $(\xi, \theta)$,
$$A^+ = \frac{\exp(i\theta)}{2} \left( -\partial_\xi + \frac{-i\partial_\theta}{\xi} + \xi \right), \quad A^- = \frac{\exp(-i\theta)}{2} \left( \partial_\xi + \frac{-i\partial_\theta}{\xi} + \xi \right),$$
$$B^+ = \frac{\exp(-i\theta)}{2} \left( -\partial_\xi + \frac{i\partial_\theta}{\xi} + \xi \right), \quad B^- = \frac{\exp(i\theta)}{2} \left( \partial_\xi + \frac{i\partial_\theta}{\xi} + \xi \right),$$
$$L_z = N - M = -i\partial_\theta, \quad N = A^+ A^-, \quad M = B^+ B^-,$$
and $L_z$ denotes the $z$-component of the angular momentum operator.
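As a quick consistency check on the polar-coordinate ladder operators above, one can verify symbolically that \([A^-, A^+] = 1\) and that \(A^+A^- = B^+B^- + L_z\) hold when the operators act on an arbitrary function. The snippet below is an illustrative sketch using SymPy, not part of the original paper.

```python
import sympy as sp

xi, theta = sp.symbols('xi theta', real=True, positive=True)
f = sp.Function('f')(xi, theta)   # arbitrary test function

# ladder operators in dimensionless polar coordinates, as defined in the text
A_p = lambda g: sp.exp(sp.I*theta)/2 * (-sp.diff(g, xi) - sp.I*sp.diff(g, theta)/xi + xi*g)
A_m = lambda g: sp.exp(-sp.I*theta)/2 * (sp.diff(g, xi) - sp.I*sp.diff(g, theta)/xi + xi*g)
B_p = lambda g: sp.exp(-sp.I*theta)/2 * (-sp.diff(g, xi) + sp.I*sp.diff(g, theta)/xi + xi*g)
B_m = lambda g: sp.exp(sp.I*theta)/2 * (sp.diff(g, xi) + sp.I*sp.diff(g, theta)/xi + xi*g)
Lz  = lambda g: -sp.I*sp.diff(g, theta)

# [A^-, A^+] f - f should vanish identically
print(sp.simplify(A_m(A_p(f)) - A_p(A_m(f)) - f))        # -> 0

# A^+ A^- f - (B^+ B^- + L_z) f should vanish identically
print(sp.simplify(A_p(A_m(f)) - B_p(B_m(f)) - Lz(f)))    # -> 0
```

Both expressions reduce to zero, confirming that the two factorizations of \(\mathcal{H}^+\) are equivalent.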
These operators satisfy the following commutation relations
$$[A^-, A^+] = [B^-, B^+] = 1,$$
$$[A^\pm, B^\pm] = [A^\pm, B^\mp] = 0,$$
$$[L_z, A^\pm] = \pm A^\pm, \quad [L_z, B^\pm] = \mp B^\pm.$$
Therefore, Eqs. (5a) and (5b) imply that the operators $A^+$ and $A^-$, acting on an eigenstate of $L_z$, increase or decrease, respectively, the eigenvalue of $L_z$ by one unit; the operators $B^\pm$ have the opposite effect. The eigenstates of the Hamiltonians $\mathcal{H}^\pm$ are labeled by two non-negative integers $m, n$, which are the eigenvalues of the number operators $M$ and $N$, respectively (see Figure 1):
$$\psi_1(\xi, \theta) \equiv \psi_{m,n-1}(\xi, \theta), \quad \psi_2(\xi, \theta) \equiv \psi_{m,n}(\xi, \theta),$$
while Eq. (4c) implies that the states $\psi_{m,n}$ are also eigenstates of the operator $L_z$ with eigenvalue $l = n - m$. Also, the action of the operators $A^\pm$ and $B^\pm$ on such states is
$$A^- \psi_{m,n} = \sqrt{n} \psi_{m,n-1}, \quad A^+ \psi_{m,n} = \sqrt{n + 1} \psi_{m,n+1},$$
$$B^- \psi_{m,n} = \sqrt{m} \psi_{m-1,n}, \quad B^+ \psi_{m,n} = \sqrt{m + 1} \psi_{m+1,n}.$$
On the other hand, the normalized eigenfunctions of the Hamiltonian $\mathcal{H}^+$ turn out to be
$$ \psi_{m,n}(\rho, \theta) = (-1)^{\min(m,n)} \sqrt{\frac{\omega}{4\pi}\,\frac{\min(m,n)!}{\max(m,n)!}}\, \left( \frac{\sqrt{\omega}}{2}\,\rho \right)^{|n-m|} \exp\!\left( -\frac{\omega}{8} \rho^2 + i(n-m)\theta \right) L_{\min(m,n)}^{|n-m|}\!\left( \frac{\omega}{4}\,\rho^2 \right), \quad (6) $$
$n, m = 0, 1, 2, \ldots$, while the normalized eigenfunctions of the Hamiltonian $\mathcal{H}^-$ are obtained as $\psi_{m,n-1} = A^- \psi_{m,n}/\sqrt{n}$. Solutions of this kind were first obtained in [1]. Furthermore, we can label as $\Psi_{m,n}^+(x, y)$ the spinorial states whose two scalar components have positive $z$-component of the angular momentum ($l \geq 0$), and as $\Psi_{m,n}^-(x, y)$ those whose two scalar components have negative $z$-component of the angular momentum ($l < 0$), i.e.,
$$ \Psi_{m,n}^{\pm}(x, y) = \frac{1}{\sqrt{2^{(1-\delta_{0n})}}} \begin{pmatrix} (1-\delta_{0n})\,\psi_{m,n-1}^{\pm}(x, y) \\ i\,\psi_{m,n}^{\pm}(x, y) \end{pmatrix}, $$
where $\psi_{m,n}^+(x, y) \equiv \psi_{m,n}^+(\rho, \theta)$ ($\psi_{m,n}^-(x, y) \equiv \psi_{m,n}^-(\rho, \theta)$) identifies the states that belong to the upper (lower) sector in Figure 1, and $\delta_{mn}$ denotes the Kronecker delta. In addition, by defining the total angular momentum operator in the $z$-direction as $J_z = L_z \otimes \mathbb{I} + \sigma_z/2$, we have that
$$ J_z \Psi_{m,n}(x, y) = \left(l - \tfrac{1}{2}\right) \Psi_{m,n}(x, y) = j\, \Psi_{m,n}(x, y), \quad (7) $$
i.e., the states $\Psi_{m,n}(x, y)$ are also eigenstates of $J_z$ with eigenvalue $j \equiv l - 1/2$.

3. Partially displaced states

Defining generalized annihilation operators $\mathbb{A}^-$ and $\mathbb{B}^-$ in terms of the scalar ones $A^-$ and $B^-$, we can build bidimensional coherent states (2D-CS) $\Psi_{\alpha,\beta}(x, y) \equiv \langle x, y|\alpha, \beta \rangle$ such that [4,25,28]:
$$ \mathbb{A}^- \Psi_{\alpha,\beta}(x, y) = \alpha \Psi_{\alpha,\beta}(x, y), \quad \mathbb{B}^- \Psi_{\alpha,\beta}(x, y) = \beta \Psi_{\alpha,\beta}(x, y), \quad \alpha, \beta \in \mathbb{C}. $$
Moreover, if one takes specific sums over one of the quantum numbers, $n$ or $m$, the so-called partial coherent states (PCS) can be obtained [4,28].
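Before moving on to the coherent-state construction, it is worth checking numerically that the eigenfunctions in Eq. (6), in the form reconstructed above, are properly normalized. The snippet below is an illustrative sketch (not part of the paper); each integral of \(|\psi_{m,n}|^2\) over the plane should return 1.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import factorial, genlaguerre

omega = 1.0   # field parameter; the figures in the paper use B0 = 1/2 and omega = 1

def abs_psi_sq(rho, m, n):
    """|psi_{m,n}(rho, theta)|^2 from Eq. (6); the phase exp(i(n-m)theta) drops out."""
    mn, mx = min(m, n), max(m, n)
    k = abs(n - m)
    u = omega * rho**2 / 4
    norm = (omega / (4 * np.pi)) * factorial(mn) / factorial(mx)
    return norm * u**k * np.exp(-u) * genlaguerre(mn, k)(u)**2

# integrate |psi|^2 over the plane: int_0^inf |psi|^2 * 2*pi*rho drho
for m, n in [(0, 0), (1, 3), (4, 2)]:
    val, _ = quad(lambda r: 2 * np.pi * r * abs_psi_sq(r, m, n), 0, np.inf)
    print(m, n, round(val, 6))   # each value should be close to 1.0
```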
In this section, we discuss the construction of all these coherent states for graphene.

3.1. Annihilation operator $\mathbb{B}^-$

Let us consider the operator $\mathbb{B}^-$ defined as
$$ \mathbb{B}^- = B^- \otimes \mathbb{I} = \begin{bmatrix} B^- & 0 \\ 0 & B^- \end{bmatrix}, \quad \mathbb{B}^+ = (\mathbb{B}^-)^\dagger \implies [\mathbb{B}^-, \mathbb{B}^+] = \mathbb{I}, \quad (8) $$
such that
$$ \mathbb{B}^- \Psi_{m,n}^+(x, y) = \sqrt{m}\, \Psi_{m-1,n}^+(x, y). $$
A first family of PCS $\Psi_{\beta}^n(x, y) \equiv \langle x, y|n, \beta \rangle$ is obtained from the eigenvalue equation
$$ \mathbb{B}^- \Psi_{\beta}^n(x, y) = \beta \Psi_{\beta}^n(x, y), \quad \beta \in \mathbb{C}, \quad n = 0, 1, 2, \ldots. $$
The PCS with a well-defined energy $E_n = \sqrt{n\omega}\, \hbar v_F$ turn out to be
$$ \Psi_{\beta}^n(x, y) = \frac{1}{\sqrt{2^{(1-\delta_{0n})}}} \begin{pmatrix} (1-\delta_{0n})\,\psi_{\beta}^{n-1}(x, y) \\ i\,\psi_{\beta}^{n}(x, y) \end{pmatrix}, \quad n = 0, 1, 2, \ldots, \quad (9) $$
where we have identified the two scalar eigenstates $\psi^{n}_{\beta}$ and $\psi^{n-1}_{\beta}$ of $B^-$ for each energy level $n$. Now, each scalar coherent state in Eq. (9) satisfies one of the equations
\[ B^- \psi^{n}_{\beta}(x, y) = \beta \psi^{n}_{\beta}(x, y), \quad (10a) \]
\[ B^- \psi^{n-1}_{\beta}(x, y) = \beta \psi^{n-1}_{\beta}(x, y). \quad (10b) \]
Therefore, in order to find the solutions $\psi^{n}_{\beta}$, one should define the complex parameter $z$ as
\[ z = \frac{\sqrt{\omega}}{2} \rho \exp(i\theta) = \sqrt{\omega} \left(\frac{x + iy}{2}\right), \quad (11) \]
and the operators $A^\pm$ and $B^\pm$ should also be expressed in terms of $z$. Thus, after solving the expressions in Eq. (10), the normalized spinorial PCS $\Psi^{n}_{\beta}(x, y)$, $n = 0, 1, 2, \ldots$, take the form
\[ \Psi^{n}_{\beta}(x, y) = \frac{1}{\sqrt{2^{(1-\delta_{0n})}\,\pi\, n!}} \exp\!\left( \left[ \beta - \frac{z}{2} \right] z^{*} - \frac{|\beta|^2}{2} \right) \begin{pmatrix} (1-\delta_{0n})\,\sqrt{n}\,(z - \beta)^{n-1} \\ i\,(z - \beta)^{n} \end{pmatrix}. \quad (12) \]
As a final comment, let us mention that the coherent states $\Psi^{n}_{\beta}(x, y)$ can also be obtained by applying a unitary displacement operator $D(\beta, \beta^*)$ to the spinorial states $\Psi_{0,n}(x, y)$, i.e.,
\[ \Psi^{n}_{\beta}(x, y) = D(\beta, \beta^*)\, \Psi_{0,n}(x, y), \quad D(\beta, \beta^*) = \exp\!\left( \beta \mathbb{B}^+ - \beta^* \mathbb{B}^- \right). \quad (13) \]

3.1.1. Probability and current densities. From Eq. (13), we can establish that the coherent states $\Psi^{n}_{\beta}(x, y)$ are displaced from the origin, similarly to the standard coherent states (SCS) in phase space, being centered at the point
\[ (x_0, y_0) = \left( \frac{2|\beta|}{\sqrt{\omega}} \cos(\varphi),\ \frac{2|\beta|}{\sqrt{\omega}} \sin(\varphi) \right), \quad \beta = |\beta| \exp(i\varphi). \]
Such a translation is generated through the magnetic translation operators, which represent, in a classical interpretation, the coordinates of the centre of the circle on the $xy$-plane in which a charged particle moves [29–32]. In complex number notation, the coordinates $(x, y)$ with respect to the origin of such a particle moving in a circular trajectory of radius $\rho' = \sqrt{x'^2 + y'^2}$ are
\[ z = \sqrt{\omega} \left( \frac{x + iy}{2} \right) = (|\beta| \cos(\varphi) + \xi' \cos(\theta')) + i(|\beta| \sin(\varphi) + \xi' \sin(\theta')) = \beta + z', \]
where $\xi' = \sqrt{\omega} \rho'/2$ and $z' = \xi' \exp(i\theta')$.
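As an illustration of this displacement (a sketch added here, not part of the paper), one can evaluate the probability density of \(\Psi^n_\beta\) quoted in the next subsection on a grid and confirm that it is concentrated around the point \((x_0, y_0)\) determined by \(\beta\); overall normalization constants do not affect the location of the peak.

```python
import math
import numpy as np

omega = 1.0
n = 1
beta = np.exp(1j * np.pi / 2)   # |beta| = 1, phi = pi/2 (values similar to those in Figure 2)

def rho_pcs(x, y):
    """Probability density of the PCS Psi_beta^n (Sect. 3.1.1), valid for n >= 1."""
    z = np.sqrt(omega) * (x + 1j * y) / 2
    w = np.abs(z - beta)
    pref = 1.0 / (2 ** (1 - int(n == 0)) * np.pi * math.factorial(n))
    return pref * np.exp(-w**2) * w**(2 * n - 2) * (w**2 + n)

# expected centre of the displaced state
x0 = 2 * abs(beta) / np.sqrt(omega) * np.cos(np.angle(beta))
y0 = 2 * abs(beta) / np.sqrt(omega) * np.sin(np.angle(beta))

xs = np.linspace(-6.0, 6.0, 241)
X, Y = np.meshgrid(xs, xs)
D = rho_pcs(X, Y)
i, j = np.unravel_index(np.argmax(D), D.shape)
print("expected centre :", (round(x0, 2), round(y0, 2)))
print("density maximum :", (round(float(X[i, j]), 2), round(float(Y[i, j], ), 2)))
```

For \(n = 1\) and \(\beta = e^{i\pi/2}\) the maximum of the density sits at \((0, 2)\), i.e. exactly at \((x_0, y_0)\), as expected for a displaced stationary state.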
Hence, the probability density $\rho_{n,\beta}(x, y)$ and the current densities $j_{n,\hat{n}'_k}(x', y')$ along the directions of the unit vectors $\hat{\rho}'$ and $\hat{\theta}'$ in the displaced frame are, respectively (see Figure 2):
\[ \rho_{n,\beta}(x, y) = \Psi^{n\,\dagger}_{\beta}(x, y)\, \Psi^{n}_{\beta}(x, y) = \frac{1}{2^{(1-\delta_{0n})}\pi\, n!} \exp\!\left( -|z - \beta|^2 \right) |z - \beta|^{2n-2} \left( |z - \beta|^2 + n \right), \]
\[ j_{n,\hat{n}'_k}(x', y') = e v_F\, \Psi^{n\,\dagger}_{\beta}(x', y') \left( \vec{\sigma} \cdot \hat{n}'_k \right) \Psi^{n}_{\beta}(x', y') = \frac{2 e v_F \sqrt{n}}{2^{(1-\delta_{0n})}\pi\, n!} \exp\!\left( -|z'|^2 \right) \frac{|z'|^{2n}}{\xi'}\, \Re\!\left[\, i\,(-i)^k \,\right]. \]
For $k = 0$ ($k = 1$), the last equation gives the flux of probability in the radial (angular) direction. As expected, the radial current density $j_{n,\hat{\rho}'}(x', y')$ is null for any value of $n$.

3.2. Annihilation operator $\mathbb{A}^{-}$

Now, let us consider the operator $\mathbb{A}^{-}$ defined as
$$\mathbb{A}^{-} = \frac{1}{\sqrt{2}} \begin{pmatrix} \sqrt{\frac{N-1}{N+1}}\, A^{-} & \frac{1}{\sqrt{N+1}} (A^{-})^2 \\ \frac{1}{\sqrt{N+1}} (A^{-})^2 & \sqrt{\frac{N}{N+1}}\, A^{-} \end{pmatrix}, \quad \mathbb{A}^+ = (\mathbb{A}^{-})^\dagger \implies [\mathbb{A}^{-}, \mathbb{A}^+] = \mathbb{I}, \quad (14)$$
such that
$$\mathbb{A}^{-} \Psi_{m,n}(x,y) = \sqrt{n}\, \exp(i\pi/4)\, \Psi_{m,n-1}(x,y), \quad n = 0, 1, 2, \ldots.$$
A second family of PCS $\Psi^\alpha_m(x,y) \equiv \langle x, y | \alpha, m \rangle$ is obtained from the eigenvalue equation
$$\mathbb{A}^{-} \Psi^\alpha_m(x,y) = \alpha \Psi^\alpha_m(x,y), \quad \alpha \in \mathbb{C}, \quad m = 0, 1, 2, \ldots. \quad (15)$$
By applying Eq. (15) and taking $\hat{\alpha} = \alpha \exp(-i\pi/4)$, the corresponding PCS take the form
$$\Psi^\alpha_m(x,y) = \left[2 \exp(|\hat{\alpha}|^2) - 1\right]^{-1/2} \left[ \Psi_{m,0}(x,y) + (1 - \delta_{0m}) \sum_{n=1}^{m} \sqrt{2}\,\frac{\hat{\alpha}^{\,n}}{\sqrt{n!}}\, \Psi^{-}_{m,n}(x,y) + \sum_{n=m+1}^{\infty} \sqrt{2}\,\frac{\hat{\alpha}^{\,n}}{\sqrt{n!}}\, \Psi^{+}_{m,n}(x,y) \right]. \quad (16)$$

3.2.1. Probability and current densities and mean energy. Using again the complex parameter $z$ in Eq. (11), the densities $\rho_{m,\alpha}(x,y)$ and $j_{m,\alpha}(x',y')$, and the mean energy $\langle H_{DW} \rangle_\alpha$ are
$$\rho_{m,\alpha}(x,y) = \rho_{m,\alpha}(\rho, \theta) = \Psi^{\alpha\,\dagger}_{m}(x,y)\, \Psi^\alpha_m(x,y)$$
$$= [2 \exp(|\hat{\alpha}|^2) - 1]^{-1} \left[ g_m^2(\rho) + \sum_{n=m+1}^{\infty} \frac{(\hat{\alpha})^n}{n!} L_{m-n}^n \left( \frac{\omega}{4} \rho^2 \right) f_m(\rho) \right]$$
$$+ \left[ \sum_{n=m+1}^{\infty} \frac{(\hat{\alpha})^n}{n!} L_{m-n}^n \left( \frac{\omega}{4} \rho^2 \right) g_m(\rho) \right]^2 + 2 \Re \left[ \sum_{n=m+1}^{\infty} \frac{(\hat{\alpha})^n}{n!} f_m(\rho) g_m(\rho) L_{m-n}^n(\rho^2) \right]$$
$$+ (1 - \delta_{0m}) \left[ \sum_{n=1}^{m} \left( -\frac{\hat{\alpha}}{z^*} \right)^n L_{m-n}^n \left( \frac{\omega}{4} \rho^2 \right) g_m(\rho) \right]$$
$$+ 2 \Re \left[ \sum_{n=1}^{m} \left( -\frac{\hat{\alpha}}{z^*} \right)^n L_{m-n}^n \left( \frac{\omega}{4} \rho^2 \right) g_m(\rho) \right] \times$$

Figure 2: For the PCS $\Psi^{n}_{\beta}(x,y)$, the probability density $\rho_{n,\beta}(x,y)$ (3D) as well as the angular current density $j_{n,\hat{\theta}'}(x',y')$ (2D) are shown for different values of $n$, for $B_0 = 1/2$ and $\omega = 1$.
(Figure 2 shows: (a) $\rho_{n,\beta}(x,y)$ for $n = 0$ and $\beta = \exp(i\pi/2)$; (b) $\rho_{n,\beta}(x',y')$ and $j_{n,\hat{\theta}'}(x',y')$ for $n = 1$ and $\beta = 1$; (c) $\rho_{n,\beta}(x',y')$ and $j_{n,\hat{\theta}'}(x',y')$ for $n = 3$ and $\beta = 1$.)

4. Bidimensional coherent states

The 2D-CS can be expanded as
\[ \Psi_{\alpha,\beta}(x,y) = \mathcal{N} \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} c_n\, d_m\, \Psi_{m,n}(x,y), \]
where \( \mathcal{N} \) is a normalization constant. Hence, employing the coherent states shown in Eqs. (12) and (16), we obtain the following expressions for the 2D-CS, as well as their corresponding probability and current densities (see Figure 5):

Figure 3: For the PCS \( \Psi^m_{\alpha}(x, y) \), the probability density \( \rho_{m,\alpha}(x, y) \) (3D), the radial current density \( j_{m,\hat{\rho}\,\alpha}(x, y) \) (2D; panels a, c, e) and the angular current density \( j_{m,\hat{\theta}\,\alpha}(x, y) \) (2D; panels b, d, f) are shown for \( (\alpha, m) = (\exp(i\pi/2), 0) \), \( (5\exp(i\pi/2), 0) \) and \( (5\exp(i\pi/2), 2) \). In all the cases \( B_0 = 1/2 \) and \( \omega = 1 \).

\[ j_{\alpha,\beta,\hat{n}_k}(x, y) = \frac{2 e v_F}{\pi\left(2\exp(|\alpha|^2) - 1\right)} \Re \left[ (i)^k e^{-i\theta} \left( \sum_{n'=0}^{\infty} \frac{\hat{\alpha}(z - \beta)}{n!} \right)^n \left( \sum_{n=0}^{\infty} \frac{\hat{\alpha}^*(z^* - \beta^*)}{n!(z^* - \beta^*)} \right) \right]. \quad (18c) \]

The expression for the mean energy \( \langle H_{DW}\rangle_{\alpha} \) for these states is the same as for those in Eq. (16), since the coherent states \( \Psi^{n}_{\beta}(x, y) \) are actually stationary states (see Figure 4).

Figure 5: For the 2D-CS \( \Psi_{\alpha,\beta}(x, y) \), the probability density \( \rho_{\alpha,\beta}(x, y) \) (3D) and the radial current density \( j_{\alpha,\beta,\hat{\rho}}(x, y) \) (2D; panels a, c, e), as well as the angular current density \( j_{\alpha,\beta,\hat{\theta}}(x, y) \) (2D; panels b, d, f), are shown for \( (\alpha, \beta) = (\exp(i\pi/2), 0) \), \( (2\exp(i\pi/2), 2) \) and \( (3\exp(i\pi/2), 3) \). In all the cases \( B_0 = 1/2 \) and \( \omega = 1 \). Red lines on the \( xy \)-plane correspond to the classical trajectory of a charged particle in a magnetic field: the coordinates of the centre of the circle are determined by \( \beta \), while \( \alpha \) gives the coordinates at which the maximum probability amplitude can be found with respect to that point.

5. Conclusions

In this work, we have followed the ideas of Malkin and Man’ko [4] to obtain the simplest coherent states for electrons in graphene interacting with a magnetic field through a symmetric gauge, in order to describe the dynamics of such particles close to the Dirac points. For the gauge considered, the DW solutions have axial symmetry (Eq. (6)) and infinite degeneracy due to the existence of rotational symmetry, \([\mathcal{H}^\pm, L_z] = 0\). Two sets of scalar ladder operators are identified; with them, we can define two generalized annihilation operators \( \mathbb{B}^- \) and \( \mathbb{A}^- \) (Eqs. (8) and (14)), with which we can build their associated coherent states.
Hence, three different kinds of coherent states \( \Psi^n_{\beta}(x, y) \), \( \Psi^m_{\alpha}(x, y) \) and \( \Psi_{\alpha,\beta}(x, y) \) with non-definite angular momentum are obtained. For the coherent states in Eqs. (16) and (18a), there is a flux of probability in both the radial and angular directions that, we assume, is due to the contribution of the wave functions of both sublattices in the unit cell of graphene (Figures 3, 5). Meanwhile, for the states in Eq. (12), there is only a flux of probability in the angular direction, with axial symmetry, because these states are actually stationary states that have been displaced on the \( xy \)-plane (Figure 2). On the other hand, both families of PCS \( \Psi^n_{\beta}(x, y) \) and \( \Psi^m_{\alpha}(x, y) \) possess a Gaussian probability distribution only for \( n = 0 \) and \( m = 0 \), respectively (Figures 2, 3), while the 2D-CS \( \Psi_{\alpha,\beta}(x, y) \) have a stable Gaussian-like probability distribution regardless of the values of \( \alpha \) and \( \beta \), as happens with SCS in phase space. Similar to what is observed for the bidimensional coherent states obtained by Malkin and Man’ko, in a semi-classical interpretation the eigenvalue \( \beta \) determines the coordinates of the center of the circle, while \( \alpha \) is related to the coordinates of the charged particle rotating around such a center (see Figure 5). This allows us to conclude that these last coherent states are the simplest ones that can be obtained for electrons in graphene interacting with an external constant magnetic field in a symmetric gauge [4]. In addition, the behavior of the mean energy function suggests the possibility of using both states \( \Psi_{\alpha}^{m}(x,y) \) and \( \Psi_{\alpha,\beta}(x,y) \) in semi-classical treatments (Figure 4). Finally, it is important to remark that, as has been discussed in [33–35], it is possible to construct coherent states that are also eigenstates of the total angular momentum operator in the \( z \)-direction (Eq. (7)) through the ladder operators \( K^- = A^- B^- \), \( K^+ = (K^-)^\dagger \) which, together with the operator \( 2K_0 = [K^-, K^+] \), are generators of the \( su(1,1) \) algebra. This work is in progress.

Acknowledgments

This work has been supported in part by the Spanish Junta de Castilla y León (Projects VA137G18 and BU229P18) and MINECO (Project MTM2014-57129-C2-1-P). EDB also acknowledges the support of Conacyt and the warm hospitality at the Department of Theoretical Physics of the University of Valladolid, as well as his family's moral support, especially that of Act. J. Manuel Zapata L.

References
[6] Landau L 1930 Z. Phys. 64 629
[23] Dai-Nam L, Van-Hoang L and Pinakv R 2017 Physica E 96 17
Proceedings of the Fifteenth Annual Meeting of the ENTOMOLOGICAL SOCIETY of ALBERTA Edmonton, Alberta October 19th, 20th, and 21st, 1967 Proceedings of the ENTOMOLOGICAL SOCIETY OF ALBERTA Volume 15 October 1967 The Fifteenth Annual Meeting of the Entomological Society of Alberta was held jointly with the Entomological Society of Saskatchewan at the MacDonald Hotel, Edmonton, on October 19, 20 and 21. Officers 1967 <table> <thead> <tr> <th>Position</th> <th>Saskatchewan</th> <th>Alberta</th> </tr> </thead> <tbody> <tr> <td>President</td> <td>L. Burgess</td> <td>B. Hocking</td> </tr> <tr> <td>Vice-President</td> <td>J. F. Doane</td> <td>H. Tripp</td> </tr> <tr> <td>Secretary</td> <td>W. W. A. Stewart</td> <td>L. K. Peterson</td> </tr> <tr> <td>Treasurer</td> <td>W. W. A. Stewart</td> <td>K. E. Ball</td> </tr> <tr> <td>Editor</td> <td>N. Church</td> <td>M. A. Chance</td> </tr> <tr> <td>Regional Director</td> <td>M. A. Taylor</td> <td>G. E. Ball</td> </tr> <tr> <td>CONTENTS</td> <td>Page</td> <td></td> </tr> <tr> <td>------------------------------------------------------------------------</td> <td>------</td> <td></td> </tr> <tr> <td>Opening Address - L. Burgess</td> <td>3</td> <td></td> </tr> <tr> <td>Abstracts of Papers Presented</td> <td></td> <td></td> </tr> <tr> <td>The formation and histochemistry of the spermatophore in the Caragana Blister Beetle, <em>Lytta nuttalli</em> Say - G.H. Gerber</td> <td>4</td> <td></td> </tr> <tr> <td>Copulation, oviposition and fertility in grasshoppers - D.S. Smith</td> <td>5</td> <td></td> </tr> <tr> <td>Glycerol content of insects collected at Lake Hazen, Ellesmere Island - R.W. Salt &amp; J. Shorthouse</td> <td>5</td> <td></td> </tr> <tr> <td>Some recent advances in the study of insect freezing - R.W. Salt</td> <td>5</td> <td></td> </tr> <tr> <td>A study of the sensilla of the larval head of the yellow fever mosquito, <em>Aedes aegypti</em> - L.R. Ko</td> <td>6</td> <td></td> </tr> <tr> <td>A technique for testing insect susceptibility to fumigants - C.R. Ellis</td> <td>6</td> <td></td> </tr> <tr> <td>Differences in susceptibility of cutworm species to insecticides in the laboratory - S. McDonald</td> <td>7</td> <td></td> </tr> <tr> <td>Water absorption and development in eggs of the prairie grain wireworm, <em>Ctenicera destructor</em> (Brown) in relation to temperature - J.F. Doane</td> <td>7</td> <td></td> </tr> <tr> <td>Some predictions in zoogeography - D.R. Whitehead</td> <td>8</td> <td></td> </tr> <tr> <td>The carabid fauna at the gates of hell - G.E. Ball</td> <td>8</td> <td></td> </tr> <tr> <td>Fire: A possible zoogeographic isolation mechanism of the Mexican transvolcanic belt - T.L. Erwin</td> <td>9</td> <td></td> </tr> <tr> <td>Territorial relationships of the dragonfly, <em>Libellula quadramaculata</em> L. - F. Conner</td> <td>9</td> <td></td> </tr> <tr> <td>Intertidal insects of California - W.G. Evans</td> <td>10</td> <td></td> </tr> </tbody> </table> Solar cookers, flowers, and insects - P.G. Kevan ...... 11 The effect of spring maximum temperatures and fall embryological development on the date of hatching in Melanoplus sanguinipes (Fabricius) (Orthoptera: Acrididae) - R.L. Randell ........................................ 11 Analysis of oviposition behaviour of grasshoppers in the laboratory - P.W. Riegert & G.L. Gilkinson .......... 12 Serological methods in ecological research - J.H. Frank ....................................................................... 
12 Problems in estimating natality in a fruit-infesting insect with overlapping generations - G. Pritchard ...... 13 An outline of the Expo 67 shadfly control project - F.J.H. Fredeen ...................................................... 13 Pleistocene fossil Coleoptera from Alaska - J.V. Matthews ................................................................. 14 Society Business Minutes of the Executive Meeting, October 19, 1967 ...... 15 Minutes of the Fifteenth Annual Business Meeting, Part one, October 20 ............................................. 16 Part two, October 21 .............................................. 18 Financial Statement for the Year Ending January, 1968 ... 22 List of Entomological Society of Alberta prize winners, 1954 to 1967 ................................................................ 23 List of Entomological Society of Alberta insect collection competition winners, 1953 to 1967 ..................... 24 List of the Presidents of the Entomological Society of Alberta, 1953 to 1967 ........................................ 28 Obituary ...................................................................... 29 Membership List .......................................................... 30 OPENING ADDRESS L. Burgess Canada Agriculture Research Station Saskatoon, Saskatchewan Members of the Entomological Society of Alberta, members of the Entomological Society of Saskatchewan, honoured guests: It gives me great pleasure to welcome you to this joint annual meeting of our two Societies. I am certain that this is a very happy occasion, and I trust that it will also be a very rewarding one, wherein we learn of the entomological interests, accomplishments, aspirations and philosophies of others, wherein items of common interest are freely discussed, and wherein old friendships are renewed and new friendships are made. On behalf of the Entomological Society of Saskatchewan, I express to the Entomological Society of Alberta our sincere thanks for inviting us to meet with you at this time. It has been a great pleasure working along with you in preparing for this meeting. We hope that you will meet with us sometime in the future in Saskatchewan. In the immediate future we hope that we will see many of you at the Entomological Society of Canada Meetings that are being held in Saskatoon in August of 1968. On behalf of everyone present at our joint meeting this morning, I express thanks and appreciation to the hard-working committees for the efforts that they have expended in organizing this meeting. To Dr. Craig and Dr. Church and the members of the Program Committee, to Mr. Edmunds and members of the Local Arrangements Committee, to Mrs. Hocking and members of the Ladies Program Committee, and to all others who assisted in various ways, we say "Thank you for a job well done". Now let us carry on with the scientific program of our meeting. Reproduction in Male Insects: Is there an Endocrine Involvement? A. B. Ewen and J. Saucier Canada Agriculture Research Station Saskatoon, Saskatchewan The generally accepted view that the genetic sex of an insect is determined at the moment of fertilization, depending only upon the contributions of the two gametes, should be re-examined. Recent work by Naisse (Arch. Biol. 77, 1966) clearly demonstrates that primary and secondary male sex characters are influenced by a hormone secreted by the apical tissues of the testes in the 4th instar larva. This androgenic tissue, in turn, is stimulated by the cerebral neurosecretory cells. 
This apical tissue is not germinal, but somatic. In this sense, the gonad in the insect contains the same tissues as the vertebrate gonad - germinal and interstitial - with the latter secreting androgenic hormones in both. The Formation and Histochemistry of the Spermatophore in the Caragana Blister Beetle, Lytta nuttalli Say George H. Gerber Department of Biology University of Saskatchewan Saskatoon, Saskatchewan The spermatophore is composed of two parts, a tubular portion and an amorphous mass of jelly-like material. The tube is composed of three layers derived from three chemically different materials produced by the spiral accessory glands of the male. Large quantities of a carbohydrate-protein material, produced by the vasa deferentia of the male, are found in the jelly-like mass; a thin, carbohydrate-lipid layer surrounds this material. The mass material serves to localize the spermatozoa near the spermathecal duct opening and is used in the nutrition of the female. Copulation, Oviposition and Fertility in Grasshoppers D. S. Smith Canada Agriculture Research Station Lethbridge, Alberta After a single copulation females of Melanoplus sanguinipes (Fab.) lived longer and laid more eggs than did females allowed to copulate ad. lib. The viability of these eggs was lower however so that the number of viable eggs per female was the same in both cases. Glycerol Content of Insects Collected at Lake Hazen, Ellesmere Island R. W. Salt and J. Shorthouse Canada Agriculture Research Station Lethbridge, Alberta Spring arrived early at Lake Hazen in 1967, and many insects had ceased hibernating and become active by the time J. S. arrived there. Collections were nevertheless made from May 25 to 29 and later tested for the presence of glycerol. Larvae of a lymantriid moth and dipterous larvae collected from a dead lemming contained small concentrations of glycerol, probably less than during the winter. A spider and a collembolan contained no glycerol, whereas some would be expected during hibernation. Mosquito larvae, needing no protection in their warm aquatic habitat, had no glycerol. Some Recent Advances in the Study of Insect Freezing R. W. Salt Canada Agriculture Research Station Lethbridge, Alberta Freezing of intact insects has been observed to begin only in the gut contents, both in feeding and in non-feeding forms. Isolated appendages, in which there are no digestive elements, freeze at lower temperatures than do their intact donors; their nucleation sites are not randomly located, but occur at a few preferred locations. From this it is inferred that nucleation in tissues does not occur in haemolymph or inside cells. In excised grasshopper legs, freezing starts in the femoro-tibial joint about 80% of the time. Either a structural entity in the joint or a concentration of a substance that is a good nucleator is probably responsible. Freezing temperatures of excised grasshopper legs are linearly related to leg weight. In view of the preponderance of nucleation in the femoro-tibial joint, it is clear that the determining factor is the size of the unknown nucleating entity in the joint, be it a structure or an accumulation of a substance, and not the size of the leg as such. Quality and quantity of nucleators therefore determine where and when freezing will start. Although gut contents have always been observed to nucleate first in intact insects, exceptions probably occur. A Study of the Sensilla of the Larval Head of the Yellow Fever Mosquito, *Aedes aegypti* L. R. 
Ko Department of Biology University of Saskatchewan Saskatoon, Saskatchewan A study of the sensory organs on the head of the 4th instar larva of *Aedes aegypti* was made. The various types of sensilla were discussed. A Technique for Testing Insect Susceptibility to Fumigants C. R. Ellis Department of Entomology University of Alberta Edmonton, Alberta Six strains of adult *Sitophilus granarius* and one each of *Tribolium confusum* and *Oryzaephilus surinamensis* were fumigated with ethylene dibromide and ethylene dichloride to evaluate a small chamber fumigation technique as a means of establishing susceptibility levels. Fumigation chambers were quart preserving jars. Fumigant dosages were weighed in a piece of fine glass capillary tubing. Day-to-day variation in susceptibility was less than 6% when susceptibility was expressed as a ratio of the LD50's of a test and standard strain fumigated on the same day. Differences in Susceptibility of Cutworm Species to Insecticides in the Laboratory S. McDonald Canada Agriculture Research Station Lethbridge, Alberta Comparisons were made in the laboratory between the susceptibility of fresh-molt, fifth-instar larvae of the pale western cutworm, *Agrotis orthogonia* (Morr.) and the red-backed cutworm, *Euxoa ochrogaster* (Guenee), to endrin and proposed alternate insecticides. Endrin was found to be equi-toxic to both species, whereas Dursban (0, 0, -diethyl 0-3, 5, 6-trichloro-2-pyridyl phosphorothioate) was slightly more toxic to red-backed cutworm. With AC 47470 (2-(diethoxyphosphinylimino)-4-methyl-1, 3-dithiolane) and AC 47031 (2-(diethoxyphosphinylimino)-1, 3, dithiolane), the red-backed cutworm larvae were found to be 2 and 5 times less susceptible than pale western cutworm. The weight differences exhibited between the species suggest that these differences would not be as pronounced under field conditions. Differences in susceptibility between species should be established before undertaking expensive field trials or making general control recommendations. Water Absorption and Development in Eggs of the Prairie Grain Wireworm, *Ctenicera destructor* (Brown) in Relation to Temperature John F. Doane Canada Agriculture Research Station Saskatoon, Saskatchewan Water absorption by *C. destructor* eggs was shown to be directly related to embryonic development. Eggs required the same length of time for absorption at 25°, 20°, and 15°C when this was expressed as a percentage of the total developmental period. The ecological implications were discussed. Some Predictions in Zoogeography D. R. Whitehead Department of Entomology University of Alberta Edmonton, Alberta Most zoogeographic investigations are concerned with explaining present phenomena and suggesting past history. The assumptions and conclusions are speculative and usually cannot be subjected to the test of proof, since this is dependent on the discovery of fossil evidence. To demonstrate that these speculations are meaningful it is necessary to find circumstances from which predictions can be made. Certain cases are discussed in which it is possible to suggest present distributions, based on zoogeographic speculation. Proof of such predictions lends strong support to the underlying analysis. The Carabid Fauna at the Gates of Hell George E. 
Ball Department of Entomology University of Alberta Edmonton, Alberta During the summer of 1967, the effect of volcanism on animal life was examined by means of a study of the distribution of 28 species of the Family Carabidae in the vicinity of the volcano El Paricutin, state of Michoacan, Mexico. This volcano was active during the period 1943-1952. Its lava flows and ash deposits destroyed much of the biota in its immediate vicinity. Particular attention was given to the fauna of a small hill, Capatzun, located about a mile north of the volcano. The base of Capatzun was completely surrounded by lava which isolated it from areas immediately beyond the northern edge of the lava flow. In 1967 the hill was forested, so evidently these trees survived the catastrophe, as did the trees of moderate size beyond the lava flow, in spite of inundation by ash which accumulated to a depth of several feet. Thus the vegetation was not completely destroyed and this was taken as evidence that the habitat was not completely destroyed. In contrast to the six species comprising the carabid fauna of the lava flow, the fauna of Capatzun consisted of 20 species, 18 of which were represented among the 26 species collected in areas peripheral to the lava flow. This suggests that Capatzun was not colonized by beetles that walked across the bare lava following cessation of volcanic activity. Further, many of the species on Capatzun were represented by brachypterous (i.e. flightless) individuals, and could not have colonized Capatzun by flying from a peripheral area across the intervening lava field. It was suggested, therefore, that the carabid fauna of Capatzun survived the volcanic eruption in situ. It was concluded that the fauna of an area is likely to survive a catastrophe of major proportions, as long as the habitat is not totally destroyed. Fire: A Possible Zoogeographic Isolation Mechanism of the Mexican Transvolcanic Belt Terry L. Erwin Department of Entomology University of Alberta Edmonton, Alberta The volcanic mountain belt of Mexico, which runs from western Vera Cruz to Jalisco and Colima, appears to be a barrier to dispersal of many animal species, some genera, and even a few families. It is suggested that fire, directly or indirectly, may be the mechanism that makes the transvolcanic belt a barrier to some animals and animal groups. The pine trees of the volcanic belt have much the same growth pattern as that of the Eastern United States' fire climax pine, Pinus palustris Mill. This indicates that the volcanic pine forests are frequently swept by light ground fires, eliminating all but the fire climax flora and fauna. Therefore, animals that cannot establish themselves in a fire climax forest are presented with an unpenetrable barrier. Other animals become dependent upon the fire climax forest and are not found outside its boundaries. It is suggested that future work done in this area by ecologists would aid zoogeographers in determining the effectiveness of the volcanic belt as a barrier to animal movements. Territorial Relationships of the Dragonfly, Libellula quadrimaculata L. Floyd Conner Department of Biology University of Saskatchewan Saskatoon, Saskatchewan Territorial behaviour is a form of competition which is as yet poorly understood. Libellula quadrimaculata was chosen as the object of a study of this behaviour because of the ease of observation of large numbers of interactions between individuals of this species. 
Territories are found over shallow water, and consist of a perch area surrounded by a portion of oviposition habitat. The owner of a territory chases *Libellula quadrimaculata* males and dragonflies of some other species from this area. Females entering a territory mate with the owner and oviposit within the territory while guarded by the owner. Both mating and oviposition may be hindered if male density in the area is very high. Few highly specialized behavioural sequences are apparent to the author, but some frequently-recurring patterns have been noted. One of these is "flying parallel". A similar territorial behaviour has been described in a great diversity of animals. Territories were mapped and found to coincide closely with micro-topographic areas, or micro-areas, delimited by vegetation. The extent of localization of males and the use of a given micro-area by consecutive males was studied by marking individuals with spray paint on the wing tips. Considerable movement of individuals along a region of breeding habitat was observed, but use of individual micro-areas was relatively constant. **Intertidal Insects of California** William G. Evans Department of Entomology University of Alberta Edmonton, Alberta Insects are found in all the major types of marine littoral habitats of California, such as rocky shores, sandy beaches, mud flats and salt marshes. In these habitats they are present in all the zones which are differentiated by degree of exposure to tidal level and to wave action. This intertidal insect fauna however, is imbalanced, consisting of only four orders, of which the Coleoptera and Diptera predominate. Several species of Collembola inhabit all zones in the sandy beach and rocky shore habitats while one or two species of Thysanura are found in rock crevices. At least ten families of beetles, consisting mostly of predators and scavengers, are found in all zones. Of these, the Staphylinidae are by far the dominant group, occupying such diverse habitats as the interstices of algal holdfasts in the lowest tidal level, the rock crevice habitat extending from low water to high water and the high beach sand habitat. The Diptera are represented by at least six families. The larvae of tipulids and chironomids feed on algae growing at all tidal levels on rocky surfaces while the larvae of canaceids and anthomyids are scavengers on algae, mainly browns and reds, washed up on beaches. Solar Cookers, Flowers, and Insects Peter G. Kevan Department of Entomology University of Alberta Edmonton, Alberta Flowers act as parabolic reflectors of solar radiation. The use insects make of this heat source is considered. The Effect of Spring Maximum Temperatures and Fall Embryological Development on the Date of Hatching in *Melanoplus sanguinipes* (Fabricius) (Orthoptera : Acrididae) Robert Latham Randell Canada Agriculture Research Station Saskatoon, Saskatchewan The date of hatching in the economically important grasshopper species of Saskatchewan is extremely variable. Due to the concentration of control measures on the early nymphal instars there is considerable interest in the prediction of the date of hatch. Analysis of hatching dates, estimated from the age distribution of the post-hatch population, from two areas in Saskatchewan indicates the importance of both embryological development in the preceding fall and the daily mean maximum temperature in early March, late May, and early June. 
Significant correlations were found between the estimated date of hatch for a period of ten years and the following independent variables: (1) embryological development in the preceding fall and (2) the orthogonal polynomials; calculated by the method of Fisher, from \( x^0 \) to \( x^3 \) for the daily maximum screen temperatures from March 2 to June 28 bulked into 24 five day periods. Analysis of Oviposition Behaviour of Grasshoppers in the Laboratory P. W. Riegert and G. L. Gilkinson Canada Agriculture Research Station Saskatoon, Saskatchewan When given a choice, in the laboratory, grasshoppers will choose their oviposition sites with great discrimination and care. _Melanoplus sanguinipes_ preferred soil surface temperatures of 30 C but did not wholly reject sites at 40 C or 20 C. _Melanoplus bivittatus_ preferred a wider temperature spectrum of 30 to 40 C while _Melanoplus packardii_ almost ignored the coolest site of 20 C and deposited most of its eggs at 40 C. When given a choice of coarse (0.84 mm diameter), medium (0.42 mm) or fine (0.25 mm) sand, as well as three soil surface temperatures, _M. sanguinipes_ deposited more eggs in the fine- and medium-textured soil as long as these were maintained at 30 and 40 C. _M. packardii_ was found to oviposit in any type of soil providing the surface temperature was near 40 C. Furthermore, this species laid eggs in completely dry soil at the high temperature and ignored the moist sand; this presumably because the latter was cooler due to evaporation. None of the three species laid in soil that was sodden. These findings, when related to a roadside habitat, indicate that favoured oviposition sites of _M. packardii_ are on the ditch crowns. Here good drainage leaves the soil fairly dry and maximum solar radiation keeps temperatures fairly high. _M. sanguinipes_ will favour the relatively flat area under the field fence line where soils are usually fine textured (drift ridges) and not too wet. The bottom of the roadside ditch is generally unsuitable as an oviposition site because the soil is often sodden and when dry, too hard and compact. Serological Methods in Ecological Research J. H. Frank Department of Entomology University of Alberta Edmonton, Alberta Serological techniques are of great value to the ecologist in tracing food chains. They may be used when more conventional methods fail or are too laborious. Despite the fact that the method by which antibodies are produced has only recently been elucidated, serological techniques have been used by zoologists and botanists, very occasionally, for over 40 years. They may be used quite empirically, without a great knowledge of biochemistry. Both immunoelectrophoresis and direct diffusion methods are useful to taxonomists in separating species and higher categories. Ecologists, perhaps more than taxonomists, are slow to adopt these methods to their own studies. Problems in Estimating Natality in a Fruit-Infesting Insect with Overlapping Generations Gordon Pritchard Department of Biology University of Calgary Calgary, Alberta Attempts are being made to estimate the number of eggs laid in a season by the Queensland fruit fly, Dacus tryoni (Diptera; Tephritidae), a serious pest of cultivated fruit in eastern Australia. Problems are of two kinds - technical and statistical. 1. The eggs are small and the same colour as the fruit. 
The difficulties in finding than were overcome by first localizing the possible sites of egg deposition with a water-soluble dye, and then clearing away the fruit tissue enzymatically. 2. Eggs are laid continually throughout the summer and they are laid into fruit which eventually falls and rots. Sampling must be restricted to fruit on the tree. But some fruit survives more than one sampling period, while some will receive eggs and fall between sampling periods. The final model depends on the number of hatched and unhatched eggs present at any one time, the speed of development of eggs at different times in different varieties of fruit, the percentage of eggs that are infertile, and the rate of fruit fall. An Outline of the Expo 67 Shadfly Control Project F. J. H. Fredeen Canada Agriculture Research Station Saskatoon, Saskatchewan Abatement of a variety of nuisance insects, mainly Trichoptera, at the Expo 67 site was achieved by four 16-minute applications of less than 0.4 ppm of Rhothane to the St. Lawrence River. Each larvicide treatment was effective for only about 4 miles downriver. No harmful effects to fish or birds were expected or observed, and only a minor increase in the natural background level of DDT or Rhothane in the St. Lawrence River waters is expected. Pleistocene Fossil Coleoptera from Alaska John V. Matthews, Fr. Department of Geology University of Alberta Edmonton, Alberta North America - particularly Alaska - offers an opportunity for the study of Pleistocene paleoenvironments using the evidence accumulated from an examination of fossil insects. That such an approach is worthy of investigation is revealed by the success with similar research projects of G. R. Coope in England. In Alaska insect fossils - most of which are from Coleoptera - are abundant in organic sediments. Often only fragments of the entire insect are preserved; however, such fossils may be identified by careful comparison with museum specimens. Optimal fossil preservation is found in peats. In such situations articulated fossils possessing such important characters as the male genitalia have been found. Even peats which are known to be older than 700,000 years contain partially articulated Coleoptera fossils. Recently, a pilot study of three late Pleistocene Coleoptera assemblages from Fairbanks, Alaska has established the feasibility of using insect fossils for environmental reconstruction in Alaska. Three examples should reveal the variety of information which such studies and those to be performed will yield. First, using extremely old samples such as the one mentioned above, the longevity of species of insects may be established. If significant evolution has occurred within the last 700,000 years, statements concerning the phylogeny of certain arctic and sub-arctic insect groups should be possible. Second, the study of insect assemblages associated with extinct small mammal fossils should enable one to infer the type of environment in which the extinct vertebrates lived. Third, species of insects which are now very rare may be relatively abundant in fossil assemblages. In at least one case (Silpha coloradensis Wick.) the associations of the fossils lead the author to infer some of the heretofore unknown ecological requirements of the species. ENTOMOLOGICAL SOCIETY OF ALBERTA Minutes of Executive Meeting October 19, 1967 An executive meeting was held in the MacDonald Hotel, Edmonton at 8:00 p.m., October 19, 1967. Present were: B. Hocking (President), Kathleen Ball, G.E. Ball, H. Cerezke, M. 
Chance, H. Tripp, Dr. Ruby Larson and L. K. Peterson. Minutes of the last executive meeting and the last annual meeting in Banff were reviewed and business arising from these minutes was discussed. Considerable discussion was held on membership fees versus student fees and the complication that arises with these differences. It was agreed a notice of motion be presented at the annual meeting to change this section of our by-laws. The following committees were appointed by the executive: Nominating Committee - L.A. Jacobson (Chairman) W.G. Evans A. Raske Resolutions Committee - S. McDonald (Chairman) W.A. Charnetski Insect Collection Competition - Mr. C.E. Lilly was appointed as Chairman of Judges of insect collections with the power to add to his committee as was necessary to complete the job. The nominating committee will take note that Mr. P. Graham will not be able to continue as a representative member with Mr. J.B. Gurba on the Alberta Conservation Council. Requests for back issues of the Annual Meeting Proceedings were discussed at length and it was decided that this should be brought to the attention of the Society as a whole at the annual meeting. These requests, if met, may influence the type of papers presented by members. A Progress Report is requested from the chairman or co-chairman of the committee to establish guide lines for the treasurer of the Society. Dr. K.E. Ball reported a balance of $497.63 for Treasurer's interim report. The meeting closed at 10:00 p.m. with pleasant hospitality. Secretary ENTOMOLOGICAL SOCIETY OF ALBERTA Minutes of the 15th Annual Business Meeting - Part One. October 20, 1967 The annual meeting of the Entomological Society of Alberta was held in the MacDonald Hotel, Edmonton, Alberta, October 20, 1967. The meeting was opened by Dr. B. Hocking, President, who welcomed the members of the Entomological Society of Saskatchewan to sit in on our deliberations. The minutes of the 14th Annual Meeting held in Banff were adopted as circulated in the proceedings on a motion by P. E. Blakeley, seconded by J. A. Shemanchuk, with the following correction: Under the section on Revenue, Page 19 of 1966 proceedings it reads, <table> <thead> <tr> <th>Entomological Society of Canada - Program speakers - Miscellaneous expenses</th> <th>Budgeted</th> <th>Received</th> </tr> </thead> <tbody> <tr> <td></td> <td>$1,250.00</td> <td>$ 500.00</td> </tr> </tbody> </table> This should read - <table> <thead> <tr> <th>Entomological Society of Canada - Program speakers - Miscellaneous expenses</th> <th>Budgeted</th> <th>Received</th> </tr> </thead> <tbody> <tr> <td></td> <td>$ 750.00</td> <td>$ 500.00</td> </tr> <tr> <td></td> <td>$ 500.00</td> <td>$ 500.00</td> </tr> </tbody> </table> CARRIED G. E. Ball moved and H. A. Tripp seconded a motion that the minutes of the executive meeting held on March 21 be adopted as read. CARRIED Moved by H. A. Tripp, and seconded by H. Cerezke, that the minutes of the executive meeting of October 19 be accepted as read. CARRIED Mr. P. E. Blakeley gave a progress report for chairman A. M. Harper on "guide lines for treasurers" as requested by the executive. President Hocking suggested that the members of the society consider a memorial to the late Dr. C. W. Farstad. This will be discussed at a general meeting the following day, Saturday, October 21. The students' award committee presented a report as per attached to minutes, on the students' award prize. 
It was reported that the society library had been found in Calgary and a plea was put forth for all past editors to check their files for extra copies of the proceedings. Two complete sets of the proceedings should be kept in the archives. A notice of motion to change the by-laws to alter students fees was put by Ruby Larson. Interim Report was given by the treasurer, K. E. Ball, that we had a balance of $497.63 at the start of the meetings. Moved by W. G. Evans, seconded by R. E. Leech we accept this report. CARRIED Because of a misunderstanding between the MacDonald Hotel and the society there was a large deficit for the liquor bar on Thursday evening. It was moved by G. E. Ball that those drinking the Thursday evening at the bar should give the treasurer $0.75 per drink and because of the misunderstanding the society should absorb any remaining costs. Seconded by D. A. Craig. CARRIED The publication of the proceedings from this meeting was discussed. Since this meeting was being held as a joint meeting with the Entomological Society of Saskatchewan, it was suggested that each society publish its own papers as done regularly along with the titles of the papers from our sister society. If this was done then there would be a complete published record with the two proceedings. After some discussion each society decided that they would share the full responsibility for financing and publication of the proceedings of the meeting. It was reported that there was a deficit of three hundred forty-five dollars ($345.00) at present, but that part of this would be defrayed through the collection of $0.75 per drink from a number of members and of a $100 contribution from the University of Alberta. Meeting was adjourned at 2:00 p.m. to continue with the paper presentation as program dictated. The second section of the business meeting was called to order by President B. Hocking, Saturday, October 21, 1967. Business arose from the first section of the minutes of the Part One meeting of the previous day. The first item was a memorial to Dr. C. W. Farstad. It was decided that the executive should take action on a memorial volume and to solicit a suitable inscription from the Lethbridge group for the book to be placed in the Strickland Memorial Library. Dr. N. D. Holmes accepted the responsibility for the suitable inscription. There was a vociferous and lengthy discussion on the sale and distribution of our annual proceedings. It was moved by G. E. Ball that the proceedings be treated as a normal publication and distribution in accordance with this through sales and to members. Motion was seconded by D. A. Craig. R. W. Salt stated that a number of papers were preliminary reports and not available for publication. He also stated that if the proceedings were widely distributed beyond members then he would not submit an abstract but give a title only. N. D. Holmes stated that this was considered an inter-laboratory conference and not a scientific meeting. Otherwise they would be restricted in travel. He also stated he would request that all members of his staff at Lethbridge not submit abstracts for the proceedings if the motion was passed. G. E. Ball, on consultation with D. A. Craig, withdrew the motion. 
It was finally decided that the proceedings be distributed to members in good standing as hitherto, and that persons and organisations requesting copies be sent application forms for membership and advised that copies could only be supplied on receipt of the fee for the year in question and subject to availability. It was also requested that all surplus issues be sent to the University of Alberta, Edmonton.

Ruby Larson moved that By-law 1(a) be amended to read, "The annual fee for full membership shall be $2.00"; that By-laws 1(b) and 1(d) be deleted; and that By-law 1(c) be renumbered to 1(b). Seconded by G. E. Ball. CARRIED

It was then moved by R. H. Gooding, seconded by D. M. Rosenberg, that the registration fees for students attending the Entomological Society of Canada meetings be cancelled when these are held in Alberta with the Entomological Society of Alberta as hosts. An amendment to the motion was put by G. E. Ball, seconded by M. Chance, that the fees be cancelled for students who are members of the Entomological Society of Canada. CARRIED. P. E. Blakeley put a further amendment, seconded by N. D. Holmes, that the registration fees be reduced for student members of the Entomological Society of Canada. CARRIED. The motion as twice amended was then put and CARRIED.

Report of the Nomination Committee - L. A. Jacobson, chairman of the nomination committee, presented the following slate of officers for 1968.

<table> <thead> <tr> <th>Position</th> <th>Name</th> </tr> </thead> <tbody> <tr> <td>President</td> <td>H. A. Tripp</td> </tr> <tr> <td>Vice President</td> <td>J. A. Shemanchuk</td> </tr> <tr> <td>Secretary</td> <td>H. F. Cerezke</td> </tr> <tr> <td>Treasurer</td> <td>P. E. Blakeley</td> </tr> <tr> <td>Editor</td> <td>R. E. Stevenson</td> </tr> <tr> <td>Directors</td> <td>J. H. McGeheay</td> </tr> <tr> <td></td> <td>G. E. Swailes</td> </tr> <tr> <td></td> <td>D. M. Rosenberg</td> </tr> <tr> <td>Regional Director</td> <td>R. H. Gooding (to complete 1 year term vacated by G. E. Ball)</td> </tr> <tr> <td>Representative on Alberta Conservation Council</td> <td>P. G. Kevan</td> </tr> </tbody> </table>

Because of Article 5 of the Constitution, P. E. Blakeley must be replaced. L. A. Jacobson put an amendment to the motion, nominating G. N. Lanier to replace Mr. Blakeley. Seconded by W. G. Evans. N. D. Holmes moved nominations cease, seconded by G. E. Ball. CARRIED

P. E. Blakeley was appointed for a three year term as chairman of a treasurers' committee and was directed to submit guide lines for treasurers and a draft change in the constitution to the executive, for distribution to the membership at large as a notice of motion.

Report of the Resolution Committee - S. McDonald, Chairman, and W. A. Charnetski submitted the following resolutions:

'Whereas the success of the program of the 15th Annual Meeting of the Entomological Society of Alberta, held jointly with the Entomological Society of Saskatchewan, was in large measure attributable to the following parties,

BE IT RESOLVED that letters of appreciation be sent to:
- The University of Alberta for sharing the cost of the banquet.
- The Manager of the MacDonald Hotel for the accommodations and services.
- Mrs. Hocking and her Committee for arranging the Ladies' Program.
- Dr. Hocking for providing the after dinner entertainment.

BE IT RESOLVED that a vote of thanks be given to:
- The members of both societies who comprised the program committee.
- The members of the society who comprised the Committee for the Local Arrangements.'
The report was accepted with some reservations concerning the MacDonald Hotel.

Insect Collection Competition - C. E. Lilly reported that the insect collection competition drew one of the largest numbers of entries we have had. There were 14 entries this year: 1 senior, 3 junior and 10 in the challenge competition.

N. D. Holmes reported that the Entomological Society of Canada meetings held in Montreal were not as good as they had been in the past. Mr. I. S. Lindsay has resigned as secretary and Mr. D. G. Peterson is the new secretary. The Zoological Record contribution will be cancelled until the Entomological Society of Canada receives a letter acknowledging receipt of the contribution. The President said he would write to D. G. Peterson about this.

Moved by N. D. Holmes, seconded by W. G. Evans, that the University of Lethbridge be included in the disposal of the library. The sequence to be followed is University of Alberta, University of Calgary, Canada Department of Agriculture Research Station in Lethbridge, University of Lethbridge; each institution to retain such items as it wished. CARRIED

It was moved by G. E. Swailes and seconded by C. E. Lilly that W. G. Evans and G. Pritchard be appointed as auditors. CARRIED

Moved by N. D. Holmes, seconded by C. E. Lilly, that the signing authorities for the treasurer be G. E. Swailes and P. E. Blakeley. CARRIED

G. E. Swailes moved that we recommend to the Entomological Society of Canada that their meeting be held later in the calendar year. Seconded by G. E. Ball. CARRIED

President Hocking then thanked the executive and various committees for their fine work and assistance. Adjournment was moved by C. E. Lilly, seconded by M. Chance, at 1:00 p.m. CARRIED

FINANCIAL STATEMENT OF 1967

Receipts
Bank balance transferred from Lethbridge .................. $ 687.74
Membership fees:
Entomological Society of Alberta
  84 full members @ $2.00 ................................. 168.00
  14 student members @ $1.00 .............................. 14.00
Entomological Society of Canada
  34 full members @ $8.00 ................................. 272.00
  9 student members @ $4.00 ............................... 36.00
Bank interest .............................................. 15.01
Sale of bulletins .......................................... 45.00
Annual meeting of Society
  71 registrations @ $5.00 ................................ 355.00
  34 wives banquet @ $3.00 ................................ 102.00
University of Alberta contribution ......................... 100.00
Liquor contributions ....................................... 81.00
Sale of Alberta Natural History Books - 40% of books less exchange ... 94.80
  less $1.30 exchange ...................................... 1.30
Total receipts ............................................. 1970.55

Expenditures
Fees to Entomological Society of Canada .................... 302.00
Insect collection competition prizes and postage ........... 48.35
University of Alberta Prize ................................ 50.00
Proceedings ................................................ 132.23
Donation Zoological Society of London ...................... 20.30
Stationery ................................................. 69.44
Postage .................................................... 4.20
Miscellaneous ($10 overcharge Tripp, cards, book) .......... 11.23
Annual meeting of Society
  Telephone calls, D. Craig ................................ 24.01
  Hotel Macdonald .......................................... 770.60
  Programmes ............................................... 40.71
Total expenditures ......................................... 1473.07

Balance as at December 31, 1967 ............................ $ 496.18

Audited: W. G. Evans, G. Pritchard
K. E. Ball, Treasurer

<table> <thead> <tr> <th>Year</th> <th>Name</th> <th>Position</th> </tr> </thead> <tbody> <tr> <td>1954</td> <td>Roman P.
Fodchuk</td> <td>Associate Professor, University of Guelph.</td> </tr> <tr> <td>1956</td> <td>Waldemar Klassen</td> <td>Insect Geneticist, U.S.A.</td> </tr> <tr> <td>1957</td> <td>Ronald H. Gooding</td> <td>Assistant Professor, University of Alberta.</td> </tr> <tr> <td>1958</td> <td>Natalka Horeczko</td> <td>Medicine, Edmonton.</td> </tr> <tr> <td>1959</td> <td>Herbert Cerezke</td> <td>Forest Biology, Calgary, Alberta.</td> </tr> <tr> <td>1960</td> <td>Max W. McFadden</td> <td>Post-doctoral, University of Washington.</td> </tr> <tr> <td>1961</td> <td>Gordon Pritchard</td> <td>Department of Biology, University of Calgary.</td> </tr> <tr> <td>1963</td> <td>Doreen E. Waldbauer</td> <td>Finishing M.Sc., Edmonton.</td> </tr> <tr> <td>1964</td> <td>Walter Jerry Awram</td> <td>Working on Ph.D., Rothamsted.</td> </tr> <tr> <td>1965</td> <td>David J. Larson</td> <td>Canada Department of Agriculture, Lethbridge, Alberta.</td> </tr> <tr> <td>1966</td> <td>Mary M. Galloway</td> <td>Working for M.Sc., Edmonton.</td> </tr> <tr> <td></td> <td>David M. Rosenberg</td> <td>Working for Ph.D., Edmonton.</td> </tr> <tr> <td>1967</td> <td>G. Jo Turner</td> <td>Producing <em>Quaestiones entomologicae</em></td> </tr> </tbody> </table> INSECT COLLECTION COMPETITION History of Awards 1954 First Prize, Senior - Norman Rollingson, 3309 Parkside Drive, Lethbridge. Second Prize, Senior - Ronald Law, 1631 - 21 Avenue N.W., Calgary. Third Prize, Senior - Fred Vincent, 2340 - 24 Avenue N.W., Calgary. First Prize, Junior - Donna Mae Nattrass, Manyberries. Second Prize, Junior - Wayne Nattrass, Manyberries. Third Prize, Junior - Cam Huth, 2719 - 18 Street N.W., Calgary. 1955 First Prize, Senior - Donna Mae Nattrass, Manyberries. Second Prize, Senior - Joy Molyneux, 1124 - 9 Street E., Calgary. Third Prize, Senior - Hilary Anderberg, 927 - 7 Avenue W., Calgary. First Prize, Junior - Wayne Nattrass, Manyberries. Second Prize, Junior - Kenneth Beswick, Spring Coulee. Third Prize, Junior - Clinton Walker, 11224 - 87 Avenue, Edmonton. 1956 First Prize, Senior - Doug Salt, c/o Dr. R. W. Salt, Research Station, Canada Agriculture, Lethbridge. Second Prize, Senior - Ron Popik, Glen Park, Calmar. First Prize, Junior - Kenneth Beswick, Spring Coulee. Second Prize, Junior - Brian Martin, 9107 - 117 Street, Edmonton. 1957 First Prize, Senior - Kenneth Beswick, Spring Coulee. Second Prize, Senior - Doug Salt, c/o Research Station, Canada Agriculture, Lethbridge. Third Prize, Senior - Jane Moonen, Millet. First Prize, Junior - Christine Marshall, Howsann School, RCAF Station, Claresholm. Second Prize, Junior - Bruce Martin, 9107 - 117 Street, Edmonton. Third Prize, Junior - Gary Brown, 42 Cambridge Road, Calgary. 1958 First Prize, Senior - Andrew and Myron Baziuk, Redwater. Second Prize, Senior - David Larson, 1201 - 24 Street S., Lethbridge. Third Prize, Senior - Keith and Neil Redding, 648 - 14 Street S., Lethbridge. Consolation, Senior - Jack Haberman, 3115 - 10 Avenue A S., Lethbridge. First Prize, Junior - Joe Shorthouse, 2317 - 13 Avenue S., Lethbridge. 1959 First Prize, Senior - David J. Larson, 1201 - 24 Street S., Lethbridge. Second Prize, Senior - Jack Haberman, 3115 - 10 Avenue A S., Lethbridge. Third Prize, Senior - Joseph Shorthouse, 2317 - 13 Avenue S., Lethbridge. No Junior Prizes were awarded this year. 1960 First Prize, Senior - David J. Larson, 1201 - 24 Street S., Lethbridge. Second Prize, Senior - Joseph Shorthouse, 2317 - 13 Avenue S., Lethbridge. 
Third Prize, Senior - Kenneth Richards, 2209 - 10 Avenue S., Lethbridge. Honorable Mention, Senior - M. S. Carleton, Banff. Consolation, Junior - Lacombe School, Grade 8. 1961 First Prize, Senior - Joseph Shorthouse, 2317 - 13 Avenue S., Lethbridge. Second Prize, Senior - Kenneth Richards, 2209 - 10 Avenue S., Lethbridge. Third Prize, Senior - M. S. Carleton, Lethbridge. No Junior Prizes were awarded this year. 1962 General Collection, First Prize (one entry) - Kenneth Richards, 2209 - 10 Avenue S., Lethbridge. Challenge Competition (two entries) - Draw with two winners, David Larson and Joseph Shorthouse (both of Lethbridge). 1963 First Prize, Junior - Robert Iverson. Second Prize, Junior - Gordon Bridgewater. Third Prize, Junior - John Kloppenborg. First Prize, Challenge Event - Joe Shorthouse. Second Prize, Challenge Event - Ken Richards. No Senior Prizes were awarded this year. 1964 First Prize, Senior - Robert Iverson, Edmonton. First Prize, Junior - Beverly Ann Lambert, Edmonton. No other prizes were awarded. 1965 No prizes awarded. 1966 First Prize, Senior - Norman Wood, 9135 - 142 Street, Edmonton. Second Prize, Senior - Alan Mathieson, Box 695, Olds. First Prize, Junior - Selma Scott, 140 Lamone Street, Calgary. Second Prize, Junior (Draw) - Hugh Godwin, Olds, and Cecelia Williams, Taber. 1967 First Prize, Senior - Donald Wayne Chomyn, Box 977, Leduc. First Prize, Junior - Selma Scott, 140 Lamone Street, Calgary. Second Prize, Junior - Hugh Godwin, Box 760, Olds. Third Prize, Junior - John Acorn, 14416 - 78 Avenue, Edmonton. First Prize, Open - Sharon Erickson, O.A.V.C., Olds. Second Prize, Open - Ross Hyatt, Box 128, Bowden. Third Prize, Open - Joseph Hartwell, Box 125, Olds. Honorable Mention (Open) - Norman Tensen, O.A.V.C., Olds, Alan and John Mathieson, Box 695, Olds. PRESIDENTS OF THE ENTOMOLOGICAL SOCIETY OF ALBERTA Strickland, E. H. ....................... 1953 Painter, R. H. ......................... 1954 Hurtig, H. .............................. 1955 Hopping, G. R. ......................... 1956 Farstad, C. W. ......................... 1957 Ball, G. E. .............................. 1958 Brown, C. E. ............................ 1959 Jacobsen, L. A. ......................... 1960 Edmunds, J. W. ......................... 1961 Van Veen, N. W. ...................... 1962 Holmes, N. W. .......................... 1963 Evans, W. G. ........................... 1964 Hartland-Rowe, R. C. B. .............. 1965 Salt, R. W. .............................. 1966 Hocking, B. ............................. 1967 OBITUARY JOHN HUGH (JACK) BROWN died on 5 December 1967 at Edmonton. He was born 18 May 1904 at Parrsboro, Nova Scotia, the seventh son and eleventh of thirteen children of George Hibbert and Adelia Anne (née Lamb) Brown. Jack was a member of the Entomological Society of Alberta from 1953 until his death and participated with enthusiasm in meetings of the Society until, in his last few years, ill health interfered with this as with so many of his other activities. He was educated in agricultural disciplines in Nova Scotia, and in Alberta at the Olds School of Agriculture and the University where he earned the B.Sc. (1940) and M.Sc. (1942) degrees in the Department of Entomology. In the wartime emergency when Professor Strickland was called to military duty he taught entomology from 1942-1944, but his abiding interest was in public health entomology and he never again left this. 
From 1943 onwards he was the Alberta Department of Public Health's authority in this field, conducting ectoparasite and plague surveys, directing the Department's Division of Entomology, developing the urgently needed Poison Control Centers, and publishing some sixty scientific papers, leaflets, bulletins, pamphlets and articles. Some years before his death he generously deposited his substantial collections of ectoparasites and reprints in the Department of Entomology at the University in Edmonton where they will remain as a memorial to him and where a memorial volume in the library will be inscribed to him on behalf of the Society. MEMBERSHIP LIST, 1967-1968 Honorary Members Hopping, Mr. G. R. 9924 Fifth Street S. E., Calgary. Painter, Mr. R. H. 422 - 25 Street South, Lethbridge. Seamans, Mr. H. L. 581 Fraser Avenue, McKellar Park, Ottawa. White, Mr. R. M. R. R. 1, West Summerland, British Columbia. Members Ball, Dr. G. E. Entomology Department, University of Alberta, Edmonton. Ball, Mrs. K. Entomology Department, University of Alberta, Edmonton. Barron, Mr. J. K. Entomology Department, University of Alberta, Edmonton. Berg, Dr. C. O. Department of Entomology, and Limnology, Cornell University, Ithaca, N. Y. Blakeley, Mr. P. E. Research Station, Canada Agriculture, Lethbridge. Brown, Mr. C. E. Department of Forestry, Centennial Tower Building, 400 Laurier Avenue West, Ottawa 4. Burgess, Miss Angie Entomology Department, University of Alberta, Edmonton. <table> <thead> <tr> <th>Name</th> <th>Address</th> </tr> </thead> <tbody> <tr> <td>Burgess, Mr. G.D.</td> <td>Biology Department, University of Calgary.</td> </tr> <tr> <td>Carr, Mr. J.L.</td> <td>R.R. 4, Calgary.</td> </tr> <tr> <td>Cerezke, Mr. H.F.</td> <td>Forest Research Laboratory, 721 Public Building, Calgary.</td> </tr> <tr> <td>Chance, Mr. M.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Chance, Mrs. M.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Charnetski, Mr. W.A.</td> <td>Research Station, Canada Agriculture, Lethbridge.</td> </tr> <tr> <td>Chiang, Mr. P.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Chomyn, Mr. D.</td> <td>4515 - 46 Ave., Leduc.</td> </tr> <tr> <td>Craig, Dr. D.A.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Depner, Dr. K.R.</td> <td>Research Station, Canada Agriculture, Lethbridge.</td> </tr> <tr> <td>Edmunds, Mr. J.W.</td> <td>Alberta Department of Agriculture, 10405 - 100 Ave., Edmonton.</td> </tr> <tr> <td>Ellis, Mr. C.R.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Erwin, Mr. T.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Evans, Dr. W.G.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> </tbody> </table> Ewen, Dr. A. B. Canada Department of Agriculture, Research Station, Saskatoon. Frank, Dr. J. H. Entomology Department, University of Alberta, Edmonton. Gooding, Dr. R. H. Entomology Department, University of Alberta, Edmonton. Griffiths, Mr. G. C. D. Entomology Department, University of Alberta, Edmonton. Gurba, Mr. J. B. Crop Protection and Pest Control, Alberta Department of Agriculture, Edmonton. Gushul, Mr. E. T. Research Station, Canada Agriculture, Lethbridge. Harper, Dr. A. M. Research Station, Canada Agriculture, Lethbridge. Hartland-Rowe, Dr. R. C. B. Zoology Department, University of Calgary. Haufe, Dr. W. O. 
Research Station, Canada Agriculture, Lethbridge. Huang, Mr. C. T. Entomology Department, University of Alberta, Edmonton. Hilton, Mr. D. # 7, 6708 - 90 Ave., Edmonton. Hobbs, Dr. G. A. Research Station, Canada Agriculture, Lethbridge. Hocking, Dr. B. Entomology Department, University of Alberta, Edmonton. Holmes, Dr. N. D. Research Station, Canada Agriculture, Lethbridge. Hopkins, Mrs. M. E. P. 3 Canyon Drive, Calgary. Jacobson, Mr. L.A. Research Station, Canada Agriculture, Lethbridge. Johnson, Dr. P.C. Intermountain Forest and Range, Experiment Station, Federal Building, Missoula, Montana, 59801. Kevan, Mr. P. Entomology Department, University of Alberta, Edmonton. Khatamian, Mr. H. 10551 - 79 Ave., Edmonton. Krishnan, Dr., Y.E.S. Entomology Department, University of Alberta, Edmonton. Kush, Mr. D.K. Forest Research Laboratory, 721 Public Building, Calgary. Lanier, Dr. G.N. Department of Forestry and Rural Development, 132 - 9th Ave. S.W., Calgary. Larson, Mr. D.J. Research Station, Canada Agriculture, Lethbridge. Larson, Mrs. D.J. Biology Department, University of Lethbridge, Lethbridge. Larson, Dr. Ruby I. Research Station, Canada Agriculture, Lethbridge. Lee, Mr. F.C. 633 Gore Avenue, Vancouver, British Columbia. Leech, Mr. R.E. Entomology Department, University of Alberta, Edmonton. Lilly, Mr. C.E. Research Station, Canada Agriculture, Lethbridge. Lipsit, Mr. R. Chemagro Ltd., P.O. Box 1208, Calgary. McDonald, Mr. S. Research Station, Canada Agriculture, Lethbridge. McGeheay, Mr. J.H. Department of Forestry and Rural Development, 132 - 9th Ave. S.W., Calgary. Nelson, Dr. W.A. Research Station, Canada Agriculture, Lethbridge. Nimmo, Mr. A. Entomology Department, University of Alberta, Edmonton. Pankiw, Dr. P. Research Station, Beaverlodge. Pearson, Mr. T.R. Entomology Department, University of Alberta, Edmonton. Peterson, Mr. L.K. Field Crops Branch, Alberta Department of Agriculture, Edmonton. Pritchard, Dr. G. Biology Department, University of Calgary, Calgary. Puca, Miss Amalia Box 117, Macdonald College, Quebec. Reddy, Mr. M.J. Entomology Department, University of Alberta, Edmonton. Reid, Dr. R.W. Forest Research Laboratory, 721 Public Building, Calgary. Richards, Mr. K.W. Entomology Department, University of Alberta, Edmonton. Rosenberg, Mr. D.M. Entomology Department, University of Alberta, Edmonton. Safaranyk, Mr. L. Forest Research Laboratory, 721 Public Building, Calgary. Salt, Dr. R.W. Research Station, Canada Agriculture, Lethbridge. <table> <thead> <tr> <th>Name</th> <th>Organization</th> </tr> </thead> <tbody> <tr> <td>Schaaf, Mr. A.C.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Scott, Mr. J.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Sehgal, Mr. V.K.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Sharplin, Dr. Janet</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Shemanchuk, Mr. J.A.</td> <td>Research Station, Canada Agriculture, Lethbridge.</td> </tr> <tr> <td>Shepherd, Dr. R.F.</td> <td>Forest Research Laboratory, 721 Public Building, Calgary.</td> </tr> <tr> <td>Shore, Miss Joan</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Smith, Dr. D.S.</td> <td>Research Station, Canada Agriculture, Lethbridge.</td> </tr> <tr> <td>Stevenson, Mr. R.E.</td> <td>Forest Research Laboratory, 721 Public Building, Calgary.</td> </tr> <tr> <td>Steward, Dr. 
C.C.</td> <td>Research Station, Canada Agriculture, Lethbridge.</td> </tr> <tr> <td>Swailes, Dr. G.E.</td> <td>Research Station, Canada Agriculture, Lethbridge.</td> </tr> <tr> <td>Thomas, Mr. A.W.</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> <tr> <td>Tripp, Mr. H.A.</td> <td>Forest Research Laboratory, 721 Public Building, Calgary.</td> </tr> <tr> <td>Turner, Miss Jo</td> <td>Entomology Department, University of Alberta, Edmonton.</td> </tr> </tbody> </table> Warren, Mr. J. W. Chemagro Corp., 3 North 7th Ave., Ste. B., Yakima, Washington. Weintraub, Mr. J. Research Station, Canada Agriculture, Lethbridge. Whitehead, Mr. D. R. Entomology Department, University of Alberta, Edmonton.
The proven platform for joined up marketing
Omnichannel Content Management at your fingertips
master your content

Fuel business growth and better customer experiences

Put all your content on a single platform that interlinks every image, video, document and file. Connect everyone you want for easy collaboration. Find any piece of content quickly and use automated tasks for faster content management and production. With censhare you have all the tools, workflows and processes you need to respond faster to opportunities, collaborate more easily, publish more frequently and target more effectively. It’s all the power you need to deliver the right message to the right customer at the right time, on their preferred channel, in any language, locally or globally.

Now, and for the long term

censhare technology has been developed and refined over many years. It plays nicely with other systems and is open for all content types, data models, contributors and channels. Offering DAM, PIM and Content Management capabilities, you get a solution that fits perfectly around your own workflows and infrastructure — a proven platform that develops and grows alongside your organization.

Cut through complexity

Using a single platform that enables marketing communication across all channels, you radically simplify the management of digital assets, content and product information. Unique semantic database technology lets you handle a vast volume and diversity of content. You get instant response to a search along with intuitive filtering to quickly find the asset or information you want. With censhare, all data is held centrally and interlinked by relationships that can be quickly displayed and searched. The result is easier collaboration and rapid insight into any aspect of the content and how it is used.

“We regard censhare as our central platform that we will use to control and steer all of our marketing contents in future.” — Christian J. Geiger, Head of Corporate Marketing, Endress + Hauser

Flawless customer experiences

Instead of being stifled by the complexity of working with multiple systems with different user interfaces, your teams are freed up to work faster and more creatively. You can create and distribute content rapidly, launching campaigns when the market opportunity is at its ripest. Customers receive accurate, targeted, up-to-date communications that bring you increased loyalty in return. The flexibility of the censhare platform lets you automatically transform and tailor digital assets for all channels. Users can author and manage simple and complex websites and microsites — whether corporate website, regional site, online shop or mobile site. The same assets can be used for digital channels like social media and email, as well as traditional print media such as magazines, catalogs and direct mail. The platform also supports more innovative outlets like digital POS signage. Thanks to its open design and content centric approach, almost any interface can be supported — even new, as yet unknown channels in the future. censhare can integrate with existing tools of all sizes, like CRM, ERP, CMS, translation memories, publishing suites, social media tools, or MS Office, and make all of their data accessible and usable for production of marketing and other content. This enables you to manage processes more transparently and effectively, while tracking the progress of campaigns more easily.
In effect, censhare provides a ‘Content as a Service’ layer with the actual delivery into channels undertaken at a later stage. The flexible Application Programming Interface (API) also allows app developers to connect to and even create omnichannel applications with ease.

“Attractively presented products turn shopping into an experience and have a lasting effect on purchasing decisions. This applies to both the real and the digital world. A crucial requirement for the highest quality here is efficient and easy-to-use content management, and with the use of the censhare platform, BSH has created an essential prerequisite for accompanying BSH’s digital consumer journey.” — Joachim J. Reichel, CIO, BSH

Content is power

Content is more than just text and pictures. It covers videos, packaging data, product information, business data, people profiles, project plans, and much more. Content is at the heart of all business communication and true control over content opens up a world of possibilities. Giving you this control is at the heart of our Omnichannel Management Platform. All content is managed and processed by the platform’s fully integrated core products: Digital Asset Management (DAM), Product Information Management (PIM) and Content Management. Combined with Content Management, DAM can automatically transform assets and information, and reuse them in the channels that matter. In conjunction with PIM, DAM can create a single source of truth for all product related information that can be used by your entire organization. No more costly duplication of content. No more silos where great content is hidden away from different departments or locations. The censhare platform grows with your business, so you can flexibly add users, infrastructure and capabilities. You can enable new use cases without wasting time and money on new tools, data migration and training. Trusted by some of the world’s biggest brands, censhare has proven reliability.

censhare connects the information spaces

censhare’s Omnichannel Content Platform can connect to internal and external systems to import and export data and content for powering the entire lifecycle of content — from inception and planning to creation and use. This enables marketing teams to collaborate better, focus on creating more effective marketing campaigns more efficiently, and establish a single source of truth for all content, and for the entire organization. With censhare’s streamlined workflows and content focused structure you can quickly create and deliver great experiences in any channel, powering growth for your brand.

Do more, sooner

By automating many content production and data tasks, censhare makes it easier to publish more frequently. It also eliminates problems of managing translations, regional and cultural variations, targeting and other time-consuming chores. Create great content and use it to tell your story in the channels that matter to your target audience. You don’t need to worry about file formats because the platform can manage all formats and content, including images, videos, text and PDF documents, 3D files, or presentations.

Work together as one

Increase efficiency by avoiding departmental information silos and process bottlenecks and enhancing creativity and productivity. Track projects in a single system used by both in-house and external teams, thus avoiding potential disputes. Process driven collaboration via workflows can be encouraged across the entire organization, including agencies, freelancers, and channel partners.
Improve productivity with transparent processes that enable teams to work in parallel on the same projects and deliverables.

Always use the latest resources

Everything is accessible in a central system, so users can be confident they are accessing the latest, approved content. You can boost productivity with fast, reliable searches to quickly obtain the required documents. Automatically update brochures, product catalogs, magazines and other collateral and integrate print into digital workflows by using the same information base.

Work the way you want

censhare’s many use cases all run on the same underlying platform that can evolve and scale as your needs change and grow. Choose to host the platform in your data center, with censhare or in the Cloud. Choose from flexible licensing options and technical support packages to match your needs. The censhare platform powers all the capabilities of Omnichannel Content Management, providing a comprehensive array of features that ensure DAM, PIM and Content Management work together smoothly. Collaborative working is at the core of censhare, with features that let your teams work easily together and share information regardless of individual location. Workflows are centered around the content being worked on. Users can customize the look and feel of their own personal workspaces, helping them feel completely comfortable and in control. With advanced semantic database technology and a powerful search engine, users can quickly find the assets they need, as well as relevant information about their use and history. The front end uses the latest web technologies to support all modern browsers, while an application programming interface (API) allows content to be exchanged with other systems and applications. Security is achieved through a domain concept and advanced access control functions and permissions, supporting multiple brands, clients, user roles and users. This ensures each user can access only content with a matching permission. File management allows distributed storage of all content but with centralized administration. As all capabilities are part of the platform, you stay in control and evolve the scope of your system in any direction and at the right pace to suit your needs.

Use Case in Action
Grocery chain raises print production efficiency by 75%

THE CHALLENGE: A major German grocery group needed to promote messages to a market with 80 million consumers across 16 different regions. It wanted:
- Control and oversight of all products, information, and prices
- Direct and comprehensive communication
- Management of all promotions and campaigns
- Revision security
- Centralized production of advertising orders and layouts

THE SOLUTION: censhare’s core products (DAM, PIM, Content Management) and the Print Production Management module allow efficient production of advertising material in multiple formats for different regions and needs.

THE BENEFITS: Weekly sales brochures are created automatically for each market using a central database of images, product information and prices. Last minute updates can be applied across all the company’s advertising material produced through censhare. Print production is now 75% more efficient and pricing errors have been eliminated.

DAM: Turn digital assets into powerful content

Digital Asset Management gives you centralized control over every type of digital content from images, videos, text documents and graphics to 3D files, presentations, layout files and more.
The simplicity and automation of Digital Asset Management enables teams to create the best customer experience without being distracted by complex processes. Your users can create and import assets and asset variants, store them centrally, edit and update them, add reference information and create unlimited links to any other asset. Your entire organization can then search the full text of assets as well as their metadata according to almost any criteria, such as keywords, segmentation information, sources, usage rights, or other information.

PIM: The right product data on tap

Be the master of data. Automatically combine large amounts of product information, technical data and product content across an enterprise using product SKUs. Enable data to be classified, aligned, checked for completeness, enriched and translated. Data, from any source, including existing ERP or PIM systems or spreadsheets, can be combined with content, images and documents in censhare. You achieve easy and rapid production of sales and marketing material for all channels, including print, thanks to automated processes for retrieving and making product data available for production. What’s more, the platform also controls and manages the automated update of content across all media, from websites and online shops, to mobile apps, print collateral and POS applications.

“In marketing, censhare has brought us extreme increases in efficiency and with that also falling costs.” – Matthias Wesselmann, former Head of Group Marketing & Communication, Vitra AG

Use Case in Action
Automation removes complexity at large European retailer

THE CHALLENGE: A leading retailer in one European country, with more than 600 outlets, needed to cope with four languages in its customer communications. Because it was using several systems to produce a wide range of advertising materials, its processes were convoluted and complex.

THE SOLUTION: censhare handles the entire production process through automated workflows and its software guarantees consistency of the materials produced. This also improves communication between different departments and agencies. censhare connects to the retailer’s central Master Data Management system and transfers all information required by marketing teams to run effective campaigns.

THE BENEFITS: The solution ensures that all required data is available and up to date for each campaign exactly when needed. censhare also improved the management of the product information, with more than 600,000 images and visuals. Today, a single, user friendly platform links the retailer’s teams to external studios via process oriented workflows, and costs have been significantly reduced.

Content Management: A connected world needs connected content

Create media neutral content and use it everywhere. Avoiding wasteful repetition of design effort, the censhare Content Management System (CMS) lets you create media neutral content once, and once only, making it quicker and easier to manage content at every stage, from brief to design to production. Similarly, writers can produce an article or text and use it to customize, for example, headlines and copy of different lengths for different channels, languages, devices and other use cases. Content is ready for immediate use, avoiding the need for writers to familiarize themselves with the content more than once. Through the intuitive editor, content is created in XML without users needing to worry about the underlying complexity.
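To make the idea of media neutral content a little more concrete, here is a deliberately simplified sketch in Python. The record structure, field names and channel names below are hypothetical illustrations invented for this example; they are not censhare's actual data model, editor or API.

```python
# Hypothetical, simplified illustration of media-neutral content reuse:
# one source record, several channel-specific renderings.
article = {
    "headline": {
        "long": "Ten ways to simplify omnichannel publishing",
        "short": "Simplify omnichannel publishing",
    },
    "body": "Full article text, stored once and independent of any channel...",
    "teaser": "One source of content, many outputs.",
}

def render(article: dict, channel: str) -> dict:
    """Pick the headline/copy variant appropriate for a given channel."""
    if channel == "print":
        return {"headline": article["headline"]["long"], "copy": article["body"]}
    if channel == "web":
        return {"headline": article["headline"]["long"], "copy": article["teaser"]}
    if channel == "social":
        return {"headline": article["headline"]["short"], "copy": article["teaser"]}
    raise ValueError(f"unknown channel: {channel}")

for channel in ("print", "web", "social"):
    print(channel, render(article, channel))
```

In a real system the source would typically be structured XML and the per-channel transformation automated, but the principle is the same: the content is authored once and every variant is derived from that single source.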
censhare’s CMS adapts content automatically for use anywhere — from print and online to point of sale and ATMs. Flexibility is ensured through its Headless CMS, a powerful API that allows the system to exchange content and data with external applications, while also allowing the easy creation of omnichannel applications and custom front ends. The Content Data Quality feature checks the completeness of content to easily identify gaps in the data, so you can be sure that all data is available before using it.

Cosmopolitan has teamed up with some of the best-loved beauty brands to celebrate what beauty means today, and we want you to get involved. THE #IAMBEAUTY CAMPAIGN IS A CELEBRATORY EXPLORATION OF OUR BEAUTY - AND YOU CAN PLAY A KEY PART IN IT SIMPLY BY TAKING A SELFIE! So how can you get involved? Just tweet, Instagram or Facebook us with the hashtag ‘#IAMBEAUTY’ and don’t forget to include your name, age, hometown and heritage. For this to work, we need everyone involved, so whether your heritage is Indian, English, African, Irish, Chinese or any of the many others which make up our world, we must hear from you.

Use Case in Action
Consistent messaging achieved for interiors specialist

THE CHALLENGE: This Swiss company, specializing in interior concepts, furniture and accessories for homes, offices and public spaces, has a multichannel marketing strategy. It requires an efficient, media neutral marketing system that lets it easily manage a wide range of information in every channel.

THE SOLUTION: censhare’s omnichannel content management with DAM and PIM integrates all workflows, from the creation of content and product information through to its distribution and publication. Automated processes translate and adapt the content for countries, sales regions, and target groups.

THE BENEFITS: The company now communicates consistently across all channels and touch points — from print publications, price lists and websites to social media, in one orchestrated campaign. Real time updating of stock data and prices is performed entirely through a single interface to the company’s ERP system.

Optional modules offer ultimate flexibility

Make use of additional features to perform specific tasks more efficiently and effectively. Choose from a range of add-on modules that let you turn censhare into the solution your teams need and want. Each one is built to suit specific use cases and the way you work, both internally and with partners. Instead of tying up your teams with time-consuming specialist tasks such as producing and managing multiple variants of publications and delivering localized offers, simply tap into features specifically developed for this work. You can add modules as and when you need them. Pick any combination and know they all work seamlessly together.

**Master every content connection**

Organizations frequently need to exchange data with external systems for further processing by specialist applications. censhare offers a powerful API to transfer data to and from censhare. However, if you require additional capabilities that are provided by external solutions, censhare Connectivity ensures you can readily connect to such applications and dynamically export and import content, for example, to connect to social media management tools or advanced artificial intelligence solutions to enhance your content and its metadata.

“We were able to reduce the costs of advertising material production by a tremendous measure.
In the case of hosting, this was in excess of 70%, while we achieved savings of 15% at the agency. Moreover, the entire process is now far more efficient thanks to system support, and has also become highly transparent, as everything is traceable at all times.” — Promotions Management at Migros

<table> <thead> <tr> <th>MODULE</th> <th>FUNCTIONS</th> </tr> </thead> <tbody> <tr> <td>Marketing Project Planning</td> <td>Plan, manage and visualize marketing projects and campaigns across the organization and with suppliers</td> </tr> <tr> <td>Variants Management &amp; Targeting</td> <td>Create context between target groups, information and content, and deliver personalized content and variants based on user profile and segmentation data</td> </tr> <tr> <td>Localization &amp; Translation</td> <td>Manage translations for content and metadata, with interfaces to external translation service providers</td> </tr> <tr> <td>Print Production Management</td> <td>Manage all print related digital assets and streamline the creation and production of print based material through integrated page planning, workflows and automation</td> </tr> <tr> <td>Web CMS</td> <td>Automatically transform digital assets for publishing on a website. Author and manage content on simple and complex microsites, single sites, and multiple sites</td> </tr> <tr> <td>Headless CMS</td> <td>Application Programming Interfaces (APIs) for the exchange of data with any system, device or application, and to control censhare via external systems</td> </tr> </tbody> </table>

A toolkit for marketing success

The censhare Omnichannel Content Platform helps Christie’s, Dyson, Allianz, Migros, Vitra, Lufthansa, McDonald’s, Hearst Magazines UK and many other leading brands achieve effective omnichannel, multi language customer communications.

Efficient processes and easy collaboration

Provide a single system for managing all marketing processes and cut through complexity by supporting collaborative communications across your entire organization. Aid efficient coordination of activities by aligning content, tasks and resources at all times, in all locations.

Effective and forward looking

For over twenty years, censhare has been at the vanguard of content technology, not by chance but because we, our partners and our clients believe in its fundamentally transformative power. We are constantly developing and improving the core components of the platform itself, including the semantic database and search engine, to achieve new capabilities and levels of performance.

Expert support

Our dedicated professional services teams ensure timely delivery, training and technical support. If you need strategic advice, we’re happy to be your trusted advisor, too. Further support comes from our partners. These include creative agencies skilled in getting the most from your censhare solution; implementation partners with expertise in setting it up, integrating your data sources and extending its capabilities; and technical partners that provide customized functionality directly or through existing integrations.
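As a rough illustration of the Headless CMS idea listed in the module table above, the sketch below shows how a custom front end might pull channel-neutral content over a generic REST-style API. The endpoint URL, JSON fields and bearer-token authentication are placeholders invented for this example; they are not censhare's actual API.

```python
import json
import urllib.request

# Hypothetical headless-CMS-style endpoint; not a real censhare URL.
BASE_URL = "https://cms.example.com/api/content"

def fetch_content(content_id: str, token: str) -> dict:
    """Fetch one channel-neutral content item as JSON over HTTP."""
    request = urllib.request.Request(
        f"{BASE_URL}/{content_id}",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# A web or mobile front end would then map the neutral fields onto its own
# templates, e.g. (commented out because the endpoint above is fictitious):
# item = fetch_content("product-1234", token="...")
# page_title = item["headline"]
# hero_image_url = item["assets"][0]["url"]
```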
Deployment your way

censhare’s Omnichannel Content Management Platform offers two flexible deployment and licensing options to suit your business needs:

Hosting
• In the censhare datacentre, operated by censhare
• In the Cloud, operated by you or a certified censhare partner

Licensing
• Annual subscription
• Pricing is based on functionality and number of users

About censhare

Our proven omnichannel content platform lets you master your content in any language, locally or globally, to provide a consistent omnichannel customer experience. Clients like Allianz, Lands’ End, Dyson, Christie’s and hundreds more rely on censhare to deliver brand-accurate, up-to-date content, and make the most of every opportunity to reach the right customer at the right time.

censhare. Master your content.
In vivo visualization of single-unit recording sites using MRI-detectable elgiloy deposit marking

Kenji W. Koyano,1 Akinori Machino,1,2 Masaki Takeda,1 Teppei Matsui,1,2 Ryoko Fujimichi,1 Yohei Ohashi,1 and Yasushi Miyashita1,2
1Department of Physiology, The University of Tokyo School of Medicine, and 2Department of Physics, The University of Tokyo School of Science, Bunkyo-ku, Tokyo, Japan

Submitted 19 April 2010; accepted in final form 29 November 2010

Koyano KW, Machino A, Takeda M, Matsui T, Fujimichi R, Ohashi Y, Miyashita Y. In vivo visualization of single-unit recording sites using MRI-detectable elgiloy deposit marking. J Neurophysiol 105: 1380–1392, 2011. First published December 1, 2010; doi:10.1152/jn.00358.2010.—Precise localization of single-neuron activity has elucidated functional architectures of the primate cerebral cortex, related to vertically stacked layers and horizontally aligned columns. The traditional “gold standard” method for localizing recorded neurons is histological examination of electrolytic lesion marks at recording sites. Although this method can localize recorded neurons with fine neuroanatomy, the necessity for postmortem analysis prohibits its use in long-term chronic experiments. To localize recorded single-neuron positions in vivo, we introduced MRI-detectable elgiloy deposit marks, which can be created by electrolysis of an elgiloy microelectrode tip and visualized on highly contrasted magnetic resonance (MR) images. Histological analysis validated that the deposit mark centers could be localized relative to neuroanatomy in vivo with single-voxel accuracy, at an in-plane resolution of 200 μm. To demonstrate practical applications of the technique, we recorded single-neuron activity from a monkey performing a cognitive task and localized it in vivo using deposit marks (deposition: 2 μA for 3 min; scanning: fast-spin-echo sequence with 0.15 × 0.15 × 0.8 mm³ resolution, 120/4500 ms echo time/repetition time, and an echo-train length of 8), as is usually performed with conventional postmortem methods using electrolytic lesion marks. Two localization procedures were demonstrated: 1) deposit marks within a microelectrode track were used to reconstruct a dozen recorded neuron positions along the track directly on MR images; 2) combination with X-ray imaging allowed estimation of hundreds of neuron positions on MR images. This new in vivo method is feasible for chronic experiments with nonhuman primates, enabling analysis of the functional architecture of the cerebral cortex underlying cognitive processes.

OVER THE PAST 50 YEARS, EXTRACELLULAR single-unit recording methods from primates have been used to map the firing patterns of neurons and have provided important insights into how the human brain processes information (Logothetis 1998; Parker and Newsome 1998; Miyashita 2004). Precise localization of recorded neuronal activity has revealed many features of the functional architecture of the cerebral cortex at different levels, including cortical layers (Hubel and Wiesel 1968), columnar organization (Mountcastle 1957; Hubel and Wiesel 1962; Merzenich and Brugge 1973), and interactions between adjacent cortical areas (Zeki 1978; Naya et al. 2001). The currently accepted “gold standard” method for identifying the location of recorded neurophysiological responses is the use of electrolytic microlesions (Hubel and Wiesel 1962), which can be placed at electrode tip positions by passing an electrical current.
The lesions can then be detected in postmortem histological sections, allowing reconstruction of recording site positions along a penetration track by interpolating between the lesions. This procedure provides definite locations of recording sites within cortical structures and is sufficient for experiments using acute preparations. However, in chronic recordings, postmortem measurement is often inadequate because the locations remain unknown until all the in vivo experiments are completed. This uncertainty of recording sites is especially crucial for the long-term chronic experiments using behaving monkeys, which often last for several months or years. In addition, the number of detectable recording sites within a local region is severely limited because lesions become undetectable several weeks after the placement, and closely spaced lesions from adjacent tracks are difficult to identify. This limitation in the number of detectable recording sites of the electrolytic lesion method can be improved to some degree by using other marking techniques, such as marking with metal deposition (Hess 1932; Adrian and Moruzzi 1939; Marshall 1940; Green 1958; Brown and Tasaki 1961; Suzuki and Azuma 1987), injecting dyes (Thomas and Wilson 1965; Stretton and Kravitz 1968; Lee et al. 1969) or carbon fiber (Sawaguchi et al. 1986), coating the electrode with fluorescent dyes (Honig and Hume 1989; Snodderly and Gur 1995; DiCarlo et al. 1996; Naselaris et al. 2005), detecting gliosis immunocytochemically (Benevento and McCleary 1992), and the juxtacellular labeling of single cells (Pinault 1996). However, all of these methods still require postmortem analysis and are thus unable to detect recording sites until all the in vivo experiments have been finished. To overcome the difficulties of postmortem methods in chronic experiments, several alternative techniques have been proposed to localize recording sites in vivo. The easiest and most widely used method is the predictive estimation of the location using stereotaxic coordinates (Horsley and Clarke 1908; Saunders et al. 1990; Asahi et al. 2003). However, the accuracy of this estimation is severely limited, because the method does not consider trajectory variation across each recording penetration, which is especially critical for deep brain areas (for detailed discussion, see Cox et al. 2008). Sonography (Collier et al. 1980; Tokuno et al. 2000; Glimcher et al. 2001) and X-ray imaging (Aggleton and Passingham 1981; Nahm et al. 1994; Cox et al. 2008) are low-cost, convenient, and noninvasive imaging methods to visualize inserted electrodes in each recording session. However, due to their low tissue contrast for neuroanatomy, these imaging methods need to be complemented with anatomical information from another resource (Nahm et al. 1994; Cox et al. 2008), and thus can only indirectly locate recording sites relative to fine brain structures. MRI is a promising imaging method for localizing recording sites within the brain, because of its high tissue contrast and high spatial resolution (Fahlbusch and Samii 2007). Previous studies have shown that MRI can visualize inserted microelectrodes within the brain (Jog et al. 2002; Martínez Santiesteban et al. 2006; Tammer et al. 2006; Matsui et al. 2007) and localize them at an accuracy of single-voxel size (50 μm in vitro and 150 μm in vivo; Matsui et al. 2007). 
These MRI-based approaches perform better than other imaging modalities for directly localizing inserted microelectrodes in relation to fine brain anatomy (Nakahara et al. 2007). However, MRI requires expensive hardware, and everyday use is not a practical choice for many laboratories. A previous study reported an alternative MRI-based approach that could solve this problem (Fung et al. 1998). Fung et al. found that gradient-echo MRI could detect iron deposits placed within the rat brain by passing small electrolytic currents (5–30 μC) through a stainless steel electrode. Because a large number of deposit marks can be detected simultaneously in a single MRI session, this approach does not require frequent MRI usage, so it is feasible for many laboratories that have limited access to an MRI scanner. However, at the same time, this approach has a potential drawback in practical recording experiments, as pointed out by Fung et al. themselves: stainless steel electrodes are generally thought to deliver noisier electrophysiological recordings than other common electrode materials, such as tungsten, platinum-iridium, or elgiloy, and are generally avoided for in vivo recording experiments (for example, see Snodderly and Gur 1995; Geddes and Roeder 2003). In fact, to the best of our knowledge, stainless steel deposits have never been used in vivo to localize the recording sites of a stainless steel microelectrode directly (but see Pezaris et al. 2000; Pezaris and Reid 2009 for indirect estimation of tungsten/gold-plate tetrode recording sites in separate recording sessions). The recording characteristics of stainless steel microelectrodes would have to be improved significantly to establish the usefulness of this MRI-based approach and to promote its widespread adoption in chronic recording experiments.

Cobalt-nickel-iron alloy (elgiloy) is another candidate electrode material for use in MRI-visible metal deposit marking methods. Elgiloy can create iron-containing metal deposits similar to those produced by stainless steel, but possesses superior recording characteristics (Suzuki and Azuma 1976, 1987; Ashford et al. 1985). Indeed, this material has been used in hundreds of primate electrophysiological studies (for example, Suzuki and Azuma 1976; Sugita 1999; Kakei et al. 2001; Ohbayashi et al. 2003; Kamigaki et al. 2009, 2011; Yamagata et al. 2009). In the current study, we extend the approach of Fung et al. by using an elgiloy microelectrode and show that elgiloy metal deposits can be localized accurately with MRI. Furthermore, we demonstrate two practical applications of this elgiloy deposit mark method to record and localize the activity of single neurons in vivo from a monkey performing a cognitive task: direct reconstruction of recorded neuron positions along the microelectrode track on a magnetic resonance (MR) image, and estimation of hundreds of neuronal positions in combination with X-ray imaging.

METHODS

Animals. We used three macaque monkeys (two Macaca mulatta and one Macaca fuscata, weighing 4.3–9.0 kg) in this experiment. An MRI-compatible head holder and a recording chamber (Crist Instruments, Hagerstown, MD) were attached to the skull under aseptic conditions and general anesthesia with pentobarbital sodium (4 mg·kg⁻¹·h⁻¹ iv) and xylazine (2 mg·kg⁻¹·h⁻¹ iv), supplemented as needed.
Monkeys were given postsurgical analgesics (acetaminophen, 20 mg·kg⁻¹·day⁻¹ or pranoprofen, 3 mg·kg⁻¹·day⁻¹, per os) for at least 3 days and postsurgical prophylactic antibiotics (benzylpenicillin, 20,000 U·kg⁻¹·day⁻¹, ampicillin, 100 mg·kg⁻¹·day⁻¹, intramuscular injection or enrofloxacin, 5 mg·kg⁻¹·day⁻¹ sc) for 1 wk. All experiments were conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Review Committee of The University of Tokyo School of Medicine.

Metal deposition and MRI. We used a glass-coated iron-cobalt-nickel alloy (elgiloy) microelectrode (0.3–0.8 MΩ, Suzuki and Azuma 1976; Ashford et al. 1985) for the electrophysiological recordings and metal deposition. Using a hydraulic microdrive manipulator (MO-95; Narishige, Tokyo, Japan), the electrode was inserted into the brain along a stainless steel guide tube, and metal deposits were placed at the electrode tip position by passing an anodic direct current (2–10 μA for 3–30 min) through the microelectrode (Fig. 1A) (Suzuki and Azuma 1987). The current parameters used for each deposition are shown in Table S1 (Supplemental Material for this article is available online at the Journal website). The number of marks placed in each animal and the number recovered are summarized in Table S2. To detect the metal deposits, we used a 4.7 T MRI scanner (BioSpec 47/40; Bruker BioSpin, Ettlingen, Germany) with an actively decoupled 50-mm-diameter surface receive radiofrequency coil and a volume radiofrequency coil for transmission (Bruker BioSpin). Monkeys were kept under anesthesia during the MRI scan with propofol (5–10 mg·kg⁻¹·h⁻¹ iv), supplemented as needed with xylazine (0.5–2 mg/kg im). Blood pressure, heart rate, and oxygen saturation were continuously monitored, body temperature was kept constant using hot-water bags, and glucose-lactated Ringer solution was given intravenously at a rate of 5–10 ml·kg⁻¹·h⁻¹. Fast low-angle shot gradient-echo (FLASH) and fast spin-echo (FSE) sequences were performed to visualize the metal deposits (see Table S1 for detailed scan parameters). After the MRI, the monkeys were returned to their home cage, and their general state, including body temperature, was monitored until they recovered from anesthesia.

Histology. After each series of experimental sessions, two of the three monkeys were euthanized, and their brains were examined histologically by conventional methods (e.g., Koyano et al. 2005). We could not perform a histological examination on the other monkey, which was kept alive and continues to be used in further experiments. The two monkeys were deeply anesthetized with pentobarbital sodium (60 mg/kg iv) and perfused intracardially with saline followed by 4% paraformaldehyde in 0.1 M phosphate buffer. After removal of the skull, brains were postfixed for 48 h in 4% paraformaldehyde at 4°C and were cryoprotected in 30% sucrose in PBS at 4°C until they sank. Brains were cut into 40-μm cryostat sections along the same plane as the previously acquired MR images. Sections were collected in two series and mounted onto slides. One series of sections was stained with cresyl violet (Nissl stain) to show cytoarchitecture. The other series of sections was stained using the Prussian blue reaction to detect iron-containing metal deposits as azure spots (Brown and Tasaki 1961; Suzuki and Azuma 1976; Fung et al. 1998).
The sections were treated with 2.5% ferricyanide/2.5% ferrocyanide for 10 min. The size of a deposit on the MR image was calculated as the average of the diameters measured along the major and minor axes of the ellipse-shaped hypointense spot. Because the metal deposit marks were not simply ellipse-shaped in the histological sections, we measured the area instead of the diameter and calculated the "effective" diameter according to the following formula: \( D_e = 2A^{1/2}/\pi^{1/2} \), where \( D_e \) is the effective diameter and \( A \) is the area of a metal deposit mark. We distinguished blood vessels (which were often visualized as small hypointense spots on MR images) from deposit marks using the following procedures: 1) we acquired a series of multi-slice MR images before the metal deposition to compare with MR images after deposition; 2) in each MR scanning session, we initially acquired low-resolution multi-slice MR images, in which blood vessels were typically visualized as continuous small holes across several MR slices and/or as short winding lines. We examined the coronal and sagittal multi-slice images to determine whether the hypointense spots observed in MR images were blood vessels or not. We compared the positions of the metal deposits on spin-echo MR images with those on histological sections. The distance between the deposit center and the nearest pial surface was measured, and localization accuracy was calculated by subtracting this distance on the MR image from that on the corresponding histological section. For the measurement of the distance from the pial surface, we chose the marks that were located within the gray matter of the cerebral cortex to minimize errors arising from global tissue distortion.

**Behavioral task and electrophysiology.** We recorded the extracellular discharge of single neurons from one task-trained monkey while it performed a visual-visual pair-association task (Sakai and Miyashita 1991; Higuchi and Miyashita 1996; Naya et al. 2001; Takeda et al. 2005). Twenty-four monochrome Fourier descriptors subtending 5° × 5° were paired arbitrarily into 12 pairs and used as visual stimuli. In each trial, one cue stimulus and then two choice stimuli (the paired associate of the cue stimulus and a distractor) were presented sequentially, separated by a delay of 2.0 s. The monkey was rewarded with fruit juice for touching the correct target (the paired associate of the cue). Glass-insulated tungsten microelectrodes (0.4–1 MΩ) and elgiloy microelectrodes (0.3–0.8 MΩ) were used for the extracellular recordings. The elgiloy electrodes were used for the recording tracks on which the metal deposits were marked. Of 62 penetrations, 8 were performed with elgiloy electrodes, whereas the other 54 tracks were performed with tungsten electrodes. The microelectrode was inserted vertically into the target region through the intact dura mater along a stainless steel guide tube using a hydraulic microdrive manipulator (MO-95, Narishige). The extracellular action potentials were amplified, band-pass filtered (50 Hz–10 kHz), and isolated online with a dual-window discriminator (EN-611F; Nihon Kohden, Tokyo, Japan). Spike waveforms were digitized at 20 kHz by a data acquisition board (PCI-6220; National Instruments, Austin, TX) and stored on a hard disk. The spike waveforms were then low-pass filtered offline at 5 kHz, and the quality of the isolations was checked.
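As an illustration of this offline filtering step, the following minimal sketch (NumPy/SciPy; the filter order, the zero-phase filtering choice, and the array layout are our assumptions, not details given above) applies a 5-kHz low-pass to spike snippets digitized at 20 kHz.

```python
import numpy as np
from scipy import signal

def lowpass_spike_waveforms(waveforms, fs=20_000.0, cutoff=5_000.0, order=2):
    """Offline low-pass filtering of spike waveforms digitized at fs = 20 kHz.

    `waveforms` is an (n_spikes, n_samples) array; a zero-phase Butterworth
    filter (order and filtfilt are illustrative choices) is applied along the
    sample axis. Each snippet should be at least a few milliseconds long."""
    b, a = signal.butter(order, cutoff, btype="low", fs=fs)
    return signal.filtfilt(b, a, np.asarray(waveforms, dtype=float), axis=-1)
```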
The signal-to-noise ratio of the action potentials was calculated by dividing the mean maximum amplitude by the standard deviation of the baseline. We recorded from the first well-isolated neuron encountered and then searched for the next neuron along each penetration of the microelectrode.

**X-ray imaging.** The location of the electrode track for each recording session was measured on an X-ray image (Aggleton and Passingham 1981; Naya et al. 2001). When the microelectrode reached a certain depth near the target region, we acquired a pair of orthogonal X-ray images at right angles to the sagittal and coronal planes of the monkey's head using a portable X-ray unit (PX-30N, 2.0-mm focal spot; Acoma X-ray Industry, Tokyo, Japan) and X-ray films (RX-U; Fujifilm, Tokyo, Japan) with intensifying screens (MS-V; Kyokko, Kanagawa, Japan). The height, distance, and orientation of the X-ray source relative to the animal's head and the film were kept constant across X-ray imaging sessions on different days. This positioning was confirmed each time by measuring distances between the X-ray source... and the primate chair, and by overlapping backlight-projected positions of a pair of crosshair-shaped fiducial marks on both the closer and farther sides of the primate chair. The films were exposed with the portable X-ray unit set at 80 kV and 15 mA for 1.6 s. The locations of the microelectrodes were then measured along the orbitomeatal plane on the films and corrected for the magnification factor (coronal image: 102.0%; sagittal image: 104.8%). The anteroposterior position of the inserted microelectrode was measured on the sagittal image as the distance from the external auditory meatus, and the mediolateral position was measured on the coronal image as the distance from the midline. Since the microelectrode tip was too small to detect on the X-ray images, the dorsoventral locations of each recording site and metal deposition along the microelectrode penetration direction were estimated from microdrive readings. In addition, at the end of most penetration tracks, the bottom of the cortex was determined from a characteristic "crunching" noise in the local field potential to confirm the dorsoventral location. The localization error of the film-based measurements was evaluated across days from the variance in the measurement of the distance between the external auditory meatus and the posterior tip of the sphenoid bone on the film (Aggleton and Passingham 1981; Yoshida et al. 2003; 24.50 ± 0.20 mm, means ± SD, n = 62). Thus, the error in this measurement was estimated to have a standard deviation of approximately 0.2 mm.

MRI-based in vivo track reconstruction. For the recording tracks on which deposit marks were placed, the position of each neuron in MR images was estimated from the positions of the deposit marks on the same track and the distances driven by the microdrive manipulator in that recording session. The neuronal positions were reconstructed in vivo directly on the MR images by interpolating between the within-track deposit marks.

RESULTS

Detection of metal deposits after elgiloy microelectrode electrolysis. We first examined whether the metal deposits of the elgiloy microelectrode were visible on the MR images (Fig. 1).
Direct anodic currents of 2–5 μA passed for 5 min through the elgiloy microelectrode yielded hypointense spots on MR images with 150 × 150 μm² in-plane resolution (Fig. 1, B and D; see Table S1 for detailed scan parameters). A corresponding histochemical section showed the iron deposit marks as azure spots by means of the Prussian blue reaction (Fig. 1, C and E). The locations of these iron deposit marks were well matched to those of the hypointense spots in the MR images. These results demonstrated that the metal deposit marks created by electrolysis of the elgiloy microelectrode could be detected as hypointense spots on the MR images.

Effects of marking and imaging parameters on metal deposit size. The amount of metal deposition depends on the total charge used for the electrolysis (Suzuki and Azuma 1987). We therefore examined the relationship between the passed charge and the appearance of the deposits by creating metal deposit marks using a range of different charges (Fig. 2). The results showed that passing a current as low as 2 μA for 3 min created metal deposit marks that were visible in both spin-echo (FSE; Fig. 2E) and gradient-echo (FLASH; Fig. 2F) MR images of 176 × 176 μm² resolution (see Table S1 for detailed scan parameters). We found that larger charges tended to produce larger marks (Fig. 2, B–D, G–J), and the size of the metal deposits on the MR images correlated well with the total charge that was used to create the deposits (for the spin-echo sequence, R = 0.927, P < 0.0005, Fig. 2O; for the gradient-echo sequence, R = 0.763, P < 0.02, Fig. 2P; n = 10 for each sequence). There was also a weaker correlation between the size of the metal deposits on the histological sections and the total charge (R = 0.654, P < 0.05, Fig. 2Q; see also Fig. S1). The electrolytic current for the deposition appeared to cause some damage, exhibited as gliosis around the marks (Fig. S1). The diameter of the gliosis, resulting from the metal deposition procedures performed within 2 wk before death, was also correlated with the total charge (R = 0.634, P < 0.001). In cases where the total charge used for the deposition was smaller than 500 μC, the mean diameter of the gliosis was only 233 ± 65.5 μm. This close relationship between mark size on the MR images and total charge can be used to create metal deposit marks of a particular size, matched to the resolution of the MR images to be acquired. The size of the metal deposits was larger in the gradient-echo than in the spin-echo sequence (paired t-test, P < 0.01), and the sizes in the images of both MRI sequences were larger than in the histological sections (paired t-test, P < 0.002). These differences in size are consistent with the well-known phenomena whereby ferromagnetic metals appear larger on MR images than their actual size, due to susceptibility artifacts (Luedeke et al. 1985), and gradient-echo sequences are more sensitive to susceptibility artifacts than spin-echo sequences (Posse and Aue 1990). The susceptibility artifact depends not only on the scan sequence but also on several other scan parameters (Luedeke et al. 1985; Ericsson et al. 1988; Posse and Aue 1990). Therefore, we next examined the effects of those scan parameters, echo time (TE), repetition time (TR), bandwidth, frequency encoding direction, echo train length (spin-echo only), and flip angle (gradient-echo only), with metal deposit marks created by two different charges (Fig. 3; n = 8 for each condition).
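To make the charge bookkeeping used above explicit, the following minimal sketch (NumPy only; the example arrays are placeholders, not data from this study) converts current and duration into total charge and computes the Pearson correlation between charge and a set of measured deposit diameters.

```python
import numpy as np

def total_charge_uC(current_uA, duration_min):
    """Total charge passed during a deposition, in microcoulombs:
    charge (uC) = current (uA) x duration (s)."""
    return current_uA * duration_min * 60.0

# e.g., 2 uA for 5 min -> 600 uC; 5 uA for 5 min -> 1,500 uC
print(total_charge_uC(2.0, 5.0), total_charge_uC(5.0, 5.0))

# Correlation between total charge and measured deposit diameter
# (placeholder arrays for illustration; not the values behind the R reported above):
charges_uC = np.array([360.0, 600.0, 840.0, 1200.0, 1500.0])
diameters_um = np.array([150.0, 210.0, 260.0, 330.0, 400.0])
r = np.corrcoef(charges_uC, diameters_um)[0, 1]
print(f"Pearson R = {r:.3f}")
```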
Two-way ANOVAs revealed significant main effects of charge in all conditions (P < 0.0001), but no significant interactions between charge and scan parameters in any condition (P > 0.2). In the spin-echo sequence, bandwidth (F(3,56) = 8.47, P < 0.01), frequency encoding direction (F(3,28) = 14.05, P < 0.001), and echo train length (F(3,42) = 5.49, P < 0.01) showed significant main effects. The mark sizes tended to be smaller at larger bandwidths (post hoc Tukey test, P < 0.01; Fig. 3C) and larger echo train lengths (post hoc Tukey test, P < 0.05; Fig. 3E), and the sizes tended to be smaller and appeared thinner when the frequency encoding direction was set parallel to the penetration track compared with when it was set perpendicular to the track (Fig. 3D). In the gradient-echo sequence, TE showed a significant main effect (F(3,56) = 11.14, P < 0.05). The mark sizes tended to be larger at longer TE (post hoc Tukey test, P < 0.05; Fig. 3F).

Stable detection of the metal deposits over a year. In contrast to short-lived electrolytic lesions, the metal deposits produced by the elgiloy microelectrode remained detectable on histological sections even after a survival period of more than 6 mo (Suzuki and Azuma 1987). We examined the long-term visibility of the metal deposit marks on the MR images and found that the metal deposits remained visible after 18 mo of survival (Fig. 4; see Table S1 for detailed scan parameters). Even the smallest deposits (150 μm in diameter; numbered as 3 in Fig. 4A), which were marked by currents of 2 μA for 3 min, were still visible at 18 mo after the deposition. The long-lasting nature of these metal deposits enables this method to be used in chronic experiments, which are often performed over several months or even a year.

Accurate localization of metal deposits. The locations of the deposit marks on MR images corresponded well to those on histological sections (Fig. 5, A–C). A deposit mark, located at the layer I/II border near the pial surface on a histological section, was found at almost the same location near the pial surface on the corresponding MR image (Fig. 5, A–C). The location of another deposit mark, farther from the pial surface, also matched well between MRI and histology (Fig. 5, A–C). We then evaluated the accuracy of the mark positions across 26 data points, whose MR images were acquired at an in-plane resolution of 200 × 200 μm². To quantify the mark positions, we measured and compared the distances from the pial surface between histological sections and the corresponding MR images. This distance on the MR images corresponded well with that on the histological sections, and a linear regression line between them fitted well and had a slope of 0.95 (R² = 0.92, P < 0.0001; Fig. 5D). The median difference of this distance between the MR images and histological sections was 100.9 ± 7 μm, corresponding to 0.50 voxels (Fig. 5E), and this was significantly smaller than the single voxel size (Wilcoxon signed-rank test, P < 0.0001). We also analyzed relative positioning errors both along and orthogonal to the penetration direction (Fig. S2).

Fig. 3. Effects of scan parameters on the metal deposit diameters in the FSE (A–E) and FLASH (F–J) sequences (n = 8 for each point).
Metal deposits were marked at 2 μA for 5 min (600 μC, bottom of MR images and blue lines in line plots) or 5 μA for 5 min (1,500 μC, top of MR images and red lines in line plots). ⊥ and ∥ in D and I: perpendicular and parallel frequency encoding direction in relation to the microelectrode penetration direction, respectively. *, **, and †: significant difference of the diameter between scan conditions (P < 0.05 with Tukey post hoc test, P < 0.01 with Tukey post hoc test, and P < 0.001 with 2-way ANOVA, respectively). Scale bar, 1 mm. Error bars, SD.

Fig. 4. Detection of metal deposits (arrows) over a year. A–C: MRI with FSE sequence. D–F: MRI with FLASH sequence. G: postmortem detection with Prussian blue reaction. MRI was performed at 1 (A and D), 7 (B and E), and 18 (C and F) mo after creation of the metal deposits. Numbers in left panel of A correspond to those in right panels. The metal deposits are found even 18 mo after the marking. Current parameters: 2 μA for 7 min (1), 4 μA for 5 min (2), 2 μA for 3 min (3), 2 μA for 10 min (4), 2 μA for 7 min (5), and 2 μA for 15 min (6). Scale bars, 5 mm (left) and 1 mm (right panels of A–F and G).

The median error along the penetration direction was 50.3 μm and that orthogonal to the penetration direction was 62.4 μm, both of which were significantly smaller than the single voxel size (200 μm; Wilcoxon signed-rank test, P < 0.0001).

Practical demonstration of direct track reconstruction on the magnetic resonance image using within-track deposit marks. To demonstrate recording site localization in a practical chronic experimental situation, we advanced an elgiloy microelectrode into the inferotemporal cortex of a monkey performing a visual-visual pair-association task. During the course of the penetration, we recorded single-unit neuronal activity and then left three metal deposit marks along the track (Fig. 6A). Subsequent MRI with an in-plane resolution of 150 × 150 μm² detected three corresponding metal deposit marks in a straight line around the rhinal sulcus of the inferotemporal cortex (Fig. 6A). The positions of the recorded neurons were reconstructed on the MR image by aligning the deposit marks on the recording track with those on the MR image (Fig. 6A). Figure 6B shows an example of the recorded spike waveforms of an isolated neuron, which was located lateral to the ventral lip of the rhinal sulcus, in area 36 (Fig. 6A). The amplitude reached ~150 μV at the trough with a signal-to-noise ratio of 9.44, demonstrating good isolation of the single neuronal unit. Stimulus-selective visual responses were detected in this neuron when the visual stimuli were presented as a cue (1-way ANOVA across 24 stimuli, P < 0.0001). The firing rates increased rapidly when a visual stimulus was presented (t-test against the 300-ms period before cue onset, P < 0.0001; Fig. 6C), whereas another stimulus did not elicit such responses (t-test, P > 0.5; Fig. 6C). The signal-to-noise ratio across the 12 units in Fig. 6A (8.29 ± 2.44) was as high as that of the example shown in Fig. 6B. This example demonstrates the usefulness of the elgiloy microelectrode, which can record spike waveforms with a high signal-to-noise ratio, isolate single-unit neuronal activity related to cognitive functions, and localize recorded neurons in vivo directly on highly contrasted brain images by deposit marking.

Alignment of X-ray-based coordinates with MRI using metal deposits as within-brain local positional references.
The convenience of this method for a given researcher might depend on their access to an MRI scanner, which varies across laboratories. If it is desirable to avoid frequent use of MRI, other convenient in vivo localization methods such as neurosonography (Tokuno et al. 2000; Glimcher et al. 2001) or X-ray imaging (Aggleton and Passingham 1981; Nahm et al. 1994; Cox et al. 2008) can be more practical for everyday use, although the tissue contrast of these methods is far inferior to that of MRI. To complement the anatomical information in these methods, the determined recording site positions can be aligned with MR images using common positional references located outside of the skull (Nahm et al. 1994; Cox et al. 2008). Here we demonstrated that the metal deposit marks in the current paradigm can be used as a within-brain local positional reference to align the recording positions between X-ray and MRI. We performed eight recording penetrations with elgiloy microelectrodes into the anterior inferotemporal cortices of a monkey performing a pair-association task and determined the position of the penetration axes with X-ray imaging (Fig. 7A). A total of 29 metal deposit marks were left by these penetrations, and their positions were determined within highly contrasted brain structures with a subsequent MRI scan of 150 × 150 μm² in-plane resolution (Fig. 7A). An optimal transformation of the metal deposit mark positions in the X-ray-based coordinates was then computed to align them with the positions in the MRI coordinates, using a least-squares estimation of a global rigid-body transformation and translations along each penetration axis (Fig. 7A; see METHODS for detailed transformation procedures). After this alignment, the positions of the metal deposit marks matched well between X-ray and MRI (Fig. 7B; 16.4 ± 224.8-μm difference in the anteroposterior direction, 12.9 ± 200.4 μm in the lateromedial direction, and 13.8 ± 197.4 μm in the dorsoventral direction), showing the feasibility of the coordinate transformation from X-ray to MRI. We then applied this transformation to 62 penetration tracks, whose positions were measured with X-ray imaging, and aligned them to the MR images. A total of 687 neurons from those penetration tracks were reconstructed on the MR images of the inferotemporal cortex (Fig. 7C). Consistent with the anatomy revealed by the MRI, the neurons were predominantly located within the inferotemporal cerebral cortices and were not positioned in white matter, sulci, or areas outside the brain.

DISCUSSION

In this study, we developed a novel MRI-detectable elgiloy deposit marking method for in vivo localization of recording sites. Similar to the currently accepted gold standard method using electrolytic lesion marks, which are detectable in postmortem histology, this MRI-based approach enabled direct localization of recorded neuronal activity in vivo within highly contrasted fine brain structures. Quantitative analysis showed that the metal deposits could be localized with single-voxel accuracy at an in-plane resolution of 200 × 200 μm². We successfully demonstrated two practical applications of the deposit mark in recording experiments from a behaving monkey: reconstruction of a penetration track directly on MR images using within-track deposit marks, and transformation of X-ray-based neuronal activity positions onto MR images with reference to deposit mark positions.
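The core of the alignment step used here, a least-squares rigid-body fit between corresponding deposit-mark coordinates, can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: variable names are ours, paired X-ray-based and MRI-based coordinates for the same marks are assumed to be available, and the additional per-penetration translation term mentioned above is omitted.

```python
import numpy as np

def rigid_align(xray_pts, mri_pts):
    """Least-squares rigid-body (rotation + translation) alignment of paired
    3-D deposit-mark coordinates, xray_pts -> mri_pts.
    Both inputs are (N, 3) arrays of corresponding points.
    Returns (R, t) such that mri ~ xray @ R.T + t (standard Kabsch/Procrustes
    solution via SVD); the per-track translations described in the text are
    not modeled here."""
    X = np.asarray(xray_pts, float)
    Y = np.asarray(mri_pts, float)
    Xc, Yc = X.mean(axis=0), Y.mean(axis=0)
    H = (X - Xc).T @ (Y - Yc)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # proper rotation (det = +1)
    t = Yc - R @ Xc
    return R, t

def registration_error(xray_pts, mri_pts, R, t):
    """Per-axis mean and SD of residuals (input units) after alignment."""
    resid = np.asarray(mri_pts) - (np.asarray(xray_pts) @ R.T + t)
    return resid.mean(axis=0), resid.std(axis=0)
```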
These in vivo applications are feasible for chronic experiments using non-human primates, providing a powerful tool for electrophysiological analysis of the functional architecture of the cerebral cortex underlying cognitive processes.

Advantages of the in vivo localization and its potential drawbacks. In vivo localization offers several advantages over conventional postmortem methods. First, noninvasive imaging provides identification of recording site locations without killing animals, which enables the method to be used repeatedly to localize a large number of recording sites from an individual animal. Second, the immediate feedback during the course of the experimental sessions allows corrective adjustment of the penetration trajectory in subsequent experiments and ensures recordings from target areas at an appropriate density. Third, the use of MRI can replace laborious histological processing with a brief MRI scan that takes only a few hours. Fourth, localization in the living brain is free from the distortion effects of postmortem histological processing, providing positions within accurate 3-D coordinates. The metal deposition method described here does not require specialized equipment other than an MRI scanner. Standard microdrive manipulators can be used for electrophysiological recordings; elgiloy electrodes are now commercially available from a number of sources; and various types of direct-current power supply can be used for the deposition. Because the deposit marks remain detectable for over 1 year (Fig. 4), there is no need to acquire MR images immediately after deposition. In addition, MRI does not need to be performed frequently because multiple deposit marks can be detected simultaneously in a single MRI session. MR images can be acquired on a convenient day after several penetrations and marking procedures. Therefore, many physiological laboratories that have access to MRI can use the MRI-detectable metal deposits immediately at no additional cost. The use of elgiloy microelectrodes allowed us to record single-unit neuronal activity and substantially extend previous studies using stainless steel microelectrodes to create metal deposit marks (Fung et al. 1998; Pezaris and Dubowitz 1999). In contrast to stainless steel microelectrodes, which are generally avoided for single-unit recordings, elgiloy microelectrodes can record single-unit activity, as demonstrated in the present study (Fig. 6). The high efficacy of the elgiloy electrodes is further supported by the fact that elgiloy electrodes have already been used in many single-unit electrophysiological studies (e.g., Suzuki and Azuma 1976; Sugita 1999; Kakei et al. 2001; Yamagata et al. 2009; Hikida et al. 2010), including several studies conducted in our laboratory (Ohbayashi et al. 2003; Fukushima et al. 2004; Kamigaki et al. 2009, 2011). Compared with other standard single-unit electrodes such as tungsten, however, we have found that the efficacy of the elgiloy electrodes can occasionally be slightly worse, although this difference in efficacy between elgiloy and tungsten electrodes was small and difficult to quantify. This might be a potential downside of the elgiloy electrodes relative to electrodes produced from other materials such as tungsten, although the recording efficacy of the elgiloy electrode is high enough for single-unit studies.

Fig. 7. Across-track reconstruction of recording penetrations from a behaving monkey by a combination of MRI and X-ray imaging. A: computation of coordinate transformation from X-ray imaging to MRI by using metal deposits as within-brain local positional reference.
The metal deposits were localized within 3-D space using MRI (red circles, top left) and X-ray imaging with manipulator readings (blue circles, bottom left), and then the positions of the metal deposit marks in the X-ray coordinates were transformed and aligned to those in the MRI coordinates (right). D, dorsal coordinate from the orbitomeatal plane; A, anterior coordinate from the interaural line; L, lateral coordinate from the midline. B: distributions of registration errors, as measured by the difference in the metal deposit coordinates between MRI and transformed X-ray. Means ± SD are 16.4 ± 224.8 μm (AP, anteroposterior direction), 12.9 ± 200.4 μm (LM, lateromedial direction), and 13.8 ± 197.4 μm (DV, dorsoventral direction). C: reconstructed neuronal positions (green circles) on coronal MR images of the inferotemporal cortices. Coordinates of the neurons were determined with X-ray imaging and transformed to MRI. Neurons that were localized within a ±0.5-mm range of the imaging slice are superimposed on each MR image for display purposes. Although some recordings located farther from the center of the MR slice sometimes appeared outside the brain, these were actually within the brain on other MR image slices centered nearer to those recording sites. C, right: schematic drawings of a brain in a lateral view and coronal plane, showing positions of the MR images (red squares). Scale bar, 2 mm (C).

One of the main advantages of using elgiloy electrodes is the ability to isolate single-unit activity and create deposit marks within the same electrode track. As such, the within-track deposit marks allow the recording track to be reconstructed directly on an MR image in vivo (Fig. 6), just as electrolytic lesion marks do on the histological sections postmortem (Hubel and Wiesel 1962, 1968). This direct identification of recording positions on MR images provides definite locations relative to fine neuroanatomy and is robust to the potential tissue distortion that occurs in the course of microelectrode advancement (Bourgeois et al. 1999; Tokuno et al. 2000). Some potential drawbacks of the method described in this study must be considered, particularly when a large number of recording sites are concentrated in a small space. First, closely spaced marks are difficult to distinguish in a single MRI session. One solution for this problem is to perform MRI periodically; marks made in different recording penetrations can then be distinguished by creating an MRI database of the deposit marks at each time point. Second, repeated marking within a restricted region might accumulate damage due to the electrical current, which may affect physiological function. Although the damage around a single mark is likely to be negligible in most cases (~250 μm at <500 μC), this problem is inevitable to some extent. The close relationship between mark size and the total charge used for the deposition (Fig. 2, Fig. S1) may help minimize the effects of this damage. A fundamental solution for these drawbacks is to combine the technique with other imaging methods, as described in detail below.

Combination of MRI with other noninvasive methods. The combination of the current method with other noninvasive imaging methods for inserted microelectrodes, such as sonography (Collier et al. 1980; Tokuno et al. 2000; Glimcher et al.
2001) or X-ray imaging (Aggleton and Passingham 1981; Nahm et al. 1994; Cox et al. 2008), would be beneficial to compensate for the drawbacks of the deposit marking method. In the current study, we demonstrated the efficacy of combining X-ray imaging and MRI, using the metal deposit marks as a common positional reference (Fig. 7). Such a combination is one of the modes of use we propose for our method, which was able to achieve real-time estimation of the recording site repeatedly without leaving damage in the brain. Neurosonography is another imaging method that could potentially be used in combination with MRI. Although its spatial resolution and tissue contrast are lower than those of MRI, the real-time visualization of electrode penetration with some tissue images can reduce the risk of vessel damage and prevent severe stroke (Tokuno et al. 2000; Glimcher et al. 2001). A common problem with these combined approaches is the potential error derived from alignment processes to MR images. This error has not been quantitatively evaluated in previous studies using combined X-ray/MRI approaches for in vivo microelectrode localization (Nahm et al. 1994; Cox et al. 2008). In the current study, we measured the registration error between X-ray imaging and MRI around the target region, and showed that the error was sufficiently small (near-zero mean and standard deviations of ~200 µm) to be acceptable for most applications. One potential reason for this precise registration might be the use of the metal deposits as tissue-based positional references, which can help to map X-ray coordinates onto the tissue. To align the X-ray and MRI coordinates, previous studies used fiducial markers located outside the skull, based on the assumption that the brain and skull can be considered a single rigid body. Although this assumption may be true at a resolution in the order of millimeters, the rigidity of soft brain tissue, which floats inside the skull, should be considered carefully at a submillimeter scale. Indeed, even a simple head direction change can induce a positional change of the human brain within the skull of up to 1.7 mm due to gravitational effects (Schnaudigel et al. 2010). Because the length of the macaque monkey brain is approximately half that of the human brain, a movement of 1.7 mm of the human brain corresponds to 0.85 mm of movement in the macaque brain. Therefore, assuming that soft brain tissue and the skull constitute a single rigid body may be inappropriate when considering the localization accuracy of microelectrode recordings at resolutions in the order of a few hundred micrometers. In the current study, we used metal deposit marks as within-brain local positional references to align X-ray and MRI coordinates, under the assumption that local brain regions can be considered rigid bodies. We found that the X-ray and MRI coordinates were aligned accurately with a small registration error (~0.2 mm, as described above), suggesting that within-brain local positional references are robust against several potential error factors (such as ~0.85-mm tissue movement within the skull, although in some conditions this would be smaller) and feasible for soft and floating brain tissues. In the present study, we used 29 metal deposits from 8 penetration tracks to calculate the registration. However, in principle, registration could be calculated from fewer metal deposits. 
Although a single internal fiducial is insufficient due to the degrees of freedom in the rotation angle, more than three linearly independent fiducials enable the positions to be determined in three-dimensional space. *Application of the reconstruction procedures.* We proposed two modes of reconstruction procedure: the direct reconstruction of a penetration track onto an MR image using within-track marks (Fig. 6), and the transformation of X-ray-based coordinates on MR images using across-track marks as positional references (Fig. 7). These two procedures are complementary to each other. The former procedure has advantages in the direct and definite localization on MR images, which are robust against tissue-based errors, but possesses disadvantages in the reconstruction of a large number of recording sites from a small region of interest. In contrast, the latter procedure has advantages in the reconstruction of a large number of recording sites without causing tissue damage (Nahm et al. 1994; Cox et al. 2008), but possesses disadvantages in the potential for alignment error between X-ray-based and MRI-based coordinates. Usage of our metal deposit marks as a novel tissue-based reference frame reduces the alignment error between the X-ray-based coordinates with MR images. The most appropriate use of these two procedures depends on the specific experimental situation, including the purpose of the research, the targeted recording areas, number of necessary recording sites, access to an MRI scanner, etc. To clearly illustrate how to use these procedures, several example situations are described below. *Example 1:* in the case of distributed recording from a relatively large area such as the primary visual cortex, a dozen tracks can be reconstructed directly on MR images by leaving one or two marks on each penetration track. By leaving spaces of a few millimeters across the tracks, one can distinguish each track and thus localize all of the dozens of tracks in a single MRI scanning session, which can be performed on a day after recording and marking at the experimenter’s convenience. This mode of use would be feasible for examining fine functional architecture within the cerebral cortex in alert and behaving primates. *Example 2:* in cases where an experimenter cannot access an MRI scanner frequently and/or in case one intends to record a large number of neurons from a small restricted region, it can be useful to combine X-ray imaging and MRI, as reported in several previous studies (Nahm et al. 1994; Cox et al. 2008). Our deposit marks can be used as internal tissue-based fiducials to transfer the X-ray based coordinates onto MR images. Deposit marks can be created during some of the recording sessions and then visualized with MRI at a later date. This approach enables in vivo localization of cortical areas in which neurons were recorded. *Example 3:* if an experimenter wants to record tens of neurons from a small restricted region without causing tissue damage and directly localize the neurons on MR images, it is possible to create marks above and/or below the target region, leaving the region itself intact. By reconstructing actual recording sites from within-track marks and microdrive readings, the location of tens of neurons can be directly mapped onto MR images. 
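As a concrete illustration of this within-track reconstruction, the following minimal sketch (NumPy only; variable names, array shapes, and the straight-track, linear depth-to-position assumption are ours) maps microdrive depth readings of recorded neurons onto MR-image coordinates using the deposit marks placed on the same track.

```python
import numpy as np

def reconstruct_on_track(mark_depths_mm, mark_xyz_mri, neuron_depths_mm):
    """Estimate neuron positions on an MR image from within-track marks.

    mark_depths_mm   : (M,) microdrive readings at which deposit marks were made
    mark_xyz_mri     : (M, 3) corresponding mark coordinates on the MR image
    neuron_depths_mm : (K,) microdrive readings of the recorded neurons
    Returns a (K, 3) array of estimated neuron coordinates in MRI space.
    Assumes a straight track and a linear depth-to-position relation, which is
    a simplification of the alignment described in the text."""
    d = np.asarray(mark_depths_mm, float)
    P = np.asarray(mark_xyz_mri, float)
    q = np.asarray(neuron_depths_mm, float)
    # Fit position as a linear function of depth (least squares over all marks),
    # then evaluate that line at the neuron depths (interpolation/extrapolation).
    A = np.c_[d, np.ones_like(d)]                 # (M, 2) design matrix
    coef, *_ = np.linalg.lstsq(A, P, rcond=None)  # (2, 3): slope and intercept
    return np.c_[q, np.ones_like(q)] @ coef
```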
In a recently published electrophysiological paper from our laboratory, a small target region (area 35 at the fundus of the rhinal sulcus) was localized in vivo with this procedure (by marking onto the amygdala just above area 35; Fujimichi et al. 2010). The three examples above are not mutually exclusive: e.g., the approach in example 2 could be used first to map recorded neurons in a broader area, followed by the use of the approach in example 3 to examine fine functional structures at important recording areas of interest. This combination would be useful for studying functional architecture within a cluster of neurons related to specific cognitive functions, such as the "face patch," consisting of face-selective neurons in the inferotemporal cortices (Tsao et al. 2006; Freiwald et al. 2009), and the "hot spot," consisting of pair-coding neurons in the perirhinal cortex (Naya et al. 2001; Yoshida et al. 2003).

**Metal deposit visualization with MRI.** The deposited metal we detected as azure spots by the Prussian blue reaction is likely to be iron, since the reaction products of other metals produce different colors (Sharpe 1976). Iron and the other ferromagnetic metals composing elgiloy (nickel and cobalt) could potentially cause susceptibility artifacts on MR images (Ho and Shellock 1999; Matsuura et al. 2002). Susceptibility artifacts induce geometrical distortion and signal loss due to intravoxel phase dispersion (the so-called T2* effect) at local regions around metal deposits (Luedeke et al. 1985; Posse and Aue 1990), generating hypointense spots as observed on the MR images in this study. The strength of the susceptibility artifacts correlates with the amount of ferromagnetic metal (Allkemper et al. 2004; Hardy et al. 2005), as reflected in the close relationship between the metal deposit appearance and the total charge used for the deposition (Fig. 2). In gradient-echo sequences, the T2* effect is larger than the geometrical distortion, and the artifact thus depends strongly on TE (Fig. 3F; Ericsson et al. 1988; Posse and Aue 1990). In contrast, spin-echo sequences are less sensitive to susceptibility artifacts, because the T2* effect is reduced by the refocusing pulse (Luedeke et al. 1985; Ericsson et al. 1988; Posse and Aue 1990), resulting in a smaller metal deposit size compared with the gradient-echo sequence (Fig. 2). Because, in spin-echo sequences, the effect of geometrical distortion is larger than the T2* effect (Posse and Aue 1990), the artifact size in these sequences depends on parameters related to geometrical encoding, such as bandwidth and frequency encoding direction (Fig. 3, C and D). The number of refocusing pulse repetitions in the FSE, namely the echo train length, also affected the size of the resulting deposit marks (Fig. 3E; Reimer et al. 1996). The use of spin-echo sequences would be appropriate for localizing the deposit mark positions accurately, because they are less sensitive to magnetic field inhomogeneity and can thus minimize global tissue distortion. Although the gradient-echo sequence is relatively sensitive to the magnetic field inhomogeneity, its larger signal and higher sensitivity to susceptibility artifacts might be useful when searching for metal deposits in initial exploratory scans of each MRI session. The most appropriate scanning parameters depend on several factors in individual experiments, such as the imaging contrast required, the total scanning time available, and/or the specifications of the MRI scanner.
Here, we reconstructed recorded neurons on T2-weighted FSE images taken at a spatial resolution of 0.15 × 0.15 × 0.8 mm³ with 120/4,500 ms of TE/TR and 8 echo train lengths (Figs. 6 and 7). In accordance with the high (0.15 × 0.15 mm²) in-plane resolution, we created most of the metal deposits at 2 μA for 180 s, whose diameters ranged between 1 and 3 voxels on the MR images. Using the above scanning and depositing parameters, we successfully visualized metal deposits on highly contrasted fine brain structures. We used two-dimensional MRI sequences and found that the metal deposits could be localized with single-voxel accuracy at an in-plane resolution of 200 × 200 μm². However, the slice thicknesses were larger than the in-plane resolutions in these two-dimensional imaging sequences. The accuracy along the normal direction to the imaging plane would be expected to be less than that within the plane, since the spatial resolution is worse in the former direction for the two-dimensional imaging sequences. There are two possible methods for achieving higher accuracy along the normal direction to an imaging plane: one method is to acquire MR images with a thinner slice thickness using a higher-gradient magnetic field, at the cost of a lower signal-to-noise ratio of the image; the other method is to also acquire MR images in another direction, at the cost of doubled scanning time. **Possible future applications.** An increasing number of laboratories have recently been using functional MRI (fMRI) in nonhuman primates as a navigation tool to target microelectrode recordings. Researchers can identify multiple responsive regions at the whole brain level using fMRI, then investigate the electrical activity of neurons with a high spatio-temporal resolution using microelectrode recordings (Sawamura et al. 2006; Tsao et al. 2006; Freiwald et al. 2009). Since the metal deposit marks are directly detectable in the MR images, it is straightforward to use these marks to compare the location of electrophysiological recordings with that of observed fMRI activity. In this study, we localized metal deposits at an in-plane resolution of 200 μm using a 4.7 T MRI system and a surface receiver radiofrequency coil. MRI technology has continuously advanced in recent decades, as the spatial resolution in recent monkey fMRI studies has been greatly improved by the use of iron oxide (Vanduffel et al. 2001; Ekstrom et al. 2008). Higher spatial resolution and image contrast of anatomical images have been enabled by recent advancements in MRI technologies, such as ultra-high magnetic fields (Logothetis et al. 2002; Pfeuffer et al. 2004; Vaughan et al. 2006), implantable surface coils (Logothetis et al. 2002; Pfeuffer et al. 2004), parallel imaging systems (Ekstrom et al. 2008; Kolster et al. 2009; Wiggins et al. 2009), cryogenic probes (Darrasse and Giniferi 2003; Baltes et al. 2009), and manganese-enhancement imaging (Silva et al. 2008). In the future, these technologies are likely to allow in vivo localization of recorded neurons at a resolution of tens of micrometers (Boretius et al. 2009; Baltes et al. 2009) with highly contrasted cortical layer structures (Fatterpekar et al. 2002; Barbier et al. 2002; Walters et al. 2007; Boretius et al. 2009). **ACKNOWLEDGMENTS** We thank Tomomi Watanabe for technical assistance, Yuji Naya for helpful comments and discussions, and Takahiro Osada and Yusuke Adachi for support with the MRI. 
**GRANTS** This work was supported by Grant-in-Aid 19002010 for Specially Promoted Research from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) to Y. Miyashita, Grant-in-Aid 21700342 for Young Scientists from MEXT to M. Takeda, a grant from the Takeda Science Foundation to Y. Miyashita, and Japan Society for the Promotion of Science (JSPS) Research Fellowships for Young Scientists to K. W. Koyano (200956), T. Matsui (218747), and Y. Ohashi (195225). This work was also supported in part by the Global COE Program (Integrative Life Science Based on the Study of Biosignaling Mechanisms) from MEXT. K. W. Koyano, T. Matsui, and Y. Ohashi are JSPS Research Fellows.

**DISCLOSURES** No conflicts of interest, financial or otherwise, are declared by the author(s).
Mapping and classification of volcanic deposits using multi-sensor unoccupied aerial systems Brett B. Carr, Einat Lev, Theresa Sawi, Kristen A. Bennett, Christopher S. Edwards, S. Adam Soule, Silvia Vallejo Vargas, Gayatri Indah Marliyani Lamont-Doherty Earth Observatory, Columbia University, 61 Route 9W, Palisades, NY 10964, USA U.S. Geological Survey, Hawaiian Volcano Observatory, 1266 Kamehameha Ave., Suite A8, Hilo, HI 96720, USA U.S. Geological Survey, Astrogeology Science Center, 2255 N. Gemini Dr., Flagstaff, AZ 86001, USA Department of Astronomy and Planetary Science, Northern Arizona University, PO Box 6010, Flagstaff, AZ 86011, USA Woods Hole Oceanographic Institution, 266 Woods Hole Road, MS #24, Woods Hole, MA 02543, USA Instituto Geofísico, Escuela Politécnica Nacional, Ladron de Guevara E11-253, Quito 170025, Ecuador Department of Geological Engineering, Faculty of Engineering, Universitas Gadjah Mada, PT UGM Jl. Grajagan No. 02, Yogyakarta 55281, Indonesia ARTICLE INFO Edited by: Jing M. Chen Keywords: Thermal remote sensing Unoccupied aerial systems (UAS) Lava flows Land surface classification Mapping of volcanic deposits ABSTRACT The deposits from volcanic eruptions represent the record of activity at a volcano. Identification, classification, and interpretation of these deposits are crucial to the understanding of volcanic processes and assessing hazards. However, deposits often cover large areas and can be difficult or dangerous to access, making field mapping hazardous and time-consuming. Remote sensing techniques are often used to map and identify the deposits of volcanic eruptions, though these techniques present their own trade-offs in terms of image resolution, wavelength, and observation frequency. Here, we present a new approach for mapping and classifying volcanic deposits using a multi-sensor unoccupied aerial system (UAS) and demonstrate its application on lava and tephra deposits associated with the 2018 eruption of Sierra Negra volcano (Galápagos Archipelago, Ecuador). We surveyed the study area and collected visible and thermal infrared (TIR) images. We used structure-from-motion photogrammetry to create a digital elevation model (DEM) from the visual images and calculated the solar heating rate of the surface from temperature maps based on the TIR images. We find that the solar heating rate is highest for tephra deposits and lowest for pahoehoe lava, with a'a lava having intermediate values. This is consistent with the solar heating rate correlating to the density and particle size of the surface. The solar heating rate for the lava flow also decreases with increasing distance from the vent, consistent with an increase in density as the lava degasses. We applied both supervised and unsupervised machine learning algorithms. A supervised classification method can replicate the manual classification while the unsupervised method can identify major surface units with no ground truth information. These methods allow for remote mapping and classification at high spatial resolution (< 1 m) of a variety of volcanic deposits, with potential for application to deposits from other processes (e.g., fluvial, glacial) and deposits on other planetary bodies. 1. Introduction Characterization of the morphology and physical characteristics (e.g., grain size, density) of volcanic deposits such as lava flows, tephra, and pyroclastic density currents (PDCs) is fundamental to the ability to understand the eruption and emplacement processes that produced the deposits. 
Insight into eruption processes gained from the study of their deposits is key to interpreting the history and eruptive potential of volcanic areas, especially in cases where eruptions were not directly observed. Detailed knowledge of the volcanic history of a region facilitates progress towards many objectives, from hazard assessment and mitigation for future eruptions to investigations of the evolution of planetary surfaces. Lava flow morphology can be used to infer lava properties (e.g., viscosity, temperature) and emplacement dynamics (Fink and Griffiths, 1992; Griffiths, 2000). As an example, for a similar lava viscosity, ‘a’a morphology is indicative of a higher flow rate relative to pāhoehoe morphology. For a similar flow rate, ‘a’a indicates higher viscosity (Lipman and Banks, 1987; Whelley et al., 2017). Different lava flow morphologies are characterized by different surface roughness. At length scales of 1–10 m, pāhoehoe lava generally has a smoother texture than ‘a’a (Lipman and Banks, 1987; Whelley et al., 2017). Lava flows are in general rougher than tephra deposits, which consist of smaller clasts of ash and scoria. Depositional distinctions also exist among PDC deposits. Debris avalanches, pyroclastic flows, and pyroclastic surges have characteristic grain size distributions, morphology, or roughness relative to each other (Charbonnier and Gertisser, 2008; Whelley et al., 2014; Solikhin et al., 2015). Unfortunately, volcanic deposits can be difficult and dangerous to access and navigate on foot. In addition, deposit areas are often too extensive or remote to be mapped effectively with limited time or personnel. The ability to remotely describe and quantify volcanic deposits is thus highly valuable, and numerous remote sensing techniques utilizing ground-based, airborne, and satellite instruments have been developed to observe a variety of volcanic processes and deposits (e.g., Wooster et al., 2000; Whelley et al., 2014; Solikhin et al., 2015; Ganci et al., 2018; Pallister et al., 2019; Corradino et al., 2019). The rise of unoccupied aerial systems (UAS) technology in recent years as a cost-effective and efficient means to conduct airborne surveys has further facilitated several advancements in volcanological mapping (James et al., 2020a, and references therein). We describe a new approach in which we classify and map lava flow morphology and tephra deposits from a volcanic eruption by combining data derived from UAS-mounted visual and thermal infrared (TIR) cameras. From the collected images, we produced two separate remotely sensed data sets describing the study area’s surface roughness and its solar heating rate (a proxy for its thermal inertia). Such data sets are often used to characterize surface types, albeit separately.
They are sensitive to different properties of the surface, and thus their combination allows for more accurate and consistent identification of surface types than either quantity alone. Based on the mapped locations of the surface types we classify, we can make interpretations about the mechanisms of flow emplacement during the eruption. Additionally, roughness and thermal inertia proxies are commonly calculated from satellite data, and our technique deriving these values from UAS surveys provides comparable datasets with an increase in spatial resolution of more than an order of magnitude in most cases. This method has applications for using UAS to better understand the history of eruptive processes in volcanic areas by facilitating multi-scale investigations of volcanic deposits and improving the safety and efficiency of field mapping.

1.1. Surface roughness of volcanic deposits

The surface roughness of volcanic deposits can be used to identify different morphological units using remotely sensed data. Roughness is typically calculated from a digital elevation model (DEM) that can be produced using radar (Morris et al., 2008; Richardson and Karlstrom, 2019), light detection and ranging (LiDAR) (Mazzarini et al., 2009; Whelley et al., 2014; Whelley et al., 2017), or photogrammetric (Bretar et al., 2015) data sources that can be satellite-, airborne-, or ground-based. However, there is no standard unit or method for calculating roughness. Grohmann et al. (2011) evaluated several methods, including: surface area to plan area ratio, surface normal vector dispersion, the standard deviation of elevation, the standard deviation of residual topography after subtracting a smoothed DEM, the standard deviation of slope, and the standard deviation of profile curvature. These methods all invoke a “neighborhood” (i.e., a moving window of a given size), where the roughness is determined by comparing the pixel values within a region centered on the pixel for which the roughness value will be assigned. The size of the region/moving window is determined by the user for reasons that can include the DEM resolution and the scale of interest (e.g., a smaller window size is more sensitive to relatively minor topographic changes whereas a larger window size will better generalize the terrain) (Shepard et al., 2001). Of the methods evaluated, Grohmann et al. (2011) found the standard deviation of slope to be the preferred method for geomorphology, citing the simplicity of the calculation, detection of both fine and regional scale relief, and consistent performance regardless of DEM or moving window scale. For applications of surface roughness focused only on local (as opposed to regional) roughness features, a common technique is to first detrend the DEM to remove background or regional slopes (Shepard et al., 2001; Whelley et al., 2014; Whelley et al., 2017; Richardson and Karlstrom, 2019). Roughness can then be calculated by various methods, including the standard deviation or root-mean-square of the residual elevations (Whelley et al., 2014), or the application of a 2D discrete Fourier transform (Richardson and Karlstrom, 2019). Roughness derived from LiDAR surveys has been used to classify and map volcanic deposits including both pyroclastic deposits (Mazzarini et al., 2009; Whelley et al., 2014) and lava flows (Morris et al., 2008; Whelley et al., 2017).
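As a concrete illustration of one of these measures, the following minimal sketch (NumPy/SciPy; the window sizes are arbitrary illustrative choices, not values from any of the cited studies) computes roughness as the local standard deviation of residual topography after subtracting a smoothed DEM.

```python
import numpy as np
from scipy import ndimage

def local_roughness(dem, smooth_win=15, stat_win=5):
    """Roughness as the local standard deviation of residual topography.

    1) Detrend the DEM by subtracting a moving-average surface, then
    2) compute the standard deviation of the residuals in a moving window.
    dem        : 2-D array of elevations (NaN-free)
    smooth_win : smoothing window (pixels) used for detrending
    stat_win   : window (pixels) over which the standard deviation is taken
    Window sizes are illustrative assumptions."""
    dem = np.asarray(dem, float)
    residual = dem - ndimage.uniform_filter(dem, size=smooth_win)
    mean = ndimage.uniform_filter(residual, size=stat_win)
    mean_sq = ndimage.uniform_filter(residual**2, size=stat_win)
    var = np.clip(mean_sq - mean**2, 0.0, None)   # guard tiny negative values
    return np.sqrt(var)
```

The standard deviation of slope preferred by Grohmann et al. (2011) could be obtained the same way by first replacing the residual elevations with a slope map (e.g., from the DEM gradient).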
These studies applied statistical analyses either directly to the elevation data (Morris et al., 2008; Mazzarini et al., 2009) or to roughness values derived from residual elevations after detrending the DEM (Whelley et al., 2014, 2017). The statistical measures allowed for multi-component analyses that showed grain size and deposit thickness control roughness in pyroclastic deposits (Mazzarini et al., 2009) and identified how measurements of roughness varied depending on both lava flow morphology and the spatial resolution at which the roughness was calculated. Whelley et al. (2014, 2017) found the mean roughness value, homogeneity, and entropy calculated for a neighborhood around each pixel to be best at distinguishing different surface types. Using this technique, distinct mappable units in the Mt. St. Helens pumice plain (e.g., channels, pumice lobes, debris avalanches; Whelley et al., 2014) and the 1974 Mauna Ulu lava flow (e.g., ‘a’a, pāhoehoe, slabby pāhoehoe, overflow ‘a’a; Whelley et al., 2017) were identified by visually grouping areas sharing similar roughness texture statistics.

1.2. Thermophysical properties of volcanic deposits

Different surface types or morphologies also vary in their thermophysical properties, such as thermal inertia, and can be identified using TIR remote sensing (Ramsey and Fink, 1999; Price et al., 2016; Ramsey et al., 2016; Simurda et al., 2020). Thermal inertia is a physical material property that is related to the resistance to temperature change and is commonly derived by modeling observations of the diurnal temperature response of a surface (Ramsey et al., 2016; Simurda et al., 2020). Thermal inertia (TI) is defined as:

\[ TI = \sqrt{k \rho c} \] (1)

where \( k \) is the thermal conductivity (J s\(^{-1}\) m\(^{-1}\) K\(^{-1}\)), \( \rho \) is the density (kg m\(^{-3}\)), and \( c \) is the specific heat (J K\(^{-1}\) kg\(^{-1}\)), such that the units of thermal inertia are J m\(^{-2}\) K\(^{-1}\) s\(^{-1/2}\). In general, lower thermal inertia (low resistance to temperature change) is associated with finer-grained and/or unconsolidated material (such as dust or sand) while higher thermal inertia (high resistance to temperature change) corresponds to larger particle sizes and/or densely packed grains (i.e., bedrock) (Ramsey et al., 2016; Fergason et al., 2006; Price et al., 2016; Simurda et al., 2020). Thermal inertia is sensitive to grain size because the number of grain-to-grain contacts decreases with increasing particle size (as the solid phases have significantly higher thermal conductivity than the pore-filling gases). Thus, larger particles can more efficiently conduct heat into the sub-surface as compared to smaller particles, where the numerous grain-to-grain contacts restrict the thermal conductivity. Over a diurnal cycle of solar heating, lower thermal inertia materials/surfaces will heat (and cool) faster compared to high thermal inertia materials/surfaces, ultimately reaching higher daytime temperatures and lower nighttime temperatures. As the material properties used in Eq. 1 cannot be measured remotely, an apparent thermal inertia (ATI) has been defined as

\[ ATI = \frac{1 - \alpha}{\Delta T} \] (2)

where \(\alpha\) is the albedo of the land surface over the visible/near-infrared and short-wave infrared wavelengths and \(\Delta T\) is the difference in brightness temperature between day and night TIR images (Price, 1977).
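The two quantities defined by Eqs. 1 and 2 are simple to evaluate numerically. The sketch below (NumPy only) transcribes them directly; the numerical inputs in the example calls are hypothetical placeholder values, not measurements from this study.

```python
import numpy as np

def thermal_inertia(k, rho, c):
    """Eq. 1: TI = sqrt(k * rho * c), in J m^-2 K^-1 s^-1/2, from thermal
    conductivity k (J s^-1 m^-1 K^-1), density rho (kg m^-3), and specific
    heat c (J K^-1 kg^-1)."""
    return np.sqrt(k * rho * c)

def apparent_thermal_inertia(albedo, t_day, t_night):
    """Eq. 2: ATI = (1 - albedo) / dT, where dT is the day-night difference
    in brightness temperature (per pixel of co-registered TIR maps)."""
    dT = np.asarray(t_day, float) - np.asarray(t_night, float)
    return (1.0 - np.asarray(albedo, float)) / dT

# Hypothetical inputs, for illustration only:
print(thermal_inertia(1.0, 2500.0, 800.0))           # ~1414 for dense-rock-like values
print(apparent_thermal_inertia(0.1, 310.0, 290.0))   # 0.045
```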
ATI is inversely proportional to the temperature difference in two thermal images acquired over the diurnal cycle (Simurda et al., 2020). Measurements of ATI or, more generally, the heating rate between observations of a shorter duration, have been used as a proxy for thermal inertia, and utilize data from TIR sensors on satellites (Price, 1977; Scheidt et al., 2010; Price et al., 2016; Simurda et al., 2020) or, in our case, a UAS. Thermal inertia is commonly used to investigate planetary surfaces, notably Mars, where the value can provide information on the degree of mantling by dust and the presence of exposed bedrock (Ramsey et al., 2016; Fergason et al., 2006). Crown and Ramsey (2017) found highly variable thermal signals on small spatial scales for lava flows in Arsia Mons, indicating complex relations between the rough and blocky surface of lava flows and mantling by fine-grained material. For terrestrial targets, the Earth’s thick and highly variable atmosphere and vegetation complicate estimates of thermal inertia from satellite (Price, 1977; Simurda et al., 2020). However, satellite-derived ATI has been used to investigate, for example, soil moisture (Price, 1977; Scheidt et al., 2010) and tephra mantling of lava flows (Price et al., 2016; Simurda et al., 2020). Simurda et al. (2020) have recently demonstrated that relating grain size to heating rate or ATI observed from orbit is influenced by sub-pixel roughness, where surfaces with different roughness characteristics (and thus, different ATI) can be present within a single pixel (90 m spatial resolution in Simurda et al., 2020). They found that the highest ATI values were associated not with surfaces with predominantly coarse-size grains as would be expected, but rather with surfaces containing moderate-sized grains. The ATI of surfaces with coarse grain sizes, they discovered, was lowered due to self-shadowing and the trapping of fines by the coarse grains. In our study investigating relative thermal inertia we utilize both a multi-sensor approach and a high spatial resolution visible dataset, as recommended in Simurda et al. (2020). Data from TIR sensors are also sensitive to density (e.g., Eq. 1). Ramsey and Fink (1999) demonstrated this concept for volcanic deposits, using multi-band airborne TIR imagery to quantify the vesicularity of silicic lava flows. 1.3. Sierra Negra volcano, Ecuador Sierra Negra volcano (1124 m a.s.l.) is one of six large basaltic shield volcanoes that form Isla Isabela in the Galápagos Archipelago of Ecuador. The volcano is characterized by a large (7 × 10.5 km) summit caldera. Sierra Negra erupts frequently, with events in 2018, 2005, and 1979 (Geist et al., 2007; Vasconez et al., 2018). Activity during recent eruptions has been focused along fissures on the northern crater rim and on the north flank, which fed lava flows that traveled both down the north flank and into the caldera (Geist et al., 2007; Vasconez et al., 2018). The 2018 eruption of Sierra Negra began on June 26. A series of fissures (Fig. 1a) opened along and to the north of the north rim of the summit caldera (Vasconez et al., 2018). Lava flows descended as far as 7 km down the north slope of the volcano, and one flow went into the caldera. This phase of the eruption with multiple active fissures lasted less than 24 h and the emplacement of lava flows was complete within 1–2 days (Vasconez et al., 2018). Following the 26 June activity, the eruption moved downslope to the northwest. Sustained effusion from fissure 4 (Fig. 
1a) fed a large lava flow field that entered the ocean (Vasconez et al., 2018). The eruption ended on 23 August 2018. We visited the eruption site in October 2018, roughly four months after the summit eruption ended. The lava flows observed for this study (Fig. 1b) were emplaced on terrain consisting of lava flows, tephra, craters, and fissures from previous eruptions. Pahoehoe morphology dominated in proximity to the vents but transitioned quickly (within a few hundred meters or less) to ‘a’a for most of the flow length. Our study area (Fig. 1b) is a roughly 0.5 km² region that includes two vents from fissure 1 (to the immediate west and northeast of Region 3, Fig. 1b). Each vent fed a lava flow that traveled downslope to the north. Multiple flow branches and lobes break off from the main channels, and in one place the lava has filled the floor of a crater (Region 19, Fig. 1b). Flow thickness is generally no more than a few meters (Vasconez et al., 2018).

2. Methods

2.1. UAS surveys

We used a DJI Matrice 210 (M210) quadcopter for this study. The M210 has two gimbal-stabilized camera mounts, on which a DJI Zenmuse X4S visual camera and a DJI Zenmuse XT TIR camera were mounted. The Zenmuse X4S has a 20-megapixel (MP), 1” CMOS (complementary metal oxide semiconductor) sensor with a mechanical shutter and an 8.8 mm focal length. The Zenmuse XT is built around a FLIR Tau core with a 640 × 512 pixel (0.3 MP) sensor and a 30 Hz frame rate. The sensor has a sensitivity of < 50 mK, an accuracy of ±5 °C, and is sensitive to a temperature range of −25 °C to 135 °C in the high-gain setting and to a wider range extending to 550 °C in the low-gain setting. As the deposits surveyed in this study had cooled from their original emplacement temperatures, we used the high-gain setting. Equipped with these cameras, the M210 has a maximum flight time of approximately 25 min. The location of the survey region within the extent of recent eruptive deposits (Fig. 1) was chosen such that multiple types of volcanic deposits (e.g., tephra and different lava flow morphologies) would be present in the resulting data products. The size of the region was limited by the area that could be surveyed by the UAS in one flight and was the result of a balance between the desired diversity of deposits and map spatial resolution (a function of the height above ground of the UAS flight). For each flight, the two cameras were synced and set to capture an image every three seconds with a slightly forward-looking viewing angle of 10° off-nadir. We flew a series of adjacent back-and-forth swaths (i.e., a “lawnmower” flight pattern) over the survey region at approximately 150 m above ground level. Image overlap for both the visual and TIR images in both the flight direction and between flight lines was generally about 75%, with flight speed adjustments and varying ground elevation causing overlap for individual images to range from 50 to 90%. We conducted three UAS flights with the TIR camera and measured the temperature change of the volcanic deposits in our study area due to solar heating. Solar heating rate is highest and the difference in heating rate among surfaces is most pronounced in the hours immediately following sunrise (Price, 1977). The UAS survey was conducted at Sierra Negra on October 22, 2018, and UAS takeoff times were at 7:11 am, 8:18 am, and 9:35 am local time (UTC-06) (Table 1).
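As a rough illustration of this kind of survey planning, the sketch below estimates ground sample distance (GSD) and forward overlap for a nadir-looking camera. Only the focal length (8.8 mm) and flight height (150 m) come from the text; the sensor dimensions, image width, and flight speed are assumed illustrative values, and the 10° off-nadir tilt is ignored.

```python
# Focal length and flight height are from the text; sensor size (13.2 x 8.8 mm),
# image width (5472 px), and flight speed (6 m/s) are assumptions for illustration.
focal_mm, height_m = 8.8, 150.0
sensor_w_mm, sensor_h_mm, image_w_px = 13.2, 8.8, 5472
speed_m_s, trigger_s = 6.0, 3.0

scale = height_m / (focal_mm / 1000.0)                  # ground distance per unit focal length
footprint_across_m = (sensor_w_mm / 1000.0) * scale     # ~225 m across track
footprint_along_m = (sensor_h_mm / 1000.0) * scale      # ~150 m along track
gsd_m = footprint_across_m / image_w_px                 # ~0.04 m per pixel

baseline_m = speed_m_s * trigger_s                      # distance flown between exposures
forward_overlap = 1.0 - baseline_m / footprint_along_m  # ~0.88 at the assumed speed

print(f"GSD ~ {gsd_m:.3f} m, forward overlap ~ {forward_overlap:.0%}")
```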
While two flights are sufficient to determine solar heating rate, three or more measurements provide higher accuracy and data redundancy. Each flight lasted approximately 24 min and an average of 450 images were acquired per camera per flight (Table 1). Sunrise on this day was at 5:44 am. Ideally, when applying this technique, the first UAS flight should occur prior to sunrise to observe conditions with no solar heating. However, on this day fog was present at sunrise, preventing UAS flight but also limiting solar heating of the ground. We flew the first flight as soon as the fog cleared to minimize the effect of solar heating in our first thermal survey.

2.2. Photogrammetric processing

We applied structure-from-motion (SfM) photogrammetry to create DEMs and orthophotos from the images taken during UAS flights (e.g., James and Robson, 2012; Bemis et al., 2014; James et al., 2019). We used Agisoft Metashape® version 1.5 for SfM processing. The location of each image tagged by the on-board GPS of the M210 provided the spatial information for the resulting models. The ‘high’ setting in Metashape® (which means the images were processed at their original size, without downsampling) was used for both the initial alignment and generation of the dense cloud for all models. We generated orthophotos and DEMs for both the visual and TIR images from each flight. All products from SfM processing with Metashape® were exported with identical spatial resolution and boundary coordinates such that the pixel locations are identical for calculating the solar heating rate of the surface. Metashape® uses bilinear interpolation to vary the spatial resolution when exporting DEMs and orthophotos. We selected a spatial resolution of 0.20 m for these DEMs and orthophotos, based on rounding up from the lowest resolution temperature map of the three flights (Table 1).

2.3. Roughness

We calculated roughness from the DEM created from the visual images taken during the second of the three survey flights. This flight had better spatial coverage compared to the other two, and the visual images produced higher resolution DEMs and orthophotos compared to the TIR.

Table 1

| Flight | Time* | Flight Duration (min) | Image Type | Photos in Model | Dense Cloud Points | Orthophoto Resolution (m) | DEM Resolution (m) | Alignment Error (m)† | Tmin (°C)‡ | Tmax (°C)‡ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 07:15 | 24 | TIR | 474 | 4,861,830 | 0.17 | 0.19 | – | 7.9 | 134.6 |
| 2 | 08:30 | 24 | TIR | 490 | 5,153,991 | 0.19 | 0.18 | – | 10.7 | 124.2 |
| 3 | 09:45 | 22 | Visual | 487 | 161,460,941 | 0.04 | 0.08 | NA | – | – |

Notes:
- * UAS in flight at this time. Takeoff times were: 7:11 am, 8:18 am, and 9:35 am. Times are local (GMT-06).
- † Control point alignment error of TIR orthophoto to Flight 2 visual orthophoto (17 control points used).
- ‡ Maximum and minimum temperatures for any single pixel in any single image from the specified flight; used for the linear scaling (Eq. 3).
Following the workflow of James et al. (2020b), we estimated that the average vertical error of the DEM is 0.15 m. This suggests that the DEM is sensitive to elevation changes between neighboring pixels on the order of 0.1 m. From the DEM, we calculated the slope and aspect of the terrain. Following Grohmann et al. (2011), we calculated surface roughness as the standard deviation of the slope values within a 5 × 5 pixel moving window. This resulted in roughness sensitive to variations with a lateral extent on the order of 1 m (five pixels with 0.2 m spatial resolution).
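A minimal sketch of this roughness calculation (slope from the DEM, then the standard deviation of slope in a 5 × 5 pixel moving window) might look as follows. The use of NumPy/SciPy and the input file name are assumptions for illustration; the study's actual processing used Matlab®.

```python
import numpy as np
from scipy.ndimage import generic_filter

def slope_degrees(dem, cell_size=0.2):
    """Slope of a DEM from finite differences, in degrees."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def roughness(dem, cell_size=0.2, window=5):
    """Roughness as the standard deviation of slope in a moving window
    (Grohmann et al., 2011); 5 x 5 pixels at 0.2 m gives a ~1 m footprint."""
    slope = slope_degrees(dem, cell_size)
    return generic_filter(slope, np.std, size=window, mode="nearest")

# dem = np.load("study_area_dem.npy")   # hypothetical input array
# rough = roughness(dem)
```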
2.4. Solar heating rate maps

Agisoft Metashape® is only compatible with integer digital number (DN) images. When the radiometric JPEG (joint photographic experts group) images from the Zenmuse XT are loaded into Metashape®, the temperature values are automatically converted to a single-band grayscale 8-bit DN (DN values from 0 to 255). To create a photogrammetric model that we can correlate to temperature values, the images captured by the TIR camera must be uniformly converted to DN. We first used the ResearchIR® software from FLIR to convert all images to a tagged image format (TIF) file that could be read by Matlab® (among other software platforms). For each group of TIR images that was used to create one photogrammetric model in Metashape® (in this case, all the images captured during a single flight), we performed a linear scaling of the temperature values over the entire image set and converted to a 16-bit DN (DN values from 0 to 65,535). This conversion gives a DN value of 0 to the lowest temperature recorded in a pixel in any image in the set (\( T_{\text{min}} \)) and a DN value of 65,535 to the highest pixel temperature found in any image (\( T_{\text{max}} \)). The temperature of any pixel \( i \) in any image in the set, \( T_i \), is then assigned a DN value by

\[ DN_i = 65535 \times \left( \frac{T_i - T_{\text{min}}}{T_{\text{max}} - T_{\text{min}}} \right) \tag{3} \]

and \( DN_i \) is then rounded to the nearest integer. As the range between the maximum and minimum temperature in an image set for this study is < 130 °C (Table 1), each 16-bit DN value represents a step of < 0.002 °C. Given that the TIR camera has a sensitivity of < 50 mK (< 0.05 °C), the temperature scaling preserves the precision of the original measurement.

We used the linearly scaled DN images as the input to Metashape® to create the temperature maps, processed as described in Section 2.2. To generate an orthophoto from TIR images, Metashape® effectively averages the DN for a location as it appears in multiple individual images to produce a single value for the orthophoto. This will smooth details such as local maxima and minima but also minimize the effect of outlying values due to sensor measurement error (±5 °C for the Zenmuse XT). For terrain with broadly uniform or gradual changes in surface temperature (such as our study area), the result is a temperature map with relative errors no larger than the measurement error in the images.

We provided spatial reference using control points (markers) in Metashape® to align the temperature maps to the Flight 2 visual orthophoto. The control points were locations identifiable in both the visual and TIR orthophotos, and we used coordinates for the points derived from the visual orthophoto to ensure as precise an alignment of the temperature maps as possible. Using 17 control points, we achieved sub-spatial-resolution alignment accuracy (< 0.2 m) for all three temperature maps (Table 1). It is also possible (and common) to use a mapping GPS unit to survey control points and provide spatial reference for this type of UAS survey (James et al., 2019). However, we do not do this here because 1) distributing and measuring control points before the UAS survey was not ideal, as it would have involved doing so in the dark (pre-sunrise) or leaving the control point markers in the field overnight, and 2) this allows us to present a method that can be utilized in cases where surveying control points via GPS is similarly not feasible or impossible.

To produce the final temperature maps, we first exported the orthophoto created in Metashape® from the scaled images. Next, in Matlab®, we converted the 16-bit DN values back to temperature using the inverse of Eq. (3) (i.e., solving for \( T_i \) when knowing \( DN_i \), rather than solving for \( DN_i \) knowing \( T_i \), as shown). Both the sensor measurement error and the SfM processing impact the accuracy of the temperature images (Table 1).

The solar heating rate for the surveyed region was calculated by fitting a linear best-fit line through the three temperature values for each pixel from the three temperature maps. We chose units for the solar heating rate of °C hr⁻¹ for this study.
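A compact sketch of this per-pixel fit is shown below. It assumes the three co-registered temperature maps are already loaded as NumPy arrays of identical shape and that flight times are expressed in hours; these are illustrative assumptions rather than the study's actual (Matlab®-based) implementation, and masked or no-data pixels are not handled.

```python
import numpy as np

def solar_heating_rate(temp_maps, times_hr):
    """Per-pixel linear fit of temperature versus time.

    temp_maps : list of 2-D arrays (degC), co-registered, same shape
    times_hr  : acquisition times in hours (e.g., [7.25, 8.5, 9.75])
    Returns the slope map in degC per hour.
    """
    stack = np.stack(temp_maps, axis=0)      # shape (n_times, rows, cols)
    t = np.asarray(times_hr, dtype=float)
    flat = stack.reshape(len(t), -1)         # one column per pixel
    slope, _ = np.polyfit(t, flat, deg=1)    # vectorized least-squares fit
    return slope.reshape(stack.shape[1:])

# heating_rate = solar_heating_rate([t1, t2, t3], [7.25, 8.5, 9.75])
```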
Fig. 3 (caption fragment). The blue line running down each flow branch is the profile line for Fig. 5. Numbered boxes are the regions used to define the manual classification. The 2018 lava flows are outlined in black. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. Roughness of Sierra Negra surfaces. The surface roughness of the study area shows that the smoothest surfaces (lighter red) are tephra deposits and the roughest surfaces (darker red) are found near fissures and cliffs. Within the lava flow (black outline), the smoothest surfaces are pahoehoe crust and the roughest are slabby pahoehoe. The blue line running down each flow branch is the profile line for Fig. 5. Numbered boxes are the regions used to define the manual classification. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 5. Lava flow profiles. Downflow profiles of the elevation (A and B) and solar heating rate and roughness (C and D) are shown for the west (A and C) and east (B and D) flow branches in the study area. The profile line is shown in Figs. 3 and 4. Solar heating rate and roughness are plotted as 5 m running averages (25 pixels) to improve visualization by reducing noise and assigning a running average value to ‘no data’ locations. Solar heating rate shows a general decreasing trend downflow, with the lowest values associated with small breaks in slope (e.g., at ~125 and ~225 m in the west profile and ~250 m in the east profile). Roughness does not show a strong trend downflow but is lowest near the vent where solar heating rate is highest (< 100 m downflow), associated with pahoehoe crust surfaces.

To account for issues related to surface reflectivity and the preferential heating of sloped surfaces facing towards or away from the rising sun, we masked all pixels with a slope above 20 degrees. We also masked all pixels that are in shadow in the visual orthophoto (Fig. 1b) to account for decreased solar heating in these areas. The error in the solar heating rate is controlled by the variance of the temperature measurement error for each pixel in each map (a uniform error across all measurements does not affect the solar heating rate). A two-point \( \Delta T \) with a ±5 °C measurement error could have an error as high as ±10 °C, or 4 °C hr⁻¹ over the 2.5 h of our measurements. Our use of a third temperature measurement to calculate solar heating rate reduces the effect of individual measurement errors. As the measurement error for each pixel in each map cannot be known to directly quantify the error, we assume the solar heating rate error to be < 4 °C hr⁻¹.

The solar heating rate values in the resulting map are best used as a measure of the relative thermal inertia of the different surface types in the study area. In addition to the properties of the surface, the solar heating rate is sensitive to numerous factors including ambient air temperature, cloud cover, season, and latitude. Thus, solar heating rate values are not directly comparable between different locations or different days, even for similar or identical surfaces. For future studies, a quantitative determination of ATI (Eq. 2) could be made possible through a more sophisticated methodology that includes estimating albedo. Albedo can be determined using either satellite-derived reflectivity data (e.g., Price et al., 2016; Simurda et al., 2020) or via UAS using a radiometrically calibrated sensor and a calibration target. We refrained from attempting this for this study as 1) we were interested in developing a simplified workflow that did not require extensive access to the survey area (e.g., for ground truth, ground control points, calibration targets, etc.), and 2) the highest spatial resolution of satellite-derived reflectance data is 90 m, which would cause sub-pixel mixing errors (Simurda et al., 2020) if applied to our 0.2-m spatial resolution data.

We also consider the contributions of sub-pixel roughness to the solar heating rate. As noted previously, Simurda et al. (2020) found that ATI does not perfectly correlate with particle size. Coarse-sized particles did not yield the highest ATI because they produced sub-pixel shadows and trapped fine particles. At the scale investigated in our study (0.2 m), tephra of various sizes and small blocks of lava represent the fine, medium, and coarse particle sizes that influence sub-pixel roughness.
Sub-pixel roughness due to variations in these particle sizes will not impact our lava classification methods because we do not anticipate widespread sub-pixel (less than 0.2 m) mixing of tephra, pāhoehoe lava, and ‘ā’a lava (see next section for additional details on our classification scheme). Additionally, if these variations are more prevalent than we expect, their effects will be accounted for in the classification technique. The classifications are based on variations in solar heating rate and roughness. Therefore, any significant variations in solar heating rate due to sub-pixel roughness will be included in our classification scheme. 2.5. Classification We classified surfaces based on their roughness and solar heating rate using manual and machine learning techniques. We applied a simplified (n = 3) and refined (n = 5) classification of surface types that are most prevalent in the study area, based on field observations. In the simplified classification, these surfaces were tephra, pāhoehoe lava, and ‘ā’a lava. In the refined classification, these surfaces were tephra, pāhoehoe with intact crust, slabby pāhoehoe, blocky ‘ā’a, and ‘ā’a with visible surface ridges. The four categories of lava correlate roughly to the down flow progression of lava morphology which we observed (Fig. 1b) and the progression of lava morphologies described by Lipman and Banks (1987). Pāhoehoe with intact crust is located closest to the vent (pāhoehoe, Lipman and Banks, 1987). Slabby pāhoehoe occurs where the crust was fractured before the lava transitioned to ‘ā’a (slabby ‘ā’a or pāhoehoe, Lipman and Banks, 1987). The ‘ā’a with visible surface ridges (scoriaceous ‘ā’a, Lipman and Banks, 1987) is found in breakout lobes and some channelized portions of the flow (Fig. 1b) and has smaller surface clast sizes compared to the blocky ‘ā’a, (blocky ‘ā’a, Lipman and Banks, 1987). We observed blocky ‘ā’a downflow of the pāhoehoe–‘ā’a transition and in the main channel of the eastern flow branch (Fig. 1b). For the manual classification method, we first identified 31 ‘training’ areas that contain only one of the surface types. We then calculated the average roughness and solar heating rate for each of these areas (Table 2 and boxes in Fig. 1b). These average values define regions in roughness–solar heating rate space, which provide the ranges of roughness and solar heating rate values associated with each surface type. We used these ranges to classify each pixel in the study area. As these regions represent a single surface type, the variation of roughness and solar heating rate values for the pixels within the regions provides a means to assess the significance of differences between the region averages. In general, solar heating rate is more uniform within a region compared to roughness. The standard deviation of the pixel values for each region suggests that differences in the average value between regions of 1 °C hr⁻¹ for solar heating rate and 2 for roughness represent clear distinctions in surface characteristics. We experimented with supervised and unsupervised machine learning methods for per-pixel classification of surface types, then tested the performance of these methods by comparing them to the manually classified maps. Supervised machine learning techniques have proven successful at efficiently mapping volcanic deposits in remote imagery (e.g., Li et al., 2017; Corradino et al., 2019). 
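As a concrete illustration of the per-pixel classification described here, and of the supervised and unsupervised algorithms detailed in the next paragraph, the following sketch uses scikit-learn (Pedregosa et al., 2011). The array names and label encoding are placeholders; this is an illustrative outline rather than the processing code actually used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Hypothetical inputs: 2-D maps of roughness and solar heating rate, plus a
# label map where training-region pixels hold class indices 0..n-1 and -1 elsewhere.
def classify_surfaces(roughness, heating_rate, training_labels, n_classes=3):
    features = np.column_stack([roughness.ravel(), heating_rate.ravel()])
    labels = training_labels.ravel()
    train = labels >= 0

    # Supervised: random forest trained on the manually labeled regions.
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(features[train], labels[train])
    supervised_map = rf.predict(features).reshape(roughness.shape)

    # Unsupervised: k-means clustering in roughness-heating-rate space.
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
    unsupervised_map = km.fit_predict(features).reshape(roughness.shape)
    return supervised_map, unsupervised_map
```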
Similar to previous works (e.g., Waske et al., 2009; Kereszturi et al., 2018), we used a random forest algorithm trained on the manually identified training areas to classify each pixel in the image as one of the three (or five) surface types. We carried out this step using the RandomForestClassifier implementation in the sklearn.ensemble module of the scikit-learn Python package (Pedregosa et al., 2011). We also explored unsupervised machine learning, used to recognize patterns within large datasets without training data. Unsupervised methods are thus beneficial for classifications in regions where field observations may not be possible, or where previous knowledge of the existing surface deposits may be limited. We used a k-means algorithm to identify discrete clusters of pixels grouped in roughness–solar heating rate space, which we then interpreted as different surface types. We chose to use three (or five) clusters to match the manual simplified and refined classifications. We performed the k-means clustering using the KMeans implementation in the sklearn.cluster module of scikit-learn (Pedregosa et al., 2011).

3. Results

The three temperature maps generated from the UAS TIR images show surfaces heating at different rates as solar input increases within the sequence (Fig. 2). In the latest image (Fig. 2c, most solar input), the lava flows appear notably cooler than the surrounding terrain. Small, high-temperature thermal anomalies (white arrows, Fig. 2) indicate that in areas where the lava ponded or was emplaced at a greater thickness, the flow was still cooling four months after the eruption. The highest temperature observed in any of the maps is 115.3 °C (yellow arrow, Fig. 2c), located along a crack in the lava flow surface in an area where lava ponded in a crater. The solar heating rate of the land surface in the study area shows several consistent patterns (Fig. 3). The highest solar heating rates are found where tephra is the dominant surface type (compare Fig. 3 to Fig. 1b). Variations in the solar heating rate of tephra are primarily related to the slope and aspect of the surface: flat and/or east-facing (sun-facing) slopes have generally higher solar heating rates than steeper and/or west-facing slopes. The effect of slope on the solar heating rate in tephra deposits can be seen in Fig. 3 near Region 4, where solar heating rate decreases smoothly to the south as the slope increases towards the caldera rim. The lowest solar heating rate found in tephra deposits is in Region 3 (Fig. 3), near a vent from the 2018 eruption. The solar heating rate generally decreases down the length of the lava flows as flow morphology transitions from initially pāhoehoe near the vent to 'a‘a. The highest solar heating rates on the lava flow surface are found near the vent where intact pāhoehoe crust is present (Fig. 1b; Fig. 3). The lowest solar heating rate within the flow is located where the lava flows over a ‘step’ where the slope is greater (Region 23, Fig. 3). The meter-scale surface roughness of the study area (Fig. 4) shows that tephra surfaces are the smoothest. The roughest lava flow surface is fractured pāhoehoe crust near the transition to ‘a‘a (Regions 16 and 17, Fig. 4), with intact pāhoehoe crust and ridged ‘a‘a both appearing smoother (Regions 13 and 6, Fig. 4). Directly comparing the roughness and solar heating rate along two flow profiles (Fig. 5) shows the general decrease in solar heating rate with distance downflow. We calculated the average solar heating rate and roughness for the 31 regions shown in Figs.
3 and 4 (Table 2) and plot them against each other in Fig. 6. These regional average values define the training data for the supervised machine learning classification. The gray lines in Fig. 6 indicate the boundaries between surface types used to define the manual classification. In the case where an ‘a‘a region has a roughness similar to tephra (due to a scoriaceous texture with tightly packed, smaller blocks that created a relatively smooth surface), it is distinguished by its slower solar heating rate (Region 21, Fig. 6). Pāhoehoe and ‘a‘a lava morphologies are primarily separated by the solar heating rate (Fig. 6). In the refined classification (five surface types), the distinction between morphologies of ‘a‘a (blocky and ridged) and pāhoehoe (crust and slabby) is based on the roughness (Fig. 6); there is no discernible difference in the solar heating rate for these lava morphologies.

The resulting manual classification maps are shown in Fig. 7a (simplified) and Fig. 8a (refined). Some surface area outside of the lava flows is classified as pāhoehoe (red areas around Region 5, Fig. 7a) and is mostly associated with changes in slope (i.e., ridges, fissures, and other rock outcroppings that increase roughness) in tephra-dominated regions (yellow). Very little area outside of the lava flows is classified as ‘a‘a. Some areas of smooth pāhoehoe crust are also classified as tephra because of their lower roughness (yellow area near Region 11, Fig. 7a). The refined classification (Fig. 8a) is considerably noisier than the simplified classification (Fig. 7a) and does not show any clear pattern in the presence of lava morphologies in the flow. Slabby pāhoehoe is particularly poorly classified. Due to the 0.2 m spatial resolution of the input maps, the refined classification is better at identifying the edges and surfaces of individual pāhoehoe crust blocks (areas surrounding Regions 14 and 16, Fig. 8a) than the broad region of slabby pāhoehoe morphology (Fig. 1b). Similarly, for ‘a‘a, the refined classification is better at identifying individual ridges (Region 6, Fig. 8a) or levees (to the west of Region 24, Fig. 8a) than it is broad areas of the flow surface where ridged or blocky morphology is present.

For three surface categories, machine learning methods, both supervised and unsupervised, can efficiently replicate the manual classification (Fig. 7). Because it was trained on the 31 manually classified regions, the supervised (random forest) method very closely resembles the manual method (compare Fig. 7a and b). The unsupervised (k-means) method identifies tephra and ‘a‘a lava with minimal misclassification compared to the manual and supervised machine learning methods (compare Regions 5 and 29 in Fig. 7a and c). However, the unsupervised method is unable to broadly identify pahoehoe lava. The near-vent pahoehoe-dominated region of the lava flow is absent, with pahoehoe-classified pixels limited to the edges of crust blocks (Fig. 7c). Results from the refined machine learning classifications are mixed. Supervised machine learning can differentiate between the manually defined surface types, with the map having a similar appearance to the manual classification (Fig. 8a and b). However, the refined supervised classification is worse at distinguishing tephra and pahoehoe compared to the simplified supervised classification (Figs. 7b and 8b). The unsupervised method, with no training data to guide it, deviates from our visually defined categories (Fig. 8c).
The ‘a’ā lava (dark blue, Fig. 8c) remains a single surface type and appears similarly to the ‘a’ā regions in the simplified classifications (Fig. 7). Tephra is split into two categories (yellow and red, Fig. 8c) with the difference appearing to be based on the slope and aspect of the surface. These two tephra categories also include most of what we visually identified as intact pahoehoe crust (Regions 10 and 11, Fig. 8c). Light blue pixels correlate with extremely high roughness at sharp edges but are relatively few in number compared to other categories and difficult to identify in Fig. 8c. However, slabby pahoehoe is identified with reasonable accuracy (Regions 14, 16, and 17, Fig. 8c). In general, the refined unsupervised classification is poor and not able to identify categories that correspond to visually distinct surface morphologies. The patterns discussed above are represented quantitatively in the confusion matrices in Fig. 9. In these plots, the ‘true label’ is the manual classification and the ‘predicted label’ is the machine learning classification. The number in each square is the fraction of the pixels in a ‘true’ category that are found in a given ‘predicted’ category. For example, in Fig. 9a, 93% of pahoehoe pixels in the manual classification were also classified as pahoehoe by the supervised random forest method. For the simplified classification, the supervised method classifies all categories with greater than 90% accuracy (Fig. 9a). The unsupervised method’s difficulty in classifying pahoehoe is clearly shown (Fig. 9b), with only 21% of manually classified pahoehoe pixels correctly predicted, compared to 92% and 83% accuracy for tephra and ‘a’ā, respectively. For the refined classification, the supervised method classifies all categories with 84–88% accuracy except for slabby pahoehoe, which is 61% (Fig. 9c). The confusion matrix for the unsupervised refined classification (Fig. 9d) confirms the patterns observed in the classification map (Fig. 8c), where both the blocky ‘a’ā (99%) and ridged ‘a’ā (79%) were classified as blocky ‘a’ā, and tephra was split nearly evenly into two categories (54% and 46%). The unsupervised method is superior to the supervised method in classifying slabby pahoehoe however, doing so with 74% accuracy. 4. Discussion 4.1. Using solar heating rate and roughness to describe volcanic deposits Roughness and solar heating rate describe inherently different surface properties. Roughness, as a measurement of the surface texture, is better at distinguishing morphological differences in volcanic deposits. This is shown by the separate morphologies of pahoehoe (crust and slabby) and ‘a’ā (ridge and blocky) lava in the refined classification having essentially no difference in solar heating rate but variable roughness (Fig. 6). Solar heating rate, which is inversely related to thermal inertia (Eq. 1; Eq. 2), is a measurement of physical properties of the deposit. In this study that primarily refers to the grain size and density. However, solar heating rate also includes the effects of sub-pixel roughness: roughness that is below the scale of the measured roughness that was discussed above. We have determined that solar heating rate is an excellent method for discriminating between the major depositional types (i.e., tephra, pahoehoe, and ‘a’a) in the study area (Fig. 3; Fig. 6). Tephra is generally distinct from the lava due to its smaller grain size compared to the blocks and crust of the lava flow (Fig. 3; Fig. 6). Similarly, Price et al. 
(2016) also found that tephra had lower ATI (i.e., higher solar heating rate) compared to lava. While solar heating rate should be sensitive to variations in the clast size of tephra deposits, we did not visually observe in the field any significant clast size variations in regions where gradients in the tephra solar heating rate are visible (near Regions 4 and 5, Fig. 3). We thus attribute these gradients to the influence of the slope and aspect of the surface, which likely overwhelms minor variations in tephra clast size. One possible exception to this is observed when comparing Region 3 (near vents from the 2018 eruption) to Region 4 (200 m to the east) (Fig. 3; Fig. 6; Table 2). Both regions have similar slope and aspect, yet the near-vent Region 3 has the lowest solar heating rate (and highest roughness) of any tephra region (Fig. 6; Table 2). We observed relatively larger clasts of tephra and spatter surrounding the vents, so clast size is a possible explanation in this case for the difference in solar heating rate. We do not have quantitative clast sizes for the tephra studied in this area. The downflow decrease in solar heating rate that we observe at Sierra Negra is likely a result of increasing density of the lava (decreasing vesicularity) due to degassing as it flowed downslope. This pattern is well-described for other lava flows (e.g., Lipman and Banks, 1987). Lava density may also explain the solar heating rate and roughness we observe for the slabby pahoehoe (Fig. 6). While the slabby pahoehoe represents the roughest surface of the Sierra Negra lava flows (Fig. 4), the solar heating rate does not have correspondingly low values indicative of such large blocks. Rather, slabby pahoehoe has intermediate solar heating rate between the pahoehoe crust and ‘a’a morphologies, a result of the higher vesicularity of this material compared to the ‘a’a downflow (Fig. 3; Fig. 5d; Fig. 6). This intermediate solar heating rate could also be influenced by sub-pixel roughness. As demonstrated in Simurda et al. (2020), if the scale of the slabs in the slabby pahoehoe are such that they introduce small (sub-pixel) shadows, this could raise the solar heating rate slightly. The precise relative effects of density and particle size on the solar heating rate for a material cannot be separated in this study. The conditions that existed during lava flow emplacement can be interpreted based on where different lava flow morphologies are located. The pahoehoe to ‘a’a transition for both the east and west flows is located near an increase in slope (Fig. 5; Fig. 7). The steeper slope would have increased flow velocity and thus the strain rate within the lava, promoting the transition to ‘a’a as the lava viscosity also increased with distance from the vent due to degassing and cooling. The ridged ‘a’a, with higher heating rate and lower roughness (i.e., lower density and/or smaller blocks) compared to the blocky ‘a’a, is preferentially located in flow lobes away from the main channel (Regions 6, 21, and 22, Fig. 8). This suggests the ridged ‘a’a regions were emplaced with lower viscosity earlier in the eruption, whereas the blocky ‘a’a morphology in the main channel is likely due to dense, higher viscosity lava emplaced as the eruption rate decreased and the flow came to a stop. 4.2. Classification accuracy Investigation of the areas of apparent misclassification demonstrates the capabilities (or limitations) of our classification methods and identifies localities with complex deposits. 
For example, all classification methods we applied successfully identified an older lava flow as having 'a-a' morphology (Region 7, Fig. 7). This indicates our classification by roughness and solar heating rate does not distinguish between similar deposits of different ages (at least for age differences on the scale of decades). This is advantageous for mapping flow field morphology and extent surrounding a vent (or vents) but can be a hindrance for mapping a specific flow within a flow field if its extent is not known. Many surfaces outside of the 2018 flows which are classified as pahoehoe are also not necessarily misclassified but are locations where older pahoehoe surfaces are exposed within the tephra deposits (Fig. 7). Segments of pahoehoe crust are often misclassified as tephra (yellow areas near Regions 11 and 13, Fig. 7), but in areas near the vent, the classification is correctly identifying tephra mantling the pahoehoe crust (southwest of Region 11, Fig. 7). A section of 'a-a' lava in the northern part of the western flow segment (Region 29, Fig. 7) is misclassified as pahoehoe due to a higher solar heating rate than is typical for 'a-a' (Fig. 3; Fig. 6). From the visual UAS images, block sizes in the flow appear smaller in this location and this is the likely cause of the increased solar heating rate. Interestingly, this area corresponds to a part of the 2018 lava flow that flowed over an 'a-a' flow from a previous eruption, suggesting that the roughness of the substrate over which lava flows may affect the block size of the lava. This is expected, since small-scale bed roughness impacts the flow advance rate and thus the balance between shearing and cooling timescales and the resulting flow morphology (Rumpf et al., 2018). Both the manual and machine learning classifications highlight differences in how field identification of units differs from per pixel classification in remote sensing data. For example, it is straightforward to visually identify the pahoehoe sections of the lava flow, yet all classification methods had difficulty identifying both pahoehoe and the crust and slabby sub-morphologies (Fig. 9). The low accuracy of the unsupervised machine learning methods demonstrates that pahoehoe is not identifiable as a single cohesive unit compared to other features in the study area (Fig. 7c; Fig. 9c). Similarly, while ʻaʻa is broadly identifiable as a distinctive unit by all classification methods (Fig. 7), the differences in roughness and solar heating rate between the visually identified blocky and ridged surface morphologies are not significant to the unsupervised machine learning method (Figs. 8c and 9d). Including the visual orthophoto as an additional component of the classification may be a means to improve pixel-by-pixel classification of lava flow morphology, as the properties of visual images may more directly correlate to how deposits are manually identified in the field (Soule et al., 2019). Overall, the supervised method classifies lava surface types with an accuracy of 96% and 82% for the simplified and refined classifications, respectively, demonstrating its usefulness as an efficient and semi-automated mapping tool. The unsupervised machine learning method delivers a 61% and 55% accurate classification when compared to the manual simplified and refined classification maps, respectively. 
The unsupervised method’s loss in per-pixel accuracy relative to the supervised method makes it inadequate for mapping the different lava flow morphologies targeted in this study. However, Fig. 7c shows how the unsupervised k-means clustering can distinguish between tephra and the lava flow as a whole, independent of a priori knowledge about the deposits. The overall efficient mapping ability of the unsupervised machine learning method is thus useful in cases where identification of major surface types without training data is desired. The manual and supervised machine learning classification techniques are best suited for applications looking to identify and map a specific deposit (or suite of deposits), as these methods allow the user to define the values of roughness and solar heating rate that apply to the surface(s) of interest.

4.3. Effects of spatial resolution on classification

A key factor that may account for the difference between the visually identified units (Fig. 1b) and the classification techniques (Figs. 7 and 8) is the spatial resolution of the data. Whereas the eye can generalize across broad sections of terrain, our classification methods cannot and are limited to the 0.2 m resolution of the DEM and temperature maps. This is especially evident in the noisy appearance of the refined classification (Fig. 8), which identifies lava flow morphology varying on the order of meters rather than the 10s–100s of meters scale variations that we identified in the field (Fig. 1b). The 31 training regions (Table 2; Fig. 6) represent a classification based on a larger spatial scale. The variations within each region are averaged out and result in a clear distinction between the intact crust and slabby pāhoehoe (Fig. 6) that is not seen in the classification maps (Fig. 8).

To test the effect of spatial resolution on the classification result, we resampled the manual refined classifications to a resolution of 2 m and 20 m (one and two orders of magnitude larger than the 0.2 m UAS dataset). The resampled pixel classification was determined by the most common surface category (i.e., the mode) among the 0.2 m resolution pixels contained within the new, larger pixel (Fig. 10). The 2 m resampled classification map (Fig. 10a) shows a reduction of noise within the lava morphologies compared to the original classification (Fig. 8a). The slabby pāhoehoe region is more clearly identifiable (Regions 16 and 17, Fig. 10a), but pāhoehoe crust is less distinct due to the prevalence of smooth crust surfaces that were misclassified as tephra (Regions 10 and 11, Fig. 10a). It is easier to see that the main eastern channel has a dominantly blocky ‘a’a morphology (vicinity of Region 24, Fig. 10a), whereas the western branch of this channel is dominantly ridged ‘a’a (vicinity of Region 22, Fig. 10a). Downflow trends become harder to distinguish as the resolution approaches the width of the lava flow (Regions 22 and 28, Fig. 10b).

Fig. 10. Effect of reduced spatial resolution on classification. The manual refined classification (Fig. 8a) resampled to 2 m (A) and 20 m (B) spatial resolution. At 2 m resolution, the dominant morphology for different segments of the lava flow is potentially easier to identify compared to the original 0.2 m resolution. A resolution of 20 m is too coarse to clearly identify the flow margins and classification errors are more prominent.
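The block-mode resampling described above can be written compactly. The sketch below assumes the 0.2 m classification is a 2-D integer array whose dimensions are divisible by the aggregation factor (padding or cropping would be needed otherwise); it is an illustration, not the study's own script.

```python
import numpy as np

def block_mode(values):
    """Most common value (mode) in a 1-D array of class labels."""
    vals, counts = np.unique(values, return_counts=True)
    return vals[np.argmax(counts)]

def resample_by_mode(class_map, factor):
    """Downsample an integer class map by taking the modal class within each
    factor x factor block (e.g., factor=10: 0.2 m -> 2 m)."""
    rows, cols = class_map.shape
    blocks = class_map.reshape(rows // factor, factor, cols // factor, factor)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(rows // factor, cols // factor, -1)
    return np.apply_along_axis(block_mode, -1, blocks)

# coarse_2m = resample_by_mode(refined_map, factor=10)    # 0.2 m -> 2 m
# coarse_20m = resample_by_mode(refined_map, factor=100)  # 0.2 m -> 20 m
```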
Fig. 10 suggests that the ideal spatial resolution to use for classification in this context should be approximately an order of magnitude smaller than the scale of the identified units. If a unit has dimensions of 10s of meters (e.g., a 50 m wide lava flow), the spatial resolution should be on the order of a few meters to identify it clearly. This will reduce issues related to both classification on too fine a scale, as described here, and sub-pixel mixing of surface types when the spatial resolution exceeds the scale of the features of interest (i.e., the failure of the surface uniformity assumption), as described by Simurda et al. (2020). While we do not examine this in detail here, we also acknowledge that varying the spatial resolution of the input data (i.e., the UAS-derived DEM and temperature maps) changes the scale of features to which the surface roughness is sensitive and the detail of variation in the solar heating rate. This potentially alters what the resulting classification map can show. The desired goals of the mapping and classification, and the spatial resolution required to achieve those goals, are important considerations when designing the initial UAS survey.

4.4. Application to other types of volcanic deposits

Surface solar heating rate and roughness can be used to classify many types of deposits in addition to those discussed so far. We conducted a similar UAS survey to the one described here at Sinabung Volcano (North Sumatra, Indonesia) (Supplementary Table S1). Starting in late 2013, an effusive eruption at Sinabung emplaced a 3 km long andesite lava flow and generated hundreds to thousands of pyroclastic density currents (PDCs) caused by both lava dome collapse and Vulcanian-style explosions (Nakada et al., 2019). The deposits from the eruption cover approximately 10 km² (Pallister et al., 2019). We flew our dual-sensor UAS over a region of the northeastern part of the PDC deposits measuring approximately 2 km north-south and 1 km east-west. We captured TIR images during three flights before (335 TIR images), during (306 TIR images), and after (370 TIR images) sunrise (05:30, 06:30, 07:30 local time, GMT + 07) on July 6, 2018 (Supplementary Table S1). We created three thermal maps with 1-m spatial resolution (Supplementary Fig. S2), from which we calculated the solar heating rate (Fig. 11a). We used 323 visual images captured during two flights on July 8, 2018 to create an orthophoto and DEM.

Compared to the solar heating rate at Sierra Negra, the solar heating rate at Sinabung is minimal and broadly uniform (note that the scale of Fig. 3 for Sierra Negra is from 0 to 15 °C hr⁻¹, whereas the scale of Fig. 11a is from 0 to 3 °C hr⁻¹). The most notable feature in Fig. 11a is an area of relatively high solar heating rate in the center of the image. This area does not correlate with any variation in roughness or slope, which are both broadly homogeneous at 1-m resolution for this region (Supplementary Fig. S3). We suggest that the higher heating rate in this location is due to a smaller average grain size. As a possible explanation for why smaller grain sizes are found in this area, we observe that the high heating rate correlates to a slight topographic high (Fig. 11b). This broad ridge is significant enough to affect the pattern of new drainage channels eroding into the pyroclastic deposits (note that channels are larger and denser to the north and west of the high heating rate area compared to the area visible in the orthophoto immediately to the northwest in Fig. 11a).
It is possible that this topographic high could have diverted more coarse-grained pyroclastic flows and led to the preferential deposition of finer-grained material in this area. Near-zero solar heating rates are common near a river running north-to-south along the east side of Fig. 11a. Though it may not explain similarly low solar heating rates in other locations, these low values are potentially due to higher water content in the deposits near the river, which increases the thermal inertia (decreases the solar heating rate) of materials. This reinforces the potential of UAS-derived solar heating rate to detect the moisture content of surfaces, as was measured using satellite-derived ATI by Scheidt et al. (2010). Ongoing activity at Sinabung prevented ground truthing of the region surveyed, and any grain size variations that may exist are not obvious in visual inspection of the UAS images or the 0.5 m orthophoto we created. Additionally, understanding of the solar heating rate patterns observed would benefit from more data coverage to the north and west, but this was not possible due to the range of the UAS and safety considerations. We captured a portion of the lava flow in our data (far west center of Fig. 11a), but not enough to draw any conclusions related to the block size or density of the lava. Still, this application demonstrates two useful benefits of measuring surface solar heating rate for volcanic deposits. First, solar heating rate is more sensitive to grain size variations than roughness when the spatial resolution exceeds the grain size. Second, measuring solar heating rate enables at least a qualitative description of grain size variations in pyroclastic deposits while an eruption is ongoing and before it is safe to access on foot for direct sampling and measurement.

4.5. Further applications

Combining high-spatial-resolution roughness and solar heating rate measurements represents a powerful technique for investigating volcanic deposits. As these quantities measure fundamentally different properties of a surface, classification and description of different surface types is improved by using both quantities compared to using either alone. A specific advantage of using solar heating rate to improve classification of volcanic surfaces is its sensitivity to sub-pixel variations in the grain or block size of deposits.

Our simplified (three category) manual classification of the study area relied primarily on solar heating rate to differentiate between tephra, pahoehoe lava, and ‘a‘a lava. For our refined (five category) manual classification, roughness was the main value separating pahoehoe crust from slabby pahoehoe and ridged ‘a‘a from blocky ‘a‘a. The manual classification broadly agreed with the location of surface types we visually identified in the field. The supervised machine learning method matches the manual classifications with an accuracy greater than 80%, showing its ability to map known deposit types. The unsupervised machine learning method was not able to match the surface types we identified well enough to be useful for mapping (accuracy ~60%), though it successfully separated tephra from the lava flow without initial ground truth information. When applied in appropriate cases, classification via machine learning can improve on the capability of manual classification. Many advantages exist for using UAS to measure solar heating rate and roughness at high spatial resolution.
The use of UAS enables data collection on short time-scales and at specific times. In contrast, to determine ATI from satellite it can be necessary to wait weeks or longer for the satellite to capture a cloud-free day-night pair of images.

5. Conclusions

We have presented a new approach to fine-scale classification of volcanic surfaces in a basaltic flow field utilizing visual and TIR imagery collected by UAS to derive surface roughness and solar heating rate. Solar heating rate excels at identifying differences in physical properties of deposits such as grain size and density, whereas roughness is better at identifying variations in surface morphology. We used these quantities in tandem to classify and map different volcanic deposits manually and using machine learning techniques. For lava flows from the 2018 eruption of Sierra Negra volcano, we observe that the solar heating rate of the lava flow surface decreases downflow, indicative of increasing flow degassing and increasing lava density during a transition from pahoehoe morphology near-vent to ‘a‘a further downflow.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper.

Acknowledgments

This work was supported by NSF EAR Postdoctoral Fellowship Award #1725768 and a WHOI Independent Research and Development Grant (PI: Adam Soule). EL was supported by NSF award EAR-1654588. Field work in Ecuador was conducted in cooperation with the Instituto Geofísico, the Parque Nacional Galápagos, and the Charles Darwin Research Station. Marco Córdova of Instituto Geofísico and Meghan Jones of Woods Hole Oceanographic Institution assisted with data collection at Sierra Negra. Field work in Indonesia was conducted under a memorandum of understanding between Arizona State University (Tempe, AZ, USA) and Universitas Gadjah Mada (Yogyakarta, Indonesia). Danielle Meyer and Emily Carey of Drexel University and Aida Bugna and Reza Firdaus Nasution of Universitas Gadjah Mada assisted with data collection at Sinabung Volcano. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.rse.2021.112581.
Voice Activity Detection for Transient Noisy Environment Based on Diffusion Nets

Amir Ivry, Baruch Berdugo, and Israel Cohen, Fellow, IEEE

Abstract—We address voice activity detection in acoustic environments of transients and stationary noises, which often occur in real-life scenarios. We exploit unique spatial patterns of speech and non-speech audio frames by independently learning their underlying geometric structure. This process is done through a deep encoder-decoder based neural network architecture. This structure involves an encoder that maps spectral features with temporal information to their low-dimensional representations, which are generated by applying the diffusion maps method. The encoder feeds a decoder that maps the embedded data back into the high-dimensional space. A deep neural network, which is trained to separate speech from non-speech frames, is obtained by concatenating the decoder to the encoder, resembling the known Diffusion nets architecture. Experimental results show enhanced performance compared to competing voice activity detection methods. The improvement is achieved in accuracy, robustness, and generalization ability. Our model performs in a real-time manner and can be integrated into audio-based communication systems. We also present a batch algorithm which obtains an even higher accuracy for off-line applications.

Index Terms—Deep learning, diffusion maps, voice activity detection.

I. INTRODUCTION

Voice activity detection refers to a family of methods that perform segmentation of an audio signal into parts that contain speech and silent parts. In this study, audio signals are captured by a single microphone and contain clean sequences of speech and silence. These signals are mixed with stationary and non-stationary noises (transients), e.g., door knocks and keyboard tapping [1], [2]. Our objective is to correctly assign each captured audio frame to the category of speech presence or absence. A solution to this problem may benefit many speech-based applications such as speech and speaker recognition, speech enhancement, emotion recognition and dominant speaker identification. In acoustic environments that contain neither stationary nor non-stationary noise, speech is detected by using methods that rely on frequency and energy values in short time frames [3]–[5]. These methods show significant deterioration in performance when noise is present, even with mild levels of signal-to-noise ratios (SNRs). To cope with this problem, several approaches assume statistical models of the noisy signal in order to estimate its parameters [6]–[11]. Nonetheless, these methods are incapable of properly modeling transient interferences, which constitute an essential part of this study. Ideas that involve dimensionality reduction through kernel-based methods are introduced in [12], where both supervised and unsupervised approaches have been exploited. However, the main limitation of this approach is a significant low-dimensional overlap between speech and non-speech representations. Machine learning techniques have been employed for voice activity detection in recent studies [13], [14]. In contrast to classic methods, these approaches learn to implicitly model data without assuming an explicit model of a noisy signal. In particular, deep learning based methods have gained popularity in recent years due to a substantial increase in both computational power and data resources. Mendelev et al.
[15] constructed a deep neural network for voice activity detection, and suggested employing the dropout technique [16] for enhanced robustness. The main drawback of this method is that temporal information between adjacent audio frames is ignored, due to independent classification of each time frame. Studies presented in [17]–[20] used a recurrent neural network (RNN) to integrate temporal context with the use of past frames. However, the rapid time variations and prominent energy values of non-stationary noises in comparison to speech are still the main cause of degraded performance in these methods. A recent study conducted by Ariav et al. [21] proposed to use an auto-encoder to implicitly learn an embedded representation of the audio signal. To enhance temporal relations between frames, this auto-encoder feeds an RNN. Despite its leading performance, the reported results are still unsatisfactory. Our study found that the main limitation of this algorithm is the dense low-dimensional representation forced by the auto-encoder and fed into the RNN. This density occurs largely due to the joint training of speech and non-speech frames, which fails to enhance their unique features. Thus, their low-dimensional representations, which are the sole information that feeds the RNN, are embedded closely in terms of Euclidean distance. Eventually, this poses a difficulty in separating speech from non-speech frames based merely on temporal information, which is the core advantage of using an RNN architecture. In this work, we propose an algorithm that addresses the limitations found in the methods proposed in [12] and [21]. We independently learn the low-dimensional spatial patterns of speech and non-speech audio frames through the Diffusion Maps (DM) method. DM is a method that performs non-linear dimensionality reduction by mapping high-dimensional data points to a manifold, embedded in a low-dimensional space [22]. The mapped coordinates that lie on this manifold are referred to as DM coordinates. Since this method preserves locality, frames with similar contents in the original high dimension are mapped closely in the low, embedded dimension, with respect to their Euclidean distance. We separately apply DM for speech and non-speech frames through a pair of independent deep encoder-decoder structures. Inspired by the Diffusion nets architecture [23], the end of each encoder is forced to coincide with the embedded DM coordinates of its high-dimensional input. This approach allows us to distinguish the intrinsic structure of speech from those of transients and background noises based on the Euclidean metric. We suggest two variations for the voice activity detection algorithm, one for real-time applications and one for batch processes. We test both approaches on five comparative experiments conducted in [12], [21], [24]. Results show enhanced voice activity detection performance that surpasses the known state-of-the-art speech detection results. Furthermore, our proposed architecture is more robust and has better generalization ability than competing methods, as demonstrated through experiments. The remainder of this paper is organized as follows. In Section II, we formulate the problem. In Section III, we introduce the proposed solution. In Section IV we expand on the data set and feature extraction. In Section V we describe the training and testing processes. In Section VI we present the results of the proposed approach for voice activity detection with comparisons to competing methods.
Finally, in Section VII we draw conclusions as well as future research directions. II. PROBLEM FORMULATION Let \( s[n] \) denote the following audio signal: \[ s[n] = s^{sp}[n] + s^{st}[n] + s^{t}[n], \] (1) where \( sp, st \) and \( t \) stand for speech, stationary background noise and transient interference, respectively. The time domain signal is processed in overlapping time frames of length \( M \). Let \( f_n \in \mathbb{R}^M \) denote the \( n \)th audio frame and let \( \{f_n\}_{n=1}^{N} \) denote the audio data set of \( N \) time frames. Let \( \mathcal{H}^0 \) and \( \mathcal{H}^1 \) be two hypotheses that stand for speech absence and presence, respectively. In addition, let \( I(f_n) \) be a speech indicator of the \( n \)th audio frame, defined as: \[ I(f_n) = \begin{cases} 1, & f_n \in \mathcal{H}^1 \\ 0, & f_n \in \mathcal{H}^0 \end{cases}. \] (2) The goal of this study is to estimate \( I(f_n) \), i.e., to correctly classify each audio frame \( f_n \) as a speech or non-speech frame. III. PROPOSED ALGORITHM FOR VOICE ACTIVITY DETECTION Our proposed approach comprises several steps, as illustrated in Fig. 1. Initially, feature extraction is applied to the time-domain signal. The features include the Mel Frequency Cepstral Coefficients (MFCCs) and their low-dimensional representation, generated by the DM method. A detailed description is given in Section IV-B. Subsequently, a deep encoder-decoder based neural network is used to learn the unique patterns of speech and non-speech signals. Since this structure makes use of the DM method, it is referred to in this study as a diffusion encoder-decoder (DED). Next, error measures are extracted from the deep architecture. Those errors are represented in a coordinate system, denoted in this study as an error map. It should be highlighted that no mathematical operation is applied to the errors extracted from the network; i.e., the error map is merely a representation form which allows us to conduct better analysis and gain deeper insight into the performance of our detector, as will be shown throughout this paper. A classifier, fed by the coordinates of the error map, is constructed to separate speech presence and absence. In this study, two different modes are used for classification. First, a batch mode is considered, where a substantial corpus of speech and non-speech audio frames must be at hand, in order to evaluate the outcome of the DM process correctly. In batch mode, both low and high-dimensional errors are taken into account during classification. The second classification mode is real-time, which exploits merely high-dimensional error information. In this case, integration of DM is not required, which allows a frame-by-frame classification with negligible delay. A. Deep Encoder-Decoder Neural Network Our approach suggests that speech frames can be separated from non-speech frames based on their intrinsic low-dimensional representation. Ideas from [23] are adopted to merge DM with two independent, identically constructed DEDs, denoted by DED\( ^i \), where \( i \in \{0, 1\} \). DM allows a geometric interpretation of the data by constructing its underlying embedding, which can be represented by the middle layer of any basic encoder-decoder network [21]. To exploit this property, the middle layer is forced to coincide with the true DM coordinates of the input layer. As a result, the encoder of DED\( ^i \) is trained to map spectral features affiliated with \( \mathcal{H}^i \) from their original space to the lower-dimensional diffusion space. 
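To make this mapping constraint concrete, the following minimal sketch (our illustration, not the authors' code) trains an encoder so that its output coincides with precomputed DM coordinates of its input; the layer sizes anticipate Section III-A, the sigmoid activation stands in for the saturating activation used later in the paper, and the tensors are synthetic placeholders for real features and their DM embedding.

```python
import torch
import torch.nn as nn

# Synthetic stand-ins: 72-dimensional spectral features of one hypothesis
# and their 3-dimensional diffusion-maps embedding (both assumed precomputed).
features = torch.rand(1024, 72)
dm_coords = torch.rand(1024, 3)

encoder = nn.Sequential(nn.Linear(72, 200), nn.Sigmoid(),
                        nn.Linear(200, 200), nn.Sigmoid(),
                        nn.Linear(200, 3), nn.Sigmoid())
optimizer = torch.optim.SGD(encoder.parameters(), lr=1e-5, momentum=0.9)

for epoch in range(100):                      # epoch count chosen arbitrarily here
    optimizer.zero_grad()
    # force the encoder output onto the DM coordinates of its input
    loss = nn.functional.mse_loss(encoder(features), dm_coords)
    loss.backward()
    optimizer.step()
```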
Subsequently, the decoder of DED\( ^i \) learns the inverse mapping back to the high-dimensional feature space. A deep architecture is constructed to implement the above notion, as illustrated in Fig. 1. In this proposed system, each DED comprises two stacked parts, an encoder and a decoder. The former is constructed from a 72-neuron input layer followed by two layers of 200 neurons each and a final layer of 3 neurons. The deep decoder is a reflection of the deep encoder. While the sizes of the middle and hidden layers are determined empirically, the size of the input (and thus, the output) layer of each DED is derived from the feature extraction process, as described in Section IV-B. In the output of each layer, an identical activation function, given in (12), is employed on each neuron. B. Error Maps and Voice Activity Detection Classifier Let us denote a single observation of an input feature vector as \( a \) and its true DM coordinates as \( m \). Additionally, \( \hat{m} \) and \( \hat{a} \) denote the encoding of \( a \) by a trained encoder and its reconstruction by a trained decoder, respectively. Each observation is fed into the trained DEDs simultaneously. That way, the relations between each hypothesis and the constructed embeddings are compared under the same conditions. These relations are quantified through the error measures $e_{en}(m)$ and $e_{de}(a)$: $$e_{en}(m) = \|m - \hat{m}\|_1; \quad e_{de}(a) = \|a - \hat{a}\|_1, \quad (3)$$ where $\|\cdot\|_1$ denotes the $\ell_1$ norm. Namely, $e_{en}(m)$ represents the mapping error, while $e_{de}(a)$ is associated with the reconstruction error of $a$. In this study, two classification modes are considered. In the batch mode, both $e_{en}(m)$ and $e_{de}(a)$ are taken into account. Namely, each observation $a$ ultimately generates two pairs of errors, one from each DED. These errors are interpreted as a four-dimensional coordinate that is embedded into an error map. In the real-time mode, on the other hand, only the decoder error $e_{de}(a)$ is extracted from each DED; i.e., in this scenario a two-dimensional coordinate is embedded into the error map. Subsequently, a support vector machine (SVM) classifier with a linear kernel is trained on the error map, which contains the generated error measures from a corpus of observations. The objective of this classifier is to separate coordinates affiliated with different hypotheses. As a result, two decision regions are created, for speech presence and absence. Since DED$^1$ is trained to construct a low-dimensional manifold on which $\mathcal{H}^1$ is embedded, frames related to $\mathcal{H}^1$ fit the learned mapping of DED$^1$ well. This leads to substantially lower errors, which can easily be identified as a separate cluster. This assumption is derived from the property of the DM method, in which the diffusion distance in the original feature dimension is proportional to the $\ell_1$ norm in the diffusion space. In this study, a classic SVM classifier is shown to be sufficient. It is worth noting that we have also implemented an alternative architecture to the one presented in Fig. 1 which involves a unified network instead of an SVM. The goal of this was to verify the improvement, and thus justify the employment, of our suggested system over the alternative of a fully connected neural network, which is commonly used in deep learning algorithms. We concatenated the output layer of both DED branches to each other and to the input layer. 
Then, this augmented layer was connected to a single-bit output neuron that carries the VAD decision. Results have shown very similar performance, with a slight tendency to the SVM based method. As a result, we have decided to use the originally presented architecture. Two minor advantages of the SVM can be noted over the unified neural network. First, it is less computationally expensive in comparison to using an additional layer, which will consume higher memory and time during back propagation. Second, the original method explicitly constructs the error measures and feeds them to the SVM, which leads to high separation of speech from silence. Therefore, the hidden layer attempts to implicitly represent the data in a similar manner, i.e., to find the relation between the neurons which will ultimately lead to good separation. Representing the error measures in the two-dimensional space and applying the SVM on it both serves as a more natural, intuitive classification algorithm and avoids the infamous “black box” property of the neural network, as well as grants us the ability to analyze the decision of the detector in a helpful and profound manner, as will be done later on. IV. DATABASE AND FEATURE EXTRACTION A. Database We adopt the audio database presented in [12] to construct a DED training set, a classifier training set and a back to end test set. This database is obtained from 11 different speakers reading aloud an article chosen from the web, while making natural pauses every few sentences. Naturally, these recordings are composed of sequences of speech followed by non-speech frames. Each sequence varies from several hundred milliseconds to several seconds in length. These signals were recorded with an estimated SNR of 25 dB at a sampling rate of 8 kHz. Each of the 11 signals is 120 seconds long and it is processed using short time frames of 634 samples with 50% overlap, which effectively generates a 25 frames/second rate. The clean speech signal $s^{sp}[n]$, defined in [1], is used to determine the presence or absence of speech in each time frame, and to construct a label set accordingly. These clean audio signals are contaminated by 42 different pairs of additive stationary and non-stationary noises, which construct a varied data set. The noise signals employed include white and colored Gaussian noise, babble and musical instruments. Transients include keyboard taps, scissors snapping, hammering and door knocks. B. Feature Extraction We wish to exploit the ability of deep neural networks to learn complex relations between their inputs and outputs. Hence, our objective is to feed our architecture with features that express the unique patterns of each hypothesis [2]. To generate spectral information from the time domain database, MFCCs are employed. These coefficients are concatenated along a fixed number of adjacent frames, in order to gain temporal context between them. DM is applied to integrate spatial properties and to find a relation between the spectrum of the signal and its geometric low-dimensional structure. 1) Mel Frequency Cepstral Coefficients: Features based on a spectral representation of audio signals are fully adopted from the study of Dov et al. [12]. To construct them, weighted MFCCs are employed. MFCCs use the perceptually meaningful Mel-frequency scale, which allows a compact representation of the spectrum of speech [25]. MFCC features are used in the presence of highly non-stationary noise, where they were found to perform well for speech detection tasks [26]. 
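For reference, a minimal front end along these lines could be written with librosa as sketched below; it computes plain MFCCs with first and second derivatives and deliberately omits the noise-estimation-based weighting of [12], [26], [27], which is discussed next. The frame length of 634 samples and the 50% overlap follow Section IV-A; the remaining parameter choices are our assumptions.

```python
import numpy as np
import librosa

def mfcc_features(y, sr=8000, frame_len=634, n_mfcc=8):
    """Plain (unweighted) MFCCs with Delta and Delta-Delta: one row a_n of C = 3 * n_mfcc
    coefficients per audio frame; the weighting of [12] would be applied on top of this."""
    hop = frame_len // 2                              # 50% overlap between frames
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=1024, hop_length=hop, win_length=frame_len)
    d1 = librosa.feature.delta(mfcc, order=1)
    d2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, d1, d2]).T                # shape: (num_frames, 24)
```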
However, speech frames may have similar MFCC representation to frames comprising highly non-stationary noise as well, since they both have akin spectral attributes. To address this challenge, noise estimation is performed in each frame and the MFCCs in that frame are weighted accordingly [12], [26], [27]. This enables better analysis by separating the background noise from the rest. Next, several consecutive time frames are taken into account. Hence, the nature of transients, whose typical duration is assumed to be of the order of a single time frame, can be exploited. Formally, consider \( a_n \in \mathbb{R}^C \) as a row vector of \( C \) coefficients, consisting of weighted MFCCs, and their first and second derivatives, \( \Delta \) and \( \Delta \Delta \), respectively. These values are extracted from the \( n \)th time domain audio frame \( f_n \), introduced in Section II. Let: \[ \tilde{a}_n = [a_{n-J}, \ldots, a_n, \ldots, a_{n+J}] \in \mathbb{R}^{(2J+1)C} \] (4) denote the concatenation of feature vectors from \( 2J + 1 \) adjacent frames, where \( J \) is the number of past and future time frames. For \( J \geq 1 \), the elements of \( \tilde{a}_n \) in the presence of transients are expected to vary faster than in the presence of speech. In this study, the number of MFCCs is 8, as commonly used. Thus, \( a_n \) comprises \( C = 24 \) coefficients. For practical considerations, we assign a relatively small value of \( J = 1 \). This allows informative characterization of audio frames based on past-future relations, while consuming a low computational load. Thus: \[ \tilde{a}_n = [a_{n-1}, a_n, a_{n+1}] \in \mathbb{R}^{72}. \] (5) Next, standardization is applied to (5). Let us assume a set of \( N \) observations, where the \( n \)th observation is given by (5), for \( n \in \{1, \ldots, N\} \). For each feature index \( l \in \{1, \ldots, 72\} \), a row vector \( O_l \in \mathbb{R}^N \) is defined as: \[ O_l = [\tilde{a}_1(l), \ldots, \tilde{a}_N(l)]. \] (6) Then, the mean and standard deviation of \( O_l \) are extracted and termed \( \mu_l \) and \( \sigma_l \), respectively. Next, the following vectors are constructed: \[ \mu = [\mu_1, \ldots, \mu_{72}] ; \quad \sigma = [\sigma_1, \ldots, \sigma_{72}]. \] (7) With a slight abuse of notation, \( \tilde{a}_n(l) \) henceforth denotes the \( l \)th element of the standardized feature vector, obtained as: \[ \tilde{a}_n(l) \leftarrow \frac{\tilde{a}_n(l) - \mu_l}{\sigma_l}. \] (8) 2) Diffusion Maps: The middle layer of any basic autoencoder architecture can be viewed as a low-dimensional representation of its input layer [28]. Our method exploits this by forcing the middle layer to coincide with the embedded coordinates of \( \tilde{a}_n \), generated by the DM method [29]. Thus, the encoder learns to approximate this low-dimensional mapping, while the decoder learns the inverse high-dimensional mapping. DM is a manifold learning approach that is established on the graph Laplacian of the high-dimensional data corpus [30]. DM has been employed successfully in several signal processing, image processing and machine learning applications [31]–[39]. Let us consider a set of feature vectors \( \{\tilde{a}_n\}_n \), constructed according to (8). A weighted graph is created with the elements of the set as nodes (or points), where the weight of the edge connecting two nodes is given by the commonly used radial basis function (RBF) kernel. The scaling parameter of the kernel is set separately for each edge as in [20]. 
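The full DM construction, completed in the following paragraphs (kNN sparsification of the graph, normalization to a row-stochastic matrix, eigendecomposition, embedded dimension d = 3, and the subsequent mapping to [0, 1] in (10)), could be prototyped roughly as follows. This is a simplified sketch under our own assumptions: the per-edge kernel scale is taken as the distance to each point's k-th neighbour, and numerical details such as using a symmetric formulation for the eigendecomposition are glossed over.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(X, d=3, k=10):
    """Rough diffusion-maps sketch: kNN-sparsified RBF affinity -> density normalization ->
    row-stochastic P -> eigendecomposition. X: (N, 72) standardized features; returns the
    (N, d) embedding m_n built from the d leading non-trivial eigenpairs."""
    D = cdist(X, X)                                   # pairwise Euclidean distances
    kth = np.sort(D, axis=1)[:, k]                    # distance to each point's k-th neighbour
    W = np.exp(-D ** 2 / (np.outer(kth, kth) + 1e-12))
    mask = D <= kth[:, None]                          # keep only k-nearest-neighbour edges
    W = W * np.maximum(mask, mask.T)                  # symmetrized sparsification
    q = W.sum(axis=1)                                 # density normalization, approximating
    W = W / np.outer(q, q)                            # the Laplace-Beltrami operator
    P = W / W.sum(axis=1, keepdims=True)              # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-np.abs(vals))
    vals, vecs = np.real(vals[order]), np.real(vecs[:, order])
    return vecs[:, 1:d + 1] * vals[1:d + 1]           # skip the trivial eigenvector (lambda_0)
```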
In practice, only the 10 nearest neighbors of every point are used to compute the edges. Namely, edges that are not among the nearest neighbors of \( \tilde{a}_n \) are nullified. In order for the embedding and the distribution of the nodes to be independent, we perform normalization of the data. Therefore, an approximation of the Laplace-Beltrami operator on the data is obtained [29], [31]. This operation generates a row-stochastic matrix \( P \) which can be viewed as the transition matrix of a Markov chain on the data set \( \{\tilde{a}_n\} \). Two sets of bi-orthogonal left and right eigenvectors, \( \{\phi_n\} \) and \( \{\psi_n\} \), are constructed by employing an eigenvalue decomposition of \( P \). This process also yields a sequence of eigenvalues \( \lambda_0 \geq |\lambda_1| \geq \ldots \geq |\lambda_{N-1}| > 0 \). Through informal experiments, we found that for retaining the desired patterns of speech and non-speech frames, it is sufficient to set the embedded dimension to \( d = 3 \) (excluding the trivial dimension associated with \( \lambda_0 \)). Furthermore, \( d \) is small enough to exclude undesired high-frequency noise, mostly represented by higher dimensions. The low-dimensional embedding of \( \tilde{a}_n \) in (8) is denoted by \( m_n \) and defined as: \[ m_n = (\lambda_1 \psi_1 (\tilde{a}_n), \lambda_2 \psi_2 (\tilde{a}_n), \lambda_3 \psi_3 (\tilde{a}_n)). \] (9) Therefore, the set \( \{m_n\} \) is embedded into the Euclidean space \( \mathbb{R}^3 \). In this space, the Euclidean distance is equal to the diffusion distance in the high-dimensional space of \( \{\tilde{a}_n\} \). Our architecture integrates an activation function which maps its input to the interval [0,1]. On the other hand, \( m_n \) often holds values which may exceed this interval. Therefore, this mismatch increases the error measures defined in (3). Earlier works have demonstrated that prediction accuracy can be improved by normalizing DM coordinates [41]. We employ these notions to overcome the aforementioned mismatch, by mapping the dynamic range of \( m_n \) to [0,1]. Specifically, the transformation that is employed corresponds to connecting \( m_n \) to \( \tilde{m}_n \) through a softmax layer [42], as follows: \[ \tilde{m}_n(k) = \frac{e^{m_n(k)}}{\sum_{l=1}^{3} e^{m_n(l)}}, \] (10) where \( 1 \leq k \leq 3 \). As a result, \( 0 \leq \tilde{m}_n(k) \leq 1 \) and \( \sum_{k=1}^{3} \tilde{m}_n(k) = 1 \). V. Experimental Setting A. Notation Let \( s_j \in \mathbb{R}^L \) denote the contaminated audio signal associated with speaker \( j \in \{1, \ldots, 11\} \), comprising \( L \) samples. Let \( s^i_j \) denote the union of audio time frames in \( s_j \) that belong to hypothesis \( \mathcal{H}^i \), where \( i \in \{0, 1\} \). Aggregating over all 11 speakers yields: $$s^i = \{s^i_1, ..., s^i_{11}\}. \quad (11)$$ **B. DED Training Process** Let us consider the two distinct sets \( s^0 \) and \( s^1 \). Two training sets, denoted \( s^0_{tr,ded} \) and \( s^1_{tr,ded} \), are created by randomly extracting 70% of \( s^0 \) and \( s^1 \), respectively. Following Section IV-B1, the feature vector extracted from the \( n \)th frame of \( s^i_{tr,ded} \) and standardized according to (8) is denoted \( \tilde{a}^i_{tr,ded,n} \in \mathbb{R}^{72} \). Standardization yields two advantages; first, the network performs a faster learning process. This occurs since standardization implicitly weights all features equally in their representation. 
Thus, the rate at which the weights connected to the input nodes learn is balanced. This balance makes it possible to rescale the learning rate throughout the learning process. As a result, the adaptive gradient descent optimization method can be deployed instead of the traditional gradient descent. Second, this approach reduces saturation effects, caused by large values assigned to activation functions. Next, the DM method is applied to the set $\{\tilde{a}^i_{tr,ded,n}\}_n$ separately, for each $i \in \{0, 1\}$, as described in Section IV-B2. The resulting low-dimensional embedding is mapped to the dynamic range $[0, 1]$, as in (10), and denoted by $\tilde{m}^i_{tr,ded,n} \in \mathbb{R}^{3}$. The proposed architecture entails that when $\tilde{a}^i_{tr,ded,n}$ is fed to DED$^i$, the middle layer of the latter is enforced to coincide with $\tilde{m}^i_{tr,ded,n}$. Henceforth, DED$^i$ denotes the trained network associated with hypothesis $\mathcal{H}^i$. We integrate the Positive Saturating Linear Transfer (PSLT) activation function, defined as follows: $$\sigma(z) = \begin{cases} 0, & z \leq 0 \\ z, & 0 < z < 1 \\ 1, & z \geq 1 \end{cases} \quad (12)$$ Unlike the well-known ReLU, $\sigma(z)$ bounds the dynamic range of every layer to $[0, 1]$, which keeps the fluctuations that may appear along the deep network in check. Employing $\sigma(z)$ is also beneficial in terms of the low computational load consumed during back propagation, since the derivative of $\sigma(z)$ is simply 1 or 0, neglecting singularities. During back propagation, a nullified derivative will decrease computation time even further, at the expense of updating the weights of the network with less information. Empirically, this was shown not to deteriorate performance. Also, it should be highlighted that complex non-linear patterns can still be learned by the deep architecture. Pre-training is applied on each layer separately in an unsupervised manner, using encoder-decoder structures with 1 epoch and a learning rate of 0.1. The optimized weights obtained by this process are used instead of the random initialization commonly employed, which enhances performance since it helps the network avoid local minima. Pre-training is extremely effective when there is a relatively small amount of training data, as in our scenario. Next, fine-tuning is applied separately on the encoder and the decoder. Namely, $\tilde{a}^i_{tr,ded,n}$ is encoded into a low-dimensional representation and decoded back to the output layer independently. Subsequently, the two tuned parts are merged and fine-tuning is again utilized, this time on the full stacked DED. Optimization is performed by back propagation, using gradient descent parameterized with a learning rate of $10^{-5}$ and a momentum of 0.9. Prior to pre-training, the weights are initialized with values drawn from a normal distribution with zero mean and variance 0.01. The cost function includes $L_2$ weight regularization of $10^{-7}$, a sparsity regularization weight of 4 and a sparsity proportion of 0.1. Relatively large sparsity-related parameters were assigned to achieve two goals. First, this allows the networks to avoid over-fitting by effectively ignoring weights with negligible values. Second, it decreases the computational load, since the embedding process involves a sparse affinity matrix. The network was trained until either 1,000 epochs were completed or a minimum gradient value of $10^{-6}$ was reached. 
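Under the hyperparameters quoted above, a compact PyTorch sketch of a single DED and the final joint fine-tuning stage might look as follows. This is our illustration only: the layer-wise pre-training, the separate encoder/decoder fine-tuning stages, and the weight/sparsity regularization terms are omitted for brevity, and the data tensors are synthetic placeholders for the standardized features and their normalized DM coordinates.

```python
import torch
import torch.nn as nn

class DED(nn.Module):
    """Diffusion encoder-decoder: 72 -> 200 -> 200 -> 3 -> 200 -> 200 -> 72, with a
    [0, 1]-saturating activation after every layer, matching the PSLT function in (12)."""
    def __init__(self):
        super().__init__()
        act = nn.Hardtanh(min_val=0.0, max_val=1.0)      # PSLT: clip to [0, 1]
        self.encoder = nn.Sequential(nn.Linear(72, 200), act,
                                     nn.Linear(200, 200), act,
                                     nn.Linear(200, 3), act)
        self.decoder = nn.Sequential(nn.Linear(3, 200), act,
                                     nn.Linear(200, 200), act,
                                     nn.Linear(200, 72), act)

    def forward(self, x):
        m_hat = self.encoder(x)
        return m_hat, self.decoder(m_hat)

# Synthetic stand-ins for one hypothesis: standardized features and DM coordinates in [0, 1].
a = torch.rand(1024, 72)
m = torch.rand(1024, 3)

ded = DED()
opt = torch.optim.SGD(ded.parameters(), lr=1e-5, momentum=0.9)
for epoch in range(1000):
    opt.zero_grad()
    m_hat, a_hat = ded(a)
    # middle layer forced towards the DM coordinates; output forced towards the input
    loss = nn.functional.mse_loss(m_hat, m) + nn.functional.mse_loss(a_hat, a)
    loss.backward()
    opt.step()
```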
A typical simulation as such took approximately 10 hours on an Intel i7-7820HQ CPU (x64, 64-bit operating system). In this study, the architecture was trained using a batch size of 128 observations. As a result, less memory was used compared with feature-by-feature feeding, since fewer registers were employed at the same time. Moreover, the training was accelerated due to fewer updates being performed, i.e., fewer propagations through the network. On the other hand, batch training may lead to a less accurate and stable estimation of the gradient. **C. Classifier Training Process** Let $s^i_{tr,cl}$ contain a random 15% of the observations contained in $s^i$, and let $s_{tr,cl} = [s^0_{tr,cl} \; s^1_{tr,cl}]$ be the full classifier training set. $s_{tr,cl}$ is built so that it is disjoint from the DED training set. Similarly to the DED training process, $\tilde{a}^i_{tr,cl,n} \in \mathbb{R}^{72}$ and $\tilde{m}^i_{tr,cl,n} \in \mathbb{R}^{3}$ represent the feature vectors extracted from the $n$th frame of $s^i_{tr,cl}$, according to (8) and (10), respectively. Error measures are defined to distinguish between features that are mapped and reconstructed well and features that are not. Consider two outcomes of propagating $\tilde{a}^i_{tr,cl,n}$ through DED$^i$. Namely, its low-dimensional predicted representation, denoted by $\tilde{m}^i_{pr,n}$, and its subsequently predicted reconstruction, denoted by $\tilde{a}^i_{pr,n}$. Consequently, the following error measures are defined, given $\tilde{a}^i_{tr,cl,n}$: $$e^i_{en}(n) \triangleq \| \tilde{m}^i_{tr,cl,n} - \tilde{m}^i_{pr,n} \|_1, \quad (13)$$ which is associated with encoder$^i$, and: $$e^i_{de}(n) \triangleq \| \tilde{a}^i_{tr,cl,n} - \tilde{a}^i_{pr,n} \|_1, \quad (14)$$ associated with decoder$^i$. In both cases, $\| \cdot \|_1$ denotes the $\ell_1$ norm. According to (13) and (14), a pair of numerical errors is generated by feeding $\tilde{a}^i_{tr,cl,n}$ to each trained DED. In this study, the two pairs of errors, associated with DED$^0$ and DED$^1$, are interpreted as a coordinate in $\mathbb{R}^{4}$ and are represented by $(e^0_{en}(n), e^0_{de}(n), e^1_{en}(n), e^1_{de}(n))$. Namely, each observation is eventually represented in a four-dimensional coordinate system. An SVM classifier, denoted by $C$, is applied on the error map, as detailed in Section III-B. In this study, $C$ is trained to separate coordinates held by $H^0$ from coordinates held by $H^1$. Thus, two decision regions are created. In this study, both real-time and batch modes are considered, as described in Section V-D. For batch mode, $C$ is trained on both the encoder and decoder errors projected on the error map, i.e., the decision boundary of $C$ is a three-dimensional hyperplane embedded in $\mathbb{R}^4$. Real-time mode only exploits the decoder error. Namely, in this case the error map is a two-dimensional coordinate system, and correspondingly $C$ divides $\mathbb{R}^2$ into two regions. D. Testing Process The DM method requires a batch of both speech and non-speech frames to estimate the low-dimensional embedding. This is impractical for real-time mode, where only a very small number of frames is available. Therefore, two testing processes are presented: a frame-by-frame testing process in which employment of the DM method is not required, and a batch testing process, which is shown to be more accurate but incurs a substantially higher delay. 
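Given the two trained DEDs, constructing the error map from (13)–(14) and training the linear-kernel SVM amounts to only a few lines, sketched below with synthetic stand-ins for the DED outputs and frame labels; in practice these quantities come from propagating the classifier training set through DED$^0$ and DED$^1$.

```python
import numpy as np
from sklearn.svm import SVC

def l1_errors(x, x_hat):
    """Per-observation l1 error between targets and DED predictions, as in (13)-(14)."""
    return np.abs(x - x_hat).sum(axis=1)

# Synthetic stand-ins for the classifier training set and the outputs of the two trained DEDs.
rng = np.random.default_rng(0)
N = 500
a = rng.random((N, 72))                                    # standardized feature vectors
m0, m1 = rng.random((N, 3)), rng.random((N, 3))            # DM coordinates under H0 / H1
m_hat0, m_hat1 = rng.random((N, 3)), rng.random((N, 3))    # encoder outputs of DED0 / DED1
a_hat0, a_hat1 = rng.random((N, 72)), rng.random((N, 72))  # decoder outputs of DED0 / DED1
labels = rng.integers(0, 2, N)                             # frame indicator: 0 = H0, 1 = H1

# Batch mode: 4-D error map; real-time mode: 2-D error map (decoder errors only).
E_batch = np.column_stack([l1_errors(m0, m_hat0), l1_errors(a, a_hat0),
                           l1_errors(m1, m_hat1), l1_errors(a, a_hat1)])
E_rt = E_batch[:, [1, 3]]

clf = SVC(kernel="linear").fit(E_rt, labels)               # linear-kernel SVM on the error map
decisions = clf.predict(E_rt)                              # per-frame speech/non-speech decisions
```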
1) Batch Mode Testing Process: In batch mode, both the encoder and the decoder errors are exploited, which increases prediction accuracy. On the other hand, the encoder error is well approximated only as long as a large batch of time domain audio data from both hypotheses in (2) is at hand, which leads to a delay in prediction. The test set, denoted by $s_{te}$, is constructed by following similar steps as in the previous section, while ensuring that the intersection of $s_{te}$ and the training sets of the DED neural network and the classifier is empty. $s_{te}$ includes 15% of both $s^0$ and $s^1$ in (11). For completeness, $\tilde{a}_{te,n} \in \mathbb{R}^{72}$ and $\tilde{m}_{te,n} \in \mathbb{R}^{3}$ denote the feature vectors associated with the $n$th observation of $s_{te}$, extracted according to (8) and (10), respectively. Let $(e^0_{en}(n), e^0_{de}(n))$ and $(e^1_{en}(n), e^1_{de}(n))$ represent the two-dimensional coordinates generated by the propagation of $\tilde{a}_{te,n}$ through DED$^0$ and DED$^1$, respectively. For the sake of clarity, we omit the time index $n$ and address these as the two-dimensional coordinates $(e^0_{en}, e^0_{de})$ and $(e^1_{en}, e^1_{de})$. As stated earlier, $(e^0_{en}, e^0_{de})$ and $(e^1_{en}, e^1_{de})$ are concatenated and projected into a four-dimensional error map. Let $R_j$ stand for region $j$ created by the decision boundary of $C$ applied to the error map, where $j \in \{0, 1\}$. As a result, the following decision rule is considered by the classifier $C$, regarding input feature vector $\tilde{a}_{te,n}$: $$C\{\tilde{a}_{te,n}\} = \begin{cases} H^0, & (e^0_{en}, e^0_{de}, e^1_{en}, e^1_{de}) \in R_0 \\ H^1, & (e^0_{en}, e^0_{de}, e^1_{en}, e^1_{de}) \in R_1 \end{cases}. \quad (15)$$ 2) Real-Time Mode Testing Process: Since immediate prediction is often required in many audio-based applications, the real-time mode is considered as the main branch of this study. Compared with the batch mode, the low-dimensional error is now unavailable. That is, the high-dimensional error becomes the sole measure to distinguish between audio frames of different hypotheses. Let $e^i_{de}(n)$ denote the error produced by propagating $\tilde{a}_{te,n}$ through DED$^i$. In a similar manner to the batch mode, $e^0_{de}(n)$ and $e^1_{de}(n)$ are joined and projected into a two-dimensional error map. For the sake of clarity, we again address these two measures as $(e^0_{de}, e^1_{de})$. Let $R_j$ stand for region $j$ created by the decision boundary of $C$ applied to the two-dimensional error map, where $j \in \{0, 1\}$. As a result, the following decision rule is considered by the classifier $C$, regarding input feature vector $\tilde{a}_{te,n}$: $$C\{\tilde{a}_{te,n}\} = \begin{cases} H^0, & (e^0_{de}, e^1_{de}) \in R_0 \\ H^1, & (e^0_{de}, e^1_{de}) \in R_1 \end{cases}. \quad (16)$$ VI. EXPERIMENTAL RESULTS In each of the experiments described in this section, comparisons are made between our proposed approach and several competing voice activity detectors. In order to avoid skewness and unfair imbalance, performances were generated under identical experimental conditions. Specifically, the same test set, acoustic setup and optimization measure, i.e., TP + TN (true positive + true negative), are uniformly employed. To allow appropriate assessment of performances, two measures are used: the optimized TP+TN measure, and the relation between the TP and TN measures. A. 
Performance of Proposed Approach 1) Accuracy: Primarily, the proposed method is applied using 100% of the DED training data set in a batch mode, as detailed in Section V-D1. The accuracy rate is 99.1%. In this mode, voice activity is detected by using both low and high-dimensional numerical measures. This performance gives rise to the main assumption of this research. Namely, that speech can be distinguished from transients based on their underlying geometric structures. Real-time voice activity detection is performed according to Section V-D2. In this mode, the accuracy rate reaches up to 98.1% when 100% of the DED training data set is used. Visualization of the error map is given in Fig. 2. It should be highlighted that similar visualization is not given for the batch mode, since the corresponding error map lays in \( \mathbb{R}^4 \). These results reflect on the strong relation between low and high-dimensional information. Namely, even though low-dimensional measures are not integrated into the decision rule, the separation in the diffusion space is implicitly expressed through the inverse mapping of the decoder. Therefore, the reconstructed high-dimensional information in the feature space is a sufficient measure to tell apart speech from non-speech frames. By examining the results, high robustness can be concluded. Namely, despite the variety of stationary and non-stationary noises included in the database, the intrinsic structure of speech is still well detected. 2) Generalization: Generalization and sensitivity of the proposed method are analyzed by performing an additional experiment in the real-time mode. These properties are examined with respect to two parameters; the corpus size of the DED training set and the ratio of speech observations in the latter. In this experiment, 5 different fractions of the full DED training set are considered. For each fraction, 5 different ratios between speech and non-speech observations are inspected. Results are demonstrated in Fig. 3. It can be observed that the accuracy rate surpasses 95%, even when merely 50% of the training data is available, which projects on the low sensitivity of the proposed algorithm to this measure. Also, the maximal accuracy is achieved when the speech observations ratio is equal to 50%, i.e., when there is an equal amount of speech and non-speech observations in the DED training set. This optimal ratio allows the network to learn two separate manifolds with minimal bias. This bias, if exists, can come to surface during testing, when one mapping is more robust than the other. In this case, relying on Euclidean distance between manifolds as done in this research may be harmful for classification. It can also be inferred that the performance has low sensitivity to changes in the speech observations ratio parameter. For example, let us consider the results achieved by exploiting 100% of the training corpus. Then, speech observations ratios of 20%, 50% and 80% yield accuracies of 95.2%, 98.1% and 94.4%, respectively. It is interesting to note that the degradation in performance is not symmetric around the ratio of 50%. i.e., degradation is more noticeable when the amount of noise observations is lower than those of speech in the training process. This can be related to the high varying nature of non-stationary noises in comparison to speech. Meaning, larger corpus of transients is needed to construct a robust low-dimensional structure with the DM method. 
As mentioned in Section IV-A, the constructed database comprises 42 different combinations of stationary and non-stationary noises. Thus, a fundamental question is concerned with the ability of the proposed detection system, and specifically DED\(^0\), to generalize well to other types of noise. In order to increase the generalization ability of the suggested detector to noises of various kinds, we performed several actions that regard both the architecture of the system and the feature extraction process. The way the architecture is built puts emphasis on both the difference between speech and noise, and on the similarity of noise to previously trained noises. As a result, the decision mechanism of the system relies on a combination of two learning systems. The features that are extracted from the time domain are constructed to exploit this form of architecture. During training, not only temporal and spectral features are derived, as traditionally done in state-of-the-art methods, but also the informative spatial diffusion map features. This reveals the unique intrinsic geometric structure of speech utterances. Ultimately, when feeding the system with unseen noise, its intrinsic structure is evaluated by the system and compared against speech and non-speech frames separately. Therefore, the performance of the system is not sensitive to unseen noises, in comparison to competing methods, as shown through the experimental setup detailed earlier in this section. **TABLE I** <table> <thead> <tr> <th></th> <th>Babble 10dB SNR Keyboard</th> <th>Musical 10dB SNR Hammering</th> <th>Colored 5dB SNR Hammering</th> <th>Musical 0dB SNR Keyboard</th> <th>Babble 15dB SNR Scissors</th> <th>std</th> </tr> </thead> <tbody> <tr> <td>Tamura</td> <td>73.6</td> <td>83.8</td> <td>83.9</td> <td>73.8</td> <td>81.2</td> <td>5.2</td> </tr> <tr> <td>Dov - Audio</td> <td>87.7</td> <td>89.9</td> <td>87.8</td> <td>86.5</td> <td>90.2</td> <td>1.6</td> </tr> <tr> <td>Dov - Video</td> <td>89.6</td> <td>89.6</td> <td>89.6</td> <td>89.6</td> <td>89.6</td> <td>0.0</td> </tr> <tr> <td>Dov - AV</td> <td>92.9</td> <td>94.5</td> <td>92.8</td> <td>92.9</td> <td>94.6</td> <td>0.9</td> </tr> <tr> <td>Ariav - AV</td> <td>95.8</td> <td>95.4</td> <td>95.9</td> <td>95.1</td> <td>97.2</td> <td>0.8</td> </tr> <tr> <td>Proposed Real Time</td> <td><strong>98.4</strong></td> <td><strong>98.3</strong></td> <td><strong>98.3</strong></td> <td><strong>98.3</strong></td> <td><strong>98.5</strong></td> <td><strong>0.1</strong></td> </tr> <tr> <td>Proposed Batch</td> <td><strong>99.3</strong></td> <td><strong>99.6</strong></td> <td><strong>99.3</strong></td> <td><strong>99.3</strong></td> <td><strong>99.5</strong></td> <td><strong>0.1</strong></td> </tr> </tbody> </table> Fig. 3. Accuracy rate percentage (TP+TN) of the proposed method using the real-time mode. Different fractions of the full DED training set (25, 50, 75, 100[%]) are considered along a grid of speech observations ratios. B. Comparison to Competing Methods In order to assert the performance of our architecture in a global scale, it is compared to 5 voice activity detectors. The competing methods are presented in [12], [21], [24] and are denoted “Ariav”, “Dov” and “Tamura”, respectively. Table I presents the performance of each method in 5 different acoustic environments that compose of transients (keyboard, hammering, scissors) and stationary noises (babble, musical, colored Gaussian noise) with different SNR values (0, 5, 10, 15 [dB]). 
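For completeness, the accuracy measure used in Table I (TP + TN on a balanced test set) and the ROC curves discussed in the following paragraphs can be obtained with scikit-learn; the detector scores below are synthetic placeholders, and using the signed distance to the SVM boundary as the score is our assumption rather than a detail stated in the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 1000)                    # ground-truth frame indicators
scores = labels + 0.8 * rng.standard_normal(1000)    # stand-in detector scores (e.g., SVM margin)

fpr, tpr, _ = roc_curve(labels, scores)              # ROC: TP rate vs. FP rate (TN = 1 - FP)
decisions = (scores > 0.5).astype(int)
tp = np.mean(decisions[labels == 1] == 1)            # true-positive rate
tn = np.mean(decisions[labels == 0] == 0)            # true-negative rate
accuracy = 0.5 * (tp + tn)                           # the TP+TN accuracy on a balanced test set
```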
The real-time and batch modes are notated by ‘Proposed Real-Time’ and ‘Proposed Batch’, respectively. It can be observed that the proposed algorithm, even in real-time mode, achieves the best accuracy rate through all varied setups. It should be highlighted that the proposed solution exploits only audio signals, while competing methods rely on integration of both audio and video data. By observing the standard deviation (std) measure in Table I, it is shown that, unlike competing methods, the performance of the proposed approach is barely affected by the change in the acoustic environments. This high robustness can be related to the construction of intrinsic representations of the audio frames. These representations do not consider the contents of transients or background noises, but merely their intrinsic geometric patterns. These patterns are unique for speech and non-speech audio frames, which allows enhanced performance regardless of the setup. The results presented in Table I show slight improvement in comparison to the results presented in Section VI-A. While in the former, 5 specific setups are inspected, 37 additional setups are considered in the latter. This indicates the existence of specific combinations of speech, stationary and non-stationary noises that are harder to comprehend. Deeper analysis of this phenomenon should be addressed in future work. To allow further evaluation, we employ the receiver operating characteristic (ROC) curve. Three acoustic setups presented in Table I are considered in Figs. 4 - 6. In each ROC curve, the real-time and batch proposed approaches are compared against four competing voice activity detectors. Since the test set is identical and balanced across all methods, a constructive comparison is made by the ROC curves. The latter allows analysis of the relation between TP and TN, thus delivering information about the trade-off between the two. It is worth noting that TN can be derived from the false positive (FP) measure, held by the x axis, by simply applying the relation TN = 1-FP. It can be observed that our voice activity detector outperforms the competing methods in a wide range of operating points. C. Performance Analysis This study presents a voice activity detection method that reaches substantially higher accuracy results in comparison to other state-of-the-art methods. This improvement can be attributed to several novelties, where two of them are considered the most influential. First, the integration of the DM method, forced at the end of the encoder. Second, construction of two separate DEDs, one trained with speech presence observations and the second with speech absence observations. This section is divided into two main parts. Initially, the differences between two competing methods and the proposed approach are analyzed and theoretical explanations of the gap in accuracies are given. Then, two experiments are conducted to establish these explanations. First, the method proposed in [12] is considered. In this method, low-dimensional embedding is built with the DM method, as done in our study. This embedding is constructed by considering joint relations between speech and non-speech features. However, our approach employs the DM method by considering relations between features of the same hypothesis only. In order to evaluate the influence of this difference on the degradation in performance, the algorithm proposed in [12] has been implemented. 
Consequently, high overlap of speech and non-speech embeddings in the diffusion space has been observed. This method performs voice activity detection mainly by modeling two low-dimensional Gaussian mixture models. Meaning, this approach aims to separate speech from non-speech coordinates by constructing a separator from a sum of weighted exponential kernels. As a result, overlapped coordinates are highly at risk to be misclassified. Next, the method proposed in [21] is analyzed. In this approach, a single auto-encoder attempts to learn the low-dimensional embedding of both speech and non-speech frames. As a result, joint embedding is shown to lead to high overlap in the low-dimension, much like in the research conducted in [12]. Additionally, this architecture does not consider the DM method as a constraint on the embedded data, so dimensionality reduction is done automatically. This leads to a lack of spatial information in the low-dimension and absence of geometric insight. Ultimately, this causes significant overlap between low-dimensional representations and to deterioration in performance. The high accuracy shown in [21] can be related to high exploitation of temporal relations, carried by the RNN, and integration of visual features in the classification process. To explore the performance of the network without video, the authors of this work implemented audio-only version of the method presented in [21]. The outcome shows severe degradation in performance, as the average accuracy is 83% with respect to all 5 setups considered in Table I. Two experiments are conducted in order to validate the above notions. First, the algorithms proposed in [12] and [21] are implemented with merely audio data, as demonstrated in Fig. 7. Accuracy rates of these methods are calculated by employing different fractions of the full DED training set. For this particular experiment, the ratio of speech observations was fixed to 50%, to achieve optimal results. Several interesting insights can be obtained based on these outcomes. Primarily, there is a substantial gap between performances when considering only the audio data and neglecting visual features. Moreover, it is noticeable that the method proposed in [21] is not affected as much by the change in the amount of training observations. As previously stated, the latter does not consider any geometric or structural constraint on the embedded data. Therefore, as long as the training observations are divided roughly equal between hypotheses, their amount has lower significance. On the other hand, the study presented in [12] highly relies on the intrinsic structure of the data, i.e., the more training observations are available, the better the joint relations between speech and non-speech features are modeled. In this case, larger training set leads to a more robust manifold construction. In order to further explore the core of the advantages of the proposed approach, another experiment is conducted. This time, the studies in [12] and [21] are implemented by integration of several principles of this study. It should be noted that the detection algorithm presented in each of these studies remains the same. In [12], the algorithm was altered such that the low-dimensional coordinates are learned separately for speech and non-speech frames before applying the Gaussian mixture model on the generated manifolds. In [21], two separate auto-encoders were implemented. 
Each auto-encoder learned the low-dimensional mapping of speech and non-speech audio frames independently. Also, the DM method was applied in a similar manner to the proposed method in order to integrate spatial information. The output of each encoder was inserted into a separate RNN. The output of each RNN represents the probability that a test observation is taken from a speech audio frame. Ultimately, the probabilities of the two RNNs are combined and a prediction is made by a constructed decision rule. The results of this experiment are given in Fig. 8. For each method, the accuracy is calculated along a grid of fractions of the full DED training set, while the speech observations ratio is once again set to 50%. Moreover, the performance of each method is given once with its original implementation and once with the improved implementation that combines principles from our method. Regarding the studies presented in [12], [21], the accuracies of the two new implementations improve significantly. Also, these models are less sensitive to changes in the size of the DED training corpus. Even though an increase in performance can be observed, the studies presented in [12] and [21] still do not reach the results of the proposed method. The core classification algorithm of each of the three discussed methods remains unchanged through all the comparative experiments conducted in this study. Therefore, the core classification algorithm proposed in our study may be responsible for the observed gap. VII. CONCLUSIONS In this work we have performed voice activity detection with audio-based features. We separately represented the low-dimensional geometric structures of speech and non-speech frames by integrating the diffusion maps method with two independent, encoder-decoder based, deep neural networks. This separation of speech from stationary noises and transients during the training process of the two networks also led to high robustness across varied acoustic environments and to improved generalization, as demonstrated in the experimental study. ACKNOWLEDGMENT The authors thank the Guest Editor, Dr. Bo Li, and the anonymous reviewers for their constructive comments and useful suggestions. **References** [25] B. Logan, "Mel frequency cepstral coefficients for music modeling," in Proc. Int. Symp. Music Information Retrieval (ISMIR), 2000. [26] S. Mousazadeh and I. Cohen, "Voice activity detection in presence of transient noise using spectral clustering," IEEE Trans. Audio, Speech, and Language Processing, 2013. [31] Z. Farbman, R. Fattal, and D. Lischinski, "Diffusion maps for edge-aware image editing," ACM Trans. Graph., vol. 29, no. 6, pp. 145:1–145:10, Dec. 2010. [34] G. Mishne and I. Cohen, "Multiscale anomaly detection using diffusion maps," IEEE J. Sel. Topics Signal Process., 2013.
Managing Author Records and Using Author Search in Web of Science Pioneering the world of research for more than 50 years Bob Green Solution Specialist January 2020 Agenda 1. An Overview 2. Features and Functionality 3. Workflows/Scenarios 4. FAQs 5. Additional Resources An Overview Why Create this Process? “The Web of Science Group is on a journey of transformation and innovation to support a more holistic and researcher-centric workflow.” ✓ Help researchers track more of their impact and own their online identity. ✓ Deliver the highest-quality disambiguated author data. ✓ Bring the highest-quality author data into the Web of Science Group’s other solutions. ✓ Make the Web of Science Group’s solutions the most trusted resources for confident discovery of an author’s published work, as well as assessment of their output and associated impact. What has changed? In November 2019, we enhanced the quality of author disambiguation and accessibility of author data in Web of Science, while giving researchers ownership of their Author Record via Publons. Features as release in BETA • A fully re-imagined Author Search. • A new Author Record view. • Ability for Web of Science users to submit feedback to correct publication records. • An enhanced author disambiguation algorithm that suggests author records and learns from user feedback. • Give researchers ability to claim ownership of their ‘Web of Science Author Record’ via Publons. What is author disambiguation? Name ambiguity is a frequently encountered problem in the scholarly community: - Different researchers publish under the same name. - Individual researchers publish under many names. - Languages and cultural naming conventions introduce additional challenges. Author disambiguation is a process that aims to find all publications that belong to a given author and distinguish them from publications of other authors who share the same name. FACT: A mere hundred surnames still make up over 85% of China’s 1.3 billion citizens. The top three—Wang, Li, and Zhang—cover more than 20% of the population. With ResearcherID, we pioneered the concept of assigning a unique identifier for authors that could work across systems. Web of Science Group has supported ORCID from the beginning, even prior to its launch. The concept, code, and some of the original funding for ORCID came from ResearcherID back in 2011. We have always integrated with ORCID... now with Publons, researchers can effortlessly keep their ORCID up to date by simply updating their Publons profile – as the Publons profile offers so much more. NOW we’ve introduced true Author Records in Web of Science, and Web of Science ResearcherID is that unique identifier to ensure direct discovery of that author at any time. Web of Science and Publons each have a unique Identifying number. Web of Science ResearcherID links the disambiguated data across systems in a bidirectional relationship. - Creating a Publons profile will generate a Web of Science ResearcherID - Authors can adjust which Web of Science publications are theirs in Publons and those changes are automatically reflected in Web of Science. Update your Publons profile and changes can be sent to ORCiD - or - Update your ORCiD and changes can be sent to Publons. An easier way to manage profiles. We are all trying to get to better data. Let’s get there together. Author data, made better together. 
Author disambiguation at scale needs an algorithmic approach + human curation Deliver a true Author Record via intuitive Author Search in Web of Science Author profiles are core Feedback is reviewed by a team of specialists - Accepted feedback will improve our disambiguation algorithm Allow for users to provide feedback; authors to claim and curate their record via Publons Continuously improve author disambiguation Features and Functionality It all begins with a simple, fast and intuitive Author Search Beta A new Author Search quickly and efficiently guides users through the process of easily locating the author you are looking for. Regardless of how common their name is. Saving users time, while improving the ease and accuracy of finding authors’ full publication records in Web of Science Core Collection... Search by Author with type-ahead functionality Search by Web of Science ResearcherID or ORCID Your search experience automatically adjusts depending on the level of name ambiguity **Completely unique names** will take you straight to the author record. **Moderately ambiguous names** take you to a results screen where you can select the correct record or merge records into one author view. **Highly ambiguous names** will intuitively guide users to further refine their search (as shown) before going to the results page. **Our disambiguation algorithm** uses more than 40 indicators to group together publications likely authored by the same person into an Author Record. **Author Search includes wildcard** (e.g., 'Joseph Wilson' returns 'Joseph A.P. Wilson') **Institution filter uses Org Enhanced** (e.g., Amity Univ, Amity Univ Gurgaon consolidated under Amity University) ... but it’s where the search takes you that’s even more exciting ... What is an Author Record? A clean and comprehensive picture of an author’s Web of Science Core Collection publication and citation record. Same Authors. New View. Author Record (BETA) See the person, rather than just a list of publications. - Author name (most publications) - Alternate name variants - Affiliations (5 most common, in order) - List of publications – (including any outside subscription) ability to view as a set of results to export, and analyze with links to full text. Web of Science Citation Network view - H-index - Sum of Times Cited - Total Citing Articles (total in the WoS core collection, not just in their subscription) A new seamless curation process gives users the ability to submit feedback to improve Author Records and persistently correct publication records. Authors can now claim ownership and maintain their Web of Science Author Record via Publons. Any Web of Science registered user can submit feedback to correct an Unclaimed Author Record. Everyone benefits from corrections made by the research community. A mutually manual curation process... Feedback isn’t just going into a computer... All submitted feedback is being reviewed by a real human! Because if you are taking the time to suggest improvements, we want to make sure they are validated and implemented correctly. ... the algorithm learns from your feedback. Our advanced clustering algorithm uses artificial intelligence to learn from user feedback and will continuously improve the accuracy of author disambiguation. Author data, made better together Giving authors ownership of their Web of Science publication record While anyone signed into Web of Science can suggest feedback on an author’s record... ...only one person (the Author) can CLAIM ownership of their record. 
When they click CLAIM THIS RECORD they will be taken to Publons, to sign into their existing profile (which will already have a Web of Science ResearcherID), or to create a profile, (in which case, a unique Web of Science ResearcherID is created). This Web of Science ResearcherID will display in both Web of Science and Publons. For reliable exposure in Web of Science Once claimed by the author this is clearly indicated in the Author Record and nobody except the author can make corrections. The option to CLAIM THIS RECORD is no longer available. A link to the author’s Publons profile is provided. If the Author Record needs updating, the author does this from their Publons profile. Any changes are synchronised to Web of Science and ORCiD. Researchers can own their identity Once in Publons Author’s can manage their Web of Science Author Record, any corrections are fed back into Web of Science and ORCiD. Publons is a true profile. Not just an author profile, but a comprehensive researcher profile, containing bio, publications, reviews and editorial involvement. Researchers can track their publications, citation metrics and benchmark their peer review activity. **Publons links with ORCiD** - Login to Publons with ORCiD. - One click import publications from ORCiD to Publons profile. - One click export publication and review records from Publons to ORCiD. - Reviews can be automatically exported to ORCiD if you require. - This will be the case for publications too before the end of the month. Easy to add publications to your Publons profile - Import publications directly from Web of Science via your Private Dashboard. - Import publications from ORCiD, DOI/title search, or by file upload. - Validate which publications are yours and add them to your profile. Whenever a publication is added to your profile, we automatically search Web of Science and CrossRef for additional metadata to improve your records and find any missing citations. Claim your publications directly from Web of Science and export to your Publons profile. Note: If you have a pop-up blocker, you will need to disable this to be able to claim your publications from within Web of Science. Workflows/Scenarios Claiming your publications in the Web of Science - **Do you have an ORCID?** - **No** - Find your Author Record in WoS and claim it by creating a Publons account. - **Yes** - Use the import tool to claim your publications. - **Do you have a RID?** - **No** - Navigate to WoS and find your author record and claim it. Or search for your author record from WoS. - **Yes** - Have you imported your publications from WoS? - **No** - A Publons profile has been created for you. Find your profile and activate it. - **Yes** - Manage your publications in the Web of Science. If you have an ORCID, link it to your Publons profile. New Publons profiles will have a three character prefix in their RID, not one. "I am a librarian and I want to correct an Author Record" - After searching for the author’s name, either open a single Author Record or select multiple records that contain publications by the same person and click ‘View Combined Record’ - On the Author Record screen, click ‘Correct this record’, sign in to WoS, and proceed to the curation page. - Select which publications which are not authored by the researcher to remove them from the record. - You can remove publications in bulk by name and journal title. - Submit your changes, detailing any additional information in the free text provided. 
- You will receive email notification confirming your corrections, and another when they have been reviewed by our editorial team (aim within 48 hours). - If your feedback is accepted, changes to WoS will be visible in up to 3 days. - If your feedback is rejected, a reason will be given and opportunity to escalate to our support team. This can only be done if the Author Record(s) have not been claimed. “I am an author with publications in Publons from an ‘old’ RID/ORCID (same name)” - I search WoS for my published name variants - My Author Record displays my associated RID (linkable to Publons). - My Author Record may contain other algorithmically-added publications. - The record is not yet ‘claimed’ - I ‘claim’ the record in Publons. - I select the publications that are mine. - This submits feedback to WoS (i.e., these are mine, these are not mine) - My list of publications in Publons updates to include any newly added publications. - My WoS Author Record is ‘claimed’ - No-one else can claim it - The publications list matches Publons* - The Author Record shows as ‘claimed’ immediately but any changes to the record (add/remove publications) may not show for ~48 hours. *Publons does not currently import some types of publication (e.g., corrections) therefore the numbers may sometimes not match “I am an author with publications in Publons from an ‘old’ RID/ORCID (different name)” - I search WoS for my published name variants - My Author Record displays without an associated RID.* - The record is unclaimed. - The publications list is augmented (by DAIS) *DAIS has been unable to match the RID name to any names of the authorships with those publications. - I ‘claim’ the record in Publons. - I select the publications that are mine. - This submits feedback to WoS (i.e., these are mine, these are not mine) - My list of publications in Publons updates to include any newly added publications. - My WoS Author Record is ‘claimed’ - No-one else can claim it - The publications list matches Publons* - My RID is now associated with the authorships of those publications. - The Author Record shows as ‘claimed’ immediately but any changes to the record (add/remove publications) may not show for ~48 hours. - My RID displays on my Author Record. *Publons does not currently import some types of publication (e.g., corrections) therefore the numbers may sometimes not match “I am a new user importing my publications to Publons” - I create an account in Publons. - I add Alternative publishing names. - I click ‘Publications’ > ‘Import publications’ > ‘See my Web of Science Publications’ - Publons searches WoS for matching Author Records by email and then by name or alternative published names - I select the publications that are mine. - This submits feedback to WoS (i.e., these are mine, these are not mine) - My list of publications in Publons updates to include any newly added publications. - My WoS Author Record is ‘claimed’ - No-one else can claim it - The publications list matches Publons* - The Author Record shows as ‘claimed’ immediately but any changes to the record (add/remove publications) may not show for ~48 hours. *Publons does not currently import some types of publication (e.g., corrections) therefore the numbers may sometimes not match I am a new user claiming my record on Web of Science - I navigate to Author Search in Web of Science Core Collection - I search for my published names - I select any records that look like they contain my publications. 
- I click ‘claim this record’ - I get taken to Publons, where I create a new account. - I select which publications are mine from the list (I can filter publications by year, organization and name). - I click to import the publications. - (After ~24 hrs) My WoS Author Record is ‘claimed’ - No-one else can claim it - The publications list matches Publons* *Publons does not currently import some types of publication (e.g., corrections), therefore the numbers may sometimes not match. FAQ Why have we released it as a beta? Whilst we are confident that the Author Search functionality will offer high levels of accuracy to the user from day one, releasing it in beta will allow us to gauge user response and set expectations around timelines for processing as yet unknown levels of feedback about the author records. How accurate is the cluster quality? We're investing heavily to continually improve DAIS, but no algorithm is ever going to be 100% accurate, which is why we offer two pathways to suggesting amendments to the records: (i) claiming a record through Publons, or (ii) submitting feedback through WoS. All amendments we receive will feed back into DAIS to improve its accuracy further. How long will it take for submitted feedback to show up in Web of Science? During the beta phase this will largely depend on the quantity and complexity of the feedback we receive, but users can expect to see their corrections visible in the WoS records within 1-2 weeks. We will be continually improving and streamlining this workflow as we progress. FAQ What is a Web of Science ResearcherID (RID) now? Web of Science ResearcherID has evolved from an author profile to a unique author identifier in Web of Science. What if you had a RID before Author Records was launched? All of those individuals that previously held a ResearcherID still have them and they remain unchanged. They will be managed via Publons. What if you did not have a RID before the launch? Now, any author that claims their WoS Author Record via Publons will receive a RID, and any researcher creating a Publons account and importing their WoS publications will get a RID. Who has a RID in the future? When? The end goal is to automatically give every author in the Web of Science Core Collection a RID, creating a unique identifier for every individual author in the Web of Science Core Collection. FAQ What happens when claiming on Publons? What does this look like and how does it impact my record? In Publons, users can choose to import their publications by performing a WoS search from within the Publons interface and confirming which of the records it finds are theirs. Publons shares this data back with WoS so the records are reflected the same in both places. When can a claimed record change? What happens when there’s a new publication that an author hasn’t yet claimed? Researchers can find and claim their new publications to add to their claimed Author Record by using the WoS search tool in Publons. New publications will NOT be automatically added to claimed Author Records, and will instead form a separate author record until the author adds them to their claimed Author Record. When can a curated record change? Can DAIS add records? Can I suggest new records? The DAIS algorithm can add new publications to a curated Author Record if it believes them to be by the same researcher. Other users can also suggest new publications to a record by combining any separate, unclaimed Author Records (but only until the record is Claimed). 
Can I make suggestions to a claimed record? In this phase, no changes can be made to an author record once it has been claimed by the author. FAQ Will a user be able to suggest feedback after an author has already claimed their record? NO. Currently, no one will be able to suggest feedback on claimed records, but we will monitor and take feedback from the market to address any concerns going forward. Will that always be the case? We will develop new functionality in line with user feedback and requirements, which may include the ability to amend claimed records. Why does a Publons profile not match the Author Search results? If the name in the Author Search is not the same as the one used for the Publons profile, this can cause issues in this beta release. For example, Green Bob and Bob Green will not be seen as the same. Diacritics present in one name and not the other will also prevent a match. This mismatch in names affects the Author Search too: when searching by RID or ORCID, results will only be returned if the name in Publons matches what the Author Record uses. These issues will be resolved in a future release. Additional Resources Author data, made better together PHILIP REIMANN, SENIOR PRODUCT MANAGER, WEB OF SCIENCE GROUP OCTOBER 9, 2019 The Web of Science Group introduces a collaborative approach to improving researcher identity and disambiguation. The increasingly global approach to research presents unique challenges. Researchers must establish a research identity for their contribution to their field to be recognized – all while competing for funding and positions and establishing valuable collaborative relationships. The Web of Science Group has designed the latest Web of Science™ release – a new Author Search, Author Record and curation mechanism – to meet these challenges faced by the research community. Thank you Bob Green Solution Specialist bob.green@clarivate.com clarivate.com/ Web of Science Group retains all intellectual property rights in, and asserts rights of confidentiality over, all parts of its response submitted within this presentation. By submitting this response we authorise you to make and distribute such copies of our proposal within your organisation and to any party contracted directly to solely assist in the evaluation process of our presentation on a confidential basis. Any further use will be strictly subject to agreeing appropriate terms.
Preliminary Report on the Study of Beam-Induced Background Effects at a Muon Collider Nazar Bartosik\(^1\), Alessandro Bertolin\(^2\), Massimo Casarsa\(^3\), Francesco Collamati\(^4\), Alfredo Ferrari\(^5\), Anna Ferrari\(^8\), Alessio Gianelle\(^2\), Donatella Lucchesi\(^6\), Nikolai Mokhov\(^9\), Stefan Mueller\(^8\), Nadia Pastrone\(^1\), Paola Sala\(^7\), Lorenzo Sestini\(^2\), and Sergei Striganov\(^9\) \(^1\)INFN Sezione di Torino, Torino, Italy \(^2\)INFN Sezione di Padova, Padova, Italy \(^3\)INFN Sezione di Trieste, Trieste, Italy \(^4\)INFN Sezione di Roma, Roma, Italy \(^5\)CERN, Geneva, Switzerland \(^6\)University of Padova and INFN Sezione di Padova, Padova, Italy \(^7\)INFN Sezione di Milano, Milano, Italy \(^8\)HZDR, Dresden, Germany \(^9\)Fermilab, Batavia, Illinois, U.S.A May 9, 2019 Abstract Physics at a multi-TeV muon collider needs a change of perspective for the detector design, due to the large amount of background induced by muon beam decays. Preliminary studies, based on simulated data, of the composition and the characteristics of the particles originating from muon decays and reaching the detector are presented here. The reconstruction performance for the physics processes $H \rightarrow b\bar{b}$ and $Z \rightarrow b\bar{b}$ has been investigated, for the time being without the effect of the machine-induced background. A preliminary study of the environmental hazard due to the radiation induced by neutrino interactions with matter is presented, using the FLUKA simulation program. Keywords Particle Physics · Future Colliders · Muon Collider · Detectors 1 Introduction The quest for higher-energy colliders has re-opened the discussion on the possibility of exploiting muon collisions to reach multi-TeV energies in the center of mass. During 2018, in preparation for the update of the European strategy for particle physics, the muon collider (MC) working group submitted an input document\(^[1]\) which summarizes the status of the different projects. While machines based on different technologies for muon production have been studied in the past, as presented in Ref.\(^[1]\), the effects of the background induced by muon beam decays on the physics reach have not been studied in detail, due to the complexity of the beam background at the interaction region (IR). In fact, the muon decay products can reach the IR from a distance that varies with the beam energy; therefore the collider optics and its superconducting magnets, with appropriate protective elements, need to be designed and included in the simulations\(^[3]\) to evaluate it. Since the level of the background is too high to operate a particle physics detector, two tungsten cone-shaped shields have been proposed, as presented in\(^[4]\) and optimized in\(^[5, 6]\), to protect the IR and the detector. The exact design of the machine-detector interface (MDI), which includes these shields, is needed to evaluate the distribution of the induced background at any position in the detector. The MAP collaboration\(^[2]\) studied in detail the beam-induced background up to the particle distributions at the detector; then, due to the ending of that research program, no further studies were performed. The study presented in this paper starts from the latest results obtained by the MAP collaboration, and makes use of their IR and MDI optimized for 1.5-TeV center-of-mass energy\(^[5]\). The software framework used for the propagation of the beam background through the detector is the same as that used... 
by MAP before the shutdown of the program. Recently, a different method to produce muons has been proposed, in which the beam intensity is expected to be lower by one or two orders of magnitude than in the proton-driven scheme. In this configuration, the level of the machine-induced background in the detector will be reduced, but at the moment no MDI design is available. Hence, the current studies are performed using the MAP parameters, which in any case represent a more severe background scenario. 2 Beam-induced background simulation The composition and the characteristics of the beam-induced background in a muon collider have been studied in detail in Refs. [5] and [6] for \( \mu^+\mu^- \) collisions at \( \sqrt{s} = 1.5 \) TeV and \( \sqrt{s} = 125 \) GeV, respectively. Here the most relevant features of the background are summarized. Figure 1: Illustration of the model built for the MARS15 simulation in a range of ±100 m around the interaction point. It includes the machine components in the tunnel and the ILC 4th concept detector upgraded for the High-Luminosity LHC phase. The shielding nozzles are represented in yellow inside the detector. This figure has been reproduced from Ref. [5]. The above-mentioned studies are based on the MARS15 software [8], which provides a realistic simulation of the beam-induced background inside the detector. MARS15 implements a model of the machine-detector interface, the experimental hall, and the machine tunnel with all the collider components in a ±200 m range around the interaction point (IP), including a realistic description of the geometry, the material distribution and the magnetic fields of the lattice elements (see Figure 1). The sources of the beam-induced background are the electrons and positrons generated in muon decays, and the synchrotron photons subsequently radiated by the primary \( e^\pm \), which interact with the machine components and the surrounding environment, producing secondary particles (charged and neutral hadrons, Bethe-Heitler muons, electrons and photons) that may eventually reach the detector. The actual background level in the detector depends on the beam energy and the configuration of the machine-detector interface. In particular, the MARS15 studies demonstrated that two tungsten cone-shaped shields (nozzles) in proximity of the interaction point, accurately designed and optimized for a specific beam energy, play a crucial role in background mitigation inside the detector. The position and shape of the nozzles are shown in Figs. 1 and 5. Figure 2 shows the MARS15-calculated distributions of the different species of background particles as a function of the decay point of the muon from which they originate. The beam-induced background primarily consists of photons, neutrons, \( e^\pm \), charged hadrons, and Bethe-Heitler \( \mu^\pm \), which are produced by muons decaying within a range of tens of meters around the IP. Outside that range, which mainly depends on the collider energy and the machine design, the detector background contributions quickly become negligible for all components, except for Bethe-Heitler muons, whose range of interest is ±100 m from the IP. This makes it possible to restrict the computationally demanding simulation of the background sample to muons decaying in a range of ±25 m (±30 m) around the IP for a 1.5-TeV (125-GeV) collider. Such a sample accounts for ∼80% of the Bethe-Heitler muons. 
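As a quick cross-check of the decay rates quoted in Table 1 below, the muon decay length and the number of decays per meter follow directly from relativistic kinematics, \( \lambda = \beta\gamma c\tau_\mu \) and \( dN/ds = N_{\mathrm{bunch}}/\lambda \). The minimal sketch below reproduces these numbers under the stated bunch intensity of \( 2 \times 10^{12} \) muons; it is only an illustrative calculation assuming the PDG muon mass and lifetime, not part of the MARS15 workflow.

```python
# Cross-check of the muon decay rates quoted in Table 1,
# assuming m_mu = 105.66 MeV, c*tau_mu = 658.6 m (PDG values)
# and a bunch intensity of 2e12 muons (beta ~ 1 at these energies).
C_TAU_MU = 658.6   # muon c*tau [m]
M_MU = 0.10566     # muon mass [GeV]
N_BUNCH = 2e12     # muons per bunch

for e_beam in (62.5, 750.0):            # beam energies [GeV]
    gamma = e_beam / M_MU               # Lorentz factor
    decay_length = gamma * C_TAU_MU     # lab-frame decay length [m]
    decays_per_m = N_BUNCH / decay_length
    print(f"E = {e_beam} GeV: lambda = {decay_length:.2e} m, "
          f"decays/m per beam = {decays_per_m:.2e}")
# Approximate output: 3.9e5 m and 5.1e6 /m at 62.5 GeV,
#                     4.7e6 m and 4.3e5 /m at 750 GeV.
```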
Table 1 reports the expected average number of muon decays per meter and the estimated yields of background particles entering the detector per bunch crossing for the two considered beam energies, when a bunch intensity of \( 2 \times 10^{12} \) muons is assumed. Due to the effect of the IP and MDI design, specifically optimized for the two energies, the actual level of the detector background does not scale linearly with the beam energy. The suppression factor provided by the shielding nozzles, the protective inserts inside, and the masks in between the IR superconducting magnets [3] is of the order of ∼1/500. Figure 2: Particle composition of the beam-induced background as a function of the muon decay distance from the interaction point for the cases of a 1.5 TeV (left) and a 125 GeV (right) collider. Table 1: Expected average number of muon decays per meter and estimated number of background particles entering the detector per bunch crossing for beam energies of 62.5 and 750 GeV. A bunch intensity of $2 \times 10^{12}$ is assumed. The thresholds applied to the particles' kinetic energy are shown in parentheses. <table> <thead> <tr> <th>beam energy [GeV]</th> <th>62.5</th> <th>750</th> </tr> </thead> <tbody> <tr> <td>$\mu$ decay length [m]</td> <td>$3.9 \times 10^5$</td> <td>$4.7 \times 10^6$</td> </tr> <tr> <td>$\mu$ decays/m per beam</td> <td>$5.1 \times 10^6$</td> <td>$4.3 \times 10^5$</td> </tr> <tr> <td>photons ($E_{ph}^{kin} &gt; 0.2$ MeV)</td> <td>$3.4 \times 10^8$</td> <td>$1.6 \times 10^8$</td> </tr> <tr> <td>neutrons ($E_{n}^{kin} &gt; 0.1$ MeV)</td> <td>$4.6 \times 10^7$</td> <td>$4.8 \times 10^7$</td> </tr> <tr> <td>electrons ($E_{el}^{kin} &gt; 0.2$ MeV)</td> <td>$2.6 \times 10^6$</td> <td>$1.5 \times 10^6$</td> </tr> <tr> <td>charged hadrons ($E_{ch, had}^{kin} &gt; 1$ MeV)</td> <td>$2.2 \times 10^4$</td> <td>$6.2 \times 10^4$</td> </tr> <tr> <td>muons ($E_{mu}^{kin} &gt; 1$ MeV)</td> <td>$2.5 \times 10^3$</td> <td>$2.7 \times 10^3$</td> </tr> </tbody> </table> Nevertheless, the absolute flux of particles is still very high and poses a serious challenge for the detector readout and particle reconstruction. Another potential approach for reducing the flux of background particles is discussed in Section 6. In Figure 3, the momentum spectra of the beam-induced background are shown for the case of 750-GeV beams. The electromagnetic component presents relatively soft momentum spectra ($\langle p_{ph} \rangle = 1.7$ MeV and $\langle p_{el} \rangle = 6.4$ MeV), the charged and neutral hadrons have an average momentum of about half a GeV ($\langle p_{n} \rangle = 477$ MeV and $\langle p_{ch, had} \rangle = 481$ MeV), whereas muon momenta are much higher ($\langle p_{mu} \rangle = 14$ GeV). Another distinctive feature of the background particles from muon decays is their timing. Figure 4 shows the distributions of the time of arrival at the detector entry point with respect to the bunch crossing time for the different background components. The evident peaks around zero are due to leakage, mainly of photons and electrons, close to the IP, where the shielding is minimal. 3 Beam-induced background characterization The background samples generated with the MARS15 program are the inputs to the simulation of the detector response in the ILCRoot framework [9]. The detector used for the studies presented here was conceived for a MC with a center-of-mass energy of 1.5 TeV. 
Both the framework and the detector are the same as those used by the MAP collaboration before 2014. Several improvements have been achieved since then on the detector side, and a new detector design based on up-to-date technologies is needed to compare the physics potential of this machine with that of the other proposed future colliders. The old configuration is used as a starting point for this study and will be updated; in the following, it has to be kept in mind that this is not the best that can be done as of today. The detector simulation includes a vertex (VXD) and a tracking (Tracker) silicon pixel subsystem, as described in Refs. [9] and [10]. Outside a 400-$\mu$m thick beryllium beam pipe of 2.2-cm radius, the vertex detector covers a region 42 cm long with five cylindrical layers at distances from 3 to 12.9 cm from the beam axis in the transverse plane. The VXD pixel size is 20 µm. The tracker consists of silicon pixel sensors with a 50-µm pitch, mounted on five cylindrical layers, 330 cm long, with transverse radii from 20 to 120 cm. The forward region is instrumented with disks, also based on silicon pixel sensors, properly shaped in order to host the tungsten shielding nozzles. The full simulation includes electronic noise as well as threshold and saturation effects in the final digitized signals. The calorimeter is based on a scintillation-Cherenkov dual-readout technique, the Dual-Readout Integrally Active and Non-segmented Option (ADRIANO) \[11\]. The calorimeter simulation for the MC in ILCRoot \[12\] considers a fully projective geometry with a polar-angle coverage down to 8.4°. The barrel and the endcap regions consist of about 23.6 thousand lead-glass towers with scintillating fibers and a 1.4° aperture angle. Cherenkov and scintillation hits are simulated separately and digitized independently. The photodetector noise, wavelength-dependent light attenuation and collection efficiency are taken into account in the simulation of the detector response. Clusters of digitized energy deposits are then used by the jet reconstruction algorithm. The tracking system and the calorimeter are immersed in a solenoidal magnetic field of 3.57 T. Simulation of the muon detector is not performed, given that this is the outermost detector and the signatures studied in this article do not include final-state muons. Figure 5 shows a schematic view of the full detector used in the simulation. Figure 5: Current configuration of the detector. From inside to outside, in cyan are the nozzles, followed by the tracking system in magenta. The magnetic coil is drawn in blue and the calorimeter system is depicted in red. The muon system, not implemented yet, is represented in green. Before describing the physical object reconstruction, we discuss the beam-induced background and the handles available to mitigate its impact. As shown in Section 2, the noise in the detectors comes from the muon decay products and from their interaction with the nozzles. The spatial and kinematic distributions show that the tracking system is the most affected detector. As presented in Ref. [5], the maximum neutron fluence in the innermost layer of the silicon tracker ($R = 3$ cm) for a one-year operation is at the level of $10^8$ cm$^{-2}$, which is lower than what has been measured for LHC in a similar position and several orders of magnitude lower than the $10^{17}$ cm$^{-2}$ expected for FCC-hh [13]. 
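For reference, the geometry parameters quoted above can be collected in a compact summary. The sketch below is only a plain restatement of the numbers given in the text; it is not the ILCRoot geometry description, and the field names are illustrative.

```python
# Summary of the simulated detector layout described above.
# This is a data summary of the quoted parameters only, not the actual
# ILCRoot geometry definition; all field names are illustrative.
detector_config = {
    "beam_pipe":   {"material": "Be", "radius_cm": 2.2, "thickness_um": 400},
    "vertex":      {"layers": 5, "radii_cm": (3.0, 12.9),
                    "length_cm": 42, "pixel_size_um": 20},
    "tracker":     {"layers": 5, "radii_cm": (20, 120),
                    "length_cm": 330, "pixel_pitch_um": 50},
    "calorimeter": {"type": "ADRIANO dual-readout",
                    "n_towers": 23_600, "tower_aperture_deg": 1.4,
                    "min_polar_angle_deg": 8.4},
    "solenoid_field_T": 3.57,
}
```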
The number of hits released in the tracking detector by background particles can be reduced by exploiting the time information. As shown in [9] and reproduced in this study, these particles have an arrival time distribution that is significantly different from that of the signal particles. Figure 6 shows the simulated arrival time of particles at the tracker modules with respect to the arrival time of photons radiated from the interaction point. By selecting a time window of a few ns around the expected arrival time, a large fraction of the background can be suppressed. This possibility must be studied in detail in the light of the new timing detectors already proposed for the HL-LHC, where resolutions of tens of picoseconds are achievable [14]. Figure 7 shows the hit density as a function of the vertex detector layer. As expected, the first barrel layer, which is closest to the beam, has the highest hit density, around 450 cm$^{-2}$ in this configuration. The occupancy of the other barrel layers is significantly lower, at the level of 50 cm$^{-2}$ or below, while the endcap layers show an occupancy around 100 cm$^{-2}$. The cluster density is reduced by applying a time cut: in the first layer it goes down to about 250 cm$^{-2}$ when a time window of ±0.5 ns is required. Improvements are seen also in the endcap layers. In Ref. [9] preliminary studies were presented to illustrate the benefits of using a double-layer silicon design. Other strategies, not viable at the time of the quoted studies, can be adopted in order to reduce the detector occupancy, exploiting the developments of the latest years for LHC and HL-LHC. Figure 7: Vertex detector occupancy, defined as the number of hit clusters per cm$^2$ area, as a function of the detector layers. Layers from 1 to 5 correspond to the barrel layers, from the closest to the most distant from the beam pipe. Layers from 6 to 9 correspond to the endcap layers, from the closest to the most distant from the nominal interaction point. Since the endcap layers are on both sides with respect to the interaction point, the mean occupancy of the left and right layers is shown. The occupancy with and without a time window cut (±0.5 ns) is presented. This level of background, once thought unsustainable, is nowadays comparable with what is expected, for example, by the ALICE experiment. In the ALICE Inner Silicon Tracker [15], a hit density of the order of 150 cm$^{-2}$ is foreseen, and silicon pixels of 20 $\times$ 20 $\mu$m are adequate to resolve tracks in Pb-Pb events. From the discussion above it is clear that MC detectors will largely benefit from the R&D planned for the future colliders. The effect of the machine-induced background on the calorimeter has been studied and discussed in Ref. [16]. Figure 8 shows the background energy deposition per bunch crossing in the ADRIANO calorimeter as obtained in this study, which is in agreement with what was found before. As discussed in [16], the contamination from this background in the calorimeter clusters associated with signal particles can be reduced by applying appropriate energy thresholds, which is not done here. Figure 8: Background energy deposition per bunch crossing in the ADRIANO calorimeter as a function of the polar angle with respect to the beam axis ($\theta$) and the azimuthal angle ($\phi$). 3.1 Physical object reconstruction Tracks are reconstructed from clusters of tracker hits that pass the cuts on timing and deposited energy. 
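The timing cut just mentioned can be illustrated with a minimal sketch: a hit is kept only if its arrival time is compatible with the time of flight of a prompt particle from the interaction point, within the ±0.5 ns window quoted above. This is not the actual ILCRoot digitization code; the data layout and function name are illustrative.

```python
import math

C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def passes_time_window(hit_time_ns, hit_xyz_mm, t0_ns=0.0, half_window_ns=0.5):
    """Keep a hit only if it is compatible with a prompt particle from the IP.

    hit_time_ns    : measured hit time relative to the bunch crossing [ns]
    hit_xyz_mm     : hit position (x, y, z) in mm, with the IP at the origin
    half_window_ns : half-width of the acceptance window (+-0.5 ns as in the text)
    """
    x, y, z = hit_xyz_mm
    expected_tof = math.sqrt(x * x + y * y + z * z) / C_MM_PER_NS
    return abs(hit_time_ns - (t0_ns + expected_tof)) <= half_window_ns

# Example: a background hit arriving ~3 ns late on the first vertex layer is rejected,
# while an in-time hit is kept.
print(passes_time_window(3.2, (30.0, 0.0, 0.0)))   # False
print(passes_time_window(0.2, (30.0, 0.0, 0.0)))   # True
```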
The parallel Kalman filter, which is part of the framework, is used for pattern recognition, track propagation and refitting, with several track hypotheses followed in parallel and cluster sharing between multiple tracks allowed. Both primary tracks (constrained to ...) and secondary tracks are reconstructed. Jet reconstruction was not included in the ILCRoot package, therefore a dedicated algorithm was developed for jet clustering, combining information from the tracking and calorimeter detectors. First, the reconstructed tracks and the calorimeter clusters are combined using a Particle Flow (PF) algorithm [33], which performs matching between tracks and clusters to avoid double counting. PF candidates with a transverse momentum greater than 0.5 MeV are then used as input objects in the jet clustering algorithm, with a cone size parameter \( R = \sqrt{\Delta \eta^2 + \Delta \phi^2} \) of 2.0 and 1.0 for the 125 GeV and 1.5 TeV cases, respectively (here \( \Delta \phi \) is the difference in azimuthal angle between the calorimeter cluster and the jet axis, and \( \Delta \eta \) is the same difference in the pseudo-rapidity variable). The jet radius is optimized in order to contain most of the energy of \( b \)-quark jets from the Higgs boson decay. A jet energy correction is applied as a function of the jet transverse momentum; it is determined by comparing the reconstructed jet energy to the energy of jets clustered from Monte Carlo truth-level particles. The jet energy resolution was found to be 11% for the 125 GeV case and 20% at 1.5 TeV, when no beam-induced background is present in the detector. Jets originating from \( b \)-quarks are identified using a simple and not yet optimized \( b \)-tagging algorithm. A secondary vertex, significantly displaced from the primary vertex and formed by at least three tracks, is searched for. Tracks inside the jets with an impact parameter greater than 0.04 mm are used as inputs to the algorithm. The 2-track vertices are built requiring a distance of closest approach between the two tracks of less than 0.02 mm and a total transverse momentum greater than 2 GeV. Finally, 2-track vertices that share one track are combined to form 3-track vertices. The \( b \)-jet tagging efficiency, defined as \( \epsilon_b = N_{b\text{-tagged}}/N_{\text{reconstructed}} \), is found to be \( \epsilon_b = 63\% \) at 125 GeV and \( \epsilon_b = 69\% \) at 1.5 TeV. These numbers refer to signal only, since no background is added to the events. A complete study of the tracking efficiency has to be performed including the machine background, with a detailed evaluation of the fake tracks. This is mandatory also for the evaluation of the \( b \)-jet tagging performance in terms of wrong tags. Similar studies have to be completed also for the calorimeter, where in any case a lower contribution from the background is expected. 
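A minimal sketch of the secondary-vertex selection described above is given below. It only encodes the quoted cuts (impact parameter > 0.04 mm, two-track distance of closest approach < 0.02 mm, vertex transverse momentum > 2 GeV, and merging of 2-track vertices sharing a track into 3-track vertices); the data structures and helper names are illustrative and are not the actual ILCRoot implementation. In particular, the distance of closest approach is passed in as a user-supplied function, and the vertex transverse momentum is approximated by the scalar pT sum.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Track:
    pt: float      # transverse momentum [GeV]
    d0_mm: float   # transverse impact parameter w.r.t. the primary vertex [mm]

def tag_b_jet(jet_tracks, dca) -> bool:
    """Return True if the jet contains a (>=3)-track displaced vertex,
    following the selection quoted in the text.

    jet_tracks : list of Track objects belonging to the jet
    dca        : callable (Track, Track) -> distance of closest approach [mm]
    """
    # 1) keep only significantly displaced tracks
    seeds = [t for t in jet_tracks if t.d0_mm > 0.04]
    # 2) build 2-track vertices: tracks close in space and with summed pT > 2 GeV
    pairs = [(a, b) for a, b in combinations(seeds, 2)
             if dca(a, b) < 0.02 and (a.pt + b.pt) > 2.0]
    # 3) merge 2-track vertices sharing exactly one track into 3-track vertices
    for (a1, b1), (a2, b2) in combinations(pairs, 2):
        shared = {id(a1), id(b1)} & {id(a2), id(b2)}
        if len(shared) == 1:
            return True   # at least one 3-track secondary vertex found
    return False

# Toy usage: three displaced 1.5-GeV tracks that all intersect pairwise are tagged.
toy = [Track(pt=1.5, d0_mm=0.1) for _ in range(3)]
print(tag_b_jet(toy, dca=lambda a, b: 0.01))   # True
```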
4 Characterization of \( H \rightarrow b\bar{b} \) and \( Z \rightarrow b\bar{b} \) processes The reconstruction of \( H \rightarrow b\bar{b} \) and \( Z \rightarrow b\bar{b} \) is taken as a benchmark to assess the first physics performance of the MC at 1.5 TeV. The two resonances are generated with Pythia 8. In Table 2 the production cross sections of the processes with two b-quarks in the final state are summarized. The Higgs and Z signals are generated, simulated and reconstructed following the procedures described above. In this study b-tagging is not applied, in order not to reduce the statistics, and the background described in Section 3 is not included. The fiducial region considered is defined by an uncorrected jet transverse momentum greater than 10 GeV and an absolute jet pseudorapidity lower than 2.5. <table> <thead> <tr> <th>Process</th> <th>cross section [pb]</th> </tr> </thead> <tbody> <tr> <td>( \mu^+\mu^- \rightarrow \gamma^*/Z \rightarrow b\bar{b} )</td> <td>0.046</td> </tr> <tr> <td>( \mu^+\mu^- \rightarrow \gamma^*/Z\gamma^*/Z \rightarrow b\bar{b} + X )</td> <td>0.029</td> </tr> <tr> <td>( \mu^+\mu^- \rightarrow \gamma^*/Z\gamma^* \rightarrow b\bar{b}\gamma )</td> <td>0.12</td> </tr> <tr> <td>( \mu^+\mu^- \rightarrow HZ \rightarrow b\bar{b} + X )</td> <td>0.004</td> </tr> <tr> <td>( \mu^+\mu^- \rightarrow \mu^+\mu^- H \rightarrow b\bar{b} ) (ZZ fusion)</td> <td>0.018</td> </tr> <tr> <td>( \mu^+\mu^- \rightarrow \nu\bar{\nu}, H \rightarrow b\bar{b} ) (WW fusion)</td> <td>0.18</td> </tr> </tbody> </table> Table 2: Cross sections for processes with two b-quarks in the final state. In Figure 9 the uncorrected jet transverse momentum and the jet pseudorapidity in Higgs and Z events are shown. It is evident that jets in Higgs events are well contained in the fiducial region, while part of the Z events fail the requirements. In Figure 10, the reconstructed di-jet mass distributions for Higgs and Z are shown. The Z boson is mainly produced in association with a high-energy photon (see Table 2), therefore the Z distribution is labeled as \( Z + \gamma \). The relative normalization of the Higgs and Z distributions is taken as the ratio of the expected numbers of events, considering the selection efficiencies and the cross sections, and it is equal to 12. Although the cross sections are similar, most of the \( Z + \gamma \) events fail the fiducial-region cuts, therefore a low yield of such events is expected. Since b-tagging is not applied, a tail at high mass is present in the Z distribution; it corresponds to candidates where the \( \gamma \) is reconstructed as a jet. Figure 9: Uncorrected jet transverse momentum (left) and jet pseudorapidity (right) in Higgs and $Z$ events produced in 1.5-TeV muon collisions. The Higgs and $Z$ distributions are normalized to the same area. The background described in Section 3 is not included. Figure 10: Di-jet mass distributions for Higgs and $Z$ produced in 1.5-TeV muon collisions, with a linear and a logarithmic scale on the y-axis (left and right figures, respectively). The relative normalization of the two distributions is equal to the ratio of the expected numbers of events, considering the selection efficiencies and the cross sections. The background described in Section 3 is not included. The next step would be to reconstruct \( H \rightarrow b \bar{b} \) and \( Z \rightarrow b \bar{b} \) including the machine-induced background but, unfortunately, the software and the framework, or at least the knowledge that the authors of this paper have of them, have not allowed us to do so up to now. The work is in progress, focusing primarily on tracking studies. 5 Neutrino induced hazard The importance of the radiation hazard due to highly collimated, intense neutrino beams has been known for many years. It has already been studied analytically and with MARS15 simulations, as reported for instance in Refs. [18, 19, 20]. Concerns come from the dose at the point where the neutrino beam reaches the Earth's surface, far away from the production point. The dose shall be well below the recommended annual dose limit for the public, presently at 1 mSv/year. A goal of 0.1 mSv/year is assumed here. 
The neutrino beam spread is roughly given by $1/\gamma$ of the parent muons. At 1 TeV, $1/\gamma \approx 1 \times 10^{-4}$, resulting in a 100 m spot at a distance of 100 km from the production point. Despite the very small cross section, the products from neutrino interactions are concentrated in a small cone, thus delivering a sizable dose. When considering a real collider, part of the neutrinos will be produced by muons decaying in the arcs and part in the straight sections. The level and distribution of the dose are different in the two situations. In an ideal ring, with no straight sections, the neutrino products will reach the Earth surface along a ring concentric to the collider, at a distance that (for a flat Earth) is roughly proportional to $1/D^2$, where $D$ is the depth at which the collider is situated. The dose from a ring scales approximately with $E^3$, $E$ being the muon energy: the deposited energy scales with $E$, the spot size with $1/\gamma \sim 1/E$, and the neutrino cross section again with $E$. Products from straight sections emerge on a spot-like area, and the dose from straight sections scales with $E^4$ due to an additional $1/\gamma$ factor. The dose can be mitigated by a proper design limiting the straight sections, by beam wobbling, and by beam focusing/defocusing. The preliminary results shown in the following have to be considered as upper limits. In view of a full FLUKA \cite{21,22} based simulation of detector backgrounds and neutrino hazard in realistic layouts, we describe here the setup and validation of the simulation tools. 5.1 FLUKA for muon and neutrino transport Muon transport in FLUKA includes all interaction processes, from ionization energy losses to bremsstrahlung, pair production, photonuclear interactions and, obviously, decay. Descriptions and comparisons with experimental data are available in the literature, for instance in \cite{23,24}. The FLUKA neutrino event generator NUNDIS \cite{25} handles quasi-elastic, resonant and deep inelastic neutrino interactions on nucleons and nuclei. The FLUKA nuclear models are exploited to simulate initial and final state effects around neutrino-nucleon interactions. Products of the neutrino interactions can be transported directly in the simulated experimental setup, as was done, for instance, for the ICARUS-T600 experiment in the Gran Sasso underground laboratory \cite{26} or the ArgoNeut chamber \cite{27}. In view of the extended energy range foreseen for neutrinos at muon colliders, a check of the NUNDIS predictions at multi-TeV energies has been performed through a comparison with recent IceCube data \cite{28}. The NUNDIS results agree with IceCube within experimental errors, showing that the calculated cross section exhibits the correct decrease with respect to linearity versus $\nu$ energy. 5.2 Simulation setup The simulations described in the following refer either to an idealized ring, assuming continuous bending and no beam divergence, or to idealized straight sections, again with no beam divergence. The Earth's surface is assumed to be flat, with no mountains and no valleys. A first implementation of wobbling in the ring is also discussed. The source, ring or section, is placed at a fixed depth of 550 m. Results for smaller depths can be simply recovered from the depth-distance relation. Neutrinos are forced to interact along the path from the source to the Earth boundary, with a probability proportional to the cross section and the material density. For the moment, the density and composition of the traversed soil are constant and uniform. 
Due to the small neutrino cross section, non-uniformity along the path will have no influence on the results. Neutrino products are fully transported, and the ambient dose equivalent ($H^*(10)$) is calculated online through convolution of the particle fluence with conversion coefficients. Results are presented for $1+1$ TeV, $1.5+1.5$ TeV and $62.5+62.5$ GeV, and comparisons are made with previous results from N. Mokhov and A. Van Ginneken; the related physics models in MARS15, the simulation setup and the results of comprehensive simulations are described in \cite{18}. 5.3 Ring Figure 11: $H^*(10)$ from a $1+1$ TeV ring, versus distance from the ring and depth. Color scale units are pSv per $10^{10}$ muon decays. Left: from $\nu_\mu$ and $\bar{\nu}_\mu$. Right: from $\nu_e$ and $\bar{\nu}_e$. In order to graphically illustrate the shower development, Figure 11 shows the ambient dose equivalent from a ring with circulating $\mu^+$ and $\mu^-$ beams at 1 TeV per beam, as a function of the distance from the ring and of the depth. Curved lines correspond to the Earth's surface for different depths of the ring, in 100 m steps. Even at the maximum distance of about 80 km from the ring, the shower is vertically contained within ±30 m. Contributions from electron neutrinos and muon neutrinos are shown separately, to highlight the small difference in vertical spread due to the different ranges of the produced electrons/muons. In order to compare with the results from the MARS15 simulations [18], the values from Figure 11 have been normalized to the same number of muon decays per year, namely $1.2 \times 10^{21}$. This normalization can correspond to $2 \times 10^{12}$ $\mu$/bunch with a frequency of 15 Hz and a run time of 200 days/year. $H^*(10)$ values averaged over 1 m in the vertical plane are shown in Figure 12 for 1 and 1.5 TeV. The present results agree within a factor of 2 with the values in Figure 8 of [18], confirming the soundness of both simulations on this frontier problem. It appears that the dose from a TeV ring is manageable at reasonable collider depths. The approximate $E^3$ scaling is verified, pointing to more problematic situations for multi-TeV colliders. It should be kept in mind that the values in Figure 12 refer to a conservative situation, where the muon beam is perfectly parallel. Any variation of the beam angular spread along the ring would greatly improve the situation. Mitigation procedures have already been put forward, such as vertical wobbling of the beam through inclination of the bending magnets [18]. Figure 13 shows the effect of a periodic deflection with a maximum of 100 $\mu$rad on the dose from a 1 TeV ring. A reduction of about one order of magnitude can be achieved (again in agreement with the MARS15 results [18]). 5.4 Straight sections Muons, and therefore their decay products, from straight sections are more collimated than those from the arc sections. As a consequence, the neutrino-induced dose levels are much higher and dominate the hazard. The dose depends on the ratio $L/C$ between the length of the considered straight section and the total length of the collider, simply because of the corresponding ratio of muon decays. Figure 14 illustrates the ambient dose equivalent rates for a TeV collider and for a Higgs-energy collider. The latter case is normalized to $4.8 \times 10^{21}$ $\mu$ decays/yr, equivalent to $4.8 \times 10^{12}$ $\mu$/bunch at 30 Hz running 200 days per year. 
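Before turning to the resulting dose levels, the decays-per-year normalizations quoted above can be checked with a back-of-the-envelope calculation: the number of decays per year follows from the bunch intensity, the bunch repetition rate, the run time and the number of beams. A minimal sketch, assuming two circulating beams and 200 days of operation per year (both assumptions for illustration, matching the run-time figure quoted in the text):

```python
# Rough check of the decays-per-year normalizations quoted in the text,
# assuming two circulating beams and 200 days of operation per year.
SECONDS_PER_YEAR = 200 * 24 * 3600   # 200 days of running

def decays_per_year(muons_per_bunch, rep_rate_hz, n_beams=2):
    # every injected muon eventually decays, so decays/year = injected muons/year
    return muons_per_bunch * rep_rate_hz * SECONDS_PER_YEAR * n_beams

print(f"1 TeV ring:     {decays_per_year(2.0e12, 15):.2e}")   # ~1e21  (text: 1.2e21)
print(f"Higgs collider: {decays_per_year(4.8e12, 30):.2e}")   # ~5e21  (text: 4.8e21)
```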
For a TeV-like collider with a total circumference of 10 km, already a straight section of 10 m ($L/C=1/1000$) produces doses over 1 mSv/year at the furthest (or deepest) location. Figure 14: $H^*(10)$ as a function of the distance from a straight section (bottom axis) or, equivalently, of the depth of the section (values on the top axis, vertical dotted lines). Curves correspond to different values of the ratio between the section length ($L$) and the total ring length ($C$). The dashed horizontal line corresponds to the limit dose of 0.1 mSv/year. Left: for a 1+1 TeV collider, assuming $1.2 \times 10^{22}$ decays/year, averaged over 1 m in the vertical plane. Right: for a 62.5+62.5 GeV collider, assuming $4.8 \times 10^{22}$ decays/year, averaged over 4 m in the vertical plane. For both, the saturation at small distance is due to the too coarse averaging. The situation is of course much better for low-energy rings, except for extreme geometries, which are however not so different from the ones envisaged for neutrino factories. Also in this case, beam divergence or focusing/defocusing would attenuate the risk. Given the size of the neutrino beam, even a minimal spread of the muon beam would largely reduce the dose. Note that the MARS15 results [18] were obtained for a realistic collider optics with the beam divergence taken into account. This can explain why the FLUKA results in this application, while rather similar to those from MARS15, are always higher, by up to about a factor of two. 5.5 Summary Preliminary simulations of the neutrino radiological hazard have been performed with FLUKA. They allowed us to validate the tools against past results and to prepare for full simulations of realistic situations. These preliminary results show that care must be taken in the design of the machine in order to reduce the neutrino hazard. Limiting the number and length of the straight sections, wobbling, and beam divergence are all factors that will make it possible to constrain the dose rate below the one allowed for the general public. 6 Future developments These studies are just at the beginning, and there is therefore room for improvement in every aspect. In the following the most urgent ones are described. 6.1 Framework One of the serious limiting factors of this study is the simulation software based on the ILCRoot framework \[9\]. While it has a very detailed implementation of the full simulation of each subdetector, it is rather outdated, suffers from certain performance issues and has a very limited user base. It is clear that the optimal layout and technologies used in a potential detector at a muon collider would differ from the ones used in this study, to benefit from the most recent progress in detector technologies, e.g. high-precision timing at the sub-nanosecond level, 3D silicon pixel sensors, high-granularity calorimetry, etc. Given the complicated arrangement of the code and the absence of other active users, making any significant modification in the detector design and reconstruction sequence is very time consuming and inefficient. Therefore we foresee a change towards a modern framework that is actively developed and is used for studies on similar projects. A good candidate could be the simulation and analysis software used by the CLIC experiment [31], which is based on the iLCSoft framework [30]. Besides being a modern set of tools with a large group of active users, it has a number of features that are important for future studies on a muon collider. 
One of the important features is the detector description based on the DD4hep toolkit [32], which provides a comprehensive detector definition shared between the simulation, reconstruction and analysis stages, making the iterative process of finding the optimal detector configuration much easier. An almost complete object reconstruction chain is already implemented in the CLIC software, including the novel conformal tracking algorithm [34], which minimises the dependence of the code implementation on the actual tracker geometry, as well as Particle Flow, jet reconstruction and b-tagging. The up-to-date software base makes it easy to parallelise the computationally intensive simulation tasks on modern cloud-based infrastructures. 6.2 Machine induced background The study presented in this paper is based on background files generated by colleagues of the MAP collaboration when that program was still active [35]. Despite this being a good starting point, it is evident that the actual study of a possible muon collider, to be compared with other future accelerators at 3-TeV c.m. energy, must rely on the possibility of simulating these backgrounds for different machine configurations. It has in fact to be remembered that the particles hitting the detector have been shown to originate from muon decays up to tens of meters from the IP, and their yield and distribution thus strongly depend on the MDI optics. In this context, a solid and flexible set of simulation tools must be developed to evaluate the distribution of these background particles and their interactions in the detector as a function of the different optics that will be investigated during the design of the machine. To this aim, the plan is to use FLUKA to simulate the MDI region, including the nozzles, in such a way as to obtain the distribution of the machine-induced background on the detector surface. These files will then be used as input for the detailed detector simulation. A recent comparison of the machine-related detector background at a 125-GeV muon collider simulated by MARS15 and FLUKA shows a reasonable agreement between the two programs [36]. To gain the aforementioned flexibility against possible optics variations, we plan to use scripting tools (e.g. FlukaLineBuilder [37]) to quickly generate MDI geometries starting from optics files. Up to now, accurately designed nozzles have been extensively studied as the primary way of isolating the detector from the muon-beam decay products. In addition to this passive approach, the flux of background particles could be further reduced by an active shielding setup. A similar approach has been adopted in the SHiP experiment [29], where it allows the incoming muon flux to be reduced by $\sim 7$ orders of magnitude. Dedicated R&D on the application of this technology to a muon collider configuration would be needed. 6.3 Detector As pointed out in Section 3, the detector simulated with this software package goes back to the time when MAP was active. Nonetheless, it has been of fundamental importance for performing the preliminary studies presented here, and in an intermediate stage, before a new framework is adopted, a non-optimal detector can anyway be used to obtain an initial assessment of the physics performance. Given the peculiarity of the muon collider, a dedicated detector is necessary to fully exploit its physics potential. This activity will require a dedicated group of experts on different detector technologies. 
7 Conclusions We have presented a preliminary study of the effects of the machine-induced background on the detector, performed by using the MAP full simulation framework and MARS15-generated backgrounds for a muon collider with a center-of-mass energy of 1.5 TeV. The results are in agreement with previously published studies for the tracking system. The calorimeter jet reconstruction, not available in the framework, seems to have a performance similar to the results presented at conferences. The $b$-jet tagging is studied on events with no machine-induced background; a full evaluation of its performance is in progress. The requirements for the detectors are compared, where available, to those for the detectors at future colliders, showing similar challenges. Future studies will largely benefit from the R&D program foreseen for the detectors at future colliders. Given the peculiarity of the background, the studies of the physics reach at different energies need the full simulation, which must include the machine-detector interface. The plan for that includes a new full simulation of the machine-induced background performed with FLUKA and the use of a modern framework. The evaluation of the hazard due to neutrino interactions with matter, performed with FLUKA, has been presented and is in agreement with previously published results obtained with MARS15. FLUKA will be used to evaluate the neutrino-induced hazard as a function of the muon beam energy and of the interaction region design. Acknowledgments We owe a huge debt of gratitude to Anna Mazzacane and Vito Di Benedetto (FNAL), who made this study possible by providing us with MAP’s software framework and simulation code. We would like to thank Mark Palmer (BNL) for his invaluable support and for the advice and the discussions on the studies to be done. References
From the Social Production of the Person to Transnational Capitalism: Parsons, Turner, and Globalization Daniel Reichman Department of Anthropology University of Rochester USA daniel.reichman@rochester.edu Introduction Terence Turner’s work serves as a model of anthropological scholarship that is committed to universal theory yet simultaneously grounded in the empirical study of one culture area. His work on indigenous peoples of Central Brazil has made landmark contributions to symbolic anthropology, theories of narrative, the body, kinship, Marxism, media, environmental anthropology, activism, and the study of globalization. Furthermore, his writings have resonated with a readership far beyond that of Latin American studies as defined in the North American academy. This paper focuses on the sometimes hidden influence that Turner’s graduate school teacher, Talcott Parsons, had on his thought, and explores how Turner’s Brazilian ethnography led to a critical rethinking of Parsons’ general theory of action. While Turner’s adaptation of the Marxian theory of value to anthropology is often read as a practice-oriented alternative to structuralism (see, for example, Graeber 2001, 2013), I argue that it can also be fruitfully understood as a response to general theoretical questions first posed by Parsons. Turner only partially outlined his vision of Marxian symbolic anthropology in a series of papers over the course of his career, and he never explicitly discussed its connection to Parsonian theory. This paper, I hope, will make the Parsonian connection clear and will help to elaborate some of Turner’s important contributions to anthropological theory, particularly to the study of globalization and cultural change, an aspect of Turner’s thought that has not attracted a great deal of attention in the literature compared to his earlier work on Kayapo myth, symbol, and social structure based on his Brazilian fieldwork.¹ In the late 1990s and early 2000s, Turner authored a series of papers (1998, 2002, 2003) on globalization, understood as the political, economic, and sociocultural integration of the world’s nation-states into a single system connected by transnational capitalism. While this focus emerged from the Kayapo’s own participation in global activist networks, new media, and eco-friendly consumerism projects in the 1980s and 1990s, the topic of globalization represented a change in Turner’s focus in several important respects. He shifted his attention from bounded synchronic analysis of small-scale societies to the study of transnational political economy and changing class politics in the capitalist core countries. He also began to engage with Marxian political economy at the level of the nation-state and the world-system, an aspect of “conventional” Marxism that was virtually absent from his work until then. 
Turner’s own intellectual transition from “local” to “global” levels of analysis took place gradually (see his own narration of the early stages of this story in Turner 1991) and the connection between his Brazilian ethnography and his work on globalization is not always apparent. This paper argues that it is necessary to understand Turner’s Parsonian roots in order to fully grasp the theoretical and political implications of his scholarship on globalization, and how his “local” Brazilian ethnography and his scholarship on “globalization” both emerge from his attempts to refine Parsonian systems theory through Marxism. Parsons Theory of Generalized Media of Interaction Talcott Parsons was one of Turner’s teachers and key influences while he was a graduate student in Harvard’s Department of Social Relations, and two of Turner’s early publications address Parsons’ theory directly.² By the mid-1970s, Turner shifted his focus away from Parsonian theory and Parsons became more or less ignored in anthropology. In Marxism, Turner found answers to Parsonian questions that remain fundamental to anthropology: How should anthropologists theorize the meaning of symbols in society? How can we systematically understand the relationship between symbolic forms and social practice? How do symbols operate as a link between intentional activity of individuals, collective norms, and social reproduction? To put it more succinctly: How do symbols mediate between agency and structure? Early in his career, Turner identified some limitations in Parsons’ theories of social action, and Marxism provided the most satisfying resolution to those limitations. During Turner’s doctoral studies in the early 1960s, Talcott Parsons was at the height of his influence in the social sciences, leading Harvard’s Department of Social Relations, a program created to integrate the latest research in anthropology, sociology, economics, and psychology to develop a general theory of social action. Clifford Geertz, who was a student in the program about ten years before Turner, describes the “almost millenarian exhilaration that attended the social relations department in the 1950s, and what we who were there then were pleased to call its Project—the construction of ‘A Common Language for the Social Sciences’” (Geertz 2001:8). In a sense, the social relations project was the “big data” of its day, an attempt to develop a universal model of human behavior by bringing together the latest research in sociology, anthropology, psychology, political science, and economics. In this spirit, Parsons published an article in Public Opinion Quarterly in 1963 devoted to understanding and predicting voting behavior. The ostensible goal of the article, called “On the Concept of Influence,” was to identify the factors that shape political opinions in modern society, but Parsons took the opportunity to offer what now seems like an audaciously ambitious theory of the role of “generalized media of social interaction” in human behavior. “Generalized media of social interaction” are specialized symbolic systems that people employ to “to get results” in social interaction. Drawing on the linguists Jakobson and Halle, Parsons called language the prototypical generalized medium. Language is a shared (and therefore generalized) system of symbols used to “have an effect on the action of others” (Parsons 1963:38). But words are not the only generalized media of interaction. 
Parsons identifies three other media that shape particular subsystems of society: money, power, and influence. Money is the generalized medium of economic interaction which symbolizes utility or economic value. Power is the generalized medium of political interaction which symbolizes “effectiveness of collective action.” Influence is the generalized medium that symbolizes persuasiveness in social interaction (Parsons 1963:48). A brief overview of Parsons’ framework helps us to understand its relations to anthropology. In Parsonian systems theory, economy, politics, culture, and society were all defined as interconnected “subsystems” within the general action system. These subsystems were treated as ideal types that were separate from each other for analytical purposes but, of course, interacted with each other in the real world. Theorizing the nature of the interaction between the subsystems was the ultimate goal of the project, with each social science discipline contributing insights from its subsystem of specialization. Sociologists studied society, anthropologists studied culture, psychologists studied personality, economists studied the economy, and so on. Parsons viewed each subsystem on its own terms, while also developing a complex and, in hindsight, somewhat quixotic theory of how personality, economy, politics, culture, and society interacted to shape the actions of individuals. The study of influence was part of the study of the social subsystem, which dealt with patterned relationships, such as formal institutions, hierarchies, and kinship structures. Influence, he argued, was key to the study of voting decisions because it measured the ability of a social actor to persuade others to act without the use or threat of force. Influence was, in this sense, a generalized medium of social status as opposed to political status, which would be measured by power. The three media—money, power, and influence—share a generalized and abstract aspect. One can have a little or a lot of influence, money, or power. Like money, influence can be lost or gained, deflated or inflated under certain conditions. Think, for example, of how, as their social reputation as objective authorities is eroded by competing sources of information, the influence of mainstream news journalists has been “deflated” by the internet. While influence lacks the exact forms of measurement that money provides in the economic sphere, it does “measure” some capacity to act in a social system (and now we have quantitative internet-driven metrics of influence, proving Parsons was on the right track). In certain kinds (socially recognized forms) of interaction, individuals refer to these symbolic codes to influence the behavior of others. Buyers persuade sellers to relinquish property by means of money; bosses direct workers to act by implicit or explicit reference to their power; doctors convince patients to trust their advice through influence. It can be a violation of social norms if one uses the wrong symbolic language for a certain kind of interaction, like using currency in a country where it is not legal tender. Bribery, for example, would be a case where money is used to “get results” in a situation where it is not normatively accepted. If a doctor used the threat of force (power) to convince a patient to follow a certain course of treatment, it would be a violation of social norms, using power rather than influence to get results. 
In Parsons’ analogy, the relationship of “influence to information” mirrors that between “money and goods and services.” Each medium symbolically refers to a shared code that allows an actor to influence the behavior of others in a particular kind of social interaction. These are specialized languages, which, in Parsons’ (1963:45) notoriously opaque phrasing, “bridge the gap between the normative and factual aspects of the system in which they operate.” In other words, generalized media are concrete instantiations of the normative social order. They are the link between cultural systems of belief and the choices made by actual people in real life. People try to accumulate money, power, or influence. In doing so, they reproduce the normative social order that creates these specialized “languages” in the first place. Influence, Beauty, and Value Turner argues in his 1968 paper (one of his first publications) that Parsons’ implied social evolutionism prevented the full realization of the potential of the concept of generalized media of social interaction. For Parsons, as societies progressed along a continuum from a simple to complex division of labor, symbolic media became increasingly abstract and generalized, leading to greater “degrees of freedom” in the kind of interactions that they could facilitate (Turner 1968:126). The supposed evolution from primitive gift economies to a money economy (passing through a phase of barter exchange) was, for Parsons, the paradigmatic example of this process. As the uses of money as a medium of exchange became less determined by social status and more open to individual manipulation—therefore more “generalized”—this process would increase the “degrees of freedom” in social interaction by creating an impersonal standard of value. In this theory, liberal individualism was at the top of the developmental pyramid because an impersonal system of value, unmoored from social status, allowed for a greater degree of individual freedom. Drawing on his fieldwork with the Kayapo, Turner interrogated Parsons’ typology of symbolic media by pointing to the existence of symbolic media that are not as generalized or abstract as money, but still shape social interaction in a way that is analogous in every other respect to the function of money in modern economies. How should one theorize these media that also operate as a specialized language that people use to “get results” in social life? Turner argued that generalized media of interaction function differently in gift economies than they do in modern capitalism. In gift economies, “the emphasis is not on how individual actors can manipulate the medium to ‘get results’ from other actors. It is on the standardization of the relationship between actors according to a collectively imposed pattern” (Turner 1968:126). Generalized media serve to model an ideal relationship between actors and to reinforce normative patterns of behavior by defining the terms of a particular kind of interaction. Symbolic media therefore become reflections of the orienting values that guide social behavior. In Turner’s theory, Kayapo symbols of beauty or completeness function as the conscious goal for individual transactions, and these symbolic media pattern the reproduction of social relationships, hierarchies, and individual identities. 
Symbolic representations of these values include ritual names, age group categories, and other markers of social status that function like monetary wealth in some ways, but cannot be exchanged or produced except through prescribed and limited forms of ceremonial action. These symbolic systems, objectified in ritual objects, bodily styles, and other symbolic tokens, do more than simply reflect gender relationships, age hierarchies, political systems, and family organization. The symbols play an active role in producing the social system as a whole, that is, they are symbols that have a function in the total process of social reproduction. Turner’s 1969 article in *Natural History* on Kayapo bodily decoration can therefore be seen as an ethnographic follow up to his theoretical gambit against Parsons. The article is an elegant symbolic analysis of how social values are materialized through dress and ornamentation of the individual body, and how styles of dress mark the transformation of the Kayapo as they move through the life cycle. He concludes the article by stating that, “the decoration of the body serves as a symbolic link between the ‘inner man’ and some of his society’s most important values” (Turner 1969:70). Money, Turner argues, plays a similar role in capitalist societies, by crystallizing buyer/seller as a specific kind of relationship out of the “miscellaneous welter” of social life (1968:132). “Generalized media, no matter how specialized in the direction of Gesellschaft and individual manipulation they may be, are always to some extent devices by which society as a whole imposes certain forms and limitations on the transactions in which they are used” (Turner 1968:126). In a key passage, Turner (1968:123) writes, Generalized media operate as a kind of feedback system, linking the level of individual transactions between acting units and the level of collective or institutional structure by means of a “circulating” system of tokens or symbols which themselves reflect the structure of the transactional system they mediate. In my opinion, this one complex sentence encapsulates one of the major concerns of Turner’s lifelong theoretical project, and it identifies the crux of the issue that he eventually resolved through Marx’s theory of value: the dialectical play between code and message, or between symbolic systems and practical activity. The key insight here is that symbols—such as money in capitalism or ritual names and objects for the Kayapo—are objective embodiments of social value that come to be the orienting guides to intentional activity, while simultaneously reproducing the social structures in which they operate. To clarify this move from Parsons to Marx, we can rephrase the Turner quotation cited above, swapping the term “value” for “generalized media.” It would read: Value [my rephrasing] operates as a feedback system, linking the level of individual consciousness and the level of collective or institutional structure by means of a “circulating” system of tokens or symbols which themselves reflect the structure of the system they mediate *albeit in alienated form* [italics are my addition]. My reference to “alienation” is crucial. Based on his reading of Marx’s *Capital*, Turner took a critical stance toward collective systems of value, arguing that generalized media were alienated, objectified representations of human productive powers, not just reflections of shared systems of value and belief. 
“Value” was the generalized medium that structured almost all levels of capitalist society, including concepts of space/time, personhood, gender, and the family. It defined and imposed certain limitations on social interaction by defining what was desirable in a way that stunted (or at least diverted) human productive energies. In these terms, Turner’s transition from Parsons to Marx is more of a critical revision of Parsonian functionalism than a radical break.

**Generalized Media and Commodity Fetishism**

At this point, we must recognize that Turner added a strong critical dimension to Parsonian theory, which tended to take a “value-free” approach. Parsons had no theory of ideology, alienation, or fetishism. (In fact, I once asked Turner if he thought Parsons had any theory of ideology. He grinned and said, “No, but he had an ideological theory!”) Parsons’ entire approach was based on a model of social equilibrium in which cultural values functioned like glue to hold together political systems and social hierarchies. Authority, hierarchy, and social order were positively valued, as long as they were “functionally integrated” with the normative value system of the society in which they existed. By providing cultural legitimacy for any existing system of social relationships, cultural values performed a “pattern-maintenance” function that Parsons saw as the sign of a healthy social system. Turner’s Marxist revision viewed “pattern-maintenance” as an ideological defense of the normative social order, and he used Marxist concepts like alienation, fetishism, and ideology to critically reinterpret symbolic systems. If, for Parsons, symbolic systems were the glue that held society together, then for Turner this glue was brushed over deep cracks, holding together a structure which, if fractured or reoriented even a little, could lead to social transformations. Turner’s use of Marx was not—at its root—focused on capitalism or even political economy per se. Turner saw Marx as a protosymbolic anthropologist who was able to systematically demonstrate how symbolic forms—value objectified in money, for example—oriented intentional activity while simultaneously facilitating social reproduction in a feedback loop which, crucially, misrecognized the total process through which symbolic forms were produced (see Graeber 2013). Many readers will wonder, rightly, how Turner’s use of Marx differs from the better-known use of “fetishism” by many anthropologists, most famously Michael Taussig in *The Devil and Commodity Fetishism in South America*. For Taussig (1980), as with other anthropologists inspired by the Frankfurt School, the fetish was used as a concept to explain how modern capitalism hides or mystifies relationships of inequality behind a symbolic veil of autonomous exchange value. As is true of Turner’s theory, here the fetish also functions as an alienated representation of value that masks relations of inequality and exploitation. Turner’s major criticism of Taussig and Marxist anthropologists more generally was that they used the mode of production concept incorrectly in that they conceptually prioritized economic production and exchange over culture more broadly and, as a result, misconceived the base/superstructure relationship. In so doing, they treated the production of social relationships and cultural values as secondary to the production of the means of subsistence.
In contrast, Turner viewed production as “a global process involving the production of social persons, families, and communal relations of cooperation as well as means of subsistence” (1986:101). From this perspective, economic production, the production of the family, symbolic values, and social institutions were all treated as a single integrated system. For Turner, the total social system was always oriented toward the production of “social persons,” which, in effect, prioritized cultural value systems. If the entire social system was oriented towards the creation of fully developed “social persons,” then the values that defined a “social person” in any particular culture were of primary importance, structuring every other aspect of the system. Turner did not therefore see fetishism as something that only afflicts capitalist societies. His entire oeuvre was premised on the idea that, to some degree, all societies rely on alienated symbolic representations to mediate the total process of social production, and the work of symbolic anthropology was to interpret the nature of this relationship in any given social system.6 While Turner continually asserted that his treatment of production was true to the original ideas of Marx and Engels, we can also see its strong residual affinities with Parsonian systems theory; for Parsons as well as Turner, the cultural system provided the ultimate values that coordinated all the other dimensions of social action into a single integrated system. Ultimately, Turner came to define culture as a schema, following the developmental psychologist Jean Piaget (Turner 1973). The basic idea of the schema metaphor was that cultural values functioned like a blueprint for the production of both the person and collective social institutions.

**The Parsonian Roots of Globalization**

Turner’s theory required societies to be viewed as integrated systems. This was possible in a relatively small-scale society like the Kayapo but more difficult to apply to modern nation-states and the interactions between them. However, it is important to recognize that Parsonian systems theory was an important antecedent to what became known as the study of globalization. The anthropology of globalization (and its predecessor, modernization) emerged from two debates in the 1950s and 1960s. The first concerned the theorization of processes of acculturation and modernization, while the second centered on Parsonian social action theory. Prior to the 1950s, patterns of sociocultural change were understood through “diffusion” or “culture loss” or they were largely ignored as a thin veneer that overlaid the proper object of anthropological study, namely pre-capitalist cultural traditions. Beginning with the study of folk/urban transitions by people like Robert Redfield and Oscar Lewis, and moving to the analysis of acculturation (Redfield, Linton and Herskovits 1936), anthropologists began to theorize the dynamics of change itself. Parsons provided a theoretical model that helped explain the relationship between urbanization, modernization, and changes in beliefs and values. The early work of Clifford Geertz perfectly synthesized these two concerns. His first major publication, “Ritual and Social Change: A Javanese Example,” illustrates Parsons’ influence on the study of modernization in the 1950s.
Geertz (1957:33–34) writes:

One of the more useful ways—but far from the only one—of distinguishing between culture and social system is to see the former as an ordered system of meaning and of symbols, in terms of which social interaction takes place; and to see the latter as the pattern of social interaction itself. On the one level there is the framework of beliefs, expressive symbols, and values in terms of which individuals define their world, express their feelings, and make their judgments; on the other level there is the ongoing process of interactive behavior, whose persistent form we call social structure. Culture is the fabric of meaning in terms of which human beings interpret their experience and guide their action; social structure is the form that action takes, the actually existing network of social relations.

Citing Parsons at length, Geertz (1957:34) describes the social system as having “the kind of integration one finds in an organism, where all the parts are united in a single causal web; each part is an element in a reverberating causal ring which ‘keeps the system going.’” In this article, Geertz famously analyzes a Javanese funeral to argue that neither culture and society nor meaning and structure are always integrated or coordinated, particularly during moments of rapid change, when, for example, “traditional” Javanese peasants arrive in small cities and come into contact with the modern state. They carry out their social existence as city dwellers, yet culturally they still inhabit a “traditional” village. This was the first step towards Geertz’ repudiation of systems theory in favor of the more humanistic approach to meaning that characterized his later work. Turner, on the other hand, took the same basic question—the relationship between meaning and social structure—and branched off from Parsons in a completely different direction. Whereas Geertz moved more and more towards an “antisystem” definition of meaning as a contingent, local, historically produced “web of signification,” Turner’s symbolic anthropology was nothing if not systematic and highly dependent on models that seem mechanistic in comparison to Geertz’ resolute antistructuralism. As Geertz’ article demonstrates, the twentieth-century study of modernizing societies and cultural change—which I argue we are still dealing with in one way or another—was not originally supposed to help define modernity or capitalism. Quite the opposite. The original goal, derived from Parsons, was to study people in the midst of rapid change in one aspect of the social system—usually the economy during a transition to capitalism—and to then determine how that change shaped or was shaped by cultural or symbolic systems. When anthropologists looked at situations where cultural values were being contested, we could better understand how symbolic systems shaped human behavior. Most contemporary anthropology has followed Geertz’ trajectory in one important respect: the link between macrosocial forces and systems of meaning—in the broadest terms “society and culture”—is asserted or vividly illustrated through ethnographic examples, but it is rarely theorized in the systematic, holistic, and synchronic sense of theory understood by both Parsons and Turner. This tendency is reflected in the style of contemporary ethnographic writing, in which descriptive ethnographic vignettes have become the paradigmatic form used to illustrate a connection between social forces and patterns of behavior.
(In all of Turner’s writing, I can think of only one or two cases where he used a vignette to illustrate a theoretical argument.) As a result, we now have an incredibly detailed and valuable ethnographic record of, for example, the impact of neoliberalism on people around the world, but the **process** through which a given social system—like neoliberal capitalism—produces particular forms of social consciousness is not systematically theorized. Contemporary anthropological theory thus tends to move in the opposite direction from that which Parsons or Turner might have wanted. For them, the important issue is not the political economic **system** itself (e.g., neoliberalism), but the **process** by which the collective/institutional aspects of any system come to influence the behavior of individual actors, i.e. the relationship between structure and agency, between **langue** and **parole**. Whereas the Geertzian approach seeks to represent the meanings made by others, the Turner/Parsons approach seeks to understand how meaning itself is made.7 **Marxist Political Economy Versus Social Production of the Person** Marxist political economy has provided another way to talk about structure and agency, and it is particularly influential in Latin Americanist anthropology where a certain variant continues to shape the field. The work of Eric Wolf, Sidney Mintz, June Nash, William Roseberry and others drew on Marxist concepts from dependency and world-systems theory, “solidifying the position of historically-oriented political economy within US anthropology” (Edelman and Haugerud 2004:14). In general, this work explored how the worldwide spread of capitalism transformed particular places around the world, particularly the Americas, focusing on how unequal trade relationships between commodity producing societies in the South (or periphery) and commodity consuming countries in the North (or core) shaped (or inhibited) patterns of “development.” In its latter forms, it applied Gramscian concepts of culture and hegemony to theorize the relationships between class formations and culture through time (see, for example, Roseberry 1996). Like Turner’s work, this variant of anthropological Marxism sought to theorize the relation between structure and agency, but here “structure” referred to the history of the global capitalist market and the ways in which the market produced patterns of exploitation in different settings around the world. While both variants of Marxism explored the relationship of cultural forms to the mode of production, Turner (1986) argued that “political economy” tended to define production from the point of view of the capitalist system, rather than prioritizing the production of social persons, relationships, and values as the **primary** driver of human behavior. In the 1990s, the rising interest in globalization and neoliberalism adapted some of the basic approaches of world-systems theory to a “new” global economy, marked by new trade relationships, social movements, and, of course, “flows” (Appadurai 1996). Some of the structural concepts of Marxist political economy (like core/periphery and base/superstructure) were eschewed in the study of neoliberalism and globalization, yet most ethnographers of globalization are still dialecticians of one sort or another, in that we (I include myself in this category) attempt to trace the relationships between systems and social practice. 
This is where I believe Turner’s “global” view of social production provides a useful alternative to Marxist political economy. Whether we use the term “neoliberalism,” “globalization,” “late capitalism,” “post-Fordism,” or just “capitalism,” much of recent anthropology of the Americas deals with the process by which macrolevel systems and microlevel practice are mutually constitutive. The connection between the anthropology of neoliberalism and Parsonian concepts is rarely acknowledged, but I would argue that anthropologists are still swimming in Parsonian waters in our attempts to formulate the relationship between symbolic systems and social structure, particularly during moments of rapid change, when the persistence of cultural values is contested. For example, in my own work (Reichman 2011) I analyzed the impact of migration on a rural Honduran community, describing how changes in the economy wrought by migration and new forms of agriculture led to changing value systems, religious movements, kinship systems, and political beliefs. This was very much a project with Parsonian roots, though it was also shaped by Appadurai-inspired debates about globalization and deterritorialized cultural movements, such as global migration, diasporic movements, and transnational consumerism. One of the contradictions of the anthropology of globalization was that it was premised on the existence of a single global system, yet the theory of “flows” and movements, derived from Appadurai (1996) and George Marcus (1998) was antistructural. If you were looking for a more systematic model, you could turn to abstractions like “neoliberalism” or “post-Fordism,” but these categories often merely recast an economistic “base/superstructure” relationship into a new idiom and did not fully theorize the production of social consciousness or meaning, except as a product of the political economic system treated as an external force. Turner’s two articles on globalization (2002 and 2003) are both attempts to reformulate the changing relationship between the global market, the nation-state, and social consciousness through his theory of the social production of the person. The basic claim is that since the 1970s, the role of the nation-state changed in the so-called advanced capitalist countries, mainly Europe and North America. The class compromise that had been achieved between labor and capital began to fray as states saw their principal role as mediating between the nation and global financial markets, which now existed beyond the regulatory control of any state or international organization. This is a well-known story of the roots of neoliberalism. As states began to weaken their ideological commitments to national well-being in the interest of global competitiveness, the ideological bond between nation and state, which Turner calls the principle of popular sovereignty, became thin. Nation and state were dehyphenated, and national progress as an ideological project lost its force or became in Turner’s words “an idiom of last resort for social losers and marginal groups to make claims upon the state for amelioration of their marginal or otherwise disadvantaged situations” (Turner 2002:64). New forms of social identity emerged in which individual differences based on race, gender, language, ethnicity, and lifestyle became the anchoring concepts of personal identity. Heterogeneity came to be a structuring principle or schema for the socialization of individuals. Particularity became a universal value. 
In the economic sphere, this led to neoliberalism; in the cultural sphere, this led to postmodernism. In the political sphere, this led to the rise of identity politics and issue-oriented nonstate networks. Whereas the modern nation-state sought to transform difference into similarity, the postmodern nation-state valued difference as an end in itself. Turner deepens our understanding of the genesis of this shift by tracing various forms of identity politics, new social movements, multiculturalism, and socially conscious consumerism back to changes in class politics that began to emerge in the 1970s. To this end he uses the Bakhtinian concept of the chronotope, a socially produced category of space and time that functions as a structuring principle or schema. Turner describes the chronotope of the modern nation-state as “diachronic assimilationism” through which the production of similarity out of difference becomes the orienting goal of the state. Under postmodernity, he argues, the chronotope of “synchronic pluralism” has become hegemonic, and diachronic notions of transformation, such as progress or modernization, have lost their ideological force. Turner (2002:70) continues, The vision of society as a pluralism of equal differences is a static vision, with no room for the directed assimilation or transformation of any identity, collective or individual, into any other. “Synchronic pluralism” thus replaces the diachronic assimilationism (i.e., “progress”) of the modern nation-state as the new form of social consciousness—the chronotope, to use Bakhtin’s apt expression, of consumerism and the classes that primarily construct their social identities through it. Space as well as time takes on new forms and meanings. In the synchronic pluralist society of equal differences, there can be no “center,” nor any consequential boundary or periphery, in the sense of a point where difference begins to be devalued as alien or “underdeveloped.” Where all identities and cultural styles are equally valid and synchronically self-existing, there can be no “deeper” systemic dynamics or infrastructure, no underlying causes or constraints, but only a surface pattern of contrasting signs of difference. Synchrony as “pluralism” does not imply a motionless world of fixed spatial enclaves, but rather a world of aleatory movements and freely circulating discourses, where “flows” are reversible. Lacking a constant temporal direction, they do not become structurally consequential changes. In this analysis, we see how Turner’s Marxism can be scaled up to a complex society. Turner views the nation-state as the social institution that mediates social production, which he defined as “production in the widest human sense of the term, including the production of personal identity and empowerment for the realization of cultural values, as well as the production of material commodities and means of subsistence” (Turner 2002:78). In functional terms, the nation-state here takes the place of the Kayapo men’s house as a collective institution that structures the transformation of individuals into particular kinds of social beings across a particular chronotope or spatiotemporal frame. Without question, Turner’s analysis of globalization takes place at a very high level of generality and there is a need to ethnographically substantiate some of his claims. 
Yet this theory remains one of the more ambitious anthropological attempts to systematically interpret political economic structures and systems of meaning under conditions of globalization. Turner’s concept of the social production of the person provides a useful vocabulary to analyze systemic relationships without prioritizing the capitalist market, on the one hand, or aleatory cultural systems on the other. In short, it provides a way to think about the relationship between changing ways of life and changing ways of understanding and symbolically representing the world. The systematic approach in which all levels of social structure are directed toward a broad concept of “production” has clear Parsonian roots. Though it may be unrecognizable and out of fashion in a world that seems difficult to treat as a coherent system, the Parsonian model of society and culture still exerts some influence.

Notes

1 Many of the ideas in this article emerged from discussions that I had with Turner while I completed my Ph.D. under his supervision at Cornell University between 2000 and 2006.

3 Parsons used his own difficult terminology to describe his system. The economy can be understood as the “adaptive” subsystem; politics as the “goal attainment” subsystem; social relationships as “integrative”; and culture as the “latent” or “pattern-maintenance” subsystem.

4 In an article that appears directly after Turner’s 1968 paper in the same issue of Sociological Inquiry, Parsons added “commitment” as a fourth generalized medium that applies to the cultural subsystem. “Commitment” was a generalized measure of one’s adherence to a particular cultural value system. The 1968 article seems like an indirect response to Turner’s challenge (described in this article’s following section), yet Turner’s paper is mentioned only in passing at the beginning of Parsons’ article.

5 While answering this question, he developed another more nuanced critique to counter what he called the “individualistic emphasis” of Parsons’ theory (Turner 1968:126).

6 Turner viewed the Baining of New Britain, studied by Jane Fajans (1997) among others, as the least alienated society on Earth, because, according to Fajans, their value system was consciously oriented towards the production of the social person—hence the title of her ethnography *They Make Themselves*.

7 I thank an anonymous reviewer for this succinct explanation.

8 These two publications are essentially the same and I will treat them as such.

References

Appadurai, Arjun
Edelman, Marc and Angelique Haugerud
Fajans, Jane
Geertz, Clifford
Graeber, David
Geertz, Clifford
Marcus, George
Marcus, George
Marcus, George
Parsons, Talcott
Redfield, Robert, Ralph Linton, and Melville J. Herskovits
Reichman, Daniel
Roseberry, William
Roseberry, William
Turner, Terence
Turner, Terence
Turner, Terence
Turner, Terence
Turner, Terence
Taussig, Michael
Taussig, Michael
5. DISCUSSION

5.1. Evaluation of biocompatibility:

5.1.1. Measurement of cytotoxic effects:

5.1.1.1. Human gingival fibroblasts:

In the present study, the viability and proliferation rate of human gingival fibroblasts (HGF) after exposure to SBC and MBC, representing two composite resins based on different monomer compositions, were assessed. Though MBC are popular and widely used, efforts are being made to overcome their clinical deficiencies through recent developments that refocus attention from the filler content to the matrix resin. There has been a dearth of reports in the literature on the biocompatibility of siloranes. The present finding that the materials are nontoxic is in accordance with the observations of others on methacrylates \(^{95,96}\) and siloranes \(^{20}\). However, there are conflicting reports on the cytotoxicity of these materials \(^{20,97-100}\). In the present study, the cytotoxicity observed was mild for both materials at any given concentration. The low cytotoxicity of these composites could be due to the following reasons: 1. Hydrolytic stability of the material. Palin et al. 2005 \(^5\) showed that, compared to a methacrylate-based composite (Z250), the silorane-based composite exhibits lower solubility, water sorption and diffusion coefficients. These hydrophobic properties diminish the release of unpolymerized monomers into the oral cavity, thus reducing the toxicity \(^{101}\). 2. Lower levels of residual monomers after polymerization. Monomers released because of a lower degree of conversion after incomplete polymerization can increase the cytotoxic effect of composite resins\textsuperscript{20}. 3. Insufficient release of the leachable components that produce the cytotoxicity. The low cytotoxicity does not imply an absence of leached components; but if the leaching of toxic compounds is slow, it may never reach a lethal dose at any given time and therefore may not cause cytotoxicity in the oral cavity. In the present study as well, SBC and MBC exhibited a low level of cytotoxicity as the incubation period increased, suggesting that these composites show limited leaching of cytotoxic components into the surrounding media, as shown in Figure 4.2 and Figure 4.3. However, there is a contradicting report describing a lower degree of polymerization of the methacrylate-based composite (Filtek Z250)\textsuperscript{102}, which may release cytotoxic substances. Earlier, it has been shown that monomer release from composite resins is complete within 24 hours\textsuperscript{29}; therefore, most toxic effects from composite resins occur during the first 24 hours. In the present study, no significant cytotoxicity was found even after exposure for 48 hours.

5.1.1.2. Dental pulp cells:

In the present study, the viability and proliferation rate of human dental pulp cells (DPC) after exposure to SBC and MBC, representing two composite resins based on different monomer compositions, were assessed. Biocompatibility of a resin composite is an important criterion for pulp vitality after any operative restoration. The cytotoxicity of composite resins in deep cavities has been widely investigated on various cell lines because of their close proximity to restorative dental materials in the oral cavity, which makes them more clinically relevant\textsuperscript{80, 103-105}. Also, any cytotoxic effects that restorative materials may have will act on the dental pulp, and for that reason cultured pulp cells should be the accepted model of choice for biocompatibility testing\textsuperscript{105}.
Even though pulp fibroblasts are difficult to culture and their reported survival rate is poor, they are highly sensitive, which indicates that pulp cells could be more sensitive indicators for cytotoxicity testing\textsuperscript{105, 106}. The physical and chemical properties as well as the clinical performance of composite materials depend on adequate polymerization of the resin monomers. As monomer-polymer conversion is never complete, the residual monomers released from resin restorations interact with living oral tissues and leave composites susceptible to biodegradation\textsuperscript{107-109}. Also, resin monomers like 2,2-bis\{4-(2-hydroxy-3-methacryloyloxy-propoxy)phenyl\}propane (Bis-GMA), urethane dimethacrylate (UDMA), triethylene glycol dimethacrylate (TEGDMA) or 2-hydroxyethyl methacrylate (HEMA) act as environmental stressors which inevitably disturb vital cell mechanisms: they are cytotoxic via apoptosis, induce genotoxic effects, stimulate prostaglandin E2 production in dental pulp cells and alter the cell cycle\textsuperscript{110}. The amount of released resin monomer that reaches the dental pulp depends on the solubility of the monomers in the tissue fluids or dentin, the severity of dental caries and the remaining dentin thickness after caries removal\textsuperscript{111}. Information on the biocompatibility of SBC is minimal compared with the vast amount of data reported on classical MBC. A study by Krifka S et al. 2012\textsuperscript{80} suggested minimal cellular responses to the silorane-based composite (HermesIII) when compared to the other methacrylate-based composites tested, and the cytotoxicity of silorane markedly decreased with time. The different cytotoxic profile of the silorane-based composite may be attributed to the hydrolytic stability of this material\textsuperscript{5, 80}. In the present study, SBC at a lower concentration (5 µg/ml) at 48 hours showed similar observations, as shown in Figure 4.5. Palin et al. 2005\textsuperscript{5} showed that, compared to a methacrylate-based composite (Filtek Z250), the silorane-based composite exhibited lower solubility, water sorption and diffusion coefficients following short- and medium-term immersion periods. These hydrophobic properties also contribute to the low release of unpolymerized monomers and the lower amount of residual monomers after the polymerization procedure\textsuperscript{20, 101}. In addition, the low solubility of silorane monomers in water is mainly responsible for the absence of cell responses, as it has recently been reported that silorane monomers are released from a silorane-based dental material into an organic but not into an aqueous solution\textsuperscript{76}. Brackett MG et al. 2007\textsuperscript{20} suggested that the cytotoxic effects (reduced cell viability) of the material were detected immediately after exposure in direct contact with cells, whereas no cytotoxicity was detected after aging. The result obtained in this study was in agreement with the study by Shafiei F et al. 2014\textsuperscript{89}, who suggested that exposure of DPCs to SBC resulted in an increase in cell viability for the day-14 extract compared to the day-seven extract; in the present study, however, cell viability was greater in the first 24 hours and decreased with ageing. According to the obtained results, MBC and SBC exhibited different cytotoxic behaviors in both of the time periods tested. However, the cytotoxic effects of both investigated composites were comparatively low.
Comparing MBC and SBC, however, viability was greater with MBC than with SBC, contradicting the results reported in the literature. The reason could lie in the curing of the composite, as a lower degree of conversion is considered one of the reasons for toxicity, as discussed by Marchesi G et al. 2010\textsuperscript{102}, who reported that silorane had a lower degree of conversion than the methacrylate-based composite (Filtek Z250). At this point, the fact that human pulp fibroblasts have been found to be more sensitive than gingival fibroblasts to the alterations caused by most of the tested substances should not be neglected\textsuperscript{7}. Despite the findings of previous \textit{in vitro} studies, results from an \textit{in vivo} study showed that silorane composites caused no more adverse pulpal and periapical reactions in deep dentin cavities than methacrylate-based composites\textsuperscript{112}. In a clinical scenario, concerns have been raised for procedures such as direct pulp capping or placement of composite restorations in deep cavities because of the diffusion of resinous monomers through the dentinal tubules, especially when the residual dentin thickness is below 1 mm; in shallower cavities, only minor histological reactions are seen in the pulp as a result of dental material application\textsuperscript{89}.

5.1.2. Evaluation of antimicrobial activity

\textit{S. mutans} and \textit{L. acidophilus} are involved in the etiopathogenesis of caries, periodontal diseases and other infections, as they can alter the equilibrium of the oral microbiota by creating favorable conditions for opportunistic bacterial and fungal organisms to adhere to the surfaces of teeth, oral tissues and prosthetic appliances\textsuperscript{36,113}. \textit{C. albicans} is a fungus frequently found in dental biofilm and, owing to its ability to secrete organic acids and collagenolytic enzymes, it has been implicated in the onset of caries. Demineralization produced by fungal organic acids, as well as the presence of \textit{C. albicans} hyphae invading dentinal tubules, would prove the capacity of this yeast to invade and destroy organic and inorganic dental tissues\textsuperscript{114}. \textit{In vitro}\textsuperscript{115} and \textit{in vivo} studies\textsuperscript{116} have reported that resin composites result in more plaque accumulation than other restorative materials or dental hard tissues such as enamel. One of several methods to inhibit biofilm formation on resin composites is to incorporate antimicrobial agents into their composition. Antimicrobial agents can be immobilized in the resin fraction of the composite or used as filler particles, depending on the agents’ physical and chemical properties. The agar disc diffusion technique has been used to evaluate the antimicrobial properties of cured composite resin\textsuperscript{117}. \textit{S. mutans} and \textit{C. albicans} showed an inhibition zone for SBC that was significantly greater than for MBC, as shown in Figure 4.6. No antibacterial effect against \textit{L. acidophilus} was seen in our results, in which no inhibition halo was induced by either SBC or MBC, as shown in Figure 4.7. The broth macrodilution method (determination of MIC) was used in the present study to evaluate the antibacterial property. The minimum inhibitory concentration (MIC) is defined as the lowest concentration of an antimicrobial that will inhibit the visible growth of a microorganism after overnight incubation.
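As a point of reference, the MIC read-out from a two-fold broth macrodilution series can be written compactly; this is only a generic sketch, with the starting concentration \(c_{0}\) and dilution index \(n\) as illustrative symbols rather than values taken from the present protocol:

\[
c_{n} = \frac{c_{0}}{2^{\,n}}, \qquad n = 0, 1, 2, \ldots, \qquad
\mathrm{MIC} = \min \{\, c_{n} : \text{no visible growth after overnight incubation} \,\}
\]

In other words, each tube holds half the antimicrobial concentration of the previous one, and the MIC is the lowest of these concentrations at which the broth remains visibly clear.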
The MIC is used by diagnostic laboratories mainly to confirm resistance, but most often as a research tool to determine the in vitro activity of new antimicrobials\textsuperscript{91}. The MIC results showed that SBC can significantly inhibit the growth of the microorganisms \textit{S. mutans}, \textit{L. acidophilus} and \textit{C. albicans}. SBC is reported to contain yttrium fluoride (76\%) in its composition\textsuperscript{19, 64}, and fluoride in any form is reported to have antimicrobial action\textsuperscript{118-120}. Fluoride acts to reduce the acid tolerance of the bacteria and is most pertinent to reducing the cariogenicity of dental plaque. The composite MBC lacked fluoride in its composition, which probably explains its lower antimicrobial potential compared with SBC (Table 3.1). Also, the cytotoxic profile of MBC due to the release of TEGDMA or other residual toxic monomers should not be neglected; this has been reported in the previous literature\textsuperscript{107-109}.

5.1.3. Adhesion and penetration assay

The present study was carried out to evaluate the surface adherence and penetration of the microorganisms \textit{S. mutans}, \textit{L. acidophilus} and \textit{C. albicans} on SBC and MBC. Microbial adhesion to and penetration of the composite surface is an important etiological factor in secondary caries formation. \textit{In vivo} and \textit{in vitro} studies have shown that \textit{Streptococcus mutans} and \textit{Lactobacillus acidophilus} are the main bacteria isolated from plaque samples on natural and artificial surfaces during the early stages of caries development\textsuperscript{121}. Similarly, \textit{Candida albicans} has a high adhering potential to dental materials, in almost the same manner as to oral tissues, and is known to form biofilm\textsuperscript{122}. A low-adhesive material reduces or delays the development of biofilm, thereby preventing the development of dental plaque\textsuperscript{123}. These low-adhesive restorative materials not only have a protective effect against secondary caries but also have an antibacterial action due to their fluoride-releasing ability\textsuperscript{117, 124}. The fluoride ion has both an inhibitory effect on cariogenic bacterial metabolism and a remineralizing potential on the hard tissues of the tooth. Nevertheless, a correlation between the fluoride release rate, the antibacterial action and the adhesion potential has not been unequivocally demonstrated in the literature\textsuperscript{125}. Thus, the physical and chemical properties of the material surface may affect the likelihood of bacterial infection. To prevent biofilm formation and host infection, the surfaces of restorative materials play a very important role. It has been reported in the literature that factors such as surface free energy, surface roughness, hydrostatic forces, hydrophobicity, the nature of the material and water sorption alter the adherence and penetration behavior of microorganisms\textsuperscript{126-128}. Intraoral structures with rough surfaces retain more plaque than smoother surfaces\textsuperscript{126}. Surface roughness has been found to be one of the most important surface properties influencing microbial adhesion\textsuperscript{129, 130}. In the present study, the conventional MBC yielded the highest surface roughness in comparison with SBC. This is probably due to its higher content of larger hybrid filler particles, in contrast to the microhybrid filler particles present in SBC. The results are represented in Figure 4.11.
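For clarity, the roughness values compared above correspond to the arithmetic-mean roughness parameter Ra; the expression below is the standard definition, given here only as a sketch (the profile function, evaluation length and sampling count are generic symbols, not parameters restated from this study’s profilometry):

\[
R_{a} \;=\; \frac{1}{L}\int_{0}^{L} \lvert z(x) \rvert \, dx \;\approx\; \frac{1}{n}\sum_{i=1}^{n} \lvert z_{i} \rvert
\]

where \(z(x)\) is the height deviation of the surface profile from its mean line over the evaluation length \(L\), and \(z_{i}\) are the sampled deviations. Larger hybrid filler particles protruding from, or plucked out of, the matrix increase these deviations, which is consistent with the higher Ra reported for MBC.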
Apart from surface roughness, surface hydrophobicity has been found to be another pivotal factor influencing microbial adhesion to oral interfaces\textsuperscript{129, 130}. It is generally accepted that surfaces with water contact angles higher than 90° are referred to as hydrophobic, whereas surfaces with water contact angles lower than 90° are described as hydrophilic. In general, the silorane-based composite shows significantly higher surface hydrophobicity\textsuperscript{19, 64}. This phenomenon is most likely due to its hydrophobic siloxane backbone, which gives it more hydrophobic surface properties than conventional methacrylate-based composites. Our findings also agreed with the results reported in previous investigations\textsuperscript{69, 70, 131}. The reduced bacterial adhesion and penetration in the present study may therefore be due to the increased hydrophobicity of SBC with respect to MBC.

5.2. Evaluation of biomechanics:

5.2.1. Effect of temperature, time interval and storage condition on fluoride release and recharge from silorane based and methacrylate based restorative material

Increased patient demand for esthetic restorations has led to a revolution in the use of esthetic restorative materials. Fluoride released from esthetic restorative materials can inhibit caries, enhance remineralisation and provide an anticariogenic effect. An increased level of fluoride ions around restorations is especially essential for patients with a high caries risk. The success of topical treatments depends on the formation of fluoride reserves within restored teeth, capable of releasing ions for prolonged periods of time. Fluoride replenished from the environment is re-released to the adjacent tooth structure\textsuperscript{132}; therefore, fluoride recharge in restorative materials provides a potential fluoride reserve for release in the oral environment\textsuperscript{133}. This study evaluated the \textit{in vitro} fluoride release and recharge potential of three restorative materials, as shown in Table 3.1 and Table 3.2, over an extended period of time. Among the materials considered, the glass ionomer materials had the greater potential for fluoride release. The fluoride release and recharge of the composites was low, and the values probably represented fluoride remaining on the surface after the recharge and wash cycle. A number of competing factors contributed to the fluoride release pattern. The final outcome could be confounded by cumulative changes in the experimental conditions, such as manipulation of the materials, powder-liquid ratio, mixing, different amounts of exposed area for the specimens, weight of specimens, the nature of the storage medium used, and the form and concentration of the fluoride recharging vehicle. Glass ionomer cements consist of a basic glass powder (calcium fluoroaluminosilicate) and a water-soluble acidic polymer (polyacrylic acid). Setting occurs by neutralization and involves the initial formation of calcium polyacrylate and later aluminium polyacrylate\(^\text{134}\). Our results showed an increase in fluoride release over time (from day 1 to day 28), which was in agreement with the findings of Shaw et al. 1998\(^\text{135}\), Attin et al. 1999\(^\text{136}\) and Vermeersch et al. 2001\(^\text{137}\), and indicates that the restorative materials sustain fluoride release over time rather than demonstrating a high “burst” of fluoride release immediately following placement. Gao et al. 2001\(^\text{130}\), Preston et al. 2003\(^\text{138}\) and
Attar et al. 2003\(^\text{139}\) noted an ‘initial burst’ of fluoride release followed by a decrease over time, which was in contrast to our findings. The results are shown in Figure 4.18. A material can leach ions from the portions of its mass that have been penetrated by water. During water penetration through diffusion, the surface layers become more saturated than the inner mass. This penetration of water differs between materials, depending on their permeability. In GIC, fluoride release occurs either by a short-term reaction involving rapid dissolution of fluoride from the outer surface into the solution, or by a more gradual release resulting in sustained diffusion of fluoride through the bulk cement\(^\text{140}\). As the glass dissolves in the acidified water of the hydrogel matrix, GICs show much faster water sorption (they are highly permeable) and attain saturation very quickly as the soluble fraction of the material dissolves\(^\text{39,141,142}\). Composites diffuse water very slowly and have lower water sorption (permeability). In composites, fluoride must diffuse through a polymer matrix, which is inherently more difficult than diffusion through a hydrogel. The low fluoride release might therefore have been caused mainly by fluoride absorbed onto the surface.\textsuperscript{143} The present study used different temperatures to check whether an increase in environmental temperature increases both the fluoride release and the recharging of the materials. As broad temperature fluctuations occur in the oral environment, thermal cycles may frequently challenge the restorative materials placed in this environment.\textsuperscript{42} The results of our study suggested that an increase in temperature increases the fluoride release and recharge capacities of all the materials tested. Fluoride treatment of restorative materials at a high temperature is therefore clinically recommended to improve their recharging ability; this may be important in developing regimes for improving the delivery of topical fluoride products. The test medium also plays a significant role in fluoride release. Several studies used distilled water as the test medium.\textsuperscript{144, 145} It has been shown that fluoride release into artificial saliva differs from that into distilled water; moreover, the use of artificial saliva provides test conditions that are more comparable with the oral environment than distilled water does. Therefore, artificial saliva was adopted as the test medium in the present study. Our results showed that greater release was observed in artificial saliva than in distilled water, supporting the successful use of fluoride-containing restorative materials in the oral environment. This can be explained with respect to the pH of the dissolving medium: studies have suggested that restoratives tested at acidic pH show greatly increased fluoride release.\textsuperscript{146, 147} The artificial saliva in this study had a pH of 5.3 to 5.5, which explains the higher fluoride release in artificial saliva compared with distilled water. It has been suggested that the potential for fluoride ‘recharge’ is more important than fluoride release alone. The daily exposure of dental materials to topical fluoride, such as sodium fluoride (NaF), acidulated phosphate fluoride (APF) solution or stannous fluoride (SnF2) in toothpastes or mouthwash, creates a potential for the materials to be recharged with fluoride\textsuperscript{138}.
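As a minimal sketch of how cumulative fluoride release is typically normalized in studies of this kind (the symbols are illustrative assumptions, not values restated from the present protocol), the amount released per unit specimen area up to time \(t\) can be expressed as:

\[
F_{\mathrm{cum}}(t) \;=\; \frac{1}{A} \sum_{k:\, t_{k} \le t} C_{k} \, V_{k} \qquad \left[\mu\mathrm{g}/\mathrm{cm}^{2}\right]
\]

where \(C_{k}\) is the fluoride concentration measured in the storage medium at collection interval \(k\) (in µg/ml), \(V_{k}\) is the volume of medium replaced at that interval (in ml), and \(A\) is the exposed surface area of the specimen (in cm²). Recharge experiments repeat the same measurement after exposure to the fluoride vehicle.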
In our study, NaF was used as the recharge vehicle, as it is a form of fluoride commonly used in toothpastes and rinses. Although most dentifrices contain fluoride at around the 1000 parts/10\textsuperscript{6} F level, we used a lower concentration (500 parts/10\textsuperscript{6} F level) to determine whether or not fluoride recharge would occur at a lower fluoride concentration. The concentration used for fluoride replenishing was a 0.2\% NaF solution; the increase in fluoride release after exposure to the 0.2\% NaF solution could be attributed to fluoride retained in the pores or on the surfaces of the materials. This finding agreed with previous work by Itota et al. 2004\textsuperscript{148}. Previous recharge studies have suggested that GICs have relatively high recharge potentials\textsuperscript{40}. In our study, re-release increased from week 1 to week 3, suggesting that topical applications of fluoride would help in the prevention of caries and of failure of restorations, as shown in Figure 4.19. GIC had the better fluoride recharging capability; SBC and MBC did not differ significantly, suggesting that more frequent application of fluoride is needed with composites. Several factors are likely to be involved in this process, such as the permeability of the material and the form and concentration of the fluoride used. If the permeability of the material is high, the absorption and re-release of fluoride can take place to a greater extent than in a less permeable material.

5.2.2. Effect of staining solutions and immersion periods on color stability of silorane based and methacrylate based restorative material

Color stability throughout the functional lifetime of restorations is important for the durability of treatment. Color alterations in dental composites are multifactorial and are associated with *intrinsic factors*, such as chemical changes in the material (e.g., the resin matrix-filler particle content)\(^77, 149\) and at the matrix/particle interface\(^150\), and *extrinsic factors*, such as adsorption or absorption of stains from exogenous sources, dietary and smoking habits\(^151, 152\), and the water sorption coefficient of the resinous monomers\(^153, 154\). Consumption of certain beverages affects the esthetic and physical properties of resin composites, compromising the quality of restorations. In the present study, when the discoloration of the two resin composites was compared, the overall maximum discoloration was seen in MBC compared with SBC, and the results were statistically significant. Color alteration values of \(\Delta E\) greater than or equal to 3.3 are considered visually perceptible and clinically unacceptable to 50% of trained observers\(^155\). In the present study, both SBC and MBC showed perceivable color changes, but composite SBC showed significantly lower color alterations in comparison with MBC \((p<0.05)\), as shown in Figure 4.20. A possible explanation for the differences between SBC and MBC is that MBC has higher water sorption\(^156, 157\), which induces plasticization and expansion of the methacrylate polymer\(^55\), and contains a hydrophilic monomer (TEGDMA, which has been reported to stain more readily)\(^82, 156\), compared with silorane resins. Siloranes are extremely hydrophobic (because the siloxane groups are inaccessible to water or water-soluble species)\(^66\), and have lower hygroscopic expansion and higher dimensional stability in water\(^55\).
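For reference, the color changes discussed here are conventionally expressed as CIELAB color differences; the formula below is the standard expression, given on the assumption that the \(\Delta E\) values in this study are \(\Delta E^{*}_{ab}\) differences between a post-immersion measurement and its baseline (control):

\[
\Delta E^{*}_{ab} \;=\; \sqrt{(L^{*}_{1}-L^{*}_{0})^{2} + (a^{*}_{1}-a^{*}_{0})^{2} + (b^{*}_{1}-b^{*}_{0})^{2}}
\]

where \(L^{*}\) is lightness and \(a^{*}\), \(b^{*}\) are the chromatic coordinates, with subscript 0 denoting the baseline and subscript 1 the post-immersion reading; values at or above the 3.3 threshold cited above are regarded as clinically unacceptable.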
Also, the increased synergism between the filler particles and the resin matrix (siloxane component) may be responsible for the reduction in water sorption and solubility. This hydrophobic attribute of the siloxane therefore favors color stability and minimizes staining in silorane composites. In addition to its hydrophobic properties, silorane is stable and insoluble in simulated biological fluids containing citric acid, hydrochloric acid, heptane, or enzymes such as hydrolase or esterase, and under artificial ageing. Further, it has been reported that resin composites with a lower amount of inorganic fillers show greater color changes, because a greater resin matrix volume allows greater water sorption; the lower filler content of MBC corresponds to the greater color variation observed in the present study. There are, however, conflicting reports stating that not all the physical and mechanical properties of these new composites are satisfactory, and that color stability also depends on internal factors and time (ageing). Previous studies demonstrated that silorane was less susceptible to staining than methacrylate-based composites when immersed in red wine, coffee, soft drinks, tea, and whisky. However, the effect of these staining agents on the properties of composite resins depends on the length of immersion, the temperature, and the pH of the immersion solution. Polymeric materials reportedly display a tendency to erode under acidic conditions; in general, foodstuffs with a low pH have a greater erosive effect. Low pH affected the surface integrity of the polymers, because under acidic conditions the polymer surface was appreciably softened by the loss of structural ions. Yoghurt and lime exhibited a similar behavior: the acidic ingredients in yoghurt and lime, e.g., lactic acid and citric acid, might have caused surface dissolution of the polymer surface, leading to a much paler appearance. Hence, specimens observed visually after their specified immersion period showed a lighter color match when compared with the control. In the present study, among all the staining agents used, turmeric showed the highest color change at the day-28 time interval \( (p<0.05) \), as shown in Figure 4.20. The major constituents of turmeric (Curcuma) are curcuminoids, the yellow coloring principles that cause the stain. The smaller molecular size of curcumin, coupled with the water absorption characteristics of the tested materials, created a stronger staining effect, as discussed by Ergun et al. 2005\(^ {45} \). When considering the discoloration in coffee and tea, the specimens immersed in tea showed more discoloration. This is in agreement with other studies in the literature\(^ {164, 165} \). Um and Ruyter 1991\(^ {164} \) reported that tea produced a yellow-brown stain while the coffee stain was yellowish. The discoloration in tea was mainly due to surface adsorption of polar colorants at the surface. However, other studies\(^ {156, 166} \) found that coffee was more chromogenic than tea. The discoloration by coffee is due to both surface absorption and adsorption of colorants. Fine coffee particles deposit into pits that may have formed due to polymerization shrinkage of the resin during curing. The less polar colorants and water-soluble polyphenols in coffee, e.g., tannin, caffeine and caffeic acid, might have penetrated deep into the material, possibly because such colorants are more compatible with polymer matrices. When cocoa was considered, there was less change in the discoloration values.
This result is attributed to the removal of accumulated layers: as the cocoa layers on a specimen reached a certain thickness, they tended to break away from the surface of the sample and return to the solution, which explains the weaker staining by cocoa. Therefore, color changes due to extrinsic factors may be directly related to the organic matrix present in the particular composite, and staining could thus be reduced in more hydrophobic materials. Time was found to be a critical factor for the color stability of tooth-colored restorative materials\textsuperscript{167}. In the present study, the results showed that as the immersion time increased, the color changes became more intense. The results of this study can give an insight into how different resin composites may behave when exposed to different beverages, thus affecting the clinician's choice of material and the patient's control of dietary habits. The results also showed that the interaction between the different composites, the various beverages and time depended on a multitude of factors. However, to investigate the color stability performance of composites in a clinical setting, these results should be supported by planned \textit{in vivo} studies.

5.2.3. Effect of finishing and polishing techniques and time on surface roughness and surface hardness of silorane based and methacrylate based restorative material

Finishing and polishing of composite resins are important clinical steps in restorative procedures that determine the quality of restorations. Evaluating the suitability of various finishing and polishing methods for the tested materials requires assessment of their surface roughness and hardness. The techniques employed during finishing and polishing of tooth-colored dental restorative materials not only improve the longevity and aesthetic appearance of the material, but also minimize plaque accumulation, gingival irritation and secondary caries\textsuperscript{49, 50, 167, 168}. It is practically impossible to achieve a highly polished surface because of the heterogeneous nature of the composition, i.e. hard filler particles embedded in a relatively soft matrix, which do not abrade to the same degree because of their different hardness\textsuperscript{169, 170}. The literature shows that the use of a polyester strip or matrix produces the smoothest surface on restorative materials, but further contouring and finishing to remove excess material may spoil this finish\textsuperscript{171-173}. Insufficient polymerization of the outer surface results in reduced hardness or surface discoloration. The removal of the outermost composite by finishing and polishing procedures is thus warranted to produce a wear-resistant, harder and color-stable restoration\textsuperscript{171, 174}. Surface roughness is a function of the microstructure created by the series of physical processes used to modify the surface, and is also related to the scale of measurement. When the same polishing system is used on different composites, differences in material composition should be responsible for the different Ra values\textsuperscript{172}. However, the system used for finishing and polishing should also be taken into consideration. Finishing procedures are performed with rigid rotary instruments, such as super-fine-grit diamond burs or multi-fluted tungsten carbide finishing burs.
These have been used to contour anatomically structured and concave surfaces, such as the lingual surfaces of anterior teeth or the occlusal surfaces of posterior teeth\textsuperscript{175}. Finishing diamonds are best suited for gross removal and contouring because of their high cutting efficiency on composite surfaces, while carbide finishing burs are best suited for smoothing and finishing as a result of their low cutting efficiency\textsuperscript{173}. Most investigators agree that, for the final polish, flexible aluminum oxide discs and/or silicone-based points are the best instruments for producing low roughness on composite surfaces\textsuperscript{175, 176}. The ability of aluminum oxide discs to produce a smooth surface is related not only to the planar motion of the disc, but also to their ability to cut the filler particles and the matrix equally\textsuperscript{176}. Most studies report that diamond burs (of various degrees of fineness) produce a visibly rough surface\textsuperscript{177}, with loss of shine and numerous scratches, but that the use of aluminum oxide as a secondary finishing agent increases smoothness, as the abrasion created by the diamond bur is reduced and the scratches are removed. Other studies have found the polishing effect of Astropol to be equivalent to that of the well-established Soflex discs, or inferior to Soflex but still better than that of other polishing systems\textsuperscript{178}. Our results agreed with the literature in that diamond-Soflex combinations for SBC and tungsten carbide-Soflex combinations for MBC produced the smoother surface finish. The results are represented in Figure 4.21. With hybrid composites, finishing diamonds have been shown to produce rough, trough-like surfaces compared with carbide burs. Jung 2002\textsuperscript{173} suggested that finishing diamonds are best suited for gross removal and contouring because of their high cutting efficiency, while carbide finishing burs are best suited for smoothing and finishing as a result of their low cutting efficiency, which was in agreement with the behavior of the hybrid composite MBC used in our study. In our study, extra-fine diamond burs and 30-fluted tungsten carbide burs were used to finish the surface of the restorations; following these procedures, Soflex discs (Groups I and III) or Astropol and Astrobrush (Groups II and IV) were used to polish the restorations. When the different polishing systems were used, the surface roughness (Ra) values varied for both materials. For SBC, the diamond bur-Astropol/Astrobrush combination showed the lowest Ra values, followed by diamond bur-Soflex disc, tungsten carbide bur-Soflex disc and tungsten carbide bur-Astropol/Astrobrush with higher Ra values, suggesting that the diamond bur-Astropol/Astrobrush combination should be used for polishing SBC. For MBC, tungsten carbide bur-Soflex disc showed the lowest Ra values, followed by tungsten carbide bur-Astropol/Astrobrush and diamond bur-Astropol/Astrobrush, whereas diamond bur-Soflex disc showed the highest Ra values. The roughness produced may be attributed to the distinct particle sizes and their arrangement within the resin matrix. For a finishing system to be effective, the cutting particles must be harder than the filler particles; otherwise the abrasive medium abrades only the softer matrix, which may result in higher surface roughness.
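Since these comparisons are made in terms of Ra, a brief note on the metric itself: assuming the standard arithmetic-mean roughness reported by contact profilometers, for a surface profile \(z(x)\) measured relative to the mean line over an evaluation length \(L\),

\[
R_a = \frac{1}{L}\int_{0}^{L} \lvert z(x)\rvert \, dx \;\approx\; \frac{1}{n}\sum_{i=1}^{n} \lvert z_i \rvert ,
\]

so lower Ra values correspond to surfaces whose heights deviate less, on average, from the mean plane.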
Given this, the effectiveness of finishing and polishing procedures on the restorative material surface may be all the more critical. Another variable responsible for the different results is the polishing time. In clinical situations, resin composites are usually polished and exposed to the oral environment immediately after restoration. The main controversy regarding composite polishing is probably when to initiate it. The polishing time and materials employed\textsuperscript{177}, and the polishing method and instruments\textsuperscript{179}, have a significant effect on the surface roughness and surface hardness of composite restorations. While some authors claim that finishing and polishing should be done after removal of the matrix or five minutes later\textsuperscript{51, 180, 181}, several others have suggested that better marginal sealing can be obtained if these procedures are delayed by 24 hours\textsuperscript{182, 183}. Changes in hardness may reflect the state of the setting reaction of a material and the presence of an ongoing reaction, or the maturity of the restorative material\textsuperscript{183}. Immediate finishing and polishing could cause plastic deformation (flow) of the resin, which is only about 75% cured after 10 min, owing to the thermal insults of polishing, as the composite polymerization reaction is not complete before 24 hours\textsuperscript{183}. It has also been proposed to delay any finishing procedures until after hygroscopic expansion has occurred, because of the risk of fracture of the unsupported enamel surrounding the marginal gap\textsuperscript{181, 182}. In our study, the effect of time on roughness and hardness was evaluated for immediate polishing (after one day) and delayed polishing (after one week). Our results showed that delaying the finishing and polishing procedure created a smoother and harder surface than immediate polishing. These findings agree with those of Yap et al. 1998\textsuperscript{51} and Rai et al. 2013\textsuperscript{184}, who concluded that delayed finishing and polishing of polyacid-modified resins resulted in a smoother surface; the authors attributed this result to the maturity of the resin at the time of finishing and polishing. Several other authors have also proposed a 24-hour delay before completing the finishing procedures\textsuperscript{182, 185}, which supports the result obtained in this study. With immediate polishing, however, the fact that hygroscopic expansion will improve marginal adaptation by closing the gap formed by polymerization shrinkage and by the finishing and polishing procedures should not be neglected\textsuperscript{51}. Therefore, most dentists prefer to perform the finishing and polishing step immediately after light curing of the resin restoration, which is more acceptable and cost-effective for the patient, as proposed by Cenci et al. 2008\textsuperscript{186}, who recommended immediate polishing since it reduces the number of clinical sessions. Venturini et al. 2006\textsuperscript{183} found that immediate polishing had no negative influence on the surface roughness, hardness and microleakage of a microfilled (Filtek A110) and a hybrid (Filtek Z250) resin composite compared with delayed polishing. In our study, the immediate use of the diamond-Soflex and tungsten carbide-Soflex combinations on SBC gave higher surface hardness values. The results are represented in Figure 4.22. The increase in hardness with delayed finishing was not significant for the microhybrid composite SBC, but it was significant for the hybrid composite MBC.
Because of the large difference between filler and matrix hardness immediately after cure, immediate finishing and polishing results in preferential loss of the matrix phase, leaving the filler particles in positive relief. This explains the higher Ra values with immediate finishing and polishing. With time, the matrix phase matures and hardens, decreasing the difference in hardness and the preferential loss of matrix during finishing and polishing, and resulting in lower Ra values. These results coincide with the study of Chinelatti et al. 2006\textsuperscript{187}, who found that the increase in hardness with delayed finishing and polishing generally results in a surface similar to, or even harder than, that obtained with immediate finishing and polishing. On the other hand, an investigation by Cenci et al. 2008\textsuperscript{186} reported a loss of surface properties after polymerization when a delayed polishing procedure was used. Based on the results of our study, the effect of the polishing system on surface roughness was time dependent, and the post-curing surface properties of the composites were better with delayed polishing procedures. In our study, the microhybrid composite SBC exhibited lower hardness but a smoother surface finish than the hybrid composite MBC. These findings can be explained by compositional differences between the two composites, differences in residual polymerization, or an increase in the resilience of the silorane polymer compared with methacrylate. Filler particles should be situated as close together as possible in order to protect the resin matrix from abrasives; reduced interparticle spacing in resin composites is achieved by decreasing the size and increasing the volume fraction of the fillers\textsuperscript{188}. Harder filler particles are left protruding from the surface during polishing as the softer resin matrix is preferentially removed, and resin composites with larger filler particles are therefore expected to have higher Ra values after polishing. Since MBC, used in this study, is a highly filled hybrid composite with relatively large filler particles, this explains the roughness observed for MBC. In contrast to the results of our study, Buchgraber et al. 2011\textsuperscript{78} reported higher Ra values for SBC. Microfilled or microhybrid composites have a lower inorganic content, with smaller filler particle sizes, than hybrid and packable composites; they can therefore be finished to a smoother surface. Microhybrid composites contain filler particles ranging in size between 0.01 and 2 μm, which allows them to be polished to a smoother surface than hybrids, whose particle size is 0.6-1.4 μm\textsuperscript{189, 190}. Our results were similar to those of previous studies in which SBC exhibited a lower VHN than MBC\textsuperscript{71, 149, 191-193}.
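The hardness values compared here and below are Vickers hardness numbers (VHN); although the thesis does not restate the relation, the standard Vickers formula obtains VHN from the indentation load and the mean diagonal of the square impression as

\[
\mathrm{VHN} = \frac{2F\sin(136^{\circ}/2)}{d^{2}} \approx 1.8544\,\frac{F}{d^{2}},
\]

with \(F\) in kgf and \(d\) in mm, so a larger indentation at a given load corresponds to a softer, less mature surface.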
Measurement of the degree of conversion is also an indirect measure of hardness. The literature reports that the degree of conversion (DC) of SBC is lower than that of MBC\textsuperscript{65, 85, 193}. The DC of MBC is measured as a function of the number of C=C double bonds consumed during polymerization; the chemistry of the silorane composite monomers, however, contains no aliphatic C=C groups, and the polymerization of free-radical and cationic species is therefore different\textsuperscript{194}. The lower DC can also be related to the tetrafunctionality of the silorane molecule: once the molecule is trapped in the network being formed, the mobility of its remaining functionalities decreases. This may explain the low DC values of SBC, but does not necessarily result in a reduction of mechanical properties\textsuperscript{195}. MBC, being richer in highly reactive and more flexible monomers (such as TEGDMA), is expected to present a higher conversion\textsuperscript{85}.

5.2.4. Degradation resistance of silorane based and methacrylate based restorative material: Evaluation of water sorption and solubility

Properties of composites such as water sorption (WS) and solubility (SO) are important parameters with which the behavior of composite restorations can be predicted. WS by composite resins is a diffusion-controlled process that may cause chemical degradation of the material, leading to drawbacks such as a decrease in mechanical properties and reduced longevity of resin composite restorations. The SO of resin composites is reflected by the amount of leached unreacted monomer and by filler particle loss. The solubility results are expected to correlate with the water sorption, since the solvent needs to penetrate the material for unreacted components to be able to leach out. ISO 4049 is a standard method commonly used by researchers to determine the water sorption and solubility of restorative dental composites; the standard limits for water sorption and solubility are 40 μg/mm³ and 7.5 μg/mm³, respectively. The composites tested in this study showed an increase in mean WS values from day 1 to day 15 and day 30, but these values can be considered low and adequate for resin-based filling materials, as the ISO 4049 standard establishes a maximum WS value of 40 μg/mm³. The mean SO values of the composites were also lower than the maximum value established by the ISO 4049 standard, i.e. < 7.5 μg/mm³. Artificial saliva and distilled water were the two media used in our study to assess WS and SO. Artificial saliva is a model more compatible with oral environmental conditions and would therefore provide more realistic knowledge of these phenomena. Indeed, the effect of saliva on composites can be more deleterious than that of water, in agreement with the results of our study. Water molecules induce the degradation of composites via two mechanisms. Firstly, water molecules diffuse into the polymer network and occupy the free volume between polymer chains and in microvoids, causing plasticization and swelling of the polymer matrix, and also initiate chain scission, causing monomer elution. Secondly, water molecules tend to degrade the siloxane bonds (the bonds between silanol groups of the silica surface and the silane coupling agent) via hydrolysis, causing filler debonding. These processes lead to degradation or softening of the resin composite, which may diminish physical and mechanical properties such as hardness, strength and modulus of elasticity. Ferracane et al. 2006 showed that the reduction in mechanical properties is predominantly related to the water uptake of composites, and the water uptake is governed mainly by the hydrophilic nature of the monomers. Toledano et al. 2003 reported that WS and SO depend mainly on the resin composition.
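As a concrete illustration of the two quantities being compared here, the ISO 4049 convention expresses both per unit specimen volume. The following is a minimal sketch under that convention; the masses are hypothetical values chosen only to show the arithmetic and the negative-solubility case discussed below.

```python
def sorption_solubility(m1, m2, m3, volume_mm3):
    """Water sorption (WS) and solubility (SO) following the ISO 4049 convention.

    m1: initially conditioned (dried) specimen mass, in micrograms
    m2: mass after storage in water or artificial saliva, in micrograms
    m3: mass after re-drying (reconditioning) following storage, in micrograms
    volume_mm3: specimen volume, in mm^3
    Returns (WS, SO) in ug/mm^3.
    """
    ws = (m2 - m3) / volume_mm3
    so = (m1 - m3) / volume_mm3
    return ws, so

# Hypothetical disc of 15 mm diameter x 1 mm thickness (V ~ 176.7 mm^3):
ws, so = sorption_solubility(m1=353_000, m2=356_500, m3=353_200, volume_mm3=176.7)
# ws ~ 18.7 ug/mm^3, below the 40 ug/mm^3 limit;
# so ~ -1.1 ug/mm^3, negative because m3 > m1, i.e. not all absorbed water was removed
```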
Silorane is a monomer combining hydrophobic siloxane and low-shrinkage, ring-opening oxirane functionalities. Its cationic photo-initiated polymerization reduces the polymerization shrinkage and increases the degree of conversion. Owing to the hydrophobic siloxane backbone, the water sorption and solubility of SBC were lower than those of MBC. This agrees with studies in the literature reporting lower sorption and solubility for SBC, and also confirms the previous finding that siloranes are stable in aqueous environments. The results are shown in Figures 4.23 and 4.24. The polymer may be able to swell, but there may not be many unreacted monomers available to leach out. MBC is a methacrylate-based (Bis-GMA) material in which the high-viscosity base monomer requires the addition of diluent monomers such as TEGDMA. The presence of TEGDMA as a diluent may contribute to the increased sorption because of the hydrophilicity of this monomer. Such diluent monomers, coupled with the presence of hydroxyl groups in the Bis-GMA molecule, result in an increase in WS that promotes expansion of the restoration, affecting its durability and ultimately its longevity. However, other factors, such as the degree of conversion, the combination of monomers used and the filler loading, may also be significantly responsible and should not be ignored. A correlation between SO and degree of conversion was well explained by da Silva et al.: an increase in the degree of conversion reduces SO because, since SO reflects the amount of leachable unreacted monomer, fewer unreacted monomers are available to leach out when a high percentage of the aliphatic C=C bonds of the dimethacrylate monomers has reacted. Several factors affect the SO of composites, such as the number and size of leachable species, the type of monomers, the quality of resin-filler adhesion, the solvent, the immersion time and the temperature. The mass of the components eluted from the composite is obtained from the SO data. The SO and WS values were negative for both SBC and MBC. A negative value indicates that possibly not all of the absorbed water was removed by the drying process, i.e. the mass m3 (the mass after re-drying following storage) was higher than the mass m1 (the mass after specimen preparation). It may also be that, owing to the hydrophobicity of the materials, the water absorbed during storage was trapped, making solvent transport into and out of the specimen very difficult, and became included as part of the polymeric structure of the composite material, as explained by Vrochari et al. 2010\textsuperscript{206}. Besides unreacted monomers, inorganic ions present as fillers within composites can leach into the surrounding environment. In addition, water in contact with silica filler surfaces can break siloxane bonds, and this hydrolysis induces debonding of the filler particles, increasing the mass loss of the composite. Similar findings have been reported by other investigators\textsuperscript{56, 85, 154}.

5.2.5. Effect of elevated temperature on silorane based and methacrylate based restorative material

Given the devastating and fatal effects of fire, identification of human remains in mass disasters (fires), accidents and crime investigations is a difficult and challenging task. The forensic odontologist, through each stage of dental evaluation, uses the charred human dentition to narrow the search for the final identification from the post-mortem data\textsuperscript{57}.
Dental restorations, their filling materials and their radiological morphology are used as unique fingerprints and are often a main feature for identification. The ability to detect residual restorative material and the composition of an unrecovered adjacent restoration is a valuable tool-mark in the presumptive identification of the dentition when damage has been caused by heat\textsuperscript{60}. Knowledge of the charred human dentition and of residues of restorative materials and prostheses can help in the identification of victims burned beyond recognition (where the remains are disfigured, mutilated or skeletonized)\textsuperscript{57, 58}. Teeth, restorations of silver amalgam, gold, alloys, composites and ceramics, and acrylic prostheses are considered the most indestructible components and are able to resist extreme environmental conditions (heat and cold). Detection of these residual remnants is therefore as distinctive as a fingerprint and can be used as a valuable tool-mark in the identification process in forensic odontology. In the present study, the unrestored teeth mainly showed a color change from brown to black to gray, which turned completely ashy-white at 1000°C. Small fragments of teeth can be identified from the burnt remains, and a reliable estimate of the temperature of exposure can be made. This was in accordance with the changes described by Merlati et al. 2004, Rotzscher et al. 2004 and Patidar, Parwani and Wanjari 2010. According to Gustafson 1958, no major changes are observed in teeth and fillings below 200°C; hence, the study used temperatures of 500°C and above. A body is completely destroyed (cremated) at 870-980°C in 1-1½ hours; therefore, a temperature of 1000°C was selected. Teeth restored with composite were identifiable even at temperatures of 1000°C and showed discoloration, cracks and fractures. They are an important tool in identification as they are fire-resistant and radio-opaque. This was in agreement with the findings of Rossouw et al. 1999. Changes in the color and hue of the restoration and in its external appearance are the most commonly observed findings in an incinerated tooth. When a tooth is incinerated, dehydration causes shrinkage and fragmentation of the tooth, leading to displacement of the restorative material. In our study, the composite restorations were intact even at a temperature of 1000°C, which could be due to mechanical retention features created during cavity preparation. The results are shown in Figure 4.25. From these observations, the damage to teeth subjected to different temperatures and times can be categorized as intact (no damage), scorched (superficially parched and discolored), charred (reduced to carbon by incomplete combustion) or incinerated (burned to ashes). The charred appearance of a tooth indicates sudden, rapid carbonization and conversion into solid carbon, as occurs with any biomaterial exposed to high temperatures\textsuperscript{59}. This study was performed on extracted teeth; hence, identical results may not be obtained when a body is subjected to an intense heat source, but valuable information was obtained regarding the predictability of the effects. The effects produced depend upon variables such as the intensity of the heat, the protection afforded by surrounding tissue, the duration of exposure, the presence of an accelerant and the medium used to extinguish the fire\textsuperscript{208}.
The absence of these variables may have caused early evaporation of organic components, with subsequent shattering or explosion of the crown at around 1000°C\textsuperscript{208}. Teeth are remarkably resistant to heat if subjected to it gradually, but if heated severely the tooth may disintegrate. Care must be exercised while handling burnt teeth, and methods to reinforce the singed remains must be employed to prevent them from disintegrating. Fragmentation is the most important complication of burned human remains\textsuperscript{62, 211}.
A closed-loop recycling process for discontinuous carbon fibre polyamide 6 composites Rhys J. Tapper¹, Marco L. Longana*, Ian Hamerton¹, Kevin D. Potter¹ 1 Bristol Composites Institute (ACCIS), School of Civil, Aerospace, and Mechanical Engineering, Queen’s Building, University of Bristol, University Walk, Bristol, BS8 1TR, United Kingdom * To whom correspondence should be addressed: m.l.longana@bristol.ac.uk. www.bristol.ac.uk/composites Abstract The effects of a closed-loop recycling methodology are evaluated for degradation using a discontinuous carbon fibre polyamide 6 (CFPA6) composite material. The process comprises two fundamental steps: reclamation and remanufacture. The material properties are analysed over two recycling loops, and CFPA6 specimens show a total decrease of 39.7 % (± 3.5) in tensile stiffness and 40.4 % (± 6.1) in tensile strength. The results of polymer characterisation and fibre analysis suggested that the stiffness reduction was likely due to fibre misalignments primarily caused by fibre agglomerations, as a result of incomplete fibre separation, and by fibre breakages from high compaction pressures. The ultimate tensile strain was statistically invariable as a function of recycling loop which indicated minimal variation in polymer structure as a function of recycling loop. To the authors’ best knowledge, the mechanical performance of the virgin CFPA6 is the highest observed for any aligned discontinuous carbon fibre thermoplastic composites in the literature. This is also true for recycled specimens, which are the highest observed for any recycled thermoplastic composite, and, for any recycled discontinuous carbon fibre composite with either thermosetting or thermoplastic matrices. Keywords: Recycling; Polymer-matrix composites (PMCs); Compression moulding; Discontinuous reinforcement. 1. Introduction Landfill is the standard waste management method for carbon fibre reinforced polymers (CFRP), however it is becoming increasingly unfavourable due to environmental, societal, and economic pressure [1]. Growing social awareness of climate change, and an industrial recognition of the impending composite waste problem, led to the development of environmental legislature that regulates the recyclability of modern structures [2]. The development of recycled CFRP (rCFRP), and processes with which to recycle them, has attracted increased research interest in the last decade [3]. Over this period there have been three comprehensive reviews of the available technologies and future outlook: Pickering in 2006 [4], Pimenta & Pinho in 2011 [5] and Oliveux et al. in 2015 [6]. It is apparent that there is a need to develop a sustainable production cycle for CFRP that fits into the Circular Economy paradigm, developed by the Ellen MacArthur Foundation [7] and supported by the UK Composite strategy for future composite material production [8]. It is therefore equally important to recycle with the intent of finding a valuable application for the recyclate fibre and matrix. The development of a recycling process able to produce a high-value, structural rCFRP goes a long way towards closing the loop for CFRP manufacture, and meeting the requirements of a circular economy paradigm. The term recycling here describes the reclamation of constituents from waste composites and the subsequent remanufacture into useful composite components of value. 
The current thermoset material paradigm is not conducive to closed-loop recycling due to the currently unavoidable degradation of the thermoset matrix required to reclaim fibres, leaving potentially up to 50 % of the composite volume un-recycled. Thermoplastic matrices present a unique opportunity for recycling, as the molecular structure can be temporarily dissociated using heat or solvent treatment, i.e. through dissolution. Melt recycling causes a significant reduction in mechanical properties due to fibre breakage and matrix degradation during processing as a result of the high temperatures and shear forces required for extrusion and injection moulding [9]. The dissolution/precipitation technique involves the dissolution of a thermoplastic in a solvent at a given temperature. Once in solution, the polymer can potentially be separated from reinforcement by filtration and subsequently reclaimed by precipitation following the addition of a destabilising non-solvent [10,11]. This method has been applied to polystyrene [12], low density polyethylene and high density polyethylene [13,14], with the aim of providing a selective sorting process for mixed municipal wastes, and for recycling un-reinforced PA6, and PA66 [10]. It had not been applied to fibre reinforced polymers until a recent study where it was adapted for CF polypropylene (CFPP) composites [15]. The study showed that CFPP composite could be recycled using an equivalent closed-loop recycling process without any reduction in mechanical properties. The mechanical performance achieved was superior to alternative discontinuous CFPP composites, however it was lower than the requirement for high-value applications such as semi-structural automotive parts. This study aims to improve the previous closed-loop recyclable material, and associated recycling process, by incorporating a matrix of increased mechanical performance, i.e. polyamide 6 (PA6). This will provide a recyclable composite of enhanced mechanical performance that is applicable to a wider range of high-value applications. Without a high-performance, high-value application for rCFRP the closed-loop production paradigm, and thus the stable recycling infrastructure, cannot be realised. PA6 was used as the thermoplastic in this study due to the following advantages: It is a semi-crystalline thermoplastic that is soluble in low-hazard solvents; It can provide higher composite mechanical performance than polypropylene, due to increased matrix tensile stiffness, strength, and interfacial shear strength with CF; It has the best compromise of mechanical properties and processing temperature out of the other polyamide variants, i.e. higher mechanical performance than polyamide 12 and lower processing temperatures than polyamide 66; PA6 is commercially available and comparative composite data is available as CFPA6 have been widely evaluated in the literature [9,16,25–31,17–24]; The industrial applicability is demonstrated by its commercial availability [32]. 2. Materials and Methods 2.1 Materials Virgin PA6 (vPA6) was sourced in powder form from Goodfellow distributors. The material properties of vPA6, as determined through in-house polymer characterisation, can be found in Table 1. PA6 was selected as the matrix for the reasons specified above. 
Table 1. Material properties of virgin PA6 (vPA6), determined through in-house polymer characterisation.

        Density (g/cm³)   $T_m$ (°C)   $T_p$ (°C)   $X$ (%)   $M_w$ (g/mol)   $M_n$ (g/mol)   PDI
PA6     1.13              217          45.0         34.9      5.53 x 10$^4$   7.83 x 10$^3$

$X$ = crystallinity

Discontinuous CFs used in this study were supplied by TohoTenax (standard type C124). The TohoTenax fibres were 3 mm in length and came with a proprietary water-soluble sizing (3.8 wt.%) used to aid dispersion in the alignment carrier liquid. CFs of 3 mm were used as this is the optimum length for alignment using the HiPerDiF method; they are less prone to breakage than long fibres and are similar in form to rCFs. Material properties of the CFs used can be found in Table 2. Reagent grade benzyl alcohol and acetone were sourced from Alfa Aesar and Fisher Scientific, respectively.

Table 2. Material properties of the discontinuous carbon fibres (CFs) used in this study.

        Length (mm)   Density (g/cm³)   Diameter (μm)   $E_T$ (GPa)   $\sigma_T$ (MPa)   $e$ (%)
CF      3             1.82              7               225           4344

2.2 Experimental methods

2.2.1 Introduction

The closed-loop recycling methodology is represented schematically in Figure 1, complete with annotations indicating the stage where material characterisation occurred. The virgin CFPA6 specimens (vCFPA6) were manufactured from virgin constituents and mechanically tested. vCFPA6 specimens were then washed, chopped and reclaimed using benzyl alcohol as the solvent and acetone as the non-solvent, in line with the dissolution/precipitation technique. Fibres were filtered from the matrix solution; after this, each constituent followed separate reclamation and remanufacturing paths, which are detailed in later sections. Constituents were remanufactured into specimens analogous to the virgin composite, requiring no additional fibre or matrix, and the cycle was continued for a total of two loops, resulting in two recycled specimens, r1CFPA6-BA and r2CFPA6-BA. The method was developed to minimise property degradation, as the aim was to reclaim high-quality constituents and then remanufacture these into high-performance components after multiple cycles.

2.2.2 Composite reclamation

The experimental parameters used in the reclamation process can be found in Table 3. Full dissolution of PA6 occurred in benzyl alcohol at 160 °C after 1 hour. The volumes of solvent and non-solvent depend on the mass of PA6 in the waste feedstock. The key stage of reclamation was the separation of the fibres from the polymer solution. rCFs were filtered at 25 °C and dried in a vacuum oven (~ -29 inHg) at 80 °C for 14 hours.

Table 3. Reclamation parameters used for carbon fibre polyamide 6 (CFPA6) recycling.

            Solvent          Temp. (°C)   Time (mins)   Non-solvent   Conc. (% w/v)   S:NS   Fibre yield (%)
CFPA6-BA    benzyl alcohol   160                        acetone       1               1:1    96

Temp. = temperature, Conc. = concentration, S:NS = solvent : non-solvent
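The solvent and non-solvent volumes scale with the PA6 mass in the feedstock; the sketch below illustrates that scaling using only the Table 3 parameters (1 % w/v dissolution concentration, 1:1 solvent : non-solvent ratio) and a hypothetical batch mass, and is not taken from the authors' procedure beyond those parameters.

```python
def reclamation_volumes(pa6_mass_g, conc_pct_w_v=1.0, solvent_to_nonsolvent=1.0):
    """Estimate solvent and non-solvent volumes from the Table 3 parameters.

    conc_pct_w_v: dissolution concentration in % w/v (g of PA6 per 100 ml of solvent)
    solvent_to_nonsolvent: solvent : non-solvent volume ratio
    Returns (solvent_ml, non_solvent_ml).
    """
    solvent_ml = pa6_mass_g * 100.0 / conc_pct_w_v
    non_solvent_ml = solvent_ml / solvent_to_nonsolvent
    return solvent_ml, non_solvent_ml

# A hypothetical 0.5 g of PA6 in a batch of chopped specimens would call for
# roughly 50 ml of benzyl alcohol and 50 ml of acetone.
print(reclamation_volumes(0.5))   # (50.0, 50.0)
```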
The benzyl alcohol solvent system required dissolution at elevated temperatures, which increased the energy demand of recycling. However, elevated temperatures may result in a comparatively faster dissolution rate than other systems. The disparity in boiling point between solvent and non-solvent enables solvent reclamation through fractional distillation. An alternative solvent system was tested, using formic acid as the solvent with water as the non-solvent. This system can provide full dissolution at room temperature, which reduces the energy demand of the recycling process, a reduction that may become substantial at scaled volumes. However, it afforded a lower dissolution rate and caused solvent reclamation issues through incomplete separation, as the boiling points of the two liquids are very similar and they form an azeotropic mixture. The latter cannot be separated directly by boiling point, but can be separated by more complex methods, such as continuous reactive distillation or salt distillation, where the boiling point/vapour pressure of one component is varied to enable distillation [34]. These require additional materials and processes to provide solvent reclamation, which would likely increase the energy demand of reclamation and, ultimately, its cost. The rCF bundles were separated by the following wet sonication method: CFs (ca. 100 mg) were suspended in a beaker containing water (1 L) placed in an ultrasonic bath. Stirring separated most fibres; however, some agglomerations required additional manual separation. PA6 was precipitated by cooling to room temperature and adding acetone at a 1:1 volume ratio. The precipitate was vacuum filtered using Buchner apparatus, washed and dried in a vacuum oven (~ -29 inHg) at 80 °C for 14 hours. vPA6 and the reclaimed precipitates, r1PA6-BA and r2PA6-BA, underwent polymer characterisation to determine the effects of recycling on the polymer structure.

2.2.3 Composite re-manufacture

Dried rCFs were aligned into dry preforms using the HiPerDiF alignment method. Alignment produced highly aligned, dry rCF preforms with an area of 100 mm x 5 mm. The preform areal weight varied owing to intrinsic experimental variations of the HiPerDiF alignment method; however, the average total preform mass for each specimen was approximately 105 mg. Fibre analysis was carried out on the vCF and on both batches of reclaimed CFs, r1CFPA6-BA and r2CFPA6-BA. The rCF preforms and polymer precipitate were combined in an alternating ABA stacking sequence, so that each composite stack was made up of four preform layers and three matrix layers. The remanufacture tract of Figure 1 exhibits a representative cross-section of a composite stack in the $xy$ plane.
Six composite stacks were laid separately in an aluminium tool with six individual cavities of dimensions 1 mm x 5 mm x 100 mm; this resulted in six composite specimens of dimensions equivalent to the cavity after consolidation and cooling. The compression moulding cycle, conducted in a vacuum bag to remove the catalytic effects of atmospheric oxygen, is plotted in Figure 2. Additional remanufacturing variables are reported in Table 4. Composition analysis, mechanical characterisation and fractography were carried out on the virgin composite specimens and on the recycled composite specimens, r1CFPA6-BA and r2CFPA6-BA.

Figure 2. Compression moulding cycle used to manufacture carbon fibre polyamide 6 specimens. Specimens were held at 0.5 MPa (P1) for 4 hours until transfer from oven to hot press for consolidation at 11.8 MPa (P2). $T_m$ denotes the melting temperature of polyamide 6.

Table 4. Remanufacturing parameters used for carbon fibre polyamide 6 (CFPA6) recycling.

         CF mass (mg)   PA6 mass (mg)   Temp. (°C)   Pressure (MPa)   $V_f$ (%)   Flash
CFPA6    105*           180*            250          11.8             28

*Per specimen, Temp. = Temperature

The dominance of the matrix and fibre contributions to the overall composite behaviour could be determined by combining vCF with reclaimed matrix and vice versa; a constituent exclusion study was carried out to test these contributions. This study used a variation of the typical closed-loop process format, manufacturing composite specimens using vPA6 with second-iteration recycled fibres (r2CF), and first-iteration recycled PA6 (r1PA6) with vCF. The composites produced from these conditions were denoted r2CF.vPA6 and vCF.r1PA6, respectively.

2.2.4 Polymer characterisation

Polymer characterisation was carried out on the vPA6 powder and on the reclaimed powders after each recycling loop, i.e. r1PA6-BA and r2PA6-BA. DSC was used to examine the effect of recycling on the melting temperature ($T_m$) and the percentage crystallinity ($X$) of the polymer, determined using equation (1) [35]:

\[ X = \left( \frac{\Delta H_m - \Delta H_c}{\Delta H_m^{0}} \right) \times 100 \qquad (1) \]

where $\Delta H_m$ is the measured enthalpy of fusion, $\Delta H_c$ is the enthalpy of the crystallisation transition, and $\Delta H_m^{0}$ is the reference enthalpy of fusion for a pure crystal of the polymer, i.e. if the polymer were 100% crystalline. The crystallisation enthalpy contribution is only accounted for when a re-crystallisation event, i.e. annealing, occurs. For PA6, a $\Delta H_m^{0}$ of 230 J g$^{-1}$ was used, as sourced from the literature [36]. DSC routines included two thermal ramps; the first ramp heated from 30 °C to 250 °C and cooled to 30 °C, and the second ramp heated from 30 °C to 250 °C again, then cooled to room temperature. Analysis was carried out using a TA Auto Q2000 DSC in pierced, hermetically sealed aluminium pans on powdered samples (10 ± 3 mg), under flowing N$_2$ (50 cm$^3$/min), at a heating rate of 10 °C/min and a cooling rate of 20 °C/min.
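A minimal numerical sketch of equation (1), using the 230 J g$^{-1}$ reference value quoted above and a hypothetical measured enthalpy chosen only to illustrate the arithmetic:

```python
def crystallinity(dH_m, dH_c=0.0, dH_m_ref=230.0):
    """Percentage crystallinity from DSC enthalpies, following equation (1).

    dH_m:     measured enthalpy of fusion (J/g)
    dH_c:     enthalpy of any re-crystallisation event (J/g); 0 if none occurs
    dH_m_ref: enthalpy of fusion of 100 % crystalline PA6, 230 J/g [36]
    """
    return (dH_m - dH_c) / dH_m_ref * 100.0

# A hypothetical measured melting enthalpy of 80.3 J/g with no re-crystallisation
# event gives a crystallinity of about 34.9 %.
print(round(crystallinity(80.3), 1))
```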
A PerkinElmer Spectrum 100 Fourier transform infrared (FTIR) spectrometer was used to compare the vibrational frequencies of the polymer functionalities as a function of recycling loop. The characteristic vibrations present in the fingerprint region are sufficient confirmatory evidence of the specific functional groups of PA6, and of any subsequent shifts due to degradation; therefore, no further spectral analysis was conducted with infrared.

$M_w$, $M_n$, polydispersity (PDI) and the molecular weight distribution (MWD) of each recyclate were determined using gel permeation chromatography / size exclusion chromatography (GPC/SEC). Samples were analysed using a Malvern/Viscotek TDA 301 GPC system with associated pump and autosampler, with two PL HFiPgel 300 x 7.5 mm, 9 µm columns and an Agilent PL HFiPgel guard column. The mobile phase used was 1,1,1,3,3,3-hexafluoropropan-2-ol (with 25 mM NaTFAc) at 40 °C at a rate of 0.8 ml/min. A refractive index detector (with differential pressure/viscosity and right-angle light scattering) was used. Samples were prepared by adding the mobile phase (10 ml) to the sample (20 mg) and leaving it to dissolve overnight. Solutions were then thoroughly mixed and filtered through a 0.45 µm PTFE membrane, directly into autosampler vials. The vials were placed in an autosampler where injection of a vial aliquot was carried out automatically.

High performance liquid chromatography with mass spectrometry (HPLC-MS) was used to qualify the extraction of any additive molecules into the reclamation supernatant, as this could be a cause of mechanical performance reduction. Specific polymer additive mixtures are typically proprietary and therefore unknown; peaks found in the spectrum were cross-referenced with known polymer additives for identification of unknown solutes. This was carried out using a Thermo Orbitrap Elite LC-MS system. The mass spectrometer was operated in positive ion mode at 60k resolution, with spectra recorded between 120 and 1200 Da. A Waters Acquity BEH C18 1.7 µm column was used for chromatography. The column was held at 5 % acetonitrile/water for 1 min (0.25 ml/min) and compounds were then eluted with a linear gradient to 95 % acetonitrile over 23 minutes before re-equilibrating the column to the starting conditions.

2.2.5 Mechanical testing

Specimens were tested for their tensile properties in accordance with ASTM D3039. Axial force was provided by a Shimadzu AGS-X servo-electric tensile test machine with a 10 kN load cell and a constant cross-head displacement rate of 1 mm/min. Three strain measurements were taken along the length of each specimen using an Imetrum video-gauge system, over a gauge length of 50 mm; these were averaged to produce the presented strain measurement. Specimens were sprayed black and speckled with white dots to aid strain mapping, analogous to the testing carried out in previous studies [15]. All specimens were tested with the required GFRP end tabs bonded with cyanoacrylate adhesive. Tensile specimen geometry and dimensions are reported in Table 5 and displayed in Figure 3.

Table 5. Tensile test specimen dimensions (W, L, LO, G, T).

2.2.6 Composite composition analysis

The average thickness ($\bar{t}$) and width ($\bar{w}$) of the composite specimens were measured using a micrometer; three measurements were taken at three different locations along the length and averaged. The composite volume ($V_C$) and the volume fractions of fibres ($V_F$), matrix ($V_M$) and voids ($V_V$) were calculated from the measured cross-sectional area ($A_x$), measured composite length ($l_c$), measured mass of the fibre preforms ($m_F$), measured composite mass ($m_C$), calculated matrix mass ($m_M$), nominal fibre density ($\rho_F$) and matrix density ($\rho_M$), according to the expressions in equation (2):

\[ A_x = \bar{t} \times \bar{w}, \qquad V_C = l_c \times A_x, \qquad m_M = m_C - m_F \]
\[ V_F = \frac{m_F}{\rho_F} \cdot \frac{100}{V_C}, \qquad V_M = \frac{m_M}{\rho_M} \cdot \frac{100}{V_C}, \qquad V_V = 100 - (V_F + V_M) \qquad (2) \]

Changes in $V_F$ were monitored as a function of recycling loop. It was not possible to use any other $V_F$ measurement technique, e.g. resin burn-off, acid digestion or cross-sectional analysis, as these would make the materials unavailable for the subsequent recycling loops. Tensile stiffness and strength values were normalised by the calculated $V_F$.
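A minimal sketch of the equation (2) book-keeping, using the nominal densities from Tables 1 and 2; the dimensions and masses below are made-up values intended only to show the arithmetic, not measurements from the study.

```python
def composite_fractions(t_mm, w_mm, l_mm, m_fibre_g, m_comp_g,
                        rho_f=1.82, rho_m=1.13):
    """Fibre, matrix and void volume fractions (in %) following equation (2).

    t_mm, w_mm, l_mm: averaged specimen thickness, width and length (mm)
    m_fibre_g, m_comp_g: fibre preform mass and consolidated composite mass (g)
    rho_f, rho_m: nominal fibre and matrix densities (g/cm^3)
    """
    V_C = t_mm * w_mm * l_mm / 1000.0      # specimen volume in cm^3
    m_matrix = m_comp_g - m_fibre_g
    V_F = (m_fibre_g / rho_f) / V_C * 100.0
    V_M = (m_matrix / rho_m) / V_C * 100.0
    V_V = 100.0 - (V_F + V_M)
    return V_F, V_M, V_V

# Made-up example: a 1 mm x 5 mm x 100 mm specimen containing 0.255 g of fibre and
# weighing 0.640 g in total gives roughly V_F = 28 %, V_M = 68 %, V_V = 4 %.
print(composite_fractions(1.0, 5.0, 100.0, 0.255, 0.640))
```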
2.2.7 Fibre analysis & fractography

Fibre surfaces were qualitatively evaluated, using a Hitachi TM3000 scanning electron microscope (SEM) with a 5000 V accelerating voltage, to determine surface defects or surface deposition of matrix on the fibres in the dried preform state. Fibre length distribution (FLD) analysis was carried out to determine the effect of each loop on fibre length. A portion of each preform was cut, dispersed in water and slowly gravity filtered until an even distribution of fibres remained on the filter paper. A portion of this distribution was scanned using an Epson 11000XL. Each fibre within a consistent sample region of the high-resolution image was measured and collated using ImageJ software. Fracture surfaces were inspected using a Hitachi TM3000 SEM (5000 V accelerating voltage). One fracture surface was examined per six-specimen batch to maximise the amount of material available for recycling.

3. Results and Discussion

3.1 Polymer characterisation

The DSC thermograms are overlaid in Figure 4. Melt and crystallisation temperatures are reported alongside crystallinities in Table 6. Following the x-axis from left to right, the first endothermic peak, first exothermic peak and second endothermic peak represent the initial melting ($T_{m1}$), the crystallisation ($T_c$) and the second melting ($T_{m2}$) phase transitions of the polymers. vPA6 exhibited a sharp initial melting peak at 217 °C with a crystallinity of 34.9 %. r1PA6-BA and r2PA6-BA showed sharp $T_{m1}$ peaks at 219 °C and 220 °C, with crystallinities of 19.8 % and 31.1 % for the first and second recyclate, respectively. The disparity in crystallinity can be explained by the kinetic dependency of polymer crystal formation. In solution, polymers form coils that curl back on themselves in a spherical morphology described by the hydrodynamic volume [37]. Upon precipitation, the coil is instantaneously forced into the solid phase, giving little time for the polymer chains to align and form crystal structures. Therefore, the non-linear variation in crystallinity is not surprising, and does not indicate a change in the polymer chain structure. There is a second, broad melting peak after $T_{m1}$ in the vPA6 thermogram that is not present in the subsequent reclaimed precipitates; this suggests the extraction of a substance during the first reclamation. The crystallisation transition, at $T_c$, shifted from 177 °C for vPA6 to 186 °C and 187 °C for r1PA6-BA and r2PA6-BA, respectively.
This equates to an initial increase in the crystallinity of 23 % after the first recycling loop and a 6.3 % decrease after the second recycling loop. The second decrease in crystallinity is small and is assumed to be within the machine measurement variance, which suggests no significant change in the polymer structure. However, there is a significant change in the crystallisation behaviour after the first recycling loop. This shift can be explained as either: a) a consequence of the remanufacturing process, i.e. a change in the molecular weight distribution of the polymer; or b) the polymer chains forming crystalline regions not being proximal in the un-mixed melt phase of the precipitated polymer, and thus exhibiting different crystallisation behaviour to the homogenised vPA6 powder. The final melt transition occurred at the same temperature with the same crystallinity for each sample; however, the peak splits into two broad peaks. These represent the presence of two distinct crystalline phases in the polymer that were not present in vPA6. This further reinforces the hypothesis that the instantaneous precipitation resulted in a heterogeneous spatial (i.e. not statistical) distribution of polymer chains, causing significantly different crystallinities from vPA6 at the initial melting, recrystallisation and secondary melting transitions. The differences in crystallinity diminish with each melt transition, which indicates that the differences in spatial distribution are likewise equalised by melt processing. This also suggests that the variation in crystallinity is not a consequence of polymer degradation.

Table 6. Differential scanning calorimetry results from virgin and recycled polyamide 6 as a function of recycling loop.

            $T_{m1}$ (°C)   $T_c$ (°C)   $T_{m2}$ (°C)   $X_{m1}$ (%)   $X_c$ (%)   $X_{m2}$ (%)
vPA6        217             177          218             34.9           24.7        20.0
r1PA6-BA    219             186          218             19.8           30.4        20.3
r2PA6-BA    220             187          218             31.1           28.5        18.2

The FTIR spectra collected for vPA6, r1PA6-BA and r2PA6-BA are overlaid in Figure 5. The characteristic vibrational peaks for PA6, with the fingerprint-region profile used as confirmatory evidence as sourced from the literature [31,38], were present in all specimens and are listed in Table 7. There is a weak peak at 3060 cm$^{-1}$ which may represent a stretching vibration of crystalline hydrogen-bonded -N-H [38]. As this peak refers to the crystalline, hydrogen-bonded -N-H amine, it reflects the effect of reclamation on the variation in polymer crystallinity observed in the DSC thermograms. However, this is speculative and would require more detailed FTIR analytical techniques to be definitive. There are no observable peaks associated with the depolymerisation product caprolactam, which suggests either: a) that any degradation that had occurred was insubstantial; or b) that the caprolactam produced was extracted into the supernatant and did not precipitate out with the bulk polymer.

Figure 5. Fourier transform infrared spectra of polyamide 6 as a function of recycling loop.

Table 7. Vibrational peaks for polyamide 6 observed in the Fourier transform infrared spectra [31,38].
Wavenumber (cm⁻¹)   Vibrational peaks
3291                NH symmetric stretching
2933                CH₂ asymmetric stretching
2860                CH₂ symmetric stretching
1637                C=O symmetric stretching
1541                NH symmetric bending
1458                CN symmetric stretching
1419                CH₃ symmetric bending
1372                NH symmetric bending, CN symmetric stretching, CH₃ symmetric bending
684                 NH symmetric bending

The MWD curves obtained from GPC/SEC of vPA6, r1PA6-BA and r2PA6-BA are shown in Figure 6, and the $M_n$, $M_w$ and PDI values are tabulated in Table 8. The MWDs are representative of polydisperse semi-crystalline polymers, as expected for the PA6 used. The MWD is bimodal, showing a primary and a secondary peak. The secondary peak indicates the substantial presence of lower molecular weight chains. This could represent the presence of an additive polymer not extracted during reclamation or, most likely, unremoved low molecular weight, prematurely terminated PA6 by-products of step-growth polymerisation, a typical method of PA6 manufacture. After the first recycling loop there is a 27 % and a 12 % decrease in $M_w$ and $M_n$, respectively. The primary peak maximum remains the same height but shifts to a lower molecular weight; this corresponds to the scission of large molecular weight chains into shorter lengths. The secondary peak remains unchanged, suggesting that the corresponding chains are unaffected by reclamation. After the second recycling loop, the primary peak maximum increases in height and shifts to a lower molecular weight. This represents the scission of larger chains into medium lengths and an increase in shorter chain lengths; again, the secondary peak is unaffected. Overall, the GPC analysis shows successive degradation of the primary peak but no change in the secondary peak.

Figure 6. Molecular weight distribution from gel permeation chromatography of recycled polyamide 6 as a function of recycling loop. From left to right the plots run from low to high molecular weights.

Table 8. The $M_n$, $M_w$ and polydispersity values obtained from gel permeation chromatography of polyamide 6 as a function of recycling loop.

            $M_n$ (x10³ g/mol)   $M_w$ (x10⁴ g/mol)   PDI
vPA6        7.19 (0.1)*          5.53 (0.1)           7.69
r1PA6-BA    6.35 (0.1)           4.01 (0.1)           6.31
r2PA6-BA    6.38 (0.1)           2.83 (0.1)           4.43

* Coefficient of variance

The solids extracted into the supernatant after the first recycling loop were separated using HPLC and analysed using mass spectrometry (MS). The base peak chromatogram obtained is presented in Figure 7, and the corresponding molecular mass values from MS are tabulated in Table 9. The chromatogram indicates the presence of a varied mixture of compounds in a range of different quantities.
These peaks represent any molecules soluble in benzyl alcohol that were not precipitated from the solution. The corresponding molecular mass values of the main chromatogram peaks are all small, < 900 g/mol, and are unlikely to be polymer chains, which have masses several orders of magnitude larger; this suggests the presence of additive molecules.

Figure 7. The base peak chromatogram (BPC) obtained from high performance liquid chromatography of the r1PA6-BA supernatant.

Table 9. Base peak chromatogram peaks from the r1PA6-BA supernatant with corresponding mass values from mass spectrometry.

Peak   Molecular mass (g/mol)      Peak   Molecular mass (g/mol)
a      475.3                       k      461.2
b      362.2                       i      389.2
c      475.3                       m      403.2
d      588.4                       n      403.2
e      701.5                       o      360.4
f      814.6                       p      304.3
g      299.2                       q      743.6
h      304.2                       r      806.6
i      317.2                       s      806.6
j      430.3                       t      782.6
k      313.2                       u      758.6

The additive mixtures used in polymers are generally guarded, proprietary information; however, a range of known antioxidants with molecular masses from 220 g/mol to 1178 g/mol has been documented [39,40]. Peak h had a mass of 340.2 g/mol, which is the mass of the known additive Antioxidant 2246 [40]. The other peaks, especially those of larger mass, must correspond to the molecular ion peaks or fragment ions of other additives used. In any case, it is apparent that a mixture of small molecules is extracted in the reclamation supernatant.

3.2 Mechanical testing

Representative stress-strain curves obtained from tensile tests of CFPA6-BA as a function of recycling loop are shown in Figure 8; these may not exactly match the tabulated averaged results. The results of the tensile tests are recorded in Table 10 and represented graphically in the bar charts of Figure 9.

Figure 8. Left: representative stress-strain plots obtained from tensile tests of CFPA6-BA as a function of recycling loop. Right: representative stress-strain plots obtained from tensile tests of r2CF.vPA6 and vCF.r1PA6. Representative plots from vCFPA6 and r2CFPA6-BA are added for reference.

Figure 9. Bar charts showing the normalised tensile stiffness, ultimate tensile strength and ultimate tensile strain of CFPA6-BA as a function of recycling loop, and of the constituent exclusion specimens vCF.r1PA6-BA and r2CF.vPA6.

Table 10. Average mechanical performance data collected from tensile tests of CFPA6-BA after each recycling loop, and of the constituent exclusion specimens vCF.r1PA6-BA and r2CF.vPA6.
<table> <thead> <tr> <th></th> <th>$E_{T(0.28)}$ (GPa)</th> <th>$\sigma_{ult(0.28)}$ (MPa)</th> <th>$\varepsilon_{ult}$ (%)</th> <th>$\rho$ (g/cm³)</th> <th>$V_f$ (%)</th> </tr> </thead> <tbody> <tr> <td>vCFPA6</td> <td>60.2 (3.18)*</td> <td>695 (7.75)</td> <td>1.16 (7.24)</td> <td>1.09 (8.38)</td> <td>27.7 (4.61)</td> </tr> <tr> <td>r$_1$CFPA6-BA</td> <td>45.4 (8.02)</td> <td>425 (4.67)</td> <td>1.10 (6.81)</td> <td>1.13 (6.93)</td> <td>30.3 (11.1)</td> </tr> <tr> <td>r$_2$CFPA6-BA</td> <td>36.3 (2.78)</td> <td>414 (2.66)</td> <td>1.12 (5.10)</td> <td>1.16 (2.87)</td> <td>29.2 (0.91)</td> </tr> <tr> <td>vCF.r$_1$PA6-BA</td> <td>44.1 (6.67)</td> <td>559 (7.80)</td> <td>1.10 (8.61)</td> <td>1.14 (6.65)</td> <td>30.1 (1.64)</td> </tr> <tr> <td>r$_2$CF.vPA6</td> <td>36.0 (10.2)</td> <td>460 (13.1)</td> <td>1.30 (10.1)</td> <td>1.20 (6.9)</td> <td>29.7 (15.7)</td> </tr> </tbody> </table>

* Coefficient of variance

$\sigma_{ult}$ was determined as the maximum stress and $\varepsilon_{ult}$ refers to the corresponding strain. $E_T$ was determined by taking the gradient of the linear-elastic region within the bounds $0.001 < \varepsilon < 0.003$. $E_{T(0.28)}$ and $\sigma_{ult(0.28)}$ denote the tensile stiffness and ultimate tensile strength normalised to a 28 % $V_f$ following the rule of mixtures.

Tensile specimens exhibited a typical linear-elastic response followed by rapid, brittle failure. Statistical variance between data sets was determined using a Kruskal-Wallis test. To the authors' best knowledge, the mechanical performance achieved by vCFPA6 is currently the highest achieved for any aligned, discontinuous carbon fibre thermoplastic composite in the literature. Similarly, to the authors' best knowledge, the recycled specimens achieved a higher performance than any recycled discontinuous carbon fibre composite in the literature.

The brittle failure exhibited by all specimens suggested that the IFSS between the CF and PA6 enabled effective load transfer between fibres and matrix, resulting in significant fibre fracture. The IFSS between CF and PA6 is typically around 40 – 50 MPa, as reported in the literature [23]; this gives a critical fibre length of approximately $L_c = 0.3$ mm (see the sketch below). For vCF, this meant that all the fibres were above $L_c$ and could be maximally loaded. Although this is an approximation, as the exact IFSS of the fibre and matrix used was unknown, the value can be assumed sufficiently accurate, and therefore for vCF all fibres were above $L_c$.

vCFPA6 has a significantly higher tensile stiffness and ultimate tensile strength than the subsequent specimens. After the first recycling loop the tensile stiffness and ultimate tensile strength decreased by 25 % and 39 %, respectively. After the second recycling loop the ultimate tensile strength and the ultimate tensile strain remained statistically unchanged; however, the tensile stiffness dropped by a further 20 %. Composite stiffness is a fibre-dominated property, which suggests a successive decrease in fibre alignment as a function of recycling. The second recycling loop did not result in an ultimate tensile strength reduction. Reduction in composite strength could be a result of either, or a combination of: a) a decrease in intrinsic polymer strength; b) a decrease in fibre-matrix adhesion; c) fibre breakage producing a higher proportion of fibres below the critical fibre length; or d) the presence of defects that act as stress-raisers.
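Both the critical-length estimate and the 28 % $V_f$ normalisation quoted above follow simple closed-form relations. The sketch below illustrates them; it is not the study's code, the fibre strength and diameter are assumed values typical of standard-modulus carbon fibre (they are not reported in this excerpt), and the normalisation keeps only the fibre term of the rule of mixtures.

```python
# Illustrative sketch: Kelly-Tyson critical fibre length and rule-of-mixtures
# normalisation of a fibre-dominated property to a reference fibre volume fraction.
# Assumed inputs: fibre strength ~4000 MPa, diameter ~7 um (typical values, not
# measured here); the 40-50 MPa IFSS range is the one quoted in the text.

def critical_fibre_length(sigma_f_MPa: float, d_um: float, tau_MPa: float) -> float:
    """Kelly-Tyson estimate L_c = sigma_f * d / (2 * tau), returned in mm."""
    d_mm = d_um / 1000.0
    return sigma_f_MPa * d_mm / (2.0 * tau_MPa)

def normalise_to_vf(value: float, vf_measured: float, vf_ref: float = 0.28) -> float:
    """Scale a fibre-dominated property linearly to a reference Vf,
    neglecting the small matrix term of the rule of mixtures."""
    return value * vf_ref / vf_measured

print(critical_fibre_length(4000.0, 7.0, 45.0))  # ~0.31 mm, consistent with L_c ~ 0.3 mm
print(normalise_to_vf(40.0, 0.30))               # hypothetical 40 GPa measured at 30 % Vf -> ~37.3 GPa at 28 % Vf
```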
There is the potential for a strength reduction from thermal shrinkage, associated with an annealing process, causing residual stresses and microcracks throughout the composite. However, there are no traditional annealing stages in the remanufacturing process, which is kept consistent for each batch. The only polymer-related difference as a function of recycling that could alter the mechanical properties in this way would be a variation in crystallinity (ratio of crystal structures); this is accounted for by the DSC results, which show a negligible difference as a function of recycling. As the polymer characterisation results suggest limited polymer degradation, a decrease in polymer strength can be ruled out as the main contributor. This conclusion is supported by the 40 % decrease in ultimate tensile strength observed in the r$_2$CF.vPA6-BA specimens, as presented in Figure 8 and Table 10. Despite containing virgin polymer, these specimens show a marked strength reduction, equivalent to the decrease observed in the r$_2$CFPA6-BA specimens.

Compared with vCFPA6, vCF.r$_1$PA6 exhibited a reduction in tensile stiffness and ultimate tensile strength of 27 % and 20 % respectively, but achieved a similar strain; see Figure 8 and Table 10. The assumption made was that, because both specimens contained vCF preforms, any differences must be caused primarily by the matrix. However, there was variance in the areal weights produced by the alignment process: the preforms used in the vCF.r$_1$PA6 specimens had lower areal weights than those used in vCFPA6. This meant that the property reduction arose because either: a) there is a significant matrix contribution to composite performance, or b) there is significant variance in the preform properties produced by the alignment process, which causes fluctuations in alignment. The significant drop in properties of the vCF.r$_1$PA6 specimen, when compared with the vCFPA6 specimen, suggests that variation in fibre deposition and alignment, an aspect of the alignment process itself, is the most likely cause. It is possible that this variation could be exacerbated as a result of recycling; however, detailed analysis of the effect of fibre agglomeration/separation on the alignment process would be required to verify this. In any case, this is predominantly a process issue and is separate from any consequences of intrinsic material property degradation, which is the focus of this study. The consistency of brittle fracture between recycling loops suggests that a reduction in fibre-matrix adhesion was not the main contributor either. It is likely that fibre breakage and defects are the cause of the strength reductions, both artefacts of the remanufacturing process.

### 3.3.3 Fibre analysis

The fibre length distributions of preforms made from vCF, and from rCF after each recycling loop, are presented in Figure 10. In the vCFPA6 preform sample, 63.9 % of fibres are in the range 2.8 – 3.2 mm. After the first recycling loop, 38.6 % of fibres are in the range 2.8 – 3.2 mm. The r1CFPA6-BA fibre length was more evenly distributed, with the 0.4 – 0.6 mm, 0.6 – 0.8 mm and 0.8 – 1.0 mm bins holding 7.5 %, 9.1 % and 9.8 % of the distribution and the 3.0 – 3.2 mm bin representing 15.9 %. The fraction of fibres below the 0.3 mm alignment plate spacing was ≤ 3 % in each recycling iteration; therefore the alignment, and thus the composite stiffness, should not be significantly affected by the evening out of the distribution (a short sketch of this binning check is given below).
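As a minimal illustration of the bookkeeping used above (0.2 mm length classes and the fraction of fibres below the 0.3 mm plate spacing), the sketch below runs the same tally on a short, made-up list of fibre lengths; none of the numbers are the measured data of this study.

```python
# Illustrative fibre-length binning (hypothetical lengths, 0.2 mm bins).
import numpy as np

lengths_mm = np.array([3.0, 3.1, 2.9, 0.5, 0.7, 0.9, 1.4, 2.2, 0.25, 3.0])  # made-up sample
bins = np.linspace(0.0, 3.4, 18)          # 17 bins of width 0.2 mm
counts, edges = np.histogram(lengths_mm, bins=bins)
fractions = counts / counts.sum()

for lo, hi, f in zip(edges[:-1], edges[1:], fractions):
    if f > 0:
        print(f"{lo:.1f}-{hi:.1f} mm: {100 * f:.1f} %")

# Fraction shorter than the 0.3 mm plate spacing (and the estimated L_c):
print("below 0.3 mm:", 100 * np.mean(lengths_mm < 0.3), "%")
```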
This indicates that the decrease in fibre lengths can be ruled out as a significant contributor to the reduction in tensile stiffness observed.

Figure 10. Fibre length distributions of fibre preforms used in vCFPA6, r1CFPA6-BA and r2CFPA6-BA.

SEM micrographs were taken of aligned preforms of vCFPA6, r1CFPA6-BA and r2CFPA6-BA before impregnation; see Figure 11. Figure 11a shows the vCFPA6 preform with clean fibre surfaces at both magnifications. Figure 11b shows the r1CFPA6-BA preform with a coating of residual matrix adhered to the fibre surface. The micrographs of r2CFPA6-BA, Figure 11c, show the same residual matrix in qualitatively similar quantities to r1CFPA6-BA. It is possible that the PA6 residue acted as a form of sizing that increased the IFSS. However, this did not significantly affect the ultimate tensile strength, as observed in previous studies. There is a successive increase of fibre misalignment in the micrographs, which reflects the successive decrease in tensile stiffness; however, this may be due to handling of the preform samples during SEM sample preparation.

Figure 11. (a-c) Preform cuttings taken from aligned preforms, of each recycling loop, before impregnation: a) vCFPA6, b) r$_1$CFPA6-BA, c) r$_2$CFPA6-BA. r$_1$CFPA6-BA and r$_2$CFPA6-BA show slight deposition of residue on the fibre surface, which is deemed to be unwashed matrix.

### 3.1.2 Fractography

SEM micrographs of vCFPA6, r1CFPA6-BA and r2CFPA6-BA fracture surfaces are shown in Figure 12. Figure 12a shows a typical vCFPA6 fracture surface where some pull-out, fractured fibre ends and well-coated fibres are clearly visible. This reflects the linear elastic strain response and brittle failure, as the fibres are well adhered to the matrix, enabling effective stress transfer. Figure 12b showed similarly clean matrix crack surfaces; however, there were large 'valleys' along the cross-section where large sections of composite had cracked off. The large 'valleys' were also visible in the r2CFPA6-BA fracture surface in Figure 12c.

Figure 12. (a-c) SEM micrographs of fracture surfaces: a) vCFPA6, b) r1CFPA6-BA, c) r2CFPA6-BA.

The fracture surface 'valleys' suggest that large sections of composite have become detached from the opposite face. This is a consequence of crack propagation past a fibre-dense region, through the matrix-rich regions that typically surround it. The stiffness variation between the two regions initiates and rapidly propagates cracks through the matrix, resulting in reduced tensile strength. The fibre-dense regions are likely to be caused by fibre agglomerations that were not separated by the wet-sonication method.
This provides a plausible explanation for the reduction in strength observed in the recycled composite specimens. It also explains the reduction in stiffness, as the fibre agglomerations will likely cause misalignment of the surrounding fibres.

### 3.4 Conclusion

The mechanical performance, polymer properties and fibre properties of virgin and recycled CFPA6 were examined experimentally. The analysis linked macroscopic performance with microscopic fibre surface phenomena, fibre length and polymer behaviour. DSC analysis indicated that the crystallinity of the polymer was affected by reclamation; however, this was likely due to the effect of rapid precipitation on crystal formation and not a consequence of polymer degradation. FTIR spectra also showed no additional peaks or shifts in vibrational frequencies, suggesting unchanged polymer functionalities. GPC analysis reported successive degradation of the large polymer chains; however, the secondary peak chains were unchanged. HPLC-MS indicated the extraction of small molecules into the supernatant; these masses are substantially less than even small polymer chains. Polymers would have been insoluble in the HPLC carrier solvent and removed before analysis, and are therefore not visible in the chromatogram. The small molecules had similar masses to known PA6 additives and their fragment ions.

The CFPA6 specimens showed a total decrease of 39 % (± 3.5) in tensile stiffness and 40.4 % (± 6.1) in tensile strength. The ultimate tensile strains obtained for the CFPA6-BA specimens were statistically unchanged as a function of recycling. The ultimate tensile strength reduction, after careful consideration of the potential causes, was most likely due to the presence of fibre agglomerations in the preforms, which formed fibre-dense stress-raisers in the composite, resulting in early failure. Evidence of the fibre agglomerations was observed in the fracture surfaces of the recycled composites. As composite stiffness is predominantly fibre dependent, the stiffness reduction was most likely a result of fibre misalignment caused by the reclamation process. The fracture surfaces of r1CFPA6-BA and r2CFPA6-BA showed relatively deep 'valleys', which suggested that large sections were cleaved during fracture. This indicated the presence of fibre-rich regions surrounded by matrix-rich regions through which cracks easily propagate.

Fibre agglomerations were the likely cause of the early failure observed in the recycled specimens. Agglomerations occur during reclamation and are difficult to separate; some pass through the alignment process without sufficient prior separation and end up in the preforms. They were not observed in previous studies using an equivalent remanufacturing process; this is likely a consequence of the different separation methods used, suggesting that dry carding is the most effective rCF separation method with regard to minimising agglomerations. The effect of the fibre agglomerations on the mechanical performance indicated that the fibre contribution to composite stiffness and strength was significantly greater than that of the matrix. This makes the issue of reclaimed fibre separation, and pre-treatment prior to remanufacture, of great importance for future investigations and process development. The mechanical performance achieved by the vCFPA6 specimens was considerable; to the authors' knowledge, these are the highest mechanical performances achieved by any discontinuous CFPA6 composite in the literature.
There was a knockdown in performance after the first recycling loop; however, the mechanical performances of r$_1$CFPA6-BA and r$_2$CFPA6-BA were, to the authors' knowledge, the highest observed for any recycled thermoplastic composite and for any recycled discontinuous CF composite with either a thermosetting or thermoplastic matrix.

4. Acknowledgements

This work was supported by the Engineering and Physical Sciences Research Council through the EPSRC Centre for Doctoral Training at the Advanced Composites Centre for Innovation and Science (ACCIS, Grant number EP/L016028/1) and the "High Performance Discontinuous Fibre Composites – a sustainable route to the next generation of composites" project (Grant number EP/P027393/1). All data required for reproducibility are provided within the paper.

5. References

Experimental data on carbon fibre reinforced polyamide-6 composite (CF60/PA-6) under longitudinal and transverse compression loading 2018;1. doi:10.17632/VPR4TFG27J.1.
Botelho E, Figiel L, Rezende MC, Lauke B. Mechanical behavior of carbon fiber reinforced polyamide composites.
EUROWIDE FILM PRODUCTION & FOZ present A FRANCO-ITALIAN COPRODUCTION WITH TEODORA FILM IN ASSOCIATION WITH COFICUP 3 - A FUND OPERATED BY BACKUP FILMS RICKY A FILM BY FRANÇOIS OZON WITH ALEXANDRA LAMY SERGI LOPEZ MÉLUSINE MAYANCE ARTHUR PEYRET ANDRÉ WILMS JEAN-CLAUDÉ BOLLE-REDDAT LENGTH 1H30 - FRANCE / ITALY World Sales: Le Pacte 5 rue Darcet - 75017 Paris Tel: +33 44 69 59 55 - Fax: +33 44 69 59 42 Elisabeth Perlié (Head of International Sales) +336 63 86 77 02 - e.perlie@le-pacte.com Nathalie Jeung (International Sales) +336 60 58 85 33 - n.jeung@le-pacte.com International Press: Magali Montet Tel: +33 6 71 63 36 16 magali_montet@yahoo.fr When Katie, an ordinary woman, meets Paco, an ordinary man, something magical and miraculous happens: they fall in love. Out of their love comes an extraordinary baby: Ricky. The starting point for RICKY was a short story by English novelist Rose Tremain... In English, the title of the story is MOTH, those nocturnal butterflies who are drawn to light. In the French version, the story was called LÉGER COMME L'AIR (LIGHT AS AIR). I liked it immediately when I read it, but I didn't think I could adapt it. The story is very short and its mood reminded me of ROSETTA by the Dardenne brothers: the characters are poor, underprivileged white people living in a trailer park in the heart of the United States. Because of the setting, I wasn’t sure how to approach the story, how to make it mine. And although I liked the way an extraordinary, amazing event disrupts the characters’ otherwise bleak existence, the fantasy element frightened me. It seemed impossible to render. But then I realized that what touched me about the story wasn’t so much the fantasy element as the way it talks about family, our place in it, and how a new member - a new partner or a new child - can shake up the balance. You’re constantly mixing comedy and fantasy in the film. When Katie and her daughter are tending to Ricky’s wings, we don’t know whether to laugh or cringe... There’s an irony in Rose Tremain’s writing that corresponds to my own, and I wanted to preserve that in the film. Whenever the story gets too unreal or bizarre, elements of humor and distance come in to release tension and make the scene work. Katie and her daughter really enjoy looking after this extraordinary baby. Ordinary, everyday life playing out in extraordinary circumstances: RICKY reminds us of UNDER THE SAND with its mixture of realism and fantasy. I'm only interested in fantasy when it's presented in a believable way that allows for audience identification. That's why I wanted to show the process of Ricky's wings growing in precise detail. In the short story the wings appear suddenly, without any explanation. Overnight, the baby has wings. How did you imagine the process? The writing process involved imagining the physical changes the baby would experience, as well as the realistic reactions of those close to him. The first question was: When do the wings appear? Are they present at birth or do they show up later? I thought the lumps should begin to appear when the baby was several months old, that way they would embody the deterioration of the adults' relationship. Also, that time span would allow the family a long moment together before any doctors got involved. So Ricky's wings begin to grow when he's about 7 or 8 months old. 
As far as that process went, we simply took our inspiration from the way baby birds' wings develop: little lumps slowly begin to form, then mature into stumps with feathers pricking through the skin like tiny fingernails. We tried to replicate the actual way birds' wings develop as best we could, while respecting the aesthetics of the film and its narrative. The idea was to have the different stages of wing growth punctuate different stages of the family members' relationships: the wings appear as bumps that Katie interprets as injuries Paco has perpetrated against the child, leading her to separate from him. When the feathers begin to grow and Ricky begins to fly, the mother and daughter grow close again, etc. You've got some major ellipses in the film, especially in the beginning. The ellipses keep us moving through the various stages of a typical love story: the loneliness, the meeting, the forming of the couple, the little girl feeling left out, the arrival of the baby. It's all necessary to bring us to the heart of the story: the birth of Ricky. And why did you start the film with the scene of Katie talking to the social worker, then go into flashback? I knew this deliberate choice would provoke a variety of interpretations, and I like that. I want to give the audience the freedom to react to the story in their own way, to invent their own interpretations based on personal experience. For me, the scene takes place in the middle of the narrative, just after Paco has left and Katie is on her own with Lisa and Ricky. I wanted to show this «mother courage» at a breaking point, doubling her abilities as a mother, her desperation pushing her to consider placing her child in foster care. Inserting it at the beginning of the film allowed me to rapidly establish the character's social background and introduce the recurring theme of the maternal bond. I also liked the idea of playing around with audience expectations about flashbacks. The realism of the scene implies we are going to see a social drama, making the fantasy element all the more surprising. RICKY is a film about family, but the main character is a woman... I like portraits of women and I wanted to explore the theme of maternity again, but in a different way than I had done with SEE THE SEA. In that film, two facets of the maternal instinct were illustrated through two very different women: the good mother and the monstrous mother. In RICKY, those two aspects are present in the same mother, Katie, and we follow the complex evolution of her maternal impulses. At first she's a lioness, seeking to protect her young, then she becomes a more playful, childlike mother, playing with her baby almost like a child plays with a doll. And finally, she is a mother confronted with the reality of a baby who needs care, a child she's going to have to share, and eventually let go of. Do you think the maternal instinct is more complex than the paternal instinct? I find it more interesting, because the child comes out of the mother's body. Often, mothers feel their children are extensions of themselves. The physiological side of birth and the organic link between mother and child fascinate me. And yet the father, Paco, is also a complex character, whereas the father in the short story clearly only comes back for the money. I wanted to deepen the relationship between the man and the woman. 
It's true Paco wants to make money off Ricky by charging the journalists to see him, but his motivations are not purely cynical, he's also being logical: with the money earned, they could buy a house, and have enough space to raise Ricky in better conditions. Of course, Paco only comes back when he learns that Ricky is an unusual child. But in his defense, he hadn't had much room to let his paternal feelings develop - Katie had quickly excluded him. I think it's not unusual for fathers to find themselves feeling squeezed out, and the film explores that too. Why did you choose Sergi Lopez for the role of Paco? I'd wanted to work with him for a long time. I wrote the character with him in mind, especially the scenes where Katie talks about how hairy he is. Sergi is a very subtle actor. He's sensual, there's something feminine about the way he moves, and yet at the same time he's extremely virile, which appeals to women and reassures them. He brought ambiguity and humanity to a character who could seem quite negative on paper. And Alexandra Lamy? When I saw her on TV in UN GARS, UNE FILLE (A GUY, A GIRL), I thought she was an interesting actress. She's got a gift for comedy and repartee, she's quick, her timing is excellent. She reminded me of American actresses from the screwball comedies, but I sensed she was also capable of excelling in a more dramatic role. Also, Alexandra can embody the common, unrefined side of Katie's character. I felt that with her, the audience would find the story more believable than if Katie had been played by a more high-profile actress. My main objective with Alexandra was to slow her down, help her feel comfortable with silence and absence. I wanted her to take her time. What about shooting without make-up? Alexandra knew about that from the start and had no problem with it. She's in no way a narcissistic actress. It was important that Katie not be in seduction mode. I wanted to see her skin, her body as it really is, not idealized, not overly beautified... the goal being to stay as close as possible to reality. At the same time, contrary to the usual clichés, I wanted to show the beauty of the working-class suburb where Katie lives, capture the photogenic potential of the housing estate and the lake with its reflections. I tried to combine realism with a certain amount of stylization. I was interested in depicting Katie’s social background because it allowed me to accentuate the notion of confinement that exists in every family. If Katie had been from a middle-class background, she probably would have consulted a top doctor. As it is, she prefers to hide the baby away because she doesn’t feel like she’s part of the system. And the arrival of this baby is like a wonderful stroke of luck, a wonderful event in her otherwise gray, dull existence. The baby is a real source of richness, both literally and figuratively, that she wants to keep for herself. Did you see the special effects as an obstacle or were you excited at the prospect of using them? When the project was in the development stages, we were a bit nervous. Special effects + a baby: that’s a lot of obstacles. But in the end everything went smoothly, much better than the investors and the insurance companies expected. I like special effects when they’re an integral part of the story, when they serve the story, like in Jack Arnold’s THE INCREDIBLE SHRINKING MAN, which was a reference for me. Or in David Cronenberg’s films: he knows how to exploit their organic side. 
I tried to keep things simple, to avoid doing flashy or overly technical shots because we were using special effects. On the contrary, I wanted to integrate the effects into a straightforward mise-en-scene, with daily life and concrete actions taking place within static shots, reverse shots and continuous shots. This made the special effects that much harder to conceive, as they are usually integrated into quick shots and rapid-fire editing, leaving no time to really see them, only enough time to get a sense of them. The special effects guys at BUF were actually pretty nervous themselves, when they saw the final cut of the film and realized what they were up against! Ricky is not a very realistic name in the context of the film. The baby was called Ricky in the short story. When I began adapting it, I kept the name and in the end, I got used to it and it stuck. For English speakers, the name is outdated and sounds a bit silly today. I thought it was funny, it reminded me of the American TV shows I used to watch as a child. And since Lisa’s the one who chooses the baby’s name in the film, one could say the whole story is a figment of her young imagination... Were you surprised that François Ozon thought of you for the role of Katie? When my agent called to tell me I had an audition with François Ozon, I was indeed surprised. I have a theater background, I studied at the Conservatory, but most people associate me with my role in the television comedy sketch show UN GARS, UNE FILLE (A GUY, A GIRL), which is very different from François’ work. That was actually just as well, because I went to the audition totally relaxed, convinced I’d never get the part. No nerves, no anxiety, I just wanted to have fun. How did that first audition go? I did my two scenes, and I sensed that François was touched. He had me redo a couple of things and then said simply, «We’ll let you know». I didn’t hear from them for several months, so I assumed he’d found someone else. But then my agent called to say François wanted me to come back and do some more tests. At this point it was down to two actresses, so now I was nervous as hell! I played the scenes again and did some tests with the little girl who was cast as Katie’s daughter. When I saw Mélusine, who looks like me, I felt reassured and thought I might have a chance at getting the part! Oddly enough, the Katie character resembles me more than any of the comedy roles I’ve been proposed so far in film. Like many of François Ozon’s films, RICKY is above all the portrait of a woman. Yes, and like with Charlotte Rampling in UNDER THE SAND, François fought to get the actress he wanted, his choice wasn’t immediately obvious. François Why the choice of wings to symbolize difference, in your opinion? To me, the wings symbolize the angelic side of childhood. As well as the desire for freedom and the importance of letting our children leave the nest, even if it's difficult. I love the scene at the lake, when Katie sees Ricky and says to him, «My God, how you've grown.» Ricky's no longer a baby, he's now a little boy, and Katie is relieved to see him doing so well. She in turn feels free and fulfilled, ready to rebuild her family. And get pregnant again. Katie comes from a particular social background, her son Ricky is very unusual, yet she has such a universal quality. How do you explain your ability to create such a strong identification factor? Maybe because I never think about my appearance when I'm acting. 
I don't pose, I don't worry about only showing my best profile or standing in a certain way. I don't see myself, I don't examine myself, I forget myself entirely to enter the mind and body of my character, and I guess that shows on screen. I like actresses like Meryl Streep, who don't care if they're beautiful or not. They make us want to enter the character with them, because they resemble us. How did you approach the character? I did what Katie would do: I didn't think about it too much, I just let the character come to me. I didn't do much research or try to understand her motivations. I just learned all my lines, well in advance, without thinking too much about how to play them. I wanted to arrive on the set a «virgin». Katie reacts instinctively and I wanted to be like her. Even when I'd rehearse with François, I'd often do what we call an allemande: I'd walk through the scene without really acting the lines. That way, when you hear «action», you're fresh, you haven't played the scene out, and the director himself benefits from the element of surprise. Did François show you any particular films to inspire you? Yes, WANDA by Barbara Loden. Katie's story is completely different from Wanda's, but they share the same spontaneity. Wanda doesn't think things over much, she lets life happen to her and follows her own logic. She sleeps with men according to her instinct, stays with them even if they beat her... and abandons her children without guilt. We sense Katie and her daughter Lisa really enjoy caring for this exceptional baby... They have fun watching Ricky grow and evolve, and the shared experience brings them closer together. This part of the film has a comedic touch that really means a lot to me. The script is darker than the finished film, and François and I discussed this at length. At first he didn't want me smiling in these scenes, but he had the good sense to let me try some lighter, more joyful approaches, and he ended up using them. Katie is hard, even castrating, when she accuses Paco of harming Ricky... Katie is hard in the way I imagine someone with a hard life could become. Someone who works the assembly line in a factory and lives in depressing surroundings. We sense she's carrying heavy baggage. Lisa's father undoubtedly left her. In the scene where she's waiting for Paco to come home, ready to accuse him of having hit Ricky, François asked me to remain seated. We might have expected her to stand up to confront him about it, but she remains seated, so we sense she's made her decision before even speaking to him. She blames him for what happened to Ricky and she's not going to discuss it. When Ricky's wings grow, proving Paco's innocence, she could call him back, but instead she chooses to savor this amazing event in her otherwise dreary existence on her own. She even says to her daughter, «I haven't thought of Paco since Ricky got wings». Katie is down to earth and has a kind of working-class sensibility that I really like. When she discovers her son's wings, her reaction is, «They're bound to fall off»! Of course she also has moments of anxiety, but she doesn't dwell on that, she's a woman of action. What was it like shooting a film with children and special effects? I loved Arthur, the boy who plays Ricky, but people don’t always have a lot of patience with babies, especially on film shoots. 
He’s wonderful in the film, but sometimes it took a few takes, with us making faces to get him to laugh, or hopping around to attract his attention and get him to stop staring at his mother, the boom mike or even the camera! It takes a lot of effort and it’s pretty exhausting. Mélusine is adorable and very talented, but with her too, it was important to create a bond, play with her. As for Ricky’s wings, they weren’t always visible during the shoot. Sometimes there were dummy ones, but more often you had to imagine them on the baby. I had to bear in mind that if I held my head at a certain angle or picked up the baby in a certain way, my face would be blocked by the wings. You and Sergi Lopez form a highly believable couple. Sergi is a great acting partner, we share the same instinctive approach. François gave us the freedom to change a few words here and there, so we really listened to each other, we acted in sync, looking right into each other’s eyes. Sergi is also a great father, he was very attentive with the kids. He’s like a big teddy bear, very sensitive, his emotions are right on the surface. He really cried in the break-up scene, and at the same time he can be frighteningly ambiguous, as when Paco comes back. Tell us about how François Ozon directs actors. A lot of filmmakers are primarily preoccupied with the image. François puts the actors first. If he’s blocked a scene in a certain way but we can’t pull it off because it doesn’t feel right, he understands that. He’s flexible when it comes to dialogue and movement. And he frames the shots himself, which I love. I could feel him watching me, listening to me, drawing me out. I felt supported. François won’t let you go until you’ve given what you’ve got to give. Oddly enough, in comedies where I’ve played roles that are closer to the role I’m known for, directors have been afraid to trust me, afraid I’d just repeat my character from A GUY, A GIRL. François gave me that trust. He chose me because he felt I was best suited for the role, not just for show or to prove he could break my image. Were you familiar with François Ozon’s films before working with him? No, I don’t see a lot of films and I don’t live in France. But I’d met him before, during a promotional tour. And I knew people who had worked with him. Not having seen his films didn’t worry me - my ignorance is part of who I am. I don’t choose projects based on a filmmaker’s previous work but rather on the script he’s proposing. I want to be in the moment and follow my instinct when I’m reading a script, that’s what matters. You don’t want to be biased, you want to get into the story. What was your reaction when you read RICKY? I was immediately won over by its almost magical simplicity. The story goes straight to the heart of things, like a tale. Paco and Katie’s relationship begins in a very direct way and moves forward quickly. What about Ricky having wings? Despite this surreal detail, RICKY is not science fiction. It’s actually a realistic film about something that’s not realistic. It’s unsettling to see how a flying baby fits into mundane daily life. In RICKY, the characters experience the supernatural in a very natural way. What do you think about the fact that Ricky’s “otherness” is embodied by wings? Everyone has fantasized about flying, it’s a dream people share across the globe. Ricky could be seen as an angel, but François doesn't get into that symbolism. He shows us a much more disturbing reality. Ricky’s otherness is amusing at first, but reality soon takes over. 
Ricky is half monster, half angel. He’s a cute blond baby with blue eyes, but there’s something monstrous about his wings as they develop. His wings are like a miracle, but we don’t know if the miracle is positive or negative, it all depends on what the family will do. In that sense, Paco is very pragmatic. Some may think he returns home to make money off Ricky. That’s right. Paco sees in Ricky an opportunity to make money, which will in turn allow the family to come back together and be happy. Paco is neither good nor bad, it’s up to the audience to decide what they think of him. I like films that leave room for imagination. RICKY is not a sweet, pretty story, it’s not a «nice» film. I liked that ambiguity, that’s why I accepted the role. Nothing is black and white, you don’t know if this family will stay together. They’re not a bad family or a good family, they’re just a family. Not a very well-adjusted family, but I’m not convinced a well-adjusted family is a good thing. I’m getting tired of innocuous Pollyanna stories full of cheap hope. Pretty little stories with gratuitous happiness, containing no rites of passage or experience of unhappiness seem empty to me. I subscribe to a philosophy of joie de vivre, but a joie de vivre that can’t exist without pain. RICKY is also the portrait of a mother. Yes, and I understand why François would want to explore that theme. It’s so huge, being a mother, in both moral and physical terms. In comparison, being a father seems rather incidental! And yet, the film also explores a fundamental truth about being a father. The experience of fatherhood starts with this simple phrase: «You’re going to be a father». Then the abstract notion suddenly becomes real in the form of a strange little being who breathes, has needs, doesn’t look quite how you expected... In the film, this concept of the baby as a «little monster» who bursts into your life and makes you a father is reinforced by Ricky’s wings. What’s it like shooting with very young children? Not easy, but I’ve got kids myself and I really like children, I communicate well with them. We bond easily. Babies can’t work the way we can, we have to conform to their biological rhythm and that can be complicated. But they’re great because they don’t act, they just are. They’re real, which helps compensate for the weirdness of playing in the absence of special effects. And working with Alexandra Lamy? As with François, I didn’t know her previous work and had no preconceived notions. I got to know her on the shoot. We got along great. I found her pretty, friendly and most of all, very funny. We have a similar approach to acting, we’re both very instinctive. We didn’t discuss Katie and Paco’s relationship, we let ourselves be guided by the context of the shoot, the situations, the dialogue, the direction François gave us... I’m not one of those actors who feels the need to go to the end of the earth in search of his characters’ motivations. How about working with François Ozon? François is extremely methodical. He knows exactly what he wants. He sees the film in his mind and works very quickly. He likes to keep things moving, no wasting time. He’s very impatient. Impatient to get to the next scene, to the next day of shooting, to the editing, to the next film too, I imagine. He likes his actors. Choosing them is part of his work as a director. He sees something in you that corresponds to the character, and from there he lets you express yourself the way you see fit. 
He doesn’t intervene unless he doesn’t like the direction you’re going in. But he never drives you mad explaining the «essence» of a scene before you shoot. You sense he loves his work and has made a lot of films, he’s strong inside and doesn’t need to throw his weight around. What do you think about the end of the film, in particular the fact that Katie is pregnant again? In a conventional film, you’d say: «Wonderful, life goes on». But François plays around with traditional images. When we see Katie pregnant, we can’t help wondering if the baby inside her will have wings, fins, or paws! SELECT FILMOGRAPHY 2009 - RICKY by François Ozon - LEAVING by Catherine Corsini - LA RÈGATE by Bernard Bellefroid - PETIT INDI by Marc Recha - LES DERNIERS JOURS DU MONDE by Arnaud and Jean-Marie Larrieu - MAP OF THE SOUND OF TOKYO by Isabel Coixet (in tournage) 2007 - PARC by Arnaud des Pallières - THE HOUSE by Manuel Poirier 2005 - PAN’S LABYRINTH by Guillermo del Toro 2004 - TO PAINT OR MAKE LOVE by Arnaud and Jean-Marie Larrieu - BYWAYS by Manuel Poirier - DIRTY PRETTY THINGS by Stephen Frears - JANIS AND JOHN by Samuel Benchetrit - SOME KIND OF BLUE by Alain Corneau 2003 - THE RED KNIGHT by Hélène Angel - JET LAG by Danièle Thompson - HYPNOTIZED AND HYSTERICAL by Claude Duty - WOMEN OR CHILDREN FIRST by Manuel Poirier 2002 - THE MILK OF HUMAN KINDNESS by Dominique Cabrera - A HELL OF A DAY by Marion Vernoux - TE QUIERO by Manuel Poirier 2001 - THE NEW EVE by Catherine Corsini 2000 - HARRY, HE’S HERE TO HELP by Dominik Moll - TOREROS by Eric Barbier 1999 - EMPTY DAYS by Marion Vernoux - A PORNOGRAPHIC AFFAIR by Frédéric Fonteyne 1998 - THE NEW EVE by Catherine Corsini 1997 - WESTERN by Manuel Poirier - MARION by Manuel Poirier 1994 - À LA CAMPAGNE by Manuel Poirier - ATTENTION FRAGILE by Manuel Poirier 1992 - ANTONIO’S GIRLFRIEND by Manuel Poirier What was your reaction when you read the script for RICKY? It's the kind of script you read and think: this is audacious and intriguing, especially coming from François Ozon. His films always walk a fine line, right on the edge of strange. You feel things might shift at any minute. Seeing the finished film, I was even more surprised! I didn't expect such powerful realism. How did you approach the job? Most directors know nothing about special effects. Our job is to explain how they work, reassure them and show them how we can make their vision come to life on the screen. I always try to encourage and help directors who show up with crazy ideas like these! Our first priority is to define a plan of action, determine what we can do and how it will affect the shoot. For example, François was going to have to frame virtual movement, imagine the various trajectories the baby would follow with the special effects. Everything had to be mapped out before the shoot. So you must get involved very early on. Yes, right from the planning stages of the shoot. Sometimes we even get involved during the writing stage, as we did with RICKY. François came to us with questions about what we could do and how much it would cost. Knowing the parameters helped him write the script. François has an open mind, he’s a smart guy, he listens, he understands things quickly, and he also has good production sense, he understands about limitations. **What was particularly challenging about this project?** Making a baby fly! We'd already created angels but never a flying baby. It was a great experience, one we shared with a young supervisor, Mathilde Tollec. 
The challenge was to make the baby realistic. The slightest error would bring the whole thing down, prevent the audience from suspending disbelief and believing in the story. We took our inspiration from actual flights of birds, or insects, when it came to making Ricky's little wings move. **Is it harder to make a baby fly than an adult?** Yes, because it requires additional security measures that hinder realistic, natural flight. The baby has to be harnessed. To offset the stiffness that creates, we focused on developing the speed of his movements. We did tests before the shoot and realized we’d need to accelerate his movement. Naturally he bumps into things, like a bird trapped in a room. We also tested various wing-flapping speeds to find out what worked best on screen. **How did you come up with the look for the wings?** François wanted wings that weren’t white. Our job was basically to propose different variations and facilitate the technical side and make the wings credible so he could concentrate on his shoot. We did research on different types of wings, studied the different phases of their development, from stumps to full-grown wings. Then we drew them on a baby to get the right proportions. François was extremely precise in his choices. The basic design of the wings was determined before the shoot, but then we fine-tuned the colors, which vary throughout the growth process. And we adapted them to the baby’s hair color. Up to the last minute that is allotted to us, I feel it's important to keep a critical eye on the work and keep correcting the image. Computer-generated 3D special effects have no inherent poetry, unlike any awkward drawing done by a 5-year-old. It’s the time you spend working on and fine-tuning your digital image that will give it a soul. The layers you add are what make your effect interesting. You can’t just whip it out, it’s a long and fastidious craft. How is it different working on an independent film like Ricky as opposed to a big action film? I find independent filmmakers are more involved in the whole process of the film. American directors, who are more like technicians, often have only a partial view. François needed to understand the whole process so he could appropriate it and make his film. That leads to interesting exchanges - you discover another way of seeing things and expressing them. We’d already had some amazing experiences with Wong Kar-Wai and Eric Rohmer. I’d rather talk with independent filmmakers about cinema and the effects they hope to create than listen to directors who specialize in special effects spout jargon. **SELECT FILMOGRAPHY** RICKY by François Ozon DANTE 01 by Marc Caro BE KIND REWIND by Michel Gondry SPEED RACER by Andy & Larry Wachowski THE DARK KNIGHT by Christopher Nolan BABYLON A.D. 
by Mathieu Kassovitz SPIDER-MAN 3 by Sam Raimi SILENT HILL by Christophe Gans THE PRESTIGE by Christopher Nolan ARTHUR AND THE MINIMOYS by Luc Besson BATMAN BEGINS by Christopher Nolan ANGEL-A by Luc Besson REVOLVER by Guy Ritchie THE THREE BURIALS OF MELQUIADES ESTRADA by Tommy Lee Jones HARRY POTTER AND THE GOBLET OF FIRE by Mike Newell 2046 by Wong Kar Wai ALEXANDER by Oliver Stone MATRIX RELOADED by Andy & Larry Wachowski MATRIX REVOLUTIONS by Andy & Larry Wachowski PANIC ROOM by David Fincher SIMONE by Andrew Niccol THE LADY AND THE DUKE by Eric Rohmer THE CELL by Tarsem Singh FIGHT CLUB by David Fincher BATMAN & ROBIN by Joel Schumacher THE CITY OF LOST CHILDREN by Jean-Pierre Jeunet and Marc Caro

Katie Alexandra Lamy Paco Sergi Lopez Lisa Mélusine Mayance Ricky Arthur Peyret Doctor André Wilms Journalist Jean-Claude Bolle-Reddat Librarian Julien Haurant Butcher Eric Forterre Salesman Hakim Romatif Supermarket Manager John Arnold Odile Maryline Even

Directed by François Ozon Screenplay by François Ozon Freely adapted from MOTH by Rose Tremain (VINTAGE BOOKS) with the collaboration of Emmanuèle Bernheim Produced by Claudie Ossard and Chris Bolzli Image Jeanne Lapoirie, AFC Sound Brigitte Taillandier Sets Katia Wyszkop Costumes Pascaline Chavanne Make-up Gill Robillard Hair Franck-Pascal Alquinet 1st assistant Hubert Barbin Script supervisor Clémentine Schaeffer Casting Sarah Teper (a.r.d.a), Leila Fournier Children and extras Anaïs Duran Editing Muriel Breton Sound editing Olivier Goinard Sound mixing Jean-Pierre Laforce Visual effects artist Georges Bouchelaghem Mechanical SFX supervisor Pascal Molina Visual effects BUF Stunts Pascal Guégan, Marc Bizet Stills photographer Jean-Claude Moirreau Production manager Philippe Delest Post-production supervisor Christina Crassaris Original music Philippe Rombi © Eurowide & Foz
Diels-Alder Reactions in Micellar Media

SIJBREN OTTO* and JAN B. F. N. ENGBERTS
University of Groningen, Groningen, The Netherlands

I. INTRODUCTION TO DIELS-ALDER REACTIONS

The Diels-Alder reaction is a [4+2] cycloaddition in which a diene (four-π component) reacts with a dienophile (two-π component) to provide a six-membered ring (Fig. 1). Up to four new stereocenters can be formed in a single reaction step. Because the configurations of the double bonds are usually fully retained, the reaction is stereospecific and consequently the absolute configuration of the two newly formed asymmetric centers can be controlled efficiently. The Diels-Alder reaction is of great value in organic synthesis and is a key step in the construction of compounds containing six-membered rings [1]. A historic account of this important conversion has been published by Berson [2]. _Homo_ Diels-Alder reactions involve only hydrocarbon fragments. If the diene or dienophile possesses heteroatoms in any of the positions a-f (Fig. 1), heterocyclic ring systems are formed (_hetero_ Diels-Alder reactions). _Normal electron demand_ Diels-Alder reactions are promoted by electron-donating substituents in the diene and electron-withdrawing substituents in the dienophile. The opposite situation applies for _inverse electron demand_ Diels-Alder reactions. _Neutral_ Diels-Alder reactions are accelerated by both electron-withdrawing and electron-donating substituents.

Reactivity and selectivity in Diels-Alder reactions are often rationalized in terms of frontier molecular orbital (FMO) theory [3], emphasizing interactions between the highest occupied molecular orbital (HOMO) of one of the reaction partners and the lowest unoccupied molecular orbital (LUMO) of the other (a small numerical illustration of this classification is given at the end of this section). During formation of the two new σ-bonds, orbital symmetry is conserved. Therefore no intermediate is involved and the pericyclic reaction is concerted. There is ample experimental and theoretical evidence for the concerted mechanism [4]. Only in relatively rare cases does the Diels-Alder reaction take place via a nonconcerted two-step mechanism, involving either a zwitterionic or a biradical intermediate and leading to modified stereochemistry. FMO theory has been useful in analyzing possible asynchronicity in the activation process and in predicting kinetically controlled regioselectivity for Diels-Alder processes involving asymmetric dienes in combination with asymmetric dienophiles [5]. Much attention has also been given to Diels-Alder reactions that provide _endo_ and _exo_ cycloadducts (Fig. 2). The _endo-exo_ ratio is usually the result of relatively small differences in transition state energies, which appear to be primarily determined by secondary orbital interactions [6,7]. The formation of the _endo_ product is associated with the most compact activated complex and exhibits the most negative volume of activation. Apart from secondary orbital interactions, other factors have been proposed for explaining the _endo-exo_ ratio, including steric effects, London-dispersion interactions, and solvent effects (e.g., [8]).

This chapter will describe micellar effects on Diels-Alder reactions with respect to both reaction rates and stereochemical aspects. For a proper understanding of the effects induced by micelles, we will first briefly review what is known about medium and catalytic effects on Diels-Alder cycloadditions.
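Before turning to medium effects, a small numerical aside on the FMO classification used above: the normal/inverse distinction simply compares the two frontier-orbital energy gaps and assigns the dominant interaction to the smaller one. The sketch below illustrates that comparison; the orbital energies are assumed, round-number values for an electron-rich diene and an electron-poor dienophile, not data taken from this chapter.

```python
# Illustrative FMO gap comparison (assumed orbital energies in eV).

def classify_fmo(homo_diene, lumo_diene, homo_dienophile, lumo_dienophile):
    """Return the electron-demand label and the smaller HOMO-LUMO gap (eV)."""
    gap_normal = lumo_dienophile - homo_diene    # HOMO(diene) with LUMO(dienophile)
    gap_inverse = lumo_diene - homo_dienophile   # HOMO(dienophile) with LUMO(diene)
    if gap_normal < gap_inverse:
        return "normal electron demand", gap_normal
    return "inverse electron demand", gap_inverse

# Electron-rich diene vs electron-poor dienophile (hypothetical energies):
print(classify_fmo(homo_diene=-8.5, lumo_diene=1.0,
                   homo_dienophile=-10.9, lumo_dienophile=0.0))
# -> ('normal electron demand', 8.5)
```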
### A. Medium and Catalytic Effects on Diels-Alder Reactions

The Diels-Alder reaction is a textbook example of a reaction that is rather indifferent toward the choice of the solvent. An extreme example [9] is the dimerization of cyclopentadiene (Table 1), but for many other homo- and also for hetero-Diels-Alder reactions, rate constants in a series of organic solvents vary only modestly. Nevertheless, attempts have been made to correlate kinetic data with solvent parameters, both for pure solvents and for binary mixtures [10,11]. Multiparameter analyses of solvent effects on Diels-Alder reactions have been carried out. For example, Gajewski [12] observed a dependence of rate constants for Diels-Alder reactions on the solvent \( \alpha \)-parameter and the cohesive energy density. Intramolecular Diels-Alder reactions in highly viscous media have been related to the solvent density, which affects the translational motion of the reactants [13]. Rather unusual reaction media that have been employed for accelerating Diels-Alder reactions include solutions of lithium perchlorate in diethyl ether, dichloromethane, and nitromethane [14]. After considerable debate, it was argued that the substantial rate enhancements are largely due to Lewis acid catalysis by the lithium cation [15].

It is generally agreed that the small or modest solvent effects on the rates of Diels-Alder reactions are in accord with the concerted character of the cycloaddition, which involves only a rather insignificant change of charge distribution during the activation process. The effect of the reaction medium on the regioselectivity of Diels-Alder reactions can be rationalized in terms of the FMO theory [16]. In particular, the hydrogen bond donating character of the solvent, as expressed in the \( \alpha \)-parameter, affects the orbital coefficients on the terminal atoms of diene and dienophile. Medium effects on the endo-exo ratios have received extensive attention, and Berson et al. have even based an empirical solvent polarity scale \( \Omega = \log(\text{endo/exo}) \) on the selectivity of the Diels-Alder reaction between methyl acrylate and cyclopentadiene [17]. Solvent effects on the diastereofacial selectivity of the Diels-Alder process have also been examined and interpreted [18]. Many other factors have also been studied with the aim of increasing the rate and stereoselectivity of Diels-Alder reactions.

### B. Special Effects of Water on Diels-Alder Reactions

For a long time water was not a popular solvent for Diels-Alder reactions, although the pioneers Diels and Alder performed the reaction between furan and maleic acid in an aqueous medium in 1931 [42]. The latter experiment was repeated by Woodward and Baer in 1948, and a change in the endo/exo ratio was noted [43]. In 1973, Huisman et al. for the first time noticed a favorable aqueous effect on the rate of the same reaction, but the effect was not further explored [44]. Also, in two early patents the Diels-Alder reaction is mentioned in connection with water as the reaction medium [45]. A breakthrough came in 1980 in the work of Breslow and Rideout [26], who observed a substantial rate increase for simple Diels-Alder reactions in pure water. In subsequent extensive research it was shown that these remarkable kinetic aqueous medium effects are a general phenomenon [46-48]. Depending on the chemical structure of diene and dienophile, rate enhancements in water relative to organic solvents may amount to factors of more than 10^6.
Rather soon after Breslow's pioneering work, synthetic applications of Diels-Alder reactions in aqueous media were explored in some detail, in particular by Grieco and his coworkers [48]. Of course, the often limited solubility of diene and dienophile is a major drawback. In elegant work, Lubineau et al. have tackled this problem by employing dienes that were rendered water soluble by the temporary introduction of a sugar moiety in the molecule [49]. The scaling up of aqueous Diels-Alder reactions has also been studied [50].

Ever since the early work of Breslow, many studies have been devoted to the identification of the special effects of the aqueous reaction medium that lead to the remarkable rate accelerations. These studies have been reviewed [46,47]. After considerable debate and controversy, it is now almost generally agreed that the enhanced reactivity in water is the result of two major effects: the hydrogen bond donating capacity of water and enforced hydrophobic interactions [51,52]. Previous suggestions that preassociation of the reactants in water played an important role were not substantiated. For example, vapor pressure measurements indicated that cyclopentadiene did not form aggregates at the concentrations used in the kinetic measurements. Similar observations were made for methyl vinyl ketone, a popular dienophile in mechanistic studies of Diels-Alder reactions in water.

The peculiar nature of the Diels-Alder reaction in water was clearly revealed in a study in which Gibbs energies of activation for the Diels-Alder reaction of cyclopentadiene with ethyl vinyl ketone, determined over the whole mole fraction range of the mixture of 1-propanol with water, were combined with Gibbs energies of transfer of the diene and dienophile from 1-propanol to the aqueous mixture and to pure water [51,53]. These data showed that the initial state (diene + dienophile) is significantly destabilized in water relative to 1-propanol (Fig. 3). By contrast, the activated complex has nearly equal chemical potentials in water and in 1-propanol. Consequently, in aqueous solution the hydrophobic parts of the activated complex have completely lost their nonpolar character as far as solvation is concerned. This conclusion has been confirmed in subsequent studies [52,54]. During the activation process of the Diels-Alder reaction, hydrophobic parts of the diene and the dienophile approach each other closely, a process that is particularly favorable in water ("enforced" hydrophobic interaction) compared with nonaqueous reaction media. The term "enforced" is used to stress the fact that the mutual approach of the nonpolar reagents is driven by the reaction and only enhanced by water. In addition, the electron redistribution that takes place during the activation process leads to an enhanced electron density at the carbonyl oxygen atom of ethyl vinyl ketone and a consequent enhanced propensity for hydrogen bond interaction with a hydrogen bond donating solvent. The small size of water molecules allows a particularly efficient interaction with hydrogen-bond acceptor sites. The medium effects on the chemical potentials, as shown in Fig. 3, are fully consistent with the operation of the hydrophobic and hydrogen-bonding effect. Beautifully detailed computational studies by Jorgensen et al. [55,56] led to similar conclusions and provided more quantitative insights into the relative importance of both solvation influences in water.
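One compact way to restate the transfer analysis sketched above (a summary of the argument, not an equation reproduced from the original figure) is

$$
\Delta G^{\ddagger}_{\text{water}} - \Delta G^{\ddagger}_{\text{1-PrOH}} \;=\; \Delta_{tr}G(\text{AC}) \;-\; \bigl[\,\Delta_{tr}G(\text{diene}) + \Delta_{tr}G(\text{dienophile})\,\bigr],
$$

where $\Delta_{tr}G$ denotes the Gibbs energy of transfer from 1-propanol to water and AC is the activated complex. With $\Delta_{tr}G(\text{AC}) \approx 0$ (the activated complex has nearly equal chemical potentials in the two media) and positive transfer terms for the hydrophobic reactants, the activation barrier is lower in water, which is the enforced-hydrophobic-interaction picture described above.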
Attempts have been made to identify Diels-Alder reactions that are exclusively affected by either enforced hydrophobic interactions [57] or hydrogen-bonding effects [58]. The overall results confirmed the analysis and illustrated how the structures of diene and dienophile determine the magnitude of the aqueous rate acceleration. It appears well established now that the hydrophobicities of diene and dienophile as well as the polarizability of the activated complex play a key role in determining the acceleration of Diels-Alder reactions in water. These insights into the nature of the special effect of the aqueous medium are also of immediate relevance for understanding the effects of water on the endo-exo ratios and on the diastereofacial- and regio-selectivity. Finally, we briefly note that studies in the 1990s have shown that many other organic reactions benefit from the use of water as the reaction medium [48,59-62]. II. INTRODUCTION TO MICELLAR CATALYSIS Micelles are highly dynamic, often rather polydisperse aggregates formed from single-chain surfactants [63,64] beyond the critical micelle concentration (cmc). Micellization is primarily driven by bulk hydrophobic interactions between the alkyl chains of the surfactant monomers and usually results from a favorable entropy change [65]. The overall Gibbs energy of the aggregate is a compromise of a complex set of interactions, with major contributions from headgroup repulsions and counterion binding (for ionic surfactants) [64]. The residence times of individual surfactant molecules in the micelle are typically of the order of $10^{-5}$ to $10^{-6}$ s, whereas the lifetime of the micellar entity is about $10^{-2}$ to $10^{-1}$ s. The size and shape of micelles are subject to appreciable structural variations. Average aggregation numbers are usually in the range of 40-150. For ionic micelles, a large fraction of the counterions are bound in the vicinity of the headgroups. The overall structure of the micelle is characterized by a situation in which the ionic or polar headgroups reside at the surface of the aggregates, where they are in contact with water, with the alkyl chains in the interior of the micelle forming a relatively dry hydrophobic core [66]. The alkyl chains of micellized surfactant molecules are not fully extended. Starting from the headgroup, the first two or three carbon-carbon bonds are usually trans, whereas gauche conformations are likely to be encountered near the center of the chain. As a result, the terminal methyl moieties of the chain can be located near the surface of the micelle and may even protrude into the aqueous medium [67]. Consequently, the micellar surface has a definite degree of hydrophobicity. Nuclear magnetic resonance (NMR) studies have shown that the hydrocarbon tails in a micelle are highly mobile and comparable in mobility to the chains in a liquid hydrocarbon [68]. The degree of water penetration into the micellar interior has long been a matter of debate. Small-angle neutron scattering studies have indicated that significant water penetration into the micellar core is unlikely [69]. Micellar catalysis of organic reactions has been extensively studied [70-76]. This type of catalysis is critically determined by the ability of micelles to take up all kinds of molecules. The binding is generally driven by hydrophobic and electrostatic interactions. 
The take-up of solutes from the aqueous medium into the micelle is close to diffusion controlled, whereas the residence time depends on the structure of the surfactant molecule and the solubilizate and is often of the order of \(10^{-2}-10^{-6}\) s [77]. Hence, these processes are fast on the NMR time scale. Solubilization is usually treated in terms of a pseudophase model in which the bulk aqueous phase is regarded as one phase and the micellar pseudophase as another. This allows the affinity of the solubilizate for the micelle to be quantified by a partition coefficient \(P\). Frequently \(P\) is expressed as the ratio of the mole fractions of solubilizate in the micellar pseudophase and in the aqueous pseudophase. However, for micelle-catalyzed reactions, it is more convenient to express \(P\) as a ratio of concentrations. The time-averaged location of different solubilizates in or at a micelle has been a topic of contention [78]. Apart from saturated hydrocarbons, there is usually a preference for binding in the interfacial region, that is, at the surface of the micelle [79,80]. Such binding locations offer possibilities for hydrophobic interactions and avoid unfavorable disturbances of the interactions between the alkyl groups of the surfactant molecules in the core of the aggregate. The situation is, however, complicated, and the large volume of the interfacial region as compared with the core of the micelle should also be taken into account. The preferential binding of aromatic molecules at the micellar surface has been explained at least in part by the ability of the \(\pi\)-system of the molecule to form weak hydrogen bonds with water [81]. ### A. Kinetic Models Kinetic studies of micellar catalysis and inhibition have been largely focused on organic reactions and the field has been reviewed extensively [70-76]. In these kinetic analyses the dependence of the rate constants on the surfactant concentration has usually been rationalized in terms of the pseudophase model assuming rapid exchange of the substrate(s) between the micellar and aqueous pseudophases. Different models have been developed for unimolecular and bimolecular reactions. For unimolecular reactions, the kinetic micellar effect depends on partitioning of the substrate between both pseudophases and on the rate constant in water \((k_w)\) and in the micellar pseudo phase \((k_m)\). Menger and Portnoy [82] developed the classic model in 1967 and this model has been successfully employed ever since. The micellar rate effect \(k_m/k_w\) depends on the local medium at the substrate binding sites where the substrate experiences specific effects due to hydrophobic segments of the alkyl chains, the polar or ionic headgroups, and the counterions in case of ionic micelles. For bimolecular reactions the analysis is much more complicated, and the overall kinetic effects are now also crucially affected by the local concentration of both reactants \(A\) and \(B\) in the micellar pseudophase. A classic approach has been advanced by Berezin et al. [71,83]. Again the pseudophase model is adopted, but now an independent assessment of at least one of the partition coefficients is required before the other relevant kinetic parameters can be obtained. The overall approach is illustrated in Fig. 4. 
The apparent second-order rate constant \((k_{app})\), which is a weighted average of the second-order rate constants in the micellar pseudophase \((k_m)\) and in water \((k_w)\), is given by \[ k_{app} = \frac{k_m P_A P_B [S] V_{mol,S} + k_w (1 - [S] V_{mol,S})}{(1 + (P_A - 1) [S] V_{mol,S})(1 + (P_B - 1) [S] V_{mol,S})} \] in which \(P_A\) and \(P_B\) are the micelle-water partition coefficients of \(A\) and \(B\), respectively, defined as the ratios of the concentrations in the micellar and the aqueous phase, \([S]\) is the concentration of surfactant, and \(V_{mol,S}\) is the molar volume of the micellized surfactant. Accurate values of \(V_{mol,S}\) are difficult to obtain, and the actual location of \(A\) and \(B\) in the aggregate may differ (see Section III.A). Usually, estimates of \(V_{mol,S}\) are introduced into Eq. (1), leading to uncertainties in \(k_m\). Despite these serious limitations, the kinetic analyses framed on the basis of Eq. (1) often produce reasonable results. By far the most frequently analyzed types of bimolecular reactions are those involving an ionic reaction partner of the same charge type as the counterion. of the ionic surfactant. Such processes are characterized by competition in binding between the reactive ion and the inert surfactant counterion. Pioneering work has been carried out by Romsted et al. [75], and the pseudophase ion-exchange model (PPIE) has been successfully applied in the micelle-catalyzed ionic bimolecular reactions. Again, it is often observed that the local microenvironment has only a modest influence on \( k_m/k_w \) and that the favorable entropic effect due to the increase of the local concentrations of both reactants in the micellar pseudophase is the dominant catalytic factor [84]. Over the years, the PPIE model has been severely tested; in particular, Romsted and his associates have advanced elegant methods for analyzing detailed aspects of counterion binding to micellar aggregates [85]. Studies of micellar catalysis of bimolecular reactions of uncharged substrates (such as most Diels-Alder reactions) have not been frequent. An example involves the reaction of 1-fluoro-2,4-dinitrobenzene with aniline in the presence of anionic and nonionic surfactants [86]. The apparent second-order rate constant (\( k_{\text{app}} \)) is increased relative to that in water as a result of compartmentalization of both reactants in the micelles. Interestingly, the second-order rate constant for reaction in the micellar pseudophase (\( k_m \)) was found to be roughly equal to or even lower than the rate constant in water. Similarly, the reaction of long-chain alkanethiols with \( p \)-nitrophenyl acetate [87] and the acylation of aryl oximes by \( p \)-nitrophenyl carboxylates [83] are catalyzed by micelles but, apart from local concentration effects, the influence of the micellar surface charge on the ionization constants of the SH and OH groups, respectively, must also be taken into account. ### III. EFFECT OF MICELLES ON DIELS-ALDER REACTIONS Because the diene and dienophile of the majority of intermolecular Diels-Alder reactions have a rather pronounced nonpolar character, efficient binding of both substrates to micelles is anticipated. This would imply that the effective reaction volume for the Diels-Alder reaction is significantly reduced, leading to micellar catalysis. Surprisingly, accounts of micelle-catalyzed Diels-Alder reactions are scarce. 
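To make the pseudophase treatment of Eq. (1) concrete, the short sketch below evaluates the apparent second-order rate constant as a function of surfactant concentration. All parameter values (k_m, k_w, the partition coefficients and the molar volume V_mol,S) are invented for illustration and are not taken from any study cited here; the point is only to show the maximum in k_app that the model predicts for bimolecular reactions when both reactants are concentrated in, and then diluted over, the micellar pseudophase.

```python
# Sketch: apparent second-order rate constant from the pseudophase model,
# Eq. (1) (Berezin-type treatment).  All numbers below are hypothetical.

def k_app_pseudophase(k_m, k_w, P_A, P_B, conc_S, V_mol_S):
    """Eq. (1): weighted average of the micellar (k_m) and aqueous (k_w)
    second-order rate constants.

    P_A, P_B : micelle-water partition coefficients of the two reactants
    conc_S   : surfactant concentration (M); conc_S * V_mol_S is the volume
               fraction of the micellar pseudophase
    """
    phi = conc_S * V_mol_S                       # micellar volume fraction
    numerator = k_m * P_A * P_B * phi + k_w * (1.0 - phi)
    denominator = (1.0 + (P_A - 1.0) * phi) * (1.0 + (P_B - 1.0) * phi)
    return numerator / denominator

if __name__ == "__main__":
    # hypothetical parameters; note the maximum in k_app as [S] increases
    for S in (0.001, 0.01, 0.05, 0.1, 0.2):
        k = k_app_pseudophase(k_m=0.02, k_w=4.0e-3, P_A=100.0, P_B=60.0,
                              conc_S=S, V_mol_S=0.25)
        print(f"[S] = {S:5.3f} M   k_app = {k:.3e} M^-1 s^-1")
```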
The first report of the influence of surfactants on Diels-Alder reactions stems from 1939, when the BASF company patented the use of detergents for promoting the yields of Diels-Alder processes in aqueous dispersions [45a]. Subsequently, more studies have appeared reporting beneficial effects of micellar systems on the yield of Diels-Alder reactions [88]. More mechanistically oriented studies have focused on the effect of micelles on the kinetics (Section III.A), the endo-exo selectivity (Section III.B), and the regioselectivity (Section III.C) of model Diels-Alder reactions. Also, the first example of modest enantioselectivity in a micelle-catalyzed Diels-Alder reaction has been reported (Section III.D). Finally, highly efficient micellar catalysis of a Diels-Alder reaction has been found for micelles with counterions that act as Lewis acid catalysts (Section III.E).

#### A. Effect of Micelles on the Rate of Diels-Alder Reactions

Studies of the kinetics of Diels-Alder reactions in the presence of micelles typically reveal only modest catalytic effects, and usually the apparent rate constants in micellar media are strikingly similar to the rate constants in water. Little effort was made to obtain second-order rate constants in the micellar pseudophases. We refer here to the work of Breslow et al. [89], who observed a small (15%) acceleration of the Diels-Alder reaction of cyclopentadiene with a number of dienophiles in the presence of sodium \( n \)-dodecyl sulfate (SDS) micelles as compared with water. Also, a modest micelle-induced decrease in the rate constant of a Diels-Alder reaction has been reported [90]. More detailed analyses have been performed by Hunt and Johnson [91], who studied the kinetics of the homo Diels-Alder reaction of 1,2-dicyanoethylene (1) with cyclopentadiene (2) as a function of the concentration of sodium dodecyl sulfate (SDS) surfactant. The presence of micelles induces a modest decrease of the rate of this reaction (Fig. 5). Enthalpies and entropies of activation of the reaction in micellar medium have been determined and compared with those in water, aqueous salt solutions, and organic solvents (Table 2). Gibbs energies, entropies, and enthalpies of activation for the reaction in micellar solutions resemble those in 0.5 M LiCl more than those in organic solvents or water. This seems to point toward the Stern region of the micelles as the prominent site for this Diels-Alder reaction.

FIG. 5 Second-order rate constants for the Diels-Alder reaction of 1 with 2 at different concentrations of sodium dodecyl sulfate (SDS). (Data from Ref. 91.)

Wijnen and Engberts [58] have studied the effect of SDS on another homo Diels-Alder reaction, between 1,4-naphthoquinone (4) and cyclopentadiene (2). The results were compared with a structurally related retro Diels-Alder reaction (Fig. 6). Close to the cmc a modest acceleration of the former bimolecular Diels-Alder reaction was observed, whereas micelles induced a small inhibition of the retro Diels-Alder reaction. However, this process is still considerably faster than that in organic solvents [58]. The same authors have studied a reversible hetero Diels-Alder reaction and compared it with an irreversible analogue (Fig. 7) [92]. This time the rates of both retro and bimolecular Diels-Alder reactions experienced a modest beneficial influence of the presence of SDS micelles. The equilibrium constant is somewhat displaced toward the adduct. 
This particular reaction is classified by Desimoni et al. [11] as a type C Diels-Alder reaction, signifying that it is almost insensitive to hydrogen bonding effects and that its rate is mainly governed by enforced hydrophobic interactions. This suggests that enforced hydrophobic interactions are slightly more efficient in the Stern region of the SDS micelles than in bulk water.

FIG. 6 Relative rate constants for the retro Diels-Alder reaction (●) of 6 and the bimolecular Diels-Alder reaction (■) of 4 with 2 at different concentrations of sodium dodecyl sulfate (SDS). (Data from Ref. 58.)

Van der Wel, Wijnen, and Engberts [57] have studied the influence of surfactants on the hetero Diels-Alder reaction of a cationic dienophile 12 with cyclopentadiene (Fig. 8). A 10-fold acceleration is induced by anionic SDS micelles, whereas nonionic Triton X-100 and cationic 1-N-dodecyl-4-methylpyridinium bromide have only modest effects on the rate of the reaction. The efficient catalysis by SDS most likely results from electrostatically enhanced binding of the dienophile to the micelles. The presence of micelles does not lead to a significant alteration of the efficiency of an intramolecular Diels-Alder reaction [93] as compared with the process in pure water. The most detailed kinetic investigation of the effect of micelles on Diels-Alder cycloadditions has focused on the reaction of dienophiles 14a-g with cyclopentadiene (2) [94]. The influence of micelles of cetyltrimethylammonium bromide (CTAB), SDS, and dodecyl heptaoxyethylene ether (C\textsubscript{12}E\textsubscript{7}) on this process has been studied (Fig. 9). Note that the dienophiles can be divided into nonionic (14a-e), anionic (14f), and cationic (14g) species. A comparison of the effect of nonionic (C\textsubscript{12}E\textsubscript{7}), anionic (SDS), and cationic (CTAB) micelles on the rates of their reactions with 2 enabled assessment of the importance of electrostatic interactions in micellar catalysis or inhibition. The most important results of this study are summarized in Table 3. Under the reaction conditions, the effect of micelles on the rate of the Diels-Alder reaction is obviously small and invariably results in a slight inhibition of the reaction. The most significant effect occurs for anionic 14f in CTAB solution and for cationic 14g in SDS solution. These are the two combinations for which one would expect essentially complete binding of the dienophile to the micelle as a result of favorable electrostatic interactions in addition to the hydrophobic interactions. Apparently, reaction in the micellar environment is slower than reaction in the bulk aqueous phase, despite the anticipated locally increased concentrations of the reactants in the micellar pseudophase. Also, in the cases where electrostatic interactions inhibit binding of the dienophile to the micelle, i.e., 14f in SDS and 14g in CTAB solution, a retardation of the reaction is observed. In these cases the dienophile will most likely reside mainly in the aqueous phase. The retardation will result from a decrease in the concentration of 2 in this phase due to its partial solubilization by the micelles. The kinetics of the aforementioned reactions have been analyzed in terms of the pseudophase model (Fig. 4). 
For the limiting cases of essentially complete binding of the dienophile to the micelle (14f in CTAB and 14g in SDS solution) the following expression [95] was used:

\[ \frac{1}{k_{\text{app}}} = \frac{[2]_t}{k_{\text{obs}}} = \frac{V_w}{V_t\,P_2\,k_m} + \frac{([S]-\text{cmc})\,V_{\text{mol,S}}}{k_m} \] (2)

Herein \([2]_t\) is the total number of moles of 2 present in the reaction mixture, divided by the total reaction volume \(V_t\); \(k_{\text{obs}}\) is the observed pseudo-first-order rate constant; \(V_w\) is the volume of the aqueous phase; \(P_2\) is the micelle-water partition coefficient of 2; \(k_m\) is the second-order rate constant in the micellar pseudophase; and \(V_{\text{mol,S}}\) is the molar volume of the micellized surfactant.

Dienophiles 14: a X=NO$_2$; b X=Cl; c X=H; d X=CH$_3$; e X=OCH$_3$; f X=CH$_2$SO$_3^-$ Na$^+$; g X=CH$_2$N$^+$(CH$_3$)$_3$ Br$^-$.

**TABLE 3** Influence of Micelles of CTAB, SDS, and C$_{12}$E$_7$ on the Apparent Second-Order Rate Constants (M$^{-1}$ s$^{-1}$)$^a$ for the Diels-Alder Reaction of 14a, 14f, and 14g with 2 at 25°C$^b$ <table> <thead> <tr> <th>Medium$^c$</th> <th>14a</th> <th>14f</th> <th>14g</th> </tr> </thead> <tbody> <tr> <td>Water</td> <td>4.02 $\times$ 10$^{-3}$</td> <td>1.74 $\times$ 10$^{-3}$</td> <td>2.45 $\times$ 10$^{-3}$</td> </tr> <tr> <td>SDS</td> <td>3.65 $\times$ 10$^{-3}$</td> <td>1.44 $\times$ 10$^{-3}$</td> <td>1.47 $\times$ 10$^{-3}$</td> </tr> <tr> <td>CTAB</td> <td>3.61 $\times$ 10$^{-3}$</td> <td>0.283 $\times$ 10$^{-3}$</td> <td>2.01 $\times$ 10$^{-3}$</td> </tr> <tr> <td>C$_{12}$E$_7$</td> <td>3.35 $\times$ 10$^{-3}$</td> <td>1.62 $\times$ 10$^{-3}$</td> <td>2.05 $\times$ 10$^{-3}$</td> </tr> </tbody> </table> $^a$The apparent second-order rate constants are calculated from the observed pseudo-first-order rate constants by dividing the latter by the overall concentration of 2. $^b$[14] = 2 $\times$ 10$^{-3}$ M; [2] = 2.0 $\times$ 10$^{-3}$ M. $^c$All solutions contain 1.0 $\times$ 10$^{-4}$ M EDTA in order to suppress catalysis by trace amounts of metal ions. The concentration of surfactant is 7.8 mM above the cmc of the particular amphiphile under reaction conditions.

**TABLE 4** Analysis Using the Pseudophase Model: Partition Coefficients for 2 over CTAB or SDS Micelles and Water, and Second-Order Rate Constants for the Diels-Alder Reaction of 14f and 14g with 2 in CTAB and SDS Micelles at 25°C <table> <thead> <tr> <th>Surfactant</th> <th>Dienophile</th> <th>$P_2$ (± 10%)</th> <th>$k_m$ (M$^{-1}$ s$^{-1}$) (± 10%)</th> </tr> </thead> <tbody> <tr> <td>CTAB</td> <td>14f</td> <td>65$^a$</td> <td>5.9 $\times$ 10$^{-6}$</td> </tr> <tr> <td>SDS</td> <td>14g</td> <td>49$^a$</td> <td>3.1 $\times$ 10$^{-5}$</td> </tr> </tbody> </table> $^a$Corrected data; see Ref. 95. Source: Data from Ref. 94.

Information about the local microenvironment experienced by the Diels-Alder reactants was obtained from analysis of the endo-exo ratio of the reaction between 14c and 2 in surfactant solution and in a number of different organic and aqueous media [94] (see also Section III.B). The results of the study clearly point toward a waterlike environment for the Diels-Alder reaction in the presence of micelles. The inhibitory effect of micelles is suggested to result from the fact that diene and dienophile are on average located in different parts of the micelles. The diene seems to prefer the hydrophobic center of the micelle, whereas the dienophile has a stronger affinity for the Stern region. 
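Under the complete-binding assumption behind Eq. (2), 1/k_app is linear in the concentration of micellized surfactant, so P_2 and k_m follow from the intercept and slope of a plot of 1/k_app against ([S] − cmc). The sketch below illustrates such an analysis on synthetic data; the parameter values are invented for illustration and are not the experimental data underlying Table 4.

```python
import numpy as np

# Sketch: extracting P_2 and k_m from Eq. (2).  With the dienophile fully
# micelle-bound, 1/k_app is linear in the micellized-surfactant concentration:
#   1/k_app ~ 1/(k_m * P_2) + ([S] - cmc) * V_mol_S / k_m
# Synthetic, invented data are used here purely to illustrate the fit.

V_mol_S = 0.25          # L/mol, assumed molar volume of micellized surfactant
cmc = 1.0e-3            # M, assumed

# "true" parameters used to generate the synthetic data
k_m_true, P2_true = 3.0e-5, 50.0

S = np.array([2e-3, 5e-3, 10e-3, 20e-3, 50e-3])             # [surfactant], M
inv_k_app = 1.0/(k_m_true*P2_true) + (S - cmc)*V_mol_S/k_m_true
inv_k_app *= 1.0 + 0.02*np.random.default_rng(0).standard_normal(S.size)  # noise

slope, intercept = np.polyfit(S - cmc, inv_k_app, 1)
k_m_fit = V_mol_S / slope
P2_fit = slope / (V_mol_S * intercept)

print(f"k_m = {k_m_fit:.2e} M^-1 s^-1   P_2 = {P2_fit:.1f}")
```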
Evidence comes from 1H-NMR relaxation time studies in which paramagnetic ions are added to the micellar solutions [38,94]. Multivalent ions were used with a charge opposite to that of the surfactant headgroup, ensuring strong binding of these species to the Stern region of the micelles. As these paramagnetic ions enhance the relaxation of the protons in their vicinity, species bound to the Stern region will experience a more enhanced rate of relaxation than those residing in the core of the micelle. Comparison of Fig. 10 and Fig. 11 indeed demonstrates that the relaxation rate enhancement experienced by the diene is significantly smaller than that experienced by the dienophile. In conclusion, the fact that micelles have a rather limited influence on the rate of bimolecular as well as retro and intramolecular Diels-Alder reactions suggests (1) that the micellar medium experienced by the reactants is not too different from water and (2) that the concentrating effect of the micelles on the reactants is not very efficient. The latter effect is probably a result of the fact that diene and dienophile prefer different binding sites in the micelle.

**FIG. 10** Paramagnetic ion-induced spin-lattice relaxation rates (r_p) of the protons of 14c (a) and 14g (b) in SDS solution and of SDS in the presence of 14c or 14g, normalized to r_p for the surfactant α-CH. The solutions contained 50 mM SDS, 8 mM 14c or 14g, and 0 or 0.2 mM DyCl_3 and 0 or 0.6 mM cyclen. (Data from Ref. 94.)

**FIG. 11** Paramagnetic ion-induced spin-lattice relaxation rates (r_p) of the protons of 2 in CTAB, SDS, or Zn(DS)_2 solution and of these surfactants in the presence of 2, normalized to r_p for the surfactant α-CH. The solutions contained 25 mM Zn(DS)_2, 50 mM CTAB or SDS, 3 mM 2, and 0 or 0.4 mM [Cu(EDTA)]$^{2-}$ for CTAB solutions and 0 or 0.2 mM Cu(NO_3)_2 for SDS and Zn(DS)_2 solutions. (Data from Ref. 94.)

B. Effect of Micelles on the Endo-Exo Selectivity

Few detailed studies have been performed regarding micellar effects on endo-exo selectivities. Diego-Castro and Hailes [96] have studied the influence of micelles on the Diels-Alder reaction of cyclopentadiene with several alkyl acrylates of different chain lengths (methyl, ethyl, pentyl, heptyl, and nonyl). Endo-exo ratios in micellar media were strikingly similar to those in water irrespective of the length of the alkyl group in the dienophile. Unfortunately, the reactions were performed using a surfactant concentration close to the cmc, where solubilization of the reactants by the micelles is rather inefficient and the reaction is more likely to take place in bulk water than in the micelles. Braun, Schuster, and Sauer [97] have studied the endo-exo ratio of the reaction of cyclopentadiene with acrylonitrile and butyl acrylate in micellar media. The endo-exo ratios were significantly larger than in organic solvents, which seems to point toward a highly polar micellar reaction medium. Unfortunately, no comparisons were made with the endo-exo selectivity in pure water. Otto et al. [94] have studied the effect of micelles of SDS, CTAB, and C12E7 on the endo-exo ratio of the Diels-Alder reaction of 14c and 2 (Fig. 9). 
TABLE 5 Endo-Exo Product Ratios of the Diels-Alder Reaction of 14c with 2 in Surfactant Solutions Compared with Water and Organic Solvents <table> <thead> <tr> <th>Medium</th> <th>%Endo-%exo</th> </tr> </thead> <tbody> <tr> <td>100 mM CTAB</td> <td>86-14</td> </tr> <tr> <td>100 mM SDS</td> <td>88-12</td> </tr> <tr> <td>100 mM C12E7</td> <td>85-15</td> </tr> <tr> <td>Water</td> <td>84-16</td> </tr> <tr> <td>Ethanol</td> <td>77-23</td> </tr> <tr> <td>Acetonitrile</td> <td>67-33</td> </tr> </tbody> </table> Comparison of the results with those obtained for organic solvents and pure water (Table 5) demonstrates that the beneficial solvent effect of water is still present in the micelle-mediated reaction. In summary, endo-exo selectivities in micellar media tend to be comparable to those in pure water [89] and significantly larger than those in organic solvents. Apparently, surfactants can be used in order to improve the solubility of the Diels-Alder reactants in water, without significant deterioration of the selectivity as compared with pure water. Interestingly, in microemulsions the endo-exo selectivity is reduced significantly [89,98]. C. Micellar Effects on the Regioselectivity Significant work in this area has been carried out by Jaeger et al. An interesting issue that was addressed in the early days of micellar catalysis involves the question of how binding to specific sites in micelles could affect the stereochemistry of the reactions. For example, extensive structural changes in substrates were expected to influence the depth of penetration of the substrate into the micellar core with a concomitant change in the efficiency of the micellar catalysis. This expectation was not borne out in practice [99,100]. In fact, one could ask how “micellar binding sites” can be defined with sufficient precision to allow conclusions about the details of the relevant microenvironment and orientation of the substrate. In view of the micellar structure, it is more appropriate to consider a range of binding situations of small differences in Gibbs energy of binding and involving a range of substrate orientations. Most substrates in micelle-catalyzed reactions contain at least one polar substituent that prefers to bind at or close to the micellar surface and at least partly in direct contact with water. Solely apolar molecules, such as alkanes, will preferentially bind in the hydrophobic core of the micelle, assuming orientations that lead to a minimal disturbance of the chain packing of the surfactant molecules. Jaeger et al. [101] examined how monohalogenation of alkyl phenyl ethers CnH20OR (R = n-C3H7, n-C6H13, and n-C12H25) by chlorine and bromine in micellar solutions of SDS and in vesicular solutions to give 1,4-XC6H4OR and 2-XC6H4OR exhibits ortho/para ratios and reaction rates different from those in aqueous buffer solutions in the absence of surfactants. Indeed, in the micelles the ortho/para ratio decreases with increasing length of R, whereas the second-order rate constant decreases in the series. These regioselectivity and kinetic data can be rationalized by assuming different solubilization sites for the aromatic ethers depending on the length of the R substituent. These differences lead to different reaction environments and concomitant kinetic differences. Lengthening of R is proposed to lead to solubilization “deeper” in the micelle and changes in the ortho/para preference. In another series of studies, Jaeger et al. 
examined regioselectivity control of Diels-Alder reactions for cases in which the diene or both the diene and dienophile were amphiphilic molecules themselves. In a Diels-Alder process involving a cationic surfactant 1,3-diene with a neutral nonsurfactant dienophile, the orientational effects within the micellar aggregates were not sufficiently strong to overcome the intrinsically preferred regioselectivity of the reaction [102]. Modest regioselectivity was found for a Diels-Alder reaction of another cationic surfactant diene with cationic surfactant dienophiles [103,104]. The reactions were performed at 100°C, most likely decreasing the organizational abilities of the aqueous aggregate compared with those at lower temperatures. A substantially larger regioselectivity [105] was found in a study employing amphiphilic diene 16 (cmc = 1.0 × 10⁻⁴ M) and amphiphilic dienophile 17 (cmc = 4.4 × 10⁻⁵ M) (Fig. 12). The cycloadducts 18 and 19 were formed, which were separated by preparative reverse-phase HPLC and characterized by ¹H-NMR spectroscopy. Since the substituents at carbons 1 and 2 in 17 are close to being electronically and sterically equivalent with respect to the dienophile reaction center, no regiochemical preference is anticipated in the absence of interfacial orientational effects in the mixed micelles formed from 16 and 17. Evidence for this assumption was also obtained from an analysis of the regioselectivity of the Diels-Alder reaction of 20 and 21 in toluene. As expected, the two analogous cycloadducts were obtained in equal amounts. Interestingly, the reactions of 16 with 17 at concentrations above their cmc values gave an 18:19 ratio of 6.6:1. Therefore it is clear that interfacial and related orientational effects that result from surfactant aggregation can induce significant regioselectivity in a Diels-Alder reaction in aqueous solution.

D. Micellar Effects on the Enantioselectivity

Recently, a report appeared that described the first Diels-Alder reaction in aqueous chiral micellar media [106]. The novel (S)-leucine-derived chiral micellar amphiphile 22 was used as a catalyst for the Diels-Alder reaction of cyclopentadiene with n-nonyl acrylate (23) (Fig. 13). Preferential formation of the R-endo isomer was observed. Using a surfactant concentration of 11 mg L\(^{-1}\) and in the presence of 4.86 M LiCl, the yield was 75%, with an endo/exo ratio of 2.2 and an enantioselectivity of 15% (R). This result may be compared with the maximum enantioselectivity (21%) found for Diels-Alder reactions in the presence of cyclodextrins. In the absence of surfactant, the reaction in water gave a yield of 70% and an endo/exo ratio of 1.7. Further optimization of the structure of the chiral micellar catalyst might well lead to improved enantioselectivities. In this context it may be noticed that aqueous Diels-Alder reactions catalyzed by chiral Lewis acids may exhibit enantioselectivities up to 74% [36,37].

E. Effects of Micelles with Catalytically Active Counterions

The most efficient means of accelerating Diels-Alder reactions is catalysis by Lewis acids. In aqueous media this process is hampered by the strong interaction of the catalysts with water [62]. However, one example has been reported where this difficulty was overcome by modification of the dienophiles so that they can form a chelate with the catalyst ions (Fig. 9) [35-37]. The reaction of these dienophiles with cyclopentadiene in the absence of Lewis acid catalysts has been described in Section III.A. 
In that case introduction of micelles into the aqueous reaction mixture induced a modest retardation of the reaction. Micellar catalysis of this reaction in combination with Lewis acid catalysis has been studied in detail [94]. The dodecyl sulfate surfactants Co(DS)\(_2\), Ni(DS)\(_2\), Cu(DS)\(_2\), and Zn(DS)\(_2\) containing catalytically active counterions are extremely potent catalysts for the Diels-Alder reaction between 14 and 2. Figure 14 shows the dependence of the rates of the Diels-Alder reactions of 14c, 14f, and 14g with 2 on the concentration of Cu(DS)\(_2\). For all three dienophiles the apparent second-order rate constant for their reaction with 2 increases dramatically when the concentration of Cu(DS)$_2$ reaches the cmc (1.11 mM). Beyond the cmc, the dependence of the rate on the surfactant concentration is subject to two counteractive influences. At higher surfactant concentration, a larger fraction of dienophile will be bound to the micelle, where it reacts faster than in bulk water, resulting in an increase in the rate of the reaction. At the same time, the concentration of diene in the micellar pseudophase will drop with increasing surfactant concentration due to the increase in the volume of the micellar pseudophase. At higher surfactant concentrations the dienophile will be nearly completely bound to the micelles and the dilution effect will start to dominate the behavior. Together, these two effects result in the appearance of a rate maximum at a specific concentration of surfactant that is typical for micelle-catalyzed bimolecular reactions (see also Fig. 8). The position of the maximum depends primarily on the micelle-water partition coefficients of diene and dienophile. Interestingly, the acceleration relative to the reaction in organic media in the absence of catalyst approaches enzymelike magnitudes: compared with the process in acetonitrile (second-order rate constant = 1.40 $\times$ 10$^{-5}$ M$^{-1}$ s$^{-1}$), Cu(DS)$_2$ micelles accelerate the Diels-Alder reaction between 14a and 2 by a factor of 1.8 $\times$ 10$^{6}$. Also the effects of cationic (CTAB) and non ionic (C$_{12}$E$_7$) surfactants on the Cu$^{2+}$-catalyzed reaction have been studied. However, these systems were much less efficient than Cu(DS)$_2$, suggesting that a local high concentration of catalyst ions in the Stern region of the micelles is a prerequisite for a highly efficient interaction with the dienophile. The essentially complete binding of 14g to the Cu(DS)$_2$ micelles allowed treatment of the kinetic data of Fig. 14 using the pseudophase model. Furthermore, complete binding of 14g to the copper ions was assumed, which was supported by ultraviolet-visible analysis [94]. Using Eq. (2), a Cu(DS)$_2$-water distribution coefficient for 2 of 86 was obtained [95]. The second-order rate constant for reaction in the micellar pseudophase was calculated to be 0.21 M$^{-1}$ s$^{-1}$. Comparison of this rate constant with those for the reaction in acetonitrile (0.472 M$^{-1}$ s$^{-1}$) and ethanol (0.309 M$^{-1}$ s$^{-1}$) seems to indicate a relatively apolar medium for the Diels-Alder reaction. This conclusion is hard to reconcile with the ionic character of two of the three reaction partners involved. More insight into the local environment for the catalyzed reaction was obtained from the influence of substituents on the rate of this process in micellar and in different aqueous and organic solvents. 
The Hammett ρ value in Cu(DS)$_2$ solution was found to resemble closely that in aqueous solution rather than those in organic solvents, suggesting an aqueous microenvironment for the reaction [94]. It appears that the outcome of the analysis using the pseudophase model (a rather apolar reaction environment) is not in agreement with experimental observations (an aqueous reaction environment). Apparently, the assumptions of the pseudophase model are not valid for the Diels-Alder reaction studied. In particular, the treatment of the micellar pseudophase as a homogeneous “solution” might not be warranted. As noted in Section III.A, there are strong indications that the diene and the dienophile reside on average in different parts of the micelle, the diene preferring the core and the dienophile the Stern region of the micelles. Additional paramagnetic $^1$H-NMR relaxation rate studies of the binding location of the reactants in Zn(DS)$_2$ micelles further support this suggestion [38,94]. Surely, spatial separation of diene and dienophile will impede their reaction. In summary, the use of anionic micelles with bivalent metal ions as catalytically active counterions can lead to accelerations of suitable Diels-Alder reactions of enzymelike magnitude. The high efficiency of these systems mainly results from the efficient interaction between dienophile and catalysts in the Stern region of the micelles, where both species are present in high local concentration. Even larger accelerations are anticipated upon modification of the diene so that this species also binds to the Stern region rather than in the core of the micelle. Examples of similar micellar systems have found application in synthetic organic chemistry [107]. IV. SUMMARY AND OUTLOOK It is now well established that many Diels-Alder reactions, both of normal electron demand and of inverse electron demand, can be substantially accelerated by using water as the reaction medium. Also, endo/exo ratios are usually improved for aqueous media. These findings had important implications for further extending the versatility of Diels-Alder reactions in organic synthesis and for providing a stimulus for detailed studies of medium effects on pericyclic reactions. These interesting developments called for studies of Diels-Alder reactions in micellar solutions. By concentrating the diene and dienophile in the micellar reaction volume, further enhancements were anticipated. Furthermore, solubilization of the Diels-Alder reaction partners in the micelles could offer a solution for improving the otherwise limited solubility of diene and dienophile in water. Finally, effects on the Diels-Alder stereochemistry were expected. Specific binding could lead to regioselectivity, whereas the use of chiral micelle-forming surfactants would provide a possibility for obtaining enantioselectivity in appropriate Diels-Alder processes. Studies have illustrated the potential power to bring about these appealing results. Micellar catalysis of Diels-Alder reactions has been pursued and could indeed induce significant accelerations. Examples have been shown in this chapter. However, it is a requisite that diene and dienophile bind to rather similar binding sites in the micelle. In the case of, for example, an apolar diene and a moderately polar dienophile, the diene will preferentially reside in the core of the micelle and encounters with the dienophile, preferentially sitting at the micellar surface, will be hampered. 
The overall result will then be micellar inhibition rather than catalysis. Extreme rate enhancements can be obtained by combining micellar and Lewis acid catalysis. However, a specially designed dienophile is required for such a catalytic process. Binding of dienes and dienophiles to micellar aggregates will certainly improve their solubilities in water and extend the potential for using aqueous reaction media for Diels-Alder reactions. The use of micelles with the aim of inducing favorable regioselectivity and enantioselectivity has had only modest success. However, it is anticipated that challenging developments in this area are possible through variation of the structural architectures of diene, dienophile, and micelle-forming amphiphile.

ACKNOWLEDGMENT

The authors gratefully acknowledge Miss H. E. Wolters for typing the manuscript.

REFERENCES

47. W. Blokzijl and J. B. F. N. Engberts, in ACS Sym-
95. In the original paper [94] Eq. (2) and the data calculated with it contained a small error. This has now been corrected. We gratefully acknowledge Prof. P. Walde for pointing this out.
Protocol converter synthesis V. Androutsopoulos, D.M. Brookes and T.J.W. Clarke Abstract: A system-on-a-chip is an interconnection of different pre-verified IP hardware blocks, which communicate using complex protocols. The integration of IP blocks requires some glue logic to interface otherwise incompatible datapaths. This glue logic is called a protocol converter and its manual design proves to be a tedious and time-consuming task. Automatic synthesis is therefore important, but for optimal system-level design it is necessary to consider not just the correctness, but also the quality (in terms of bandwidth and latency of data transfer) of the converter. A good solution to this problem will allow greater use of protocol-level abstraction as a design tool in system design and synthesis. Results are presented on automatic synthesis of a converter between two protocols. It is shown how converter logic which is bandwidth-optimal can be synthesised for datapaths with an arbitrary number of data ports each of which has arbitrary-size first-in-first-out (FIFO) storage. An extension of the product FSM converter synthesis algorithm to include FIFO data-paths is presented. In addition the converter bandwidth is identified as a mean cycle graph problem which is solved using maximum mean cycle graph algorithms. 1 Introduction In recent years the synthesis of hardware interfaces for protocol-oriented data input/output (I/O) has been a topic of research interest. Protocols represent a convenient abstraction for the specification of module I/O, and progress has been made in automatic synthesis of hardware interfaces from the corresponding protocol definitions. One key problem is the automatic synthesis of the necessary logic to connect together two hardware blocks with different I/O protocols. This interconnection logic, which comprises a datapath with possible data storage, and appropriate control, is called a protocol converter. Another key problem is the automatic synthesis of finite state machine (FSM) protocol controllers for frame-based protocols such as MPEG and ATM from regular grammar-type languages. These are control-dominated designs, which include large state machines. Expressing the protocols using compact regular grammar-type languages instead of FSMs significantly increases design efficiency. The second problem is related to the first problem in the following way. Merging the grammars’ describing two different protocols together and then synthesising the resulting grammar into an FSM implements a protocol controller. This is feasible because of an important property of an interface protocol: it is possible to compose new protocols by merging two existing ones and decompose a protocol into two new protocols by splitting an existing one. In this case, the newly formed grammar describes the protocol converter. In protocol converter synthesis the objective is to derive the grammar (or FSM) for the protocol controller that will synchronise the two incompatible modules. Additional issues arising in protocol converter synthesis are optimising the producer-consumer interconnecting buffer size, and performance (i.e. latency and bandwidth). This work focuses on protocol converter synthesis, and specifically on a technique that allows automatic synthesis with a wide variety of datapaths and bandwidth optimisation of the resulting converter. 
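Since bandwidth optimisation is the distinguishing goal here, and the abstract casts converter bandwidth as a maximum mean cycle problem, it is worth recalling what such a computation looks like. The sketch below shows Karp's maximum mean cycle algorithm for a generic weighted digraph; this is a standard algorithm offered as background, not necessarily the implementation used in this paper. Under the assumption that each transition of the (pruned) product FSM takes one clock cycle and carries a weight equal to the number of data items transferred, the maximum mean-weight cycle gives the best sustainable items-per-cycle figure.

```python
from math import inf

def max_mean_cycle(n, edges):
    """Karp's algorithm for the maximum mean-weight cycle.

    n     : number of vertices (0..n-1)
    edges : iterable of (u, v, w) directed edges with weight w
    Returns the maximum mean weight over all directed cycles, or None if
    the graph is acyclic.
    """
    # D[k][v] = maximum weight of a walk with exactly k edges ending at v,
    # starting from any vertex (equivalent to a zero-weight virtual source).
    D = [[-inf] * n for _ in range(n + 1)]
    for v in range(n):
        D[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] > -inf:
                D[k][v] = max(D[k][v], D[k - 1][u] + w)

    best = None
    for v in range(n):
        if D[n][v] == -inf:
            continue
        candidate = min((D[n][v] - D[k][v]) / (n - k)
                        for k in range(n) if D[k][v] > -inf)
        best = candidate if best is None else max(best, candidate)
    return best

# toy example: cycle 0->1->0 transfers 1 item per 2 cycles (mean 0.5),
# cycle 1->2->3->1 transfers 2 items per 3 cycles (mean ~0.67)
edges = [(0, 1, 1), (1, 0, 0), (1, 2, 1), (2, 3, 1), (3, 1, 0)]
print(max_mean_cycle(4, edges))   # -> 0.666...
```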
The main contribution of this work is an algorithm for automatic synthesis of the converter control logic that is optimal in terms of interface bandwidth for a wide variety of possible protocols and datapaths. The running time of the algorithm is polynomial in the size of the two protocols to be interfaced, when they are expressed as FSMs. We extend the elegant product FSM construction first published by [1] and later used by [2, 3]. The protocol specifications are modelled as two FSMs, and the converter FSM is derived from the product of the two FSMs. The contribution in this paper is three-fold:

1. We present a precise mathematical framework for expressing the problem of protocol converter synthesis which allows a class of performance issues to be addressed as graph-theoretic questions.
2. Optimal bandwidth control is discovered by finding the maximum profit-per-time path through the product of these FSMs, subject to the constraints. This is the first time that bandwidth, rather than latency, has been optimised in converter synthesis. We show in Section 4.6 that the two are not necessarily identical.
3. The product FSM construction of [1-3] is modified to allow the analysis of arbitrary first-in-first-out (FIFO) datapaths.

In general the necessary sizes of FIFOs in a system depend on global stochastic system properties, a topic we do not address. However, the storage necessary to optimise interface bandwidth, given that both protocols are transferring data as fast as they can, is a well determined problem that we do address. Our work can be used to determine the minimum datapath storage for an optimum-bandwidth interface.

2 Related work

Protocol controller synthesis for complex frame-based data communication has been considered in [4-6]. Seawright and Brewer [4] were the first to report the synthesis of hardware from such specifications. In their approach, regular grammar constructs are identified directly with hardware patterns, which were shown experimentally to perform better in terms of both area and delay than standard FSM synthesis. Seawright et al. [8] present a graphical user interface as part of their initial synthesis tool that explicitly uses the structure of the complex data (e.g. ATM cell) as input. This was later commercialised by Synopsys in the Synopsys Protocol Compiler (SPC) [9]. Oberg et al. [5] present a grammar and synthesis tool environment called protocol grammar (PROGRAM) which has the freedom to choose the best possible implementation in terms of area and throughput independent of port size specification. So, whereas SPC facilitates a clock-true description, PROGRAM specifies the whole sequence associated with the input and output, and port sizes are derived according to the throughput constraints posed in the grammar description. PROGRAM was later applied by other members of Oberg’s group in another approach called maths to asic (MASIC), to specify protocol glue logic (i.e. a protocol converter) between digital signal processor cores [10, 11]. Global control, configuration and timing (GLOCCT) is specified in a grammar notation in an attempt to automatically generate cyclic FSMs; this is also an extension of [2] and [3], but it achieves this manually through a third automaton which specifies the constraint on the converter. Other kinds of interface synthesis approaches are presented in [17–19]. Filo et al. 
[17] model concurrent inter-process communication using blocking and non-blocking messages with detailed timing constraints, and the synthesis aims to increase performance by making as many communications as possible non-blocking. Madsen and Hald [18] present an approach to interface synthesis based on an algebra which manipulates an abstract communication behaviour between two units by applying transformations that allow both data segmentation and data combination. Finally, Coussy et al. [19] present an IP integration methodology that deduces data exchange delay information during integration of the IP to a shared on-chip bus. It then uses this, together with further data ordering information, to generate a detailed bus-functional model of the IP for co-simulation.

3 Converter synthesis

This Section introduces the protocol converter synthesis algorithm using a pass-through datapath. Figure 1 shows how the required converter interfaces to two hardware blocks with protocols $P$ and $P'$ respectively. Each protocol has tuples specifying the binary value of control input lines ($I$ and $I'$), control output lines ($O$ and $O'$) and a data port (data) carrying arbitrary data items. It is assumed that the two data ports have the same type. The data port of $P$ is an output, and that of $P'$ is an input. The input to the synthesis algorithm is an FSM description of $P$ and $P'$. The output of the algorithm is a product FSM to implement a correct converter. The outputs of this FSM drive the control inputs $I$ and $I'$ of $P$ and $P'$. The inputs of this FSM are the control outputs $O$ and $O'$ of $P$ and $P'$. The data ports of the two protocols are connected together, and not to the converter.

3.1 Protocol specification

Passerone [3] has argued that regular expressions are more easily understood by designers than FSMs. Specification languages equivalent to FSMs have been proposed [7]. In this work we will assume that protocol compilation, for example from regular expressions to the equivalent finite state automata (FSA), is a separate problem, and consider optimal synthesis from an appropriate low-level protocol specification. It is nevertheless relevant to ask whether the specification that we choose is sufficiently powerful. The choice made here, to use protocols defined by deterministic finite automata, has the merit of theoretical simplicity, and covers a wide range of synchronous protocols. The extensions considered in Section 5.2 are all possible within this framework. The behaviour of a protocol at any one time is defined by the states of a set of Boolean input and output signals, together with a vector $D$, consisting of one integer for each data port, which determines whether the port is producing (positive) or consuming (negative) data items, and is otherwise zero. Formally we define the protocol behaviour to be over the product of the control input space $I$, control output space $O$, and $D = Z^p$. The number $p$ specifies the number of data ports, and is one in the case of a single data port, as in Fig. 1. In the normal case, each port produces or consumes only one item at a time, so the possible values of the components of $D$ are 1, −1 and 0, representing data production, consumption, and neither, respectively. The extensions considered later in Section 4.6 will use protocols which produce or consume more than one item. The product $\Sigma = I \times O \times D$ is called the protocol alphabet. The set of finite strings over this alphabet is, using conventional notation, $\Sigma^*$. 
Synchronous protocols can be defined as the subset of $\Sigma^*$ containing precisely those strings that are initial segments of traces accepted by the protocol. A protocol is thus formally a grammar. We use the set of grammars (and hence protocols) that can be generated by deterministic FSA. For convenience, without loss of generality, the deterministic FSA are transformed into the equivalent Mealy FSMs [7], with input and output spaces $(I \times I')$ and $(O \times O')$ respectively. We say that an FSA is deterministic if it can take no more than one transition for a given input and present state. The FSMs representing FSA are deterministic when both the next state and the output are uniquely defined for a given present state and input. In general, however, they are non-deterministic; if the next state is uniquely defined for a given present state, input and output, the FSM is called pseudo-nondeterministic, in that the corresponding FSA is deterministic. We therefore define protocols as pseudo-nondeterministic Mealy FSMs, equipped with an additional output function that determines the output $D$ from the protocol state and inputs. The converter synthesis algorithm can be understood by noting that since the FSMs corresponding to both protocols are in general pseudo-nondeterministic, the state of the FSM representing each protocol can be determined by a corresponding FSM in the controller. The product of these two FSMs thus contains complete information about the state of the system, and from this the required converter FSM can be derived. Since the datapath of the example shown in Fig. 1 contains no storage, the task of the converter FSM is to synchronise the times when the producer protocol produces data with the times when the consuming protocol consumes data. This has been called the ‘data correspondence problem’ in the literature. The following Section shows how this can be accomplished.

### 3.2 Synthesis algorithm

The source protocol $P$ is represented formally by a tuple which specifies a pseudo-nondeterministic Mealy FSM, and an additional function $d$, as in (1). The destination protocol $P'$ is represented in an identical way but with its components denoted by single primes. Throughout this Section we will assume that $P$ and $P'$ each have a single data port, so the functions $d$ and $d'$ return an integer which specifies whether the port acts as a producer or consumer of data:

$$P = (S, I, O, r \in S, T, d : I \times S \rightarrow Z)$$
$$T \subset (I \times S \times S \times O)$$ (1)

In (1) $S$ is the set of FSM states, with initial state $r$. $I$ and $O$ are the finite protocol input and output control spaces. An element of $I$ or $O$ thus represents the state of all the protocol hardware inputs or outputs at a given time, and is itself a binary tuple of arity equal to the number of inputs or outputs respectively. For convenience, in the algorithms that follow we do not explicitly decompose this tuple into its individual components representing the individual protocol control inputs and outputs. Thus, in the expression $\forall i \in I$ the variable $i$ enumerates all $2^k$ possible sets of control inputs on $k$ wires. $T$ is the set of FSM transitions. $(i, x, y, o) \in T$ represents a transition from state $x$ to state $y$ taken when the FSM input tuple is $i$, with output tuple $o$. In a convenient abuse of notation we write $(R, x, y, o) \in T$ when a transition is taken for a set of inputs $R \subseteq I$. 
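To fix ideas, a minimal Python encoding of the tuple in (1) for a toy producer protocol might look as follows. The representation (Boolean tuples for control wires, plain tuples for transitions) and the toy protocol itself are illustrative choices of ours, loosely modelled on the producer used in the example of Section 3.3, not a definition taken from the paper.

```python
# A concrete toy reading of the protocol tuple (1).  Control inputs/outputs are
# tuples of booleans (one per wire), transitions are (input, source, dest, output)
# tuples, and d maps (input, state) to +1 (produce), -1 (consume) or 0.
# This encoding and the toy protocol itself are illustrative assumptions only.

S = {"0", "1", "2"}                  # FSM states, reset state r = "0"
r = "0"
T = {                                # one control input wire, no control outputs
    ((False,), "0", "1", ()),
    ((True,),  "0", "1", ()),
    ((False,), "1", "1", ()),        # wait in state 1 while the input is low
    ((True,),  "1", "2", ()),        # move to the producing state when it is high
    ((False,), "2", "1", ()),
    ((True,),  "2", "1", ()),
}
d = lambda i, s: 1 if s == "2" else 0   # one data item is produced in state 2
```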
The Mealy FSM behaviour is captured by having multiple transitions from a state, each labelled with a different output tuple, and with possibly the same destination state. The conventional label on the transition $(R, x, y, o) \in T$ of the FSM from $x$ to $y$ would be $R/o$. In addition, $d$ is the protocol data function that specifies data transfer for each state. We assume the protocol has a single data port, in which case $d$ will be one for a state that produces data, and −1 for a state that consumes data. $d$ will be zero for a state that both produces and consumes data or a state that neither produces nor consumes data. Note that $d$ can in general depend on $I$ as well as $S$. The FSM is input non-deterministic, so transition predicates $R$ from a given state to different destination states need not be disjoint. However, the FSM is output deterministic, so where this is the case the corresponding outputs must be different. We now define formally the construction of a product FSM from the two FSMs representing the two protocols connected by a converter. Throughout this construction we use no prime, a single prime and a double prime to designate quantities in the first protocol FSM, the second protocol FSM and the product FSM respectively. Given two protocols $P$ and $P'$, the corresponding product FSM $P'' = P \times P'$ is defined in (2):

$$P = (S, I, O, r \in S, T, d)$$
$$P' = (S', I', O', r', T', d')$$
$$P \times P' = (S \times S', I \times I', O \times O', (r, r'), T'')$$ (2)

where $T''$ is defined as:

$$\langle (i, i'), (s, s'), (t, t'), (o, o') \rangle \in T'' \Leftrightarrow (i, s, t, o) \in T \text{ and } (i', s', t', o') \in T'$$ (3)

Informally, the execution of this non-deterministic product machine represents all possible interleavings of the behaviours of the two protocols. A correct converter must determine values of $I$ and $I'$ at all times to control the product FSM state so that datapath constraints are met. This condition is ensured by pruning from the product FSM all states which violate datapath constraints, or may lead to future violation. The synthesis algorithm can now be described as four steps, each operating on the product FSM $P''$ constructed from $P$ and $P'$ as in (2). By construction this product machine tracks the state of both $P$ and $P'$; however, it does not ensure that datapath constraints are met. Step 1 constructs a subset of the product machine $P''$ in which all transitions which would violate the datapath constraints (overwrite a data item before it is consumed or consume data before it has been produced) have been removed. Step 2 ensures that all states or transitions that could lead to datapath constraint violation are removed, recursively. In general the protocol outputs $O$ and $O'$ can be arbitrary, whereas the protocol inputs $I$ and $I'$ may be controlled to ensure datapath constraints. Therefore, a good path must exist for every set of outputs and at least one set of inputs. By construction, at the end of Step 2, all transitions in the product machine represent guaranteed safe paths for the two protocols to follow; however, there may be more than one such safe path. The product machine thus represents all possible ways of controlling the two protocols that are feasible. Step 3 therefore chooses among these in such a way that data transfer bandwidth is optimised. Finally, in step 4 the converter control FSM is generated, with the required protocol inputs as its outputs and the given protocol outputs as its inputs. 
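A minimal sketch of the product construction in (2) and (3), using the same lightweight encoding as the previous sketch: a product transition exists exactly when both component transitions exist, and the product data function is the sum of the component data functions, matching the definition of d'' given in the formal algorithm that follows.

```python
# Sketch of the product FSM construction in (2) and (3).  A protocol is taken
# here as a set of states, a set of transitions (input, source, dest, output),
# and a data function d(input, state) -> {+1, -1, 0}; the encoding is ours.

def product_fsm(S, T, d, S2, T2, d2):
    """Return (S'', T'', d'') for P'' = P x P'.
    A product transition exists iff both component transitions exist (Eq. (3)),
    and d'' is the sum of the component data functions."""
    S_prod = {(s, s2) for s in S for s2 in S2}
    T_prod = {((i, i2), (s, s2), (t, t2), (o, o2))
              for (i, s, t, o) in T
              for (i2, s2, t2, o2) in T2}
    d_prod = lambda i, s: d(i[0], s[0]) + d2(i[1], s[1])
    return S_prod, T_prod, d_prod
```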
Formally:

Input: Protocols $P$ and $P'$.

Output: Converter FSM with control input space $O \times O'$ and control output space $I \times I'$ that implements a converter between $P$ and $P'$, if one exists.

Notation: We define set variables $F_s \subseteq S \times S'$ and $F_t \subseteq T''$. We say that states and transitions of the product machine are dead if they are in $F_s$ or $F_t$ respectively. Otherwise they are alive. Two auxiliary functions will simplify the description of the algorithm:

$$\Pi(t) = \{ (i, s, t, o) \mid \exists\, i, s, o \text{ such that } (i, s, t, o) \in T'' \}$$
$$\Delta(s, o) = \{ (i, s, t, o) \mid \exists\, i, t \text{ such that } (i, s, t, o) \in T'' \}$$

Thus, $\Pi(t)$ defines the set of transitions from immediate predecessor states to state $t$ in the product FSM $P''$, and $\Delta(s, o)$ defines the set of transitions in $P''$ from state $s$ with output $o$. We define $s'' = (s, s')$, $i'' = (i, i')$ and

$$d''(i'', s'') = d(i, s) + d'(i', s')$$

Algorithm:

Step 1a: $P'' = P \times P'$.

Step 1b: $F_s := \{ s'' \mid \forall i'',\ d''(i'', s'') \neq 0 \}$. This makes states that violate the datapath constraints dead. It is also necessary to deal with transitions that violate this constraint in states that are not dead, so we set:

$$F_t := \{ (i'', s'', t'', o'') \in T'' \mid d''(i'', s'') \neq 0 \}$$

In step 1 of the algorithm, the FSM protocols $P$ and $P'$ are merged together by taking their product. The product FSM is generated recursively by starting from no states and progressively adding states using a depth-first search (DFS) strategy similar to that of [2] and [3]. The difference to the construction in [2] and [3] is that a cyclic FSM is constructed. Because of the properties of DFS on cyclic graphs, the product graph cannot be pruned at the same time as performing DFS, as in the case of [2] and [3]. So further steps (steps 2 and 3) are required for the final pruned FSM. Two data structures are created to assist product state machine computation: a stack and an FSM. The stack is a last-in first-out data structure which timestamps each state. A state is pushed onto the stack when it is first created and popped from the stack when the search finishes examining its adjacency list. It therefore prevents endless computation. Every time a new state is pushed onto the stack, the state becomes the root of a new tree in the depth-first forest. A transition from a state to a state earlier in the stack is called a backward transition because the resulting path is a cycle. The FSM data structure is a cache which stores states that have been explored during DFS. It is used to prevent the algorithm from re-exploring states (hence paths) which have already been explored, thus resulting in computational savings. Transitions which point to states on the FSM are called forward transitions. The FSM will eventually include all states in the product FSM. Figure 2 illustrates a DFS search on an arbitrary graph structure. The states are annotated by time added to the stack / time removed from the stack (or time added to the FSM). The transitions are labelled B or F according to whether they are backward or forward transitions. During DFS, the product states are unrolled so that states are uniquely defined by $s''$ and the accumulated value of $d''(i'', s'')$. A negative value of this integer indicates a violation of causality (i.e. it implies data received which has not yet been sent). 
A positive one implies that data has been lost because no storage is available on the datapath. During the DFS, states which violate data dependencies in this way are marked dead on the FSM, and no further exploration is performed from them. Step 2a: Make dead all \( P'' \) transitions that lead to dead states. \[ F_t := F_t \cup \bigcup_{t \in F_s} \Pi(t) \] Step 2b: Make dead all \( P'' \) states that have transitions that are all dead for some given output: \[ (\Delta(s,o) \neq \emptyset) \text{ and } (\Delta(s,o) \subseteq F_t) \;\Rightarrow\; F_s := F_s \cup \{ s \} \] (This condition means that there is no protocol input that can stop a potential transition to a dead state from \( s \), and therefore \( s \) must become dead.) Step 2c: Repeat Steps 2a and 2b until \( F_s \) and \( F_t \) do not change. The output from step 2 is a pruned product FSM \( P''_u \), equal to \( P'' \) with \( F_t \) and \( F_s \) deleted from the transitions and states respectively.

Fig. 2 DFS on a digraph

After forming the product FSM, it is necessary to remove all unsafe states and transitions which lead to the violation of datapath constraints. This operation can be performed either through an iterative or a backtracking procedure. The iterative and backtracking approaches use an adjacency list representation of $P''_u$, where each state $s_i$ has a list of all successor states $\text{AdjOut}[s_i]$. The backtracking approach also uses an adjacency list where each state $s_i$ has a list of all predecessor states $\text{AdjIn}[s_i]$. $\text{AdjIn}[s_i]$ is determined from the transpose of $P''_u$. This is the graph $(P''_u)^T = (S''_u, (T''_u)^T)$, where $S''_u$ represents the set of states in $P''_u$ and $(T''_u)^T$ the set of transitions with their direction reversed. Removing unsafe states through iteration requires visiting all states which are not in $F_s$ to determine whether their transitions are all dead for some given output; $F_s$ and $F_t$ are updated until no further change occurs. Step 3: Resolve the output non-determinism in $P''_u$. Call the new FSM, with transitions deleted to resolve output non-determinism, $P''_o$. The condition for such a choice is: $$\exists\, i, j \in I \times I',\; s, t, u \in S \times S',\; o \in O \times O' \text{ such that } i \neq j,\; t \neq u,\; (i, s, t, o) \in T''_u \text{ and } (j, s, u, o) \in T''_u$$ In the case of simple protocols it is possible to resolve this by assuming one item is transferred in each non-trivial cycle of the product FSM transition graph and calculating the minimum cycle length back to $s$ through either $t$ or $u$. The shorter path is chosen. This optimises data transfer latency. Section 4 describes a more general way to make this choice. ### 3.3 Example of synthesis algorithm This example demonstrates the synthesis of a converter between two protocols $P$ and $P'$. Figure 4 shows the FSM representation of the two protocols, a producer $P$ and a consumer $P'$. In this case, the data transfer function $d$ is independent of $I$, so data transfer ($d = 1$ and $-1$ respectively) is indicated by appending a star to the state label. Both protocols transfer data in state 2, and have reset state 0. $P$ has one control input, with values indicated on transitions. Unlabelled transitions are taken unconditionally. In this case $P$ is input deterministic. 
The protocol will produce a new data item every two or more cycles, depending on the input in state 1. In contrast, $P'$ has no input. Its output is indicated on transitions in the form ‘/output’. The output in state 3 must be observed to determine whether the path from state 3 to state 2 takes one or two cycles, and this path is not controllable by the converter. The product FSM $P'' = P \times P'$ has 12 states, five of which are shown in Fig. 5. The forks in $P''$ depend on either the output of $P'$, which the converter cannot control, or the input of $P$, which the converter can control. The four transitions from state (1,3) thus split into two groups, depending on the $P'$ output. Within each group, the transition taken is controllable. The two states with dotted edges fail in step 1 due to datapath constraints. The (2,2) state must clearly be in the converter FSM, since it is the only state in which data can be transferred. Inspection shows that states (1,3) and (1,1) are also needed. The state (0,0) is also needed, but not shown. All other states fail in step 2; for example, (0,3) has a transition to (1,2) if \( P' \) output is one, and therefore fails. In this example there is no non-determinism to resolve in step 3 because states (2,1) and (1,2) have been removed by datapath constraints. If this had not happened, a decision would be made in state (1,3) between transitions to (2,1) and (1,1), and between transitions to (2,2) and (1,2). 4 Protocol converter synthesis with FIFO datapaths This Section will show how converter controllers for a wide range of datapaths, characterised by the amount of FIFO storage in the datapath, can be synthesised. The controllers are optimal in the sense that for the specified datapath they deliver maximum bandwidth, and the synthesis algorithm is polynomial in the size of the pruned product FSM. The bandwidth optimisation does not rely on stochastic aspects of the system. 4.1 Datapath considerations In this Section we assume that the converter to be synthesised has a single port through which data flows unidirectionally. The extension to multiple ports, and bidirectional flows, is discussed in Section 4.6. In this case the datapath consists of a single FIFO, characterised by a maximum number of items stored. Two special cases are when this number is one or zero. A FIFO of length one is equivalent to a single buffer register, and a FIFO of length zero corresponds to the previous case of direct connection between the input and output ports. In order to minimise data latency it is assumed that data falls through the FIFO in zero cycles. Figure 6 shows a typical edge-triggered implementation for a single buffer register with this property; an equivalent construction can be used to implement zero-cycle fall-through from a one-cycle fall-through FIFO. 4.2 Control synthesis In this Section the algorithm of Section 3.2 is extended to allow arbitrary-sized FIFO datapaths. Assume a data FIFO size of \( N > 0 \). The converter FSM must in general keep track of the number of data items currently held in the FIFO. This is implemented by unrolling the product machine \( P'' \) from Section 3.2 \( N + 1 \) times, so that the extended product FSM \( P''_e \) has state space \( S''_e \) which is a product: \[ S''_e = (S \times S') \times \{0, \ldots, N\} \] (5) The last component of the state space represents the number of data items in the FIFO, and is called the size of the state. 
Transitions in \( P''_e \) are defined so that: \[ \bigl(i'', (s'', n), (t'', n + d''(i'', s'')), o''\bigr) \in T''_e \iff (i'', s'', t'', o'') \in T'' \] (6) where \( n \) is an integer whose value encodes the state of the FIFO in terms of the number of items it holds, subject to: \[ 0 \leq n \leq N \quad \text{and} \quad 0 \leq n + d''(i'', s'') \leq N \] These conditions enforce the datapath constraint that the FIFO must neither underflow nor overflow. The FIFO overflows when the number of items, \( n \), exceeds the capacity of the FIFO, \( N \). FIFO underflow implies a violation of causality, meaning that data is read from the FIFO which has not yet been written. \( P'' \) is replaced by \( P''_e \) in step 1, and step 1b is therefore no longer necessary. Step 2 proceeds exactly as before, to generate an output \( P''_{eu} \). Resolving non-determinism during step 3 is now more complex. We want to optimise the rate at which items are transferred, since this is the bandwidth of the converter. This is in general not the same as optimising the converter latency, i.e. the time between production and consumption of a given item of data. The converter bandwidth can be calculated without loss of generality over cycles in the extended product FSM. In any such cycle the number of data items produced and consumed is equal, and can be calculated as the sum of \( |d(s)| \) along the cycle. In general the cycle taken is protocol dependent, and cannot be predicted. It is, however, reasonable to optimise the converter so that at any time, if both producer and consumer run as fast as possible in the future, the data rate is maximum. The exact solution to this problem is presented in Section 4.3. 4.3 Maximum cycle mean calculation Determining the best transition when there is non-determinism in \( P''_{eu} \) at the end of step 2 is a modified case of the maximum cycle mean graph problem [20]. Let \( P''_{eu} = (S''_{eu}, I \times I', O \times O', T''_{eu}) \). Define a digraph \( (V, E) \) with vertices and edges taken from the \( P''_{eu} \) states and state transitions: \[ V = S''_{eu}, \qquad E = \{ (u, v) \mid \exists\, i, o \text{ such that } (i, u, v, o) \in T''_{eu} \} \] Assign a weight \( w(u, v) \) to edge \( (u, v) \in E \) equal to the maximum number of data items transferred by \( P \) in the transition: \[ w(u,v) = \max_{(i, u, v, o) \in T''_{eu}} d(i, u) \quad (7) \] The cycle mean is defined to be the sum of the edge weights divided by the path length, over the cycle. The maximum mean of weights over one cycle then corresponds to the maximum data transfer rate. There are a number of algorithms, surveyed in [20], that find the global maximum cycle mean efficiently. The requirement here is, however, to find a maximum cycle mean separately for every non-deterministic choice. The possible transitions for one such choice are restricted by having a given starting state, \( s \), and transition output value, \( o \). If cycles are restricted to those including a given transition, and restricted maximum cycle means calculated, the transition with the largest restricted maximum cycle mean should be chosen. Efficient solution of this problem has not been addressed directly in the literature, although algorithms to find the global maximum cycle mean exist, mostly based on the work of Karp [21]. Karp's algorithm contains a recurrence which finds \( D_k(s,v) \), the maximum summed weight of any path of length \( k \) starting in state \( s \) and ending in state \( v \). 
We use a modified version of this recurrence to find \( D_k(s, t, v) \), the maximum summed weight of any path of length \( k \) starting with states \( s \) and \( t \) and ending in state \( v \). This then allows us to compute the restricted maximum cycle means as follows: \[ D_k(s, t, v) = \max_{(u,v) \in E} \bigl( D_{k-1}(s, t, u) + w(u,v) \bigr) \] \[ D_1(s, t, t) = w(s,t), \qquad D_1(s, t, v) = -\infty \text{ for } v \neq t \quad (8) \] The maximum cycle mean through \( s \) in the direction of \( t \) can then be calculated as: \[ A(s,t) = \max_{k \in [1,n]} \frac{D_k(s,t,s)}{k} \] This calculation must be repeated for all non-deterministic transitions from \( s \) to \( t \), and the transition with the largest \( A \) chosen. The diagram in Fig. 7 presents how Karp's algorithm works by giving the table entries starting from the source \( s \). Each row (column) of circles corresponds to a row (column) of the table \( D \), where each row is identified by an integer and each column by a node. The symbol '–' represents \(-\infty\). The numbers just to the right of each circle represent the values stored at the corresponding table entries, e.g. \( D[2, c] = 9 \) and \( D[3, a] = -\infty \). There are two cycles in this graph: \( \langle s,a,b,c,s \rangle \) and \( \langle s,b,c,s \rangle \). The maximum cycle mean is then \( \max \{ (3+4+7+2)/4, (2+7+2)/3 \} = 4 \), and \( \langle s,a,b,c,s \rangle \) is the critical cycle. Figure 8 presents how Karp's algorithm is modified to determine the restricted maximum cycle means for the digraph in Fig. 7. Figure 8a shows the table entries starting with state \( s \) in the direction of state \( a \). There is one cycle in this graph: \( \langle s,a,b,c,s \rangle \), with maximum cycle mean \( (3+4+7+2)/4 \). Figure 8b shows the table entries starting with state \( s \) in the direction of state \( b \). There is one cycle in this graph: \( \langle s,b,c,s \rangle \), with maximum cycle mean \( (2+7+2)/3 \). ### 4.4 Time complexity Table 1 summarises the worst-case time complexity of this algorithm, in terms of the number of nodes, \( n \), and the number of transitions, \( m \), in the product state machine. Running time is dominated by step 3, due to the complexity of the repeated maximum cycle mean calculation. It should be noted that step 3 operates on a product FSM which has been pruned by step 2. The typical running time is therefore less than this worst-case figure. Further optimisation of step 3 can be implemented by noting the special status of self-transitions in \( P''_{eu} \). These must either have non-zero weights, in which case they comprise the maximum cycle mean path, or zero weights, in which case they cannot be part of any maximum mean cycle path and may be removed before calculation of maximum cycle means. ### 4.5 Evaluating the quality of the synthesised converters The digraph that is output from step 3 is not necessarily connected. We decompose the resulting digraph into individual subgraphs called strongly connected components (SCCs). An efficient algorithm based on two DFSs performs this decomposition in linear time \( \Theta(m+n) \) [22]. We use part of the decomposition algorithm in [22], which determines and then generates the sink SCC from the resulting digraph. This represents the set of possible behaviours into which the protocol converter eventually settles when a stimulus is applied. 
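As a concrete check of the restricted recurrence, the sketch below (illustrative only; the digraph, its weights and all names simply mirror the Fig. 7/8 example, and are not taken from any implementation in the paper) computes $A(s,t)$ for the two candidate first edges before we turn to the bandwidth figure used to assess converter quality.

```python
# A sketch of the restricted maximum cycle mean of Equation (8); the digraph,
# its weights and all names are illustrative (they mirror the Fig. 7/8 example).
import math

def restricted_max_cycle_mean(vertices, weights, s, t):
    """Maximum cycle mean over cycles through s whose first edge is (s, t)."""
    n = len(vertices)
    # D[k][v]: maximum summed weight of a path of length k that starts with the
    # edge (s, t) and ends in v; -inf when no such path exists.
    D = [{v: -math.inf for v in vertices} for _ in range(n + 1)]
    D[1][t] = weights[(s, t)]
    for k in range(2, n + 1):
        for (u, v), w in weights.items():
            if D[k - 1][u] > -math.inf:
                D[k][v] = max(D[k][v], D[k - 1][u] + w)
    # A(s, t): summed weight divided by cycle length, maximised over cycles back to s.
    means = [D[k][s] / k for k in range(1, n + 1) if D[k][s] > -math.inf]
    return max(means) if means else -math.inf

V = {"s", "a", "b", "c"}
W = {("s", "a"): 3, ("a", "b"): 4, ("b", "c"): 7, ("c", "s"): 2, ("s", "b"): 2}
print(restricted_max_cycle_mean(V, W, "s", "a"))  # (3 + 4 + 7 + 2) / 4 = 4.0
print(restricted_max_cycle_mean(V, W, "s", "b"))  # (2 + 7 + 2) / 3 = 3.67
```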
We define the average performance bandwidth of the synthesised converter as the number of items transferred per cycle, averaged over all possible cycles in the sink SCC of the synthesised converter FSM.

---

**Fig. 7** The entries for \( D \) and the arcs Karp's algorithm visits for a four-node digraph from source node \( s \)

**Fig. 8** The two tables produced by the restricted maximum cycle mean algorithm starting from \( s \)

- **Fig. 8a**: in the direction of state \( a \)
- **Fig. 8b**: in the direction of state \( b \)

**Table 1: Worst-case time complexity**

<table> <thead> <tr> <th>Step</th> <th>Complexity</th> </tr> </thead> <tbody> <tr> <td>Step 1</td> <td>\( \Theta(m) \)</td> </tr> <tr> <td>Step 2</td> <td>\( \Theta(m) \)</td> </tr> <tr> <td>Step 3</td> <td>\( \Theta(nm^2) \)</td> </tr> <tr> <td>Step 4</td> <td>\( \Theta(n) \)</td> </tr> </tbody> </table>

Evaluating the average bandwidth over all possible cycles requires determining the number of simple cycles in the SCC. DFS from an arbitrary node in the SCC can be used to detect all possible cycles. The problem with DFS is that it may detect the same cycle more than once, because the nodes constituting the cycle can be reached from more than one path. The number of cycles can therefore be determined by performing DFS from an arbitrary node and storing cycles which have already been detected, thus avoiding counting the same cycle a multiple number of times. The bandwidth for each cycle is then simply the number of items produced (or consumed) per cycle divided by the cycle length. The average bandwidth is equal to the sum of all individual cycle bandwidths divided by the number of cycles. 4.6 Elaboration The algorithm has been extended to a number of related protocol converter synthesis problems with small changes to the data transfer function \( d \). 1. Protocols with data ports of different widths. The values of \( d \) represent the width of the ports, in some unit equal to their greatest common divisor. For a more detailed discussion of this, addressing the issue of data reordering, see [23]. 2. Protocols with multiple data ports. The value of \( d \) must be an integer-valued vector, with one component for each data port. The producer and consumer data ports must correspond, and therefore the vectors for producer and consumer must be of the same length, and ordered to determine the correspondence between producer and consumer data ports. In order to maximise bandwidth with multiple data ports, an objective function is required that assigns to each value of \( d \) a positive scalar weight. This can then be used in the maximum cycle mean driven optimisation of Section 4.3. 3. Protocols with output and input ports. Ports must correspondingly match input and output ports on the other protocol, or no converter will be possible. The direction of a port is indicated by reversing the polarity of \( d \), and multiple ports are handled as above. Data-dependent protocols, for example where a specific data token is recognised by the two protocols, can be represented by adding appropriate protocol control signals. The presence of the recognised token on a data port would then drive an extra output or input signal on the corresponding protocol. 4.7 Experimental results Figure 9 shows an FSM representation of a burst transfer protocol. The protocol transfers data on entering states 1 and 2 (indicated by a star) and has reset state 0. It has two control signals: acknowledge and burst size. 
The acknowledge signal indicates data transfer when input, and data reception when output. The burst size signal indicates the size of the burst (in this case 1 or 2), which results in different protocol cycles. When output, its value is non-deterministic; when input, its value is determined by the synthesised converter. In experiments 1, 2 and 3, the burst transfer protocol of Fig. 9 is applied to both the producer \( P \) and the consumer \( P' \). The protocols differ from one another with respect to the direction of the control signals. In experiment 1, \( P \) consists of input burst size and acknowledge control signals. \( P' \) consists of an input burst size signal and an output acknowledge signal. In experiment 2, both \( P \) and \( P' \) consist of input acknowledge and output burst size signals. Experiment 2 is repeated in experiment 3, only this time converters are synthesised between protocols with incompatible port sizes, using the datapath architecture shown in Fig. 10. In experiment 4, we applied the methodology presented to model part of the communication interface between AMBA's high performance AHB bus master and low performance APB bus slave. In fact, the AHB bus has a maximum bus width of 1024 bits and a maximum transfer rate of 1 transfer per clock cycle. The maximum bus width of the APB is 32 bits and it has a maximum transfer rate of 1 transfer per 2 clock cycles. The AHB bus is modelled to use both a non-deterministic and a deterministic burst transfer mechanism (4/8/16 beat bursts), and the APB uses a back-to-back write transfer mechanism. A FIFO with a depth of 16 was chosen as the datapath. It is assumed that the busses are of the same width. The experiments are repeated for an AHB bus with wait states. Wait states are required in the case of bus contention. 4.7.1 Experiment 1: \( P \) resumes its data transfer when an acknowledge signal has been issued. This signal is controllable, which means that a protocol converter will exist in the absence of a FIFO datapath. The number of identical SCCs comprising the protocol converter increases linearly with increasing FIFO storage size. The average bandwidth therefore remains unaffected when adding a FIFO to the datapath. In step 3 of the algorithm, the non-determinism is resolved by choosing the edge with the maximum cycle mean. The selected transition optimises the rate at which items are transferred. For example, in Fig. 11 the state labelled (0,0,0) has to choose between three non-deterministic transitions, indicated with dotted edges, to states (1,1,0), (1,2,0) and (2,2,0). The maximum cycle mean calculation results in (1,1,0) being chosen, whereas in contrast minimum data transfer latency scheduling would result in state (2,2,0) being chosen. Resolving the non-determinacy with respect to the maximum cycle mean generates a converter with an average bandwidth of 0.33 data items per clock cycle, as opposed to one of 0.25 data items per clock cycle, which is the result of resolving the non-determinacy with respect to minimum data transfer latency. The former consists of three states and the latter of two states when \( N = 0 \) (i.e. with a direct connection between the data busses). 4.7.2 **Experiment 2**: Table 2 shows the results obtained for synthesising converters with varying FIFO sizes. In particular, the average bandwidth \( A \) is indicated. Increasing the FIFO size increases the state space, and thus increases the number of options (hence cycles) in the protocol converter. 
In this case, optimising with respect to minimum data transfer latency produces the same synthesis results, because the converter does not determine the burst size signals. The minimum data transfer latency converter was deduced by performing the Floyd–Warshall algorithm, which is \( O(n^3) \) [22] and solves the all-pairs shortest path problem, on \( P''_{eu} \), and labelling edges with the minimum cycle length. The scheduling heuristic that was used resolved the resulting non-determinism at the end of step 2 by choosing the transition which transfers data and minimises the cycle length. This decision results in optimising the data transfer latency (i.e. the waiting time for the data bus to assume new data) for both producer and consumer. The sizes of the converter FSMs are also indicated. After synthesis it is possible to roll up the converter FSM by using a separate counter to store the number of items in the FIFO queue, and implementing the states of \( P''_u \) rather than \( P''_{eu} \). In this case the FSM transitions depend on the value of the counter. Table 2 shows the converter FSM sizes for the unrolled and rolled up cases. 4.7.3 **Experiment 3**: Table 3 shows the results obtained for synthesising converters between protocols with specified port sizes \( d \) and \( d' \). The minimum number of registers in the datapath required to achieve optimum bandwidth, along with the unrolled FSM controller size, are indicated. The number of cycles in the sink SCC and the bandwidth \( A \) are also given. When \( d = d' \), either the producer or the consumer can be faster in a protocol cycle. Therefore, increasing the number of registers increases the state space, the number of cycles and the bandwidth. Increasing the number of registers beyond unity results in a negligible increase in bandwidth (indicating that \( A \) is not yet optimal), which becomes smaller with increasing queue size. When \( d < d' \), the consumer is always faster than the producer in a protocol cycle. The speed of the converter is restricted to that of the producer from reset because of the causality constraint (i.e. the FIFO cannot underflow). The result is a converter with one SCC. As expected, increasing the ratio \( d'/d \) increases the number of registers required to achieve optimum bandwidth. When \( d > d' \), the producer is always faster than the consumer in a protocol cycle. The speed of the converter is restricted to the speed of the consumer because of the FIFO overflow constraint. The resulting converter consists of a multiple number of SCCs. The sink SCC is identical to the only SCC in the \( d < d' \) cases. This is because the producer and consumer protocols are identical. The remaining SCCs in the converter are single-state SCCs, required in setting up the registers. Therefore, increasing the number of registers increases the number of single-state SCCs without affecting the behaviour of the sink SCC. 4.7.4 **Experiment 4:** Table 4 illustrates the results obtained for synthesising converters between the AMBA AHB and APB busses. The algorithm was run on a 3.2 GHz Xeon and the synthesis times are given. The times required to execute steps 1 and 2 are negligible compared to that required to execute step 3 of the algorithm. The numbers in brackets indicate the corresponding results for minimum data transfer latency scheduling. 
**Table 4: Synthesis of converters between the AMBA AHB and APB busses**

<table> <thead> <tr> <th>Experiment</th> <th>States in \( P''_{eu} \)</th> <th>Edges in \( P''_{eu} \)</th> <th>States in converter FSM</th> <th>Edges in converter FSM</th> <th>\( A \)</th> <th>Synthesis time</th> </tr> </thead> <tbody> <tr> <td>Non-deterministic AHB to APB</td> <td>961</td> <td>2101</td> <td>651</td> <td>731</td> <td>0.490</td> <td>4 min 21 s</td> </tr> <tr> <td>Non-deterministic AHB to APB with wait states</td> <td>1199</td> <td>4626</td> <td>805</td> <td>1581</td> <td>cannot be determined</td> <td>30 min 27 s</td> </tr> <tr> <td>Deterministic AHB to APB</td> <td>961</td> <td>2101</td> <td>403 (289)</td> <td>435 (321)</td> <td>0.490 (0.487)</td> <td>4 min 21 s</td> </tr> <tr> <td>Deterministic AHB to APB with wait states</td> <td>1199</td> <td>4626</td> <td>761 (1103)</td> <td>1489 (2172)</td> <td>cannot be determined</td> <td>30 min 27 s</td> </tr> </tbody> </table>

5 Conclusions We have defined bandwidth-optimal protocol converters, and posed the converter controller synthesis problem in a way that allows uniform treatment of a variety of datapaths. We have presented a synthesis algorithm that allows fast determination of bandwidth-optimal controllers. The synthesis algorithm has the capacity to solve, without structural change, a number of converter synthesis problems, as indicated in Section 4.6. The algorithm described in Section 4 can synthesise a converter which is optimal for any given datapath; iterative application would allow brute-force optimisation of the datapath storage. This paper makes a contribution beyond the previous work of [2, 3, 14] in two major ways: (i) unrolling the product state machine, as detailed in Section 4.2, is shown to model datapaths with arbitrary storage in a very elegant way; and (ii) protocol bandwidth is measured using restricted maximum cycle means. An algorithm is presented to calculate these and resolve controller non-determinacy accordingly to optimise bandwidth. Our work shows that the FSM construction first suggested by Akella and McMillan [1], and modified in [2, 3, 14], is a powerful technique for automatic synthesis. Its advantage is that, where applicable, it poses the synthesis data correspondence problem in a way that allows compact exploration of all possible solutions, and choice of the optimal solution. We have shown here that its range of applicability can be extended significantly, at the expense of a small increase in state space size. In terms of further work, we note that the calculation of maximum cycle means given here is not particularly elegant. A computational optimisation based on the typically sparse nature of \( d \) (i.e. it is zero in nearly all states) might be worthwhile. More interestingly, it would be useful to find a fast method that calculated all the required maximum cycle means simultaneously while reusing intermediate results whenever possible. Protocols as defined here, with multiple input and output ports, can be used to describe the temporal relationship between inputs and outputs in synchronous hardware blocks. This might extend the possible use of this technique to optimise multiple blocks of hardware simultaneously. We are investigating a further extension of the product FSM synthesis technique in which transitions of the protocol FSMs are annotated with integer time intervals, representing the time between entering the state and the transition being taken. This can in principle be treated with the algorithm described here, at the cost of state proliferation. 
We propose to reduce the complexity of this system by using a direct representation of time intervals in the product FSM. 6 References
Universal Prediction Distribution for Surrogate Models

Malek Ben Salem∗†, Olivier Roustant∗, Fabrice Gamboa‡, Lionel Tomaso†

May 10, 2019

HAL Id: emse-01239789 — https://hal-emse.ccsd.cnrs.fr/emse-01239789 (submitted on 23 Dec 2015)

∗EMSE Ecole des Mines de St-Etienne, UMR CNRS 6158, LIMOS, F-42023: 158 Cours Fauriel, Saint-Etienne. †ANSYS, Inc: 11 Avenue Albert Einstein, F-69100 Villeurbanne. ‡IMT Institut de Mathématiques de Toulouse: 118 route de Narbonne, 31062 Toulouse Cedex 9.

Abstract The use of surrogate models instead of computationally expensive simulation codes is very convenient in engineering. Roughly speaking, there are two kinds of surrogate models: the deterministic and the probabilistic ones. The latter are generally based on Gaussian assumptions. The main advantage of the probabilistic approach is that it provides a measure of uncertainty associated with the surrogate model in the whole space. This uncertainty is an efficient tool to construct strategies for various problems such as prediction enhancement, optimization or inversion. In this paper, we propose a universal method to define a measure of uncertainty suitable for any surrogate model, either deterministic or probabilistic. It relies on Cross-Validation (CV) sub-model predictions. This empirical distribution may be computed in much more general frames than the Gaussian one, so it is called the Universal Prediction distribution (UP distribution). It allows the definition of many sampling criteria. We give and study adaptive sampling techniques for global refinement and an extension of the so-called Efficient Global Optimization (EGO) algorithm. We also discuss the use of the UP distribution for inversion problems. The performances of these new algorithms are studied both on toy models and on an engineering design problem. keywords Surrogate models, Design of experiments, Bayesian optimization 1 Introduction Surrogate modeling techniques are widely used and studied in engineering and research. Their main purpose is to replace an expensive-to-evaluate function \( s \) by a simple response surface \( \hat{s} \), also called surrogate model or meta-model. Notice that \( s \) can be a computation-intensive simulation code. These surrogate models are based on a given training set of \( n \) observations \( z_j = (x_j, y_j) \) where \( 1 \leq j \leq n \) and \( y_j = s(x_j) \). The accuracy of the surrogate model relies, inter alia, on the relevance of the training set. The aim of surrogate modeling is generally to estimate some features of the function $s$ using $\hat{s}$. Of course one is looking for the best trade-off between a good accuracy of the feature estimation and the number of calls of $s$. Consequently, the design of experiments (DOE), that is the sampling of $(x_j)_{1 \leq j \leq n}$, is a crucial step and an active research field. 
There are two ways to sample: either drawing the training set $(x_j)_{1 \leq j \leq n}$ at once, or building it sequentially. Among the sequential techniques, some are based on surrogate models. They rely on the feature of $s$ that one wishes to estimate. Popular examples are the EGO algorithm [17] and the Stepwise Uncertainty Reduction (SUR) [3]. These two methods use Gaussian process regression, also called kriging. It is a widely used surrogate modeling technique. Its popularity is mainly due to its statistical nature and properties. Indeed, it is a Bayesian inference technique for functions. In this stochastic frame, it provides an estimate of the prediction error distribution. This distribution is the main tool in Gaussian surrogate sequential designs. For instance, it allows the introduction and the computation of different sampling criteria such as the Expected Improvement (EI) [17] or the Expected Feasibility (EF) [4]. Away from the Gaussian case, many surrogate models are also available and useful. Notice that none of them, including the Gaussian process surrogate model, is the best in all circumstances [14]. Classical surrogate models are, for instance, support vector machines [36], linear regression [5] and moving least squares [22]. More recently, mixtures of surrogates have been considered in [38, 13]. Nevertheless, these methods are generally not naturally embeddable in some stochastic frame. Hence, they do not provide any prediction error distribution. To overcome this drawback, several empirical design techniques have been discussed in the literature. These techniques are generally based on resampling methods such as bootstrap, jackknife, or cross-validation. For instance, Gazut et al. [10] and Jin et al. [15] consider a population of surrogate models constructed by resampling the available data using bootstrap or cross-validation. Then, they compute the empirical variance of the predictions of these surrogate models. Finally, they sample iteratively the point that maximizes the empirical variance in order to improve the accuracy of the prediction. To perform optimization, Kleijnen et al. [20] use a bootstrapped kriging variance instead of the kriging variance to compute the expected improvement. Their algorithm consists in maximizing the expected improvement computed through the bootstrapped kriging variance. However, most of these resampling-based design techniques lead to clustered designs [2, 15]. In this paper, we give a general way to build an empirical prediction distribution allowing sequential design strategies in a very broad frame. Its support is the set of all the predictions obtained by the cross-validation surrogate models. The novelty of our approach is that it provides a prediction uncertainty distribution. This allows a large set of sampling criteria. Furthermore, it leads naturally to non-clustered designs, as explained in Section 4. The paper is organized as follows. We start by presenting in Section 2 the background and notations. In Section 3 we introduce the Universal Prediction (UP) empirical distribution. In Sections 4 and 5, we use and study feature estimation and the corresponding sampling schemes built on the UP empirical distribution. Section 4 is devoted to the enhancement of the overall model accuracy. Section 5 concerns optimization. In Section 6, we study a real-life industrial case implementing the methodology developed in Section 4. Section 7 deals with the inversion problem. In Section 8, we conclude and discuss the possible extensions of our work. 
All proofs are postponed to Section 9. 2 Background and notations 2.1 General notation To begin with, let $s$ denote a real-valued function defined on $\mathbb{X}$, a nonempty compact subset of the Euclidean space $\mathbb{R}^p$ ($p \in \mathbb{N}^*$). In order to estimate $s$, we have at hand a sample of size $n$ ($n \geq 2$): $X_n = (x_1, \ldots, x_n)^\top$ with $x_j \in \mathbb{X}$, $j \in [1;n]$, and $Y_n = (y_1, \ldots, y_n)^\top$ where $y_j = s(x_j)$ for $j \in [1;n]$. We write $Y_n = s(X_n)$. Let $Z_n$ denote the observations: $Z_n := \{(x_j, y_j), j \in [1;n]\}$. Using $Z_n$, we build a surrogate model $\hat{s}_n$ that mimics the behaviour of $s$. For example, $\hat{s}_n$ can be a second order polynomial regression model. For $i \in \{1, \ldots, n\}$, we set $Z_{n,-i} := \{(x_j, y_j), j = 1, \ldots, n, j \neq i\}$, and so $\hat{s}_{n,-i}$ is the surrogate model obtained by using only the dataset $Z_{n,-i}$. We will call $\hat{s}_n$ the master surrogate model and $(\hat{s}_{n,-i})_{i=1 \ldots n}$ its sub-models. Further, let $d(\cdot,\cdot)$ denote a given distance on $\mathbb{R}^p$ (typically the Euclidean one). For $x \in \mathbb{X}$ and $A \subset \mathbb{X}$, we set $d_A(x) = \inf\{d(x, x') : x' \in A\}$, and if $A = \{x_1, \ldots, x_m\}$ is finite ($m \in \mathbb{N}^*$), for $i \in \{1, \ldots, m\}$ let $A_{-i}$ denote $\{x_j, j = 1, \ldots, m, j \neq i\}$. Finally, we set $d(A) = \max\{d_{A_{-i}}(x_i) : i = 1, \ldots, m\}$, the largest distance of an element of $A$ to its nearest neighbor. 2.2 Cross-validation Training an algorithm and evaluating its statistical performance on the same data yields an optimistic result [1]. It is well known that it is easy to overfit the data by including too many degrees of freedom and so inflate the fit statistics. The idea behind cross-validation (CV) is to estimate the risk of an algorithm by splitting the dataset once or several times. One part of the data (the training set) is used for training and the remaining one (the validation set) is used for estimating the risk of the algorithm. Simple validation, or hold-out [8], is hence a cross-validation technique. It relies on a single split of the data: one set is used as the training set and the second one is used as the validation set. Some other CV techniques consist in a repetitive generation of hold-out estimators with different data splittings [11]. One can cite, for instance, the Leave-One-Out Cross-Validation (LOO-CV) and the $K$-Fold Cross-Validation (KFCV). KFCV consists in dividing the data into $k$ subsets. Each subset plays the role of validation set while the remaining $k-1$ subsets are used together as the training set. The LOO-CV method is a particular case of KFCV with $k = n$. The sub-models $\hat{s}_{n,-i}$ introduced in paragraph 2.1 are used to compute the LOO estimator of the master surrogate model $\hat{s}_n$. In fact, the LOO errors are $\varepsilon_i = \hat{s}_{n,-i}(x_i) - y_i$. Notice that the sub-models are used to estimate a feature of the master surrogate model. In our study, we will be interested in the distribution of the local predictor for all $x \in \mathbb{X}$ ($x$ is not necessarily a design point), and we will also use the sub-models to estimate this feature. Indeed, this distribution will be estimated by using LOO-CV predictions, leading to the definition of the Universal Prediction (UP) distribution. 3 Universal Prediction distribution 3.1 Overview As discussed in the previous section, cross-validation is used as a method for estimating the prediction error of a given model. 
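As an illustration of these LOO-CV sub-models, here is a minimal sketch. It is not code from the paper: the quadratic least-squares fit only stands in for whatever surrogate model $\hat{s}$ is used, and all names are illustrative.

```python
# A minimal sketch of the LOO-CV sub-models s_hat_{n,-i} and the LOO errors
# eps_i of Section 2.2; the quadratic fit is only a stand-in surrogate.
import numpy as np

def fit_quadratic(X, y):
    """Return a callable surrogate fitted on (X, y); here X has a single column."""
    coeffs = np.polyfit(X.ravel(), y, deg=2)
    return lambda x: np.polyval(coeffs, np.asarray(x).ravel())

def loo_submodels(X, y, fit=fit_quadratic):
    """One sub-model per left-out observation: s_hat_{n,-i}."""
    return [fit(np.delete(X, i, axis=0), np.delete(y, i)) for i in range(len(y))]

# Small 1-D example (the function used for illustration in Section 3.2).
X = np.linspace(-3, 3, 7).reshape(-1, 1)
y = (10 * np.cos(2 * X.ravel()) + 15 - 5 * X.ravel() + X.ravel() ** 2) / 50
submodels = loo_submodels(X, y)
loo_errors = [float(submodels[i](X[i]) - y[i]) for i in range(len(y))]  # eps_i
```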
In our case, we introduce a novel use of cross-validation in order to estimate the local uncertainty of a surrogate model prediction. In fact, we assume that the CV errors are an approximation of the errors of the master model. The idea is to consider the CV predictions as realizations of \( \hat{s} \). Hence, for a given surrogate model \( \hat{s} \) and for any \( x \in \mathbb{X} \), the predictions \( \hat{s}_{n,-1}(x), \ldots, \hat{s}_{n,-n}(x) \) define an empirical distribution of \( \hat{s}(x) \) at \( x \). In the case of an interpolating surrogate model and a deterministic simulation code \( s \), it is natural to enforce a zero variance at design points. Consequently, when predicting at a design point \( x_i \) we neglect the prediction \( \hat{s}_{n,-i} \). This can be achieved by introducing weights on the empirical distribution. These weights avoid the pessimistic sub-model predictions that might occur in a region while the global surrogate model fits the data well in that region. Let \( \mu^{(0)}_{(n,x)} \) be the weighted empirical distribution based on the \( n \) different predictions of the LOO-CV sub-models \( \{\hat{s}_{n,-i}(x)\}_{1 \leq i \leq n} \), weighted by \( w^{(0)}_{i,n}(x) \) defined in Equation (1): \[ w^{(0)}_{i,n}(x) = \begin{cases} \dfrac{1}{n-1} & \text{if } x_i \neq \arg\min\{d(x, x_j),\; j = 1, \ldots, n\} \\ 0 & \text{otherwise} \end{cases} \] (1) Such binary weights lead to unsmooth design criteria. In order to avoid this drawback, we introduce smoothed weights. Direct smoothing based on a convolution product would lead to computations on Voronoi cells. We prefer to use the simpler smoothed weights defined in Equation (2): \[ w_{i,n}(x) = \frac{1 - e^{-\frac{d(x, x_i)^2}{\rho^2}}}{\sum_{j=1}^{n} \left(1 - e^{-\frac{d(x, x_j)^2}{\rho^2}}\right)} \] (2) Notice that \( w_{i,n}(x) \) increases with the distance between the \( i^{th} \) design point \( x_i \) and \( x \). In fact, the least weighted prediction is \( \hat{s}_{n,-p_n(x)} \), where \( p_n(x) \) is the index of the design point nearest to \( x \). In general, the prediction \( \hat{s}_{n,-i} \) is locally less reliable in a neighborhood of \( x_i \). The proposed weights determine the local relative confidence level of a given sub-model's predictions. The term "relative" means that the confidence level of one sub-model prediction is relative to the remaining sub-model predictions, due to the normalization factor in Equation (2). The smoothing parameter \( \rho \) tunes the amount of uncertainty of \( \hat{s}_{n,-i} \) in a neighborhood of \( x_i \). Several options are possible to choose \( \rho \). We suggest setting \( \rho = d(\mathbf{X}_n) \). Indeed, this is a well-suited choice for practical cases. **Definition 3.1.** The Universal Prediction distribution (UP distribution) is the weighted empirical distribution: \[ \mu_{(n,x)}(dy) = \sum_{i=1}^{n} w_{i,n}(x) \delta_{\hat{s}_{n,-i}(x)}(dy). \] (3) This probability measure is nothing more than the empirical distribution of all the predictions provided by the cross-validation sub-models, weighted by local smoothed masses. **Definition 3.2.** For \( x \in \mathbb{X} \) we call \( \hat{\sigma}^2_n(x) \) (Equation (5)) the local UP variance and \( \hat{m}_n(x) \) (Equation (4)) the UP expected value. 
\[ \hat{m}_n(x) = \int y\, \mu_{(n,x)}(dy) = \sum_{i=1}^{n} w_{i,n}(x)\, \hat{s}_{n,-i}(x) \quad (4) \] \[ \hat{\sigma}^2_n(x) = \int (y - \hat{m}_n(x))^2 \mu_{(n,x)}(dy) = \sum_{i=1}^{n} w_{i,n}(x)\bigl(\hat{s}_{n,-i}(x) - \hat{m}_n(x)\bigr)^2 \quad (5) \] 3.2 Illustrative example Let us consider the Viana function defined over \([-3, 3]\): \[ f(x) = \frac{10 \cos(2x) + 15 - 5x + x^2}{50} \quad (6) \] Let \( Z_n = (X_n, Y_n) \) be the design of experiments such that \( X_n = (x_1 = -2.4, x_2 = -1.2, x_3 = 0, x_4 = 1.2, x_5 = 1.4, x_6 = 2.4, x_7 = 3) \) and \( Y_n = (y_1, \ldots, y_7) \) are their images under \( f \). We used a Gaussian process regression [27, 21, 18] with constant trend function and Matérn 5/2 covariance function as the surrogate \( \hat{s} \). We display in Figure 1 the design points, the cross-validation sub-model predictions \( \hat{s}_{n,-i} \), \( i = 1, \ldots, 7 \), and the master model prediction \( \hat{s}_n \). Figure 1: Illustration of the UP distribution. Dashed lines: CV sub-model predictions, solid red line: master model prediction, horizontal bars: local UP distribution at \( x_a = -1.8 \) and \( x_b = 0.2 \), black squares: design points. Notice that in the interval \([1, 3]\) (where we have 4 design points) the discrepancy between the master model and the CV sub-model predictions is smaller than in the remaining space. Moreover, we display horizontally the UP distribution at \( x_a = -1.8 \) and \( x_b = 0.2 \) to illustrate the weighting effect. One can notice that: • At \( x_a \), the least weighted predictions are \( \hat{s}_{n,-1}(x_a) \) and \( \hat{s}_{n,-2}(x_a) \). These predictions do not use the two design points closest to \( x_a \) (\( x_1 \) and \( x_2 \) respectively). • At \( x_b \), \( \hat{s}_{n,-3}(x_b) \) is the least weighted prediction. Furthermore, we display in Figure 2 the master model prediction and the region delimited by \( \hat{s}_n(x) + 3\hat{\sigma}_n(x) \) and \( \hat{s}_n(x) - 3\hat{\sigma}_n(x) \). Figure 2: Uncertainty quantification based on the UP distribution. Red solid line: master model prediction \( \hat{s}_n(x) \), blue area: region delimited by \( \hat{s}_n(x) \pm 3\hat{\sigma}_n(x) \). One can notice that the standard deviation is null at design points. In addition, its local maxima in the interval \([1, 3]\) (where the density of design points is higher) are smaller than its maxima in the remaining space. 4 Sequential Refinement In this section, we use the UP distribution to define an adaptive refinement technique called the Universal Prediction-based Surrogate Modeling Adaptive Refinement Technique (UP-SMART). 4.1 Introduction The main goal of sequential design is to minimize the number of calls of a computationally expensive function. Gaussian surrogate models [18] are widely used in adaptive design strategies. Indeed, Gaussian modeling gives a Bayesian framework for sequential design. In some cases, other surrogate models might be more accurate, although they do not provide a theoretical framework for uncertainty assessment. We propose here a new universal strategy for adaptive sequential design of experiments. The technique is based on the UP distribution, so it can be applied to any type of surrogate model. In the literature, many strategies have been proposed to design the experiments (for an overview, the interested reader is referred to [12, 40, 34]). Some strategies, such as Latin Hypercube Sampling (LHS) [28], maximum entropy design [35] and maximin distance designs [16], are called one-shot sampling methods. These methods depend neither on the output values nor on the surrogate model. 
However, one would naturally expect to place more points in the regions with highly nonlinear behavior. This intuition leads to adaptive strategies. A DOE approach is said to be adaptive when information from the experiments (inputs and responses) as well as information from surrogate models is used to select the location of the next point. By adopting this definition, adaptive DOE methods include for instance surrogate model-based optimization algorithms, probability of failure estimation techniques and sequential refinement techniques. Sequential refinement techniques aim at creating a more accurate surrogate model. For example, Lin et al. [25] use Multivariate Adaptive Regression Splines (MARS) and kriging models with the Sequential Exploratory Experimental Design (SEED) method. It consists in building a surrogate model to predict errors based on the errors on a test set. Goel et al. [13] use an ensemble of surrogate models to identify regions of high uncertainty by computing the empirical standard deviation of the predictions of the ensemble members. Our method is based on the predictions of the CV sub-models. In the literature, several cross-validation-based techniques have been discussed. Li et al. [23] propose to add the design point that maximizes the Accumulative Error (AE). The AE at \( x \in \mathbb{X} \) is computed as the sum of the LOO-CV errors on the design points weighted by influence factors. This method could lead to clustered samples. To avoid this effect, the authors [24] propose to add a threshold constraint in the maximization problem. Busby et al. [6] propose a method based on a grid and CV. It assigns the CV prediction error at a design point to its containing cell in the grid. Then, an entropy approach is performed to add a new design point. More recently, Xu et al. [41] suggest the use of a method based on Voronoi cells and CV. Kleijnen et al. [19] propose a method based on the variance of jackknife pseudo-value predictions. Jin et al. [15] present a strategy that maximizes the product between the deviation of the CV sub-model predictions with respect to the master model prediction and the distance to the design points. Aute et al. [2] introduce the Space-Filling Cross-Validation Trade-off (SFCVT) approach. It consists in building a new surrogate model over the LOO-CV errors and then adding a point that maximizes the new surrogate model prediction under some space-filling constraints. In general, cross-validation-based approaches tend to allocate points close to each other, resulting in clustering [2]. This is not desirable for deterministic simulations. 4.2 UP-SMART The idea behind UP-SMART is to sample points where the UP distribution variance (Equation (5)) is maximal. Most of the CV-based sampling criteria use CV errors. Here, we use the local predictions of the CV sub-models. Moreover, notice that the UP variance is null at design points for interpolating surrogate models. Hence, UP-SMART does not naturally promote clustering. However, \( \hat{\sigma}^2_n(x) \) can vanish even if \( x \) is not a design point. To overcome this drawback, we add a distance penalization. This leads to the UP-SMART sampling criterion \( \gamma_n \) (Equation (7)): \[ \gamma_n(x) = \hat{\sigma}^2_n(x) + \delta\, d_{X_n}(x) \quad (7) \] where \( \delta > 0 \) is called the exploration parameter. One can set \( \delta \) as a small percentage of the global variation of the output. 
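The pieces above can be sketched as follows. This is an illustration, not the authors' implementation: `submodels` is a list of LOO-CV sub-models such as the one built in the Section 2.2 sketch, `rho` is taken to be $d(X_n)$ as suggested above, and the refinement step defined next is carried out over a finite candidate set.

```python
# Sketch of the smoothed weights (Eq. (2)), the UP moments (Eqs. (4)-(5)) and
# the UP-SMART criterion gamma_n (Eq. (7)); names and the candidate-set search
# are illustrative choices.
import numpy as np

def rho_from_design(X):
    """rho = d(X_n): largest distance from a design point to its nearest neighbour."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(D, np.inf)
    return D.min(axis=1).max()

def up_moments(x, X, submodels, rho):
    """UP mean and variance of Equations (4)-(5) at a single point x."""
    d2 = ((X - np.atleast_1d(x)) ** 2).sum(axis=1)
    w = 1.0 - np.exp(-d2 / rho ** 2)               # Eq. (2), unnormalised
    w = w / w.sum()
    preds = np.array([float(m(np.atleast_2d(x))) for m in submodels])
    m_n = float(w @ preds)                          # Eq. (4)
    var_n = float(w @ (preds - m_n) ** 2)           # Eq. (5)
    return m_n, var_n

def up_smart_next(candidates, X, submodels, rho, delta):
    """Candidate maximising gamma_n(x) = sigma_n^2(x) + delta * d_{X_n}(x)."""
    def gamma(x):
        _, var_n = up_moments(x, X, submodels, rho)
        nearest = np.sqrt(((X - np.atleast_1d(x)) ** 2).sum(axis=1)).min()
        return var_n + delta * nearest              # Eq. (7)
    scores = [gamma(x) for x in candidates]
    return candidates[int(np.argmax(scores))]
```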
UP-SMART is the adaptive refinement algorithm consisting in adding at step \(n\) a point \(x_{n+1} \in \arg \max_{x \in \mathbb{X}} (\gamma_n(x))\). ## 4.3 Performances on a set of test functions In this subsection, we present the performance of UP-SMART. We first present the surrogate models used. ### 4.3.1 Used surrogate models **Kriging** Kriging [27], or Gaussian process regression, is an interpolation method. Universal kriging fits the data using a deterministic trend and is governed by prior covariances. Let \(k(x, x')\) be a covariance function on \(\mathbb{X} \times \mathbb{X}\), and let \((h_1(x), \ldots, h_p(x))\) be the basis functions of the trend. Let us denote by \(h(x)\) the vector \((h_1(x), \ldots, h_p(x))\) and let \(H\) be the matrix with entries \(h_{ij} = h_j(x_i)\), \(1 \leq i \leq n\), \(1 \leq j \leq p\). Furthermore, let \(k_n(x)\) be the vector \((k(x, x_1), \ldots, k(x, x_n))\) and \(K_n\) the matrix with entries \(k_{ij} = k(x_i, x_j)\), for \(1 \leq i, j \leq n\). Then the conditional mean of the Gaussian process with covariance \(k(x, x')\) and its variance are given in Equations (8) and (9): \[ m_{G_n}(x) = h(x)^\top \hat{\beta} + k_n(x)^\top K_n^{-1}(Y_n - H \hat{\beta}) \quad (8) \] \[ \sigma^2_{G_n}(x) = k(x, x) - k_n(x)^\top K_n^{-1}k_n(x) + V(x)^\top (H^\top K_n^{-1}H)^{-1}V(x) \quad (9) \] where \[ \hat{\beta} = (H^\top K_n^{-1}H)^{-1}H^\top K_n^{-1}Y_n \quad \text{and} \quad V(x) = h(x) - H^\top K_n^{-1}k_n(x) \] Note that the conditional mean is the prediction of the Gaussian process regression. Further, we used two kriging instances with different sampling schemes in our test bench. Both use a constant trend function and a Matérn 5/2 covariance function. The first design is obtained by maximizing the UP distribution variance (Equation (5)), and the second one by maximizing the kriging variance \(\sigma^2_{G_n}(x)\). **Genetic aggregation** The genetic aggregation response surface is a method that aims at selecting the best response surface for a given design of experiments. It solves several surrogate models, performs aggregation and selects the best response surface according to the cross-validation errors. The use of such a response surface in this test bench aims at checking the universality of the UP distribution: the fact that it can be applied to all types of surrogate models. ### 4.3.2 Test bench In order to test the performances of the method we launched different refinement processes for the following set of test functions: - **Branin**: \(f_b(x_1, x_2) = \left(x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10\). - **Six-hump camel**: \(f_c(x_1, x_2) = \left(4 - 2.1x_1^2 + \frac{x_1^4}{3}\right) x_1^2 + x_1 x_2 + (4x_2^2 - 4)\, x_2^2\). - **Hartmann6**: \(f_h(X = (x_1, \ldots, x_6)) = -\sum_{i=1}^{4} \alpha_i \exp \left(-\sum_{j=1}^{6} A_{ij}(x_j - P_{ij})^2 \right)\). \(A\), \(P\) and \(\alpha\) can be found in [9]. For each function, we generated \( n_0 \) initial design points by optimal Latin hypercube sampling and then added \( N_{max} \) refinement points. We also generated a set of \( N_t \) test points and their responses \( Z^{(t)} = (X^{(t)}, Y^{(t)}) \). The values used are given in Table 1. 
**Table 1: Used test functions**

<table> <thead> <tr> <th>Function</th> <th>dimension \( d \)</th> <th>\( n_0 \)</th> <th>\( N_{max} \)</th> <th>\( N_t \)</th> </tr> </thead> <tbody> <tr> <td>Viana</td> <td>1</td> <td>5</td> <td>7</td> <td>500</td> </tr> <tr> <td>Branin</td> <td>2</td> <td>10</td> <td>10</td> <td>1600</td> </tr> <tr> <td>Camel</td> <td>2</td> <td>20</td> <td>10</td> <td>1600</td> </tr> <tr> <td>Hartmann6</td> <td>6</td> <td>60</td> <td>150</td> <td>10000</td> </tr> </tbody> </table>

We fixed \( n_0 \) in order to get inaccurate surrogate models at the first step. Usually, one follows the rule-of-thumb \( n_0 = 10 \times d \) proposed in [26]. However, for the Branin and Viana functions, this rule leads to a very good initial fit. Therefore, we chose lower values. Three refinement processes are compared: - Kriging using the kriging variance (Equation (9)) as refinement criterion. - Kriging using UP-SMART: UP variance (Equation (7)) as refinement criterion. - Genetic aggregation using UP-SMART: UP variance (Equation (7)) as refinement criterion. **4.3.3 Results** For each function, we compute at each iteration the Q squared (\( Q^2 \)) of the predictions on the test set \( Z^{(t)} \), where \[ Q^2(\hat{s}, Z^{(t)}) = 1 - \frac{\frac{1}{N_t} \sum_{i=1}^{N_t} (y_i^{(t)} - \hat{y}_i^{(t)})^2}{\frac{1}{N_t} \sum_{i=1}^{N_t} (y_i^{(t)} - \bar{y})^2} \quad \text{and} \quad \bar{y} = \frac{1}{N_t} \sum_{i=1}^{N_t} y_i^{(t)}. \] We display in Figure 3 the performances of the three different techniques described above for the Viana (Figure 3a), Branin (Figure 3b) and Camel (Figure 3c) functions, measured by the \( Q^2 \) criterion. Figure 3: Performance of three refinement strategies on three test functions measured by the \( Q^2 \) criterion on a test set. x axis: number of added refinement points. y axis: \( Q^2 \). UP-SMART with kriging in green, UP-SMART with genetic aggregation in blue and the kriging variance-based technique in red. For these tests, the three techniques have comparable performances. The \( Q^2 \) converges for all of them. It appears that the UP variance criterion refinement process gives at least as good a result as the kriging variance criterion. This may be due to the high kriging uncertainty on the boundaries. In fact, the kriging variance-based sampling algorithm generates, in general, more points on the boundaries for a high-dimensional problem. For instance, let us focus on the Hartmann function (dimension 6). We present, in Figure 4, the results after 150 iterations of the algorithms. It is clear that UP-SMART gives a better result for this function. The results show that: - UP-SMART gives a better global response surface accuracy than the maximization of the kriging variance. This shows the usefulness of the method. - UP-SMART is a universal method. Here, it has been applied with success to an aggregation of response surfaces. Such usage highlights the universality of the strategy. 5 Empirical Efficient Global Optimization In this section, we introduce the UP distribution-based Efficient Global Optimization (UP-EGO) algorithm. This algorithm is an adaptation of the well-known EGO algorithm. 5.1 Overview Surrogate model-based optimization refers to the idea of speeding up optimization processes using surrogate models. In this section, we present an adaptation of the well-known Efficient Global Optimization (EGO) algorithm [17]. Our method is based on the weighted empirical UP distribution. We show that asymptotically, the points generated by the algorithm are dense around the optimum. 
For the EGO algorithm, such a proof has been given by Vazquez et al. [37]. The basic unconstrained surrogate model-based optimization scheme can be summarized as follows [30]: - Construct a surrogate model from a set of known data points. - Define a sampling criterion that reflects a possible improvement. - Optimize the criterion over the design space. - Evaluate the true function at the criterion optimum/optima. - Update the surrogate model using the new data points. - Iterate until convergence. Several sampling criteria have been proposed to perform optimization. The Expected Improvement (EI) is one of the most popular criteria for surrogate model-based optimization. Sasena et al. [33] discussed some sampling criteria such as the threshold-bounded extreme, the regional extreme, the generalized expected improvement and the minimum surprises criterion. Almost all of the criteria are computed in practice within the frame of Gaussian processes. Consequently, among all possible response surfaces, Gaussian surrogate models are widely used in surrogate model-based optimization. Recently, Viana et al. [39] performed multiple-surrogate assisted optimization by importing the Gaussian uncertainty estimate. 5.2 UP-EGO Algorithm Here, we use the UP distribution to compute an empirical expected improvement. Then, we present an optimization algorithm similar to the original EGO algorithm that can be applied with any type of surrogate model. Without loss of generality, we consider the minimization problem: \[ \min_{x \in \mathbb{X}} s(x) \] Let \((y(x))_{x \in \mathbb{X}}\) be a Gaussian process model, and let \(m_{G_n}\) and \(\sigma_{G_n}^2\) denote respectively the mean and the variance of the conditional process \(y(x) \mid Z_n\). Further, let \(y^*_n\) be the minimum value at step \(n\) when using the observations \(Z_n = (z_1, \ldots, z_n)\), where \(z_i = (x_i, y_i)\): \(y^*_n = \min_{i=1,\ldots,n} y_i\). The EGO algorithm [17] uses the expected improvement \(EI_n\) (Equation (11)) as sampling criterion: \[ EI_n(x) = \mathrm{E}[\max(y^*_n - y(x), 0) \mid Z_n] \quad (11) \] The EGO algorithm adds the point that maximizes \(EI_n\). Using some Gaussian computations, Equation (11) is equivalent to Equation (12): \[ EI_n(x) = \begin{cases} (y^*_n - m_{G_n}(x))\,\Phi \left( \frac{y^*_n - m_{G_n}(x)}{\sigma_{G_n}(x)} \right) + \sigma_{G_n}(x)\,\phi \left( \frac{y^*_n - m_{G_n}(x)}{\sigma_{G_n}(x)} \right) & \text{if } \sigma_{G_n}(x) \neq 0 \\ 0 & \text{otherwise} \end{cases} \quad (12) \] We introduce a similar criterion based on the UP distribution. With the notations of Sections 2 and 3, \(EEI_n\) (Equation (13)) is called the empirical expected improvement: \[ EEI_n(x) = \int \max(y^*_n - y, 0)\,\mu_{(n,x)}(dy) = \sum_{i=1}^{n} w_{i,n}(x) \max\bigl(y^*_n - \hat{s}_{n,-i}(x), 0\bigr) \quad (13) \] We can remark that \(EEI_n(x)\) can vanish even if \(x\) is not a design point. This is one of the limitations of the empirical UP distribution. To overcome this drawback, we suggest the use of the Universal Prediction Expected Improvement (UP-EI) \(\kappa_n\) (Equation (14)): \[ \kappa_n(x) = EEI_n(x) + \xi_n(x) \quad (14) \] where \(\xi_n(x)\) is a distance penalization. We use \(\xi_n(x) = \delta\, d_{X_n}(x)\), where \(\delta > 0\) is called the exploration parameter. One can set \(\delta\) as a small percentage of the global variation of the output for less exploration. A greater value of \(\delta\) means more exploration. \(\delta\) fixes the desired trade-off between exploration and local search. 
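A small sketch of the two criteria follows. It is illustrative only: the weight computation simply repeats Equation (2), `submodels` is a list of LOO-CV sub-models as in the earlier sketches, and the function name is an assumption.

```python
# Sketch of the empirical expected improvement (Eq. (13)) and of the UP-EI
# criterion kappa_n (Eq. (14)); all names are illustrative.
import numpy as np

def up_expected_improvement(x, X, y, submodels, rho, delta):
    """kappa_n(x) = EEI_n(x) + delta * d_{X_n}(x)."""
    d2 = ((X - np.atleast_1d(x)) ** 2).sum(axis=1)
    w = 1.0 - np.exp(-d2 / rho ** 2)                        # Eq. (2), unnormalised
    w = w / w.sum()
    preds = np.array([float(m(np.atleast_2d(x))) for m in submodels])
    y_star = float(np.min(y))                               # current best observed value
    eei = float(w @ np.maximum(y_star - preds, 0.0))        # Eq. (13)
    return eei + delta * float(np.sqrt(d2).min())           # Eq. (14)
```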
Furthermore, notice that \(\kappa_n\) has the desirable property also verified by the usual EI:

\textbf{Proposition 5.1.} \(\forall n > 1, \forall Z_n = (X_n = (x_1, \ldots, x_n)^\top, Y_n = s(X_n)),\) if the model used interpolates the data, then \(\kappa_n(x_i) = 0\) for \(i = 1, \ldots, n\).

The UP distribution-based Efficient Global Optimization (UP-EGO) algorithm (Algorithm 1) consists in sampling at each iteration the point that maximizes \(\kappa_n\). The point is then added to the set of observations and the surrogate model is updated.

Algorithm 1. UP-EGO(\(\hat{s}\))

**Inputs:** \(Z_{n_0} = (X_{n_0}, Y_{n_0}), n_0 \in \mathbb{N} \setminus \{0, 1\}\) and a deterministic function \(s\)

1. \(m := n_0, S_m := X_{n_0}, Y_m := Y_{n_0}\)
2. Compute the surrogate model \(\hat{s}_{Z_m}\)
3. \(\text{Stop\_conditions} := \text{False}\)
4. While \(\text{Stop\_conditions}\) are not satisfied
   - (4.1) Select \(x_{m+1} \in \arg \max_{x} (\kappa_m(x))\)
   - (4.2) Evaluate \(y_{m+1} := s(x_{m+1})\)
   - (4.3) \(S_{m+1} := S_m \cup \{x_{m+1}\}, Y_{m+1} := Y_m \cup \{y_{m+1}\}\)
   - (4.4) \(Z_{m+1} := (S_{m+1}, Y_{m+1}), m := m + 1\)
   - (4.5) Update the surrogate model \(\hat{s}_{Z_m}\)
   - (4.6) Check \(\text{Stop\_conditions}\)

   end loop

**Outputs:** \(Z_m := (S_m, Y_m)\), surrogate model \(\hat{s}_{Z_m}\)

5.3 UP-EGO convergence

We first recall the context. \(X\) is a nonempty compact subset of the Euclidean space \(\mathbb{R}^p\) where \(p \in \mathbb{N}^*\), and \(s\) is an expensive-to-evaluate function. The weights of the UP distribution are computed as in Equation (2) with \(\rho > 0\) a fixed real parameter. Moreover, we consider the asymptotic behaviour of the algorithm, so the number of iterations goes to infinity. Let \(x^* \in \arg \min \{s(x), x \in X\}\) and let \(\hat{s}\) be a continuous interpolating surrogate model bounded on \(X\). Let \(Z_{n_0} = (X_{n_0} = (x_1, \ldots, x_{n_0})^\top, Y_{n_0})\) be the initial data. For all \(k > n_0\), \(x_k\) denotes the point generated by the UP-EGO algorithm at step \(k - n_0\). Let \(S_m\) denote the set \(\{x_i, i \leq m\}\) and \(S = \{x_i, i > 0\}\). Finally, \(\forall m > n_0\) we denote by \(\kappa_m\) the UP-EI of \(\hat{s}_{Z_m}\). We are going to prove that \(x^*\) is adherent to the sequence of points \(S\) generated by the UP-EGO(\(\hat{s}\)) algorithm.

**Lemma 5.2.** \(\exists \theta > 0, \forall m > n_0, \forall x \in X, \forall i \in 1, \ldots, m, \forall n > m,\ w_{i,n}(x) \leq \theta\, d(x, x_i)^2\).

**Definition 5.3.** A surrogate model \(\hat{s}\) is called an interpolating surrogate model if for all \(n \in \mathbb{N}^*\) and for all \(Z_n = (X_n, Y_n) \in X^n \times \mathbb{R}^n\), \(\hat{s}_{Z_n}(x) = s(x)\) if \(x \in X_n\).

**Definition 5.4.** A surrogate model \(\hat{s}\) is called bounded on \(X\) if, for every continuous function \(s\) on \(X\), there exist \(L, U\) such that for all \(n > 1\) and for all \(Z_n = (X_n, Y_n) \in X^n \times \mathbb{R}^n\), \(\forall x \in X,\ L \leq \hat{s}_{Z_n}(x) \leq U\).

**Definition 5.5.** A surrogate model \(\hat{s}\) is called continuous if \(\forall n_0 > 1, \forall x \in X, \forall \epsilon > 0, \exists \delta > 0, \forall n > n_0, \forall Z_n = (X_n, Y_n) \in X^n \times \mathbb{R}^n, \forall x' \in X,\ d(x, x') < \delta \implies |\hat{s}_{Z_n}(x) - \hat{s}_{Z_n}(x')| < \epsilon\).

**Theorem 5.6.** Let \(s\) be a real function defined on \(X\) and let \(x^* \in \arg \min \{s(x), x \in X\}\).
If \(\hat{s}\) is an interpolating continuous surrogate model bounded on \(X\), then \(x^*\) is adherent to the sequence of points \(S\) generated by UP-EGO(\(\hat{s}\)).

The proofs (Section 9) show that the exploration parameter is important for this theoretical result. In our implementation, we scale the input space to the hypercube \([-1, 1]^p\) and we set \(\delta\) to 0.005% of the output variation. Hence, the exploratory effect only slightly impacts the UP-EI criterion in practical cases.

5.4 Numerical examples

Let us consider the set of test functions (Table 2).

Table 2: Optimization test functions

<table> <thead> <tr> <th>Function</th> <th>Dimension $d^{(i)}$</th> <th>Number of initial points $n_0^{(i)}$</th> <th>Number of iterations $N_{max}^{(i)}$</th> </tr> </thead> <tbody> <tr> <td>Branin</td> <td>2</td> <td>5</td> <td>40</td> </tr> <tr> <td>Ackley</td> <td>2</td> <td>10</td> <td>30</td> </tr> <tr> <td>Six-hump</td> <td>2</td> <td>10</td> <td>30</td> </tr> <tr> <td>Hartmann6</td> <td>6</td> <td>20</td> <td>40</td> </tr> </tbody> </table>

We launched the optimization process for these functions with three different optimization algorithms:

- UP-EGO applied to a universal kriging surrogate model $\hat{s}_k$ that uses a Matérn 5/2 covariance function and a constant trend function. We denote this algorithm UP-EGO($\hat{s}_k$).
- UP-EGO applied to the genetic aggregation $\hat{s}_a$, denoted UP-EGO($\hat{s}_a$).
- The classical EGO algorithm [17] applied to the kriging surrogate model $\hat{s}_k$, which serves as the reference.

For each function $f^{(i)}$, we launched each optimization process for $N_{max}^{(i)}$ iterations, starting from $N_{seed} = 20$ different initial designs of experiments of size $n_0^{(i)}$ generated by an optimal space-filling sampling. The results are given using boxplots in Appendix 10. We also display the evolution of the mean best value in Figure 5.

Figure 5: Comparison of three surrogate-based optimization strategies. Mean over $N_{seed}$ of the best value as a function of the number of iterations. UP-EGO with kriging in green, UP-EGO with genetic aggregation in blue, EGO in red and theoretical minimum in dashed gray.

The results show that the UP-EGO algorithms give better results than the EGO algorithm for the Branin and Camel functions. These cases illustrate the efficiency of the method. Moreover, for the Ackley and Hartmann6 functions the best results are given by UP-EGO using the genetic aggregation. Even if this is partly due to the nature of the surrogate model, it underlines the benefit of the universality of UP-EGO. Further, let us focus on the boxplots of the last iterations in Figures 8 and 11 (Appendix 10). The UP-EGO results for the Branin function depend only slightly on the initial design points. For the Hartmann function, on the other hand, the results of UP-EGO using the genetic aggregation do depend on the initial design points; more optimization iterations are required for full convergence. However, compared to the EGO algorithm, the UP-EGO algorithm performs well in both regimes:

- Full convergence
- Limited-budget optimization.

Finally, the Branin function has multiple global minima. We are interested in checking whether the UP-EGO algorithm focuses on a single optimum or explores the three possible regions. We present in Figure 6 the spatial distribution of the points generated by the UP-EGO(kriging) algorithm for the Branin function. We can notice that UP-EGO generated points around all three local minima.
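The optimization runs of this section follow Algorithm 1. As a schematic illustration (not the implementation used for the experiments above), the UP-EGO loop can be sketched in Python as follows; the surrogate-fitting routine and the UP-EI callable, which could wrap the sketch given after Equation (14), are placeholders, and the inner maximization of \(\kappa_m\) is done here over a crude random candidate set instead of a proper global optimizer.

```python
import numpy as np

def up_ego(objective, X0, y0, fit_surrogate, up_ei, n_iter, bounds, rng=None):
    """Sketch of Algorithm 1 (UP-EGO): maximize the UP-EI, evaluate, update.

    fit_surrogate(X, y)          -> surrogate object (any interpolating model)
    up_ei(surrogate, x, X, y)    -> UP-EI value kappa_m(x) at candidate x
    bounds: array of shape (p, 2) with lower/upper bounds of the design space.
    """
    rng = rng or np.random.default_rng(0)
    X, y = np.array(X0, dtype=float), np.array(y0, dtype=float)
    model = fit_surrogate(X, y)
    for _ in range(n_iter):                        # stand-in for Stop_conditions
        # crude maximization of kappa_m over random candidates (illustrative only)
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2048, bounds.shape[0]))
        scores = np.array([up_ei(model, c, X, y) for c in cand])
        x_new = cand[np.argmax(scores)]            # step (4.1)
        y_new = objective(x_new)                   # step (4.2)
        X = np.vstack([X, x_new])                  # steps (4.3)-(4.4)
        y = np.append(y, y_new)
        model = fit_surrogate(X, y)                # step (4.5)
    return X, y, model
```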
6 Fluid Simulation Application: Mixing Tank

The problem addressed here concerns a static mixer where hot and cold fluid enter at variable velocities. The objective of this analysis is generally to find inlet velocities that minimize the pressure loss from the cold inlet to the outlet and minimize the temperature spread at the outlet. In our study, we are interested in a better exploration of the design space using an accurate, cheap-to-evaluate surrogate model. The simulations are computed within the ANSYS Workbench environment and we used DesignXplorer to perform the surrogate modeling. We started the study using 9 design points generated by a central composite design. We also produced a set of $N_t = 80$ test points $Z_t = (X_t = (x_1^{(t)}, \ldots, x_{N_t}^{(t)}), Y_t = (y_1^{(t)}, \ldots, y_{N_t}^{(t)}))$. We launched UP-SMART applied to the genetic aggregation response surface (GARS) in order to generate 10 suitable design points, as well as a kriging-based refinement strategy for comparison. The genetic aggregation response surface (GARS) developed in DesignXplorer creates a mixture of surrogate models including support vector machine regression, Gaussian process regression, moving least squares and polynomial regression. We computed the root mean square error (Equation (15)), the relative root mean square error (Equation (16)) and the relative average absolute error (Equation (17)) before and after launching the refinement processes.

\[ RMSE_{Z^{(t)}}(\hat{s}) = \sqrt{\frac{1}{N_t} \sum_{i=1}^{N_t} \left(y_i^{(t)} - \hat{s}(x_i^{(t)})\right)^2} \] (15)

\[ RRMSE_{Z^{(t)}}(\hat{s}) = \sqrt{\frac{1}{N_t} \sum_{i=1}^{N_t} \left( \frac{y_i^{(t)} - \hat{s}(x_i^{(t)})}{y_i^{(t)}} \right)^2} \] (16)

\[ RAAE_{Z^{(t)}}(\hat{s}) = \frac{1}{N_t} \sum_{i=1}^{N_t} \left| \frac{y_i^{(t)} - \hat{s}(x_i^{(t)})}{\sigma_Y} \right| \] (17)

We give in Table 3 the quality measures obtained for the temperature-spread output; the pressure loss output is nearly linear and every method approximates it well.

Table 3: Quality measures of different response surfaces of static mixer simulations

<table> <thead> <tr> <th>Surrogate model</th> <th>RRMSE</th> <th>RMSE</th> <th>RAAE</th> </tr> </thead> <tbody> <tr> <td>GARS Initial</td> <td>0.16</td> <td>0.10</td> <td>0.50</td> </tr> <tr> <td>GARS Final</td> <td>0.10</td> <td>0.07</td> <td>0.31</td> </tr> <tr> <td>Kriging Initial</td> <td>0.16</td> <td>0.11</td> <td>0.48</td> </tr> <tr> <td>Kriging Final</td> <td>0.16</td> <td>0.11</td> <td>0.50</td> </tr> </tbody> </table>

The results show that UP-SMART gives a better approximation. Here, it is used with a genetic aggregation of several response surfaces. Even if the good quality may be partly due to the response surface itself, it highlights the fact that UP-SMART makes the use of such a surrogate model-based refinement strategy possible.

7 Empirical Inversion

7.1 Empirical inversion criteria adaptation

Inversion approaches consist in the estimation of contour lines, excursion sets or probabilities of failure. These techniques are especially used in constrained optimization and reliability analysis. Several iterative sampling strategies have been proposed to handle these problems. The empirical distribution \( \mu_{(n,x)} \) can be used for inversion problems. In fact, we can compute most of the well-known criteria, such as Bichon's criterion [4] or Ranjan's criterion [31], using the UP distribution. In this section, we discuss some of these criteria: the targeted mean square error (TMSE) [29], the Bichon [4] and the Ranjan [31] criteria. The reader can refer to Chevalier et al. [7] for an overview.
Let us consider the contour line estimation problem: let \( T \) be a fixed threshold. We are interested in enhancing the surrogate model accuracy on \( \{ x \in X, s(x) = T \} \) and in its neighborhood.

**Targeted MSE (TMSE)** The targeted Mean Square Error (TMSE) [29] aims at decreasing the mean square error where the kriging prediction is close to \(T\). It is the probability that the response lies inside the interval \([T - \varepsilon, T + \varepsilon]\), where the parameter \(\varepsilon > 0\) tunes the size of the window around the threshold \(T\). High values make the criterion more exploratory, while low values concentrate the evaluations around the contour line. We can compute an estimation of the value of this criterion using the **UP distribution** (Equation (18)).

\[ TMSE_{T,n}(x) = \sum_{i=1}^{n} w_{i,n}(x) \mathbb{1}_{[T - \varepsilon, T + \varepsilon]}(\hat{s}_{n,-i}(x)) = \sum_{i=1}^{n} w_{i,n}(x) \mathbb{1}_{[-\varepsilon, \varepsilon]}(\hat{s}_{n,-i}(x) - T) \]

Notice that this criterion takes into account neither the variability of the predictions at \(x\) nor the magnitude of the distance between the predictions and \(T\).

**Bichon criterion** The expected feasibility defined in [4] aims at indicating how close the true value of the response is expected to be to the threshold \(T\). The bounds are defined by \(\varepsilon_x\), which is proportional to the kriging standard deviation \(\hat{\sigma}(x)\). Bichon proposes using \(\varepsilon_x = 2\hat{\sigma}(x)\) [4]. This criterion can be extended to the case of the UP distribution. We define in Equation (19) the empirical Bichon criterion \(EF_n\), where \(\varepsilon_x\) is proportional to the empirical standard deviation \(\hat{\sigma}_n(x)\) (Equation (5)).

\[ EF_n(x) = \sum_{i=1}^{n} w_{i,n}(x) (\varepsilon_x - |T - \hat{s}_{n,-i}(x)|) \mathbb{1}_{[-\varepsilon_x, \varepsilon_x]}(\hat{s}_{n,-i}(x) - T) \]

**Ranjan criterion** Ranjan et al. [31] proposed a criterion that quantifies the improvement \(I_{Ranjan}(x)\) defined in Equation (20)

\[ I_{Ranjan}(x) = (\varepsilon_x^2 - (y(x) - T)^2) \mathbb{1}_{[-\varepsilon_x, \varepsilon_x]}(y(x) - T) \]

where \(\varepsilon_x = \alpha \hat{\sigma}(x)\) and \(\alpha > 0\); \(\varepsilon_x\) defines the size of the neighborhood around the contour \(T\). It is possible to compute the UP distribution-based Ranjan criterion (Equation (21)). Note that we set \(\varepsilon_x = \alpha \hat{\sigma}_n(x)\).

\[ E[I_{Ranjan}(x)] = \sum_{i=1}^{n} w_{i,n}(x) (\varepsilon_x^2 - (\hat{s}_{n,-i}(x) - T)^2) \mathbb{1}_{[-\varepsilon_x, \varepsilon_x]}(\hat{s}_{n,-i}(x) - T) \]

7.2 Discussion

The use of the pointwise criteria (Equations (18), (19), (21)) might face problems when the region of interest is small relative to the prediction jumps. In fact, as the cumulative distribution function of the UP distribution is a step function, the probability of the prediction being inside an interval can vanish even if the interval is around the mean value. For instance, \(\mu_{(n,x)}([T - \varepsilon, T + \varepsilon])\) can be zero. This is one of the drawbacks of the empirical distribution. Some regularization techniques can overcome this problem; for instance, one can define the region of interest by a Gaussian density \(N(0, \sigma^2)\) [29]. Let \(g_c\) be the corresponding Gaussian probability density function. The new TMSE criterion, denoted \(TMSE_{T,n}^{(2)}(x)\), is then given in Equation (22).
$$TMSE_{T,n}^{(2)}(x) = \sum_{i=1}^{n} w_{i,n}(x)\, g_c\left(\hat{s}_{n,-i}(x) - T\right)$$ (22)

The use of a Gaussian density to define the targeted region seems more relevant when using the UP local variance. Similarly, we can apply the same method to the Ranjan and Bichon criteria.

8 Conclusion

To perform surrogate model-based sequential sampling, several relevant techniques require quantifying the prediction uncertainty associated with the model. Gaussian process regression directly provides this uncertainty quantification. This is the reason why Gaussian modeling is quite popular in sequential sampling. In this work, we defined a universal approach for uncertainty quantification that can be applied to any surrogate model. It is based on a weighted empirical probability measure supported by cross-validation sub-model predictions. Hence, one can use this distribution to compute most of the classical sequential sampling criteria. As examples, we discussed sampling strategies for refinement, optimization and inversion. Further, we showed that, under some assumptions, the optimum is adherent to the sequence of points generated by the optimization algorithm UP-EGO. Moreover, the optimization and refinement algorithms were successfully implemented and tested on both single and multiple surrogate models. We also discussed the adaptation of some inversion criteria. The main drawback of the UP distribution is that it is supported by a finite number of points. To overcome this, we propose to regularize this probability measure. In future work, we will study and implement such a regularization scheme and investigate the case of multi-objective constrained optimization.

9 Proofs

We present in this section the proofs of Proposition 5.1, Lemma 5.2 and Theorem 5.6. Here, we use the notations of Section 5.3.

Proposition 5.1. Let $n > 1$, $Z_n = (X_n = (x_1, \ldots, x_n)^\top, Y_n = s(X_n))$, and let $\hat{s}$ be a model that interpolates the data, i.e., $\forall i \in 1, \ldots, n,\ \hat{s}_{Z_n}(x_i) = s(x_i) = y_i$. First, we have $\xi_n(x_i) = \delta d_{X_n}(x_i)$; since $x_i \in X_n$, $\xi_n(x_i) = 0$. Further, $EEI_n(x_i) = w_{i,n}(x_i) \max(y^*_{n} - \hat{s}_{n,-i}(x_i), 0) + \sum_{j \neq i} w_{j,n}(x_i) \max(y^*_{n} - \hat{s}_{n,-j}(x_i), 0)$. Notice that $w_{i,n}(x_i) = 0$ and that, for $j \neq i$, the sub-model $\hat{s}_{n,-j}$ interpolates $x_i$, so $\hat{s}_{n,-j}(x_i) = y_i$ and $\max(y^*_{n} - y_i, 0) = 0$. Then $EEI_n(x_i) = 0$. Finally, $\kappa_n(x_i) = EEI_n(x_i) + \xi_n(x_i) = 0$.

**Lemma 5.2.** Let us note:

- $\phi_\rho(x, x') = 1 - e^{-\frac{d(x, x')^2}{\rho^2}}$,
- $w_{i,n}(x) = \frac{\phi_\rho(x, x_i)}{\sum_{k=1}^{n}\phi_\rho(x, x_k)}$.

The convexity inequality $\forall a \in \mathbb{R},\ 1 - e^{-a} \leq a$ gives $\phi_\rho(x, x_i) \leq \frac{d(x, x_i)^2}{\rho^2}$. Further, let $x_{k_1}, x_{k_2}$ be two distinct points of the initial design $X_{n_0}$. For all $x \in X$, $\max\left(d(x, x_{k_1}), d(x, x_{k_2})\right) \geq \frac{d(x_{k_1}, x_{k_2})}{2}$, otherwise the triangular inequality would be violated. Consequently,

- $\forall n > n_0,\ \sum_{k=1}^{n}\phi_\rho(x, x_k) \geq \phi_\rho(x, x_{k_1}) + \phi_\rho(x, x_{k_2}) \geq \phi_{2\rho}(x_{k_1}, x_{k_2}) > 0$,
- $\forall n > n_0, \forall x \in X$: $w_{i,n}(x) = \frac{\phi_\rho(x, x_i)}{\sum_{k=1}^{n}\phi_\rho(x, x_k)} \leq \frac{d(x, x_i)^2}{\rho^2\, \phi_{2\rho}(x_{k_1}, x_{k_2})}$.

Taking $\theta = \frac{1}{\rho^2\, \phi_{2\rho}(x_{k_1}, x_{k_2})}$ ends the proof.

**Theorem 5.6.** $X$ is compact, so $S$ has a convergent sub-sequence in $X^{\mathbb{N}}$ (Bolzano-Weierstrass theorem). Let $(x_{\psi(n)})$ denote such a sub-sequence and $x_\infty \in X$ its limit.
We can assume, by passing to a further sub-sequence of \(\psi\) and using the continuity of the surrogate model \(\hat{s}\), that:

- \(d(x_\infty, x_{\psi(n)}) \leq \frac{1}{n}\) for all \(n > 0\),
- \(\forall x' \in X,\ d(x', x_\infty) \leq \frac{1}{n} \implies |\hat{s}_{m,-i}(x_\infty) - \hat{s}_{m,-i}(x')| \leq \frac{1}{n}\), for all \(i \in 1, \ldots, m\) and all \(m > n_0\).

For all \(k > 1\), we note \(v_k = \psi(k+1)-1\) the step at which the UP-EGO algorithm selects the point \(x_{\psi(k+1)}\). So \(\kappa_{v_k}(x_{\psi(k+1)}) = \max_{\chi \in X} \{\kappa_{v_k}(\chi)\}\). Notice first that for all \(n > 0\), \(x_{\psi(n)}, x_{\psi(n+1)} \in B(x_\infty, \frac{1}{n})\), where \(B(x_\infty, \frac{1}{n})\) is the closed ball of center \(x_\infty\) and radius \(\frac{1}{n}\); hence \(d(x_{\psi(n)}, x_{\psi(n+1)}) \leq \frac{2}{n}\). So:

\[ \xi_{v_n}(x_{\psi(n+1)}) = \delta d_{X_{v_n}}(x_{\psi(n+1)}) \leq \delta d(x_{\psi(n)}, x_{\psi(n+1)}) \leq \frac{2\delta}{n} \quad \text{(i)} \]

According to Lemma 5.2, \(w_{\psi(n), v_n}(x_{\psi(n+1)}) \leq \theta \left(d(x_{\psi(n+1)}, x_{\psi(n)})\right)^{2} \leq \frac{4\theta}{n^2}\). Since \(\hat{s}\) is bounded and interpolating, \(y_{v_n}^* - \hat{s}_{v_n, -\psi(n)}(x_{\psi(n+1)}) \leq U - L\), and consequently:

\[ w_{\psi(n), v_n}(x_{\psi(n+1)}) \max(y_{v_n}^* - \hat{s}_{v_n, -\psi(n)}(x_{\psi(n+1)}), 0) \leq \frac{4\theta(U - L)}{n^2} \quad \text{(ii)} \]

Further, \(\forall i \in 1, \ldots, v_n,\ i \neq \psi(n)\), \(\hat{s}_{v_n, -i}(x_{\psi(n)}) = y_{\psi(n)}\) since the surrogate model is an interpolating one; hence \(\hat{s}_{v_n, -i}(x_{\psi(n)}) \geq y_{v_n}^*\) and, using the continuity assumption above,

\[ \max(y_{v_n}^* - \hat{s}_{v_n, -i}(x_{\psi(n+1)}), 0) \leq |\hat{s}_{v_n, -i}(x_{\psi(n)}) - \hat{s}_{v_n, -i}(x_{\psi(n+1)})| \leq \frac{2}{n} \quad \text{(iii)} \]

We have:

\[ \kappa_{v_n}(x_{\psi(n+1)}) = \xi_{v_n}(x_{\psi(n+1)}) + w_{\psi(n),v_n}(x_{\psi(n+1)}) \max(y_{v_n}^* - \hat{s}_{v_n,-\psi(n)}(x_{\psi(n+1)}), 0) + \sum_{\substack{i=1 \\ i \neq \psi(n)}}^{v_n} w_{i,v_n}(x_{\psi(n+1)}) \max(y_{v_n}^* - \hat{s}_{v_n,-i}(x_{\psi(n+1)}), 0) \]

Considering (i), (ii) and (iii):

\[ \kappa_{v_n}(x_{\psi(n+1)}) \leq \frac{2\delta}{n} + \frac{4\theta(U - L)}{n^2} + \frac{2}{n} \]

Notice that:

\[ \kappa_{v_n}(x_{\psi(n+1)}) = \max_{x \in X} \{\kappa_{v_n}(x)\} \quad \text{and} \quad \delta\, d_{X_{v_n}}(x^*) = \xi_{v_n}(x^*) \leq \kappa_{v_n}(x^*) \leq \kappa_{v_n}(x_{\psi(n+1)}) \]

Since \( \lim_{n \to \infty} \kappa_{v_n}(x_{\psi(n+1)}) = 0 \), we get \( \lim_{n \to \infty} d_{X_{v_n}}(x^*) = 0 \), i.e., \(x^*\) is adherent to the sequence of points \(S\).

10 Acknowledgment

We gratefully acknowledge the French National Association for Research and Technology (ANRT, CIFRE grant number 2015/1349).

Appendix: Optimization test results

In this section, we use boxplots to display the evolution of the best value over the optimization test bench. For each iteration we display, from left to right: EGO in red, UP-EGO using genetic aggregation in blue, and UP-EGO using kriging in green.

Figure 9: Six-hump camel: Box plots convergence

Figure 10: Ackley: Box plots convergence

Figure 11: Hartmann6: Box plots convergence

References
Hai Lin · Donald G. Truhlar

QM/MM: what have we learned, where are we, and where do we go from here?

1 Introduction

Despite the increasing computational capability available now, molecular modeling and simulation of large, complex systems at the atomic level remain a challenge to computational chemists. At the same time, there is an increasing interest in nanostructured materials, condensed-phase reactions, and catalytic systems, including designer zeolites and enzymes, and in modeling systems over longer time scales that reveal new mechanistic details. The central problem is: can we efficiently accomplish accurate calculations for large reactive systems over long time scales? As usual, we require advances in modeling potential energy surfaces, in statistical mechanical sampling, and in dynamics. The present article is concerned with the potentials. Models based on classical mechanical constructs, such as molecular mechanical (MM) force fields built from empirical potentials describing small-amplitude vibrations, torsions, van der Waals interactions, and electrostatic interactions, have been widely used in molecular dynamics (MD) simulations of large and complex organic and biological systems [1–25] as well as inorganic and solid-state systems [26–31]. However, MM force fields are unable to describe the changes in the electronic structure of a system undergoing a chemical reaction. Such changes in electronic structure, which occur in processes that involve bond-breaking and bond-forming, charge transfer, and/or electronic excitation, require quantum mechanics (QM) for a proper treatment. However, due to the very demanding computational cost, the application of QM is still limited to relatively small systems consisting of up to tens or several hundreds of atoms, or even smaller systems when the highest levels of theory are employed. Algorithms that combine quantum mechanics and molecular mechanics provide a solution to this problem. These algorithms in principle combine the accuracy of a quantum mechanical description with the low computational cost of molecular mechanics, and they have become popular in the past decades. The incorporation of quantum mechanics into molecular mechanics can be accomplished in various ways, and one of them is the so-called combined quantum mechanical and molecular mechanical (QM/MM) methodology [32–151]. A QM/MM method (see Fig. 1) treats a localized region, e.g., the active site and its neighbors in an enzyme (called the primary subsystem, PS), with QM methods and includes the influence of the surroundings (e.g., the protein environment), called the secondary subsystem (SS), at the MM level.

2 Interactions between the primary and secondary subsystems

The coupling between the primary subsystem (PS) and the secondary subsystem (SS) is the heart of a QM/MM method. The coupling, in general, must be capable of treating both bonded interactions (bond stretching, bond bending, and internal rotation, sometimes called valence forces) and non-bonded interactions (electrostatic and van der Waals interactions). Various QM/MM schemes have been developed to treat the interactions between the PS and SS. As might be expected from its general importance in a myriad of contexts [152], the electrostatic interaction is the key element of the coupling. Depending on the treatment of the electrostatic interaction between the PS and SS, the QM/MM schemes can be divided into two groups: the group of mechanical embedding and the group of electric embedding [44].
A mechanical embedding (ME) scheme performs QM computations for the PS in the absence of the SS, and treats the interactions between the PS and SS at the MM level. These interactions usually include both bonded (stretching, bending, and torsional) and non-bonded (electrostatic and van der Waals) interactions. The original integrated molecular-orbital molecular-mechanics (IMOMM) scheme by Morokuma and coworkers [39,52,62], which is also known as the two-layer ONIOM(MO:MM) method, is an ME scheme. In an electrostatic embedding (EE) scheme, also called electric embedding, the QM computation for the PS is carried out in the presence of the SS by including terms that describe the electrostatic interaction between the PS and SS as one-electron operators that enter the QM Hamiltonian. Since most popular MM force fields, like CHARMM [18] or OPLS-AA [17,19,20,22,24,25], have developed extensive sets of atom-centered partial point charges for calculating electrostatic interactions at the MM level, it is usually convenient to represent the SS atoms by atom-centered partial point charges in the effective QM Hamiltonian. However, more complicated representations involving distributed multipoles have also been attempted [46,89]. The bonded (stretching, bending, and torsional) interactions and non-bonded van der Waals interactions between the PS and SS are retained at the MM level. A comparison between the ME and EE schemes is presented in Table 1 and discussed in detail in Sects. 2.1 and 2.2.

Table 1 A comparison between the ME and EE schemes

<table> <thead> <tr> <th></th> <th>ME</th> <th>EE</th> </tr> </thead> <tbody> <tr> <td>Electrostatic interaction between the primary subsystem (PS) and the secondary subsystem (SS)</td> <td>Handled in the standard MM way</td> <td>Treated by including certain one-electron terms in the QM Hamiltonian</td> </tr> <tr> <td>Advantage</td> <td>Simple</td> <td>1. Does not need electrostatic MM parameters for PS atoms, which may change their character during the simulation 2. The electronic structure of the PS adjusts to the charge distribution in the SS</td> </tr> <tr> <td>Disadvantage</td> <td>1. An accurate set of electrostatic MM parameters is often not available for PS atoms 2. Ignores the potential perturbation of the electronic structure of the PS by the charge distribution of the SS</td> <td>1. More computational effort 2. Need to construct an appropriate representation for the charge distribution in the SS</td> </tr> </tbody> </table>

2.1 Mechanical embedding?

The key difference between an ME scheme and an EE scheme is how they treat the electrostatic interaction between the PS and SS. An ME scheme handles the interaction at the MM level, which is simpler. However, such a treatment has drawbacks. First, the treatment requires an accurate set of MM parameters, such as atom-centered point charges, for both the PS and SS. It is relatively easier to get such parameters for the SS, and the problem of getting such parameters for the PS, where reactions are taking place, was the central reason for moving from MM to QM in the first place. Since the charge distribution in the PS usually changes as the reaction progresses, the error in using a single set of MM parameters could be very serious. The second drawback of an ME scheme is that it ignores the potential perturbation of the electronic structure of the PS due to the electrostatic interaction between the PS and SS. The atom-centered charges in the SS polarize the PS and alter its charge distribution. This is especially a problem if the reaction in the PS is accompanied by charge transfer. Another problematic situation would be a system (e.g., an open-shell system containing transition metals) having several electronic states close in energy, for which the polarization could change the energetic order of these states, e.g., predicting a different ground state with a different charge and/or spin distribution. To deal with the lack of accurate MM electrostatic parameters for the PS atoms during a reaction, one might consider obtaining these parameters dynamically as the reaction progresses, e.g., deriving atom-centered point charges for the PS atoms as the system evolves along the reaction path. This idea works in principle, but in practice it requires a large PS to achieve the desired accuracy, due to the second drawback of ME schemes, which was just discussed above. That is, the PS must be large enough to assure that its calculated charge distribution is converged with respect to the location of the QM/MM boundary. Moreover, an accurate and fast algorithm is necessary to derive the MM electrostatic parameters on the fly (with no or only a little calibration by experimental data or validation by doing pure MM simulation). These requirements would increase the computational effort considerably. This problem motivates consideration of the mechanically embedded three-layer ONIOM (MO:MO:MM) method [52]. This method attempts to overcome the drawbacks of a mechanically embedded two-layer ONIOM (MO:MM) [39] by introducing a buffer (middle) layer, which is treated by an appropriate lower-level QM theory (e.g., semi-empirical molecular orbital theory) that is computationally less expensive than the method used for the innermost primary subsystem. One can label such a treatment as QM1:QM2:MM or QM1/QM2/MM. The second QM layer is designed to allow a consistent treatment of the polarization of the active center by the environment. The new treatment does improve the description, but, with mechanical embedding, it does not solve the problem completely, since the QM calculation for the first layer is still performed in the absence of the rest of the atoms.

2.2 Electrostatic embedding?

In contrast to an ME scheme, an EE scheme does not require MM electrostatic parameters for the PS atoms, because the electrostatic interaction between the PS and SS is now treated at a more advanced level by including certain one-electron terms in the QM Hamiltonian. The polarization of the PS by the charge distribution of the SS is also taken into account automatically. The recent progress in the development of the electrostatically embedded ONIOM method [137,138] reflects the trend of moving from ME to EE in QM/MM methodology. The price to pay for this improvement is a more complicated implementation and increased computational cost. The unsolved issue for EE schemes is how to construct the one-electron terms in the effective QM Hamiltonian. As mentioned earlier, the simplest way is to represent the charge distribution of the SS as a background of atom-centered partial charges.
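To make the distinction concrete, the textbook point-charge form of the embedding contribution can be sketched as follows; this is a generic illustration, not the specific Hamiltonian of any particular scheme cited here. If each SS atom \(A\) carries an MM partial charge \(q_A\) at position \(\mathbf{R}_A\), the electrostatic embedding contribution consists of a one-electron operator acting on the PS electrons plus a classical nucleus–charge term (in atomic units):

\[
\hat{H}^{\mathrm{elec}}_{\mathrm{QM/MM}} \;=\; -\sum_{i}\sum_{A \in \mathrm{SS}} \frac{q_A}{\left|\mathbf{r}_i - \mathbf{R}_A\right|} \;+\; \sum_{\alpha \in \mathrm{PS}}\sum_{A \in \mathrm{SS}} \frac{Z_\alpha\, q_A}{\left|\mathbf{R}_\alpha - \mathbf{R}_A\right|},
\]

where \(\mathbf{r}_i\) are the electronic coordinates of the PS and \(Z_\alpha\) its nuclear charges. In an ME scheme, by contrast, no such term enters the QM calculation, and the PS–SS electrostatics is instead evaluated classically from fixed PS and SS point charges, so the QM electron density never feels the SS charges.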
Representing the SS by atom-centered point charges is further facilitated by the availability of sets of pre-parameterized MM point charges in many MM force fields; these MM point charges have in principle been parameterized consistently with the other MM parameters to give accurate MM energies, and they have been validated by extensive test calculations. The use of these MM atom-centered partial charges is very efficient, and it is the most popular way of constructing the effective QM Hamiltonian. Nevertheless, the question is raised: are charges parameterized for a particular MM context also appropriate for use in a QM Hamiltonian? In an extreme case, for example a zeolite-substrate system, the formal atomic charges used in an aluminosilicate force field are chosen to reproduce structural rather than electrostatic data; such charges may not be appropriate for the construction of the one-electron terms in the effective QM Hamiltonian [56]. The MM point charges actually include the contributions due to higher-order multipoles implicitly, i.e., the higher-order contributions are folded into the zero-order parameters. By considering higher-order multipole contributions explicitly, one might increase the accuracy of the calculated electrostatic interactions, but this makes the implementation more difficult, and the computational costs grow. The development of distributed multipole parameters is also a difficult and time-consuming task, but the biggest obstacle is that the higher-order terms are generally sensitive to geometry or conformational changes [153–155]. The strong conformational dependence of the multipole expansion limits its transferability [156]. For example, only about 20 amino acids are commonly encountered in proteins. It would be ideal to have one set of parameters for these 20 amino acids that could be used to simulate any protein, and it would be very inconvenient if one had to develop a new set of parameters whenever another protein is studied or whenever the conformation of a given protein changes considerably. Another unsolved issue in ascertaining the best EE strategy is the question of the polarization of the SS. In principle, the PS and SS will polarize each other until their charge distributions are self-consistent; properly accounting for this in a computation is usually accomplished by an iterative scheme [157] (or matrix inversion) or by an extended Lagrangian scheme [158]. Ideally, an EE scheme should include this self-consistency, but usually the charge distribution of the SS is considered frozen for a given set of SS nuclear coordinates. Schemes that relax this constraint can be called self-consistent embedding schemes (or polarized embedding schemes). However, self-consistency is difficult to achieve because it requires a polarizable MM force field [157–169], which has the flexibility to respond to perturbation by an external electric field. Such flexibility is not available in today's most popular MM force fields, although research to develop a polarizable force field has received much attention [164,166]. Moreover, the use of a self-consistent embedding scheme also brings additional complications to the treatment of the boundary between the PS and SS, which we will discuss in the next section. Finally, it increases the computational effort, since iterations are required to achieve self-consistent polarization of the PS and SS. Thus, in most EE implementations, the PS is polarized by the SS, but the SS is not polarized by the PS.
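One common way to formalize the self-consistent (polarized) embedding idea, given here only as a schematic illustration, is to assign each SS site \(A\) an isotropic polarizability \(\alpha_A\) and an induced dipole

\[
\boldsymbol{\mu}_A \;=\; \alpha_A\, \mathbf{E}_A\!\left(\rho_{\mathrm{QM}}, \{q_B\}, \{\boldsymbol{\mu}_B\}_{B \neq A}\right),
\]

where \(\mathbf{E}_A\) is the electric field at site \(A\) produced by the QM charge density \(\rho_{\mathrm{QM}}\), the permanent MM charges, and the other induced dipoles. Because the dipoles enter the QM Hamiltonian while the QM density in turn determines the fields, the equations are solved iteratively (or, for the linear MM part, by matrix inversion) until the density and the dipoles stop changing; this is the self-consistency referred to above.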
Early examinations of the self-consistent embedding scheme were carried out by Thompson and Schenter [42] and by Bakowies and Thiel [44]. Their treatments are based on models that describe the mutual polarization of the QM and MM fragments in the spirit of reaction-field theory [170–173], with the difference that the response is generated by a discrete reaction field (atomic polarizabilities) rather than by a continuum. Their results suggest that the polarization of the SS by the PS can be crucial in applications involving a charged PS that generates large electric fields.

2.3 Interactions other than electrostatic

Although, as discussed above, the key difference between the ME and EE schemes is the treatment of the electrostatic interaction between the PS and the SS, there are also important issues involved in the treatment of the other interactions between the PS and the SS. These interactions include the bonded (stretching, bending, and torsional) interactions and the non-bonded van der Waals interactions, which are handled at the MM level. A similar question arises here as in the case of electrostatic interactions for the ME scheme, but now even for the EE schemes: all the interactions calculated at the MM level rely on the availability of MM parameters for the PS atoms. These parameters are not necessarily the same for the PS atoms in the reactant and product, because the atom types change for some atoms, e.g., a carbon atom may change from a C=O type to a C–O–H type. Which set of MM parameters should we use? Should one switch between two sets of MM parameters during a dynamics calculation following the reaction path? Switching between two sets of parameters during a dynamics calculation or along the reaction path is not convenient, and, again, avoiding this was one of the reasons for moving up from MM to QM. Moreover, even if the switching between parameters could be done, one does not know at which point along the reaction path it should be done, nor how abruptly or gradually the switch should be made. There is no unambiguous answer. One key difference between the need for non-bonded electrostatic parameters and the need for bonded parameters is that the latter requirement can always be obviated by making the PS bigger, i.e., by moving the QM–MM boundary out. The change of atom types might change the force constants for the associated bonded interactions. Usually force constants for stretches are much bigger than force constants for bends, and force constants for torsions are the smallest. The changes of force constants due to the change of atom types are often in this order, too. This provides us with a gauge for monitoring the error due to using a single set of MM parameters. The bonded interactions between the PS and SS are localized at the boundary. In principle, the use of a larger PS pushes the boundary away from the reaction center and helps to alleviate the uncertainty due to parameter choices, but at the price of increased computational effort. In many cases, though, enlarging the PS is not a practical solution. What then? Our suggestion is to keep using one set of MM parameters and to examine whether the errors introduced by using one set of parameters exceed the errors produced by the other approximations that are introduced by the QM/MM framework. Although our treatment is not a perfect solution, it is very practical, and it appears to be reasonable.
For the van der Waals interactions, the parameters of any PS atoms that change atom types are intrinsically ambiguous; this problem cannot be avoided even if a larger QM subsystem is adopted. Fortunately, in practice, it does not appear to be a serious problem in most cases, since the van der Waals interactions are significant only at short distances (as compared to the longer-range forces associated with charged species and permanent dipoles), and the use of only one set of van der Waals parameters is often adequate.

2.4 Treating solid-state systems

So far we have been talking about QM/MM methodology in a very general sense. In this subsection, we more specifically address some questions about how to treat periodic systems and other solid-state materials such as metals, metal oxides, and surface-adsorbate systems. Excellent discussions [47,56,74,85,96,97,101–103,108,115,133,140,145] are available for many aspects, and we focus here especially on studies of zeolites. As we mentioned above, the most important interaction between the PS and the SS is the electrostatic interaction. Thus, the central problem in treating periodic systems like the zeolite-substrate systems is how to incorporate the long-range electrostatic interactions between the SS and PS into a cluster model. The basic idea [174] is to develop a representation of the charge distribution with a finite number of multipoles (usually point charges) to mimic the infinite and periodic charge distribution of the environment in which the cluster model is embedded. This effective charge distribution can be obtained by minimizing the difference between the electrostatic potentials that are generated by the effective charge distribution and by the original infinite and periodic charge distribution at a set of sampling points at the active site. Additional effective core potentials can be associated with selected point charges if needed. For example, parameterized effective core potentials can be used to replace point charges that are close to anions in the PS in order to reduce the overpolarization of these anions [175]. By doing so, one truncates the infinite and periodic system to a finite embedded cluster model, which is now much easier to handle. A simple example is the surface charge representation of the electrostatic embedding potential (SCREEP) method, in which the electrostatic potential from the infinite crystal lattice is modeled by a finite number (usually several hundreds) of point charges located on a surface enclosing the cluster [176]. More sophisticated models [97,101,103] also include polarization effects on the SS by using the shell model [159]. The shell model [159] represents an ion, e.g., an O$^{2-}$ ion in silica, by a pair of charges, namely a positive core and a negative shell, connected by a harmonic potential. The positions of all charges are optimized to get the lowest energy, i.e., the polarization effect is modeled as charge redistribution. It is a concern that, in QM/MM calculations, as a consequence of the finite size of the cluster, the calculated HOMO–LUMO gap for a solid is still typically larger than that of the corresponding extended solid, despite corrections to the energy to take into account the electrostatic contribution of the MM region. One might expect this to cause some errors in the calculation of absorption (of ions, electrons, or molecules) into the QM center.
One important question that seems to be involved is whether the neglect of orbital interactions between the QM and MM subsystems underestimates the bandwidth of the QM system. This would be a serious problem if the QM–MM boundary passed through a conjugated system or a metallic region. But what if the boundary passes through a covalent bond? First, it is important to keep in mind that the HOMO–LUMO gap is not a physical observable, and the LUMO itself is somewhat arbitrary as long as it remains unoccupied. (For example, the LUMO of Hartree–Fock theory is unphysical, and the meaning of orbital energies in DFT is still a subject of debate.) It is most profitable to cast the problem in terms of observables. An example of a physical observable of concern would be the absorption energy of an electron into the QM region, i.e., the electron affinity of a molecule in the QM region. This is a difficult question to address because one of the main failings of QM/MM methods is that they neglect charge transfer between the QM and MM regions. Nevertheless, we can imagine the case of transferring an integer charge into the QM region and ask whether the electron affinity might be systematically in error, due to a systematic error in the HOMO–LUMO gap caused by neglecting the overlap of QM orbitals with the (missing) MM orbitals. This would be hard to answer because the electron affinity of a subsystem is not well defined. Therefore, one might ask a related practical question, such as whether one systematically underestimates the energy of anionic QM subsystems, such as carboxylates. In practice, we have not seen such an effect. The errors due to the inexact treatment of the electrostatic effects of the MM system are large enough that the error in energies of reaction can be in either direction. Another practical example might be the calculation of electronic excitation energies. Is there a way, other than increasing the size of the QM region, to stabilize the excitation energy? Or: can one calculate accurate electronic excitation energies of a non-isolated QM system without converging the calculation with respect to the size of the subsystem that is treated quantum mechanically? We think that it is reasonable to hope that one can do this, if one makes the QM/MM treatment sophisticated enough. For example, one can obtain reasonable values for solvatochromic shifts from continuum solvation models in which the solvent is not treated quantum mechanically [177].

2.5 Adaptive QM/MM

An important issue that arises in simulating liquid-state phenomena and diffusion through solids is the adaptive movement of the quantum mechanical region, which is called the "hot spot" [50,77,116,178]. Algorithms have been reported for liquid-phase simulations that allow water molecules to enter and leave the QM region dynamically. The basic idea is to identify a narrow "buffer region" or "switching shell" between the QM and MM regions. The cut-off is group-based, i.e., a solvent molecule like water is considered to be in the buffer region when its center of mass is in the buffer region. In order to avoid a discontinuity in the force as a solvent molecule enters or leaves the hot spot, Rode and coworkers [50] proposed to use a smooth function for the forces experienced by the atoms in the buffer region, to ensure a smooth transition between the QM and MM forces (a generic sketch of such a weighting function is given below).
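A generic way to picture such a buffer-region weighting (this is an illustrative smoothstep, not the specific functional form of Ref. [179] discussed next) is to let the weight fall smoothly from 1 at the inner edge of the buffer, at radius \(r_{\mathrm{in}}\), to 0 at the outer edge \(r_{\mathrm{out}}\):

\[
w(r) \;=\;
\begin{cases}
1, & r \le r_{\mathrm{in}},\\[4pt]
t^2\,(3 - 2t), \quad t = \dfrac{r_{\mathrm{out}} - r}{r_{\mathrm{out}} - r_{\mathrm{in}}}, & r_{\mathrm{in}} < r < r_{\mathrm{out}},\\[4pt]
0, & r \ge r_{\mathrm{out}},
\end{cases}
\]

so that a quantity such as the force on a buffer atom can be interpolated as \( \mathbf{F} = w\,\mathbf{F}_{\mathrm{QM}} + (1-w)\,\mathbf{F}_{\mathrm{MM}} \). The same weighting idea underlies the energy-averaging scheme described next.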
The smooth function takes the same form as the one [179] used in the CHARMM program to handle the discontinuity in energy and force due to the use of cut-offs for nonbonded (especially electrostatic) interactions. Despite its success, this treatment lacks a unique definition for the energy, which is obtained by integration of the force. Later, Kerdcharoen and Morokuma [116] described another scheme to cope with the discontinuity. In their scheme, two QM/MM calculations are performed for a given configuration of the whole system. The first calculation is done with the atoms in the buffer region and the atoms in the MM region treated at the MM level, and the second calculation is carried out with the atoms in the buffer region and the atoms in the QM region treated at the QM level. The total QM/MM energy is a weighted average of the QM/MM energies obtained in these two calculations; the weight function is determined by the positions of the atoms in the buffer region. This treatment can be viewed as making a smooth connection between two potential energy surfaces.

3 QM/MM boundary treatment

In this section, we examine the problem with a stronger microscope, and we consider details, especially for the troublesome implementation of the EE scheme. In some cases, the boundary between the PS and SS does not go through a covalent bond, e.g., a molecule being solvated in water, where the solute is the PS and the solvent (water) molecules are the SS [36,69]. The effective fragment potential method [46] can also be considered as a special case of MM in this category. In many cases, however, one cannot avoid passing the boundary between the PS and SS through covalent bonds (e.g., in enzymes or reactive polymers) or through ionic bonds (in solid-state catalysts). This is called cutting a bond. In such cases, special care is required to treat the boundary, and this section (Sect. 3) is mainly concerned with this problem.

3.1 Link atom or local orbital?

Treatments of the boundary between the PS and SS regions can be largely grouped into two classes. The first is the so-called link atom approach, where a link atom is used to saturate the dangling bond at the "frontier atom" of the PS. This link atom is usually taken to be a hydrogen atom [34,39,52,72,106,116,119] or a parameterized atom, e.g., a one-free-valence atom in the "connection atom" [70], "pseudobond" [82], and "quantum capping potential" [111] schemes, which involve a parameterized semiempirical Hamiltonian [70] or a parameterized effective core potential (ECP) [82,111] adjusted to mimic the properties of the original bond being cut. The second class of QM/MM methods consists of methods that use localized orbitals at the boundary between the PS and SS. An example is the so-called local self-consistent field (LSCF) algorithm [35,38,43,51,112], where the bonds connecting the PS and SS are represented by a set of strictly localized bond orbitals (SLBOs) that are determined by calculations on small model compounds and assumed to be transferable. The SLBOs are excluded from the self-consistent field (SCF) optimization of the large molecule to prevent their admixture with other QM basis functions. Another approach in the spirit of the LSCF method is the generalized hybrid orbital (GHO) method [63,83,113,123,125,142,144,149]. In this approach, a set of four sp³ hybrid orbitals is assigned to each MM boundary atom.
The hybridization scheme is determined by the local geometry of the three MM atoms to which the boundary atom is bonded, and the parametrization is assumed to be transferable. The hybrid orbital that is directed toward the frontier QM atom is called the active orbital, and the other three hybrid orbitals are called auxiliary orbitals. All four hybrid orbitals are included in the QM calculations, but only the active hybrid orbital participates in the SCF optimizations, while the auxiliary orbitals do not. Each kind of boundary treatment has its strengths and weaknesses. The link atom method is straightforward and is widely used. However, it introduces artificial link atoms that are not present in the original molecular system, and this makes the definition of the QM/MM energy more complicated. It also presents complications in optimizations of geometries. In addition, it is found, at least in the original versions of the link atom method, that the polarization of the bond between the QM frontier atom and the link atom is unphysical due to the nearby point charge on the MM "boundary atom" (an MM boundary atom is the atom whose bond to a frontier QM atom is cut). The distance between the link atom and the MM boundary atom is about 0.5 Å in the case of cutting a C–C bond (the bond distance is about 1.1 Å for a C–H bond and about 1.5 Å for a C–C bond). A similar problem is found in the case of cutting a Si–O bond (the bond distance is about 1.4 Å for a Si–H bond and about 1.6 Å for a Si–O bond). At such a short distance, the validity of using a point charge to represent the distribution of electron density is questionable. Special treatments are applied to the MM charges near the boundary so as to avoid this unphysical polarization [33,44,70,71,82,93,110,124]. We will discuss this problem in more detail later in Sect. 3.2. The methods using local orbitals are theoretically more fundamental than the methods using link atoms, since they provide a quantum mechanical description of the charge distribution around the QM/MM boundary. The delocalized representation of charges in these orbitals helps to prevent or reduce the overpolarization that, as mentioned above, is sometimes found in the link-atom methods. However, the local orbital methods are much more complicated than the link atom methods. The local orbital method can be regarded as a mixture of molecular-orbital and valence-bond calculations; a major issue in these studies is the implementation of orthogonality constraints on the MOs [142]. Moreover, additional work is required to obtain an accurate representation of the local orbitals before the actual start of a QM/MM calculation. For example, in the LSCF method, the SLBOs are predetermined by calculations on small model compounds, and specific force field parameters need to be developed to work with the SLBOs. In the GHO method, extensive parameterization of integral scaling factors in the QM calculations is needed [63,125,142,144,149]. Such parameters usually require reconsideration if one switches the MM scheme (e.g., from CHARMM to OPLS-AA), the QM scheme (e.g., from semiempirical molecular orbital methods to density functional theory or post-Hartree-Fock ab initio methods), or the QM basis set. The low transferability limits the wide application of the local orbital methods. The performance of both the link-atom and local-orbital methods has been examined by extensive test calculations.
The conclusion is that reasonably good accuracy can be achieved by both approaches if they are used with special care. It is expected that the development and application of both the link-atom and local-orbital methods will continue in the future.

3.2 Using link-atom methods

A central objective in the development of a universal QM/MM algorithm is to make the algorithm as general as possible and to avoid or minimize the requirement of introducing any new parameters. Thus, for example, one way to define a universally applicable method would be that, when one makes an application to a new system, no MM parameters need to be changed, no QM integral scaling factors need to be determined, and no effective core potentials (ECPs) need to be developed. From this point of view, the link-atom method seems very attractive. Furthermore, the method will be more easily built into a standard QM code if the link atom is an ordinary hydrogen atom with a standard basis set. Methods having these features will be examined in more detail in this section. To facilitate our further discussion, we will label the atoms according to "tiers". The MM boundary atom will be denoted as M1. Those MM atoms directly bonded to M1 will be called second-tier molecular mechanics atoms, or M2; similarly, one defines M3 atoms as those MM atoms bonded to M2 atoms, and so forth. The QM boundary atom that is directly connected to M1 is labeled as Q1. Similarly, one defines Q2 and Q3 atoms in the QM subsystem. We will denote the link atom as HL, which stands for "hydrogen link", emphasizing that an ordinary hydrogen atom is used/preferred.

3.2.1 Location of the link atom

As we mentioned in the previous section, the link-atom method has its problems. The first problem is the introduction of the coordinates of the link atom, which are extra degrees of freedom. By definition, a link atom is neither a QM nor an MM atom because it is not present in the original PS or SS. This causes ambiguity in the definition of the QM/MM energy for the ES. One way to avoid this problem is to make the coordinates of the link atom depend on the coordinates of the PS frontier atom and the SS boundary atom, i.e., the Q1 and M1 atoms. Such a constraint removes the extra degrees of freedom due to the link atom. Usually the link atom is put on the line that connects the corresponding Q1 and M1 atoms. Morokuma and coworkers [72,180] proposed to scale the Q1–HL distance \( R(Q1–HL) \) with respect to the Q1–M1 distance \( R(Q1–M1) \) by a scaling factor \( C_{HL} \): \[ R(Q1–HL) = C_{HL}\, R(Q1–M1) \] During a QM/MM geometry optimization or a molecular dynamics or reaction path calculation, the Q1–HL and Q1–M1 distances are constrained to satisfy Eq. (3). The scaling factor, \( C_{HL} \), depends on the nature of the bonds being cut and constructed. It has been suggested [72] that it should be the ratio of the standard bond lengths of the Q1–HL and Q1–M1 bonds, which is close to 0.71 for replacement of a C–C single bond by a C–H bond. This treatment is reasonable, and its simplicity facilitates the implementation of analytic energy derivatives (gradients and Hessians). However, the meaning of "standard bond length" is ambiguous. Our treatment is to set the scaling factor by \[ C_{HL} = R_0(Q1–H)/R_0(Q1–M1), \] where \( R_0(Q1–H) \) and \( R_0(Q1–M1) \) are the MM bond distance parameters for the Q1–H and Q1–M1 stretches in the employed MM force field, respectively. It is worthwhile to mention that Eichinger et al.
[73] also proposed a scaled-bond-distance scheme that is similar to the above scheme by Morokuma and coworkers. However, the scheme by Eichinger et al. [73] makes the scaling factor depend on the force constants of the C–C stretch and the C–H stretch instead of the bond distances, and it introduces some additional terms to correct the energy.

3.2.2 MM charges near the boundary

Another problem (in fact, the problem that has caused the most concern) for the link atom method, as we mentioned in the previous section, is the overpolarization of the Q1–HL bond due to the nearby M1 point charge. The main reason for this problem is that at such a short distance (usually about 0.5 Å in the case of cutting a C–C bond and about 0.2 Å in the case of cutting a Si–O bond), a point charge assigned to the M1 nucleus does not provide a good approximation for the smeared distribution of charge density. For nearby charge distributions, one must consider screening and charge penetration, and dipole or higher-order multipole moments can also become important. Various approaches have been attempted to avoid or minimize this overpolarization effect, and they are outlined below. If a scheme does nothing to modify the MM charges, we label the scheme as straight electrostatic embedding (SEE). The SEE scheme suffers from the overpolarization problem. The simplest way to avoid overpolarization is to ignore the M1 charge by setting it to zero [181]; we call this method the Z1 scheme. One can also zero out both the M1 and M2 charges; this method can be called Z2. If all M1, M2, and M3 charges are omitted [33], the scheme is called Z3. The Z3 scheme is the default option for electrostatic embedding in ONIOM calculations carried out by the Gaussian03 package [182], but Gaussian03 also allows one to use scaling factors other than zero for the M2, M3, and M4 charges (the M1 charge is always set to zero). The scaled-charge schemes are generalizations of the eliminated-charge schemes. Schemes that eliminate or scale MM charges often change the net charge of the SS, e.g., a neutral SS might become partially charged, generating artifacts in the calculated energies or spurious long-range forces. In many force fields, such as CHARMM, the neutrality of certain groups is enforced during the parameterization by imposing the constraint that the sum of charges over several neighboring atoms is zero. An improved eliminated-group-charge scheme [58] takes advantage of this feature by deleting the atomic charges for the whole group that contains the M1 atom. This ensures that the net charge of the SS does not change. It has been found that this scheme is more robust than the Z1, Z2, and Z3 schemes because it preserves the charge of the SS. A shifted-charge scheme [67] has been developed to work with force fields where the neutral-groups feature is not available. (Of course, the scheme can also be used with force fields having the neutral-groups feature.) In this scheme, called the Shift scheme, the M1 charge is shifted evenly onto the M2 atoms that are connected to M1, and an additional pair of point charges is placed in the vicinity of each M2 atom in order to compensate for the modification of the M1–M2 dipole by the initial shift. As pointed out above, the overpolarization effect is largely due to the poor approximation of the charge density distribution around the M1 atom by a point charge. Therefore, one might think of using a more realistic description for the charge distribution.
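Before turning to those more realistic descriptions, the simpler charge-handling schemes just described can be summarized schematically. The following Python sketch only illustrates the bookkeeping of the Z1/Z2/Z3 eliminations and the basic charge shift of the Shift scheme (the compensating point-charge pairs that restore the M1–M2 bond dipoles are omitted); the data structures are illustrative, not those of any particular code.

```python
def zero_charges(charges, tiers, n_tiers=1):
    """Z1/Z2/Z3 schemes: zero the MM charges of the first n_tiers tiers (M1, M2, ...).

    charges: dict atom_index -> MM partial charge
    tiers:   dict atom_index -> tier label 1 (M1), 2 (M2), 3 (M3), ...
    """
    return {i: (0.0 if tiers.get(i, 99) <= n_tiers else q) for i, q in charges.items()}

def shift_m1_charge(charges, m1_index, m2_indices):
    """Core of the Shift scheme: move the M1 charge evenly onto its bonded M2 atoms.

    The additional compensating point-charge pairs that restore the M1-M2
    bond dipoles are not shown here.
    """
    shifted = dict(charges)
    q_m1 = shifted[m1_index]
    shifted[m1_index] = 0.0
    for j in m2_indices:
        shifted[j] += q_m1 / len(m2_indices)
    return shifted
```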
As pointed out above, the overpolarization effect is largely due to the poor approximation of the charge distribution around the M1 atom by a point charge. Therefore, one might think of using a more realistic description of the charge distribution. Recently, it has been proposed [110, 124] to use Gaussian charge distributions instead of point charges for selected atoms. More recently, we [147] developed two new schemes: a redistributed-charge (RC) scheme and a redistributed-charge-and-dipole (RCD) scheme, which are based on combining the link-atom treatment and the local-orbital treatment. As indicated in Fig. 2, both schemes use redistributed charges as molecular mechanical mimics of the auxiliary orbitals associated with the MM boundary atom in the GHO theory. In the RC scheme, the M1 charge is distributed evenly onto the midpoints of the M1–M2 bonds, i.e., at the nominal centers of the bond charge distributions. The redistributed charges and the M2 charges are further modified in the RCD scheme to restore the original M1–M2 bond dipoles. The RC and RCD schemes thus handle the charges in ways that are justified as molecular mechanical analogs of the quantal description of the charge distribution offered by the GHO theory.
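A minimal sketch of the geometric part of the RC scheme follows (our own illustration in Python, not the published implementation). The RCD refinement, which further rescales these charges and the M2 charges to restore the M1–M2 bond dipoles, is only indicated in the comments, and all numerical values are hypothetical.

```python
import numpy as np

def redistribute_m1_charge(q_m1, r_m1, r_m2_list):
    """RC scheme: split the M1 point charge evenly over the M1-M2 bond midpoints.

    q_m1      : point charge of the MM boundary atom M1 (in e)
    r_m1      : Cartesian coordinates of M1
    r_m2_list : coordinates of the M2 atoms bonded to M1
    Returns (position, charge) pairs for the redistributed charges.  The RCD
    scheme would additionally adjust these charges and the M2 charges so that
    the original M1-M2 bond dipoles are preserved.
    """
    n = len(r_m2_list)
    return [((r_m1 + r_m2) / 2.0, q_m1 / n) for r_m2 in r_m2_list]

# Hypothetical boundary carbon with charge -0.18 e and two M2 neighbors.
r_m1 = np.array([0.0, 0.0, 0.0])
r_m2 = [np.array([1.5, 0.0, 0.0]), np.array([-0.5, 1.4, 0.0])]
for position, charge in redistribute_m1_charge(-0.18, r_m1, r_m2):
    print(position, charge)
```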
## 4 Validation of a QM/MM algorithm

Validating QM/MM methods by comparison to high-level calculations or to experiment is essential, since the use of unvalidated methods is unacceptable. Although the motivation for developing QM/MM methods is to apply them to large systems (e.g., reactions in the condensed phase, including liquids, enzymes, nanoparticles, and solid-state materials), most validation studies have been based on small gas-phase model systems, where a “model system” is a small- or medium-sized molecule. It is important, in interpreting such validation tests, to keep two issues in mind.

First, molecular mechanics parameters, especially partial charges, are usually designed for treating condensed-phase systems, where partial charges are systematically larger due to polarization effects in the presence of dielectric screening; thus the electrostatic effects of the MM subsystem may be overemphasized in the gas phase. Special attention should be given to alkyl groups, which are frequently involved in these test examples, because a nonpolar C–C bond is often considered to be the most suitable place for putting the QM/MM boundary. An alkyl group in the gas phase appears to be very nonpolar, and the C and H atoms are often assigned atom-centered point charges of small magnitude. For example, in a recent study [147], the charges for the C and H atoms in a C2H6 molecule are −0.05 e and 0.02 e, respectively, as derived by the Merz–Singh–Kollman [33, 183] electrostatic potential (ESP) fitting procedure. An alkyl group becomes more polar in water or in other polar solvents, and the point charges on the atoms in the alkyl group increase significantly. The OPLS-AA force field assigns a charge of −0.18 e to each C atom and 0.06 e to each H atom in the C2H6 molecule, and in the CHARMM force field the values are even larger (−0.27 e for each C atom and 0.09 e for each H atom). Our calculations [147] on the proton affinity of several gas-phase molecules having alkyl groups found much bigger errors when using the charges developed for simulations in water than when using the ESP-fitted charges. We believe that this conclusion is general, since we tested several QM/MM schemes that treat the MM charges near the boundary differently and observed a similar trend. We learned from this that it is very hard to test schemes designed for complex processes in the condensed phase by carrying out calculations on small molecules in the gas phase.

Second, it is probably more important to focus on the artifacts that the QM/MM interface itself can introduce. Thus, the main goal of validation tests should usually be to ensure that no unacceptably large energetic or structural artifacts are introduced, rather than to achieve high quantitative accuracy for MM substituent effects. In this regard, a QM/MM method is often tested on examples that are more difficult, in one sense or another, than those in a normal application, because one wants to know where the performance envelope lies. Thus, calculations on examples having large proton affinities are very suitable for testing. Proton or hydride transfer involves significant charge transfer and is therefore a crucial test for the treatment of the electrostatic interactions (especially the procedure for handling MM charges near the QM/MM boundary) in a QM/MM method. A large value of the energy difference between the reactant and product also helps us to draw conclusions that are not compromised by the intrinsic uncertainty of the QM/MM approach. The proton affinity of CF₃CH₂O⁻, where CF₃ is the SS and CH₂O⁻ is the PS, is one of these difficult examples, due to both the close location of the reaction center to the boundary and the presence of significant point charges on the atoms of the CF₃ group near the boundary. A recent study [147] examined this difficult example by making comparisons between full QM computations and various QM/MM schemes with ESP-fitted MM charges for the CF₃ group. The QM/MM schemes include the capped PS, the SEE scheme, the three eliminated-charge (Z1, Z2, and Z3) schemes, the Shift scheme, the RC scheme, and the RCD scheme. It is found that the Shift and RCD schemes, both of which preserve both the charge of the SS and the M1–M2 bond dipoles, are superior to the other schemes considered. For example, the errors for the RCD and Shift schemes are 1 and 2 kcal/mol, respectively. It is also found that the largest error is caused by the Z1 scheme (75 kcal/mol), where neither the charge nor the dipole is preserved. The results suggest that it is critical to retain the features of the charge distribution near the QM/MM boundary. By this criterion, the SEE scheme does not seem to be too bad, with an error of 9 kcal/mol; it is actually even better than the RC scheme (error of 12 kcal/mol) and the best of the charge-elimination schemes, Z2 and Z3 (errors of 25 kcal/mol). However, the error in the optimized Q1–M1 (C–C) distance is 3–4 times larger for the SEE scheme than for any of the other schemes to which comparison was made, and this makes the SEE scheme a poor choice in practical applications.

## 5 Implementation and software

As summarized in a recent review article, there are basically three kinds of programming architecture for implementing QM/MM methods.

1. Extension of a “traditional” QM package by incorporating the MM environment as a perturbation. Many QM packages have added or are adding QM/MM options. A well-known example is the ONIOM method implemented in Gaussian03 (http://www.gaussian.com/). Other examples include the ADF (http://www.scm.com/), CHIMIST/MM (http://www.lctn.uhp-nancy.fr/logiciels.html), GAMESS-UK (http://www.cse.clrc.ac.uk/qcg/gamess-uk/), MCQUB (http://www.chem.umn.edu/groups/gao/software.htm), MOLCAS (http://www.teokem.lu.se/molcas/), MOZYME (http://www.chem.ac.ru/Chemistry/Soft/MOZYME.en.html), and QSite (http://www.schrodinger.com/Products/qsite.html) packages.

2. Extension of a “traditional” MM package by incorporating a QM code as a force-field extension. Examples include AMBER (http://www.amber.scripps.edu/), CHARMM (http://www.charmm.org/), and CGPLUS (http://www.comp.chem.umn.edu/cgplus/).
3. A central control program interfacing QM and MM packages, where users can select among several QM and/or MM packages. For example, ChemShell (http://www.cse.clrc.ac.uk/qcg/chemshell/) and QMMM (http://www.comp.chem.umn.edu/qmmm/) belong to this category.

Each kind of program architecture has its own merits and disadvantages. The options based on extension of traditional QM and/or MM packages can make use of the many features of the original program, for example, the ability of the MM program to manipulate large, complex biological systems. The disadvantage is that both options require modification of the codes. The third option is based on modular construction and is more flexible. It often needs little or no modification of the original QM and MM programs, and the interface is automatically kept up to date when the QM or MM packages that it connects are updated. The drawback is that it requires a considerable amount of effort to transfer data between the QM and MM packages, which is usually done by reading data from files, rearranging the data, and writing the data to files. Such additional manipulations can lower the efficiency.

Our recently developed program QMMM adopts the third programming architecture. The QMMM program currently interfaces Gaussian03 for the QM computations and TINKER for the MM calculations. Geometry optimization and transition-state searching can be done by using the algorithms built into the QMMM program or by using an optimizer in the Gaussian03 program via the external option of Gaussian03. In addition to the RC and RCD schemes, the QMMM program also implements several other schemes, such as the SEE, scaled-charge, and shifted-charge schemes, for handling the MM point charges near the QM/MM boundary. Currently we are working on combining the QMMM program with the dynamics program POLYRATE (http://comp.chem.umn.edu/polyrate/) for carrying out QM/MM direct dynamics calculations [184].
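To give a feel for the third architecture, here is a highly simplified, hypothetical driver loop in Python. The external QM and MM calls are stubbed out with dummy numbers; in a real interface program they would write input files, launch the external packages, and parse the output files, exactly the file shuffling whose efficiency cost is mentioned above. None of the function names correspond to the actual QMMM program.

```python
def run_external_qm(capped_ps_geometry, embedding_charges):
    """Placeholder for: write a QM input file, launch the external QM package,
    and parse the energy and gradient from its output file."""
    return -40.0, [(0.0, 0.0, 0.0) for _ in capped_ps_geometry]

def run_external_mm(full_geometry):
    """Placeholder for: write an MM input file, launch the external MM package,
    and parse the energy and gradient from its output file."""
    return 5.0, [(0.0, 0.0, 0.0) for _ in full_geometry]

def qmmm_single_point(full_geometry, capped_ps_geometry, embedding_charges):
    # Each step is a file-based hand-off to an external package; the driver
    # only rearranges data and combines the results.
    e_qm, g_qm = run_external_qm(capped_ps_geometry, embedding_charges)
    e_mm, g_mm = run_external_mm(full_geometry)
    # Combination according to the chosen QM/MM energy expression is omitted;
    # this sketch simply adds the two energies.
    return e_qm + e_mm

print(qmmm_single_point(full_geometry=[0, 1, 2], capped_ps_geometry=[0, 1],
                        embedding_charges=[-0.1, 0.05]))
```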
## 6 What do we learn from a QM/MM calculation?

As discussed in Sect. 1, one benefit that QM/MM calculations bring is the inclusion of the effect of a chemical environment (secondary subsystem, SS) on the reaction center (primary subsystem, PS). The interactions between a PS and an SS are of two kinds: (1) interactions that are significant even at long range (electrostatic interactions), and (2) interactions that are local (bonded interactions) or are only significant at short range (van der Waals interactions). The electrostatic interactions are usually dominant, as they perturb the electronic structure of the PS, and they often have large effects on energetic quantities such as the reaction energy. The bonded and van der Waals interactions act in other ways: for example, in enzyme reactions or solid-state reactions, they impose geometry constraints by providing a rigid frame in the active site or lattice site to hold the PS (in fact, the electrostatic interaction also affects the equilibrium geometry of the PS). A practical way to examine the effect of the environment is to compare quantities such as reaction energies or barrier heights as calculated from an isolated QM model and from a QM/MM model. Usually such quantities show significant differences for processes that involve significant changes in the charge distribution.

The calculations for proton transfer reactions are good examples (see the discussion of the proton affinity in Sect. 4). However, one sometimes finds the same or very similar reaction energies and barrier heights from isolated QM model systems and from QM/MM model calculations. This is likely to be observed for a reaction without much charge transfer, e.g., a radical abstraction reaction. This does not mean that the SS does not affect the PS. In such a case, it is likely that the effect due to the SS is roughly the same for the reactant, transition state, and product, and the cancellation leads to small net effects.

An approximate analysis of the effects due to the SS can be obtained by an energy decomposition as follows (see also Fig. 3). The difference between the QM energy of the PS (or CPS, i.e., a PS capped by link atoms) embedded in the interacting MM environment and the QM energy of the PS in the gas phase is defined by

$$E_{PS/MM} = E(QM;PS^{**}) - E(QM;PS), \tag{5}$$

where $E(QM;PS^{**})$ is the QM energy of the PS embedded in the background point charges of the SS, and $E(QM;PS)$ is the QM energy of the PS in the gas phase. In either case, the geometry is fully optimized at the corresponding level of theory, i.e., at the QM/MM level for $E(QM;PS^{**})$ and at the QM level for $E(QM;PS)$. Equation (5) provides a measure of the magnitude of the perturbation of the QM subsystem by the MM subsystem. Generally speaking, the two geometries in Eq. (5) are different. We further decompose $E_{PS/MM}$ into two contributions: the energy due to the polarization by the background point charges ($E_{pol}$) and the energy due to the geometry distortion away from the gas-phase PS geometry ($E_{steric}$),

$$E_{pol} = E(QM;PS^{**}) - E(QM;PS^{dis}), \tag{6}$$

$$E_{steric} = E(QM;PS^{dis}) - E(QM;PS), \tag{7}$$

$$E_{PS/MM} = E_{pol} + E_{steric}, \tag{8}$$

where $E(QM;PS^{dis})$ is the gas-phase single-point PS energy at the QM/MM-optimized geometry, i.e., one takes the PS geometry that resulted from the QM/MM optimization and removes the MM point charges. Although such an energy decomposition is approximate, it is informative and provides a deeper understanding of the QM/MM calculations.

The energy decomposition is applied to a reaction that we studied recently [147] (Fig. 4):

$$\text{CH}_3 + \text{CH}_3\text{CH}_2\text{CH}_2\text{OH} \rightarrow \text{CH}_4 + \text{CH}_2\text{CH}_2\text{CH}_2\text{OH} \tag{R1}$$

Fig. 3 Decomposition of the energy due to the MM environment into two contributions: the energy due to the polarization by the background point charges ($E_{pol}$) and the energy due to the geometry distortion ($E_{steric}$). See Eqs. (5)–(8). The PS is CH$_3$ + CH$_3$CH$_2$, giving rise to a CPS of CH$_3$ + CH$_3$CH$_3$; the SS is CH$_2$OH.

For each of the reactant, saddle point, and product of this reaction, we found a small steric effect (0.1 kcal/mol) and a dominant polarization effect (9 kcal/mol). It is not surprising to see such a small steric effect, since the distortion of the CPS geometry from the fully relaxed gas-phase geometry can be rather small. The critical point is that the energies due to geometry distortion and polarization are so similar at the reactant, saddle point, and product that they almost cancel when energy differences along the reaction path are taken, giving rise to negligibly small net contributions to the reaction energy and barrier height.
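The bookkeeping of Eqs. (5)–(8) is easy to make explicit in code. In the sketch below the three input energies are placeholders chosen only to echo the magnitudes quoted above (a dominant polarization term and a small steric term); in practice they would come from the QM package using the geometries and embeddings defined in the text.

```python
def decompose_ss_effect(e_ps_embedded, e_ps_distorted_gas, e_ps_gas):
    """Energy decomposition of Eqs. (5)-(8).

    e_ps_embedded      : E(QM;PS**), PS at the QM/MM geometry in the SS charges
    e_ps_distorted_gas : E(QM;PS^dis), gas-phase single point at that geometry
    e_ps_gas           : E(QM;PS), PS fully relaxed in the gas phase
    All energies in kcal/mol (hypothetical values below).
    """
    e_pol = e_ps_embedded - e_ps_distorted_gas       # Eq. (6)
    e_steric = e_ps_distorted_gas - e_ps_gas         # Eq. (7)
    e_ps_mm = e_pol + e_steric                       # Eqs. (5) and (8)
    return e_ps_mm, e_pol, e_steric

# Placeholder energies: polarization term ~9 kcal/mol, steric term ~0.1 kcal/mol.
print(decompose_ss_effect(-100.0, -109.0, -109.1))   # approximately (9.1, 9.0, 0.1)
```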
Although the MM environment does not have a large net effect on the relative energies of the H-atom transfer reaction (R1), it does affect the electronic structure of the CPS through polarization and perturbs the charge distribution. The ESP-fitted charges in Fig. 4 clearly show a trend of stepwise change from the unperturbed CPS (denoted CPS), to the CPS with distorted geometry (CPS$^{dis}$), then to the CPS embedded in the background point-charge distribution (CPS$^{**}$), and finally to the ES, as modeled by full QM calculations. It is interesting to note that the Cb–Cc bond appears to be very nonpolar according to the CPS calculations, with a small bond dipole pointing from the Cb to the Cc atom. The CPS$^{**}$ calculations predict that the Cb–Cc bond is more polar, with a larger and inverted bond dipole pointing from the Cc to the Cb atom, in qualitative agreement with the full QM calculations. The CPS$^{**}$ result is generally closer to the full QM results, suggesting that QM/MM calculations provide a more realistic description of the QM subsystem than isolated gas-phase QM model calculations.

Fig. 4 ESP-fitted charges for selected atoms in the reaction $\text{CH}_3 + \text{CH}_3\text{CH}_2\text{CH}_2\text{OH} \rightarrow \text{CH}_4 + \text{CH}_2\text{CH}_2\text{CH}_2\text{OH}$.

The conclusion that, for a reaction that does not involve much charge transfer, the inclusion of the electrostatic field of the SS yields small effects on relative energies but large effects on the electronic structure of the PS also gains support from studies of zeolite–substrate systems [56]. In [56], it has been found that the inclusion of the electrostatic field of the SS slightly alters the barrier (by ca. 2 kcal/mol) of a methyl-shift reaction over a zeolite acid site, but has considerably larger effects on the charge distribution and the vibrational frequencies. For example, the charge on the O atoms is changed by 0.16 e.

Comparing the results of QM/MM calculations with experimental results is the ultimate test of a QM/MM scheme. However, such a comparison is not trivial, and we mention here several points that need attention in general. First, one should consider what kinds of experimental results, including their temperature and pressure, are most informative for comparison, and how reliable these results are, i.e., what is the error bar associated with the observed quantity. It is also important to understand, in the case where the experimental data were derived from fitting to a simplified model, what kinds of assumptions and simplifications have been invoked. In addition, it is important to understand the parameters characterizing the QM/MM calculations. For example, are the results converged with respect to increasing the size of the PS, increasing the QM level of theory and/or basis sets, tightening the optimization convergence threshold, and increasing the cutoff distance or cutoff threshold, if any, in the calculation of the MM nonbonded interactions? If we are treating a complex system like an enzyme solvated in water, is the phase space sampled adequately? If we cannot afford extensive phase-space sampling, can we examine several representative conformations? Do we need to consider potential anharmonicity in the vibrational analysis? Of particular importance is to separate, at least approximately, the errors due to an insufficient QM treatment of the PS from the errors due to an insufficient consideration of the SS effect.
As we showed previously, the barrier height and reaction energy are often not sensitive to the electrostatic interaction between the PS and the SS for a reaction that does not involve significant charge transfer. In such a case, it may be more desirable to increase the QM level of theory to improve the results than to improve the MM level or the QM/MM interface. On the other hand, for reactions that are sensitive to the electrostatic interaction between the PS and the SS, simply increasing the QM level of theory does not necessarily improve the reliability.

## 7 Where do we go from here?

Combining QM and MM by applying them to separate subsystems with a boundary in physical space is very natural, and it is safe to say that it is now a permanent part of the theoretical toolbox. However, there are other ways to combine QM and MM, and future work may see greater use of methods that blend QM and MM more intimately. A venerable example of such an approach is the use of quantum mechanics to suggest new functional forms for molecular mechanics. The oldest example would be the Morse curve, which was originally derived from a QM treatment of H$_2^+$ [185]. Replacing a harmonic bond-stretching potential by a Morse curve allows MM to treat bond breaking. It is much harder to treat synchronous bond breaking and bond forming, but this was also accomplished in the early days of quantum mechanics, resulting in the London equation [186–188]. Raff [189] was apparently the first to combine the valence-bond-derived London equation with molecular mechanics terms to make a QM/MM reactive potential, and in recent years many other workers [190, 191], especially Espinosa-Garcia and coworkers [192–197], have made fruitful use of this technique. Various workers, of whom we single out Brenner [198–200] and Goddard and coworkers [201–204] for noteworthy systematic efforts, have made generalizations to more complex reactive systems. However, these methods are neither universal nor systematically improvable like the methods discussed in Sects. 3 and 4.

One way to make the combination of valence bond theory and molecular mechanics more universal and systematic is multi-configuration molecular mechanics (MCMM). MCMM combines QM and MM in a different way than QM/MM; in MCMM the whole system is treated by QM, and simultaneously the whole system is treated by MM. In fact, every atom is treated by two different MM schemes, one corresponding to a reactant and the other to a product, and the two MM energy expansions interact through a $2 \times 2$ configuration interaction matrix highly reminiscent of London's $2 \times 2$ matrix or the similar $2 \times 2$ matrices used by Raff [189], Warshel and Weiss [210], and others. However, in MCMM the off-diagonal elements of the Hamiltonian are not empirical MM parameters as in previous work, but rather are determined by systematically improvable QM methods. The method is very young but very promising.

In the future, we can expect further progress in MCMM. One straightforward improvement is to use QM/MM to replace QM in the determination of the off-diagonal elements of the Hamiltonian matrices; this scheme can be called QM/MM-based MCMM, or QM/MM-MCMM for short. Another, even more promising, scheme is to combine MCMM with MM in the "same way" that QM is combined with MM in combined QM/MM methods, that is, to use MCMM to replace the QM in QM/MM; this scheme can be called MCMM-based QM/MM, or MCMM-QM/MM. Both schemes make MCMM suitable for handling very large systems.
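To make the $2 \times 2$ construction explicit, the MCMM energy at a geometry $q$ can be written (in our notation, as a sketch of the scheme described above) as the lowest eigenvalue of a symmetric $2 \times 2$ matrix whose diagonal elements are the reactant-like and product-like MM energies and whose off-diagonal element is obtained from QM information:

$$
\mathbf{V}(q) =
\begin{pmatrix}
V_{11}(q) & V_{12}(q) \\
V_{12}(q) & V_{22}(q)
\end{pmatrix},
\qquad
V(q) = \frac{1}{2}\left[ V_{11} + V_{22} - \sqrt{(V_{11} - V_{22})^{2} + 4V_{12}^{2}} \right],
$$

where $V_{11}$ and $V_{22}$ are the two MM energy expansions and $V_{12}$ is the resonance (off-diagonal) term, which in MCMM is determined from systematically improvable QM data rather than being treated as an empirical MM parameter.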
Work on these combined schemes is in progress, and encouraging preliminary results have been obtained [211].

Another area of future improvement is the use of improved MM formalisms in QM/MM methods. We have already mentioned polarizable MM force fields (see Sect. 2.2) [166]. Even without allowing polarization, the charges in MM force fields can be improved in various ways. In particular, various charge models have been developed or are being refined. Examples include the restrained electrostatic potential (RESP) fitting procedure [212] and its latest improved version, the "dynamically generated restrained electrostatic potential" (DRESP) [213] for QM/MM calculations, the family of CM$x$ ($x$ = 1 [214], 2 [215, 216], 3 [217–219], and 4 [220]) charge models, the charge equilibration (CEq) method [221, 222], and the consistent charge equilibration (CQE) method [223–225].

An interesting and important future development is the adaptive QM/MM scheme, which was discussed in Sect. 2.5. If we allow atoms to be exchanged between the QM and MM regions, can we take one more step forward and allow fractional (or whole) charges to be transferred between the QM and MM regions? Of course, to accomplish this goal, we need to work out how to describe the electronic structure of a system with fractional electrons [226].

Another important trend we can expect to see in the near future is the incorporation into MM of methods for modeling metallic systems that were developed in the physics community. For example, just as simplified valence bond theory can be used to obtain functional forms for extending MM to reacting systems, the second-moment approximation to the tight-binding approximation (also known as extended Hückel theory) can be used to obtain new forms for modeling metals [227]. An example of this kind of approach is the use of the embedded-atom functional form to develop MM potentials for Al nanoparticles [228].

A theme that emerges from several of the approaches discussed in this section is the difficulty of classifying the potential energy function as QM or MM. For example, is the embedded-atom method an approximate version of density functional theory, or is it MM? Is MCMM an automatic fitting method for QM energies, or is it an extension of MM to reactive systems? We prefer to think of these methods as new kinds of QM-MM hybrids in which the QM-MM combination is more intimate than in the first generation of combined QM/MM methods.

Acknowledgements This work is supported in part by the National Science Foundation and the Office of Naval Research. The authors are grateful to Jiali Gao and Jingzhi Pu for valuable discussions.

References
Sidedoor Season 4, Episode 17: Cars, Stars, and Rock 'n' Roll [MUSIC] Lizzie Peabody: This is Sidedoor, a podcast from the Smithsonian with support from PRX. I’m Lizzie Peabody. [MUSIC] Lizzie Peabody: A few weeks ago, I sat down with my boss. Actually, my boss’s, boss’s, boss’s boss: Secretary Lonnie G. Bunch III. Lizzie Peabody: Would you like for us to call you Secretary Bunch? Secretary Lonnie G. Bunch III: How about Lonnie? Lizzie Peabody: Lonnie’s okay? Secretary Lonnie G. Bunch III: Hmm, mmm. Lizzie Peabody: Alright. Lizzie Peabody: This was an intimidating interview. Not because Lonnie is hard to talk to, he’s actually really friendly. Secretary Lonnie G. Bunch III: Do you need water? You guys okay? Lizzie Peabody: I’m alright, thank you. Lizzie Peabody: And not even because of the location, Smithsonian headquarters, on the National Mall in Washington D.C. Lizzie Peabody: This is the most regal place I’ve ever conducted an interview. Secretary Lonnie G. Bunch III: Well, I must admit it is, it is pretty special. I mean, there is something about when you pull up and you're like, “Okay, it’s the Castle!” Lizzie Peabody: It’s an actual castle built for the Smithsonian in the 1850s. Red sandstone, turrets, flags and all. The only thing missing is a drawbridge. And this interview was intimidating because, as you may have begun to suspect, Lonnie is a big deal. Lizzie Peabody: So, you are the Secretary of the Smithsonian. I think there might be people out there who might assume that a Secretary is an administrative assistant, but that is not your job! Secretary Lonnie G. Bunch III: (Laughs). Lizzie Peabody: Can you tell us what your job is? Secretary Lonnie G. Bunch III: I think being Secretary is like being the CEO of an organization. Lizzie Peabody: Lonnie is the head of the Smithsonian: the world's largest museum, education, and research complex. That's 19 museums, 9 research centers, 21 libraries and the Zoo! It's a big job, and it's a rare one. There have only been 13 Secretaries in the past 174 years. [MUSIC] Lizzie Peabody: The first Secretary was basically the personal advisor to President Lincoln. He took office 1846 and John Phillips Sousa wrote a march just for him. [Audio of John Phillips Sousa’s, “Transit of Venus March”] Lizzie Peabody: Lonnie Bunch is the 14th Secretary of the Smithsonian, and he stands out from all the men who held that role before him. Not only is he the first African American to hold this position, he’s also the first historian. Lizzie Peabody: So, the first Secretary of the Smithsonian was a scientist and most of the Secretaries since, have been scientists. You are the first historian to head this Institution. If someone were to say to you, you know, “The past is the past. We need to look forward,” what would you say to that person? Secretary Lonnie G. Bunch III: Oh, that's easy. The past isn't even done with yet! Look at the discussions around Confederate monuments, look at the notions of, “What does it mean to be an American” as more people want to come to the United States; all of this is shaped by the past. We're comfortable recognizing that certain things shape our DNA, but I would argue our DNA is also shaped by the past, by our experiences. So that, in some ways, there is nothing more powerful than a people who are steeped in their history. And there's nothing sadder than people who don't understand how almost everything they do has been shaped by the past. 
[Audio of John Phillips Sousa’s, “Transit of Venus March”] Lizzie Peabody: So, this time on Sidedoor, we talk about how history can help us better understand who we are today, through a couple of seemingly ordinary objects, with extraordinary stories. Don't miss it! [Audio of John Phillips Sousa’s, “Transit of Venus March”] [MUSIC] Lizzie Peabody: I sat down with Secretary Lonnie Bunch in an actual castle on the National Mall, but inside, it was pretty homey. We sat in a room with a fireplace, surrounded by objects from America's history. And fittingly, we met to talk about what objects from the past can teach us about today. Lizzie Peabody: So, let's get into some specifics then. The Smithsonian is, of course, made up of collections and within those collections are many, many objects. So, you've chosen a few favorite objects to talk about. Um, where would you like to begin? Secretary Lonnie G. Bunch III: I think it's important to realize, as we begin to talk about specific objects, that in some ways, the only thing that is permanent at the Smithsonian are the collections. Everything else is fleeting: exhibitions, staff, even buildings, but it’s the collections that are always going to be the center of the Smithsonian. Lizzie Peabody: There are nearly 155 million objects in the Smithsonian collections. Far more than could ever be displayed at once. And the first thing Lonnie chose to talk about is not in any museum. Lizzie Peabody: Tell me what we’re looking at. Secretary Lonnie G. Bunch III: When you look at this basically glass plate, it looks like it has little pencil marks on it, and it looks like something that a kid might draw on. Lizzie Peabody: And when Lonnie says, “plate,” he’s not talking about a dinner plate, more like a rectangular piece of glass, the size of a small window pane. Lizzie Peabody: It almost looks like, um, I don’t know. You know when you have spiders in your house? And you look up in a little corner and there’s like a little dimpling of black, almost looks like dust from where a spider built a web? Secretary Lonnie G. Bunch III: (Laughs). Lizzie Peabody: Do you not have spiders in your house? Secretary Lonnie G. Bunch III: (Laughs). Lizzie Peabody: Well, maybe it’s just me. (Laughs). Secretary Lonnie G. Bunch III: (Laughs). Okay, well, all of us have spiders. I guess I never thought about it that way, but yeah, some of this does look web-like. Lizzie Peabody: Yeah! Secretary Lonnie G. Bunch III: Um… Lizzie Peabody: Or like, like spider, spider excrement. That’s what my mom always told me. Anyway! It looks like little gnats or bugs or…. (Laughs). Secretary Lonnie G. Bunch III: (Laughs). You know something I don’t know. Sorry. I’m just a 19th century historian. Lizzie Peabody: (Laughs). Lizzie Peabody: “Excrement” is the word that comes out when you’re trying not to say, “poop” in front of a very important man, while wondering why you’ve decided to bring up poop in the first place. Anyway, the point is, this glass plate is covered in tiny black dots, some so small and so close together that they look like bruises, and others, bigger, the size of gnats. And between them are faint pencil scratches and letters drawn in cursive. It looks like nonsense to my untrained eye. Secretary Lonnie G. Bunch III: But what you’re really looking at is the use of photography as a way to capture the stars. This is really a glass plate from the 19th century. 
And what is so powerful about this, for me, is this is something that is an example, not just of science, but an example of the intersection of science and gender. [MUSIC] Lizzie Peabody: This glass plate is an early example of, “astrophotography,” technology that enabled astronomers in the 1870s to record photographs of the night sky by attaching a camera to a telescope. [MUSIC] Lizzie Peabody: This was a big deal, because photographic plates lasted longer, and they provided more reliable records than notes and observations made by the fallible and easily-fatigued human eye. That meant astronomers could reliably record and therefore, track celestial movements over long periods of time. And this particular plate is one of about 500,000 like it at the Center for Astrophysics (or CfA), a Harvard & Smithsonian collaboration. That’s where Lonnie first saw the plates. Secretary Lonnie G. Bunch III: Well, one of the things you do when you become Secretary, is you go on a Goodwill tour. Lizzie Peabody: Oh! (Laughs). Secretary Lonnie G. Bunch III: And so, you visit where the Smithsonian has branches. And one of the things I really wanted to do was to go to the Smithsonian Astrophysical Observatory at Harvard. Lizzie Peabody: The Observatory is an active research center, studying things like black holes, dark matter and the formation of stars. Secretary Lonnie G. Bunch III: When I was going through the space, they said, “I bet you’ve never seen this.” And we just walked into this tiny room that was sort of full of boxes and crates. And suddenly, they said, “here are these glass plates from a hundred years ago that document the sky.” I was stunned. [MUSIC] Lizzie Peabody: Before Harvard and Smithsonian partnered to form the CfA in 1973, it was the Harvard Observatory. And in 1877, Edward Pickering was the guy in charge. He saw the benefits of these photographic plates and he pushed to expand the astrophotography program at Harvard. And he was successful, almost too successful because by 1881, the lab had a backlog of uncatalogued photographic data they couldn’t keep up with. [MUSIC] Lizzie Peabody: On top of that, Pickering’s assistant seemed inefficient and disorganized. So, he fired him, and hired someone whose work he could vouch for. His maid. Lizzie Peabody: And she was so efficient at computing and cataloguing the positions of stars that Pickering realized, maybe he should be hiring more women instead of men. [MUSIC] Secretary Lonnie G. Bunch III: Women, who were really restricted in the late 19th century and early 20th century, suddenly played an important role. They were the computers. Lizzie Peabody: That’s right! Before computers were machines, they were people, who did mathematical computations. And during Pickering’s 42-year tenure as Director, he hired over 80 female computers. [MUSIC] Secretary Lonnie G. Bunch III: Here was something that initially was seen as maybe not that important. And suddenly, they’ve turned to women to handle this and they took something that someone could see as custodial and made it more research driven; that they were the scientists, um, not the sort of handmaidens. [MUSIC] Lizzie Peabody: Initially, Pickering had seen women as a cheap source of labor. But over the years, several of the women he hired made significant astronomical discoveries in their own right. Like Henrietta Swan Leavitt, who developed a method to calculate distances between stars by measuring their brightness. 
Or Annie Jump Cannon, who created the system for classifying stars that is still used today. Secretary Lonnie G. Bunch III: In some ways, this is one of those wonderful hidden figures, stories; stories of how women who are left out of the narrative profoundly shaped what we know about the stars. Lizzie Peabody: But they stood out even at that time. I mean, it’s surprising to hear that now, but it would have been surprising even then? Secretary Lonnie G. Bunch III: Absolutely. I think it’s really important to recognize this is rare, but it’s also tells us a lot about how women were able to overcome the challenges and the barriers that they faced. Lizzie Peabody: Those barriers were significant. For American women in the 1800s, going to school simply to get an education was pretty much unheard of. In fact, some medical professionals believed women were too fragile to handle the stress of learning. In 1873, a Harvard professor and doctor wrote in his book, “Sex in Education.” [MUSIC] Lizzie Peabody: (Clears throat). Reads from the text of, “Sex in Education:” “A woman’s body could only handle a limited number of developmental tasks at one time. Girls who spent too much energy developing their minds during puberty would end up with undeveloped or diseased reproductive systems.” [MUSIC] Lizzie Peabody: Well, there’s a lot to unpack there, but basically, you could either have ovaries or brains. Not both. [MUSIC] Lizzie Peabody: And even though the computers Pickering hired made significant contributions to our modern understanding of the size and shape of the universe, they were not remembered individually for their accomplishments, but rather collectively, and insultingly, as quote, “Pickering’s Harem.” Secretary Lonnie G. Bunch III: Let’s be clear. The role of women has always been undervalued and downplayed. There are probably so many inventions across the board that should be named after people of color or women that aren’t. This is the kind of story that often institutions don’t want to tell about themselves. Lizzie Peabody: Hmm. Secretary Lonnie G. Bunch III: And so, I think one of the great things about the Smithsonian is to be able to say, this is a part of our past. We didn't handle this the way we’d handle it today. And the key is to learn from that. Lizzie Peabody: 120 years later, women continue to work as scientists and mathematicians, now more publicly and with more recognition. The 2018 Nobel Prize categories of chemistry & physics were both won by women; Donna Strickland received the prize in physics, and Frances Arnold for chemistry. In 2019, Karen Uhlenbeck was the first woman to win the Abel Prize, the equivalent of the Nobel Prize for mathematics. And for the first time in history, a national observatory was named after a woman: Vera C. Rubin, the astronomer who proved the existence of dark matter. Unlike the women of Pickering’s Harem, and countless others, these women’s names made headlines worldwide. [MUSIC] Lizzie Peabody: And the astronomical plates give us context to understand this moment. They tie us to a legacy that we might not be proud of, but one that gives us context, helps us identify echoes of the past, when we hear them today. [MUSIC] Secretary Lonnie G. Bunch III: For me, it’s a great lesson of sort of what it means to look into your history and to use that history to make sure you don't make the same mistakes today. [MUSIC] Lizzie Peabody: Coming up, we’ll put the “roll” in rock and roll with a much larger object from the Smithsonian Collections. 
So, get ready to tap your toes! [MUSIC] Lizzie Peabody: We’re back! And we’re talking with Secretary of the Smithsonian, Lonnie Bunch, about the power of history to give context to the present, and even hope for the future. We're doing it by looking at a couple objects from the Smithsonian Collections. Now, Lonnie understands better than most how an object can be a connection to a particular moment or person in history. That’s because, before becoming the Secretary, he was the founding Director and the force behind the creation of the Smithsonian’s newest museum: The National Museum of African American History and Culture. That meant figuring out which objects would best represent the story of an entire group of people in America, across all areas of American culture, including music. Secretary Lonnie G. Bunch III: I knew that as we looked at the history of music, that it was important to talk about that rock and roll was not just a white creation. That there are many ties with rhythm and blues and many of the earlier performers. And one of the most important performers was Chuck Berry. [Audio of the Opening Guitar riff to Chuck Berry’s, “Roll Over Beethoven”] Lizzie Peabody: Chuck Berry was an African American singer-songwriter who hit the music scene in the 1950s, when it was largely segregated. There was black music and there was white music. Chuck Berry was among the first artists to straddle that divide, combining the twang of country with the swing of blues to create a new sound that appealed to both black and white audiences. It was called, “Rock and Roll.” [Audio of Chuck Berry’s, “Roll Over Beethoven”] Lizzie Peabody: So, this is Chuck Berry's 1973, “Candy Apple Red Convertible Eldorado!” Secretary Lonnie G. Bunch III: Yes. Lizzie Peabody: I think I just heard my Dad faint somewhere. Secretary Lonnie G. Bunch III: (Laughs). Lizzie Peabody: (Laughs). Lizzie Peabody: This car is part of the Musical Crossroads Gallery at the National Museum of African American History and Culture. Um, so tell me, why did you choose this snazzy car? Secretary Lonnie G. Bunch III: Well, I’m going to be honest, I didn't choose it. I wasn't that smart! Lizzie Peabody: (Laughs). Secretary Lonnie G. Bunch III: Um, I knew that we needed to tell his story. And I thought, well, the best way to tell his story is to get a guitar. And so, we went to Chuck Berry and said, “Alright. I want a guitar.” And he said, “I’m not going to give you a guitar, unless you take this car as well.” Lizzie Peabody: Really? Secretary Lonnie G. Bunch III: That's right! And so, I said, “Why would I want with a car? Why do I want with a 1973 Cadillac?” Lizzie Peabody: (Laughs). Secretary Lonnie G. Bunch III: Um… Lizzie Peabody: Where do you put it? Secretary Lonnie G. Bunch III: Well, I’m not a Cadillac guy. Alright? Lizzie Peabody: (Laughs). Secretary Lonnie G. Bunch III: Um, but the staff was so much smarter than me. They basically recognized that this car tells us so much about Chuck Berry. [Audio of Chuck Berry’s, “Maybellene”] Lizzie Peabody: Berry’s first hit, “Maybellene” came out in 1955, and it reached number 5 on the Billboard pop charts. But with this success, Chuck Berry got an unpleasant surprise. Although he wrote the song himself, he found out that his record label had split the songwriting credit three ways. He shared the royalties with a white disc jockey and a random white guy the record label owed a favor. For every record he sold, he earned half of one penny. 
He realized he would have to stay vigilant in order to earn what was rightfully his. [Audio of Chuck Berry’s, “Maybellene”] Lizzie Peabody: For the next several years, Chuck Berry churned out hits like, “Johnny B. Goode,” “Rock and Roll Music,” and “Roll Over Beethoven.” He sang about cars, school, and teenage love. He was a showman, dancing and duck-walking and shredding guitar licks onstage. His sound inspired some of the most iconic rock and roll bands of the era: the Beatles, The Rolling Stones, the Beach Boys, but he had trouble getting the recognition he deserved. Secretary Lonnie G. Bunch III: I remember my earliest memories, I had an aunt who had, who had left her records behind when she moved or married. And so, as a kid, I would sort of pull out, now this is now in the 60s, I’d pull out her records and she would have Chuck Berry. I didn’t know who this guy was. Here was a time when most of the rock and roll guitarists were white. Eric Clapton and people from the Yardbirds. Who is this guy? He had a distinctive style and suddenly I realized that these songs he wrote, like, “Maybellene,” that they were then covered by white artists. [Audio of “Maybellene” Cover by Everly Brothers 1963] Lizzie Peabody: White artists covered his songs, they copied his guitar riffs, his sound, and some even copied his duck walk. Secretary Lonnie G. Bunch III: Everybody built on Chuck Berry. Nobody knew that. For me, Chuck Berry is one of those symbols of how so much of African American culture got appropriated and that wasn’t then recognized as African-American. Lizzie Peabody: You know, I was listening to Chuck Berry last night before bed. I’ve been listening to Chuck Berry for a few days now. Secretary Lonnie G. Bunch III: Sure! Lizzie Peabody: And there's a song I think, “Sweet Little Sixteen.” Secretary Lonnie G. Bunch III: Yup. “Sweet Little Sixteen.” Lizzie Peabody: That is exactly the same as, “Surfin’ USA.” Secretary Lonnie G. Bunch III: Hmm, mmm! [Audio of Chuck Berry’s, “Sweet Little Sixteen”] Lizzie Peabody: For a side-by-side comparison, here’s Chuck Berry’s, “Sweet Little Sixteen,” followed by the Beach Boys’, “Surfin’ USA.” [Audio of Chuck Berry’s, “Sweet Little Sixteen” that fades into the Beach Boys’, “Surfin’ USA”] Lizzie Peabody: How is that allowed? Secretary Lonnie G. Bunch III: I would argue that in the, from World War II into the 1970’s really, so much appropriation of music was done. I mean, you know, you think of people like, you know, Mama Thornton who did, “You Ain’t Nothing, But A Hound Dog,” before Elvis Presley. Elvis Presley took that song, made all this money, she never got anything out of it. So, I think in a way, what you’re seeing with Chuck Berry is, here’s what happened to so many musicians. That their work was taken because African-Americans were restricted to the race records and they thought that this music wouldn’t translate to a white audience. So… Lizzie Peabody: The race records? Secretary Lonnie G. Bunch III: Race records were records that were… There were race films and race records that were created just for the black community. So, part of what Chuck Berry does is breaks out of that. Lizzie Peabody: Chuck Berry threatened to sue the Beach Boys over, “Surfin’ USA,” and the threat of that lawsuit earned him song-writing credit and publishing royalties. White artists had been lifting songs from black albums and passing them off as their own for ages, but this was one of the first major plagiarism scuffles in rock history. Secretary Lonnie G. 
Bunch III: Many people thought Chuck Berry was a difficult person to deal with. I think he was only difficult because he demanded the respect that other artists received, to make sure that his contributions to music were known and respected and that he was respected. Part of Chuck Berry’s persona was making sure they knew he was special by riding in that Cadillac. Lizzie Peabody: (Laughs). Oh yes. That Cadillac. [Audio of the 1974 commercial: America’s Number One Luxury Car is Cadillac…] Secretary Lonnie G. Bunch III: It really is a symbol of making it. Having a Cadillac symbolized that you were able to overcome the racism in this country, that you were able to be middle-class or upper class. So, it had that sort of symbol for the African American community as well. Lizzie Peabody: Chuck Berry owned several Cadillacs, but this Cadillac, here at the Smithsonian, has a special story. As a kid in Saint Louis, Berry went with his father to get tickets to see a play at the historic Fox Theater. Here’s a clip of Chuck Berry telling that story in the 1987 documentary, “Hail! Hail! Rock and Roll.” [Audio of clip from the 1987 documentary, “Hail! Hail! Rock and Roll!”] Chuck Berry: You know, when I was 11 years old, I came up to this very box office to get a ticket to see, “The Tale of Two Cities.” My father wanted us to see it because it had a lot of artistic qualities about it. Lady said, “Come on, we’re not selling you a ticket. You know you people can’t come in here. Go away.”] Lizzie Peabody: He did, but 50 years later, Chuck Berry returned to Fox Theater, and not as an audience member. He played to a sold-out crowd in celebration of his 60th birthday. The same year he was inducted into the Rock and Roll Hall of Fame. The documentary captures an incredible scene from that night. [Audio of clip of, “Rock and Roll Music” featuring Chuck Berry, with Etta James on vocals, Keith Richards, Robert Cray and Eric Clapton on guitars from the 1987 documentary, “Hail! Hail! Rock and Roll!”] Lizzie Peabody: In the final act, Chuck Berry rides on stage in his candy apple red Cadillac, playing his guitar. He gets out of the car, the crowd is on their feet going crazy, as silver confetti falls to the stage. [Audio of clip of, “Rock and Roll Music” featuring Chuck Berry, with Etta James on vocals, Keith Richards, Robert Cray and Eric Clapton on guitars from the 1987 documentary, “Hail! Hail! Rock and Roll!”] Secretary Lonnie G. Bunch III: So, imagine what that felt like, right? That here you are, someone who has been shaped by the racial attitudes of America and hurt by them. And then suddenly, because of your success, because of a changing time, you get to be where you were told you weren’t, weren’t wanted. Lizzie Peabody: And not just be there. In a car on a stage! Secretary Lonnie G. Bunch III: (Laughs). That’s Chuck Berry! (Laughs). Lizzie Peabody: (Laughs). Lizzie Peabody: Lonnie says the Cadillac he reluctantly accepted is now one of the most popular items in the Smithsonian. Maybe Chuck Berry recognized something that Lonnie didn’t at the time. Secretary Lonnie G. Bunch III: I think he knew that this was an important symbol to him. That it spoke of his success. It spoke of his visibility. Um, it spoke of the work it took to get to be Chuck Berry. That he was more than just a musician. Lizzie Peabody: Oh! So, “who I am is more than just the music I created.” Secretary Lonnie G. Bunch III: That’s right. And the music allowed me to express myself more fully. And this Cadillac is part of that expression. 
Lizzie Peabody: The Cadillac is about power and control. It’s about the place Chuck Berry claimed for himself in mainstream pop culture and in music history. It’s about the business of success. [Audio of Jay-Z’s 2003, “Threat”] Lizzie Peabody: In the music industry today, artists still work hard to claim their due, which isn’t easy at a time when you can find and stream just about any music you want, for free, on the internet. Rapper and songwriter, Jay-Z famously said, “I’m not a businessman. I’m a business, man.” When he first began his rap career and labels shut their doors to him, Jay-Z started his own record label, Roc-A-Fella Records. He later sold the record company to Def Jam Recordings and has since become the first billionaire rapper. [Audio of Jay-Z’s 2003, “Threat”] Lizzie Peabody: Chuck Berry’s Cadillac is a candy apple red reminder of why Jay-Z might have chosen to buy the first artist-owned streaming subscription service, TIDAL, to control the distribution of his own music. [Audio of Jay-Z’s 2003, “Threat”] Lizzie Peabody: What are you most excited for, for the future? I mean, you have this job. How long do you have this job for? Is it a lifetime appointment? Do you get to decide? Secretary Lonnie G. Bunch III: As long as…Yeah, unless they chase me out! (Laughs). Lizzie Peabody: (Laughs). Secretary Lonnie G. Bunch III: But you know, I mean, I think that what I want to do is to help the Smithsonian build on its traditions, but to not be held captive by those traditions, so that we can think of new ways to engage audiences and new ways to be of value. [MUSIC] Secretary Lonnie G. Bunch III: I want the Smithsonian to be visited, venerated. And I want it to be valued in a way that says, “This is a place that helps me understand my life today.” [MUSIC] Lizzie Peabody: Thank you so much! This has been a really, really interesting discussion and it’s such a privilege to speak with you. [MUSIC] Secretary Lonnie G. Bunch III: Oh, it’s my pleasure. Thank you. [MUSIC] Lizzie Peabody: And I know when you were talking about other ways for people to engage with the Smithsonian, I know you were really talking about this podcast, so thank you for that. Secretary Lonnie G. Bunch III: I’m the podcast guy. You call me. I’m there! Lizzie Peabody: (Laughs). Secretary Lonnie G. Bunch III: Thank you! My great pleasure. Lizzie Peabody: You’ve been listening to Sidedoor, a podcast from the Smithsonian with support from PRX. Lizzie Peabody: For a full list of songs we used in this episode, and a photo of Lonnie with the astronomical glass plates, check out our newsletter! Subscribe at si.edu/Sidedoor and follow us on Twitter @Sidedoorpod! Lizzie Peabody: The story of the astronomical plates is just one of many you can hear through the Smithsonian American Women’s History Initiative. To learn more, go to womenshistory.si.edu or join the conversation using #becauseofherstory on social media. Lizzie Peabody: This is our last episode of the season! We’ll be back in just over a month with a whole new season! While we’re working hard to produce more stories you’ll love, you can help us by spreading the word about Sidedoor. Next time you’re at a party making small talk and someone asks you if you listen to any podcasts, you can say: Secretary Lonnie G. Bunch III: I’m the podcast guy! Lizzie Peabody: Then tell them about Sidedoor! Lizzie Peabody: Special thanks to Linda St. Thomas, Lindsey Orbal, Natalia Rawls, LeShawn Burrell-Jones, Beah Jacobson, Greg Bettwy, Dave Haddock and Maxwell Suechting. 
And thanks to the producers of, “Hail! Hail! Rock and Roll” for their work, which helped bring this episode to life. And of course, mammoth thanks to Secretary Lonnie G. Bunch III for taking the time to talk with us about cars and stars and rock and roll. Lizzie Peabody: Our podcast team is Justin O’Neill, Jason Orfanon, Nathalie Boyd, Caitlin Shaffer, Jess Sadeq, Lara Koch, and Sharon Bryant. Episode artwork is by Greg Fisk. Extra support comes from John, Jason and Genevieve at PRX. Our show is mixed by Tarek Fouda. Our theme song and other episode music are by Breakmaster Cylinder, with notable exceptions by John Phillips Sousa and Jay-Z. [MUSIC] Lizzie Peabody: If you want to sponsor our show, please email sponsorship@prx.org. [MUSIC] Lizzie Peabody: We end this season with a hello and a goodbye. After leading Sidedoor production for four seasons, from conception to this very moment, our esteemed and beloved video-game playing Executive Producer, Jason Orfanon, is moving onto his next great adventure. We wish him well, but we will really miss him. [MUSIC] Lizzie Peabody: And, not a moment too soon, we welcome our newest podcast team member, Aida Josephine O’Neill, daughter of Senior Producer, Justin O’Neill. Welcome to the world, and welcome to the team! We expect you’ll be jumping into Jason’s shoes any minute. [MUSIC] Lizzie Peabody: I’m your host, Lizzie Peabody. Thanks for listening and see you next season! [MUSIC] Secretary Lonnie G. Bunch III: Baseball was something I loved. I used to… I was not…. I was pretty good, wasn’t great, but I was pretty good. Lizzie Peabody: You played? Secretary Lonnie G. Bunch III: I always thought I wanted to be a second baseman for the Yankees, but that never happened. Umm… Lizzie Peabody: This is a pretty good backup plan. Secretary Lonnie G. Bunch III: (Laughs). It’s not too bad. I’m pretty happy about that. Lizzie Peabody: (Laughs).
Abstract

The painter is trying to realize a certain value on the canvas, a value which he feels, is looking for, and can see in his imagination. Nevertheless, that value is not given to him; it is undefined and unclear. For this reason, painting a picture is both creating and looking for a fully perceptible value. The emerging image shows the painter the form of that value; it is controlled by the artist, but the artist is also controlled by the image which, in a way, leads him. The demanded and achieved value is not a label appraising the image, stuck on it by the painter, but is like a light that permeates and illuminates the painting.

Key words: painter, value, hierarchy, aesthetic experience

Since at least the end of World War II there has been a debate on values going on among philosophers connected with Roman Ingarden in Cracow. Although the issue was not new to Polish philosophy, in the Cracovian circle of Ingarden, thanks to the Master himself and his disciples, it acquired a particular flavour. The question of the existence of values, their formal structure, their relativity versus absoluteness, as well as the distinguishing of the domains of values or the pointing to a possible hierarchy within and between them, occupied Janina Makota, Władysław Stróżewski, Józef Tischner and Adam Węgrzecki; it still enlivens the thought of Stróżewski and Węgrzecki, despite the fact that “on the outside” axiology itself is being contested by Heidegger and by philosophical trends close to positivism.

Regardless of the philosophical stance, words describing values and evaluations, as well as evaluating judgments and hierarchisations of works according to their value, are present in the texts of critics, art historians and all those occupied with art. They are latently present in the purchase decisions made by particular museums, or in decisions to exhibit some works in galleries while rejecting others. While dealing with art, then, and particularly with painting, the terms indicating values cannot be rejected. Moreover, values are given to painters visually, and the terms describing values appear when they discuss the works of others or their own, particularly as they struggle to explain what they really mean by a particular work of art, or what – as painters – they are searching for.

Considering the question of values, Władysław Stróżewski\(^1\) mentions the concept of sought values. According to him, apart from its other qualities, a value can also awaken particular experiences (including aesthetic and creative experiences), becoming in itself a value for the artist, a value artists are searching for, or seek to express in their work. It is not my point now to establish whether it is a concrete value or an ideal one. I am interested in the process of a painter searching for values or valuable qualities, and in the values usually sought by painters.

**Sensitivity and searching**

A subjective condition for the ability to search is the painter’s sensitivity and its sophistication, which ensure that the artist’s growing openness is selective, enabling him to choose and hierarchise the values that emerge during the search. An insensitive painter, or one whose sensitivity lacks sophistication and development, chooses a miasma of values and, as a result, not everything he creates can be called art. Consequently, to say: *art is what an artist does*, or more precisely: painting is what a painter paints, does not seem right. Not everything a painter paints is actually painting.
A painter’s painting flows out of and develops within sophisticated sensitivity and is regulated by it, as are the works which result from it. Painting is searching, but the word “searching” has at least two meanings here. One is visualised by Rembrandt’s self-portrait from Boston. The artist is not painting. He is standing by the wall of his studio, looking at a painting on an easel which is standing with its back to us. The self-portrait visualises the reflection, *inventio*, mentioned by Ernst van de Wetering in his monumental work on Rembrandt’s painting. Rembrandt, the painter, is seeking his painterly awareness, something that is called an idea, but what he finds is a value and he is dazzled by it. The term *inventio* originates from *invenio* and even suggests that the value – like some Muse – comes to the painter and enters his spirit. The other meaning of the word “searching” is visualised by Courbet’s *Painter’s studio*\textsuperscript{3} as well as by numerous self-portraits of artists holding a palette and brushes, sitting or standing in front of a canvas placed on an easel. Here, “searching” equals painting. The painter searches for values in different layers of the painting, also for values related to the layer of colour patches. These can, for example, be colour compositions. I can still recall the words of my painting professor: “Please, search for compositions”. A composition in a painting is a valuable association between colours. Searching in the latter sense happens on canvas, but it is not disconnected from “inventio”. Quite the opposite: they interweave and intermingle to the extent that, although “inventio” can be distinguished within the process of painting, it cannot be separated from it. As a matter of fact, the word “searching” indicates two different moments of the creative process, whose course is not entirely random. Searching for values induces experiencing them. The experience which discloses, or displays, values is at the same time a dialogue or a dramatic encounter with these values. It seems to me that such experience has been described – after George Bataille – by Barbara Skarga. It is “of no rule, no purpose and lacking any prestige, still powerful enough to shake and give birth...”. And further – this time after Michel Henry – “this experience gives something, something is revealed in it, manifested, displayed. [...] This experience could in this way be [...] a gift, or a vision, or a disclosure, a discovery, an opening, a sensation, a revelation, a quest [...] or perhaps an awakening too”\textsuperscript{6}. Searching is not random. Depending on the philosophy prevailing in the given times, both searching and its results are regulated, forming the so-called canons\textsuperscript{7}. They cannot be merely ascribed to epochs like antiquity, the Middle Ages or the Renaissance – to use those conventional names. Every style of painting has its particular canon, which needs to be respected if the work’s coherence is to be preserved. Even in deliberately incoherent works, such incoherence itself becomes a particular canon that must be rigorously observed.
\textsuperscript{3} G. Courbet, *L’Atelier du peintre, allégorie réelle déterminant une phase de sept années de ma vie artistique et morale*, 1855, oil on canvas, 361 x 598 cm; Musée d’Orsay, Paris.
\textsuperscript{6} Ibidem, p. 129.
Leonardo da Vinci – being ahead of conceptualism – claimed that painting is a \textit{cosa mentale} and that this \textit{mens} is regulated by geometry, but also by mood, by a particular emotion standing out among others thanks to the famous \textit{sfumato}. In my opinion, expressionism too is a \textit{cosa mentale}; it is not, however, related to geometry, but to a state of mind animated by violent emotions. It seems that – in the end – every painting is a \textit{cosa mentale}, because a painting – before it appears on canvas – already exists in the painter, altering his or her awareness. Painting as a \textit{cosa mentale} has been brought to the extreme by conceptualism... Searching is a dialogue of a painter with himself, with other painters or with the spirit of the times – not necessarily his own. It does happen, too, that a painter reaches for the spirit of another period. Searching is a journey in which some values are chosen and others rejected; a constant hierarchisation is taking place. A painter makes intuitive \textit{judgments about the value}, meaning – according to Stróżewski – descriptive judgments which tell something about the sought value, and subsequently makes \textit{evaluating judgments}, which presuppose the judgments about the value. Finally, the painter evaluates, trying to reach the individual essence of the value as seen in the light of the ideal value which defines it. According to Stróżewski, the evaluating judgments can be either true or false! Sometimes painters search for values they have misunderstood, for example taking so-called \textit{sauce}\textsuperscript{8} for depth, or a fake pose for a dramatic gesture... And not only individual painters would do so, but even whole shallow trends in painting would try to imitate great trends, preferring shallow values put on show as great ones. For example, the time of Van Meegeren was blind to his forgeries of Vermeer, because it misunderstood the value of Vermeer’s painting, taking secondary qualities for the essential ones – which are impossible to achieve in a painting a second time. The paintings of Jozef Israëls were at one point thought equal to those of Rembrandt, their gallery form being taken as a repetition of Rembrandt’s artistry.
\textsuperscript{8} Speaking about the so-called \textit{Munich sauce}.
**The poles of oppositions governing creative activity**
A painter – as well as any artist – searches for his own way within the field of energy between the poles of oppositions described in the *Dialectics of Creative Activity* by Władysław Stróżewski\(^9\). Painters operate within the area demarcated by the dialectical poles of oppositions that govern creative activity. Aiming at completing their work within the field of tensions created by the poles, they search for and fulfil a particular value. The dialectical poles should not be understood as points; in their clear form they create centres of energy whose radiation intermingles and mixes, subjecting the painter to their interweaving influence. Stróżewski discussed the activity of the poles in such detail that I will content myself with a few remarks only. Stróżewski distinguishes a few pairs of oppositions, of which some are particularly interesting to me because of the painterly quest. First of all, there is determinism and necessity. This pair of oppositions plays an important role in the painterly quest.
However, at the starting point every painter is more or less determined by the rules of which he is aware, which bind him and which can be subject to interpretation\(^{10}\); still, *in the very beginning* these very rules were themselves searched for: e.g. the history of the Greek sculptural canon, the Greek architectural canons or the icon-writing canon in the Eastern Church. The new canon was searched for through constructing, sculpting, painting and writing icons... Once found, it was not applied conventionally, and whenever it was, the works produced were dead. Robert Musil writes about it: “Es kann deshalb nützen, sich daran erinnern zu lassen, dass in schlechten Zeiten die schrecklichsten Häuser und Gedichte nach genau ebenso schönen Grundsätzen gemacht werden wie in den besten...” [It can therefore be useful to be reminded that in bad times the most dreadful houses and poems are made according to exactly the same beautiful principles as in the best ones...]. It applies to the writing of icons as well. The rules are established, but every real icon displays new, unexpected values. The potential to reveal new modalities of icons can be seen in the schools of icon writing and in the deep difference between the icons of Theophanes the Greek and Andrei Rublev, which – however different – are both icons! Such new values were revealed through the icons written by Jerzy Nowosielski, who pointed to the unexpected potential hidden in the iconography of the Eastern Church. New modalities of values, by necessitating their realisation, liberated the iconographer from the determinism of the rules. However, many icons are not truly written but remain dead, despite having been created in full accordance with the canon. A similar regularity can be noted in works of architecture or sculpture completed most strictly according to the rules, yet dead. I am not a musician myself, but I trust the words of professionals that such works can be found here as well. In this way, a painter establishes certain rules, which then form a canon and are inherited by the successors creating a school. The school – having exhausted the potential of the canon – fossilises, and the need for a further quest arises. Further on, Stróżewski mentions such poles of oppositions as: spontaneity and control; freedom and rigour; or improvisation and calculation. In the Gemeentemuseum in The Hague\textsuperscript{12} we can see an unfinished painting by Piet Mondrian. I do not remember now whether it is one of the few paintings entitled “Broadway Boogie-Woogie” or “Victory Boogie Woogie”; a precise identification of the work is not crucial here. It is important, however, that the painting is unfinished and that thanks to it we can learn about the painterly procedure of Mondrian, who – contrary to what one could imagine looking at the completed works – did not first outline the pattern of squares and rectangles and then fill them with colours. Mondrian searched for a pattern while sketching on canvas, and then searched for the colour by sticking on pieces of colourful paper of the desired format. The colour of the papers was supposed to prompt him with a colour solution and point in the direction of the final colour arrangement on canvas. In this unfinished painting we can see an interaction of spontaneity and control, freedom and rigour, improvisation and calculation: the spontaneous and free, but at the same time rigorously controlled, drawing of the lines of the developing composition; improvisation in putting forward possible colour solutions, of which eventually only one will prove right – in accordance with the inner necessity to regulate the painting.
\textsuperscript{12} Gemeentemuseum Den Haag, Stadhouderslaan 41, 2517 HV Den Haag.
**Logos of the epoch: from vision to composition**
A painter, following a route pervaded by the energies emanating from the opposite poles, searches for values led – so to say – by painting itself. His quest depends on his artistic stance. For example, every painter will search for different values and in a different way – an impressionist, a cubist, an abstractionist (this, too, depending on the trend of abstractionism), an expressionist, etc. The spectrum will broaden endlessly if we take into account all the past and future trends and painterly stances... Stróżewski’s texts are interspersed with the idea of a logos particular to each epoch and central to all the arts of its time. In every epoch, every art – including painting – springs from a central logos, which it then explicates using its own particular ratio. In their quest for values, painters follow their path, directed by the central logos of the epoch encompassing them, by the logos of painting and ultimately by their own. Some values are definitely suggested by the spirit of the times – these, of course, fade the fastest. There exist, however, so-called *eternal* values, which remain even after the qualities related to the epoch are no longer there in the work; these are works pervaded by values, such as the masterpieces of various epochs... Times have changed, the worlds in which the masterpieces were created are no longer there, but we still admire them, even though in this way we participate in another epoch... What values deserve this admiration? What values immanent to the logos of painting were sought by the painters of all times? The answer is risky, but I will not try to avoid it. I emphasise that I am interested in painting, because in the present state of art my exposition could prompt the question of why I omit numerous phenomena which are not painting but which these days are also – or perhaps first of all – considered art. I confine myself to painting, believing that through it – thanks to the analogy of the arts, sometimes called *correspondance des arts* – I will also gain an insight into other arts. In my opinion, the following values sought by painters are immanent to the logos of painting. One is surely a *vision*. It is about directing the spiritual gaze towards the ideas of certain principal values, more precisely towards the constants of these values, whose variables facilitate various concretisations of the vision until its potential is exhausted. New possibilities can also arise later – the phenomena of renaissances support this notion. The constant data in the romantic and classical vision have been described by Władysław Tatarkiewicz\(^\text{13}\). A painter searches for a language, not necessarily a new one. In many epochs it was enough to learn the given language, which was, however, modified by each artist independently. The language of painting is a language of silence and it cannot be replaced by any other. It is untranslatable. “But just as in the written language we have words and phrases, also painting has its words, its syntax, its style”\(^\text{14}\). A painter searches for a technique, for technical values: a line drawn with a piece of charcoal, a brush, a pencil – of different thicknesses... each has a particular value in the painting. One can insist that these are merely elongated patches, but their painterly effect, their valuable painterly quality, is very different. The choice of material determines the style of the painting.
Depending on what technique and what kind of paint we choose (oil, tempera, acrylic...), the value of the painting and its style will differ. The way of concretising these values in a work of art is the painterly technique. Balthus writes on it: [...] painting is a handicraft in the fullest, most “handicraft” meaning of the word. It implies such a high degree of mastery that the painter’s life is not enough for it\(^\text{15}\). [...] handicraft is a consequence of a moral stance, which requires intelligence of the mind and of the hand, as well as a high discipline of the spirit\(^\text{16}\). [...] the division between art and painterly handicraft is the sanctification of the split between art and the work of an artisan. [...] handicraft, craftsmanship [...] was art’s compost, it provided food for it, its substance. For the master values his profession so highly that he does not let anyone look at his works until they are finished. For him incompleteness – the trademark of our times – is merely a sign of negligence or perhaps even of an inability to complete one’s work [...] an indication of a loss of professional dignity\(^\text{17}\).
\(^{13}\) W. Tatarkiewicz, Dzieje sześciu pojęć, Warszawa 1975, pp. 207, 217.
\(^{15}\) Ibidem, p. 76.
\(^{16}\) Ibidem, pp. 76–77.
\(^{17}\) Ibidem, pp. 65–66.
In throwing academism aside, the concepts of technique and handicraft have also been cast away. But at the same time [...] technique and handicraft are essential to art\textsuperscript{18}. A painter always searches for the formal values, which make the work a work of art. This happens also when the formal values are not themselves thematised. A painter searches for colour: it carries qualities of value and provides a foundation for the painting in its qualitative endowment. It is about the whole canvas being filled with one kind of colour, so that its treatment is not differentiated depending on the object; in this way, all the elements of the painting should be homogeneous. Every colourist would for sure say: I search for colour. In every painting there are colourful patches, and in each case they are treated differently (in terms of size, shape, position and technique), depending on the style: we will see different patches in the paintings of van Eyck, Titian, Rembrandt, Turner, Monet, Braque, Matisse, Balthus, Bacon, Freud... The list of names could be much longer. The way a painter treats the colourful patches determines the ultimate character of the painting. The patches interact with each other in various ways, creating so-called combinations. A combination is a valuable state of affairs in which more than one colourful patch participates, especially because of a particular trait of the patch. I characterised the combinations using the theory of relations by Roman Ingarden\textsuperscript{19}. I would also like to include a few remarks on the experience of colour made by Balthus: There is a certain kind of memory of colours – thanks to it I recognise them, react to them, feel them vibrating. I apply them next to each other – and they interact. They are like waves which need to be matched. [...] colours are an expression of what I call their «body». Every colour emits its particular light. A precisely rendered colour in a way approximates the absolute.
\textsuperscript{18} Ibidem, p. 69. I recall here a statement by Jan Cybis which went more or less like this: “a colour aptly applied is thereby applied technically correctly”.
\textsuperscript{19} In his work on the structure of paintings, Ingarden distinguished a layer of colourful patches. This structure – besides the semantic and axiological sides of a painting – belongs to the technical aspect of a painterly work. The content of this layer is very important for the painting. It constitutes the material side of the layer of colourful patches, in which the valuable relations of patches occur.
Colour exists exclusively in relation to other colours; it is like a tone in music, whose ultimate sound depends on the context. Only after the work is complete [...] does it become clear what the real colour of the painting is. It is in connection with colour that painters search for the painterly matter, the painterly substance. They also search for composition. A valuable quality of composition is the unity of composition, achieved through composing in accordance with the laws of the logic of composing and composition. The logic of composing and composition is not simply invented and does not result from *a priori* assumptions. An assumption-less stance applies to all painterly work. The logic of composing is immanent to the emerging painting; therefore it should be thoroughly understood and fully developed while the work is in progress. However, the work is still governed by the general rules described by formal aesthetics, whose findings – together with his own suggestions – have recently been presented by Lambert Wiesing. What is more, the logic here enters the open plane of the painting, which is also governed by its own rules. These rules were described by Kandinsky in his two studies on the foundations of painting.
**Beauty and the aesthetically valuable moments**
Apart from these values sought by the majority, many painters – more or less consciously – searched for more detailed values. In various ways, they searched for beauty. It is beauty and the related values to which Władysław Stróżewski’s book *Wokół piękna* is dedicated. The sublime [loftiness] was sought after too, but not only that! Roman Ingarden gathered the aesthetically valuable moments into groups of material moments – including emotional, intellectual and formal ones – and, within them, also purely objective moments and moments derivative for the perceiver... He also pointed to the ways (modi) in which the qualities exist... These valuable qualities were explored differently by different painters, depending on the epoch. At one time, painters looked for symmetry and harmony, as well as ideal proportions; at another, for their very oppositions: asymmetry, dissonance, proportions not ideal but, say, full of emotional expression. But what does it mean to search for symmetry or proportion? Is it only about composing the painting along the axis of symmetry? The word symmetry means commensurateness. It is about all parts and moments of a painting being mutually commensurate, and axial symmetry is one case of commensurateness. Whenever everything in the painting is mutually commensurate, it can be said (after Stróżewski) that the painting – as a painting – exists commensurately. The same applies to all other values. A painting exists in such a way that the values penetrate the entire work, all its layers and their content.
Translated by Marta Bregiel-Benedyk
Bibliography
2. Ingarden R., Zagadnienie systemu jakości estetycznie doniosłych, in: idem, Przeżycie, dzieło, wartość, Kraków 1966.
4. Kandinsky W., Punkt i linia a płaszczyzna. Przyczynek do analizy elementów malarskich.
7. Stróżewski W., Dialektyka twórczości, Kraków 2007.